Sundry cosmetic fixes across the board.

- Comments are migrated from ``/* */`` to ``//`` per convention.

- A ``NewDefaultHistogram`` helper with prototyping-friendly defaults.

- Deprecation of ``Registry.YieldExporter`` in favor of ``Registry.Handler`` (see the sketch after this list).

- Cleanup of legacy import paths.

- Updating examples to use the endorsed patterns.

- Parameterizing the random value generators in ``examples/random/main.go`` via flags under the ``random.`` namespace, which is useful for demoing population behaviors.
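
For orientation, here is a minimal sketch of the pattern the updated examples converge on. It is not part of the diff below, and it assumes the ``github.com/prometheus/client_golang/metrics`` and ``github.com/prometheus/client_golang/registry`` import paths implied by the other imports in this commit:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/metrics"
    "github.com/prometheus/client_golang/registry"
)

var (
    // NewDefaultHistogram furnishes prototyping-friendly defaults.
    rpcLatency = metrics.NewDefaultHistogram()
    rpcCalls   = metrics.NewCounter()

    customRegistry = registry.NewRegistry()
)

func init() {
    customRegistry.Register("rpc_latency_microseconds", "RPC latency.", registry.NilLabels, rpcLatency)
    customRegistry.Register("rpc_calls_total", "RPC calls.", registry.NilLabels, rpcCalls)
}

func main() {
    rpcLatency.Add(map[string]string{"service": "foo"}, 123)
    rpcCalls.Increment(map[string]string{"service": "foo"})

    // Handler replaces the now-deprecated YieldExporter for exposition.
    http.Handle(registry.ExpositionResource, customRegistry.Handler())
    http.ListenAndServe(":8080", nil)
}

Code that sticks with the default registry can instead hand ``registry.DefaultHandler`` to ``http.Handle``, as the skeletal example in the diff does.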
Matt T. Proud 2013-02-12 02:36:06 +01:00
parent e1bfb0c101
commit 6c3a2ddddb
35 changed files with 492 additions and 663 deletions

View File

@ -1,8 +1,8 @@
// Copyright (c) 2013, Matt T. Proud // Copyright (c) 2013, Matt T. Proud
// All rights reserved. // All rights reserved.
// //
// Use of this source code is governed by a BSD-style license that can be found in // Use of this source code is governed by a BSD-style license that can be found
// the LICENSE file. // in the LICENSE file.
package registry package registry
@ -19,6 +19,8 @@ var (
ProtocolVersionHeader = "X-Prometheus-API-Version" ProtocolVersionHeader = "X-Prometheus-API-Version"
ExpositionResource = "/metrics.json"
baseLabelsKey = "baseLabels" baseLabelsKey = "baseLabels"
docstringKey = "docstring" docstringKey = "docstring"
metricKey = "metric" metricKey = "metric"

View File

@ -1,54 +1,52 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
Use of this source code is governed by a BSD-style license that can be found in // registry.go provides a container for centralized exposition of metrics to
the LICENSE file. // their prospective consumers.
*/
/* // registry.Register("human_readable_metric_name", metric)
registry.go provides a container for centralized exposition of metrics to
their prospective consumers.
registry.Register("human_readable_metric_name", metric) // Please try to observe the following rules when naming metrics:
Please try to observe the following rules when naming metrics: // - Use underbars "_" to separate words.
- Use underbars "_" to separate words. // - Have the metric name start from generality and work toward specificity
// toward the end. For example, when working with multiple caching subsystems,
// consider using the following structure "cache" + "user_credentials" →
// "cache_user_credentials" and "cache" + "value_transformations" →
// "cache_value_transformations".
- Have the metric name start from generality and work toward specificity // - Have whatever is being measured follow the system and subsystem names cited
toward the end. For example, when working with multiple caching subsystems, // supra. For instance, with "insertions", "deletions", "evictions",
consider using the following structure "cache" + "user_credentials" // "replacements" of the above cache, they should be named as
"cache_user_credentials" and "cache" + "value_transformations" // "cache_user_credentials_insertions" and "cache_user_credentials_deletions"
"cache_value_transformations". // and "cache_user_credentials_deletions" and
// "cache_user_credentials_evictions".
- Have whatever is being measured follow the system and subsystem names cited // - If what is being measured has a standardized unit around it, consider
supra. For instance, with "insertions", "deletions", "evictions", // providing a unit for it.
"replacements" of the above cache, they should be named as
"cache_user_credentials_insertions" and "cache_user_credentials_deletions" and
"cache_user_credentials_deletions" and "cache_user_credentials_evictions".
- If what is being measured has a standardized unit around it, consider // - Consider adding an additional suffix that designates what the value
providing a unit for it. // represents such as a "total" or "size"---e.g.,
// "cache_user_credentials_size_kb" or
// "cache_user_credentials_insertions_total".
- Consider adding an additional suffix that designates what the value represents // - Give heed to how future-proof the names are. Things may depend on these
such as a "total" or "size"---e.g., "cache_user_credentials_size_kb" or // names; and as your service evolves, the calculated values may take on
"cache_user_credentials_insertions_total". // different meanings, which can be difficult to reflect if deployed code
// depends on antique names.
- Give heed to how future-proof the names are. Things may depend on these // Further considerations:
names; and as your service evolves, the calculated values may take on
different meanings, which can be difficult to reflect if deployed code depends
on antique names.
Further considerations: // - The Registry's exposition mechanism is not backed by authorization and
// authentication. This is something that will need to be addressed for
// production services that are directly exposed to the outside world.
- The Registry's exposition mechanism is not backed by authorization and // - Engage in as little in-process processing of values as possible. The job
authentication. This is something that will need to be addressed for // of processing and aggregation of these values belongs in a separate
production services that are directly exposed to the outside world. // post-processing job. The same goes for archiving. I will need to evaluate
// hooks into something like OpenTSBD.
- Engage in as little in-process processing of values as possible. The job
of processing and aggregation of these values belongs in a separate
post-processing job. The same goes for archiving. I will need to evaluate
hooks into something like OpenTSBD.
*/
package registry package registry
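
As an aside, a hypothetical registration following the naming rules above could look like the sketch below; the metric name and docstring are invented for illustration, and the four-argument ``Register`` form matches the examples elsewhere in this commit:

package main

import (
    "github.com/prometheus/client_golang/metrics"
    "github.com/prometheus/client_golang/registry"
)

func main() {
    // "cache" (system) + "user_credentials" (subsystem) + "insertions" (measurement) + "total" (suffix).
    insertions := metrics.NewCounter()
    registry.NewRegistry().Register(
        "cache_user_credentials_insertions_total",
        "Total insertions into the user credentials cache.",
        registry.NilLabels,
        insertions)
}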

View File

@ -1,19 +1,15 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // main.go provides a simple example of how to use this instrumentation
license that can be found in the LICENSE file. // framework in the context of having something that emits values into
*/ // its collectors.
//
/* // The emitted values correspond to uniform, normal, and exponential
main.go provides a simple example of how to use this instrumentation // distributions.
framework in the context of having something that emits values into
its collectors.
The emitted values correspond to uniform, normal, and exponential
distributions.
*/
package main package main
import ( import (
@ -28,34 +24,52 @@ import (
var ( var (
listeningAddress string listeningAddress string
barDomain float64
barMean float64
fooDomain float64
// Create a histogram to track fictitious interservice RPC latency for three
// distinct services.
rpc_latency = metrics.NewHistogram(&metrics.HistogramSpecification{
// Four distinct histogram buckets for values:
// - equally-sized,
// - 0 to 50, 50 to 100, 100 to 150, and 150 to 200.
Starts: metrics.EquallySizedBucketsFor(0, 200, 4),
// Create histogram buckets using an accumulating bucket, a bucket that
// holds sample values subject to an eviction policy:
// - 50 elements are allowed per bucket.
// - Once 50 have been reached, the bucket empties 10 elements, averages the
// evicted elements, and re-appends that back to the bucket.
BucketBuilder: metrics.AccumulatingBucketBuilder(metrics.EvictAndReplaceWith(10, maths.Average), 50),
// The histogram reports percentiles 1, 5, 50, 90, and 99.
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
})
rpc_calls = metrics.NewCounter()
// If for whatever reason you are resistant to the idea of having a static
// registry for metrics, which is a really bad idea when using Prometheus-
// enabled library code, you can create your own.
customRegistry = registry.NewRegistry()
) )
func init() { func init() {
flag.StringVar(&listeningAddress, "listeningAddress", ":8080", "The address to listen to requests on.") flag.StringVar(&listeningAddress, "listeningAddress", ":8080", "The address to listen to requests on.")
flag.Float64Var(&fooDomain, "random.fooDomain", 200, "The domain for the random parameter foo.")
flag.Float64Var(&barDomain, "random.barDomain", 10, "The domain for the random parameter bar.")
flag.Float64Var(&barMean, "random.barMean", 100, "The mean for the random parameter bar.")
} }
func main() { func main() {
flag.Parse() flag.Parse()
rpc_latency := metrics.NewHistogram(&metrics.HistogramSpecification{
Starts: metrics.EquallySizedBucketsFor(0, 200, 4),
BucketBuilder: metrics.AccumulatingBucketBuilder(metrics.EvictAndReplaceWith(10, maths.Average), 50),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
})
rpc_calls := metrics.NewCounter()
metrics := registry.NewRegistry()
metrics.Register("rpc_latency_microseconds", "RPC latency.", registry.NilLabels, rpc_latency)
metrics.Register("rpc_calls_total", "RPC calls.", registry.NilLabels, rpc_calls)
go func() { go func() {
for { for {
rpc_latency.Add(map[string]string{"service": "foo"}, rand.Float64()*200) rpc_latency.Add(map[string]string{"service": "foo"}, rand.Float64()*fooDomain)
rpc_calls.Increment(map[string]string{"service": "foo"}) rpc_calls.Increment(map[string]string{"service": "foo"})
rpc_latency.Add(map[string]string{"service": "bar"}, (rand.NormFloat64()*10.0)+100.0) rpc_latency.Add(map[string]string{"service": "bar"}, (rand.NormFloat64()*barDomain)+barMean)
rpc_calls.Increment(map[string]string{"service": "bar"}) rpc_calls.Increment(map[string]string{"service": "bar"})
rpc_latency.Add(map[string]string{"service": "zed"}, rand.ExpFloat64()) rpc_latency.Add(map[string]string{"service": "zed"}, rand.ExpFloat64())
@ -65,8 +79,11 @@ func main() {
} }
}() }()
exporter := metrics.YieldExporter() http.Handle(registry.ExpositionResource, customRegistry.Handler())
http.Handle("/metrics.json", exporter)
http.ListenAndServe(listeningAddress, nil) http.ListenAndServe(listeningAddress, nil)
} }
func init() {
customRegistry.Register("rpc_latency_microseconds", "RPC latency.", registry.NilLabels, rpc_latency)
customRegistry.Register("rpc_calls_total", "RPC calls.", registry.NilLabels, rpc_calls)
}

View File

@ -1,15 +1,11 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // main.go provides a simple skeletal example of how this instrumentation
license that can be found in the LICENSE file. // framework is registered and invoked.
*/
/*
main.go provides a simple skeletal example of how this instrumentation
framework is registered and invoked.
*/
package main package main
import ( import (
@ -29,8 +25,6 @@ func init() {
func main() { func main() {
flag.Parse() flag.Parse()
exporter := registry.DefaultRegistry.YieldExporter() http.Handle(registry.ExpositionResource, registry.DefaultHandler)
http.Handle("/metrics.json", exporter)
http.ListenAndServe(listeningAddress, nil) http.ListenAndServe(listeningAddress, nil)
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package maths package maths
@ -12,9 +10,7 @@ import (
"math" "math"
) )
/* // Go's standard library does not offer a factorial function.
Go's standard library does not offer a factorial function.
*/
func Factorial(of int) int64 { func Factorial(of int) int64 {
if of <= 0 { if of <= 0 {
return 1 return 1
@ -29,11 +25,9 @@ func Factorial(of int) int64 {
return result return result
} }
/* // Calculate the value of a probability density for a given binomial statistic,
Create calculate the value of a probability density for a given binomial // where k is the target count of true cases, n is the number of subjects, and
statistic, where k is the target count of true cases, n is the number of // p is the probability.
subjects, and p is the probability.
*/
func BinomialPDF(k, n int, p float64) float64 { func BinomialPDF(k, n int, p float64) float64 {
binomialCoefficient := float64(Factorial(n)) / float64(Factorial(k)*Factorial(n-k)) binomialCoefficient := float64(Factorial(n)) / float64(Factorial(k)*Factorial(n-k))
intermediate := math.Pow(p, float64(k)) * math.Pow(1-p, float64(n-k)) intermediate := math.Pow(p, float64(k)) * math.Pow(1-p, float64(n-k))

View File

@ -1,26 +1,22 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // The maths package provides a number of mathematical-related helpers:
license that can be found in the LICENSE file.
*/
/* // distributions.go provides basic distribution-generating functions that are
The maths package provides a number of mathematical-related helpers: // used primarily in testing contexts.
distributions.go provides basic distribution-generating functions that are // helpers_for_testing.go provides a testing assistents for this package and its
used primarily in testing contexts. // dependents.
helpers_for_testing.go provides a testing assistents for this package and its // maths_test.go provides a test suite for all tests in the maths package
dependents. // hierarchy. It employs the gocheck framework for test scaffolding.
maths_test.go provides a test suite for all tests in the maths package // statistics.go provides basic summary statistics functions for the purpose of
hierarchy. It employs the gocheck framework for test scaffolding. // metrics aggregation.
statistics.go provides basic summary statistics functions for the purpose of // statistics_test.go provides a test complement for the statistics.go module.
metrics aggregation.
statistics_test.go provides a test complement for the statistics.go module.
*/
package maths package maths

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package maths package maths
@ -18,10 +16,8 @@ type isNaNChecker struct {
*CheckerInfo *CheckerInfo
} }
/* // This piece provides a simple tester for the gocheck testing library to
This piece provides a simple tester for the gocheck testing library to // ascertain if a value is not-a-number.
ascertain if a value is not-a-number.
*/
var IsNaN Checker = &isNaNChecker{ var IsNaN Checker = &isNaNChecker{
&CheckerInfo{Name: "IsNaN", Params: []string{"value"}}, &CheckerInfo{Name: "IsNaN", Params: []string{"value"}},
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package maths package maths

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package maths package maths
@ -13,15 +11,11 @@ import (
"sort" "sort"
) )
/* // TODO(mtp): Split this out into a summary statistics file once moving/rolling
TODO(mtp): Split this out into a summary statistics file once moving/rolling // averages are calculated.
averages are calculated.
*/
/* // ReductionMethod provides a method for reducing metrics into a given scalar
ReductionMethod provides a method for reducing metrics into a given scalar // value.
value.
*/
type ReductionMethod func([]float64) float64 type ReductionMethod func([]float64) float64
var Average ReductionMethod = func(input []float64) float64 { var Average ReductionMethod = func(input []float64) float64 {
@ -40,9 +34,7 @@ var Average ReductionMethod = func(input []float64) float64 {
return sum / count return sum / count
} }
/* // Extract the first modal value.
Extract the first modal value.
*/
var FirstMode ReductionMethod = func(input []float64) float64 { var FirstMode ReductionMethod = func(input []float64) float64 {
valuesToFrequency := map[float64]int64{} valuesToFrequency := map[float64]int64{}
var largestTally int64 = math.MinInt64 var largestTally int64 = math.MinInt64
@ -63,9 +55,7 @@ var FirstMode ReductionMethod = func(input []float64) float64 {
return largestTallyValue return largestTallyValue
} }
/* // Calculate the percentile by choosing the nearest neighboring value.
Calculate the percentile by choosing the nearest neighboring value.
*/
func NearestRank(input []float64, percentile float64) float64 { func NearestRank(input []float64, percentile float64) float64 {
inputSize := len(input) inputSize := len(input)
@ -88,9 +78,7 @@ func NearestRank(input []float64, percentile float64) float64 {
return copiedInput[preliminaryIndex] return copiedInput[preliminaryIndex]
} }
/* // Generate a ReductionMethod based off of extracting a given percentile value.
Generate a ReductionMethod based off of extracting a given percentile value.
*/
func NearestRankReducer(percentile float64) ReductionMethod { func NearestRankReducer(percentile float64) ReductionMethod {
return func(input []float64) float64 { return func(input []float64) float64 {
return NearestRank(input, percentile) return NearestRank(input, percentile)

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package maths package maths

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -27,11 +25,9 @@ type AccumulatingBucket struct {
observations int observations int
} }
/* // AccumulatingBucketBuilder is a convenience method for generating a
AccumulatingBucketBuilder is a convenience method for generating a // BucketBuilder that produces AccumatingBucket entries with a certain
BucketBuilder that produces AccumatingBucket entries with a certain // behavior set.
behavior set.
*/
func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) BucketBuilder { func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) BucketBuilder {
return func() Bucket { return func() Bucket {
return &AccumulatingBucket{ return &AccumulatingBucket{
@ -42,10 +38,8 @@ func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) B
} }
} }
/* // Add a value to the bucket. Depending on whether the bucket is full, it may
Add a value to the bucket. Depending on whether the bucket is full, it may // trigger an eviction of older items.
trigger an eviction of older items.
*/
func (b *AccumulatingBucket) Add(value float64) { func (b *AccumulatingBucket) Add(value float64) {
b.mutex.Lock() b.mutex.Lock()
defer b.mutex.Unlock() defer b.mutex.Unlock()
@ -100,11 +94,9 @@ func (b *AccumulatingBucket) ValueForIndex(index int) float64 {
sort.Float64s(sortedElements) sort.Float64s(sortedElements)
/* // N.B.(mtp): Interfacing components should not need to comprehend what
N.B.(mtp): Interfacing components should not need to comprehend what // eviction and storage container strategies used; therefore,
eviction and storage container strategies used; therefore, // we adjust this silently.
we adjust this silently.
*/
targetIndex := int(float64(elementCount-1) * (float64(index) / float64(b.observations))) targetIndex := int(float64(elementCount-1) * (float64(index) / float64(b.observations)))
return sortedElements[targetIndex] return sortedElements[targetIndex]

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -123,23 +121,18 @@ func (s *S) TestAccumulatingBucketValueForIndex(c *C) {
c.Assert(b.ValueForIndex(i), maths.IsNaN) c.Assert(b.ValueForIndex(i), maths.IsNaN)
} }
/* // The bucket has only observed one item and contains now one item.
The bucket has only observed one item and contains now one item.
*/
b.Add(1.0) b.Add(1.0)
c.Check(b.ValueForIndex(0), Equals, 1.0) c.Check(b.ValueForIndex(0), Equals, 1.0)
/* // Let's sanity check what occurs if presumably an eviction happened and
Let's sanity check what occurs if presumably an eviction happened and // we requested an index larger than what is contained.
we requested an index larger than what is contained.
*/
c.Check(b.ValueForIndex(1), Equals, 1.0) c.Check(b.ValueForIndex(1), Equals, 1.0)
for i := 2.0; i <= 100; i += 1 { for i := 2.0; i <= 100; i += 1 {
b.Add(i) b.Add(i)
/*
TODO(mtp): This is a sin. Provide a mechanism for deterministic testing. // TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
*/
time.Sleep(1 * time.Millisecond) time.Sleep(1 * time.Millisecond)
} }
@ -149,17 +142,13 @@ func (s *S) TestAccumulatingBucketValueForIndex(c *C) {
for i := 101.0; i <= 150; i += 1 { for i := 101.0; i <= 150; i += 1 {
b.Add(i) b.Add(i)
/* // TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
*/
time.Sleep(1 * time.Millisecond) time.Sleep(1 * time.Millisecond)
} }
/* // The bucket's capacity has been exceeded by inputs at this point;
The bucket's capacity has been exceeded by inputs at this point; // consequently, we search for a given element by percentage offset
consequently, we search for a given element by percentage offset // therein.
therein.
*/
c.Check(b.ValueForIndex(0), Equals, 51.0) c.Check(b.ValueForIndex(0), Equals, 51.0)
c.Check(b.ValueForIndex(50), Equals, 84.0) c.Check(b.ValueForIndex(50), Equals, 84.0)
c.Check(b.ValueForIndex(99), Equals, 116.0) c.Check(b.ValueForIndex(99), Equals, 116.0)

View File

@ -1,49 +1,33 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // bucket.go provides fundamental interface expectations for various bucket
license that can be found in the LICENSE file. // types.
*/
/*
bucket.go provides fundamental interface expectations for various bucket
types.
*/
package metrics package metrics
/* // The Histogram class and associated types build buckets on their own.
The Histogram class and associated types build buckets on their own.
*/
type BucketBuilder func() Bucket type BucketBuilder func() Bucket
/* // This defines the base Bucket type. The exact behaviors of the bucket are
This defines the base Bucket type. The exact behaviors of the bucket are // at the whim of the implementor.
at the whim of the implementor. //
// A Bucket is used as a container by Histogram as a collection for its
A Bucket is used as a container by Histogram as a collection for its // accumulated samples.
accumulated samples.
*/
type Bucket interface { type Bucket interface {
/* // Add a value to the bucket.
Add a value to the bucket.
*/
Add(value float64) Add(value float64)
/* // Provide a count of observations throughout the bucket's lifetime.
Provide a count of observations throughout the bucket's lifetime.
*/
Observations() int Observations() int
// Reset is responsible for resetting this bucket back to a pristine state. // Reset is responsible for resetting this bucket back to a pristine state.
Reset() Reset()
/* // Provide a humanized representation hereof.
Provide a humanized representation hereof.
*/
String() string String() string
/* // Provide the value from the given in-memory value cache or an estimate
Provide the value from the given in-memory value cache or an estimate // thereof for the given index. The consumer of the bucket's data makes
thereof for the given index. The consumer of the bucket's data makes // no assumptions about the underlying storage mechanisms that the bucket
no assumptions about the underlying storage mechanisms that the bucket // employs.
employs.
*/
ValueForIndex(index int) float64 ValueForIndex(index int) float64
} }

View File

@ -1,14 +1,10 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // constants.go provides package-level constants for metrics.
license that can be found in the LICENSE file.
*/
/*
constants.go provides package-level constants for metrics.
*/
package metrics package metrics
const ( const (

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -27,20 +25,20 @@ type Counter interface {
String() string String() string
} }
type counterValue struct { type counterVector struct {
labels map[string]string labels map[string]string
value float64 value float64
} }
func NewCounter() Counter { func NewCounter() Counter {
return &counter{ return &counter{
values: map[string]*counterValue{}, values: map[string]*counterVector{},
} }
} }
type counter struct { type counter struct {
mutex sync.RWMutex mutex sync.RWMutex
values map[string]*counterValue values map[string]*counterVector
} }
func (metric *counter) Set(labels map[string]string, value float64) float64 { func (metric *counter) Set(labels map[string]string, value float64) float64 {
@ -55,7 +53,7 @@ func (metric *counter) Set(labels map[string]string, value float64) float64 {
if original, ok := metric.values[signature]; ok { if original, ok := metric.values[signature]; ok {
original.value = value original.value = value
} else { } else {
metric.values[signature] = &counterValue{ metric.values[signature] = &counterVector{
labels: labels, labels: labels,
value: value, value: value,
} }
@ -97,7 +95,7 @@ func (metric *counter) IncrementBy(labels map[string]string, value float64) floa
if original, ok := metric.values[signature]; ok { if original, ok := metric.values[signature]; ok {
original.value += value original.value += value
} else { } else {
metric.values[signature] = &counterValue{ metric.values[signature] = &counterVector{
labels: labels, labels: labels,
value: value, value: value,
} }
@ -122,7 +120,7 @@ func (metric *counter) DecrementBy(labels map[string]string, value float64) floa
if original, ok := metric.values[signature]; ok { if original, ok := metric.values[signature]; ok {
original.value -= value original.value -= value
} else { } else {
metric.values[signature] = &counterValue{ metric.values[signature] = &counterVector{
labels: labels, labels: labels,
value: -1 * value, value: -1 * value,
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics

View File

@ -1,51 +1,48 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style // The metrics package provides general descriptors for the concept of
license that can be found in the LICENSE file. // exportable metrics.
*/
/* // accumulating_bucket.go provides a histogram bucket type that accumulates
The metrics package provides general descriptors for the concept of exportable // elements until a given capacity and enacts a given eviction policy upon
metrics. // such a condition.
accumulating_bucket.go provides a histogram bucket type that accumulates // accumulating_bucket_test.go provides a test complement for the
elements until a given capacity and enacts a given eviction policy upon // accumulating_bucket_go module.
such a condition.
accumulating_bucket_test.go provides a test complement for the // eviction.go provides several histogram bucket eviction strategies.
accumulating_bucket_go module.
eviction.go provides several histogram bucket eviction strategies. // eviction_test.go provides a test complement for the eviction.go module.
eviction_test.go provides a test complement for the eviction.go module. // gauge.go provides a scalar metric that one can monitor. It is useful for
// certain cases, such as instantaneous temperature.
gauge.go provides a scalar metric that one can monitor. It is useful for // gauge_test.go provides a test complement for the gauge.go module.
certain cases, such as instantaneous temperature.
gauge_test.go provides a test complement for the gauge.go module. // histogram.go provides a basic histogram metric, which can accumulate scalar
// event values or samples. The underlying histogram implementation is designed
// to be performant in that it accepts tolerable inaccuracies.
histogram.go provides a basic histogram metric, which can accumulate scalar // histogram_test.go provides a test complement for the histogram.go module.
event values or samples. The underlying histogram implementation is designed
to be performant in that it accepts tolerable inaccuracies.
histogram_test.go provides a test complement for the histogram.go module. // metric.go provides fundamental interface expectations for the various
// metrics.
metric.go provides fundamental interface expectations for the various metrics. // metrics_test.go provides a test suite for all tests in the metrics package
// hierarchy. It employs the gocheck framework for test scaffolding.
metrics_test.go provides a test suite for all tests in the metrics package // tallying_bucket.go provides a histogram bucket type that aggregates tallies
hierarchy. It employs the gocheck framework for test scaffolding. // of events that fall into its ranges versus a summary of the values
// themselves.
tallying_bucket.go provides a histogram bucket type that aggregates tallies // tallying_bucket_test.go provides a test complement for the
of events that fall into its ranges versus a summary of the values // tallying_bucket.go module.
themselves.
tallying_bucket_test.go provides a test complement for the // timer.go provides a scalar metric that times how long a given event takes.
tallying_bucket.go module.
timer.go provides a scalar metric that times how long a given event takes. // timer_test.go provides a test complement for the timer.go module.
timer_test.go provides a test complement for the timer.go module.
*/
package metrics package metrics

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -15,16 +13,12 @@ import (
"time" "time"
) )
/* // EvictionPolicy implements some sort of garbage collection methodology for
EvictionPolicy implements some sort of garbage collection methodology for // an underlying heap.Interface. This is presently only used for
an underlying heap.Interface. This is presently only used for // AccumulatingBucket.
AccumulatingBucket.
*/
type EvictionPolicy func(h heap.Interface) type EvictionPolicy func(h heap.Interface)
/* // As the name implies, this evicts the oldest x objects from the heap.
As the name implies, this evicts the oldest x objects from the heap.
*/
func EvictOldest(count int) EvictionPolicy { func EvictOldest(count int) EvictionPolicy {
return func(h heap.Interface) { return func(h heap.Interface) {
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
@ -33,10 +27,8 @@ func EvictOldest(count int) EvictionPolicy {
} }
} }
/* // This factory produces an EvictionPolicy that applies some standardized
This factory produces an EvictionPolicy that applies some standardized // reduction methodology on the to-be-terminated values.
reduction methodology on the to-be-terminated values.
*/
func EvictAndReplaceWith(count int, reducer maths.ReductionMethod) EvictionPolicy { func EvictAndReplaceWith(count int, reducer maths.ReductionMethod) EvictionPolicy {
return func(h heap.Interface) { return func(h heap.Interface) {
oldValues := make([]float64, count) oldValues := make([]float64, count)
@ -49,9 +41,8 @@ func EvictAndReplaceWith(count int, reducer maths.ReductionMethod) EvictionPolic
heap.Push(h, &utility.Item{ heap.Push(h, &utility.Item{
Value: reduced, Value: reduced,
/* // TODO(mtp): Parameterize the priority generation since these tools are
TODO(mtp): Parameterize the priority generation since these tools are useful. // useful.
*/
Priority: -1 * time.Now().UnixNano(), Priority: -1 * time.Now().UnixNano(),
}) })
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -14,12 +12,10 @@ import (
"sync" "sync"
) )
/* // A gauge metric merely provides an instantaneous representation of a scalar
A gauge metric merely provides an instantaneous representation of a scalar // value or an accumulation. For instance, if one wants to expose the current
value or an accumulation. For instance, if one wants to expose the current // temperature or the hitherto bandwidth used, this would be the metric for such
temperature or the hitherto bandwidth used, this would be the metric for such // circumstances.
circumstances.
*/
type Gauge interface { type Gauge interface {
AsMarshallable() map[string]interface{} AsMarshallable() map[string]interface{}
ResetAll() ResetAll()
@ -27,20 +23,20 @@ type Gauge interface {
String() string String() string
} }
type gaugeValue struct { type gaugeVector struct {
labels map[string]string labels map[string]string
value float64 value float64
} }
func NewGauge() Gauge { func NewGauge() Gauge {
return &gauge{ return &gauge{
values: map[string]*gaugeValue{}, values: map[string]*gaugeVector{},
} }
} }
type gauge struct { type gauge struct {
mutex sync.RWMutex mutex sync.RWMutex
values map[string]*gaugeValue values map[string]*gaugeVector
} }
func (metric *gauge) String() string { func (metric *gauge) String() string {
@ -65,7 +61,7 @@ func (metric *gauge) Set(labels map[string]string, value float64) float64 {
if original, ok := metric.values[signature]; ok { if original, ok := metric.values[signature]; ok {
original.value = value original.value = value
} else { } else {
metric.values[signature] = &gaugeValue{ metric.values[signature] = &gaugeVector{
labels: labels, labels: labels,
value: value, value: value,
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics

View File

@ -1,27 +1,24 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
import ( import (
"bytes" "bytes"
"fmt" "fmt"
"github.com/prometheus/client_golang/maths"
"github.com/prometheus/client_golang/utility" "github.com/prometheus/client_golang/utility"
"math" "math"
"strconv" "strconv"
"sync" "sync"
) )
/* // This generates count-buckets of equal size distributed along the open
This generates count-buckets of equal size distributed along the open // interval of lower to upper. For instance, {lower=0, upper=10, count=5}
interval of lower to upper. For instance, {lower=0, upper=10, count=5} // yields the following: [0, 2, 4, 6, 8].
yields the following: [0, 2, 4, 6, 8].
*/
func EquallySizedBucketsFor(lower, upper float64, count int) []float64 { func EquallySizedBucketsFor(lower, upper float64, count int) []float64 {
buckets := make([]float64, count) buckets := make([]float64, count)
@ -35,10 +32,8 @@ func EquallySizedBucketsFor(lower, upper float64, count int) []float64 {
return buckets return buckets
} }
/* // This generates log2-sized buckets spanning from lower to upper inclusively
This generates log2-sized buckets spanning from lower to upper inclusively // as well as values beyond it.
as well as values beyond it.
*/
func LogarithmicSizedBucketsFor(lower, upper float64) []float64 { func LogarithmicSizedBucketsFor(lower, upper float64) []float64 {
bucketCount := int(math.Ceil(math.Log2(upper))) bucketCount := int(math.Ceil(math.Log2(upper)))
@ -51,9 +46,7 @@ func LogarithmicSizedBucketsFor(lower, upper float64) []float64 {
return buckets return buckets
} }
/* // A HistogramSpecification defines how a Histogram is to be built.
A HistogramSpecification defines how a Histogram is to be built.
*/
type HistogramSpecification struct { type HistogramSpecification struct {
BucketBuilder BucketBuilder BucketBuilder BucketBuilder
ReportablePercentiles []float64 ReportablePercentiles []float64
@ -67,39 +60,30 @@ type Histogram interface {
String() string String() string
} }
/* // The histogram is an accumulator for samples. It merely routes into which
The histogram is an accumulator for samples. It merely routes into which // bucket to capture an event and provides a percentile calculation mechanism.
to bucket to capture an event and provides a percentile calculation
mechanism.
*/
type histogram struct { type histogram struct {
bucketMaker BucketBuilder bucketMaker BucketBuilder
/* // This represents the open interval's start at which values shall be added to
This represents the open interval's start at which values shall be added to // the bucket. The interval continues until the beginning of the next bucket
the bucket. The interval continues until the beginning of the next bucket // exclusive or positive infinity.
exclusive or positive infinity. //
// N.B.
N.B. // - bucketStarts should be sorted in ascending order;
- bucketStarts should be sorted in ascending order; // - len(bucketStarts) must be equivalent to len(buckets);
- len(bucketStarts) must be equivalent to len(buckets); // - The index of a given bucketStarts' element is presumed to
- The index of a given bucketStarts' element is presumed to // correspond to the appropriate element in buckets.
correspond to the appropriate element in buckets.
*/
bucketStarts []float64 bucketStarts []float64
mutex sync.RWMutex mutex sync.RWMutex
/* // These are the buckets that capture samples as they are emitted to the
These are the buckets that capture samples as they are emitted to the // histogram. Please consult the reference interface and its implements for
histogram. Please consult the reference interface and its implements for // further details about behavior expectations.
further details about behavior expectations. values map[string]*histogramVector
*/ // These are the percentile values that will be reported on marshalling.
values map[string]*histogramValue
/*
These are the percentile values that will be reported on marshalling.
*/
reportablePercentiles []float64 reportablePercentiles []float64
} }
type histogramValue struct { type histogramVector struct {
buckets []Bucket buckets []Bucket
labels map[string]string labels map[string]string
} }
@ -113,12 +97,12 @@ func (h *histogram) Add(labels map[string]string, value float64) {
} }
signature := utility.LabelsToSignature(labels) signature := utility.LabelsToSignature(labels)
var histogram *histogramValue = nil var histogram *histogramVector = nil
if original, ok := h.values[signature]; ok { if original, ok := h.values[signature]; ok {
histogram = original histogram = original
} else { } else {
bucketCount := len(h.bucketStarts) bucketCount := len(h.bucketStarts)
histogram = &histogramValue{ histogram = &histogramVector{
buckets: make([]Bucket, bucketCount), buckets: make([]Bucket, bucketCount),
labels: labels, labels: labels,
} }
@ -161,9 +145,7 @@ func (h *histogram) String() string {
return stringBuffer.String() return stringBuffer.String()
} }
/* // Determine the number of previous observations up to a given index.
Determine the number of previous observations up to a given index.
*/
func previousCumulativeObservations(cumulativeObservations []int, bucketIndex int) int { func previousCumulativeObservations(cumulativeObservations []int, bucketIndex int) int {
if bucketIndex == 0 { if bucketIndex == 0 {
return 0 return 0
@ -172,16 +154,12 @@ func previousCumulativeObservations(cumulativeObservations []int, bucketIndex in
return cumulativeObservations[bucketIndex-1] return cumulativeObservations[bucketIndex-1]
} }
/* // Determine the index for an element given a percentage of length.
Determine the index for an element given a percentage of length.
*/
func prospectiveIndexForPercentile(percentile float64, totalObservations int) int { func prospectiveIndexForPercentile(percentile float64, totalObservations int) int {
return int(percentile * float64(totalObservations-1)) return int(percentile * float64(totalObservations-1))
} }
/* // Determine the next bucket element when interim bucket intervals may be empty.
Determine the next bucket element when interim bucket intervals may be empty.
*/
func (h *histogram) nextNonEmptyBucketElement(signature string, currentIndex, bucketCount int, observationsByBucket []int) (*Bucket, int) { func (h *histogram) nextNonEmptyBucketElement(signature string, currentIndex, bucketCount int, observationsByBucket []int) (*Bucket, int) {
for i := currentIndex; i < bucketCount; i++ { for i := currentIndex; i < bucketCount; i++ {
if observationsByBucket[i] == 0 { if observationsByBucket[i] == 0 {
@ -196,24 +174,18 @@ func (h *histogram) nextNonEmptyBucketElement(signature string, currentIndex, bu
panic("Illegal Condition: There were no remaining buckets to provide a value.") panic("Illegal Condition: There were no remaining buckets to provide a value.")
} }
/* // Find what bucket and element index contains a given percentile value.
Find what bucket and element index contains a given percentile value. // If a percentile is requested that results in a corresponding index that is no
If a percentile is requested that results in a corresponding index that is no // longer contained by the bucket, the index of the last item is returned. This
longer contained by the bucket, the index of the last item is returned. This // may occur if the underlying bucket catalogs values and employs an eviction
may occur if the underlying bucket catalogs values and employs an eviction // strategy.
strategy.
*/
func (h *histogram) bucketForPercentile(signature string, percentile float64) (*Bucket, int) { func (h *histogram) bucketForPercentile(signature string, percentile float64) (*Bucket, int) {
bucketCount := len(h.bucketStarts) bucketCount := len(h.bucketStarts)
/* // This captures the quantity of samples in a given bucket's range.
This captures the quantity of samples in a given bucket's range.
*/
observationsByBucket := make([]int, bucketCount) observationsByBucket := make([]int, bucketCount)
/* // This captures the cumulative quantity of observations from all preceding
This captures the cumulative quantity of observations from all preceding // buckets up and to the end of this bucket.
buckets up and to the end of this bucket.
*/
cumulativeObservationsByBucket := make([]int, bucketCount) cumulativeObservationsByBucket := make([]int, bucketCount)
totalObservations := 0 totalObservations := 0
@ -227,11 +199,9 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
cumulativeObservationsByBucket[i] = totalObservations cumulativeObservationsByBucket[i] = totalObservations
} }
/* // This captures the index offset where the given percentile value would be
This captures the index offset where the given percentile value would be // were all submitted samples stored and never down-/re-sampled nor deleted
were all submitted samples stored and never down-/re-sampled nor deleted // and housed in a singular array.
and housed in a singular array.
*/
prospectiveIndex := prospectiveIndexForPercentile(percentile, totalObservations) prospectiveIndex := prospectiveIndexForPercentile(percentile, totalObservations)
for i, cumulativeObservation := range cumulativeObservationsByBucket { for i, cumulativeObservation := range cumulativeObservationsByBucket {
@ -239,21 +209,15 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
continue continue
} }
/* // Find the bucket that contains the given index.
Find the bucket that contains the given index.
*/
if cumulativeObservation >= prospectiveIndex { if cumulativeObservation >= prospectiveIndex {
var subIndex int var subIndex int
/* // This calculates the index within the current bucket where the given
This calculates the index within the current bucket where the given // percentile may be found.
percentile may be found.
*/
subIndex = prospectiveIndex - previousCumulativeObservations(cumulativeObservationsByBucket, i) subIndex = prospectiveIndex - previousCumulativeObservations(cumulativeObservationsByBucket, i)
/* // Sometimes the index may be the last item, in which case we need to
Sometimes the index may be the last item, in which case we need to // take this into account.
take this into account.
*/
if observationsByBucket[i] == subIndex { if observationsByBucket[i] == subIndex {
return h.nextNonEmptyBucketElement(signature, i+1, bucketCount, observationsByBucket) return h.nextNonEmptyBucketElement(signature, i+1, bucketCount, observationsByBucket)
} }
@ -265,11 +229,9 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
return &histogram.buckets[0], 0 return &histogram.buckets[0], 0
} }
/* // Return the histogram's estimate of the value for a given percentile of
Return the histogram's estimate of the value for a given percentile of // collected samples. The requested percentile is expected to be a real
collected samples. The requested percentile is expected to be a real // value within (0, 1.0].
value within (0, 1.0].
*/
func (h *histogram) percentile(signature string, percentile float64) float64 { func (h *histogram) percentile(signature string, percentile float64) float64 {
bucket, index := h.bucketForPercentile(signature, percentile) bucket, index := h.bucketForPercentile(signature, percentile)
@ -318,16 +280,26 @@ func (h *histogram) ResetAll() {
} }
} }
/* // Produce a histogram from a given specification.
Produce a histogram from a given specification.
*/
func NewHistogram(specification *HistogramSpecification) Histogram { func NewHistogram(specification *HistogramSpecification) Histogram {
metric := &histogram{ metric := &histogram{
bucketMaker: specification.BucketBuilder, bucketMaker: specification.BucketBuilder,
bucketStarts: specification.Starts, bucketStarts: specification.Starts,
reportablePercentiles: specification.ReportablePercentiles, reportablePercentiles: specification.ReportablePercentiles,
values: map[string]*histogramValue{}, values: map[string]*histogramVector{},
} }
return metric return metric
} }
// Furnish a Histogram with unsensible default values and behaviors that is
// strictly useful for prototyping purposes.
func NewDefaultHistogram() Histogram {
return NewHistogram(
&HistogramSpecification{
Starts: LogarithmicSizedBucketsFor(0, 4096),
BucketBuilder: AccumulatingBucketBuilder(EvictAndReplaceWith(10, maths.Average), 50),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
},
)
}

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -20,17 +18,13 @@ const (
upperThird = 2.0 * lowerThird upperThird = 2.0 * lowerThird
) )
/* // A TallyingIndexEstimator is responsible for estimating the value of index for
A TallyingIndexEstimator is responsible for estimating the value of index for // a given TallyingBucket, even though a TallyingBucket does not possess a
a given TallyingBucket, even though a TallyingBucket does not possess a // collection of samples. There are a few strategies listed below for how
collection of samples. There are a few strategies listed below for how // this value should be approximated.
this value should be approximated.
*/
type TallyingIndexEstimator func(minimum, maximum float64, index, observations int) float64 type TallyingIndexEstimator func(minimum, maximum float64, index, observations int) float64
/* // Provide a filter for handling empty buckets.
Provide a filter for handling empty buckets.
*/
func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator { func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator {
return func(minimum, maximum float64, index, observations int) float64 { return func(minimum, maximum float64, index, observations int) float64 {
if observations == 0 { if observations == 0 {
@ -41,31 +35,23 @@ func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator {
} }
} }
/* // Report the smallest observed value in the bucket.
Report the smallest observed value in the bucket.
*/
var Minimum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 { var Minimum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return minimum return minimum
}) })
/* // Report the largest observed value in the bucket.
Report the largest observed value in the bucket.
*/
var Maximum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 { var Maximum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return maximum return maximum
}) })
/* // Report the average of the extrema.
Report the average of the extrema.
*/
var Average TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 { var Average TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return maths.Average([]float64{minimum, maximum}) return maths.Average([]float64{minimum, maximum})
}) })
/* // Report the minimum value of the index is in the lower-third of observations,
Report the minimum value of the index is in the lower-third of observations, // the average if in the middle-third, and the maximum if in the largest third.
the average if in the middle-third, and the maximum if in the largest third.
*/
var Uniform TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, index, observations int) float64 { var Uniform TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, index, observations int) float64 {
if observations == 1 { if observations == 1 {
return minimum return minimum
@ -82,11 +68,9 @@ var Uniform TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64,
return maths.Average([]float64{minimum, maximum}) return maths.Average([]float64{minimum, maximum})
}) })
/* // A TallyingBucket is a Bucket that tallies when an object is added to it.
A TallyingBucket is a Bucket that tallies when an object is added to it. // Upon insertion, an object is compared against collected extrema and noted
Upon insertion, an object is compared against collected extrema and noted // as a new minimum or maximum if appropriate.
as a new minimum or maximum if appropriate.
*/
type TallyingBucket struct { type TallyingBucket struct {
estimator TallyingIndexEstimator estimator TallyingIndexEstimator
largestObserved float64 largestObserved float64
@ -140,9 +124,7 @@ func (b *TallyingBucket) Reset() {
b.smallestObserved = math.MaxFloat64 b.smallestObserved = math.MaxFloat64
} }
/* // Produce a TallyingBucket with sane defaults.
Produce a TallyingBucket with sane defaults.
*/
func DefaultTallyingBucket() TallyingBucket { func DefaultTallyingBucket() TallyingBucket {
return TallyingBucket{ return TallyingBucket{
estimator: Minimum, estimator: Minimum,
@ -159,10 +141,8 @@ func CustomTallyingBucket(estimator TallyingIndexEstimator) TallyingBucket {
} }
} }
/* // This is used strictly for testing.
This is used strictly for testing. func tallyingBucketBuilder() Bucket {
*/
func TallyingBucketBuilder() Bucket {
b := DefaultTallyingBucket() b := DefaultTallyingBucket()
return &b return &b
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -40,7 +38,7 @@ func (s *S) TestTallyingPercentilesEstimatorUniform(c *C) {
} }
func (s *S) TestTallyingBucketBuilder(c *C) { func (s *S) TestTallyingBucketBuilder(c *C) {
var bucket Bucket = TallyingBucketBuilder() var bucket Bucket = tallyingBucketBuilder()
c.Assert(bucket, Not(IsNil)) c.Assert(bucket, Not(IsNil))
} }

View File

@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics
@ -12,24 +10,18 @@ import (
"time" "time"
) )
/* // This callback is called upon the completion of the timer—i.e., when it stops.
This callback is called upon the completion of the timeri.e., when it stops.
*/
type CompletionCallback func(duration time.Duration) type CompletionCallback func(duration time.Duration)
/* // This is meant to capture a function that a StopWatch can call for purposes
This is meant to capture a function that a StopWatch can call for purposes // of instrumentation.
of instrumentation.
*/
type InstrumentableCall func() type InstrumentableCall func()
/* // StopWatch is the structure that captures instrumentation for durations.
StopWatch is the structure that captures instrumentation for durations.
N.B.(mtp): A major limitation hereof is that the StopWatch protocol cannot // N.B.(mtp): A major limitation hereof is that the StopWatch protocol cannot
retain instrumentation if a panic percolates within the context that is // retain instrumentation if a panic percolates within the context that is
being measured. // being measured.
*/
type StopWatch interface { type StopWatch interface {
Stop() time.Duration Stop() time.Duration
} }
@@ -40,9 +32,7 @@ type stopWatch struct {
startTime time.Time startTime time.Time
} }
/* // Return a new StopWatch that is ready for instrumentation.
Return a new StopWatch that is ready for instrumentation.
*/
func Start(onCompletion CompletionCallback) StopWatch { func Start(onCompletion CompletionCallback) StopWatch {
return &stopWatch{ return &stopWatch{
onCompletion: onCompletion, onCompletion: onCompletion,
@@ -50,10 +40,8 @@ func Start(onCompletion CompletionCallback) StopWatch {
} }
} }
/* // Stop the StopWatch, returning the elapsed duration of its lifetime while
Stop the StopWatch, returning the elapsed duration of its lifetime while // firing an optional CompletionCallback in the background.
firing an optional CompletionCallback in the background.
*/
func (s *stopWatch) Stop() time.Duration { func (s *stopWatch) Stop() time.Duration {
s.endTime = time.Now() s.endTime = time.Now()
duration := s.endTime.Sub(s.startTime) duration := s.endTime.Sub(s.startTime)
@@ -65,10 +53,8 @@ func (s *stopWatch) Stop() time.Duration {
return duration return duration
} }
/* // Provide a quick way of instrumenting an InstrumentableCall and emitting its
Provide a quick way of instrumenting an InstrumentableCall and emitting its // duration.
duration.
*/
func InstrumentCall(instrumentable InstrumentableCall, onCompletion CompletionCallback) time.Duration { func InstrumentCall(instrumentable InstrumentableCall, onCompletion CompletionCallback) time.Duration {
s := Start(onCompletion) s := Start(onCompletion)
instrumentable() instrumentable()
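A small usage sketch of the StopWatch protocol above. Only Start, Stop, and the CompletionCallback shape from this diff are relied upon; doWork and the log calls are placeholders for the measured work and for wherever the duration should be recorded.

    func doWork() {} // hypothetical work being measured

    func timeSomething() {
        sw := metrics.Start(func(duration time.Duration) {
            log.Printf("operation took %v", duration) // placeholder sink for the reading
        })
        doWork()
        elapsed := sw.Stop()
        log.Printf("elapsed: %v", elapsed)
    }

    // The protocol composes naturally with defer for whole-function timing.
    func timedHandler() {
        defer metrics.Start(func(d time.Duration) { log.Printf("handler took %v", d) }).Stop()
        doWork()
    }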

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package metrics package metrics

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style license that can be found
Use of this source code is governed by a BSD-style license that can be found in // in the LICENSE file.
the LICENSE file.
*/
package registry package registry
@@ -22,7 +20,6 @@ import (
"sort" "sort"
"strings" "strings"
"sync" "sync"
"time"
) )
const ( const (
@@ -42,18 +39,10 @@ var (
abortOnMisuse bool abortOnMisuse bool
debugRegistration bool debugRegistration bool
useAggressiveSanityChecks bool useAggressiveSanityChecks bool
DefaultHandler = DefaultRegistry.Handler()
) )
/*
This callback accumulates the microsecond duration of the reporting framework's
overhead such that it can be reported.
*/
var requestLatencyAccumulator metrics.CompletionCallback = func(duration time.Duration) {
microseconds := float64(duration / time.Microsecond)
requestLatency.Add(nil, microseconds)
}
// container represents a top-level registered metric that encompasses its // container represents a top-level registered metric that encompasses its
// static metadata. // static metadata.
type container struct { type container struct {
@@ -63,36 +52,29 @@ type container struct {
name string name string
} }
/* // Registry is, as the name implies, a registrar where metrics are listed.
Registry is, as the name implies, a registrar where metrics are listed. //
// In most situations, using DefaultRegistry is sufficient versus creating one's
In most situations, using DefaultRegistry is sufficient versus creating one's // own.
own.
*/
type Registry struct { type Registry struct {
mutex sync.RWMutex mutex sync.RWMutex
signatureContainers map[string]container signatureContainers map[string]container
} }
/* // This builds a new metric registry. It is not needed in the majority of
This builds a new metric registry. It is not needed in the majority of // cases.
cases.
*/
func NewRegistry() *Registry { func NewRegistry() *Registry {
return &Registry{ return &Registry{
signatureContainers: make(map[string]container), signatureContainers: make(map[string]container),
} }
} }
/* // This is the default registry with which Metric objects are associated. It
This is the default registry with which Metric objects are associated. It // is primarily a read-only object after server instantiation.
is primarily a read-only object after server instantiation. //
*/
var DefaultRegistry = NewRegistry() var DefaultRegistry = NewRegistry()
/* // Associate a Metric with the DefaultRegistry.
Associate a Metric with the DefaultRegistry.
*/
func Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) error { func Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) error {
return DefaultRegistry.Register(name, docstring, baseLabels, metric) return DefaultRegistry.Register(name, docstring, baseLabels, metric)
} }
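To illustrate the registration surface above: the Register signature, NilLabels, and NewCounter are taken from this diff, while the metric name, the docstring, and the import paths (not shown here) are placeholders.

    var requestTotal = metrics.NewCounter()

    func init() {
        // Package-level convenience shim; registers against DefaultRegistry.
        if err := registry.Register("api_requests_total", "Total API requests served.",
            registry.NilLabels, requestTotal); err != nil {
            log.Fatalln(err)
        }

        // The equivalent call against an explicitly constructed Registry.
        custom := registry.NewRegistry()
        _ = custom.Register("api_requests_total", "Total API requests served.",
            registry.NilLabels, requestTotal)
    }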
@@ -155,9 +137,7 @@ func (r *Registry) isValidCandidate(name string, baseLabels map[string]string) (
return return
} }
/* // Register a metric with a given name. Name should be globally unique.
Register a metric with a given name. Name should be globally unique.
*/
func (r *Registry) Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) (err error) { func (r *Registry) Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) (err error) {
r.mutex.Lock() r.mutex.Lock()
defer r.mutex.Unlock() defer r.mutex.Unlock()
@@ -184,6 +164,9 @@ func (r *Registry) Register(name, docstring string, baseLabels map[string]string
// YieldBasicAuthExporter creates a http.HandlerFunc that is protected by HTTP's // YieldBasicAuthExporter creates a http.HandlerFunc that is protected by HTTP's
// basic authentication. // basic authentication.
func (register *Registry) YieldBasicAuthExporter(username, password string) http.HandlerFunc { func (register *Registry) YieldBasicAuthExporter(username, password string) http.HandlerFunc {
// XXX: Work with Daniel to get this removed from the library, as it is really
// superfluous and can be much more elegantly accomplished via
// delegation.
exporter := register.YieldExporter() exporter := register.YieldExporter()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -277,11 +260,15 @@ func decorateWriter(request *http.Request, writer http.ResponseWriter) io.Writer
return gziper return gziper
} }
/*
Create a http.HandlerFunc that is tied to r Registry such that requests
against it generate a representation of the housed metrics.
*/
func (registry *Registry) YieldExporter() http.HandlerFunc { func (registry *Registry) YieldExporter() http.HandlerFunc {
log.Println("Registry.YieldExporter is deprecated in favor of Registry.Handler.")
return registry.Handler()
}
// Create a http.HandlerFunc that is tied to a Registry such that requests
// against it generate a representation of the housed metrics.
func (registry *Registry) Handler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) { return func(w http.ResponseWriter, r *http.Request) {
var instrumentable metrics.InstrumentableCall = func() { var instrumentable metrics.InstrumentableCall = func() {
requestCount.Increment(nil) requestCount.Increment(nil)
@@ -294,7 +281,6 @@ func (registry *Registry) YieldExporter() http.HandlerFunc {
writer := decorateWriter(r, w) writer := decorateWriter(r, w)
// TODO(matt): Migrate to ioutil.NopCloser.
if closer, ok := writer.(io.Closer); ok { if closer, ok := writer.(io.Closer); ok {
defer closer.Close() defer closer.Close()
} }
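Hooking the exposition into an HTTP server after this change looks roughly like the sketch below; the URL paths, credentials, and listen address are placeholders, and the handler methods are the ones defined or deprecated above.

    func main() {
        // Preferred after this change: Registry.Handler.
        http.Handle("/metrics.json", registry.DefaultRegistry.Handler())

        // Still functional, but logs a deprecation notice and delegates to Handler:
        //   http.Handle("/metrics.json", registry.DefaultRegistry.YieldExporter())

        // Optional HTTP basic authentication around the same exposition.
        http.Handle("/private/metrics.json",
            registry.DefaultRegistry.YieldBasicAuthExporter("scraper", "secret"))

        log.Fatal(http.ListenAndServe(":8080", nil))
    }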

View File

@@ -1,8 +1,8 @@
// Copyright (c) 2013, Matt T. Proud // Copyright (c) 2013, Matt T. Proud
// All rights reserved. // All rights reserved.
// //
// Use of this source code is governed by a BSD-style license that can be found in // Use of this source code is governed by a BSD-style license that can be found
// the LICENSE file. // in the LICENSE file.
package registry package registry

View File

@@ -1,10 +1,9 @@
/* // Copyright (c) 2013, Matt T. Proud
Copyright (c) 2013, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style license that can be found
Use of this source code is governed by a BSD-style license that can be found in // in the LICENSE file.
the LICENSE file. //
*/
package registry package registry
@@ -14,11 +13,9 @@ import (
"time" "time"
) )
/* // Boilerplate metrics about the metrics reporting subservice. These are only
Boilerplate metrics about the metrics reporting subservice. These are only // exposed if the DefaultRegistry's exporter is hooked into the HTTP request
exposed if the DefaultRegistry's exporter is hooked into the HTTP request // handler.
handler.
*/
var ( var (
marshalErrorCount = metrics.NewCounter() marshalErrorCount = metrics.NewCounter()
dumpErrorCount = metrics.NewCounter() dumpErrorCount = metrics.NewCounter()
@@ -42,3 +39,11 @@ func init() {
DefaultRegistry.Register("instance_start_time_seconds", "The time at which the current instance started (UTC).", NilLabels, startTime) DefaultRegistry.Register("instance_start_time_seconds", "The time at which the current instance started (UTC).", NilLabels, startTime)
} }
// This callback accumulates the microsecond duration of the reporting
// framework's overhead such that it can be reported.
var requestLatencyAccumulator metrics.CompletionCallback = func(duration time.Duration) {
microseconds := float64(duration / time.Microsecond)
requestLatency.Add(nil, microseconds)
}
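The accumulator above is presumably paired with metrics.InstrumentCall on the exposition path; an application can mirror the same pattern for its own operations. In this sketch, scrapeOnce is a hypothetical stand-in for the measured call, and a log statement stands in for a real metric update.

    // Mirrors requestLatencyAccumulator above; substitute a real metric update
    // for the log call as needed.
    var scrapeLatencyAccumulator metrics.CompletionCallback = func(duration time.Duration) {
        log.Printf("scrape completed in %d µs", duration/time.Microsecond)
    }

    func scrapeOnce() {} // hypothetical operation being measured

    func timedScrape() {
        metrics.InstrumentCall(scrapeOnce, scrapeLatencyAccumulator)
    }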

View File

@@ -1,24 +1,20 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved.
Use of this source code is governed by a BSD-style // Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file. // license that can be found in the LICENSE file.
*/
/* // The utility package provides general purpose helpers to assist with this
The utility package provides general purpose helpers to assist with this // library.
library.
priority_queue.go provides a simple priority queue. // priority_queue.go provides a simple priority queue.
priority_queue_test.go provides a test complement for the priority_queue.go // priority_queue_test.go provides a test complement for the priority_queue.go
module. // module.
test_helper.go provides testing assistants for this package and its // test_helper.go provides testing assistants for this package and its
dependents. // dependents.
utility_test.go provides a test suite for all tests in the utility package // utility_test.go provides a test suite for all tests in the utility package
hierarchy. It employs the gocheck framework for test scaffolding. // hierarchy. It employs the gocheck framework for test scaffolding.
*/
package documentation package documentation

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved.
Use of this source code is governed by a BSD-style // Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file. // license that can be found in the LICENSE file.
*/
package utility package utility

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package utility package utility

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package utility package utility

View File

@@ -1,10 +1,8 @@
/* // Copyright (c) 2012, Matt T. Proud
Copyright (c) 2012, Matt T. Proud // All rights reserved.
All rights reserved. //
// Use of this source code is governed by a BSD-style
Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
license that can be found in the LICENSE file.
*/
package utility package utility