Merge pull request #2 from prometheus/usability/sundry

Sundry cosmetic fixes across the board.
Matt T. Proud 2013-02-13 09:24:06 -08:00
commit d822e70a37
35 changed files with 492 additions and 663 deletions

View File

@ -1,8 +1,8 @@
// Copyright (c) 2013, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found in
// the LICENSE file.
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package registry
@ -19,6 +19,8 @@ var (
ProtocolVersionHeader = "X-Prometheus-API-Version"
ExpositionResource = "/metrics.json"
baseLabelsKey = "baseLabels"
docstringKey = "docstring"
metricKey = "metric"

View File

@ -1,54 +1,52 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
Use of this source code is governed by a BSD-style license that can be found in
the LICENSE file.
*/
// registry.go provides a container for centralized exposition of metrics to
// their prospective consumers.
/*
registry.go provides a container for centralized exposition of metrics to
their prospective consumers.
// registry.Register("human_readable_metric_name", metric)
registry.Register("human_readable_metric_name", metric)
// Please try to observe the following rules when naming metrics:
Please try to observe the following rules when naming metrics:
// - Use underbars "_" to separate words.
- Use underbars "_" to separate words.
// - Have the metric name start from generality and work toward specificity
// toward the end. For example, when working with multiple caching subsystems,
// consider using the following structure "cache" + "user_credentials" →
// "cache_user_credentials" and "cache" + "value_transformations" →
// "cache_value_transformations".
- Have the metric name start from generality and work toward specificity
toward the end. For example, when working with multiple caching subsystems,
consider using the following structure "cache" + "user_credentials" →
"cache_user_credentials" and "cache" + "value_transformations" →
"cache_value_transformations".
// - Have whatever is being measured follow the system and subsystem names cited
// supra. For instance, with "insertions", "deletions", "evictions",
// "replacements" of the above cache, they should be named as
// "cache_user_credentials_insertions" and "cache_user_credentials_deletions"
// and "cache_user_credentials_deletions" and
// "cache_user_credentials_evictions".
- Have whatever is being measured follow the system and subsystem names cited
supra. For instance, with "insertions", "deletions", "evictions",
"replacements" of the above cache, they should be named as
"cache_user_credentials_insertions" and "cache_user_credentials_deletions" and
"cache_user_credentials_deletions" and "cache_user_credentials_evictions".
// - If what is being measured has a standardized unit around it, consider
// providing a unit for it.
- If what is being measured has a standardized unit around it, consider
providing a unit for it.
// - Consider adding an additional suffix that designates what the value
// represents such as a "total" or "size"---e.g.,
// "cache_user_credentials_size_kb" or
// "cache_user_credentials_insertions_total".
- Consider adding an additional suffix that designates what the value represents
such as a "total" or "size"---e.g., "cache_user_credentials_size_kb" or
"cache_user_credentials_insertions_total".
// - Give heed to how future-proof the names are. Things may depend on these
// names; and as your service evolves, the calculated values may take on
// different meanings, which can be difficult to reflect if deployed code
// depends on antique names.
- Give heed to how future-proof the names are. Things may depend on these
names; and as your service evolves, the calculated values may take on
different meanings, which can be difficult to reflect if deployed code depends
on antique names.
// Further considerations:
Further considerations:
// - The Registry's exposition mechanism is not backed by authorization and
// authentication. This is something that will need to be addressed for
// production services that are directly exposed to the outside world.
- The Registry's exposition mechanism is not backed by authorization and
authentication. This is something that will need to be addressed for
production services that are directly exposed to the outside world.
- Engage in as little in-process processing of values as possible. The job
of processing and aggregation of these values belongs in a separate
post-processing job. The same goes for archiving. I will need to evaluate
hooks into something like OpenTSBD.
*/
// - Engage in as little in-process processing of values as possible. The job
// of processing and aggregation of these values belongs in a separate
// post-processing job. The same goes for archiving. I will need to evaluate
// hooks into something like OpenTSBD.
package registry
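
For illustration, a minimal sketch of registering a metric whose name follows the guidance above. Register, NilLabels, and NewCounter appear elsewhere in this commit; the metric name, docstring, label set, and import paths are illustrative assumptions.

package main

import (
    "github.com/prometheus/client_golang/metrics"
    "github.com/prometheus/client_golang/registry"
)

var cacheEvictions = metrics.NewCounter()

func init() {
    // "cache" (system) + "user_credentials" (subsystem) + "evictions" (what is
    // measured) + "total" (suffix), per the naming guidance above.
    registry.Register("cache_user_credentials_evictions_total",
        "Evictions from the hypothetical user-credentials cache.",
        registry.NilLabels, cacheEvictions)
}

func main() {
    // The label set here is purely illustrative.
    cacheEvictions.Increment(map[string]string{"operation": "expiry"})
}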

View File

@ -1,19 +1,15 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
/*
main.go provides a simple example of how to use this instrumentation
framework in the context of having something that emits values into
its collectors.
The emitted values correspond to uniform, normal, and exponential
distributions.
*/
// main.go provides a simple example of how to use this instrumentation
// framework in the context of having something that emits values into
// its collectors.
//
// The emitted values correspond to uniform, normal, and exponential
// distributions.
package main
import (
@ -28,34 +24,52 @@ import (
var (
listeningAddress string
barDomain float64
barMean float64
fooDomain float64
// Create a histogram to track fictitious interservice RPC latency for three
// distinct services.
rpc_latency = metrics.NewHistogram(&metrics.HistogramSpecification{
// Four distinct histogram buckets for values:
// - equally-sized,
// - 0 to 50, 50 to 100, 100 to 150, and 150 to 200.
Starts: metrics.EquallySizedBucketsFor(0, 200, 4),
// Create histogram buckets using an accumulating bucket, a bucket that
// holds sample values subject to an eviction policy:
// - 50 elements are allowed per bucket.
// - Once 50 have been reached, the bucket empties 10 elements, averages the
// evicted elements, and re-appends that back to the bucket.
BucketBuilder: metrics.AccumulatingBucketBuilder(metrics.EvictAndReplaceWith(10, maths.Average), 50),
// The histogram reports percentiles 1, 5, 50, 90, and 99.
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
})
rpc_calls = metrics.NewCounter()
// If for whatever reason you are resistant to the idea of having a static
// registry for metrics, which is a really bad idea when using Prometheus-
// enabled library code, you can create your own.
customRegistry = registry.NewRegistry()
)
func init() {
flag.StringVar(&listeningAddress, "listeningAddress", ":8080", "The address to listen to requests on.")
flag.Float64Var(&fooDomain, "random.fooDomain", 200, "The domain for the random parameter foo.")
flag.Float64Var(&barDomain, "random.barDomain", 10, "The domain for the random parameter bar.")
flag.Float64Var(&barMean, "random.barMean", 100, "The mean for the random parameter bar.")
}
func main() {
flag.Parse()
rpc_latency := metrics.NewHistogram(&metrics.HistogramSpecification{
Starts: metrics.EquallySizedBucketsFor(0, 200, 4),
BucketBuilder: metrics.AccumulatingBucketBuilder(metrics.EvictAndReplaceWith(10, maths.Average), 50),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
})
rpc_calls := metrics.NewCounter()
metrics := registry.NewRegistry()
metrics.Register("rpc_latency_microseconds", "RPC latency.", registry.NilLabels, rpc_latency)
metrics.Register("rpc_calls_total", "RPC calls.", registry.NilLabels, rpc_calls)
go func() {
for {
rpc_latency.Add(map[string]string{"service": "foo"}, rand.Float64()*200)
rpc_latency.Add(map[string]string{"service": "foo"}, rand.Float64()*fooDomain)
rpc_calls.Increment(map[string]string{"service": "foo"})
rpc_latency.Add(map[string]string{"service": "bar"}, (rand.NormFloat64()*10.0)+100.0)
rpc_latency.Add(map[string]string{"service": "bar"}, (rand.NormFloat64()*barDomain)+barMean)
rpc_calls.Increment(map[string]string{"service": "bar"})
rpc_latency.Add(map[string]string{"service": "zed"}, rand.ExpFloat64())
@ -65,8 +79,11 @@ func main() {
}
}()
exporter := metrics.YieldExporter()
http.Handle("/metrics.json", exporter)
http.Handle(registry.ExpositionResource, customRegistry.Handler())
http.ListenAndServe(listeningAddress, nil)
}
func init() {
customRegistry.Register("rpc_latency_microseconds", "RPC latency.", registry.NilLabels, rpc_latency)
customRegistry.Register("rpc_calls_total", "RPC calls.", registry.NilLabels, rpc_calls)
}
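
For a quick end-to-end check of the example above, a sketch of fetching the exposition from another process; it assumes the default -listeningAddress of :8080 and the "/metrics.json" path held by registry.ExpositionResource.

package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    // "/metrics.json" matches registry.ExpositionResource; localhost:8080 matches
    // the example's default -listeningAddress flag value.
    resp, err := http.Get("http://localhost:8080/metrics.json")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    io.Copy(os.Stdout, resp.Body) // Dump the JSON exposition to stdout.
}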

View File

@ -1,15 +1,11 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
/*
main.go provides a simple skeletal example of how this instrumentation
framework is registered and invoked.
*/
// main.go provides a simple skeletal example of how this instrumentation
// framework is registered and invoked.
package main
import (
@ -29,8 +25,6 @@ func init() {
func main() {
flag.Parse()
exporter := registry.DefaultRegistry.YieldExporter()
http.Handle("/metrics.json", exporter)
http.Handle(registry.ExpositionResource, registry.DefaultHandler)
http.ListenAndServe(listeningAddress, nil)
}

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package maths
@ -12,9 +10,7 @@ import (
"math"
)
/*
Go's standard library does not offer a factorial function.
*/
// Go's standard library does not offer a factorial function.
func Factorial(of int) int64 {
if of <= 0 {
return 1
@ -29,11 +25,9 @@ func Factorial(of int) int64 {
return result
}
/*
Create calculate the value of a probability density for a given binomial
statistic, where k is the target count of true cases, n is the number of
subjects, and p is the probability.
*/
// Calculate the value of a probability density for a given binomial statistic,
// where k is the target count of true cases, n is the number of subjects, and
// p is the probability.
func BinomialPDF(k, n int, p float64) float64 {
binomialCoefficient := float64(Factorial(n)) / float64(Factorial(k)*Factorial(n-k))
intermediate := math.Pow(p, float64(k)) * math.Pow(1-p, float64(n-k))
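
A brief sketch of these helpers in use; Factorial and BinomialPDF are defined in this file, while the import path mirrors the ones used elsewhere in the repository.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/maths"
)

func main() {
    fmt.Println(maths.Factorial(5)) // 120

    // Probability of exactly 2 successes in 10 trials with p = 0.5.
    fmt.Println(maths.BinomialPDF(2, 10, 0.5)) // about 0.0439
}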

View File

@ -1,26 +1,22 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// The maths package provides a number of mathematical-related helpers:
/*
The maths package provides a number of mathematical-related helpers:
// distributions.go provides basic distribution-generating functions that are
// used primarily in testing contexts.
distributions.go provides basic distribution-generating functions that are
used primarily in testing contexts.
// helpers_for_testing.go provides a testing assistents for this package and its
// dependents.
helpers_for_testing.go provides a testing assistents for this package and its
dependents.
// maths_test.go provides a test suite for all tests in the maths package
// hierarchy. It employs the gocheck framework for test scaffolding.
maths_test.go provides a test suite for all tests in the maths package
hierarchy. It employs the gocheck framework for test scaffolding.
// statistics.go provides basic summary statistics functions for the purpose of
// metrics aggregation.
statistics.go provides basic summary statistics functions for the purpose of
metrics aggregation.
statistics_test.go provides a test complement for the statistics.go module.
*/
// statistics_test.go provides a test complement for the statistics.go module.
package maths

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package maths
@ -18,10 +16,8 @@ type isNaNChecker struct {
*CheckerInfo
}
/*
This piece provides a simple tester for the gocheck testing library to
ascertain if a value is not-a-number.
*/
// This piece provides a simple tester for the gocheck testing library to
// ascertain if a value is not-a-number.
var IsNaN Checker = &isNaNChecker{
&CheckerInfo{Name: "IsNaN", Params: []string{"value"}},
}

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package maths

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package maths
@ -13,15 +11,11 @@ import (
"sort"
)
/*
TODO(mtp): Split this out into a summary statistics file once moving/rolling
averages are calculated.
*/
// TODO(mtp): Split this out into a summary statistics file once moving/rolling
// averages are calculated.
/*
ReductionMethod provides a method for reducing metrics into a given scalar
value.
*/
// ReductionMethod provides a method for reducing metrics into a given scalar
// value.
type ReductionMethod func([]float64) float64
var Average ReductionMethod = func(input []float64) float64 {
@ -40,9 +34,7 @@ var Average ReductionMethod = func(input []float64) float64 {
return sum / count
}
/*
Extract the first modal value.
*/
// Extract the first modal value.
var FirstMode ReductionMethod = func(input []float64) float64 {
valuesToFrequency := map[float64]int64{}
var largestTally int64 = math.MinInt64
@ -63,9 +55,7 @@ var FirstMode ReductionMethod = func(input []float64) float64 {
return largestTallyValue
}
/*
Calculate the percentile by choosing the nearest neighboring value.
*/
// Calculate the percentile by choosing the nearest neighboring value.
func NearestRank(input []float64, percentile float64) float64 {
inputSize := len(input)
@ -88,9 +78,7 @@ func NearestRank(input []float64, percentile float64) float64 {
return copiedInput[preliminaryIndex]
}
/*
Generate a ReductionMethod based off of extracting a given percentile value.
*/
// Generate a ReductionMethod based off of extracting a given percentile value.
func NearestRankReducer(percentile float64) ReductionMethod {
return func(input []float64) float64 {
return NearestRank(input, percentile)
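
A short sketch of the reduction helpers in use; Average, FirstMode, NearestRank, and NearestRankReducer are defined in this file, and the sample values and percentile are arbitrary.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/maths"
)

func main() {
    samples := []float64{1, 2, 2, 3, 9}

    fmt.Println(maths.Average(samples))   // 3.4
    fmt.Println(maths.FirstMode(samples)) // 2 (the first modal value)

    // NearestRank picks the stored sample nearest to the requested percentile rank.
    fmt.Println(maths.NearestRank(samples, 0.5))

    // NearestRankReducer packages the same lookup as a ReductionMethod.
    median := maths.NearestRankReducer(0.5)
    fmt.Println(median(samples))
}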

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package maths

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -27,11 +25,9 @@ type AccumulatingBucket struct {
observations int
}
/*
AccumulatingBucketBuilder is a convenience method for generating a
BucketBuilder that produces AccumatingBucket entries with a certain
behavior set.
*/
// AccumulatingBucketBuilder is a convenience method for generating a
// BucketBuilder that produces AccumatingBucket entries with a certain
// behavior set.
func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) BucketBuilder {
return func() Bucket {
return &AccumulatingBucket{
@ -42,10 +38,8 @@ func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) B
}
}
/*
Add a value to the bucket. Depending on whether the bucket is full, it may
trigger an eviction of older items.
*/
// Add a value to the bucket. Depending on whether the bucket is full, it may
// trigger an eviction of older items.
func (b *AccumulatingBucket) Add(value float64) {
b.mutex.Lock()
defer b.mutex.Unlock()
@ -100,11 +94,9 @@ func (b *AccumulatingBucket) ValueForIndex(index int) float64 {
sort.Float64s(sortedElements)
/*
N.B.(mtp): Interfacing components should not need to comprehend what
eviction and storage container strategies used; therefore,
we adjust this silently.
*/
// N.B.(mtp): Interfacing components should not need to comprehend what
// eviction and storage container strategies used; therefore,
// we adjust this silently.
targetIndex := int(float64(elementCount-1) * (float64(index) / float64(b.observations)))
return sortedElements[targetIndex]
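
Putting the pieces together, a sketch of constructing and filling an accumulating bucket directly; the builder and eviction calls match those used in this commit's examples, while the capacity and sample values are arbitrary.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/maths"
    "github.com/prometheus/client_golang/metrics"
)

func main() {
    // Evict ten elements at a time, folding them into their average, once the
    // bucket holds 50 samples.
    builder := metrics.AccumulatingBucketBuilder(metrics.EvictAndReplaceWith(10, maths.Average), 50)
    bucket := builder()

    for i := 0.0; i < 100; i++ {
        bucket.Add(i)
    }

    fmt.Println(bucket.Observations())   // Lifetime observation count: 100.
    fmt.Println(bucket.ValueForIndex(0)) // The smallest currently retained sample.
}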

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -123,23 +121,18 @@ func (s *S) TestAccumulatingBucketValueForIndex(c *C) {
c.Assert(b.ValueForIndex(i), maths.IsNaN)
}
/*
The bucket has only observed one item and contains now one item.
*/
// The bucket has only observed one item and contains now one item.
b.Add(1.0)
c.Check(b.ValueForIndex(0), Equals, 1.0)
/*
Let's sanity check what occurs if presumably an eviction happened and
we requested an index larger than what is contained.
*/
// Let's sanity check what occurs if presumably an eviction happened and
// we requested an index larger than what is contained.
c.Check(b.ValueForIndex(1), Equals, 1.0)
for i := 2.0; i <= 100; i += 1 {
b.Add(i)
/*
TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
*/
// TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
time.Sleep(1 * time.Millisecond)
}
@ -149,17 +142,13 @@ func (s *S) TestAccumulatingBucketValueForIndex(c *C) {
for i := 101.0; i <= 150; i += 1 {
b.Add(i)
/*
TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
*/
// TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
time.Sleep(1 * time.Millisecond)
}
/*
The bucket's capacity has been exceeded by inputs at this point;
consequently, we search for a given element by percentage offset
therein.
*/
// The bucket's capacity has been exceeded by inputs at this point;
// consequently, we search for a given element by percentage offset
// therein.
c.Check(b.ValueForIndex(0), Equals, 51.0)
c.Check(b.ValueForIndex(50), Equals, 84.0)
c.Check(b.ValueForIndex(99), Equals, 116.0)

View File

@ -1,49 +1,33 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
/*
bucket.go provides fundamental interface expectations for various bucket
types.
*/
// bucket.go provides fundamental interface expectations for various bucket
// types.
package metrics
/*
The Histogram class and associated types build buckets on their own.
*/
// The Histogram class and associated types build buckets on their own.
type BucketBuilder func() Bucket
/*
This defines the base Bucket type. The exact behaviors of the bucket are
at the whim of the implementor.
A Bucket is used as a container by Histogram as a collection for its
accumulated samples.
*/
// This defines the base Bucket type. The exact behaviors of the bucket are
// at the whim of the implementor.
//
// A Bucket is used as a container by Histogram as a collection for its
// accumulated samples.
type Bucket interface {
/*
Add a value to the bucket.
*/
// Add a value to the bucket.
Add(value float64)
/*
Provide a count of observations throughout the bucket's lifetime.
*/
// Provide a count of observations throughout the bucket's lifetime.
Observations() int
// Reset is responsible for resetting this bucket back to a pristine state.
Reset()
/*
Provide a humanized representation hereof.
*/
// Provide a humanized representation hereof.
String() string
/*
Provide the value from the given in-memory value cache or an estimate
thereof for the given index. The consumer of the bucket's data makes
no assumptions about the underlying storage mechanisms that the bucket
employs.
*/
// Provide the value from the given in-memory value cache or an estimate
// thereof for the given index. The consumer of the bucket's data makes
// no assumptions about the underlying storage mechanisms that the bucket
// employs.
ValueForIndex(index int) float64
}
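
To make the contract concrete, a deliberately naive sketch of a type satisfying Bucket by retaining every sample verbatim; this is purely illustrative and not part of the library.

package metrics

import "fmt"

// naiveBucket is a hypothetical Bucket that simply retains all samples in order.
type naiveBucket struct {
    samples      []float64
    observations int
}

var _ Bucket = &naiveBucket{} // Compile-time check that the interface is satisfied.

func (b *naiveBucket) Add(value float64) {
    b.samples = append(b.samples, value)
    b.observations++
}

func (b *naiveBucket) Observations() int { return b.observations }

func (b *naiveBucket) Reset() { b.samples = nil }

func (b *naiveBucket) String() string {
    return fmt.Sprintf("[naiveBucket with %d samples]", len(b.samples))
}

func (b *naiveBucket) ValueForIndex(index int) float64 {
    if len(b.samples) == 0 {
        return 0 // A real implementation would likely report NaN here.
    }
    if index >= len(b.samples) {
        index = len(b.samples) - 1
    }
    return b.samples[index]
}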

View File

@ -1,14 +1,10 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
/*
constants.go provides package-level constants for metrics.
*/
// constants.go provides package-level constants for metrics.
package metrics
const (

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -27,20 +25,20 @@ type Counter interface {
String() string
}
type counterValue struct {
type counterVector struct {
labels map[string]string
value float64
}
func NewCounter() Counter {
return &counter{
values: map[string]*counterValue{},
values: map[string]*counterVector{},
}
}
type counter struct {
mutex sync.RWMutex
values map[string]*counterValue
values map[string]*counterVector
}
func (metric *counter) Set(labels map[string]string, value float64) float64 {
@ -55,7 +53,7 @@ func (metric *counter) Set(labels map[string]string, value float64) float64 {
if original, ok := metric.values[signature]; ok {
original.value = value
} else {
metric.values[signature] = &counterValue{
metric.values[signature] = &counterVector{
labels: labels,
value: value,
}
@ -97,7 +95,7 @@ func (metric *counter) IncrementBy(labels map[string]string, value float64) floa
if original, ok := metric.values[signature]; ok {
original.value += value
} else {
metric.values[signature] = &counterValue{
metric.values[signature] = &counterVector{
labels: labels,
value: value,
}
@ -122,7 +120,7 @@ func (metric *counter) DecrementBy(labels map[string]string, value float64) floa
if original, ok := metric.values[signature]; ok {
original.value -= value
} else {
metric.values[signature] = &counterValue{
metric.values[signature] = &counterVector{
labels: labels,
value: -1 * value,
}
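
A sketch of the counter in use; Increment appears in this commit's examples, and Set, IncrementBy, and DecrementBy are implemented above (this assumes they are exposed on the Counter interface returned by NewCounter). The label set is hypothetical.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/metrics"
)

func main() {
    requests := metrics.NewCounter()

    labels := map[string]string{"service": "foo"} // Hypothetical label set.

    requests.Increment(labels)       // 0 -> 1
    requests.IncrementBy(labels, 10) // 1 -> 11
    requests.DecrementBy(labels, 1)  // 11 -> 10
    requests.Set(labels, 42)         // Overwrite the stored value for this label set.

    fmt.Println(requests)
}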

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics

View File

@ -1,51 +1,48 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// The metrics package provides general descriptors for the concept of
// exportable metrics.
/*
The metrics package provides general descriptors for the concept of exportable
metrics.
// accumulating_bucket.go provides a histogram bucket type that accumulates
// elements until a given capacity and enacts a given eviction policy upon
// such a condition.
accumulating_bucket.go provides a histogram bucket type that accumulates
elements until a given capacity and enacts a given eviction policy upon
such a condition.
// accumulating_bucket_test.go provides a test complement for the
// accumulating_bucket_go module.
accumulating_bucket_test.go provides a test complement for the
accumulating_bucket_go module.
// eviction.go provides several histogram bucket eviction strategies.
eviction.go provides several histogram bucket eviction strategies.
// eviction_test.go provides a test complement for the eviction.go module.
eviction_test.go provides a test complement for the eviction.go module.
// gauge.go provides a scalar metric that one can monitor. It is useful for
// certain cases, such as instantaneous temperature.
gauge.go provides a scalar metric that one can monitor. It is useful for
certain cases, such as instantaneous temperature.
// gauge_test.go provides a test complement for the gauge.go module.
gauge_test.go provides a test complement for the gauge.go module.
// histogram.go provides a basic histogram metric, which can accumulate scalar
// event values or samples. The underlying histogram implementation is designed
// to be performant in that it accepts tolerable inaccuracies.
histogram.go provides a basic histogram metric, which can accumulate scalar
event values or samples. The underlying histogram implementation is designed
to be performant in that it accepts tolerable inaccuracies.
// histogram_test.go provides a test complement for the histogram.go module.
histogram_test.go provides a test complement for the histogram.go module.
// metric.go provides fundamental interface expectations for the various
// metrics.
metric.go provides fundamental interface expectations for the various metrics.
// metrics_test.go provides a test suite for all tests in the metrics package
// hierarchy. It employs the gocheck framework for test scaffolding.
metrics_test.go provides a test suite for all tests in the metrics package
hierarchy. It employs the gocheck framework for test scaffolding.
// tallying_bucket.go provides a histogram bucket type that aggregates tallies
// of events that fall into its ranges versus a summary of the values
// themselves.
tallying_bucket.go provides a histogram bucket type that aggregates tallies
of events that fall into its ranges versus a summary of the values
themselves.
// tallying_bucket_test.go provides a test complement for the
// tallying_bucket.go module.
tallying_bucket_test.go provides a test complement for the
tallying_bucket.go module.
// timer.go provides a scalar metric that times how long a given event takes.
timer.go provides a scalar metric that times how long a given event takes.
timer_test.go provides a test complement for the timer.go module.
*/
// timer_test.go provides a test complement for the timer.go module.
package metrics

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -15,16 +13,12 @@ import (
"time"
)
/*
EvictionPolicy implements some sort of garbage collection methodology for
an underlying heap.Interface. This is presently only used for
AccumulatingBucket.
*/
// EvictionPolicy implements some sort of garbage collection methodology for
// an underlying heap.Interface. This is presently only used for
// AccumulatingBucket.
type EvictionPolicy func(h heap.Interface)
/*
As the name implies, this evicts the oldest x objects from the heap.
*/
// As the name implies, this evicts the oldest x objects from the heap.
func EvictOldest(count int) EvictionPolicy {
return func(h heap.Interface) {
for i := 0; i < count; i++ {
@ -33,10 +27,8 @@ func EvictOldest(count int) EvictionPolicy {
}
}
/*
This factory produces an EvictionPolicy that applies some standardized
reduction methodology on the to-be-terminated values.
*/
// This factory produces an EvictionPolicy that applies some standardized
// reduction methodology on the to-be-terminated values.
func EvictAndReplaceWith(count int, reducer maths.ReductionMethod) EvictionPolicy {
return func(h heap.Interface) {
oldValues := make([]float64, count)
@ -49,9 +41,8 @@ func EvictAndReplaceWith(count int, reducer maths.ReductionMethod) EvictionPolic
heap.Push(h, &utility.Item{
Value: reduced,
/*
TODO(mtp): Parameterize the priority generation since these tools are useful.
*/
// TODO(mtp): Parameterize the priority generation since these tools are
// useful.
Priority: -1 * time.Now().UnixNano(),
})
}
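
Both policies plug into the same builder; for instance, a bucket that simply drops its ten oldest samples on eviction could be assembled as in this sketch (the capacity is arbitrary).

package main

import "github.com/prometheus/client_golang/metrics"

func main() {
    // Drop the ten oldest samples whenever the bucket reaches its capacity of 50,
    // rather than folding them into an average as EvictAndReplaceWith does.
    builder := metrics.AccumulatingBucketBuilder(metrics.EvictOldest(10), 50)
    bucket := builder()
    bucket.Add(1.0)
}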

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -14,12 +12,10 @@ import (
"sync"
)
/*
A gauge metric merely provides an instantaneous representation of a scalar
value or an accumulation. For instance, if one wants to expose the current
temperature or the hitherto bandwidth used, this would be the metric for such
circumstances.
*/
// A gauge metric merely provides an instantaneous representation of a scalar
// value or an accumulation. For instance, if one wants to expose the current
// temperature or the hitherto bandwidth used, this would be the metric for such
// circumstances.
type Gauge interface {
AsMarshallable() map[string]interface{}
ResetAll()
@ -27,20 +23,20 @@ type Gauge interface {
String() string
}
type gaugeValue struct {
type gaugeVector struct {
labels map[string]string
value float64
}
func NewGauge() Gauge {
return &gauge{
values: map[string]*gaugeValue{},
values: map[string]*gaugeVector{},
}
}
type gauge struct {
mutex sync.RWMutex
values map[string]*gaugeValue
values map[string]*gaugeVector
}
func (metric *gauge) String() string {
@ -65,7 +61,7 @@ func (metric *gauge) Set(labels map[string]string, value float64) float64 {
if original, ok := metric.values[signature]; ok {
original.value = value
} else {
metric.values[signature] = &gaugeValue{
metric.values[signature] = &gaugeVector{
labels: labels,
value: value,
}
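
A brief sketch of the gauge in use, assuming Set is exposed on the Gauge interface returned by NewGauge; the label set and the reading are illustrative only.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/metrics"
)

func main() {
    temperature := metrics.NewGauge()

    // Record the current reading for a hypothetical sensor.
    temperature.Set(map[string]string{"location": "server_room"}, 21.5)

    fmt.Println(temperature)
}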

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics

View File

@ -1,27 +1,24 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
import (
"bytes"
"fmt"
"github.com/prometheus/client_golang/maths"
"github.com/prometheus/client_golang/utility"
"math"
"strconv"
"sync"
)
/*
This generates count-buckets of equal size distributed along the open
interval of lower to upper. For instance, {lower=0, upper=10, count=5}
yields the following: [0, 2, 4, 6, 8].
*/
// This generates count-buckets of equal size distributed along the open
// interval of lower to upper. For instance, {lower=0, upper=10, count=5}
// yields the following: [0, 2, 4, 6, 8].
func EquallySizedBucketsFor(lower, upper float64, count int) []float64 {
buckets := make([]float64, count)
@ -35,10 +32,8 @@ func EquallySizedBucketsFor(lower, upper float64, count int) []float64 {
return buckets
}
/*
This generates log2-sized buckets spanning from lower to upper inclusively
as well as values beyond it.
*/
// This generates log2-sized buckets spanning from lower to upper inclusively
// as well as values beyond it.
func LogarithmicSizedBucketsFor(lower, upper float64) []float64 {
bucketCount := int(math.Ceil(math.Log2(upper)))
@ -51,9 +46,7 @@ func LogarithmicSizedBucketsFor(lower, upper float64) []float64 {
return buckets
}
/*
A HistogramSpecification defines how a Histogram is to be built.
*/
// A HistogramSpecification defines how a Histogram is to be built.
type HistogramSpecification struct {
BucketBuilder BucketBuilder
ReportablePercentiles []float64
@ -67,39 +60,30 @@ type Histogram interface {
String() string
}
/*
The histogram is an accumulator for samples. It merely routes into which
to bucket to capture an event and provides a percentile calculation
mechanism.
*/
// The histogram is an accumulator for samples. It merely routes into which
// bucket to capture an event and provides a percentile calculation mechanism.
type histogram struct {
bucketMaker BucketBuilder
/*
This represents the open interval's start at which values shall be added to
the bucket. The interval continues until the beginning of the next bucket
exclusive or positive infinity.
N.B.
- bucketStarts should be sorted in ascending order;
- len(bucketStarts) must be equivalent to len(buckets);
- The index of a given bucketStarts' element is presumed to
correspond to the appropriate element in buckets.
*/
// This represents the open interval's start at which values shall be added to
// the bucket. The interval continues until the beginning of the next bucket
// exclusive or positive infinity.
//
// N.B.
// - bucketStarts should be sorted in ascending order;
// - len(bucketStarts) must be equivalent to len(buckets);
// - The index of a given bucketStarts' element is presumed to
// correspond to the appropriate element in buckets.
bucketStarts []float64
mutex sync.RWMutex
/*
These are the buckets that capture samples as they are emitted to the
histogram. Please consult the reference interface and its implements for
further details about behavior expectations.
*/
values map[string]*histogramValue
/*
These are the percentile values that will be reported on marshalling.
*/
// These are the buckets that capture samples as they are emitted to the
// histogram. Please consult the reference interface and its implements for
// further details about behavior expectations.
values map[string]*histogramVector
// These are the percentile values that will be reported on marshalling.
reportablePercentiles []float64
}
type histogramValue struct {
type histogramVector struct {
buckets []Bucket
labels map[string]string
}
@ -113,12 +97,12 @@ func (h *histogram) Add(labels map[string]string, value float64) {
}
signature := utility.LabelsToSignature(labels)
var histogram *histogramValue = nil
var histogram *histogramVector = nil
if original, ok := h.values[signature]; ok {
histogram = original
} else {
bucketCount := len(h.bucketStarts)
histogram = &histogramValue{
histogram = &histogramVector{
buckets: make([]Bucket, bucketCount),
labels: labels,
}
@ -161,9 +145,7 @@ func (h *histogram) String() string {
return stringBuffer.String()
}
/*
Determine the number of previous observations up to a given index.
*/
// Determine the number of previous observations up to a given index.
func previousCumulativeObservations(cumulativeObservations []int, bucketIndex int) int {
if bucketIndex == 0 {
return 0
@ -172,16 +154,12 @@ func previousCumulativeObservations(cumulativeObservations []int, bucketIndex in
return cumulativeObservations[bucketIndex-1]
}
/*
Determine the index for an element given a percentage of length.
*/
// Determine the index for an element given a percentage of length.
func prospectiveIndexForPercentile(percentile float64, totalObservations int) int {
return int(percentile * float64(totalObservations-1))
}
/*
Determine the next bucket element when interim bucket intervals may be empty.
*/
// Determine the next bucket element when interim bucket intervals may be empty.
func (h *histogram) nextNonEmptyBucketElement(signature string, currentIndex, bucketCount int, observationsByBucket []int) (*Bucket, int) {
for i := currentIndex; i < bucketCount; i++ {
if observationsByBucket[i] == 0 {
@ -196,24 +174,18 @@ func (h *histogram) nextNonEmptyBucketElement(signature string, currentIndex, bu
panic("Illegal Condition: There were no remaining buckets to provide a value.")
}
/*
Find what bucket and element index contains a given percentile value.
If a percentile is requested that results in a corresponding index that is no
longer contained by the bucket, the index of the last item is returned. This
may occur if the underlying bucket catalogs values and employs an eviction
strategy.
*/
// Find what bucket and element index contains a given percentile value.
// If a percentile is requested that results in a corresponding index that is no
// longer contained by the bucket, the index of the last item is returned. This
// may occur if the underlying bucket catalogs values and employs an eviction
// strategy.
func (h *histogram) bucketForPercentile(signature string, percentile float64) (*Bucket, int) {
bucketCount := len(h.bucketStarts)
/*
This captures the quantity of samples in a given bucket's range.
*/
// This captures the quantity of samples in a given bucket's range.
observationsByBucket := make([]int, bucketCount)
/*
This captures the cumulative quantity of observations from all preceding
buckets up and to the end of this bucket.
*/
// This captures the cumulative quantity of observations from all preceding
// buckets up and to the end of this bucket.
cumulativeObservationsByBucket := make([]int, bucketCount)
totalObservations := 0
@ -227,11 +199,9 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
cumulativeObservationsByBucket[i] = totalObservations
}
/*
This captures the index offset where the given percentile value would be
were all submitted samples stored and never down-/re-sampled nor deleted
and housed in a singular array.
*/
// This captures the index offset where the given percentile value would be
// were all submitted samples stored and never down-/re-sampled nor deleted
// and housed in a singular array.
prospectiveIndex := prospectiveIndexForPercentile(percentile, totalObservations)
for i, cumulativeObservation := range cumulativeObservationsByBucket {
@ -239,21 +209,15 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
continue
}
/*
Find the bucket that contains the given index.
*/
// Find the bucket that contains the given index.
if cumulativeObservation >= prospectiveIndex {
var subIndex int
/*
This calculates the index within the current bucket where the given
percentile may be found.
*/
// This calculates the index within the current bucket where the given
// percentile may be found.
subIndex = prospectiveIndex - previousCumulativeObservations(cumulativeObservationsByBucket, i)
/*
Sometimes the index may be the last item, in which case we need to
take this into account.
*/
// Sometimes the index may be the last item, in which case we need to
// take this into account.
if observationsByBucket[i] == subIndex {
return h.nextNonEmptyBucketElement(signature, i+1, bucketCount, observationsByBucket)
}
@ -265,11 +229,9 @@ func (h *histogram) bucketForPercentile(signature string, percentile float64) (*
return &histogram.buckets[0], 0
}
/*
Return the histogram's estimate of the value for a given percentile of
collected samples. The requested percentile is expected to be a real
value within (0, 1.0].
*/
// Return the histogram's estimate of the value for a given percentile of
// collected samples. The requested percentile is expected to be a real
// value within (0, 1.0].
func (h *histogram) percentile(signature string, percentile float64) float64 {
bucket, index := h.bucketForPercentile(signature, percentile)
@ -318,16 +280,26 @@ func (h *histogram) ResetAll() {
}
}
/*
Produce a histogram from a given specification.
*/
// Produce a histogram from a given specification.
func NewHistogram(specification *HistogramSpecification) Histogram {
metric := &histogram{
bucketMaker: specification.BucketBuilder,
bucketStarts: specification.Starts,
reportablePercentiles: specification.ReportablePercentiles,
values: map[string]*histogramValue{},
values: map[string]*histogramVector{},
}
return metric
}
// Furnish a Histogram with unsensible default values and behaviors that is
// strictly useful for prototyping purposes.
func NewDefaultHistogram() Histogram {
return NewHistogram(
&HistogramSpecification{
Starts: LogarithmicSizedBucketsFor(0, 4096),
BucketBuilder: AccumulatingBucketBuilder(EvictAndReplaceWith(10, maths.Average), 50),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
},
)
}
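
Tying the above together, a sketch that feeds the new NewDefaultHistogram a few observations; the label set and values are arbitrary.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/metrics"
)

func main() {
    // NewDefaultHistogram wires up logarithmic buckets over [0, 4096], an
    // averaging eviction policy, and stock reportable percentiles, as defined above.
    h := metrics.NewDefaultHistogram()

    labels := map[string]string{"service": "foo"} // Hypothetical label set.
    for _, v := range []float64{3, 15, 250, 1024} {
        h.Add(labels, v)
    }

    fmt.Println(h) // The histogram's String() gives a human-readable summary.
}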

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -20,17 +18,13 @@ const (
upperThird = 2.0 * lowerThird
)
/*
A TallyingIndexEstimator is responsible for estimating the value of index for
a given TallyingBucket, even though a TallyingBucket does not possess a
collection of samples. There are a few strategies listed below for how
this value should be approximated.
*/
// A TallyingIndexEstimator is responsible for estimating the value of index for
// a given TallyingBucket, even though a TallyingBucket does not possess a
// collection of samples. There are a few strategies listed below for how
// this value should be approximated.
type TallyingIndexEstimator func(minimum, maximum float64, index, observations int) float64
/*
Provide a filter for handling empty buckets.
*/
// Provide a filter for handling empty buckets.
func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator {
return func(minimum, maximum float64, index, observations int) float64 {
if observations == 0 {
@ -41,31 +35,23 @@ func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator {
}
}
/*
Report the smallest observed value in the bucket.
*/
// Report the smallest observed value in the bucket.
var Minimum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return minimum
})
/*
Report the largest observed value in the bucket.
*/
// Report the largest observed value in the bucket.
var Maximum TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return maximum
})
/*
Report the average of the extrema.
*/
// Report the average of the extrema.
var Average TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return maths.Average([]float64{minimum, maximum})
})
/*
Report the minimum value of the index is in the lower-third of observations,
the average if in the middle-third, and the maximum if in the largest third.
*/
// Report the minimum value of the index is in the lower-third of observations,
// the average if in the middle-third, and the maximum if in the largest third.
var Uniform TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64, index, observations int) float64 {
if observations == 1 {
return minimum
@ -82,11 +68,9 @@ var Uniform TallyingIndexEstimator = emptyFilter(func(minimum, maximum float64,
return maths.Average([]float64{minimum, maximum})
})
/*
A TallyingBucket is a Bucket that tallies when an object is added to it.
Upon insertion, an object is compared against collected extrema and noted
as a new minimum or maximum if appropriate.
*/
// A TallyingBucket is a Bucket that tallies when an object is added to it.
// Upon insertion, an object is compared against collected extrema and noted
// as a new minimum or maximum if appropriate.
type TallyingBucket struct {
estimator TallyingIndexEstimator
largestObserved float64
@ -140,9 +124,7 @@ func (b *TallyingBucket) Reset() {
b.smallestObserved = math.MaxFloat64
}
/*
Produce a TallyingBucket with sane defaults.
*/
// Produce a TallyingBucket with sane defaults.
func DefaultTallyingBucket() TallyingBucket {
return TallyingBucket{
estimator: Minimum,
@ -159,10 +141,8 @@ func CustomTallyingBucket(estimator TallyingIndexEstimator) TallyingBucket {
}
}
/*
This is used strictly for testing.
*/
func TallyingBucketBuilder() Bucket {
// This is used strictly for testing.
func tallyingBucketBuilder() Bucket {
b := DefaultTallyingBucket()
return &b
}
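
A short sketch of a tallying bucket on its own; DefaultTallyingBucket and the Bucket methods come from this file, and the sample values are arbitrary.

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/metrics"
)

func main() {
    // A tallying bucket keeps only extrema and a tally, not the samples themselves.
    bucket := metrics.DefaultTallyingBucket()

    for _, v := range []float64{2, 8, 5} {
        bucket.Add(v)
    }

    fmt.Println(bucket.Observations())   // 3
    fmt.Println(bucket.ValueForIndex(0)) // Estimated via the bucket's index estimator.
}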

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -40,7 +38,7 @@ func (s *S) TestTallyingPercentilesEstimatorUniform(c *C) {
}
func (s *S) TestTallyingBucketBuilder(c *C) {
var bucket Bucket = TallyingBucketBuilder()
var bucket Bucket = tallyingBucketBuilder()
c.Assert(bucket, Not(IsNil))
}

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics
@ -12,24 +10,18 @@ import (
"time"
)
/*
This callback is called upon the completion of the timer—i.e., when it stops.
*/
// This callback is called upon the completion of the timer—i.e., when it stops.
type CompletionCallback func(duration time.Duration)
/*
This is meant to capture a function that a StopWatch can call for purposes
of instrumentation.
*/
// This is meant to capture a function that a StopWatch can call for purposes
// of instrumentation.
type InstrumentableCall func()
/*
StopWatch is the structure that captures instrumentation for durations.
// StopWatch is the structure that captures instrumentation for durations.
N.B.(mtp): A major limitation hereof is that the StopWatch protocol cannot
retain instrumentation if a panic percolates within the context that is
being measured.
*/
// N.B.(mtp): A major limitation hereof is that the StopWatch protocol cannot
// retain instrumentation if a panic percolates within the context that is
// being measured.
type StopWatch interface {
Stop() time.Duration
}
@ -40,9 +32,7 @@ type stopWatch struct {
startTime time.Time
}
/*
Return a new StopWatch that is ready for instrumentation.
*/
// Return a new StopWatch that is ready for instrumentation.
func Start(onCompletion CompletionCallback) StopWatch {
return &stopWatch{
onCompletion: onCompletion,
@ -50,10 +40,8 @@ func Start(onCompletion CompletionCallback) StopWatch {
}
}
/*
Stop the StopWatch returning the elapsed duration of its lifetime while
firing an optional CompletionCallback in the background.
*/
// Stop the StopWatch returning the elapsed duration of its lifetime while
// firing an optional CompletionCallback in the background.
func (s *stopWatch) Stop() time.Duration {
s.endTime = time.Now()
duration := s.endTime.Sub(s.startTime)
@ -65,10 +53,8 @@ func (s *stopWatch) Stop() time.Duration {
return duration
}
/*
Provide a quick way of instrumenting a InstrumentableCall and emitting its
duration.
*/
// Provide a quick way of instrumenting a InstrumentableCall and emitting its
// duration.
func InstrumentCall(instrumentable InstrumentableCall, onCompletion CompletionCallback) time.Duration {
s := Start(onCompletion)
instrumentable()
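
In practice the stopwatch reads like the following sketch; Start, Stop, and InstrumentCall are defined in this file, and the instrumented work is a stand-in.

package main

import (
    "fmt"
    "time"

    "github.com/prometheus/client_golang/metrics"
)

func main() {
    // Explicit start/stop around a block of work; the callback receives the
    // elapsed duration when Stop is called.
    watch := metrics.Start(func(d time.Duration) { fmt.Println("callback:", d) })
    time.Sleep(10 * time.Millisecond)
    fmt.Println("elapsed:", watch.Stop())

    // InstrumentCall wraps a function and returns how long it took.
    elapsed := metrics.InstrumentCall(func() { time.Sleep(5 * time.Millisecond) },
        func(d time.Duration) { fmt.Println("callback:", d) })
    fmt.Println("elapsed:", elapsed)

    time.Sleep(time.Millisecond) // Give background callbacks a moment to fire.
}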

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package metrics

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style license that can be found in
the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package registry
@ -22,7 +20,6 @@ import (
"sort"
"strings"
"sync"
"time"
)
const (
@ -42,18 +39,10 @@ var (
abortOnMisuse bool
debugRegistration bool
useAggressiveSanityChecks bool
DefaultHandler = DefaultRegistry.Handler()
)
/*
This callback accumulates the microsecond duration of the reporting framework's
overhead such that it can be reported.
*/
var requestLatencyAccumulator metrics.CompletionCallback = func(duration time.Duration) {
microseconds := float64(duration / time.Microsecond)
requestLatency.Add(nil, microseconds)
}
// container represents a top-level registered metric that encompasses its
// static metadata.
type container struct {
@ -63,36 +52,29 @@ type container struct {
name string
}
/*
Registry is, as the name implies, a registrar where metrics are listed.
In most situations, using DefaultRegistry is sufficient versus creating one's
own.
*/
// Registry is, as the name implies, a registrar where metrics are listed.
//
// In most situations, using DefaultRegistry is sufficient versus creating one's
// own.
type Registry struct {
mutex sync.RWMutex
signatureContainers map[string]container
}
/*
This builds a new metric registry. It is not needed in the majority of
cases.
*/
// This builds a new metric registry. It is not needed in the majority of
// cases.
func NewRegistry() *Registry {
return &Registry{
signatureContainers: make(map[string]container),
}
}
/*
This is the default registry with which Metric objects are associated. It
is primarily a read-only object after server instantiation.
*/
// This is the default registry with which Metric objects are associated. It
// is primarily a read-only object after server instantiation.
//
var DefaultRegistry = NewRegistry()
/*
Associate a Metric with the DefaultRegistry.
*/
// Associate a Metric with the DefaultRegistry.
func Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) error {
return DefaultRegistry.Register(name, docstring, baseLabels, metric)
}
@ -155,9 +137,7 @@ func (r *Registry) isValidCandidate(name string, baseLabels map[string]string) (
return
}
/*
Register a metric with a given name. Name should be globally unique.
*/
// Register a metric with a given name. Name should be globally unique.
func (r *Registry) Register(name, docstring string, baseLabels map[string]string, metric metrics.Metric) (err error) {
r.mutex.Lock()
defer r.mutex.Unlock()
@ -184,6 +164,9 @@ func (r *Registry) Register(name, docstring string, baseLabels map[string]string
// YieldBasicAuthExporter creates a http.HandlerFunc that is protected by HTTP's
// basic authentication.
func (register *Registry) YieldBasicAuthExporter(username, password string) http.HandlerFunc {
// XXX: Work with Daniel to get this removed from the library, as it is really
// superfluous and can be much more elegantly accomplished via
// delegation.
exporter := register.YieldExporter()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -277,11 +260,15 @@ func decorateWriter(request *http.Request, writer http.ResponseWriter) io.Writer
return gziper
}
/*
Create a http.HandlerFunc that is tied to r Registry such that requests
against it generate a representation of the housed metrics.
*/
func (registry *Registry) YieldExporter() http.HandlerFunc {
log.Println("Registry.YieldExporter is deprecated in favor of Registry.Handler.")
return registry.Handler()
}
// Create a http.HandlerFunc that is tied to a Registry such that requests
// against it generate a representation of the housed metrics.
func (registry *Registry) Handler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var instrumentable metrics.InstrumentableCall = func() {
requestCount.Increment(nil)
@ -294,7 +281,6 @@ func (registry *Registry) YieldExporter() http.HandlerFunc {
writer := decorateWriter(r, w)
// TODO(matt): Migrate to ioutil.NopCloser.
if closer, ok := writer.(io.Closer); ok {
defer closer.Close()
}
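
A sketch of wiring the default registry's handler into an HTTP server, including the basic-auth variant defined above; the credentials, extra path prefix, and listen address are placeholders.

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/registry"
)

func main() {
    // Unauthenticated exposition on the canonical resource path.
    http.Handle(registry.ExpositionResource, registry.DefaultHandler)

    // The same metrics behind HTTP basic authentication (placeholder credentials).
    http.Handle("/protected"+registry.ExpositionResource,
        registry.DefaultRegistry.YieldBasicAuthExporter("scrape_user", "scrape_password"))

    http.ListenAndServe(":8080", nil)
}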

View File

@ -1,8 +1,8 @@
// Copyright (c) 2013, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found in
// the LICENSE file.
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package registry

View File

@ -1,10 +1,9 @@
/*
Copyright (c) 2013, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style license that can be found in
the LICENSE file.
*/
// Copyright (c) 2013, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
//
package registry
@ -14,11 +13,9 @@ import (
"time"
)
/*
Boilerplate metrics about the metrics reporting subservice. These are only
exposed if the DefaultRegistry's exporter is hooked into the HTTP request
handler.
*/
// Boilerplate metrics about the metrics reporting subservice. These are only
// exposed if the DefaultRegistry's exporter is hooked into the HTTP request
// handler.
var (
marshalErrorCount = metrics.NewCounter()
dumpErrorCount = metrics.NewCounter()
@ -42,3 +39,11 @@ func init() {
DefaultRegistry.Register("instance_start_time_seconds", "The time at which the current instance started (UTC).", NilLabels, startTime)
}
// This callback accumulates the microsecond duration of the reporting
// framework's overhead such that it can be reported.
var requestLatencyAccumulator metrics.CompletionCallback = func(duration time.Duration) {
microseconds := float64(duration / time.Microsecond)
requestLatency.Add(nil, microseconds)
}

View File

@ -1,24 +1,20 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
The utility package provides general purpose helpers to assist with this
library.
// The utility package provides general purpose helpers to assist with this
// library.
priority_queue.go provides a simple priority queue.
// priority_queue.go provides a simple priority queue.
priority_queue_test.go provides a test complement for the priority_queue.go
module.
// priority_queue_test.go provides a test complement for the priority_queue.go
// module.
test_helper.go provides a testing assistents for this package and its
dependents.
// test_helper.go provides a testing assistents for this package and its
// dependents.
utility_test.go provides a test suite for all tests in the utility package
hierarchy. It employs the gocheck framework for test scaffolding.
*/
// utility_test.go provides a test suite for all tests in the utility package
// hierarchy. It employs the gocheck framework for test scaffolding.
package documentation

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utility

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utility

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utility

View File

@ -1,10 +1,8 @@
/*
Copyright (c) 2012, Matt T. Proud
All rights reserved.
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file.
*/
// Copyright (c) 2012, Matt T. Proud
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utility