Complete rewrite of the exposition library.

This rewrite had many back-and-forths. In my git repository, it
consists of 35 commits which I cannot group or merge into reasonable
review buckets. Gerrit breaks fundamental git semantics, so I have to
squash the 35 commits into one for the review.

I'll push this not with refs/for/master, but with refs/for/next so
that we can transition after submission in a controlled fashion.

For the review, I recommend starting with the godoc and in
particular the many examples. After that, continue with a detailed
line-by-line review. (The big picture is hopefully as expected after
wrapping up the earlier discussion.)

Change-Id: Ib38cc46493a5139ca29d84020650929d94cac850
Bjoern Rabenstein 2014-05-07 20:08:33 +02:00
parent d3ebb29141
commit 5d40912fd2
83 changed files with 4704 additions and 4128 deletions

View File

@@ -1,7 +1,7 @@
language: go
go:
- 1.1
- 1.2.1
script:
- make -f Makefile

LICENSE
View File

@@ -1,22 +1,201 @@
Copyright (c) 2013, Prometheus Team
All rights reserved.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
1. Definitions.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2013 Prometheus Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -14,19 +14,29 @@
OS = $(shell uname)
ARCH = $(shell uname -m)
MAC_OS_X_VERSION ?= 10.8
BUILD_PATH = $(PWD)/.build
export GO_VERSION = 1.1
export GO_VERSION = 1.2.1
export GOOS = $(subst Darwin,darwin,$(subst Linux,linux,$(OS)))
ifeq ($(GOOS),darwin)
RELEASE_SUFFIX ?= -osx$(MAC_OS_X_VERSION)
else
RELEASE_SUFFIX ?=
endif
export GOARCH = $(subst x86_64,amd64,$(ARCH))
export GOPKG = go$(GO_VERSION).$(GOOS)-$(GOARCH).tar.gz
export GOPKG = go$(GO_VERSION).$(GOOS)-$(GOARCH)$(RELEASE_SUFFIX).tar.gz
export GOROOT = $(BUILD_PATH)/root/go
export GOPATH = $(BUILD_PATH)/root/gopath
export GOCC = $(GOROOT)/bin/go
export GOCC = $(GOROOT)/bin/go
export TMPDIR = /tmp
export GOENV = TMPDIR=$(TMPDIR) GOROOT=$(GOROOT) GOPATH=$(GOPATH)
export GO = $(GOENV) $(GOCC)
export GO = $(GOENV) $(GOCC)
export GOFMT = $(GOROOT)/bin/gofmt
export GODOC = $(GOENV) $(GOROOT)/bin/godoc
BENCHMARK_FILTER ?= .
@@ -54,11 +64,10 @@ $(GOCC): $(BUILD_PATH)/root $(BUILD_PATH)/cache/$(GOPKG)
touch $@
build: source_path dependencies
$(MAKE) -C prometheus build
$(MAKE) -C examples build
$(GO) build ./...
dependencies: source_path $(GOCC)
$(GO) get github.com/matttproud/gocheck
$(GO) get -d -t ./...
test: build
$(GO) test ./...
@@ -67,14 +76,13 @@ benchmark: build
$(GO) test -benchmem -test.bench="$(BENCHMARK_FILTER)" ./...
advice: test
$(MAKE) -C prometheus advice
$(MAKE) -C examples advice
$(GO) vet ./...
format:
find . -iname '*.go' | grep -v './.build/' | xargs -n1 -P1 $(GOFMT) -w -s=true
search_index:
godoc -index -write_index -index_files='search_index'
$(GODOC) -index -write_index -index_files='search_index'
# source_path is responsible for ensuring that the builder has not done anything
# stupid like working on Prometheus outside of ${GOPATH}.
@@ -83,10 +91,9 @@ source_path:
[ -d "$(FULL_GOPATH)" ]
documentation: search_index
godoc -http=:6060 -index -index_files='search_index'
$(GODOC) -http=:6060 -index -index_files='search_index'
clean:
$(MAKE) -C examples clean
rm -rf $(MAKE_ARTIFACTS)
find . -iname '*~' -exec rm -f '{}' ';'
find . -iname '*#' -exec rm -f '{}' ';'

View File

@@ -1,96 +1,30 @@
# Overview
These [Go](http://golang.org) packages are an extraction of pieces of
instrumentation code I whipped-up for a personal project that a friend of mine
and I are working on. We were in need of some rudimentary statistics to
observe behaviors of the server's various components, so this was written.
This is the [Prometheus](http://www.prometheus.io)
[Go](http://golang.org) client library. It provides several distinct
functions, and there is separate documentation for each respective
component. You will want to select the appropriate topic below to
continue your journey:
The code here is not a verbatim copy thereof but rather a thoughtful
re-implementation should other folks need to consume and analyze such telemetry.
1. See the [exposition library](prometheus/README.md) if you want to
export metrics to a Prometheus server or pushgateway
N.B. --- I have spent a bit of time working through the model in my head and
probably haven't elucidated my ideas as clearly as I need to. If you examine
examples/{simple,uniform_random}/main.go and registry.go, you'll find several
examples of what types of potential instrumentation use cases this package
addresses. There are probably numerous Go language idiomatic changes that need
to be made, but this task has been deferred for now.
# Continuous Integration
[![Build Status](https://secure.travis-ci.org/prometheus/client_golang.png?branch=master)](http://travis-ci.org/prometheus/client_golang)
# Documentation
Please read the [generated documentation](http://go.pkgdoc.org/github.com/prometheus/client_golang)
for the project's documentation from source code.
# Basic Overview
## Metrics
A metric is a measurement mechanism.
### Gauge
A _Gauge_ is a metric that exposes merely an instantaneous value or some
snapshot thereof.
### Counter
A _Counter_ is a metric that exposes merely a sum or tally of things.
### Histogram
A _Histogram_ is a metric that captures events or samples into _Buckets_. It
exposes its values via percentile estimations.
#### Buckets
A _Bucket_ is a generic container that collects samples and their values. It
prescribes no behavior on its own aside from merely accepting a value,
leaving it up to the concrete implementation to what to do with the injected
values.
##### Accumulating Bucket
An _Accumulating Bucket_ is a _Bucket_ that appends the new sample to a queue
such that the eldest values are evicted according to a given policy.
###### Eviction Policies
Once an _Accumulating Bucket_ reaches capacity, its eviction policy is invoked.
This reaps the oldest N objects subject to certain behavior.
####### Remove Oldest
This merely removes the oldest N items without performing some aggregation
replacement operation on them.
####### Aggregate Oldest
This removes the oldest N items while performing some summary aggregation
operation thereupon, which is then appended to the list in the former values'
place.
##### Tallying Bucket
A _Tallying Bucket_ differs from an _Accumulating Bucket_ in that it never
stores any of the values emitted into it but rather exposes a simplified summary
representation thereof. For instance, if a value therein is requested,
it may situationally emit a minimum, maximum, an average, or any other
reduction mechanism requested.
2. See the [consumption library](extraction/README.md) if you want to
process metrics exported by a Prometheus client. (The Prometheus server
uses that library.)
[![GoDoc](https://godoc.org/github.com/prometheus/client_golang?status.png)](https://godoc.org/github.com/prometheus/client_golang)
# Getting Started
* The source code is periodically indexed: [Go Exposition Client](http://godoc.org/github.com/prometheus/client_golang).
* All of the core developers are accessible via the [Prometheus Developers Mailinglist](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
# Testing
This package employs [gocheck](http://labix.org/gocheck) for testing. Please
ensure that all tests pass by running the following from the project root:
$ go test ./...
The use of gocheck is summarily being phased out; however, old tests that use it
still exist.
# Continuous Integration
[![Build Status](https://secure.travis-ci.org/prometheus/client_golang.png?branch=master)]()
# Contributing
## Contributing
As with the `prometheus/prometheus` repository, we use Gerrit to
manage reviews of pull requests for this repository. See
[`CONTRIBUTING.md`](https://github.com/prometheus/prometheus/blob/master/CONTRIBUTING.md)
in the `prometheus/prometheus` repository for details (but replace the
`prometheus` repository name with `client_golang`).
Please try to avoid warnings flagged by [`go
vet`](https://godoc.org/code.google.com/p/go.tools/cmd/vet) and by
[`golint`](https://github.com/golang/lint), and pay attention to the
[Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) and the _Formatting and style_ section of Peter Bourgon's [Go: Best Practices for Production Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).
See the contributing guidelines for the [Prometheus server](https://github.com/prometheus/prometheus/blob/master/CONTRIBUTING.md).

TODO
View File

@@ -1,2 +0,0 @@
- Validate repository for Go code fluency and idiomatic adherence.
- Evaluate using atomic types versus locks.

View File

@@ -1,40 +0,0 @@
Please try to observe the following rules when naming metrics:
- Use underbars "_" to separate words.
- Have the metric name start from generality and work toward specificity
toward the end. For example, when working with multiple caching subsystems,
consider using the following structure "cache" + "user_credentials" →
"cache_user_credentials" and "cache" + "value_transformations" →
"cache_value_transformations".
- Have whatever is being measured follow the system and subsystem names cited
supra. For instance, with "insertions", "deletions", "evictions",
"replacements" of the above cache, they should be named as
"cache_user_credentials_insertions" and "cache_user_credentials_deletions"
and "cache_user_credentials_deletions" and
"cache_user_credentials_evictions".
- If what is being measured has a standardized unit around it, consider
providing a unit for it.
- Consider adding an additional suffix that designates what the value
represents such as a "total" or "size"---e.g.,
"cache_user_credentials_size_kb" or
"cache_user_credentials_insertions_total".
- Give heed to how future-proof the names are. Things may depend on these
names; and as your service evolves, the calculated values may take on
different meanings, which can be difficult to reflect if deployed code
depends on antique names.
Further considerations:
- The Registry's exposition mechanism is not backed by authorization and
authentication. This is something that will need to be addressed for
production services that are directly exposed to the outside world.
- Engage in as little in-process processing of values as possible. The job
of processing and aggregation of these values belongs in a separate
post-processing job. The same goes for archiving. I will need to evaluate
hooks into something like OpenTSDB.

View File

@@ -1,36 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
all: test
build:
$(MAKE) -C delegator build
$(MAKE) -C random build
$(MAKE) -C simple build
test: build
$(MAKE) -C delegator test
$(MAKE) -C random test
$(MAKE) -C simple test
advice: test
$(MAKE) -C delegator advice
$(MAKE) -C random advice
$(MAKE) -C simple advice
clean:
$(MAKE) -C delegator clean
$(MAKE) -C random clean
$(MAKE) -C simple clean
.PHONY: advice build clean test

View File

@@ -1 +0,0 @@
delegator

View File

@@ -1,32 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MAKE_ARTIFACTS = delegator
all: test
build: delegator
delegator:
$(GO) build .
test: build
$(GO) test . $(GO_TEST_FLAGS)
advice:
$(GO) tool vet .
clean:
rm -f $(MAKE_ARTIFACTS)
.PHONY: advice build clean test

View File

@@ -1,54 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This skeletal example of the telemetry library is provided to demonstrate the
// use of boilerplate HTTP delegation telemetry methods.
package main
import (
"flag"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/exp"
"net/http"
)
// helloHandler demonstrates the DefaultCoarseMux's ability to sniff a
// http.ResponseWriter (specifically http.response) implicit setting of
// a response code.
func helloHandler(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello, hello, hello..."))
}
// goodbyeHandler demonstrates the DefaultCoarseMux's ability to sniff an
// http.ResponseWriter (specifically http.response) explicit setting of
// a response code.
func goodbyeHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusGone)
w.Write([]byte("... and now for the big goodbye!"))
}
// teapotHandler demonstrates the DefaultCoarseMux's ability to sniff an
// http.ResponseWriter (specifically http.response) explicit setting of
// a response code for pure comedic value.
func teapotHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusTeapot)
w.Write([]byte("Short and stout..."))
}
var (
listeningAddress = flag.String("listeningAddress", ":8080", "The address to listen to requests on.")
)
func main() {
flag.Parse()
exp.HandleFunc("/hello", helloHandler)
exp.HandleFunc("/goodbye", goodbyeHandler)
exp.HandleFunc("/teapot", teapotHandler)
exp.Handle(prometheus.ExpositionResource, prometheus.DefaultHandler)
http.ListenAndServe(*listeningAddress, exp.DefaultCoarseMux)
}

View File

@@ -1 +0,0 @@
random

View File

@@ -1,32 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MAKE_ARTIFACTS = random
all: test
build: random
random:
$(GO) build .
test: build
$(GO) test . $(GO_TEST_FLAGS)
advice:
$(GO) tool vet .
clean:
rm -f $(MAKE_ARTIFACTS)
.PHONY: advice clean build test

View File

@@ -1,81 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A simple example of how to use this instrumentation framework in the context
// of having something that emits values into its collectors.
//
// The emitted values correspond to uniform, normal, and exponential
// distributions.
package main
import (
"flag"
"github.com/prometheus/client_golang/prometheus"
"math/rand"
"net/http"
"time"
)
var (
barDomain = flag.Float64("random.barDomain", 10, "The domain for the random parameter bar.")
barMean = flag.Float64("random.barMean", 100, "The mean for the random parameter bar.")
fooDomain = flag.Float64("random.fooDomain", 200, "The domain for the random parameter foo.")
// Create a histogram to track fictitious interservice RPC latency for three
// distinct services.
rpcLatency = prometheus.NewHistogram(&prometheus.HistogramSpecification{
// Four distinct histogram buckets for values:
// - equally-sized,
// - 0 to 50, 50 to 100, 100 to 150, and 150 to 200.
Starts: prometheus.EquallySizedBucketsFor(0, 200, 4),
// Create histogram buckets using an accumulating bucket, a bucket that
// holds sample values subject to an eviction policy:
// - 50 elements are allowed per bucket.
// - Once 50 have been reached, the bucket empties 10 elements, averages the
// evicted elements, and re-appends that back to the bucket.
BucketBuilder: prometheus.AccumulatingBucketBuilder(prometheus.EvictAndReplaceWith(10, prometheus.AverageReducer), 50),
// The histogram reports percentiles 1, 5, 50, 90, and 99.
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
})
rpcCalls = prometheus.NewCounter()
// If for whatever reason you are resistant to the idea of having a static
// registry for metrics, which is a really bad idea when using Prometheus-
// enabled library code, you can create your own.
customRegistry = prometheus.NewRegistry()
)
func main() {
flag.Parse()
go func() {
for {
rpcLatency.Add(map[string]string{"service": "foo"}, rand.Float64()**fooDomain)
rpcCalls.Increment(map[string]string{"service": "foo"})
rpcLatency.Add(map[string]string{"service": "bar"}, (rand.NormFloat64()**barDomain)+*barMean)
rpcCalls.Increment(map[string]string{"service": "bar"})
rpcLatency.Add(map[string]string{"service": "zed"}, rand.ExpFloat64())
rpcCalls.Increment(map[string]string{"service": "zed"})
time.Sleep(100 * time.Millisecond)
}
}()
http.Handle(prometheus.ExpositionResource, customRegistry.Handler())
http.ListenAndServe(*listeningAddress, nil)
}
func init() {
customRegistry.Register("rpc_latency_microseconds", "RPC latency.", prometheus.NilLabels, rpcLatency)
customRegistry.Register("rpc_calls_total", "RPC calls.", prometheus.NilLabels, rpcCalls)
}
var (
listeningAddress = flag.String("listeningAddress", ":8080", "The address to listen to requests on.")
)

View File

@@ -1 +0,0 @@
simple

View File

@@ -1,32 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MAKE_ARTIFACTS = simple
all: test
build: simple
simple:
$(GO) build .
test: build
$(GO) test . $(GO_TEST_FLAGS)
advice:
$(GO) tool vet .
clean:
rm -f $(MAKE_ARTIFACTS)
.PHONY: advice build clean test

View File

@@ -1,26 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A simple skeletal example of how this instrumentation framework is registered
// and invoked. Literally, this is the bare bones.
package main
import (
"flag"
"github.com/prometheus/client_golang/prometheus"
"net/http"
)
func main() {
flag.Parse()
http.Handle(prometheus.ExpositionResource, prometheus.DefaultHandler)
http.ListenAndServe(*listeningAddress, nil)
}
var (
listeningAddress = flag.String("listeningAddress", ":8080", "The address to listen to requests on.")
)

View File

@@ -17,11 +17,9 @@ import (
"errors"
"net/http"
"testing"
"github.com/prometheus/client_golang/test"
)
func testDiscriminatorHTTPHeader(t test.Tester) {
func testDiscriminatorHTTPHeader(t testing.TB) {
var scenarios = []struct {
input map[string]string
output Processor

View File

@@ -20,8 +20,9 @@ import (
"sort"
"testing"
"github.com/prometheus/prometheus/utility/test"
"github.com/prometheus/client_golang/model"
"github.com/prometheus/client_golang/test"
)
var test001Time = model.Now()
@@ -37,7 +38,7 @@ func (s *testProcessor001ProcessScenario) Ingest(r *Result) error {
return nil
}
func (s *testProcessor001ProcessScenario) test(t test.Tester, set int) {
func (s *testProcessor001ProcessScenario) test(t testing.TB, set int) {
reader, err := os.Open(path.Join("fixtures", s.in))
if err != nil {
t.Fatalf("%d. couldn't open scenario input file %s: %s", set, s.in, err)
@@ -64,7 +65,7 @@ func (s *testProcessor001ProcessScenario) test(t test.Tester, set int) {
}
}
func testProcessor001Process(t test.Tester) {
func testProcessor001Process(t testing.TB) {
var scenarios = []testProcessor001ProcessScenario{
{
in: "empty.json",

View File

@@ -21,8 +21,9 @@ import (
"sort"
"testing"
"github.com/prometheus/prometheus/utility/test"
"github.com/prometheus/client_golang/model"
"github.com/prometheus/client_golang/test"
)
var test002Time = model.Now()
@@ -38,7 +39,7 @@ func (s *testProcessor002ProcessScenario) Ingest(r *Result) error {
return nil
}
func (s *testProcessor002ProcessScenario) test(t test.Tester, set int) {
func (s *testProcessor002ProcessScenario) test(t testing.TB, set int) {
reader, err := os.Open(path.Join("fixtures", s.in))
if err != nil {
t.Fatalf("%d. couldn't open scenario input file %s: %s", set, s.in, err)
@@ -65,7 +66,7 @@ func (s *testProcessor002ProcessScenario) test(t test.Tester, set int) {
}
}
func testProcessor002Process(t test.Tester) {
func testProcessor002Process(t testing.TB) {
var scenarios = []testProcessor002ProcessScenario{
{
in: "empty.json",

View File

@@ -16,11 +16,9 @@ package model
import (
"sort"
"testing"
"github.com/prometheus/client_golang/test"
)
func testLabelNames(t test.Tester) {
func testLabelNames(t testing.TB) {
var scenarios = []struct {
in LabelNames
out LabelNames

View File

@@ -16,11 +16,9 @@ package model
import (
"sort"
"testing"
"github.com/prometheus/client_golang/test"
)
func testLabelValues(t test.Tester) {
func testLabelValues(t testing.TB) {
var scenarios = []struct {
in LabelValues
out LabelValues

View File

@@ -13,13 +13,9 @@
package model
import (
"testing"
import "testing"
"github.com/prometheus/client_golang/test"
)
func testMetric(t test.Tester) {
func testMetric(t testing.TB) {
var scenarios = []struct {
input map[string]string
hash uint64

View File

@@ -1,8 +1,15 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model

View File

@@ -1,19 +1,24 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"runtime"
"testing"
"github.com/prometheus/client_golang/test"
)
func testLabelsToSignature(t test.Tester) {
func testLabelsToSignature(t testing.TB) {
var scenarios = []struct {
in map[string]string
out uint64

View File

@@ -1,28 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
all: test
build: dependencies
$(GO) build ./...
dependencies: $(GOCC)
$(GO) get -d
test: build
$(GO) test ./... $(GO_TEST_FLAGS)
advice:
$(GO) tool vet .
.PHONY: advice build dependencies test

prometheus/README.md Normal file
View File

@@ -0,0 +1,53 @@
# Overview
This is the [Go](http://golang.org) client library for
[Prometheus](http://www.prometheus.io) telemetric instrumentation. It
enables authors to define process-space metrics for their servers and
expose them through a web service interface for extraction,
aggregation, and a whole slew of other post-processing techniques.
# Installing
$ go get github.com/prometheus/client_golang/prometheus
# Example
```go
package main
import (
"net/http"
"github.com/prometheus/client_golang/prometheus"
)
var (
indexed = prometheus.NewCounter(prometheus.CounterOpts{
Namespace: "my_company",
Subsystem: "indexer",
Name: "documents_indexed",
Help: "The number of documents indexed.",
})
size = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "my_company",
Subsystem: "storage",
Name: "documents_total_size_bytes",
Help: "The total size of all documents in the storage."}})
)
func main() {
http.Handle("/metrics", prometheus.Handler())
indexed.Inc()
size.Set(5)
http.ListenAndServe(":8080", nil)
}
func init() {
prometheus.MustRegister(indexed)
prometheus.MustRegister(size)
}
```
# Documentation
[![GoDoc](https://godoc.org/github.com/prometheus/client_golang?status.png)](https://godoc.org/github.com/prometheus/client_golang)

View File

@@ -1,120 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"bytes"
"container/heap"
"fmt"
"math"
"sort"
"sync"
"time"
)
type AccumulatingBucket struct {
elements priorityQueue
evictionPolicy EvictionPolicy
maximumSize int
mutex sync.RWMutex
observations int
}
// AccumulatingBucketBuilder is a convenience method for generating a
// BucketBuilder that produces AccumulatingBucket entries with a certain
// behavior set.
func AccumulatingBucketBuilder(evictionPolicy EvictionPolicy, maximumSize int) BucketBuilder {
return func() Bucket {
return &AccumulatingBucket{
elements: make(priorityQueue, 0, maximumSize),
evictionPolicy: evictionPolicy,
maximumSize: maximumSize,
}
}
}
// Add a value to the bucket. Depending on whether the bucket is full, it may
// trigger an eviction of older items.
func (b *AccumulatingBucket) Add(value float64) {
b.mutex.Lock()
defer b.mutex.Unlock()
b.observations++
size := len(b.elements)
v := item{
Priority: -1 * time.Now().UnixNano(),
Value: value,
}
if size == b.maximumSize {
b.evictionPolicy(&b.elements)
}
heap.Push(&b.elements, &v)
}
func (b AccumulatingBucket) String() string {
b.mutex.RLock()
defer b.mutex.RUnlock()
buffer := &bytes.Buffer{}
fmt.Fprintf(buffer, "[AccumulatingBucket with %d elements and %d capacity] { ", len(b.elements), b.maximumSize)
for i := 0; i < len(b.elements); i++ {
fmt.Fprintf(buffer, "%f, ", b.elements[i].Value)
}
fmt.Fprintf(buffer, "}")
return buffer.String()
}
func (b AccumulatingBucket) ValueForIndex(index int) float64 {
b.mutex.RLock()
defer b.mutex.RUnlock()
elementCount := len(b.elements)
if elementCount == 0 {
return math.NaN()
}
sortedElements := make([]float64, elementCount)
for i, element := range b.elements {
sortedElements[i] = element.Value.(float64)
}
sort.Float64s(sortedElements)
// N.B.(mtp): Interfacing components should not need to comprehend what
// eviction and storage container strategies are used; therefore,
// we adjust this silently.
targetIndex := int(float64(elementCount-1) * (float64(index) / float64(b.observations)))
return sortedElements[targetIndex]
}
func (b AccumulatingBucket) Observations() int {
b.mutex.RLock()
defer b.mutex.RUnlock()
return b.observations
}
func (b *AccumulatingBucket) Reset() {
b.mutex.Lock()
defer b.mutex.Unlock()
for i := 0; i < b.elements.Len(); i++ {
b.elements.Pop()
}
b.observations = 0
}

View File

@@ -1,154 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
. "github.com/matttproud/gocheck"
"time"
)
func (s *S) TestAccumulatingBucketBuilderWithEvictOldest(c *C) {
var evictOldestThree EvictionPolicy = EvictOldest(3)
c.Assert(evictOldestThree, Not(IsNil))
bb := AccumulatingBucketBuilder(evictOldestThree, 5)
c.Assert(bb, Not(IsNil))
var b Bucket = bb()
c.Assert(b, Not(IsNil))
c.Check(b.String(), Equals, "[AccumulatingBucket with 0 elements and 5 capacity] { }")
b.Add(1)
c.Check(b.String(), Equals, "[AccumulatingBucket with 1 elements and 5 capacity] { 1.000000, }")
b.Add(2)
c.Check(b.String(), Equals, "[AccumulatingBucket with 2 elements and 5 capacity] { 1.000000, 2.000000, }")
b.Add(3)
c.Check(b.String(), Equals, "[AccumulatingBucket with 3 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, }")
b.Add(4)
c.Check(b.String(), Equals, "[AccumulatingBucket with 4 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, 4.000000, }")
b.Add(5)
c.Check(b.String(), Equals, "[AccumulatingBucket with 5 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, 4.000000, 5.000000, }")
b.Add(6)
c.Check(b.String(), Equals, "[AccumulatingBucket with 3 elements and 5 capacity] { 4.000000, 5.000000, 6.000000, }")
var bucket Bucket = b
c.Assert(bucket, Not(IsNil))
}
func (s *S) TestAccumulatingBucketBuilderWithEvictAndReplaceWithAverage(c *C) {
var evictAndReplaceWithAverage EvictionPolicy = EvictAndReplaceWith(3, AverageReducer)
c.Assert(evictAndReplaceWithAverage, Not(IsNil))
bb := AccumulatingBucketBuilder(evictAndReplaceWithAverage, 5)
c.Assert(bb, Not(IsNil))
var b Bucket = bb()
c.Assert(b, Not(IsNil))
c.Check(b.String(), Equals, "[AccumulatingBucket with 0 elements and 5 capacity] { }")
b.Add(1)
c.Check(b.String(), Equals, "[AccumulatingBucket with 1 elements and 5 capacity] { 1.000000, }")
b.Add(2)
c.Check(b.String(), Equals, "[AccumulatingBucket with 2 elements and 5 capacity] { 1.000000, 2.000000, }")
b.Add(3)
c.Check(b.String(), Equals, "[AccumulatingBucket with 3 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, }")
b.Add(4)
c.Check(b.String(), Equals, "[AccumulatingBucket with 4 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, 4.000000, }")
b.Add(5)
c.Check(b.String(), Equals, "[AccumulatingBucket with 5 elements and 5 capacity] { 1.000000, 2.000000, 3.000000, 4.000000, 5.000000, }")
b.Add(6)
c.Check(b.String(), Equals, "[AccumulatingBucket with 4 elements and 5 capacity] { 4.000000, 5.000000, 2.000000, 6.000000, }")
}
func (s *S) TestAccumulatingBucket(c *C) {
var b AccumulatingBucket = AccumulatingBucket{
elements: make(priorityQueue, 0, 10),
maximumSize: 5,
}
c.Check(b.elements, HasLen, 0)
c.Check(b.observations, Equals, 0)
c.Check(b.Observations(), Equals, 0)
b.Add(5.0)
c.Check(b.elements, HasLen, 1)
c.Check(b.observations, Equals, 1)
c.Check(b.Observations(), Equals, 1)
b.Add(6.0)
b.Add(7.0)
b.Add(8.0)
b.Add(9.0)
c.Check(b.elements, HasLen, 5)
c.Check(b.observations, Equals, 5)
c.Check(b.Observations(), Equals, 5)
}
func (s *S) TestAccumulatingBucketValueForIndex(c *C) {
var b AccumulatingBucket = AccumulatingBucket{
elements: make(priorityQueue, 0, 100),
maximumSize: 100,
evictionPolicy: EvictOldest(50),
}
for i := 0; i <= 100; i++ {
c.Assert(b.ValueForIndex(i), IsNaN)
}
// The bucket has only observed one item and contains now one item.
b.Add(1.0)
c.Check(b.ValueForIndex(0), Equals, 1.0)
// Let's sanity check what occurs if presumably an eviction happened and
// we requested an index larger than what is contained.
c.Check(b.ValueForIndex(1), Equals, 1.0)
for i := 2.0; i <= 100; i += 1 {
b.Add(i)
// TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
time.Sleep(1 * time.Millisecond)
}
c.Check(b.ValueForIndex(0), Equals, 1.0)
c.Check(b.ValueForIndex(50), Equals, 50.0)
c.Check(b.ValueForIndex(100), Equals, 100.0)
for i := 101.0; i <= 150; i += 1 {
b.Add(i)
// TODO(mtp): This is a sin. Provide a mechanism for deterministic testing.
time.Sleep(1 * time.Millisecond)
}
// The bucket's capacity has been exceeded by inputs at this point;
// consequently, we search for a given element by percentage offset
// therein.
c.Check(b.ValueForIndex(0), Equals, 51.0)
c.Check(b.ValueForIndex(50), Equals, 84.0)
c.Check(b.ValueForIndex(99), Equals, 116.0)
c.Check(b.ValueForIndex(100), Equals, 117.0)
}

View File

@@ -0,0 +1,131 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"testing"
)
func BenchmarkCounterWithLabelValues(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Inc()
}
}
func BenchmarkCounterWithMappedLabels(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.With(Labels{"two": "zwei", "one": "eins", "three": "drei"}).Inc()
}
}
func BenchmarkCounterWithPreparedMappedLabels(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
labels := Labels{"two": "zwei", "one": "eins", "three": "drei"}
for i := 0; i < b.N; i++ {
m.With(labels).Inc()
}
}
func BenchmarkCounterNoLabels(b *testing.B) {
m := NewCounter(CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
})
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Inc()
}
}
func BenchmarkGaugeWithLabelValues(b *testing.B) {
m := NewGaugeVec(
GaugeOpts{
Name: "benchmark_gauge",
Help: "A gauge to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Set(3.1415)
}
}
func BenchmarkGaugeNoLabels(b *testing.B) {
m := NewGauge(GaugeOpts{
Name: "benchmark_gauge",
Help: "A gauge to benchmark it.",
})
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Set(3.1415)
}
}
func BenchmarkSummaryWithLabelValues(b *testing.B) {
m := NewSummaryVec(
SummaryOpts{
Name: "benchmark_summary",
Help: "A summary to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Observe(3.1415)
}
}
func BenchmarkSummaryNoLabels(b *testing.B) {
m := NewSummary(SummaryOpts{
Name: "benchmark_summary",
Help: "A summary to benchmark it.",
},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Observe(3.1415)
}
}

View File

@@ -1,31 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
// The Histogram class and associated types build buckets on their own.
type BucketBuilder func() Bucket
// This defines the base Bucket type. The exact behaviors of the bucket are
// at the whim of the implementor.
//
// A Bucket is used as a container by Histogram as a collection for its
// accumulated samples.
type Bucket interface {
// Add a value to the bucket.
Add(value float64)
// Provide a count of observations throughout the bucket's lifetime.
Observations() int
// Reset is responsible for resetting this bucket back to a pristine state.
Reset()
// Provide a humanized representation hereof.
String() string
// Provide the value from the given in-memory value cache or an estimate
// thereof for the given index. The consumer of the bucket's data makes
// no assumptions about the underlying storage mechanisms that the bucket
// employs.
ValueForIndex(index int) float64
}

prometheus/collector.go Normal file
View File

@@ -0,0 +1,73 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
// Collector is the interface implemented by anything that can be used by
// Prometheus to collect metrics. A Collector has to be registered for
// collection. See Register, MustRegister, RegisterOrGet, and MustRegisterOrGet.
//
// The stock metrics provided by this package (like Gauge, Counter, Summary) are
// also Collectors (which only ever collect one metric, namely itself). An
// implementer of Collector may, however, collect multiple metrics in a
// coordinated fashion and/or create metrics on the fly. Examples for collectors
// already implemented in this library are the metric vectors (i.e. collection
// of multiple instances of the same Metric but with different label values)
// like GaugeVec or SummaryVec, and the ExpvarCollector.
type Collector interface {
// Describe sends the super-set of all possible descriptors of metrics
// collected by this Collector to the provided channel and returns once
// the last descriptor has been sent. The sent descriptors fulfill the
// consistency and uniqueness requirements described in the Desc
// documentation. (It is valid if one and the same Collector sends
// duplicate descriptors. Those duplicates are simply ignored. However,
// two different Collectors must not send duplicate descriptors.) This
// method idempotently sends the same descriptors throughout the
// lifetime of the Collector.
Describe(chan<- *Desc)
// Collect is called by Prometheus when collecting metrics. The
// implementation sends each collected metric via the provided channel
// and returns once the last metric has been sent. The descriptor of
// each sent metric is one of those returned by Describe. Returned
// metrics that share the same descriptor must differ in their variable
// label values. This method may be called concurrently and must
// therefore be implemented in a concurrency safe way. Blocking occurs
// at the expense of total performance of rendering all registered
// metrics. Ideally, Collector implementations support concurrent
// readers.
Collect(chan<- Metric)
}
// SelfCollector implements Collector for a single Metric so that the
// Metric collects itself. Add it as an anonymous field to a struct that
// implements Metric, and call Init with the Metric itself as an argument.
type SelfCollector struct {
self Metric
}
// Init provides the SelfCollector with a reference to the metric it is supposed
// to collect. It is usually called within the factory function to create a
// metric. See example.
func (c *SelfCollector) Init(self Metric) {
c.self = self
}
// Describe implements Collector.
func (c *SelfCollector) Describe(ch chan<- *Desc) {
ch <- c.self.Desc()
}
// Collect implements Collector.
func (c *SelfCollector) Collect(ch chan<- Metric) {
ch <- c.self
}
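To make the interface concrete, here is a minimal sketch of a custom Collector that wraps a stock Gauge (which, as described above, is itself a Collector) and refreshes its value on every scrape. The metric name and the use of runtime.NumGoroutine are placeholder choices for illustration, assuming the Gauge, GaugeOpts, MustRegister, and Handler APIs introduced elsewhere in this change.

```go
package main

import (
	"net/http"
	"runtime"

	"github.com/prometheus/client_golang/prometheus"
)

// goroutineCollector delegates Describe and Collect to an embedded stock
// Gauge and refreshes its value on every scrape.
type goroutineCollector struct {
	gauge prometheus.Gauge
}

func newGoroutineCollector() *goroutineCollector {
	return &goroutineCollector{
		gauge: prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "goroutines_count", // placeholder name
			Help: "Number of goroutines at scrape time.",
		}),
	}
}

// Describe forwards the descriptor of the embedded Gauge.
func (c *goroutineCollector) Describe(ch chan<- *prometheus.Desc) {
	c.gauge.Describe(ch)
}

// Collect sets the current value and then forwards the metric.
func (c *goroutineCollector) Collect(ch chan<- prometheus.Metric) {
	c.gauge.Set(float64(runtime.NumGoroutine()))
	c.gauge.Collect(ch)
}

func main() {
	prometheus.MustRegister(newGoroutineCollector())
	http.Handle("/metrics", prometheus.Handler())
	http.ListenAndServe(":8080", nil)
}
```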

View File

@@ -1,71 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package prometheus
var (
// NilLabels is a nil set of labels merely for end-user convenience.
NilLabels map[string]string
// DefaultHandler is the default http.Handler for exposing telemetric
// data over a web services interface.
DefaultHandler = DefaultRegistry.Handler()
// DefaultRegistry with which Metric objects are associated.
DefaultRegistry = NewRegistry()
)
const (
// FlagNamespace is a prefix to be used to namespace instrumentation
// flags from others.
FlagNamespace = "telemetry."
// APIVersion is the version of the format of the exported data. This
// will match this library's version, which subscribes to the Semantic
// Versioning scheme.
APIVersion = "0.0.4"
// JSONAPIVersion is the version of the JSON export format.
JSONAPIVersion = "0.0.2"
// DelimitedTelemetryContentType is the content type set on telemetry
// data responses in delimited protobuf format.
DelimitedTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="delimited"`
// TextTelemetryContentType is the content type set on telemetry data
// responses in text format.
TextTelemetryContentType = `text/plain; version=` + APIVersion
// ProtoTextTelemetryContentType is the content type set on telemetry
// data responses in protobuf text format. (Only used for debugging.)
ProtoTextTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="text"`
// ProtoCompactTextTelemetryContentType is the content type set on
// telemetry data responses in protobuf compact text format. (Only used
// for debugging.)
ProtoCompactTextTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="compact-text"`
// JSONTelemetryContentType is the content type set on telemetry data
// responses formatted as JSON.
JSONTelemetryContentType = `application/json; schema="prometheus/telemetry"; version=` + JSONAPIVersion
// ExpositionResource is the customary web services endpoint on which
// telemetric data is exposed.
ExpositionResource = "/metrics"
baseLabelsKey = "baseLabels"
docstringKey = "docstring"
metricKey = "metric"
counterTypeValue = "counter"
floatBitCount = 64
floatFormat = 'f'
floatPrecision = 6
gaugeTypeValue = "gauge"
untypedTypeValue = "untyped"
histogramTypeValue = "histogram"
typeKey = "type"
valueKey = "value"
labelsKey = "labels"
)
var blankLabelsSingleton = map[string]string{}

View File

@@ -1,194 +1,149 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"fmt"
"sync"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/model"
"errors"
"hash/fnv"
)
// TODO(matt): Refactor to de-duplicate behaviors.
// Counter is a Metric that represents a single numerical value that only ever
// goes up. That implies that it cannot be used to count items whose number can
// also go down, e.g. the number of currently running goroutines. Those
// "counters" are represented by Gauges.
//
// A Counter is typically used to count requests served, tasks completed, errors
// occurred, etc.
//
// To create Counter instances, use NewCounter.
type Counter interface {
Metric
Collector
Decrement(labels map[string]string) float64
DecrementBy(labels map[string]string, value float64) float64
Increment(labels map[string]string) float64
IncrementBy(labels map[string]string, value float64) float64
Set(labels map[string]string, value float64) float64
// Set is used to set the Counter to an arbitrary value. It is only used
// if you have to transfer a value from an external counter into this
// Prometheus metrics. Do not use it for regular handling of a
// Prometheus counter (as it can be used to break the contract of
// monotonically increasing values).
Set(float64)
// Inc increments the counter by 1.
Inc()
// Add adds the given value to the counter. It panics if the value is <
// 0.
Add(float64)
}
type counterVector struct {
Labels map[string]string `json:"labels"`
Value float64 `json:"value"`
}
// CounterOpts is an alias for Opts. See there for doc comments.
type CounterOpts Opts
func NewCounter() Counter {
return &counter{
values: map[uint64]*counterVector{},
}
// NewCounter creates a new Counter based on the provided CounterOpts.
func NewCounter(opts CounterOpts) Counter {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
nil,
opts.ConstLabels,
)
result := &counter{value: value{desc: desc, valType: CounterValue}}
result.Init(result) // Init self-collection.
return result
}
type counter struct {
mutex sync.RWMutex
values map[uint64]*counterVector
value
}
func (metric *counter) Set(labels map[string]string, value float64) float64 {
if labels == nil {
labels = blankLabelsSingleton
func (c *counter) Add(v float64) {
if v < 0 {
panic(errors.New("counter cannot decrease in value"))
}
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
if original, ok := metric.values[signature]; ok {
original.Value = value
} else {
metric.values[signature] = &counterVector{
Labels: labels,
Value: value,
}
}
return value
c.value.Add(v)
}
func (metric *counter) Reset(labels map[string]string) {
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
delete(metric.values, signature)
// CounterVec is a Collector that bundles a set of Counters that all share the
// same Desc, but have different values for their variable labels. This is used
// if you want to count the same thing partitioned by various dimensions
// (e.g. number of http requests, partitioned by response code and
// method). Create instances with NewCounterVec.
//
// CounterVec embeds MetricVec. See there for a full list of methods with
// detailed documentation.
type CounterVec struct {
MetricVec
}
func (metric *counter) ResetAll() {
metric.mutex.Lock()
defer metric.mutex.Unlock()
for key, value := range metric.values {
for label := range value.Labels {
delete(value.Labels, label)
}
delete(metric.values, key)
// NewCounterVec creates a new CounterVec based on the provided CounterOpts and
// partitioned by the given label names. At least one label name must be
// provided.
func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
labelNames,
opts.ConstLabels,
)
return &CounterVec{
MetricVec: MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
result := &counter{value: value{
desc: desc,
valType: CounterValue,
labelPairs: makeLabelPairs(desc, lvs),
}}
result.Init(result) // Init self-collection.
return result
},
},
}
}
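A short sketch of how a CounterVec created this way is meant to be used (metric and label names are invented):

package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	httpReqs := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "myapp_http_requests_total", // hypothetical name
			Help: "HTTP requests, partitioned by status code and method.",
		},
		[]string{"code", "method"},
	)
	prometheus.MustRegister(httpReqs)

	// Compact, order-sensitive form ...
	httpReqs.WithLabelValues("200", "GET").Inc()
	// ... or the verbose form, where label order does not matter.
	httpReqs.With(prometheus.Labels{"method": "POST", "code": "404"}).Add(3)
}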
func (metric *counter) String() string {
formatString := "[Counter %s]"
metric.mutex.RLock()
defer metric.mutex.RUnlock()
return fmt.Sprintf(formatString, metric.values)
}
func (metric *counter) IncrementBy(labels map[string]string, value float64) float64 {
if labels == nil {
labels = blankLabelsSingleton
// GetMetricWithLabelValues replaces the method of the same name in
// MetricVec. The difference is that this method returns a Counter and not a
// Metric so that no type conversion is required.
func (m *CounterVec) GetMetricWithLabelValues(lvs ...string) (Counter, error) {
metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...)
if metric != nil {
return metric.(Counter), err
}
return nil, err
}
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
if original, ok := metric.values[signature]; ok {
original.Value += value
} else {
metric.values[signature] = &counterVector{
Labels: labels,
Value: value,
}
// GetMetricWith replaces the method of the same name in MetricVec. The
// difference is that this method returns a Counter and not a Metric so that no
// type conversion is required.
func (m *CounterVec) GetMetricWith(labels Labels) (Counter, error) {
metric, err := m.MetricVec.GetMetricWith(labels)
if metric != nil {
return metric.(Counter), err
}
return value
return nil, err
}
func (metric *counter) Increment(labels map[string]string) float64 {
return metric.IncrementBy(labels, 1)
// WithLabelValues works as GetMetricWithLabelValues, but panics where
// GetMetricWithLabelValues would have returned an error. By not returning an
// error, WithLabelValues allows shortcuts like
// myVec.WithLabelValues("404", "GET").Add(42)
func (m *CounterVec) WithLabelValues(lvs ...string) Counter {
return m.MetricVec.WithLabelValues(lvs...).(Counter)
}
func (metric *counter) DecrementBy(labels map[string]string, value float64) float64 {
if labels == nil {
labels = blankLabelsSingleton
}
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
if original, ok := metric.values[signature]; ok {
original.Value -= value
} else {
metric.values[signature] = &counterVector{
Labels: labels,
Value: -1 * value,
}
}
return value
}
func (metric *counter) Decrement(labels map[string]string) float64 {
return metric.DecrementBy(labels, 1)
}
func (metric *counter) MarshalJSON() ([]byte, error) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
values := make([]*counterVector, 0, len(metric.values))
for _, value := range metric.values {
values = append(values, value)
}
return json.Marshal(map[string]interface{}{
valueKey: values,
typeKey: counterTypeValue,
})
}
func (metric *counter) dumpChildren(f *dto.MetricFamily) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
f.Type = dto.MetricType_COUNTER.Enum()
for _, child := range metric.values {
c := &dto.Counter{
Value: proto.Float64(child.Value),
}
m := &dto.Metric{
Counter: c,
}
for name, value := range child.Labels {
p := &dto.LabelPair{
Name: proto.String(name),
Value: proto.String(value),
}
m.Label = append(m.Label, p)
}
f.Metric = append(f.Metric, m)
}
// With works as GetMetricWith, but panics where GetMetricWith would have
// returned an error. By not returning an error, With allows shortcuts like
// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42)
func (m *CounterVec) With(labels Labels) Counter {
return m.MetricVec.With(labels).(Counter)
}


@ -1,238 +1,45 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"testing"
import "testing"
"github.com/prometheus/client_golang/test"
)
func testCounter(t test.Tester) {
type input struct {
steps []func(g Counter)
func TestCounterAdd(t *testing.T) {
counter := NewCounter(CounterOpts{
Name: "test",
Help: "test help",
}).(*counter)
counter.Inc()
if expected, got := 1., counter.val; expected != got {
t.Errorf("Expected %f, got %f.", expected, got)
}
type output struct {
value string
counter.Add(42)
if expected, got := 43., counter.val; expected != got {
t.Errorf("Expected %f, got %f.", expected, got)
}
var scenarios = []struct {
in input
out output
}{
{
in: input{
steps: []func(g Counter){},
},
out: output{
value: `{"type":"counter","value":[]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(nil, 1)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{},"value":1}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{}, 2)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{},"value":2}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{}, 3)
},
func(g Counter) {
g.Set(map[string]string{}, 5)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{},"value":5}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Counter) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Counter) {
g.Reset(map[string]string{"handler": "/bar"})
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":13}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Counter) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Counter) {
g.ResetAll()
},
},
},
out: output{
value: `{"type":"counter","value":[]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 19)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":19}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 23)
},
func(g Counter) {
g.Increment(map[string]string{"handler": "/foo"})
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":24}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Increment(map[string]string{"handler": "/foo"})
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":1}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Decrement(map[string]string{"handler": "/foo"})
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":-1}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 29)
},
func(g Counter) {
g.Decrement(map[string]string{"handler": "/foo"})
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":28}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 31)
},
func(g Counter) {
g.IncrementBy(map[string]string{"handler": "/foo"}, 5)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":36}]}`,
},
},
{
in: input{
steps: []func(g Counter){
func(g Counter) {
g.Set(map[string]string{"handler": "/foo"}, 37)
},
func(g Counter) {
g.DecrementBy(map[string]string{"handler": "/foo"}, 10)
},
},
},
out: output{
value: `{"type":"counter","value":[{"labels":{"handler":"/foo"},"value":27}]}`,
},
},
}
for i, scenario := range scenarios {
counter := NewCounter()
for _, step := range scenario.in.steps {
step(counter)
}
bytes, err := json.Marshal(counter)
if err != nil {
t.Errorf("%d. could not marshal into JSON %s", i, err)
continue
}
asString := string(bytes)
if scenario.out.value != asString {
t.Errorf("%d. expected %q, got %q", i, scenario.out.value, asString)
}
if expected, got := "counter cannot decrease in value", decreaseCounter(counter).Error(); expected != got {
t.Errorf("Expected error %q, got %q.", expected, got)
}
}
func TestCounter(t *testing.T) {
testCounter(t)
}
func BenchmarkCounter(b *testing.B) {
for i := 0; i < b.N; i++ {
testCounter(b)
}
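// decreaseCounter calls Add with a negative value and turns the expected panic
// into an error, so the test above can compare the error message.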
func decreaseCounter(c *counter) (err error) {
defer func() {
if e := recover(); e != nil {
err = e.(error)
}
}()
c.Add(-1)
return nil
}

186
prometheus/desc.go Normal file

@ -0,0 +1,186 @@
package prometheus
import (
"bytes"
"errors"
"fmt"
"hash/fnv"
"regexp"
"sort"
"strings"
"github.com/prometheus/client_golang/model"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
)
var (
metricNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_:]*$`)
labelNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)
)
// Labels represents a collection of label name -> value mappings. This type is
// commonly used with the With(Labels) and GetMetricWith(Labels) methods of
// metric vector Collectors, e.g.:
// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42)
//
// The other use-case is the specification of constant label pairs in Opts or to
// create a Desc.
type Labels map[string]string
// Desc is the descriptor used by every Prometheus Metric. It is essentially
// the immutable meta-data of a Metric. The normal Metric implementations
// included in this package manage their Desc under the hood. Users only have to
// deal with Desc if they use advanced features like the ExpvarCollector or
// custom Collectors and Metrics.
//
// Descriptors registered with the same registry have to fulfill certain
// consistency and uniqueness criteria if they share the same fully-qualified
// name: They must have the same help string and the same label names (aka label
// dimensions) in each of constLabels and variableLabels, but they must differ in
// the values of the constLabels.
//
// Descriptors that share the same fully-qualified names and the same label
// values of their constLabels are considered equal.
//
// Use NewDesc to create new Desc instances.
type Desc struct {
// fqName has been built from Namespace, Subsystem, and Name.
fqName string
// help provides some helpful information about this metric.
help string
// constLabelPairs contains precalculated DTO label pairs based on
// the constant labels.
constLabelPairs []*dto.LabelPair
// variableLabels contains the names of labels for which the metric
// maintains variable values.
variableLabels []string
// id is a hash of the values of the ConstLabels and fqName. This
// must be unique among all registered descriptors and can therefore be
// used as an identifier of the descriptor.
id uint64
// dimHash is a hash of the label names (preset and variable) and the
// Help string. Each Desc with the same fqName must have the same
// dimHash.
dimHash uint64
// err is an error that occurred during construction. It is reported at
// registration time.
err error
}
// NewDesc allocates and initializes a new Desc. Errors are recorded in the Desc
// and will be reported at registration time. variableLabels and constLabels can
// be nil if no such labels should be set. fqName and help must not be empty.
//
// variableLabels contains only the label names. Their label values are variable
// and therefore not part of the Desc. (They are managed within the Metric.)
//
// For constLabels, the label values are constant. Therefore, they are fully
// specified in the Desc. See the Opts documentation for the implications of
// constant labels.
func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *Desc {
d := &Desc{
fqName: fqName,
help: help,
variableLabels: variableLabels,
}
if help == "" {
d.err = errors.New("empty help string")
return d
}
if !metricNameRE.MatchString(fqName) {
d.err = fmt.Errorf("%q is not a valid metric name", fqName)
return d
}
// labelValues contains the label values of const labels (in order of
// their sorted label names) plus the fqName (at position 0).
labelValues := make([]string, 1, len(constLabels)+1)
labelValues[0] = fqName
labelNames := make([]string, 0, len(constLabels)+len(variableLabels))
labelNameSet := map[string]struct{}{}
// First add only the const label names and sort them...
for labelName := range constLabels {
if !checkLabelName(labelName) {
d.err = fmt.Errorf("%q is not a valid label name", labelName)
return d
}
labelNames = append(labelNames, labelName)
labelNameSet[labelName] = struct{}{}
}
sort.Strings(labelNames)
// ... so that we can now add const label values in the order of their names.
for _, labelName := range labelNames {
labelValues = append(labelValues, constLabels[labelName])
}
// Now add the variable label names, but prefix them with something that
// cannot be in a regular label name. That prevents matching the label
// dimension with a different mix between preset and variable labels.
for _, labelName := range variableLabels {
if !checkLabelName(labelName) {
d.err = fmt.Errorf("%q is not a valid label name", labelName)
return d
}
labelNames = append(labelNames, "$"+labelName)
labelNameSet[labelName] = struct{}{}
}
if len(labelNames) != len(labelNameSet) {
d.err = errors.New("duplicate label names")
return d
}
h := fnv.New64a()
var b bytes.Buffer // To copy string contents into, avoiding []byte allocations.
for _, val := range labelValues {
b.Reset()
b.WriteString(val)
h.Write(b.Bytes())
}
d.id = h.Sum64()
// Sort labelNames so that order doesn't matter for the hash.
sort.Strings(labelNames)
// Now hash together (in this order) the help string and the sorted
// label names.
h.Reset()
b.Reset()
b.WriteString(help)
h.Write(b.Bytes())
for _, labelName := range labelNames {
b.Reset()
b.WriteString(labelName)
h.Write(b.Bytes())
}
d.dimHash = h.Sum64()
d.constLabelPairs = make([]*dto.LabelPair, 0, len(constLabels))
for n, v := range constLabels {
d.constLabelPairs = append(d.constLabelPairs, &dto.LabelPair{
Name: proto.String(n),
Value: proto.String(v),
})
}
sort.Sort(LabelPairSorter(d.constLabelPairs))
return d
}
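A small sketch of a typical NewDesc call as it would appear in custom Collector code (namespace, names, and label values are invented):

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	desc := prometheus.NewDesc(
		prometheus.BuildFQName("myapp", "disk", "hd_failures_total"), // hypothetical fully-qualified name
		"Number of hard-disk failures.",
		[]string{"device"},              // variable label, filled in per collected Metric
		prometheus.Labels{"rack": "42"}, // constant label, fixed in the Desc itself
	)
	fmt.Println(desc)
}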
func (d *Desc) String() string {
lpStrings := make([]string, 0, len(d.constLabelPairs))
for _, lp := range d.constLabelPairs {
lpStrings = append(
lpStrings,
fmt.Sprintf("%s=%q", lp.Name, lp.Value),
)
}
return fmt.Sprintf(
"Desc{fqName: %q, help: %q, constLabels: {%s}, variableLabels: %v}",
d.fqName,
d.help,
strings.Join(lpStrings, ","),
d.variableLabels,
)
}
func checkLabelName(l string) bool {
return labelNameRE.MatchString(l) &&
!strings.HasPrefix(l, model.ReservedLabelPrefix)
}


@ -1,36 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"math"
)
// Go's standard library does not offer a factorial function.
func factorial(of int) int64 {
if of <= 0 {
return 1
}
var result int64 = 1
for i := int64(of); i >= 1; i-- {
result *= i
}
return result
}
// Calculate the value of a probability density for a given binomial statistic,
// where k is the target count of true cases, n is the number of subjects, and
// p is the probability.
func binomialPDF(k, n int, p float64) float64 {
binomialCoefficient := float64(factorial(n)) / float64(factorial(k)*factorial(n-k))
intermediate := math.Pow(p, float64(k)) * math.Pow(1-p, float64(n-k))
return binomialCoefficient * intermediate
}
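For reference, binomialPDF computes the binomial probability mass function

    P(k; n, p) = \binom{n}{k}\, p^{k} (1-p)^{n-k}, \qquad \binom{n}{k} = \frac{n!}{k!\,(n-k)!},

using the factorial helper above for the binomial coefficient.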

108
prometheus/doc.go Normal file

@ -0,0 +1,108 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package prometheus provides embeddable metric primitives for servers and
// standardized exposition of telemetry through a web services interface.
//
// All exported functions and methods are safe to be used concurrently unless
// specified otherwise.
//
// To expose metrics registered with the Prometheus registry, an HTTP server
// needs to know about the Prometheus handler. The usual endpoint is "/metrics".
//
// http.Handle("/metrics", prometheus.Handler())
//
// As a starting point, here is a very basic usage example:
//
// package main
//
// import (
// "net/http"
//
// "github.com/prometheus/client_golang/prometheus"
// )
//
// var (
// cpuTemp = prometheus.NewGauge(prometheus.GaugeOpts{
// Name: "cpu_temperature_celsius",
// Help: "Current temperature of the CPU.",
// })
// hdFailures = prometheus.NewCounter(prometheus.CounterOpts{
// Name: "hd_errors_total",
// Help: "Number of hard-disk errors.",
// })
// )
//
// func init() {
// prometheus.MustRegister(cpuTemp)
// prometheus.MustRegister(hdFailures)
// }
//
// func main() {
// cpuTemp.Set(65.3)
// hdFailures.Inc()
//
// http.Handle("/metrics", prometheus.Handler())
// http.ListenAndServe(":8080", nil)
// }
//
//
// This is a complete program that exports two metrics, a Gauge and a Counter.
// It also exports some stats about the HTTP usage of the /metrics
// endpoint. (See the Handler function for more detail.)
//
// A more advanced metric type is the Summary.
//
// In addition to the fundamental metric types Gauge, Counter, and Summary, a
// very important part of the Prometheus data model is the partitioning of
// samples along dimensions called labels, which results in metric vectors. The
// fundamental types are GaugeVec, CounterVec, and SummaryVec.
//
// Those are all the parts needed for basic usage. Detailed documentation and
// examples are provided below.
//
// Everything else this package offers is essentially for "power users" only. A
// few pointers to "power user features":
//
// All the various ...Opts structs have a ConstLabels field for labels that
// never change their value (which is only useful under special circumstances,
// see documentation of the Opts type).
//
// The Untyped metric behaves like a Gauge, but signals the Prometheus server
// not to assume anything about its type.
//
// Functions to fine-tune how the metric registry works: EnableCollectChecks,
// PanicOnCollectError, Register, Unregister, SetMetricFamilyInjectionHook.
//
// For custom metric collection, there are two entry points: Custom Metric
// implementations and custom Collector implementations. A Metric is the
// fundamental unit in the Prometheus data model: a sample at a point in time
// together with its meta-data (like its fully-qualified name and any number of
// pairs of label name and label value) that knows how to marshal itself into a
// data transfer object (aka DTO, implemented as a protocol buffer). A Collector
// gets registered with the Prometheus registry and manages the collection of
// one or more Metrics. Many parts of this package are building blocks for
// Metrics and Collectors. Desc is the metric descriptor, actually used by all
// metrics under the hood, and by Collectors to describe the Metrics to be
// collected, but only to be dealt with by users if they implement their own
// Metrics or Collectors. To create a Desc, the BuildFQName function will come
// in handy. Other useful components for Metric and Collector implementation
// include: LabelPairSorter to sort the DTO version of label pairs,
// NewConstMetric and MustNewConstMetric to create "throw away" Metrics at
// collection time, MetricVec to bundle custom Metrics into a metric vector
// Collector, SelfCollector to make a custom Metric collect itself.
//
// A good example for a custom Collector is the ExpVarCollector included in this
// package, which exports variables exported via the "expvar" package as
// Prometheus metrics.
package prometheus
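As a sketch of the Untyped metric mentioned above, assuming an UntypedOpts/NewUntyped pair that mirrors GaugeOpts/NewGauge (which is how the rest of this rewrite is structured; name and value are invented):

package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	// Assumed API: NewUntyped/UntypedOpts analogous to the Gauge constructors.
	inflight := prometheus.NewUntyped(prometheus.UntypedOpts{
		Name: "myapp_inflight_requests", // hypothetical name
		Help: "Requests currently in flight; type deliberately left unspecified.",
	})
	prometheus.MustRegister(inflight)
	inflight.Set(3) // behaves like a Gauge, but is exposed as type "untyped"
}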


@ -1,17 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
// Package prometheus provides client side metric primitives and a telemetry
// exposition framework.
//
// This package provides both metric primitives and tools for their exposition
// to the Prometheus time series collection and computation framework.
//
// prometheus.Register("human_readable_metric_name", "metric docstring", map[string]string{"baseLabel": "baseLabelValue"}, metric)
//
// The examples under github.com/prometheus/client_golang/examples should be
// consulted.
package prometheus


@ -1,47 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"container/heap"
"time"
)
// EvictionPolicy implements some sort of garbage collection methodology for
// an underlying heap.Interface. This is presently only used for
// AccumulatingBucket.
type EvictionPolicy func(h heap.Interface)
// As the name implies, this evicts the oldest x objects from the heap.
func EvictOldest(count int) EvictionPolicy {
return func(h heap.Interface) {
for i := 0; i < count; i++ {
heap.Pop(h)
}
}
}
// This factory produces an EvictionPolicy that applies some standardized
// reduction methodology on the to-be-terminated values.
func EvictAndReplaceWith(count int, reducer ReductionMethod) EvictionPolicy {
return func(h heap.Interface) {
oldValues := make([]float64, count)
for i := 0; i < count; i++ {
oldValues[i] = heap.Pop(h).(*item).Value.(float64)
}
reduced := reducer(oldValues)
heap.Push(h, &item{
Value: reduced,
// TODO(mtp): Parameterize the priority generation since these tools are
// useful.
Priority: -1 * time.Now().UnixNano(),
})
}
}


@ -1,177 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"container/heap"
. "github.com/matttproud/gocheck"
)
func (s *S) TestEvictOldest(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
var e EvictionPolicy = EvictOldest(5)
for i := 0; i < 10; i++ {
var item item = item{
Priority: int64(i),
Value: float64(i),
}
heap.Push(&q, &item)
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 5)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
}
func (s *S) TestEvictAndReplaceWithAverage(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
var e EvictionPolicy = EvictAndReplaceWith(5, AverageReducer)
for i := 0; i < 10; i++ {
var item item = item{
Priority: int64(i),
Value: float64(i),
}
heap.Push(&q, &item)
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 6)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
c.Check(heap.Pop(&q), ValueEquals, 7.0)
}
func (s *S) TestEvictAndReplaceWithMedian(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
var e EvictionPolicy = EvictAndReplaceWith(5, MedianReducer)
for i := 0; i < 10; i++ {
var item item = item{
Priority: int64(i),
Value: float64(i),
}
heap.Push(&q, &item)
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 6)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
c.Check(heap.Pop(&q), ValueEquals, 7.0)
}
func (s *S) TestEvictAndReplaceWithFirstMode(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
e := EvictAndReplaceWith(5, FirstModeReducer)
for i := 0; i < 10; i++ {
heap.Push(&q, &item{
Priority: int64(i),
Value: float64(i),
})
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 6)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
c.Check(heap.Pop(&q), ValueEquals, 9.0)
}
func (s *S) TestEvictAndReplaceWithMinimum(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
var e EvictionPolicy = EvictAndReplaceWith(5, MinimumReducer)
for i := 0; i < 10; i++ {
var item item = item{
Priority: int64(i),
Value: float64(i),
}
heap.Push(&q, &item)
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 6)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
c.Check(heap.Pop(&q), ValueEquals, 5.0)
}
func (s *S) TestEvictAndReplaceWithMaximum(c *C) {
q := make(priorityQueue, 0, 10)
heap.Init(&q)
var e EvictionPolicy = EvictAndReplaceWith(5, MaximumReducer)
for i := 0; i < 10; i++ {
var item item = item{
Priority: int64(i),
Value: float64(i),
}
heap.Push(&q, &item)
}
c.Check(q, HasLen, 10)
e(&q)
c.Check(q, HasLen, 6)
c.Check(heap.Pop(&q), ValueEquals, 4.0)
c.Check(heap.Pop(&q), ValueEquals, 3.0)
c.Check(heap.Pop(&q), ValueEquals, 2.0)
c.Check(heap.Pop(&q), ValueEquals, 1.0)
c.Check(heap.Pop(&q), ValueEquals, 0.0)
c.Check(heap.Pop(&q), ValueEquals, 9.0)
}


@ -0,0 +1,130 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"sync"
"github.com/prometheus/client_golang/prometheus"
)
// ClusterManager is an example of a system that might have been built without
// Prometheus in mind. It models a central manager of jobs running in a
// cluster. To turn it into something that collects Prometheus metrics, we
// simply add the two methods required for the Collector interface.
//
// An additional challenge is that multiple instances of the ClusterManager are
// run within the same binary, each in charge of a different zone. We need to
// make use of ConstLabels to be able to register each ClusterManager instance
// with Prometheus.
type ClusterManager struct {
Zone string
OOMCount *prometheus.CounterVec
RAMUsage *prometheus.GaugeVec
mtx sync.Mutex // Protects OOMCount and RAMUsage.
// ... many more fields
}
// ReallyExpensiveAssessmentOfTheSystemState is a mock for the data gathering a
// real cluster manager would have to do. Since it may actually be really
// expensive, it must only be called once per collection. This implementation,
// obviously, only returns some made-up data.
func (c *ClusterManager) ReallyExpensiveAssessmentOfTheSystemState() (
oomCountByHost map[string]int, ramUsageByHost map[string]float64,
) {
// Just example fake data.
oomCountByHost = map[string]int{
"foo.example.org": 42,
"bar.example.org": 2001,
}
ramUsageByHost = map[string]float64{
"foo.example.org": 6.023e23,
"bar.example.org": 3.14,
}
return
}
// Describe faces the interesting challenge that the two metric vectors used in
// this example are themselves already Collectors. However, thanks to
// the use of channels, it is really easy to "chain" Collectors. Here we simply
// call the Describe methods of the two metric vectors.
func (c *ClusterManager) Describe(ch chan<- *prometheus.Desc) {
c.OOMCount.Describe(ch)
c.RAMUsage.Describe(ch)
}
// Collect first triggers the ReallyExpensiveAssessmentOfTheSystemState. Then it
// sets the retrieved values in the two metric vectors and sends all their
// metrics to the channel (again using a chaining technique as in the Describe
// method). Since Collect could be called multiple times concurrently, that part
// is protected by a mutex.
func (c *ClusterManager) Collect(ch chan<- prometheus.Metric) {
oomCountByHost, ramUsageByHost := c.ReallyExpensiveAssessmentOfTheSystemState()
c.mtx.Lock()
defer c.mtx.Unlock()
for host, oomCount := range oomCountByHost {
c.OOMCount.WithLabelValues(host).Set(float64(oomCount))
}
for host, ramUsage := range ramUsageByHost {
c.RAMUsage.WithLabelValues(host).Set(ramUsage)
}
c.OOMCount.Collect(ch)
c.RAMUsage.Collect(ch)
// All metrics in OOMCount and RAMUsage are sent to the channel now. We
// can safely reset the two metric vectors now, so that we can start
// fresh in the next Collect cycle. (Imagine a host disappears from the
// cluster. If we did not reset here, its Metric would stay in the
// metric vectors forever.)
c.OOMCount.Reset()
c.RAMUsage.Reset()
}
// NewClusterManager creates the two metric vectors OOMCount and RAMUsage. Note
// that the zone is set as a ConstLabel. (It's different in each instance of the
// ClusterManager, but constant over the lifetime of an instance.) The reported
// values are partitioned by host, which is therefore a variable label.
func NewClusterManager(zone string) *ClusterManager {
return &ClusterManager{
Zone: zone,
OOMCount: prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: "clustermanager",
Name: "oom_count",
Help: "number of OOM crashes",
ConstLabels: prometheus.Labels{"zone": zone},
},
[]string{"host"},
),
RAMUsage: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: "clustermanager",
Name: "ram_usage_bytes",
Help: "RAM usage as reported to the cluster manager",
ConstLabels: prometheus.Labels{"zone": zone},
},
[]string{"host"},
),
}
}
func ExampleCollector_clustermanager() {
workerDB := NewClusterManager("db")
workerCA := NewClusterManager("ca")
prometheus.MustRegister(workerDB)
prometheus.MustRegister(workerCA)
// Since we are dealing with custom Collector implementations, it might
// be a good idea to enable the collect checks in the registry.
prometheus.EnableCollectChecks(true)
}


@ -0,0 +1,87 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"runtime"
"github.com/prometheus/client_golang/prometheus"
)
var (
allocDesc = prometheus.NewDesc(
prometheus.BuildFQName("", "memstats", "alloc_bytes"),
"bytes allocated and still in use",
nil, nil,
)
totalAllocDesc = prometheus.NewDesc(
prometheus.BuildFQName("", "memstats", "total_alloc_bytes"),
"bytes allocated (even if freed)",
nil, nil,
)
numGCDesc = prometheus.NewDesc(
prometheus.BuildFQName("", "memstats", "num_gc_total"),
"number of GCs run",
nil, nil,
)
)
// MemStatsCollector is an example of a custom Collector that solves the
// problem of feeding into multiple metrics at the same time. The call to
// runtime.ReadMemStats should happen only once, and then the results need to be
// fed into a number of separate Metrics. In this example, only a few of the
// values reported by ReadMemStats are used. For each, there is a Desc provided
// as a var, so the MemStatsCollector itself needs nothing else in the
// struct. Only the methods need to be implemented.
type MemStatsCollector struct{}
// Describe just sends the three Desc objects for the Metrics we intend to
// collect.
func (_ MemStatsCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- allocDesc
ch <- totalAllocDesc
ch <- numGCDesc
}
// Collect does the trick by calling ReadMemStats once and then constructing
// three different Metrics on the fly.
func (_ MemStatsCollector) Collect(ch chan<- prometheus.Metric) {
var ms runtime.MemStats
runtime.ReadMemStats(&ms)
ch <- prometheus.MustNewConstMetric(
allocDesc,
prometheus.GaugeValue,
float64(ms.Alloc),
)
ch <- prometheus.MustNewConstMetric(
totalAllocDesc,
prometheus.GaugeValue,
float64(ms.TotalAlloc),
)
ch <- prometheus.MustNewConstMetric(
numGCDesc,
prometheus.CounterValue,
float64(ms.NumGC),
)
// To avoid new allocations on each collection, you could also keep
// metric objects around and return the same objects each time, just
// with new values set.
}
func ExampleCollector_memstats() {
prometheus.MustRegister(&MemStatsCollector{})
// Since we are dealing with custom Collector implementations, it might
// be a good idea to enable the collect checks in the registry.
prometheus.EnableCollectChecks(true)
}
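A sketch of the allocation-free variant hinted at in the last comment of Collect: keep ordinary metric objects in the Collector and only update them during collection. This relies on the fact that a Gauge is itself a Metric and can be sent on the channel directly (same imports as the example above; the Gauge is assumed to be created once, e.g. with prometheus.NewGauge):

type cachedMemStatsCollector struct {
	alloc prometheus.Gauge // created once and reused on every collection
}

func (c cachedMemStatsCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.alloc.Desc()
}

func (c cachedMemStatsCollector) Collect(ch chan<- prometheus.Metric) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	c.alloc.Set(float64(ms.Alloc)) // update in place, no new Metric allocated
	ch <- c.alloc
}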


@ -0,0 +1,67 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"runtime"
"code.google.com/p/goprotobuf/proto"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/prometheus"
)
func NewCallbackMetric(desc *prometheus.Desc, callback func() float64) *CallbackMetric {
result := &CallbackMetric{desc: desc, callback: callback}
result.Init(result) // Initialize the SelfCollector.
return result
}
// CallbackMetric is an example of a user-defined Metric that exports the
// result of a function call as a metric of type "untyped" without any
// labels. It uses SelfCollector to turn the Metric into a Collector so that it
// can be registered with Prometheus.
//
// Note that this is a pretty low-level approach. A higher-level approach is
// to implement a Collector directly rather than an individual Metric; see the
// Collector examples.
type CallbackMetric struct {
prometheus.SelfCollector
desc *prometheus.Desc
callback func() float64
}
func (cm *CallbackMetric) Desc() *prometheus.Desc {
return cm.desc
}
func (cm *CallbackMetric) Write(m *dto.Metric) {
m.Untyped = &dto.Untyped{Value: proto.Float64(cm.callback())}
}
func ExampleSelfCollector() {
m := NewCallbackMetric(
prometheus.NewDesc(
"runtime_goroutines_count",
"Total number of goroutines that currently exist.",
nil, nil, // No labels, these must be nil.
),
func() float64 {
return float64(runtime.NumGoroutine())
},
)
prometheus.MustRegister(m)
}

435
prometheus/examples_test.go Normal file

@ -0,0 +1,435 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"flag"
"fmt"
"math"
"net/http"
"sort"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/prometheus"
)
func ExampleGauge() {
opsQueued := prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "our_company",
Subsystem: "blob_storage",
Name: "ops_queued",
Help: "Number of blob storage operations waiting to be processed.",
})
prometheus.MustRegister(opsQueued)
// 10 operations queued by the goroutine managing incoming requests.
opsQueued.Add(10)
// A worker goroutine has picked up a waiting operation.
opsQueued.Dec()
// And once more...
opsQueued.Dec()
}
func ExampleGaugeVec() {
binaryVersion := flag.String("binary_version", "debug", "Version of the binary: debug, canary, production.")
flag.Parse()
opsQueued := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: "our_company",
Subsystem: "blob_storage",
Name: "ops_queued",
Help: "Number of blob storage operations waiting to be processed, partitioned by user and type.",
ConstLabels: prometheus.Labels{"binary_version": *binaryVersion},
},
[]string{
// Which user has requested the operation?
"user",
// Of what type is the operation?
"type",
},
)
prometheus.MustRegister(opsQueued)
// Increase a value using compact (but order-sensitive!) WithLabelValues().
opsQueued.WithLabelValues("bob", "put").Add(4)
// Increase a value with a map using WithLabels. More verbose, but order
// doesn't matter anymore.
opsQueued.With(prometheus.Labels{"type": "delete", "user": "alice"}).Inc()
}
func ExampleCounter() {
pushCounter := prometheus.NewCounter(prometheus.CounterOpts{
Name: "repository_pushes", // Note: No help string...
})
_, err := prometheus.Register(pushCounter) // ... so this will return an error.
if err != nil {
fmt.Println("Push counter couldn't be registered, no counting will happen:", err)
return
}
// Try it once more, this time with a help string.
pushCounter = prometheus.NewCounter(prometheus.CounterOpts{
Name: "repository_pushes",
Help: "Number of pushes to external repository.",
})
_, err = prometheus.Register(pushCounter)
if err != nil {
fmt.Println("Push counter couldn't be registered AGAIN, no counting will happen:", err)
return
}
pushComplete := make(chan struct{})
// TODO: Start a goroutine that performs repository pushes and reports
// each completion via the channel.
for _ = range pushComplete {
pushCounter.Inc()
}
// Output:
// Push counter couldn't be registered, no counting will happen: descriptor Desc{fqName: "repository_pushes", help: "", constLabels: {}, variableLabels: []} is invalid: empty help string
}
func ExampleCounterVec() {
binaryVersion := flag.String("environment", "test", "Execution environment: test, staging, production.")
flag.Parse()
httpReqs := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "How many HTTP requests processed, partitioned by status code and http method.",
ConstLabels: prometheus.Labels{"env": *binaryVersion},
},
[]string{"code", "method"},
)
prometheus.MustRegister(httpReqs)
httpReqs.WithLabelValues("404", "POST").Add(42)
// If you have to access the same set of labels very frequently, it
// might be good to retrieve the metric only once and keep a handle to
// it. But beware of deletion of that metric, see below!
m := httpReqs.WithLabelValues("200", "GET")
for i := 0; i < 1000000; i++ {
m.Inc()
}
// Delete a metric from the vector. If you have previously kept a handle
// to that metric (as above), future updates via that handle will go
// unseen (even if you re-create a metric with the same label set
// later).
httpReqs.DeleteLabelValues("200", "GET")
// Same thing with the more verbose Labels syntax.
httpReqs.Delete(prometheus.Labels{"method": "GET", "code": "200"})
}
func ExampleInstrumentHandler() {
// Handle the "/doc" endpoint with the standard http.FileServer handler.
// By wrapping the handler with InstrumentHandler, request count,
// request and response sizes, and request latency are automatically
// exported to Prometheus, partitioned by HTTP status code and method
// and by the handler name (here "fileserver").
http.Handle("/doc", prometheus.InstrumentHandler(
"fileserver", http.FileServer(http.Dir("/usr/share/doc")),
))
// The Prometheus handler still has to be registered to handle the
// "/metrics" endpoint. The handler returned by prometheus.Handler() is
// already instrumented - with "prometheus" as the handler name. In this
// example, we want the handler name to be "metrics", so we instrument
// the uninstrumented Prometheus handler ourselves.
http.Handle("/metrics", prometheus.InstrumentHandler(
"metrics", prometheus.UninstrumentedHandler(),
))
}
func ExampleLabelPairSorter() {
labelPairs := []*dto.LabelPair{
&dto.LabelPair{Name: proto.String("status"), Value: proto.String("404")},
&dto.LabelPair{Name: proto.String("method"), Value: proto.String("get")},
}
sort.Sort(prometheus.LabelPairSorter(labelPairs))
fmt.Println(labelPairs)
// Output:
// [name:"method" value:"get" name:"status" value:"404" ]
}
func ExampleRegister() {
// Imagine you have a worker pool and want to count the tasks completed.
taskCounter := prometheus.NewCounter(prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_total",
Help: "Total number of tasks completed.",
})
// This will register fine.
if _, err := prometheus.Register(taskCounter); err != nil {
fmt.Println(err)
} else {
fmt.Println("taskCounter registered.")
}
// Don't forget to tell the HTTP server about the Prometheus handler.
// (In a real program, you still need to start the http server...)
http.Handle("/metrics", prometheus.Handler())
// Now you can start workers and give every one of them a pointer to
// taskCounter and let it increment it whenever it completes a task.
taskCounter.Inc() // This has to happen somewhere in the worker code.
// But wait, you want to see how individual workers perform. So you need
// a vector of counters, with one element for each worker.
taskCounterVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_total",
Help: "Total number of tasks completed.",
},
[]string{"worker_id"},
)
// Registering will fail because we already have a metric of that name.
if _, err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// To fix, first unregister the old taskCounter.
if prometheus.Unregister(taskCounter) {
fmt.Println("taskCounter unregistered.")
}
// Try registering taskCounterVec again.
if _, err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// Bummer! Still doesn't work.
// Prometheus will not allow you to ever export metrics with
// inconsistent help strings or label names. After unregistering, the
// unregistered metrics will cease to show up in the /metrics http
// response, but the registry still remembers that those metrics had
// been exported before. For this example, we will now choose a
// different name. (In a real program, you would obviously not export
// the obsolete metric in the first place.)
taskCounterVec = prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_by_id",
Help: "Total number of tasks completed.",
},
[]string{"worker_id"},
)
if _, err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// Finally it worked!
// The workers have to tell taskCounterVec their id to increment the
// right element in the metric vector.
taskCounterVec.WithLabelValues("42").Inc() // Code from worker 42.
// Each worker could also keep a reference to their own counter element
// around. Pick the counter at initialization time of the worker.
myCounter := taskCounterVec.WithLabelValues("42") // From worker 42 initialization code.
myCounter.Inc() // Somewhere in the code of that worker.
// Note that something like WithLabelValues("42", "spurious arg") would
// panic (because you have provided too many label values). If you want
// an error instead, use GetMetricWithLabelValues(...).
notMyCounter, err := taskCounterVec.GetMetricWithLabelValues("42", "spurious arg")
if err != nil {
fmt.Println("Worker initialization failed:", err)
}
if notMyCounter == nil {
fmt.Println("notMyCounter is nil.")
}
// A different (and somewhat tricky) approach is to use
// ConstLabels. ConstLabels are pairs of label names and label values
// that never change. You might ask what those labels are good for (and
// rightfully so - if they never change, they could as well be part of
// the metric name). There are essentially two use-cases: The first is
// if labels are constant throughout the lifetime of a binary execution,
// but they vary over time or between different instances of a running
// binary. The second is what we have here: Each worker creates and
// registers its own Counter instance where the only difference is in the
// value of the ConstLabels. Those Counters can all be registered
// because the different ConstLabel values guarantee that each worker
// will increment a different Counter metric.
counterOpts := prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks",
Help: "Total number of tasks completed.",
ConstLabels: prometheus.Labels{"worker_id": "42"},
}
taskCounterForWorker42 := prometheus.NewCounter(counterOpts)
if _, err := prometheus.Register(taskCounterForWorker42); err != nil {
fmt.Println("taskCounterVForWorker42 not registered:", err)
} else {
fmt.Println("taskCounterForWorker42 registered.")
}
// Obviously, in real code, taskCounterForWorker42 would be a member
// variable of a worker struct, and the "42" would be retrieved with a
// GetId() method or something. The Counter would be created and
// registered in the initialization code of the worker.
// For the creation of the next Counter, we can recycle
// counterOpts. Just change the ConstLabels.
counterOpts.ConstLabels = prometheus.Labels{"worker_id": "2001"}
taskCounterForWorker2001 := prometheus.NewCounter(counterOpts)
if _, err := prometheus.Register(taskCounterForWorker2001); err != nil {
fmt.Println("taskCounterVForWorker2001 not registered:", err)
} else {
fmt.Println("taskCounterForWorker2001 registered.")
}
taskCounterForWorker2001.Inc()
taskCounterForWorker42.Inc()
taskCounterForWorker2001.Inc()
// Yet another approach would be to turn the workers themselves into
// Collectors and register them. See the Collector example for details.
// Output:
// taskCounter registered.
// taskCounterVec not registered: a previously registered descriptor with the same fully-qualified name as Desc{fqName: "worker_pool_completed_tasks_total", help: "Total number of tasks completed.", constLabels: {}, variableLabels: [worker_id]} has different label names or a different help string
// taskCounter unregistered.
// taskCounterVec not registered: a previously registered descriptor with the same fully-qualified name as Desc{fqName: "worker_pool_completed_tasks_total", help: "Total number of tasks completed.", constLabels: {}, variableLabels: [worker_id]} has different label names or a different help string
// taskCounterVec registered.
// Worker initialization failed: inconsistent label cardinality
// notMyCounter is nil.
// taskCounterForWorker42 registered.
// taskCounterForWorker2001 registered.
}
func ExampleSummary() {
temps := prometheus.NewSummary(prometheus.SummaryOpts{
Name: "pond_temperature_celsius",
Help: "The temperature of the frog pond.", // Sorry, we can't measure how badly it smells.
})
// Simulate some observations.
for i := 0; i < 1000; i++ {
temps.Observe(30 + math.Floor(120*math.Sin(float64(i)*0.1))/10)
}
// Just for demonstration, let's check the state of the summary by
// (ab)using its Write method (which is usually only used by Prometheus
// internally).
metric := &dto.Metric{}
temps.Write(metric)
fmt.Println(proto.MarshalTextString(metric))
// Output:
// summary: <
// sample_count: 1000
// sample_sum: 29969.50000000001
// quantile: <
// quantile: 0.5
// value: 30.2
// >
// quantile: <
// quantile: 0.9
// value: 41.4
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
}
func ExampleSummaryVec() {
temps := prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "pond_temperature_celsius",
Help: "The temperature of the frog pond.", // Sorry, we can't measure how badly it smells.
},
[]string{"species"},
)
// Simulate some observations.
for i := 0; i < 1000; i++ {
temps.WithLabelValues("litoria-caerulea").Observe(30 + math.Floor(120*math.Sin(float64(i)*0.1))/10)
temps.WithLabelValues("lithobates-catesbeianus").Observe(32 + math.Floor(100*math.Cos(float64(i)*0.11))/10)
}
// Just for demonstration, let's check the state of the summary vector
// by (ab)using its Collect method and the Write method of its elements
// (which is usually only used by Prometheus internally - code like the
// following will never appear in your own code).
metricChan := make(chan prometheus.Metric)
go func() {
defer close(metricChan)
temps.Collect(metricChan)
}()
metricStrings := []string{}
for metric := range metricChan {
dtoMetric := &dto.Metric{}
metric.Write(dtoMetric)
metricStrings = append(metricStrings, proto.MarshalTextString(dtoMetric))
}
sort.Strings(metricStrings) // For reproducible print order.
fmt.Println(metricStrings)
// Output:
// [label: <
// name: "species"
// value: "lithobates-catesbeianus"
// >
// summary: <
// sample_count: 1000
// sample_sum: 31956.100000000017
// quantile: <
// quantile: 0.5
// value: 32
// >
// quantile: <
// quantile: 0.9
// value: 41.5
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
// label: <
// name: "species"
// value: "litoria-caerulea"
// >
// summary: <
// sample_count: 1000
// sample_sum: 29969.50000000001
// quantile: <
// quantile: 0.5
// value: 30.2
// >
// quantile: <
// quantile: 0.9
// value: 41.4
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
// ]
}


@ -1,110 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found in
// the LICENSE file.
package exp
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
"net/http"
"strings"
"time"
)
const (
handler = "handler"
code = "code"
method = "method"
)
type (
coarseMux struct {
*http.ServeMux
}
handlerDelegator struct {
delegate http.Handler
pattern string
}
)
var (
requestCounts = prometheus.NewCounter()
requestDuration = prometheus.NewCounter()
requestDurations = prometheus.NewDefaultHistogram()
requestBytes = prometheus.NewCounter()
responseBytes = prometheus.NewCounter()
// DefaultCoarseMux is a drop-in replacement for http.DefaultServeMux that
// provides standardized telemetry for Go's standard HTTP handler registration
// and dispatch API.
//
// The name is due to the coarse grouping of telemetry by (HTTP Method, HTTP Response Code,
// and handler match pattern) triples.
DefaultCoarseMux = newCoarseMux()
)
func (h handlerDelegator) ServeHTTP(w http.ResponseWriter, r *http.Request) {
start := time.Now()
rwd := NewResponseWriterDelegator(w)
defer func() {
duration := float64(time.Since(start) / time.Microsecond)
status := rwd.Status()
labels := map[string]string{handler: h.pattern, code: status, method: strings.ToLower(r.Method)}
requestCounts.Increment(labels)
requestDuration.IncrementBy(labels, duration)
requestDurations.Add(labels, duration)
requestBytes.IncrementBy(labels, float64(computeApproximateRequestSize(*r)))
responseBytes.IncrementBy(labels, float64(rwd.BytesWritten))
}()
h.delegate.ServeHTTP(rwd, r)
}
func (h handlerDelegator) String() string {
return fmt.Sprintf("handlerDelegator wrapping %s for %s", h.delegate, h.pattern)
}
// Handle registers a http.Handler to this CoarseMux. See http.ServeMux.Handle.
func (m *coarseMux) handle(pattern string, handler http.Handler) {
m.ServeMux.Handle(pattern, handlerDelegator{
delegate: handler,
pattern: pattern,
})
}
// Handle registers a handler to this CoarseMux. See http.ServeMux.HandleFunc.
func (m *coarseMux) handleFunc(pattern string, handler http.HandlerFunc) {
m.ServeMux.Handle(pattern, handlerDelegator{
delegate: handler,
pattern: pattern,
})
}
func newCoarseMux() *coarseMux {
return &coarseMux{
ServeMux: http.NewServeMux(),
}
}
// Handle registers a http.Handler to DefaultCoarseMux. See http.Handle.
func Handle(pattern string, handler http.Handler) {
DefaultCoarseMux.handle(pattern, handler)
}
// HandleFunc registers a handler to DefaultCoarseMux. See http.HandleFunc.
func HandleFunc(pattern string, handler http.HandlerFunc) {
DefaultCoarseMux.handleFunc(pattern, handler)
}
func init() {
prometheus.Register("http_requests_total", "A counter of the total number of HTTP requests made against the default multiplexor.", prometheus.NilLabels, requestCounts)
prometheus.Register("http_request_durations_total_microseconds", "The total amount of time the default multiplexor has spent answering HTTP requests (microseconds).", prometheus.NilLabels, requestDuration)
prometheus.Register("http_request_durations_microseconds", "The amounts of time the default multiplexor has spent answering HTTP requests (microseconds).", prometheus.NilLabels, requestDurations)
prometheus.Register("http_request_bytes_total", "The total volume of content body sizes received (bytes).", prometheus.NilLabels, requestBytes)
prometheus.Register("http_response_bytes_total", "The total volume of response payloads emitted (bytes).", prometheus.NilLabels, responseBytes)
}


@ -1,11 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found in
// the LICENSE file.
// A repository of various immature Prometheus client components that may
// assist in your use of the library. Items contained herein are regarded as
// especially interface-unstable and may change without warning. Upon
// maturation, they should be migrated into a formal package for users.
package exp


@ -1,100 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found in
// the LICENSE file.
package exp
import (
"fmt"
"net/http"
"reflect"
"strconv"
)
const (
unknownStatusCode = "unknown"
statusFieldName = "status"
)
type status string
func (s status) unknown() bool {
return len(s) == 0
}
func (s status) String() string {
if s.unknown() {
return unknownStatusCode
}
return string(s)
}
func computeApproximateRequestSize(r http.Request) (s int) {
s += len(r.Method)
if r.URL != nil {
s += len(r.URL.String())
}
s += len(r.Proto)
for name, values := range r.Header {
s += len(name)
for _, value := range values {
s += len(value)
}
}
s += len(r.Host)
// N.B. r.Form and r.MultipartForm are assumed to be included in r.URL.
if r.ContentLength != -1 {
s += int(r.ContentLength)
}
return
}
// ResponseWriterDelegator is a means of wrapping http.ResponseWriter to divine
// the response code from a given answer, especially in systems where the
// response is treated as a blackbox.
type ResponseWriterDelegator struct {
http.ResponseWriter
status status
BytesWritten int
}
func (r ResponseWriterDelegator) String() string {
return fmt.Sprintf("ResponseWriterDelegator decorating %s with status %s and %d bytes written.", r.ResponseWriter, r.status, r.BytesWritten)
}
func (r *ResponseWriterDelegator) WriteHeader(code int) {
r.status = status(strconv.Itoa(code))
r.ResponseWriter.WriteHeader(code)
}
func (r *ResponseWriterDelegator) Status() string {
if r.status.unknown() {
delegate := reflect.ValueOf(r.ResponseWriter).Elem()
statusField := delegate.FieldByName(statusFieldName)
if statusField.IsValid() {
r.status = status(strconv.Itoa(int(statusField.Int())))
}
}
return r.status.String()
}
func (r *ResponseWriterDelegator) Write(b []byte) (n int, err error) {
n, err = r.ResponseWriter.Write(b)
r.BytesWritten += n
return
}
func NewResponseWriterDelegator(delegate http.ResponseWriter) *ResponseWriterDelegator {
return &ResponseWriterDelegator{
ResponseWriter: delegate,
}
}
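// Usage sketch (not part of the original file): wrapping a handler so the
// status code and number of bytes written can be inspected afterwards. The
// middleware below is hypothetical.
func withStatusLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rwd := NewResponseWriterDelegator(w)
		next.ServeHTTP(rwd, r)
		// After the handler returns, rwd.Status() yields e.g. "200", or
		// "unknown" if no header was written and none could be divined.
		_ = rwd.Status()
		_ = rwd.BytesWritten
	})
}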

prometheus/expvar.go Normal file
@ -0,0 +1,117 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"expvar"
)
// ExpvarCollector collects metrics from the expvar interface. It provides a
// quick way to expose numeric values that are already exported via expvar as
// Prometheus metrics. Note that the data models of expvar and Prometheus are
// fundamentally different, and that the ExpvarCollector is inherently
// slow. Thus, the ExpvarCollector is probably great for experiments and
// prototyping, but you should seriously consider a more direct implementation of
// Prometheus metrics for monitoring production systems.
//
// Use NewExpvarCollector to create new instances.
type ExpvarCollector struct {
exports map[string]*Desc
}
// NewExpvarCollector returns a newly allocated ExpvarCollector that still has
// to be registered with the Prometheus registry.
//
// The exports map has the following meaning:
//
// The keys in the map correspond to expvar keys, i.e. for every expvar key you
// want to export as Prometheus metric, you need an entry in the exports
// map. The descriptor mapped to each key describes how to export the expvar
// value. It defines the name and the help string of the Prometheus metric
// proxying the expvar value. The type will always be Untyped.
//
// For descriptors without variable labels, the expvar value must be a number or
// a bool. The number is then directly exported as the Prometheus sample
// value. (For a bool, 'false' translates to 0 and 'true' to 1). Expvar values
// that are not numbers or bools are silently ignored.
//
// If the descriptor has one variable label, the expvar value must be an expvar
// map. The keys in the expvar map become the various values of the one
// Prometheus label. The values in the expvar map must be numbers or bools again
// as above.
//
// For descriptors with more than one variable label, the expvar must be a
// nested expvar map, i.e. where the values of the topmost map are maps again
// etc. until a depth is reached that corresponds to the number of labels. The
// leaves of that structure must be numbers or bools as above to serve as the
// sample values.
//
// Anything that does not fit into the scheme above is silently ignored.
func NewExpvarCollector(exports map[string]*Desc) *ExpvarCollector {
return &ExpvarCollector{
exports: exports,
}
}
// Describe implements Collector.
func (e *ExpvarCollector) Describe(ch chan<- *Desc) {
for _, desc := range e.exports {
ch <- desc
}
}
// Collect implements Collector.
func (e *ExpvarCollector) Collect(ch chan<- Metric) {
for name, desc := range e.exports {
var m Metric
expVar := expvar.Get(name)
if expVar == nil {
continue
}
var v interface{}
labels := make([]string, len(desc.variableLabels))
if err := json.Unmarshal([]byte(expVar.String()), &v); err == nil {
var processValue func(v interface{}, i int)
processValue = func(v interface{}, i int) {
if i >= len(labels) {
copiedLabels := append(make([]string, 0, len(labels)), labels...)
switch v := v.(type) {
case float64:
m = MustNewConstMetric(desc, UntypedValue, v, copiedLabels...)
case bool:
if v {
m = MustNewConstMetric(desc, UntypedValue, 1, copiedLabels...)
} else {
m = MustNewConstMetric(desc, UntypedValue, 0, copiedLabels...)
}
default:
return
}
ch <- m
return
}
vm, ok := v.(map[string]interface{})
if !ok {
return
}
for lv, val := range vm {
labels[i] = lv
processValue(val, i+1)
}
}
processValue(v, 0)
}
}
}

prometheus/expvar_test.go Normal file
@ -0,0 +1,97 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"expvar"
"fmt"
"sort"
"strings"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/prometheus"
)
func ExampleExpvarCollector() {
expvarCollector := prometheus.NewExpvarCollector(map[string]*prometheus.Desc{
"memstats": prometheus.NewDesc(
"expvar_memstats",
"All numeric memstats as one metric family. Not a good role-model, actually... ;-)",
[]string{"type"}, nil,
),
"lone-int": prometheus.NewDesc(
"expvar_lone_int",
"Just an expvar int as an example.",
nil, nil,
),
"http-request-map": prometheus.NewDesc(
"expvar_http_request_total",
"How many http requests processed, partitioned by status code and http method.",
[]string{"code", "method"}, nil,
),
})
prometheus.MustRegister(expvarCollector)
// The Prometheus part is done here. But to show that this example is
// doing anything, we have to manually export something via expvar. In
// real-life use-cases, some library would already have exported via
// expvar what we want to re-export as Prometheus metrics.
expvar.NewInt("lone-int").Set(42)
expvarMap := expvar.NewMap("http-request-map")
var (
expvarMap1, expvarMap2 expvar.Map
expvarInt11, expvarInt12, expvarInt21, expvarInt22 expvar.Int
)
expvarMap1.Init()
expvarMap2.Init()
expvarInt11.Set(3)
expvarInt12.Set(13)
expvarInt21.Set(11)
expvarInt22.Set(212)
expvarMap1.Set("POST", &expvarInt11)
expvarMap1.Set("GET", &expvarInt12)
expvarMap2.Set("POST", &expvarInt21)
expvarMap2.Set("GET", &expvarInt22)
expvarMap.Set("404", &expvarMap1)
expvarMap.Set("200", &expvarMap2)
// Results in the following expvar map:
// "http-request-map": {"200": {"POST": 11, "GET": 212}, "404": {"POST": 3, "GET": 13}}
// Let's see what the scrape would yield, but exclude the memstats metrics.
metricStrings := []string{}
metric := dto.Metric{}
metricChan := make(chan prometheus.Metric)
go func() {
expvarCollector.Collect(metricChan)
close(metricChan)
}()
for m := range metricChan {
if strings.Index(m.Desc().String(), "expvar_memstats") == -1 {
metric.Reset()
m.Write(&metric)
metricStrings = append(metricStrings, metric.String())
}
}
sort.Strings(metricStrings)
for _, s := range metricStrings {
fmt.Println(strings.TrimRight(s, " "))
}
// Output:
// label:<name:"code" value:"200" > label:<name:"method" value:"GET" > untyped:<value:212 >
// label:<name:"code" value:"200" > label:<name:"method" value:"POST" > untyped:<value:11 >
// label:<name:"code" value:"404" > label:<name:"method" value:"GET" > untyped:<value:13 >
// label:<name:"code" value:"404" > label:<name:"method" value:"POST" > untyped:<value:3 >
// untyped:<value:42 >
}

@ -1,139 +1,123 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"fmt"
"sync"
import "hash/fnv"
"code.google.com/p/goprotobuf/proto"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/model"
)
// A gauge metric merely provides an instantaneous representation of a scalar
// value or an accumulation. For instance, if one wants to expose the current
// temperature or the hitherto bandwidth used, this would be the metric for such
// circumstances.
// Gauge is a Metric that represents a single numerical value that can
// arbitrarily go up and down.
//
// A Gauge is typically used for measured values like temperatures or current
// memory usage, but also "counts" that can go up and down, like the number of
// running goroutines.
//
// To create Gauge instances, use NewGauge.
type Gauge interface {
Metric
Set(labels map[string]string, value float64) float64
Collector
// Set sets the Gauge to an arbitrary value.
Set(float64)
// Inc increments the Gauge by 1.
Inc()
// Dec decrements the Gauge by 1.
Dec()
// Add adds the given value to the Gauge. (The value can be
// negative, resulting in a decrease of the Gauge.)
Add(float64)
// Sub subtracts the given value from the Gauge. (The value can be
// negative, resulting in an increase of the Gauge.)
Sub(float64)
}
type gaugeVector struct {
Labels map[string]string `json:"labels"`
Value float64 `json:"value"`
// GaugeOpts is an alias for Opts. See there for doc comments.
type GaugeOpts Opts
// NewGauge creates a new Gauge based on the provided GaugeOpts.
func NewGauge(opts GaugeOpts) Gauge {
return newValue(NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
nil,
opts.ConstLabels,
), GaugeValue, 0)
}
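// A short usage sketch for Gauge (not part of the original file; metric name,
// help string, and values are made up):
func exampleGaugeUsage() {
	queueLength := NewGauge(GaugeOpts{
		Name: "queue_length",
		Help: "The current number of items in the queue.",
	})
	MustRegister(queueLength)

	queueLength.Set(0)  // Start from a known value.
	queueLength.Inc()   // 1
	queueLength.Add(42) // 43
	queueLength.Sub(2)  // 41
	queueLength.Dec()   // 40
}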
func NewGauge() Gauge {
return &gauge{
values: map[uint64]*gaugeVector{},
// GaugeVec is a Collector that bundles a set of Gauges that all share the same
// Desc, but have different values for their variable labels. This is used if
// you want to count the same thing partitioned by various dimensions
// (e.g. number of operations queued, partitioned by user and operation
// type). Create instances with NewGaugeVec.
type GaugeVec struct {
MetricVec
}
// NewGaugeVec creates a new GaugeVec based on the provided GaugeOpts and
// partitioned by the given label names. At least one label name must be
// provided.
func NewGaugeVec(opts GaugeOpts, labelNames []string) *GaugeVec {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
labelNames,
opts.ConstLabels,
)
return &GaugeVec{
MetricVec: MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
return newValue(desc, GaugeValue, 0, lvs...)
},
},
}
}
type gauge struct {
mutex sync.RWMutex
values map[uint64]*gaugeVector
}
func (metric *gauge) String() string {
formatString := "[Gauge %s]"
metric.mutex.RLock()
defer metric.mutex.RUnlock()
return fmt.Sprintf(formatString, metric.values)
}
func (metric *gauge) Set(labels map[string]string, value float64) float64 {
if labels == nil {
labels = blankLabelsSingleton
// GetMetricWithLabelValues replaces the method of the same name in
// MetricVec. The difference is that this method returns a Gauge and not a
// Metric so that no type conversion is required.
func (m *GaugeVec) GetMetricWithLabelValues(lvs ...string) (Gauge, error) {
metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...)
if metric != nil {
return metric.(Gauge), err
}
return nil, err
}
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
if original, ok := metric.values[signature]; ok {
original.Value = value
} else {
metric.values[signature] = &gaugeVector{
Labels: labels,
Value: value,
}
// GetMetricWith replaces the method of the same name in MetricVec. The
// difference is that this method returns a Gauge and not a Metric so that no
// type conversion is required.
func (m *GaugeVec) GetMetricWith(labels Labels) (Gauge, error) {
metric, err := m.MetricVec.GetMetricWith(labels)
if metric != nil {
return metric.(Gauge), err
}
return value
return nil, err
}
func (metric *gauge) Reset(labels map[string]string) {
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
delete(metric.values, signature)
// WithLabelValues works as GetMetricWithLabelValues, but panics where
// GetMetricWithLabelValues would have returned an error. By not returning an
// error, WithLabelValues allows shortcuts like
// myVec.WithLabelValues("404", "GET").Add(42)
func (m *GaugeVec) WithLabelValues(lvs ...string) Gauge {
return m.MetricVec.WithLabelValues(lvs...).(Gauge)
}
func (metric *gauge) ResetAll() {
metric.mutex.Lock()
defer metric.mutex.Unlock()
for key, value := range metric.values {
for label := range value.Labels {
delete(value.Labels, label)
}
delete(metric.values, key)
}
}
func (metric *gauge) MarshalJSON() ([]byte, error) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
values := make([]*gaugeVector, 0, len(metric.values))
for _, value := range metric.values {
values = append(values, value)
}
return json.Marshal(map[string]interface{}{
typeKey: gaugeTypeValue,
valueKey: values,
})
}
func (metric *gauge) dumpChildren(f *dto.MetricFamily) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
f.Type = dto.MetricType_GAUGE.Enum()
for _, child := range metric.values {
c := &dto.Gauge{
Value: proto.Float64(child.Value),
}
m := &dto.Metric{
Gauge: c,
}
for name, value := range child.Labels {
p := &dto.LabelPair{
Name: proto.String(name),
Value: proto.String(value),
}
m.Label = append(m.Label, p)
}
f.Metric = append(f.Metric, m)
}
// With works as GetMetricWith, but panics where GetMetricWithLabels would have
// returned an error. By not returning an error, With allows shortcuts like
// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42)
func (m *GaugeVec) With(labels Labels) Gauge {
return m.MetricVec.With(labels).(Gauge)
}
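// A short usage sketch for GaugeVec (not part of the original file; names and
// label values are made up):
func exampleGaugeVecUsage() {
	opsQueued := NewGaugeVec(
		GaugeOpts{
			Namespace: "our_company",
			Subsystem: "blob_storage",
			Name:      "ops_queued",
			Help:      "Number of queued blob storage operations, partitioned by user and type.",
		},
		[]string{"user", "type"},
	)
	MustRegister(opsQueued)

	// Increase and decrease the gauge as operations come and go.
	opsQueued.WithLabelValues("bob", "delete").Inc()
	opsQueued.With(Labels{"user": "alice", "type": "put"}).Add(4)
	opsQueued.WithLabelValues("bob", "delete").Dec()
}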

@ -1,154 +1,158 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"math"
"math/rand"
"sync"
"testing"
"github.com/prometheus/client_golang/test"
"testing/quick"
)
func testGauge(t test.Tester) {
type input struct {
steps []func(g Gauge)
func listenGaugeStream(vals, result chan float64, done chan struct{}) {
var sum float64
outer:
for {
select {
case <-done:
close(vals)
for v := range vals {
sum += v
}
break outer
case v := <-vals:
sum += v
}
}
type output struct {
value string
result <- sum
close(result)
}
func TestGaugeConcurrency(t *testing.T) {
it := func(n uint32) bool {
mutations := int(n % 10000)
concLevel := int(n%15 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sStream := make(chan float64, mutations*concLevel)
result := make(chan float64)
done := make(chan struct{})
go listenGaugeStream(sStream, result, done)
go func() {
end.Wait()
close(done)
}()
gge := NewGauge(GaugeOpts{
Name: "test_gauge",
Help: "no help can be found here",
})
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
for j := 0; j < mutations; j++ {
vals[j] = rand.Float64() - 0.5
}
go func(vals []float64) {
start.Wait()
for _, v := range vals {
sStream <- v
gge.Add(v)
}
end.Done()
}(vals)
}
start.Done()
if expected, got := <-result, gge.(*value).val; math.Abs(expected-got) > 0.000001 {
t.Fatalf("expected approx. %f, got %f", expected, got)
return false
}
return true
}
var scenarios = []struct {
in input
out output
}{
{
in: input{
steps: []func(g Gauge){},
},
out: output{
value: `{"type":"gauge","value":[]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(nil, 1)
},
},
},
out: output{
value: `{"type":"gauge","value":[{"labels":{},"value":1}]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(map[string]string{}, 2)
},
},
},
out: output{
value: `{"type":"gauge","value":[{"labels":{},"value":2}]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(map[string]string{}, 3)
},
func(g Gauge) {
g.Set(map[string]string{}, 5)
},
},
},
out: output{
value: `{"type":"gauge","value":[{"labels":{},"value":5}]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Gauge) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Gauge) {
g.Reset(map[string]string{"handler": "/bar"})
},
},
},
out: output{
value: `{"type":"gauge","value":[{"labels":{"handler":"/foo"},"value":13}]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Gauge) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Gauge) {
g.ResetAll()
},
},
},
out: output{
value: `{"type":"gauge","value":[]}`,
},
},
{
in: input{
steps: []func(g Gauge){
func(g Gauge) {
g.Set(map[string]string{"handler": "/foo"}, 19)
},
},
},
out: output{
value: `{"type":"gauge","value":[{"labels":{"handler":"/foo"},"value":19}]}`,
},
},
}
for i, scenario := range scenarios {
gauge := NewGauge()
for _, step := range scenario.in.steps {
step(gauge)
}
bytes, err := json.Marshal(gauge)
if err != nil {
t.Errorf("%d. could not marshal into JSON %s", i, err)
continue
}
asString := string(bytes)
if scenario.out.value != asString {
t.Errorf("%d. expected %q, got %q", i, scenario.out.value, asString)
}
if err := quick.Check(it, nil); err != nil {
t.Fatal(err)
}
}
func TestGauge(t *testing.T) {
testGauge(t)
}
func TestGaugeVecConcurrency(t *testing.T) {
it := func(n uint32) bool {
mutations := int(n % 10000)
concLevel := int(n%15 + 1)
vecLength := int(n%5 + 1)
func BenchmarkGauge(b *testing.B) {
for i := 0; i < b.N; i++ {
testGauge(b)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sStreams := make([]chan float64, vecLength)
results := make([]chan float64, vecLength)
done := make(chan struct{})
for i := 0; i < vecLength; i++ {
sStreams[i] = make(chan float64, mutations*concLevel)
results[i] = make(chan float64)
go listenGaugeStream(sStreams[i], results[i], done)
}
go func() {
end.Wait()
close(done)
}()
gge := NewGaugeVec(
GaugeOpts{
Name: "test_gauge",
Help: "no help can be found here",
},
[]string{"label"},
)
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
pick := make([]int, mutations)
for j := 0; j < mutations; j++ {
vals[j] = rand.Float64() - 0.5
pick[j] = rand.Intn(vecLength)
}
go func(vals []float64) {
start.Wait()
for i, v := range vals {
sStreams[pick[i]] <- v
gge.WithLabelValues(string('A' + pick[i])).Add(v)
}
end.Done()
}(vals)
}
start.Done()
for i := range sStreams {
if expected, got := <-results[i], gge.WithLabelValues(string('A'+i)).(*value).val; math.Abs(expected-got) > 0.000001 {
t.Fatalf("expected approx. %f, got %f", expected, got)
return false
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Fatal(err)
}
}

@ -1,56 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"math"
"reflect"
. "github.com/matttproud/gocheck"
)
type isNaNChecker struct {
*CheckerInfo
}
// This piece provides a simple tester for the gocheck testing library to
// ascertain if a value is not-a-number.
var IsNaN = &isNaNChecker{
&CheckerInfo{Name: "IsNaN", Params: []string{"value"}},
}
func (checker *isNaNChecker) Check(params []interface{}, names []string) (result bool, error string) {
return isNaN(params[0]), ""
}
func isNaN(obtained interface{}) (result bool) {
if obtained == nil {
result = false
} else {
switch v := reflect.ValueOf(obtained); v.Kind() {
case reflect.Float64:
return math.IsNaN(obtained.(float64))
}
}
return false
}
type valueEqualsChecker struct {
*CheckerInfo
}
var ValueEquals = &valueEqualsChecker{
&CheckerInfo{Name: "IsValue", Params: []string{"obtained", "expected"}},
}
func (checker *valueEqualsChecker) Check(params []interface{}, names []string) (result bool, error string) {
actual := params[0].(*item).Value
expected := params[1]
return actual == expected, ""
}

@ -1,403 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"bytes"
"encoding/json"
"fmt"
"math"
"strconv"
"sync"
"time"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/model"
)
// This generates count-buckets of equal size distributed along the open
// interval of lower to upper. For instance, {lower=0, upper=10, count=5}
// yields the following: [0, 2, 4, 6, 8].
func EquallySizedBucketsFor(lower, upper float64, count int) []float64 {
buckets := make([]float64, count)
partitionSize := (upper - lower) / float64(count)
for i := 0; i < count; i++ {
m := float64(i)
buckets[i] = lower + (m * partitionSize)
}
return buckets
}
// This generates log2-sized buckets spanning from lower to upper inclusively
// as well as values beyond it.
func LogarithmicSizedBucketsFor(lower, upper float64) []float64 {
bucketCount := int(math.Ceil(math.Log2(upper)))
buckets := make([]float64, bucketCount)
for i, j := 0, 0.0; i < bucketCount; i, j = i+1, math.Pow(2, float64(i+1.0)) {
buckets[i] = j
}
return buckets
}
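// Usage sketch (not part of the original file), reproducing the values from
// the doc comments above:
func exampleBucketGenerators() {
	_ = EquallySizedBucketsFor(0, 10, 5)    // [0 2 4 6 8]
	_ = LogarithmicSizedBucketsFor(0, 4096) // 12 buckets, as used by NewDefaultHistogram below
}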
// A HistogramSpecification defines how a Histogram is to be built.
type HistogramSpecification struct {
BucketBuilder BucketBuilder
ReportablePercentiles []float64
Starts []float64
PurgeInterval time.Duration
}
type Histogram interface {
Metric
Add(labels map[string]string, value float64)
}
// The histogram is an accumulator for samples. It merely routes each observed
// value into the appropriate bucket and provides a percentile calculation
// mechanism.
type histogram struct {
bucketMaker BucketBuilder
// This represents the open interval's start at which values shall be added to
// the bucket. The interval continues until the beginning of the next bucket
// exclusive or positive infinity.
//
// N.B.
// - bucketStarts should be sorted in ascending order;
// - len(bucketStarts) must be equivalent to len(buckets);
// - The index of a given bucketStarts' element is presumed to
// correspond to the appropriate element in buckets.
bucketStarts []float64
mutex sync.RWMutex
// These are the buckets that capture samples as they are emitted to the
// histogram. Please consult the reference interface and its implements for
// further details about behavior expectations.
values map[uint64]*histogramVector
// These are the percentile values that will be reported on marshalling.
reportablePercentiles []float64
purgeInterval time.Duration
lastPurge time.Time
}
type histogramVector struct {
buckets []Bucket
labels map[string]string
sum float64
count uint64
}
func (h *histogram) Add(labels map[string]string, value float64) {
if labels == nil {
labels = blankLabelsSingleton
}
signature := model.LabelValuesToSignature(labels)
var histogram *histogramVector = nil
h.mutex.Lock()
defer h.mutex.Unlock()
if original, ok := h.values[signature]; ok {
histogram = original
} else {
bucketCount := len(h.bucketStarts)
histogram = &histogramVector{
buckets: make([]Bucket, bucketCount),
labels: labels,
}
for i := 0; i < bucketCount; i++ {
histogram.buckets[i] = h.bucketMaker()
}
h.values[signature] = histogram
}
lastIndex := 0
for i, bucketStart := range h.bucketStarts {
if value < bucketStart {
break
}
lastIndex = i
}
histogram.buckets[lastIndex].Add(value)
histogram.sum += value
histogram.count++
}
func (h *histogram) String() string {
h.mutex.RLock()
defer h.mutex.RUnlock()
stringBuffer := &bytes.Buffer{}
stringBuffer.WriteString("[Histogram { ")
for _, histogram := range h.values {
fmt.Fprintf(stringBuffer, "Labels: %s ", histogram.labels)
for i, bucketStart := range h.bucketStarts {
bucket := histogram.buckets[i]
fmt.Fprintf(stringBuffer, "[%f, inf) = %s, ", bucketStart, bucket)
}
}
stringBuffer.WriteString("}]")
return stringBuffer.String()
}
// Determine the number of previous observations up to a given index.
func previousCumulativeObservations(cumulativeObservations []int, bucketIndex int) int {
if bucketIndex == 0 {
return 0
}
return cumulativeObservations[bucketIndex-1]
}
// Determine the index for an element given a percentage of length.
func prospectiveIndexForPercentile(percentile float64, totalObservations int) int {
return int(percentile * float64(totalObservations-1))
}
// Determine the next bucket element when interim bucket intervals may be empty.
func (h histogram) nextNonEmptyBucketElement(signature uint64, currentIndex, bucketCount int, observationsByBucket []int) (*Bucket, int) {
for i := currentIndex; i < bucketCount; i++ {
if observationsByBucket[i] == 0 {
continue
}
histogram := h.values[signature]
return &histogram.buckets[i], 0
}
panic("Illegal Condition: There were no remaining buckets to provide a value.")
}
// Find what bucket and element index contains a given percentile value.
// If a percentile is requested that results in a corresponding index that is no
// longer contained by the bucket, the index of the last item is returned. This
// may occur if the underlying bucket catalogs values and employs an eviction
// strategy.
func (h histogram) bucketForPercentile(signature uint64, percentile float64) (*Bucket, int) {
bucketCount := len(h.bucketStarts)
// This captures the quantity of samples in a given bucket's range.
observationsByBucket := make([]int, bucketCount)
// This captures the cumulative quantity of observations from all preceding
// buckets up and to the end of this bucket.
cumulativeObservationsByBucket := make([]int, bucketCount)
totalObservations := 0
histogram := h.values[signature]
for i, bucket := range histogram.buckets {
observations := bucket.Observations()
observationsByBucket[i] = observations
totalObservations += bucket.Observations()
cumulativeObservationsByBucket[i] = totalObservations
}
// This captures the index offset where the given percentile value would be
// were all submitted samples stored and never down-/re-sampled nor deleted
// and housed in a singular array.
prospectiveIndex := prospectiveIndexForPercentile(percentile, totalObservations)
for i, cumulativeObservation := range cumulativeObservationsByBucket {
if cumulativeObservation == 0 {
continue
}
// Find the bucket that contains the given index.
if cumulativeObservation >= prospectiveIndex {
var subIndex int
// This calculates the index within the current bucket where the given
// percentile may be found.
subIndex = prospectiveIndex - previousCumulativeObservations(cumulativeObservationsByBucket, i)
// Sometimes the index may be the last item, in which case we need to
// take this into account.
if observationsByBucket[i] == subIndex {
return h.nextNonEmptyBucketElement(signature, i+1, bucketCount, observationsByBucket)
}
return &histogram.buckets[i], subIndex
}
}
return &histogram.buckets[0], 0
}
// Return the histogram's estimate of the value for a given percentile of
// collected samples. The requested percentile is expected to be a real
// value within (0, 1.0].
func (h histogram) percentile(signature uint64, percentile float64) float64 {
bucket, index := h.bucketForPercentile(signature, percentile)
return (*bucket).ValueForIndex(index)
}
func formatFloat(value float64) string {
return strconv.FormatFloat(value, floatFormat, floatPrecision, floatBitCount)
}
func (h *histogram) MarshalJSON() ([]byte, error) {
h.Purge()
h.mutex.RLock()
defer h.mutex.RUnlock()
values := make([]map[string]interface{}, 0, len(h.values))
for signature, value := range h.values {
percentiles := make(map[string]float64, len(h.reportablePercentiles))
for _, percentile := range h.reportablePercentiles {
formatted := formatFloat(percentile)
percentiles[formatted] = h.percentile(signature, percentile)
}
values = append(values, map[string]interface{}{
labelsKey: value.labels,
valueKey: percentiles,
})
}
return json.Marshal(map[string]interface{}{
typeKey: histogramTypeValue,
valueKey: values,
})
}
func (h *histogram) Purge() {
if h.purgeInterval == 0 {
return
}
h.mutex.Lock()
defer h.mutex.Unlock()
if time.Since(h.lastPurge) < h.purgeInterval {
return
}
h.resetAll()
h.lastPurge = time.Now()
}
func (h *histogram) Reset(labels map[string]string) {
signature := model.LabelValuesToSignature(labels)
h.mutex.Lock()
defer h.mutex.Unlock()
value, ok := h.values[signature]
if !ok {
return
}
for _, bucket := range value.buckets {
bucket.Reset()
}
delete(h.values, signature)
}
func (h *histogram) ResetAll() {
h.mutex.Lock()
defer h.mutex.Unlock()
h.resetAll()
}
func (h *histogram) resetAll() {
for signature, value := range h.values {
for _, bucket := range value.buckets {
bucket.Reset()
}
delete(h.values, signature)
}
}
// Produce a histogram from a given specification.
func NewHistogram(specification *HistogramSpecification) Histogram {
metric := &histogram{
bucketMaker: specification.BucketBuilder,
bucketStarts: specification.Starts,
reportablePercentiles: specification.ReportablePercentiles,
values: map[uint64]*histogramVector{},
lastPurge: time.Now(),
purgeInterval: specification.PurgeInterval,
}
return metric
}
// Furnish a Histogram with simplistic default values and behaviors that are
// useful strictly for prototyping purposes.
func NewDefaultHistogram() Histogram {
return NewHistogram(
&HistogramSpecification{
Starts: LogarithmicSizedBucketsFor(0, 4096),
BucketBuilder: AccumulatingBucketBuilder(EvictAndReplaceWith(10, AverageReducer), 50),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.90, 0.99},
PurgeInterval: 15 * time.Minute,
},
)
}
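// Usage sketch for the legacy Histogram API above (not part of the original
// file; the label and value are made up):
func exampleHistogramUsage() {
	h := NewDefaultHistogram()
	// Each Add routes the value into the matching bucket for the given label
	// set and updates that label set's sum and count.
	h.Add(map[string]string{"handler": "/api"}, 0.42)
}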
func (metric *histogram) dumpChildren(f *dto.MetricFamily) {
metric.Purge()
metric.mutex.RLock()
defer metric.mutex.RUnlock()
f.Type = dto.MetricType_SUMMARY.Enum()
for signature, child := range metric.values {
c := &dto.Summary{
SampleSum: proto.Float64(child.sum),
SampleCount: proto.Uint64(child.count),
}
m := &dto.Metric{
Summary: c,
}
for name, value := range child.labels {
p := &dto.LabelPair{
Name: proto.String(name),
Value: proto.String(value),
}
m.Label = append(m.Label, p)
}
for _, percentile := range metric.reportablePercentiles {
q := &dto.Quantile{
Quantile: proto.Float64(percentile),
Value: proto.Float64(metric.percentile(signature, percentile)),
}
c.Quantile = append(c.Quantile, q)
}
f.Metric = append(f.Metric, m)
}
}

@ -1,9 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
// TODO(matt): Re-Add tests for this type.

prometheus/http.go Normal file
@ -0,0 +1,287 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"net/http"
"strconv"
"strings"
"time"
)
var (
instLabels = []string{"handler", "method", "code"}
reqCnt = NewCounterVec(
CounterOpts{
Subsystem: "http",
Name: "requests_total",
Help: "Total number of HTTP requests made.",
},
instLabels,
)
reqDur = NewSummaryVec(
SummaryOpts{
Subsystem: "http",
Name: "request_duration_microseconds",
Help: "The HTTP request latencies in microseconds.",
},
instLabels,
)
reqSz = NewSummaryVec(
SummaryOpts{
Subsystem: "http",
Name: "request_size_bytes",
Help: "The HTTP request sizes in bytes.",
},
instLabels,
)
resSz = NewSummaryVec(
SummaryOpts{
Subsystem: "http",
Name: "response_size_bytes",
Help: "The HTTP response sizes in bytes.",
},
instLabels,
)
)
type nower interface {
Now() time.Time
}
type nowFunc func() time.Time
func (n nowFunc) Now() time.Time {
return n()
}
var now nower = nowFunc(func() time.Time {
return time.Now()
})
func nowSeries(t ...time.Time) nower {
return nowFunc(func() time.Time {
defer func() {
t = t[1:]
}()
return t[0]
})
}
// InstrumentHandler wraps the given HTTP handler for instrumentation. It
// registers four metric vector collectors (if not already done) and reports
// http metrics to the (newly or already) registered collectors:
// http_requests_total (CounterVec), http_request_duration_microseconds
// (SummaryVec), http_request_size_bytes (SummaryVec), http_response_size_bytes
// (SummaryVec). Each has three labels: handler, method, code. The value of the
// handler label is set by the handlerName parameter of this function.
func InstrumentHandler(handlerName string, handler http.Handler) http.HandlerFunc {
regReqCnt := MustRegisterOrGet(reqCnt).(*CounterVec)
regReqDur := MustRegisterOrGet(reqDur).(*SummaryVec)
regReqSz := MustRegisterOrGet(reqSz).(*SummaryVec)
regResSz := MustRegisterOrGet(resSz).(*SummaryVec)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
now := time.Now()
delegate := &responseWriterDelegator{ResponseWriter: w}
out := make(chan int)
go computeApproximateRequestSize(r, out)
handler.ServeHTTP(delegate, r)
elapsed := float64(time.Since(now)) / float64(time.Second)
method := sanitizeMethod(r.Method)
code := sanitizeCode(delegate.status)
regReqCnt.WithLabelValues(handlerName, method, code).Inc()
regReqDur.WithLabelValues(handlerName, method, code).Observe(elapsed)
regResSz.WithLabelValues(handlerName, method, code).Observe(float64(delegate.written))
regReqSz.WithLabelValues(handlerName, method, code).Observe(float64(<-out))
})
}
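// Usage sketch (not part of the original file; the handler and pattern are
// made up):
func exampleInstrumentHandler() {
	fileServer := http.FileServer(http.Dir("/tmp"))
	// Every request served by fileServer is now counted, timed, and sized
	// under the handler label "files".
	http.Handle("/files/", InstrumentHandler("files", fileServer))
}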
func computeApproximateRequestSize(r *http.Request, out chan int) {
s := len(r.Method)
if r.URL != nil {
s += len(r.URL.String())
}
s += len(r.Proto)
for name, values := range r.Header {
s += len(name)
for _, value := range values {
s += len(value)
}
}
s += len(r.Host)
// N.B. r.Form and r.MultipartForm are assumed to be included in r.URL.
if r.ContentLength != -1 {
s += int(r.ContentLength)
}
out <- s
}
type responseWriterDelegator struct {
http.ResponseWriter
handler, method string
status int
written int
wroteHeader bool
}
func (r *responseWriterDelegator) WriteHeader(code int) {
r.status = code
r.wroteHeader = true
r.ResponseWriter.WriteHeader(code)
}
func (r *responseWriterDelegator) Write(b []byte) (int, error) {
if !r.wroteHeader {
r.WriteHeader(http.StatusOK)
}
n, err := r.ResponseWriter.Write(b)
r.written += n
return n, err
}
func sanitizeMethod(m string) string {
switch m {
case "GET", "get":
return "get"
case "PUT", "put":
return "put"
case "HEAD", "head":
return "head"
case "POST", "post":
return "post"
case "DELETE", "delete":
return "delete"
case "CONNECT", "connect":
return "connect"
case "OPTIONS", "options":
return "options"
case "NOTIFY", "notify":
return "notify"
default:
return strings.ToLower(m)
}
}
func sanitizeCode(s int) string {
switch s {
case 100:
return "100"
case 101:
return "101"
case 200:
return "200"
case 201:
return "201"
case 202:
return "202"
case 203:
return "203"
case 204:
return "204"
case 205:
return "205"
case 206:
return "206"
case 300:
return "300"
case 301:
return "301"
case 302:
return "302"
case 304:
return "304"
case 305:
return "305"
case 307:
return "307"
case 400:
return "400"
case 401:
return "401"
case 402:
return "402"
case 403:
return "403"
case 404:
return "404"
case 405:
return "405"
case 406:
return "406"
case 407:
return "407"
case 408:
return "408"
case 409:
return "409"
case 410:
return "410"
case 411:
return "411"
case 412:
return "412"
case 413:
return "413"
case 414:
return "414"
case 415:
return "415"
case 416:
return "416"
case 417:
return "417"
case 418:
return "418"
case 500:
return "500"
case 501:
return "501"
case 502:
return "502"
case 503:
return "503"
case 504:
return "504"
case 505:
return "505"
case 428:
return "428"
case 429:
return "429"
case 431:
return "431"
case 511:
return "511"
default:
return strconv.Itoa(s)
}
}

prometheus/http_test.go Normal file
@ -0,0 +1,109 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"net/http"
"net/http/httptest"
"testing"
"time"
dto "github.com/prometheus/client_model/go"
)
type respBody string
func (b respBody) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusTeapot)
w.Write([]byte(b))
}
func TestInstrumentHandler(t *testing.T) {
defer func(n nower) {
now = n.(nower)
}(now)
instant := time.Now()
end := instant.Add(30 * time.Second)
now = nowSeries(instant, end)
reqCnt.Reset()
reqDur.Reset()
reqSz.Reset()
resSz.Reset()
respBody := respBody("Howdy there!")
hndlr := InstrumentHandler("test-handler", respBody)
resp := httptest.NewRecorder()
req := &http.Request{
Method: "GET",
}
hndlr.ServeHTTP(resp, req)
if resp.Code != http.StatusTeapot {
t.Fatalf("expected status %d, got %d", http.StatusTeapot, resp.Code)
}
if string(resp.Body.Bytes()) != "Howdy there!" {
t.Fatalf("expected body %s, got %s", "Howdy there!", string(resp.Body.Bytes()))
}
if want, got := 1, len(reqDur.children); want != got {
t.Errorf("want %d children in reqDur, got %d", want, got)
}
sum, err := reqDur.GetMetricWithLabelValues("test-handler", "get", "418")
if err != nil {
t.Fatal(err)
}
out := &dto.Metric{}
sum.Write(out)
if want, got := "418", out.Label[0].GetValue(); want != got {
t.Errorf("want label value %q in reqDur, got %q", want, got)
}
if want, got := "test-handler", out.Label[1].GetValue(); want != got {
t.Errorf("want label value %q in reqDur, got %q", want, got)
}
if want, got := "get", out.Label[2].GetValue(); want != got {
t.Errorf("want label value %q in reqDur, got %q", want, got)
}
if want, got := uint64(1), out.Summary.GetSampleCount(); want != got {
t.Errorf("want sample count %d in reqDur, got %d", want, got)
}
out.Reset()
if want, got := 1, len(reqCnt.children); want != got {
t.Errorf("want %d children in reqCnt, got %d", want, got)
}
cnt, err := reqCnt.GetMetricWithLabelValues("test-handler", "get", "418")
if err != nil {
t.Fatal(err)
}
cnt.Write(out)
if want, got := "418", out.Label[0].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if want, got := "test-handler", out.Label[1].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if want, got := "get", out.Label[2].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if out.Counter == nil {
t.Fatal("expected non-nil counter in reqCnt")
}
if want, got := 1., out.Counter.GetValue(); want != got {
t.Errorf("want reqCnt of %f, got %f", want, got)
}
}

@ -1,27 +1,146 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"strings"
dto "github.com/prometheus/client_model/go"
)
// A Metric is something that can be exposed via the registry framework.
// A Metric models a single sample value with its metadata being exported to
// Prometheus. Implementations of Metric in this package include Gauge, Counter,
// Untyped, and Summary. Users can implement their own Metric types, but that
// should be rarely needed. See the example for SelfCollector, which is also an
// example for a user-implemented Metric.
type Metric interface {
// Produce a JSON representation of the metric.
json.Marshaler
// Reset removes any stored values associated with a given labelset.
Reset(labels map[string]string)
// Reset the parent metrics and delete all child metrics.
ResetAll()
// Produce a human-consumable representation of the metric.
String() string
// dumpChildren populates the child metrics of the given family.
dumpChildren(*dto.MetricFamily)
// Desc returns the descriptor for the Metric. This method idempotently
// returns the same descriptor throughout the lifetime of the
// Metric. The returned descriptor is immutable by contract.
Desc() *Desc
// Write encodes the Metric into a "Metric" Protocol Buffer data
// transmission object.
//
// Implementers of custom Metric types must observe concurrency safety
// as reads of this metric may occur at any time, and any blocking
// occurs at the expense of total performance of rendering all
// registered metrics. Ideally Metric implementations should support
// concurrent readers.
//
// The Prometheus client library attempts to minimize memory allocations
// and will provide a pre-existing reset dto.Metric pointer. Prometheus
// may recycle the dto.Metric proto message, so Metric implementations
// should just populate the provided dto.Metric and then should not keep
// any reference to it.
//
// While populating dto.Metric, labels must be sorted lexicographically.
// (Implementers may find LabelPairSorter useful for that.)
Write(*dto.Metric)
}
// Opts bundles the options for creating most Metric types. Each metric
// implementation XXX has its own XXXOpts type, but in most cases, it is just
// an alias of this type (which might change when the requirement arises).
//
// It is mandatory to set Name and Help to a non-empty string. All other fields
// are optional and can safely be left at their zero value.
type Opts struct {
// Namespace, Subsystem, and Name are components of the fully-qualified
// name of the Metric (created by joining these components with
// "_"). Only Name is mandatory, the others merely help structuring the
// name. Note that the fully-qualified name of the metric must be a
// valid Prometheus metric name.
Namespace string
Subsystem string
Name string
// Help provides information about this metric. Mandatory!
//
// Metrics with the same fully-qualified name must have the same Help
// string.
Help string
// ConstLabels are used to attach fixed labels to this metric. Metrics
// with the same fully-qualified name must have the same label names in
// their ConstLabels.
//
// Note that in most cases, labels have a value that varies during the
// lifetime of a process. Those labels are usually managed with a metric
// vector collector (like CounterVec, GaugeVec, UntypedVec). ConstLabels
// serve only special purposes. One is for the special case where the
// value of a label does not change during the lifetime of a process,
// e.g. if the revision of the running binary is put into a
// label. Another, more advanced purpose is if more than one Collector
// needs to collect Metrics with the same fully-qualified name. In that
// case, those Metrics must differ in the values of their
// ConstLabels. See the Collector examples.
//
// If the value of a label never changes (not even between binaries),
// that label most likely should not be a label at all (but part of the
// metric name).
ConstLabels Labels
}
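// A sketch of a fully populated Opts value (not part of the original file; all
// values are made up). The resulting fully-qualified metric name is
// "our_company_blob_storage_ops_total", and the const label is attached to
// every sample of that metric.
var exampleOpts = Opts{
	Namespace:   "our_company",
	Subsystem:   "blob_storage",
	Name:        "ops_total",
	Help:        "Total number of blob storage operations.",
	ConstLabels: Labels{"binary_revision": "12345abc"},
}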
// BuildFQName joins the given three name components by "_". Empty name
// components are ignored. If the name parameter itself is empty, an empty
// string is returned, no matter what. Metric implementations included in this
// library use this function internally to generate the fully-qualified metric
// name from the name component in their Opts. Users of the library will only
// need this function if they implement their own Metric or instantiate a Desc
// (with NewDesc) directly.
func BuildFQName(namespace, subsystem, name string) string {
if name == "" {
return ""
}
switch {
case namespace != "" && subsystem != "":
return strings.Join([]string{namespace, subsystem, name}, "_")
case namespace != "":
return strings.Join([]string{namespace, name}, "_")
case subsystem != "":
return strings.Join([]string{subsystem, name}, "_")
}
return name
}
// LabelPairSorter implements sort.Interface. It is used to sort a slice of
// dto.LabelPair pointers. This is useful for implementing the Write method of
// custom metrics.
type LabelPairSorter []*dto.LabelPair
func (s LabelPairSorter) Len() int {
return len(s)
}
func (s LabelPairSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s LabelPairSorter) Less(i, j int) bool {
return s[i].GetName() < s[j].GetName()
}
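// Usage sketch (not part of the original file): sorting label pairs
// lexicographically by name, as required when populating a dto.Metric in a
// custom Write method. It assumes the "sort" and goprotobuf "proto" packages
// are imported; the pair values are made up.
func exampleSortLabelPairs() {
	pairs := []*dto.LabelPair{
		{Name: proto.String("method"), Value: proto.String("get")},
		{Name: proto.String("code"), Value: proto.String("200")},
	}
	sort.Sort(LabelPairSorter(pairs)) // "code" now sorts before "method"
}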
type hashSorter []uint64
func (s hashSorter) Len() int {
return len(s)
}
func (s hashSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s hashSorter) Less(i, j int) bool {
return s[i] < s[j]
}

prometheus/metric_test.go Normal file
@ -0,0 +1,35 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import "testing"
func TestBuildFQName(t *testing.T) {
scenarios := []struct{ namespace, subsystem, name, result string }{
{"a", "b", "c", "a_b_c"},
{"", "b", "c", "b_c"},
{"a", "", "c", "a_c"},
{"", "", "c", "c"},
{"a", "b", "", ""},
{"a", "", "", ""},
{"", "b", "", ""},
{" ", "", "", ""},
}
for i, s := range scenarios {
if want, got := s.result, BuildFQName(s.namespace, s.subsystem, s.name); want != got {
t.Errorf("%d. want %s, got %s", i, want, got)
}
}
}

@ -1,48 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
type item struct {
Priority int64
Value interface{}
index int
}
type priorityQueue []*item
func (q priorityQueue) Len() int {
return len(q)
}
func (q priorityQueue) Less(i, j int) bool {
return q[i].Priority > q[j].Priority
}
func (q priorityQueue) Swap(i, j int) {
q[i], q[j] = q[j], q[i]
q[i].index = i
q[j].index = j
}
func (q *priorityQueue) Push(x interface{}) {
queue := *q
size := len(queue)
queue = queue[0 : size+1]
item := x.(*item)
item.index = size
queue[size] = item
*q = queue
}
func (q *priorityQueue) Pop() interface{} {
queue := *q
size := len(queue)
item := queue[size-1]
item.index = -1
*q = queue[0 : size-1]
return item
}

@ -1,34 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"container/heap"
. "github.com/matttproud/gocheck"
)
func (s *S) TestPriorityQueueSort(c *C) {
q := make(priorityQueue, 0, 6)
c.Check(len(q), Equals, 0)
heap.Push(&q, &item{Value: "newest", Priority: -100})
heap.Push(&q, &item{Value: "older", Priority: 90})
heap.Push(&q, &item{Value: "oldest", Priority: 100})
heap.Push(&q, &item{Value: "newer", Priority: -90})
heap.Push(&q, &item{Value: "new", Priority: -80})
heap.Push(&q, &item{Value: "old", Priority: 80})
c.Check(len(q), Equals, 6)
c.Check(heap.Pop(&q), ValueEquals, "oldest")
c.Check(heap.Pop(&q), ValueEquals, "older")
c.Check(heap.Pop(&q), ValueEquals, "old")
c.Check(heap.Pop(&q), ValueEquals, "new")
c.Check(heap.Pop(&q), ValueEquals, "newer")
c.Check(heap.Pop(&q), ValueEquals, "newest")
}

@ -1,20 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
. "github.com/matttproud/gocheck"
"testing"
)
type S struct{}
var _ = Suite(&S{})
func TestPrometheus(t *testing.T) {
TestingT(t)
}

@ -1,3 +1,16 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
@ -7,370 +20,610 @@
package prometheus
import (
"compress/gzip"
"encoding/base64"
"encoding/json"
"flag"
"bytes"
"encoding/binary"
"errors"
"fmt"
"hash/fnv"
"io"
"log"
"net/http"
"sort"
"strings"
"sync"
"time"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/model"
"github.com/prometheus/client_golang/_vendor/goautoneg"
"github.com/prometheus/client_golang/text"
"github.com/prometheus/client_golang/vendor/goautoneg"
)
var (
defRegistry = newRegistry()
errAlreadyReg = errors.New("duplicate metrics collector registration attempted")
)
// Constants relevant to the HTTP interface.
const (
authorization = "Authorization"
authorizationHeader = "WWW-Authenticate"
authorizationHeaderValue = "Basic"
// APIVersion is the version of the format of the exported data. This
// will match this library's version, which subscribes to the Semantic
// Versioning scheme.
APIVersion = "0.0.4"
acceptEncodingHeader = "Accept-Encoding"
contentEncodingHeader = "Content-Encoding"
contentTypeHeader = "Content-Type"
gzipAcceptEncodingValue = "gzip"
gzipContentEncodingValue = "gzip"
jsonContentType = "application/json"
// DelimitedTelemetryContentType is the content type set on telemetry
// data responses in delimited protobuf format.
DelimitedTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="delimited"`
// TextTelemetryContentType is the content type set on telemetry data
// responses in text format.
TextTelemetryContentType = `text/plain; version=` + APIVersion
// ProtoTextTelemetryContentType is the content type set on telemetry
// data responses in protobuf text format. (Only used for debugging.)
ProtoTextTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="text"`
// ProtoCompactTextTelemetryContentType is the content type set on
// telemetry data responses in protobuf compact text format. (Only used
// for debugging.)
ProtoCompactTextTelemetryContentType = `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="compact-text"`
// Constants for object pools.
numBufs = 4
numMetricFamilies = 1000
numMetrics = 10000
// Capacity for the channel to collect metrics and descriptors.
capMetricChan = 1000
capDescChan = 10
contentTypeHeader = "Content-Type"
)
// Handler returns the HTTP handler for the global Prometheus registry. It is
// already instrumented with InstrumentHandler (using "prometheus" as handler
// name). Usually the handler is used to handle the "/metrics" endpoint.
func Handler() http.Handler {
return InstrumentHandler("prometheus", defRegistry)
}
// UninstrumentedHandler works in the same way as Handler, but the returned HTTP
// handler is not instrumented. This is useful if no instrumentation is desired
// (for whatever reason) or if the instrumentation has to happen with a
// different handler name (or with a different instrumentation approach
// altogether). See the InstrumentHandler example.
func UninstrumentedHandler() http.Handler {
return defRegistry
}
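// Usage sketch (not part of the original file; path and address are made up):
func exampleExposeMetrics() {
	// Expose the default registry, instrumented, on the conventional path.
	http.Handle("/metrics", Handler())
	// A real program would then call http.ListenAndServe(":8080", nil) or
	// similar to start serving.
}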
// Register registers a new Collector to be included in metrics collection. It
// returns an error if the descriptors provided by the Collector are invalid or
// if they - in combination with descriptors of already registered Collectors -
// do not fulfill the consistency and uniqueness criteria described in the Desc
// documentation. If the registration is successful, the registered Collector
// is returned.
//
// Do not register the same Collector multiple times concurrently. (Registering
// the same Collector twice would result in an error anyway, but on top of that,
// it is not safe to do so concurrently.)
func Register(m Collector) (Collector, error) {
return defRegistry.Register(m)
}
// MustRegister works like Register but panics where Register would have
// returned an error.
func MustRegister(m Collector) Collector {
m, err := Register(m)
if err != nil {
panic(err)
}
return m
}
// RegisterOrGet works like Register but does not return an error if a Collector
// is registered that equals a previously registered Collector. (Two Collectors
// are considered equal if their Describe method yields the same set of
// descriptors.) Instead, the previously registered Collector is returned (which
// is helpful if the new and previously registered Collectors are equal but not
// identical, i.e. not pointers to the same object).
//
// As for Register, it is still not safe to call RegisterOrGet with the same
// Collector multiple times concurrently.
func RegisterOrGet(m Collector) (Collector, error) {
return defRegistry.RegisterOrGet(m)
}
// MustRegisterOrGet works like Register but panics where RegisterOrGet would
// have returned an error.
func MustRegisterOrGet(m Collector) Collector {
existing, err := RegisterOrGet(m)
if err != nil {
panic(err)
}
return existing
}
// Unregister unregisters the Collector that equals the Collector passed in as
// an argument. (Two Collectors are considered equal if their Describe method
// yields the same set of descriptors.) The function returns whether a Collector
// was unregistered.
func Unregister(c Collector) bool {
return defRegistry.Unregister(c)
}
// SetMetricFamilyInjectionHook sets a function that is called whenever metrics
// are collected. The hook function must be set before metrics collection begins
// (i.e. call SetMetricFamilyInjectionHook before setting the HTTP handler). The
// MetricFamily protobufs returned by the hook function are added to the
// delivered metrics. Each returned MetricFamily must have a unique name (also
// taking into account the MetricFamilies created in the regular way).
//
// This is a way to directly inject MetricFamily protobufs managed and owned by
// the caller. The caller has full responsibility. No sanity checks are
// performed on the returned protobufs (besides the name checks described
// above). The function must be callable at any time and concurrently.
func SetMetricFamilyInjectionHook(hook func() []*dto.MetricFamily) {
defRegistry.metricFamilyInjectionHook = hook
}
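// A hypothetical usage sketch (not part of this change): injecting a single
// caller-owned MetricFamily. Field names follow the client_model protobufs as
// used elsewhere in this file; the metric itself is made up. The hook must be
// cheap and safe for concurrent calls.
//
//	prometheus.SetMetricFamilyInjectionHook(func() []*dto.MetricFamily {
//		return []*dto.MetricFamily{
//			&dto.MetricFamily{
//				Name: proto.String("external_events_total"),
//				Help: proto.String("Events counted by an external system."),
//				Type: dto.MetricType_COUNTER.Enum(),
//				Metric: []*dto.Metric{
//					&dto.Metric{
//						Counter: &dto.Counter{Value: proto.Float64(42)},
//					},
//				},
//			},
//		}
//	})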
// PanicOnCollectError sets the behavior whether a panic is caused upon an error
// while metrics are collected and served to the http endpoint. By default, an
// internal server error (status code 500) is served with an error message.
func PanicOnCollectError(b bool) {
defRegistry.panicOnCollectError = b
}
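// A hypothetical usage sketch (not part of this change): turning collection
// errors into panics during development so they cannot go unnoticed, while
// production keeps the default 500 response. devMode is a made-up flag.
//
//	if *devMode {
//		prometheus.PanicOnCollectError(true)
//	}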
// EnableCollectChecks enables (or disables) additional consistency checks
// during metrics collection. These additional checks are not enabled by
// default because they inflict a performance penalty, and the errors they
// check for can only happen if the used Metric and Collector types have
// internal programming errors. It can be helpful to enable these checks while
// working with custom Collectors or Metrics whose correctness is not yet well
// established.
func EnableCollectChecks(b bool) {
defRegistry.collectChecksEnabled = b
}
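// A hypothetical usage sketch (not part of this change): enabling the extra
// checks while exercising a custom Collector in a test; newMyCollector is a
// made-up constructor.
//
//	func TestMyCollector(t *testing.T) {
//		prometheus.EnableCollectChecks(true)
//		defer prometheus.EnableCollectChecks(false)
//		prometheus.MustRegister(newMyCollector())
//		// ...scrape the handler and verify the exposed metrics...
//	}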
// encoder is a function that writes a dto.MetricFamily to an io.Writer in a
// certain encoding. It returns the number of bytes written and any error
// encountered. Note that ext.WriteDelimited and text.MetricFamilyToText are
// encoders.
type encoder func(io.Writer, *dto.MetricFamily) (int, error)
// container represents a top-level registered metric that encompasses its
// static metadata.
type container struct {
BaseLabels map[string]string `json:"baseLabels"`
Docstring string `json:"docstring"`
Metric Metric `json:"metric"`
name string
}
type containers []*container
func (c containers) Len() int {
return len(c)
}
func (c containers) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}
func (c containers) Less(i, j int) bool {
return c[i].name < c[j].name
}
type registry struct {
mutex sync.RWMutex
signatureContainers map[uint64]*container
mtx sync.RWMutex
collectorsByID map[uint64]Collector // ID is a hash of the descIDs.
descIDs map[uint64]struct{}
dimHashesByName map[string]uint64
bufPool chan *bytes.Buffer
metricFamilyPool chan *dto.MetricFamily
metricPool chan *dto.Metric
metricFamilyInjectionHook func() []*dto.MetricFamily
panicOnCollectError, collectChecksEnabled bool
}
// Registry is a registrar where metrics are listed.
//
// In most situations, using DefaultRegistry is sufficient; creating one's own
// is rarely necessary.
type Registry interface {
// Register a metric with a given name. Name should be globally unique.
Register(name, docstring string, baseLabels map[string]string, metric Metric) error
// SetMetricFamilyInjectionHook sets a function that is called whenever
// metrics are requested. The MetricFamily protobufs returned by the
// function are appended to the delivered metrics. This is a way to
// directly inject MetricFamily protobufs managed and owned by the
// caller. The caller has full responsibility. No sanity checks are
// performed on the returned protobufs. The function must be callable at
// any time and concurrently. The only thing handled by the Registry is
// the conversion if metrics are requested in a non-protobuf format. The
// deprecated JSON format, however, is not supported, i.e. metrics
// delivered as JSON will not contain the metrics injected by the
// injection hook.
SetMetricFamilyInjectionHook(func() []*dto.MetricFamily)
// Handler creates a http.HandlerFunc. Requests against it generate a
// representation of the metrics managed by this registry.
Handler() http.HandlerFunc
// YieldExporter is a legacy version of Handler and is deprecated. Please stop
// using it.
YieldExporter() http.HandlerFunc
}
func (r *registry) Register(c Collector) (Collector, error) {
descChan := make(chan *Desc, capDescChan)
go func() {
c.Describe(descChan)
close(descChan)
}()
// NewRegistry builds a new metric registry. It is not needed in the majority
// of cases.
func NewRegistry() Registry {
return &registry{
signatureContainers: make(map[uint64]*container),
}
}
newDescIDs := map[uint64]struct{}{}
newDimHashesByName := map[string]uint64{}
collectorIDHash := fnv.New64a()
buf := make([]byte, 8)
var duplicateDescErr error
// Register associates a Metric with the DefaultRegistry.
func Register(name, docstring string, baseLabels map[string]string, metric Metric) error {
return DefaultRegistry.Register(name, docstring, baseLabels, metric)
}
r.mtx.Lock()
defer r.mtx.Unlock()
// Conduct various tests...
for desc := range descChan {
// SetMetricFamilyInjectionHook implements the Registry interface.
func (r *registry) SetMetricFamilyInjectionHook(hook func() []*dto.MetricFamily) {
r.metricFamilyInjectionHook = hook
}
// MarshalJSON implements json.Marshaler.
func (r *registry) MarshalJSON() ([]byte, error) {
containers := make(containers, 0, len(r.signatureContainers))
for _, container := range r.signatureContainers {
containers = append(containers, container)
}
sort.Sort(containers)
return json.Marshal(containers)
}
// isValidCandidate returns the signature of a candidate that is acceptable for
// use. In the event of any apparent incorrect use, it will report the problem,
// invalidate the candidate, or outright abort.
func (r *registry) isValidCandidate(name string, baseLabels map[string]string) (signature uint64, err error) {
if len(name) == 0 {
err = fmt.Errorf("unnamed metric named with baseLabels %s is invalid", baseLabels)
if *abortOnMisuse {
panic(err)
} else if *debugRegistration {
log.Println(err)
// Is the descriptor valid at all?
if desc.err != nil {
return c, fmt.Errorf("descriptor %s is invalid: %s", desc, desc.err)
}
}
for label := range baseLabels {
if strings.HasPrefix(label, model.ReservedLabelPrefix) {
err = fmt.Errorf("metric named %s with baseLabels %s contains reserved label name %s in baseLabels", name, baseLabels, label)
// Is the descID unique?
// (In other words: Is the fqName + constLabel combination unique?)
if _, exists := r.descIDs[desc.id]; exists {
duplicateDescErr = fmt.Errorf("descriptor %s already exists with the same fully-qualified name and const label values", desc)
}
// If it is not a duplicate desc in this collector, add it to
// the hash. (We allow duplicate descs within the same
// collector, but their existence must be a no-op.)
if _, exists := newDescIDs[desc.id]; !exists {
newDescIDs[desc.id] = struct{}{}
binary.BigEndian.PutUint64(buf, desc.id)
collectorIDHash.Write(buf)
}
if *abortOnMisuse {
panic(err)
} else if *debugRegistration {
log.Println(err)
// Are all the label names and the help string consistent with
// previous descriptors of the same name?
// First check existing descriptors...
if dimHash, exists := r.dimHashesByName[desc.fqName]; exists {
if dimHash != desc.dimHash {
return nil, fmt.Errorf("a previously registered descriptor with the same fully-qualified name as %s has different label names or a different help string", desc)
}
return signature, err
}
}
baseLabels[string(model.MetricNameLabel)] = name
signature = model.LabelsToSignature(baseLabels)
if _, contains := r.signatureContainers[signature]; contains {
err = fmt.Errorf("metric named %s with baseLabels %s is already registered", name, baseLabels)
if *abortOnMisuse {
panic(err)
} else if *debugRegistration {
log.Println(err)
}
return signature, err
}
if *useAggressiveSanityChecks {
for _, container := range r.signatureContainers {
if container.name == name {
err = fmt.Errorf("metric named %s with baseLabels %s is already registered as %s and risks causing confusion", name, baseLabels, container.BaseLabels)
if *abortOnMisuse {
panic(err)
} else if *debugRegistration {
log.Println(err)
} else {
// ...then check the new descriptors already seen.
if dimHash, exists := newDimHashesByName[desc.fqName]; exists {
if dimHash != desc.dimHash {
return nil, fmt.Errorf("descriptors reported by collector have inconsistent label names or help strings for the same fully-qualified name, offender is %s", desc)
}
return signature, err
} else {
newDimHashesByName[desc.fqName] = desc.dimHash
}
}
}
// Did anything happen at all?
if len(newDescIDs) == 0 {
return nil, errors.New("collector has no descriptors")
}
collectorID := collectorIDHash.Sum64()
if existing, exists := r.collectorsByID[collectorID]; exists {
return existing, errAlreadyReg
}
// If the collectorID is new, but at least one of the descs existed
// before, we are in trouble.
if duplicateDescErr != nil {
return nil, duplicateDescErr
}
return signature, err
// Only after all tests have passed, actually register.
r.collectorsByID[collectorID] = c
for hash := range newDescIDs {
r.descIDs[hash] = struct{}{}
}
for name, dimHash := range newDimHashesByName {
r.dimHashesByName[name] = dimHash
}
return c, nil
}
func (r *registry) Register(name, docstring string, baseLabels map[string]string, metric Metric) error {
r.mutex.Lock()
defer r.mutex.Unlock()
func (r *registry) RegisterOrGet(m Collector) (Collector, error) {
existing, err := r.Register(m)
if err != nil && err != errAlreadyReg {
return nil, err
}
return existing, nil
}
labels := map[string]string{}
func (r *registry) Unregister(c Collector) bool {
descChan := make(chan *Desc, capDescChan)
go func() {
c.Describe(descChan)
close(descChan)
}()
if baseLabels != nil {
for k, v := range baseLabels {
labels[k] = v
descIDs := map[uint64]struct{}{}
collectorIDHash := fnv.New64a()
buf := make([]byte, 8)
for desc := range descChan {
if _, exists := descIDs[desc.id]; !exists {
binary.BigEndian.PutUint64(buf, desc.id)
collectorIDHash.Write(buf)
descIDs[desc.id] = struct{}{}
}
}
collectorID := collectorIDHash.Sum64()
r.mtx.RLock()
if _, exists := r.collectorsByID[collectorID]; !exists {
r.mtx.RUnlock()
return false
}
r.mtx.RUnlock()
r.mtx.Lock()
defer r.mtx.Unlock()
delete(r.collectorsByID, collectorID)
for id := range descIDs {
delete(r.descIDs, id)
}
// dimHashesByName is left untouched as those must be consistent
// throughout the lifetime of a program.
return true
}
func (r *registry) ServeHTTP(w http.ResponseWriter, req *http.Request) {
enc, contentType := chooseEncoder(req)
buf := r.getBuf()
defer r.giveBuf(buf)
header := w.Header()
if _, err := r.writePB(buf, enc); err != nil {
if r.panicOnCollectError {
panic(err)
}
w.WriteHeader(http.StatusInternalServerError)
header.Set(contentTypeHeader, "text/plain")
fmt.Fprintf(w, "An error has occurred:\n\n%s", err)
return
}
header.Set(contentTypeHeader, contentType)
w.Write(buf.Bytes())
}
func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
metricFamiliesByName := make(map[string]*dto.MetricFamily, len(r.dimHashesByName))
var metricHashes map[uint64]struct{}
if r.collectChecksEnabled {
metricHashes = make(map[uint64]struct{})
}
metricChan := make(chan Metric, capMetricChan)
wg := sync.WaitGroup{}
// Scatter.
// (Collectors could be complex and slow, so we call them all at once.)
r.mtx.RLock()
wg.Add(len(r.collectorsByID))
go func() {
wg.Wait()
close(metricChan)
}()
for _, collector := range r.collectorsByID {
go func(collector Collector) {
defer wg.Done()
collector.Collect(metricChan)
}(collector)
}
r.mtx.RUnlock()
// Gather.
for metric := range metricChan {
// This could be done concurrently, too, but it would require locking
// of metricFamiliesByName (and of metricHashes if checks are
// enabled). Most likely not worth it.
desc := metric.Desc()
metricFamily, ok := metricFamiliesByName[desc.fqName]
if !ok {
metricFamily = r.getMetricFamily()
defer r.giveMetricFamily(metricFamily)
metricFamily.Name = proto.String(desc.fqName)
metricFamily.Help = proto.String(desc.help)
metricFamiliesByName[desc.fqName] = metricFamily
}
dtoMetric := r.getMetric()
defer r.giveMetric(dtoMetric)
metric.Write(dtoMetric)
switch {
case metricFamily.Type != nil:
// Type already set. We are good.
case dtoMetric.Gauge != nil:
metricFamily.Type = dto.MetricType_GAUGE.Enum()
case dtoMetric.Counter != nil:
metricFamily.Type = dto.MetricType_COUNTER.Enum()
case dtoMetric.Summary != nil:
metricFamily.Type = dto.MetricType_SUMMARY.Enum()
case dtoMetric.Untyped != nil:
metricFamily.Type = dto.MetricType_UNTYPED.Enum()
default:
return 0, fmt.Errorf("empty metric collected: %s", dtoMetric)
}
if r.collectChecksEnabled {
if err := r.checkConsistency(metricFamily, dtoMetric, desc, metricHashes); err != nil {
return 0, err
}
}
metricFamily.Metric = append(metricFamily.Metric, dtoMetric)
}
if r.metricFamilyInjectionHook != nil {
for _, mf := range r.metricFamilyInjectionHook() {
if _, exists := metricFamiliesByName[mf.GetName()]; exists {
return 0, fmt.Errorf("metric family with duplicate name injected: %s", mf)
}
metricFamiliesByName[mf.GetName()] = mf
}
}
signature, err := r.isValidCandidate(name, labels)
if err != nil {
return err
// Now that MetricFamilies are all set, sort their Metrics
// lexicographically by their label values.
for _, mf := range metricFamiliesByName {
sort.Sort(metricSorter(mf.Metric))
}
r.signatureContainers[signature] = &container{
BaseLabels: labels,
Docstring: docstring,
Metric: metric,
name: name,
// Write out MetricFamilies sorted by their name.
names := make([]string, 0, len(metricFamiliesByName))
for name := range metricFamiliesByName {
names = append(names, name)
}
sort.Strings(names)
var written int
for _, name := range names {
w, err := writeEncoded(w, metricFamiliesByName[name])
written += w
if err != nil {
return written, err
}
}
return written, nil
}
func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *dto.Metric, desc *Desc, metricHashes map[uint64]struct{}) error {
// Type consistency with metric family.
if metricFamily.GetType() == dto.MetricType_GAUGE && dtoMetric.Gauge == nil ||
metricFamily.GetType() == dto.MetricType_COUNTER && dtoMetric.Counter == nil ||
metricFamily.GetType() == dto.MetricType_SUMMARY && dtoMetric.Summary == nil ||
metricFamily.GetType() == dto.MetricType_UNTYPED && dtoMetric.Untyped == nil {
return fmt.Errorf(
"collected metric %q is not a %s",
dtoMetric, metricFamily.Type,
)
}
// Desc consistency with metric family.
if metricFamily.GetHelp() != desc.help {
return fmt.Errorf(
"collected metric %q has help %q but should have %q",
dtoMetric, desc.help, metricFamily.GetHelp(),
)
}
// Is the desc consistent with the content of the metric?
lpsFromDesc := make([]*dto.LabelPair, 0, len(dtoMetric.Label))
lpsFromDesc = append(lpsFromDesc, desc.constLabelPairs...)
for _, l := range desc.variableLabels {
lpsFromDesc = append(lpsFromDesc, &dto.LabelPair{
Name: proto.String(l),
})
}
if len(lpsFromDesc) != len(dtoMetric.Label) {
return fmt.Errorf(
"labels in collected metric %q are inconsistent with descriptor %s",
dtoMetric, desc,
)
}
sort.Sort(LabelPairSorter(lpsFromDesc))
for i, lpFromDesc := range lpsFromDesc {
lpFromMetric := dtoMetric.Label[i]
if lpFromDesc.GetName() != lpFromMetric.GetName() ||
lpFromDesc.Value != nil && lpFromDesc.GetValue() != lpFromMetric.GetValue() {
return fmt.Errorf(
"labels in collected metric %q are inconsistent with descriptor %s",
dtoMetric, desc,
)
}
}
// Is the metric unique (i.e. no other metric with the same name and the same label values)?
h := fnv.New64a()
var buf bytes.Buffer
buf.WriteString(desc.fqName)
h.Write(buf.Bytes())
for _, lp := range dtoMetric.Label {
buf.Reset()
buf.WriteString(lp.GetValue())
h.Write(buf.Bytes())
}
metricHash := h.Sum64()
if _, exists := metricHashes[metricHash]; exists {
return fmt.Errorf(
"collected metric %q was collected before with the same name and label values",
dtoMetric,
)
}
metricHashes[metricHash] = struct{}{}
r.mtx.RLock() // Remaining checks need the read lock.
defer r.mtx.RUnlock()
// Is the desc registered?
if _, exist := r.descIDs[desc.id]; !exist {
return fmt.Errorf("collected metric %q with unregistered descriptor %s", dtoMetric, desc)
}
return nil
}
// YieldBasicAuthExporter creates a http.HandlerFunc that is protected by HTTP's
// basic authentication.
func (r *registry) YieldBasicAuthExporter(username, password string) http.HandlerFunc {
// XXX: Work with Daniel to get this removed from the library, as it is really
// superfluous and can be much more elegantly accomplished via
// delegation.
log.Println("Registry.YieldBasicAuthExporter is deprecated.")
exporter := r.YieldExporter()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
authenticated := false
if auth := r.Header.Get(authorization); auth != "" {
base64Encoded := strings.SplitAfter(auth, " ")[1]
decoded, err := base64.URLEncoding.DecodeString(base64Encoded)
if err == nil {
usernamePassword := strings.Split(string(decoded), ":")
if usernamePassword[0] == username && usernamePassword[1] == password {
authenticated = true
}
}
}
if authenticated {
exporter.ServeHTTP(w, r)
} else {
w.Header().Add(authorizationHeader, authorizationHeaderValue)
http.Error(w, "access forbidden", 401)
}
})
}
// decorateWriter wraps the response writer to apply any additional behaviors
// that might benefit the client, e.g. GZIP encoding.
func decorateWriter(request *http.Request, writer http.ResponseWriter) io.Writer {
if !strings.Contains(request.Header.Get(acceptEncodingHeader), gzipAcceptEncodingValue) {
return writer
}
writer.Header().Set(contentEncodingHeader, gzipContentEncodingValue)
gziper := gzip.NewWriter(writer)
return gziper
}
func (r *registry) YieldExporter() http.HandlerFunc {
log.Println("Registry.YieldExporter is deprecated in favor of Registry.Handler.")
return r.Handler()
}
func (r *registry) dumpPB(w io.Writer, writeEncoded encoder) {
r.mutex.RLock()
defer r.mutex.RUnlock()
f := new(dto.MetricFamily)
for _, container := range r.signatureContainers {
f.Reset()
f.Name = proto.String(container.name)
f.Help = proto.String(container.Docstring)
container.Metric.dumpChildren(f)
for name, value := range container.BaseLabels {
if model.LabelName(name) == model.MetricNameLabel {
// The name is already in MetricFamily.
continue
// TODO: Once JSON is history, stop adding the
// __name__ label to BaseLabels and then remove
// this check.
}
p := &dto.LabelPair{
Name: proto.String(name),
Value: proto.String(value),
}
for _, child := range f.Metric {
child.Label = append(child.Label, p)
}
}
writeEncoded(w, f)
func (r *registry) getBuf() *bytes.Buffer {
select {
case buf := <-r.bufPool:
return buf
default:
return &bytes.Buffer{}
}
}
func (r *registry) dumpExternalPB(w io.Writer, writeEncoded encoder) {
if r.metricFamilyInjectionHook == nil {
return
}
for _, f := range r.metricFamilyInjectionHook() {
writeEncoded(w, f)
func (r *registry) giveBuf(buf *bytes.Buffer) {
buf.Reset()
select {
case r.bufPool <- buf:
default:
}
}
func (r *registry) Handler() http.HandlerFunc {
return func(w http.ResponseWriter, req *http.Request) {
defer requestLatencyAccumulator(time.Now())
func (r *registry) getMetricFamily() *dto.MetricFamily {
select {
case mf := <-r.metricFamilyPool:
return mf
default:
return &dto.MetricFamily{}
}
}
requestCount.Increment(nil)
header := w.Header()
func (r *registry) giveMetricFamily(mf *dto.MetricFamily) {
mf.Reset()
select {
case r.metricFamilyPool <- mf:
default:
}
}
writer := decorateWriter(req, w)
func (r *registry) getMetric() *dto.Metric {
select {
case m := <-r.metricPool:
return m
default:
return &dto.Metric{}
}
}
if closer, ok := writer.(io.Closer); ok {
defer closer.Close()
}
func (r *registry) giveMetric(m *dto.Metric) {
m.Reset()
select {
case r.metricPool <- m:
default:
}
}
accepts := goautoneg.ParseAccept(req.Header.Get("Accept"))
for _, accept := range accepts {
var enc encoder
switch {
case accept.Type == "application" &&
accept.SubType == "vnd.google.protobuf" &&
accept.Params["proto"] == "io.prometheus.client.MetricFamily":
switch accept.Params["encoding"] {
case "delimited":
header.Set(contentTypeHeader, DelimitedTelemetryContentType)
enc = text.WriteProtoDelimited
case "text":
header.Set(contentTypeHeader, ProtoTextTelemetryContentType)
enc = text.WriteProtoText
case "compact-text":
header.Set(contentTypeHeader, ProtoCompactTextTelemetryContentType)
enc = text.WriteProtoCompactText
default:
continue
}
case accept.Type == "text" &&
accept.SubType == "plain" &&
(accept.Params["version"] == "0.0.4" || accept.Params["version"] == ""):
header.Set(contentTypeHeader, TextTelemetryContentType)
enc = text.MetricFamilyToText
func newRegistry() *registry {
return &registry{
collectorsByID: map[uint64]Collector{},
descIDs: map[uint64]struct{}{},
dimHashesByName: map[string]uint64{},
bufPool: make(chan *bytes.Buffer, numBufs),
metricFamilyPool: make(chan *dto.MetricFamily, numMetricFamilies),
metricPool: make(chan *dto.Metric, numMetrics),
}
}
func chooseEncoder(req *http.Request) (encoder, string) {
accepts := goautoneg.ParseAccept(req.Header.Get("Accept"))
for _, accept := range accepts {
switch {
case accept.Type == "application" &&
accept.SubType == "vnd.google.protobuf" &&
accept.Params["proto"] == "io.prometheus.client.MetricFamily":
switch accept.Params["encoding"] {
case "delimited":
return text.WriteProtoDelimited, DelimitedTelemetryContentType
case "text":
return text.WriteProtoText, ProtoTextTelemetryContentType
case "compact-text":
return text.WriteProtoCompactText, ProtoCompactTextTelemetryContentType
default:
continue
}
r.dumpPB(writer, enc)
r.dumpExternalPB(writer, enc)
return
case accept.Type == "text" &&
accept.SubType == "plain" &&
(accept.Params["version"] == "0.0.4" || accept.Params["version"] == ""):
return text.MetricFamilyToText, TextTelemetryContentType
default:
continue
}
// TODO: Once JSON deprecation is completed, use text format as
// fall-back.
header.Set(contentTypeHeader, JSONTelemetryContentType)
json.NewEncoder(writer).Encode(r)
}
return text.MetricFamilyToText, TextTelemetryContentType
}
var (
abortOnMisuse = flag.Bool(FlagNamespace+"abortonmisuse", false, "abort if a semantic misuse is encountered (bool).")
debugRegistration = flag.Bool(FlagNamespace+"debugregistration", false, "display information about the metric registration process (bool).")
useAggressiveSanityChecks = flag.Bool(FlagNamespace+"useaggressivesanitychecks", false, "perform expensive validation of metrics (bool).")
)
type metricSorter []*dto.Metric
func (s metricSorter) Len() int {
return len(s)
}
func (s metricSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s metricSorter) Less(i, j int) bool {
for n, lp := range s[i].Label {
vi := lp.GetValue()
vj := s[j].Label[n].GetValue()
if vi != vj {
return vi < vj
}
}
return true
}

View File

@ -1,3 +1,16 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
@ -9,187 +22,13 @@ package prometheus
import (
"bytes"
"encoding/binary"
"encoding/json"
"fmt"
"io"
"net/http"
"testing"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/model"
"github.com/prometheus/client_golang/test"
dto "github.com/prometheus/client_model/go"
)
func testRegister(t test.Tester) {
var oldState = struct {
abortOnMisuse bool
debugRegistration bool
useAggressiveSanityChecks bool
}{
abortOnMisuse: *abortOnMisuse,
debugRegistration: *debugRegistration,
useAggressiveSanityChecks: *useAggressiveSanityChecks,
}
defer func() {
abortOnMisuse = &(oldState.abortOnMisuse)
debugRegistration = &(oldState.debugRegistration)
useAggressiveSanityChecks = &(oldState.useAggressiveSanityChecks)
}()
type input struct {
name string
baseLabels map[string]string
}
validLabels := map[string]string{"label": "value"}
var scenarios = []struct {
inputs []input
outputs []bool
}{
{},
{
inputs: []input{
{
name: "my_name_without_labels",
},
},
outputs: []bool{
true,
},
},
{
inputs: []input{
{
name: "my_name_without_labels",
},
{
name: "another_name_without_labels",
},
},
outputs: []bool{
true,
true,
},
},
{
inputs: []input{
{
name: "",
},
},
outputs: []bool{
false,
},
},
{
inputs: []input{
{
name: "valid_name",
baseLabels: map[string]string{model.ReservedLabelPrefix + "internal": "illegal_internal_name"},
},
},
outputs: []bool{
false,
},
},
{
inputs: []input{
{
name: "duplicate_names",
},
{
name: "duplicate_names",
},
},
outputs: []bool{
true,
false,
},
},
{
inputs: []input{
{
name: "duplicate_names_with_identical_labels",
baseLabels: map[string]string{"label": "value"},
},
{
name: "duplicate_names_with_identical_labels",
baseLabels: map[string]string{"label": "value"},
},
},
outputs: []bool{
true,
false,
},
},
{
inputs: []input{
{
name: "metric_1_with_identical_labels",
baseLabels: validLabels,
},
{
name: "metric_2_with_identical_labels",
baseLabels: validLabels,
},
},
outputs: []bool{
true,
true,
},
},
{
inputs: []input{
{
name: "duplicate_names_with_dissimilar_labels",
baseLabels: map[string]string{"label": "foo"},
},
{
name: "duplicate_names_with_dissimilar_labels",
baseLabels: map[string]string{"label": "bar"},
},
},
outputs: []bool{
true,
false,
},
},
}
for i, scenario := range scenarios {
if len(scenario.inputs) != len(scenario.outputs) {
t.Fatalf("%d. expected scenario output length %d, got %d", i, len(scenario.inputs), len(scenario.outputs))
}
abortOnMisuse = proto.Bool(false)
debugRegistration = proto.Bool(false)
useAggressiveSanityChecks = proto.Bool(true)
registry := NewRegistry()
for j, input := range scenario.inputs {
actual := registry.Register(input.name, "", input.baseLabels, nil)
if scenario.outputs[j] != (actual == nil) {
t.Errorf("%d.%d. expected %t, got %t", i, j, scenario.outputs[j], actual == nil)
}
}
}
}
func TestRegister(t *testing.T) {
testRegister(t)
}
func BenchmarkRegister(b *testing.B) {
for i := 0; i < b.N; i++ {
testRegister(b)
}
}
type fakeResponseWriter struct {
header http.Header
body bytes.Buffer
@ -206,11 +45,19 @@ func (r *fakeResponseWriter) Write(d []byte) (l int, err error) {
func (r *fakeResponseWriter) WriteHeader(c int) {
}
func testHandler(t test.Tester) {
func testHandler(t testing.TB) {
metric := NewCounter()
metric.Increment(map[string]string{"labelname": "val1"})
metric.Increment(map[string]string{"labelname": "val2"})
metricVec := NewCounterVec(
CounterOpts{
Name: "name",
Help: "docstring",
ConstLabels: Labels{"constname": "constvalue"},
},
[]string{"labelname"},
)
metricVec.WithLabelValues("val1").Inc()
metricVec.WithLabelValues("val2").Inc()
varintBuf := make([]byte, binary.MaxVarintLen32)
@ -227,8 +74,8 @@ func testHandler(t test.Tester) {
Value: proto.String("externalval1"),
},
{
Name: proto.String("externalbasename"),
Value: proto.String("externalbasevalue"),
Name: proto.String("externalconstname"),
Value: proto.String("externalconstvalue"),
},
},
Counter: &dto.Counter{
@ -255,7 +102,7 @@ func testHandler(t test.Tester) {
externalMetricFamilyAsBytes := externalBuf.Bytes()
externalMetricFamilyAsText := []byte(`# HELP externalname externaldocstring
# TYPE externalname counter
externalname{externallabelname="externalval1",externalbasename="externalbasevalue"} 1
externalname{externallabelname="externalval1",externalconstname="externalconstvalue"} 1
`)
externalMetricFamilyAsProtoText := []byte(`name: "externalname"
help: "externaldocstring"
@ -266,8 +113,8 @@ metric: <
value: "externalval1"
>
label: <
name: "externalbasename"
value: "externalbasevalue"
name: "externalconstname"
value: "externalconstvalue"
>
counter: <
value: 1
@ -275,7 +122,7 @@ metric: <
>
`)
externalMetricFamilyAsProtoCompactText := []byte(`name:"externalname" help:"externaldocstring" type:COUNTER metric:<label:<name:"externallabelname" value:"externalval1" > label:<name:"externalbasename" value:"externalbasevalue" > counter:<value:1 > >
externalMetricFamilyAsProtoCompactText := []byte(`name:"externalname" help:"externaldocstring" type:COUNTER metric:<label:<name:"externallabelname" value:"externalval1" > label:<name:"externalconstname" value:"externalconstvalue" > counter:<value:1 > >
`)
expectedMetricFamily := &dto.MetricFamily{
@ -286,12 +133,12 @@ metric: <
{
Label: []*dto.LabelPair{
{
Name: proto.String("labelname"),
Value: proto.String("val1"),
Name: proto.String("constname"),
Value: proto.String("constvalue"),
},
{
Name: proto.String("basename"),
Value: proto.String("basevalue"),
Name: proto.String("labelname"),
Value: proto.String("val1"),
},
},
Counter: &dto.Counter{
@ -301,12 +148,12 @@ metric: <
{
Label: []*dto.LabelPair{
{
Name: proto.String("labelname"),
Value: proto.String("val2"),
Name: proto.String("constname"),
Value: proto.String("constvalue"),
},
{
Name: proto.String("basename"),
Value: proto.String("basevalue"),
Name: proto.String("labelname"),
Value: proto.String("val2"),
},
},
Counter: &dto.Counter{
@ -332,20 +179,20 @@ metric: <
expectedMetricFamilyAsBytes := buf.Bytes()
expectedMetricFamilyAsText := []byte(`# HELP name docstring
# TYPE name counter
name{labelname="val1",basename="basevalue"} 1
name{labelname="val2",basename="basevalue"} 1
name{constname="constvalue",labelname="val1"} 1
name{constname="constvalue",labelname="val2"} 1
`)
expectedMetricFamilyAsProtoText := []byte(`name: "name"
help: "docstring"
type: COUNTER
metric: <
label: <
name: "labelname"
value: "val1"
name: "constname"
value: "constvalue"
>
label: <
name: "basename"
value: "basevalue"
name: "labelname"
value: "val1"
>
counter: <
value: 1
@ -353,12 +200,12 @@ metric: <
>
metric: <
label: <
name: "labelname"
value: "val2"
name: "constname"
value: "constvalue"
>
label: <
name: "basename"
value: "basevalue"
name: "labelname"
value: "val2"
>
counter: <
value: 1
@ -366,7 +213,7 @@ metric: <
>
`)
expectedMetricFamilyAsProtoCompactText := []byte(`name:"name" help:"docstring" type:COUNTER metric:<label:<name:"labelname" value:"val1" > label:<name:"basename" value:"basevalue" > counter:<value:1 > > metric:<label:<name:"labelname" value:"val2" > label:<name:"basename" value:"basevalue" > counter:<value:1 > >
expectedMetricFamilyAsProtoCompactText := []byte(`name:"name" help:"docstring" type:COUNTER metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val1" > counter:<value:1 > > metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val2" > counter:<value:1 > >
`)
type output struct {
@ -386,9 +233,9 @@ metric: <
},
out: output{
headers: map[string]string{
"Content-Type": `application/json; schema="prometheus/telemetry"; version=0.0.2`,
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte("[]\n"),
body: []byte{},
},
},
{ // 1
@ -397,9 +244,9 @@ metric: <
},
out: output{
headers: map[string]string{
"Content-Type": `application/json; schema="prometheus/telemetry"; version=0.0.2`,
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte("[]\n"),
body: []byte{},
},
},
{ // 2
@ -408,9 +255,9 @@ metric: <
},
out: output{
headers: map[string]string{
"Content-Type": `application/json; schema="prometheus/telemetry"; version=0.0.2`,
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte("[]\n"),
body: []byte{},
},
},
{ // 3
@ -430,10 +277,9 @@ metric: <
},
out: output{
headers: map[string]string{
"Content-Type": `application/json; schema="prometheus/telemetry"; version=0.0.2`,
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte(`[{"baseLabels":{"__name__":"name","basename":"basevalue"},"docstring":"docstring","metric":{"type":"counter","value":[{"labels":{"labelname":"val1"},"value":1},{"labels":{"labelname":"val2"},"value":1}]}}]
`),
body: expectedMetricFamilyAsText,
},
withCounter: true,
},
@ -455,9 +301,9 @@ metric: <
},
out: output{
headers: map[string]string{
"Content-Type": `application/json; schema="prometheus/telemetry"; version=0.0.2`,
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte("[]\n"),
body: externalMetricFamilyAsText,
},
withExternalMF: true,
},
@ -483,8 +329,8 @@ metric: <
},
body: bytes.Join(
[][]byte{
expectedMetricFamilyAsBytes,
externalMetricFamilyAsBytes,
expectedMetricFamilyAsBytes,
},
[]byte{},
),
@ -525,8 +371,8 @@ metric: <
},
body: bytes.Join(
[][]byte{
expectedMetricFamilyAsText,
externalMetricFamilyAsText,
expectedMetricFamilyAsText,
},
[]byte{},
),
@ -544,8 +390,8 @@ metric: <
},
body: bytes.Join(
[][]byte{
expectedMetricFamilyAsBytes,
externalMetricFamilyAsBytes,
expectedMetricFamilyAsBytes,
},
[]byte{},
),
@ -563,8 +409,8 @@ metric: <
},
body: bytes.Join(
[][]byte{
expectedMetricFamilyAsProtoText,
externalMetricFamilyAsProtoText,
expectedMetricFamilyAsProtoText,
},
[]byte{},
),
@ -582,8 +428,8 @@ metric: <
},
body: bytes.Join(
[][]byte{
expectedMetricFamilyAsProtoCompactText,
externalMetricFamilyAsProtoCompactText,
expectedMetricFamilyAsProtoCompactText,
},
[]byte{},
),
@ -593,25 +439,21 @@ metric: <
},
}
for i, scenario := range scenarios {
registry := NewRegistry().(*registry)
registry := newRegistry()
registry.collectChecksEnabled = true
if scenario.withCounter {
registry.Register(
"name", "docstring",
map[string]string{"basename": "basevalue"},
metric,
)
registry.Register(metricVec)
}
if scenario.withExternalMF {
registry.SetMetricFamilyInjectionHook(
func() []*dto.MetricFamily {
return externalMetricFamily
},
)
registry.metricFamilyInjectionHook = func() []*dto.MetricFamily {
return externalMetricFamily
}
}
writer := &fakeResponseWriter{
header: http.Header{},
}
handler := registry.Handler()
handler := InstrumentHandler("prometheus", registry)
request, _ := http.NewRequest("GET", "/", nil)
for key, value := range scenario.headers {
request.Header.Add(key, value)
@ -636,7 +478,7 @@ metric: <
}
}
func TestHander(t *testing.T) {
func TestHandler(t *testing.T) {
testHandler(t)
}
@ -645,151 +487,3 @@ func BenchmarkHandler(b *testing.B) {
testHandler(b)
}
}
func testDecorateWriter(t test.Tester) {
type input struct {
headers map[string]string
body []byte
}
type output struct {
headers map[string]string
body []byte
}
var scenarios = []struct {
in input
out output
}{
{},
{
in: input{
headers: map[string]string{
"Accept-Encoding": "gzip,deflate,sdch",
},
body: []byte("Hi, mom!"),
},
out: output{
headers: map[string]string{
"Content-Encoding": "gzip",
},
body: []byte("\x1f\x8b\b\x00\x00\tn\x88\x00\xff\xf2\xc8\xd4Q\xc8\xcd\xcfU\x04\x04\x00\x00\xff\xff9C&&\b\x00\x00\x00"),
},
},
{
in: input{
headers: map[string]string{
"Accept-Encoding": "foo",
},
body: []byte("Hi, mom!"),
},
out: output{
headers: map[string]string{},
body: []byte("Hi, mom!"),
},
},
}
for i, scenario := range scenarios {
request, _ := http.NewRequest("GET", "/", nil)
for key, value := range scenario.in.headers {
request.Header.Add(key, value)
}
baseWriter := &fakeResponseWriter{
header: make(http.Header),
}
writer := decorateWriter(request, baseWriter)
for key, value := range scenario.out.headers {
if baseWriter.Header().Get(key) != value {
t.Errorf("%d. expected %s for header %s, got %s", i, value, key, baseWriter.Header().Get(key))
}
}
writer.Write(scenario.in.body)
if closer, ok := writer.(io.Closer); ok {
closer.Close()
}
if !bytes.Equal(scenario.out.body, baseWriter.body.Bytes()) {
t.Errorf("%d. expected %s for body, got %s", i, scenario.out.body, baseWriter.body.Bytes())
}
}
}
func TestDecorateWriter(t *testing.T) {
testDecorateWriter(t)
}
func BenchmarkDecorateWriter(b *testing.B) {
for i := 0; i < b.N; i++ {
testDecorateWriter(b)
}
}
func testDumpToWriter(t test.Tester) {
type input struct {
metrics map[string]Metric
}
var scenarios = []struct {
in input
out []byte
}{
{
out: []byte("[]"),
},
{
in: input{
metrics: map[string]Metric{
"foo": NewCounter(),
},
},
out: []byte(`[{"baseLabels":{"__name__":"foo","label_foo":"foo"},"docstring":"metric foo","metric":{"type":"counter","value":[]}}]`),
},
{
in: input{
metrics: map[string]Metric{
"foo": NewCounter(),
"bar": NewCounter(),
},
},
out: []byte(`[{"baseLabels":{"__name__":"bar","label_bar":"bar"},"docstring":"metric bar","metric":{"type":"counter","value":[]}},{"baseLabels":{"__name__":"foo","label_foo":"foo"},"docstring":"metric foo","metric":{"type":"counter","value":[]}}]`),
},
}
for i, scenario := range scenarios {
registry := NewRegistry().(*registry)
for name, metric := range scenario.in.metrics {
err := registry.Register(name, fmt.Sprintf("metric %s", name), map[string]string{fmt.Sprintf("label_%s", name): name}, metric)
if err != nil {
t.Errorf("%d. encountered error while registering metric %s", i, err)
}
}
actual, err := json.Marshal(registry)
if err != nil {
t.Errorf("%d. encountered error while dumping %s", i, err)
}
if !bytes.Equal(scenario.out, actual) {
t.Errorf("%d. expected %q for dumping, got %q", i, scenario.out, actual)
}
}
}
func TestDumpToWriter(t *testing.T) {
testDumpToWriter(t)
}
func BenchmarkDumpToWriter(b *testing.B) {
for i := 0; i < b.N; i++ {
testDumpToWriter(b)
}
}

View File

@ -1,126 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"math"
"sort"
)
// TODO(mtp): Split this out into a summary statistics file once moving/rolling
// averages are calculated.
// ReductionMethod provides a method for reducing metrics into a scalar value.
type ReductionMethod func([]float64) float64
var (
medianReducer = NearestRankReducer(50)
)
// These are the canned ReductionMethods.
var (
// Reduce to the average of the set.
AverageReducer = averageReducer
// Extract the first modal value.
FirstModeReducer = firstModeReducer
// Reduce to the maximum of the set.
MaximumReducer = maximumReducer
// Reduce to the median of the set.
MedianReducer = medianReducer
// Reduce to the minimum of the set.
MinimumReducer = minimumReducer
)
func averageReducer(input []float64) float64 {
count := 0.0
sum := 0.0
for _, v := range input {
sum += v
count++
}
if count == 0 {
return math.NaN()
}
return sum / count
}
func firstModeReducer(input []float64) float64 {
valuesToFrequency := map[float64]int64{}
largestTally := int64(math.MinInt64)
largestTallyValue := math.NaN()
for _, v := range input {
presentCount, _ := valuesToFrequency[v]
presentCount++
valuesToFrequency[v] = presentCount
if presentCount > largestTally {
largestTally = presentCount
largestTallyValue = v
}
}
return largestTallyValue
}
// Calculate the percentile by choosing the nearest neighboring value.
func nearestRank(input []float64, percentile float64) float64 {
inputSize := len(input)
if inputSize == 0 {
return math.NaN()
}
ordinalRank := math.Ceil(((percentile / 100.0) * float64(inputSize)) + 0.5)
copiedInput := make([]float64, inputSize)
copy(copiedInput, input)
sort.Float64s(copiedInput)
preliminaryIndex := int(ordinalRank) - 1
if preliminaryIndex == inputSize {
return copiedInput[preliminaryIndex-1]
}
return copiedInput[preliminaryIndex]
}
// Generate a ReductionMethod based off of extracting a given percentile value.
func NearestRankReducer(percentile float64) ReductionMethod {
return func(input []float64) float64 {
return nearestRank(input, percentile)
}
}
func minimumReducer(input []float64) float64 {
minimum := math.MaxFloat64
for _, v := range input {
minimum = math.Min(minimum, v)
}
return minimum
}
func maximumReducer(input []float64) float64 {
maximum := math.SmallestNonzeroFloat64
for _, v := range input {
maximum = math.Max(maximum, v)
}
return maximum
}

View File

@ -1,119 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
. "github.com/matttproud/gocheck"
)
func (s *S) TestAverageOnEmpty(c *C) {
empty := []float64{}
var v float64 = AverageReducer(empty)
c.Assert(v, IsNaN)
}
func (s *S) TestAverageForSingleton(c *C) {
input := []float64{5}
var v float64 = AverageReducer(input)
c.Check(v, Equals, 5.0)
}
func (s *S) TestAverage(c *C) {
input := []float64{5, 15}
var v float64 = AverageReducer(input)
c.Check(v, Equals, 10.0)
}
func (s *S) TestFirstModeOnEmpty(c *C) {
input := []float64{}
var v float64 = FirstModeReducer(input)
c.Assert(v, IsNaN)
}
func (s *S) TestFirstModeForSingleton(c *C) {
input := []float64{5}
var v float64 = FirstModeReducer(input)
c.Check(v, Equals, 5.0)
}
func (s *S) TestFirstModeForUnimodal(c *C) {
input := []float64{1, 2, 3, 4, 3}
var v float64 = FirstModeReducer(input)
c.Check(v, Equals, 3.0)
}
func (s *S) TestNearestRankForEmpty(c *C) {
input := []float64{}
c.Assert(nearestRank(input, 0), IsNaN)
c.Assert(nearestRank(input, 50), IsNaN)
c.Assert(nearestRank(input, 100), IsNaN)
}
func (s *S) TestNearestRankForSingleton(c *C) {
input := []float64{5}
c.Check(nearestRank(input, 0), Equals, 5.0)
c.Check(nearestRank(input, 50), Equals, 5.0)
c.Check(nearestRank(input, 100), Equals, 5.0)
}
func (s *S) TestNearestRankForDouble(c *C) {
input := []float64{5, 5}
c.Check(nearestRank(input, 0), Equals, 5.0)
c.Check(nearestRank(input, 50), Equals, 5.0)
c.Check(nearestRank(input, 100), Equals, 5.0)
}
func (s *S) TestNearestRankFor100(c *C) {
input := make([]float64, 100)
for i := 0; i < 100; i++ {
input[i] = float64(i + 1)
}
c.Check(nearestRank(input, 0), Equals, 1.0)
c.Check(nearestRank(input, 50), Equals, 51.0)
c.Check(nearestRank(input, 100), Equals, 100.0)
}
func (s *S) TestNearestRankFor101(c *C) {
input := make([]float64, 101)
for i := 0; i < 101; i++ {
input[i] = float64(i + 1)
}
c.Check(nearestRank(input, 0), Equals, 1.0)
c.Check(nearestRank(input, 50), Equals, 51.0)
c.Check(nearestRank(input, 100), Equals, 101.0)
}
func (s *S) TestMedianReducer(c *C) {
input := []float64{1, 2, 3}
c.Check(MedianReducer(input), Equals, 2.0)
}
func (s *S) TestMinimum(c *C) {
input := []float64{5, 1, 10, 1.1, 4}
c.Check(MinimumReducer(input), Equals, 1.0)
}
func (s *S) TestMaximum(c *C) {
input := []float64{5, 1, 10, 1.1, 4}
c.Check(MaximumReducer(input), Equals, 10.0)
}

425
prometheus/summary.go Normal file
View File

@ -0,0 +1,425 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"fmt"
"hash/fnv"
"sort"
"sync"
"time"
"code.google.com/p/goprotobuf/proto"
"github.com/bmizerany/perks/quantile" // TODO: Vendorize?
dto "github.com/prometheus/client_model/go"
)
// A Summary captures individual observations from an event or sample stream and
// summarizes them in a manner similar to traditional summary statistics: 1. sum
// of observations, 2. observation count, 3. rank estimations.
//
// A typical use-case is the observation of request latencies. By default, a
// Summary provides the median, the 90th and the 99th percentile of the latency
// as rank estimations.
//
// To create Summary instances, use NewSummary.
type Summary interface {
Metric
Collector
// Observe adds a single observation to the summary.
Observe(float64)
}
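// A hypothetical usage sketch (not part of this change), assuming the package
// is imported under its usual path: request latencies are observed in seconds
// and reported with the default quantiles. handleRequest is a made-up helper.
//
//	requestDuration := prometheus.NewSummary(prometheus.SummaryOpts{
//		Name: "http_request_duration_seconds",
//		Help: "HTTP request latencies in seconds.",
//	})
//	prometheus.MustRegister(requestDuration)
//
//	start := time.Now()
//	handleRequest()
//	requestDuration.Observe(time.Since(start).Seconds())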
// DefObjectives are the default Summary quantile values.
var (
DefObjectives = []float64{0.5, 0.9, 0.99}
)
// Default values for SummaryOpts.
const (
// DefMaxAge is the default duration for which observations stay
// relevant.
DefMaxAge time.Duration = 10 * time.Minute
// DefAgeBuckets is the default number of buckets used to calculate the
// age of observations.
DefAgeBuckets = 10
// DefBufCap is the default buffer size for collecting Summary observations.
DefBufCap = 500
// DefEpsilon is the default error epsilon for the quantile rank estimates.
DefEpsilon = 0.001
)
// SummaryOpts bundles the options for creating a Summary metric. It is
// mandatory to set Name and Help to a non-empty string. All other fields are
// optional and can safely be left at their zero value.
type SummaryOpts struct {
// Namespace, Subsystem, and Name are components of the fully-qualified
// name of the Summary (created by joining these components with
// "_"). Only Name is mandatory, the others merely help structuring the
// name. Note that the fully-qualified name of the Summary must be a
// valid Prometheus metric name.
Namespace string
Subsystem string
Name string
// Help provides information about this Summary. Mandatory!
//
// Metrics with the same fully-qualified name must have the same Help
// string.
Help string
// ConstLabels are used to attach fixed labels to this
// Summary. Summaries with the same fully-qualified name must have the
// same label names in their ConstLabels.
//
// Note that in most cases, labels have a value that varies during the
// lifetime of a process. Those labels are usually managed with a
// SummaryVec. ConstLabels serve only special purposes. One is for the
// special case where the value of a label does not change during the
// lifetime of a process, e.g. if the revision of the running binary is
// put into a label. Another, more advanced purpose is if more than one
// Collector needs to collect Summaries with the same fully-qualified
// name. In that case, those Summaries must differ in the values of
// their ConstLabels. See the Collector examples.
//
// If the value of a label never changes (not even between binaries),
// that label most likely should not be a label at all (but part of the
// metric name).
ConstLabels Labels
// Objectives defines the quantile rank estimates. The default value is
// DefObjectives.
Objectives []float64
// MaxAge defines the duration for which an observation stays relevant
// for the summary. Must be positive. The default value is DefMaxAge.
MaxAge time.Duration
// AgeBuckets is the number of buckets used to exclude observations that
// are older than MaxAge from the summary. A higher number has a
// resource penalty, so only increase it if the higher resolution is
// really required. The default value is DefAgeBuckets.
AgeBuckets uint32
// BufCap defines the default sample stream buffer size. The default
// value of DefBufCap should suffice for most uses. If there is a need
// to increase the value, a multiple of 500 is recommended (because that
// is the internal buffer size of the underlying package
// "github.com/bmizerany/perks/quantile").
BufCap uint32
// Epsilon is the error epsilon for the quantile rank estimate. Must be
// positive. The default is DefEpsilon.
Epsilon float64
}
// NewSummary creates a new Summary based on the provided SummaryOpts.
func NewSummary(opts SummaryOpts) Summary {
return newSummary(
NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
nil,
opts.ConstLabels,
),
opts,
)
}
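// A hypothetical configuration sketch (not part of this change): every field
// name is taken from SummaryOpts above, while the concrete values are only
// illustrative.
//
//	s := prometheus.NewSummary(prometheus.SummaryOpts{
//		Namespace:  "myapp",
//		Subsystem:  "api",
//		Name:       "request_duration_seconds",
//		Help:       "API request latencies in seconds.",
//		Objectives: []float64{0.5, 0.95, 0.99},
//		MaxAge:     5 * time.Minute,
//		Epsilon:    0.0005,
//	})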
func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary {
if len(desc.variableLabels) != len(labelValues) {
panic(errInconsistentCardinality)
}
if len(opts.Objectives) == 0 {
opts.Objectives = DefObjectives
}
if opts.MaxAge < 0 {
panic(fmt.Errorf("illegal max age MaxAge=%v", opts.MaxAge))
}
if opts.MaxAge == 0 {
opts.MaxAge = DefMaxAge
}
if opts.AgeBuckets == 0 {
opts.AgeBuckets = DefAgeBuckets
}
if opts.BufCap == 0 {
opts.BufCap = DefBufCap
}
if opts.Epsilon < 0 {
panic(fmt.Errorf("illegal value for Epsilon=%f", opts.Epsilon))
}
if opts.Epsilon == 0. {
opts.Epsilon = DefEpsilon
}
s := &summary{
desc: desc,
objectives: opts.Objectives,
epsilon: opts.Epsilon,
labelPairs: makeLabelPairs(desc, labelValues),
hotBuf: make([]float64, 0, opts.BufCap),
coldBuf: make([]float64, 0, opts.BufCap),
streamDuration: opts.MaxAge / time.Duration(opts.AgeBuckets),
}
s.mergedTailStreams = s.newStream()
s.mergedAllStreams = s.newStream()
s.headStreamExpTime = time.Now().Add(s.streamDuration)
s.hotBufExpTime = s.headStreamExpTime
for i := uint32(0); i < opts.AgeBuckets; i++ {
s.streams = append(s.streams, s.newStream())
}
s.headStream = s.streams[0]
s.Init(s) // Init self-collection.
return s
}
type summary struct {
SelfCollector
bufMtx sync.Mutex // Protects hotBuf and hotBufExpTime.
mtx sync.Mutex // Protects every other moving part.
// Lock bufMtx before mtx if both are needed.
desc *Desc
objectives []float64
epsilon float64
labelPairs []*dto.LabelPair
sum float64
cnt uint64
hotBuf, coldBuf []float64
streams []*quantile.Stream
streamDuration time.Duration
headStreamIdx int
headStreamExpTime, hotBufExpTime time.Time
headStream, mergedTailStreams, mergedAllStreams *quantile.Stream
}
func (s *summary) Desc() *Desc {
return s.desc
}
func (s *summary) Observe(v float64) {
s.bufMtx.Lock()
defer s.bufMtx.Unlock()
now := time.Now()
if now.After(s.hotBufExpTime) {
s.asyncFlush(now)
}
s.hotBuf = append(s.hotBuf, v)
if len(s.hotBuf) == cap(s.hotBuf) {
s.asyncFlush(now)
}
}
func (s *summary) Write(out *dto.Metric) {
sum := &dto.Summary{}
qs := make([]*dto.Quantile, 0, len(s.objectives))
s.bufMtx.Lock()
s.mtx.Lock()
if len(s.hotBuf) != 0 {
s.swapBufs(time.Now())
}
s.bufMtx.Unlock()
s.flushColdBuf()
s.mergedAllStreams.Merge(s.mergedTailStreams.Samples())
s.mergedAllStreams.Merge(s.headStream.Samples())
sum.SampleCount = proto.Uint64(s.cnt)
sum.SampleSum = proto.Float64(s.sum)
for _, rank := range s.objectives {
qs = append(qs, &dto.Quantile{
Quantile: proto.Float64(rank),
Value: proto.Float64(s.mergedAllStreams.Query(rank)),
})
}
s.mergedAllStreams.Reset()
s.mtx.Unlock()
if len(qs) > 0 {
sort.Sort(quantSort(qs))
}
sum.Quantile = qs
out.Summary = sum
out.Label = s.labelPairs
}
func (s *summary) newStream() *quantile.Stream {
stream := quantile.NewTargeted(s.objectives...)
stream.SetEpsilon(s.epsilon)
return stream
}
// asyncFlush needs bufMtx locked.
func (s *summary) asyncFlush(now time.Time) {
s.mtx.Lock()
s.swapBufs(now)
// Unblock the original goroutine that was responsible for the mutation
// that triggered the compaction. But hold onto the global non-buffer
// state mutex until the operation finishes.
go func() {
s.flushColdBuf()
s.mtx.Unlock()
}()
}
// maybeRotateStreams needs mtx AND bufMtx locked.
func (s *summary) maybeRotateStreams() {
if s.hotBufExpTime.Equal(s.headStreamExpTime) {
// Fast return to avoid re-merging s.mergedTailStreams.
return
}
for !s.hotBufExpTime.Equal(s.headStreamExpTime) {
s.headStreamIdx++
if s.headStreamIdx >= len(s.streams) {
s.headStreamIdx = 0
}
s.headStream = s.streams[s.headStreamIdx]
s.headStream.Reset()
s.headStreamExpTime = s.headStreamExpTime.Add(s.streamDuration)
}
s.mergedTailStreams.Reset()
for _, stream := range s.streams {
if stream != s.headStream {
s.mergedTailStreams.Merge(stream.Samples())
}
}
}
// flushColdBuf needs mtx locked.
func (s *summary) flushColdBuf() {
for _, v := range s.coldBuf {
s.headStream.Insert(v)
s.cnt++
s.sum += v
}
s.coldBuf = s.coldBuf[0:0]
s.maybeRotateStreams()
}
// swapBufs needs mtx AND bufMtx locked, coldBuf must be empty.
func (s *summary) swapBufs(now time.Time) {
s.hotBuf, s.coldBuf = s.coldBuf, s.hotBuf
// hotBuf is now empty and gets new expiration set.
for now.After(s.hotBufExpTime) {
s.hotBufExpTime = s.hotBufExpTime.Add(s.streamDuration)
}
}
type quantSort []*dto.Quantile
func (s quantSort) Len() int {
return len(s)
}
func (s quantSort) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s quantSort) Less(i, j int) bool {
return s[i].GetQuantile() < s[j].GetQuantile()
}
// SummaryVec is a Collector that bundles a set of Summaries that all share the
// same Desc, but have different values for their variable labels. This is used
// if you want to observe the same thing partitioned by various dimensions
// (e.g. http request latencies, partitioned by status code and method). Create
// instances with NewSummaryVec.
type SummaryVec struct {
MetricVec
}
// NewSummaryVec creates a new SummaryVec based on the provided SummaryOpts and
// partitioned by the given label names. At least one label name must be
// provided.
func NewSummaryVec(opts SummaryOpts, labelNames []string) *SummaryVec {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
labelNames,
opts.ConstLabels,
)
return &SummaryVec{
MetricVec: MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
return newSummary(desc, opts, lvs...)
},
},
}
}
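// A hypothetical usage sketch (not part of this change): one Summary per HTTP
// method, all sharing the same Desc; the metric name is only illustrative.
//
//	latencies := prometheus.NewSummaryVec(
//		prometheus.SummaryOpts{
//			Name: "http_request_duration_seconds",
//			Help: "HTTP request latencies in seconds, partitioned by method.",
//		},
//		[]string{"method"},
//	)
//	prometheus.MustRegister(latencies)
//	latencies.WithLabelValues("GET").Observe(0.042)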
// GetMetricWithLabelValues replaces the method of the same name in
// MetricVec. The difference is that this method returns a Summary and not a
// Metric so that no type conversion is required.
func (m *SummaryVec) GetMetricWithLabelValues(lvs ...string) (Summary, error) {
metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...)
if metric != nil {
return metric.(Summary), err
}
return nil, err
}
// GetMetricWith replaces the method of the same name in MetricVec. The
// difference is that this method returns a Summary and not a Metric so that no
// type conversion is required.
func (m *SummaryVec) GetMetricWith(labels Labels) (Summary, error) {
metric, err := m.MetricVec.GetMetricWith(labels)
if metric != nil {
return metric.(Summary), err
}
return nil, err
}
// WithLabelValues works as GetMetricWithLabelValues, but panics where
// GetMetricWithLabelValues would have returned an error. By not returning an
// error, WithLabelValues allows shortcuts like
// myVec.WithLabelValues("404", "GET").Observe(42.0)
func (m *SummaryVec) WithLabelValues(lvs ...string) Summary {
return m.MetricVec.WithLabelValues(lvs...).(Summary)
}
// With works as GetMetricWith, but panics where GetMetricWith would have
// returned an error. By not returning an error, With allows shortcuts like
// myVec.With(Labels{"code": "404", "method": "GET"}).Observe(42.0)
func (m *SummaryVec) With(labels Labels) Summary {
return m.MetricVec.With(labels).(Summary)
}

314
prometheus/summary_test.go Normal file
View File

@ -0,0 +1,314 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"math"
"math/rand"
"sort"
"sync"
"testing"
"testing/quick"
"time"
dto "github.com/prometheus/client_model/go"
)
func benchmarkSummaryObserve(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewSummary(SummaryOpts{})
for i := 0; i < w; i++ {
go func() {
g.Wait()
for i := 0; i < b.N; i++ {
s.Observe(float64(i))
}
wg.Done()
}()
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkSummaryObserve1(b *testing.B) {
benchmarkSummaryObserve(1, b)
}
func BenchmarkSummaryObserve2(b *testing.B) {
benchmarkSummaryObserve(2, b)
}
func BenchmarkSummaryObserve4(b *testing.B) {
benchmarkSummaryObserve(4, b)
}
func BenchmarkSummaryObserve8(b *testing.B) {
benchmarkSummaryObserve(8, b)
}
func benchmarkSummaryWrite(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewSummary(SummaryOpts{})
for i := 0; i < 1000000; i++ {
s.Observe(float64(i))
}
for j := 0; j < w; j++ {
outs := make([]dto.Metric, b.N)
go func(o []dto.Metric) {
g.Wait()
for i := 0; i < b.N; i++ {
s.Write(&o[i])
}
wg.Done()
}(outs)
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkSummaryWrite1(b *testing.B) {
benchmarkSummaryWrite(1, b)
}
func BenchmarkSummaryWrite2(b *testing.B) {
benchmarkSummaryWrite(2, b)
}
func BenchmarkSummaryWrite4(b *testing.B) {
benchmarkSummaryWrite(4, b)
}
func BenchmarkSummaryWrite8(b *testing.B) {
benchmarkSummaryWrite(8, b)
}
func TestSummaryConcurrency(t *testing.T) {
rand.Seed(42)
it := func(n uint32) bool {
mutations := int(n%10000 + 1)
concLevel := int(n%15 + 1)
total := mutations * concLevel
ε := 0.001
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sum := NewSummary(SummaryOpts{
Name: "test_summary",
Help: "helpless",
Epsilon: ε,
})
allVars := make([]float64, total)
var sampleSum float64
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
allVars[i*mutations+j] = v
sampleSum += v
}
go func(vals []float64) {
start.Wait()
for _, v := range vals {
sum.Observe(v)
}
end.Done()
}(vals)
}
sort.Float64s(allVars)
start.Done()
end.Wait()
m := &dto.Metric{}
sum.Write(m)
if got, want := int(*m.Summary.SampleCount), total; got != want {
t.Errorf("got sample count %d, want %d", got, want)
}
if got, want := *m.Summary.SampleSum, sampleSum; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f, want %f", got, want)
}
for i, wantQ := range DefObjectives {
gotQ := *m.Summary.Quantile[i].Quantile
gotV := *m.Summary.Quantile[i].Value
min, max := getBounds(allVars, wantQ, ε)
if gotQ != wantQ {
t.Errorf("got quantile %f, want %f", gotQ, wantQ)
}
if (gotV < min || gotV > max) && len(allVars) > 500 { // Avoid statistical outliers.
t.Errorf("got %f for quantile %f, want [%f,%f]", gotV, gotQ, min, max)
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
func TestSummaryVecConcurrency(t *testing.T) {
rand.Seed(42)
it := func(n uint32) bool {
mutations := int(n%10000 + 1)
concLevel := int(n%15 + 1)
ε := 0.001
vecLength := int(n%5 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sum := NewSummaryVec(
SummaryOpts{
Name: "test_summary",
Help: "helpless",
Epsilon: ε,
},
[]string{"label"},
)
allVars := make([][]float64, vecLength)
sampleSums := make([]float64, vecLength)
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
picks := make([]int, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
pick := rand.Intn(vecLength)
picks[j] = pick
allVars[pick] = append(allVars[pick], v)
sampleSums[pick] += v
}
go func(vals []float64) {
start.Wait()
for i, v := range vals {
sum.WithLabelValues(string('A' + picks[i])).Observe(v)
}
end.Done()
}(vals)
}
for _, vars := range allVars {
sort.Float64s(vars)
}
start.Done()
end.Wait()
for i := 0; i < vecLength; i++ {
m := &dto.Metric{}
s := sum.WithLabelValues(string('A' + i))
s.Write(m)
if got, want := int(*m.Summary.SampleCount), len(allVars[i]); got != want {
t.Errorf("got sample count %d for label %c, want %d", got, 'A'+i, want)
}
if got, want := *m.Summary.SampleSum, sampleSums[i]; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f for label %c, want %f", got, 'A'+i, want)
}
for j, wantQ := range DefObjectives {
gotQ := *m.Summary.Quantile[j].Quantile
gotV := *m.Summary.Quantile[j].Value
min, max := getBounds(allVars[i], wantQ, ε)
if gotQ != wantQ {
t.Errorf("got quantile %f for label %c, want %f", gotQ, 'A'+i, wantQ)
}
if (gotV < min || gotV > max) && len(allVars[i]) > 500 { // Avoid statistical outliers.
t.Errorf("got %f for quantile %f for label %c, want [%f,%f]", gotV, gotQ, 'A'+i, min, max)
t.Log(len(allVars[i]))
}
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
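// TestSummaryDecay exercises the MaxAge-based decay of observations: with one
// observation every 100µs and a MaxAge of 10ms, roughly the latest 100
// samples are retained, so the 0.1-quantile is expected to track
// max(i/10, i-90) for the i-th observation, which is the tolerance checked in
// the loop below.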
func TestSummaryDecay(t *testing.T) {
sum := NewSummary(SummaryOpts{
Name: "test_summary",
Help: "helpless",
Epsilon: 0.001,
MaxAge: 10 * time.Millisecond,
Objectives: []float64{0.1},
})
m := &dto.Metric{}
i := 0
tick := time.NewTicker(100 * time.Microsecond)
for _ = range tick.C {
i++
sum.Observe(float64(i))
if i%10 == 0 {
sum.Write(m)
if got, want := *m.Summary.Quantile[0].Value, math.Max(float64(i)/10, float64(i-90)); math.Abs(got-want) > 20 {
t.Errorf("%d. got %f, want %f", i, got, want)
}
m.Reset()
}
if i >= 1000 {
break
}
}
tick.Stop()
}
// getBounds returns the interval within which a q-quantile estimate with
// error tolerance ε may lie for the sorted sample slice vars, i.e. the values
// at the order statistics of rank (q-4ε)·n and (q+4ε)·n.
func getBounds(vars []float64, q, ε float64) (min, max float64) {
lower := int((q - 4*ε) * float64(len(vars)))
upper := int((q+4*ε)*float64(len(vars))) + 1
min = vars[0]
if lower > 0 {
min = vars[lower]
}
max = vars[len(vars)-1]
if upper < len(vars)-1 {
max = vars[upper]
}
return
}


@ -1,158 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"fmt"
"math"
"sync"
)
const (
lowerThird = 1.0 / 3.0
upperThird = 2.0 * lowerThird
)
// A TallyingIndexEstimator is responsible for estimating the value at a given
// sample index for a TallyingBucket, even though a TallyingBucket does not
// retain the individual samples. The strategies listed below describe how
// this value may be approximated.
type TallyingIndexEstimator func(minimum, maximum float64, index, observations int) float64
// Provide a filter for handling empty buckets.
func emptyFilter(e TallyingIndexEstimator) TallyingIndexEstimator {
return func(minimum, maximum float64, index, observations int) float64 {
if observations == 0 {
return math.NaN()
}
return e(minimum, maximum, index, observations)
}
}
var (
minimumEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return minimum
})
maximumEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return maximum
})
averageEstimator = emptyFilter(func(minimum, maximum float64, _, observations int) float64 {
return AverageReducer([]float64{minimum, maximum})
})
uniformEstimator = emptyFilter(func(minimum, maximum float64, index, observations int) float64 {
if observations == 1 {
return minimum
}
location := float64(index) / float64(observations)
if location > upperThird {
return maximum
} else if location < lowerThird {
return minimum
}
return AverageReducer([]float64{minimum, maximum})
})
)
// These are the canned TallyingIndexEstimators.
var (
// Report the smallest observed value in the bucket.
MinimumEstimator = minimumEstimator
// Report the largest observed value in the bucket.
MaximumEstimator = maximumEstimator
// Report the average of the extrema.
AverageEstimator = averageEstimator
// Report the minimum if the index falls in the lower third of the
// observations, the average of the extrema if in the middle third, and the
// maximum if in the upper third.
UniformEstimator = uniformEstimator
)
// A TallyingBucket is a Bucket that tallies when an object is added to it.
// Upon insertion, an object is compared against collected extrema and noted
// as a new minimum or maximum if appropriate.
type TallyingBucket struct {
estimator TallyingIndexEstimator
largestObserved float64
mutex sync.RWMutex
observations int
smallestObserved float64
}
func (b *TallyingBucket) Add(value float64) {
b.mutex.Lock()
defer b.mutex.Unlock()
b.observations += 1
b.smallestObserved = math.Min(value, b.smallestObserved)
b.largestObserved = math.Max(value, b.largestObserved)
}
func (b TallyingBucket) String() string {
b.mutex.RLock()
defer b.mutex.RUnlock()
observations := b.observations
if observations == 0 {
return fmt.Sprintf("[TallyingBucket (Empty)]")
}
return fmt.Sprintf("[TallyingBucket (%f, %f); %d items]", b.smallestObserved, b.largestObserved, observations)
}
func (b TallyingBucket) Observations() int {
b.mutex.RLock()
defer b.mutex.RUnlock()
return b.observations
}
func (b TallyingBucket) ValueForIndex(index int) float64 {
b.mutex.RLock()
defer b.mutex.RUnlock()
return b.estimator(b.smallestObserved, b.largestObserved, index, b.observations)
}
func (b *TallyingBucket) Reset() {
b.mutex.Lock()
defer b.mutex.Unlock()
b.largestObserved = math.SmallestNonzeroFloat64
b.observations = 0
b.smallestObserved = math.MaxFloat64
}
// Produce a TallyingBucket with sane defaults.
func DefaultTallyingBucket() TallyingBucket {
return TallyingBucket{
estimator: MinimumEstimator,
largestObserved: math.SmallestNonzeroFloat64,
smallestObserved: math.MaxFloat64,
}
}
func CustomTallyingBucket(estimator TallyingIndexEstimator) TallyingBucket {
return TallyingBucket{
estimator: estimator,
largestObserved: math.SmallestNonzeroFloat64,
smallestObserved: math.MaxFloat64,
}
}
// This is used strictly for testing.
func tallyingBucketBuilder() Bucket {
b := DefaultTallyingBucket()
return &b
}


@ -1,71 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
. "github.com/matttproud/gocheck"
)
func (s *S) TestTallyingPercentileEstimatorMinimum(c *C) {
c.Assert(MinimumEstimator(-2, -1, 0, 0), IsNaN)
c.Check(MinimumEstimator(-2, -1, 0, 1), Equals, -2.0)
}
func (s *S) TestTallyingPercentileEstimatorMaximum(c *C) {
c.Assert(MaximumEstimator(-2, -1, 0, 0), IsNaN)
c.Check(MaximumEstimator(-2, -1, 0, 1), Equals, -1.0)
}
func (s *S) TestTallyingPercentilesEstimatorAverage(c *C) {
c.Assert(AverageEstimator(-2, -1, 0, 0), IsNaN)
c.Check(AverageEstimator(-2, -2, 0, 1), Equals, -2.0)
c.Check(AverageEstimator(-1, -1, 0, 1), Equals, -1.0)
c.Check(AverageEstimator(1, 1, 0, 2), Equals, 1.0)
c.Check(AverageEstimator(2, 1, 0, 2), Equals, 1.5)
}
func (s *S) TestTallyingPercentilesEstimatorUniform(c *C) {
c.Assert(UniformEstimator(-5, 5, 0, 0), IsNaN)
c.Check(UniformEstimator(-5, 5, 0, 2), Equals, -5.0)
c.Check(UniformEstimator(-5, 5, 1, 2), Equals, 0.0)
c.Check(UniformEstimator(-5, 5, 2, 2), Equals, 5.0)
}
func (s *S) TestTallyingBucketBuilder(c *C) {
var bucket Bucket = tallyingBucketBuilder()
c.Assert(bucket, Not(IsNil))
}
func (s *S) TestTallyingBucketString(c *C) {
bucket := TallyingBucket{
observations: 3,
smallestObserved: 2.0,
largestObserved: 5.5,
}
c.Check(bucket.String(), Equals, "[TallyingBucket (2.000000, 5.500000); 3 items]")
}
func (s *S) TestTallyingBucketAdd(c *C) {
b := DefaultTallyingBucket()
b.Add(1)
c.Check(b.observations, Equals, 1)
c.Check(b.Observations(), Equals, 1)
c.Check(b.smallestObserved, Equals, 1.0)
c.Check(b.largestObserved, Equals, 1.0)
b.Add(2)
c.Check(b.observations, Equals, 2)
c.Check(b.Observations(), Equals, 2)
c.Check(b.smallestObserved, Equals, 1.0)
c.Check(b.largestObserved, Equals, 2.0)
}


@ -1,44 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
//
package prometheus
import (
"time"
)
// Boilerplate metrics about the metrics reporting subservice. These are only
// exposed if the DefaultRegistry's exporter is hooked into the HTTP request
// handler.
var (
requestCount = NewCounter()
requestLatencyBuckets = LogarithmicSizedBucketsFor(0, 1000)
requestLatency = NewHistogram(&HistogramSpecification{
Starts: requestLatencyBuckets,
BucketBuilder: AccumulatingBucketBuilder(EvictAndReplaceWith(50, AverageReducer), 1000),
ReportablePercentiles: []float64{0.01, 0.05, 0.5, 0.9, 0.99},
})
startTime = NewGauge()
)
func init() {
startTime.Set(nil, float64(time.Now().Unix()))
DefaultRegistry.Register("telemetry_requests_metrics_total", "A counter of the total requests made against the telemetry system.", NilLabels, requestCount)
DefaultRegistry.Register("telemetry_requests_metrics_latency_microseconds", "A histogram of the response latency for requests made against the telemetry system.", NilLabels, requestLatency)
DefaultRegistry.Register("instance_start_time_seconds", "The time at which the current instance started (UTC).", NilLabels, startTime)
}
// This callback accumulates the microsecond duration of the reporting
// framework's overhead such that it can be reported.
var requestLatencyAccumulator = func(began time.Time) {
microseconds := float64(time.Since(began) / time.Microsecond)
requestLatency.Add(nil, microseconds)
}


@ -1,140 +1,121 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"encoding/json"
"fmt"
"sync"
"code.google.com/p/goprotobuf/proto"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/model"
)
import "hash/fnv"
// An Untyped metric represents scalar values without any type implications
// whatsoever. If you need to handle values that cannot be represented by any of
// the existing metric types, you can use an Untyped type and rely on contracts
// outside of Prometheus to ensure that these values are understood correctly.
// Untyped is a Metric that represents a single numerical value that can
// arbitrarily go up and down.
//
// An Untyped metric works the same as a Gauge. The only difference is that no
// type information is implied.
//
// To create Untyped instances, use NewUntyped.
type Untyped interface {
Metric
Set(labels map[string]string, value float64) float64
Collector
// Set sets the Untyped metric to an arbitrary value.
Set(float64)
// Inc increments the Untyped metric by 1.
Inc()
// Dec decrements the Untyped metric by 1.
Dec()
// Add adds the given value to the Untyped metric. (The value can be
// negative, resulting in a decrease.)
Add(float64)
// Sub subtracts the given value from the Untyped metric. (The value can
// be negative, resulting in an increase.)
Sub(float64)
}
type untypedVector struct {
Labels map[string]string `json:"labels"`
Value float64 `json:"value"`
// UntypedOpts is an alias for Opts. See there for doc comments.
type UntypedOpts Opts
// NewUntyped creates a new Untyped metric from the provided UntypedOpts.
func NewUntyped(opts UntypedOpts) Untyped {
return newValue(NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
nil,
opts.ConstLabels,
), UntypedValue, 0)
}
// NewUntyped returns a newly allocated Untyped metric ready to be used.
func NewUntyped() Untyped {
return &untyped{
values: map[uint64]*untypedVector{},
// UntypedVec is a Collector that bundles a set of Untyped metrics that all
// share the same Desc, but have different values for their variable
// labels. This is used if you want to count the same thing partitioned by
// various dimensions. Create instances with NewUntypedVec.
type UntypedVec struct {
MetricVec
}
// NewUntypedVec creates a new UntypedVec based on the provided UntypedOpts and
// partitioned by the given label names. At least one label name must be
// provided.
func NewUntypedVec(opts UntypedOpts, labelNames []string) *UntypedVec {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
labelNames,
opts.ConstLabels,
)
return &UntypedVec{
MetricVec: MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
return newValue(desc, UntypedValue, 0, lvs...)
},
},
}
}
type untyped struct {
mutex sync.RWMutex
values map[uint64]*untypedVector
}
func (metric *untyped) String() string {
formatString := "[Untyped %s]"
metric.mutex.RLock()
defer metric.mutex.RUnlock()
return fmt.Sprintf(formatString, metric.values)
}
func (metric *untyped) Set(labels map[string]string, value float64) float64 {
if labels == nil {
labels = blankLabelsSingleton
// GetMetricWithLabelValues replaces the method of the same name in
// MetricVec. The difference is that this method returns an Untyped and not a
// Metric so that no type conversion is required.
func (m *UntypedVec) GetMetricWithLabelValues(lvs ...string) (Untyped, error) {
metric, err := m.MetricVec.GetMetricWithLabelValues(lvs...)
if metric != nil {
return metric.(Untyped), err
}
return nil, err
}
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
if original, ok := metric.values[signature]; ok {
original.Value = value
} else {
metric.values[signature] = &untypedVector{
Labels: labels,
Value: value,
}
// GetMetricWith replaces the method of the same name in MetricVec. The
// difference is that this method returns an Untyped and not a Metric so that no
// type conversion is required.
func (m *UntypedVec) GetMetricWith(labels Labels) (Untyped, error) {
metric, err := m.MetricVec.GetMetricWith(labels)
if metric != nil {
return metric.(Untyped), err
}
return value
return nil, err
}
func (metric *untyped) Reset(labels map[string]string) {
signature := model.LabelValuesToSignature(labels)
metric.mutex.Lock()
defer metric.mutex.Unlock()
delete(metric.values, signature)
// WithLabelValues works as GetMetricWithLabelValues, but panics where
// GetMetricWithLabelValues would have returned an error. By not returning an
// error, WithLabelValues allows shortcuts like
// myVec.WithLabelValues("404", "GET").Add(42)
func (m *UntypedVec) WithLabelValues(lvs ...string) Untyped {
return m.MetricVec.WithLabelValues(lvs...).(Untyped)
}
func (metric *untyped) ResetAll() {
metric.mutex.Lock()
defer metric.mutex.Unlock()
for key, value := range metric.values {
for label := range value.Labels {
delete(value.Labels, label)
}
delete(metric.values, key)
}
}
func (metric *untyped) MarshalJSON() ([]byte, error) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
values := make([]*untypedVector, 0, len(metric.values))
for _, value := range metric.values {
values = append(values, value)
}
return json.Marshal(map[string]interface{}{
typeKey: untypedTypeValue,
valueKey: values,
})
}
func (metric *untyped) dumpChildren(f *dto.MetricFamily) {
metric.mutex.RLock()
defer metric.mutex.RUnlock()
f.Type = dto.MetricType_UNTYPED.Enum()
for _, child := range metric.values {
c := &dto.Untyped{
Value: proto.Float64(child.Value),
}
m := &dto.Metric{
Untyped: c,
}
for name, value := range child.Labels {
p := &dto.LabelPair{
Name: proto.String(name),
Value: proto.String(value),
}
m.Label = append(m.Label, p)
}
f.Metric = append(f.Metric, m)
}
// With works as GetMetricWith, but panics where GetMetricWith would have
// returned an error. By not returning an error, With allows shortcuts like
// myVec.With(Labels{"code": "404", "method": "GET"}).Add(42)
func (m *UntypedVec) With(labels Labels) Untyped {
return m.MetricVec.With(labels).(Untyped)
}
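// A small usage sketch for UntypedVec (a sketch only; the metric name, help
// string, and label values below are invented for illustration):
func exampleUntypedVecUsage() {
	probeValues := NewUntypedVec(
		UntypedOpts{
			Name: "probe_value",
			Help: "Last value reported by each probe.",
		},
		[]string{"probe"},
	)
	probeValues.WithLabelValues("probe1").Set(3.14)
	probeValues.With(Labels{"probe": "probe2"}).Add(42)
}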


@ -1,154 +0,0 @@
// Copyright (c) 2013, Prometheus Team
// All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package prometheus
import (
"encoding/json"
"testing"
"github.com/prometheus/client_golang/test"
)
func testUntyped(t test.Tester) {
type input struct {
steps []func(g Untyped)
}
type output struct {
value string
}
var scenarios = []struct {
in input
out output
}{
{
in: input{
steps: []func(g Untyped){},
},
out: output{
value: `{"type":"untyped","value":[]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(nil, 1)
},
},
},
out: output{
value: `{"type":"untyped","value":[{"labels":{},"value":1}]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(map[string]string{}, 2)
},
},
},
out: output{
value: `{"type":"untyped","value":[{"labels":{},"value":2}]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(map[string]string{}, 3)
},
func(g Untyped) {
g.Set(map[string]string{}, 5)
},
},
},
out: output{
value: `{"type":"untyped","value":[{"labels":{},"value":5}]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Untyped) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Untyped) {
g.Reset(map[string]string{"handler": "/bar"})
},
},
},
out: output{
value: `{"type":"untyped","value":[{"labels":{"handler":"/foo"},"value":13}]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(map[string]string{"handler": "/foo"}, 13)
},
func(g Untyped) {
g.Set(map[string]string{"handler": "/bar"}, 17)
},
func(g Untyped) {
g.ResetAll()
},
},
},
out: output{
value: `{"type":"untyped","value":[]}`,
},
},
{
in: input{
steps: []func(g Untyped){
func(g Untyped) {
g.Set(map[string]string{"handler": "/foo"}, 19)
},
},
},
out: output{
value: `{"type":"untyped","value":[{"labels":{"handler":"/foo"},"value":19}]}`,
},
},
}
for i, scenario := range scenarios {
untyped := NewUntyped()
for _, step := range scenario.in.steps {
step(untyped)
}
bytes, err := json.Marshal(untyped)
if err != nil {
t.Errorf("%d. could not marshal into JSON %s", i, err)
continue
}
asString := string(bytes)
if scenario.out.value != asString {
t.Errorf("%d. expected %q, got %q", i, scenario.out.value, asString)
}
}
}
func TestUntyped(t *testing.T) {
testUntyped(t)
}
func BenchmarkUntyped(b *testing.B) {
for i := 0; i < b.N; i++ {
testUntyped(b)
}
}

prometheus/value.go Normal file

@ -0,0 +1,193 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"errors"
"fmt"
"sort"
"sync"
dto "github.com/prometheus/client_model/go"
"code.google.com/p/goprotobuf/proto"
)
// ValueType is an enumeration of metric types that represent a simple value.
type ValueType int
// Possible values for the ValueType enum.
const (
_ ValueType = iota
CounterValue
GaugeValue
UntypedValue
)
var errInconsistentCardinality = errors.New("inconsistent label cardinality")
// value is a generic metric for simple values. It implements Metric, Collector,
// Counter, Gauge, and Untyped. Its effective type is determined by
// ValueType. This is a low-level building block used by the library to back the
// implementations of Counter, Gauge, and Untyped.
type value struct {
SelfCollector
mtx sync.RWMutex
desc *Desc
valType ValueType
val float64
labelPairs []*dto.LabelPair
}
// newValue returns a newly allocated Value with the given Desc, ValueType,
// sample value and label values. It panics if the number of label
// values is different from the number of variable labels in Desc.
func newValue(desc *Desc, valueType ValueType, val float64, labelValues ...string) *value {
if len(labelValues) != len(desc.variableLabels) {
panic(errInconsistentCardinality)
}
result := &value{
desc: desc,
valType: valueType,
val: val,
labelPairs: makeLabelPairs(desc, labelValues),
}
result.Init(result)
return result
}
func (v *value) Desc() *Desc {
return v.desc
}
func (v *value) Set(val float64) {
v.mtx.Lock()
defer v.mtx.Unlock()
v.val = val
}
func (v *value) Inc() {
v.Add(1)
}
func (v *value) Dec() {
v.Add(-1)
}
func (v *value) Add(val float64) {
v.mtx.Lock()
defer v.mtx.Unlock()
v.val += val
}
func (v *value) Sub(val float64) {
v.Add(val * -1)
}
func (v *value) Write(out *dto.Metric) {
v.mtx.RLock()
val := v.val
v.mtx.RUnlock()
populateMetric(v.valType, val, v.labelPairs, out)
}
// NewConstMetric returns a metric with one fixed value that cannot be
// changed. Users of this package will not have much use for it in regular
// operations. However, when implementing custom Collectors, it is useful as a
// throw-away metric that is generated on the fly and sent to Prometheus in
// the Collect method. NewConstMetric returns an error if the length of
// labelValues is not consistent with the variable labels in Desc.
func NewConstMetric(desc *Desc, valueType ValueType, value float64, labelValues ...string) (Metric, error) {
if len(desc.variableLabels) != len(labelValues) {
return nil, errInconsistentCardinality
}
return &constMetric{
desc: desc,
valType: valueType,
val: value,
labelPairs: makeLabelPairs(desc, labelValues),
}, nil
}
// MustNewConstMetric is a version of NewConstMetric that panics where
// NewConstMetric would have returned an error.
func MustNewConstMetric(desc *Desc, valueType ValueType, value float64, labelValues ...string) Metric {
m, err := NewConstMetric(desc, valueType, value, labelValues...)
if err != nil {
panic(err)
}
return m
}
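// An illustrative sketch of the pattern described for NewConstMetric above
// (the collector type, metric name, and label value are invented for this
// example): a custom Collector that emits a throw-away metric via
// MustNewConstMetric on every collection.
type queueLengthCollector struct {
	desc *Desc
}

func newQueueLengthCollector() *queueLengthCollector {
	return &queueLengthCollector{
		desc: NewDesc("queue_length", "Number of items in the work queue.", []string{"queue"}, nil),
	}
}

func (c *queueLengthCollector) Describe(ch chan<- *Desc) {
	ch <- c.desc
}

func (c *queueLengthCollector) Collect(ch chan<- Metric) {
	// A real implementation would read the current length from the
	// instrumented system instead of hard-coding it.
	ch <- MustNewConstMetric(c.desc, GaugeValue, 42, "default")
}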
type constMetric struct {
desc *Desc
valType ValueType
val float64
labelPairs []*dto.LabelPair
}
func (m *constMetric) Desc() *Desc {
return m.desc
}
func (m *constMetric) Write(out *dto.Metric) {
populateMetric(m.valType, m.val, m.labelPairs, out)
}
func populateMetric(
t ValueType,
v float64,
labelPairs []*dto.LabelPair,
m *dto.Metric,
) {
m.Label = labelPairs
switch t {
case CounterValue:
m.Counter = &dto.Counter{Value: proto.Float64(v)}
case GaugeValue:
m.Gauge = &dto.Gauge{Value: proto.Float64(v)}
case UntypedValue:
m.Untyped = &dto.Untyped{Value: proto.Float64(v)}
default:
panic(fmt.Errorf("encountered unknown type %v", t))
}
}
func makeLabelPairs(desc *Desc, labelValues []string) []*dto.LabelPair {
totalLen := len(desc.variableLabels) + len(desc.constLabelPairs)
if totalLen == 0 {
// Super fast path.
return nil
}
if len(desc.variableLabels) == 0 {
// Moderately fast path.
return desc.constLabelPairs
}
labelPairs := make([]*dto.LabelPair, 0, totalLen)
for i, n := range desc.variableLabels {
labelPairs = append(labelPairs, &dto.LabelPair{
Name: proto.String(n),
Value: proto.String(labelValues[i]),
})
}
for _, lp := range desc.constLabelPairs {
labelPairs = append(labelPairs, lp)
}
sort.Sort(LabelPairSorter(labelPairs))
return labelPairs
}
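// Note that makeLabelPairs merges variable and constant labels and sorts the
// result via LabelPairSorter, so a metric always reports its label pairs in a
// stable order.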

prometheus/vec.go Normal file

@ -0,0 +1,241 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"bytes"
"fmt"
"hash"
"sync"
)
// MetricVec is a Collector to bundle metrics of the same name that
// differ in their label values. MetricVec is usually not used directly but as a
// building block for implementations of vectors of a given metric
// type. GaugeVec, CounterVec, SummaryVec, and UntypedVec are examples already
// provided in this package.
type MetricVec struct {
mtx sync.RWMutex // Protects not only children, but also hash and buf.
children map[uint64]Metric
desc *Desc
// hash is our own hash instance to avoid repeated allocations.
hash hash.Hash64
// buf is used to copy string contents into it for hashing,
// again to avoid allocations.
buf bytes.Buffer
newMetric func(labelValues ...string) Metric
}
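// Concrete vector types embed MetricVec and supply desc, hash, and a
// newMetric constructor for their element type; NewUntypedVec in untyped.go
// shows that pattern, and the tests in vec_test.go build a MetricVec literal
// the same way.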
// Describe implements Collector. It sends exactly one Desc (the one
// describing this vector) to the provided channel.
func (m *MetricVec) Describe(ch chan<- *Desc) {
ch <- m.desc
}
// Collect implements Collector.
func (m *MetricVec) Collect(ch chan<- Metric) {
m.mtx.RLock()
defer m.mtx.RUnlock()
for _, metric := range m.children {
ch <- metric
}
}
// GetMetricWithLabelValues returns the Metric for the given slice of label
// values (same order as the VariableLabels in Desc). If that combination of
// label values is accessed for the first time, a new Metric is created.
// Keeping the Metric for later use is possible (and should be considered if
// performance is critical), but keep in mind that Reset, DeleteLabelValues and
// Delete can be used to delete the Metric from the MetricVec. In that case, the
// Metric will still exist, but it will not be exported anymore, even if a
// Metric with the same label values is created later. See also the CounterVec
// example.
//
// An error is returned if the number of label values is not the same as the
// number of VariableLabels in Desc.
//
// Note that for more than one label value, this method is prone to mistakes
// caused by an incorrect order of arguments. Consider GetMetricWith(Labels) as
// an alternative to avoid that type of mistake. For higher label numbers, the
// latter has a much more readable (albeit more verbose) syntax, but it comes
// with a performance overhead (for creating and processing the Labels map).
// See also the GaugeVec example.
func (m *MetricVec) GetMetricWithLabelValues(lvs ...string) (Metric, error) {
m.mtx.Lock()
defer m.mtx.Unlock()
h, err := m.hashLabelValues(lvs)
if err != nil {
return nil, err
}
return m.getOrCreateMetric(h, lvs...), nil
}
// GetMetricWith returns the Metric for the given Labels map (the label names
// must match those of the VariableLabels in Desc). If that label map is
// accessed for the first time, a new Metric is created. Implications of keeping
// the Metric are the same as for GetMetricWithLabelValues.
//
// An error is returned if the number and names of the Labels are inconsistent
// with those of the VariableLabels in Desc.
//
// This method is used for the same purpose as
// GetMetricWithLabelValues(...string). See there for pros and cons of the two
// methods.
func (m *MetricVec) GetMetricWith(labels Labels) (Metric, error) {
m.mtx.Lock()
defer m.mtx.Unlock()
h, err := m.hashLabels(labels)
if err != nil {
return nil, err
}
lvs := make([]string, len(labels))
for i, label := range m.desc.variableLabels {
lvs[i] = labels[label]
}
return m.getOrCreateMetric(h, lvs...), nil
}
// WithLabelValues works as GetMetricWithLabelValues, but panics if an error
// occurs. The method allows neat syntax like:
// httpReqs.WithLabelValues("404", "POST").Inc()
func (m *MetricVec) WithLabelValues(lvs ...string) Metric {
metric, err := m.GetMetricWithLabelValues(lvs...)
if err != nil {
panic(err)
}
return metric
}
// With works as GetMetricWith, but panics if an error occurs. The method allows
// neat syntax like:
// httpReqs.With(Labels{"status":"404", "method":"POST"}).Inc()
func (m *MetricVec) With(labels Labels) Metric {
metric, err := m.GetMetricWith(labels)
if err != nil {
panic(err)
}
return metric
}
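// A sketch of the two access styles described above, using a CounterVec (the
// metric name and label values are invented for illustration). WithLabelValues
// depends on the order of the label values; With does not, at the cost of
// building a Labels map.
func exampleVecAccess() {
	httpReqs := NewCounterVec(
		CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests.",
		},
		[]string{"code", "method"},
	)
	httpReqs.WithLabelValues("404", "POST").Inc()
	httpReqs.With(Labels{"method": "POST", "code": "404"}).Inc()
}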
// DeleteLabelValues removes the metric where the variable labels are the same
// as those passed in as labels (same order as the VariableLabels in Desc). It
// returns true if a metric was deleted.
//
// It is not an error if the number of label values is not the same as the
// number of VariableLabels in Desc. However, such inconsistent label count can
// never match an actual Metric, so the method will always return false in that
// case.
//
// Note that for more than one label value, this method is prone to mistakes
// caused by an incorrect order of arguments. Consider Delete(Labels) as an
// alternative to avoid that type of mistake. For higher label numbers, the
// latter has a much more readable (albeit more verbose) syntax, but it comes
// with a performance overhead (for creating and processing the Labels map).
// See also the CounterVec example.
func (m *MetricVec) DeleteLabelValues(lvs ...string) bool {
m.mtx.Lock()
defer m.mtx.Unlock()
h, err := m.hashLabelValues(lvs)
if err != nil {
return false
}
if _, has := m.children[h]; !has {
return false
}
delete(m.children, h)
return true
}
// Delete deletes the metric where the variable labels are the same as those
// passed in as labels. It returns true if a metric was deleted.
//
// It is not an error if the number and names of the Labels are inconsistent
// with those of the VariableLabels in the Desc of the MetricVec. However, such
// inconsistent Labels can never match an actual Metric, so the method will
// always return false in that case.
//
// This method is used for the same purpose as DeleteLabelValues(...string). See
// there for pros and cons of the two methods.
func (m *MetricVec) Delete(labels Labels) bool {
m.mtx.Lock()
defer m.mtx.Unlock()
h, err := m.hashLabels(labels)
if err != nil {
return false
}
if _, has := m.children[h]; !has {
return false
}
delete(m.children, h)
return true
}
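// Continuing the sketch above: how the two deletion methods behave for an
// existing and an already deleted child metric.
func exampleVecDelete() {
	httpReqs := NewCounterVec(
		CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests.",
		},
		[]string{"code", "method"},
	)
	httpReqs.WithLabelValues("404", "POST").Inc()
	httpReqs.Delete(Labels{"code": "404", "method": "POST"}) // true: the child existed
	httpReqs.DeleteLabelValues("404", "POST")                // false: already deleted
}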
// Reset deletes all metrics in this vector.
func (m *MetricVec) Reset() {
m.mtx.Lock()
defer m.mtx.Unlock()
for h := range m.children {
delete(m.children, h)
}
}
func (m *MetricVec) hashLabelValues(vals []string) (uint64, error) {
if len(vals) != len(m.desc.variableLabels) {
return 0, errInconsistentCardinality
}
m.hash.Reset()
for _, val := range vals {
m.buf.Reset()
m.buf.WriteString(val)
m.hash.Write(m.buf.Bytes())
}
return m.hash.Sum64(), nil
}
func (m *MetricVec) hashLabels(labels Labels) (uint64, error) {
if len(labels) != len(m.desc.variableLabels) {
return 0, errInconsistentCardinality
}
m.hash.Reset()
for _, label := range m.desc.variableLabels {
val, ok := labels[label]
if !ok {
return 0, fmt.Errorf("label name %q missing in label map", label)
}
m.buf.Reset()
m.buf.WriteString(val)
m.hash.Write(m.buf.Bytes())
}
return m.hash.Sum64(), nil
}
func (m *MetricVec) getOrCreateMetric(hash uint64, labelValues ...string) Metric {
metric, ok := m.children[hash]
if !ok {
// Copy labelValues here. If the caller's variadic slice were retained
// directly, it would have to be heap-allocated at every call site, even for
// calls that never reach this branch.
copiedLabelValues := append(make([]string, 0, len(labelValues)), labelValues...)
metric = m.newMetric(copiedLabelValues...)
m.children[hash] = metric
}
return metric
}
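// Note that hashLabelValues and hashLabels both feed the label values to the
// FNV-1a hash in the order of desc.variableLabels, so GetMetricWithLabelValues
// and GetMetricWith resolve to the same child metric for equivalent input.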

prometheus/vec_test.go Normal file

@ -0,0 +1,91 @@
// Copyright 2014 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"hash/fnv"
"testing"
)
func TestDelete(t *testing.T) {
desc := NewDesc("test", "helpless", []string{"l1", "l2"}, nil)
vec := MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
return newValue(desc, UntypedValue, 0, lvs...)
},
}
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Untyped).Set(42)
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Untyped).Set(42)
if got, want := vec.Delete(Labels{"l2": "v2", "l1": "v1"}), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l2": "v2", "l1": "v1"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Untyped).Set(42)
if got, want := vec.Delete(Labels{"l2": "v1", "l1": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l1": "v1"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
}
func TestDeleteLabelValues(t *testing.T) {
desc := NewDesc("test", "helpless", []string{"l1", "l2"}, nil)
vec := MetricVec{
children: map[uint64]Metric{},
desc: desc,
hash: fnv.New64a(),
newMetric: func(lvs ...string) Metric {
return newValue(desc, UntypedValue, 0, lvs...)
},
}
if got, want := vec.DeleteLabelValues("v1", "v2"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Untyped).Set(42)
if got, want := vec.DeleteLabelValues("v1", "v2"), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.DeleteLabelValues("v1", "v2"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Untyped).Set(42)
if got, want := vec.DeleteLabelValues("v2", "v1"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.DeleteLabelValues("v1"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
}


@ -1,40 +0,0 @@
// Copyright 2013 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package test provides common test helpers to the client library.
package test
// Tester is an interface implemented by both testing.T and testing.B.
type Tester interface {
Error(args ...interface{})
Errorf(format string, args ...interface{})
Fatal(args ...interface{})
Fatalf(format string, args ...interface{})
}
// ErrorEqual compares Go errors for equality.
func ErrorEqual(left, right error) bool {
if left == right {
return true
}
if left != nil && right != nil {
if left.Error() == right.Error() {
return true
}
return false
}
return false
}


@ -18,13 +18,12 @@ import (
"math"
"strings"
"testing"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/test"
"code.google.com/p/goprotobuf/proto"
dto "github.com/prometheus/client_model/go"
)
func testCreate(t test.Tester) {
func testCreate(t testing.TB) {
var scenarios = []struct {
in *dto.MetricFamily
out string
@ -257,7 +256,7 @@ func BenchmarkCreate(b *testing.B) {
}
}
func testCreateError(t test.Tester) {
func testCreateError(t testing.TB) {
var scenarios = []struct {
in *dto.MetricFamily
err string


@ -17,15 +17,14 @@ import (
"math"
"strings"
"testing"
"code.google.com/p/goprotobuf/proto"
"github.com/prometheus/client_golang/test"
"code.google.com/p/goprotobuf/proto"
dto "github.com/prometheus/client_model/go"
)
var parser Parser
func testParse(t test.Tester) {
func testParse(t testing.TB) {
var scenarios = []struct {
in string
out []*dto.MetricFamily
@ -379,7 +378,7 @@ func BenchmarkParse(b *testing.B) {
}
}
func testParseError(t test.Tester) {
func testParseError(t testing.TB) {
var scenarios = []struct {
in string
err string