In principle, we needed to iterate through all permutations, mirroring
what is happening in the code. For lack of time, I only picked one of
the currently buggy cases.
As said, this really needs code generation, should we ever find
ourselves touching this again.
Previously, the pickDelegator function was not returning a
*hijackerDelegator so the return value did not implement the Hijacker
interface. As a result, code that attempts to hijack the connection
would fail when using a type assertion.
All the other cases returned the hijackerDelegator correctly.
Original discussion see
https://github.com/prometheus/client_golang/pull/362 .
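The kind of code that trips over this is a handler hijacking the connection via a type assertion on the wrapped ResponseWriter. A hedged sketch (not taken from the library or its tests):

```go
package main

import "net/http"

// hijackingHandler hijacks the connection via a type assertion on the
// (possibly wrapped) http.ResponseWriter. If the delegator returned by
// pickDelegator does not implement http.Hijacker, the assertion fails and
// the handler bails out with an error.
func hijackingHandler(w http.ResponseWriter, r *http.Request) {
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "response writer does not support hijacking", http.StatusInternalServerError)
		return
	}
	conn, bufrw, err := hj.Hijack()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer conn.Close()
	bufrw.WriteString("HTTP/1.1 101 Switching Protocols\r\n\r\n")
	bufrw.Flush()
}

func main() {
	http.ListenAndServe(":8080", http.HandlerFunc(hijackingHandler))
}
```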
Assuming that the most frequently used method of a `Gauge` is `Set`
and the most frequently used method of a `Counter` is `Inc`, this
separates the implementation of both metric types. `Inc` and integral
`Add` of a counter are now handled in a separate `uint64`. This would
create a race in `Set`, but luckily, there is no `Set` anymore in a
counter.
All attempts to solve the above race (i.e. to use the same idea for a `Gauge`)
slow down `Set`, so we just stick with the old implementation
(formerly `value`) for `Gauge`.
The idea behind it is described in detail in
https://github.com/prometheus/client_golang/issues/320 .
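A minimal sketch of the idea, with hypothetical names and not the library's actual code:

```go
package main

import (
	"math"
	"sync/atomic"
)

// counterSketch illustrates the split described above: Inc and integral Add
// use a plain atomic uint64, while non-integral Add falls back to a CAS loop
// on the float bits. A Set method would race between the two fields, which is
// why this only works for a Counter (no Set) and not for a Gauge.
type counterSketch struct {
	valInt  uint64 // sum of all integral increments
	valBits uint64 // math.Float64bits of the non-integral part
}

func (c *counterSketch) Inc() { atomic.AddUint64(&c.valInt, 1) }

func (c *counterSketch) Add(v float64) {
	// The real Counter rejects negative values; that check is omitted here.
	if ival := uint64(v); float64(ival) == v {
		atomic.AddUint64(&c.valInt, ival)
		return
	}
	for {
		oldBits := atomic.LoadUint64(&c.valBits)
		newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
		if atomic.CompareAndSwapUint64(&c.valBits, oldBits, newBits) {
			return
		}
	}
}

func (c *counterSketch) value() float64 {
	return math.Float64frombits(atomic.LoadUint64(&c.valBits)) +
		float64(atomic.LoadUint64(&c.valInt))
}

func main() {
	c := &counterSketch{}
	c.Inc()
	c.Add(2)
	c.Add(0.5)
	_ = c.value() // 3.5
}
```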
This commit also updates the example given in
promhttp/instrument_server_test.go , which nicely illustrates the
benefit of this change.
So far, currying could be emulated by creating different metric vecs
with different values in their ConstLabels - which is essentially what
was done in the example mentioned above. This was quite difficult to
grasp. Now that this use case can be solved without ConstLabels, we
can safely declare ConstLabels as rarely used. (Perhaps we can
deprecate them entirely one day, but I'll take a raincheck on that
when the changes of v0.10 have materialized.) This commit thus also
updates the ConstLabel doc comments in the various Opts. (It contained
fairly outdated stuff anyway.)
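For illustration, a hedged sketch of what currying enables, assuming the MustCurryWith method that the currying work introduces (metric names are made up):

```go
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	// One vec for all handlers instead of one vec per handler with a
	// "handler" ConstLabel baked in.
	httpRequests := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests.",
		},
		[]string{"handler", "code"},
	)
	prometheus.MustRegister(httpRequests)

	// Curry the "handler" label once; the curried vec only needs "code".
	pushRequests := httpRequests.MustCurryWith(prometheus.Labels{"handler": "push"})
	pushRequests.WithLabelValues("200").Inc() // handler="push", code="200"
}
```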
The "panic in case of error" code was so far in metricVec. This pulls
it up into the exported types like CounterVec. This is code
replication, but it avoids an explicit type conversion. Mostly,
however, this is preparation to make the wrapped metricVec an
interface (required for curried vec's).
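Conceptually, the pulled-up code is just the familiar panic wrapper, now repeated in each exported vec type rather than living once in metricVec. A sketch of the shape, as it would appear inside the prometheus package:

```go
// WithLabelValues is GetMetricWithLabelValues plus a panic on error; the same
// shape is repeated for GaugeVec, SummaryVec, HistogramVec, and the
// label-map variants.
func (v *CounterVec) WithLabelValues(lvs ...string) Counter {
	c, err := v.GetMetricWithLabelValues(lvs...)
	if err != nil {
		panic(err)
	}
	return c
}
```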
Specifically @beorn7 pointed out that the previous implementation had
some shortcomings around large numbers. I've changed the code to match
the suggestion in review, as well as added a few test cases.
MetricVec is unexported by now. godoc handles that correctly by
showing the methods of the embedded unexported metricVec with the
exported type (CounterVec, SummaryVec, ...) that embeds metricVec.
This is in preparation for "curried" metric vecs, as discussed.
And it's a good thing anyway. The exported MetricVec was from a time
when I thought people would define their own Metric types and then
create Vecs of them. That has never happened.
As it turned out, it's not that easy to guess "common" combinations of
interface upgrades. So I decided to just implement all 32 possible
combinations of interface upgrades. (Only 16 with Go 1.7 and earlier.)
Clearly, this calls for code generation. But right now, we still need
to find out what's the best form of the code. For later additions,
implementing code generation might be useful.
Note that newDelegator is called for each HTTP request. Thus, this
commit aims to make the upgrade selection quick. (After the type
checks, it's just directly accessing an element in a slice.)
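A hedged sketch of how that per-request selection can be made cheap (delegatorID is a hypothetical, simplified helper; the real code builds the delegator directly):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// Each optional interface contributes one bit; the resulting id indexes into
// a slice of 32 pre-built delegator constructors (16 before Go 1.8, which
// lacks http.Pusher), so the upgrade selection is a single slice access.
const (
	closeNotifier = 1 << iota
	flusher
	hijacker
	readerFrom
	pusher
)

func delegatorID(w http.ResponseWriter) int {
	id := 0
	if _, ok := w.(http.CloseNotifier); ok {
		id |= closeNotifier
	}
	if _, ok := w.(http.Flusher); ok {
		id |= flusher
	}
	if _, ok := w.(http.Hijacker); ok {
		id |= hijacker
	}
	if _, ok := w.(io.ReaderFrom); ok {
		id |= readerFrom
	}
	if _, ok := w.(http.Pusher); ok {
		id |= pusher
	}
	return id // index into the slice of pre-built constructors (not shown)
}

func main() {
	// httptest.ResponseRecorder only implements http.Flusher, so this prints 2.
	fmt.Println(delegatorID(httptest.NewRecorder()))
}
```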
The example method is assumed to be used as a main() function. As a main()
function doesn't have any return values, the example doesn't compile and
is invalid.
This also updates all tests and examples to use explicitly set
objectives.
In v0.10, DefObjectives will be completely removed, and the default
Summary will have no objectives then.
Fixes #118
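A minimal sketch of a Summary with explicitly set objectives (metric name and values are made up, but the Objectives field is the one referred to above):

```go
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	// Objectives set explicitly instead of relying on DefObjectives (which is
	// deprecated and will be gone in v0.10).
	requestDurations := prometheus.NewSummary(prometheus.SummaryOpts{
		Name:       "http_request_duration_seconds",
		Help:       "HTTP request latencies in seconds.",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	})
	prometheus.MustRegister(requestDurations)
	requestDurations.Observe(0.42)
}
```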
This finally makes the (presumably) common simple case as simple as it
gets.
The still quite common (but less common) case of using a Gauge is
slightly more verbose now, but not needing to provide a separate
constructor is totally worth it.
Finally, the advanced use case is not really more verbose than in my
original suggestion. However, the logic to decide which Observer to
use is now all in the ObserverFunc handed in at construction
time. This is deliberate and desired. It makes sure the selection
mechanism is all spelled out there. No surprises buried deep in the
function code somewhere.
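A hedged sketch of both the Gauge case and the advanced case, assuming the NewTimer/ObserverFunc API this work introduces (metric names and the status-based selection are made up):

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	apiDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "api_request_duration_seconds",
		Help: "Duration of API requests.",
	})
	errorDuration = prometheus.NewSummary(prometheus.SummaryOpts{
		Name:       "failed_request_duration_seconds",
		Help:       "Duration of failed requests.",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	})
	lastDuration = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "last_request_duration_seconds",
		Help: "Duration of the most recent request.",
	})
)

func handler(w http.ResponseWriter, r *http.Request) {
	status := http.StatusOK

	// The "Gauge case": slightly more verbose, but no separate constructor.
	gaugeTimer := prometheus.NewTimer(prometheus.ObserverFunc(lastDuration.Set))
	defer gaugeTimer.ObserveDuration()

	// The "advanced case": which Observer receives the observation is decided
	// in the ObserverFunc handed in at construction time, not buried somewhere
	// in the handler body.
	timer := prometheus.NewTimer(prometheus.ObserverFunc(func(v float64) {
		if status >= 500 {
			errorDuration.Observe(v)
			return
		}
		apiDuration.Observe(v)
	}))
	defer timer.ObserveDuration()

	time.Sleep(10 * time.Millisecond) // stand-in for real work that may change status
	w.WriteHeader(status)
}

func main() {
	prometheus.MustRegister(apiDuration, errorDuration, lastDuration)
	http.ListenAndServe(":8080", http.HandlerFunc(handler))
}
```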
- Deprecate Untyped for direct instrumentation.
- Add a SetToCurrentTime method to Gauge
Note that adding the SetToCurrentTime method is not really following
Go's principle of lean interfaces. However, the Gauge interface is
already quite fat. (The only methods really required are Set and
Add. Everything else could be expressed in terms of those two.) We
traditionally have quite a few "convenience" methods already, so I
think we should stay consistent here.
The alternatives would be:
- Not support SetToCurrentTime at all (it's only a SHOULD in the
guidelines).
- A top level function `SetToCurrentTime(Gauge)`.
- Just a helper `CurrentTime()` that returns the current Unix time in
seconds as a float (which is pretty verbose using the standard
library, see code in this commit). This would allow
`myGauge.Set(CurrentTime())`.
Weighing all circumstances, I believe the way in this commit is the
least evil. Issue #223 could be used to rework interfaces more
fundamentally in a breaking change if feasible.
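For illustration, a hedged sketch of usage (not taken from the commit itself; the metric name is made up), showing how SetToCurrentTime relates to the plain-Set alternative:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	lastCompletion := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "batch_last_completion_timestamp_seconds",
		Help: "Unix time of the last completed batch run.",
	})
	prometheus.MustRegister(lastCompletion)

	// Expressed via Set with only the standard library (the "helper function"
	// alternative would wrap exactly this verbosity):
	lastCompletion.Set(float64(time.Now().UnixNano()) / 1e9)

	// The convenience method added here:
	lastCompletion.SetToCurrentTime()
}
```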
It was all a mess: we had duplicates of the REs for label names and metric
names here, and we sometimes used them, sometimes we used those from
common/model.
Now we are consistently using the fast checking functions from common/model.
(Tests for leading colons are included there, see
https://github.com/prometheus/common/pull/66 .)
That's the "soft" part of the deprecation: Everything that has been
marked deprecated in v0.8 or earlier and is straight-forward to
replace by a non-deprecated way, is removed here.
Sadly, this does not include the HTTP part. We first need to provide a
replacement for HTTP instrumentation (as planned for v0.8) to then
remove the deprecated parts in v0.9.
After increasing unit test coverage, it was found that metric matching,
which is split across several function calls, wasn't working well in many
cases. By increasing test coverage, we've ensured that both the fast
path and fallback collision path are working appropriately.
With these changes, there is a further performance hit, but now the
results are ensured to be correct.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
While hash collisions are quite rare, the current state of the client
library carries a risk of merging two separate label values into a
single metric bucket. The effects are near impossible to detect and the
result will be missing or incorrect counters.
This changeset handles hash collisions by falling back to collision
resolution if multiple label values hash to the same value. This works
similarly to separate chaining using a slice. Extra storage is minimized
to only the label value slice so that metrics can be differentiated
within a bucket.
In general, the cost of handling collisions is completely minimized
under normal operation. Performance does show slight increases in
certain areas, but these are more likely statistical anomalies. More
importantly, zero allocation behavior for metrics is preserved on the
fast path. Minimal allocations may be made during collision handling but
this has minimal effect.
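A hedged sketch of the collision handling described above, with hypothetical types (the real code stores actual Metric values and lives inside the vec implementation):

```go
package main

import "fmt"

// Each hash bucket holds a small slice of entries, and lookups compare the
// actual label values to pick the right entry, so two label sets that happen
// to hash to the same value no longer share one metric.
type bucketEntry struct {
	labelValues []string
	metric      interface{} // the stored metric; interface{} keeps the sketch generic
}

type metricMap map[uint64][]bucketEntry

func (m metricMap) get(hash uint64, lvs []string) (interface{}, bool) {
	for _, e := range m[hash] {
		if equalLabelValues(e.labelValues, lvs) {
			return e.metric, true
		}
	}
	return nil, false
}

func (m metricMap) put(hash uint64, lvs []string, metric interface{}) {
	if _, ok := m.get(hash, lvs); ok {
		return
	}
	m[hash] = append(m[hash], bucketEntry{labelValues: lvs, metric: metric})
}

func equalLabelValues(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	m := metricMap{}
	// Two different label sets that (for the sake of the example) collide on
	// the same hash still end up as distinct entries in the bucket.
	m.put(42, []string{"ab", "c"}, "metric A")
	m.put(42, []string{"a", "bc"}, "metric B")
	fmt.Println(m.get(42, []string{"a", "bc"})) // metric B true
}
```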
Benchmark comparisons with and without collision resolution follow.
```
benchmark old ns/op new ns/op delta
BenchmarkCounterWithLabelValues-4 99.0 107 +8.08%
BenchmarkCounterWithLabelValuesConcurrent-4 79.6 91.0 +14.32%
BenchmarkCounterWithMappedLabels-4 518 542 +4.63%
BenchmarkCounterWithPreparedMappedLabels-4 127 137 +7.87%
BenchmarkCounterNoLabels-4 19.5 19.1 -2.05%
BenchmarkGaugeWithLabelValues-4 97.4 110 +12.94%
BenchmarkGaugeNoLabels-4 12.4 10.3 -16.94%
BenchmarkSummaryWithLabelValues-4 1204 915 -24.00%
BenchmarkSummaryNoLabels-4 936 847 -9.51%
BenchmarkHistogramWithLabelValues-4 147 147 +0.00%
BenchmarkHistogramNoLabels-4 50.6 49.3 -2.57%
BenchmarkHistogramObserve1-4 37.9 37.5 -1.06%
BenchmarkHistogramObserve2-4 122 137 +12.30%
BenchmarkHistogramObserve4-4 310 352 +13.55%
BenchmarkHistogramObserve8-4 691 729 +5.50%
BenchmarkHistogramWrite1-4 3374 3097 -8.21%
BenchmarkHistogramWrite2-4 5310 5051 -4.88%
BenchmarkHistogramWrite4-4 12094 10690 -11.61%
BenchmarkHistogramWrite8-4 19416 17755 -8.55%
BenchmarkHandler-4 11934304 13765894 +15.35%
BenchmarkSummaryObserve1-4 1119 1105 -1.25%
BenchmarkSummaryObserve2-4 3679 3430 -6.77%
BenchmarkSummaryObserve4-4 10678 7982 -25.25%
BenchmarkSummaryObserve8-4 22974 16689 -27.36%
BenchmarkSummaryWrite1-4 25962 14680 -43.46%
BenchmarkSummaryWrite2-4 38019 35073 -7.75%
BenchmarkSummaryWrite4-4 78027 56816 -27.18%
BenchmarkSummaryWrite8-4 117220 132248 +12.82%
BenchmarkMetricVecWithLabelValuesBasic-4 138 133 -3.62%
BenchmarkMetricVecWithLabelValues2Keys10ValueCardinality-4 150 144 -4.00%
BenchmarkMetricVecWithLabelValues4Keys10ValueCardinality-4 263 256 -2.66%
BenchmarkMetricVecWithLabelValues2Keys100ValueCardinality-4 145 155 +6.90%
BenchmarkMetricVecWithLabelValues10Keys100ValueCardinality-4 606 634 +4.62%
BenchmarkMetricVecWithLabelValues10Keys1000ValueCardinality-4 746 641 -14.08%
benchmark old allocs new allocs delta
BenchmarkCounterWithLabelValues-4 0 0 +0.00%
BenchmarkCounterWithLabelValuesConcurrent-4 0 0 +0.00%
BenchmarkCounterWithMappedLabels-4 2 2 +0.00%
BenchmarkCounterWithPreparedMappedLabels-4 0 0 +0.00%
BenchmarkCounterNoLabels-4 0 0 +0.00%
BenchmarkGaugeWithLabelValues-4 0 0 +0.00%
BenchmarkGaugeNoLabels-4 0 0 +0.00%
BenchmarkSummaryWithLabelValues-4 0 0 +0.00%
BenchmarkSummaryNoLabels-4 0 0 +0.00%
BenchmarkHistogramWithLabelValues-4 0 0 +0.00%
BenchmarkHistogramNoLabels-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValuesBasic-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues2Keys10ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues4Keys10ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues2Keys100ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues10Keys100ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues10Keys1000ValueCardinality-4 0 0 +0.00%
benchmark old bytes new bytes delta
BenchmarkCounterWithLabelValues-4 0 0 +0.00%
BenchmarkCounterWithLabelValuesConcurrent-4 0 0 +0.00%
BenchmarkCounterWithMappedLabels-4 336 336 +0.00%
BenchmarkCounterWithPreparedMappedLabels-4 0 0 +0.00%
BenchmarkCounterNoLabels-4 0 0 +0.00%
BenchmarkGaugeWithLabelValues-4 0 0 +0.00%
BenchmarkGaugeNoLabels-4 0 0 +0.00%
BenchmarkSummaryWithLabelValues-4 0 0 +0.00%
BenchmarkSummaryNoLabels-4 0 0 +0.00%
BenchmarkHistogramWithLabelValues-4 0 0 +0.00%
BenchmarkHistogramNoLabels-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValuesBasic-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues2Keys10ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues4Keys10ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues2Keys100ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues10Keys100ValueCardinality-4 0 0 +0.00%
BenchmarkMetricVecWithLabelValues10Keys1000ValueCardinality-4 0 0 +0.00%
```
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This finally allows us to get rid of the infamous injection hook in the
interface. The old SetMetricFamilyInjectionHook still exists as a
deprecated function but is now implemented with the new plumbing under
the hood.
Now that we have multiple Gatherer implementations, I renamed
push.Registry to push.FromGatherer.
This commit also improves the consistency checks, which happened as a
byproduct of the refactoring to allow checking in both the "merge
gatherer" Gatherers as well as in the normal Registry.
To keep backwards compatibility while not creating circular import
chains, some code had to be duplicated. But all functions using it
are hereby declared deprecated.
The new ways of instrumenting handlers will all go into the new
package, and ultimately, the prometheus package itself will be
completely ignorant of HTTP.
- Moved the Deliverer parameter to the end of the list to mirror
Collectors in push.Collectors.
- Improved doc comment and added an example for push.Registry.
Registry is now a struct, which implements two interfaces, Registerer
and Deliverer. The latter is particularly important as it is now the
argument type for pushes and HTTP handler construction (i.e. it is
easy to implement a custom Deliverer for testing or other
purposes). The Registerer interface is not used as a parameter type
but can (and should) be used by users of custom registries so that
they can easily do things like mocking it out for testing purposes.
With the broken up interfaces, adding MustRegister to the interface is
not such a big deal anymore (interface is still small). And since
setting the injection hook is such a rare thing to happen, it is
acceptable to not have it in any of the interfaces.
The renaming from `Collect` to `Deliver` was done to avoid confusion
with Collectors. (The registry _collects_ from the Collectors, and
then _delivers_ to the exposition mechanism.)
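A purely illustrative sketch of that split, as it would appear inside the prometheus package; the exact method sets and signatures are assumptions and not spelled out in this message (the dto import for the protobuf types is assumed as well):

```go
// Registerer is the registration-facing side that instrumentation code (and
// tests mocking it out) interacts with.
type Registerer interface {
	Register(Collector) error
	MustRegister(...Collector)
	Unregister(Collector) bool
}

// Deliverer is the exposition-facing side that pushes and HTTP handler
// construction accept, so it can be swapped out for testing or other purposes.
type Deliverer interface {
	Deliver() ([]*dto.MetricFamily, error)
}
```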
General context and approach
===========================
This is the first part of the long-awaited wider refurbishment of
`client_golang/prometheus/...`. After a lot of struggling, I decided
to not go for one breaking big-bang, but cut things into smaller steps
after all, mostly to keep the changes manageable and easy to
review. I'm aiming for having the invasive breaking changes
concentrated in as few steps as possible (ideally one). Some steps
will not be breaking at all, but typically there will be breaking
changes that only affect quite special cases so that 95+% of users
will not be affected. This first step is an example for that, see
details below.
What's happening in this commit?
================================
This step is about finally creating an exported registry
interface. This could not be done by simply exporting the existing
internal implementation because the interface would be _way_ too
fat. This commit introduces a quite lean `Registry` interface
(compared to the previous internal implementation). The functions that
act on the default registry are retained (with very few exceptions) so
that most use cases won't see a change. However, several of those are
deprecated now to clean up the namespace in the future.
The default registry is kept in the public variable
`DefaultRegistry`. This follows the example of the http package in the
standard library (cf. `http.DefaultServeMux`, `http.DefaultClient`)
with the same implications. (This pattern is somewhat disputed within
the Go community but I chose to go with the devil you know instead of
creating something more complex or even disallowing any changes to the
default registry. The current approach gives everybody the freedom to
not touch DefaultRegistry or to do everything with a custom registry
to play it safe.)
Another important part in making the registry lean is the extraction
of the HTTP exposition, which also allows for customization of the
HTTP exposition. Note that the separation of metric collection and
exposition has the side effect that managing the MetricFamily and
Metric protobuf objects in a free-list or pool isn't really feasible
anymore. By now (with better GC in more recent Go versions), the
returns were diminishing anyway. To be effective at all, scrapes had
to happen more often than GC cycles, and even then most elements of
the protobufs (everything except the MetricFamily and Metric structs
themselves) would still cause allocation churn. In a future breaking
change, the signature of the Write method in the Metric interface will
be adjusted accordingly. In this commit, avoiding breakage is more
important.
The following issues are fixed by this commit (some solved "on the
fly" now that I was touching the code anyway and it would have been
stupid to port the bugs):
https://github.com/prometheus/client_golang/issues/46
https://github.com/prometheus/client_golang/issues/100
https://github.com/prometheus/client_golang/issues/170
https://github.com/prometheus/client_golang/issues/205
Documentation, including examples, has been amended as required.
What future changes does this commit enable?
============================================
The following items are not yet implemented, but this commit opens the
possibility of implementing these independently.
- The separation of the HTTP exposition allows the implementation of
other exposition methods based on the Registry interface, as known
from other Prometheus client libraries, e.g. sending the metrics to
Graphite.
Cf. https://github.com/prometheus/client_golang/issues/197
- The public `Registry` interface allows the implementation of
convenience tools for testing metrics collection. Those tools can
inspect the collected MetricFamily protobufs and compare them to
expectation. Also, tests can use their own testing instance of a
registry.
Cf. https://github.com/prometheus/client_golang/issues/58
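A hedged sketch of that testing use case; the constructor and method names (NewRegistry, Gather) follow the API as it eventually stabilized and are assumptions with respect to this particular commit:

```go
package main

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

func TestRequestCounter(t *testing.T) {
	requests := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "myapp_requests_total",
		Help: "Total requests handled.",
	})

	reg := prometheus.NewRegistry() // a private registry just for this test
	reg.MustRegister(requests)

	requests.Inc()

	families, err := reg.Gather()
	if err != nil {
		t.Fatal(err)
	}
	// Inspect the collected MetricFamily protobufs and compare to expectations.
	if len(families) != 1 || families[0].GetName() != "myapp_requests_total" {
		t.Fatalf("unexpected metric families: %v", families)
	}
}
```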
Notable non-goals of this commit
================================
Non-goals that will be tackled later
------------------------------------
The following two issues are quite closely connected to the changes in
this commit but the line has been drawn deliberately to address them
in later steps of the refurbishment:
- `InstrumentHandler` has many known problems. The plan is to create a
saner way to conveniently instrument HTTP handlers and remove the old
`InstrumentHandler` altogether. To keep breakage low for now, even
the default handler to expose metrics is still using the old
`InstrumentHandler`. This leads to weird naming inconsistencies but
I have deemed it better to not break the world right now but do it
in the change that provides better ways of instrumenting HTTP
handlers.
Cf. https://github.com/prometheus/client_golang/issues/200
- There is work underway to make the whole handling of metric
descriptors (`Desc`) more intuitive and transparent for the user
(including an ability for less strict checking,
cf. https://github.com/prometheus/client_golang/issues/47). That's
quite invasive from the perspective of the internal code, namely the
registry. I deliberately kept those changes out of this commit.
- While this commit adds a new external dependency, the effort to vendor
anything within the library that is not visible in any exported
types will have to be done later.
Non-goals that _might_ be tackled later
---------------------------------------
There is a strong and understandable urge to divide the `prometheus`
package into a number of sub-packages (like `registry`, `collectors`,
`http`, `metrics`, …). However, to not run into a multitude of
circular import chains, this would need to break every single existing
usage of the library. (As just one example, if the ubiquitous
`prometheus.MustRegister` (with more than 2,000 uses on GitHub alone)
is kept in the `prometheus` package, but the other registry concerns
go into a new `registry` package, then the `prometheus` package would
import the `registry` package (to call the actual register method),
while at the same time the `registry` package needs to import the
`prometheus` package to access `Collector`, `Metric`, `Desc` and
more. If we moved `MustRegister` into the `registry` package,
thousands of code lines would have to be fixed (which would be easy if
the world was a mono repo, but it is not). If we moved everything else
the proposed registry package needs into packages of their own, we
would break thousands of other code lines.)
The main problem is really the top-level functions like
`MustRegister`, `Handler`, …, which effectively pull everything into
one package. Those functions are however very convenient for the easy
and very frequent use-cases.
This problem has to be revisited later.
For now, I'm trying to keep the amount of exported names in the
package as low as possible (e.g. I unexported expvarCollector in this
commit because the NewExpvarCollector constructor is enough to export,
and it is now consistent with other collectors, like the goCollector).
Non-goals that won't be tackled anytime soon
--------------------------------------------
Something that I have played with a lot is "streaming collection",
i.e. allow an implementation of the `Registry` interface that collects
metrics incrementally and serves them while doing so. As it has turned
out, this has many, many issues and makes the `Registry` interface very
clunky. Eventually, I made the call that it is unlikely we will really
implement streaming collection; and making the interface more clunky
for something that might not even happen is really a big no-no. Note
that the `Registry` interface only creates the in-memory
representation of the metric family protobufs in one go. The
serialization onto the wire can still be handled in a streaming fashion
(which hasn't been done so far, without causing any trouble, but might
be done in the future without breaking any interfaces).
What are the breaking changes?
==============================
- Signatures of functions pushing to Pushgateway have changed to allow
arbitrary grouping (which was planned for a long time anyway, and
now that I had to work on the Push code anyway for the registry
refurbishment, I finally did it,
cf. https://github.com/prometheus/client_golang/issues/100).
With the gained insight that pushing to the default registry is almost
never the right thing, and now that we are breaking the Push call
anyway, all the Push functions were moved to their own package,
which cleans up the namespace and is more idiomatic (pushing
Collectors is now literally done by `push.Collectors(...)`).
- The registry is doing more consistency checks by default now. Past
creators of inconsistent metrics could have masked the problem by
not setting `EnableCollectChecks`. Those inconsistencies will now be
detected. (But note that a "best effort" metrics collection is now
possible with `HandlerOpts.ErrorHandling = ContinueOnError`.)
- `EnableCollectChecks` is gone. The registry is now performing some
of those checks anyway (see previous item), and a registry with all
of those checks can now be created with `NewPedanticRegistry` (only
used for testing).
- `PanicOnCollectError` is gone. This behavior can now be configured
when creating a custom HTTP handler.
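A hedged sketch of the new push call shape; the exact signature of push.Collectors is not spelled out above and is an assumption here, as are the job name, grouping labels, and Pushgateway URL:

```go
package main

import (
	"log"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "db_backup_last_completion_timestamp_seconds",
		Help: "Unix time of the last completed DB backup.",
	})
	completionTime.Set(float64(time.Now().Unix()))

	// Pushing Collectors is now literally push.Collectors; the grouping map
	// replaces the old fixed job/instance pair.
	if err := push.Collectors(
		"db_backup",
		map[string]string{"instance": "db-1"},
		"http://pushgateway.example.com:9091",
		completionTime,
	); err != nil {
		log.Fatal("could not push to Pushgateway: ", err)
	}
}
```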
This test fails when max_fds is a large value, say 4.5e+06. Change it to match virtual_memory_bytes, which also has to handle large values.
This circumvents the following problem:
• The prometheus client library appends “/metrics/…” to the pushURL.
• When pushURL ends in a trailing slash, the URL becomes e.g.
http://pushgateway.example.com:9091//metrics/…
• The pushgateway will reply with an HTTP 307 status code
(temporary redirect).
• While Go’s net/http client follows redirects by default, it will only
follow HTTP 302 (Found) and HTTP 303 (See Other) redirects for PUT and
POST requests, which the prometheus client library uses.
Hence, when calling e.g.:
prometheus.Push("foo", "bar", "http://pushgateway.example.com:9091/")
…your metrics would not actually get pushed successfully, but rather
you’d see the error message:
2015/11/26 10:59:49 main.go:209: unexpected status code 307 while pushing to http://push...
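A caller-side workaround, as a hedged sketch (not necessarily the fix applied in this commit): strip the trailing slash so the appended "/metrics/…" suffix does not produce a double slash.

```go
package main

import (
	"log"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Normalize the push URL before handing it to the client library.
	pushURL := strings.TrimSuffix("http://pushgateway.example.com:9091/", "/")
	if err := prometheus.Push("foo", "bar", pushURL); err != nil {
		log.Fatal(err)
	}
}
```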
These tests are always timing out in our Jenkins CI environment. We have moved our timeout to 2 minutes. I have added a check for the short flag so others can skip these tests, as we do, until the cause can be identified.
We are running into a timeout with TestHistogramConcurrency on our Jenkins box. I noticed this block in the stack trace for the timeout:
goroutine 2348 [chan send]:
github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.(*goCollector).Collect(0xc20801e8e0, 0xc20800a7e0)
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector.go:49 +0x6dd
github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.func·028()
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector_test.go:27 +0x11a
created by github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.TestGoCollector
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector_test.go:28 +0x35e
This suggested that even though the TestGoCollector test was finished, a goroutine was still hanging around. I traced it back to the call to c.Collect always sending twice on the provided channel. This change receives that second value and allows the goroutine to finish with the test.
Still can't figure out why TestHistogramConcurrency is timing out after 2 minutes :(
- Clarify documentation about sorting requirements.
- Add missing histogram support in consistency check.
- Add label sorting to consistency check.
- Improve error messages when reporting a metric.
(Previously, the metric name was not printed.)
InstrumentHandler provides a proxy object over the given http.ResponseWriter
- responseWriterDelegator - that only implements the bare minimum
required of an http.ResponseWriter and doesn’t implement any interface
upgrades (http.Flusher, http.Hijacker, etc.). This commit fixes that by
providing fancyResponseWriterDelegator with all the fancy bells and
whistles if a standard library http.ResponseWriter is detected.
Heavily inspired by Goji's middleware.
https://avtok.com/2014/11/05/interface-upgrades.html#the-proxy-problem
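A hedged sketch of that all-or-nothing approach, as it would appear inside the package (wrapResponseWriter is a hypothetical helper name; the two delegator types are the ones mentioned above):

```go
// If the wrapped writer supports the usual interface upgrades, hand out the
// fancy delegator; otherwise stick with the bare one.
func wrapResponseWriter(w http.ResponseWriter) http.ResponseWriter {
	d := &responseWriterDelegator{ResponseWriter: w}
	_, cn := w.(http.CloseNotifier)
	_, fl := w.(http.Flusher)
	_, hj := w.(http.Hijacker)
	_, rf := w.(io.ReaderFrom)
	if cn && fl && hj && rf {
		return &fancyResponseWriterDelegator{d}
	}
	return d
}
```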
The cgo dependency in the "procfs" package is being removed in:
https://github.com/prometheus/procfs/pull/4
So now it should be fine to always import the "procfs" package and have
it determine dynamically at runtime whether the proc filesystem is
actually there.
This fixes a problem for users who were vendoring client_golang on
Mac OS X, but building the result on Linux (the procfs package was
missing in the vendored packages).
If a metric family returned by the injection hook already exists (with
the same name), then its metrics are simply merged into that metric
family. With collect-time checks enabled, even uniqueness is checked,
but in general, it remains the case that the caller is responsible for
ensuring metric consistency.
This fixes https://github.com/prometheus/pushgateway/issues/27 .
The new vendoring was produced by running:
godep save -r ./examples/... ./prometheus/... ./text/... ./model/... ./extraction/...
Two things to note:
- "extraction/processor0_0_{1,2}_test.go" imported a package from
"github.com/prometheus/prometheus", all for just one tiny testing
function. To not have to deal with a circular vendoring dependency, I
simply replaced the usage of the function by some in-line logic.
- godep grouped the rewritten imports slightly differently for some
reason, but at least the standard library imports are still in a
separate section. Not sure if it's worth manually keeping our old
import grouping scheme or if we should simply use that godep-generated
one.
As it is expected that the process collector can fail under certain
conditions (for example, proc information for a process that is only
readable by root or another user) and as there is currently no option to configure
the error behavior of the client, this change reverts the error
reporting introduced in 159e96f. This effectively means that errors are
simply ignored and there won't be any samples for the process_* metrics
in case of an error.
Once a user can control how to behave in case of errors returned by
collectors, this change should probably be reverted.
The procfs package has a cgo dependency (necessary to calculate system
times). As procfs is not available under Windows, Darwin and supposedly
all newer BSD systems, this change removes the procfs dependency on these
systems.
If a given pidFn for a process collector can't determine the pid, this
had to be signaled with an invalid pid so far (pid <= 0). In order to
make the error interface clearer for users, this change introduces an
explicit error parameter.
Adjust the API and usage accordingly.
Make tests stricter.
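A hedged sketch of the changed contract; the constructor name NewProcessCollectorPIDFn reflects the later API and is an assumption here, and the pid lookup is a stand-in:

```go
package main

import (
	"fmt"
	"os"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// The pid function now reports failure via an explicit error rather than
	// via pid <= 0.
	pc := prometheus.NewProcessCollectorPIDFn(
		func() (int, error) {
			pid := os.Getpid() // stand-in for e.g. reading a pidfile
			if pid <= 0 {
				return 0, fmt.Errorf("could not determine pid")
			}
			return pid, nil
		},
		"daemon",
	)
	prometheus.MustRegister(pc)
}
```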
Since the merging is still faulty, tests are broken now.
The next commit will fix it by avoiding merging.
Summaries as implemented cannot be aggregated in a meaningful
way. Partitioning them by status code and method only made sense if we
were interested in the individual latency and size of e.g. GET requests
that result in status 503. In general, that's not the case. Most of
the time, the user will be interested in the latency and size of _all_
HTTP requests.
(With future changes to client_golang, we will consider making the
HTTP instrumentation configurable, e.g. to handle the case where the
user is only interested in the latency of successful requests.)
Both are interface changes I want to get in before public
announcement. They only break rare usage cases, and are always easy to
fix, but still we want to avoid breaking changes after a wider
announcement of the project.
The change of Register() simply removes the return of the Collector,
which nobody was using in practice. It was just bloating the call
syntax. Note that this is different from RegisterOrGet(), which is
used at various occasions where you want to register something that
might or might not be registered already, but if it is, you want the
previously registered Collector back (because that's the relevant
one).
WRT error reporting: I first tried the obvious way of letting the
Collector methods Describe() and Collect() return error. However, I
had to conclude that that bloated _many_ calls and their handling in
very obnoxious ways. On the other hand, the case where you actually
want to report errors during registration or collection is very
rare. Hence, this approach has the wrong trade-off. The approach taken
here might at first appear clunky but is in practice quite handy,
mostly because there is almost no change for the "normal" case of "no
special error handling", but also because it plays well with the way
descriptors and metrics are handled (via channels).
Explaining the approach in more detail:
- During registration / describe: Error handling was actually already
in place (for invalid descriptors, which carry an error anyway). I
only added a convenience function to create an invalid descriptor
with a given error on purpose.
- Metrics are now treated in a similar way. The Write method returns
an error now (the only change in interface). An "invalid metric" is
provided that can be sent via the channel to signal that that metric
could not be collected. It also transports an error (see the sketch below).
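A hedged sketch of that reporting mechanism from within a Collector's Collect method; myCollector, readValue, and the desc field are hypothetical, NewInvalidMetric is the "invalid metric" helper mentioned above, and MustNewConstMetric merely stands in for however the healthy metric is produced:

```go
func (c *myCollector) Collect(ch chan<- prometheus.Metric) {
	v, err := c.readValue()
	if err != nil {
		// Signal that this particular metric could not be collected; the
		// error travels along with the invalid metric.
		ch <- prometheus.NewInvalidMetric(c.desc, err)
		return
	}
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, v)
}
```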
NON-GOALS OF THIS COMMIT:
This is NOT yet the major improvement of the whole registry part,
where we want a public Registry interface and plenty of modular
configurations (for error handling, various auto-metrics, http
instrumentation, testing, ...). However, we can do that whole thing
without breaking existing interfaces. For now (which is a significant
issue) any error during collection will either cause a 500 HTTP
response or a panic (depending on registry config). Later, we
definitely want to have a possibility to skip (and only report
somehow) non-collectible metrics instead of aborting the whole scrape.
This change adds two new collectors to the prometheus package which
export metrics about a given or the current process.
* ProcessCollector exports metrics about cpu time, vss, rss, fd usage as
well as the start time of a given process.
* GoCollector currently exports only the number of active goroutines.
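A hedged sketch of wiring the new collectors up; the constructor names are assumptions based on the later API:

```go
package main

import (
	"net/http"
	"os"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Collect metrics about the current process and the Go runtime.
	prometheus.MustRegister(prometheus.NewProcessCollector(os.Getpid(), ""))
	prometheus.MustRegister(prometheus.NewGoCollector())

	http.Handle("/metrics", prometheus.Handler())
	http.ListenAndServe(":8080", nil)
}
```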
computeApproximateRequestSize is run in a goroutine, but the
handlerFunc that runs in parallel may modify the URL, which is also
needed by computeApproximateRequestSize. So get the URL length
beforehand.
Change-Id: Idb84735845afe7be4ef79b3d642d5764f6d26a7c
Also, remove quotes from the Content-Type header. It's not illegal to
have quotes there, but they are not needed, and in other places, we
are not using them. So fewer characters and more consistency.
Change-Id: If7a78bde85154163e4426daec493d973213e83e9
Since we prepare the whole content in a buf before sending, we might as
well set the Content-Length explicitly.
Change-Id: Ifd91764c90af53be49f93f0b33032138130b6f96
This is actually the intended behavior, and (as a nice side effect)
makes things cheaper to calculate.
Also, introduce a separator character to avoid hash collisions
(like label values {"ab","c"} vs {"a", "bc"}).
Apply the same principles to signature.go.
Change-Id: I607db544f278ed89684fe5fa11abdbc3e03d3061
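A hedged sketch of the separator idea (hashLabelValues is a hypothetical helper, not the actual signature.go code): without a separator, {"ab","c"} and {"a","bc"} feed the same bytes to the hash; inserting a byte that cannot occur in valid UTF-8 label values removes the ambiguity.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

func hashLabelValues(values []string) uint64 {
	h := fnv.New64a()
	for _, v := range values {
		h.Write([]byte(v))
		h.Write([]byte{0xff}) // separator: 0xff never occurs in valid UTF-8
	}
	return h.Sum64()
}

func main() {
	fmt.Println(hashLabelValues([]string{"ab", "c"})) // differs from ...
	fmt.Println(hashLabelValues([]string{"a", "bc"})) // ... this one
}
```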
Also, fix seconds to microseconds for the HTTP instrumentation to
match the metric name.
Fix Desc.String().
Simplify http error display.
Change-Id: Ib7397f4eac1eeed92b291e1c9cc88c080aee99ca
This rewrite had many backs and forths. In my git repository, it
consists of 35 commits which I cannot group or merge into reasonable
review buckets. Gerrit breaks fundamental git semantics, so I have to
squash the 35 commits into one for the review.
I'll push this not with refs/for/master, but with refs/for/next so
that we can transition after submission in a controlled fashion.
For the review, I recommend starting by looking at the godoc and in
particular the many examples. After that, continue with a line-by-line
detailed review. (The big picture is hopefully as expected after
wrapping up the discussion earlier.)
Change-Id: Ib38cc46493a5139ca29d84020650929d94cac850
Most important here is the simple & flat text format, but while I'm at
it, I have also added the text representations for protobufs (which is
purely meant for debugging purposes). I hope my basic idea about
handling those various protocols (and the text package) becomes
clearer now.
Change-Id: I7299853eadc82a426101e907f2b3d4e37f9e4c71
The LabelsToSignature function is now used outside of the prometheus
package, too. Leaving it in the prometheus package is misleading
design and will lead to circular import chains soon.
Change-Id: If1ca442d4023b33b138cf79fee68e82ff2a355be
- Full support for UNTYPED type.
- Receptive support for timestamp_ms (i.e. the processor can process
it, but the client library cannot yet create it - which is kind of
intended as timestamps are meant for other things like federation,
which will need separate support anyway).
Change-Id: I5913164a80089943d49ad58bf86e465a843ab82b
The idea here is to always go via the protobufs if dealing with the
text format. That won't always be the most efficient way, but it
avoids the multiplicity of conversion routines required for direct
conversion (e.g. text format -> internal representation in the
Prometheus server). The loss of efficiency is acceptable because the
text format should not be used in high performance (high throughput,
low latency) situations anyway.
In that way, the text format stays perfectly isolated from other parts
of the code. To receive text format, just plug the conversion in
before the code path that normally reads protobufs. Correspondingly,
for sending text format, simply replace the WriteDelimited call by a
text.Create call.
Nevertheless, the conversion code itself is optimized for efficiency
and minimized memory churn (which was one of the reason for handcoding
the parser and not using a lexer/parser code generation tool).
Change-Id: Iee45ffe8aa421a844225d13a1f859becd8a3b066
These are all simple changes we should have caught a long time ago:
1. The hashing mechanism for fingerprint label sets should not have
allocated new objects for the actual hashing---at least not
egregiously. This simplifies the hash writing by just byte-
dumping the string stream into the hasher.
2. The hashing mechanism within the scope of a metric does not care
about the value of the label keys themselves but only of the label
values. The keys can be dropped from the calculation.
3. The locking mechanism for the metrics should not block on hash
computation but rather solely on the actual mutation or critical
section reads.
4. For scalar metrics (i.e., ones with niladic label signatures), we
should rely on a preallocated map versus requesting a new one
ad hoc.
This is tested with Go 1.1, so the results may yield other values
for us elsewhere:
BEFORE
BenchmarkLabelValuesToSignatureScalar 500000000 3.97 ns/op 0 B/op 0 allocs/op
BenchmarkLabelValuesToSignatureSingle 5000000 714 ns/op 74 B/op 4 allocs/op
BenchmarkLabelValuesToSignatureDouble 1000000 1153 ns/op 107 B/op 5 allocs/op
BenchmarkLabelValuesToSignatureTriple 1000000 1588 ns/op 138 B/op 6 allocs/op
BenchmarkLabelToSignatureScalar 500000000 3.91 ns/op 0 B/op 0 allocs/op
BenchmarkLabelToSignatureSingle 2000000 874 ns/op 92 B/op 5 allocs/op
BenchmarkLabelToSignatureDouble 1000000 1528 ns/op 139 B/op 7 allocs/op
BenchmarkLabelToSignatureTriple 1000000 2172 ns/op 186 B/op 9 allocs/op
AFTER
BenchmarkLabelValuesToSignatureScalar 500000000 4.36 ns/op 0 B/op 0 allocs/op
BenchmarkLabelValuesToSignatureSingle 5000000 378 ns/op 89 B/op 4 allocs/op
BenchmarkLabelValuesToSignatureDouble 5000000 574 ns/op 142 B/op 5 allocs/op
BenchmarkLabelValuesToSignatureTriple 5000000 758 ns/op 186 B/op 6 allocs/op
BenchmarkLabelToSignatureScalar 500000000 4.06 ns/op 0 B/op 0 allocs/op
BenchmarkLabelToSignatureSingle 5000000 472 ns/op 106 B/op 5 allocs/op
BenchmarkLabelToSignatureDouble 2000000 746 ns/op 174 B/op 7 allocs/op
BenchmarkLabelToSignatureTriple 1000000 1061 ns/op 235 B/op 9 allocs/op
In effect, a single metric mutation operation's lookup overhead will
move from Before::BenchmarkLabelToSignature to
After::BenchmarkLabelValuesToSignature. This reduces the overhead by at
least half. I would be hesitant in reading the memory
allocation statistics, for this was run with the GC still on and
thusly inaccurate per Go benchmarking documentation.
Before::BenchmarkLabelValuesToSignature never existed, so it is not
of any intrinsic value in itself. That said, the cases that still
rely on LabelToSignature experience consistently a 1/2 drop in time.
Change-Id: Ifc9e69f718af65a59f5be8117473518233258159
This hook is needed for the upcoming push gateway.
Also remove go vet warnings and add test for Handler().
Change-Id: If6c56676c7a0f10c16b4effae7285903f8267616
This also adds a check that forbids any user-supplied metrics from using
label names that start with the reserved prefix "__".
Change-Id: I2fe94c740b685ad05c4c670613cf2af7b9e1c1c0
This is an optimization of labelsToSignature to avoid excess allocations
when the label set is empty.
Change-Id: If2d59bbc3ae6d4457e2ded197b6f4e7c67e6a173
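A hedged sketch of the shape of that optimization (hypothetical names, not the actual labelsToSignature code): the signature of the empty label set is computed once, so the common unlabeled case returns immediately without touching the hasher or allocating.

```go
package main

import "hash/fnv"

var emptyLabelSignature = hashLabels(nil)

// hashLabels is a stand-in for the regular hashing path (the real code sorts
// the label names first; that is omitted here for brevity).
func hashLabels(labels map[string]string) uint64 {
	h := fnv.New64a()
	for k, v := range labels {
		h.Write([]byte(k))
		h.Write([]byte{0xff})
		h.Write([]byte(v))
		h.Write([]byte{0xff})
	}
	return h.Sum64()
}

func labelsToSignature(labels map[string]string) uint64 {
	if len(labels) == 0 {
		return emptyLabelSignature // fast path, no hashing or allocation
	}
	return hashLabels(labels)
}

func main() {
	_ = labelsToSignature(map[string]string{"key": "value"})
	_ = labelsToSignature(nil) // fast path
}
```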
This ensures that you can pass the same base label set into multiple
Register() calls, e.g.:
labels := map[string]string{"key": "value"}
prometheus.Register("metric_1", "", labels, ...)
prometheus.Register("metric_2", "", labels, ...)
Change-Id: I951e5c2ed7844c74eb3716d1bf07126ce558f266
This commit introduces all relevant server-side artifacts such that the
Result streams can be used by external parties for one-off tools and
such. This will ultimately better enable us to support additional
wireformats with much more ease.
Bernerd had suggested extracting the value decoders and bundling them
into the client library. After some reflection, I tend to agree with
this, since we can start breaking the onion of Prometheus itself and
localize the protocol management into its own scope.
A couple of major changes since moving:
- Protocol 0.0.2 has moved to a struct{} so that our tests can perform
value matching, which cannot be done against function literals.
- Processing now acquires options to dictate behavioral changes of
metrics bodies.
- Processing no longer closes the stream, thusly returning this to the
hands of the caller.
- Process() has been renamed to ProcessSingle to better convey that it
works on complete message bodies. This paves the way for better
streaming payload support that the next API version will offer.