A previous PR changed the Go 1.17 collector to lock only around uses of
rmSampleBuf, but that means Metric values sent over the channel may mix
in values from future metrics.Read calls. While, generally speaking,
this isn't a problem, we lose any consistency guarantees provided by the
runtime/metrics package.
Also, the optimization of not simply locking around all of Collect was
premature. In truth, Collect is called relatively infrequently, and its
critical path is fairly fast (tens of µs). To prove it, this change also
adds a benchmark:
name            old time/op  new time/op  delta
GoCollector-16  43.7µs ± 2%  43.2µs ± 2%  ~      (p=0.190 n=9+9)
Note that because the benchmark is single-threaded it actually looks
like it might be getting *slightly* faster, because all those Collect
calls for the Metrics are direct calls instead of interface calls.
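For reference, a plausible shape of the added benchmark; the exact code in the change may differ, and the drained channel is only there so that the loop measures Collect itself:
```go
package prometheus

import "testing"

func BenchmarkGoCollector(b *testing.B) {
	c := NewGoCollector()

	ch := make(chan Metric, 8)
	done := make(chan struct{})
	go func() {
		// Drain so Collect never blocks on the channel.
		for range ch {
		}
		close(done)
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Collect(ch)
	}
	b.StopTimer()

	close(ch)
	<-done
}
```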
Signed-off-by: Michael Anthony Knyszek <mknyszek@google.com>
This change introduces use of the runtime/metrics package in place of
runtime.MemStats for Go 1.17 or later. The runtime/metrics package was
introduced in Go 1.16, but not all the old metrics were accounted for
until 1.17.
The runtime/metrics package offers several advantages over using
runtime.MemStats:
* The list of metrics and their descriptions are machine-readable,
allowing new metrics to be added without any additional work (see the
sketch after this list).
* Detailed histogram-based metrics are now available, offering much
deeper insights into the Go runtime.
* The runtime/metrics API is significantly more efficient than
runtime.MemStats, even with the additional metrics added, because
it does not require any stop-the-world events.
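For illustration, a small standalone program showing what the machine-readable metric list looks like in practice; this is just the stock runtime/metrics API, not code from this change:
```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	descs := metrics.All() // every metric this Go version supports, with descriptions
	samples := make([]metrics.Sample, len(descs))
	for i, d := range descs {
		samples[i].Name = d.Name
	}
	metrics.Read(samples) // one call for all metrics

	for i, s := range samples {
		switch s.Value.Kind() {
		case metrics.KindUint64:
			fmt.Printf("%s = %d (%s)\n", s.Name, s.Value.Uint64(), descs[i].Description)
		case metrics.KindFloat64:
			fmt.Printf("%s = %f\n", s.Name, s.Value.Float64())
		case metrics.KindFloat64Histogram:
			h := s.Value.Float64Histogram()
			fmt.Printf("%s: histogram with %d buckets\n", s.Name, len(h.Counts))
		}
	}
}
```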
That being said, integrating the package comes with some caveats, some
of which were discussed in #842. Namely:
* The old MemStats-based metrics need to continue working, so they're
exported under their old names, backed by equivalent runtime/metrics
metrics.
* Earlier versions of Go need to continue working, so the old code
remains, but behind a build tag (see the sketch after this list).
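A sketch of the build-tag split; the file layout and the function are illustrative, not the actual contents of this change:
```go
//go:build go1.17
// +build go1.17

package prometheus

// usesRuntimeMetrics is purely illustrative (not a real client_golang
// symbol). A sibling file guarded by the inverse constraint (!go1.17)
// keeps the old MemStats-based collector for earlier Go versions.
func usesRuntimeMetrics() bool { return true }
```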
Finally, a few notes about the implementation:
* This change includes a whole bunch of refactoring to avoid significant
code duplication.
* This change adds a new histogram metric type specifically optimized
for runtime/metrics histograms. This type's methods also include
additional logic to deal with differences in bounds conventions (see
the sketch after this list).
* This change makes a number of decisions about how runtime/metrics
names are translated into Prometheus metric names.
* This change adds a `go generate` script that generates a list of
expected runtime/metrics names for a given Go version, for auditing.
Users of newer Go versions will still transparently get any new
metrics, however.
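To make the bounds point concrete, here is a simplified sketch of translating a runtime/metrics histogram into Prometheus-style cumulative buckets. The real code in this change is more careful (for example about the [low, high) exclusive upper bounds versus Prometheus' inclusive `le`, and about the infinities); the function name is illustrative:
```go
package main

import (
	"math"
	"runtime/metrics"
)

// runtime/metrics buckets are [low, high) intervals described by
// len(Counts)+1 boundaries; Prometheus wants cumulative counts keyed by
// inclusive upper bounds.
func toPrometheusBuckets(h *metrics.Float64Histogram) (upperBounds []float64, cumCounts []uint64) {
	var cum uint64
	for i, count := range h.Counts {
		cum += count
		upper := h.Buckets[i+1] // exclusive upper boundary of this bucket
		if math.IsInf(upper, +1) {
			upper = math.MaxFloat64 // Prometheus represents +Inf implicitly
		}
		upperBounds = append(upperBounds, upper)
		cumCounts = append(cumCounts, cum)
	}
	return upperBounds, cumCounts
}

func main() {
	s := []metrics.Sample{{Name: "/gc/pauses:seconds"}}
	metrics.Read(s)
	if s[0].Value.Kind() == metrics.KindFloat64Histogram {
		_, _ = toPrometheusBuckets(s[0].Value.Float64Histogram())
	}
}
```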
Signed-off-by: Michael Anthony Knyszek <mknyszek@google.com>
It is perfectly possible that a normal GC happens just before the
forced one. Thus seeing 2 GCs is fine.
Whenever this test failed, it was because two GCs were seen.
Signed-off-by: beorn7 <bjoern@rabenste.in>
This is really lame, as it essentially just uses longer wait
times. The test is still timing-dependent and thus could still
theoretically fail with unlucky scheduling. However, we are testing
something that _is_ about timing. Turning this all into something not
timing-dependent would be first quite involved and second might defeat
the purpose of testing code that is inherently about timing.
Let's see how this works out in practice.
Signed-off-by: beorn7 <bjoern@rabenste.in>
tl;dr: Return previous memstats if reading new ones takes longer than
1s.
See the doc comment of NewGoCollector for details.
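A condensed sketch of the fallback idea; the actual implementation behind NewGoCollector is more elaborate, and the type and field names here are made up for illustration:
```go
package main

import (
	"runtime"
	"sync"
	"time"
)

type goCollectorSketch struct {
	mtx    sync.Mutex
	msLast *runtime.MemStats // previously collected memstats
}

func (c *goCollectorSketch) memStats() *runtime.MemStats {
	ms := &runtime.MemStats{}
	done := make(chan struct{})
	go func() {
		runtime.ReadMemStats(ms) // can be delayed by a long GC phase
		close(done)
	}()

	c.mtx.Lock()
	defer c.mtx.Unlock()
	select {
	case <-done:
		c.msLast = ms
		return ms
	case <-time.After(time.Second):
		return c.msLast // stale, but the scrape is not blocked
	}
}

func main() {
	c := &goCollectorSketch{msLast: &runtime.MemStats{}}
	_ = c.memStats()
}
```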
Signed-off-by: beorn7 <bjoern@rabenste.in>
For the original discussion, see
https://github.com/prometheus/client_golang/pull/362 .
Assuming that the most frequently used method of a `Gauge` is `Set`
and the most frequently used method of a `Counter` is `Inc`, this
separates the implementation of both metric types. `Inc` and integral
`Add` of a counter are now handled in a separate `uint64`. This would
create a race in `Set`, but luckily there is no `Set` anymore on a
counter.
All attempts to solve the above race (in order to use the same idea
for a `Gauge`) slowed down `Set`, so we just stick with the old
implementation (formerly `value`) for `Gauge`.
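A minimal sketch of the resulting counter layout; the field names are chosen for illustration and the real client_golang code differs in detail:
```go
package main

import (
	"fmt"
	"math"
	"sync/atomic"
)

// Integer increments go to valInt; fractional Adds go to valBits (the
// bits of a float64). The two parts are only combined at read time, so
// Inc stays a single atomic add on a uint64.
type counter struct {
	valBits uint64 // float64 bits, for fractional Add
	valInt  uint64 // integer increments, the hot path
}

func (c *counter) Inc() { atomic.AddUint64(&c.valInt, 1) }

func (c *counter) Add(v float64) {
	if v < 0 {
		panic("counter cannot decrease in value")
	}
	if ival := uint64(v); float64(ival) == v {
		atomic.AddUint64(&c.valInt, ival)
		return
	}
	for {
		oldBits := atomic.LoadUint64(&c.valBits)
		newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
		if atomic.CompareAndSwapUint64(&c.valBits, oldBits, newBits) {
			return
		}
	}
}

func (c *counter) get() float64 {
	return math.Float64frombits(atomic.LoadUint64(&c.valBits)) +
		float64(atomic.LoadUint64(&c.valInt))
}

func main() {
	c := &counter{}
	c.Inc()
	c.Add(2.5)
	fmt.Println(c.get()) // 3.5
}
```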
We are running into a timeout with TestHistogramConcurrency on our Jenkins box. I noticed this block in the stack trace for the timeout:
goroutine 2348 [chan send]:
github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.(*goCollector).Collect(0xc20801e8e0, 0xc20800a7e0)
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector.go:49 +0x6dd
github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.func·028()
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector_test.go:27 +0x11a
created by github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus.TestGoCollector
/var/lib/jenkins/jobs/Kober/workspace/src/github.comcast.com/ventris/kober/vnd/github.com/prometheus/client_golang/prometheus/go_collector_test.go:28 +0x35e
This suggested that even though the TestGoCollector test was finished, a goroutine was still hanging around. I traced it back to the call to c.Collect always sending twice on the provided channel. This change receives that second value and allows the goroutine to finish with the test.
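A hedged sketch of what the fixed test shape looks like; the test name is an assumption, not the exact code:
```go
package prometheus

import "testing"

// Receive everything Collect sends, so the collecting goroutine is not
// left blocked on the unbuffered channel after the first receive.
func TestGoCollectorGoroutineDrained(t *testing.T) {
	c := NewGoCollector()
	ch := make(chan Metric)
	done := make(chan struct{})
	go func() {
		c.Collect(ch) // sends more than one Metric
		close(done)
	}()
	for {
		select {
		case <-ch:
		case <-done:
			return
		}
	}
}
```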
Still can't figure out why TestHistogramConcurrency is timing out after 2 minutes :(
The new vendoring was produced by running:
godep save -r ./examples/... ./prometheus/... ./text/... ./model/... ./extraction/...
Two things to note:
- "extraction/processor0_0_{1,2}_test.go" imported a package from
"github.com/prometheus/prometheus", all for just one tiny testing
function. To not have to deal with a circular vendoring dependency, I
simply replaced the usage of the function by some in-line logic.
- godep grouped the rewritten imports slightly differently for some
reason, but at least the standard library imports are still in a
separate section. Not sure if it's worth manually keeping our old
import grouping scheme or if we should simply use that godep-generated
one.
Both are interface changes I want to get in before the public
announcement. They only break rare use cases and are always easy to
fix, but we still want to avoid breaking changes after a wider
announcement of the project.
The change to Register() simply removes the return of the Collector,
which nobody was using in practice; it was just bloating the call
syntax. Note that this is different from RegisterOrGet(), which is
used on various occasions where you want to register something that
might or might not be registered already, but if it is, you want the
previously registered Collector back (because that's the relevant
one).
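A call-site sketch of the difference; the historical signatures are assumed from the description above, and the counter is just a stand-in Collector:
```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	c := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_total",
		Help: "An example counter.",
	})

	// Before this change (historical signature assumed):
	//   registered, err := prometheus.Register(c) // the returned Collector went unused
	// After this change, Register returns only an error:
	if err := prometheus.Register(c); err != nil {
		log.Fatal(err)
	}
}
```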
WRT error reporting: I first tried the obvious way of letting the
Collector methods Describe() and Collect() return an error. However, I
had to conclude that this bloated _many_ calls and their handling in
very obnoxious ways. On the other hand, the case where you actually
want to report errors during registration or collection is very
rare. Hence, that approach has the wrong trade-off. The approach taken
here might at first appear clunky but is in practice quite handy,
mostly because there is almost no change for the "normal" case of "no
special error handling", but also because it plays well with the way
descriptors and metrics are handled (via channels).
Explaining the approach in more detail:
- During registration / describe: Error handling was actually already
in place (for invalid descriptors, which carry an error anyway). I
only added a convenience function to create an invalid descriptor
with a given error on purpose.
- Metrics are now treated in a similar way. The Write method now
returns an error (the only change in interface). An "invalid metric"
is provided that can be sent via the channel to signal that the metric
could not be collected. It also transports an error (see the sketch
after this list).
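A sketch of how a Collector can use this, assuming the helper names NewInvalidMetric and MustNewConstMetric from today's client_golang (the exact names in this commit may differ); diskUsageCollector and readDiskUsage are hypothetical:
```go
package main

import (
	"errors"

	"github.com/prometheus/client_golang/prometheus"
)

type diskUsageCollector struct {
	desc *prometheus.Desc
}

func (c *diskUsageCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c *diskUsageCollector) Collect(ch chan<- prometheus.Metric) {
	used, err := readDiskUsage()
	if err != nil {
		// Transport the error through the channel instead of returning it.
		ch <- prometheus.NewInvalidMetric(c.desc, err)
		return
	}
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, used)
}

func readDiskUsage() (float64, error) { return 0, errors.New("not implemented") }

func main() {
	c := &diskUsageCollector{
		desc: prometheus.NewDesc("disk_usage_bytes", "Used bytes on disk.", nil, nil),
	}
	prometheus.MustRegister(c)
}
```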
NON-GOALS OF THIS COMMIT:
This is NOT yet the major improvement of the whole registry part,
where we want a public Registry interface and plenty of modular
configurations (for error handling, various auto-metrics, http
instrumentation, testing, ...). However, we can do that whole thing
without breaking existing interfaces. For now (which is a significant
issue), any error during collection will either cause a 500 HTTP
response or a panic (depending on the registry config). Later, we
definitely want to have a possibility to skip (and only report
somehow) non-collectible metrics instead of aborting the whole scrape.
This change adds two new collectors to the prometheus package, which
export metrics about a given or the current process (see the usage
sketch below):
* ProcessCollector exports metrics about CPU time, VSS, RSS, and fd
usage, as well as the start time of a given process.
* GoCollector currently exports only the number of active goroutines.
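A registration sketch using today's constructor and handler names, which postdate this commit (the original constructors took a PID and a namespace directly):
```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A dedicated registry keeps the example independent of whatever the
	// default registry pre-registers in current client_golang versions.
	reg := prometheus.NewRegistry()
	reg.MustRegister(prometheus.NewGoCollector())
	reg.MustRegister(prometheus.NewProcessCollector(
		prometheus.ProcessCollectorOpts{}, // defaults to the current process
	))

	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```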