Most important here is the simple & flat text format, but while I'm on
it, I have also added the text representations for protobufs (which are
purely meant for debugging purposes). I hope my basic idea about
handling those various protocols (and the text package) becomes
clearer now.
Change-Id: I7299853eadc82a426101e907f2b3d4e37f9e4c71
The LabelsToSignature function is now used outside of the prometheus
package, too. Leaving it in the prometheus package is misleading design
and will lead to circular import chains soon.
Change-Id: If1ca442d4023b33b138cf79fee68e82ff2a355be
- Full support for the UNTYPED type.
- Receptive support for timestamp_ms (i.e. the processor can process
  it, but the client library cannot yet create it - which is
  intentional, as timestamps are meant for other things like
  federation, which will need separate support anyway); see the
  example below.
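For illustration only (the values are made up, and the rendering
follows the standard protobuf text format for
io.prometheus.client.MetricFamily), an untyped metric carrying a
timestamp could look like this:

name: "foo"
type: UNTYPED
metric {
  untyped {
    value: 42
  }
  timestamp_ms: 1395066363000
}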
Change-Id: I5913164a80089943d49ad58bf86e465a843ab82b
The idea here is to always go via the protobufs when dealing with the
text format. That won't always be the most efficient way, but it
avoids the multiplicity of conversion routines required for direct
conversion (e.g. text format -> internal representation in the
Prometheus server). The loss of efficiency is acceptable because the
text format should not be used in high performance (high throughput,
low latency) situations anyway.
In that way, the text format stays perfectly isolated from other parts
of the code. To receive text format, just plug the conversion in
before the code path that normally reads protobufs. Correspondingly,
for sending text format, simply replace the WriteDelimited call by a
text.Create call.
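As a rough sketch of that send path (the package names and the exact
signatures of text.Create and WriteDelimited are assumptions here, not
the definitive API):

// Sketch only: dto stands for the generated io.prometheus.client
// package, text for the text-format package, and ext for the
// varint-delimited protobuf helper.
func writeFamily(w io.Writer, mf *dto.MetricFamily, asText bool) error {
    if asText {
        // Text exposition: render the protobuf as the flat text format.
        _, err := text.Create(mf, w)
        return err
    }
    // Protobuf exposition: one varint length-delimited message per family.
    _, err := ext.WriteDelimited(w, mf)
    return err
}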
Nevertheless, the conversion code itself is optimized for efficiency
and minimal memory churn (which was one of the reasons for hand-coding
the parser instead of using a lexer/parser code generation tool).
Change-Id: Iee45ffe8aa421a844225d13a1f859becd8a3b066
These are all simple changes we should have caught a long time ago:
1. The hashing mechanism for fingerprint label sets should not have
   allocated new objects for the actual hashing---at least not
   egregiously. This simplifies the hash writing by just byte-
   dumping the string stream into the hasher.
2. The hashing mechanism within the scope of a metric does not care
   about the label keys themselves but only about the label values.
   The keys can be dropped from the calculation. (See the sketch
   following this list.)
3. The locking mechanism for the metrics should not block on hash
computation but rather solely on the actual mutation or critical
section reads.
4. For scalar metrics (i.e., ones with niladic label signatures), we
should rely on a preallocated map versus requesting a new one
ad hoc.
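A minimal sketch of the hashing idea from items 1 and 2 (illustrative
only; the hash function and names are assumptions, not the actual
implementation):

import (
    "hash/fnv"
    "sort"
)

// labelValuesToSignature hashes only the label values (item 2) and
// dumps the strings straight into the hasher instead of building
// intermediate objects (item 1).
func labelValuesToSignature(labels map[string]string) uint64 {
    values := make([]string, 0, len(labels))
    for _, v := range labels {
        values = append(values, v)
    }
    sort.Strings(values)

    h := fnv.New64a()
    for _, v := range values {
        h.Write([]byte(v)) // byte-dump the string stream
    }
    return h.Sum64()
}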
This was tested with Go 1.1, so the results may differ in other
environments:
BEFORE
BenchmarkLabelValuesToSignatureScalar 500000000 3.97 ns/op 0 B/op 0 allocs/op
BenchmarkLabelValuesToSignatureSingle 5000000 714 ns/op 74 B/op 4 allocs/op
BenchmarkLabelValuesToSignatureDouble 1000000 1153 ns/op 107 B/op 5 allocs/op
BenchmarkLabelValuesToSignatureTriple 1000000 1588 ns/op 138 B/op 6 allocs/op
BenchmarkLabelToSignatureScalar 500000000 3.91 ns/op 0 B/op 0 allocs/op
BenchmarkLabelToSignatureSingle 2000000 874 ns/op 92 B/op 5 allocs/op
BenchmarkLabelToSignatureDouble 1000000 1528 ns/op 139 B/op 7 allocs/op
BenchmarkLabelToSignatureTriple 1000000 2172 ns/op 186 B/op 9 allocs/op
AFTER
BenchmarkLabelValuesToSignatureScalar 500000000 4.36 ns/op 0 B/op 0 allocs/op
BenchmarkLabelValuesToSignatureSingle 5000000 378 ns/op 89 B/op 4 allocs/op
BenchmarkLabelValuesToSignatureDouble 5000000 574 ns/op 142 B/op 5 allocs/op
BenchmarkLabelValuesToSignatureTriple 5000000 758 ns/op 186 B/op 6 allocs/op
BenchmarkLabelToSignatureScalar 500000000 4.06 ns/op 0 B/op 0 allocs/op
BenchmarkLabelToSignatureSingle 5000000 472 ns/op 106 B/op 5 allocs/op
BenchmarkLabelToSignatureDouble 2000000 746 ns/op 174 B/op 7 allocs/op
BenchmarkLabelToSignatureTriple 1000000 1061 ns/op 235 B/op 9 allocs/op
In effect, a single metric mutation operation's lookup overhead moves
from Before::BenchmarkLabelToSignature to
After::BenchmarkLabelValuesToSignature, which cuts the overhead at
least in half. I would be hesitant to read too much into the memory
allocation statistics, since the benchmarks were run with the GC
enabled and are therefore inaccurate per the Go benchmarking
documentation.
Before::BenchmarkLabelValuesToSignature did not exist, so it has no
direct point of comparison. That said, the cases that still rely on
LabelToSignature consistently show a roughly 50% drop in time.
Change-Id: Ifc9e69f718af65a59f5be8117473518233258159
This hook is needed for the upcoming push gateway.
Also remove go vet warnings and add a test for Handler().
Change-Id: If6c56676c7a0f10c16b4effae7285903f8267616
This also adds a check that forbids any user-supplied labels from
starting with the reserved label name prefix "__".
Change-Id: I2fe94c740b685ad05c4c670613cf2af7b9e1c1c0
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:
- there could be time.Time values that are outside the range/precision of the
time type that we persist to disk, therefore causing incorrectly ordered keys.
One bug caused by this was:
https://github.com/prometheus/prometheus/issues/367
It would be good to use a timestamp type that's more closely aligned with
what the underlying storage supports.
- sizeof(time.Time) is 192 bits, while Prometheus should be fine with a single
64-bit Unix timestamp (possibly even a 32-bit one). Since we store samples in
large numbers, this seriously affects memory usage. Furthermore, copying and
working with the data will be faster if it is smaller.
*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1 timeseries and
100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
tests, this advantage decreased somewhat as the number of samples in the
timeseries grew (to 5-7% for millions of samples), which I can't fully
explain; perhaps garbage collection effects were involved.
*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.
For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
passed into Target.Scrape()).
When to still use time.Time:
- for measuring durations/times not related to sample timestamps, like duration
telemetry exporting, timers that indicate how frequently to execute some
action, etc.
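As a minimal sketch of the idea (the actual clientmodel.Timestamp may
differ in name, resolution, and methods):

import "time"

// Timestamp sketches a compact sample timestamp: a 64-bit count of
// milliseconds since the Unix epoch instead of a full time.Time value.
type Timestamp int64

func TimestampFromTime(t time.Time) Timestamp {
    return Timestamp(t.Unix()*1000 + int64(t.Nanosecond())/int64(time.Millisecond))
}

func (t Timestamp) Time() time.Time {
    return time.Unix(int64(t)/1000, (int64(t)%1000)*int64(time.Millisecond))
}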
Change-Id: I253a467388774280c10400fda122369ff77c1730
This is an optimization of labelsToSignature to avoid excess allocations
when the label set is empty.
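A sketch of the fast path (the names here are illustrative, not the
actual code):

import "hash/fnv"

// The signature of the empty label set is a constant (the FNV-1a
// offset basis), so it can be computed once instead of on every call.
var emptyLabelSignature = fnv.New64a().Sum64()

func labelsToSignature(labels map[string]string) uint64 {
    if len(labels) == 0 {
        return emptyLabelSignature // no allocations for the common empty case
    }
    return hashLabels(labels) // hashLabels stands in for the existing hashing path
}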
Change-Id: If2d59bbc3ae6d4457e2ded197b6f4e7c67e6a173
This ensures that you can pass the same base label set into multiple
Register() calls, e.g.:
labels := map[string]string{"key": "value"}
prometheus.Register("metric_1", "", labels, ...)
prometheus.Register("metric_2", "", labels, ...)
Change-Id: I951e5c2ed7844c74eb3716d1bf07126ce558f266
This commit finally enables users of the Prometheus client's Summary
metric type to automatically get total counts and sums, partitioned by
labels. These are exposed through two synthetic variables, provided
the underlying data is present.
The following is the base case:
foo_samples{quantile=0.5} = <?>
foo_samples{quantile=0.99} = <?>
With this change, the result is:
foo_samples{quantile=0.5} = <?>
foo_samples{quantile=0.99} = <?>
foo_samples_sum = <?>
foo_samples_count = <?>
Change-Id: I75b5ea0d8c851da8c0c82ed9c8ac0890e4238f87
Colliding labels can happen e.g. when an exporter job is scraped and already
includes "job" labels for its samples in /metrics. In this case, a
collisionPrefix of "exporter_" is added to the colliding target labels, but the
specifics (the collision prefix) are managed by Prometheus, not the client
library.
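Purely as an illustration of the collision handling described above
(this logic lives in the Prometheus server, and all names here are
assumptions):

const collisionPrefix = "exporter_"

// mergeTargetLabels attaches target labels to a scraped sample's
// labels, prefixing any target label whose name the sample already
// uses so that both values survive.
func mergeTargetLabels(sample, target map[string]string) map[string]string {
    merged := make(map[string]string, len(sample)+len(target))
    for name, value := range sample {
        merged[name] = value
    }
    for name, value := range target {
        if _, collides := sample[name]; collides {
            name = collisionPrefix + name
        }
        merged[name] = value
    }
    return merged
}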
This commit introduces an incomplete processor that decodes streams of
varint length-delimited io.prometheus.client.MetricFamily Protocol
Buffer messages. The Go client presently does not support generating
such messages, but this will unblock a number of Java changes.
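As a rough sketch of consuming such a stream (the pbutil and dto import
paths are assumptions about the surrounding ecosystem, not necessarily
what this processor uses):

import (
    "io"

    "github.com/matttproud/golang_protobuf_extensions/pbutil"
    dto "github.com/prometheus/client_model/go"
)

// decodeFamilies reads varint length-delimited MetricFamily messages
// from r until EOF.
func decodeFamilies(r io.Reader) ([]*dto.MetricFamily, error) {
    var families []*dto.MetricFamily
    for {
        mf := &dto.MetricFamily{}
        if _, err := pbutil.ReadDelimited(r, mf); err != nil {
            if err == io.EOF {
                return families, nil
            }
            return nil, err
        }
        families = append(families, mf)
    }
}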