client_golang/prometheus
Matt T. Proud 7efd34a6f8 Optimize fingerprinting and metric locks.
These are all simple changes we should have caught a long time ago:

1. The hashing mechanism for fingerprint label sets should not have
   allocated new objects for the actual hashing---at least not
   egregiously.  The hash writing is simplified by byte-dumping the
   string stream directly into the hasher (see the sketch after
   this list).

2. The hashing mechanism within the scope of a metric does not care
   about the label keys themselves but only about the label values,
   so the keys can be dropped from the calculation.

3. The locking mechanism for the metrics should not block on hash
   computation but only on the actual mutation or critical-section
   reads (also covered in the sketch after this list).

4. For scalar metrics (i.e., ones with niladic label signatures), we
   should rely on a preallocated map versus requesting a new one
   ad hoc.
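
To make points 1-4 concrete, here is a minimal sketch of the
intended shape of the code.  The names below (blankLabels,
labelValuesToSignature, counter) are illustrative and need not
match the identifiers actually used in constants.go, signature.go,
or counter.go:

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
        "sync"
    )

    // blankLabels is a preallocated empty label set (point 4): scalar
    // metric operations reuse it instead of requesting a new map ad
    // hoc.  The actual name in constants.go may differ.
    var blankLabels = map[string]string{}

    // emptySignature stands in for a precomputed signature of the
    // empty label set; the real constant may differ.
    var emptySignature = fnv.New64a().Sum64()

    // labelValuesToSignature hashes only the label values (point 2)
    // and byte-dumps each string straight into the hasher instead of
    // allocating intermediate objects (point 1).  Sorting keeps the
    // result stable across map iteration orders.
    func labelValuesToSignature(labels map[string]string) uint64 {
        if len(labels) == 0 {
            return emptySignature
        }

        values := make([]string, 0, len(labels))
        for _, v := range labels {
            values = append(values, v)
        }
        sort.Strings(values)

        h := fnv.New64a()
        for _, v := range values {
            h.Write([]byte(v)) // dump the raw bytes into the hasher
        }
        return h.Sum64()
    }

    // counter is an illustrative metric keyed by label-value signature.
    type counter struct {
        mutex  sync.RWMutex
        values map[uint64]float64
    }

    // Increment computes the signature before taking the lock
    // (point 3); the mutex guards only the actual mutation.
    func (c *counter) Increment(labels map[string]string) {
        signature := labelValuesToSignature(labels)

        c.mutex.Lock()
        defer c.mutex.Unlock()
        c.values[signature]++
    }

    func main() {
        c := &counter{values: map[uint64]float64{}}
        c.Increment(map[string]string{"handler": "/api", "code": "200"})
        c.Increment(blankLabels) // scalar case: reuse the preallocated map
        fmt.Println(len(c.values))
    }

The essential properties are that the hasher consumes the sorted
value bytes directly, the signature is computed before the lock is
taken, and the niladic case touches no freshly allocated map.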

This was tested with Go 1.1, so the results may differ in other
environments:

BEFORE
BenchmarkLabelValuesToSignatureScalar	500000000	         3.97 ns/op	       0 B/op	       0 allocs/op
BenchmarkLabelValuesToSignatureSingle	 5000000	       714 ns/op	      74 B/op	       4 allocs/op
BenchmarkLabelValuesToSignatureDouble	 1000000	      1153 ns/op	     107 B/op	       5 allocs/op
BenchmarkLabelValuesToSignatureTriple	 1000000	      1588 ns/op	     138 B/op	       6 allocs/op
BenchmarkLabelToSignatureScalar	500000000	         3.91 ns/op	       0 B/op	       0 allocs/op
BenchmarkLabelToSignatureSingle	 2000000	       874 ns/op	      92 B/op	       5 allocs/op
BenchmarkLabelToSignatureDouble	 1000000	      1528 ns/op	     139 B/op	       7 allocs/op
BenchmarkLabelToSignatureTriple	 1000000	      2172 ns/op	     186 B/op	       9 allocs/op

AFTER
BenchmarkLabelValuesToSignatureScalar	500000000	         4.36 ns/op	       0 B/op	       0 allocs/op
BenchmarkLabelValuesToSignatureSingle	 5000000	       378 ns/op	      89 B/op	       4 allocs/op
BenchmarkLabelValuesToSignatureDouble	 5000000	       574 ns/op	     142 B/op	       5 allocs/op
BenchmarkLabelValuesToSignatureTriple	 5000000	       758 ns/op	     186 B/op	       6 allocs/op
BenchmarkLabelToSignatureScalar	500000000	         4.06 ns/op	       0 B/op	       0 allocs/op
BenchmarkLabelToSignatureSingle	 5000000	       472 ns/op	     106 B/op	       5 allocs/op
BenchmarkLabelToSignatureDouble	 2000000	       746 ns/op	     174 B/op	       7 allocs/op
BenchmarkLabelToSignatureTriple	 1000000	      1061 ns/op	     235 B/op	       9 allocs/op

In effect, a single metric mutation operation's lookup overhead
moves from Before::BenchmarkLabelToSignature to
After::BenchmarkLabelValuesToSignature, which at minimum halves
the overhead.  I would be hesitant to read too much into the
memory allocation statistics, since these benchmarks were run with
the GC still on and are therefore inaccurate per the Go
benchmarking documentation.

Before::BenchmarkLabelValuesToSignature did not exist prior to this
change, so it carries no intrinsic value on its own.  That said, the
cases that still rely on LabelToSignature consistently see roughly a
1/2 drop in time.
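
For reference, the figures above are in the form emitted by the Go
tool with benchmarking and allocation accounting enabled; a command
along these lines (the package path here is assumed) reproduces them:

    go test -bench=Signature -benchmem ./prometheus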

Change-Id: Ifc9e69f718af65a59f5be8117473518233258159
2014-04-14 19:06:09 +02:00
exp Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
.gitignore Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
Makefile Enclose artifact generation process into Makefile. 2013-07-21 17:45:53 +02:00
accumulating_bucket.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
accumulating_bucket_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
bucket.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
constants.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
counter.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
counter_test.go Add Reset(map[string]string) to Metric interface 2014-02-19 15:18:16 +01:00
distributions.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
documentation.go Extract core Prometheus value decoders. 2013-06-10 19:35:41 +02:00
eviction.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
eviction_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
gauge.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
gauge_test.go Add Reset(map[string]string) to Metric interface 2014-02-19 15:18:16 +01:00
helpers_test.go Rename test helper files to helpers_test.go 2013-05-06 11:13:44 +02:00
histogram.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
histogram_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
interface.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
metric.go Add Reset(map[string]string) to Metric interface 2014-02-19 15:18:16 +01:00
priority_queue.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
priority_queue_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
prometheus_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
registry.go Remove redundant __name__ label from protobuf output. 2014-04-03 15:18:12 +02:00
registry_test.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
signature.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
signature_test.go Optimize fingerprinting and metric locks. 2014-04-14 19:06:09 +02:00
statistics.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
statistics_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
tallying_bucket.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
tallying_bucket_test.go Rearrange file and package per convention. 2013-04-04 15:27:09 +02:00
telemetry.go Registry and Metrics implement json.Marshaler 2013-04-19 15:07:24 +02:00