client_golang/model/signature.go


// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package model

import (
	"bytes"
	"hash"
	"hash/fnv"
)

// SeparatorByte is a byte that cannot occur in valid UTF-8 sequences and is
// used to separate label names, label values, and other strings from each other
// when calculating their combined hash value (aka signature aka fingerprint).
const SeparatorByte byte = 255
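
// Without the separator, distinct label pairs could collide: ("ab", "c") and
// ("a", "bc") would both feed the bytes "abc" to the hasher, while with it the
// streams are "ab\xffc" and "a\xffbc", which hash differently. An illustrative
// check (not part of the original file):
//
//	LabelsToSignature(map[string]string{"ab": "c"}) ==
//		LabelsToSignature(map[string]string{"a": "bc"}) // false, thanks to SeparatorByte
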
var (
	// cache the signature of an empty label set.
	emptyLabelSignature = fnv.New64a().Sum64()

	// hashAndBufPool serves as a free list of hashAndBuf objects so that
	// signature calculations can reuse hashers and buffers instead of
	// allocating new ones on every call.
	hashAndBufPool = make(chan *hashAndBuf, 1024)
)

// hashAndBuf bundles an FNV-64a hasher with a scratch buffer for reuse.
type hashAndBuf struct {
	h hash.Hash64
	b bytes.Buffer
}

// getHashAndBuf fetches a hashAndBuf from the pool or, if the pool is empty,
// allocates a fresh one without blocking.
func getHashAndBuf() *hashAndBuf {
	select {
	case hb := <-hashAndBufPool:
		return hb
	default:
		return &hashAndBuf{h: fnv.New64a()}
	}
}

// putHashAndBuf returns a hashAndBuf to the pool, or drops it for the garbage
// collector to reclaim if the pool is already full.
func putHashAndBuf(hb *hashAndBuf) {
	select {
	case hashAndBufPool <- hb:
	default:
	}
}
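
// The channel-based pool above is a simple fixed-capacity free list. As an
// aside (illustrative only, not part of the original file), later Go versions
// offer sync.Pool, which hands idle objects back to the garbage collector
// instead of pinning up to 1024 of them; a minimal sketch, assuming the
// hypothetical names below and an added "sync" import:
//
//	var hashAndBufSyncPool = sync.Pool{
//		New: func() interface{} { return &hashAndBuf{h: fnv.New64a()} },
//	}
//
//	func getHashAndBufAlt() *hashAndBuf   { return hashAndBufSyncPool.Get().(*hashAndBuf) }
//	func putHashAndBufAlt(hb *hashAndBuf) { hashAndBufSyncPool.Put(hb) }
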
// LabelsToSignature returns a unique signature (i.e., fingerprint) for a given
// label set.
func LabelsToSignature(labels map[string]string) uint64 {
	if len(labels) == 0 {
		return emptyLabelSignature
	}

	var result uint64
	hb := getHashAndBuf()
	defer putHashAndBuf(hb)

	for k, v := range labels {
		hb.b.WriteString(k)
		hb.b.WriteByte(SeparatorByte)
		hb.b.WriteString(v)
		hb.h.Write(hb.b.Bytes())
		// XOR-combining the per-pair hashes makes the result independent of
		// map iteration order.
		result ^= hb.h.Sum64()
		hb.h.Reset()
		hb.b.Reset()
	}
	return result
}
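
// Illustrative use (not part of the original file; the concrete value depends
// on the FNV-64a/XOR combination of the pairs):
//
//	sig := LabelsToSignature(map[string]string{
//		"__name__": "http_requests_total",
//		"method":   "GET",
//	})
//	// sig is a uint64 that is stable across runs and map iteration orders.
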
// LabelValuesToSignature returns a unique signature (i.e., fingerprint) for the
// values of a given label set.
func LabelValuesToSignature(labels map[string]string) uint64 {
	if len(labels) == 0 {
		return emptyLabelSignature
	}

	var result uint64
	hb := getHashAndBuf()
	defer putHashAndBuf(hb)

	for _, v := range labels {
		hb.b.WriteString(v)
		hb.h.Write(hb.b.Bytes())
		result ^= hb.h.Sum64()
		hb.h.Reset()
		hb.b.Reset()
	}
	return result
}
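
// Because only values are hashed, LabelValuesToSignature is only meaningful
// within the scope of a single metric, where the set of label names is fixed
// and can therefore be dropped from the calculation. An illustrative
// consequence (not part of the original file):
//
//	LabelValuesToSignature(map[string]string{"method": "GET"}) ==
//		LabelValuesToSignature(map[string]string{"verb": "GET"}) // true: names are ignored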