* feat(tsdb): allow appending to ST-capable XOR chunk optionally
Only for float samples as of now. Support for in-order and out-of-order
samples.
Make sure that on readout the ST-capable chunks are returned automatically.
When the chunks are returned as-is, this is trivially true.
When a chunk needs to be re-coded due to deletion (tombstone) markers,
we take the encoding of the original chunk.
When a chunk needs to be created from overlapping chunks, we observe
whether ST is zero or not and create the new chunk based on that.
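A minimal sketch of that last rule, with made-up names (pickEncoding, EncSTXOR are illustrative, not the actual identifiers in this change):

```go
package sketch

// Encoding stands in for chunkenc.Encoding; EncSTXOR is a hypothetical
// name for the ST-capable XOR encoding introduced by this change.
type Encoding int

const (
	EncXOR Encoding = iota
	EncSTXOR
)

// pickEncoding sketches the rule for chunks created from overlapping
// chunks: if no sample carries a start timestamp (ST == 0), a plain
// XOR chunk suffices; otherwise use the ST-capable variant.
func pickEncoding(st int64) Encoding {
	if st == 0 {
		return EncXOR
	}
	return EncSTXOR
}
```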
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
The new iterator struct was 80 bytes with a lot of padding, compared to
the 56 bytes of the original XOR chunk iterator. It is now 64 bytes,
tightly packed.
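For context, how field order affects struct size in Go (the fields here are illustrative, not the actual iterator layout): the compiler pads each field to its alignment, so grouping wide fields before narrow ones removes the holes.

```go
package main

import (
	"fmt"
	"unsafe"
)

// padded interleaves 1-byte and 8-byte fields; each bool is followed
// by 7 bytes of padding, for 32 bytes total on 64-bit platforms.
type padded struct {
	a bool
	b int64
	c bool
	d int64
}

// packed puts the wide fields first so both bools share the final
// 8-byte slot, for 24 bytes total.
type packed struct {
	b int64
	d int64
	a bool
	c bool
}

func main() {
	fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{})) // 32 24
}
```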
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
No implementation yet; this just tests the shape of the interface.
AtST is implemented for the trivial cases; everything else is hard-coded
to return 0.
Ref: https://github.com/prometheus/prometheus/issues/17791
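A sketch of the interface shape being tested, with simplified stand-ins for the real chunkenc types; only the method name AtST is taken from the commit:

```go
package sketch

// Iterator is a cut-down stand-in for chunkenc.Iterator, extended with
// the new method.
type Iterator interface {
	Next() bool
	At() (t int64, v float64)
	// AtST returns the start timestamp of the current sample. Per this
	// commit, it is implemented for trivial cases only; everything else
	// is hard-coded to return 0.
	AtST() int64
}

// stub satisfies the interface without a real implementation.
type stub struct{}

func (stub) Next() bool           { return false }
func (stub) At() (int64, float64) { return 0, 0 }
func (stub) AtST() int64          { return 0 }
```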
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
ReduceResolution is currently called before validation during
ingestion. This causes a panic if there are not enough buckets in the
histogram. If there are too many buckets, the spurious buckets are
ignored, and the error in the input histogram is therefore masked.
Furthermore, invalid negative offsets might cause problems, too.
Therefore, we need to do some minimal validation in reduceResolution.
Fortunately, it is easy and shouldn't slow things down. Sadly, it
requires returning errors, which triggers a bunch of code changes.
Even this has a bright side: we can get rid of a few panics.
(Remember: Don't panic!)
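A sketch of the minimal check this implies, assuming a simplified span type; the real validation lives in model/histogram and differs in detail:

```go
package sketch

import "fmt"

// Span mirrors the histogram span shape: an offset from the end of the
// previous span and a run length of buckets.
type Span struct {
	Offset int32
	Length uint32
}

// checkSpans performs the minimal validation described above: inner
// spans must not have negative offsets, and the spans must account for
// exactly the number of buckets provided.
func checkSpans(spans []Span, numBuckets int) error {
	var want uint32
	for i, s := range spans {
		if i > 0 && s.Offset < 0 {
			return fmt.Errorf("span %d has negative offset %d", i, s.Offset)
		}
		want += s.Length
	}
	if int(want) != numBuckets {
		return fmt.Errorf("spans need %d buckets, have %d", want, numBuckets)
	}
	return nil
}
```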
In other news, we haven't been doing full validation of histograms
read via remote-read. This is not so much a security concern (as you
can easily throw off Prometheus by feeding it bogus data via
remote-read anyway) but more that remote-read sources might be
makeshift and could accidentally create invalid histograms. We really
don't want to panic in that case. So this commit not only adds a check
of the spans and buckets as needed for resolution reduction but also
full validation during remote-read.
Signed-off-by: beorn7 <beorn@grafana.com>
Allow schemas -9..52 instead of just -4..8, but reduce the resolution
to schema 8 if above.
The resolution-reduction code path will be slow, but we only expect it
to be hit if the TSDB already has higher-resolution samples and we are
in a rollback.
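A sketch of the clamp at ingestion time; ReduceResolution exists on the histogram types in model/histogram, though its exact signature may differ here (per the validation commit above, it can now also return an error). The helper itself is illustrative:

```go
package sketch

import "github.com/prometheus/prometheus/model/histogram"

// clampSchema sketches the rule: schemas in -9..52 are accepted, but
// anything above 8 is reduced to schema 8 before it is stored.
func clampSchema(h *histogram.Histogram) *histogram.Histogram {
	if h.Schema > 8 {
		// Slow path: merges buckets pairwise per schema step. Expected
		// only when TSDB already holds higher-resolution samples.
		return h.ReduceResolution(8)
	}
	return h
}
```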
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Otherwise, higher-level code like PromQL needs to constantly check
whether it can handle the samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Skip creating an iterator and walking through all existing values when
we can easily tell that there are none.
This is the normal case: the TSDB head creates an appender immediately
after creating every chunk.
Also remove the redundant handling of empty chunks.
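A sketch of the shortcut, using the chunkenc interfaces (the surrounding function is illustrative; in the real code the logic sits inside the chunk's Appender construction):

```go
package sketch

import "github.com/prometheus/prometheus/tsdb/chunkenc"

// appenderFor sketches the fast path: the head creates an appender
// immediately after creating each chunk, so the chunk is usually empty
// and we can skip building an iterator entirely.
func appenderFor(c chunkenc.Chunk) (chunkenc.Appender, error) {
	if c.NumSamples() == 0 {
		return c.Appender()
	}
	// Slow path: walk the existing samples before appending.
	it := c.Iterator(nil)
	for it.Next() != chunkenc.ValNone {
		// ... replay the sample into the appender state ...
	}
	return c.Appender()
}
```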
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Both `HistogramChunk` and `FloatHistogramChunk` have a `Layout()`
method for historical reasons. As it has turned out, these methods are
unused and also buggy. This commit simply removes them.
Signed-off-by: beorn7 <beorn@grafana.com>
See
https://pkg.go.dev/golang.org/x/tools/gopls/internal/analysis/modernize
for details.
This ran into a few issues (arguably bugs in the modernize tool),
which I will fix in the next commit, so that there is transparency
about what was done automatically.
Beyond those hiccups, I believe all the changes applied are
legitimate. Even where there is no tangible direct gain, I would
argue it's still better to use the "modern" way to avoid
micro-discussions in tiny style PRs later.
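For illustration, the flavor of rewrite the analyzer applies (my own example, not one taken from this diff):

```go
package sketch

// Before: manual minimum with a counting loop.
func minOfOld(xs []int) int {
	m := xs[0]
	for i := 1; i < len(xs); i++ {
		if xs[i] < m {
			m = xs[i]
		}
	}
	return m
}

// After modernize: the built-in min (Go 1.21+) and a range loop.
func minOfNew(xs []int) int {
	m := xs[0]
	for _, x := range xs[1:] {
		m = min(m, x)
	}
	return m
}
```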
Signed-off-by: beorn7 <beorn@grafana.com>
* test(chunkenc): appending histograms with empty buckets and gaps
Append native histograms that have empty buckets and gaps between the
indexes of those buckets.
There is a special case when appending counter native histograms to a chunk in TSDB: if we append a histogram that is missing some buckets that are already in the chunk, then usually that's a counter reset. However, if the missing bucket is empty, meaning its value is 0, then we don't consider it missing.
For this case to trigger, we need to write empty buckets into the chunk. Normally native histograms are compacted when we emit them, so this is very rare: Compact makes sure that there are no runs of multiple empty buckets or gaps between them.
The code that I added in #14513 did not take into account that you can bypass Compact and write histograms with many empty buckets, with gaps between them. These are still valid, so the code has to account for them.
The main fix is in the expandIntSpansAndBuckets and expandFloatSpansAndBuckets functions, which I have also refactored for clarity. Consequently, insert and adjustForInserts needed fixing to allow gaps between inserts as well.
I've added some new test cases (a data-driven test would be nice here; there are too many cases) and removed the deprecated old function that didn't deal with empty buckets at all.
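A sketch of what such an input looks like, using the model/histogram types (the values are illustrative):

```go
package sketch

import "github.com/prometheus/prometheus/model/histogram"

// A counter histogram with explicit empty (zero-valued) positive
// buckets and a gap between the spans. Compact would normally strip
// the empty buckets, but appenders must accept them as valid.
var h = &histogram.Histogram{
	Schema: 0,
	Count:  4,
	Sum:    10,
	PositiveSpans: []histogram.Span{
		{Offset: 0, Length: 2}, // buckets at indexes 0 and 1
		{Offset: 3, Length: 2}, // gap of 3, then indexes 5 and 6
	},
	// Integer buckets are delta-encoded; the absolute counts here are
	// 2, 0, 0, 2 (the zeros are the empty buckets in question).
	PositiveBuckets: []int64{2, -2, 0, 2},
}
```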
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <beorn@grafana.com>
The custom values are the "le" bucket boundaries of native histograms
with custom buckets. They are never modified, so it is OK not to copy
them when iterating a chunk and to just reference them instead.
If we ever add a function that modifies the custom values, like a
'trim' function, it will have to make a copy on write.
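A sketch of the aliasing (the iterator type here is a cut-down stand-in, and a mutating helper such as trim remains hypothetical):

```go
package sketch

import "github.com/prometheus/prometheus/model/histogram"

// iter is a cut-down stand-in for the chunk iterator.
type iter struct {
	schema       int32
	customValues []float64
}

// atFloatHistogram returns a histogram that references the iterator's
// customValues slice instead of copying it. Safe because custom values
// are never modified in place.
func (it *iter) atFloatHistogram() *histogram.FloatHistogram {
	return &histogram.FloatHistogram{
		Schema:       it.schema,
		CustomValues: it.customValues, // shared, not copied
	}
}
```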
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Always return unknown hint for first sample in non-gauge histogram chunk
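A sketch of the rule, using the real CounterResetHint constants from model/histogram (the helper itself is illustrative):

```go
package sketch

import "github.com/prometheus/prometheus/model/histogram"

// hintFor sketches the rule: in a non-gauge histogram chunk, the first
// sample has nothing in the chunk to compare against, so its counter
// reset hint must be unknown regardless of what was detected.
func hintFor(first, gauge bool, detected histogram.CounterResetHint) histogram.CounterResetHint {
	switch {
	case gauge:
		return histogram.GaugeType
	case first:
		return histogram.UnknownCounterReset
	default:
		return detected
	}
}
```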
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>