Methods added:
- `SampleOffset(metric *labels.Labels) float64` to calculate the sample offset for a given label set.
- `AddRatioSampleWithOffset(ratioLimit, sampleOffset float64) bool` to find out whether a given sample offset falls within a given ratio limit.
The existing method `AddRatioSample(ratioLimit float64, sample *Sample) bool` is now implemented as a simple combination of the two new methods. Exposing these methods lets downstream projects re-use the implementations and makes testing easier.
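A minimal sketch of the composition, assuming a hypothetical receiver type and that the label set lives in `sample.Metric`; only the method signatures are taken from the description above:

```go
// Hypothetical receiver type; the two exposed methods do the actual work.
func (s *ratioSampler) AddRatioSample(ratioLimit float64, sample *Sample) bool {
	// Compute the per-series offset once, then check it against the ratio limit.
	return s.AddRatioSampleWithOffset(ratioLimit, s.SampleOffset(&sample.Metric))
}
```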
Signed-off-by: Andrew Hall <andrew.hall@grafana.com>
In this PR, we eliminate expensive maps keyed by string signatures that are accessed for every processed sample. During preprocessing in rangeEval, we assign a unique number from 0 to n-1 to each of the n string signature values and later use only this number as the label set signature.
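An illustrative sketch of the interning idea (not the actual Prometheus code): each distinct string signature gets a dense integer ID during preprocessing, so the per-sample hot path can index by integer instead of doing string-keyed map lookups.

```go
// sigTable assigns a dense ID from 0 to n-1 to each distinct string signature.
type sigTable struct {
	ids map[string]int
}

func newSigTable() *sigTable {
	return &sigTable{ids: map[string]int{}}
}

// idFor is only called during preprocessing; afterwards, the hot path indexes
// slices by the returned ID instead of hashing the signature string per sample.
func (t *sigTable) idFor(sig string) int {
	if id, ok := t.ids[sig]; ok {
		return id
	}
	id := len(t.ids)
	t.ids[sig] = id
	return id
}
```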
Signed-off-by: Linas Medžiūnas <linasm@users.noreply.github.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
The funcResets and funcChanges functions now correctly return no result when all float samples are at or before the range start for anchored selectors, consistent with the behavior of rate/increase functions.
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Add the "@" string representation for the AT token in ItemTypeStr map to
ensure proper token-to-string conversion.
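A sketch of the kind of entry this adds; the real map in the PromQL lexer contains many more entries and may differ in detail.

```go
// ItemTypeStr maps token types to their string representation; the AT entry is
// the one added here (all other entries elided).
var ItemTypeStr = map[ItemType]string{
	AT: "@",
	// ...
}
```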
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Fixes #17255.
The implementation happens mostly in the Add and Sub methods, but the reconciliation works for all relevant operations. For example, you can now `rate` over a range in which the custom bucket boundaries change.
Any custom bucket reconciliation is flagged with an info-level annotation.
---------
Signed-off-by: Linas Medziunas <linas.medziunas@gmail.com>
Signed-off-by: Linas Medžiūnas <linasm@users.noreply.github.com>
The modifiers were already printed as part of the VectorSelector, so for:
`foo[5m] anchored`
...the output was:
`foo anchored[5m] anchored`
Similar to how it was already done for `@` and `offset`, I now removed these
modifiers in the copy of the vector selector that is used to print the matrix
selector. I also removed some unused code that restored the copy of the vector
selector after overwriting its fields. AFAICS there was no point in doing that,
since it was already a copy that would just be thrown away after printing, and
the original selector wasn't affected. I also removed an erroneous comment in
`atOffset()` where no actual copying took place and no fields were overwritten.
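A hedged sketch of the approach; the helper name and the `Anchored`/`Smoothed` fields are assumptions, while the offset/@ handling mirrors what the printer already did:

```go
// Print the matrix selector using a copy of its vector selector with all
// range-level modifiers cleared, so they are rendered only once. The original
// selector is left untouched.
func matrixSelectorString(m *parser.MatrixSelector) string {
	vs := *m.VectorSelector.(*parser.VectorSelector) // work on a copy
	vs.OriginalOffset = 0
	vs.Timestamp = nil
	vs.StartOrEnd = 0
	vs.Anchored = false // assumed field name
	vs.Smoothed = false // assumed field name
	return fmt.Sprintf("%s[%s]", vs.String(), model.Duration(m.Range))
}
```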
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Add ANCHORED and SMOOTHED keywords to the maybe_label and
metric_identifier rules in the parser grammar, allowing them
to be used as metric names and label names, similar to other
keywords like 'offset', 'step', and 'bool'.
This fixes an issue where expressions like `anchored{job="test"}`
and `sum by (smoothed) (some_metric)` would fail to parse.
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
The detailed plan for this is laid out in
https://github.com/prometheus/prometheus/issues/16572 .
This commit adds a global and local scrape config option
`scrape_native_histograms`, which has to be set to true to ingest
native histograms.
To ease the transition, the feature flag is changed to simply set the
default of `scrape_native_histograms` to true.
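An illustrative sketch of the resolution order described here (not the actual config code): an explicit per-scrape-config value wins over the global one, and the feature flag merely flips the built-in default.

```go
// scrapeNativeHistograms resolves the effective setting for one scrape config.
func scrapeNativeHistograms(local, global *bool, featureFlagEnabled bool) bool {
	if local != nil {
		return *local // explicit per-scrape-config setting
	}
	if global != nil {
		return *global // otherwise the global setting applies
	}
	return featureFlagEnabled // the feature flag only changes the default
}
```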
Further implications:
- The default scrape protocols now depend on the
`scrape_native_histograms` setting.
- Everywhere else, histograms are now "on by default".
Documentation beyond the one for the feature flag and the scrape
config is deliberately left out. See
https://github.com/prometheus/prometheus/pull/17232 for that.
Signed-off-by: beorn7 <beorn@grafana.com>
avg_over_time already correctly checked the counter reset hint for all
histograms, but in sum_over_time, the first histogram was missed. In
both cases, the first histogram is processed outside the loop.
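An illustrative sketch of the pattern (the helper is hypothetical; `.H` follows PromQL's histogram point layout): because the first histogram seeds the sum outside the loop, the hint check has to happen there as well, which is what was missing.

```go
sum := points[0].H.Copy()
checkCounterResetHint(points[0].H) // this was the missed check on the first histogram
for _, p := range points[1:] {
	checkCounterResetHint(p.H)
	var err error
	if sum, err = sum.Add(p.H); err != nil {
		return nil, err
	}
}
```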
Signed-off-by: beorn7 <beorn@grafana.com>
avg_over_time already correctly checked the counter reset hint for all
histograms, but in sum_over_time, the first histogram was missed in the
loop. This commit exposes the bug in a test.
Signed-off-by: beorn7 <beorn@grafana.com>
* PromQL: Add benchmark case with variadic function
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* PromQL: Speed up parsing of variadic functions
Defer formatting of an error message until we hit an error.
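The pattern, illustrated with a hypothetical helper (not the exact parser code): the message is built only on the failure path instead of eagerly for every parsed call.

```go
// checkVariadicArity formats the error message lazily, i.e. only when the
// arity check actually fails.
func checkVariadicArity(name string, args []parser.Expr, minArgs int) error {
	if len(args) >= minArgs {
		return nil
	}
	return fmt.Errorf("expected at least %d argument(s) in call to %q, got %d", minArgs, name, len(args))
}
```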
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Fixes #17308.
As explained, adding the warn annotation about conflicting counter
reset hints didn't happen consistently. Furthermore, because
incremental mean calculation (which includes a subtraction) was used
so far, avg calculation always produced gauge histograms.
The fix is to make Sub behave like Add with respect to counter reset
handling, and to set the result of a subtraction to gauge explicitly
in actual PromQL subtraction (rather than using Sub for something
else, like incremental mean calculation). Also, track the presence of
a CounterReset hint and a NotCounterReset hint separately for the
entirety of the aggregated histograms and create the warn annotation
based on that.
As a minor fix, this commit also consistently phrases the warn
annotation in aggregations as being about "aggregation" rather than
"subtraction" or "addition", because the latter are just internal
operations within the aggregation and are not of interest to the
user.
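A sketch of the separate tracking described above; variable names and the surrounding loop are illustrative, and the actual warn-annotation call is elided.

```go
var sawCounterReset, sawNotCounterReset bool
for _, h := range groupHistograms { // all histograms aggregated into this group
	switch h.CounterResetHint {
	case histogram.CounterReset:
		sawCounterReset = true
	case histogram.NotCounterReset:
		sawNotCounterReset = true
	}
}
if sawCounterReset && sawNotCounterReset {
	// Emit the "aggregation" warn annotation once for the whole group here.
}
```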
Signed-off-by: beorn7 <beorn@grafana.com>
* Add anchored and smoothed to vector selectors.
This adds "anchored" and "smoothed" keywords that can be used following a matrix selector.
"Anchored" selects the last point before the range (or the first one after the range) and adds it at the boundary of the matrix selector.
"Smoothed" applies linear interpolation at the edges using the points around the edges. In the absence of a point before or after the edge, the first or the last point is added to the edge, without interpolation.
*Example usage*
* `increase(caddy_http_requests_total[5m] anchored)` (equivalent to *caddy_http_requests_total - caddy_http_requests_total offset 5m*, but takes counter resets into consideration)
* `rate(caddy_http_requests_total[step()] smoothed)`
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Update docs/feature_flags.md
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
* Smoothed/Anchored rate: Add more tests
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Anchored/Smoothed modifier: error out with histograms
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
---------
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Signed-off-by: Andrew Hall <andrew.hall@grafana.com>
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>
The current code stops the walk after we have found the first relevant
function. However, in expressions with multiple legs, we would then use
the `HistogramStatsIterator` for at most one of them. This change makes
sure we explore all legs.
The added tests make sure we are not using `HistogramStatsIterator`
where we shouldn't (but the opposite can only be seen in a benchmark
or with a more explicit test).
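A sketch of the idea using the parser's AST inspection; the marking helper is hypothetical and the real detection logic is more involved.

```go
parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
	if call, ok := node.(*parser.Call); ok {
		switch call.Func.Name {
		case "histogram_count", "histogram_sum":
			markStatsOnlyLeg(call) // hypothetical: record this leg instead of stopping
		}
	}
	return nil // never stop early, so every leg of the expression is explored
})
```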
Signed-off-by: beorn7 <beorn@grafana.com>
Prior to #17127, we needed to add another level in the AST to trigger
the usage of `HistogramStatsIterator`. This is fixed now.
Signed-off-by: beorn7 <beorn@grafana.com>
Previously, multiple calls returned a wrong counter reset hint.
This commit also includes a bunch of refactorings that partially have
value on their own. However, the need for them was triggered by the
additional work needed for idempotency, so I included them in this
commit.
Signed-off-by: beorn7 <beorn@grafana.com>
Hide the wrapping of a parser with the NHCB parser inside the New()
function, so we can easily add directly NHCB-capable parsers.
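A purely illustrative sketch with hypothetical names (not the real textparse API) of what hiding the wrapping inside New() means:

```go
// New returns a ready-to-use parser; the NHCB conversion layer is added here,
// so call sites no longer wrap the parser themselves and parsers that emit
// NHCB directly can be plugged in later without touching any callers.
func New(data []byte, convertClassicToNHCB bool) Parser {
	var p Parser = newTextParser(data) // hypothetical base parser constructor
	if convertClassicToNHCB {
		p = newNHCBWrapper(p) // hypothetical wrapper converting classic histograms to NHCB
	}
	return p
}
```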
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
After an effective Seek, the lastFH is no longer the last seen float
histogram, so we should nil it out.
In practice, this should only matter in sub-queries, because we are
otherwise not interested in a counter reset of the first sample
returned after a Seek.
Sub-queries, on the other hand, always do their own counter reset
detection. (For that, they would prefer to see the whole histogram, so
that's another problem for another commit.)
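A simplified sketch of the fix; the type and field names are assumed, and the real code may only reset the field when the Seek actually advanced the iterator.

```go
func (hsi *histogramStatsIterator) Seek(t int64) chunkenc.ValueType {
	// After a Seek, the previously cached histogram is no longer the
	// predecessor of the next returned sample, so it must not be used for
	// counter reset detection anymore.
	hsi.lastFH = nil
	return hsi.Iterator.Seek(t)
}
```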
Signed-off-by: beorn7 <beorn@grafana.com>
The `HistogramStatsIterator` is only meant to be used within PromQL.
PromQL only ever uses float histograms. If `HistogramStatsIterator` is
capable of handling integer histograms, that capability will still be
exercised, for example by the `BufferedSeriesIterator`, which buffers
samples and will use an integer `Histogram` for them if the underlying
chunk is an integer histogram chunk (which is common).
However, we can simply intercept the `Next` and `Seek` calls and
pretend to only ever be able to return float histograms. This has the
welcome side effect that we do not have to handle a mix of float and
integer histograms in the `HistogramStatsIterator` anymore.
With this commit, the `AtHistogram` call has been changed to panic so
that we ensure it is never called.
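A sketch of the interception; receiver and embedded field names are assumed, and `Seek` would be intercepted the same way.

```go
// Next reports integer histogram samples as float histograms so that callers
// like BufferedSeriesIterator never take the integer histogram path.
func (hsi *histogramStatsIterator) Next() chunkenc.ValueType {
	t := hsi.Iterator.Next()
	if t == chunkenc.ValHistogram {
		return chunkenc.ValFloatHistogram
	}
	return t
}

// AtHistogram must consequently never be called; panic to enforce that.
func (hsi *histogramStatsIterator) AtHistogram(*histogram.Histogram) (int64, *histogram.Histogram) {
	panic("histogramStatsIterator: AtHistogram must never be called")
}
```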
Benchmark differences between this and the previous commit:
name old time/op new time/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 837ms ± 3% 616ms ± 2% -26.36% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 1.11s ± 1% 0.91s ± 3% -17.75% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 751ms ± 6% 581ms ± 1% -22.63% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 1.13s ±11% 0.85s ± 2% -24.59% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 531MB ± 0% 148MB ± 0% -72.08% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 528MB ± 0% 145MB ± 0% -72.60% (p=0.016 n=5+4)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 452MB ± 0% 145MB ± 0% -67.97% (p=0.016 n=5+4)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 452MB ± 0% 141MB ± 0% -68.70% (p=0.016 n=5+4)
name old allocs/op new allocs/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 8.95M ± 0% 1.60M ± 0% -82.15% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 8.84M ± 0% 1.49M ± 0% -83.16% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 5.96M ± 0% 1.57M ± 0% -73.68% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 5.86M ± 0% 1.46M ± 0% -75.05% (p=0.016 n=5+4)
Signed-off-by: beorn7 <beorn@grafana.com>
PR #16702 introduced a regression because it was too strict in
detecting the condition for using the `HistogramStatsIterator`. It
essentially required the triggering function to be buried at least one
level deep.
`histogram_count(sum(rate(native_histogram_series[2m])))` would not
trigger anymore, but
`1*histogram_count(sum(rate(native_histogram_series[2m])))` would.
Ironically, PR #16682 made the performance of the
`HistogramStatsIterator` so much worse that _not_ using it was often
better, but this has to be addressed in a separate commit.
This commit reinstates the previous `HistogramStatsIterator` detection
behavior, which PR #16702 had intended to keep.
Relevant benchmark changes with this commit (i.e. old is without using
`HistogramStatsIterator`, new is with `HistogramStatsIterator`):
name old time/op new time/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 802ms ± 3% 837ms ± 3% +4.42% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 1.22s ± 3% 1.11s ± 1% -9.46% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 611ms ± 5% 751ms ± 6% +22.87% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 975ms ± 4% 1131ms ±11% +16.04% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 222MB ± 0% 531MB ± 0% +139.63% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 323MB ± 0% 528MB ± 0% +63.81% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 179MB ± 0% 452MB ± 0% +153.07% (p=0.016 n=4+5)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 175MB ± 0% 452MB ± 0% +157.73% (p=0.016 n=4+5)
name old allocs/op new allocs/op delta
NativeHistograms/histogram_count_with_short_rate_interval-16 4.48M ± 0% 8.95M ± 0% +99.51% (p=0.008 n=5+5)
NativeHistograms/histogram_count_with_long_rate_interval-16 5.02M ± 0% 8.84M ± 0% +75.89% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_short_rate_interval-16 3.00M ± 0% 5.96M ± 0% +98.93% (p=0.008 n=5+5)
NativeHistogramsCustomBuckets/histogram_count_with_long_rate_interval-16 2.89M ± 0% 5.86M ± 0% +102.69% (p=0.016 n=4+5)
Signed-off-by: beorn7 <beorn@grafana.com>
- Add a code comment about a counter reset edge case (which is
hopefully not relevant in practice).
- Rename the receiver from `f` to `hsi`. (`f` seemed completely off
  as a name. `i` or `it` might have worked, too, but I ended up with
  `hsi` as the easiest for the reader.)
Signed-off-by: beorn7 <beorn@grafana.com>