This adds:
* A `ScrapePoolConfig()` method to the scrape manager that allows getting
the scrape config for a given pool.
* An API endpoint at `/api/v1/targets/relabel_steps` that takes a pool name
and a label set of a target and returns a detailed list of applied
relabeling rules and their output for each step.
* A "show relabeling" link/button for each target on the discovery page
that shows the detailed flow of all relabeling rules (based on the API
response) for that target.
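For illustration, a minimal client sketch against the new endpoint. The query parameter names ("pool", "labels") and the label set encoding are assumptions, not taken from the PR:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Parameter names and label set encoding are assumed for
	// illustration; see the PR for the actual request format.
	q := url.Values{}
	q.Set("pool", "node")
	q.Set("labels", `{"__address__":"localhost:9100"}`)

	resp, err := http.Get("http://localhost:9090/api/v1/targets/relabel_steps?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Expected response shape: a step-by-step list of the applied
	// relabeling rules and the label set after each step.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```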
Note that this changes the JSON encoding of the relabeling rule config
struct to output the original snake_case (instead of camelCase) field names,
and before merging, we need to be sure that's ok :) See my comment about
that at https://github.com/prometheus/prometheus/pull/15383#issuecomment-3405591487
Fixes https://github.com/prometheus/prometheus/issues/17283
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Fixes #17308.
As explained there, adding the warn annotation about conflicting
counter reset hints doesn't happen consistently. Furthermore, because
incremental mean calculation was used so far (which includes
subtraction), avg calculation always created gauge histograms.
The fix is to make Sub behave like Add WRT counter reset handling, and
then set the result of a subtraction to gauge explicitly in actual
PromQL subtraction (rather than using Sub for something else, like
incremental mean calculation). Also, track the presence of a
CounterReset hint and a NotCounterReset hint separately for the
entirety of aggregated histograms and create the warn-annotation based
on that.
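A minimal sketch of that tracking logic, with simplified stand-in types rather than the actual PromQL engine code:

```go
package main

import "fmt"

// Simplified stand-in for histogram.CounterResetHint.
type counterResetHint int

const (
	unknownCounterReset counterResetHint = iota
	counterReset
	notCounterReset
)

// Track the presence of each hint separately over the entire
// aggregation and emit a single warning at the end, instead of
// deciding per pairwise Add/Sub operation.
func conflictingHints(hints []counterResetHint) bool {
	var sawReset, sawNotReset bool
	for _, h := range hints {
		switch h {
		case counterReset:
			sawReset = true
		case notCounterReset:
			sawNotReset = true
		}
	}
	return sawReset && sawNotReset
}

func main() {
	hints := []counterResetHint{counterReset, unknownCounterReset, notCounterReset}
	fmt.Println(conflictingHints(hints)) // true -> emit one aggregation warning
}
```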
As a minor fix, this commit also consistently phrases the warn
annotation in aggregation to be about "aggregation" rather than
"subtraction" or "addition", because the latter are just internal
operations within the aggregation and of no interest to the
user.
Signed-off-by: beorn7 <beorn@grafana.com>
* fix: Fix slicelabels corruption when used with proto decoding
Alternative to https://github.com/prometheus/prometheus/pull/16957/
Signed-off-by: bwplotka <bwplotka@gmail.com>
* addressed comments
Signed-off-by: bwplotka <bwplotka@gmail.com>
---------
Signed-off-by: bwplotka <bwplotka@gmail.com>
We have always validated that no bucket count is negative. We
should do the same for the count of observations and the zero bucket.
Note that this was always implied in the protobuf exposition format
because a count or a zero bucket population is ignored if it is not
positive.
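A sketch of the extended check, using plain float values rather than the actual histogram types (not the exact Prometheus code):

```go
package main

import "fmt"

// Bucket counts were already checked for negativity; the observation
// count and the zero bucket now get the same treatment.
func validateCounts(count, zeroCount float64, buckets []float64) error {
	if count < 0 {
		return fmt.Errorf("total count is negative: %v", count)
	}
	if zeroCount < 0 {
		return fmt.Errorf("zero bucket count is negative: %v", zeroCount)
	}
	for i, b := range buckets {
		if b < 0 {
			return fmt.Errorf("bucket %d count is negative: %v", i, b)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateCounts(10, -1, []float64{3, 7})) // zero bucket count is negative: -1
}
```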
Signed-off-by: beorn7 <beorn@grafana.com>
Allow schemas -9..52 instead of just -4..8, but reduce the resolution
to schema 8 if it is above that.
The reduction code path will be slow, but we only expect to hit it if
the TSDB already has higher-resolution samples and we are in a rollback.
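To make the reduction concrete, this is the standard bucket index mapping for dropping exponential-histogram resolution (an illustrative sketch, not the code from this change):

```go
package main

import "fmt"

// Going from schema s to s-d merges 2^d adjacent buckets into one;
// this is the standard index identity for exponential histograms.
func downscaleIndex(i int64, d uint) int64 {
	return ((i - 1) >> d) + 1
}

func main() {
	// Reducing schema 9 to schema 8 (d=1) merges bucket pairs.
	for _, i := range []int64{-2, -1, 0, 1, 2, 3, 4} {
		fmt.Printf("schema 9 index %2d -> schema 8 index %2d\n", i, downscaleIndex(i, 1))
	}
}
```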
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
It slows down compilation and doesn't make any of our benchmarks go faster.
Presumably it helped at an earlier point, but it doesn't help now.
Add a benchmark with a more complicated regex to demonstrate the slowdown.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Otherwise higher-level code like PromQL needs to constantly check
whether it can handle the samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Histogram.Validate and FloatHistogram.Validate now return an error on
unsupported schemas.
The scrape and remote-write handlers reduce the schema to the maximum
allowed if it is above that maximum but below the theoretical maximum of 52.
For scrape the maximum is a configuration option; for remote-write it is 8.
Note: the OTLP endpoint already does the reduction, without checking that
the schema is below 52, as the spec does not specify a maximum.
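A sketch of that policy as a single decision function; the constant and function names are assumptions for illustration:

```go
package main

import "fmt"

const (
	minSchema = -9
	maxSchema = 52 // theoretical maximum
)

// clampSchema reports the schema to use, whether bucket merging is
// required, or an error for schemas Validate now rejects.
func clampSchema(schema, callerMax int32) (int32, bool, error) {
	if schema < minSchema || schema > maxSchema {
		return 0, false, fmt.Errorf("unsupported schema %d", schema)
	}
	if schema > callerMax {
		// The caller must also merge buckets down to this schema.
		return callerMax, true, nil
	}
	return schema, false, nil
}

func main() {
	fmt.Println(clampSchema(12, 8)) // 8 true <nil>: remote-write reduces to 8
	fmt.Println(clampSchema(60, 8)) // error: beyond the theoretical maximum
}
```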
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* fix(textparse): implement NHCB parsing in ProtoBuf parser directly
The NHCB conversion does some validation, but we can only return an
error from Parser.Next(), not Parser.Histogram(). So the conversion
needs to happen in Next().
There are 2 cases:
1. "always_scrape_classic_histograms" is enabled, in which case we
convert after returning the classic series. This is to be consistent
with the PromParser text parser, which collects NHCB while spitting out
classic series; then returns the NHCB.
2. "always_scrape_classic_histograms" is disabled. In which case we never
return the classic series.
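The constraint in play, reduced to a toy iterator: only Next() can fail, so conversion and validation run there and the result is cached for the error-free accessor. All names here are illustrative, not the real parser API:

```go
package main

import (
	"errors"
	"fmt"
)

type parser struct {
	input []int
	pos   int
	cur   int // result of the "conversion", cached by Next
}

func (p *parser) Next() (bool, error) {
	if p.pos >= len(p.input) {
		return false, nil
	}
	v := p.input[p.pos]
	p.pos++
	if v < 0 {
		// A validation failure can only surface here.
		return false, errors.New("invalid histogram")
	}
	p.cur = v * 2 // stand-in for the NHCB conversion
	return true, nil
}

func (p *parser) Histogram() int { return p.cur } // no error return

func main() {
	p := &parser{input: []int{1, 2, -1}}
	for {
		ok, err := p.Next()
		if err != nil || !ok {
			fmt.Println("done:", err)
			return
		}
		fmt.Println(p.Histogram())
	}
}
```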
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* refactor(textparse): skip classic series instead of adding NHCB around
Do not return the first classic series from the EntryType state;
switch to EntrySeries instead. This means we need to start the
histogram field state from -3, not -2.
In EntrySeries, skip classic series if needed.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* reuse nhcb converter
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* test(textparse/nhcb): test corrupting metric family name
The NHCB parser doesn't always copy the metric name from the underlying
parser. When called via HELP or UNIT, the string is referenced directly,
which means that the NHCB read-ahead can corrupt it.
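A self-contained illustration of the hazard (Go 1.20+): a string that directly references a reused buffer changes out from under the caller:

```go
package main

import (
	"fmt"
	"unsafe"
)

// yoloString aliases the byte slice instead of copying it.
func yoloString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

func main() {
	buf := []byte("metric_family_a")
	name := yoloString(buf)   // references buf directly, no copy
	copy(buf, "overwritten!") // read-ahead reuses the buffer
	fmt.Println(name)         // prints "overwritten!y_a", not "metric_family_a"
}
```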
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Hide adding the NHCB parser on top of another parser in the New()
function so we can easily add direct NHCB-capable parsers.
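A toy sketch of the factory idea with stand-in types (not the real textparse interfaces):

```go
package main

import "fmt"

type Parser interface{ Name() string }

type protoParser struct{}

func (protoParser) Name() string { return "protobuf" }

type nhcbParser struct{ inner Parser }

func (p nhcbParser) Name() string { return "nhcb(" + p.inner.Name() + ")" }

// New hides the composition: callers just ask for a parser, and a
// future parser that emits NHCB directly would simply skip the wrapper.
func New(convertNHCB bool) Parser {
	var p Parser = protoParser{}
	if convertNHCB {
		p = nhcbParser{inner: p}
	}
	return p
}

func main() {
	fmt.Println(New(true).Name(), New(false).Name()) // nhcb(protobuf) protobuf
}
```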
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
See
https://pkg.go.dev/golang.org/x/tools/gopls/internal/analysis/modernize
for details.
This ran into a few issues (arguably bugs in the modernize tool),
which I will fix in the next commit, so that we have transparency
about what was done automatically.
Beyond those hiccups, I believe all the changes applied are
legitimate. Even where there might be no tangible direct gain, I would
argue it's still better to use the "modern" way to avoid
micro-discussions in tiny style PRs later.
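Two examples of the kind of rewrite the tool applies (illustrative, not taken from this changeset):

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// Before: for i := 0; i < 3; i++ { ... }
	// After:  range-over-int (Go 1.22+).
	for i := range 3 {
		fmt.Println(i)
	}

	// Before: sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	// After:  slices.Sort from the standard library.
	s := []int{3, 1, 2}
	slices.Sort(s)
	fmt.Println(s)
}
```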
Signed-off-by: beorn7 <beorn@grafana.com>
Go will not allocate when reading from a map with a key cast from []byte to string.
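A small demonstration of that compiler optimization:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"up": 1}
	key := []byte("up")

	// The compiler special-cases m[string(key)]: the lookup uses the
	// []byte contents directly and the conversion does not allocate.
	v := m[string(key)]
	fmt.Println(v) // 1

	// By contrast, converting first goes through an actual string
	// conversion (which in general allocates) before the lookup.
	s := string(key)
	fmt.Println(m[s])
}
```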
Also remove some yoloString calls in package `textparse` in favor of a
more suitable library function.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Add a `ByteSize()` method to the different labels implementations.
One use case is tracking the memory used by Labels.
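A sketch of what such a method computes, shown for a simplified slice-of-structs implementation (only the method name comes from this change):

```go
package main

import (
	"fmt"
	"unsafe"
)

type Label struct{ Name, Value string }

type Labels []Label

// ByteSize reports the struct overhead plus the bytes of every name
// and value string.
func (ls Labels) ByteSize() uint64 {
	n := uint64(len(ls)) * uint64(unsafe.Sizeof(Label{}))
	for _, l := range ls {
		n += uint64(len(l.Name)) + uint64(len(l.Value))
	}
	return n
}

func main() {
	ls := Labels{{Name: "__name__", Value: "up"}, {Name: "job", Value: "node"}}
	fmt.Println(ls.ByteSize())
}
```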
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
This requires a bit of repetition to cover all the different builds, but
it seems worth checking that the function does what is expected.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The problem happens when we parse a standalone native histogram, which
sets the p.lastHistogramExponential state flag. We never unset it.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
The custom values are the "le" bucket boundaries of native histograms
with custom buckets. They are never modified, so it is OK not to copy
them when iterating a chunk; just reference them.
If we ever add a function that modifies the custom values, like a
'trim' for example, that function will have to make a copy on write.
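An illustration of the invariant, with the hypothetical 'trim' doing copy-on-write:

```go
package main

import "fmt"

// trimCustomValues never mutates the shared slice; it copies on write.
func trimCustomValues(vals []float64, max float64) []float64 {
	out := make([]float64, 0, len(vals))
	for _, v := range vals {
		if v <= max {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	shared := []float64{0.1, 0.5, 1, 5} // "le" boundaries, never modified
	view := shared                      // chunk iteration: reference, no copy
	trimmed := trimCustomValues(view, 1)
	fmt.Println(shared, trimmed) // shared is untouched
}
```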
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Instead of using varint to encode the size of each label, use a single
byte for size 0-254, or a flag value of 255 followed by the size in
3 bytes little-endian.
This reduces the amount of code, and also the number of branches in
commonly-executed code, so it runs faster.
The maximum allowed label name or value length is now 2^24 or 16MB.
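A sketch of the scheme (same idea, not the exact code):

```go
package main

import "fmt"

// Sizes 0-254 take a single byte; larger sizes are the flag byte 255
// followed by 3 little-endian bytes, capping lengths at 2^24.
func appendSize(b []byte, n int) []byte {
	if n < 255 {
		return append(b, byte(n))
	}
	return append(b, 255, byte(n), byte(n>>8), byte(n>>16))
}

func readSize(b []byte) (size, read int) {
	if b[0] < 255 {
		return int(b[0]), 1
	}
	return int(b[1]) | int(b[2])<<8 | int(b[3])<<16, 4
}

func main() {
	for _, n := range []int{0, 254, 255, 1 << 20} {
		enc := appendSize(nil, n)
		size, read := readSize(enc)
		fmt.Printf("size %7d -> %d byte(s) encoded, decoded %d (consumed %d)\n",
			n, len(enc), size, read)
	}
}
```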
Memory used by labels changes as follows:
* Labels from 0 to 127 bytes length: same
* From 128 to 254: 1 byte less
* From 255 to 16383: 2 bytes more
* From 16384 to 2MB: 1 byte more
* From 2MB to 16MB: same
Labels: panic on string too long.
Slightly more user-friendly than encoding bad data and finding out when
we decode.
Clarify that Labels.Bytes() encoding can change.
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>