The original implementation in #9705 for native histograms included a
piece of technical debt (#15177): samples were committed ordered by type,
not by their append order. This was fixed in #17071, but this docstring
was not updated.
I've also taken the liberty of mentioning that we do not order by timestamp
either, so it is possible to append out-of-order samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
ReduceResolution is currently called before validation during
ingestion. This will cause a panic if there are not enough buckets in
the histogram. If there are too many buckets, the spurious buckets are
ignored, and therefore the error in the input histogram is masked.
Furthermore, invalid negative offsets might cause problems, too.
Therefore, we need to do some minimal validation in reduceResolution.
Fortunately, it is easy and shouldn't slow things down. Sadly, it
requires returning errors, which triggers a bunch of code changes.
But there is a bright side even here: we can get rid of a few panics.
(Remember: Don't panic!)
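As a rough illustration of the minimal validation meant here (a sketch with assumed names, not the actual code in model/histogram): check that the spans account for exactly the number of buckets present, and that only the first span carries a negative offset, returning an error instead of panicking.
```go
package validation

import (
	"fmt"

	"github.com/prometheus/prometheus/model/histogram"
)

// checkHistogramSpans is a hypothetical helper sketching the minimal checks
// described above: the spans must reference exactly numBuckets buckets, and
// only the first span is allowed to have a negative offset.
func checkHistogramSpans(spans []histogram.Span, numBuckets int) error {
	var total int
	for i, s := range spans {
		if i > 0 && s.Offset < 0 {
			return fmt.Errorf("span number %d with offset %d: negative offsets are only allowed for the first span", i+1, s.Offset)
		}
		total += int(s.Length)
	}
	if total != numBuckets {
		return fmt.Errorf("spans need %d buckets, have %d", total, numBuckets)
	}
	return nil
}
```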
In other news, we haven't done a full validation of histograms
read via remote-read. This is not so much a security concern (as you
can throw off Prometheus easily by feeding it bogus data via
remote-read) but more that remote-read sources might be makeshift and
could accidentally create invalid histograms. We really don't want to
panic in that case. So this commit not only adds a check of the
spans and buckets as needed for resolution reduction but also a full
validation during remote-read.
Signed-off-by: beorn7 <beorn@grafana.com>
* drop extra label from receiver
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* used constant
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
Partially fixes https://github.com/prometheus/prometheus/issues/17416 by
renaming all CT* names to ST* in the whole codebase, except RW2 (done in a
separate [PR](https://github.com/prometheus/prometheus/pull/17411)) and the
PrometheusProto exposition proto.
```
CreatedTimestamp -> StartTimestamp
CreatedTimeStamp -> StartTimestamp
created_timestamp -> start_timestamp
CT -> ST
ct -> st
```
Signed-off-by: bwplotka <bwplotka@gmail.com>
OTLP Receiver: Only write metadata to the WAL when the `metadata-wal-records` feature is enabled.
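A minimal sketch of this gating, with an option name and helper assumed for illustration (not the actual receiver code):
```go
package otlpreceiver

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/model/metadata"
	"github.com/prometheus/prometheus/storage"
)

// appendMetadata forwards metadata to the appender (and hence the WAL) only
// when the metadata-wal-records feature is enabled; otherwise the record
// would be written with nothing consuming it.
func appendMetadata(app storage.Appender, metadataWALRecordsEnabled bool, ls labels.Labels, meta metadata.Metadata) error {
	if !metadataWALRecordsEnabled {
		return nil
	}
	_, err := app.UpdateMetadata(0, ls, meta)
	return err
}
```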
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
`histogram.Error` becomes the generic wrapper type for all histogram errors.
This makes it easier and less error prone, when adding new errors, to check
whether an error is a histogram error, and it also makes converting between
the errors less error prone.
This changes the type of those specific sentinel errors from `error` to
`histogram.Error`, but it should almost never matter;
e.g., `errors.Is(err, ErrHistogram...)` still works out of the box.
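For illustration, a sketch of why `errors.Is` keeps working with such a wrapper type (the field and method names here are assumptions; the commit's actual definition may differ):
```go
package histogram

import "errors"

// Error is the generic wrapper for all histogram errors. Sentinel errors are
// declared as Error values, so errors.Is and errors.As keep working through
// the usual unwrap chain.
type Error struct {
	msg   string
	inner error // optional wrapped cause
}

func (e Error) Error() string { return e.msg }
func (e Error) Unwrap() error { return e.inner }

// A sentinel declared as an Error instead of a plain error value. It stands
// in for the ErrHistogram... sentinels mentioned above.
var ErrHistogramCountNotBigEnough = Error{msg: "histogram's observation count should be at least the number of observations found in the buckets"}

// Example check: works exactly as it did when the sentinel was a plain error.
func isCountError(err error) bool {
	return errors.Is(err, ErrHistogramCountNotBigEnough)
}
```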
Signed-off-by: Laurent Dufresne <laurent.dufresne@grafana.com>
Add logic to the target_info metric generation in the OTLP endpoint, so that any samples with the same timestamp for the same (target_info) series are de-duplicated. It comes out of a user's bug report about duplicated target_info samples in Grafana Mimir (which uses the Prometheus target_info generation logic).
If I'm not mistaken, duplicate target_info samples should stem from multiple resources in the same OTLP request being translated to the same target_info label set. It shouldn't be caused by a Prometheus bug.
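A sketch of the de-duplication idea (a hypothetical helper, not the actual translator code): remember which (label set, timestamp) pairs have already produced a target_info sample within one request and skip repeats.
```go
package otlptranslator

import "github.com/prometheus/prometheus/model/labels"

// targetInfoDeduper skips target_info samples that repeat the same label set
// and timestamp within a single OTLP request, which can happen when several
// resources translate to identical target_info labels.
type targetInfoDeduper struct {
	seen map[uint64]map[int64]struct{} // labels hash -> timestamps already emitted
}

func newTargetInfoDeduper() *targetInfoDeduper {
	return &targetInfoDeduper{seen: map[uint64]map[int64]struct{}{}}
}

// shouldEmit reports whether a target_info sample for the given label set and
// timestamp (in milliseconds) has not been emitted yet, and records it.
func (d *targetInfoDeduper) shouldEmit(ls labels.Labels, ts int64) bool {
	h := ls.Hash()
	tsSeen, ok := d.seen[h]
	if !ok {
		tsSeen = map[int64]struct{}{}
		d.seen[h] = tsSeen
	}
	if _, dup := tsSeen[ts]; dup {
		return false
	}
	tsSeen[ts] = struct{}{}
	return true
}
```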
* Add WriteProto method and tests for promtool metrics
This commit adds:
1. A WriteProto method to storage/remote/client.go that handles
marshaling and compression of protobuf messages
2. An updated parseAndPushMetrics in cmd/promtool/metrics.go that uses
the new WriteProto method
3. Comprehensive tests for the PushMetrics functionality
The WriteProto method provides a cleaner API for sending protobuf
messages without manually handling marshaling and compression.
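A rough sketch of what such a method can look like (the signature, the reduced Client type, and the HTTP handling are assumptions for illustration, not necessarily the merged API): marshal the proto message, snappy-compress it, and send it to the remote-write endpoint.
```go
package remote

import (
	"bytes"
	"context"
	"fmt"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
)

// Client is reduced to the fields this sketch needs; the real client in
// storage/remote/client.go carries more state (timeouts, headers, metrics, ...).
type Client struct {
	urlString string
	client    *http.Client
}

// WriteProto marshals a protobuf message, snappy-compresses the payload, and
// posts it to the remote-write endpoint, so callers no longer handle
// marshaling and compression themselves.
func (c *Client) WriteProto(ctx context.Context, msg proto.Message) error {
	raw, err := proto.Marshal(msg)
	if err != nil {
		return fmt.Errorf("marshaling message: %w", err)
	}
	compressed := snappy.Encode(nil, raw)

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.urlString, bytes.NewReader(compressed))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("Content-Type", "application/x-protobuf")

	resp, err := c.client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("unexpected response status %q", resp.Status)
	}
	return nil
}
```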
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* use Write method from exp/api/remote
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix lint
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix test
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* nit fixed
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix lint
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
As a follow-up to #17344, add two configuration parameters for controlling label
name translation, both defaulting to on for backwards compatibility (currently
these behaviours are hardcoded as enabled):
* otlp.label_name_underscore_sanitization => Prefix label names starting with a
single underscore with key_ when translating OTel attribute names
* otlp.label_name_preserve_multiple_underscores => Keep multiple consecutive
underscores in label names when translating OTel attribute names
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
The upgrade to prometheus/otlptranslator@7f02967de0 fixes two label
name translation bugs in legacy name translation mode:
* 'key' is no longer prefixed when label names start with an underscore
* Multiple consecutive underscores are combined into one
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
The detailed plan for this is laid out in
https://github.com/prometheus/prometheus/issues/16572.
This commit adds a global and local scrape config option
`scrape_native_histograms`, which has to be set to true to ingest
native histograms.
To ease the transition, the feature flag is changed to simply set the
default of `scrape_native_histograms` to true.
Further implications:
- The default scrape protocols now depend on the
`scrape_native_histograms` setting.
- Everywhere else, histograms are now "on by default".
Documentation beyond that for the feature flag and the scrape
config is deliberately left out. See
https://github.com/prometheus/prometheus/pull/17232 for that.
Signed-off-by: beorn7 <beorn@grafana.com>
We have always validated that none of the buckets is negative. We
should do the same for the count of observations and the zero bucket.
Note that this was always implied in the protobuf exposition format
because a count or a zero bucket population is ignored if it is not
positive.
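A sketch of the added checks, under assumed error names (the real validation code lives in model/histogram and may phrase this differently):
```go
package histogram

import "errors"

var (
	// Hypothetical sentinels for this sketch; the commit's actual error
	// values may be named differently.
	errNegativeCount     = errors.New("histogram has a negative observation count")
	errNegativeZeroCount = errors.New("histogram has a negative zero bucket population")
)

// checkCounts illustrates the validation described above for a float
// histogram: the total count and the zero bucket population must not be
// negative (individual bucket populations are already checked elsewhere).
func checkCounts(count, zeroCount float64) error {
	if count < 0 {
		return errNegativeCount
	}
	if zeroCount < 0 {
		return errNegativeZeroCount
	}
	return nil
}
```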
Signed-off-by: beorn7 <beorn@grafana.com>
If a sample read through remote read has a resolution that is too high,
reduce it to the maximum allowed.
This is a slow path, but we only expect it to happen if the server
side is a newer version that allows higher resolution.
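A sketch of this slow path (the constant and helper name are assumptions, and it uses the ReduceResolution signature that returns a histogram; versions where ReduceResolution returns an error would also need error handling):
```go
package remote

import "github.com/prometheus/prometheus/model/histogram"

// maxSupportedSchema stands in for the maximum resolution (schema) this
// Prometheus version supports.
const maxSupportedSchema = int32(8)

// capResolution reduces a histogram read via remote read down to the maximum
// supported resolution. Reducing resolution merges buckets and allocates, so
// it is only done when the (presumably newer) server sent a higher schema.
func capResolution(h *histogram.Histogram) *histogram.Histogram {
	if h != nil && h.Schema > maxSupportedSchema {
		return h.ReduceResolution(maxSupportedSchema)
	}
	return h
}
```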
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>