The createAttributes error path was incorrectly returning nil instead of
err, causing errors to be silently discarded. This could lead to silent
data loss for sum metrics during OTLP ingestion.
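Illustratively, the bug pattern looked like this (a minimal sketch with
hypothetical names, not the exact Prometheus code):
```
attrs, err := createAttributes(m) // hypothetical call site
if err != nil {
	return nil // BUG: discards err; the fix is to return err here
}
_ = attrs
```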
Fixes #17953
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* improve the readability of timeseries filtering by using the slices package
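A sketch of the kind of filtering the slices package enables (the
predicate is illustrative, not the actual change):
```
import (
	"slices"

	"github.com/prometheus/prometheus/prompb"
)

// keepNonEmpty drops series without samples, in place (illustrative predicate).
func keepNonEmpty(series []prompb.TimeSeries) []prompb.TimeSeries {
	return slices.DeleteFunc(series, func(ts prompb.TimeSeries) bool {
		return len(ts.Samples) == 0
	})
}
```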
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* ensure that BenchmarkBuildTimeSeries doesn't account for the building of
the actual proto in the benchmark results; we only care about the
buildTimeSeries call
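The general pattern (a sketch; makeTestSeries and buildTimeSeries stand
in for the real helpers):
```
import "testing"

func BenchmarkBuildTimeSeries(b *testing.B) {
	series := makeTestSeries() // setup runs before the timer is reset
	b.ResetTimer()             // only the loop below counts toward results
	for i := 0; i < b.N; i++ {
		buildTimeSeries(series)
	}
}
```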
Signed-off-by: Callum Styan <callumstyan@gmail.com>
---------
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* otlptranslator: filter __name__ from OTLP attributes to prevent duplicates
OTLP metrics can have a __name__ attribute which, when combined with the
metric name passed via extras, creates duplicate __name__ labels.
This commit filters out any __name__ metric attribute from OTLP data.
Also rename TestCreateAttributes to TestPrometheusConverter_createAttributes
for consistency, and add test cases for __name__, __type__, and __unit__ OTLP metric attributes.
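A minimal sketch of the filtering (not the actual otlptranslator code):
```
const nameLabel = "__name__"

// filterAttrs drops the reserved __name__ attribute; the metric name is
// supplied separately via extras.
func filterAttrs(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if k == nameLabel {
			continue
		}
		out[k] = v
	}
	return out
}
```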
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* otlptranslator: add label caching for OTLP-to-Prometheus conversion
Add per-request caching to reduce redundant computation and allocations
during OTLP metric conversion:
1. Per-request label sanitization cache: Cache sanitized label names
within a request to avoid repeated string allocations for commonly
repeated labels like __name__, job, instance.
2. Resource-level label caching: Precompute and cache job, instance,
promoted resource attributes, and external labels once per
ResourceMetrics boundary instead of for each datapoint.
3. Scope-level label caching: Precompute and cache scope metadata labels
(otel_scope_name, otel_scope_version, etc.) once per ScopeMetrics
boundary.
4. LabelNamer instance caching: Reuse the LabelNamer struct across
datapoints within the same resource context.
These optimizations significantly reduce allocations and improve latency
for OTLP ingestion workloads with many datapoints per resource/scope.
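A hedged sketch of item 1, the per-request sanitization cache (names are
illustrative, not the real implementation):
```
// labelCache memoizes sanitized label names for the lifetime of one request.
type labelCache struct {
	sanitized map[string]string
}

func newLabelCache() *labelCache {
	return &labelCache{sanitized: make(map[string]string)}
}

func (c *labelCache) get(raw string, sanitize func(string) string) string {
	if s, ok := c.sanitized[raw]; ok {
		return s
	}
	s := sanitize(raw)
	c.sanitized[raw] = s
	return s
}
```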
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
The benchmark was passing appendMetadata=false to NewCombinedAppender,
which caused UpdateMetadata to never be called on the underlying
noOpAppender. This resulted in app.metadata always being 0, failing
the assertion that the metadata count is positive.
Fix this by enabling metadata appending in the benchmark.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
No implementation yet; just testing the shape of the interface.
AtST is implemented for trivial cases; anything else is hard-coded
to return 0.
Ref: https://github.com/prometheus/prometheus/issues/17791
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
NHCB stands for native histograms with custom buckets.
prompb is used for both remote write 1.0 and remote read. We do not
support NHCB over remote write 1.0; however, we should absolutely
support it for remote read.
The Prometheus remote write 1.0 client already refuses to send NHCB.
The Prometheus remote write 1.0 server accepts NHCB but doesn't store
the custom values, corrupting the result. I'm now handling NHCB
correctly, instead of refusing or corrupting it.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
ReduceResolution is currently called before validation during
ingestion. This will cause a panic if there are not enough buckets in
the histogram. If there are too many buckets, the spurious buckets are
ignored, and therefore the error in the input histogram is masked.
Furthermore, invalid negative offsets might cause problems, too.
Therefore, we need to do some minimal validation in reduceResolution.
Fortunately, it is easy and shouldn't slow things down. Sadly, it
requires returning errors, which triggers a bunch of code changes.
Even here there is a bright side: we can get rid of a few panics.
(Remember: Don't panic!)
In other news, we haven't done a full validation of histograms
read via remote-read. This is not so much a security concern (as you
can throw off Prometheus easily by feeding it bogus data via
remote-read), but rather that remote-read sources might be makeshift and
could accidentally create invalid histograms. We really don't want to
panic in that case. So this commit adds not only a check of the
spans and buckets as needed for resolution reduction but also a full
validation during remote-read.
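A minimal sketch of the span/bucket check described above, assuming the
Span shape of Prometheus native histograms (not the actual implementation):
```
import "fmt"

type Span struct {
	Offset int32  // gap to the previous span; only the first may be negative
	Length uint32 // number of consecutive buckets covered by this span
}

func checkSpans(spans []Span, numBuckets int) error {
	var want int
	for i, s := range spans {
		if i > 0 && s.Offset < 0 {
			return fmt.Errorf("span %d has negative offset %d", i, s.Offset)
		}
		want += int(s.Length)
	}
	if want != numBuckets {
		return fmt.Errorf("spans need %d buckets, have %d", want, numBuckets)
	}
	return nil
}
```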
Signed-off-by: beorn7 <beorn@grafana.com>
* drop extra label from receiver
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* use a constant
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
Partially fixes https://github.com/prometheus/prometheus/issues/17416 by
renaming all CT* names to ST* in the whole codebase, except RW2 (done in
a separate
[PR](https://github.com/prometheus/prometheus/pull/17411)) and
the PrometheusProto exposition proto.
```
CreatedTimestamp -> StartTimestamp
CreatedTimeStamp -> StartTimestamp
created_timestamp -> start_timestamp
CT -> ST
ct -> st
```
Signed-off-by: bwplotka <bwplotka@gmail.com>
OTLP Receiver: Only write metadata to the WAL when the metadata-wal-records feature is enabled.
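For reference, the feature flag is enabled on the server command line:
```
prometheus --enable-feature=metadata-wal-records
```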
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
`histogram.Error` becomes the generic wrapper type for all histogram errors.
This makes it easier and less error-prone, when adding new errors, to check
whether an error is a histogram error, and less error-prone to convert the
errors.
This changes the type of those specific sentinel errors from error to
`histogram.Error`, but it should almost never matter; e.g.,
`errors.Is(err, ErrHistogram...)` would still work out of the box.
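A hedged sketch of the wrapper shape (assumed, not the actual histogram
package code):
```
// Error wraps all histogram errors behind one concrete type.
type Error struct {
	err error // wrapped sentinel, if any
	msg string
}

func (e *Error) Error() string { return e.msg }
func (e *Error) Unwrap() error { return e.err }
```
With Unwrap in place, errors.Is can traverse wrapped sentinels, which is
why existing checks keep working.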
Signed-off-by: Laurent Dufresne <laurent.dufresne@grafana.com>
Add logic to the target_info metric generation in the OTLP endpoint, so that any samples with the same timestamp for the same (target_info) series are de-duplicated. It comes out of a user's bug report about duplicated target_info samples in Grafana Mimir (which uses the Prometheus target_info generation logic).
If I'm not mistaken, duplicate target_info samples should stem from multiple resources in the same OTLP request being translated to the same target_info label set. It shouldn't be caused by a Prometheus bug.
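A minimal sketch of the de-duplication idea, using a hypothetical sample
type keyed by label set and timestamp:
```
type sample struct {
	labels string // canonical encoding of the label set
	ts     int64
	value  float64
}

// dedup keeps the first sample seen per (labels, timestamp) pair.
func dedup(samples []sample) []sample {
	type key struct {
		labels string
		ts     int64
	}
	seen := make(map[key]struct{}, len(samples))
	out := samples[:0]
	for _, s := range samples {
		k := key{s.labels, s.ts}
		if _, ok := seen[k]; ok {
			continue
		}
		seen[k] = struct{}{}
		out = append(out, s)
	}
	return out
}
```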
* Add WriteProto method and tests for promtool metrics
This commit adds:
1. WriteProto method to storage/remote/client.go that handles
marshaling and compression of protobuf messages
2. Updated parseAndPushMetrics in cmd/promtool/metrics.go to use
the new WriteProto method
3. Comprehensive tests for PushMetrics functionality
The WriteProto method provides a cleaner API for sending protobuf
messages without manually handling marshaling and compression.
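The general shape (a sketch; the actual method signature and error
handling in storage/remote/client.go may differ):
```
import (
	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
)

// encodeProto marshals msg and snappy-compresses it, as remote write expects.
func encodeProto(msg proto.Message) ([]byte, error) {
	raw, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	return snappy.Encode(nil, raw), nil
}
```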
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* use Write method from exp/api/remote
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix lint
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix test
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* nit fixed
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
* fix lint
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
---------
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
As a follow-up to #17344, add two configuration parameters for controlling label
name translation, both defaulting to on for backwards compatibility (currently
these behaviours are hardcoded as enabled; see the config sketch after the list):
* otlp.label_name_underscore_sanitization => Prefix label names starting with a
single underscore with key_ when translating OTel attribute names
* otlp.label_name_preserve_multiple_underscores => Keep multiple consecutive
underscores in label names when translating OTel attribute names
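A sketch of how these might look in the configuration file (exact section
layout per the OTLP config docs):
```
otlp:
  label_name_underscore_sanitization: true
  label_name_preserve_multiple_underscores: true
```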
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
The upgrade to prometheus/otlptranslator@7f02967de0 fixes two label
name translation bugs in legacy name translation mode:
* 'key' is no longer prefixed when label names start with an underscore
* Multiple consecutive underscores are combined into one
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>