Arve Knudsen 4afa76d083
OTLP: label caching for OTLP-to-Prometheus conversion to reduce allocations and improve latency (#17860)
* otlptranslator: add label caching for OTLP-to-Prometheus conversion

Add per-request caching to reduce redundant computation and allocations
during OTLP metric conversion:

1. Per-request label sanitization cache: Cache sanitized label names
   within a request to avoid repeated string allocations for commonly
   repeated labels such as __name__, job, and instance (see the sketch
   after this list).

2. Resource-level label caching: Precompute and cache job, instance,
   promoted resource attributes, and external labels once per
   ResourceMetrics boundary instead of for each datapoint.

3. Scope-level label caching: Precompute and cache scope metadata labels
   (otel_scope_name, otel_scope_version, etc.) once per ScopeMetrics
   boundary.

4. LabelNamer instance caching: Reuse the LabelNamer struct across
   datapoints within the same resource context.
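
As a rough illustration of points 1 and 4, the sketch below shows a
request-scoped map that memoizes sanitized label names. The names
(`labelNameCache`, `sanitizeLabelName`) and the sanitization logic are
illustrative only, not the actual otlptranslator API:

```go
// Minimal sketch of a per-request label-name cache; illustrative names,
// not the otlptranslator API.
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// labelNameCache memoizes sanitized label names for the lifetime of one
// OTLP write request, so frequently repeated names such as "__name__",
// "job" and "instance" are sanitized and allocated only once.
type labelNameCache map[string]string

func (c labelNameCache) get(name string) string {
	if s, ok := c[name]; ok {
		return s
	}
	s := sanitizeLabelName(name)
	c[name] = s
	return s
}

// sanitizeLabelName replaces characters that are not valid in a
// Prometheus label name with underscores (simplified illustration).
func sanitizeLabelName(name string) string {
	var b strings.Builder
	for i, r := range name {
		switch {
		case r == '_' || unicode.IsLetter(r), unicode.IsDigit(r) && i > 0:
			b.WriteRune(r)
		default:
			b.WriteByte('_')
		}
	}
	return b.String()
}

func main() {
	cache := labelNameCache{}
	// The second lookup for "service.name" hits the cache and avoids a
	// fresh sanitization pass and string allocation.
	fmt.Println(cache.get("service.name")) // service_name
	fmt.Println(cache.get("service.name")) // service_name (cached)
}
```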

These optimizations significantly reduce allocations and improve latency
for OTLP ingestion workloads with many datapoints per resource/scope.
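
Points 2 and 3 amount to hoisting label construction out of the
per-datapoint loop. The sketch below uses simplified stand-in types
rather than the collector's pdata API, and buildResourceLabels /
buildScopeLabels are hypothetical helpers standing in for the real
conversion logic:

```go
// Sketch of resource/scope-level label precomputation with simplified
// stand-in types.
package main

import "fmt"

type Resource struct{ Attrs map[string]string }
type Scope struct{ Name, Version string }

type ScopeMetrics struct {
	Scope      Scope
	DataPoints []map[string]string // datapoint attributes
}

type ResourceMetrics struct {
	Resource Resource
	Scopes   []ScopeMetrics
}

// buildResourceLabels derives job/instance once per resource instead of
// once per datapoint.
func buildResourceLabels(r Resource) map[string]string {
	return map[string]string{
		"job":      r.Attrs["service.name"],
		"instance": r.Attrs["service.instance.id"],
	}
}

// buildScopeLabels derives otel_scope_* labels once per scope.
func buildScopeLabels(s Scope) map[string]string {
	return map[string]string{
		"otel_scope_name":    s.Name,
		"otel_scope_version": s.Version,
	}
}

func convert(rm ResourceMetrics) {
	resourceLabels := buildResourceLabels(rm.Resource) // computed once per resource
	for _, sm := range rm.Scopes {
		scopeLabels := buildScopeLabels(sm.Scope) // computed once per scope
		for _, dp := range sm.DataPoints {
			// Every datapoint reuses the precomputed label sets.
			fmt.Println(resourceLabels, scopeLabels, dp)
		}
	}
}

func main() {
	convert(ResourceMetrics{
		Resource: Resource{Attrs: map[string]string{
			"service.name":        "checkout",
			"service.instance.id": "pod-1",
		}},
		Scopes: []ScopeMetrics{{
			Scope:      Scope{Name: "otelcol/receiver", Version: "0.1.0"},
			DataPoints: []map[string]string{{"status": "200"}},
		}},
	})
}
```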

---------

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
2026-01-16 10:30:05 +00:00