* Update go modules
Signed-off-by: Arthur Silva Sens <arthursens2005@gmail.com>
* Update semconv
Signed-off-by: Arthur Silva Sens <arthursens2005@gmail.com>
* Force go version 1.24.0
Signed-off-by: Arthur Silva Sens <arthursens2005@gmail.com>
* Revert incorrect prometheus bump
Signed-off-by: Arthur Silva Sens <arthursens2005@gmail.com>
---------
Signed-off-by: Arthur Silva Sens <arthursens2005@gmail.com>
Implements expand/collapse functionality for displaying the final scrape
configuration (interval + timeout) in the timing column of the targets page.
- Add ScrapeDetails component with expand/collapse chevron
- Keep existing "Last Scrape" and "Scrape Duration" badges always visible
- Display "Scrape interval: every \<interval\>" and "Scrape timeout: after \<timeout\>" when expanded
- Use IconRepeat for interval and IconPlugConnectedX for timeout
- Follow TargetLabels.tsx pattern for consistency
- Implement performance optimization with conditional DOM rendering
- Maintain existing hover tooltip functionality
Signed-off-by: ADITYATIWARI342005 <142050150+ADITYATIWARI342005@users.noreply.github.com>
* Add anchored and smoothed to range vector selectors.
This adds "anchored" and "smoothed" keywords that can be used following a matrix selector.
"Anchored" selects the last point before the range (or the first one after the range) and adds it at the boundary of the matrix selector.
"Smoothed" applies linear interpolation at the edges using the points around the edges. In the absence of a point before or after the edge, the first or the last point is added to the edge, without interpolation.
*Example usage*
* `increase(caddy_http_requests_total[5m] anchored)` (equivalent to *caddy_http_requests_total - caddy_http_requests_total offset 5m* but takes counter resets into consideration)
* `rate(caddy_http_requests_total[step()] smoothed)`
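As a rough illustration of the edge handling described above (simplified Go, not the actual engine code; `point`, `anchorEdge` and `smoothEdge` are made-up names):

```go
package promqledges

// point is a simplified (timestamp, value) pair; the real engine works on its
// own sample types, this is only to illustrate the boundary handling.
type point struct {
	t int64
	v float64
}

// anchorEdge copies the value of the last point before the edge (or, failing
// that, the first point after it) onto the edge itself, so functions like
// increase() see a sample exactly at the range boundary.
func anchorEdge(before, after []point, edge int64) (point, bool) {
	if len(before) > 0 {
		return point{t: edge, v: before[len(before)-1].v}, true
	}
	if len(after) > 0 {
		return point{t: edge, v: after[0].v}, true
	}
	return point{}, false
}

// smoothEdge linearly interpolates between the points surrounding the edge;
// when one side has no point, it falls back to copying the nearest point,
// as described above.
func smoothEdge(before, after []point, edge int64) (point, bool) {
	if len(before) == 0 || len(after) == 0 {
		return anchorEdge(before, after, edge)
	}
	p0, p1 := before[len(before)-1], after[0]
	frac := float64(edge-p0.t) / float64(p1.t-p0.t)
	return point{t: edge, v: p0.v + (p1.v-p0.v)*frac}, true
}
```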
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Update docs/feature_flags.md
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
* Smoothed/Anchored rate: Add more tests
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Anchored/Smoothed modifier: error out with histograms
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
---------
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Reduce the resolution of histograms as needed and ignore invalid
schemas, logging a warning.
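A minimal sketch of that behaviour, assuming a stored-schema cap of 8, an accepted range of -9..52, and the `ReduceResolution` helper from `model/histogram`; the `filterHistograms` helper and its logger wiring are illustrative only:

```go
package histfilter

import (
	"log/slog"

	"github.com/prometheus/prometheus/model/histogram"
)

// maxStoredSchema is the assumed cap on stored native histogram resolution.
const maxStoredSchema = 8

// filterHistograms reduces histograms that are above the resolution cap and
// drops histograms with an unsupported schema, logging a warning for each
// dropped one. Everything else passes through unchanged.
func filterHistograms(in []*histogram.Histogram, logger *slog.Logger) []*histogram.Histogram {
	out := make([]*histogram.Histogram, 0, len(in))
	for _, h := range in {
		switch {
		case h.Schema > maxStoredSchema && h.Schema <= 52:
			// Merge buckets down to the allowed resolution.
			out = append(out, h.ReduceResolution(maxStoredSchema))
		case h.Schema < -9 || h.Schema > 52:
			logger.Warn("ignoring histogram with invalid schema", "schema", h.Schema)
		default:
			out = append(out, h)
		}
	}
	return out
}
```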
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Because the label API endpoints read from the TSDB indexes, they can
return information for series which are present in the index but have no
samples in the queried interval.
Add a similar note for the series endpoint.
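For illustration, querying the labels endpoint with an explicit time window could look like this (plain `net/http` against a local Prometheus); because of the caveat above, the response can still contain names coming only from series without samples in that window:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Ask for label names seen in the last 5 minutes. Because the endpoint is
	// answered from the TSDB index, the response may still include names from
	// series that have no samples inside this window.
	end := time.Now()
	start := end.Add(-5 * time.Minute)

	q := url.Values{}
	q.Set("start", fmt.Sprintf("%d", start.Unix()))
	q.Set("end", fmt.Sprintf("%d", end.Unix()))

	resp, err := http.Get("http://localhost:9090/api/v1/labels?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```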
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
If a sample read through remote read has too high a resolution,
reduce it to the maximum allowed.
This is a slow path, but we only expect it to happen when the server
side is a newer version that allows a higher resolution.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Allow schemas -9..52 instead of just -4..8, but reduce the resolution to 8 if
it is above that.
The reduction code path will be slow, but we only expect it to happen if the
TSDB already has higher-resolution samples and we are in a rollback.
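In code form, the new bounds might look roughly like this (constant and function names are illustrative):

```go
package histlimits

import "fmt"

const (
	// Illustrative names: the accepted range is widened to -9..52, while
	// anything above schema 8 is reduced back to 8 on the way in.
	minAcceptedSchema int32 = -9
	maxAcceptedSchema int32 = 52
	maxStoredSchema   int32 = 8
)

// clampSchema reports the schema a histogram should be reduced to, or an
// error if the schema is outside the accepted range entirely.
func clampSchema(schema int32) (int32, error) {
	if schema < minAcceptedSchema || schema > maxAcceptedSchema {
		return 0, fmt.Errorf("invalid exponential histogram schema %d", schema)
	}
	if schema > maxStoredSchema {
		return maxStoredSchema, nil
	}
	return schema, nil
}
```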
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
It slows down compilation and doesn't make any of our benchmarks go faster.
It is assumed to have helped at an earlier point, but it doesn't help now.
Add a benchmark with a more complicated regex to demonstrate the slowdown.
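A sketch of what such a benchmark could look like, using the standard library `regexp` package; the pattern and benchmark names are made up:

```go
package regexbench

import (
	"regexp"
	"testing"
)

// A deliberately more complicated pattern: alternations, character classes
// and a bounded repetition.
const complicatedPattern = `^(?:api|web|worker)-(?P<env>prod|staging|dev)-[a-z0-9]{4,12}(?:\.(?P<zone>[a-z]{2}-[a-z]+-\d))?$`

// BenchmarkCompileComplicatedRegex measures how long compiling the more
// complicated pattern takes.
func BenchmarkCompileComplicatedRegex(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := regexp.Compile(complicatedPattern); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkMatchComplicatedRegex measures matching with the pattern compiled
// once, to show whether match speed is affected at all.
func BenchmarkMatchComplicatedRegex(b *testing.B) {
	re := regexp.MustCompile(complicatedPattern)
	input := []byte("api-prod-abcd1234.eu-west-1")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		re.Match(input)
	}
}
```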
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
When remote read returns chunks, the validation happens in tsdb/chunkenc.
However, when it returns samples, we need to modify the iterator to
validate them.
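Roughly the shape of that change (the `histogramIterator` interface and `validatingIterator` type below are simplified stand-ins for the real storage iterator types, and the schema cap is assumed):

```go
package remoteread

import (
	"github.com/prometheus/prometheus/model/histogram"
)

// histogramIterator is a deliberately minimal stand-in for the sample
// iterator used on the remote-read path.
type histogramIterator interface {
	Next() bool
	AtHistogram() (int64, *histogram.Histogram)
}

// validatingIterator wraps another iterator and reduces the resolution of
// each histogram sample before handing it on, mirroring the check that
// tsdb/chunkenc already performs for chunk-based responses.
type validatingIterator struct {
	histogramIterator
	maxSchema int32 // assumed cap, e.g. 8
}

func (it *validatingIterator) AtHistogram() (int64, *histogram.Histogram) {
	t, h := it.histogramIterator.AtHistogram()
	if h != nil && h.Schema > it.maxSchema {
		// Assumes the ReduceResolution helper from model/histogram.
		h = h.ReduceResolution(it.maxSchema)
	}
	return t, h
}
```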
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Otherwise, higher-level code like PromQL needs to constantly check whether it
can handle the samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
With the fixed commit order, we can now handle the conversion of float
staleness markers to histogram staleness markers in a more direct way.
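For illustration only, the conversion can be as small as the sketch below; it assumes the convention that a native histogram staleness marker carries the stale NaN in its `Sum` field, and `floatStaleToHistogramStale` is a made-up name:

```go
package staleconv

import (
	"math"

	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/value"
)

// floatStaleToHistogramStale turns a float staleness marker into a histogram
// staleness marker: a histogram whose Sum carries the stale NaN bit pattern.
// Non-stale values are left alone.
func floatStaleToHistogramStale(v float64) (*histogram.Histogram, bool) {
	if !value.IsStaleNaN(v) {
		return nil, false
	}
	return &histogram.Histogram{Sum: math.Float64frombits(value.StaleNaN)}, true
}
```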
Signed-off-by: beorn7 <beorn@grafana.com>