Because of relabelling, an endpoint may select only a subset of the series
that go through WriteStorage.
Keeping a highestTimestamp at the WriteStorage level therefore yields wrong
values when the corresponding samples never even make it to a remote queue.
PrometheusRemoteWriteBehind is currently based on that timestamp and would
fire if an endpoint is only interested in a subset of series that take time
to appear.
A "prometheus_remote_storage_queue_highest_timestamp_seconds" that only
takes into account samples in the queue is introduced, and used in
PrometheusRemoteWriteBehind and dashboards in documentation/prometheus-mixin
Same applies to samplesIn/dataIn, QueueManager should know more about
when to update those; when data is enqueued.
That makes dataDropped unnecessary, thus help simplify the logic
in QueueManager.calculateDesiredShards()
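
A minimal sketch of the intended behaviour, not the actual QueueManager
code (the type and field names below are made up; only the metric name
comes from this change): the gauge is only advanced for samples that were
actually accepted into an endpoint's queue.

    package remotesketch

    import "github.com/prometheus/client_golang/prometheus"

    type sample struct {
    	T int64 // timestamp in milliseconds
    	V float64
    }

    // queue is a stand-in for a per-endpoint queue inside QueueManager.
    type queue struct {
    	highest int64
    	// Backs "prometheus_remote_storage_queue_highest_timestamp_seconds".
    	highestEnqueuedTimestamp prometheus.Gauge
    }

    // enqueue is only called for samples that survived the endpoint's
    // relabelling and were accepted into the queue, so the gauge cannot run
    // ahead of data that will actually be sent.
    func (q *queue) enqueue(samples []sample) {
    	for _, s := range samples {
    		if s.T > q.highest {
    			q.highest = s.T
    		}
    	}
    	q.highestEnqueuedTimestamp.Set(float64(q.highest) / 1000)
    }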
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Skip creating an iterator and walking through all existing values when we
can easily tell there are no existing values.
This is the normal case: the TSDB head creates an appender immediately
after creating every chunk.
Also remove redundant handling of empty chunks.
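
The gist of the change, as a hedged sketch (the real code lives in the
chunk appenders; the interfaces and names here are illustrative only):

    package chunksketch

    // Minimal stand-ins for the chunk interfaces; just enough for the sketch.
    type chunk interface {
    	NumSamples() int
    	Iterator() iterator
    }

    type iterator interface {
    	Next() bool
    	At() (int64, float64)
    }

    // lastValues recovers the values an appender needs from an existing chunk,
    // but skips building an iterator entirely when the chunk is still empty,
    // which is the normal case: the TSDB head creates an appender immediately
    // after creating every chunk.
    func lastValues(c chunk) (t int64, v float64, ok bool) {
    	if c.NumSamples() == 0 {
    		return 0, 0, false
    	}
    	it := c.Iterator()
    	for it.Next() {
    		t, v = it.At()
    		ok = true
    	}
    	return t, v, ok
    }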
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Both `HistogramChunk` and `FloatHistogramChunk` have a `Layout()`
method for historical reasons. As it has turned out, these methods are
unused and also buggy. This commit simply removes them.
Signed-off-by: beorn7 <beorn@grafana.com>
- The tool left an empty line behind that we don't need anymore, see
https://github.com/prometheus/prometheus/pull/17092. (Arguably not a
bug in the tool but just our stricter style about empty lines.)
- In tsdb/index/postings_test.go, our (admittedly somewhat
convoluted) code structure tricked the tool into spitting out something
that wouldn't even compile.
- storage/remote/queue_manager_test.go is just a minor formatting
nit.
Signed-off-by: beorn7 <beorn@grafana.com>
See
https://pkg.go.dev/golang.org/x/tools/gopls/internal/analysis/modernize
for details.
This ran into a few issues (arguably bugs in the modernize tool),
which I will fix in the next commit, so that we have transparency about
what was done automatically.
Beyond those hiccups, I believe all the changes applied are
legitimate. Even where there might be no tangible direct gain, I would
argue it's still better to use the "modern" way to avoid micro-discussions
in tiny style PRs later.
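
For reference, the tool can be invoked roughly like this, as documented on
the page linked above (exact flags may vary between versions):

    go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...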
Signed-off-by: beorn7 <beorn@grafana.com>
Add a metric to track unexpected metadata seen in populateV2TimeSeries, which would indicate metadata being incorrectly routed in the queue_manager code paths.
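
A sketch of the kind of counter this adds (the metric name and help text
below are assumptions, not the actual names used in the change):

    package remotesketch

    import "github.com/prometheus/client_golang/prometheus"

    // unexpectedMetadata counts metadata entries that reach
    // populateV2TimeSeries even though metadata should have been routed
    // elsewhere; a non-zero value points at a bug in the queue_manager
    // code paths.
    var unexpectedMetadata = prometheus.NewCounter(prometheus.CounterOpts{
    	Namespace: "prometheus",
    	Subsystem: "remote_storage",
    	Name:      "unexpected_metadata_total", // hypothetical name
    	Help:      "Metadata records unexpectedly seen while populating v2 time series.",
    })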
---------
Signed-off-by: leegin <leegin.t@gmail.com>
Signed-off-by: Darkknight <leegin.t@gmail.com>
* Optimise concurrent rule evaluation for rules querying ALERTS and ALERTS_FOR_STATE
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Further optimised the case of ALERTS and ALERTS_FOR_STATE without an alertname label matcher (see the sketch below)
Signed-off-by: Marco Pracucci <marco@pracucci.com>
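
As a sketch of what "querying ALERTS/ALERTS_FOR_STATE without an alertname
matcher" means in practice; this uses the promql/parser package but is not
the actual scheduling code, and the function name is made up:

    package rulesketch

    import (
    	"github.com/prometheus/prometheus/model/labels"
    	"github.com/prometheus/prometheus/promql/parser"
    )

    // selectsAlertsWithoutAlertname reports whether a rule expression reads
    // the ALERTS or ALERTS_FOR_STATE series without pinning an alertname.
    // Such a rule may depend on the output of any alerting rule, so it
    // cannot safely be evaluated concurrently with the rest of its group.
    func selectsAlertsWithoutAlertname(query string) (bool, error) {
    	expr, err := parser.ParseExpr(query)
    	if err != nil {
    		return false, err
    	}
    	depends := false
    	parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
    		vs, ok := node.(*parser.VectorSelector)
    		if !ok || (vs.Name != "ALERTS" && vs.Name != "ALERTS_FOR_STATE") {
    			return nil
    		}
    		pinned := false
    		for _, m := range vs.LabelMatchers {
    			if m.Name == "alertname" && m.Type == labels.MatchEqual {
    				pinned = true
    			}
    		}
    		if !pinned {
    			depends = true
    		}
    		return nil
    	})
    	return depends, nil
    }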
---------
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Currently the API always returns HTTP code 422 for engine execution errors.
This PR allows the error code to be overridden, based on the ErrorType and the error itself.
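
A hedged sketch of the idea, not the actual API surface (the function and
type names here are assumptions):

    package apisketch

    import "net/http"

    // errorType mirrors the API's error classification, e.g. "timeout",
    // "canceled", "execution".
    type errorType string

    // statusForError is the kind of override point this change enables:
    // an embedder can map an engine error to a different HTTP status than
    // the default 422.
    func statusForError(typ errorType, err error) int {
    	switch typ {
    	case "timeout":
    		// An embedder might prefer 503 for timeouts instead of 422.
    		return http.StatusServiceUnavailable
    	case "canceled":
    		// 499 is the de-facto "client closed request" code used by some proxies.
    		return 499
    	default:
    		// 422 remains the default for engine execution errors.
    		return http.StatusUnprocessableEntity
    	}
    }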
Signed-off-by: Justin Jung <jungjust@amazon.com>
Signed-off-by: Justin Jung <justinjung04@gmail.com>
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
A race condition in TestSendSamplesWithBackoffWithSampleAgeLimit was
observed in CI where the sample age limit was too close to the backoff
time, causing samples to be dropped intermittently. Increasing the
SampleAgeLimit resolves the problem.
Signed-off-by: Adam Bernot <bernot@google.com>
While preparing PR #16701, we identified an inconsistency in the chunk
format documentation. The `varint` encoding can require up to 10 bytes
for a 64-bit integer, such as when timestamps are encoded. However, the
chunk length field is a 32-bit integer, which requires at most 5 bytes
in `varint` encoding.
This is reflected in the code, where a maximum of 5 bytes are read when
parsing the chunk length.
50ba25f273/tsdb/chunks/chunks.go (L709-L711)
50ba25f273/tsdb/chunks/chunks.go (L47-L48)
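
The standard library makes the two bounds easy to check; this snippet only
illustrates the numbers above and is not the chunk-parsing code itself:

    package main

    import (
    	"encoding/binary"
    	"fmt"
    )

    func main() {
    	buf := make([]byte, binary.MaxVarintLen64)
    	// A full 64-bit value (such as an encoded timestamp) can need up to 10 bytes.
    	fmt.Println(binary.PutUvarint(buf, ^uint64(0)), binary.MaxVarintLen64) // 10 10
    	// The chunk length is a 32-bit value, so at most 5 bytes are ever needed.
    	fmt.Println(binary.PutUvarint(buf, uint64(^uint32(0))), binary.MaxVarintLen32) // 5 5
    }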
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Signed-off-by: Victor Herrero Otal <victor.herrero.otal@sap.com>
* remote read: simplify ReadMultiple to return single SeriesSet
Changed ReadMultiple to return a single SeriesSet with interleaved
series from all queries instead of a slice of SeriesSets. This
simplifies the interface and removes the complex multiplexing
infrastructure while maintaining the ability to send multiple
queries in a single HTTP request.
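
For illustration, the simplified shape looks roughly like this (the exact
method signature is an assumption, not copied from the code):

    package remotesketch

    import (
    	"context"

    	"github.com/prometheus/prometheus/prompb"
    	"github.com/prometheus/prometheus/storage"
    )

    // ReadClient, sketched: one call, many queries, one interleaved result set.
    type ReadClient interface {
    	ReadMultiple(ctx context.Context, queries []*prompb.Query, sortSeries bool) (storage.SeriesSet, error)
    }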
Changes:
- Updated ReadClient interface: ReadMultiple now returns storage.SeriesSet
- Removed multiplexing infrastructure (MessageQueue, QueueConsumer, etc.)
- Simplified response handling to interleave series from all queries
- Updated tests to match new interface
- All existing tests pass
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
* Fix sorting behavior in ReadMultiple for samples responses
When sortSeries=false, the previous implementation incorrectly used
storage.NewMergeSeriesSet which requires sorted inputs, violating the
function's contract and potentially producing incorrect results.
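
A sketch of the unsorted path: a simple concatenating SeriesSet that never
assumes any input order and keeps duplicates. It implements the current
storage.SeriesSet interface but is not the exact code from the PR:

    package remotesketch

    import (
    	"github.com/prometheus/prometheus/storage"
    	"github.com/prometheus/prometheus/util/annotations"
    )

    // concatSeriesSet yields the series of all input sets one after another.
    // Unlike storage.NewMergeSeriesSet it requires no sort order, so it is
    // safe to use when sortSeries=false; overlapping queries keep their
    // duplicates.
    type concatSeriesSet struct {
    	sets []storage.SeriesSet
    	cur  int
    }

    func (c *concatSeriesSet) Next() bool {
    	for c.cur < len(c.sets) {
    		if c.sets[c.cur].Next() {
    			return true
    		}
    		if c.sets[c.cur].Err() != nil {
    			return false
    		}
    		c.cur++
    	}
    	return false
    }

    func (c *concatSeriesSet) At() storage.Series { return c.sets[c.cur].At() }

    func (c *concatSeriesSet) Err() error {
    	if c.cur < len(c.sets) {
    		return c.sets[c.cur].Err()
    	}
    	return nil
    }

    func (c *concatSeriesSet) Warnings() annotations.Annotations {
    	// Kept simple for the sketch; a fuller implementation would merge
    	// the warnings of all inner sets.
    	if c.cur < len(c.sets) {
    		return c.sets[c.cur].Warnings()
    	}
    	return nil
    }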
Changes:
- When sortSeries=true: Use NewMergeSeriesSet for efficient merging and
deduplication of sorted series
- When sortSeries=false: Use simple concatenation to avoid the sorted
input requirement, preserving duplicates from overlapping queries
- Add comprehensive tests to verify both sorting behaviors
- Update existing test expectations to match correct sorted order
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
* Refactor to reduce code duplication in ReadMultiple implementation
Extract common query result combination logic into a shared
combineQueryResults function that handles both sorted and unsorted
cases. This eliminates duplication between the real client
implementation and the mock client used in tests.
Changes:
- Add combineQueryResults helper function in client.go
- Refactor handleSamplesResponseImpl to use the helper
- Simplify mockedRemoteClient.ReadMultiple to use the same helper
- Reduce code duplication by ~30 lines while maintaining same functionality
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
Right now TestParseExpressions tests whether a query returns an error, but it only does a fuzzy check on the returned errors.
The error returned by the parser is ParseErrors, which is a slice of ParseErr structs.
The Error() method on ParseErrors returns an error string based on the first error in that slice. This hides any other returned errors, so we can end up with bogus errors being returned without this test ever catching it.
This change makes the test compare the returned error (which is always of the ParseErrors type) with an expected ParseErrors slice.
An extra benefit is that the current tests mostly ignore the positional range of errors and only check for the correct error message; now errors must also carry the expected positional information.
This uncovered a few cases where the positional information of errors seems wrong; FIXMEs were added for those lines.
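
In test form, the stricter comparison is roughly this (a sketch with a
made-up helper name; the table-driven details of TestParseExpressions are
omitted):

    package parsersketch

    import (
    	"testing"

    	"github.com/prometheus/prometheus/promql/parser"
    	"github.com/stretchr/testify/require"
    )

    // requireParseErrors asserts on the full ParseErrors slice rather than
    // on the message of its first element, so extra or bogus errors and
    // wrong positional ranges are caught too.
    func requireParseErrors(t *testing.T, expected parser.ParseErrors, err error) {
    	t.Helper()
    	var perrs parser.ParseErrors
    	require.ErrorAs(t, err, &perrs)
    	require.Equal(t, expected, perrs)
    }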
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>