This callback is not used by Prometheus, but in downstream projects it
is wasteful to allocate an ID only to abandon it.
Remove a lengthy comment which I feel distracts from the flow.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Reduce the resolution of histograms as needed and ignore invalid
schemas while emitting a warning log.
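As a rough illustration of what reducing resolution means for native histograms (a hand-rolled sketch, not the actual model/histogram code, and ignoring bucket index alignment and spans): each drop in schema merges adjacent bucket pairs, halving the number of buckets.

```go
package main

import "fmt"

// halveResolution merges adjacent bucket counts pairwise, which is what a
// one-step schema reduction amounts to. Real native-histogram code also has
// to respect bucket index alignment and spans; this sketch ignores that.
func halveResolution(counts []uint64) []uint64 {
	out := make([]uint64, 0, (len(counts)+1)/2)
	for i := 0; i < len(counts); i += 2 {
		c := counts[i]
		if i+1 < len(counts) {
			c += counts[i+1] // merge the pair
		}
		out = append(out, c)
	}
	return out
}

func main() {
	fmt.Println(halveResolution([]uint64{1, 2, 3, 4, 5})) // [3 7 5]
}
```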
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
See
https://pkg.go.dev/golang.org/x/tools/gopls/internal/analysis/modernize
for details.
This ran into a few issues (arguably bugs in the modernize tool),
which I will fix in the next commit, so that we have transparency about
what was done automatically.
Beyond those hiccups, I believe all the changes applied are
legitimate. Even where there might be no tangible direct gain, I would
argue it's still better to use the "modern" way to avoid
micro-discussions in tiny style PRs later.
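For illustration, here is one kind of mechanical rewrite the modernize analyzer suggests (a generic example, not a diff taken from this commit): replacing a hand-written search loop with the generic `slices` helper from the standard library.

```go
package main

import (
	"fmt"
	"slices"
)

// Before: a hand-written containment check.
func containsOld(names []string, want string) bool {
	for _, n := range names {
		if n == want {
			return true
		}
	}
	return false
}

// After: the "modern" equivalent using the standard library.
func containsNew(names []string, want string) bool {
	return slices.Contains(names, want)
}

func main() {
	names := []string{"up", "scrape_duration_seconds"}
	fmt.Println(containsOld(names, "up"), containsNew(names, "up")) // true true
}
```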
Signed-off-by: beorn7 <beorn@grafana.com>
Renames the head's deleted map to walExpiries, and creates entries for any
duplicate series records encountered during WAL replay, with the expiry set
to the highest current WAL segment number. Any subsequent WAL
checkpoints will see the duplicate series entry in the walExpiries map, and
keep the series record until the last WAL segment that could contain its
samples is deleted.
Other considerations:
- WBL: series records aren't written to the WBL, so there are no duplicates to deal with.
- Agent mode: has its own WAL replay logic that handles duplicate series records differently, and is outside the scope of this PR.
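A minimal, self-contained sketch of the mechanism described above (the type and variable names are illustrative, not the actual Head fields):

```go
package main

import "fmt"

// SeriesRef stands in for the head's series reference type.
type SeriesRef uint64

func main() {
	// walExpiries: series ref -> highest WAL segment that may still contain
	// samples for that series record.
	walExpiries := map[SeriesRef]int{}

	// During WAL replay, a duplicate series record is given an expiry at the
	// current last segment number instead of being dropped right away.
	const lastSegment = 42
	walExpiries[SeriesRef(7)] = lastSegment

	// A checkpoint that has processed segments up to 'upTo' keeps the series
	// record while a later segment could still contain its samples.
	keep := func(ref SeriesRef, upTo int) bool {
		expiry, ok := walExpiries[ref]
		return ok && expiry > upTo
	}
	fmt.Println(keep(7, 40), keep(7, 42)) // true false
}
```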
Move a couple of variables inside the scope of a goroutine, to avoid
data races.
Use `zeropool` to reduce garbage and avoid some lint warnings.
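A minimal sketch of the data-race half of this change (a generic example, not the Prometheus code): declaring the variable inside the goroutine gives each goroutine its own copy instead of sharing one declared outside the loop.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	inputs := []int{1, 2, 3}
	results := make([]int, len(inputs))

	var wg sync.WaitGroup
	for i, v := range inputs {
		wg.Add(1)
		go func(i, v int) {
			defer wg.Done()
			// 'square' is scoped to this goroutine, so the goroutines cannot
			// race on it the way they could on a variable declared before the loop.
			square := v * v
			results[i] = square
		}(i, v)
	}
	wg.Wait()
	fmt.Println(results) // [1 4 9]
}
```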
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
For: #14355
This commit updates Prometheus to adopt stdlib's log/slog package in
favor of go-kit/log. As part of converting to slog, several other
related changes were required to get Prometheus working, including:
- removed unused logging util func `RateLimit()`
- forward-ported the util/logging Deduper by implementing a small custom slog.Handler that does the deduping before chaining log calls to the underlying real slog.Logger
- moved some of the JSON file logging functionality to use prometheus/common package functionality
- refactored some of the new JSON file logging for scraping
- changed the promql.QueryLogger interface to swap out logging methods for the relevant slog sugar wrappers
- updated lots of tests that used or replicated custom logging functionality, attempting to keep the logical goal of the tests consistent after the transition
- added a healthy amount of `if logger == nil { $makeLogger }` guards in functions where no logger was provided; the old code using the go-kit/log.Logger interface had several places with nil references when trying to use functions like `With()` to add keyvals on the new *slog.Logger type (see the sketch after this list)
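A hedged sketch of the nil-logger guard pattern mentioned in the last point (illustrative, not a specific function from this commit): methods such as `With()` dereference the receiver, so callers fall back to a usable `*slog.Logger` when none was provided.

```go
package main

import (
	"log/slog"
	"os"
)

// newComponentLogger is a hypothetical helper showing the guard: default to a
// real logger when the caller passed nil, then attach component keyvals.
func newComponentLogger(logger *slog.Logger) *slog.Logger {
	if logger == nil {
		logger = slog.New(slog.NewTextHandler(os.Stderr, nil))
	}
	return logger.With("component", "example")
}

func main() {
	logger := newComponentLogger(nil)
	logger.Info("started", "replicas", 3)
}
```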
Signed-off-by: TJ Hoplock <t.hoplock@gmail.com>
In `mmapCurrentOOOHeadChunk`, check if the number is at the maximum and
drop the data with an error log. This is not expected to happen as the
maximum is over 8 million; that's 8 years of 1 sample every second.
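A hedged, generic sketch of that kind of guard (the constant, value, and names are hypothetical, not the real chunk-reference limits): when the counter would exceed the maximum encodable value, log an error and drop the data instead of overflowing.

```go
package main

import (
	"fmt"
	"log/slog"
)

// maxChunkID stands in for the largest value the chunk reference can encode;
// the value here is only chosen to match "over 8 million".
const maxChunkID = 1<<23 - 1

func mmapChunk(nextChunkID uint64, logger *slog.Logger) (uint64, bool) {
	if nextChunkID > maxChunkID {
		logger.Error("not m-mapping chunk: chunk ID space exhausted",
			"next", nextChunkID, "max", uint64(maxChunkID))
		return 0, false
	}
	return nextChunkID, true
}

func main() {
	id, ok := mmapChunk(5, slog.Default())
	fmt.Println(id, ok) // 5 true
}
```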
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Rename a variable.
Add parameters to the memSeries.insert function.
No effect on how float samples are handled.
Related to #14546
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
We don't use seriesShard during DB initialization, so we can use the
same 8 bytes to store mmMaxTime, saving those 8 bytes for the rest of
the lifetime of the database.
This doesn't affect CPU performance.
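A hedged sketch of the idea (field and method names are illustrative, not the actual memSeries layout): one uint64 field is interpreted as mmMaxTime while the DB is initializing and as the shard hash afterwards.

```go
package main

import "fmt"

type series struct {
	// Holds the m-mapped max time during DB initialization (WAL replay) and
	// the shard hash for the rest of the database's lifetime.
	mmMaxTimeOrShardHash uint64
	dbInitialized        bool
}

func (s *series) mmMaxTime() int64 {
	if s.dbInitialized {
		panic("mmMaxTime is only stored during initialization")
	}
	return int64(s.mmMaxTimeOrShardHash)
}

func (s *series) shardHash() uint64 {
	if !s.dbInitialized {
		panic("shard hash is not stored until initialization finishes")
	}
	return s.mmMaxTimeOrShardHash
}

func main() {
	s := &series{mmMaxTimeOrShardHash: 1700000000000}
	fmt.Println(s.mmMaxTime())

	// After initialization the same 8 bytes are reused for the shard hash.
	s.mmMaxTimeOrShardHash, s.dbInitialized = 0xdeadbeef, true
	fmt.Println(s.shardHash())
}
```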
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* TSDB: Document what needs locking in memSeries
* TSDB: Lock around access to series labels
So we can modify them to reset the symbol-table.
* TSDB: Make label locking conditional on build tag (see the sketch below)
---------
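A hedged sketch of making a lock conditional on a build tag (file names, tag, and package are illustrative): one file supplies a real mutex when the tag is set, the other a no-op stand-in, so callers always write Lock/Unlock.

```go
// labels_lock_on.go
//go:build labelslock

package memseries

import "sync"

// labelsMutex is a real mutex when built with -tags labelslock.
type labelsMutex struct{ sync.Mutex }
```

```go
// labels_lock_off.go
//go:build !labelslock

package memseries

// labelsMutex compiles to nothing when the tag is absent, so the conditional
// locking costs nothing in the default build.
type labelsMutex struct{}

func (labelsMutex) Lock()   {}
func (labelsMutex) Unlock() {}
```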
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
They are used in multiple repos, so common is a better place for them.
Several packages no longer depend on `model/textparse`, e.g.
`storage/remote`.
Also remove `metadata` struct from `api.go`, since it was identical to
a struct in the `metadata` package.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Additionally wrap WBL replay error
Although WBL replay is already wrapped with errLoadWbl,
there are other errors that can happen during a WBL replay.
We should not try to repair the WAL in those cases.
This commit additionally wraps the final error in Head.Init again
with errLoadWbl so that WBL replay errors can be identified properly.
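A minimal, self-contained sketch of the error-wrapping idea (the sentinel name mirrors the commit; the surrounding code is illustrative): wrapping the final replay error with the WBL sentinel lets callers use errors.Is to decide whether a WAL repair should be attempted.

```go
package main

import (
	"errors"
	"fmt"
)

var errLoadWbl = errors.New("WBL load error")

func replayWBL() error {
	return errors.New("unexpected record type") // some inner failure
}

func headInit() error {
	if err := replayWBL(); err != nil {
		// Wrap again so the WBL origin survives further wrapping upstream.
		return fmt.Errorf("%w: %w", errLoadWbl, err)
	}
	return nil
}

func main() {
	err := headInit()
	// Only errors *not* from the WBL should trigger a WAL repair.
	fmt.Println("is WBL error:", errors.Is(err, errLoadWbl))
}
```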
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvzpg@gmail.com>
* Benchmark WBL
Extended the WAL benchmark test with WBL parts too, adding basic cases
for OOO handling: a percentage of series have a percentage of their
samples set as OOO ones.
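A hedged sketch of how such a benchmark can be parameterized (the package, function, and benchmark names are hypothetical, not the actual test code): sub-benchmarks vary what fraction of series carry OOO samples and what fraction of their samples are OOO.

```go
package tsdbbench

import (
	"fmt"
	"testing"
)

// replayWALAndWBL is a stand-in for generating WAL/WBL records with the given
// out-of-order mix and replaying them into a head.
func replayWALAndWBL(oooSeriesPct, oooSamplesPct int) {
	_ = oooSeriesPct * oooSamplesPct
}

func BenchmarkLoadWLs(b *testing.B) {
	for _, oooSeriesPct := range []int{0, 10, 50} {
		for _, oooSamplesPct := range []int{5, 50} {
			name := fmt.Sprintf("oooSeries=%d%%,oooSamples=%d%%", oooSeriesPct, oooSamplesPct)
			b.Run(name, func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					replayWALAndWBL(oooSeriesPct, oooSamplesPct)
				}
			})
		}
	}
}
```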
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>