This test consistently fails, missing ~10 series.
If it doesn't fail on your machine, just increase totalSeries; that's
how race conditions work.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Remove experimental out-of-order native histogram flag
This feature has been available in Prometheus since September 2024
and has no known issues, so we propose removing the flag entirely and
having the feature always on. Note that two settings still need to be
configured for it to work: an out-of-order time window > 0, and
native histograms enabled.
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
* Update CHANGELOG
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
* Keep feature flag with warning
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
* Update CHANGELOG
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
* Update tsdb/head_append.go
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Update CHANGELOG.md
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Update tsdb/head_append.go
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Additional cleanup of comments and test names
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
---------
Signed-off-by: Fiona Liao <fiona.liao@grafana.com>
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
The `:=` causes new variables to be created, which means the outer
slice stays at nil, and new memory is allocated every time round the
loop.
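For illustration, a minimal self-contained sketch of the bug pattern (names made up, not the actual code):

```go
package main

import "fmt"

func expand(i int) ([]int, error) { return []int{i}, nil }

func main() {
	var results []int
	for i := 0; i < 3; i++ {
		// BUG: ':=' declares new 'results' and 'err' variables scoped to
		// the loop body; the outer 'results' stays nil and fresh memory
		// is allocated every time round the loop.
		results, err := expand(i)
		_, _ = results, err
	}
	fmt.Println(results == nil) // true: the outer slice was never assigned

	// Fix: declare 'err' once and assign with '=' so the outer slice is
	// reused and grown across iterations.
	var err error
	var part []int
	for i := 0; i < 3; i++ {
		part, err = expand(i)
		if err != nil {
			panic(err)
		}
		results = append(results, part...)
	}
	fmt.Println(len(results)) // 3
}
```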
Extracted from https://github.com/prometheus/prometheus/pull/16182
Credit to @bwplotka.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Renames the head's deleted map to walExpiries, and creates entries for any
duplicate series records encountered during WAL replay, with the expiry set
to the highest current WAL segment number. Any subsequent WAL
checkpoints will see the duplicate series entry in the walExpiries map, and
keep the series record until the last WAL segment that could contain its
samples is deleted.
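Roughly, the bookkeeping looks like this (a simplified sketch with stand-in types, not the actual implementation):

```go
package main

import "fmt"

// seriesRef stands in for the head's series reference type.
type seriesRef uint64

// walExpiries maps a series ref to the last WAL segment that may still
// contain its samples; entries are created for duplicate series records
// seen during WAL replay, using the highest current segment number.
var walExpiries = map[seriesRef]int{42: 7}

// keepSeriesRecord reports whether a checkpoint must retain the series
// record for a series that is otherwise gone from the Head: keep it
// until the last WAL segment that could contain its samples is deleted.
func keepSeriesRecord(ref seriesRef, lastDeletedSegment int) bool {
	expiry, ok := walExpiries[ref]
	return ok && expiry > lastDeletedSegment
}

func main() {
	fmt.Println(keepSeriesRecord(42, 5)) // true: segment 7 may still hold samples
	fmt.Println(keepSeriesRecord(42, 7)) // false: its last segment is gone
}
```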
Other considerations:
* WBL: series records aren't written to the WBL, so there are no duplicates to deal with.
* Agent mode: has its own WAL replay logic that handles duplicate series records differently, and is outside the scope of this PR.
Rationales:
* metadata-wal-records might be deprecated and replaced going forward: https://github.com/prometheus/prometheus/issues/15911
* PRW 2.0 works without metadata just fine (although it sends untyped metrics as expected).
Signed-off-by: bwplotka <bwplotka@gmail.com>
Around Mimir compactions, we see logging in ShardedPostings make massive allocations, driving GC up to 50% of CPU.
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
'defer' only runs at the end of the function, so explicitly close the
querier after we finish with it. Also check it didn't error.
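Sketched with io.ReadCloser standing in for the querier (illustrative, not the actual code):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// useQuerier closes explicitly as soon as we're done, and checks the
// error instead of ignoring it, rather than leaving a 'defer' to run
// only when the enclosing function returns.
func useQuerier(q io.ReadCloser) error {
	if _, err := io.Copy(io.Discard, q); err != nil {
		q.Close() // best-effort close on the error path
		return err
	}
	if err := q.Close(); err != nil {
		return fmt.Errorf("closing querier: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(useQuerier(io.NopCloser(strings.NewReader("data"))))
}
```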
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
'defer' only runs at the end of the function, so introduce some more
functions / move the start, so that 'defer' can run at the end of the
logical block.
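A minimal sketch of the pattern (illustrative, not the actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// sumLocked wraps each logical block in its own function literal, so the
// deferred Unlock runs at the end of that block instead of piling up
// until sumLocked itself returns.
func sumLocked(mu *sync.Mutex, items []int) int {
	total := 0
	for _, it := range items {
		total += func() int {
			mu.Lock()
			defer mu.Unlock() // runs when this closure returns
			return it * 2
		}()
	}
	return total
}

func main() {
	var mu sync.Mutex
	fmt.Println(sumLocked(&mu, []int{1, 2, 3})) // 12
}
```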
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Compact() is an exported function that handles locking on its own, so we shouldn't hold a lock around the call.
Signed-off-by: Lukasz Mierzwa <lukasz@cloudflare.com>
We don't hold the db.mtx lock when reading db.blocks here, so we need a read lock around this loop.
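For illustration, a simplified stand-in of the pattern:

```go
package main

import (
	"fmt"
	"sync"
)

type block struct{ name string }

type db struct {
	mtx    sync.RWMutex
	blocks []*block
}

// blockNames holds the read lock for the whole loop, since another
// goroutine may swap out db.blocks while we iterate.
func (d *db) blockNames() []string {
	d.mtx.RLock()
	defer d.mtx.RUnlock()
	names := make([]string, 0, len(d.blocks))
	for _, b := range d.blocks {
		names = append(names, b.name)
	}
	return names
}

func main() {
	d := &db{blocks: []*block{{name: "01HXYZ"}}}
	fmt.Println(d.blockNames())
}
```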
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
This test ensures that running db.reloadBlocks() and db.CleanTombstones() at the same time doesn't race.
The problem is that CleanTombstones() is a public method while reloadBlocks() is internal.
CleanTombstones() takes the db.cmtx lock, while reloadBlocks() is not protected by any locks at all; it expects the public method through which it was called to do the locking.
So a race between these two is not unexpected, and we shouldn't really be testing this.
db.cmtx ensures that no other function can be modifying the list of open blocks, so the scenario tested here cannot happen.
If it did happen, it would only be because some other method doesn't acquire the db.cmtx lock, something this test cannot detect.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
This partially reverts ae3d392aa9.
ae3d392aa9 added a call to db.mtx.Lock() that lasts for the entire duration of db.reloadBlocks();
previously db.mtx was locked only during the critical part of db.reloadBlocks().
The motivation was to protect against races:
9e0351e161 (r555699794)
The 'reloads' being mentioned are (I think) reloadBlocks() calls, rather than db.reload() or other methods.
TestTombstoneCleanRetentionLimitsRace was added to catch this, but I was never able to get any error out of it, even after disabling all calls to db.mtx in reloadBlocks() and CleanTombstones().
To make things more complicated, CleanTombstones() itself calls reloadBlocks(), so it seems that the real issue is that we might have concurrent calls to reloadBlocks().
The problem with this change is that db.reloadBlocks() can take a very long time, because it might need to load very large blocks from disk, which is slow.
While db.mtx is locked, a large chunk of the db is locked, including queries, since the db.mtx read lock is needed for a db.Querier() call.
One of the issues this manifests as is a gap in all metrics and blocked queries just after a large block compaction happens.
When compaction merges multiple day-or-more blocks into a week-or-more block, it creates a single very big block.
After that block is written it needs to be loaded, and that seems to take many seconds (30-45), during which mtx is held and everything is blocked.
Turns out that there is another lock that is more fine grained and aimed at this specific use case:
// cmtx ensures that compactions and deletions don't run simultaneously.
cmtx sync.Mutex
All calls to reloadBlocks() are wrapped inside the cmtx lock. The only exception is db.reload(), which this change fixes.
We can't take the cmtx lock inside reloadBlocks() itself because it's called by a number of functions, some of which already hold cmtx.
Looking at the code, I think it is sufficient to hold cmtx and skip the reloadBlocks()-wide mtx lock.
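A simplified sketch of the resulting locking shape (stand-in types; the real db.reload() does more work than this):

```go
package main

import (
	"fmt"
	"sync"
)

type DB struct {
	mtx  sync.RWMutex // guards in-memory state; also taken (read) by Querier()
	cmtx sync.Mutex   // ensures compactions and deletions don't run simultaneously
}

// reload now takes cmtx, serializing it against other reloadBlocks()
// callers, instead of relying on a reloadBlocks()-wide mtx lock.
func (db *DB) reload() error {
	db.cmtx.Lock()
	defer db.cmtx.Unlock()
	return db.reloadBlocks()
}

func (db *DB) reloadBlocks() error {
	// Slow work (opening large blocks from disk) happens without mtx...

	// ...and mtx is held only for the short critical section that swaps
	// in the new block list, so queries aren't blocked for 30-45s.
	db.mtx.Lock()
	defer db.mtx.Unlock()
	return nil
}

func main() {
	fmt.Println(new(DB).reload())
}
```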
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
Fix issues raised by staticcheck
We are not enabling staticcheck explicitly, though, because it has too many false positives.
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
When creating dummy data for benchmarks, call `Commit()` periodically to
avoid growing the appender to enormous size.
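A minimal sketch of the pattern, with a stand-in for storage.Appender:

```go
package main

import "fmt"

// appender stands in for storage.Appender; only the pending buffer and
// Commit matter for this sketch.
type appender struct{ pending int }

func (a *appender) append(v float64) { a.pending++ }
func (a *appender) commit() error    { a.pending = 0; return nil }

func main() {
	const total, commitEvery = 1_000_000, 10_000
	app := &appender{}
	for i := 0; i < total; i++ {
		app.append(float64(i))
		// Commit periodically so the appender's pending data doesn't
		// grow to enormous size while generating dummy series.
		if (i+1)%commitEvery == 0 {
			if err := app.commit(); err != nil {
				panic(err)
			}
		}
	}
	if err := app.commit(); err != nil { // flush the remainder
		panic(err)
	}
	fmt.Println("appended", total, "samples")
}
```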
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Exported the CheckpointPrefix constant to be used in other packages.
Updated references to the constant in db.go and checkpoint.go files.
This change improves code readability and maintainability.
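The shape of the change, sketched (the constant's value is assumed here for illustration):

```go
package wlog

// CheckpointPrefix is the prefix of checkpoint directory names inside
// the WAL directory. Exporting it lets other packages (e.g. tsdb/db.go)
// reference it instead of hard-coding the string.
const CheckpointPrefix = "checkpoint."
```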
Signed-off-by: johncming <johncming@yahoo.com>
Co-authored-by: johncming <conjohn668@gmail.com>
This enables it to take advantage of a more compact data structure
since all postings are known to be `*ListPostings`.
Remove the `Get` member, which was not used for anything else, and fix
up the tests.
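For illustration, why the concrete type is more compact (stand-in types, not the actual index package):

```go
package main

import (
	"fmt"
	"unsafe"
)

// Stand-ins for the index package types, for illustration only.
type Postings interface{ Next() bool }
type ListPostings struct{ list []uint64 }

func main() {
	var iface Postings
	var concrete *ListPostings
	// An interface value is two words (type + data); a concrete pointer
	// is one, so []*ListPostings is roughly half the size of the
	// equivalent []Postings, and needs no per-element type assertions.
	fmt.Println(unsafe.Sizeof(iface))    // 16 on 64-bit platforms
	fmt.Println(unsafe.Sizeof(concrete)) // 8 on 64-bit platforms
}
```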
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Now we can call it with more specific types, which is more efficient
than making everything go through the `Postings` interface.
Benchmark the concrete type.
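An illustrative sketch of the idea with stand-in types, using a type parameter so concrete arguments avoid the interface:

```go
package main

import "fmt"

// Postings and ListPostings are stand-ins for the index package types.
type Postings interface{ Next() bool }

type ListPostings struct{ i, n int }

func (p *ListPostings) Next() bool { p.i++; return p.i <= p.n }

// merge is generic over the Postings implementation, so a call with
// *ListPostings arguments doesn't box each one in the interface.
func merge[T Postings](its ...T) int {
	total := 0
	for _, it := range its {
		for it.Next() {
			total++
		}
	}
	return total
}

func main() {
	fmt.Println(merge(&ListPostings{n: 2}, &ListPostings{n: 3})) // 5
}
```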
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>