Moving the debouncing of the search field to the parent component and then
memoizing the ScrapePoolsList component prevents a lot of superfluous
re-renders of the entire scrape pools list that previously got triggered
immediately when you typed in the search box or even just collapsed a pool.
(While the computation of what data to show was already memoized in the
ScrapePoolList component, the component itself still had to re-render a lot
with the same data.)
Discovered this problem + verified fix using react-scan.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Manager.ApplyConfig() uses multiple locks:
- Provider.mu
- Manager.targetsMtx
Manager.cleaner() uses the same locks but in the opposite order:
- First it locks Manager.targetsMtx
- Then it locks Provider.mu
I've seen a few strange cases of Prometheus hanging on shutdown and never completing it.
From a few traces I was given, it appears that by that point only discovery.Manager and notifier.Manager are still running.
From that trace it also seems like they are stuck on a lock from two functions:
- cleaner() waits on an RLock()
- ApplyConfig waits on a Lock()
I cannot reproduce it, but I suspect a race between these locks. Imagine this scenario:
- Manager.ApplyConfig() is called
- Manager.ApplyConfig() calls Provider.mu.Lock()
- at the same time cleaner() is called for the same Provider instance, and it calls Manager.targetsMtx.Lock()
- Manager.ApplyConfig() now calls Manager.targetsMtx.Lock(), but that lock is already held by cleaner(), so ApplyConfig() hangs there
- at the same time cleaner() now wants to call Provider.mu.RLock(), but that lock is already held by Manager.ApplyConfig()
- we end up with both functions locking each other out, with no way to break the deadlock
Re-order lock calls to try to avoid this scenario.
I tried writing a test case for it but couldn't hit this issue.
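A minimal sketch of the two lock orders (simplified stand-in types, not the actual discovery code):

    package main

    import "sync"

    type manager struct {
        providerMu sync.RWMutex // stands in for Provider.mu
        targetsMtx sync.Mutex   // stands in for Manager.targetsMtx
    }

    // applyConfig takes providerMu first, then targetsMtx.
    func (m *manager) applyConfig() {
        m.providerMu.Lock()
        defer m.providerMu.Unlock()
        m.targetsMtx.Lock() // blocks while cleaner holds targetsMtx...
        m.targetsMtx.Unlock()
    }

    // Before the fix cleaner took the same locks in the opposite order:
    //
    //     m.targetsMtx.Lock()
    //     m.providerMu.RLock() // ...while cleaner blocks here: deadlock.
    //
    // After the fix it acquires them in the same order as applyConfig:
    func (m *manager) cleaner() {
        m.providerMu.RLock()
        defer m.providerMu.RUnlock()
        m.targetsMtx.Lock()
        m.targetsMtx.Unlock()
    }

    func main() {
        m := &manager{}
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); m.applyConfig() }()
        go func() { defer wg.Done(); m.cleaner() }()
        wg.Wait() // terminates now; with the old order it could hang forever
    }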
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
The custom values are the "le" bucket boundaries of native histograms
with custom buckets. They are never modified, so it is OK not to copy
them when iterating a chunk; just reference them.
If we ever add a function that modifies the custom values ('trim', for
example), that function will have to make a copy on write.
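A rough sketch of the ownership rule (hypothetical names; the real types live in the histogram model and tsdb/chunkenc):

    package sketch

    // CustomValues are the immutable "le" boundaries; aliasing is safe.
    type floatHistogram struct {
        CustomValues []float64
    }

    // view models what a chunk iterator does: it references the
    // boundaries instead of copying them.
    func (h *floatHistogram) view() *floatHistogram {
        return &floatHistogram{CustomValues: h.CustomValues}
    }

    // trim is the hypothetical mutating function mentioned above: it
    // must copy on write so aliased readers keep the original slice.
    func (h *floatHistogram) trim(n int) {
        cv := make([]float64, n)
        copy(cv, h.CustomValues[:n])
        h.CustomValues = cv // exclusively owned now, safe to modify
    }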
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Copy the benchmark for native histograms with exponential buckets and
adapt it to native histograms with custom buckets.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
This fixes the documentation update done in https://github.com/prometheus/prometheus/pull/16066 by setting the correct name for a configuration value.
Signed-off-by: Martin Danko <46035688+zepellin@users.noreply.github.com>
This is failing after 26088d01b9518b3315805186dcee534b986bf79f; it seems that the currently tested position is incorrect, so fix it.
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
The position range of a nested aggregate expression was wrong: for the
expression "(sum(foo))" the position of "sum(foo)" should be 1-9, but
the parser could not decide the end of the expression at pos 9; instead
it read ahead to pos 10 and only then emitted the aggregate. But we only
kept the last closing position (10) and wrote that into the aggregate.
The reason is that the parser cannot know from "(sum(foo)" alone whether
the aggregate is finished. It could be finished, as in "(sum(foo))", but
it could equally continue with a group modifier, as in
"(sum(foo) by (bar))".
The fix is to track ")" tokens in a stack in addition to the lastClosing
position, which is overloaded with other things like offset number
tracking.
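A toy illustration of the fix (not the real parser code): a single lastClosing value is overwritten by the outer ")" before the aggregate is emitted, while a stack keeps both positions:

    package main

    import "fmt"

    func main() {
        input := "(sum(foo))"
        var closing []int // stack of ")" end positions (exclusive)
        for i, r := range input {
            if r == ')' {
                closing = append(closing, i+1)
            }
        }
        // By the time the aggregate is emitted the parser has read past
        // the outer ")", so lastClosing already holds 10. Popping the
        // stack instead recovers the inner closing position, 9.
        lastClosing := closing[len(closing)-1]  // 10: wrong end for sum(foo)
        innerClosing := closing[len(closing)-2] // 9: correct end
        fmt.Printf("wrong: 1-%d, fixed: 1-%d\n", lastClosing, innerClosing)
    }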
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* tsdb/errors.MultiError: implement Unwrap
The multierror was hiding some errors in Mimir. I also added unit tests because I had them handy from a similar change Yuri and I did in XXX some time ago.
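A minimal sketch of the change (simplified from the real tsdb/errors type): implementing Unwrap() []error lets errors.Is and errors.As see through the multi-error (Go 1.20+ multi-error semantics).

    package main

    import (
        "errors"
        "fmt"
        "io"
        "strings"
    )

    type multiError []error

    func (m multiError) Error() string {
        msgs := make([]string, 0, len(m))
        for _, err := range m {
            msgs = append(msgs, err.Error())
        }
        return strings.Join(msgs, "; ")
    }

    // Unwrap exposes the wrapped errors to the errors package.
    func (m multiError) Unwrap() []error { return m }

    func main() {
        err := multiError{io.EOF, errors.New("other")}
        fmt.Println(errors.Is(err, io.EOF)) // true; false without Unwrap
    }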
---------
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>
Addresses https://github.com/prometheus/prometheus/issues/16371
This will help with migrating to native histograms with `convert_classic_histograms_to_nhcb`, since users may still need to keep the classic histograms during a migration.
Signed-off-by: chardch <otwordsne@gmail.com>
We should use the configured GOGC value, or Prometheus' default of 75%,
while initializing and loading the WAL.
Since the Go default is 100%, most Prometheus users would experience
higher memory usage before the value is configured.
Also: move Go runtime params earlier in initialization.
E.g. if a module starting up looks at GOMAXPROCS to size something, we
need to have set it already.
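A minimal sketch of the intended ordering (values hard-coded for illustration):

    package main

    import (
        "runtime"
        "runtime/debug"
    )

    func main() {
        // Apply GOGC first: the configured value, or Prometheus' default
        // of 75, instead of the Go default of 100, so that WAL replay
        // already runs with it.
        debug.SetGCPercent(75)

        // Set GOMAXPROCS before any module reads it to size itself.
        runtime.GOMAXPROCS(runtime.NumCPU())

        // ... only now initialize storage, replay the WAL, start modules ...
    }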
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Instead of using varint to encode the size of each label, use a single
byte for size 0-254, or a flag value of 255 followed by the size in
3 bytes little-endian.
This reduces the amount of code, and also the number of branches in
commonly-executed code, so it runs faster.
The maximum allowed label name or value length is now 2^24 or 16MB.
Memory used by labels changes as follows:
* Labels from 0 to 127 bytes length: same
* From 128 to 254: 1 byte less
* From 255 to 16383: 2 bytes more
* From 16384 to 2MB: 1 byte more
* From 2MB to 16MB: same
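A sketch of the size encoding (not the exact Labels code):

    package main

    import "fmt"

    // appendSize writes sizes 0-254 as a single byte; larger sizes get
    // flag byte 255 followed by the size in 3 bytes little-endian.
    func appendSize(b []byte, n int) []byte {
        if n >= 1<<24 {
            panic("label length exceeds maximum")
        }
        if n < 255 {
            return append(b, byte(n))
        }
        return append(b, 255, byte(n), byte(n>>8), byte(n>>16))
    }

    // decodeSize returns the size and how many bytes it occupied.
    func decodeSize(b []byte) (size, length int) {
        if b[0] < 255 {
            return int(b[0]), 1
        }
        return int(b[1]) | int(b[2])<<8 | int(b[3])<<16, 4
    }

    func main() {
        fmt.Println(decodeSize(appendSize(nil, 300))) // 300 4
    }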
Labels: panic on string too long.
Slightly more user-friendly than encoding bad data and finding out when
we decode.
Clarify that Labels.Bytes() encoding can change.
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
State explicitly what kind of timestamps are expected for the
--min-time and --max-time options of promtool tsdb commands.
This is especially important for the dump-openmetrics command as users
could otherwise mistakenly think it would be in seconds, like the
OpenMetrics timestamps themselves.
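For example (hypothetical values and data directory; both options take Unix timestamps in milliseconds):

    promtool tsdb dump-openmetrics --min-time=1700000000000 --max-time=1700003600000 ./data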
Signed-off-by: Nicolas Peugnet <nicolas.peugnet@lip6.fr>