* close sync ch after sender() is stopped
* break if chan is closed
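For context, a minimal self-contained sketch of the pattern these bullets describe, with illustrative names only: the sender closes the channel when it stops, and the receiver uses the two-value receive to break out once the channel is closed.
```
package main

import "fmt"

func main() {
	syncCh := make(chan int)
	go func() {
		// sender(): close the sync channel once the sender is done.
		defer close(syncCh)
		for i := 0; i < 3; i++ {
			syncCh <- i
		}
	}()
	for {
		v, ok := <-syncCh
		if !ok {
			break // channel is closed: stop receiving
		}
		fmt.Println(v)
	}
}
```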
Signed-off-by: liyandi <littlepangdi@163.com>
Co-authored-by: liyandi <liyandi@xiaomi.com>
If we call ApplyConfig() at the same time the manager is being stopped, we might end up hanging forever.
This is because ApplyConfig() will try to cancel obsolete providers and wait until they are cancelled.
It's done by setting a done() function that calls Done() on a sync.WaitGroup:
```
if len(prov.newSubs) == 0 {
	wg.Add(1)
	prov.done = func() {
		wg.Done()
	}
}
```
then calling prov.cancel() and finally waiting until all providers have run their done() function,
by blocking on a wg.Wait() call.
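A compact, self-contained sketch of that stopping sequence, with stand-in types (this mirrors the description above, not the exact source):
```
package discovery

import (
	"context"
	"sync"
)

// Stand-ins for the real types, just enough for the sketch to compile.
type Provider struct {
	newSubs map[string]struct{}
	cancel  context.CancelFunc
	done    func()
}

type Manager struct{ providers []*Provider }

// stopObsolete is a hypothetical name for the stopping step inside ApplyConfig().
func (m *Manager) stopObsolete() {
	var wg sync.WaitGroup
	for _, prov := range m.providers {
		if len(prov.newSubs) == 0 { // provider is obsolete
			wg.Add(1)
			prov.done = func() { wg.Done() }
			prov.cancel() // ends updater(), which runs cleaner(), which calls done()
		}
	}
	wg.Wait() // hangs forever if no goroutine is left to call done()
}
```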
For each provider there is a goroutine created by calling Manager.startProvider(*Provider):
```
func (m *Manager) startProvider(ctx context.Context, p *Provider) {
	m.logger.Debug("Starting provider", "provider", p.name, "subs", fmt.Sprintf("%v", p.subs))
	ctx, cancel := context.WithCancel(ctx)
	updates := make(chan []*targetgroup.Group)
	p.mu.Lock()
	p.cancel = cancel
	p.mu.Unlock()
	go p.d.Run(ctx, updates)
	go m.updater(ctx, p, updates)
}
```
It creates a context that can be cancelled and that cancel function becomes prov.cancel. This is what ApplyConfig will call.
If we look at the body of the updater() method:
```
func (m *Manager) updater(ctx context.Context, p *Provider, updates chan []*targetgroup.Group) {
	// Ensure targets from this provider are cleaned up.
	defer m.cleaner(p)
	for {
		select {
		case <-ctx.Done():
			return
		[...]
```
we can see that it will exit if that context is cancelled and that will trigger a call to Manager.cleaner().
That cleaner() is where done() is called.
So ApplyConfig() -> calls cancel() -> causes cleaner() to be executed -> calls done().
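A hedged sketch of the relevant part of cleaner(), with stand-in types (not the exact source): after cleaning up the provider's targets it runs done(), which is the only thing that unblocks the wg.Wait() in ApplyConfig().
```
package discovery

// Stand-ins for the real types.
type Provider struct{ done func() }

type Manager struct{}

func (m *Manager) cleaner(p *Provider) {
	// ... remove this provider's target groups from the manager ...
	if p.done != nil {
		p.done() // ApplyConfig's wg.Done() runs here
	}
}
```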
cancel() is also called from the cancelDiscoverers() method, which Manager.Run() calls when the Manager is stopping:
```
func (m *Manager) Run() error {
	go m.sender()
	<-m.ctx.Done()
	m.cancelDiscoverers()
	return m.ctx.Err()
}
}
```
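A hedged sketch of cancelDiscoverers() based on the description above, with stand-in types (not the exact source): it calls cancel() on every provider, like ApplyConfig() does for obsolete ones, but without registering any done() callback.
```
package discovery

import (
	"context"
	"sync"
)

// Stand-ins for the real types.
type Provider struct{ cancel context.CancelFunc }

type Manager struct {
	mtx       sync.RWMutex
	providers []*Provider
}

func (m *Manager) cancelDiscoverers() {
	m.mtx.RLock()
	defer m.mtx.RUnlock()
	for _, p := range m.providers {
		if p.cancel != nil {
			p.cancel()
		}
	}
}
```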
The problem is that if we call ApplyConfig() and stop the manager at the same time, we might end up with:
- We call Manager.ApplyConfig()
- We stop the Manager
- Manager.cancelDiscoverers() is called
- Provider.cancel() is called for every Provider
- cancel() causes provider context to be cancelled which terminates updater() for given Provider
- cancelling context causes cleaner() method to be called for given Provider
- cleaner() calls done() and exits
- Provider is considered stopped at this point; there is no goroutine running that will call done() anymore
- ApplyConfig iterates over the providers and decides that one is obsolete and must be stopped
- It sets a custom done() function body with a WaitGroup.Done() call in it
- Then ApplyConfig waits until all Providers run done()
- But they are all stopped and no done() will be run
- We wait forever
This only happens if cancelDiscoverers() runs before ApplyConfig(): if ApplyConfig() runs first, done() will be called;
if cancelDiscoverers() runs first, it stops the updater() instances, so done() won't be called anymore.
Part of the problem is that there is no distinction between running and stopped providers. There is a Provider.IsStarted() method
that returns a bool based on the value of the cancel function, but ApplyConfig doesn't check it.
The second problem is that although there is a mutex on the Provider, it's not used much in the code, so two goroutines can try to read and/or write
provider.cancel and/or provider.done at the same time, making a race all the more likely.
The easiest way to fix it is to check inside ApplyConfig() whether the provider is started, so we don't try to stop a provider that's already stopped.
For that we need to mark the provider as stopped after cancel() is called, by setting cancel to nil.
This also needs better lock usage, to avoid different parts of the code trying to set cancel and done at the same time.
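A hedged sketch of that fix, with stand-in types and a hypothetical stopProvider() helper (not the exact patch): stop only providers that still have a cancel function, and nil it out under the lock so nobody tries to stop them again.
```
package discovery

import (
	"context"
	"sync"
)

// Stand-ins for the real types.
type Provider struct {
	mu     sync.RWMutex
	cancel context.CancelFunc
	done   func()
}

// IsStarted reports whether the provider still has a cancel function.
func (p *Provider) IsStarted() bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.cancel != nil
}

// stopProvider is a hypothetical helper for the stopping step in ApplyConfig().
func stopProvider(wg *sync.WaitGroup, p *Provider) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.cancel == nil {
		return // already stopped: registering done() here would hang wg.Wait()
	}
	wg.Add(1)
	p.done = func() { wg.Done() }
	p.cancel()
	p.cancel = nil // mark the provider as stopped
}
```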
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
Doing a config reload that needs to stop some providers, while Prometheus receives a SIGTERM at the same time, can sometimes hang:
```
1: sync.WaitGroup.Wait [83 minutes] [Created by run.(*Group).Run in goroutine 1 @ group.go:37]
    sync      sema.go:110       runtime_SemacquireWaitGroup(*uint32(#166))
    sync      waitgroup.go:118  (*WaitGroup).Wait(*WaitGroup(#23))
    discovery manager.go:276    (*Manager).ApplyConfig(#23, #167)
    main      main.go:964       main.func5(#120)
    main      main.go:1505      reloadConfig({#183, 0x1b}, 1, #40, #43, #50, {#31, 0xa, 0})
    main      main.go:1182      main.func22()
    run       group.go:38       (*Group).Run.func1(*Group(#26), #51)
```
Add a test for it.
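A hedged sketch of the shape such a test can take; applyAndStop is a placeholder for "call ApplyConfig() while the manager is being stopped", not the actual test code.
```
package discovery

import (
	"testing"
	"time"
)

func testNoHangOnConcurrentReloadAndShutdown(t *testing.T, applyAndStop func()) {
	done := make(chan struct{})
	go func() {
		applyAndStop() // reload config while shutting the manager down
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(5 * time.Second):
		t.Fatal("ApplyConfig hung while the manager was shutting down")
	}
}
```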
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
* refactor(endpointslice): use cache.Indexer to index endpointslices by LabelServiceName so we don't have to iterate over all endpoint objects (see the indexer sketch after this commit message).
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* check the type and error early, and add a 'TestEndpointSliceDiscoveryWithUnrelatedServiceUpdate' unit test as a regression test
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* make service indexer namespaced
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* remove unneeded test func
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* Apply suggestions from code review
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
---------
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
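A hedged sketch of the indexer described in the commit above; the index name and function are assumptions, not the exact patch.
```
package endpointslice

import (
	"fmt"

	v1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/tools/cache"
)

// Hypothetical index name.
const serviceIndexName = "service"

// serviceIndexFunc maps an EndpointSlice to a namespaced service key so a
// service update can look up its slices via the indexer instead of iterating
// over every object in the store.
func serviceIndexFunc(obj interface{}) ([]string, error) {
	es, ok := obj.(*v1.EndpointSlice)
	if !ok {
		// Check the type and error early.
		return nil, fmt.Errorf("object is not an EndpointSlice: %T", obj)
	}
	svc, ok := es.Labels[v1.LabelServiceName]
	if !ok {
		return nil, nil // slice not owned by a service: nothing to index
	}
	// Namespaced key, per the "make service indexer namespaced" commit.
	return []string{es.Namespace + "/" + svc}, nil
}

// The indexer would be registered on the informer, e.g. via
// informer.AddIndexers(cache.Indexers{serviceIndexName: serviceIndexFunc}).
var _ = cache.Indexers{serviceIndexName: serviceIndexFunc}
```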
golangci-lint v2 now has "formatters" that can be configured.
Add `golangci-lint fmt` to the `make format` target in Makefile.common.
* Enable goimports formatter.
Signed-off-by: SuperQ <superq@gmail.com>
Make sure the order of locks is always the same in all functions. In ApplyConfig() we have m.targetsMtx.Lock() after the provider is locked, so replicate the same order in allGroups().
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
Manager.ApplyConfig() uses multiple locks:
- Provider.mu
- Manager.targetsMtx
Manager.cleaner() uses the same locks but in the opposite order:
- First it locks Manager.targetsMtx
- Then it locks Provider.mu
I've seen a few strange cases of Prometheus hanging on shutdown and never completing that shutdown.
From a few traces I was given, it appears that while Prometheus is still running, only discovery.Manager and notifier.Manager are still running.
From that trace it also seems like they are stuck on a lock from two functions:
- cleaner waits on a RLock()
- ApplyConfig waits on a Lock()
I cannot reproduce it but I suspect this is a race between locks. Imagine this scenario:
- Manager.ApplyConfig() is called
- Manager.ApplyConfig locks Provider.mu.Lock()
- at the same time cleaner() is called on the same Provider instance and it calls Manager.targetsMtx.Lock()
- Manager.ApplyConfig() now calls Manager.targetsMtx.Lock(), but that lock is already held by the cleaner() function, so ApplyConfig() hangs there
- at the same time cleaner() now wants to lock Provider.mu.RLock(), but that lock is already held by Manager.ApplyConfig()
- we end up with both functions locking each other out without any way to break that deadlock
Re-order lock calls to try to avoid this scenario.
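A hedged sketch of cleaner() after the re-ordering, with stand-in types (not the exact patch): Provider.mu is taken before Manager.targetsMtx, matching the order ApplyConfig() uses, so the two functions can no longer lock each other out.
```
package discovery

import "sync"

// Stand-ins for the real types.
type poolKey struct{ setName, provider string }

type Provider struct {
	mu   sync.RWMutex
	name string
	subs map[string]struct{}
	done func()
}

type Manager struct {
	targetsMtx sync.Mutex
	targets    map[poolKey]struct{}
}

func (m *Manager) cleaner(p *Provider) {
	p.mu.RLock() // lock the provider first, as ApplyConfig() does
	m.targetsMtx.Lock()
	for s := range p.subs {
		delete(m.targets, poolKey{setName: s, provider: p.name})
	}
	m.targetsMtx.Unlock()
	p.mu.RUnlock()
	if p.done != nil {
		p.done()
	}
}
```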
I tried writing a test case for it but couldn't hit this issue.
Signed-off-by: Lukasz Mierzwa <l.mierzwa@gmail.com>
Allows filtering the servers when sending the listing request to the API. This feature is only available when using `role=hcloud`.
See https://docs.hetzner.cloud/#label-selector for details on how to use the label selector.
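For reference, a hedged sketch of the underlying hcloud-go call with a label selector; the selector value and token are placeholders.
```
package main

import (
	"context"
	"fmt"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

func main() {
	client := hcloud.NewClient(hcloud.WithToken("HCLOUD_TOKEN"))
	// Send the label selector with the listing request so the API only
	// returns matching servers, instead of filtering client-side.
	servers, err := client.Server.AllWithOpts(context.Background(), hcloud.ServerListOpts{
		ListOpts: hcloud.ListOpts{LabelSelector: "env=prod"},
	})
	if err != nil {
		fmt.Println("listing servers failed:", err)
		return
	}
	fmt.Println("matching servers:", len(servers))
}
```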
Signed-off-by: Jonas Lammler <jonas.lammler@hetzner-cloud.de>
* discovery: a change to a service with the same name but in another namespace won't enqueue the EndpointSlice
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* Update discovery/kubernetes/endpointslice.go
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* Update endpointslice.go
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
---------
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
* refactor: simplify error handling and remove redundant checks
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* Add the comment for return of reloading blocks failure
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
* Add the comment for return of reloading blocks failure
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
---------
Signed-off-by: Ryan Wu <rongjun0821@gmail.com>
Co-authored-by: Ayoub Mrini <ayoubmrini424@gmail.com>