From 90efe8a87798e37d771c199d92f2ac8b350b24ce Mon Sep 17 00:00:00 2001
From: Willy Tarreau
Date: Fri, 12 Apr 2024 10:02:26 +0200
Subject: [PATCH] CLEANUP: stick-tables: always respect the to_batch limit
 when trashing

When shards support was added to tables in commit 1a088da7c ("MAJOR:
stktable: split the keys across multiple shards to reduce contention"),
the condition that stops eliminating entries once the batch size is
reached relied on a pre-decrement of the max_search counter. However,
control then returns to the outer loop, which does not check this
counter; the check only happens again when entering the next shard, by
which time max_search has gone further negative. The loop does properly
stop in the end, but at first glance this looks like an int overflow
(which it is not).

Let's make sure the outer loop stops on this condition so that we don't
continue searching when the limit is reached.
---
 src/stick_table.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/stick_table.c b/src/stick_table.c
index 0a0012978..c007cda78 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -336,6 +336,9 @@ int stktable_trash_oldest(struct stktable *t, int to_batch)
 
 		HA_RWLOCK_WRUNLOCK(STK_TABLE_LOCK, &t->shards[shard].sh_lock);
 
+		if (max_search <= 0)
+			break;
+
 		shard = (shard + 1) % CONFIG_HAP_TBL_BUCKETS;
 		if (!shard)
 			break;
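
For illustration only, below is a small standalone C sketch of the control
flow the commit message describes; it is not the actual HAProxy source.
The scan_one_shard() helper, NB_SHARDS and the max_search = to_batch * 2
budget are hypothetical stand-ins: an inner scan pre-decrements a shared
search budget while evicting entries, and the outer shard-walking loop,
with this patch, also checks that budget before moving on to the next
shard. Without the outer check, a budget exhausted mid-shard is only
noticed after entering the next shard, by which point the counter has
gone further negative, which is what looked like an integer overflow.

#include <stdio.h>

#define NB_SHARDS 4   /* stand-in for CONFIG_HAP_TBL_BUCKETS */

/* Hypothetical stand-in for scanning one shard: each examined node costs
 * one unit of the shared search budget, and only some nodes are actually
 * evictable, so the budget can run out before the batch is complete.
 */
static int scan_one_shard(int entries_per_shard, int *max_search,
                          int *batched, int to_batch)
{
	int i;

	for (i = 0; i < entries_per_shard; i++) {
		if (--(*max_search) < 0)
			return 0;              /* budget exhausted mid-shard */

		/* pretend only every third node is expired/evictable */
		if (i % 3 == 0) {
			(*batched)++;
			if (*batched >= to_batch)
				return 0;      /* batch complete */
		}
	}
	return 1;
}

int main(void)
{
	int to_batch = 10, batched = 0;
	int max_search = to_batch * 2;         /* hypothetical search budget */
	int shard = 0;

	while (batched < to_batch) {
		scan_one_shard(50, &max_search, &batched, to_batch);

		/* the point of the patch: stop walking shards once the
		 * budget is spent instead of re-entering the next shard
		 * with an already-negative counter
		 */
		if (max_search <= 0)
			break;

		shard = (shard + 1) % NB_SHARDS;
		if (!shard)
			break;                 /* visited every shard once */
	}
	printf("trashed %d entries, budget left %d\n", batched, max_search);
	return 0;
}

Running this sketch trashes fewer entries than to_batch because the budget
runs out first; the outer-loop check then ends the walk immediately rather
than letting the next shard drive max_search further below zero.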