MINOR: memprof: attempt different retry slots for different hashes on collision

When two pointers hash to the same memprofile bin, we currently retry
with the next bin until we find a spare one or we reach the limit of 16.
Olivier suggested trying a different step for different pointers so as
to limit the number of bins visited in such a case, so let's split the
pointer hash calculation so that we keep the raw hash before reduction
and use its lowest bits as the retry step. We force the lowest bit to 1
so that the step is always odd: an even step shares a factor with the
power-of-two table size and would oscillate between only a few positions.

Quick tests with h1+h2 requests show that for ~744 distinct entries, we
used to have 1.17 retries per lookup and now have 0.6, roughly halving
the cost of hash collisions. A heavier workload that used to produce 920
entries with 2.01 retries per lookup now reaches 966 entries (94.3% usage
vs 89.8% before) with only 1.44 retries per lookup.

This should be safe to backport, but depends on this previous commit:

    MINOR: tools: extend the pointer hashing code to ease manipulations
Willy Tarreau 2026-03-11 16:33:33 +01:00
parent 3b4275b072
commit fb7e5e1696

@@ -302,13 +302,15 @@ struct memprof_stats *memprof_get_bin(const void *ra, enum memprof_method meth)
 	int retries = 16; // up to 16 consecutive entries may be tested.
 	const void *old;
 	unsigned int bin;
+	ullong hash;
 	if (unlikely(!ra)) {
 		bin = MEMPROF_HASH_BUCKETS;
 		goto leave;
 	}
 
-	bin = ptr_hash(ra, MEMPROF_HASH_BITS);
-	for (; memprof_stats[bin].caller != ra; bin = (bin + 1) & (MEMPROF_HASH_BUCKETS - 1)) {
+	hash = _ptr_hash(ra);
+	bin = _ptr_hash_reduce(hash, MEMPROF_HASH_BITS);
+	for (; memprof_stats[bin].caller != ra; bin = (bin + (hash | 1)) & (MEMPROF_HASH_BUCKETS - 1)) {
 		if (!--retries) {
 			bin = MEMPROF_HASH_BUCKETS;
 			break;