Commit Graph

18 Commits

Willy Tarreau
ae540e3d9c Revert "IMPORT: plock: export the uninlined version of the lock wait function"
This reverts commit 5496d06b2b.

It breaks the build on Windows, which apparently doesn't support the
weak attribute well on functions. It's no big deal anyway; playing with
build options while debugging still works, though it's less convenient.
2025-02-07 19:51:15 +01:00
Willy Tarreau
b957e2f3ef IMPORT: plock: use cpu_relax() for a shorter time in EBO
Tests have shown that on modern CPUs it's beneficial to wait a bit less
in cpu_relax(). Until now we were looping down to 60 iterations and then
switching to just barriers. Increasing the threshold to 90 iterations
left before getting out of the loop improved the average and max time
to grab a write lock by a few percent (e.g. 10% at 1us, 20% at 256ns
or lower). Higher values tend to progressively lose that gain so let's
stick to this one. This was measured on an EPYC 74F3 like previous
measurements that initially led to this value, and the value might
possibly depend on the mask applied to the loop counter.

This is plock commit 74ca0a7307fa6aec3139f27d3b7e534e1bdb748e.
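
To picture the idea (a hedged sketch, not the actual plock code, whose
loop lives in pl_wait_unlock_long()): the pause step only uses the CPU's
relax instruction while more than the threshold of iterations remains.

  /* illustrative sketch only; x86 "pause" shown, names are made up */
  static inline void ebo_pause(unsigned long loops)
  {
          while (loops--) {
                  if (loops > 90)
                          __asm__ volatile("rep;nop" ::: "memory"); /* cpu_relax() */
                  else
                          __asm__ volatile("" ::: "memory");        /* barrier only */
          }
  }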
2025-02-07 18:04:29 +01:00
Willy Tarreau
253fba01a7 IMPORT: plock: lower the slope of the exponential back-off
Across many tests involving both haproxy's scheduler and forwarded
traffic, various exponents and algorithms were attempted for the EBO
and their effects were measured. It was found that a growth in 1.25^N
limited to 128k cycles consistently gives a better latency than 1.5^N
limited to 256k cycles, without degrading general performance. The
measures of the time to grab a write lock on a 48-thread EPYC show
that the number of occurrences of low times was roughly multiplied by
2-3 while the number of occurrences of times above 64us was reduced
by similar factors, even reaching 300 at 64us, while the maximum time
was reduced by a factor of 4.

The other variants that were experimented with are:

  m = ((m + (m >> 1)) + 2) & 0x3ffff;            // original
  m = ((m + (m >> 1) + (m >> 3)) + 2) & 0x3ffff;
  m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x3ffff;
  m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x1ffff;
  m = ((m + (m >> 1) + (m >> 4)) + 1) & 0x1ffff;
  m = ((m + (m >> 2) + (m >> 4)) + 1) & 0x1ffff; // lowest CPU on pl_wr test + good perf
  m = ((m + (m >> 2)) + 1) & 0x1ffff;            // even lower cpu usage, lowest max
  m = ((m + (m >> 1) + (m >> 2)) + 1) & 0x1ffff; // correct but slightly higher maxes
  m = ((m + (m >> 1) + (m >> 3)) + 1) & 0x1ffff; // less good than m+m>>2
  m = ((m + (m >> 2) + (m >> 3)) + 1) & 0x1ffff; // better but not as good as m+m>>2
  m = ((m + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good, lower rates on small counts.
  m = ((m + (m >> 2) + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good as well
  m = ((m & 0x7fff) + (m >> 1) + (m >> 4)) + 2;
  m = ((m & 0xffff) + (m >> 1) + (m >> 4)) + 2;

This is plock commit dddd9ee01c522da33c353e2e4d4fd743d8336ec3.
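
For illustration only, the retained slope (~1.25^N, i.e. m + (m >> 2),
masked to 128k-1) would drive a wait loop of roughly this shape (made-up
names, not the verbatim plock code):

  #include <stdatomic.h>

  /* hypothetical sketch of an EBO wait with a ~1.25^N growth */
  static void ebo_wait(atomic_ulong *lock, unsigned long mask)
  {
          unsigned long m = 1;

          while (atomic_load_explicit(lock, memory_order_relaxed) & mask) {
                  unsigned long loops = m;

                  while (loops--)
                          __asm__ volatile("" ::: "memory"); /* placeholder pause */
                  m = ((m + (m >> 2)) + 1) & 0x1ffff; /* grow ~1.25x, cap at 128k-1 */
          }
  }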
2025-02-07 18:04:29 +01:00
Willy Tarreau
9dd56da730 IMPORT: plock: give higher precedence to W than S
It was noticed in haproxy that in certain extreme cases, a write lock
subject to EBO may fail for a very long time in front of a large set
of readers constantly trying to upgrade to the S state. The reason is
that among many readers, one will succeed in its upgrade, and this
situation can last for a very long time with many readers upgrading
in turn, while the writer waits longer and longer before trying again.

Here we're taking a reasonable approach which is that the write lock
should have a higher precedence in its attempt to grab the lock. What
is done is that instead of fully rolling back in case of conflict with
a pure S lock, the writer will only release its read part in order to
let the S upgrade to W if needed, and finish its operations. This
guarantees no other seek/read/write can enter. Once the conflict is
resolved, the writer grabs the read part again and waits for readers
to be gone (in practice it could even return without waiting since we
know that any possible wanderers would leave or even not be there at
all, but it avoids a complicated loop code that wouldn't improve the
practical situation but inflate the code).

Thanks to this change, the maximum write lock latency on a 48-thread
AMD with a heavily loaded scheduler went down from 256 to 64 ms, and the
number of occurrences of 32ms or more was divided by 300, while all
occurrences of 1ms or less were multiplied by up to 3 (3 for the 4-16ns
cases).

This is plock commit b6a28366d156812f59c91346edc2eab6374a5ebd.
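
Schematically (an illustrative reading of the paragraph above, not the
actual plock code):

  /* writer path, sketched:
   *   v = ldadd(lock, W + S + R);   // grab write, seek and read parts
   *   if (v shows a pure S holder) {
   *           sub(lock, R);         // drop only our read part so the S
   *                                 // holder can upgrade to W and finish;
   *                                 // nobody else can enter meanwhile
   *           wait for the conflict to resolve;
   *           ldadd(lock, R);       // re-grab the read part
   *   }
   *   wait for the remaining readers to leave, then act as the writer
   */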
2025-02-07 18:04:29 +01:00
Willy Tarreau
5496d06b2b IMPORT: plock: export the uninlined version of the lock wait function
The inlining of the lock waiting function was made more easily
configurable with commit 7505c2e ("plock: always expose the inline
version of the lock wait function"). However, the standard one remained
static, but in order to resolve the symbols in "perf top", it's much
better to export it, so let's keep "static" together with "inline" and
leave the function exported when PLOCK_INLINE_EBO is not set.

This is plock commit 3bea7812ec705b9339bbb0ed482a2cd8aa6c185c.
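
The shape of the change, sketched with a guessed prototype (the real
signature and guards are in plock.h):

  #ifdef PLOCK_INLINE_EBO
  /* inlined: "static" stays together with "inline" */
  static inline unsigned long pl_wait_unlock_long(const unsigned long *lock,
                                                  const unsigned long mask);
  #else
  /* uninlined: no "static", so the symbol resolves in "perf top"; the
   * weak attribute is what the later revert blames for breaking the
   * Windows build */
  __attribute__((weak)) unsigned long pl_wait_unlock_long(const unsigned long *lock,
                                                          const unsigned long mask);
  #endif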
2025-02-07 18:04:29 +01:00
Willy Tarreau
7c69c9b51f BUG/MAJOR: plock: fix major bug in pl_take_w() introduced with EBO
When EBO was brought to pl_take_w() by plock commit 60d750d ("plock: use
EBO when waiting for readers to leave in take_w() and stow()"), a mistake
was made: the mask against which the current value of the lock is tested
excludes the first reader like in stow(), but it must not because it was
just obtained via an ldadd() which means that it doesn't count itself.

The problem this causes is that if there is exactly one reader when a
writer grabs the lock, the writer will not wait for it to leave before
starting its operations.

The solution consists in checking for any reader in the "if" condition.
However the mask passed to pl_wait_unlock_*() must still exclude the
lowest bit as it's verified after a subsequent load.

Kudos to Remi Tricot-Le Breton for reporting and bisecting this issue
with a reproducer.

No backport is needed since this was brought in 2.9-dev3 with commit
8178a5211 ("MAJOR: threads/plock: update the embedded library again").
The code is now on par with plock commit ada70fe.
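
The underlying point is plain fetch-and-add semantics: the returned
value predates the addition, so the caller's own reader count is absent
from it. A minimal standalone illustration in C11 (not the plock macros
themselves):

  #include <assert.h>
  #include <stdatomic.h>

  int main(void)
  {
          atomic_ulong lock = 0;

          /* fetch_add returns the PREVIOUS value: our own increment is
           * not visible in it, so a single pre-existing reader must not
           * be masked out of the test */
          unsigned long prev = atomic_fetch_add(&lock, 1);
          assert(prev == 0); /* we do not see ourselves */
          return (int)prev;
  }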
2023-10-03 08:28:12 +02:00
Willy Tarreau
892d04733f BUILD: import: guard plock.h against multiple inclusion
Surprisingly there's no include guard in plock.h though there is one in
atomic-ops.h. Let's add one, otherwise we cannot safely include the
file multiple times.
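
The fix is the usual guard; the macro name below is only illustrative:

  #ifndef PL_PLOCK_H   /* hypothetical guard name */
  #define PL_PLOCK_H

  /* ... contents of plock.h ... */

  #endif /* PL_PLOCK_H */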
2023-08-26 17:28:08 +02:00
Amaury Denoyelle
cd97ba147c BUILD/IMPORT: fix compilation with PLOCK_DISABLE_EBO=1
Compilation is broken due to missing __pl_wait_unlock_long() definition
when building with PLOCK_DISABLE_EBO=1. This was introduced by the
following commit, which activates the inline version of
pl_wait_unlock_long():
  commit 071d689a51
  MINOR: threads: inline the wait function for pthread_rwlock emulation

Add an extra check on PLOCK_DISABLE_EBO before choosing the inline or
default version of pl_wait_unlock_long() to fix this.
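
Sketched, the corrected selection looks like this (the macro names are
the real ones from the message; the bodies are illustrative):

  #if !defined(PLOCK_DISABLE_EBO) && defined(PLOCK_INLINE_EBO)
          /* EBO enabled and inlining requested: use the inline
           * pl_wait_unlock_long() */
  #else
          /* EBO disabled or inlining not requested: use the default
           * pl_wait_unlock_long() */
  #endif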
2023-08-17 11:16:54 +02:00
Willy Tarreau
e56275378f IMPORT: lorw: support inlining the wait call
Now when PLOCK_LORW_INLINE_WAIT is defined, the pl_wait_unlock_long()
calls in pl_lorw_rdlock() and pl_lorw_wrlock() will be inlined so that
all the CPU time is accounted for in the calling function.

This is plock upstream commit c993f81d581732a6eb8fe3033f21970420d21e5e.
2023-08-17 00:09:05 +02:00
Willy Tarreau
66dcc0550e IMPORT: plock: always expose the inline version of the lock wait function
Doing so will allow exposing the time spent in certain highly
contended functions, which can be desirable for more accurate CPU
profiling. For example this could be done in locking functions that
are already not inlined so that they are the ones being reported as
those consuming the CPU instead of just pl_wait_unlock_long().

This is plock upstream commit 7505c2e2c8c4aa0ab8f52a2288e1334ae6412be4.
2023-08-17 00:09:05 +02:00
Willy Tarreau
c6b98f05d2 IMPORT: plock: also support inlining the int code
Commit 9db830b ("plock: support inlining exponential backoff code")
added an option to support inlining of the wait code for longs but
forgot to do it for ints. Let's do it now.

This is plock upstream commit b1f9f0d252fa40577d11cfb2bc0a809d6960a297.
2023-08-17 00:09:05 +02:00
Willy Tarreau
8178a5211c MAJOR: threads/plock: update the embedded library again
This updates the local copy of the plock library to benefit from finer
memory ordering, EBO on more operations such as when take_w() and stow()
wait for readers to leave, and refined EBO, especially on common operations
such as attempts to upgrade R to S, and avoids a counter-productive prior
read in rtos() and take_r().

These changes have shown a 5% increase on regular operations on ARM,
a 33% performance increase on ARM on stick-tables and 2% on x86, and
14% and 4% improvements on peers updates on ARM and x86 respectively.

The availability of relaxed operations will probably be useful for stats
counters which are still extremely expensive to update.

The following plock commits were included in this update:

  9db830b plock: support inlining exponential backoff code
  008d3c2 plock: make the rtos upgrade faster
  2f76dde atomic: clean up the generic xchg()
  3c6919b atomic: make sure that the no-return macros do not return a value
  97c2bb7 atomic: make the fallback bts use the pointed type for the shift
  f4c1880 atomic: also implement the missing pl_btr()
  8329b82 atomic: guard all generic definitions to make it easier to provide specific ones
  7c5cb62 atomic: use C11 atomics when available
  96afaf9 atomic: prefer the C11 definitions in general
  f3ec7a6 atomic: implement load/store/atomic barriers
  8bdbd1e atomic: add atomic load/stores
  0f604c0 atomic: add more _noret operations
  3fe35db atomic: remove the (void) cast from the C11 operations
  3b08a7c atomic: allow to define the fallback _noret variants
  28deb22 atomic: make x86 arithmetic operations the _noret variants
  8061fe2 atomic: handle modern compilers that support returning flags
  b8b91b7 atomic: add the fetch-and-<op> operations (pl_ld<op>)
  59817ca atomic: add memory order variants for most operations
  a40774f plock: explicitly make use of the pl_*_noret operations
  6f1861b plock: switch to pl_sub_noret_lax() for cancellation
  c013980 plock: use pl_ldadd{_lax,_acq,} instead of pl_xadd()
  382eea3 plock: use a release ordering when dropping the lock
  60d750d plock: use EBO when waiting for readers to leave in take_w() and stow()
  fc01c4f plock: improve EBO a little bit
  1ef6390 plock: switch to CAS + XADD for pl_take_r()
2023-08-11 19:03:35 +02:00
Willy Tarreau
b13044cc1a MINOR: plock: support disabling exponential back-off
The new macro PLOCK_DISABLE_EBO may be defined to disable exponential
backoff. This can be useful to more easily spot functions that cause
contention. In this case the CPU time will be spent inside the functions
themselves instead of the pl_wait_unlock_{long,int}() functions, making
them easier to spot using "perf top" even if that causes a significant
degradation of the thread scalability.
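
For instance, assuming the macro is honoured when defined before
inclusion (it is normally passed on the compiler command line):

  /* either -DPLOCK_DISABLE_EBO on the command line, or: */
  #define PLOCK_DISABLE_EBO
  #include "plock.h"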
2022-10-12 14:19:05 +02:00
Willy Tarreau
688709d814 MAJOR: threads/plock: update the embedded library
The plock code hasn't been updated since 2017 and didn't benefit
from the exponential back-off improvements that were added in 2018.
Simply updating the file shows a massive performance gain at large
thread counts (>=48), with dequeuing going from 113k RPS to 300k RPS and
round robin from 229k RPS to 1020k RPS. It was about time to update.
In addition, some recent improvements to the code will be useful with
thread groups.

An interesting improvement concerns EPYC CPUs. This one alone increased
fairness and was sufficient to avoid crashes in process_srv_queue() there,
when hammering two servers with maxconn 200 under 1k connections.
2022-07-30 10:15:44 +02:00
Willy Tarreau
b1f54925fc BUILD: plock: remove dead code that causes a warning in gcc 11
As Ilya reported in issue #998, gcc 11 complains about misleading code
indentation which is in fact caused by dead assignments to zero after
a loop which stops on zero. Let's clean up both of these.
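
The offending pattern was roughly of this shape (illustrative, not the
exact plock lines):

  unsigned long m = 255;

  while (m)
          m >>= 1; /* the loop can only exit once m reaches zero */
  m = 0;           /* dead store: m is already 0 here; gcc 11 reports
                    * the area as misleading indentation, and removing
                    * the assignment silences it */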
2020-12-21 10:27:18 +01:00
Willy Tarreau
2532bd2f81 BUILD: threads/plock: fix a build issue on Clang without optimization
[ plock commit 4c53fd3a0b2b1892817cebd0db012a52f4087850 ]

Pieter Baauw reported a build issue affecting haproxy after plock was
included. It happens that expressions of the form:

     if ((const) ? (expr1) : (expr2))
       do_something()

always produce code for both expr1 and expr2 on Clang when building
without optimization. The resulting asm code is even funny, basically
doing:

     mov reg, 1
     cmp reg, 1
     ...

This causes our sizeof() tests to fail to build because we purposely
reference a fake function that reports the location and nature of the
inconsistency, but this fake function appears in the object code despite
all conditions being there to avoid it.

However the compiler is still smart enough to optimize away code doing

    if (const)
       do_something()

So we simply repeat the condition before do_something(), and the dummy
function is not referenced anymore unless really required.
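
Put together, the workaround can be sketched like this (names are
illustrative; plock's real macros dispatch on sizeof() the same way):

  /* intentionally left undefined: referencing it turns an unsupported
   * size into a link-time error with a telling symbol name */
  void __unsupported_argument_size(void);

  #define pl_op(ptr) (                                               \
          (sizeof(*(ptr)) == 8) ? /* 8-byte implementation */ 8 :    \
          (sizeof(*(ptr)) == 4) ? /* 4-byte implementation */ 4 :    \
          ({ /* repeat the constant condition: even at -O0, Clang */ \
             /* then drops the dead call below */                    \
             if (sizeof(*(ptr)) != 8 && sizeof(*(ptr)) != 4)         \
                     __unsupported_argument_size();                  \
             0; }))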
2017-11-20 21:06:35 +01:00
Willy Tarreau
f7ba77eb80 MINOR: threads/plock: rename local variables in macros to avoid conflicts
[ plock commit b155d5c762fb9a9793911881f80e61faa6b0e889 ]

Local variables "l", "i" and "ret" were renamed "__pl_l", "__pl_i" and
"__pl_r" respectively, to limit the risk of conflicts with existing
variables in application code.
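
The hazard being avoided is ordinary macro shadowing, e.g. (an
illustration using GNU statement expressions, as plock does):

  #define BAD_TWICE(x)  ({ int i = (x); i + i; })          /* fragile */
  #define SAFE_TWICE(x) ({ int __pl_i = (x); __pl_i + __pl_i; })

  int main(void)
  {
          int i = 3;
          int broken = BAD_TWICE(i + 1);  /* expands to "int i = (i + 1);":
                                           * the caller's "i" is shadowed
                                           * and read uninitialized */
          int fine = SAFE_TWICE(i + 1);   /* 8, as intended */

          return fine - broken;
  }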
2017-11-20 20:45:43 +01:00
Emeric Brun
7122ab31b1 MINOR: threads: Add atomic-ops and plock includes in import dir
The atomic-ops header contains some low-level functions to perform
atomic operations. These operations are used by the progressive locks
(plock).
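
As a flavour of what the header provides (a generic sketch with a GCC
builtin; plock's own primitives are implemented per architecture):

  #define my_xadd(ptr, x) __sync_fetch_and_add((ptr), (x)) /* made-up name */

  int main(void)
  {
          unsigned long counter = 0;
          unsigned long old = my_xadd(&counter, 1); /* old == 0, counter == 1 */
          return (int)old;
  }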
2017-10-31 11:36:13 +01:00