Commit Graph

22 Commits

Willy Tarreau
8da23393a1 BUILD: atomic: make the old HA_ATOMIC_LOAD() support const pointers
We have an implementation of atomic ops for older versions of gcc that
do not provide the __builtin_* API (< 4.4). Recent changes to the pools
broke that in pool_releasable() by having a load from a const pointer,
which doesn't work there because of a temporary local variable that is
declared and then assigned. Let's use a compound statement to assign it
a value at declaration time.

There's no need to backport this.
2022-01-28 19:04:02 +01:00
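
A minimal sketch of the compound-statement (GNU statement-expression) approach
described above, for the legacy non-__builtin_* code path; the macro name,
barrier choice and exact shape are illustrative assumptions, not the real
HAProxy macro:

  /* Illustrative only: initialize the (possibly const-qualified) temporary at
   * its declaration through a statement expression, instead of declaring it
   * and assigning it afterwards, so a load through a const pointer compiles. */
  #define ATOMIC_LOAD_SKETCH(ptr) ({                              \
          typeof(*(ptr)) __ret = ({                               \
                  __sync_synchronize();   /* pre-load barrier  */ \
                  *(ptr);                 /* the actual load   */ \
          });                                                     \
          __sync_synchronize();           /* post-load barrier */ \
          __ret;                                                  \
  })
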
Willy Tarreau
04065b87ce MINOR: atomic: remove the memcpy() call and dependency on string.h
The memcpy() call in the aarch64 version of __ha_cas_dw() is sometimes
inlined and sometimes not, depending on the gcc version. It's only used
to copy two void*, so let's use direct assignment instead of memcpy().
It would also be possible to change the asm code to directly write there,
but it's not worth it.

With this change the code is 8kB smaller with gcc-5.4.
2021-10-28 09:45:48 +02:00
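
A small hedged sketch of the kind of change described above: copying two void*
by plain assignment rather than memcpy(), removing the <string.h> dependency
and any risk of an out-of-line call (the function name and shape are
illustrative, not HAProxy's code):

  /* Copy the two pointers observed by the double-word CAS by assignment;
   * the compiler can keep everything in registers. */
  static inline void copy_dw_result(void **dst, void **src)
  {
      /* was: memcpy(dst, src, 2 * sizeof(void *)); */
      dst[0] = src[0];
      dst[1] = src[1];
  }
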
David CARLIER
c050dc6c68 BUILD: atomic: fix build on mac/arm64
The inline assembly is invalid on this platform, so fall back to the
builtin instead.
2021-10-28 09:45:48 +02:00
Willy Tarreau
4957a32f9e BUILD: atomic: prefer __atomic_compare_exchange_n() for __ha_cas_dw()
__atomic_compare_exchange() is incorrectly documented in the gcc builtins
doc: it says the desired value is "type *desired" while in reality it is
"const type *desired", as expected, since that value must in no way be
modified by the operation. However, it seems that clang has implemented
it as documented, and it reports build warnings when fed a const.

This is quite problematic because it means we have to betray the callers,
pretending we won't touch their constants while not knowing what the
compiler will do with them, possibly hiding future bugs.

Instead of forcing a cast, let's just switch to the better
__atomic_compare_exchange_n() that takes a value instead of a pointer.
At least with this one there is no doubt about how the input will be
used.

It was verified that the output object code is the same both in clang
and gcc with this change.
2021-10-28 09:45:48 +02:00
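
A sketch of the _n form mentioned above, under the assumption of a plain
word-sized CAS wrapper (names are illustrative, not HAProxy's macros): the
desired value is passed by value, so a caller's const never has to be cast
away.

  #include <stdbool.h>

  static inline bool cas_word_sketch(unsigned long *target,
                                     unsigned long *expected,
                                     unsigned long desired)
  {
      return __atomic_compare_exchange_n(target, expected, desired,
                                         false,               /* strong CAS */
                                         __ATOMIC_SEQ_CST,    /* success order */
                                         __ATOMIC_SEQ_CST);   /* failure order */
  }
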
Willy Tarreau
99198546f6 MEDIUM: atomic: relax the load/store barriers on x86_64
The x86-tso model makes the load and store barriers unneeded for our
usage as long as they perform at least a compiler barrier: the CPU
will respect store ordering and store vs load ordering. It's thus
safe to remove the lfence and sfence which are normally needed only
to communicate with external devices. Let's keep the mfence though,
to make sure that reads of the same memory location after writes report
the value from memory and not the one snooped from the write buffer
for too long.

An in-depth review of all use cases tends to indicate that this is
okay in the rest of the code. Some parts could be cleaned up to
use atomic stores and atomic loads instead of explicit barriers
though.

Doing this reliably increases the overall performance by about 2-2.5%
on an 8c-16t Xeon thanks to less frequent flushes (it's likely that the
biggest gain is in the MT lists which use them a lot, and that this
results in fewer cache line flushes).
2021-08-01 17:34:06 +02:00
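
A hedged sketch of what such x86-64 barrier macros can look like (names and
exact shape are assumptions, not HAProxy's macros): under x86-TSO, loads are
ordered against loads and stores against stores, so the load/store barriers
only need to stop compiler reordering; only the full barrier keeps a real
fence for the store-then-load case.

  #if defined(__x86_64__)
  #  define compiler_only_barrier() __asm__ volatile("" ::: "memory")
  #  define load_barrier_sketch()   compiler_only_barrier()   /* no lfence */
  #  define store_barrier_sketch()  compiler_only_barrier()   /* no sfence */
  #  define full_barrier_sketch()   __asm__ volatile("mfence" ::: "memory")
  #endif
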
Willy Tarreau
cb0451146f MEDIUM: atomic: simplify the atomic load/store/exchange operations
The atomic_load/atomic_store/atomic_xchg operations were all forced to
__ATOMIC_SEQ_CST, which results in explicit store or even full barriers
even on x86-tso while we do not need them: we're not communicating with
external devices for example and are only interested in respecting the
proper ordering of loads and stores between each other.

Since these are rarely used, the emitted code on x86 remains almost the
same (barring a handful of locations). However, they will make it possible
to place correct barriers at other places where atomics are accessed a bit
lightly.

The patch is marked medium because we can never rule out the risk of some
bugs on more relaxed platforms due to the rest of the code.
2021-08-01 17:34:06 +02:00
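
A sketch of load/store/exchange wrappers using weaker orders than the previous
blanket __ATOMIC_SEQ_CST; the specific memory orders below are assumptions for
illustration, not necessarily the ones HAProxy settled on:

  #define LOAD_SKETCH(ptr)      __atomic_load_n((ptr), __ATOMIC_ACQUIRE)
  #define STORE_SKETCH(ptr, v)  __atomic_store_n((ptr), (v), __ATOMIC_RELEASE)
  #define XCHG_SKETCH(ptr, v)   __atomic_exchange_n((ptr), (v), __ATOMIC_RELAXED)

On x86-TSO an acquire load and a release store compile to plain mov plus
compiler-level ordering, which matches the relaxation the commit describes.
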
Willy Tarreau
7b1425a91b MINOR: atomic: reimplement the relaxed version of x86 BTS/BTR
Olivier spotted that I messed up during a rebase of commit 92c059c2a
("MINOR: atomic: implement native BTS/BTR for x86"), losing the x86
version of the BTS/BTR and leaving the generic version for it instead
of having this block in the #else. Since this variant is not used for
now it was easy to overlook it. Let's re-implement it here.
2021-04-12 10:01:44 +02:00
Willy Tarreau
92c059c2ac MINOR: atomic: implement native BTS/BTR for x86
The current BTS/BTR operations on x86 are ugly because they rely on a
CAS, so they may be unfair and take time to converge. Fortunately,
where they are currently used (mostly FDs) the contention is expected
to be rare (mostly listeners). But this also limits their use to such
few low-load cases.

On x86 there is a set of BTS/BTR instructions which help for this,
but before the FD's state migrated to 32 bits there was little use for
them since they do not exist in an 8-bit form.

Now at least it makes sense to use them, at the very least in order
to significantly reduce the code size (one BTS instead of a CMPXCHG
loop). The implementation relies on modern gcc's ability to return
condition flags, limiting code inflation and register spilling. The
fallback to the old implementation is retained for all other situations
(inappropriate target size or non-capable compiler). The code shrank
by 1.6 kB on the fast path.

As expected, there is no measurable performance difference on up to
4 threads for now.
2021-04-07 18:47:22 +02:00
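
A hedged sketch of the native BTS idea (not the exact HAProxy macro): the
"=@ccc" flag-output operand hands back the carry flag, which BTS sets to the
bit's previous value, so no CMPXCHG loop is needed; the generic fallback
covers other sizes and compilers. Names are illustrative.

  static inline unsigned char bts_sketch(unsigned int *ptr, unsigned int bit)
  {
  #if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__)
      unsigned char ret;
      __asm__ volatile("lock btsl %2, %0"
                       : "+m" (*ptr), "=@ccc" (ret)  /* ret = previous bit (CF) */
                       : "Ir" (bit));
      return ret;
  #else
      unsigned int mask = 1U << bit;                 /* generic fallback */
      return !!(__atomic_fetch_or(ptr, mask, __ATOMIC_SEQ_CST) & mask);
  #endif
  }
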
Willy Tarreau
fa68d2641b CLEANUP: atomic: use the __atomic variant of BTS/BTR on modern compilers
Probably as the result of an old copy-paste, HA_ATOMIC_BTS/BTR were
still implemented using the __sync_* builtins instead of the more
modern __atomic_* ones, which allow the memory model to be specified.
Let's update this to use the newer builtins and also implement the
relaxed variants (which are not used for now).
2021-04-07 18:18:37 +02:00
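
For comparison, a sketch of the same bit-set expressed with the __atomic
builtins, where the memory model is an explicit argument and a relaxed
variant becomes possible (macro names are assumptions, not HAProxy's):

  /* __sync_fetch_and_or() always implies a full barrier; __atomic_fetch_or()
   * takes the ordering as a parameter. */
  #define BTS_SKETCH(ptr, bit) \
          (!!(__atomic_fetch_or((ptr), 1UL << (bit), __ATOMIC_SEQ_CST) & (1UL << (bit))))
  #define BTS_RELAXED_SKETCH(ptr, bit) \
          (!!(__atomic_fetch_or((ptr), 1UL << (bit), __ATOMIC_RELAXED) & (1UL << (bit))))
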
Willy Tarreau
22d675cb77 CLEANUP: atomic: add HA_ATOMIC_INC/DEC for unit increments
Most ADD/SUB callers use them for a single unit (e.g. refcounts) and
it's a pain to always pass ",1". Let's add them to simplify the API.
However we currently don't add any return value. If one is needed in the
future, it would be better to report zero/non-zero than the actual value,
for the sake of efficiency at the instruction level.
2021-04-07 18:18:37 +02:00
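
A sketch of what such wrappers may look like (assumed shape, not the exact
HAProxy macros): single-unit increments with no trailing ",1" and no return
value.

  #define ATOMIC_INC_SKETCH(ptr) ((void)__atomic_add_fetch((ptr), 1, __ATOMIC_SEQ_CST))
  #define ATOMIC_DEC_SKETCH(ptr) ((void)__atomic_sub_fetch((ptr), 1, __ATOMIC_SEQ_CST))
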
Willy Tarreau
185157201c CLEANUP: atomic: add a fetch-and-xxx variant for common operations
The fetch_and_xxx variant is often missing for add/sub/and/or. In fact
it was only provided for ADD, under the name XADD, which corresponds to
the x86 instruction name. For destructive operations like AND and OR it
is missed even more, since without it there is no way to know the value
before modifying it.

This patch explicitly adds HA_ATOMIC_FETCH_{OR,AND,ADD,SUB} which
cover these standard operations, and renames XADD to FETCH_ADD (there
were only 6 call places).

In the future, backports of fixes involving such operations could simply
remap FETCH_ADD(x) to XADD(x) and FETCH_SUB(x) to XADD(-x); for OR/AND,
if needed, these could possibly be done using BTS/BTR.

It's worth noting that xchg could have been renamed to fetch_and_store(),
but xchg already has well-understood semantics and there was no need to
go further.
2021-04-07 18:18:37 +02:00
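
A sketch of fetch-and-op primitives returning the value observed before the
operation, which is the only way to know the prior contents of destructive
AND/OR targets (names and memory orders are assumptions, not HAProxy's macros):

  #define FETCH_OR_SKETCH(ptr, v)   __atomic_fetch_or((ptr), (v), __ATOMIC_SEQ_CST)
  #define FETCH_AND_SKETCH(ptr, v)  __atomic_fetch_and((ptr), (v), __ATOMIC_SEQ_CST)
  #define FETCH_ADD_SKETCH(ptr, v)  __atomic_fetch_add((ptr), (v), __ATOMIC_SEQ_CST)
  #define FETCH_SUB_SKETCH(ptr, v)  __atomic_fetch_sub((ptr), (v), __ATOMIC_SEQ_CST)
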
Willy Tarreau
a477150fd7 CLEANUP: atomic: make all standard add/or/and/sub operations return void
In order to make sure these are no longer used in expressions, let's
make them always void. New callers will now be forced to use the
explicit _FETCH variant if required.
2021-04-07 18:18:37 +02:00
Willy Tarreau
1db427399c CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
2021-04-07 18:18:37 +02:00
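
A sketch of the complementary _FETCH naming, where the returned value is the
one observed after the operation (assumed shape, not the exact HAProxy
macros):

  #define ADD_FETCH_SKETCH(ptr, v)  __atomic_add_fetch((ptr), (v), __ATOMIC_SEQ_CST)
  #define SUB_FETCH_SKETCH(ptr, v)  __atomic_sub_fetch((ptr), (v), __ATOMIC_SEQ_CST)
  #define AND_FETCH_SKETCH(ptr, v)  __atomic_and_fetch((ptr), (v), __ATOMIC_SEQ_CST)
  #define OR_FETCH_SKETCH(ptr, v)   __atomic_or_fetch((ptr), (v), __ATOMIC_SEQ_CST)
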
Willy Tarreau
6756d95a8e MINOR: atomic/arm64: detect and use builtins for the double-word CAS
Gcc 10.2 implements outline atomics on aarch64. These replace all inline
atomic ops with a function call that checks if the machine supports LSE
atomics. This comes with a small cost but allows modern machines to scale
much better than with the old LL/SC ones even when built for full 8.0
compatibility.

This patch enables the use of the __atomic_compare_exchange() builtin
for the double-word CAS when detected as available instead of using the
hand-written LL/SC version. The extra cost is negligible because we do
very few DWCAS operations (essentially FD migrations and shared pools),
and under high contention it can still be beneficial.
As expected no performance difference was measured in either direction
on 4-core machines with this change.

This could be backported to 2.3 if it was shown that FD migrations were
representing a significant source of contention, but for now it does
not appear to be needed.
2021-04-07 18:18:37 +02:00
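
A hedged sketch of a builtin-based double-word CAS (not HAProxy's function): a
16-byte struct handed to __atomic_compare_exchange(), which recent aarch64
compilers can lower to outline atomics or CASP on LSE-capable cores; other
toolchains may need libatomic. Type and function names are illustrative.

  #include <stdint.h>

  struct dw_sketch { uint64_t lo, hi; };

  static inline int dwcas_builtin_sketch(struct dw_sketch *target,
                                         struct dw_sketch *expected,
                                         struct dw_sketch *desired)
  {
      /* returns non-zero on success; on failure *expected gets the current value */
      return __atomic_compare_exchange(target, expected, desired, 0,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }
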
Ilya Shipitsin
f3ede874a5 CLEANUP: assorted typo fixes in the code and comments
This is the 20th iteration of typo fixes.
2021-03-13 11:45:17 +01:00
Willy Tarreau
3b728a92bb BUILD: atomic/arm64: force the register pairs to use in __ha_cas_dw()
Since commit f8fb4f75f ("MINOR: atomic: implement a more efficient arm64
__ha_cas_dw() using pairs"), on some modern arm64 (armv8.1+) compiled
with -march=armv8.1-a under gcc-7.5.0, a build error may appear on ev_poll.o :

  /tmp/ccHD2lN8.s:1771: Error: reg pair must start from even reg at operand 1 -- `casp x27,x28,x22,x23,[x12]'
  Makefile:927: recipe for target 'src/ev_poll.o' failed

It appears that the compiler cannot always assign register pairs there
for a structure made of two u64. It was possibly later addressed since
gcc-9.3 never caused this, but there's no trivially available info on
the subject in the changelogs. Unsurprisingly, using a u128 instead does
fix this, but it significantly inflates the code (+4kB for just 6 places,
very likely because it loads some extra stubs), and the comparison is ugly,
involving two slower conditional jumps instead of a single one and a
conditional comparison. For example, ha_random64() grew from 144 bytes
to 232.

However, simply forcing the base register does work pretty well, and
makes the code even cleaner and more efficient by further reducing it
by about 4.5kB, possibly because it helps the compiler to pick suitable
registers for the pair there. And the perf on 64 cores looks steadily
0.5% above the previous one, so let's do this.

Note that the commit above was backported to 2.3 to fix scalability
issues on AWS Graviton2 platform, so this one will need to be as well.
2021-03-12 06:26:22 +01:00
Ubuntu
f8fb4f75f1 MINOR: atomic: implement a more efficient arm64 __ha_cas_dw() using pairs
There finally is a way to support register pairs in aarch64 assembly
under gcc; it's just undocumented, like many of the options there :-(
As indicated below, it's possible to pass "%H" to mention the high
part of a register pair (e.g. "%H0" to go with "%0"):

  https://patchwork.ozlabs.org/project/gcc/patch/59368A74.2060908@foss.arm.com/

By making local variables from pairs of registers via a struct (as
is used in IST for example), we can let gcc choose the correct register
pairs and avoid a few moves in certain situations. The code is now slightly
more efficient than the previous one on AWS' Graviton2 platform, and
noticeably smaller (by 4.5kB approx). A few tests on older releases show
that even Linaro's gcc-4.7 used to support such register pairs and %H;
since ATOMICS were not supported back then, this should not cause build
issues, so this patch replaces the earlier implementation.
2021-03-05 08:30:08 +01:00
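
A toy illustration of the technique (not __ha_cas_dw() itself): a two-u64
struct passed in registers lets gcc pick a consecutive register pair, and the
undocumented "%H" operand modifier names the upper register of that pair. The
names below are assumptions, and the example just stores a pair with STP.

  struct reg_pair_sketch { unsigned long long lo, hi; };

  static inline void store_pair_sketch(void *dst, struct reg_pair_sketch v)
  {
      __asm__ volatile("stp %1, %H1, [%0]"   /* e.g. stp x2, x3, [x0] */
                       : /* no outputs */
                       : "r" (dst), "r" (v)
                       : "memory");
  }
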
Willy Tarreau
46cca86900 MINOR: atomic: add armv8.1-a atomics variant for cas-dw
This variant uses the CASP instruction available on armv8.1-a CPU cores,
which is detected when __ARM_FEATURE_ATOMICS is set (gcc-linaro >= 7,
mainline >= 9). This one was tested on cortex-A55 (S905D3) and on AWS'
Graviton2 CPUs.

The instruction performs way better on high thread counts since it
guarantees some forward progress when facing extreme contention while
the original LL/SC approach is light on low-thread counts but doesn't
guarantee progress.

The implementation is not the most optimal possible. In particular, since
the instruction requires working on register pairs and there doesn't seem
to be a way to force gcc to emit register pairs, we force the use of the
pair (x0,x1) to store the old value and (x2,x3) to store the new one,
which necessarily involves some extra moves. But at least it
does improve the situation with 16 threads and more. See issue #958 for
more context.

Note: a first implementation of this function made use of an
input/output constraint passed using "+Q"(*(void**)target), which
resulted in smaller overall code than passing "target" as an input
register only. It turned out that the cause was directly related to
whether the function was inlined or not, hence the "forceinline"
attribute. Any changes to this code should still pay attention to this
important factor.
2021-03-05 08:30:08 +01:00
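
A hedged sketch of the CASP-based approach described above, pinning the old
value to (x0,x1) and the new one to (x2,x3) with register asm; this is an
illustration under stated assumptions, not the exact HAProxy implementation,
and requires an armv8.1-a (+lse) build.

  #include <stdint.h>

  #if defined(__aarch64__) && defined(__ARM_FEATURE_ATOMICS)
  static inline int dwcas_casp_sketch(uint64_t *target, const uint64_t *compare,
                                      const uint64_t *set)
  {
      register uint64_t x0 __asm__("x0") = compare[0];  /* expected, low  */
      register uint64_t x1 __asm__("x1") = compare[1];  /* expected, high */
      register uint64_t x2 __asm__("x2") = set[0];      /* new, low       */
      register uint64_t x3 __asm__("x3") = set[1];      /* new, high      */
      uint64_t old0 = x0, old1 = x1;

      __asm__ volatile("casp %0, %1, %2, %3, [%4]"
                       : "+r" (x0), "+r" (x1)           /* receive the old value */
                       : "r" (x2), "r" (x3), "r" (target)
                       : "memory");

      /* CASP sets no flags: success is known by comparing what it returned
       * with what we expected. */
      return (x0 == old0) && (x1 == old1);
  }
  #endif
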
Willy Tarreau
958ae26c35 REORG: atomic: reimplement pl_cpu_relax() from atomic-ops.h
There is some confusion here as we need to place some cpu_relax statements
in some loops where it's not easily possible to condition them on the use
of threads. That's what atomic-ops.h already does. So let's take the various
pl_cpu_relax() implementations from there and place them in atomic.h under
the name __ha_cpu_relax() and let them adapt to the presence or absence of
threads and to the architecture (currently only x86 and aarch64 use a
barrier instruction, though it's very likely that arm would work well
with a cache flushing ISB instruction as well).

This time they were implemented as expressions returning 1 rather than
statements, in order to ease their placement as the loop condition or the
continuation expression inside "for" loops. We should probably do the same
with barriers and a few such other ones.
2021-03-05 08:30:08 +01:00
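
A sketch of an expression-form cpu_relax evaluating to 1 (the name and the
per-architecture instructions are assumptions, not the exact HAProxy macros):

  #if defined(__x86_64__) || defined(__i386__)
  #  define cpu_relax_sketch() ({ __asm__ volatile("pause"); 1; })
  #elif defined(__aarch64__)
  #  define cpu_relax_sketch() ({ __asm__ volatile("isb"); 1; })
  #else
  #  define cpu_relax_sketch() ({ __asm__ volatile("" ::: "memory"); 1; })
  #endif

Because it evaluates to 1, it can sit directly in a loop condition, e.g.
"while (__atomic_load_n(&busy, __ATOMIC_RELAXED) && cpu_relax_sketch());",
which is the placement convenience the commit describes.
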
Olivier Houchard
63ee281854 MINOR: atomic: don't use ; to separate instruction on aarch64.
The assembler on MacOS aarch64 interprets ; as the beginning of comments,
so it is not suitable for separating instructions in inline asm. Use \n
instead.

This should be backported to 2.3, 2.2, 2.1, 2.0 and 1.9.
2020-12-23 01:23:41 +01:00
Willy Tarreau
bcefb85009 BUILD: atomic: add string.h for memcpy() on ARM64
As reported in issue #686, the ARM64 build has been failing since the
include files reorganization. This is caused by the lack of string.h
while a memcpy() is present in __ha_cas_dw().
2020-06-14 08:08:13 +02:00
Willy Tarreau
9453ecd670 REORG: threads: extract atomic ops from hathreads.h
The hathreads.h file has quickly become a total mess because it contains
thread definitions, atomic operations and locking operations, all this for
multiple combinations of threads, debugging and architectures, and all this
done with random ordering!

This first patch extracts all the atomic ops code from hathreads.h to move
it to haproxy/atomic.h. The code there still contains several sections
based on non-thread vs thread, and GCC versions in the latter case. Each
section was arranged in the exact same order to ease finding.

The redundant HA_BARRIER() which was the same as __ha_compiler_barrier()
was dropped in favor of the latter which follows the naming convention
of all other barriers. It was only used in freq_ctr.c which was updated.
Additionally, __ha_compiler_barrier() was defined unconditionally but
used only for thread-related operations, so it was made thread-only like
HA_BARRIER() used to be. We'd still need to have two types of compiler
barriers, one for the general case (e.g. signals) and another one for
concurrency, but this was not addressed here.

Some comments were added at the beginning of each section to inform about
the use case and warn about the traps to avoid.

Some files which continue to include hathreads.h solely for atomic ops
should now be updated.
2020-06-11 10:18:56 +02:00
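
For reference, a compiler-only barrier of the kind __ha_compiler_barrier()
refers to is commonly written as an empty asm with a "memory" clobber; a
hedged sketch (the name is illustrative):

  /* Prevents the compiler from reordering or caching memory accesses across
   * this point, without emitting any instruction. */
  #define compiler_barrier_sketch() __asm__ volatile("" ::: "memory")
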