17 Commits

Author SHA1 Message Date
Willy Tarreau
eced1d6d8a DEBUG: thread: reduce the struct lock_stat to store only 30 buckets
Storing only 30 buckets means we only keep 256 bytes per label. This
further simplifies address calculation and reduces the memory used
without complicating the locking code. It means we won't measure wait
times larger than a second but we're not supposed to face this as it
would trigger the watchdog anyway. It may become a bit tight if we
measure using rdtsc() instead of now_mono_time() though (typically the
limit would be around 350ms for a 3 GHz CPU).
2025-02-10 18:34:43 +01:00
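
A rough sketch of the bucket mapping described above, under assumed names
(lock_stat_sketch and wait_time_bucket are illustrative, not the actual
haproxy symbols); 30 buckets of 64-bit counters keep each per-label,
per-operation record small, and the last bucket corresponds to waits of
roughly half a second and above:

    #include <stdint.h>

    struct lock_stat_sketch {
            uint64_t buckets[30]; /* one counter per power-of-two wait time */
    };

    /* map a wait time in nanoseconds to its power-of-two bucket,
     * clamped to the last bucket (roughly one second and more)
     */
    static inline unsigned int wait_time_bucket(uint64_t wait_ns)
    {
            unsigned int idx = 0;

            while (wait_ns > 1 && idx < 29) {
                    wait_ns >>= 1;
                    idx++;
            }
            return idx;
    }
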
Willy Tarreau
c2f2d6fd3c DEBUG: thread: make lock_stat per operation instead of for all operations
It's more convenient (and more readable) to have the lock stats arranged
by operation type (read, seek, write). It will also make it possible to
later simplify the structure format and the bucket address calculation.
Now lock_stat[] is split into lock_stats_rd[], lock_stats_sk[], and
lock_stats_wr[].
2025-02-10 18:34:43 +01:00
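
A minimal illustration of the arrangement described above; only the three
array names come from the commit, while LOCK_LABELS and the struct fields
are placeholders rather than the real haproxy definitions:

    #define LOCK_LABELS 64 /* assumed number of lock labels */

    struct lock_stat_entry {
            unsigned long nbr_ops;   /* number of lock operations of this type */
            unsigned long nsec_wait; /* total wait time in nanoseconds */
    };

    /* one stats array per operation type, each indexed by lock label */
    struct lock_stat_entry lock_stats_rd[LOCK_LABELS]; /* read locks  */
    struct lock_stat_entry lock_stats_sk[LOCK_LABELS]; /* seek locks  */
    struct lock_stat_entry lock_stats_wr[LOCK_LABELS]; /* write locks */
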
Willy Tarreau
4168d1278c DEBUG: thread: don't keep the redundant _locked counter
Now that we have our sums by bucket, the _locked counter is redundant
since it's always equal to the sum of all entries. Let's just get rid
of it and replace its consumption with a loop over all buckets. This
reduces the overhead of taking each lock at the expense of a tiny extra
effort when dumping all locks, which we don't care about.
2025-02-10 18:34:43 +01:00
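
A sketch of the dump-time replacement, reusing an assumed 30-bucket layout
(names are illustrative): the number of times the lock was taken is
reconstructed by summing the buckets when dumping, instead of being
maintained on every lock operation:

    #include <stdint.h>

    struct lock_stat_sketch {
            uint64_t buckets[30]; /* per power-of-two wait-time counters */
    };

    /* dump-time replacement for the removed _locked counter */
    static uint64_t total_locked(const struct lock_stat_sketch *ls)
    {
            uint64_t total = 0;
            unsigned int i;

            for (i = 0; i < 30; i++)
                    total += ls->buckets[i];
            return total;
    }
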
Willy Tarreau
a22550fbd7 DEBUG: thread: report the wait time buckets for lock classes
In addition to the total/average wait time, we now also store the
wait time in 2^N buckets. There are 32 buckets for each type (read,
seek, write), allowing wait times from 1-2ns to 2.1-4.3s to be stored,
which is quite sufficient, even if we'd want to switch from NS to
CPU cycles in the future. The counters are only reported for non-
zero buckets so as not to visually pollute the output.

This significantly inflates the lock_stat struct, which is now
aligned to 256 bytes and rounded up to 1kB. But that's not really
a problem, given that there's only one per lock label.
2025-02-10 18:34:43 +01:00
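
A sketch of how such a report might be produced while skipping empty
buckets; the function and its arguments are illustrative rather than the
actual dumping code. Bucket N counts waits between 2^N and 2^(N+1)
nanoseconds:

    #include <stdint.h>
    #include <stdio.h>

    static void dump_wait_buckets(const uint64_t buckets[32], const char *op)
    {
            unsigned int i;

            for (i = 0; i < 32; i++) {
                    if (!buckets[i])
                            continue; /* only report non-zero buckets */
                    printf("  %s wait %llu-%llu ns: %llu times\n", op,
                           (unsigned long long)(1ULL << i),
                           (unsigned long long)(1ULL << (i + 1)),
                           (unsigned long long)buckets[i]);
            }
    }
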
Amaury Denoyelle
8e0e7d9d1a BUG/MINOR: guid/server: ensure thread-safety on GUID insert/delete
Since 3.0, it is possible to assign a GUID to proxies, listeners and
servers. These objects are stored in a global tree guid_tree.

Proxies and listeners are static. However, servers may be added or
deleted at runtime, which implies that guid_tree must be protected. Fix
this by declaring a read-write lock to protect tree access.

For now, only guid_insert() and guid_remove() are protected using a
write lock. Outside of these, the GUID tree is not accessed at runtime.
If server CLI commands are extended to support GUID as a server
identifier, the lookup operation should be extended with read lock
protection.

Note that during stat-file preloading, the GUID tree is accessed for
lookups. However, as this is performed at startup, which is
single-threaded, there is no need for a lock here. A BUG_ON() has been
added to ensure this
precondition remains true.

This bug could cause a segfault when using dynamic servers with GUIDs.
However, it has not been reproduced so far.

This must be backported up to 3.0. To avoid a conflict issue, the
previous cleanup patch can be merged before it.
2024-11-07 18:17:03 +01:00
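
A minimal sketch of this locking pattern, using a plain pthread rwlock
rather than haproxy's own lock macros and tree helpers; the function
bodies are placeholders:

    #include <pthread.h>

    static pthread_rwlock_t guid_tree_lock = PTHREAD_RWLOCK_INITIALIZER;

    void guid_insert_sketch(void *node)
    {
            pthread_rwlock_wrlock(&guid_tree_lock); /* writers are exclusive */
            /* ... insert node into guid_tree ... */
            pthread_rwlock_unlock(&guid_tree_lock);
    }

    void guid_remove_sketch(void *node)
    {
            pthread_rwlock_wrlock(&guid_tree_lock);
            /* ... delete node from guid_tree ... */
            pthread_rwlock_unlock(&guid_tree_lock);
    }
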
Willy Tarreau
ac692d7ee5 BUILD: thread: move lock label definitions to thread-t.h
The 'lock_label' enum is defined in thread.h but it's used in a few
type files, so let's move it to thread-t.h to allow explicit includes.
2024-03-05 11:50:34 +01:00
Willy Tarreau
7aa41196cf MEDIUM: debug/threads: make the lock debugging take tgroups into account
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.

Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
2022-07-15 19:41:26 +02:00
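
An illustrative sketch of the fallback described above; the structure and
field names stand in for the real thread-info fields:

    struct thread_info_sketch {
            unsigned long ltid_bit; /* local thread-id bit within the group */
            unsigned int tgid;      /* thread group id, starting at 1 */
    };

    /* pick the bit recorded in the per-group owner/waiter masks; during
     * early boot ltid_bit may still be zero, so fall back to bit 0 of
     * group 0 (some structs are initialized under lock at that point).
     */
    static unsigned long lock_debug_bit(const struct thread_info_sketch *ti,
                                        unsigned int *grp)
    {
            if (!ti->ltid_bit) {
                    *grp = 0;
                    return 1UL;
            }
            *grp = ti->tgid - 1;
            return ti->ltid_bit;
    }
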
Willy Tarreau
5e03dfaaf6 MINOR: thread: use a dedicated static pthread_t array in thread.c
This removes the thread identifiers from struct thread_info and moves
them to a static array in thread.c, since it's now the only file that
needs to touch it. It's also the only file that needs to include
pthread.h, beyond haproxy.c which needs it to start the poll loop. As
a result, far fewer system includes are needed and the LoC is reduced by
around 3%.
2021-10-07 01:41:15 +02:00
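
A short sketch of the idea; the array name is an assumption, and
MAX_THREADS stands for the limit defined in defaults.h:

    #include <pthread.h>

    #define MAX_THREADS 64 /* placeholder; the real limit lives in defaults.h */

    /* private to thread.c: no other file needs the pthread identifiers */
    static pthread_t ha_pthread[MAX_THREADS];
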
Willy Tarreau
b9ad30a8ad Revert "MINOR: threads: change lock_t to an unsigned int"
This reverts commit 8f1f177ed0bbdb6c10e61e83994df113452d434f.

Repeated tests have shown a small performance degradation of ~1.8%
caused by this patch at high request rates on 16 threads. The exact
cause is not yet perfectly known, but it probably stems from slower
non-64-bit-aligned atomic accesses.
2021-02-18 10:06:45 +01:00
Willy Tarreau
8f1f177ed0 MINOR: threads: change lock_t to an unsigned int
We don't need to waste the size of a long for the locks: with the plocks,
even an unsigned short would offer enough room for up to 126 threads! Let's
use an unsigned int, which will be easier to place in certain structures
and will more conveniently plug some holes; atomic ops are at least
as fast on 32-bit as on 64-bit. This will not change anything for 32-bit
platforms.
2020-10-19 14:08:13 +02:00
Willy Tarreau
3d18498645 CLEANUP: threads: don't register an initcall when not debugging
It's a bit overkill to register an initcall to call a function to set
a lock to zero when not debugging; let's just declare the lock as
pre-initialized to zero.
2020-10-19 14:08:13 +02:00
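
A schematic illustration of the change, with made-up macro names (not
haproxy's actual ones): when lock debugging is disabled, the lock is a
plain word that can be statically initialized to zero instead of being
cleared by a registered initcall at boot:

    #ifdef DEBUG_THREAD
    /* debugging enabled: a larger lock structure, still set up at boot */
    #  define DECL_RWLOCK_SKETCH(l)  struct ha_rwlock_sketch l
    #else
    /* no debugging: a plain word, pre-initialized to zero at build time */
    #  define DECL_RWLOCK_SKETCH(l)  unsigned long l = 0
    #endif
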
Willy Tarreau
8d5360ca7f MINOR: threads: augment rwlock debugging stats to report seek lock stats
We currently use only read and write lock operations with rwlocks, but
ours also support upgradable seek locks for which we do not report any
stats. Let's add them now when DEBUG_THREAD is enabled.
2020-10-16 16:51:49 +02:00
Willy Tarreau
e4d1505c83 REORG: includes: create tinfo.h for the thread_info struct
The thread_info struct is convenient for storing various per-thread info
without having to resort to thread-local storage, which is slow and
painful to initialize.

The problem is that, with this one in thread.h, it's very difficult to
add more entries there because everyone already includes thread.h, so,
conversely, thread.h cannot reference certain types.

There's no point in having it there; instead, let's create a new pair
of files, tinfo{,-t}.h, which declare the structure. This way it will
become possible to extend them with other includes and have certain
files store their own types there.
2020-06-29 09:57:23 +02:00
Willy Tarreau
5b9cde4820 REORG: include: move THREAD_LOCAL and __decl_thread() to compiler.h
Since these are used as type attributes or conditional clauses, they
are used about everywhere and should not require a dependency on
thread.h. Moving them to compiler.h along with other similar statements
like ALIGN() etc looks more logical; this way they become part of the
base API. This allowed thread-t.h to be removed from ~12 files; one was
found to only require thread-t and not thread, and dict.c was found to
require thread.h.
2020-06-11 10:18:59 +02:00
Willy Tarreau
ca8b069aa7 REORG: include: move MAX_THREADS to defaults.h
That's already where MAX_PROCS is set, and we already handle the case of
the default value, so there is no reason to place it in thread.h given
that most call places don't need the rest of the thread definitions. The
include was removed from global-t.h and activity.c.
2020-06-11 10:18:59 +02:00
Willy Tarreau
af613e8359 CLEANUP: thread: rename __decl_hathreads() to __decl_thread()
I can never figure out whether it takes an "s" or not, and in the end it's
better if it matches the file's naming, so let's call it "__decl_thread".
2020-06-11 10:18:56 +02:00
Willy Tarreau
3f567e4949 REORG: include: split hathreads into haproxy/thread.h and haproxy/thread-t.h
This splits the hathreads.h file into types+macros and functions. Given
that most users of this file used to include it only to get the definition
of THREAD_LOCAL and MAXTHREADS, the bare minimum was placed into thread-t.h
(i.e. types and macros).

All the thread management was left to haproxy/thread.h. It's worth noting
the drop of the trailing "s" in the name, to remove the permanent confusion
that arises between this one and the system implementation (no "s") and the
makefile's option (no "s").

For consistency, src/hathreads.c was also renamed thread.c.

A number of files were updated to only include thread-t, which is the one
they really needed.

Some future improvements are possible, like replacing empty inline
functions with macros for the thread-less case, as building at -O0
disables inlining and causes them to be emitted. But this really is
cosmetic.
2020-06-11 10:18:56 +02:00