This new command, enabled only with "DEBUG_DEV", sends 2 or 20 traces
per task wakeup (depending on the verbosity level), and stops after 1M
wakeups per thread in order not to have to stop/start the process each
time it's fired.
We have two small messages and 18 larger ones from 20 to 270 bytes
each, so that the average size is approx 213 bytes counting headers
(the header adds approx 82 bytes), which matches what's generally
observed on average when traces are enabled in all muxes.
Typical figures show variations between 5.7M and 6.2M msg/s on an EPYC
in a 3C6T setup (single CCX), and 2.12M - 2.22M in a 24C48T setup
(across 8 CCX, with 8 thread groups).
While trying to reproduce another crash case involving lua filters
reported by @bgrooot on GH #2467, we found out that mixing filters loaded
from different contexts ('lua-load' vs 'lua-load-per-thread') for the same
stream isn't supported and may even cause the process to crash.
Historically, mixing lua-load and lua-load-per-thread for a stream wasn't
supported, but this changed thanks to 0913386 ("BUG/MEDIUM: hlua: streams
don't support mixing lua-load with lua-load-per-thread").
However, the above fix didn't consider lua filters' use-case properly:
unlike lua fetches, actions or even services, lua filters don't simply
use the stream hlua context as a "temporary" hlua running context to
process some hlua code. For fetches, actions and the like, hlua
executions are processed sequentially, so we simply reuse the hlua
context from the previous action/fetch to run the next one (this allows
bypassing memory allocations and initialization, thus increasing
performance), unless we need to run on a different hlua state-id, in
which case we perform a reset of the hlua context.
But this cannot work with filters: indeed, once registered, a filter
will last for the whole stream duration. It means that the filter will
rely on the stream hlua context from ->attach() to ->detach(). And here
is the catch: if for the same stream we register 2 lua filters from
different contexts ('lua-load' + 'lua-load-per-thread'), then we have
an issue, because the stream hlua context will be re-created each time
we switch between runtime contexts, which means each time we switch
between the filters (this may happen at each stream processing step),
and since lua filters rely on the stream hlua context to carry state
between filtering steps, this context will be lost upon a switch. Given
that the lua filters code was not designed with that in mind, it would
confuse the code and cause unexpected behaviors ranging from lua errors
to a crashing process.
So here we take another approach: instead of re-creating the stream
hlua context each time we switch between the "global" and "per-thread"
runtime contexts, let's have both of them inside the stream directly,
as initially suggested by Christopher back when we talked about the
original issue. For this we leverage the hlua_stream_ctx_prepare() and
hlua_stream_ctx_get() helper functions which return the proper hlua
context for a given stream and state_id combination.
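As an illustration, here is a minimal sketch of what such accessors
could look like, assuming the stream embeds one hlua context slot per
runtime context (the field, pool and init function names below are
assumptions, not the exact code):

    /* slot 0: shared "lua-load" state, slot 1: "lua-load-per-thread" */
    static inline struct hlua *hlua_stream_ctx_get(struct stream *s,
                                                   int state_id)
    {
            return s->hlua[state_id ? 1 : 0];
    }

    static inline struct hlua *hlua_stream_ctx_prepare(struct stream *s,
                                                       int state_id)
    {
            int slot = state_id ? 1 : 0;

            if (!s->hlua[slot]) {
                    s->hlua[slot] = pool_alloc(pool_head_hlua);
                    if (s->hlua[slot])
                            hlua_ctx_init(s->hlua[slot], state_id, s->task);
            }
            return s->hlua[slot];
    }

This way a filter registered from 'lua-load' and another one registered
from 'lua-load-per-thread' each find their own context again at every
filtering step instead of resetting each other's.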
As for the debugging info reported after ha_panic(), we now check both
hlua runtime contexts to see if one of them was active when the panic
occurred (only 1 runtime ctx per stream may be active at a given time).
This should be backported to all stable versions with 0913386
("BUG/MEDIUM: hlua: streams don't support mixing lua-load with lua-load-per-thread")
This commit depends on:
- "DEBUG: lua: precisely identify if stream is stuck inside lua or not"
[for versions < 2.9 the ha_thread_dump_one() part should be skipped]
- "MINOR: hlua: use accessors for stream hlua ctx"
For 2.4, the filters API didn't exist. However it may be a good idea to
backport it anyway because ->set_priv()/->get_priv() from tcp/http lua
applets may also be affected by this bug, plus it will ease code
maintenance. Of course, filters-related parts should be skipped in this
case.
When ha_panic() is called by the watchdog, we try to guess from
ha_task_dump() and ha_thread_dump_one() if the thread was stuck while
executing lua from the stream context. However we consider this is the
case by simply checking if the stream hlua context was set, but this is
not very precise: if the hlua context is set, it simply means that at
least one lua instruction was executed at the stream level, not that
the thread was currently executing lua when the panic occurred.
This is especially true with filters: one could simply register a lua
filter that does nothing, yet this will still end up initializing the
stream hlua context for each stream. If the thread ends up being stuck
during the stream handling, then the debug dumping functions will
report that the stream was stuck while handling lua, which is not
necessarily true, and could in fact confuse us even more.
So here we take another approach: we add a BUSY flag to the hlua
context. This flag is set by hlua_ctx_resume() around the lua_resume()
call, this way we can precisely tell if the thread was handling lua
when it was interrupted, and we rely on this flag in the debug
functions to check if the thread was effectively stuck inside lua or
not while processing the stream.
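For illustration, the principle boils down to this (the flag name and
value are assumptions, and the lua_resume() signature varies with the
Lua version):

    #define HLUA_BUSY 0x00000100 /* hypothetical bit in hlua->flags */

    /* inside hlua_ctx_resume(): */
    lua->flags |= HLUA_BUSY;   /* we are now running inside the lua VM */
    ret = lua_resume(lua->T, hlua_states[lua->state_id], lua->nargs);
    lua->flags &= ~HLUA_BUSY;  /* back to haproxy code */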
No backport needed unless a commit depends on it.
In issue #2427 Ilya reports that gcc-14 rightfully complains about
sizeof() being placed in the left term of calloc(). There's no impact
but it's a bad pattern that gets copy-pasted over time. Let's fix the
few remaining occurrences (debug.c, halog, udp-perturb).
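For the record, the pattern being fixed looks like this:

    struct foo *p;

    p = calloc(sizeof(*p), 16); /* bad: element size passed first,
                                 * gcc-14 warns about swapped args */
    p = calloc(16, sizeof(*p)); /* good: count, then element size */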
This can be backported to all branches, and the irrelevant parts dropped.
The "show dev" CLI command is still missing useful elements such as the
build options, SSL version etc. Let's just add the build features and
the build options there so that it's possible to collect all of this
from a running process without having to start the executable with -vv.
This is still dumped all at once from the parsing function since the
output is small. If it were to grow, this would possibly require to be
reworked to support a context.
It might be helpful to backport this to 2.9 since it can help narrow
down certain issues.
Here the idea is to collect components' versions and build options. The
main component is haproxy, but the API is made so that any sub-system
can easily add a component there (for example the detailed version of a
device detection lib, or some info about a lib loaded from Lua).
The elements are stored as a pointer to an array of structs and its count
so that it's sufficient to issue this in gdb to list them all at once:
print *post_mortem.components@post_mortem.nb_components
For now we collect name, version, toolchain, toolchain options, build
options and path. Maybe more could be useful in the future.
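A rough sketch of the idea follows; the field names and the
registration helper are illustrative only, not the exact API:

    struct post_mortem_component {
            char name[32];            /* e.g. "haproxy" */
            char version[64];
            char toolchain[64];       /* compiler name/version */
            char toolchain_opts[128];
            char build_opts[256];
            char path[256];
    };

    /* a sub-system (e.g. a device detection lib) registers itself: */
    post_mortem_add_component("mylib", "1.2.3", cc, cc_opts, opts, path);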
Having the libs and their addresses listed in the post_mortem struct
is also helpful. Sometimes it helps notice that one version is not the
expected one, e.g. due to some LD_LIBRARY_PATH. We don't emit it on
"show dev" however since that's already available via "show libs".
The last starting thread now copies the pthread ID and stack top of
each thread into post_mortem. That way it's as easy as issuing
"p post_mortem" in gdb to see all thread IDs and stack frames and more
easily map them to the threads met in a core.
Here we collect the original uid/gid/rlimits for FD and RAM since these
ones do affect behavior and are sometimes different from expected in
containers or when starting as a service.
When the x86 CPU flags show the "hypervisor" flag, we know we're running
inside QEMU, VMware or possibly other flavors of hypervisors. In this
case we'll report either "qemu", "vmware" or "yes" for other ones in
the "virt_techno" field, based on the DMI hardware vendor name,
otherwise "no" when the flag is not found.
The CPU model and type has significant impact on certain bugs, such
as contention issues caused by CPUs having split L3 caches, or stricter
memory models that exhibit some barrier issues. It's complicated though
because the info about the model depends on the arch. For example, x86
reports an SKU name while ARM rather reports the CPU core types, families
and versions for each CPU core. There, the SoC will sometimes be reported
in the device tree or DMI info instead. But we don't really care, it's
essentially useful to know if the code is running on an armv8.0 such as
A53, an 8.2 such as A55/A76/Neoverse etc. For MIPS the model appears to
generally be there, and in addition the SoC is often present in the
"system type" field before the first CPU, and the type of machine in the
"machine" field, to replace the missing DMI and DT, so they are also
collected. Note that only the first CPU is checked and reported, that's
expected to be vastly sufficient, since we're just trying to spot known
incompatibilities or issues.
If we detect we're running inside a container on Linux, let's check if
it seems to be docker. Docker usually creates a /.dockerenv file, which
is easy to check. It's uncertain whether it's always the case, but on the
few tested instances that was true. And we don't really care: what matters
is to place helpful debugging info for developers. When this file is
detected, we report "docker" instead of "yes" in the container techno.
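The check itself is trivial, something along these lines:

    const char *techno = "yes";  /* we already know it's a container */

    if (access("/.dockerenv", F_OK) == 0)
            techno = "docker";   /* docker creates this file at / */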
Containers often cause significant trouble depending on how they're
set up, and they're not always trivial for their users to extract info
from. Here we're trying to detect if we're running inside a container
on Linux. There are plenty of approaches and none is perfectly clean
nor reliable, which makes sense since the goal is to remain transparent
enough.
One interesting approach is to rely on the observation that containers
generally do not expose most kernel threads, and that the very first
of them are extremely stable across all kernel versions: pid 2 was
called "keventd" in kernel 2.4, became "kthreadd" in kernel 2.6, and
has since not changed. This is true on all architectures tested, even
with highly stripped down kernels such as those found on 15 year-old
OpenWRT images. And this one doesn't appear inside containers. Thus
here we check if we find such a thread via /proc and whether it's
called keventd or kthreadd, to detect a container, and we set the
"cont_techno" variable to "yes" or "no" depending on what is found.
Let's extract some info about the system (board model, vendor etc),
this will indicate some hypervisors, some cloud instances or some
uncommon embedded boards etc. Typically, vmware, qemu and raspberry-pi
are visible here and can help during the troubleshooting session.
The goal here is to accumulate precious debugging information in a
struct that is easy to find in memory. It's aligned to 256 bytes as
that also helps locating it. We'll progressively add a lot of info about the
startup conditions, the operating system, the hardware and hypervisor
so as to limit the number of round trips between developers and users
during debugging sessions. Also, opening a core file with an hex editor
should often be sufficient to extract most of the info.
In addition, a new "show dev" command will show this information so
that it can be checked at runtime without having to wait for a crash
(e.g. if a limit is bad in a container, better to know it early).
For now the struct only contains utsname that's fed at boot time.
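At this stage the struct is thus as simple as this (later patches
extend it):

    #include <sys/utsname.h>

    struct post_mortem {
            /* platform info, fed at boot by uname() */
            struct utsname utsname;
            /* ... startup, OS and hardware info added later ... */
    } __attribute__((aligned(256)));

    struct post_mortem post_mortem;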
Now when calling ha_panic() with a thread still under malloc_trim(),
we'll set a new tainted flag to easily report it, and the output
trace will report that this condition happened and will suggest to
use no-memory-trimming to avoid it in the future.
William suggested that since we can detect the presence of Lua in the
stack, let's combine it with stuck detection to set a new pair of flags
indicating a stuck Lua context and a stuck Lua shared context.
Now, executing an infinite loop in a Lua sample fetch function with
yield disabled crashes with tainted=0xe40 if loaded from a lua-load
statement, or tainted=0x640 from a lua-load-per-thread statement.
In addition, at the end of the panic dump, we can check if Lua was
seen stuck and emit recommendations about lua-load-per-thread and
the choice of dependencies depending on the presence of threads
and/or shared context.
This will make it easier to know that the panic function was called,
for the occasional case where the dump crashes and/or the stack is
corrupted and not much exploitable. Now at least it will be sufficient
to check the tainted value to know that someone called ha_panic(), and
it will also be usable to condition extra analysis.
Since commit c185bc465 ("MEDIUM: stream: now provide full stream dumps
in case of loops"), the stuck threads show the stream's pointer in the
margin since it appears immediately after a line feed. Let's add it after
the prefix and "stream=" to make the output more readable.
There used to be two working modes for this function, a single-line one
and a multi-line one, the difference being made on the "eol" argument
which could contain either a space or an LF (and with the prefix being
adjusted accordingly). Let's get rid of the single-line mode as it's
what limits the output contents because it's difficult to produce
exploitable structured data this way. It was only used in the rare case
of spinning streams and applets and these are the ones lacking info. Now
a spinning stream produces:
[ALERT] (3511) : A bogus STREAM [0x227e7b0] is spinning at 5581202 calls per second and refuses to die, aborting now! Please report this error to developers:
strm=0x227e7b0,c4a src=127.0.0.1 fe=public be=public dst=s1
txn=0x2041650,3000 txn.req=MSG_DONE,4c txn.rsp=MSG_RPBEFORE,0
rqf=1840000 rqa=8000 rpf=80000000 rpa=1400000
scf=0x24af280,EST,482 scb=0x24af430,EST,1411
af=(nil),0 sab=(nil),0
cof=0x7fdb28026630,300:H1(0x24a6f60)/RAW((nil))/tcpv4(33)
cob=0x23199f0,10000300:H1(0x24af630)/RAW((nil))/tcpv4(32)
filters={}
call trace(11):
(...)
Ilya reported in issue #2193 that the latest Fedora complains about us
passing NULL to epoll_wait() in the "debug dev fd" code to try to detect
an epoll FD. That was intentional to get the kernel's verifications and
make sure we're facing a poller, but since such a warning comes from the
libc, it's possible that it plans to replace the syscall with a wrapper
in the near future (e.g. epoll_pwait()), and that just hiding the NULL
(as was confirmed to work) might just postpone the problem.
Let's take another approach, instead we'll create a new dummy FD that
we'll try to remove from the epoll set using epoll_ctl(). Since we
created the FD we're certain it cannot be there. In this case (and
only in this case) epoll_ctl() will fail with ENOENT, otherwise it will
typically fail with EINVAL or EBADF. It was verified that it works and
doesn't return false positives for other FD types. It should be
backported to the branches that contain a backport of the commit which
introduced the feature, apparently as far as 2.4:
5be7c198e ("DEBUG: cli: add a new "debug dev fd" expert command")
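In short, the detection now works like this (simplified, without
haproxy's wrappers; the actual patch may create the dummy FD
differently):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>

    /* returns non-zero if <fd> looks like an epoll FD */
    static int fd_is_epoll(int fd)
    {
            int ret = 0;
            int dummy = eventfd(0, 0); /* any brand new FD will do */

            if (dummy >= 0) {
                    /* <dummy> cannot be registered in <fd>, so only a
                     * real epoll FD fails with ENOENT here; other FD
                     * types fail with EINVAL or EBADF. */
                    if (epoll_ctl(fd, EPOLL_CTL_DEL, dummy, NULL) == -1 &&
                        errno == ENOENT)
                            ret = 1;
                    close(dummy);
            }
            return ret;
    }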
The task pointer check in debug_parse_cli_task() computes the
theoretical end address of the provided task pointer to check whether
it is valid or not, thanks to the may_access() helper function.
However, the relative ending address is calculated by adding the task
size to the 't' pointer (which is a struct task pointer), thus it
results in an incorrect address since the compiler automatically
translates 't + x' to 't + x * sizeof(*t)' internally (with sizeof(*t)
!= 1 here).
The issue is solved by using 'ptr' (the raw void * address) as the
starting address, which prevents the automatic pointer scaling.
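In condensed form, the bug and the fix:

    struct task *t = (struct task *)ptr;

    /* bad: scaled arithmetic, this actually computes
     * ptr + sizeof(*t) * sizeof(*t) */
    end = (const void *)(t + sizeof(*t));

    /* good: byte-granular arithmetic on the raw pointer */
    end = (const char *)ptr + sizeof(*t);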
This was revealed by coverity, see GH #2157.
No backport is needed, unless 9867987 ("DEBUG: cli: add "debug dev task"
to show/wake/expire/kill tasks and tasklets") gets backported.
Commit 986798718 ("DEBUG: cli: add "debug dev task" to show/wake/expire/kill
tasks and tasklets") caused a build failure on 32-bit platforms when parsing
the task's pointer. Let's use strtoul() and not strtoll(). No backport is
needed, unless the commit above gets backported.
The build without thread support was broken by commit b30ced3d8 ("BUG/MINOR:
debug: fix incorrect profiling status reporting in show threads") because
it accesses the isolated_thread variable that is not defined when threads
are disabled. In fact both the test on harmless and this one make no sense
without threads, so let's comment out the block and mark the related
variables as unused.
This may have to be backported to 2.7 if the commit above is.
Previously it would re-dump all threads to the same trash if the output
buffer was full, which it never was since the trash is of the same size.
Now it dumps one thread, copies it to the buffer and yields until it can
continue. Showing 256 threads works as expected.
Currently large setups cannot dump all their threads because they're
first dumped to the trash buffer, then copied to stderr. Here we can
now change this, instead we dump one thread at a time into the trash
and immediately send it to stderr. We also keep a copy into a local
trash chunk that's assigned to thread_dump_buffer so that a core file
still contains a copy of a large number of threads, which is generally
sufficient for the vast majority of situations.
It was verified that dumping 256 threads now produces ~55kB of output
and all of them are properly dumped.
The thread dump mechanism that is used by "show threads" and by the
panic dump is overly complicated due to an initial misdesign. It
first wakes all threads, then serializes their dumps, then releases
them, while taking extreme care not to face colliding dumps. In fact
this is not what we need and it reached a limit where big machines
cannot dump all their threads anymore due to buffer size limitations.
What is needed instead is to be able to dump *one* thread, and to let
the requester iterate on all threads.
That's what this patch does. It adds the thread_dump_buffer to the
struct thread_ctx so that the requester offers the buffer to the
thread that is about to be dumped. This buffer also serves as a lock.
A thread at rest has a NULL, a valid pointer indicates the thread is
using it, and 0x1 (NULL+1) is used by the dumped thread to tell the
requester it's done. This makes sure that a given thread is dumped
once at a time. In addition to this, the calling thread decides
whether it accesses the thread by itself or via the debug signal
handler, in order to get a backtrace. This is much saner because the
calling thread is free to do whatever it wants with the buffer after
each thread is dumped, and there is no dependency between threads,
once they've dumped, they're free to continue (and possibly to dump
for another requester if needed). Finally, when the THREAD_DUMP
feature is disabled and the debug signal is not used, the requester
accesses the thread by itself like before.
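Here is a self-contained sketch of the pointer-as-lock protocol
described above, using C11 atomics instead of haproxy's macros:

    #include <stdatomic.h>

    struct buffer;

    /* per-thread slot: NULL = at rest, valid ptr = dump in progress,
     * 0x1 = the dumped thread is done writing into the buffer */
    #define DUMP_DONE ((struct buffer *)0x1)

    /* requester side: offer <buf> to the target thread's slot */
    void request_dump(_Atomic(struct buffer *)*slot, struct buffer *buf)
    {
            struct buffer *expect = NULL;

            while (!atomic_compare_exchange_weak(slot, &expect, buf))
                    expect = NULL;     /* wait for the slot to be free */

            /* ... trigger the dump (signal or direct call) ... */

            while (atomic_load(slot) != DUMP_DONE)
                    ;                  /* wait for the dump to complete */

            /* the buffer contents may now be used freely */
            atomic_store(slot, NULL);  /* thread is at rest again */
    }

    /* dumped thread side (e.g. from the debug signal handler) */
    void dump_myself(_Atomic(struct buffer *)*slot)
    {
            struct buffer *buf = atomic_load(slot);

            /* ... write this thread's dump into <buf> ... */
            atomic_store(slot, DUMP_DONE); /* hand it back */
    }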
For now we still have the buffer size limitation but it will be
addressed in future patches.
In 2.3, commit 471425f51 ("BUG/MINOR: debug: Don't dump the lua stack
if it is not initialized") introduced the possibility to emit an empty
line when there's no Lua info to dump. The problem is that doing this
on the CLI in "show threads" marks the end of the output, and it may
affect some external tools. We need to make sure that LFs are only
emitted if there's something on the line and that all lines properly
start with the prefix.
This may be backported as far as 2.0 since the commit above was
backported there.
Sometimes it's convenient to test the effect of tasks running under
isolation, e.g. to validate the contents of the crash dumps. Let's
add an optional "isolated" keyword to "debug dev loop" for this.
Thread dumps include a field "prof" for each thread that reports whether
task profiling is currently active or not. It turns out that in 2.7-dev1,
commit 680ed5f28 ("MINOR: task: move profiling bit to per-thread")
mistakenly replaced it with a check for the current thread's bit in the
thread dumps, which basically is the only place where another thread is
being watched. The same mistake was done a few lines later by confusing
threads_want_rdv_mask with the profiling mask. This mask disappeared
in 2.7-dev2 with commit 598cf3f22 ("MAJOR: threads: change thread_isolate
to support inter-group synchronization"), though instead we know the ID
of the isolated thread. This commit fixes this and now reports "isolated"
instead of "wantrdv".
This can be backported to 2.7.
Commit 986798718 ("DEBUG: cli: add "debug dev task" to show/wake/expire/kill
tasks and tasklets") broke the build on windows due to this:
src/debug.c:940:95: error: array subscript has type char [-Werror=char-subscripts]
940 | caller && may_access(caller) && may_access(caller->func) && isalnum(*caller->func) ? caller->func : "0",
| ^~~~~~~~~~~~~
It's classical on platforms which implement ctype.h as macros instead of
functions, let's cast it as uchar. No backport is needed.
When analyzing certain types of bugs in field, sometimes it would be
nice to be able to wake up a task or tasklet to see how events progress
(e.g. to detect a missing wakeup condition), or expire or kill such a
task. This restricted command shows the current state of a task or
tasklet and allows manipulating it. However it must be used with extreme
care because while it does verify that the pointers are mapped, it cannot
know if they point to a real task, and performing such actions on something
not a task will easily lead to a crash. In addition, performing a "kill"
on a task has great chances of provoking a deferred crash due to a double
free and/or another kill that is not idempotent. Use with extreme care!
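For example, one could inspect then wake a suspected task this way (the
pointer below is arbitrary, and the exact syntax may differ slightly):

    socat - /tmp/sock1 <<< "expert-mode on; debug dev task 0x227e7b0"
    socat - /tmp/sock1 <<< "expert-mode on; debug dev task 0x227e7b0 wake"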
The purpose of this patch is only a one-to-one replacement, as far as
possible.
CF_SHUTR(_NOW) and CF_SHUTW(_NOW) flags are now carried by the
stream-connector. The CF_ prefix is replaced by the SC_FL_ one. Of
course, it is not so simple because at many places, we were testing if
a channel was shut for reads and writes at the same time. To do the
same, shut for reads must be tested on one side on the SC and shut for
writes on the other side on the opposite SC. Special care was taken
with process_stream(): the flags of the SCs must be saved to be able to
detect changes, just like for the channels.
The goal is to send signals to random threads at random instants so that
they spin for a random delay in a relax() loop, trying to give back the
CPU to another competing hardware thread, in hope that from time to time
this can trigger in critical areas and increase the chances to provoke a
latent concurrency bug. For now none were observed.
For example, this command starts 64 such tasks waking after random delays
of 0-1ms and delivering signals to trigger such loops on 3 random threads:
for i in {1..64}; do
socat - /tmp/sock1 <<< "expert-mode on;debug dev delay-inj 2 3"
done
This command is only enabled when DEBUG_DEV is set at build time.
These timers are related to the I/O. Thus it is logical to move them into
the SE descriptor. The patch is a bit huge but it is just a
replacement. However it is error-prone.
From the stconn or the stream, helper functions are used to get, set
or reset these timers. This simplifies timer manipulation.
The debug handler may deadlock with some threads waiting for isolation.
This may happen during a "show threads" command or even during a panic.
The reason is the call to thread_harmless_end() which waits for rdv_requests
to turn to zero before releasing its position in thread_dump_state,
while that one may not progress if another thread was interrupted in
thread_isolate() and is waiting for that thread to drop thread_dump_state.
In order to address this, we now use thread_harmless_end_sig() introduced
by previous commit:
MINOR: threads: add a thread_harmless_end() version that doesn't wait
However there's a catch: since commit f7afdd910 ("MINOR: debug: mark
oneself harmless while waiting for threads to finish"), there's a second
pair of thread_harmless_now()/thread_harmless_end() that surround the
loop around thread_dump_state. Marking a thread harmless before this
loop and dropping that without checking rdv_requests there could break
the harmless promise made to the other thread if it returns first and
proceeds with its isolated work. Hence we just drop this pair which was
only preventive for other signal handlers, while as indicated in that
patch's commit message, other signals are handled asynchronously and do
not require that extra protection.
This fix must be backported to 2.7.
The problem can be seen by running "show threads" in fast loops (100/s)
while reloading haproxy very quickly (10/s) and sending lots of traffic
to it (100krps, 15 Gbps). In this case the soft stop calls pool_gc()
which isolates a lot and manages to race with the dumps after a few
tens of seconds, leaving the process with all threads at 100%.
In applets, we stop processing when a write error (CF_WRITE_ERROR) or a shutdown
for writes (CF_SHUTW) is detected. However, any write error leads to an
immediate shutdown for writes. Thus, it is enough to only test if CF_SHUTW is
set.
Commit f0c86ddfe ("BUG/MEDIUM: debug: fix parallel thread dumps again")
added a clearing of the TH_FL_STUCK flag before dumping threads in case
of parallel dumps, but that was in part a sort of workaround for some
remains of the commit that introduced the flag in 2.0 before the watchdog
existed, and which would set it after dumping a thread: e6a02fa65 ("MINOR:
threads: add a "stuck" flag to the thread_info struct"), and in part an
attempt to avoid that a thread waiting for too long during the dump would
get the flag set. But that is not possible, a thread waiting for being
dumped has the harmless bit set and doesn't get the stuck bit. What happens
in fact is that issuing "show threads" in fast loops ends up causing some
threads to keep their STUCK bit that was set at the end of "show threads",
and confuses the output.
The problem with doing this is that the flag is cleared before the thread
is dumped, and since this flag is used to decide whether to show a backtrace
or not, we don't get backtraces anymore of stuck threads since the commit
above in 2.7.
This patch just removes the two points where the flag was cleared by the
commit above. It should be backported to 2.7.
When digging into suspected memory leaks, it's cumbersome to count the
number of allocations and free calls. Here we're adding, at the end, a
summary of the sum of allocs minus the sum of frees, excluding realloc since
we can't know how much it releases upon each call. This means that when
doing many realloc+free the count may be negative but in practice there
are very few reallocs so that's not a problem. Also the size/call is signed
and corresponds to the average size allocated (e.g. leaked) per call.
It seems to work reasonably well for now:
> debug dev memstats match buf
quic_conn.c:2978 P_FREE size: 1239547904 calls: 75656 size/call: 16384 buffer
quic_conn.c:2960 P_ALLOC size: 1239547904 calls: 75656 size/call: 16384 buffer
mux_quic.c:393 P_ALLOC size: 9112780800 calls: 556200 size/call: 16384 buffer
mux_quic.c:383 P_ALLOC size: 17783193600 calls: 1085400 size/call: 16384 buffer
mux_quic.c:159 P_FREE size: 8935833600 calls: 545400 size/call: 16384 buffer
mux_quic.c:142 P_FREE size: 9112780800 calls: 556200 size/call: 16384 buffer
h3.c:776 P_ALLOC size: 8935833600 calls: 545400 size/call: 16384 buffer
quic_stream.c:166 P_FREE size: 975241216 calls: 59524 size/call: 16384 buffer
quic_stream.c:127 P_FREE size: 7960592384 calls: 485876 size/call: 16384 buffer
stream.c:772 P_FREE size: 8798208 calls: 537 size/call: 16384 buffer
stream.c:768 P_FREE size: 2424832 calls: 148 size/call: 16384 buffer
stream.c:751 P_ALLOC size: 8852062208 calls: 540287 size/call: 16384 buffer
stream.c:641 P_FREE size: 8849162240 calls: 540110 size/call: 16384 buffer
stream.c:640 P_FREE size: 8847360000 calls: 540000 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 2441216 calls: 149 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 5914624 calls: 361 size/call: 16384 buffer
dynbuf.c:55 P_FREE size: 32768 calls: 2 size/call: 16384 buffer
Total BALANCE size: 0 calls: 5606906 size/call: 0 (excl. realloc)
Let's see how useful this becomes over time.
Sometimes when debugging it's convenient to be able to focus only on
certain pools. Just like we did for "show pools", let's add a filter
based on a prefix on "debug dev memstats match <prefix>".
These two have absolutely zero impact on the process and do not need to
be restricted to the expert mode. The first one calculates a string hash
that can be used by anyone when checking a dump; the second one may be
used by anyone tracking a memory leak, and is cumbersome to use due to
the "expert-mode on" that needs to be prepended. In addition this gives
bad habits to users and needlessly taints the process. So let's drop
this restriction for these two commands.
This command is used to hash a section name using the current anon key,
it was brought in 2.7 by commit 54966dffd ("MINOR: anon: store the
anonymizing key in the CLI's appctx"). However the help message only
says "return msg hashed" which is misleading because if anon mode is
not enabled, it returns the string as-is. Let's just mention this
condition in the help message, and also fix the alphabetical ordering
and alignment on the line.
"debug dev memstats" supports various options but silently ignores the
unknown ones. Let's make sure it returns indications about what it
expects, as the help message is quite limited otherwise.
In order to allow users to dump internal states using a specific key
without changing the global one, we're introducing a key in the CLI's
appctx. This key is preloaded from the global one when "set anon on"
is used (and if none exists, a random one is assigned). And the key
can optionally be assigned manually for the whole CLI session.
A "show anon" command was also added to show the anon state, and the
current key if the user has sufficient permissions. In addition, a
"debug dev hash" command was added to test the feature.
The memstats code currently defines its own file/function/line number,
type and extra pointer. We don't need to keep them separate and we can
easily replace them all with just a struct ha_caller. Note that the
extra pointer could be converted to a pool ID stored into arg8 or
arg32 and be dropped as well, but this would first require to define
IDs for pools (which we currently do not have).
This field is misnamed because its real and important content is the
date the task was woken up, not the date it was called. It temporarily
holds the call date during execution but this remains confusing. In
fact before the latency measurements were possible it was indeed a call
date. Thus it will now be called wake_date.
This change is necessary because a subsequent fix will require the
introduction of the real call date in the thread ctx.
The first column's width may vary a lot depending on outputs, and it's
annoying to have large empty columns on small names and mangled large
columns that are not yet large enough. In order to overcome this, this
patch adds a width field to the memstats applet's context, and this
width is calculated the first time the function is entered, by estimating
the width of all lines that will be dumped. This is simple enough and
does the job well. If in the future some filtering criteria are added,
it will still be possible to perform a single pass on everything
depending on the desired output format.
The calling function name is now stored in the structure, and it's
reported when the "all" argument is passed. The first column is
significantly enlarged because some names are really wide :-(
This command will create the requested number of tasks competing on a
lock, resulting in triggering the watchdog and crashing the process.
This will help stress the watchdog and inspect the lock debugging parts.
The previous attempt to fix thread dumps in commit 672972604 ("BUG/MEDIUM:
debug: fix possible hang when multiple threads dump at once") still had
some shortcomings. Sometimes parallel dumps are jerky essentially due to
the way that threads synchronize on startup and end. In addition the risk
of waiting forever for a stopped thread exists, and panics happening in
parallel to thread dumps are not more reliable either.
This commit revisits the state transitions so that all threads may request
a dump in parallel, that all of them wait for each other in the handler,
and that one thread is responsible for counting every other and checking
that the total matches the number of active threads.
Then for stopping there's a finishing phase that all threads wait for so
that none quits this area too early. Given that we now know the number of
participants to the dump, we can let them each decrement the counter when
leaving so that another dump may only start after the last participant
has completely left.
Now many thread dumps in parallel are running fine, so do panics. No
backport is needed as this was the result of the changes for thread
groups.
Some panic dumps are mangled or truncated due to the watchdog firing at
the same time on multiple threads and calling ha_panic() simultaneously.
What may happen in this case is that the second one waits for the first
one to finish but as soon as it's done the second one resets the buffer
and dumps again, sometimes resetting the first one's dump. Also the first
one's abort() may trigger while the second one is currently dumping,
resulting in a full dump followed by a truncated one, leading to
confusion. Sometimes some lines appear in the middle of a dump as well.
It doesn't happen often and is easier to trigger by causing massive
deadlocks.
There's no reason for the process to resist a panic, so we can safely
add a counter and do nothing on subsequent calls. Ideally we'd wait there
forever but as this may happen inside a signal handler (e.g. watchdog),
it doesn't always work, so the easiest thing to do is to return so that
the thread is interrupted as soon as possible and brought to the debug
handler to be dumped.
This should be backported, at least to 2.6 and possibly to older versions
as well.
Running several concurrent "show threads" in loops might occasionally
cause a segfault when trying to retrieve the stream from appctx_sc()
which may be null while the applet is finishing. It's not easy to
reproduce, it requires 3-5 sessions in parallel for about a minute
or so. The appctx_sc must be checked before passing it to sc_strm().
This must be backported to 2.6 which also has the bug.
A bug in the thread dumper was introduced by commit 00c27b50c ("MEDIUM:
debug: make the thread dumper not rely on a thread mask anymore"). If
two or more threads try to trigger a thread dump exactly at the same
time, the second one may loop indefinitely trying to set the value to 1
while the other ones will wait for it to finish dumping before leaving.
This is a consequence of a logic change using thread numbers instead of
a thread mask, as threads do not need to see all other ones there anymore.
No backport is needed, this is only for 2.7.
Building with threads and without thread dump (e.g. macos, freebsd)
warns that thread_dump_state is unused. This happened in fact with
recent commit 1229ef312 ("MINOR: wdt: do not rely on threads_to_dump
anymore"). The solution would be to mark it unused, but after a
second thought, it can be convenient to keep it exported to help
debug crashes, so let's export it again. It's just not referenced in
include files since it's not needed outside.
The thread mask is too short to track more than 64 threads. Thus here
we're using a different approach with two counters, one for the next
thread ID to dump (which always exists, as it's looked up), and the
second one for the number of threads done dumping. This allows dumping
threads in ascending order, then letting them wait for all others to be
done, then leaving without any risk of an overlapping dump until the
done count drops back to zero.
This allows removing threads_to_dump which was the last non-FD variable
using a global thread mask.
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
The debug_handler() function waits for other threads to join, but does
not mark itself as harmless, so if at the same time another thread tries
to isolate, this may deadlock. In practice this does not happen as the
signal is received during epoll_wait() hence under harmless mode, but
it can possibly arrive under other conditions.
In order to improve this, while waiting for other threads to join, we're
now marking the current thread as harmless, as it's doing nothing but
waiting for the other ones. This way another harmless waiter will be able
to proceed. It's valid to do this since we're not doing anything else in
this loop.
One improvement could be to also check for the thread being idle and
marking it idle in addition to harmless, so that it can even release a
full isolation requester. But that really doesn't look worth it.
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).
Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.
There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
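In shape this gives something like the following (simplified; the
actual struct carries a few more fields and alignment macros):

    struct tgroup_ctx {
            ulong threads_want_rdv; /* threads requesting a rendez-vous */
            ulong threads_harmless; /* mask of harmless threads */
            ulong threads_idle;     /* mask of idle threads */
            /* ... */
    } __attribute__((aligned(64)));

    extern struct tgroup_ctx ha_tgroup_ctx[MAX_TGROUPS];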
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_thread_dump() needs
this.
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
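The difference, in generic form (the flag name is only illustrative):

    /* before: a load followed by an atomic OR leaves a window where
     * two callers can both see the flag clear and duplicate the work */
    if (!(HA_ATOMIC_LOAD(&flags) & FLAG))
            HA_ATOMIC_OR(&flags, FLAG);

    /* after: a single atomic read-modify-write whose returned previous
     * value reliably tells us whether we were the one setting it */
    if (!(HA_ATOMIC_FETCH_OR(&flags, FLAG) & FLAG)) {
            /* we set it first: proceed with the wakeup */
    }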
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
frontend test
bind :8888 maxconn 10 thread 1-2
rate-limit sessions 5
Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared cache.
Since we now know that each thread has its own shared cache, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
The thread mask will not be used anymore, instead the thread id only
is used. Interestingly it was already implemented in the parsing but
not used. The single/multi thread argument is not needed anymore since
it's sufficient to pass tid<0 to get a multi-threaded task/tasklet.
This is in preparation for the removal of the thread_mask in tasks as
only this debug code was using it!
Sometimes using "debug dev memstats" can be frustrating because all
pool allocations are reported through pool-os.h and that's all.
But in practice there's nothing wrong with also intercepting pool_alloc,
pool_free and pool_zalloc and report their call counts and locations,
so that's what this patch does. It only uses an alternate set of macros
for these 3 calls when DEBUG_MEM_STATS is defined. The outputs are
reported as P_ALLOC (for both pool_alloc() and pool_zalloc()) and
P_FREE (for pool_free()).
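The interception principle is roughly the following (simplified; the
real macro records a bit more and the struct layout differs):

    #define pool_alloc(pool) ({                                        \
            static struct mem_stats _ __attribute__((used,             \
                                      section("mem_stats"))) = {       \
                    .file = __FILE__, .line = __LINE__,                \
                    .type = MEM_STATS_TYPE_P_ALLOC,                    \
            };                                                         \
            _HA_ATOMIC_INC(&_.calls);                                  \
            _HA_ATOMIC_ADD(&_.size, (pool)->size);                     \
            __pool_alloc((pool), 0);                                   \
    })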
Instead of having a global mask of all the profiled threads, let's have
one flag per thread in each thread's flags. They are never accessed more
than one at a time and are better located inside the threads' contexts for
both performance and scalability.
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
The following functions which act on a connection-based stream connector
were renamed to sc_conn_* (~60 places):
cs_conn_drain_and_shut
cs_conn_process
cs_conn_read0
cs_conn_ready
cs_conn_recv
cs_conn_send
cs_conn_shut
cs_conn_shutr
cs_conn_shutw
These functions return the app-layer associated with an stconn, which
is a check, a stream or a stream's task. They're used a lot to access
channels, flags and for waking up tasks. Let's just name them
appropriately for the stream connector.
We're starting to propagate the stream connector's new name through the
API. Most call places of these functions that retrieve the channel or its
buffer are in applets. The local variable names are not changed in order
to keep the changes small and reviewable. There were ~92 uses of cs_ic(),
~96 of cs_oc() (due to co_get*() being less factorizable than ci_put*),
and ~5 accesses to the buffer itself.
This applies the change so that the applet code stops using ci_putchk()
and friends everywhere possible, in favor of the much safer applet_put*().
The change is mechanical but large. Two or three functions used to take
not an appctx but a cs derived from it, which was a reminiscence of old
times' stream_interface. These were simply changed to directly take
the appctx. No sensitive change was performed, and the old (more complex)
API is still usable when needed (e.g. the channel is already known).
The change touched roughly a hundred of locations, with no less than 124
lines removed.
It's worth noting that the stats applet, the oldest of the series, could
get a serious lifting, as it's still very channel-centric instead of
propagating the appctx along the chain. Given that this code doesn't
change often, there's no emergency to clean it up but it would look
better.
This renames the "struct conn_stream" to "struct stconn" and updates
the descriptions in all comments (and the rare help descriptions) to
"stream connector" or "connector". This touches a lot of files but
the change is minimal. The local variables were not even renamed, so
there's still a lot of "cs" everywhere.
This one is the pointer to the conn_stream which is always in the
endpoint that is always present in the appctx, thus it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm() which does the equivalent.
Just like for the TCP service, let's move the context away from
appctx.ctx. A new struct hlua_http_ctx was defined, reserved in
hlua_applet_http_init() and used everywhere else. Similarly, the
task dump code will no longer report decoded stack traces when these
services are involved. That may be solved later.
The use-service mechanism for Lua in TCP mode relies on the
hlua_tcp storage in appctx->ctx. We can move its definition to
hlua.c and simply use appctx_reserve_svcctx() to reserve and access
the storage. One tiny side effect is that the task dump used in panics
will no longer show the Lua call stack in its trace. For this a
better API is needed from the Lua code to expose a function that does
the job from an appctx.
The command only requires to store an int, but it will be useful later
to have a struct to pass extra info such as an "all" flag to dump all
FDs. The new context is now a struct dev_fd_ctx stored in svcctx.
This gets rid of most open-coded fcntl() calls, some of which were passed
through DISGUISE() to avoid a useless test. The FD_CLOEXEC was most often
set without preserving previous flags, which could become a problem once
new flags are created. Now this will not happen anymore.
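The helper simply wraps the usual read-modify-write pattern, something
like:

    static inline void fd_set_cloexec(int fd)
    {
            int flags = fcntl(fd, F_GETFD);

            if (flags >= 0)
                    fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }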
cs_conn_io_cb(), cs_conn_sync_recv() and cs_conn_sync_send() are moved in
conn_stream.c. Associated functions are moved too (cs_notify, cs_conn_read0,
cs_conn_recv, cs_conn_send and cs_conn_process).
Remaining flags and associated functions are move in the conn-stream
scope. These flags are added on the endpoint and not the conn-stream
itself. This way it will be possible to get them from the mux or the
applet. The functions to get or set these flags are renamed accordingly with
the "cs_" prefix and updated to manipualte a conn-stream instead of a
stream-interface.
The stream-interface state (SI_ST_*) is now in the conn-stream. It is a
mechanical replacement for now. Nothing special. SI_ST_* and SI_SB_* were
renamed accordingly. Utility functions to manipulate these states were
moved under the conn-stream scope.
But it could be good to keep in mind that this part should be
reworked. Indeed, at the CS level, we only need to know if it is ready to
receive or to send. The state of conn-stream from INI to EST is only used on
the server side. The client CS is immediately set to EST. Thus current
SI_ST_* states should probably be moved to the stream to reflect the server
connection state during the establishment stage.