The poller pipes needed to communicate between multiple threads are
allocated in init_pollers_per_thread() and released in
deinit_pollers_per_thread(). The former adds them via fd_insert()
so that they are known, but the latter only closes them using a
regular close().
This asymmetry represents a problem, because we have in the fdtab[]
an entry for something that may disappear when one thread leaves, and
since these FD numbers are very low, there is a very high likelihood
that they are immediately reassigned to another thread trying to
connect() to a server or just sending a health check. In this case,
the other thread is going to fd_insert() the fd and the recently
added consistency checks will notice that ->owner is not NULL and
will crash. We just need to use fd_delete() here to match fd_insert().
Note that this test was added in 2.7-dev2 by commit 36d9097cf
("MINOR: fd: Add BUG_ON checks on fd_insert()") which was backported
to 2.4 as a safety measure (since it made it possible to catch particularly
serious issues). The patch in itself isn't wrong, it just revealed
a long-dormant bug (present since 1.9-dev1, 4 years ago). As such
the current patch needs to be backported wherever the commit above
is backported.
Many thanks to Christian Ruppert for providing detailed traces in
github issue #1807, and to Cedric Paillet for bringing his complementary
analysis that helped to understand the required conditions for this
issue to happen (fast health checks @100ms + randomly long connections
~7s + fast reloads every second + hard-stop-after 5s were necessary
on the dev's machine to trigger it from time to time).
Christopher found that since commit 8e2c0fa8e ("MINOR: fd: delete unused
updates on close()") we may crash in a late stop due to an fd_delete()
in the main thread performed after all threads have deleted the fd_updt[]
array. Prior to that commit that didn't happen because we didn't touch
the updates on this path, but now it may happen. We don't care about these
ones anyway since the poller is stopped, so let's just wipe them by
resetting their counter before freeing the array.
No backport is needed as this only affects 2.7.
In commit cfdd20a0b ("MEDIUM: fd: support broadcasting updates for foreign
groups in updt_fd_polling") we decided to pick a random thread number among
a set of candidates for a wakeup in case we need an instant change. But the
thread count used for the range was wrong (MAX_THREADS instead of
tg->count), resulting in random crashes when thread groups are > 1
and MAX_THREADS > 64.
No backport is needed, this was introduced in 2.7-dev2.
We're still facing the situation where it's impossible to update an FD
for a foreign group. That's of particular concern when disabling/enabling
listeners (e.g. pause/resume on signals) since we don't decide which thread
gets the signal and it needs to process all listeners at once.
Fortunately, not that much is unprotected in FDs. This patch adds a test for
tgid's equality in updt_fd_polling() so that if a change is applied for a
foreign group, then it's detected and taken care of separately. The method
consists in forcing the update on all bound threads in this group, adding it
to the group's update_list, and sending a wake-up as would be done for a
remote thread in the local group, except that this is done by grabbing a
reference to the FD's tgid.
Thanks to this, SIGTTOU/SIGTTIN now work for nbtgroups > 1 (after that was
temporarily broken by "MEDIUM: fd/poller: make the update-list per-group").
There's a nasty case during boot, which is the master process. It stops
all listeners from the main thread, and as such we're seeing calls to
fd_delete() from a thread that doesn't match the FD's mask, but more
importantly from a group that doesn't match either. Fortunately this
happens in a process that doesn't see the thread creation, so the FDs
are left intact in the table and we can overwrite the tgid there.
The approach is ugly, it probably shows that we should use a dummy
value for the tgid during boot, that would be replaced once the FDs
migrate to their target, but we also need a way to make sure not to
miss them. Also that doesn't solve the possibility of closing a
listener at run time from the wrong thread group.
At boot the pollers are allocated for each thread and they need to
reprogram updates for all FDs they will manage. This code is not
trivial, especially when trying to respect thread groups, so we'd
rather avoid duplicating it.
Let's centralize this into fd.c with this function. It skips closed
FDs, those whose thread mask doesn't match the requested one and those
whose thread group doesn't match the requested one, and performs the
update if required under thread-group protection.
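To make the intent more concrete, here is a minimal sketch of what such a
centralized helper could look like. The helper name, the loop bound and the
exact field names are assumptions for illustration only, and the real code
additionally performs the thread-group protection mentioned above, which
this sketch omits:

  /* Sketch only: walk the fd table and reprogram a polling update for
   * every FD that is still open, belongs to thread group <tgrp> and
   * intersects the requested thread mask <mask>.
   */
  struct fd_entry {
      void *owner;                /* NULL when the FD is closed */
      int tgid;                   /* owning thread group */
      unsigned long thread_mask;  /* group-local thread mask */
  };

  extern struct fd_entry fdtab[];
  extern int maxfd;
  void updt_fd_polling(int fd);   /* reprograms a polling update for <fd> */

  void fd_reregister_all(int tgrp, unsigned long mask)
  {
      int fd;

      for (fd = 0; fd < maxfd; fd++) {
          if (!fdtab[fd].owner)                 /* closed FD */
              continue;
          if (fdtab[fd].tgid != tgrp)           /* another group's FD */
              continue;
          if (!(fdtab[fd].thread_mask & mask))  /* none of the requested threads */
              continue;
          updt_fd_polling(fd);
      }
  }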
The IOCB might have closed the FD itself, so it's not an error to
have fd.tgid==0 or anything else, nor to have a null running_mask.
In fact there are different conditions under which we can leave the
IOCB; all of them have been enumerated in the code's comments (namely:
FD still valid and used, hence has the running bit; FD closed but not yet
reassigned, thus running==0; FD closed and reassigned, hence a different
tgid and running becomes irrelevant, just like all other masks). For
this reason we have no other solution but to try to grab the tgid on
return before checking the other bits. In practice it doesn't represent
a big cost, because if the FD was closed and reassigned, it's instantly
detected and the bit is immediately released without blocking other
threads, and if the FD wasn't closed this doesn't prevent it from
being migrated to another thread. In the worst case a close by another
thread after a migration will be postponed till the moment the running
bit is cleared, which is the same as before.
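A self-contained model of this grab-then-check pattern is sketched below. It
assumes the tgid/refcount packing introduced later in this series (tgid in
the lower 16 bits, refcount above); all names are illustrative rather than
haproxy's own:

  #include <stdatomic.h>
  #include <stdint.h>

  #define TGID_MASK  0x0000ffffu   /* tgid lives in the lower 16 bits */
  #define REFC_UNIT  0x00010000u   /* one reference in the upper 16 bits */

  /* Take a reference on <word> only if it still carries <tgid>; returns 1
   * on success, 0 if the FD was closed and reassigned to another group.
   */
  static int grab_tgid(_Atomic uint32_t *word, uint16_t tgid)
  {
      uint32_t old = atomic_load(word);

      do {
          if ((old & TGID_MASK) != tgid)
              return 0;
      } while (!atomic_compare_exchange_weak(word, &old, old + REFC_UNIT));

      return 1;
  }

  /* Release the reference taken above. */
  static void drop_tgid(_Atomic uint32_t *word)
  {
      atomic_fetch_sub(word, REFC_UNIT);
  }

On return from the IOCB, the caller would first call grab_tgid(); only if it
succeeds are the running and thread masks worth inspecting, and the
reference is dropped right afterwards.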
These functions need to set/reset the FD's tgid but when they're called
there may still be wakeups on other threads that discover late updates
and have to touch the tgid at the same time. As such, it is not possible
to just read/write the tgid there. It must only be done using operations
that are compatible with what other threads may be doing.
As we're using inc/dec on the refcount, it's safe to AND the area to zero
the lower part when resetting the value. However, in order to set the
value, there's no other choice but fd_claim_tgid() which will assign it
only if possible (via a CAS). This is convenient in the end because it
protects the FD's masks from being modified by late threads, so while
we hold this refcount we can safely reset the thread_mask and a few other
elements. A debug test for non-null masks was added to fd_insert() as it
must not be possible to face this situation thanks to the protection
offered by the tgid.
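As an illustration, a minimal model of the two operations described above
could look like this, reusing the same packing as in the sketch further up;
the names are assumptions and the real fd_claim_tgid() rules may differ:

  #include <stdatomic.h>
  #include <stdint.h>

  #define TGID_MASK  0x0000ffffu   /* tgid in the lower 16 bits */
  #define REFC_UNIT  0x00010000u   /* refcount unit in the upper 16 bits */

  /* Assign <tgid> via a CAS, only if the word currently carries no tgid
   * at all, taking an initial reference at the same time.
   */
  static int claim_tgid(_Atomic uint32_t *word, uint16_t tgid)
  {
      uint32_t expected = 0;

      return atomic_compare_exchange_strong(word, &expected,
                                            (uint32_t)tgid | REFC_UNIT);
  }

  /* Reset only the tgid part: since other threads only inc/dec the upper
   * 16 bits, ANDing out the lower part cannot lose their updates.
   */
  static void reset_tgid(_Atomic uint32_t *word)
  {
      atomic_fetch_and(word, ~TGID_MASK);
  }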
fd_insert() was already given a thread group ID and a global thread mask.
Now we're changing the few callers to take the group-local thread mask
instead. It's passed directly into the FD's thread mask. Just like for
the previous commit, it must not change anything when a single group is
configured.
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
It's an AND so it destroys information, and due to this there's one call
place where we have to perform two reads to know the previous value
before changing it. With a fetch-and-and instead, in a single operation
we can know if the bit was previously present, which is more efficient.
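For example, in standard C11 atomics (just to illustrate the point; the real
code uses the project's own atomic macros):

  #include <stdatomic.h>

  /* A plain AND would require a separate read to learn whether the bit was
   * set before clearing it; fetch-and-and returns the previous value in
   * the same operation.
   */
  static int clear_bit_and_test(_Atomic unsigned long *mask, unsigned long bit)
  {
      unsigned long prev = atomic_fetch_and(mask, ~bit);

      return (prev & bit) != 0;  /* was the bit present before we cleared it? */
  }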
From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling() we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling() we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
We now grab a reference to the FD's tgid before manipulating the
running_mask so that we're certain it corresponds to our own group
(hence bits), and we drop it once we've set the bit. For now there's
no measurable performance impact in doing this, which is great. The
lock can be observed by perf top as taking a small share of the time
spent in fd_update_events(), itself taking no more than 0.28% of CPU
under 8 threads.
However due to the fact that the thread groups are not yet properly
spread across the pollers and the thread masks are still wrong, this
will trigger some BUG_ON() in fd_insert() after a few tens of thousands
of connections when threads other than those of group 1 are reached,
and this is expected.
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of tgid on the lower 16
bits and a refcount in the upper 16 bits. This makes it possible to know
when the tgid and the running mask can really be trusted. If the refcount
is higher than 1, it indicates that another thread might be in the process
of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
After a poller's ->clo() was called to completely terminate operations
on an FD, there's no reason for keeping updates on this FD, so if any
updates were already programmed it would be nice if we could delete them.
Tests show that __fd_clo() is called roughly half of the time with the
last FD from the local update list, which possibly makes sense if a close
has to appear after a polling change resulting from an incomplete read or
the end of a send().
We can detect this and remove the last entry, which gives less work to
do during the update() call, and eliminates most of the poll_drop_fd
event reports.
Note that while tempting, this must not be backported because it's only
safe to be done now that fd_delete_orphan() clears the update mask as
we need to be certain not to miss it:
- if the update mask is kept set with no entry, we can miss future
  updates;
- if the update mask is cleared too fast, it may result in failure
to add a shared event.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it: even if the FD's polling changes, the entry remains valid. The tgid
will still need to be checked though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
Dealing with long-lasting updates that outlive a close() is always
going to be quite a problem, not because of the thread that will
discover such updates late, but mostly due to the shared update_list
that will have an entry on hold making it difficult to reuse it, and
requiring that the fd's tgid is changed and the update_mask reset
from a safe location.
After careful inspection, it turns out that all our pollers that support
automatic event removal upon close() do not need any extra bookkeeping,
and that poll and select, which use an internal representation, already
provide a poller->clo() callback that is used to update the
local event. As such, it is already safe to reset the update mask and
to remove the event from the shared list just before the final close,
because nothing remains to be done with this FD by the poller.
Doing so considerably simplifies the handling of updates, which will
only have to be inspected by the pollers, while the writers can continue
to consider that the entries are always valid. Another benefit is that
it will be possible to reduce contention on the update_list by just
having one update_list per group (left to be done later if needed).
When an FD is migrated, all pollers program an update. That's useless
code duplication, and when thread groups will be supported, this will
require an extra round of locking just to verify the update_mask on
return. Let's just program the update directly from fd_update_events()
as it already does for closed FDs, this becomes more logical.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
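The benefit of the fetch-and-or can be sketched as follows; the flag name
and the pipe write are purely illustrative, not the actual wake_thread()
logic:

  #include <stdatomic.h>
  #include <unistd.h>

  #define THR_NOTIFIED  0x01u

  /* A single fetch-or both sets the flag and reports its previous state,
   * so a redundant wake-up write can be skipped without a separate load.
   */
  static void wake_thread_sketch(_Atomic unsigned int *thr_flags, int pipe_wr)
  {
      unsigned int prev = atomic_fetch_or(thr_flags, THR_NOTIFIED);

      if (!(prev & THR_NOTIFIED)) {
          char c = 'c';
          (void)write(pipe_wr, &c, 1);  /* first notifier wakes the thread */
      }
  }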
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
    frontend test
        bind :8888 maxconn 10 thread 1-2
        rate-limit sessions 5
Alternatively, disabling/enabling a frontend in a loop via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
When enabling an FD that's only bound to another thread, instead of
always picking the first one, let's pick a random one. This is rarely
used (enabling a frontend, or session rate-limiting period ending),
and reduces the chances that some obscure corner case degenerates
into a poorly distributed load.
Till now, updt_fd_polling() used to check if all the target threads were
sleeping, and only then would wake an owning thread up. This causes several
problems, among which the need for the sleeping_thread_mask and the fact
that by the time we wake one thread up, the situation may already have changed.
This commit changes this by leaving it to wake_thread() to perform this
check on the selected thread, since wake_thread() is already capable of
doing this now. Concretely speaking, for updt_fd_polling() it will mean
performing one computation of an ffsl() before knowing the sleeping status
on a global FD state change (which is very rare and not important here,
as it basically happens after relaxing a rate-limit (i.e. once a second
at best) or after enabling a frontend from the CLI); thus we don't care.
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use only atomic ops to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
this. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.
This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fd_tab. This way it's totally unregistered from the poller but still
open.
This patch must be backported to branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted on 2.3
because state flags were stored in a different way (via bits in the
structure).
We thought that we could get rid of some DISGUISE() with commit a80e4a354
("MINOR: fd: add functions to set O_NONBLOCK and FD_CLOEXEC") thanks to
the calls being in a function, but that was without counting on Coverity.
Let's put it directly in the function since most if not all callers don't
care about this result.
This gets rid of most open-coded fcntl() calls, some of which were passed
through DISGUISE() to avoid a useless test. The FD_CLOEXEC was most often
set without preserving previous flags, which could become a problem once
new flags are created. Now this will not happen anymore.
Instead of seeing each location manipulate the fcntl() themselves and
often forget to check previous flags, let's centralize the functions to
do this. It also allows dropping fcntl.h from most call places and will
ease the adoption of different OS-specific mechanisms if needed. Note
that the fd_set_nonblock() function purposely doesn't check the previous
flags as it's meant to be used on new FDs only.
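For reference, here is a sketch consistent with the description above:
fd_set_nonblock() deliberately overwrites the status flags since it targets
freshly created FDs, while fd_set_cloexec() preserves the existing
descriptor flags before adding FD_CLOEXEC.

  #include <fcntl.h>

  int fd_set_nonblock(int fd)
  {
      return fcntl(fd, F_SETFL, O_NONBLOCK);
  }

  int fd_set_cloexec(int fd)
  {
      int flags = fcntl(fd, F_GETFD);

      if (flags < 0)
          return -1;
      return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
  }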
Some older systems may routinely return EWOULDBLOCK for some syscalls
while we tend to check only for EAGAIN nowadays. Modern systems define
EWOULDBLOCK as EAGAIN so that solves it, but on a few older ones (AIX,
VMS etc) both are different, and for portability we'd need to test for
both, otherwise we never know whether we risk confusing some status codes
with plain errors.
There were only a few entries; the most annoying ones are the switch/case
statements because they require adding the entry only when it differs, but
the other ones are really trivial.
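A portable check for such a case typically looks like this (illustrative
helper, not haproxy code):

  #include <errno.h>

  /* On most modern systems EWOULDBLOCK == EAGAIN, but on a few older ones
   * (AIX, VMS...) they differ, so both must be tested there.
   */
  static inline int would_block(int err)
  {
  #if EWOULDBLOCK != EAGAIN
      return err == EAGAIN || err == EWOULDBLOCK;
  #else
      return err == EAGAIN;
  #endif
  }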
It's among the cases that would provoke memory corruption, so let's add
some tests against negative FDs and those larger than the table. This
must never ever happen and would currently result in silent corruption
or a crash. Better have a noticeable one exhibiting the call chain if
that were to happen.
There's a risk that fdtab is not 64-byte aligned. The first effect is that
it may cause false sharing between cache lines resulting in contention
when adjacent FDs are used by different threads. The second is related
to what is explained in commit "BUG/MAJOR: compiler: relax alignment
constraints on certain structures", i.e. that modern compilers might
make use of aligned vector operations to zero some entries, and would
crash. We do not use any memset() or the like on fdtab, so the risk is
almost nonexistent, but that's not a reason for violating some valid
assumptions.
This patch addresses this by allocating 64 extra bytes and aligning the
structure manually (this is an extremely cheap solution for this specific
case). The original address is stored in a new variable "fdtab_addr" and
is the one that gets freed. This remains extremely simple and should be
easily backportable. A dedicated aligned allocator later would help, of
course.
This needs to be backported as far as 2.2. No issue related to this was
reported yet, but it could very well happen as compilers evolve. In
addition this should preserve high performance across restarts (i.e.
no more dependency on allocator's alignment).
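The manual alignment described above boils down to something like the sketch
below; the entry count and size are placeholders:

  #include <stdlib.h>
  #include <stdint.h>

  void *fdtab_addr;   /* address returned by the allocator, passed to free() */

  /* Allocate 64 extra bytes and round the working pointer up to the next
   * 64-byte boundary so the start of the table is cache-line aligned.
   */
  static void *alloc_fdtab_aligned(size_t entries, size_t entry_size)
  {
      fdtab_addr = calloc(1, entries * entry_size + 64);
      if (!fdtab_addr)
          return NULL;
      return (void *)(((uintptr_t)fdtab_addr + 63) & ~(uintptr_t)63);
  }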
The poller's pipe was only registered on the read side since we don't
need to poll to write on it. But this leaves some FDs unknown to the fdtab,
so it's better to also register the write side with no event. This will allow
to show them in "show fd" and to avoid dumping them as unhandled FDs.
Note that the only other types of unhandled FDs left are:
- stdin/stdout/stderr
- epoll FDs
The latter can be registered upon startup though, but at least a dummy
handler would be needed to keep the fdtab clean.
The TI_FL_STUCK flag is manipulated by the watchdog and scheduler
and describes the apparent life/death of a thread so it changes
all the time and it makes sense to move it to the thread's context
for an active thread.
There's no need to inline it at all (one call per loop) and it introduces
dependencies, so let's move it to fd.c.
Removing the few remaining includes that came with it further reduced
the LoC by ~0.2%, and the build time is now below 6s.
A number of files currently access activity counters but rely on their
definitions to be inherited from other files (task.c, backend.c, hlua.c,
sock.c, pool.c, stats.c, fd.c).
Ilya reports in GH #1392 that clang 13 complains about totlen being
calculated and not used in fd_write_frag_line(), which is true. It's
a leftover of some older code.
Right now we're using a DWCAS to atomically set the running_mask while
being constrained by the thread_mask. This DWCAS is annoying because we
may seriously need it later when adding support for thread groups, for
checking that the running_mask applies to the correct group.
It turns out that the DWCAS is not strictly necessary because we never
need it to set the thread_mask based on the running_mask, only the other
way around. And in fact, the running_mask is always cleared alone, and
the thread_mask is changed alone as well. The running_mask is only
relevant to indicate a takeover when the thread_mask matches it. Any
bit set in running and not present in thread_mask indicates a transition
in progress.
As such, it is possible to re-arrange this by using a regular CAS around a
consistency check between running_mask and thread_mask in fd_update_events
and by making a CAS on running_mask then an atomic store on the thread_mask
in fd_takeover(). The only other case is fd_delete() but that one already
sets the running_mask before clearing the thread_mask, which is compatible
with the consistency check above.
This change has happily survived 10 billion takeovers on a 16-thread
machine at 800k requests/s.
The fd-migration doc was updated to reflect this change.
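The resulting pattern can be modeled as follows; the names are illustrative,
and the point is only that a plain CAS combined with a re-check of
thread_mask is sufficient:

  #include <stdatomic.h>

  /* Set our running bit only if our bit is still present in thread_mask at
   * the time of the CAS; otherwise the FD was migrated and we must give up.
   */
  static int try_set_running(_Atomic unsigned long *running_mask,
                             _Atomic unsigned long *thread_mask,
                             unsigned long tid_bit)
  {
      unsigned long old = atomic_load(running_mask);

      do {
          if (!(atomic_load(thread_mask) & tid_bit))
              return 0;
      } while (!atomic_compare_exchange_weak(running_mask, &old, old | tid_bit));

      return 1;
  }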
This one is set whenever an FD is reported by a poller with a null owner,
regardless of the thread_mask. It has become totally meaningless because
it only indicates a migrated FD that was not yet reassigned to a thread,
but as soon as a thread uses it, the status will change to skip_fd. Thus
there is no reason to distinguish between the two, it adds more confusion
than it helps. Let's simply drop it.
This function already performs a number of checks prior to calling the
IOCB, and detects the change of thread (FD migration). Half of the
controls are still in each poller, and these pollers also maintain
activity counters for various cases.
Note that the unreliable test on thread_mask was removed so that only
the one performed by fd_set_running() is now used, since this one is
reliable.
Let's centralize all that fd-specific logic into the function and make
it return a status among:
  FD_UPDT_DONE,     // update done, nothing else to be done
  FD_UPDT_DEAD,     // FD was already dead, ignore it
  FD_UPDT_CLOSED,   // FD was closed
  FD_UPDT_MIGRATED, // FD was migrated, ignore it now
Some pollers already used to call it last and have nothing to do after
it, regardless of the result. epoll has to delete the FD in case a
migration is detected. Overall this removes more code than it adds.
This one is stated as experimental in the doc but could still be used
by accidental copy-paste. Let's mark it with KWF_EXPERIMENTAL so that
users have to opt-in to use it.
Some pointer to arrays such as fdtab, fdinfo, polled_mask etc are never
written to at run time but are used a lot. fdtab accesses appear a lot in
perf top because ha_used_fds is in the same cache line and is modified
all the time. This patch moves all these read-mostly variables to the
read_mostly section when defined. This way their cache lines will be
able to remain in shared state in all CPU caches.
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
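In standard C11 terms the ambiguity looks like this; the _FETCH suffix makes
the "value after" intent explicit at the call place:

  #include <stdatomic.h>

  /* atomic_fetch_add() returns the value *before* the addition, so the
   * value *after* it has to be recomputed by the caller; the _FETCH
   * variants exist precisely to express this intent directly.
   */
  static unsigned int inc_and_get(_Atomic unsigned int *counter)
  {
      return atomic_fetch_add(counter, 1) + 1;
  }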
There is a function called fd_write_frag_line() that's essentially used
by loggers to write an atomic message line over a file
descriptor using writev(). However a lock is required around the writev()
call to prevent messages from multiple threads from being interleaved.
Till now a SPIN_TRYLOCK was used on a dedicated lock that was common to
all FDs. This is not very pretty because if there are multiple output pipes
to collect logs, there will be quite some contention. Now that there
are spare flag bits left in the FD state and that we can finally use atomic
ops on them, let's add a flag to indicate the FD is locked for exclusive
access by a syscall. At least the locking will now be on an FD basis and
not the whole process, so we can remove the log_lock.
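The per-FD locking can be pictured as a try-lock built on the FD's state
flags; the flag name below is illustrative:

  #include <stdatomic.h>

  #define FD_EXCL_SYSCALL  0x40000000u   /* FD locked for an exclusive syscall */

  /* Returns 1 if we obtained exclusive access, 0 if another thread
   * currently holds it.
   */
  static int fd_try_lock(_Atomic unsigned int *state)
  {
      return !(atomic_fetch_or(state, FD_EXCL_SYSCALL) & FD_EXCL_SYSCALL);
  }

  static void fd_unlock(_Atomic unsigned int *state)
  {
      atomic_fetch_and(state, ~FD_EXCL_SYSCALL);
  }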