Commit d9e7e36 ("BUG/MEDIUM: epoll/threads: use one epoll_fd per thread")
addressed an issue with the polling and required that cloned FDs be removed
from all polling threads on close. But in fact it does so for all bound
threads, some of which may not necessarily poll the FD. This is harmless,
but it may also make it harder later to deal with FD migration between
threads. Better use polled_mask, which only reports the threads still aware
of the FD, instead of thread_mask.
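As an illustration, here is a minimal sketch of the intended logic, using
the 1.8 names (fdtab, polled_mask, epoll_fd[]); the helper name itself is
hypothetical :

    static void remove_cloned_fd_from_pollers(int fd)
    {
        unsigned long mask = fdtab[fd].polled_mask; /* threads still polling it */
        struct epoll_event ev;                      /* ignored by EPOLL_CTL_DEL */
        int i;

        for (i = 0; i < global.nbthread; i++)
            if (mask & (1UL << i))
                epoll_ctl(epoll_fd[i], EPOLL_CTL_DEL, fd, &ev);
    }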
This fix should be backported to 1.8.
Maxfd is really only useful to poll() and select(), yet epoll and
kqueue reference it almost by mistake :
- cloning of the initial FDs (maxsock should be used here)
- max polled events, it's maxpollevents which should be used here.
Let's fix these places.
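For instance, the initial-FD cloning scan becomes bounded by maxsock (a
sketch of the intent, not the exact diff) :

    /* mark all potentially inherited FDs for a polling state recomputation;
     * the bound is maxsock since maxfd only matters to poll()/select() */
    for (fd = 0; fd < global.maxsock; fd++)
        if (fdtab[fd].owner)
            updt_fd_polling(fd);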
When HAProxy is compiled without threads, gcc throws the following warnings:
src/ev_epoll.c:222:3: warning: array subscript is outside array bounds [-Warray-bounds]
...
src/ev_epoll.c:199:11: warning: array subscript is outside array bounds [-Warray-bounds]
...
Of course, this is not a bug. In this case, tid is always equal to 0. But to
avoid the noise, adding a check on MAX_THREADS to the "if (tid)" lines makes
gcc happy.
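A sketch of the workaround (the epoll_create() line is illustrative; the
point is the constant check in front of tid) :

    /* without USE_THREAD, MAX_THREADS is 1 and tid is always 0, so gcc can
     * drop the branch instead of warning about out-of-bounds subscripts */
    if (MAX_THREADS > 1 && tid)
        epoll_fd[tid] = epoll_create(global.maxsock + 1);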
This patch should be backported to 1.8 with the commit d9e7e36c ("BUG/MEDIUM:
epoll/threads: use one epoll_fd per thread").
There currently is a problem regarding epoll(). While select() and poll()
compute their polling state on the fly upon each call, epoll() keeps a
shared state between all threads via the epoll_fd. The problem is that
once an fd is registered on *any* thread, all other threads receive
events for that FD as well. It is clearly visible when binding a listener
to a single thread like in the configuration below where all 4 threads
will work, 3 of them simply spinning to skip the event :
    global
        nbthread 4

    frontend foo
        bind :1234 process 1/1
The worst case happens when some slow operations are in progress on a
busy thread, preventing it from processing its task and causing the
other ones to wake up without being able to do anything with this event.
Typically computing a large TLS key will delay processing of next
events on the same thread while others will still wake up.
All this simply shows that the poller must remain thread-specific, with
its own events and its own ability to sleep when it doesn't have anything
to do.
This patch does exactly this. It proceeds as follows (a short sketch of the
result follows the list) :
- have one epoll_fd per thread instead of one per process
- initialize these epoll_fd when threads are created.
- mark all known FDs as updated so that the next invocation of
_do_poll() recomputes their polling status (including a possible
removal of undesired polling from the original FD) ;
- use each fd's polled_mask to maintain an accurate status of
the current polling activity for this FD.
- when scanning updates, only focus on events whose new polling
status differs from the existing one
- during updates, always verify the thread_mask to resist migration
- on __fd_clo(), for cloned FDs (typically listeners inherited
from the parent during a graceful shutdown), run epoll_ctl(DEL)
on all epoll_fd. This is the reason why epoll_fd is stored in a
shared array and not in a thread_local storage. Note: maybe this
can be moved to an update instead.
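As announced above, a sketch of the resulting layout, using the names from
the patch (simplified, without error handling) :

    int epoll_fd[MAX_THREADS];        /* shared array: one instance per thread */
    static THREAD_LOCAL struct epoll_event *epoll_events;

    static void __fd_clo(int fd)
    {
        if (unlikely(fdtab[fd].cloned)) {
            struct epoll_event ev;    /* content ignored by EPOLL_CTL_DEL */
            int i;

            for (i = 0; i < global.nbthread; i++)
                epoll_ctl(epoll_fd[i], EPOLL_CTL_DEL, fd, &ev);
        }
    }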
Interestingly, this shows that we don't need the FD's old state anymore
and that we only use it to convert it to the new state based on stable
information. It appears clearly that the FD code can be further improved
by computing the final state directly when manipulating it.
With this change, the config above goes from 22000 cps at 380% CPU to
43000 cps at 100% CPU : not only the 3 unused threads are not activated,
but they do not disturb the activity anymore.
The output of "show activity" before and after the patch on a 4-thread
config where a first listener on thread 2 forwards over SSL to threads
3 & 4 shows this a much smaller amount of undesired events (thread 1
doesn't wake up anymore, poll_skip remains zero, fd_skip stays low) :
// before: 400% CPU, 7700 cps, 13 seconds
loops:          11380717    65879   5733468   5728129
wake_cache:            0    63986    317547    314174
wake_tasks:            0        0         0         0
wake_applets:          0        0         0         0
wake_signal:           0        0         0         0
poll_exp:              0    63986    317547    314174
poll_drop:             1        0     49981     48893
poll_dead:         65514        0     31334     31934
poll_skip:      46293690    34071  22867786  22858208
fd_skip:        66068135   174157  33732685  33825727
fd_lock:               0        2      2809      2905
fd_del:                0   494361     80890     79464
conn_dead:             0        0         0         0
stream:                0   407747     50526     49474
empty_rq:       11380718     1914   5683023   5678715
long_rq:               0        0         0         0
// after: 200% cpu, 9450 cps, 11 seconds
loops:                17    66147   1001631    450968
wake_cache:            0    66119    865139    321227
wake_tasks:            0        0         0         0
wake_applets:          0        0         0         0
wake_signal:           0        0         0         0
poll_exp:              0    66119    865139    321227
poll_drop:             6        5     38279     60768
poll_dead:             0        0         0         0
poll_skip:             0        0         0         0
fd_skip:              54   172661   4411407   2008198
fd_lock:               0        0     10890      5394
fd_del:                0   492829     58965    105091
conn_dead:             0        0         0         0
stream:                0   406223     38663     61338
empty_rq:             18       40    962999    390549
long_rq:               0        0         0         0
This patch presents a few risks but fixes a real problem with threads,
and as such it needs to be backported to 1.8. It depends on the previous patch
("MINOR: fd: add a bitmask to indicate that an FD is known by the poller").
Special thanks go to Samuel Reed for providing a large amount of useful
debugging information and for testing fixes.
Since the fd update tables are per-thread, we need to have a bit per
thread to indicate whether an update exists, otherwise this can lead
to lost update events every time multiple threads want to update the
same FD. In practice *for now*, it only happens at start time when
listeners are enabled and ask for polling after facing their first
EAGAIN. But since the pollers are still shared, a lost event is still
recovered by a neighbor thread. This will not work reliably anymore
with per-thread pollers, where it was observed a few times that a
single-threaded listener would not always accept incoming connections
upon startup.
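A rough sketch of the resulting update enqueueing, assuming the 1.8 names
(update_mask, tid_bit, fd_updt) :

    static inline void updt_fd_polling(const int fd)
    {
        if (fdtab[fd].update_mask & tid_bit)
            return;                       /* already queued on this thread */
        fdtab[fd].update_mask |= tid_bit; /* one bit per thread: no lost update */
        fd_updt[fd_nbupdt++] = fd;        /* per-thread update list */
    }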
It's worth noting that during this code review it appeared that the
"new" flag in the fdtab isn't used anymore.
This fix should be backported to 1.8.
A number of counters have been added at special places helping better
understanding certain bug reports. These counters are maintained per
thread and are shown using "show activity" on the CLI. The "clear
counters" commands also reset these counters. The output is sent as a
single write(), which currently produces up to about 7 kB of data for
64 threads. If more counters are added, it may be necessary to write
into multiple buffers, or to reset the counters.
This should be backported to 1.8 to help collect more detailed bug reports.
It was a leftover from the last cleaning session; this mask applies
to threads and calling it process_mask is a bit confusing. It's the
same in fd, task and applets.
There was a flaw in the way the threads were created. The main one was only
used to create all the others and then simply waited for them to exit. Now, it
is used to run a poll loop. So we only create nbthread-1 threads.
This also fixes a bug about the compression filter when there is only 1 thread
(nbthread == 1 or no thread support). The bug was in the way thread-local
resources were initialized. Per-thread init/deinit callbacks were never called
for the main process. So, with nbthread set to 1, some buffers remained
uninitialized.
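The startup sequence now looks roughly like this (threads[] and tids[] are
illustrative names) :

    /* the main thread becomes thread 0 and runs the poll loop itself, so
     * only nbthread-1 extra threads are spawned */
    for (i = 1; i < global.nbthread; i++)
        pthread_create(&threads[i], NULL, run_thread_poll_loop, &tids[i]);
    run_thread_poll_loop(&tids[0]);
    for (i = 1; i < global.nbthread; i++)
        pthread_join(threads[i], NULL);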
Many changes have been made to do so. First, the fd_updt array, where all
pending FDs for polling are stored, is now a thread-local array. Then 3 locks
have been added to protect, respectively, the fdtab array, the fd_cache array
and poll information. In addition, a lock for each entry in the fdtab array has
been added to protect all accesses to a specific FD or its information.
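In short, the new storage looks like this (a sketch; the exact lock type
depends on the build options) :

    THREAD_LOCAL int *fd_updt   = NULL;  /* FD updates list, one per thread */
    THREAD_LOCAL int  fd_nbupdt = 0;     /* number of updates in the list */

    struct fdtab {
        __decl_hathreads(HA_SPINLOCK_T lock);  /* protects this entry */
        /* ... other fields (owner, iocb, state, ...) ... */
    };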
For the pollers, the way to manage concurrency depends on the poller itself.
There is a poll loop on each thread, so the set of monitored FDs may need to
be protected. epoll and kqueue are thread-safe per se, so little is needed to
protect these pollers. This is not possible with select and poll, so there is
no sharing between the threads; the poller on each thread is independent of
the others.
Finally, per-thread init/deinit functions are used for each poller and for the
FD part to manage thread-local resources.
Now, you must be careful when an FD is created during HAProxy startup. All
updates on the FD state must be made in the threads' context and never before
their creation. This is mandatory because the fd_updt array is thread-local
and initialized only for the threads. Since there is no poller for the main
thread at this point, this array remains uninitialized in that context. For
this reason, listeners are now enabled in the run_thread_poll_loop function,
just like the worker pipe.
Since commit 5be2f35 ("MAJOR: polling: centralize calls to I/O callbacks")
that came into 1.6-dev1, each poller deals with its own events and decides
to signal ability to receive or send on a file descriptor based on the
active events on the file descriptor.
The commit above was incorrectly done for the epoll code. Instead of
checking the active events on the fd, it checks for the new events. In
general these ones are the same for POLL_IN and POLL_OUT since they
are always cleared prior to being computed, but it is possible that
POLL_HUP and POLL_ERR were initially reported and are not reported
again (especially for HUP). This could happen for example if POLL_HUP
and POLL_IN were received together, the pending data exactly correspond
to a full buffer which is read at once, preventing the POLL_HUP from
being dealt with in the same call, and on the next call only POLL_OUT
is reported (eg: to emit some response or peers protocol ACKs). In this
case fd_may_recv() will not be enabled anymore and the close event will
be missed.
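The fix thus consists in testing the accumulated events instead of the
freshly computed ones; roughly (sketch of the epoll event loop) :

    fdtab[fd].ev &= FD_POLL_STICKY;     /* keep only ERR/HUP */
    fdtab[fd].ev |= ((e & EPOLLIN)  ? FD_POLL_IN  : 0) |
                    ((e & EPOLLOUT) ? FD_POLL_OUT : 0) |
                    ((e & EPOLLERR) ? FD_POLL_ERR : 0) |
                    ((e & EPOLLHUP) ? FD_POLL_HUP : 0);

    if (fdtab[fd].ev & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
        fd_may_recv(fd);                /* was testing the new events only */
    if (fdtab[fd].ev & (FD_POLL_OUT | FD_POLL_ERR))
        fd_may_send(fd);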
It seems quite hard to trigger this case, though it might explain some
of the rare missed close events that were detected in the past on the
peers.
This fix needs to be backported to 1.6 and 1.7.
We'll need to differentiate pollers which can report hangup at the same
time as read (POLL_RDHUP) from the other ones, because only the former
may benefit from the fd_done_recv() optimization. Epoll has
had support for EPOLLRDHUP since Linux 2.6.17 and has always been used
this way in haproxy, so now we only set the flag after we've observed it
once in a response. It means that some initial requests may try to
perform a second recv() call, but after the first closed connection it
will be enough to know that the second call is not needed anymore.
Later we may extend these flags to designate event-triggered pollers.
The poll() functions have become a bit dirty because they now check the
size of the signal queue, the FD cache and the number of tasks. It's not
their job, this must be moved to the caller. In the end it simplifies the
code because the expiration date is now set to now_ms if we must not wait,
and this achieves exactly the same result and is cleaner. The change
looks large due to the change of indent for blocks which were inside an
"if" block.
Willy: commit f2e8ee2b introduced an optimization in the old speculative epoll
code, which implemented its own event cache. It was needed to store that many
events (it was bound to maxsock/4 btw). Now the event cache lives on its own
and we don't need this anymore. And since events are allocated on the kernel
side, we only need to allocate the events we want to return.
As a result, absmaxevents will not be used anymore. Just remove its definition
and the comment about it, and replace it with global.tune.maxpollevents. It is
also an
optimization of memory usage for large amounts of sockets.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
In order for HTTP/2 not to eat too much memory, we'll have to support
on-the-fly buffer allocation, since most streams will have an empty
request buffer at some point. Supporting allocation on the fly means
being able to sleep inside I/O callbacks if a buffer is not available.
Till now, the I/O callbacks were called from two locations :
- when processing the cached events
- when processing the polled events from the poller
This change cleans up the design a bit further than what was started in
1.5. It now ensures that we never call any iocb from the poller itself
and that instead, events learned by the poller are put into the cache.
The benefit is important in terms of stability : we don't have to care
anymore about the risk that new events are added into the poller while
processing its events, and we're certain that updates are processed at
a single location.
To achieve this, we now modify all the fd_* functions so that instead of
creating updates, they add/remove the fd to/from the cache depending on
its state, and only create an update when the polling status reaches a
state where it will have to change. Since the pollers make use of these
functions to notify readiness (using fd_may_recv/fd_may_send), the cache
is always up to date with the poller.
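For instance, the receive side now looks approximately like this
(simplified sketch) :

    static inline void fd_want_recv(int fd)
    {
        if (fd_recv_active(fd))
            return;                  /* state unchanged: nothing to do */
        fdtab[fd].state |= FD_EV_ACTIVE_R;
        fd_update_cache(fd);         /* adds a cache entry, and an update
                                      * only if the polling status changes */
    }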
Creating updates only when the polling status needs to change saves a
significant amount of work for the pollers : a benchmark showed that on
a typical TCP proxy test, the amount of updates per connection dropped
from 11 to 1 on average. This also means that the update list is smaller
and has more chances of not thrashing too many CPU cache lines. The first
observed benefit is a net 2% performance gain on the connection rate.
A second benefit is that when a connection is accepted, it's only when
we're processing the cache, and the recv event is automatically added
into the cache *after* the current one, resulting in this event being
processed immediately during the same loop. Previously we used to have
a second run over the updates to detect if new events were added to
catch them before waking up tasks.
The next gain will be offered by the next steps on this subject consisting
in implementing an I/O queue containing all cached events ordered by priority
just like the run queue, and to be able to leave some events pending there
as long as needed. That will allow us *not* to perform some FD processing
if it's not the proper time for this (typically keep waiting for a buffer
to be allocated if none is available for a recv()). And by only processing
a small bunch of them, we'll allow priorities to take place even at the I/O
level.
As a result of this change, functions fd_alloc_or_release_cache_entry()
and fd_process_polled_events() have disappeared, and the code dedicated
to checking for new fd events after the callback during the poll() loop
was removed as well. Despite the patch looking large, it's mostly a
change of which function is called upon fd_*() and almost nothing was
added.
When run in daemon mode (i.e. with at least one forked process) and using
the epoll poller, sending USR1 (graceful shutdown) to the worker processes
can cause some workers to start running at 100% CPU. The precondition is having
an established HTTP keep-alive connection when the signal is received.
The cloned (during fork) listening sockets do not get closed in the parent
process, thus they do not get removed from the epoll set automatically
(see man 7 epoll). This can lead to the process receiving epoll events
that it doesn't feel responsible for, resulting in an endless loop around
epoll_wait() delivering these events.
The solution is to explicitly remove these file descriptors from the epoll
set. To not degrade performance, care was taken to only do this when
necessary, i.e. when the file descriptor was cloned during fork.
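The flagging itself could look like this (purely illustrative sketch; the
patch records a "cloned" bit at fork time so the epoll code can issue an
explicit EPOLL_CTL_DEL for such FDs on close) :

    /* in each worker after fork(): FDs shared with other processes will not
     * leave this process' epoll set on close() alone, an explicit DEL is
     * needed, so remember which ones were cloned */
    for (fd = 0; fd < global.maxsock; fd++)
        if (fdtab[fd].owner)
            fdtab[fd].cloned = 1;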
Signed-off-by: Conrad Hoffmann <conrad@soundcloud.com>
[wt: a backport to 1.4 could be studied though chances to catch the bug are low]
This function is used to compute the new polling state based on
the previous state. All pollers have to do this in their update
loop, so better centralize the logic for it.
Currently, each poll loop handles the polled events the same way,
resulting in a lot of duplicated, complex code. Additionally, epoll
was the only one to handle newly created FDs immediately.
So instead, let's move that code to fd.c in a new function dedicated
to this task : fd_process_polled_events(). All pollers now use this
function.
This commit heavily changes the polling system in order to definitely
fix the frequent breakage of SSL which needs to remember the last
EAGAIN before deciding whether to poll or not. Now we have a state per
direction for each FD, as opposed to a previous and current state
previously. An FD can have up to 8 different states for each direction,
each of which being the result of a 3-bit combination. These 3 bits
indicate a wish to access the FD, the readiness of the FD and the
subscription of the FD to the polling system.
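The encoding can be sketched like this (bit values illustrative; each
direction carries its own copy of the three bits) :

    #define FD_EV_ACTIVE   1U                   /* someone wants to use the FD */
    #define FD_EV_READY    2U                   /* no EAGAIN met so far */
    #define FD_EV_POLLED   4U                   /* FD subscribed to the poller */

    #define FD_EV_ACTIVE_R (FD_EV_ACTIVE)       /* read direction */
    #define FD_EV_ACTIVE_W (FD_EV_ACTIVE << 4)  /* write direction */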
This means that it will now be possible to remember the state of a
file descriptor across disable/enable sequences that generally happen
during forwarding, where enabling reading on a previously disabled FD
would result in forgetting the EAGAIN flag it met last time.
Several new state manipulation functions have been introduced or
adapted :
- fd_want_{recv,send} : enable receiving/sending on the FD regardless
of its state (sets the ACTIVE flag) ;
- fd_stop_{recv,send} : stop receiving/sending on the FD regardless
of its state (clears the ACTIVE flag) ;
- fd_cant_{recv,send} : report a failure to receive/send on the FD
corresponding to EAGAIN (clears the READY flag) ;
- fd_may_{recv,send} : report the ability to receive/send on the FD
as reported by poll() (sets the READY flag) ;
Some functions are used to report the current FD status :
- fd_{recv,send}_active
- fd_{recv,send}_ready
- fd_{recv,send}_polled
Some functions were removed :
- fd_ev_clr(), fd_ev_set(), fd_ev_rem(), fd_ev_wai()
The POLLHUP/POLLERR flags are now reported as ready so that the I/O layers
know they can try to access the file descriptor to get this information.
In order to simplify the conditions to add/remove cache entries, a new
function fd_alloc_or_release_cache_entry() was created to be used from
pollers while scanning for updates.
The following pollers have been updated :
ev_select() : done, built, tested on Linux 3.10
ev_poll() : done, built, tested on Linux 3.10
ev_epoll() : done, built, tested on Linux 3.10 & 3.13
ev_kqueue() : done, built, tested on OpenBSD 5.2
We're completely changing the way FDs will be polled. There will be no
more speculative I/O since we'll know the exact FD state, so these will
only be cached events.
First, let's fix a few field names which become confusing. "spec_e" was
used to store a speculative I/O event state. Now we'll store the whole
R/W states for the FD there. "spec_p" was used to store a speculative
I/O cache position. Now let's clearly call it "cache".
We're completely changing the way FDs will be polled. First, let's fix
a few field names which become confusing. "spec_e" was used to store a
speculative I/O event state. Now we'll store the whole R/W states for
the FD there.
This reverts commit 2f877304ef.
This commit is OK for clear text traffic but causes trouble with SSL
when buffers are smaller than SSL buffers. Since the issue it addresses
will be gone once the polling redesign is complete, there's no reason
for trying to workaround temporary inefficiencies. Better remove it.
epoll_wait() takes a number of returned events, not the number of
fds to consider. We must not pass it the number of the smallest fd,
as it leads to value zero being used, which is invalid in epoll_wait().
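The correct call simply passes the configured event count :

    /* the 3rd argument is the max number of events to return; it must be
     * >= 1 and is unrelated to fd numbers */
    status = epoll_wait(epoll_fd, epoll_events,
                        global.tune.maxpollevents, wait_time);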
The effect may sometimes be observed with peers sections trying to
connect and causing 2-seconds CPU loops upon a soft reload because
epoll_wait() immediately returns -1 EINVAL instead of waiting for the
timeout to happen.
This fix should be backported to 1.4 too (into ev_epoll and ev_sepoll).
epoll may report pending shutdowns using EPOLLRDHUP. Since this
flag is missing from a number of libcs despite being available
since kernel 2.6.17, let's define it ourselves.
Doing so saves one syscall by allowing us to avoid the read()==0 when
the server closes with the response.
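The definition is a one-liner (value from the kernel headers) :

    #ifndef EPOLLRDHUP
    #define EPOLLRDHUP 0x2000
    #endif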
When a polled I/O event is detected, the event is added to the updates
list and the I/O handler is called. Upon return, if the event handler
did not experience an EAGAIN, the event remains in the updates list so
that it will be processed later. But if the event was already in the
spec list, its state is updated and it will be called again immediately
upon exit, by fd_process_spec_events(), so this creates unfairness
between speculative events and polled events.
So don't call the I/O handler upon I/O detection when the FD already is
in the spec list. The fd events are still updated so that the spec list
is up to date with the possible I/O change.
The epoll loop checks for newly appeared FDs in order to process them early
if they're accepted sockets. Since the introduction of the fd_ev_set()
calls before the iocb(), the current FD is always in the update list,
and we don't want to check it again, so we must assign the old_updt
index just before calling the I/O handler.
Playing with fdtab[fd].ev makes gcc constantly reload the pointers
because it does not know they don't alias. Use a temporary variable
instead. This saves a few operations in the fast path.
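Concretely, a sketch of the pattern :

    unsigned int e = fdtab[fd].ev;  /* single load, no aliasing concern */

    if (e & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
        fd_may_recv(fd);
    if (e & (FD_POLL_OUT | FD_POLL_ERR))
        fd_may_send(fd);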
In ev_poll and ev_epoll, we have a bit-to-bit mapping between the POLL_
constants and the FD_POLL_ constants. A comment said that gcc was able
to detect this and to automatically apply a mask. Things have possibly
changed since the output assembly doesn't always reflect this. So let's
perform an explicit assignment when bits are equal.
Errors and Hangups are sticky events, which means that once they're
detected, we never clear them, allowing them to be handled later if
needed.
Till now when an error was reported, it used to register a speculative
I/O event for both recv and send. Since the connection had not requested
such events, it was not able to detect a change and did not clear them,
so the handlers were called in loops until a timeout caused their owner
task to die.
So this patch does two things :
- stop registering spec events when no I/O activity was requested,
so that we don't end up with non-disablable polling state ;
- keep the sticky polling flags (ERR and HUP) when leaving the
connection handler so that an error notification doesn't
magically become a normal recv() or send() report once the
event is converted to a spec event.
It is normally not needed to make the connection handler emit an
error when it detects POLL_ERR because either a registered data
handler will have done it, or the event will be disabled by the
wake() callback.
Now that all pollers make use of speculative I/O, there is no point
having two epoll implementations, so replace epoll with the sepoll code
and remove sepoll which has just become the standard epoll method.
When calling fd_rem(), the polling was not correctly disabled because the
->prev state was set to zero instead of the previous value. fd_rem() is
very rarely used, only just before closing a socket.
The effect is that upon an error reported at the connection level, if the
task assigned to the connection was too slow to be woken up because of too
many other tasks in the run queue, the FD was still not disabled and caused
the connection handler to be called again with the same event until the task
was finally executed to close the fd.
This issue only affects the epoll poller, not the sepoll variant nor any of
the other ones.
It was already present in 1.4 and even 1.3 with the same almost unnoticeable
effects. The bug can in fact only be discovered during development where it
emphasizes other bugs.
It should be backported anyway.
The old EV_FD_SET() macro was confusing, as it would enable receipt but there
was no way to indicate that EAGAIN was received, hence the recently added
FD_WAIT_* flags. They're not enough as we're still facing a conflict between
EV_FD_* and FD_WAIT_*. So let's offer I/O functions what they need to explicitly
request polling.
These primitives were initially introduced so that callers were able to
conditionally set/disable polling on a file descriptor and check in return
what the state was. It's been long since we last had an "if" on this, and
all pollers' functions were the same for cond_* and their systematic
counterparts, except that this required a check and a specific return
value that are not always necessary.
So let's simplify the FD API by removing this now unused distinction and
by making all specific functions return void.
In an attempt to get rid of fdtab[].state, and to move the relevant
parts to the connection struct, we remove the FD_STCLOSE state which
can easily be deduced from the <owner> pointer as there is a 1:1 match.
fdtab[].ev was only set in ev_sepoll. Unfortunately, some I/O handling
functions now rely on this, so depending on the polling mechanism, some
useless operations might have been performed, such as an unneeded
recv() when a HUP was reported.
This is a very old issue, the flags were only added to the fdtab and not
propagated into any poller. Then they were used in ev_sepoll which needed
them for the cache. It is unclear whether a backport to 1.4 is appropriate
or not.
We now measure the work and idle times in order to report the idle
time in the stats. It's expected that we'll be able to use it at
other places later.
Some older libcs don't define splice() and don't define _syscall*()
either, which causes build errors if splicing is enabled.
To solve this, we now split the syscall redefinition into two layers :
- one file per syscall (epoll, splice)
- one common file to declare the _syscall*() macros
The code is cleaner because files using the syscalls just have to include
their respective file. It's not advised to merge multiple syscall families
into a same file if all are not intended to be used simultaneously, because
defining unused static functions causes warnings to be emitted during build.
As a result, the new USE_MY_SPLICE parameter was added in order to be able
to define the splice() syscall separately.
epoll, sepoll and kqueue pollers should check that their fd is not
closed before attempting to close it, otherwise we can end up with
multiple closes of fd #0 upon exit, which is harmless but dirty.
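A sketch of the guard in a poller's termination function :

    static void _do_term(struct poller *p)
    {
        if (epoll_fd >= 0) {
            close(epoll_fd);
            epoll_fd = -1;   /* so a later call won't close fd #0 */
        }
    }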