6744 Commits

Author SHA1 Message Date
Willy Tarreau
679bba13f7 MINOR: init: report the list of optionally available services
It's never easy to guess what services are built in. We currently have
the prometheus exporter in contrib/, which is the only extension for now.
Let's enumerate all available ones just like we do for filters and pollers.
2019-03-19 08:08:10 +01:00
Christopher Faulet
203b2b0a5a MINOR: muxes: Report the Last read with a dedicated flag
For convenience, in HTTP muxes (h1 and h2), the end of the stream and the end of
the message are reported the same way to the stream, by setting the flag
CS_FL_EOS. In the stream-interface, when CS_FL_EOS is detected, a shutdown for
read is reported on the channel side. This is historical. With the legacy HTTP
layer, because the parsing is done by the stream in HTTP analyzers, the EOS
really means a shutdown for read.

Most of the time, for muxes h1 and h2, it works pretty well, especially because
the keep-alive is handled by the muxes. The stream is only used for one
transaction. So mixing EOS and EOM is good enough. But not always. For now,
client aborts are only reported if they happen before the end of the request.
That is an error and it is properly handled. But because the EOS was already
reported, client aborts after the end of the request are silently
ignored. An error may eventually be reported when the response is sent to the
client, if the sending fails. Otherwise, if the server does not reply fast
enough, an error is reported when the server timeout is reached. This is the
expected behaviour, except when the option abortonclose is set. In this case,
we must report an error when the client aborts. But as said before, this event
can be ignored. So, in short, the option abortonclose is currently broken.

In fact, it is a design problem and we have to rethink all the channel's flags
and probably the conn-stream ones too. It is important to split EOS and EOM so
that we no longer lose information. But it is not a small job and the
refactoring will be far from straightforward.

So for now, temporary flags are introduced. When the last read is received, the
flag CS_FL_READ_NULL is set on the conn-stream. This way, we can set the flag
SI_FL_READ_NULL on the stream interface. Both flags are persistent. And to be
sure to wake the stream, the event CF_READ_NULL is reported. So the stream will
always have the chance to handle the last read.
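
A minimal sketch of the propagation described above, with simplified stand-in
structures (flag names from the commit; bit values and the helper are
hypothetical):

    /* simplified model, not HAProxy's actual structures */
    #define CS_FL_READ_NULL  0x100  /* hypothetical bit values */
    #define SI_FL_READ_NULL  0x200
    #define CF_READ_NULL     0x400

    struct conn_stream      { unsigned int flags; };
    struct stream_interface { unsigned int flags; };
    struct channel          { unsigned int flags; };

    /* called when the mux sees the last read on the conn-stream */
    static void report_read0(struct conn_stream *cs,
                             struct stream_interface *si,
                             struct channel *ic)
    {
        cs->flags |= CS_FL_READ_NULL;  /* persistent on the conn-stream */
        si->flags |= SI_FL_READ_NULL;  /* persistent on the stream-int */
        ic->flags |= CF_READ_NULL;     /* event flag: wakes the stream */
    }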

This patch must be backported to 1.9 because it will be used by another patch to
fix the option abortonclose.
2019-03-18 15:50:23 +01:00
Christopher Faulet
2b9b6784b9 MINOR: stats: Move stuff about the stats status codes in stats files
The status code definitions (STAT_STATUS_*) and their string representations
(stat_status_codes) have been moved into the stats files. There is no reason
to keep them in the proto_http files.
2019-03-15 14:34:59 +01:00
Christopher Faulet
3c2ecf75c8 MINOR: stats: Add the status code STAT_STATUS_IVAL to handle invalid requests
This patch must be backported to 1.9 because a bug fix depends on it.
2019-03-15 14:34:52 +01:00
Olivier Houchard
1d7f37a2cb BUG/MAJOR: tasks: Use the TASK_GLOBAL flag to know if we're in the global rq.
In task_unlink_rq(), to decide if we should lock the global runqueue lock,
use the TASK_GLOBAL flag instead of relying on t->thread_mask being tid_bit,
as the mask could be tid_bit while the task is still in the global runqueue,
if another thread woke that task for us.
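
In other words, the decision keys on the task's own flag rather than its
thread mask; a hedged sketch of the idea (simplified stand-ins, not the real
task_unlink_rq()):

    #define TASK_GLOBAL 0x2  /* hypothetical bit value */
    struct task { unsigned short state; unsigned long thread_mask; };

    static int needs_global_rq_lock(const struct task *t)
    {
        /* t->thread_mask == tid_bit is unreliable here: another thread may
         * have queued the task into the global runqueue on our behalf */
        return (t->state & TASK_GLOBAL) != 0;
    }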

This should be backported to 1.9.
2019-03-14 16:19:11 +01:00
Olivier Houchard
237985b228 MEDIUM: connections: Use _HA_ATOMIC_*
Use _HA_ATOMIC_ instead of HA_ATOMIC_ because we know we don't need barriers.
2019-03-14 15:55:15 +01:00
Olivier Houchard
9f8d821a55 MEDIUM: list: Use _HA_ATOMIC_*
Use _HA_ATOMIC_ instead of HA_ATOMIC_ because we know we don't need barriers.
2019-03-14 15:55:15 +01:00
Olivier Houchard
17fbb4eb3f MEDIUM: list: Remove useless barriers.
Don't bother forcing a barrier after using HA_ATOMIC_XCHG if we're about
to check the returned value anyway.
2019-03-14 15:55:15 +01:00
Willy Tarreau
b0cef35b09 BUG/MEDIUM: list: fix incorrect pointer unlocking in LIST_DEL_LOCKED()
Injecting on a saturated listener started to exhibit some deadlocks
again between LIST_POP_LOCKED() and LIST_DEL_LOCKED(). Olivier found
it was due to a leftover from a previous debugging session. This patch
fixes it.

This will have to be backported if the other LIST_*_LOCKED() patches
are backported.
2019-03-13 14:15:54 +01:00
Willy Tarreau
df23c0ce45 MINOR: config: continue to rely on DEFAULT_MAXCONN to set the minimum maxconn
Some packages used to rely on DEFAULT_MAXCONN to set the default global
maxconn value to use regardless of the initial ulimit. The recent changes
set the lowest bound to 100 so that it is compatible with almost any
environment. Now that DEFAULT_MAXCONN is not needed for anything else, we
can use it as the lowest bound applied when maxconn is not configured. This
way it retains its original purpose of setting the default maxconn value,
even though most of the time the effective value will be higher thanks to
the automatic computation based on "ulimit -n".
2019-03-13 10:10:49 +01:00
Willy Tarreau
ca783d4ee6 MINOR: config: remove obsolete use of DEFAULT_MAXCONN at various places
This entry was still set to 2000 but was not used anywhere anymore. The only
places where it appeared were as an alias to SYSTEM_MAXCONN, which forces it,
so let's turn these ones into SYSTEM_MAXCONN and remove the default value for
DEFAULT_MAXCONN. SYSTEM_MAXCONN still defines the upper bound however.
2019-03-13 10:10:25 +01:00
Olivier Houchard
20872763dd MEDIUM: memory: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:38 +01:00
Olivier Houchard
4c28328572 MEDIUM: task: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
aa4d71a7fe MEDIUM: server: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
11ecfd1c01 MEDIUM: proxy: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
d5f9b19196 MEDIUM: freq_ctr: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
d360879fb5 MEDIUM: fd: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
8beb27e9ce MEDIUM: xref: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
a2735340fb MEDIUM: applets: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
d2b5d16187 MEDIUM: various: Use __ha_barrier_atomic* when relevant.
When protecting data modified by atomic operations, use __ha_barrier_atomic*
to avoid unneeded barriers on x86.
2019-03-11 17:02:37 +01:00
Olivier Houchard
d0c3b8894a MINOR: threads: Add macros to do atomic operation with no memory barrier.
Add variants of the HA_ATOMIC* macros, prefixed with a _, that do the
atomic operation with no barrier generated by the compiler. It is expected
that the developer will add barriers manually where needed.
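
The idea, roughly, as a sketch with gcc's __atomic builtins (the real macros
cover many more operations and older compilers):

    /* regular macro: sequentially-consistent, compiler emits barriers */
    #define HA_ATOMIC_ADD(val, i)   __atomic_add_fetch(val, i, __ATOMIC_SEQ_CST)
    /* '_' variant: same operation, relaxed ordering; the developer places
     * explicit barriers where the ordering actually matters */
    #define _HA_ATOMIC_ADD(val, i)  __atomic_add_fetch(val, i, __ATOMIC_RELAXED)
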
2019-03-11 17:02:37 +01:00
Olivier Houchard
113537967c MEDIUM: threads: Use __ATOMIC_SEQ_CST when using the newer atomic API.
When using the new __atomic* API, ask the compiler to generate barriers.
Variants of those functions that don't generate barriers will be added later.
Before that, using HA_ATOMIC* would not generate any barrier, and some parts
of the code should be reviewed and missing barriers should be added.

This should probably be backported to 1.8 and 1.9.
2019-03-11 17:02:37 +01:00
Olivier Houchard
9abcf6ef9a MINOR: threads: Implement __ha_barrier_atomic*.
Implement __ha_barrier functions to be used when trying to protect data
modified by atomic operations (except when using HA_ATOMIC_STORE).
On Intel, atomic operations use either the LOCK prefix or xchg, and both
act as full barriers, so there's no need to add an extra barrier.
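
So on x86 these barriers can degenerate to a pure compiler barrier; a minimal
sketch of that case (assumed form, other architectures need real fences):

    /* x86: LOCK-prefixed ops and xchg already order memory; only keep the
     * compiler from reordering around them */
    static inline void __ha_barrier_atomic_load(void)
    {
        __asm__ __volatile__("" ::: "memory");
    }

    static inline void __ha_barrier_atomic_store(void)
    {
        __asm__ __volatile__("" ::: "memory");
    }
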
2019-03-11 17:02:37 +01:00
Olivier Houchard
92fce85d03 MINOR: fd: Remove debugging code.
Remove a debugging test and a call to abort(); they're no longer needed.
2019-03-08 16:05:25 +01:00
Willy Tarreau
1e56c70cc9 OPTIM: task: limit the impact of memory barriers in task_remove_from_task_list()
In this function we end up with successive locked operations then a
store barrier, and in addition the compiler has to emit less efficient
code due to a longer jump. There's no strict need to update the
tasks_run_queue counter before clearing the task's leaf pointer, so
let's swap the two operations and benefit from a single barrier as much
as possible. This code is on the hot path and shows about half a percent
of improvement with 8 threads.
2019-03-07 18:44:12 +01:00
Willy Tarreau
0cf33176bd MINOR: listener: move thr_idx from the bind_conf to the listener
Tests show that it's slightly faster to have this field in the listener.
The cache walk patterns are under heavy stress and having only this field
written to in the bind_conf was wasting a cache line that was heavily
read. Let's move this close to the other entries already written to in
the listener. Warning, the position does have an impact on peak performance.
2019-03-07 14:08:26 +01:00
Willy Tarreau
9f1d4e7f7f CLEANUP: listener: remove old thread bit mapping
Now that the P2C algorithm for the accept queue is removed, we don't
need to map a number to a thread bit anymore, so let's remove all
these fields which are taking quite some space for no reason.
2019-03-07 13:59:04 +01:00
Willy Tarreau
d87a67f9bc MINOR: tools: implement my_flsl()
We already have my_ffsl() to find the lowest bit set in a word, and
this patch implements the search for the highest bit set in a word.
On x86 it uses the bsr instruction and on other architectures it
uses an efficient implementation.
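
A sketch of such a function (bsr on x86, portable loop elsewhere; details like
the return convention may differ from the real implementation):

    /* returns the 1-based position of the highest bit set, 0 if none */
    static inline unsigned int my_flsl(unsigned long a)
    {
    #if defined(__x86_64__) || defined(__i386__)
        unsigned long r;
        if (!a)
            return 0;
        __asm__("bsr %1,%0" : "=r" (r) : "rm" (a));
        return r + 1;
    #else
        unsigned int r = 0;
        while (a) {
            a >>= 1;
            r++;
        }
        return r;
    #endif
    }
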
2019-03-07 13:48:04 +01:00
Willy Tarreau
fc630bd373 MINOR: listener: improve incoming traffic distribution
By picking two randoms following the P2C algorithm, we seldom observe
asymmetric loads on bursts of small session counts. This is typically
what makes h2load take a bit of time to complete the last 100% because
if a thread gets two connections while the other ones only have one,
it takes twice the time to complete its work.

This patch proposes a modification of the p2c algorithm which seems
more suitable to this case : it mixes a rotating index with a random.
This way, we're certain that all threads are consulted in turn, and at the
same time we're not forced to use the one we're giving a chance to.

This significantly increases the traffic rate. Now h2load shows faster
completion, and the average H2 request rate and the TLS resume rate both
increase by a bit more than 5% compared to pure P2C.

The index was placed into the struct bind_conf because 1) it's faster
there and 2) it's the best place to optimally distribute traffic among a
group of listeners. It's the only runtime-modified element there and
it will be quite cache-hot.
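
A hedged sketch of the mix (hypothetical names and load metric; the real
candidate weighing is more involved):

    #include <stdlib.h>

    /* one candidate from a rotating index so every thread is consulted in
     * turn, one picked at random; keep the less loaded of the two */
    static unsigned int select_thread(unsigned int *thr_idx,
                                      const unsigned int *load,
                                      unsigned int nbthread)
    {
        unsigned int t1, t2;

        t1 = __atomic_fetch_add(thr_idx, 1, __ATOMIC_RELAXED) % nbthread;
        t2 = (unsigned int)random() % nbthread;
        return load[t1] <= load[t2] ? t1 : t2;
    }
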
2019-03-07 13:48:04 +01:00
Willy Tarreau
b238b12e98 MINOR: task: use LIST_DEL_INIT() to remove a task from the queue
By using LIST_DEL_INIT() instead of LIST_DEL()+LIST_INIT() we manage
to bump the peak connection rate by no less than 3% on 8 threads.
The perf top profile shows much less contention in this area which
suffered from the second reload.
2019-03-07 11:45:44 +01:00
Willy Tarreau
c5bd311b2a MINOR: lists: add a LIST_DEL_INIT() macro
It turns out that we call LIST_DEL+LIST_INIT very frequently and that
the compiler doesn't know what pointers get modified in the e->n->p
and e->p->n dance, so when LIST_INIT() is called, it reloads these
pointers, which is quite a bit of a mess in terms of performance.

This patch adds LIST_DEL_INIT() to perform the two operations at once
using local temporary variables so that the compiler knows these
pointers are left unaffected.
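
A sketch of what such a macro can look like (close in spirit to the real one,
using GNU statement expressions; the local temporaries are what lets the
compiler keep the pointers in registers):

    #define LIST_DEL_INIT(el) ({                    \
        typeof(el)        __ret = (el);             \
        typeof(__ret->n)  __n   = __ret->n;         \
        typeof(__ret->p)  __p   = __ret->p;         \
        __n->p = __p;                               \
        __p->n = __n;                               \
        __ret->n = __ret->p = __ret;  /* re-init */ \
        __ret;                                      \
    })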
2019-03-07 11:45:44 +01:00
Frédéric Lécaille
5f33f85ce8 MINOR: sample: Extract some protocol buffers specific code.
We move the code responsible for parsing protocol buffers messages
inside gRPC messages from sample.c to include/proto/protocol_buffers.h
so that it can be reused to cascade "ungrpc" converters.
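
For illustration, parsing at this level is mostly varint handling; a minimal
decoder in that spirit (not the moved code itself):

    #include <stddef.h>
    #include <stdint.h>

    /* decode a protobuf varint; returns bytes consumed, 0 on error */
    static size_t pbuf_decode_varint(const unsigned char *buf, size_t len,
                                     uint64_t *val)
    {
        uint64_t v = 0;
        size_t i;

        for (i = 0; i < len && i < 10; i++) {
            v |= (uint64_t)(buf[i] & 0x7f) << (7 * i);
            if (!(buf[i] & 0x80)) {   /* MSB clear: last byte */
                *val = v;
                return i + 1;
            }
        }
        return 0;                     /* truncated or oversized */
    }
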
2019-03-06 15:36:02 +01:00
Frédéric Lécaille
756d97f205 MINOR: sample: Rework gRPC converter code.
From now on, "ungrpc" may take a second optional argument to provide
the protocol buffers type used to encode the field value to be extracted.
When absent, the field value is extracted as a binary sample, which may then
be followed by other converters like "hex", which takes a binary sample as
input. When this second argument is a type which does not match the one found
by "ungrpc", the field is considered as not found even if present.

With this patch we also remove the useless "varint" and "svarint" converters.

Update the documentation about the "ungrpc" converter.
2019-03-05 11:04:23 +01:00
Frédéric Lécaille
7c93e88d0c MINOR: sample: Code factorization "ungrpc" converter.
Parsing a protocol buffer field always consists in either skipping the field
if it is not the one looked for, or storing its value if it is found.
So, with this patch we factorize the "ungrpc" converter code a little bit.
2019-03-05 11:03:53 +01:00
Willy Tarreau
967de20a43 BUG/MEDIUM: list: fix again LIST_ADDQ_LOCKED
Well, that's becoming embarrassing. Now this fixes commit 4ef6801c
("BUG/MEDIUM: list: correct fix for LIST_POP_LOCKED's removal of last
element") which itself tried to fix commit 285192564. This fix only
works under low contention and was tested with the listener's queue.
With the idle conns it's obvious that it's still wrong, since adding
more than one element to the list leaves an LLIST_BUSY pointer in
the list's head. This was visible when accumulating idle connections
in a server's list.

This new version of the fix almost goes back to the original code,
except that since then we addressed issues with expectedly idempotent
operations that were not. Now the code has been verified on paper again
and has survived 300 million connections spread over 4 threads.

This will have to be backported if the commit above is backported.
2019-03-04 14:09:22 +01:00
Willy Tarreau
bf6964007a MINOR: global: keep a copy of the initial rlim_fd_cur and rlim_fd_max values
Let's keep a copy of these initial values. They will be useful to
compute automatic maxconn, as well as to restore proper limits when
doing an execve() on external checks.
2019-03-01 10:40:30 +01:00
Frédéric Lécaille
645635da84 MINOR: peers: Add a message for heartbeat.
This patch implements a peer heartbeat feature to prevent any haproxy peer
from reconnecting too often, consuming sockets for nothing.

To do so, we add a new message, PEER_MSG_CTRL_HEARTBEAT, to the
PEER_MSG_CLASS_CONTROL class of peer control messages. A ->heartbeat field is
added to the peer structs to store the heartbeat timeout value, which is
handled by the same function as ->reconnect to control the session timeouts.
A 2-byte heartbeat message is sent every 3s when no updates have to be sent.
This way, the peer which receives such a message is sure the remote peer is
still alive. So, it resets the ->reconnect peer session timeout to its initial
value (5s). This prevents any reconnection to an already connected, alive peer.
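
A hedged sketch of the timing logic (hypothetical helpers and millisecond
clock; the real code lives inside the peers' session handler):

    struct peer { unsigned int heartbeat, reconnect; };
    enum { PEER_MSG_CTRL_HEARTBEAT = 1 };          /* hypothetical value */
    void peer_send_ctrl(struct peer *p, int msg);  /* hypothetical helper */

    /* sender: when nothing was sent for 3s, emit the 2-byte heartbeat */
    static void peer_maybe_heartbeat(struct peer *p, unsigned int now_ms)
    {
        if (now_ms >= p->heartbeat) {
            peer_send_ctrl(p, PEER_MSG_CTRL_HEARTBEAT);
            p->heartbeat = now_ms + 3000;
        }
    }

    /* receiver: a heartbeat proves the remote peer is alive, so push the
     * reconnect timeout back to its initial value (5s) */
    static void peer_recv_heartbeat(struct peer *p, unsigned int now_ms)
    {
        p->reconnect = now_ms + 5000;
    }
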
2019-03-01 09:33:26 +01:00
Willy Tarreau
c8d5b95e6d MEDIUM: config: don't enforce a low frontend maxconn value anymore
Historically the default frontend's maxconn used to be quite low (2000),
which was sufficient two decades ago but often proved to be a problem
when users had purposely set the global maxconn value but forgot to set
the frontend's.

There is no point in keeping this arbitrary limit for frontends : when
the global maxconn is lower, it's already too high and when the global
maxconn is much higher, it becomes a limiting factor which causes trouble
in production.

This commit allows the value to be set to zero, which becomes the new
default value, to mean it's not directly limited, or in fact it's set
to the global maxconn. Since this operation used to be performed before
computing a possibly automatic global maxconn based on memory limits,
the calculation of the maxconn value and its propagation to the backends'
fullconn has now moved to a dedicated function, proxy_adjust_all_maxconn(),
which is called once the global maxconn is stabilized.

This comes with two benefits :
  1) a configuration missing "maxconn" in the defaults section will not
     limit itself to a magically hardcoded value but will scale up to the
     global maxconn ;

  2) when the global maxconn is not set and memory limits are used instead,
     the frontends' maxconn automatically adapts, and the backends' fullconn
     as well.
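
A simplified sketch of the adjustment described above (stand-in structures;
the fullconn propagation is reduced to a comment):

    #define PR_CAP_FE 0x1  /* stand-in for the real capability flag */
    struct proxy { unsigned int cap; int maxconn; struct proxy *next; };
    extern struct proxy *proxies_list;
    extern struct { int maxconn; } global;

    /* called once global.maxconn is final (configured or computed) */
    static void proxy_adjust_all_maxconn(void)
    {
        struct proxy *px;

        for (px = proxies_list; px; px = px->next) {
            if ((px->cap & PR_CAP_FE) && !px->maxconn)
                px->maxconn = global.maxconn;  /* 0 now means "global" */
        }
        /* backends' fullconn is then derived from the frontends' maxconn */
    }
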
2019-02-28 17:05:32 +01:00
Willy Tarreau
e2711c7bd6 MINOR: listener: introduce listener_backlog() to report the backlog value
In an attempt to provide automatic maxconn settings, we need to
decorrelate a listener's backlog and maxconn so that these values can be
independent. This introduces a listener_backlog() function which retrieves
the backlog value from the listener's backlog, then the frontend's, then the
listener's maxconn, then the frontend's, or falls back to 1024. This
corresponds to what was done in cfgparse.c to force a value there, except for
the last fallback, which was not set there since the frontend's maxconn is
always known.
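
The fallback chain reads naturally as code; a sketch assuming direct field
access (the real structure navigation may differ):

    /* illustrative structures; the real ones are much richer */
    struct proxy     { int backlog, maxconn; };
    struct bind_conf { struct proxy *frontend; };
    struct listener  { int backlog, maxconn; struct bind_conf *bind_conf; };

    int listener_backlog(const struct listener *l)
    {
        if (l->backlog)
            return l->backlog;
        if (l->bind_conf->frontend->backlog)
            return l->bind_conf->frontend->backlog;
        if (l->maxconn)
            return l->maxconn;
        if (l->bind_conf->frontend->maxconn)
            return l->bind_conf->frontend->maxconn;
        return 1024;
    }
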
2019-02-28 17:05:29 +01:00
Willy Tarreau
4ef6801cd4 BUG/MEDIUM: list: correct fix for LIST_POP_LOCKED's removal of last element
As seen with Olivier, in the end the fix in commit 285192564 ("BUG/MEDIUM:
list: fix LIST_POP_LOCKED's removal of the last pointer") is wrong,
the code there was right but the bug was triggered by another bug in
LIST_ADDQ_LOCKED() which didn't properly update the list's head, inserting
elements in the wrong order.

This will have to be backported if the commit above is backported.
2019-02-28 16:51:28 +01:00
Willy Tarreau
01abd02508 BUG/MEDIUM: listener: use a self-locked list for the dequeue lists
There is a race in the listener's accept code that is very difficult to
reproduce, but which becomes much easier to trigger once connection limits
are properly enforced. It's an ABBA lock issue :

  - the following functions take l->lock then lq_lock :
      disable_listener, pause_listener, listener_full, limit_listener,
      do_unbind_listener

  - the following ones take lq_lock then l->lock :
      resume_listener, dequeue_all_listener

This is because __resume_listener() only takes the listener's lock
and expects to be called with lq_lock held. The problem can easily
happen when listener_full() and limit_listener() are called a lot
while in parallel another thread releases sessions for the same
listener using listener_release() which in turn calls resume_listener().

This scenario is more prevalent in 2.0-dev since the removal of the
accept lock in listener_accept(). However in 1.9 and before, a different
but extremely unlikely scenario can happen :

      thread1                                  thread2
         ............................  enter listener_accept()
  limit_listener()
         ............................  long pause before taking the lock
  session_free()
    dequeue_all_listeners()
      lock(lq_lock) [1]
         ............................  try_lock(l->lock) [2]
      __resume_listener()
        spin_lock(l->lock) =>WAIT[2]
         ............................  accept()
                                       l->accept()
                                       nbconn==maxconn =>
                                         listener_full()
                                           state==LI_LIMITED =>
                                             lock(lq_lock) =>DEADLOCK[1]!

In practice it is almost impossible to trigger it, because it requires
limiting both the listener's maxconn and the frontend's rate limit at the
same time, and releasing the listener when the connection rate goes below
the limit between the moment poll() returns the FD and the moment the lock
is taken (a few nanoseconds). But maybe with threads competing on the
same core it has more chances to appear.

This patch removes the lq_lock and replaces it with a lockless queue
for the listener's wait queue (well, technically speaking a self-locked
queue) brought by commit a8434ec14 ("MINOR: lists: Implement locked
variations.") and its few subsequent fixes. This relieves us from the
need of the lq_lock and removes the deadlock. It also gets rid of the
distinction between __resume_listener() and resume_listener() since the
only difference was the lq_lock. All listener removals from the list
are now unconditional to avoid races on the state. It's worth noting
that the list used to never be initialized and that it used to work
only thanks to the state tests, so the initialization has now been
added.

This patch must carefully be backported to 1.9 and very likely 1.8.
It is mandatory to be careful about replacing all manipulations of
l->wait_queue, global.listener_queue and p->listener_queue.
2019-02-28 16:08:54 +01:00
Willy Tarreau
c912f94b57 MINOR: server: remove a few unneeded LIST_INIT calls after LIST_DEL_LOCKED
Since LIST_DEL_LOCKED() and LIST_POP_LOCKED() now automatically reinitialize
the removed element, there's no need for keeping this LIST_INIT() call in the
idle connection code.
2019-02-28 16:08:54 +01:00
Willy Tarreau
4c747e86cd MINOR: list: make the delete and pop operations idempotent
These operations previously used to return a "locked" element, which is
a constraint when multiple threads try to delete the same element, because
the second one will block indefinitely. Instead, let's make sure that both
LIST_DEL_LOCKED() and LIST_POP_LOCKED() always reinitialize the element
after deleting it. This ensures that the second thread will immediately
unblock and succeed with the removal. It also secures the pop vs delete
competition that may happen when trying to remove an element that's about
to be dequeued.
2019-02-28 16:03:29 +01:00
Willy Tarreau
690d2ad4d2 BUG/MEDIUM: list: add missing store barriers when updating elements and head
Commit a8434ec14 ("MINOR: lists: Implement locked variations.")
introduced locked lists which use the elements pointers as locks
for concurrent operations. Under heavy stress the lists occasionally
fail. The cause is a missing barrier at some points when updating
the list element and the head : nothing prevents the compiler (or
CPU) from updating the list head first before updating the element,
making another thread jump to a wrong location. This patch simply
adds the missing barriers before these two operations.
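
The shape of the fix, sketched (illustrative insertion path;
__ha_barrier_store() stands for the project's store barrier):

    struct list { struct list *n, *p; };
    void __ha_barrier_store(void);  /* declared here for the sketch */

    /* publish 'el' between 'p' and 'n' */
    static void insert_between(struct list *p, struct list *n, struct list *el)
    {
        el->n = n;
        el->p = p;
        __ha_barrier_store();  /* element fully written before it is visible */
        p->n = el;
        __ha_barrier_store();  /* forward link published before the back link */
        n->p = el;
    }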

This will have to be backported if the commit above is backported.
2019-02-28 15:59:31 +01:00
Willy Tarreau
285192564d BUG/MEDIUM: list: fix LIST_POP_LOCKED's removal of the last pointer
There was a typo making the last updated pointer be the pre-last element's
prev instead of the last element's prev. It didn't show up during early
tests because the contention is very rare on this one and it's implicitly
recovered when updating the pointers to go to the next element, but it was
clearly visible in the listener_accept() tests by having all threads block
on LIST_POP_LOCKED() with n==p==LLIST_BUSY.

This will have to be backported if commit a8434ec14 ("MINOR: lists:
Implement locked variations.") is backported.
2019-02-28 15:59:31 +01:00
Willy Tarreau
bd20ad5874 BUG/MEDIUM: list: fix the rollback on addq in the locked lists
Commit a8434ec14 ("MINOR: lists: Implement locked variations.")
introduced locked lists which use the elements pointers as locks
for concurrent operations. A copy-paste typo in LIST_ADDQ_LOCKED()
causes corruption in the list in case the next pointer is already
held, as it restores the previous pointer into the next one. It
may impact the server pools.

This will have to be backported if the commit above is backported.
2019-02-28 15:10:15 +01:00
Willy Tarreau
149ab779cc MAJOR: threads: enable one thread per CPU by default
Threads have long matured by now, yet for most users their usage is still
not trivial. It's about time to enable them by default on platforms
where we know the number of CPUs bound. This patch does this, it counts
the number of CPUs the process is bound to upon startup, and enables as
many threads by default. Of course, "nbthread" still overrides this, but
if it's not set the default behaviour is to start one thread per CPU.

The default number of threads is reported in "haproxy -vv". Simply using
"taskset -c" is now enough to adjust this number of threads so that there
is no more need for playing with cpu-map. And thanks to the previous
patches on the listener, the vast majority of configurations will not
need to duplicate "bind" lines with the "process x/y" statement anymore
either, so a simple config will automatically adapt to the number of
processors available.
2019-02-27 14:51:50 +01:00
Willy Tarreau
7ac908bf8c MINOR: config: add global tune.listener.multi-queue setting
tune.listener.multi-queue { on | off }
  Enables ('on') or disables ('off') the listener's multi-queue accept which
  spreads the incoming traffic to all threads a "bind" line is allowed to run
  on instead of taking them for itself. This provides a smoother traffic
  distribution and scales much better, especially in environments where threads
  may be unevenly loaded due to external activity (network interrupts colliding
  with one thread for example). This option is enabled by default, but it may
  be forcefully disabled for troubleshooting or for situations where it is
  estimated that the operating system already provides a good enough
  distribution and connections are extremely short-lived.
2019-02-27 14:27:07 +01:00
Willy Tarreau
8a03408d81 MINOR: activity: add accept queue counters for pushed and overflows
It's important to monitor the accept queues to know if some incoming
connections had to be handled by their originating thread due to an
overflow. It's also important to be able to confirm thread fairness.
This patch adds "accq_pushed" to activity reporting, which reports
the number of connections that were successfully pushed into each
thread's queue, and "accq_full", which indicates the number of
connections that couldn't be pushed because the thread's queue was
full.
2019-02-27 14:27:07 +01:00
Willy Tarreau
1efafce61f MINOR: listener: implement multi-queue accept for threads
There is one point where we can migrate a connection to another thread
without taking risk, it's when we accept it : the new FD is not yet in
the fd cache and no task was created yet. It's still possible to assign
it a different thread than the one which accepted the connection. The
only requirement for this is to have one accept queue per thread, plus a
processing task per queue that is woken up each time an entry is added to
its queue.

This is a multiple-producer, single-consumer model. Entries are added
at the queue's tail and the processing task is woken up. The consumer
picks entries at the head and processes them in order. The accept queue
contains the fd, the source address, and the listener. Each entry of
the accept queue was rounded up to 64 bytes (one cache line) to avoid
cache aliasing because tests have shown that otherwise performance
suffers a lot (5%). A test has shown that it's important to have at
least 256 entries for the rings, as at 128 it's still possible to fill
them often at high loads on small thread counts.

The processing task does almost nothing except calling the listener's
accept() function and updating the global session and SSL rate counters
just like listener_accept() does on synchronous calls.

At this point the accept queue is implemented but not used.
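
A sketch of the ring's layout per the description (hypothetical struct; the
real entries pack the address more compactly to honor the cache-line budget):

    #include <sys/socket.h>

    #define ACCEPT_QUEUE_SIZE 256  /* >= 256 so the ring rarely fills */

    struct listener;  /* opaque here */

    /* multiple producers (accepting threads), one consumer task per ring */
    struct accept_queue_entry {
        struct listener *listener;     /* originating listener */
        int fd;                        /* freshly accepted socket */
        struct sockaddr_storage addr;  /* source address */
    } __attribute__((aligned(64)));    /* whole cache lines: no aliasing */

    struct accept_queue_ring {
        unsigned int head;             /* consumer picks here */
        unsigned int tail;             /* producers append here */
        struct accept_queue_entry entry[ACCEPT_QUEUE_SIZE];
    };
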
2019-02-27 14:27:07 +01:00