Commit Graph

180 Commits

Aurelien DARRAGON
01dfe17acf MEDIUM: server: add and use a separate last_change variable for internal use
The last_change server metric is used for two separate purposes. First, it
is used to report the last server state change date for stats and other
related metrics. But it is also used internally, including in sensitive
paths such as lb-related code, to take decisions or perform computations
(i.e. in srv_dynamic_maxconn()).

Since the last_change counter is now split over thread groups by 16eb0fa
("MAJOR: counters: dispatch counters over thread groups"), reading the
aggregated value has a cost, and we cannot afford to consult the
last_change value from srv_dynamic_maxconn() anymore. Moreover, since the
value is used to take decisions for the current process, we don't want
the variable to be updated by another process behind our back.

To prevent performance regression and sharing issues, let's instead add a
separate srv->last_change value, which is not updated atomically (given how
rare the updates are), and only serves for places where the use of the
aggregated last_change counter/stats (split over thread groups) is too
costly.
2025-06-30 16:26:25 +02:00
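
To illustrate the trade-off, a minimal C sketch (illustrative names only;
C11 atomics stand in for HAProxy's HA_ATOMIC_* macros, and the aggregation
is shown as a max across groups since last_change is a date):

  #include <stdatomic.h>

  #define NBTGROUPS_SKETCH 16     /* stand-in for global.nbtgroups */

  /* the shared, per-thread-group counter: reading it means aggregating */
  static _Atomic long shared_last_change[NBTGROUPS_SKETCH];

  static long read_aggregated_last_change(void)
  {
      long max = 0;

      for (int i = 0; i < NBTGROUPS_SKETCH; i++) {
          long v = atomic_load_explicit(&shared_last_change[i],
                                        memory_order_relaxed);
          if (v > max)
              max = v;            /* keep the most recent change date */
      }
      return max;                 /* O(nbtgroups): too costly for
                                     srv_dynamic_maxconn()'s hot path */
  }

  /* the new internal field: a plain variable, read directly, updated
   * without atomics since state changes are rare and process-local */
  struct srv_sketch {
      long last_change;
  };
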
Aurelien DARRAGON
16eb0fab31 MAJOR: counters: dispatch counters over thread groups
Most fe and be counters are good candidates for being shared between
processes. They are now grouped inside a "shared" struct sub-member under
be_counters and fe_counters.

Now that they are properly identified, they would greatly benefit from
being shared over thread groups to reduce the cost of atomic operations
when updating them. For this, we take the current tgid into account so that
each thread group only updates its own counters. For this to work, it is
mandatory that the "shared" member from {fe,be}_counters is initialized
AFTER global.nbtgroups is known, because each shared counter causes the stat
to be allocated global.nbtgroups times. When updating a counter without
concurrency, the first counter from the array may be updated.

To consult the shared counters (which requires aggregation of per-tgid
individual counters), some helper functions were added to counter.h to
ease code maintenance and avoid computing errors.
2025-06-05 09:59:38 +02:00
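
A minimal sketch of the per-thread-group scheme, assuming illustrative
names and using C11 atomics in place of HAProxy's internal macros:

  #include <stdatomic.h>
  #include <stdlib.h>

  struct shared_ctr_sketch {
      _Atomic unsigned long long cum_conn;  /* one slot per thread group */
  };

  static struct shared_ctr_sketch *shared;  /* array of nbtgroups slots */
  static int nbtgroups;

  /* must run AFTER the number of thread groups is known */
  static int counters_init_sketch(int tgroups)
  {
      nbtgroups = tgroups;
      shared = calloc(nbtgroups, sizeof(*shared));
      return shared != NULL;
  }

  /* each thread group only touches its own slot, cutting contention;
   * tgid is 1-based as in HAProxy */
  static void counter_inc_sketch(int tgid)
  {
      atomic_fetch_add_explicit(&shared[tgid - 1].cum_conn, 1,
                                memory_order_relaxed);
  }

  /* consulting the value requires aggregating all slots, hence the
   * helper functions mentioned above */
  static unsigned long long counter_read_sketch(void)
  {
      unsigned long long sum = 0;

      for (int i = 0; i < nbtgroups; i++)
          sum += atomic_load_explicit(&shared[i].cum_conn,
                                      memory_order_relaxed);
      return sum;
  }
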
Aurelien DARRAGON
a0dcab5c45 MAJOR: counters: add shared counters base infrastructure
Shareable counters are now tagged as shared counters and are dynamically
allocated in a separate memory area as a prerequisite for being stored
in a shared memory area. For now, GUID and thread groups are not taken into
account, this is only a first step.

Also, we ensure all counters are now manipulated using atomic operations;
namely, the "last_change" counter is now read from and written to using
atomic ops.

Despite the numerous changes caused by the counters being moved away from
counters struct, no change of behavior should be expected.
2025-06-05 09:58:58 +02:00
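
A small sketch of that atomic manipulation, with C11 atomics standing in
for the HA_ATOMIC_* wrappers actually used:

  #include <stdatomic.h>
  #include <time.h>

  static _Atomic long last_change_sketch;

  /* writer: a state change stores the current date atomically */
  static void on_state_change(void)
  {
      atomic_store_explicit(&last_change_sketch, (long)time(NULL),
                            memory_order_relaxed);
  }

  /* readers never see a torn value, whichever thread updated it */
  static long get_last_change(void)
  {
      return atomic_load_explicit(&last_change_sketch,
                                  memory_order_relaxed);
  }
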
Willy Tarreau
099c1b2442 BUG/MAJOR: queue: properly keep count of the queue length
The queue length was moved to its own variable in commit 583303c48
("MINOR: proxies/servers: Calculate queueslength and use it."), however a
few places were missed in pendconn_unlink() and assign_server_and_queue(),
resulting in never-decreasing counts on aborted streams. This was
reproduced when injecting more connections than the total backend
could stand in TCP mode and letting some of them time out in the
queue. No backport is needed; this only affects 3.2.
2025-05-17 10:46:10 +02:00
Willy Tarreau
99f5be5631 BUG/MAJOR: queue: lock around the call to pendconn_process_next_strm()
The extra call to pendconn_process_next_strm() made in commit cda7275ef5
("MEDIUM: queue: Handle the race condition between queue and dequeue
differently") was performed after releasing the server queue's lock,
which is incompatible with the calling convention for this function.
The result is random corruption of the server's streams list likely
due to picking old or incorrect pendconns from the queue, and in the
end infinitely looping on apparently already locked mt_list objects.
Just adding the lock fixes the problem.

It's very difficult to reproduce: it requires low maxconn values on
servers, stickiness on the servers (cookie), a long enough slowstart
(e.g. 10s), and regularly flipping servers up/down to re-trigger the
slowstart.

No backport is needed as this was only in 3.2.
2025-05-06 18:59:54 +02:00
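
A simplified sketch of the calling convention at stake (pthread
primitives and hypothetical names, not the actual queue code):

  #include <pthread.h>

  static pthread_spinlock_t sq_lock;  /* server queue lock (init omitted) */

  /* walks and edits the server's streams list: only valid under sq_lock */
  static void process_next_strm_sketch(void)
  {
      /* ... pick a pendconn, wake its owner ... */
  }

  static void caller_sketch(void)
  {
      pthread_spin_lock(&sq_lock);
      process_next_strm_sketch();     /* the fix: call before unlocking */
      pthread_spin_unlock(&sq_lock);
  }
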
Ilia Shipitsin
78b849b839 CLEANUP: assorted typo fixes in the code and comments
code, comments and doc actually.
2025-04-02 11:12:20 +02:00
Olivier Houchard
9253146b90 BUILD: queues: Use unsigned int when needed
Use unsigned int instead of int when calculating which thread group we
should dequeue from next, as the difference in signedness makes clang
unhappy.
2025-01-28 17:44:54 +01:00
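
A minimal example (not HAProxy code) of why the signedness matters once
the computed difference wraps and feeds a modulo used as an index:

  #include <stdio.h>

  int main(void)
  {
      unsigned int nbtgroups = 4;

      /* a signed difference keeps its sign through %, yielding a
       * negative "index" */
      int sdiff = -1;
      printf("%d\n", sdiff % (int)nbtgroups);   /* -1 */

      /* an unsigned difference wraps, and % yields a valid slot */
      unsigned int udiff = (unsigned int)-1;
      printf("%u\n", udiff % nbtgroups);        /* 3 */
      return 0;
  }
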
Olivier Houchard
b74ec1efc2 MINOR: queues: use __ha_cpu_relax() on failed CAS.
Make sure we call __ha_cpu_relax() if we fail a CAS, to help with
contention.
2025-01-28 16:00:19 +01:00
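
A sketch of the pattern on a hypothetical bounded-increment loop; C11
atomics and a compiler builtin stand in for HAProxy's primitives:

  #include <stdatomic.h>

  static inline void cpu_relax_sketch(void)
  {
  #if defined(__x86_64__) || defined(__i386__)
      __builtin_ia32_pause();       /* hint: we are busy-waiting */
  #endif
  }

  static void bounded_inc(_Atomic unsigned int *v, unsigned int max)
  {
      unsigned int old = atomic_load_explicit(v, memory_order_relaxed);

      while (old < max) {
          if (atomic_compare_exchange_weak(v, &old, old + 1))
              return;               /* CAS succeeded */
          cpu_relax_sketch();       /* failed CAS: relax before retrying,
                                       reducing cache-line ping-pong */
      }
  }
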
Olivier Houchard
26b3e5236f MEDIUM: servers/proxies: Switch to using per-tgroup queues.
For both servers and proxies, use one connection queue per thread-group,
instead of only one. Having only one can lead to severe performance
issues on NUMA machines; it is actually trivial to get the watchdog to
trigger on an AMD machine, with a server having a maxconn of 96 and an
injector that uses 160 concurrent connections.
We now have one queue per thread-group; however, when dequeuing, we're
dequeuing MAX_SELF_USE_QUEUE (currently 9) pendconns from our own queue,
before dequeueing one from another thread group, if available, to make
sure everybody is still running.
2025-01-28 12:49:41 +01:00
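
A rough sketch of that fairness policy, with hypothetical types and a
plain counter standing in for the real pendconn queues:

  #define MAX_SELF_USE_QUEUE 9

  struct tgq_sketch { int pending; };  /* one per thread group */

  static int dequeue_one(struct tgq_sketch *q)
  {
      if (q->pending <= 0)
          return 0;
      q->pending--;                    /* stand-in for popping a pendconn */
      return 1;
  }

  static void dequeue_round(struct tgq_sketch *q, int nbtg, int my_tg)
  {
      int self_used = 0;

      while (dequeue_one(&q[my_tg])) {
          if (++self_used < MAX_SELF_USE_QUEUE)
              continue;
          /* after 9 local entries, take one from another group so
           * everybody keeps running */
          for (int i = 0; i < nbtg; i++)
              if (i != my_tg && dequeue_one(&q[i]))
                  break;
          self_used = 0;
      }
  }
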
Olivier Houchard
583303c48b MINOR: proxies/servers: Calculate queueslength and use it.
For both proxies and servers, properly calculate queueslength, which is
the total number of elements in all of their queues (as they currently
only use one queue, it is equivalent to the number of elements in that
queue), and use it instead of the queue's length.
2025-01-28 12:49:41 +01:00
Olivier Houchard
659d5f6579 BUG/MEDIUM: queues: Adjust the proxy counters when appropriate
In process_srv_queue(), if we manage to successfully run an extra task,
don't forget to adjust the proxy's totpend and served counters accordingly.
Having an inaccurate served counter could lead to various subtle bugs, as it is
used when making load balancing decisions.

This should not be backported, unless cda7275ef5
is backported too.
2025-01-09 17:46:46 +01:00
Olivier Houchard
505480eeef CLEANUP: Remove pendconn_must_try_again().
Remove pendconn_must_try_again(), now that it no longer is used.
2024-12-24 14:10:06 +01:00
Olivier Houchard
cda7275ef5 MEDIUM: queue: Handle the race condition between queue and dequeue differently
There is a small race condition between a server checking whether there
is something left in the proxy queue and a stream being added to that
queue. If the server checks just before the stream is added to the queue,
and it no longer has any stream to deal with, then nothing will take
care of the stream, which may stay in the queue forever.
This was worked around with commit 5541d4995d, by checking for that exact
condition after adding the stream to the queue, and trying again to get
a server assigned if it is detected.
That fix led to multiple infinite loops, which got fixed, but it is not
unlikely that it could happen again. So let's fix the initial problem
differently: a single server may mark itself as ready, and it removes
itself once used. The principle is that when we discover that the just
queued stream is alone with no active request anywhere to dequeue it,
instead of rebalancing it, it will be assigned to that current "ready"
server that is available to handle it. The extra cost of the atomic ops
is negligible since the situation is super rare.
2024-12-24 14:10:06 +01:00
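
A minimal sketch of the "ready server" idea, assuming hypothetical names
(the real logic differs in its details):

  #include <stdatomic.h>
  #include <stddef.h>

  struct srv_sketch { int id; };

  static _Atomic(struct srv_sketch *) ready_srv;

  /* a server with a free slot marks itself ready (only one can) */
  static void srv_mark_ready(struct srv_sketch *s)
  {
      struct srv_sketch *expected = NULL;

      atomic_compare_exchange_strong(&ready_srv, &expected, s);
  }

  /* the freshly queued, otherwise-orphaned stream consumes the slot;
   * the server removes itself once used */
  static struct srv_sketch *take_ready_srv(void)
  {
      return atomic_exchange(&ready_srv, NULL);
  }
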
Olivier Houchard
3372a2ea00 BUG/MEDIUM: queues: Strictly respect maxconn for outgoing connections
The "served" field of struct server is used to know how many connections
are currently in use for a server. But served used to be incremented way
after the server was picked, so there were race conditions that could
lead more than maxconn connections to be allocated for one server. To
fix this, increment served way earlier, and make sure at the time that
it never goes past maxconn.
We now should never have more outgoing connections than set by maxconn.
2024-12-24 14:10:06 +01:00
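
The increment-early-but-never-past-maxconn rule can be sketched with a
CAS loop (C11 atomics, hypothetical helper name):

  #include <stdatomic.h>
  #include <stdbool.h>

  static bool srv_take_conn(_Atomic unsigned int *served,
                            unsigned int maxconn)
  {
      unsigned int cur = atomic_load_explicit(served,
                                              memory_order_relaxed);
      do {
          if (cur >= maxconn)
              return false;        /* full: pick another server */
          /* cur is refreshed on CAS failure, so the bound holds even
           * against concurrent pickers */
      } while (!atomic_compare_exchange_weak(served, &cur, cur + 1));
      return true;                 /* slot reserved before any use */
  }
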
Olivier Houchard
5b8899b6cc BUG/MEDIUM: queue: Make process_srv_queue return the number of streams
Make process_srv_queue() return the number of streams unqueued, as
pendconn_grab_from_px() did, since that number is used by
srv_update_status() to generate logs.

This should be backported up to 2.6 with
111ea83ed4
2024-12-23 15:03:40 +01:00
Olivier Houchard
b3cd5a4b86 CLEANUP: queues: Remove pendconn_grab_from_px().
pendconn_grab_from_px() is now unused, so just remove it.
2024-12-17 16:05:44 +01:00
Willy Tarreau
53f52e67a0 BUG/MEDIUM: queue: always dequeue the backend when redistributing the last server
An interesting bug was revealed by commit 5541d4995d ("BUG/MEDIUM: queue:
deal with a rare TOCTOU in assign_server_and_queue()"). When shutting
down a server to redistribute its connections, no check is made on the
backend's queue. If we're turning off the last server and the backend
has pending connections, these ones will wait there till the queue
timeout. But worse, since the commit above, we can enter an endless loop
in the following situation:

  - streams are present in the backend's queue
  - streams are purged on the last server via srv_shutdown_streams()
  - that one calls pendconn_redistribute(srv) which does not purge
    the backend's pendconns
  - a stream performs some load balancing and enters assign_server_and_queue()
  - assign_server() is called in turn
  - the LB algo is non-deterministic and there are entries in the
    backend's queue. The function notices it and returns SRV_STATUS_FULL
  - assign_server_and_queue() calls pendconn_add() to add the connection
    to the backend's queue
  - on return, pendconn_must_try_again() is called; it figures there's
    no stream served anymore on the server or the proxy, so it removes
    the pendconn from the queue and returns 1
  - assign_server_and_queue() loops back to the beginning to try again,
    while the conditions have not changed, resulting in an endless loop.

Ideally a change count should be used in the queues so that it's possible
to detect that some dequeuing happened and/or that a last stream has left.
But that wouldn't completely solve the problem, which is that we must
never ever add to a queue when there are no server streams left to
dequeue the new entries.

The current solution consists in making pendconn_redistribute() take care
of the proxy after the server in case there's no more server available on
the proxy. It at least ensures that no pending streams are left in the
backend's queue when shutting streams down or when the last server goes
down. The try_again loop remains necessary to deal with inevitable races
during pendconn additions. It could be limited to a few rounds, though it
should never trigger if the conditions are sufficient to permit it to
converge.

One way to reproduce the issue is to run a config with a single server
with maxconn 1 and plenty of threads, then run in loops series of:

 "disable server px/s;shutdown sessions server px/s;
  wait 100ms server-removable px/s; show servers conn px;
  enable server px/s"

on the CLI at ~10/s while injecting with around 40 concurrent conns at
40-100k RPS. In this case in 10s - 1mn the crash can appear with a
backtrace like this one for at least 1 thread:

  #0  pendconn_add (strm=strm@entry=0x17f2ce0) at src/queue.c:487
  #1  0x000000000064797d in assign_server_and_queue (s=s@entry=0x17f2ce0) at src/backend.c:1064
  #2  0x000000000064a928 in srv_redispatch_connect (s=s@entry=0x17f2ce0) at src/backend.c:1962
  #3  0x000000000064ac54 in back_handle_st_req (s=s@entry=0x17f2ce0) at src/backend.c:2287
  #4  0x00000000005ae1d5 in process_stream (t=t@entry=0x17f4ab0, context=0x17f2ce0, state=<optimized out>) at src/stream.c:2336

It's worth noting that other threads may often appear waiting after the
poller and one in server_atomic_sync() waiting for isolation, because
the event that is processed when shutting the server down is consumed
under isolation, and having fewer threads available to dequeue remaining
requests increases the probability of triggering the problem, though it is
not at all necessary (some less common traces never show them).

This should carefully be backported wherever the commit above was
backported.
2024-10-01 18:57:51 +02:00
Willy Tarreau
1385e33eb0 BUG/MINOR: queue: make sure that maintenance redispatches server queue
Turning a server to maintenance currently doesn't redispatch the server
queue unless there's an explicit "option redispatch" and no "option
persist", while the former has never really been the purpose of this
test. Better refine this so that forced maintenance also causes the
queue to be flushed, and possibly redispatched unless the proxy has
option persist. This way now when turning a server to maintenance,
the queue is immediately flushed and streams can decide what to do.

This can be backported, though there's no need to go far since it was
never directly reported and only noticed as part of debugging some
rare "shutdown sessions" strangeness, to which it might contribute.
2024-09-27 18:54:07 +02:00
Willy Tarreau
b11495652e BUG/MEDIUM: queue: implement a flag to check for the dequeuing
As unveiled in GH issue #2711, commit 5541d4995d ("BUG/MEDIUM: queue:
deal with a rare TOCTOU in assign_server_and_queue()") does have some
side effects in that it can occasionally cause an endless loop.

As Christopher analysed it, the problem is that process_srv_queue(),
which uses a trylock in order to leave only one thread in charge of
the dequeueing process, can lose the lock race against pendconn_add().
If this happens on the last served request, then there's no more thread
to deal with the dequeuing, and assign_server_and_queue() will loop
forever on a condition that was initially expected to be extremely
rare (and still is, except that now it can become sticky). Previously
what was happening is that such queued requests would just time out
and since that was very rare, nobody would notice.

The root of the problem really is that trylock. It was added so that
only one thread dequeues at a time but it doesn't offer only that
guarantee since it also prevents a thread from dequeuing if another
one is in the process of queuing. We need a different criterion.

What we're doing now is to set a flag "dequeuing" in the server, which
indicates that one thread is currently in the process of dequeuing
requests. This one is atomically tested, and only if no thread is in
this process, then the thread grabs the queue's lock and dequeues.
This way it will be serialized with pendconn_add() and no request
addition will be missed.

It is not certain whether the original race covered by the fix above
can still happen with this change, so better keep that fix for now.

Thanks to @Yenya (Jan Kasprzak) for the precise and complete report
that allowed the problem to be spotted.

This patch should be backported wherever the patch above was backported.
2024-09-13 08:35:47 +02:00
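
A simplified sketch of that flag-based serialization (hypothetical names;
pthread and C11 atomics in place of HAProxy's primitives):

  #include <stdatomic.h>
  #include <pthread.h>

  static _Atomic int dequeuing;
  static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

  static void process_queue_sketch(void)
  {
      /* only proceed if no other thread is already dequeuing; unlike a
       * trylock, this is not defeated by a thread merely queuing */
      if (atomic_exchange(&dequeuing, 1))
          return;

      pthread_mutex_lock(&queue_lock);  /* full lock: serialized with
                                           the pendconn_add() side */
      /* ... dequeue pending requests ... */
      pthread_mutex_unlock(&queue_lock);

      atomic_store(&dequeuing, 0);
  }
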
Willy Tarreau
1a8f3a368f MINOR: queue: add a function to check for TOCTOU after queueing
There's a rare TOCTOU case that happens from time to time with maxconn 1
and multiple threads. Between the moment we see the queue full and the
moment we queue a request, it's possible that the last request on the
server or proxy ended and that no other one is left to offer it its place.

Given that this whole code path is performance-critical and we cannot afford
to increase the lock duration, better recheck for the condition after
queueing. For this we need to be able to check for the condition and
cleanly dequeue a request. That's what this patch provides via the new
function pendconn_must_try_again(). It will catch more requests than
absolutely needed, though it will catch them all. It may find that around
1/1000 of requests are at risk, though testing shows that in practice,
it's around 1 per million that really gets stuck (other ones benefit
from timing and finishing late requests). Maybe in the future some
conditions might be refined but it's harmless.

What happens to such requests is that they're dequeued and their pendconn
freed, so that the caller can decide to try to LB or queue them again. For
now the function is not used, it's just added separately for easier tracking.
2024-07-29 09:27:01 +02:00
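
A bare-bones sketch of the check-after-queueing pattern, with
hypothetical fields standing in for the real conditions:

  struct ctx_sketch {
      int srv_served;   /* requests active on the server */
      int px_served;    /* requests active on the proxy */
      int queued;       /* our request sits in the queue */
  };

  static int must_try_again_sketch(struct ctx_sketch *c)
  {
      /* TOCTOU window: the last request may have ended between our
       * initial check and the enqueue */
      if (c->srv_served == 0 && c->px_served == 0) {
          c->queued = 0;  /* dequeue ourselves, free the pendconn */
          return 1;       /* caller may LB or queue again */
      }
      return 0;
  }
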
Amaury Denoyelle
634cc2a5d8 MINOR: counters: move last_change into counters struct
last_change was a member present in both the proxy and server structs. It
is used as an age statistic to report the last update of the object.

Move last_change into fe_counters/be_counters. This is necessary to be
able to manipulate it through generic stat column and report it into
stats-file.

Note that there is a change for proxy structure with now 2 different
last_change values, on frontend and backend side. Special care was taken
to ensure that the value is initialized only on the proxy side. The
other value is set to 0 unless a listen proxy is instantiated. For the
moment, only the backend counter is reported in stats. However, with two
distinct values now, stats could be extended to report it on both sides.
2024-05-02 10:55:25 +02:00
Willy Tarreau
69530f59ae MEDIUM: clock: replace timeval "now" with integer "now_ns"
This puts an end to the occasional confusion between the "now" date
that is internal, monotonic and not synchronized with the system's
date, and "date" which is the system's date and not necessarily
monotonic. Variable "now" was removed and replaced with a 64-bit
integer "now_ns" which is a counter of nanoseconds. It wraps every
585 years, so if all goes well (i.e. if humanity does not need
haproxy anymore in 500 years), it will just never wrap. This implies
that now_ns is never null and that the zero value can reliably be used
as "not set yet" for a timestamp if needed. This will also simplify
date checks where it becomes possible again to do "date1<date2".

All occurrences of "tv_to_ns(&now)" were simply replaced by "now_ns".
Due to the intricacies between now, global_now and now_offset, all 3
had to be turned to nanoseconds at once. It's not a problem since all
of them were solely used in 3 functions in clock.c, but they make the
patch look bigger than it really is.

The clock_update_local_date() and clock_update_global_date() functions
are now much simpler as there's no need anymore to perform conversions
nor to round the timeval up or down.

The wrapping continues to happen by presetting the internal offset in
the short future so that the 32-bit now_ms continues to wrap 20 seconds
after boot.

The start_time used to calculate uptime can still be turned to
nanoseconds now. One open question concerns global_now_ms, which is used
only for the freq counters. It's unclear whether there's more value in
using two variables that need to be synchronized sequentially like today
or to just use global_now_ns divided by 1 million. Both approaches will
work equally well on modern systems, the difference might come from
smaller ones. Better not change anything for now.

One benefit of the new approach is that we now have an internal date
with a resolution of the nanosecond and the precision of the microsecond,
which can be useful to extend some measurements given that timestamps
also have this resolution.
2023-04-28 16:08:08 +02:00
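
The 585-year figure checks out; a quick arithmetic verification:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      /* a 64-bit nanosecond counter wraps after 2^64 ns */
      double years = (double)UINT64_MAX
                   / (1e9 * 3600 * 24 * 365.25);  /* ns -> years */
      printf("%.1f years before wrapping\n", years);  /* ~584.5 */
      return 0;
  }
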
Willy Tarreau
eed5da1037 MINOR: clock: do not use now.tv_sec anymore
Instead we're using ns_to_sec(tv_to_ns(&now)) which allows the tv_sec
part to disappear. At this point, "now" is only used as a timeval in
clock.c where it is updated.
2023-04-28 16:08:08 +02:00
Willy Tarreau
ddab05b98a BUG/MEDIUM: queue/threads: limit the number of entries dequeued at once
When testing strong queue contention on a 48-thread machine, some crashes
would frequently happen due to process_srv_queue() never leaving and
processing pending requests forever. A dump showed more than 500000
loops at once. The problem is that other threads find it working so
they don't do anything and are free to process their pending requests.
Because of this, the dequeuing thread can be kept busy forever and does
not process its own requests anymore (fortunately the watchdog stops it).

This patch adds a limit to the number of rounds, it limits it to
maxpollevents, which is reasonable because it's also an indicator of
latency and batches size. However there's a catch. If all requests
are finished when the thread ends the loop, there might not be enough
anymore to restart processing the queue. Thus we tolerate re-entering
the loop to process one request at a time when it doesn't have any
anymore. This way we're leaving more room for another thread to take
on this task, and we're sure to eventually end this loop.

Doing this has also improved the overall dequeuing performance by ~20%
in highly contended situations with 48 threads.

It should be backported at least to 2.4, maybe even 2.2 since issues
were faced in the past on machines having many cores.
2022-07-30 10:00:59 +02:00
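
A rough sketch of the bounded loop (hypothetical stand-ins, not the
actual process_srv_queue()):

  static int pending = 1000;   /* stand-in for the server's queue */
  static int own_work = 1;     /* stand-in for this thread's own
                                  pending requests */

  static int dequeue_next(void)
  {
      return pending > 0 ? (pending--, 1) : 0;
  }

  static void process_queue_bounded(int maxpollevents)
  {
      int rounds = 0;

      while (dequeue_next()) {
          /* stop after maxpollevents rounds while we still have our
           * own work, leaving room for another thread to take over;
           * otherwise keep draining so the loop eventually ends */
          if (++rounds >= maxpollevents && own_work)
              break;
      }
  }
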
Willy Tarreau
3fdacdddaf MEDIUM: queue: revert to regular inter-task wakeups
Now that the inter-task wakeups are cheap, there's no point in using
task_instant_wakeup() anymore when dequeueing tasks. The use of the
regular task_wakeup() is sufficient there and will preserve a better
fairness: the test that went from 40k to 570k RPS now gives 580k RPS
(down from 585k RPS with previous commit). This essentially reverts
commit 27fab1dcb ("MEDIUM: queue: use tasklet_instant_wakeup() to
wake tasks").
2022-07-01 19:15:14 +02:00
Willy Tarreau
7cb9e6c6ba CLEANUP: stream: rename "csf" and "csb" to "scf" and "scb"
These are the stream connectors, let's give them consistent names. The
patch is large (405 locations) but totally trivial.
2022-05-27 19:33:34 +02:00
Willy Tarreau
03bd3952a6 MEDIUM: stream: remove the confusing SF_ADDR_SET flag
This flag is no longer needed now that it must always match the presence
of a destination address on the backend conn_stream. Worse, before the
previous patch, if it were to be accidentally removed while the address
is present, it
could result in a leak of that address since alloc_dst_address() would first
be called to flush it.

Its usage has a long history where addresses were stored in an area shared
with the connection, but as this is no longer the case, there's no reason
for putting this burden onto application-level code that should not focus
on setting obscure flags.

The only place where that made a small difference is in the dequeuing code
in case of queue redistribution, because previously the code would first
clear the flag, and only later when trying to deal with the queue, would
release the address. It's not even certain whether there would exist a
code path going to connect_server() without calling pendconn_dequeue()
first (e.g. retries on queue timeout maybe?).

Now the pendconn_dequeue() code will rely on SF_ASSIGNED to decide to
clear and release the address, since that flag is always set while in
a server's queue, and its clearance implies that we don't want to keep
the address. At least it remains consistent and there's no more risk of
leaking it.
2022-05-02 16:56:01 +02:00
Willy Tarreau
27fab1dcbc MEDIUM: queue: use tasklet_instant_wakeup() to wake tasks
It's long been known that queues didn't scale with threads for various
reasons ranging from the cost of the queue lock to the cost of the
massive amount of inter-thread wakeups.

But some recent reports showing deplorable perfs with threads used at
100% CPU helped us notice that the two elements above add on top of
each other:
  - with plenty of inter-thread wakeups, the scheduler takes a lot of
    time to dequeue pending tasks from the shared queue ;
  - the lock held by the scheduler to do this slows down subsequent
    task_wakeup() calls from the queue that are made under the
    queue's lock
  - the queue's lock slows down addition of new requests to the queue
    and adds up to the number of needed queue entries for a steady
    traffic.

But the cost of the shared queue has no reason for being paid because
it had already been paid when process_stream() added the request to
the queue. As such an instant wakeup is perfectly fit for this.

This is exactly what this patch does, it uses tasklet_instant_wakeup()
to dequeue pending requests, which has the effect of not bloating the
shared queue, hence not requiring the global queue lock, which in turn
results in the wakeup to be much faster, and the queue lock to be much
shorter. In the end, a test with 4k concurrent connections that was
being limited to 40-80k requests/s before with 16 threads, some of
which were stuck at 100% CPU now reaches 570k req/s with 4% idle.

Given that it's been found that it was possible to trigger the watchdog
on the queue lock under extreme conditions, and that such conditions
could happen when users want to protect their servers during a DoS, it
would definitely make sense to backport it to the most recent releases
(2.5 and 2.4 seem like good candidates especially because their scheduler
is modern enough to receive the change above). If a backport is performed,
the following patch is needed:

    MINOR: task: add a new task_instant_wakeup() function
2022-04-22 19:11:59 +02:00
Christopher Faulet
6b0a0fb2f9 CLEANUP: tree-wide: Remove any ref to stream-interfaces
Stream-interfaces are gone. Corresponding files can safely be removed. In
addition, comments are updated accordingly.
2022-04-13 15:10:16 +02:00
Christopher Faulet
8da67aae3e MEDIUM: stream-int/conn-stream: Move src/dst addresses in the conn-stream
The source and destination addresses at the applicative layer are moved from
the stream-interface to the conn-stream. This simplifies a bit the code and
it is a logical step to remove the stream-interface.
2022-04-13 15:10:14 +02:00
Tim Duesterhus
cc8348fbc1 MINOR: queue: Replace if() + abort() with BUG_ON()
see 5cd4bbd7a ("BUG/MAJOR: threads/queue: Fix thread-safety issues on the queues management")
2022-03-01 10:14:56 +01:00
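
The replaced pattern, side by side, with a local stand-in for the real
BUG_ON() macro (defined in HAProxy's bug.h):

  #include <stdio.h>
  #include <stdlib.h>

  #define BUG_ON_SKETCH(cond) do {                          \
      if (cond) {                                           \
          fprintf(stderr, "BUG at %s:%d: %s\n",             \
                  __FILE__, __LINE__, #cond);               \
          abort();                                          \
      }                                                     \
  } while (0)

  static void before(void *p) { if (!p) abort(); }      /* old style */
  static void after(void *p)  { BUG_ON_SKETCH(!p); }    /* new style:
                                                           also reports
                                                           where it died */
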
Christopher Faulet
4a0114b298 MINOR: backend: Always access the stream-int via the conn-stream
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the backend part.
2022-02-24 11:00:02 +01:00
Willy Tarreau
266d540549 BUG/MEDIUM: backend: fix possible sockaddr leak on redispatch
A subtle change of target address allocation was introduced with commit
68cf3959b ("MINOR: backend: rewrite alloc of stream target address") in
2.4. Prior to this patch, a target address was allocated by function
assign_server_address() only if none was previously allocated. After
the change, the allocation became unconditional. Most of the time it
makes no difference, except when we pass multiple times through
connect_server() with SF_ADDR_SET cleared.

The most obvious fix would be to avoid allocating that address there
when already set, but the root cause is that since introduction of
dynamically allocated addresses, the SF_ADDR_SET flag lies. It can
be cleared during redispatch or during a queue redistribution without
the address being released.

This patch instead gives back all its correct meaning to SF_ADDR_SET
and guarantees that when not set no address is allocated, by freeing
that address at the few places the flag is cleared. The flag could
even be removed so that only the address is checked but that would
require to touch many areas for no benefit.

The easiest way to test it is to send requests to a proxy with l7
retries enabled, which forwards to a server returning 500:

  defaults
    mode http
    timeout client 1s
    timeout server 1s
    timeout connect 1s
    retry-on all-retryable-errors
    retries 1
    option redispatch

  listen proxy
    bind *:5000
    server app 0.0.0.0:5001

  frontend dummy-app
    bind :5001
    http-request return status 500

Issuing "show pools" on the CLI will show that pool "sockaddr" grows
as requests are redispatched, and remains stable with the fix. Even
"ps" will show that the process' RSS grows by ~160B per request.

This fix will need to be backported to 2.4. Note that before 2.5,
there's no strm->si[1].dst, strm->target_addr must be used instead.

This addresses github issue #1499. Special thanks to Daniil Leontiev
for providing a well-documented reproducer.
2021-12-24 11:50:01 +01:00
Willy Tarreau
87154e3010 BUG/MAJOR: queue: better protect a pendconn being picked from the proxy
The locking in the dequeuing process was significantly improved by commit
49667c14b ("MEDIUM: queue: take the proxy lock only during the px queue
accesses") in that it tries hard to limit the time during which the
proxy's queue lock is held to the strict minimum. Unfortunately it's not
enough anymore, because we take up the task and manipulate a few pendconn
elements after releasing the proxy's lock (while we're under the server's
lock) but the task will not necessarily hold the server lock since it may
not have successfully found one (e.g. timeout in the backend queue). As
such, stream_free() calling pendconn_free() may release the pendconn
immediately after the proxy's lock is released while the other thread
currently proceeding with the dequeuing tries to wake up the owner's
task and dies in task_wakeup().

One solution consists in releasing the proxy's lock later. But tests have
shown that we'd have to sacrifice a significant share of the performance
gained with the patch above (roughly a 20% loss).

This patch takes another approach. It adds a "del_lock" to each pendconn
struct, which allows it to be kept referenced while the proxy's lock is
being released. It's mostly a serialization lock like a refcount, just to
maintain
the pendconn alive till the task_wakeup() call is complete. This way we can
continue to release the proxy's lock early while keeping this one. It had
to be added to the few points where we're about to free a pendconn, namely
in pendconn_dequeue() and pendconn_unlink(). This way we continue to
release the proxy's lock very early and there is no performance degradation.

This lock may only be held under the queue's lock to prevent lock
inversion.

No backport is needed since the patch above was merged in 2.5-dev only.
2021-08-31 18:37:13 +02:00
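
A simplified sketch of the del_lock idea (pthread primitives,
hypothetical names; lock-ordering and init details omitted):

  #include <pthread.h>

  struct pendconn_sketch {
      pthread_spinlock_t del_lock;  /* refcount-like serialization */
      /* ... */
  };

  static void dequeuer(struct pendconn_sketch *p,
                       pthread_mutex_t *px_lock)
  {
      pthread_spin_lock(&p->del_lock);  /* taken under the queue lock */
      pthread_mutex_unlock(px_lock);    /* proxy lock released early */
      /* ... task_wakeup() of the owner, with p pinned alive ... */
      pthread_spin_unlock(&p->del_lock);
  }

  static void freer(struct pendconn_sketch *p)
  {
      /* pendconn_free() path: waits until the dequeuer is done */
      pthread_spin_lock(&p->del_lock);
      pthread_spin_unlock(&p->del_lock);
      /* now safe to release the pendconn */
  }
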
Willy Tarreau
47ee44fb71 BUG/MINOR: queue/debug: use the correct lock labels on the queue lock
A dedicated queue lock was added by commit 16fbdda3c ("MEDIUM: queue:
use a dedicated lock for the queues (v2)") but during its rebase, some
labels were lost and left to SERVER_LOCK / PROXY_LOCK instead of
QUEUE_LOCK. It's harmless but can confuse the lock debugger, so better
fix it.

No backport is needed.
2021-06-24 16:00:18 +02:00
Willy Tarreau
19c5581b43 BUG: backend: stop looking for queued connections once there's no more
Commit ae0b12ee0 ("MEDIUM: queue: use a trylock on the server's queue")
introduced a hard-to-trigger bug that's more visible with a single thread:
if a server dequeues a connection and finds another free slot with no
connection to place there, process_srv_queue() will never break out of
the loop. In multi-thread it almost does not happen because other threads
bring new connections.

No backport is needed as it's only in -dev.
2021-06-24 15:56:07 +02:00
Willy Tarreau
d03adce575 MINOR: queue: simplify pendconn_unlink() regarding srv vs px
Since the code paths became exactly the same except for what log field
to update, let's simplify the code and move further code out of the
lock. The queue position update and the test for server vs proxy do not
need to be inside the lock.
2021-06-24 10:52:31 +02:00
Willy Tarreau
51c63f0f0a MINOR: queue: remove the px/srv fields from pendconn
Now we directly use p->queue to get to the queue, which is much more
straightforward. The performance on 100 servers and 16 threads
increased from 560k to 574k RPS, or 2.5%.

A lot more simplifications are possible, but the minimum was done at
this point.
2021-06-24 10:52:31 +02:00
Willy Tarreau
8429097c61 MINOR: queue: store a pointer to the queue into the pendconn
By following the queue pointer in the pendconn it will now be possible
to always retrieve the elements (index, srv, px, etc).
2021-06-24 10:52:31 +02:00
Willy Tarreau
ae0b12ee03 MEDIUM: queue: use a trylock on the server's queue
Doing so makes sure that threads attempting to wake up new connections
for a server will give up early if another thread is already in charge
of this. The goal is to avoid unneeded contention on low server counts.

Now with a single server with 16 threads in roundrobin we get the same
performance as with multiple servers, i.e. ~575kreq/s instead of ~496k
before. Leastconn is seeing a similar jump, from ~460k to ~560k (the
difference being the calls to fwlc_srv_reposition).

The overhead of process_srv_queue() is now around 2% instead of ~20%
previously.
2021-06-24 10:52:31 +02:00
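
The give-up-early behavior can be sketched with a plain trylock (a
pthread mutex standing in for the server queue's lock):

  #include <pthread.h>

  static pthread_mutex_t srv_queue_lock = PTHREAD_MUTEX_INITIALIZER;

  static void process_srv_queue_sketch(void)
  {
      if (pthread_mutex_trylock(&srv_queue_lock) != 0)
          return;  /* another thread is in charge; don't pile up */

      /* ... dequeue and wake pending connections ... */

      pthread_mutex_unlock(&srv_queue_lock);
  }
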
Willy Tarreau
49667c14ba MEDIUM: queue: take the proxy lock only during the px queue accesses
There's no point keeping the proxy lock held for a long time, it's
only needed when checking the proxy's queue, and keeping it prevents
multiple servers from dequeuing in parallel. Let's move it into
pendconn_process_next_strm() and release it ASAP. The pendconn
remains under the server queue lock's protection, guaranteeing that
no stream will release it while it's being touched.

For roundrobin, the performance increases by 76% (327k to 575k) on
16 threads. Even with a single server and maxconn=100, the performance
increases from 398 to 496 kreq/s. For leastconn, almost no change is
visible (less than one percent) but this is expected since most of the
time there is spent in fwlc_reposition() and fwlc_get_next_server().
2021-06-24 10:52:31 +02:00
Willy Tarreau
98c8910be7 MINOR: queue: use atomic-ops to update the queue's index (v2)
Doing so allows to retrieve and update the pendconn's queue index outside
of the queue's lock and to save one more percent of CPU on a highly-contended
backend.
2021-06-24 10:52:31 +02:00
Willy Tarreau
12529c0ed3 MINOR: queue: factor out the proxy/server queuing code (v2)
The code only differed by the nbpend_max counter. Let's have a pointer
to it and merge the two variants to always use a generic queue. It was
initially considered to put the max inside the queue structure itself,
but the stats support clearing values and maxes and this would have been
the only counter having to be handled separately there. Given that we
don't need this max anywhere outside stats, let's keep it where it is
and have a pointer to it instead.

The CAS loop to update the max remains. It was naively thought that it
would have been faster without atomic ops inside the lock, but this is
not the case for the simple reason that it is a max: it converges very
quickly and never has to perform the check anymore. Thus this code is
better out of the lock.

The queue_idx is still updated inside the lock since that's where the
idx is updated, though it could be performed using atomic ops given
that it's only used to roughly count places for logging.
2021-06-24 10:52:31 +02:00
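
A sketch of such a converging max-update loop in C11 atomics
(hypothetical name):

  #include <stdatomic.h>

  static void update_queue_max(_Atomic unsigned int *max,
                               unsigned int now)
  {
      unsigned int old = atomic_load_explicit(max,
                                              memory_order_relaxed);
      /* once the max has converged, now <= old and no atomic write is
       * ever attempted again */
      while (now > old) {
          if (atomic_compare_exchange_weak(max, &old, now))
              break;  /* old is refreshed on failure; re-test, retry */
      }
  }
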
Willy Tarreau
a0e9c55ab1 MEDIUM: queue: determine in process_srv_queue() if the proxy is usable (v2)
By doing so we can move some evaluations outside of the lock and the
loop.
2021-06-24 10:52:31 +02:00
Willy Tarreau
9ab78293bf MEDIUM: queue: simplify again the process_srv_queue() API (v2)
This basically undoes the API changes that were performed by commit
0274286dd ("BUG/MAJOR: server: fix deadlock when changing maxconn via
agent-check") to address the deadlock issue: since process_srv_queue()
doesn't use the server lock anymore, it doesn't need the "server_locked"
argument, so let's get rid of it before it gets used again.
2021-06-24 10:52:31 +02:00
Willy Tarreau
16fbdda3c3 MEDIUM: queue: use a dedicated lock for the queues (v2)
Till now whenever a server or proxy's queue was touched, this server
or proxy's lock was taken. Not only does this require distinct code paths,
but it also causes unnecessary contention with other uses of these locks.

This patch adds a lock inside the "queue" structure that will be used
the same way by the server and the proxy queuing code. The server used
to use a spinlock and the proxy an rwlock, though the queue only used
it for locked writes. This new version uses a spinlock since we don't
need the read lock part here. Tests have not shown any benefit nor cost
in using this one versus the rwlock so we could change later if needed.

The lower contention on the locks increases the performance from 362k
to 374k req/s on 16 threads with 20 servers and leastconn. The gain
with roundrobin even increases by 9%.

This is tagged medium because the lock is changed, but no other part of
the code touches the queues, with or without locking, so this should
remain invisible.
2021-06-24 10:52:31 +02:00
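
A sketch of the direction taken, with illustrative field names rather
than the exact struct from the tree:

  #include <pthread.h>

  struct queue_sketch {
      pthread_spinlock_t lock;  /* a spinlock: the old rwlock's read
                                   side was never used anyway */
      void *head;               /* stand-in for the pendconn tree */
      unsigned int length;
  };

Server and proxy queuing code can then take q->lock the same way,
removing the distinct code paths mentioned above.
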
Willy Tarreau
9cef43acab MEDIUM: queue: update px->served and lb's take_conn once per loop
There's no point doing atomic incs over px->served/px->totpend under the
locks from the inner loop, as this value is used by the LB algorithms but
not during the dequeuing step. In addition, the LB algo's take_conn()
doesn't need to be refreshed for each and every connection taken
under the lock, it can be performed once at the end and out of the
lock.

While the gain on roundrobin is not noticeable (only the atomic inc),
on leastconn which uses take_conn(), the performance increases from
355k to 362k req/s on 16 threads.
2021-06-24 10:09:40 +02:00
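
A sketch of the batching idea (hypothetical names; C11 atomics):

  #include <stdatomic.h>

  static _Atomic unsigned int px_served;

  static int remaining = 3;  /* stand-in for the pending connections */
  static int grab_next_pendconn(void)
  {
      return remaining ? remaining-- : 0;
  }

  static void dequeue_loop_sketch(void)
  {
      unsigned int done = 0;

      while (grab_next_pendconn())
          done++;                  /* no atomic op inside the loop */

      if (done) {
          atomic_fetch_add(&px_served, done);  /* one update per loop */
          /* likewise, the LB algo's take_conn() refresh would happen
           * once here, out of the lock */
      }
  }
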
Willy Tarreau
a48905bad8 Revert "MEDIUM: queue: make pendconn_process_next_strm() only return the pendconn"
This reverts commit 5304669e1b.

The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
2021-06-24 09:55:59 +02:00
Willy Tarreau
d83c98eb14 Revert "MINOR: queue: update proxy->served once out of the loop"
This reverts commit 3e92a31783.

The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
2021-06-24 09:55:14 +02:00
Willy Tarreau
e76fc3253d Revert "MEDIUM: queue: refine the locking in process_srv_queue()"
This reverts commit 1b648c857b.

The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
2021-06-24 09:55:14 +02:00