Since 04276f3d ("MEDIUM: server: split the address and the port into two
different fields") we should not use srv->addr to store the server's
port and should rely on srv->svc_port instead.
For sink servers, we correctly set ->svc_port upon server creation, but
we didn't use it when initializing the address for the connection.
As a result, FQDN resolution will not work properly with sink servers.
Luckily, this used to work by accident, because sink servers were
resolved using the PA_O_RESOLVE flag in str2sa_range(), which made
srv->addr contain the port in addition to the address.
But this will fail to work when FQDN resolution is postponed because only
->svc_port will contain the proper server port upon resolution.
For instance, FQDN resolution with servers from log backends (which are
resolved as regular servers, that is, without the PA_O_RESOLVE) will fail
to work because of this.
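As an illustration, here is a minimal hedged sketch of the idea,
assuming the sink applet init path (set_host_port() is the existing
tools helper):

    /* build the connection address from the resolved address and the
     * port stored separately in ->svc_port
     */
    struct sockaddr_storage addr = srv->addr; /* may come from FQDN resolution */
    set_host_port(&addr, srv->svc_port);      /* not the port in srv->addr */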
This may be backported as far as 2.2, even though the bug didn't have
noticeable effects for versions below 2.9.
[In 2.2, sink_forward_session_init() didn't exist; the fix should be
applied in sink_forward_session_create() instead.]
This bug was introduced with 29b76ca ("BUG/MEDIUM: server/log: "mode log"
after server keyword causes crash")
Indeed, we cannot safely rely on addr_proto being set when str2sa_range()
returns in parse_server() (even if SRV_PARSE_PARSE_ADDR is set), because
proto lookup might be bypassed when FQDN addresses are involved.
Unfortunately, the above patch wrongly assumed that proto would always
be set when SRV_PARSE_PARSE_ADDR was passed to parse_server() (so when
str2sa_range() was called), resulting in invalid postparsing checks being
performed, which could as well lead to crashes with log backends
("mode log" set) because some postparsing init was skipped as a result of
proto not being set and this wasn't expected later in the init code.
To fix this, we now make use of the previous patch to perform the
server's address compatibility checks on hints that are always set when
str2sa_range() successfully returns.
For log backends, we also add a complementary test to check whether the
address family is of the expected type, and report an error otherwise.
Plus, we move the postinit logic into the log API, since
_srv_check_proxy_mode() is only meant to check proxy mode compatibility
and we were abusing it.
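As an illustration, a hedged sketch of such a check (member names are
assumptions based on the patch below):

    /* addr_type is filled by str2sa_range() even when the FQDN is not
     * resolved yet, contrary to the protocol pointer
     */
    if (srv->addr_type.proto_type != PROTO_TYPE_DGRAM &&
        srv->addr_type.proto_type != PROTO_TYPE_STREAM) {
        memprintf(errmsg, "unsupported address for log server");
        goto err;
    }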
This patch depends on:
- "MINOR: tools: make str2sa_range() directly return type hints"
No backport required unless 29b76ca gets backported.
str2sa_range() already allows the caller to provide <proto> in order to
get a pointer on the protocol matching with the string input thanks to
5fc9328a ("MINOR: tools: make str2sa_range() directly return the protocol")
However, as stated in the commit message, there is a trick:
"we can fail to return a protocol in case the caller
accepts an fqdn for use later. This is what servers do and in this
case it is valid to return no protocol"
In this case, we're unable to return the protocol because the protocol
lookup depends on both the [proto type + xprt type] and the [family
type] being known.
While the family type might not be directly resolved when an fqdn is
involved (because it might be discovered using DNS queries), the proto
type and xprt type are already known. As such, the caller might be
interested in knowing those address-related hints even if the address
family type is not yet resolved and thus the matching protocol cannot
be looked up.
Thus, in this patch we add the optional net_addr_type (custom type)
argument to str2sa_range() to enable the caller to check the protocol
type and transport type when the function succeeds.
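A hedged sketch of the extended call (the exact prototype may differ
slightly):

    struct net_addr_type sa_type;
    struct protocol *proto = NULL;

    sk = str2sa_range(str, &port, &low, &high, &fd, &proto, &sa_type,
                      &errmsg, NULL, &fqdn, opts);
    if (sk && !proto && sa_type.proto_type == PROTO_TYPE_DGRAM) {
        /* FQDN not resolved yet, but we already know the address is
         * datagram-based
         */
    }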
For now, the appctx is removed from the buffer wait list when it is
freed. However, when it is released, it is not necessarily freed
immediately. But it is detached from the SC. If it is still registered in
the buffer wait list, it could then be woken up to get a buffer. At this
stage it is totally unexpected, especially because we must access the SC.
The fix is obvious, the appctx must be removed from the buffer wait list on
release.
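A minimal sketch of the fix in the release path, assuming the usual
buffer_wait list API:

    /* make sure the appctx cannot be woken up for a buffer once it is
     * detached from the SC
     */
    if (LIST_INLIST(&appctx->buffer_wait.list))
        LIST_DEL_INIT(&appctx->buffer_wait.list);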
Note this bug exists because the appctx was moved to the mux level.
This patch must be backported as far as 2.6.
User @nwehrman reported in issue #2328 that the use of "proxy" instead
of "backend" in the argument of the "track" server keyword is confusing.
Admittedly, all other places in the doc use "backend/server" instead of
"proxy/server", so let's update it for the sake of consistency.
After emission/reception of a CONNECTION_CLOSE, a connection enters the
CLOSING state. In this state, only minimal exchanges occur, as only the
packets which contained the CONNECTION_CLOSE frame can be re-emitted. In
conformance with the RFC, most resources are released and the quic_conn
instance is converted to the lighter quic_cc_conn.
Push this optimization further by closing the quic_conn socket FD before
switching to a quic_cc_conn. This means that quic_cc_conn will rely on
the listener socket for its send/recv operations. This should not impact
performance since, as stated above, input/output are minimal in this
state.
This patch should improve FD consumption: prior to this, a socket FD
was kept during the whole closing delay, which could cause maxsock to
be reached for other connections.
Note that the fd member is kept in QUIC_CONN_COMMON and not removed
from quic_cc_conn. This is because quic_cc_conn relies on qc_snd_buf(),
which accesses this field.
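A hedged sketch of the switch (field and helper names follow the
surrounding text, the rest is an assumption):

    /* close the owned socket before converting to quic_cc_conn; the
     * closing connection will then send/receive via the listener FD
     */
    if (qc->fd >= 0) {
        fd_delete(qc->fd);
        qc->fd = -1; /* qc_snd_buf() falls back to the listener socket */
    }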
As a side effect of this change, jobs accounting for quic_conn is also
updated: quic_cc_conn instances are no longer counted as jobs. Indeed,
the main objective of jobs is to prevent the haproxy process from being
stopped with data truncation. However, this relies on the connection
using its own socket, as the listener socket is shut down
unconditionally on shutdown.
A consequence of this jobs handling change is that the haproxy process
will be closed if only quic_cc_conn instances are present, thus not
respecting the closing state. In case of a reload, if a client missed a
CONNECTION_CLOSE frame just before process shutdown, it will probably
receive a Stateless Reset when retrying to send.
This change is considered safe as, for now, haproxy only emits
CONNECTION_CLOSE on error conditions (such as protocol violation or
timeout), and suffering from data truncation is considered expected in
these cases. However, if connection closing is later reused by haproxy
to implement clean shutdowns, it will be necessary to delay
CONNECTION_CLOSE frame emission to ensure no data truncation happens.
Prior to this patch, a special condition was applied when the idle
timer was rearmed for closing connections while the haproxy process was
stopping. In this case, the timeout was ditched and the idle task woken
up immediately. The objective was to quickly release closing
connections so as not to delay process stopping for too long. However,
this does not conform with RFC 9000 recommendations and may cause some
clients to miss a CONNECTION_CLOSE in case of packet loss.
A recent fix introduced a shorter timeout for the closing state. Now a
connection should only be left in this state for one second or less.
This greatly reduces the importance of the stopping special condition.
Thus, this patch removes it completely.
In quic_rx_pkt_retrieve_conn(), the err label is now only used if qc is
NULL. Thus, the condition on qc can be removed.
No need to backport.
This issue was reported by coverity on github.
This should fix issue #2338.
When an H2 mux works with a slow downstream connection and without the
mux-mux mode, it is possible that a single stream will allocate all 32
buffers in the connection. This is not desirable at all because 1) it
brings no value, and 2) it allocates a lot of memory per connection
which, in addition, tends to degrade performance due to cache
thrashing.
This patch improves the situation by refraining from sending data frames
over a connection when more mbufs than streams are allocated. On a test
featuring 10k connections each with a single stream reading from the
cache, this patch reduces the RAM usage from ~180k buffers to ~20k buffers,
and improves the bandwidth. This may even be backported later to recent
versions to improve memory usage. Note however that it is efficient only
when combined with e16762f8a ("OPTIM: mux-h2: call h2_send() directly
from h2_snd_buf()"), and tends to slightly reduce the single-stream
performance without it, so in case of a backport, the two need to be
considered together.
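A hedged sketch of the added condition in the send path (helper names
are assumptions):

    /* don't let a single stream monopolize all mbufs of the connection:
     * stop sending DATA frames when more mbufs than streams are in use
     */
    if (br_count(h2c->mbuf) > h2c->nb_streams) {
        h2s->flags |= H2_SF_BLK_MROOM; /* wait for some mux buffer room */
        return 0;
    }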
It's common to see process_stream() being woken up by wake_expired_tasks
in the profiling output, without knowing which timeout was set to cause
this. By making it possible to record the call places of task_queue()
and task_schedule(), and by making wake_expired_tasks() explicitly not
replace the recorded caller, we'll be able to know which task_queue() or
task_schedule() was triggered for a given wakeup.
For example below:
process_stream 51200 311.4ms 6.081us 34.59s 675.6us <- run_tasks_from_lists@src/task.c:659 task_queue
process_stream 19227 70.00ms 3.640us 9.813m 30.62ms <- sc_notify@src/stconn.c:1136 task_wakeup
process_stream 6414 102.3ms 15.95us 8.093m 75.70ms <- stream_new@src/stream.c:578 task_wakeup
This makes it visible that it is in fact run_tasks_from_lists() which
queues the task, based on the task->expire returned by the ->process()
function itself.
This is used for tracing and profiling. By permitting a NULL caller, we
allow a caller to explicitly pass zero to state that the current caller
must not be replaced. This will soon be used by wake_expired_tasks() to
avoid replacing the caller in the expire loop.
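A minimal sketch of the intended behavior (names are illustrative):

    /* record the wakeup/queue caller unless told not to replace it */
    if (caller)
        t->caller = caller;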
A client may send multiple INITIAL packets if the ClientHello is too
big to fit in a single one. In case a Retry token is used, the client
must reuse it in every INITIAL packet.
On the haproxy server side, there was an inconsistency in how these
packets were handled depending on the socket mode:
* when using the listener socket, the token is always revalidated.
* when using a connection socket, the token check is bypassed. This is
  because the quic_conn instance is known through its socket and thus
  quic_rx_pkt_retrieve_conn() is not necessary.
RFC 9000 does not seem to mandate Retry token validation beyond the
first INITIAL packet of a connection. Thus, this patch chooses to bypass
the check every time the connection instance is known, as this indicates
that a previous token was already validated.
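A hedged sketch of the resulting logic (token_check() is an
illustrative name):

    /* packet for an unknown connection: (re)validate the Retry token;
     * otherwise a previous INITIAL already validated one
     */
    if (!qc && !token_check(pkt, dgram))
        goto drop;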
This should be backported up to 2.7.
QUIC connections are pushed manually into a dedicated listener queue
when they are ready to be accepted. This happens after handshake
finalization or on 0-RTT packet reception. Listener is then woken up to
dequeue them with listener_accept().
This patch counts the number of connections currently stored in the
accept queue. If a certain limit is reached, INITIAL packets are dropped
on reception to prevent the allocation of further QUIC connections. This
should help to preserve system resources.
This limit is automatically derived from the listener backlog. Half of
its value is reserved for handshakes and the other half for the accept
queue. By default, the backlog is equal to maxconn, which guarantees
that there can't be more than maxconn connections in handshake or
waiting to be accepted.
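A hedged sketch of the reception check (the counter name is an
assumption):

    /* drop INITIAL packets when the accept queue already holds half of
     * the listener backlog
     */
    if (l->rx.quic_curr_accept >= listener_backlog(l) / 2)
        goto drop;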
Implement a per-listener limit for the number of concurrent QUIC
connections. When it is reached, INITIAL packets for new connections are
automatically dropped until the number of handshakes decreases.
The limit value is automatically based on the listener backlog, which
itself defaults to maxconn.
This feature is important to ensure CPU and memory resources are not
consumed if too many handshake attempts are started in parallel.
Special care is taken if a connection is released before handshake
completion. In this case, the counter must be decremented. This forces
us to ensure that the <qc.state> member is set early in qc_new_conn(),
before any quic_conn_release() invocation.
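A hedged sketch of the release-side care (the counter name is an
assumption):

    /* decrement only if this connection was accounted as an ongoing
     * handshake, which is why <qc.state> must be set early
     */
    if (qc->state < QUIC_HS_ST_COMPLETE)
        HA_ATOMIC_DEC(&l->rx.quic_curr_handshake);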
Accounting is implemented for half-open connections, which represent
QUIC connections waiting for handshake completion. When a certain limit
is reached, the Retry mechanism is automatically activated prior to
instantiating new connections.
The issue with this behavior is that two notions are mixed: the QUIC
connection handshake phase and Retry, which is a mechanism against
amplification attacks. As such, only peer address validation should be
taken into account to activate the Retry protection.
This patch reduces the scope of half_open_conn: only connections waiting
for peer address validation are now accounted for. Most notably,
connections instantiated with an already validated Retry token are not
accounted.
One impact of this patch is that it should prevent the Retry mechanism
from being activated too early, in particular when multiple handshakes
are too slow. Another limitation should be implemented to protect
against this scenario.
When a new QUIC connection is created, the server considers the peer
address as not yet validated. The server must limit what it sends to 3
times the content already received. This is a defensive measure to
avoid flooding a remote host victim of address spoofing.
This patch adjusts the condition to consider the peer address as
validated. Two conditions are now considered:
* successful handling of a received HANDSHAKE packet. This was already
  done before, although implemented in a different way.
* validation of a Retry token. This was not considered prior to this
  patch, despite the RFC recommendation.
This patch also adjusts how a connection is internally labelled as
using a validated peer address. Before, the above conditions were
checked via quic_peer_validated_addr(). Now, the
QUIC_FL_CONN_PEER_VALIDATED_ADDR flag is set to label this. It already
existed prior to this patch but was only used for quic_cc_conn. This
should now be more explicit.
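A hedged sketch of the two validation points (token_validated is an
illustrative variable):

    /* mark the peer address as validated upon a successfully handled
     * HANDSHAKE packet or a validated Retry token
     */
    if (pkt->type == QUIC_PACKET_TYPE_HANDSHAKE || token_validated)
        qc->flags |= QUIC_FL_CONN_PEER_VALIDATED_ADDR;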
The commit 4be0c7c65 ("MEDIUM: stconn/muxes: Loop on data fast-forwarding to
forward at least a buffer") introduced a regression. In h1_fastfwd(), if
data fast-forwarding is not supported by the opposite SC, we must exit
without calling se_done_ff(). Otherwise a BUG_ON() will be triggered
because the opposite mux has no .done_fastfwd() callback function.
No backport needed.
Defining a master CLI without the master-worker mode has emitted a
warning since version 1.8. This patch enforces the behavior by
forbidding the usage of the -S option without the master-worker mode.
The documentation about -q seems wrong: the option does not stop
outputting messages after startup, it disables all messages. Output was
always muted with the stdio_quiet() function.
Must be backported to all stable versions.
Move the MODE_QUIET and MODE_VERBOSE tests into print_message() so we
always output to the startup-logs even with MODE_QUIET.
ha_warning(), ha_alert() and ha_notice() do not check MODE_QUIET and
MODE_VERBOSE anymore; this is done before the fprintf() in
print_message().
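A hedged sketch of the centralized check in print_message():

    /* the message was already appended to the startup-logs at this
     * point; quiet mode only suppresses the console output
     */
    if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE))
        fprintf(stderr, "%s", msg);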
ha_alert(), ha_warning() and ha_notice() shouldn't check MODE_STARTING
for log emission. Let's remove the check.
This shouldn't change much since the stdio_quiet() function mutes the
output in main().
The commit 08d7169f4 ("MINOR: stconn: Don't queue stream task in past in
sc_notify()") tried to fix issues with the expiration date set in the
past for the stream in sc_notify(). However, there remain some cases
where the stream expiration date may already be expired before
recomputing it. This happens when an event is reported by the mux
exactly when a timeout is triggered. In this case, depending on the
scheduling, the SC may be woken up before the stream. For these cases,
we fall into the BUG_ON() preventing queuing in the past.
So, it remains unexpected to queue a task in the past. The BUG_ON() is
correct at this place. We must just avoid recomputing the stream
expiration date if it is already expired. At worst, the stream will be
woken up for nothing. But it is not really a big deal because it will
only happen on timeouts from time to time. It is so sporadic that we can
ignore it from a performance point of view.
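A minimal sketch of the guard:

    /* if the expiration date is already in the past, don't requeue: the
     * stream will be woken up and handle its timeouts itself
     */
    if (!tick_is_expired(task->expire, now_ms))
        task_queue(task);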
This patch must be backported to 2.8. Be careful to remove the BUG_ON()
in 2.8.
This bug could be reproduced with -dMfail option and detected by libasan.
During the TLS secrets allocations, when they fail,
quic_tls_ctx_secs_free() is called. It resets the already initialized
secrets. Some were detected as initialized when they were not, or with a
non-initialized length, which leads to big "memset(0)" calls detected by
libasan.
Ensure that all the secrets are really initialized with correct lengths.
No need to be backported.
If a TX packet cannot be allocated (by qc_build_pkt()), since it may
have been coalesced with another one, the TX buffer may be left with
remaining prepared data which were not sent. Then haproxy crashes upon a
BUG_ON() triggered by the next call to qc_txb_release(). This may happen
only during handshakes.
To fix this, qc_build_pkt() returns a new -3 error to detect such
allocation failures, which is from now on followed by a call to
qc_purge_txbuf() to send the prepared TX data and purge the TX buffer.
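A hedged sketch of the new error handling in the caller (the error
variable is illustrative):

    if (err == -3) {
        /* packet allocation failed while data were already prepared in
         * the TX buffer: send them and purge the buffer
         */
        qc_purge_txbuf(qc, buf);
        goto leave;
    }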
Must be backported as far as 2.6.
This may happen during handshakes when Handshake packets cannot be
coalesced to a first Initial packet because of TX frame allocation
failures (from qc_build_frms()). This leads to too short (not padded)
Initial packets being sent, which is detected by a BUG_ON() in
qc_send_ppkts().
To avoid this, a Handshake packet without the ack-eliciting frames that
should have been built by qc_build_frms() is built.
Must be backported as far as 2.6.
This may happen upon ack range allocation failures (from
quic_update_ack_ranges_list()). This can lead to empty trees of ack
ranges being used to build ACK frames, which is not good at all.
Furthermore, this is detected by a BUG_ON() (in qc_do_build_pkt()).
To avoid this, simply update the acknowledgement state of the connection
only if quic_update_ack_ranges_list() succeeds, as it fails only in case
of memory allocation failures.
Must be backported as far as 2.6.
If the Handshake encryption level could not be allocated, this could
lead to Initial packets being sent because no Handshake CRYPTO frames
were generated. Furthermore, in such an allocation failure case, the
connection should be closed as soon as possible. This is done by making
ha_quic_set_encryption_secrets() return 0 upon an encryption level
allocation failure.
Also fix a typo in the trace in relation to this allocation failure.
No need to be backported.
When the idle timer expired with a still present mux, this task was not
freed and was even requeued with a timer in the past.
Fix this issue by calling task_destroy() in this case. As the task is
freed, its handler must return NULL, setting the local <t> variable to
NULL in every case.
Also ensure that this timer task is not armed again after having been
released, with a <return> statement from qc_idle_timer_do_rearm() when
this is the case.
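A hedged sketch of the handler change (the mux state test is an
assumption):

    /* idle timeout with the mux still present: the task won't be
     * requeued, destroy it and tell the scheduler it is gone
     */
    if (qc->mux_state == QC_MUX_READY) {
        task_destroy(t);
        t = NULL;
    }
    return t;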
Must be backported as far as 2.6.
There was no reason not to release the TLS/SSL QUIC connection context
from quic_conn_release() as soon as possible, before allocating a
"closing connection" connection (quic_cc_conn struct).
This patch sets the handshake task in heavy task mode when receiving
out-of-order CRYPTO data, which results in in-order buffered CRYPTO
data. This is done thanks to a non-contiguous buffer, from
qc_handle_crypto_frm(), after having potentially buffered CRYPTO data in
this buffer.
qc_treat_rx_crypto_frms() is no longer called from qc_treat_rx_pkts();
instead, this is where the task is set in heavy task mode. Consequently,
it is the job of qc_ssl_provide_all_quic_data() to call
qc_treat_rx_crypto_frms() directly to provide the in-order buffered
CRYPTO data to the TLS stack. As this function releases the
non-contiguous buffer for the CRYPTO data when possible, there is no
need to do that from qc_treat_rx_crypto_frms() anymore.
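A hedged sketch of the switch to heavy task mode after buffering:

    /* CRYPTO data are now available in order: let the handshake tasklet
     * feed the TLS stack from a heavy task so it is scheduled fairly
     */
    HA_ATOMIC_OR(&qc->wait_event.tasklet->state, TASK_HEAVY);
    tasklet_wakeup(qc->wait_event.tasklet);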
Add a new pool for the CRYPTO data frames received in order.
Add ->rx.crypto_frms list to each encryption level to store such frames
when they are received in order from qc_handle_crypto_frm().
Also set the handshake task (qc_conn_io_cb()) in heavy task mode from
this function after having received such frames. When this task detects
that it is set in heavy mode, it calls the newly implemented
qc_ssl_provide_all_quic_data() function to provide the CRYPTO data to
the TLS stack.
Modify quic_conn_enc_level_uninit() to release these CRYPTO frames
when releasing the encryption level they are in relation with.
The IOBUF_FL_EOI iobuf flag is now set by the producer to notify the
consumer that the end of input was reached. Thanks to this flag, we can
remove the ugly hack in h2_done_ff() used to test the opposite SE flags.
Of course, for now, it works and it is good enough. But we must keep in
mind that EOI is always forwarded from the producer side to the consumer
side in this case. If this changes, a new CO_RFL_ flag will have to be
added to instruct the producer whether or not it can forward EOI.
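A hedged sketch of both sides:

    /* producer: report end of input along with the forwarded data */
    if (se_fl_test(sd, SE_FL_EOI))
        sd->iobuf.flags |= IOBUF_FL_EOI;

    /* consumer (h2_done_ff()): rely on the iobuf flag only, without
     * peeking at the opposite SE flags
     */
    last = !!(sd->iobuf.flags & IOBUF_FL_EOI);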
In the mux-to-mux data forwarding, we now try, as far as possible, to
send at least a full buffer. Of course, if the consumer side is
congested or if nothing more can be received, we leave. But the idea is
to retry fast-forwarding data if less than a buffer was forwarded. It is
only performed for buffer fast-forwarding, not splicing.
The idea behind this patch is to optimise the forwarding when a first
forward was performed to complete a buffer with some existing data. In
this case, the amount of data forwarded is artificially limited because
we are using a non-empty buffer. But without this limitation, it is
highly probable that a full buffer could have been sent. And indeed,
with an H2 client, a significant improvement was observed during our
tests.
To do so, the .done_fastfwd() callback function must be able to deal
with interim forwards. Especially for the H2 mux, the H2_SF_NOTIFIED
flag must be removed from the H2S on the last call only. Otherwise, the
H2 stream can block itself because it is in the send_list. The
IOBUF_FL_INTERIM_FF iobuf flag is used to notify the consumer that it is
not the last call. This flag is then removed on the last call.
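A hedged sketch of the H2 side of .done_fastfwd():

    /* on interim calls, keep H2_SF_NOTIFIED so the stream is not
     * blocked by its own presence in the send_list
     */
    if (!(h2s->sd->iobuf.flags & IOBUF_FL_INTERIM_FF))
        h2s->flags &= ~H2_SF_NOTIFIED;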
In order to limit inter-thread contention on the global pool, in 2.9-dev3
with commit 7bf829ace ("MAJOR: pools: move the shared pool's free_list
over multiple buckets"), it was decided that if the selected bucket had
an empty free list, we would simply give up and fall back to the OS
allocator.
But this causes allocations to be made from the OS for certain threads,
to be released to overloaded pools that are sent back to the OS. One
visible effect is that sending a lot of traffic using h2load with 100
parallel streams over 100 connections causes 5-10k buffers to be
allocated, then reducing the load to only 10 connections doesn't make
these allocations go down, just because some buckets are no longer
visited.
Tests show that giving a second chance to pick another bucket in this
case is sufficient to visit all other buckets and recycle their pending
objects. Now "show pools" that starts at 10k buffers at 100 connections
goes down to about 150 with 1 connection and 100 streams in a fraction
of a second.
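A hedged sketch of the second chance (bucket selection simplified):

    if (!pool->buckets[bucket].free_list) {
        /* empty bucket: try one more before falling back to the OS
         * allocator
         */
        bucket = (bucket + 1) % CONFIG_HAP_POOL_BUCKETS;
    }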
No backport is needed, as the issue is only in 2.9.
Since 2.9-dev3 with commit 7bf829ace ("MAJOR: pools: move the shared
pool's free_list over multiple buckets"), the global pool supports
multiple heads to reduce inter-thread contention. However, when
grabbing a freelist head fails because another thread is already
picking from it, we just skip to the next one and try again.
Unfortunately, it still maintains a bit of contention between thread
pairs when for some reason only a few threads are used. This may happen
for example when running on a 4- or 8-thread system and the two most
active ones end up on adjacent buckets.
A better and much simpler solution consists in visiting a random bucket
instead of the current one. Tests show that the CPU usage spent in
pool_refill_local_from_shared() decreases with a low number of
connections (hence threads).
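A hedged sketch of the change:

    /* visit a random bucket instead of a fixed one, spreading the
     * contention across all buckets
     */
    bucket = statistical_prng_range(CONFIG_HAP_POOL_BUCKETS);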
No backport is needed, as the issue is only in 2.9.
In 2.6-dev1, the method used to decide how many pool entries could be
released at once was revisited to support releases in batches. This was
done with commits 91a8e28f9 ("MINOR: pool: add a function to estimate
how many may be released at once") and 361e31e3f ("MEDIUM: pool: compute
the number of evictable entries once per pool").
The first commit takes care of the possible inconsistency between the
moment the allocated count and the used count are read, but unfortunately
fixed it the wrong way, by adjusting "used" to match "alloc" whenever it
was lower (i.e. almost always). This results in a nasty case which is that
as soon as the allocated value becomes higher than the estimated count of
needed entries, we end up returning pool->minavail, which causes very
small batches to be released, starting from commit 1513c5479 ("MEDIUM:
pools: release cached objects in batches").
The problem was further amplified in 2.9-dev3 with commit 7bf829ace
("MAJOR: pools: move the shared pool's free_list over multiple buckets")
because it now becomes possible for a thread to allocate from one bucket
and release into a few other different ones, causing an accumulation of
entries in that bucket.
The fix is trivial, simply adjust the alloc counter if the used one is
higher, before performing operations.
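A hedged sketch of the corrected adjustment (aggregation helper names
are assumptions):

    alloc = pool_allocated(pool);
    used  = pool_used(pool);
    if (used > alloc)
        alloc = used; /* adjust alloc upward, never used downward */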
This must be backported to 2.6.
The function returning the excess of events over the current period for
a target frequency (the overshoot) has a flaw if the inactivity period
is too long. In this case, the result may overflow. Instead of being
negative, a very high positive value is returned.
This function is used by the bandwidth limitation filter. It means that
after a long inactivity period, a huge burst may be detected while it
should not be.
In fact, the problem arises from the moment we're past the current
period. In this case, we should not report any overshoot and just get
the number of remaining events as usual.
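A hedged sketch of the corrected behavior (variable names are
illustrative):

    /* once the current period is fully elapsed, there is no excess to
     * extrapolate, so no overshoot is reported
     */
    if (elapsed < 0 || elapsed > period)
        return 0;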
This patch should be backported as far as 2.7.
It is now the turn of the H1 mux to be fixed to properly handle
http-request and http-keep-alive timeouts. It is quite surprising, but
it has been broken since 2.2. For idle connections on the client side,
the smallest value between the client timeout and the
http-request/http-keep-alive timeout is used, while the client timeout
should only be used if the other ones are not defined. So, if the client
timeout is the smallest value, the keep-alive timeout is not respected.
It is only an issue for idle client connections. The http-request
timeout is respected from the moment part of the next request has been
received.
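A hedged sketch of the intended precedence for idle client connections
(field names are assumptions):

    /* http-keep-alive (or http-request) takes precedence over the
     * client timeout, which is only a fallback when they are not set
     */
    if (tick_isset(px->timeout.httpka))
        h1c->idle_exp = tick_add(now_ms, px->timeout.httpka);
    else if (tick_isset(px->timeout.httpreq))
        h1c->idle_exp = tick_add(now_ms, px->timeout.httpreq);
    else
        h1c->idle_exp = tick_add_ifset(now_ms, px->timeout.client);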
This patch should fix issue #2334. It must be backported as far as 2.2.
But be careful during the backports: the H1 mux has evolved a lot since
2.2.
Add a special treatment for the IPV4 and IPV6 cases in the
table_process_entry_per_key() function so that the input string is
parsed on a best-effort basis (STR to the pseudo type ADDR): the input
format is first considered over the table type, and then smp_to_stkey()
does the type conversion for us when needed.
This patch heavily depends on:
- "MEDIUM: stktable/cli: simplify entry key handling"
And optionally depends on:
- 72514a44 ("MEDIUM: tools/ip: v4tov6() and v6tov4() rework")
Make use of smp_to_stkey() in table_process_entry_per_key() to simplify
key handling and leverage automatic type conversions from the sample
API.
One noticeable side effect is that integer input checks will be relaxed,
given that the c_str2int() sample conv is more permissive than the
integrated table_process_entry_per_key() integer parser.
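A hedged sketch of the simplified conversion (the argument slot is
illustrative):

    struct sample smp = { };

    /* parse the CLI argument as a string sample, then let the sample
     * API convert it to the table's key type
     */
    smp.data.type = SMP_T_STR;
    smp.data.u.str.area = args[4];
    smp.data.u.str.data = strlen(args[4]);
    if (!smp_to_stkey(&smp, t))
        return cli_err(appctx, "Invalid key\n");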
When an ipv4 key is used to filter a CLI command on a stick table
(clear/set/show table ...), the inetaddr_host+htonl combination was used
with no error checking.
Instead, we now use inet_pton(), which is what we use for ipv6 addresses
since b7c962b0c0 ("BUG/MINOR: stick-table/cli: Check for invalid ipv6 key").
Doing this allows us to easily check for parsing errors: we're trading
off some parsing efficiency to better catch input errors and ensure we
get similar behavior between ipv4 and ipv6 address handling.
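A hedged sketch of the new ipv4 path (the error message is
illustrative):

    struct in_addr ip4;

    /* reject invalid input instead of silently storing garbage */
    if (inet_pton(AF_INET, args[4], &ip4) != 1)
        return cli_err(appctx, "Invalid IPv4 address\n");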
This patch may be backported to all supported versions.
We must take care to release the H1 input buffer when it is emptied
during the fast-forwarding negotiation. Otherwise, it may be kept
allocated for a while, waiting for the next "normal" receive or the H1C
release.
No backport needed.
Use the backend connect timeout when a new connection is instantiated
for rhttp. This ensures that if the connect operation fails after a
certain delay, the reverse_connect listener task is woken up. This
allows freeing the current connection and retrying a new connect.
As a consequence of this change, rev_process() may be woken up even if
the connection is not reported with CO_FL_ERROR. This happens if the
timeout fired before any network-reported issue. Connection freeing is
adjusted since, in this case, the MUX instance is already allocated. Use
the destroy callback to release the MUX context prior to the connection
itself.
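A hedged sketch of the timeout arming (task and field names are
assumptions):

    /* arm the backend connect timeout so that a stalled connect wakes
     * the rhttp listener task up for cleanup and retry
     */
    task->expire = tick_add_ifset(now_ms, be->timeout.connect);
    task_queue(task);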
This patch is really useful as a side measure for a haproxy bug
impacting connect with SSL for both backend connections and active
reverse connect. This is caused by the delayed MUX allocation. An
asynchronous connect error detected at the socket layer is not notified
to the upper layers. Currently, only the connect timeout allows this
failed connection to be released.
The commit d6d4abdc3 ("BUILD: mux-h1: Fix build without kernel splicing
support") introduced a regression. The kernel splicing support of the
underlying XPRT is no longer checked. So it is possible to enable
splicing for SSL connections. This of course leads to a segfault.
This patch restores the test on the xprt rcv_pipe/snd_pipe functions.
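A hedged sketch of the restored condition:

    /* kernel splicing requires the underlying XPRT to expose pipe
     * callbacks; SSL does not, so splicing must stay disabled there
     */
    if (!conn->xprt->rcv_pipe || !conn->xprt->snd_pipe)
        goto no_splicing;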
This patch should fix a crash reported by Tristan in #2095
(#issuecomment-1788949014). No backport needed.
QUIC connections are accounted inside the global sslconns counter. As
with QUIC actconn, it suffered from a similar issue if an intermediary
allocation failed inside qc_new_conn().
Fix this similarly by moving the increment operation inside
qc_new_conn(). Increment and error paths are now centralized and much
easier to validate.
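A hedged sketch of the centralization (counter names follow ssl_sock
but remain assumptions):

    /* in qc_new_conn(): account the connection upfront */
    HA_ATOMIC_INC(&global.sslconns);
    /* ... intermediary allocations that may fail ... */
 err:
    /* single centralized error path: undo the increment */
    HA_ATOMIC_DEC(&global.sslconns);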
The consequences are similar to the actconn fix: on memory allocation
failure, the global sslconns may wrap, this time blocking any future
QUIC or SSL connections on the process.
This must be backported up to 2.6.