Define a new field named flags in the listener structure.
For the moment, no flag is defined. This will notably be useful to
differentiate QUIC listeners once a QUIC conn accept queue is
implemented.
Remove the usage of the connection in quic_conn_from_buf(). As the
connection and the quic_conn are decorrelated, it is not logical to
check connection flags when using sendto().
This requires storing the L4 peer address in the quic_conn to be able
to use sendto().
This change is required to delay the allocation of the connection.
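For illustration, a minimal sketch of the resulting send path, assuming
the peer address is kept in fields like peer_addr/peer_addr_len (names
are illustrative, not necessarily the real ones):

  #include <sys/types.h>
  #include <sys/socket.h>

  /* illustrative stand-in for the relevant part of quic_conn */
  struct quic_conn_sketch {
          struct sockaddr_storage peer_addr; /* L4 peer address */
          socklen_t peer_addr_len;
  };

  /* with the peer address stored in the quic_conn, sending no longer
   * needs a struct connection at all
   */
  static ssize_t qc_snd_sketch(int fd, const struct quic_conn_sketch *qc,
                               const void *data, size_t len)
  {
          return sendto(fd, data, len, MSG_DONTWAIT,
                        (const struct sockaddr *)&qc->peer_addr,
                        qc->peer_addr_len);
  }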
This flag is named RX_F_LOCAL_ACCEPT. It will be activated for special
receivers where connection balancing to threads is already handled
outside of listener_accept(), such as with QUIC listeners.
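A hedged sketch of the flag and of the check listener_accept() can
perform (the exact bit value is an assumption for illustration):

  #define RX_F_LOCAL_ACCEPT  0x00000004  /* assumed value */

  struct receiver_sketch {
          unsigned int flags;            /* RX_F_* */
  };

  /* listener_accept() can skip its generic thread balancing when the
   * proto layer already dispatched the connection to a thread
   */
  static inline int rx_accepts_locally(const struct receiver_sketch *rx)
  {
          return (rx->flags & RX_F_LOCAL_ACCEPT) != 0;
  }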
Add a new function in mux-quic to install the app-ops. For now this
function is called during the ALPN negotiation of the QUIC handshake.
This change will be useful when the connection accept queue is
implemented: it will then be required to delay the app-ops
initialization because the mux won't be allocated anymore during the
QUIC handshake.
Define a new enum to represent the status of the mux/connection layer
above a quic_conn. This is important to know whether it's possible to
handle application data, or whether it should be buffered or dropped.
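A minimal sketch of what such an enum may look like (names are
illustrative):

  enum qc_mux_state_sketch {
          QC_MUX_NULL,     /* not allocated yet: buffer application data */
          QC_MUX_READY,    /* allocated and initialized: handle data */
          QC_MUX_RELEASED, /* already freed: drop application data */
  };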
This new option, when set, will cause the callers of pool_alloc() and
pool_free() to be recorded into an extra area in the pool that is expected
to be helpful for later inspection (e.g. in core dumps). For example it
may help figure out that an object was released to a pool with some
sub-fields not yet released, or that a use-after-free happened after
releasing it, with an immediate indication of the exact line of code
that released it (possibly an error path).
This only works with the per-thread cache, and even objects refilled from
the shared pool directly into the thread-local cache will have a NULL
there. That's not an issue since these objects have not yet been freed.
It's worth noting that pool_alloc_nocache() continues not to set any
caller pointer (e.g. when the cache is empty) because that would require
a possibly undesirable API change.
The extra cost is minimal (one pointer per object) and this complements
DEBUG_POOL_INTEGRITY well.
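The principle can be sketched as follows, assuming the caller's address
is captured via __builtin_return_address(0) at the pool_alloc() and
pool_free() call places (helper name is illustrative):

  #include <stddef.h>

  /* write the caller's address into the extra area appended after the
   * object so that it remains visible in a core dump
   */
  static inline void pool_record_caller(void *item, size_t obj_size,
                                        const void *caller)
  {
          *(const void **)((char *)item + obj_size) = caller;
  }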
This adds a caller to pool_put_to_cache() and pool_get_from_cache()
which will optionally be used to pass a pointer to their callers. For
now it's not used, only the API is extended to support this pointer.
Here the idea is to calculate the POOL_EXTRA size that is appended at
the end of a pool object based on the sum of enabled optional fields
so that we can more easily compute offsets and sizes depending on build
options.
For this, POOL_EXTRA is replaced with POOL_EXTRA_MARK which itself is
set either to sizeof(void*) or zero depending on whether we enable
marking the origin pool or not upon allocation.
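A sketch of the resulting layout computation, assuming the mark is
enabled by DEBUG_MEMORY_POOLS (the exact guard may differ):

  #if defined(DEBUG_MEMORY_POOLS)
  # define POOL_EXTRA_MARK  (sizeof(void *))  /* pointer to origin pool */
  #else
  # define POOL_EXTRA_MARK  (0)
  #endif

  /* sum of all enabled optional trailing fields */
  #define POOL_EXTRA  (POOL_EXTRA_MARK)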
The pool_alloc() function was already a wrapper to __pool_alloc() which
was also inlined but took a set of flags. The latter was uninlined and
moved to pool.c, and pool_alloc()/pool_zalloc() were turned into macros
so that they can more easily evolve to support debugging options.
The number of call places made this code grow over time and doing only
this change saved ~1% of the whole executable's size.
The pool_free() function has become a bit big over time due to the
extra consistency checks. It used to remain inline only to deal
cleanly with the NULL pointer free that's quite present on some
structures (e.g. in stream_free()).
Here we're splitting the function in two:
- __pool_free() does the inner block without the pointer test and
  becomes a function;
- pool_free() is now a macro that only checks the pointer and calls
  __pool_free() if needed.
The use of a macro versus an inline function is only motivated by easier
instrumentation of the code later.
With this change, the code size reduces by ~1%, which means that at
this point all pool_free() call places used to represent more than
1% of the total code size.
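The resulting split may be sketched like this (exact signatures may
differ slightly from the real code):

  struct pool_head;
  void __pool_free(struct pool_head *pool, void *ptr); /* in pool.c */

  /* keep the cheap NULL test inline at every call place and only call
   * the function when there is really something to free
   */
  #define pool_free(pool, ptr)  do {             \
          void *__p = (ptr);                     \
          if (__p)                               \
                  __pool_free((pool), __p);      \
  } while (0)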
Allow the quic_conn to be registered as ex-data in SSL callbacks. A new
index named ssl_qc_app_data_index is used to identify it.
Replace the connection by the quic_conn as SSL ex-data when initializing
the QUIC SSL session. When using SSL callbacks in a QUIC context, the
connection is now NULL; use the quic_conn instead to retrieve the
required parameters.
The same cleanup is conducted inside the QUIC SSL methods of xprt-quic:
connection instance usage is replaced by quic_conn.
Define a special accept callback for QUIC listeners to
quic_session_accept(). This operation is conducted in the proto.add
callback when creating listeners.
Special care is now taken when setting the standard callback
session_accept_fd() not to overwrite it if already defined by the proto
layer.
When enabled, objects picked from the cache are checked for corruption
by comparing their contents against a pattern that was placed when they
were inserted into the cache. Objects are also allocated in the reverse
order, from the oldest one to the most recent, so as to maximize the
ability to detect such a corruption. The goal is to detect writes after
free (or possibly hardware memory corruptions). Contrary to DEBUG_UAF
this cannot detect reads after free, but may possibly detect later
corruptions and will not consume extra memory. The CPU usage will
increase a bit due to the cost of filling/checking the area and to the
preference for cold cache over hot cache, though not as much as
with DEBUG_UAF. This option is meant to be usable in production.
We have an anti-looping protection in process_stream() that detects bugs
that used to affect a few filters like compression in the past which
sometimes forgot to handle a read0 or a particular error, leaving a
thread looping at 100% CPU forever. When such a condition is detected,
an alert is emitted and the process is killed so that it can be replaced
by a sane one:
[ALERT] (19061) : A bogus STREAM [0x274abe0] is spinning at 2057156
calls per second and refuses to die, aborting now! Please
report this error to developers [strm=0x274abe0,3 src=unix
fe=MASTER be=MASTER dst=<MCLI> txn=(nil),0 txn.req=-,0
txn.rsp=-,0 rqf=c02000 rqa=10000 rpf=88000021 rpa=8000000
sif=EST,40008 sib=DIS,84018 af=(nil),0 csf=0x274ab90,8600
ab=0x272fd40,1 csb=(nil),0
cof=0x25d5d80,1300:PASS(0x274aaf0)/RAW((nil))/unix_stream(9)
cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={}]
call trace(11):
| 0x4dbaab [c7 04 25 01 00 00 00 00]: stream_dump_and_crash+0x17b/0x1b4
| 0x4df31f [e9 bd c8 ff ff 49 83 7c]: process_stream+0x382f/0x53a3
(...)
One problem with this detection is that it used to only count the call
rate because we weren't sure how to make it more accurate, but the
threshold was high enough to prevent accidental false positives.
There is actually one case that manages to trigger it, which is when
sending huge amounts of requests pipelined on the master CLI. Some
short requests such as "show version" are sufficient to be handled
extremely fast and to cause a wake up of an analyser to parse the
next request, then an applet to handle it, back and forth. But this
condition is not an error, since some data are being forwarded by
the stream, and it's easy to detect it.
This patch modifies the detection so that update_freq_ctr() only
applies to calls made without CF_READ_PARTIAL or CF_WRITE_PARTIAL
set on any of the channels, which really indicates that nothing is
happening at all.
This is amply sufficient and extremely effective, as the call above
is still caught (shutr being ignored by an analyser) while a loop on
the master CLI now has no effect. The "call_rate" field in the detailed
"show sess" output will now be much lower, except for bogus streams,
which may help spot them. This field is only there for developers
anyway so it's pretty fine to slightly adjust its meaning.
This patch could be backported to stable versions in case of reports
of such an issue, but as that's unlikely, it's not really needed.
There is no need to use an MT_LIST to store the frames to send from a
packet number space. This is a remnant of the multi-threading support
for the TX part.
For now we have co_getline() which reads a buffer and stops on LF, and
co_getword() which reads a buffer and stops on one arbitrary delimiter.
But sometimes we'd need to stop on a set of delimiters (CR and LF, etc).
This patch adds a new function co_getdelim() which takes a set of delimiters
as a string, and constructs a small map (32 bytes) that's looked up during
parsing to stop after the first delimiter found within the set. It also
supports an optional escape character that skips a delimiter (typically a
backslash). For the rest it works exactly like the two other variants.
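The 32-byte map is essentially one bit per possible byte value; a hedged
sketch of its construction and lookup (helper names are illustrative):

  #include <stdint.h>

  /* build a 256-bit map from the delimiter string, once per call */
  static inline void delim_map_init(uint8_t map[32], const char *delim)
  {
          int i;

          for (i = 0; i < 32; i++)
                  map[i] = 0;
          while (*delim) {
                  uint8_t c = (uint8_t)*delim++;
                  map[c >> 3] |= 1u << (c & 7);
          }
  }

  /* test one scanned character against the map; the escape character,
   * when met, simply disables this test for the next character
   */
  static inline int is_delim(const uint8_t map[32], char ch)
  {
          uint8_t c = (uint8_t)ch;
          return map[c >> 3] & (1u << (c & 7));
  }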
During 2.4-dev, fault injection was enabled for cached pools with commit
207c09509 ("MINOR: pools: move the fault injector to __pool_alloc()"),
except that the condition for CONFIG_HAP_POOLS still depended on
DEBUG_FAIL_ALLOC not being set, which limits the usability to cases
where the define is set by hand. Let's remove it from the equation as
this is not a constraint anymore. While a bit old, there's no need to
backport this as it's only used during development.
Implement the emission of Retry packets. These packets are emitted in
response to an Initial packet from a client without a token. The token
in the Retry packet contains the ODCID from the Initial packet.
By default, Retry packet emission is disabled and the handshake can
continue without address validation. To enable Retry, a new bind option
has been defined named "quic-force-retry". If set, the handshake must be
conducted only after receiving a token in the Initial packet.
Implement the parsing of the token from Initial packets. The token is
expected to contain a CID, which is the DCID from the client's Initial
packet received without a token, i.e. the one which triggered the Retry
packet. This CID is then used for the transport parameters.
Note that at the moment Retry packet emission is not implemented. This
will be achieved in a following commit.
Implement a new QUIC TLS related function
quic_tls_generate_retry_integrity_tag(). This function can be used to
calculate the AEAD tag of a Retry packet.
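As a reference, this tag is the AES-128-GCM tag computed over an empty
plaintext, with the Retry pseudo-packet (ODCID length, ODCID, then the
Retry packet without the tag) as AAD, using the fixed QUIC v1 key and
nonce from RFC 9001 section 5.8. A hedged OpenSSL sketch; the function
name and signature are assumptions, not the real implementation:

  #include <openssl/evp.h>

  static int retry_integrity_tag_sketch(unsigned char *tag,  /* 16 bytes */
                                        const unsigned char *pseudo,
                                        int pseudo_len)
  {
          /* fixed key and nonce for QUIC v1, from RFC 9001 5.8 */
          static const unsigned char key[16] = {
                  0xbe, 0x0c, 0x69, 0x0b, 0x9f, 0x66, 0x57, 0x5a,
                  0x1d, 0x76, 0x6b, 0x54, 0xe3, 0x68, 0xc8, 0x4e,
          };
          static const unsigned char nonce[12] = {
                  0x46, 0x15, 0x99, 0xd3, 0x5d, 0x63, 0x2b, 0xf2,
                  0x23, 0x98, 0x25, 0xbb,
          };
          unsigned char dummy[16];
          int len, ret = 0;
          EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

          if (!ctx)
                  return 0;
          /* empty plaintext: only feed the pseudo-packet as AAD */
          if (EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, nonce) &&
              EVP_EncryptUpdate(ctx, NULL, &len, pseudo, pseudo_len) &&
              EVP_EncryptFinal_ex(ctx, dummy, &len) &&
              EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag))
                  ret = 1;
          EVP_CIPHER_CTX_free(ctx);
          return ret;
  }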
If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since we use
the SSL_get_servername function to return this SNI in the ssl_fc_sni
sample fetch, that means that in case of error, such as an SNI mismatch
with a frontend having the strict-sni option enabled, the sample fetch
would not work (making strict-sni related errors hard to debug).
This patch fixes that by storing the SNI as an ex_data in the SSL
context in case the ClientHello callback returns an error. This way the
sample fetch can fall back to getting the SNI this way. It will still
call the SSL_get_servername function first since it is the proper way
of getting a client's SNI when the handshake succeeded.
In order to avoid memory allocations at runtime in this heavily used
function, a new memory pool was created to store those client SNIs. Its
entry size is set to 256 bytes since SNIs can't be longer than 255
characters.
This fixes GitHub #1484.
It can be backported to 2.5.
Avoid closing idle connections if a soft stop is in progress.
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry
on write errors, this can result in errors while haproxy is reloading.
Even though a proper implementation should retry on connection/write
errors, this option was introduced to support backward compatibility
with haproxy < v2.4. Indeed, before v2.4, we were waiting for a last
request to be able to add a "connection: close" header and advise the
client to close the connection.
In a real life example, this behavior was seen in AWS using the ALB in
front of a haproxy. The end result was ALB sending 502 during haproxy
reloads.
This patch was tested on haproxy v2.4, with a regular reload on the
process, and a constant trend of requests coming in. Before the patch,
we see regular 502s returned to the client; when activating the option,
the 502s disappear.
This patch should help fix github issue #1506.
In order to unblock some v2.3 to v2.4 migrations, this patch should be
backported up to the v2.4 branch.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
[wt: minor edits to the doc to mention other options to care about]
Signed-off-by: Willy Tarreau <w@1wt.eu>
When blocked by the anti-amplification limit, it is the responsibility
of the client to unblock it by sending new datagrams. On the server
side, such datagrams must trigger the arming of the PTO timer even if
they could not be properly parsed.
Switch back to the QUIC_HS_ST_SERVER_HANDSHAKE state after a completed
handshake if ACKs must be sent.
Also ensure we build the post-handshake frames only once without relying
on the prev_st variable, and ensure we discard the Handshake packet
number space only once.
We need to be able to decrypt late Handshake packets after the TLS
secret keys have been discarded; if not, the peer may keep sending
Handshake packets which have not been acknowledged. But for such
packets, we discard the CRYPTO data.
RFC 9002 5.3. Estimating smoothed_rtt and rttvar:
MUST use the lesser of the acknowledgment delay and the peer's max_ack_delay
after the handshake is confirmed.
In ticket #1413, the transfer of FDs couldn't work correctly on Alpine
Linux. After a few tests with musl on another distribution it seems to
be a limitation of this libc.
The number of FDs that could be sent per sendmsg() was set to 253, which
does not seem to work with musl; decreasing it to 252 seems to work
better, so let's set this value everywhere since it does not have that
much impact.
This must be backported to every maintained version.
Now that we support batched allocations/releases, it appears that we can
reach the same performance on H2 with shared pools and 256kB thread-local
cache as without shared pools, a fast allocator and 1MB thread-local cache.
With 512kB we're up to 10% faster on highly multiplexed H2 than without the
shared cache. This was tested on a 16-core ARM machine. Thus it's time to
slightly reduce the per-thread memory cost, which may also improve the
performance on machines with smaller L2 caches. It essentially reverts
commit f587003fe ("MINOR: pools: double the local pool cache size to 1 MB").
Since the previous patch we can forcefully evict multiple objects from
the local cache, even when evicting based on the LRU entries. Let's
define a compile-time configurable setting to batch the release of
objects. For now we set this value to 8 items per round.
This is marked medium because eviction from the LRU will slightly change
in order to group the last items that are freed within a single cache
instead of accurately scanning only the oldest ones exactly in their
order of appearance. But this is required in order to evolve towards
batched removals.
In order to support batched allocations and releases, we'll need to
prepare chains of items linked together and that can be atomically
attached and detached at once. For this we implement a "down" pointer
in each pool_item that points to the other items belonging to the same
group. For now it's always NULL though freeing functions already check
them when trying to release everything.
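The item layout described above may be sketched as:

  struct pool_item {
          struct pool_item *next;  /* next group in the shared pool */
          struct pool_item *down;  /* other items of the same group,
                                    * always NULL for now */
  };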
At the moment we count the number of objects releasable to a shared pool
one by one. The way the formula is made allows the number of available
slots to be pre-computed, so let's add a function for that so that
callers can do it once before iterating.
This takes into account the average number of entries needed and the
minimum availability per pool. The function is not used yet.
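Conceptually the computation looks like this (field names are
illustrative and the real code also averages the needed count over
time):

  struct pool_head_sketch {
          unsigned int allocated;  /* objects obtained from the OS */
          unsigned int minavail;   /* minimum availability to preserve */
          unsigned int needed_avg; /* average number of needed entries */
  };

  /* number of objects that may be released to the OS in one pass */
  static unsigned int pool_releasable_sketch(const struct pool_head_sketch *p)
  {
          unsigned int keep = p->needed_avg + p->minavail;

          return (p->allocated > keep) ? p->allocated - keep : 0;
  }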
In order to support batch allocation from/to shared pools, we'll have to
support a specific representation for pool objects. The new pool_item
structure will be used for this. For now it only contains a "next"
pointer that matches exactly the current storage model. The few functions
that deal with the shared pool entries were adapted to use the new type.
There is no functionality difference at this point.
Instead of letting pool_put_to_shared_cache() pass the object to the
underlying OS layer when there's no more room, let's have the caller
check if the pool is full and either call pool_put_to_shared_cache()
or call pool_free_nocache().
Doing this noticeably simplifies the code as this function now only has
to deal with a pool and an item and only for cases where there are
local caches and shared caches. As the code was simplified and the
calls more isolated, the function was moved to pool.c.
Note that it's only called from pool_evict_from_local_cache{,s}() and
that a part of its logic might very well move there when dealing with
batches.
This function is used to know whether the shared pools are full or if we
can store more objects in them. Right now it cannot be used in a generic
way because when shared pools are not used it will return false, letting
one think pools can accept objects. Let's make one variant for each build
model.
At the moment pool_put_to_shared_cache() checks if the pool is crowded,
and if so it does the exact same job as pool_free_nocache(), otherwise
it adds the object there.
This patch rearranges the code so that the function is split in two and
either uses one path or the other, and always relies on pool_free_nocache()
in case we don't want to store the object. This way there will be a common
path with the variant not using the shared cache. The patch is better viewed
using git show -b since a whole block got reindented.
It's worth noting that there is a tiny difference now in the local cache
usage measurement, as the decrement of "used" used to be performed before
checking for pool_is_crowded() instead of being done after. This used to
result in always one less object being kept in the cache than what was
configured in minavail. The rearrangement of the code aligns it with
other call places.
Some changes affect the list element and others affect the pool stats.
Better group them together, as the compiler may not detect certain
possible optimizations after the casts made by the list macros.
One of the thread scaling challenges nowadays for the pools is the
contention on the shared caches. There's never any situation where we
have a shared cache and no local cache anymore, so we can technically
afford to transfer objects from the shared cache to the local cache
before returning them to the user via the regular path. This adds a
little bit more work per object per miss, but will permit batch
processing later.
This patch simply moves pool_get_from_shared_cache() to pool.c under
the new name pool_refill_local_from_shared(); this function does not
return anything but instead places the allocated object at the head of
the local cache.
The POOL_LINK macro is now only used for debugging, and it still requires
ifdefs around, which needlessly complicates the code. Let's replace it
and the calling code with a new pair of macros: POOL_DEBUG_SET_MARK()
and POOL_DEBUG_CHECK_MARK(), that respectively store and check the pool
pointer in the extra location at the end of the pool. This removes 4
pairs of ifdefs in the middle of the code.
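A hedged sketch of such a pair, assuming a pool->size field holding the
object size (the real definitions differ in details):

  #if defined(DEBUG_MEMORY_POOLS)
  # define POOL_DEBUG_SET_MARK(pool, item)                                \
          (*(struct pool_head **)((char *)(item) + (pool)->size) = (pool))
  # define POOL_DEBUG_CHECK_MARK(pool, item)  do {                        \
          if (*(struct pool_head **)((char *)(item) + (pool)->size)       \
              != (pool))                                                  \
                  ABORT_NOW(); /* object freed to the wrong pool */       \
  } while (0)
  #else
  # define POOL_DEBUG_SET_MARK(pool, item)    do { } while (0)
  # define POOL_DEBUG_CHECK_MARK(pool, item)  do { } while (0)
  #endif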
This practice relying on POOL_LINK() dates from the era where there were
no pool caches, but given that the structures are a bit more complex now
and that pool caches do not make use of this feature, it is totally
useless since released elements have already been overwritten, and yet
it complicates the architecture and prevents simplifications and
optimizations from being made. Let's just get rid of this feature. The
pointer to
the origin pool is preserved though, as it helps detect incorrect frees
and serves as a canary for overflows.
The pools have become complex with the shared pools and the thread-local
caches, and the purpose of certain structures is never easy to grasp.
Let's add a bit of documentation there to spare those touching that
area some long and painful analysis.