Implement the parsing of the token from Initial packets. The token is
expected to contain a CID, which is the DCID of the Initial packet
received from the client without a token, i.e. the packet which triggers
a Retry packet. This CID is then used for the transport parameters.
Note that Retry packet emission is not implemented at the moment. This
will be done in a following commit.
Implement a new QUIC TLS related function
quic_tls_generate_retry_integrity_tag(). This function can be used to
calculate the AEAD tag of a Retry packet.
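For reference, such a tag can be computed as in the sketch below, which only
illustrates the RFC 9001 procedure with OpenSSL and is not haproxy's actual
code: AES-128-GCM with the fixed QUIC v1 key/nonce, applied over the "Retry
pseudo-packet" (ODCID length, ODCID, then the Retry packet without its tag)
as associated data and an empty plaintext.

  #include <stddef.h>
  #include <openssl/evp.h>

  /* QUIC v1 Retry integrity constants from RFC 9001 5.8 */
  static const unsigned char retry_key[16] = {
          0xbe, 0x0c, 0x69, 0x0b, 0x9f, 0x66, 0x57, 0x5a,
          0x1d, 0x76, 0x6b, 0x54, 0xe3, 0x68, 0xc8, 0x4e,
  };
  static const unsigned char retry_nonce[12] = {
          0x46, 0x15, 0x99, 0xd3, 0x5d, 0x63, 0x2b, 0xf2,
          0x23, 0x98, 0x25, 0xbb,
  };

  /* Illustrative helper: <retry>/<retry_len> is the Retry packet without
   * its final 16-byte tag, <odcid> the client's original DCID.
   */
  static int retry_integrity_tag(unsigned char tag[16],
                                 const unsigned char *odcid, size_t odcid_len,
                                 const unsigned char *retry, size_t retry_len)
  {
          EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
          unsigned char odcid_len_byte = odcid_len, dummy[16];
          int outlen, ret = 0;

          if (!ctx)
                  return 0;
          /* feed the whole pseudo-packet as AAD, no plaintext at all */
          if (!EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, retry_key, retry_nonce) ||
              !EVP_EncryptUpdate(ctx, NULL, &outlen, &odcid_len_byte, 1) ||
              !EVP_EncryptUpdate(ctx, NULL, &outlen, odcid, (int)odcid_len) ||
              !EVP_EncryptUpdate(ctx, NULL, &outlen, retry, (int)retry_len) ||
              !EVP_EncryptFinal_ex(ctx, dummy, &outlen) ||
              !EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag))
                  goto out;
          ret = 1;
   out:
          EVP_CIPHER_CTX_free(ctx);
          return ret;
  }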
If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since we use
the SSL_get_servername function to return this SNI in the ssl_fc_sni
sample fetch, that means that in case of error, such as an SNI mismatch
with a frontend having the strict-sni option enabled, the sample fetch
would not work (making strict-sni related errors hard to debug).
This patch fixes that by storing the SNI as an ex_data in the SSL
context when the ClientHello callback returns an error, so that the
sample fetch can fall back to it. The sample fetch still calls the
SSL_get_servername function first, since it is the proper way of getting
a client's SNI when the handshake succeeded.
In order to avoid memory allocations at runtime in this frequently used
code path, a new memory pool was created to store those client SNIs. Its
entry size is set to 256 bytes since an SNI cannot be longer than 255
characters.
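As a rough sketch of the mechanism (the ex_data index and pool names below
are illustrative, not the actual ones):

  #include <string.h>
  #include <openssl/ssl.h>
  #include <haproxy/pool.h>

  /* illustrative names: created once at init time */
  static int sni_ex_index;                /* from SSL_get_ex_new_index() */
  static struct pool_head *pool_head_sni; /* 256-byte entries */

  /* called from the ClientHello callback error path */
  static void save_client_sni(SSL *ssl, const char *sni)
  {
          char *copy = pool_alloc(pool_head_sni);

          if (copy) {
                  strncpy(copy, sni, 255);
                  copy[255] = '\0';
                  SSL_set_ex_data(ssl, sni_ex_index, copy);
          }
  }

  /* used by the ssl_fc_sni sample fetch: regular API first, then fallback */
  static const char *get_client_sni(SSL *ssl)
  {
          const char *name = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

          return name ? name : SSL_get_ex_data(ssl, sni_ex_index);
  }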
This fixes GitHub #1484.
It can be backported in 2.5.
Avoid closing idle connections if a soft stop is in progress.
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry
on write errors, this can result in errors while haproxy is reloading.
Even though a proper implementation should retry on connection/write
errors, this option was introduced to preserve backward compatibility
with haproxy < v2.4. Indeed, before v2.4, we were waiting for a last
request to be able to add a "connection: close" header and advise the
client to close the connection.
In a real life example, this behavior was seen in AWS using the ALB in
front of a haproxy. The end result was ALB sending 502 during haproxy
reloads.
This patch was tested on haproxy v2.4, with a regular reload of the
process and a constant stream of incoming requests. Before the patch,
502s are regularly returned to the client; with the option enabled, the
502s disappear.
This patch should help fix GitHub issue #1506.
In order to unblock some v2.3 to v2.4 migrations, this patch should be
backported up to the v2.4 branch.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
[wt: minor edits to the doc to mention other options to care about]
Signed-off-by: Willy Tarreau <w@1wt.eu>
When blocked by the anti-amplification limit, it is the responsibility of the
client to unblock the server by sending new datagrams. On the server side,
such datagrams must trigger the arming of the PTO timer even if they could
not be properly parsed.
Switch back to the QUIC_HS_ST_SERVER_HANDSHAKE state after a completed
handshake if acks must be sent.
Also ensure we build post-handshake frames only once, without using the
prev_st variable, and ensure we discard the Handshake packet number space
only once.
We need to be able to decrypt late Handshake packets after the TLS secret
keys have been discarded. If we cannot, the peer keeps resending Handshake
packets which are never acknowledged. For such packets, however, we discard
the CRYPTO data.
RFC 9002 5.3. Estimating smoothed_rtt and rttvar:
MUST use the lesser of the acknowledgment delay and the peer's max_ack_delay
after the handshake is confirmed.
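Transposed into code, the rule looks roughly like this (an illustrative
sketch of the RFC 9002 pseudo-code, not haproxy's actual RTT code):

  #include <stdint.h>

  struct rtt_sketch {
          uint64_t min_rtt;
          uint64_t smoothed_rtt;
          uint64_t rttvar;
  };

  static void rtt_update(struct rtt_sketch *rtt, uint64_t latest_rtt,
                         uint64_t ack_delay, uint64_t max_ack_delay,
                         int handshake_confirmed)
  {
          uint64_t adjusted_rtt = latest_rtt;
          uint64_t var_sample;

          if (latest_rtt < rtt->min_rtt)
                  rtt->min_rtt = latest_rtt;

          /* the "lesser of" rule: cap ack_delay once the handshake is confirmed */
          if (handshake_confirmed && ack_delay > max_ack_delay)
                  ack_delay = max_ack_delay;

          if (latest_rtt >= rtt->min_rtt + ack_delay)
                  adjusted_rtt = latest_rtt - ack_delay;

          rtt->smoothed_rtt = (7 * rtt->smoothed_rtt + adjusted_rtt) / 8;
          var_sample = rtt->smoothed_rtt > adjusted_rtt ?
                       rtt->smoothed_rtt - adjusted_rtt :
                       adjusted_rtt - rtt->smoothed_rtt;
          rtt->rttvar = (3 * rtt->rttvar + var_sample) / 4;
  }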
In ticket #1413, the transfer of FDs could not work correctly on Alpine
Linux. After a few tests with musl on another distribution, it seems to
be a limitation of this libc.
The number of FDs that could be sent per sendmsg was set to 253, which
does not seem to work with musl; decreasing it to 252 seems to work
better, so let's set this value everywhere since it does not have much
impact.
This must be backported in every maintained version.
Now that we support batched allocations/releases, it appears that we can
reach the same performance on H2 with shared pools and 256kB thread-local
cache as without shared pools, a fast allocator and 1MB thread-local cache.
With 512kB we're up to 10% faster on highly multiplexed H2 than without the
shared cache. This was tested on a 16-core ARM machine. Thus it's time to
slightly reduce the per-thread memory cost, which may also improve the
performance on machines with smaller L2 caches. It essentially reverts
commit f587003fe ("MINOR: pools: double the local pool cache size to 1 MB").
Since the previous patch we can forcefully evict multiple objects from the
local cache, even when evicting based on the LRU entries. Let's define
a compile-time configurable setting to batch the release of objects. For
now this value is set to 8 items per round.
This is marked medium because eviction from the LRU will slightly change
in order to group the last items that are freed within a single cache
instead of accurately scanning only the oldest ones exactly in their
order of appearance. But this is required in order to evolve towards
batched removals.
In order to support batched allocations and releases, we'll need to
prepare chains of items linked together and that can be atomically
attached and detached at once. For this we implement a "down" pointer
in each pool_item that points to the other items belonging to the same
group. For now it is always NULL, though the freeing functions already
check it when trying to release everything.
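The storage model this describes boils down to something like the following
(simplified sketch):

  struct pool_item {
          struct pool_item *next; /* next free item in the shared pool */
          struct pool_item *down; /* other items of the same group (NULL for now) */
  };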
At the moment we count the number of releasable objects to a shared pool
one by one. The way the formula is made allows to pre-compute the number
of available slots, so let's add a function for that so that callers can
do it once before iterating.
This takes into account the average number of entries needed and the
minimum availability per pool. The function is not used yet.
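The idea can be sketched as follows; names and the exact formula are only
illustrative, not the real haproxy code:

  /* pre-compute how many objects the shared pool can still accept,
   * from the average number of entries needed plus the minimum
   * availability, so callers iterate that many times at most.
   */
  static unsigned int shared_pool_slots_sketch(unsigned int allocated,
                                               unsigned int avg_needed,
                                               unsigned int minavail)
  {
          unsigned int limit = avg_needed + minavail;

          return allocated < limit ? limit - allocated : 0;
  }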
In order to support batch allocation from/to shared pools, we'll have to
support a specific representation for pool objects. The new pool_item
structure will be used for this. For now it only contains a "next"
pointer that matches exactly the current storage model. The few functions
that deal with the shared pool entries were adapted to use the new type.
There is no functionality difference at this point.
Instead of letting pool_put_to_shared_cache() pass the object to the
underlying OS layer when there's no more room, let's have the caller
check if the pool is full and either call pool_put_to_shared_cache()
or call pool_free_nocache().
Doing this noticeably simplifies the code as this function now only has
to deal with a pool and an item and only for cases where there are
local caches and shared caches. As the code was simplified and the
calls more isolated, the function was moved to pool.c.
Note that it's only called from pool_evict_from_local_cache{,s}() and
that a part of its logic might very well move there when dealing with
batches.
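The resulting call pattern at the eviction sites looks roughly like this
(signatures approximated):

  static void evict_one(struct pool_head *pool, struct pool_item *item)
  {
          /* the caller, not pool_put_to_shared_cache(), now decides
           * between the shared cache and a direct release to the OS
           */
          if (!pool_is_crowded(pool))
                  pool_put_to_shared_cache(pool, item);
          else
                  pool_free_nocache(pool, item);
  }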
This function is used to know whether the shared pools are full or if we
can store more objects in them. Right now it cannot be used in a generic
way because when shared pools are not used it will return false, letting
one think pools can accept objects. Let's make one variant for each build
model.
At the moment pool_put_to_shared_cache() checks if the pool is crowded,
and if so it does the exact same job as pool_free_nocache(), otherwise
it adds the object there.
This patch rearranges the code so that the function is split in two and
either uses one path or the other, and always relies on pool_free_nocache()
in case we don't want to store the object. This way there will be a common
path with the variant not using the shared cache. The patch is better viewed
using git show -b since a whole block got reindented.
It's worth noting that there is a tiny difference now in the local cache
usage measurement, as the decrement of "used" used to be performed before
checking for pool_is_crowded() instead of being done after. This used to
result in always one less object being kept in the cache than what was
configured in minavail. The rearrangement of the code aligns it with
other call places.
Some changes affect the list element and others affect the pool stats.
Better group them together, as the compiler may not detect certain
possible optimizations after the casts made by the list macros.
One of the thread scaling challenges nowadays for the pools is the
contention on the shared caches. There's never any situation where we
have a shared cache and no local cache anymore, so we can technically
afford to transfer objects from the shared cache to the local cache
before returning them to the user via the regular path. This adds a
little bit more work per object per miss, but will permit batch
processing later.
This patch simply moves pool_get_from_shared_cache() to pool.c under
the new name pool_refill_local_from_shared(), and this function does
not return anything but it places the allocated object at the head of
the local cache.
The POOL_LINK macro is now only used for debugging, and it still requires
ifdefs around, which needlessly complicates the code. Let's replace it
and the calling code with a new pair of macros: POOL_DEBUG_SET_MARK()
and POOL_DEBUG_CHECK_MARK(), that respectively store and check the pool
pointer in the extra location at the end of the pool. This removes 4
pairs of ifdefs in the middle of the code.
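Conceptually, under DEBUG_MEMORY_POOLS the two macros amount to something
like this (simplified sketch, not the exact definitions):

  /* store the owning pool pointer in the extra space past the object... */
  #define POOL_DEBUG_SET_MARK(pool, item) do {                                  \
          *(struct pool_head **)((char *)(item) + (pool)->size) = (pool);       \
  } while (0)

  /* ... and verify it on allocation to catch frees to the wrong pool */
  #define POOL_DEBUG_CHECK_MARK(pool, item) do {                                \
          if (*(struct pool_head **)((char *)(item) + (pool)->size) != (pool))  \
                  ABORT_NOW();                                                  \
  } while (0)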
This practice relying on POOL_LINK() dates from the era when there were
no pool caches, but given that the structures are a bit more complex now
and that pool caches do not make use of this feature, it is totally
useless since released elements have already been overwritten, and yet
it complicates the architecture and prevents simplifications and
optimizations. Let's just get rid of this feature. The pointer to
the origin pool is preserved though, as it helps detect incorrect frees
and serves as a canary for overflows.
The pools have become complex with the shared pools and the thread-local
caches, and the purpose of certain structures is never easy to grasp.
Let's add a bit of documentation there to save some long and painful
analysis to those touching that area.
This bug was introduced by d817dc73 ("MEDIUM: ssl: Load client
certificates in a ckch for backend servers") in which the creation of
the SSL_CTX for a server was moved to the configuration parser when
using a "crt" keyword instead of being done in ssl_sock_prepare_srv_ctx().
The patch 0498fa40 ("BUG/MINOR: ssl: Default-server configuration ignored by
server") made it worse by setting the same SSL_CTX for every servers
using a default-server. Resulting in any SSL option on a server applied
to every server in its backend.
This patch fixes the issue by reintroducing a string which stores the
path of the certificate inside the server structure, and by loading the
certificate in ssl_sock_prepare_srv_ctx() again.
This is a quick fix to backport; a cleaner way can be achieved by always
creating the SSL_CTX in ssl_sock_prepare_srv_ctx() and properly splitting
the ssl_sock_load_srv_cert() function.
This patch fixes issue #1488.
Must be backported as far as 2.4.
This is a second help to dump loaded library names late at boot, once
external code has already been initialized. The purpose is to provide
a format that makes it easy to pass to "tar" to produce an archive
containing the executable and the list of dependencies. For example
if haproxy is started as "haproxy -f foo.cfg", a config check only
will suffice to quit before starting, "-q" will be used to disable
undesired output messages, and -dL will be use to dump libraries.
This will result in such a command to trivially produce a tarball
of loaded libraries:
./haproxy -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz
Many times core dumps reported by users who experience trouble are
difficult to exploit due to missing system libraries. Sometimes,
having just a list of loaded libraries and their respective addresses
can already provide some hints about some problems.
This patch makes a step in that direction by adding a new "show libs"
command that will try to enumerate the list of object files that are
loaded in memory, relying on the dynamic linker for this. It may also
be used to detect that some foreign code pulls in other undesired libs
(e.g. some external Lua modules).
At the moment it's only supported on glibc when USE_DL is set, but it's
implemented in a way that ought to make it reasonably easy to be extended
to other platforms.
We'll use this glibc function to dump loaded libs. It's been
available since glibc-2.2.4, and as it requires dlpi headers defined
in link.h, it implicitly relies on dlfcn, thus we condition it to
USE_DL. Other operating systems or libc might have different
dependencies so let's stick to the bare minimum for now.
A subtle change of target address allocation was introduced with commit
68cf3959b ("MINOR: backend: rewrite alloc of stream target address") in
2.4. Prior to this patch, a target address was allocated by function
assign_server_address() only if none was previously allocated. After
the change, the allocation became unconditional. Most of the time it
makes no difference, except when we pass multiple times through
connect_server() with SF_ADDR_SET cleared.
The most obvious fix would be to avoid allocating that address there
when already set, but the root cause is that since introduction of
dynamically allocated addresses, the SF_ADDR_SET flag lies. It can
be cleared during redispatch or during a queue redistribution without
the address being released.
This patch instead gives back all its correct meaning to SF_ADDR_SET
and guarantees that when not set no address is allocated, by freeing
that address at the few places the flag is cleared. The flag could
even be removed so that only the address is checked but that would
require to touch many areas for no benefit.
The easiest way to test it is to send requests to a proxy with l7
retries enabled, which forwards to a server returning 500:
    defaults
        mode http
        timeout client 1s
        timeout server 1s
        timeout connect 1s
        retry-on all-retryable-errors
        retries 1
        option redispatch

    listen proxy
        bind *:5000
        server app 0.0.0.0:5001

    frontend dummy-app
        bind :5001
        http-request return status 500
Issuing "show pools" on the CLI will show that pool "sockaddr" grows
as requests are redispatched, and remains stable with the fix. Even
"ps" will show that the process' RSS grows by ~160B per request.
This fix will need to be backported to 2.4. Note that before 2.5,
there's no strm->si[1].dst, strm->target_addr must be used instead.
This addresses github issue #1499. Special thanks to Daniil Leontiev
for providing a well-documented reproducer.
Implement a refcount on quic_conn instance. By default, the refcount is
0. Two functions are implemented to manipulate it.
* qc_conn_take() which increments the refcount
* qc_conn_drop() which decrements it. If the refcount is 0 *BEFORE*
  the subtraction, the instance is freed.
The refcount is incremented in retrieve_qc_conn_from_cid() or when
allocating a new quic_conn in qc_lstnr_pkt_rcv(). It is decremented most
notably by the xprt.close operation and at the end of
qc_lstnr_pkt_rcv(). The increments/decrements must be performed under
the CID lock to guarantee thread-safety.
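In substance the two helpers behave like this sketch (the name of the final
release function is illustrative):

  static inline void qc_conn_take(struct quic_conn *qc)
  {
          HA_ATOMIC_ADD(&qc->refcount, 1);
  }

  static inline void qc_conn_drop(struct quic_conn *qc)
  {
          /* free when the counter was 0 *before* the decrement */
          if (HA_ATOMIC_FETCH_SUB(&qc->refcount, 1) == 0)
                  quic_conn_release(qc); /* illustrative name */
  }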
Add a pointer in quic_conn to its related ssl_sock_ctx. This change is
required to avoid having to use the connection instance to access it.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances. It will be notably
useful when the connection allocation will be delayed.
Some applications may send some information about the reason why they decided
to close a connection. Add them to CONNECTION_CLOSE frame traces.
Take the opportunity of this patch to shorten some overly long variable
names without any functional impact.
Add traces about important frame types to chunk_tx_frm_appendf()
and call this function for any type of frame when parsing a packet.
Move it to quic_frame.c
Prepare trace support for quic_conn instances as argument. This will be
used by the xprt-quic layer in replacement of the connection.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances.
Add const qualifier on arguments of several dump functions used in the
trace callback. This is required to be able to replace the first trace
argument by a quic_conn instance. The first argument is a const pointer
and so the members accessed through it must also be const.
Add a new member in ssl_sock_ctx structure to reference the quic_conn
instance if used in the QUIC stack. This member is initialized during
qc_conn_init().
This is needed to be able to access to the quic_conn without relying on
the connection instance. This commit is part of the rearchitecture of
xprt-quic layers and the separation between xprt and connection
instances.
Move qcc_get_qcs() function from xprt_quic.c to mux_quic.c. This
function is used to retrieve the qcs instance from a qcc with a stream
id. This clearly belongs to the mux-quic layer.
When the packet at the first place in the RX buffer still has a non-null
reference counter, there is no need to keep trying to empty the buffer:
the following packets cannot reach the first place anyway!
With the DCID refactoring, the locking is more centralized. It is
possible to simplify the code for removal of a quic_conn from the ODCID
tree.
This operation can be conducted as soon as the connection has been
retrieved from the DCID tree, meaning that the peer now uses the final
DCID. Remove the bit used to flag a connection for removal and just use
ebmb_delete() on each successful lookup in the DCID tree. If the
quic_conn has already been removed, this is just a noop thanks to the
eb_delete() implementation.
A new function named qc_retrieve_conn_from_cid() now contains all the
code to retrieve a connection from a DCID. It handles all types of
packets and centralizes the locking on the ODCID/DCID trees.
This simplifies the qc_lstnr_pkt_rcv() function.
If a UDP datagram contains multiple QUIC packets, they must all use the
same DCID. The datagram context is used in part for this.
To ensure this, a comparison was made on the dcid_node of the DCID tree.
As it is a comparison based on the pointer address, it can be faulty when
nodes are removed and re-added at the same address.
Replace this comparison with a proper comparison on the DCID data itself.
To this end, the dgram_ctx structure now contains a quic_cid member.
For the first Initial packets, the socket source address is
concatenated to the DCID. This is used to be able to differentiate
possible collisions between several clients which used the same ODCID.
Refactor the code to manage DCID and the concatenation with the address.
Before this, the concatenation was done on the quic_cid struct and its
<len> field incremented. In the code it is difficult to differentiate a
normal DCID with a DCID + address concatenated.
A new field <addrlen> has been added in the quic_cid struct. The <len>
field now only contains the size of the QUIC DCID. the <addrlen> is
first initialized to 0. If the address is concatenated, it will be
updated with the size of the concatenated address. This means we now
have to explicitly use either cid.len or cid.len + cid.addrlen to
access the DCID alone or the DCID plus the address. The code should be
clearer thanks to this.
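The resulting layout can be pictured as follows (array size and field widths
are illustrative):

  #include <netinet/in.h>

  struct quic_cid {
          /* CID bytes, optionally followed by the concatenated address */
          unsigned char data[20 + sizeof(struct sockaddr_in6)];
          unsigned char len;     /* size of the QUIC CID only */
          unsigned char addrlen; /* 0, or size of the concatenated address */
  };
  /* ODCID tree lookups then use cid.len + cid.addrlen as the key length,
   * while plain DCID handling uses cid.len alone.
   */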
The field <odcid_len> in quic_rx_packet struct is now useless and has
been removed. However, a new parameter must be added to the
qc_new_conn() function to specify the ODCID address length.
In the haproxy implementation, generated DCIDs are 8 bytes long, the
minimal value allowed by the specification. Rename the constant
representing this size to make clear that it is haproxy-specific.
The packet number space flags were mixed with the connection-level flags.
This led to ACKs being sent at the connection level without regard to
the underlying packet number space. But we want to be able to acknowledge
packets for a specific packet number space.
A client sends a 0-RTT data packet after an Initial one in the same datagram.
We must be able to parse such packets just after having parsed the Initial packets.
Export the code responsible for setting the ->app_ops structure into a
quic_set_app_ops() function. It must be called by the TLS callback which
selects the application (ssl_sock_advertise_alpn_protos) so as to be
able to build application packets after having received 0-RTT data.
This field is no longer useful. Modify the traces accordingly.
Also initialize ->pn_node.key value to -1, which is an illegal value
for QUIC packet number, and display it in traces if different from -1.
This patch adds the parsing of the optional condition parameters that
can be passed to the set-var and set-var-fmt actions (http as well as
tcp). Those conditions will not be taken into account yet in the var_set
function so conditions passed as parameters will not have any effect.
Since actions do not benefit from the parameter preparsing that
converters have, parsing conditions needed to be done by hand.
This patch adds the parsing of the optional condition parameters that
can be passed to the set-var converter. Those conditions will not be
taken into account yet in the var_set function so conditions passed as
parameters will not have any effect. This is true for any condition
apart from the "ifexists" one that is also used to replace the
VF_UPDATEONLY flag that was used to prevent proc-scope variable creation
from a Lua module.
In LibreSSL 3.5.0, BIO is going to become opaque, so haproxy's
compat macros will no longer work. The functions they substitute
have been available since LibreSSL 2.7.0.
This allows all platforms supporting CPU affinity to have a chance
to detect the CPU topology from a given valid node (e.g.
DragonflyBSD seems to be NUMA-aware from the kernel's perspective
and seems willing to start providing userland means to get
proper info).
This was reported by the CI with clang as the compiler.
In file included from src/ssl_sock.c:80:
include/haproxy/xprt_quic.h:1100:50: error: passing 'int *' to parameter of
type 'unsigned int *' converts between pointers to integer types with
different sign [-Werror,-Wpointer-sign]
} while (refcnt && !HA_ATOMIC_CAS(&pkt->refcnt, &refcnt, refcnt - 1));
^~~~~~~
Initialize all flow-control members on the qcc instance. Without this,
the values are undefined and errors about a reached stream limit may
occur.
The xprt layer is responsible for notifying the mux of a CONNECTION_CLOSE
reception. In this case the QC_CF_CC_RECV flag is set on the qcc and the
mux tasklet is woken up.
One of the notable effects of QC_CF_CC_RECV is that each qcs will be
released even if it still has remaining data in its send buffer.
A qcs is not freed if there is remaining data in its buffer. In this
case, the QC_SF_DETACH flag is set.
The qcc io handler is responsible for removing the qcs once QC_SF_DETACH
is set and its buffers are empty.
Replace bug.h by api.h in mux_quic header.
This is required because bug.h uses atomic operations when compiled with
DEBUG_MEM_STATS. api.h takes care of including atomic.h before bug.h.
Set the HTX EOM flag on RX at the app layer. This is required to notify
the stream analyzers about the end of the request, else the request
channel never reaches the MSG_DONE state.
Remove qc_eval_pkt(), which came with the multithreading support. It was
there to evaluate the length of a TX packet before building it, so that
several threads could build TX packets without consuming a packet number
for nothing (when the building failed). But as the TX packet building
functions are always executed by the same thread, the one attached to
the connection, it no longer makes sense to keep such a function.
Furthermore it is buggy, since we recently had to pad TX packets under
certain circumstances.
After the handshake has succeeded, we must delete any remaining
Initial or Handshake packets from the RX buffer. This cannot be
done based on the state of the connection (the ->st quic_conn struct
member value) as the packets are not received/processed in order.
Add a null byte at the end of the RX buffer to notify the consumer that
there is no more data to process.
Modify quic_rx_packet_pool_purge(), which is the function that removes
the RX packets from the buffer. Also rename this function to
quic_rx_pkts_del().
As the RX packets may be accessed by the QUIC connection handler
(quic_conn_io_cb()), the function responsible for decrementing their
reference counters must not access any other information than these
reference counters! It was a very bad idea to try to purge the RX buffer
as soon as possible from this function.
Handle the case when the app layer sending buffer is full. A new flag,
QC_SF_BLK_MROOM, is set in this case and the transfer is interrupted. It
is expected that the conn-stream layer will then subscribe to SEND.
The MROOM flag is reset each time the muxer transfers data from the app
layer to its own buffer. If the app layer has subscribed to SEND, it is
woken up.
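The send path then follows roughly this pattern (a sketch with approximate
names, not the actual mux code):

  static size_t qcs_push_sketch(struct qcs *qcs, struct buffer *in, size_t count)
  {
          /* copy as much as possible from the app layer buffer into the
           * qcs buffer; block with QC_SF_BLK_MROOM when it is full so
           * that the conn-stream layer subscribes to SEND
           */
          size_t copied = b_xfer(&qcs->tx.buf, in, count);

          if (copied < count)
                  qcs->flags |= QC_SF_BLK_MROOM;
          else
                  qcs->flags &= ~QC_SF_BLK_MROOM;
          return copied;
  }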
Implement the subscription in the mux on the qcs instance.
Subscribe is now used by the h3 layer when receiving an incomplete frame
on the H3 control stream. It is also used when attaching the remote
uni-directional streams on the h3 layer.
In qc_send(), the mux wakes up the qcs for each new transfer executed.
This is done via the qcs_notify_send() method.
The xprt wakes up the qcs when receiving data on unidirectional streams.
This is done via the method qcs_notify_recv().
Set the QC_SF_FIN_STREAM flag from the app layers (h3 / hq-interop) when
reaching the HTX EOM. This is used to inform the mux layer that it must
set the FIN on the QUIC stream.
Re-implement the QUIC mux. It will reuse the mechanics from the previous
mux without all untested/unsupported features. This should ease the
maintenance.
Note that a lot of features are broken for the moment. They will be
re-implemented on the following commits to have a clean commit history.
ha_backtrace_to_stderr() must be declared in the CRASH_NOW() macro when
HAProxy is compiled with DEBUG_STRICT_NOCRASH. Otherwise an error is
reported during compilation:
include/haproxy/bug.h:58:26: error: implicit declaration of function ‘ha_backtrace_to_stderr’ [-Werror=implicit-function-declaration]
58 | #define CRASH_NOW() do { ha_backtrace_to_stderr(); } while (0)
This patch must be backported as far as 2.4.
If H1 headers are not fully received at once, the parsing is restarted
one last time when all headers are finally received. When this happens,
the h1m flags are sanitized to remove all values set during parsing.
But some flags were erroneously preserved. Among others, the
H1_MF_TE_CHUNKED flag was not removed, which could lead to parsing
errors.
To fix the bug and make things easy, a mask has been added with all flags
that must be preserved. It will be more stable. This mask is used to
sanitize h1m flags.
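As an illustration of the approach (the flags retained here are examples,
not the exhaustive real mask):

  #define H1_MF_RESTART_PRESERVE (H1_MF_RESP | H1_MF_TOLOWER | H1_MF_NO_PHDR)

  static inline void h1m_sanitize_flags(struct h1m *h1m)
  {
          /* drop everything set during the aborted parsing attempt,
           * including H1_MF_TE_CHUNKED which used to leak through
           */
          h1m->flags &= H1_MF_RESTART_PRESERVE;
  }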
This patch should fix the issue #1469. It must be backported to 2.5.
When the response is parsed, query items are stored in a list, attached to
the parsed response (resolve_response).
First, there is one and only one query sent at a time. Thus, there is no
reason to use a list. There is a test to be sure there is only one query
item in the response. Then, the reference on this query item is only used to
validate the domain name is the one requested. So the query list can be
removed. We only expect one query item, no reason to loop on query records.
In addition, the query domain name is now immediately checked against the
resolution domain name. This way, the query item is only manipulated during
the response parsing.
During the post-parsing stage, the SSL context of a server is initialized
if SSL is configured on the server or its default-server. It is required
to be able to enable SSL at runtime. However a regression was introduced,
because the last parsed default-server is used, and it is not necessarily
the default-server line used to configure the server. This may lead to
erroneously initializing the SSL context for a server without SSL
parameters, or to skipping it while it should be done.
The problem is the default-server used to configure a server is not saved
during configuration parsing. So, the information is lost during the
post-parsing. To fix the bug, the SRV_F_DEFSRV_USE_SSL flag is
introduced. It is used to know when a server was initialized with a
default-server using SSL.
For the record, the commit f63704488e ("MEDIUM: cli/ssl: configure ssl on
server at runtime") has introduced the bug.
This patch must be backported as far as 2.4.
Apple libmalloc has its own notion of memory arenas as malloc_zone, with
a rich API offering callbacks for various allocation strategies, but here
we just use the defaults.
In trim_all_pools(), we ask each zone to purge as much memory as possible
("greedy" mode).
As soon as the connection ID (the one chosen by the QUIC server) has been
used by the client, we can delete its original destination connection ID
from its tree.
When running the Key Update process, we must maintain a lot of
information, especially when the key phase bit has been toggled by the
peer, as this may be due to late packets. This patch adds a new
quic_tls_kp structure to do so. Such structures are used to store the
previous and next secrets, keys and IVs associated with the previous
and next RX key phases. We also need the next TX key phase information
to be able to encrypt packets for the next key phase.
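The kind of state such a structure carries can be sketched as follows
(field names and sizes are illustrative):

  #include <stddef.h>

  struct quic_tls_kp {
          unsigned char secret[64];
          size_t secretlen;
          unsigned char key[32];
          size_t keylen;
          unsigned char iv[12];
          size_t ivlen;
          int flags; /* e.g. "secret derived", "keys in use" */
  };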
When sending a CONNECTION_CLOSE frame to immediately close the connection,
do not provide CRYPTO data to the TLS stack. Do not build anything else
than a CONNECTION_CLOSE frame and do not derive any secret while in the
immediate close state.
Seize the opportunity of this patch to rename the ->err quic_conn struct
member to ->error_code.
We set this TLS error when no application protocol could be negotiated
via the TLS callback concerned. It is converted to a QUIC CRYPTO_ERROR
error (0x178).
Change the way the CIDs are organized so that the DCID of received
packets can be attached to a QUIC connection. This is necessary to be
able to handle multiple DCIDs for one connection.
For this, the quic_connection_id structure has been extended. When
allocated, these entries are inserted in the receiver CID tree instead
of directly in the quic_conn. When receiving a packet, the receiver tree
is inspected to retrieve the quic_connection_id, which now contains a
reference to the QUIC connection.
Add an ->err member to the quic_conn struct to store the connection
errors. It is the responsibility of the ->send_alert callback of the
SSL_QUIC_METHOD struct to handle TLS alerts and consequently update the
->err value.
At this time, when entering qc_build_pkt(), we build a CONNECTION_CLOSE
frame to close the connection when the ->err value is not null.
If we want to run quic-tracker against haproxy, we must at least
support the draft version of the TLS extension for the QUIC transport
parameters (0xffa5). quic-tracker QUIC version is draft-29 at this time.
We select this depending on the QUIC version: for a draft version, the
draft TLS extension is selected.
When establishing an outbound connection, haproxy checks if the cached
TLS session has the same SNI as the connection we are trying to
resume.
This test was done by calling SSL_get_servername(), which in TLSv1.2
returned the SNI. With TLSv1.3 this is not the case anymore and the
function returns NULL, which invalidates any outbound connection we are
trying to resume if it uses the sni keyword on its server line.
This patch fixes the problem by storing the SNI in the "reused_sess"
structure alongside the session itself.
The ssl_sock_set_servername() now has a RWLOCK because this session
cache entry could be accessed by the CLI when trying to update a
certificate on the backend.
This fix must be backported in every maintained version, however the
RWLOCK only exists since version 2.4.
Implement a new app_ops layer for quic interop. This layer uses HTTP/0.9
on top of QUIC. Implementation is minimal, with the intent to be able to
pass interoperability test suite from
https://github.com/marten-seemann/quic-interop-runner.
It is instantiated if the negotiated ALPN is "hq-interop".