The "wait" CLI command can be used to wait until either a defined timeout or a
specific condition is reached. So far, srv-removable is the only condition
supported. It is implemented via srv_check_for_deletion(), which is able to
report a message describing the reason why the condition is unmet.
Previously, wait returned a generic string indicating whether the condition
was met, the timer had expired or an immediate error was encountered. In the
case of srv-removable, it did not report the real reason why a server could
not be removed.
This patch improves the wait command with srv-removable. It now displays the
last message returned by srv_check_for_deletion(), either on immediate error
or on timeout. This is implemented using dynamic string output with the
cli_dynmsg()/cli_dynerr() functions.
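For illustration, a minimal sketch of the dynamic message path, assuming a
hypothetical <msg> string allocated when the condition is unmet (the real
prototypes may differ):

    /* Minimal sketch, not the actual haproxy code: <msg> is assumed to be
     * an allocated string describing why the server cannot be removed.
     * cli_dynerr() takes ownership of it and frees it once dumped. */
    char *msg = NULL;

    if (!removal_possible) {                /* hypothetical check result */
        if (msg)
            return cli_dynerr(appctx, msg); /* precise, dynamic reason */
        return cli_err(appctx, "Not removable."); /* static fallback */
    }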
When a connection is reversed via the rhttp protocol on the edge endpoint,
it migrates from the frontend to the backend side. This operation is
performed by conn_reverse(). During this transition, the session owning the
connection is freed as it becomes unneeded.
Prior to this patch, session_unown_conn() was also called during the
frontend to backend migration. However, this is unnecessary as this
function is only used for backend connection reuse. As such, this patch
removes the superfluous call.
This does not cause any harm to the process as session_unown_conn() can
handle a connection not inserted yet. However, for clarity purposes it's
better to backport this patch up to 3.0.
A connection can be stored in several lists, thus there are several attach
points in struct connection. Depending on its proxy side, either frontend
or backend, a single connection will only access some of them during its
lifetime.
As an optimization, these attach points are organized in a union. However,
this split was not correctly aligned with the frontend/backend side
delimitation.
Furthermore, reverse HTTP has recently been introduced. With this
feature, a connection can migrate from frontend to backend side or vice
versa. As such, it becomes even more tedious to ensure that these
members are always accessed in a safe way.
This commit rearranges these fields. First, the union is now clearly split
between frontend-only and backend-only elements. Next, backend elements are
initialized with conn_backend_init(), which is already used during
connection reversal on an edge endpoint. A new function
conn_frontend_init() serves to initialize the other members; it is called
both on the connection's first instantiation and on reversal on a dialer
endpoint.
This model is much cleaner and should prevent any access to fields from
the wrong side.
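Schematically, the resulting layout looks like this (member names are
placeholders, not the exact haproxy fields):

    /* Sketch only: placeholders, not the real struct connection. */
    struct conn_sketch {
        /* ... members valid on both sides ... */
        union {
            struct {
                void *fe_attach;  /* frontend-only attach points, set up
                                   * by conn_frontend_init() on first
                                   * instantiation and on reversal on a
                                   * dialer endpoint */
            } fe;
            struct {
                void *be_attach;  /* backend-only attach points (reuse
                                   * lists...), set up by
                                   * conn_backend_init(), also on reversal
                                   * on an edge endpoint */
            } be;
        };
    };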
Currently, there is no known case of wrong access in the existing code
base. However, this cleanup is considered an improvement which must be
backported up to 3.0 to remove any possible undefined behavior.
Since the mworker rework in haproxy 3.1, the worker needs to tell the
master that it is ready. This is done using the sockpair protocol by
sending a _send_status message to the master.
It seems that the sockpair protocol is buggy on macOS because of a known
issue around fd transfer documented in sendmsg(2):
https://man.freebsd.org/cgi/man.cgi?sendmsg(2) BUGS section
Because sendmsg() does not necessarily block until the data has been
transferred, it is possible to transfer an open file descriptor across
an AF_UNIX domain socket (see recv(2)), then close() it before it has
actually been sent, the result being that the receiver gets a closed
file descriptor. It is left to the application to implement an
acknowledgment mechanism to prevent this from happening.
Indeed, the recv side of the sockpair is closed on the send side just
after send_fd_uxst(), which does not implement an acknowledgment
mechanism. So the master might never receive the _send_status message.
In order to implement an acknowledgment mechanism, a blocking read() is
done before closing the recv fd on the sending side, so we are sure that
the message was read on the other side.
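The pattern is illustrated by the standalone sketch below (not the haproxy
code itself): the sender blocks on read() for a one-byte ack before the
caller closes its side.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Sketch of the ack pattern: send one fd over an AF_UNIX socket, then
     * block on read() until the peer confirms reception, and only then let
     * the caller close() its copy of the fd. */
    static int send_fd_with_ack(int sock, int fd_to_send)
    {
        char data = 'F', ack;
        struct iovec iov = { .iov_base = &data, .iov_len = 1 };
        union {
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg;

        memset(&u, 0, sizeof(u));
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        if (sendmsg(sock, &msg, 0) < 0)
            return -1;
        /* acknowledgment: block until the receiver sends 1 byte back */
        if (read(sock, &ack, 1) != 1)
            return -1;
        return 0;
    }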
This was only reproduced on macOS, meaning the master CLI is also impacted
there. But no solution was found for that case: implementing an
acknowledgment mechanism would overly complicate the protocol in
non-blocking mode.
The problem was reported in ticket #3045, reproduced and analyzed by
@cognet.
Must be backported as far as 3.1.
During configuration parsing, *args can contain different addresses; it
changes from line to line. smp_resolve_args() is called after the
configuration parsing and uses arg_list->kw to create an error message if a
userlist referenced in some ACL is absent. This leads to wrong keyword
names being reported in such messages, or some garbage being printed.
This does not happen in the case of sample fetches. In that case,
arg_list->kw is assigned to a string literal from the sample_fetch struct
returned by find_sample_fetch(). Let's do the same in parse_acl_expr(),
when the find_acl_kw() lookup returns a corresponding acl_keyword
structure.
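Schematically, the fix looks like this (surrounding code simplified):

    /* Sketch: point arg_list->kw to the long-lived keyword string of the
     * matched acl_keyword entry instead of the transient *args buffer. */
    struct acl_keyword *aclkw = find_acl_kw(args[0]);

    if (aclkw)
        al->kw = aclkw->kw;  /* string literal, stable after parsing */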
This fixes GitHub issue #3088.
This should be backported to all stable versions, as far as 2.6.
This issue leads to crashes when the QUIC mux traces are enabled and could
be reproduced with -dMfail. When the qcc allocation fails (qcc_init()),
haproxy crashes in qmux_dump_qcc_info() because the qcc ->conn member is
not properly initialized:
Program terminated with signal SIGSEGV, Segmentation fault.
at src/qmux_trace.c:146
146 const struct quic_conn *qc = qcc->conn->handle.qc;
[Current thread is 1 (LWP 1448960)]
(gdb) p qcc
$1 = (const struct qcc *) 0x7f9c63719fa0
(gdb) p qcc->conn
$2 = (struct connection *) 0x155550508
(gdb)
This patch simply fixes the concerned TRACE() call to avoid dereferencing
the <qcc> object when it is NULL.
Must be backported as far as 3.0.
This patch follows this previous bug fix:
BUG/MINOR: quic: reorder fragmented RX CRYPTO frames by their offsets
where an ebtree node was added to the qf_crypto struct. The ->offset field
has the same meaning and type as ->offset_node.key, with ->offset_node
being an eb64tree node. This patch simply removes ->offset, which is no
longer useful.
This patch should be easily backported as far as 2.6, like the one
mentioned above, to ease any further backports to come.
The previous patch emits a diag warning when both 'strict-sni' and
'default-crt' are used on the same bind line.
This patch converts this diagnostic warning into a real warning, so that
the previous patch can be backported without breaking configurations.
This was discussed in #3082.
It is possible to use both 'strict-sni' and 'default-crt' on the same bind
line, which does not make much sense.
This patch implements a check which looks for default certificates in the
sni_w tree when strict-sni is used (they are referenced by their empty
SNI ""). default-crt sets the CKCH_INST_EXPL_DEFAULT flag in
ckch_inst->is_default, so it is possible to differentiate explicit
defaults from implicit ones.
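A simplified sketch of the resulting check (tree walk abridged, field
names taken from the description above):

    /* Sketch: for each certificate instance found under the empty SNI ""
     * on a strict-sni bind line, only warn for explicit defaults. */
    if (bind_conf->strict_sni &&
        (ckch_inst->is_default & CKCH_INST_EXPL_DEFAULT))
        ha_warning("'strict-sni' and 'default-crt' used on the same bind line.\n");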
Could be backported as far as 3.0.
This was discussed in ticket #3082.
This issue impacts the QUIC listeners. It is the same as the one fixed by
this commit:
BUG/MINOR: quic: repeat packet parsing to deal with fragmented CRYPTO
Like chrome, the ngtcp2 client decided to fragment its CRYPTO frames, but
in a much more aggressive way. Chrome's case could be fixed with a list
local to qc_parse_pkt_frms() thanks to the commit above. But this is not
sufficient for ngtcp2, which often splits its ClientHello message into
more than 10 fragments, some of them very small. This leads the packet
parser to interrupt the CRYPTO frames parsing due to the ncbuf gap size
limit.
To fix this, this patch proceeds approximately the same way, but with an
ebtree to reorder the CRYPTO frames by their offsets. These frames are
directly inserted into a local ebtree. Then this ebtree is reused to
provide the reordered CRYPTO data to the underlying ncbuf (non-contiguous
buffer). This way there are far fewer chances for the ncbufs used to store
CRYPTO data to reach an overly fragmented state.
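The idea can be sketched as follows (struct fields simplified, based on
the ->offset_node eb64 node mentioned in the previous fix):

    /* Sketch: index CRYPTO fragments by offset in a local eb64 tree while
     * parsing, then replay them in ascending offset order so the ncbuf
     * receives mostly contiguous data. */
    struct eb_root crypto_frms = EB_ROOT;
    struct eb64_node *node;

    /* parsing loop: index each fragment by its offset */
    qf->offset_node.key = offset;
    eb64_insert(&crypto_frms, &qf->offset_node);

    /* after parsing: feed the fragments back in offset order */
    for (node = eb64_first(&crypto_frms); node; node = eb64_next(node)) {
        struct qf_crypto *frm = eb64_entry(node, struct qf_crypto, offset_node);
        /* ... copy <frm> data into the underlying ncbuf ... */
    }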
Must be backported as far as 2.6.
This bug arrived with this fix:
BUG/MINOR: quic-be: missing Initial packet number space discarding
leading to crashes when dereferencing ->ipktns.
Such crashes could be reproduced with the -dMfail option. To reach them,
memory allocations must fail, so this is relatively rare, except on
systems with limited memory.
To fix this, do not call quic_pktns_discard() if ->ipktns is NULL.
No need to backport.
We used to allocate and prepare listener counters from
check_config_validity() all at once. But this isn't correct, since at that
time the listeners' guids are not inserted yet, thus
counters_fe_shared_prepare() cannot work correctly, and neither can
shm_stats_file_preload() which is meant to be called even earlier.
Thus in this commit (and to prepare for the upcoming shm shared counters
preloading patches), we handle the shared listener counters preparation in
proxy_postcheck(), which leaves the proper window between the allocation
and the preparation for listener guid insertion and shm counters
preloading.
No change of behavior expected when shm shared counters are not
actually used.
We actually need more granularity to split the srv postparsing init tasks:
some of them are required to run BEFORE the config is checked, and some of
them AFTER the config is checked.
Thus we push the logic from 368d0136 ("MEDIUM: server: add and use
srv_init() function") a little bit further and split the function into two
distinct ones: one executed under check_config_validity() and the other
one using the REGISTER_POST_SERVER_CHECK() hook.
The SRV_F_CHECKED flag was removed because it is no longer needed:
srv_preinit() is only called once, and so is srv_postinit().
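Schematically, the post-config half is registered like this (callback body
elided):

    /* Sketch: the post-config half runs once per server via the standard
     * post-server-check hook, after check_config_validity(). */
    static int srv_postinit(struct server *srv)
    {
        /* ... init tasks requiring a fully validated configuration ... */
        return ERR_NONE;
    }

    REGISTER_POST_SERVER_CHECK(srv_postinit);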
When the pre-check and post-check postparsing hooks are evaluated in
step_init_2(), potential fatal errors are ignored during the iteration
and are only taken into account at the end of the loop. This is not ideal
because some errors (e.g. memory errors) could cause multiple alert
messages in a row, which could make troubleshooting harder for the user.
Let's stop as soon as a fatal error is encountered for post parsing
hooks, as we do everywhere else.
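The change roughly amounts to this (structure names simplified):

    /* Sketch: abort the iteration over registered hooks as soon as a
     * fatal error is met, instead of accumulating alerts to the end. */
    list_for_each_entry(pcf, &post_check_list, list) {
        err_code |= pcf->fct();
        if (err_code & (ERR_ABORT | ERR_FATAL))
            break;
    }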
It is possible to interrupt a SPOE applet without reporting an error, for
instance when the client of the parent stream aborts. Thanks to this
patch, we take care to report an error on the SPOE applet to be sure to
interrupt the processing. It is especially important if the connection to
the agent is queued. Thanks to 886a248be ("BUG/MEDIUM: mux-spop: Reject
connection attempts from a non-spop frontend"), it is no longer an issue.
But there is no reason to continue the processing if the parent stream is
gone.
In addition, in the SPOE filter, if the processing was interrupted when
the filter was destroyed, no specific status code was set. It is not a big
deal because it cannot be logged at this stage. But it can be used to
notify the SPOE applet, so it is better to set it.
This patch should be backported as far as 3.1.
Stop declaring "cert.ecdsa.pem" in a crt-store, and add it dynamically
over the stats socket instead.
This way we fully verify a JWS signature with a certificate which never
existed at HAProxy startup.
It is possible to crash the process by initializing a connection to a SPOP
server from a non-spop frontend. This is of course unexpected and invalid,
and there are some checks to prevent it when the configuration is loaded.
However, it is not possible to handle all cases, especially the
"use_backend" rules relying on log-format strings.
It could be good to improve the backend selection by checking the mode
compatibility (for now, it is only performed for HTTP).
But in the end, this can also be handled by the SPOP multiplexer when it
is initialized. If the opposite SD is not attached to an SPOE agent, we
should fail the mux initialization and return an internal error.
This patch must be backported as far as 3.1.
Based on the applet flags, it is possible to set the .rcv_buf and .snd_buf
callback functions if necessary. If these functions are not defined for an
applet using the new API, it means the default functions must be used. We
also take care to choose the raw version or the htx version, depending on
the applet flags.
applet_output_room() and applet_input_data() are now HTX aware. These
functions automatically rely on htx versions if APPLET_FL_HTX flag is set
for the applet.
Multiplexers already explicitly announce their HTX support. Now that it is
possible to set flags on applets, it is handy to do the same. So, from now
on, HTX-aware applets must set the APPLET_FL_HTX flag.
The appctx_app_test() function can now be used to test the applet flags
using an appctx. This simplifies tests on applet flags a bit. For now,
this function is used to test the APPLET_FL_NEW_API flag.
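Its shape is presumably along these lines (assumed here for illustration):

    /* Assumed sketch: test an applet flag through the appctx carrying it. */
    static inline int appctx_app_test(const struct appctx *appctx, uint flag)
    {
        return (appctx->applet->flags & flag) != 0;
    }

    /* typical usage */
    if (appctx_app_test(appctx, APPLET_FL_NEW_API)) {
        /* applet relies on the new rcv_buf/snd_buf API */
    }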
Instead of setting a flag on the applet context by checking the applet's
defined callback functions to know whether it uses the new API or not, we
can now rely on the applet flags themselves. Checking the
APPLET_FL_NEW_API flag does the job. The APPCTX_FL_INOUT_BUFS flag is thus
removed.
stats http-request rules evaluation is handled separately in
http_process_req_common(). Because of that, if a rule requires yielding,
the evaluation is interrupted, as the (F)YIELD verdict return values are
not handled there.
Since 3.2, with the introduction of costly ruleset interruption in
0846638 ("MEDIUM: stream: interrupt costly rulesets after too many
evaluations"), the issue became more visible because stats http-request
rules would be interrupted when the evaluation counter reached
tune.max-rules-at-once, but the evaluation would never be resumed, and the
request would continue to be handled as if the evaluation were complete.
Note however that the issue already existed in the past for actions that
could return ACT_RET_YIELD, such as "pause" for instance.
This issue was reported by GH user @Wahnes in #3087, thanks to him for
providing useful repro and details.
To fix the issue, we merge the rule verdict handling in
http_process_req_common() so that "stats http-request" evaluation benefits
from all return values already supported for the current ruleset.
It should be backported to 3.2 with 0846638 ("MEDIUM: stream: interrupt
costly rulesets after too many evaluations"), and probably even further
(all stable versions) if the patch adaptation is not too complex (before
HTTP_RULE_RES_FYIELD was introduced), because it is still relevant.
HTTP_RULE_RES_YIELD was used where HTTP_RULE_RES_FYIELD should have been
used. Fortunately, aside from debug traces, both return values were
treated equally. Let's fix that to prevent confusion and to avoid causing
bugs in the future.
It may be backported to 3.2 with 0846638 ("MEDIUM: stream: interrupt
costly rulesets after too many evaluations") if it applies easily.
The patch below simplified INITIAL padding on emission. Now,
qc_prep_pkts() is responsible for activating padding in this case, and no
special case is needed anymore in qc_do_build_pkt().
commit 8bc339a6ad4702f2c39b2a78aaaff665d85c762b
BUG/MAJOR: quic: fix INITIAL padding with probing packet only
However, qc_do_build_pkt() may still activate padding on its own, to
ensure that a packet is big enough for header protection decryption to be
performed by the peer. HP decryption is performed by extracting a sample
from the ciphered packet, starting 4 bytes after the PN offset. The sample
length is 16 bytes, as defined by the TLS algorithms used by QUIC. Thus, a
QUIC sender must ensure that the length of the packet number plus payload
fields is at least 4 bytes. This is enough given that each packet is
completed by a 16-byte AEAD tag which can be part of the HP sample.
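In other words, with the sizes given above:

    /* Worked check: the HP sample covers bytes
     * [pn_offset + 4, pn_offset + 4 + 16). The ciphered packet ends at
     * pn_offset + pn_len + payload_len + 16 (AEAD tag included). The
     * sample fits iff:
     *    pn_offset + 4 + 16 <= pn_offset + pn_len + payload_len + 16
     * i.e. pn_len + payload_len >= 4, hence the padding rule (sketch): */
    if (pn_len + payload_len < 4)
        padding_len = 4 - (pn_len + payload_len);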
This patch simplifies qc_do_build_pkt() by centralizing padding for this
case in a single location. This is performed at the end of the function
after payload is completed. The code is thus simpler.
This is not a bug. However, it may be interesting to backport this patch
up to 2.6, as qc_do_build_pkt() is a tedious function, in particular
when dealing with padding generation, thus it may benefit greatly from
simplification.
The haproxy QUIC stack suffers from a limitation: it is not possible to
emit a packet which contains both probing data and an ACK frame. Thus,
when qc_do_build_pkt() is invoked with both values set to true, probing
takes priority and the ACK is ignored.
However, this has the undesired side-effect of possibly generating two
coalesced packets of the same type in the same datagram: the first one
with the probing data and the second with an ACK frame. This is caused by
the qc_prep_pkts() loop, which may call qc_do_build_pkt() multiple times
with the same QEL instance. This case normally occurs when a full
datagram has been built but there is still content to emit on the current
encryption level.
To fix this, alter the qc_prep_pkts() loop: if both probing and ACK are
requested, force the datagram to be written after packet encoding. This
results in a datagram containing the packet with probing data as its
final entry. A new datagram is started for the next packet, which can
then contain the ACK frame.
This also has some impact on INITIAL padding. Indeed, if a packet must be
the last one due to probing emission, qc_prep_pkts() will also activate
padding to ensure the final datagram is at least 1200 bytes long.
Note that coalescing two packets of the same type is not invalid according
to the QUIC RFC. However, it could cause issues with some shaky
implementations, so it is considered a bug.
This must be backported up to 2.6.
A QUIC datagram that contains an INITIAL packet must be padded to 1200
bytes to prevent any deadlock due to the anti-amplification protection.
This is implemented by encoding a PADDING frame in the last packet of the
datagram if necessary.
Previously, qc_prep_pkts() was responsible for activating padding when
calling qc_do_build_pkt(), as it knows which packet is the last to
encode. However, this had the side-effect of preventing PING emission for
probing with no data, as this case was handled in an else-if branch after
padding. This was fixed by the commit below:
217e467e89d15f3c22e11fe144458afbf718c8a8
BUG/MINOR: quic: fix malformed probing packet building
The above logic was altered to fix the PING case: padding was explicitly
set to false in qc_prep_pkts(). Padding was then added in a specific
block dedicated to the PING case in qc_do_build_pkt() itself for INITIAL
packets.
However, the fix is incorrect if the last QEL used to build a packet is
not the initial one and probing is used with a PING frame only. In this
case, the specific block in qc_do_build_pkt() does not add padding. This
causes a BUG_ON() crash in qc_txb_store(), which catches these packets as
irregularly formed.
To fix this while also properly handling PING emission, revert to the
original padding logic: qc_prep_pkts() is responsible for activating
INITIAL padding. To not interfere with PING emission, the
qc_do_build_pkt() body is adjusted so that the PING block is moved up in
the function and detached from the padding condition.
The main benefit of this patch is that the INITIAL padding decision in
qc_prep_pkts() is clearer now.
Note that padding can also be activated by qc_do_build_pkt() itself, as
packets should be big enough for header protection deciphering. However,
this case is different from INITIAL padding, so it is not covered by this
patch.
This should be backported up to 2.6.
If connection closing is activated, qc_prep_pkts() can only build a
datagram with a single packet. This is because we consider that only a
single CONNECTION_CLOSE frame is relevant at this stage.
This is handled both by qc_prep_pkts(), which ensures that only a
single-packet datagram is built, and by qc_do_build_pkt(), which prevents
the invocation of qc_build_frms() if <cc> is set.
However, there is an incoherency for probing. First, qc_prep_pkts()
deactivates it if connection closing is requested. But qc_do_build_pkt()
may still emit a probing frame, as it does not check its <probe> argument
but rather the <pto_probe> QEL field directly. This can result in a
packet mixing a PING and a CONNECTION_CLOSE frame, which is useless.
Fix this by adjusting qc_do_build_pkt(): the closing argument is now also
checked on PING probing emission. Note that there is still shaky code
here, as qc_do_build_pkt() should rely only on the <probe> argument to
ensure this.
This should be backported up to 2.6.
qc_prep_pkts() encodes input data into QUIC packets, in a loop, into one
or several datagrams. It supports GSO, which requires building a series
of multiple datagrams of the same length.
Each packet encoding is performed via a call to qc_do_build_pkt(). This
function has an argument to specify whether the output packet must be
completed with a PADDING frame. This option is activated when
qc_prep_pkts() encodes the last packet of a datagram with at least one
INITIAL packet in it.
Padding is reset each time a new datagram is started. However, this was
not done when GSO is used to build the next datagram. This patch fixes it
by properly resetting padding in this case too.
The impact of this bug is unknown. It may have several effects, one of
the most obvious being the insertion of unnecessary padding in packets.
It could also potentially trigger an infinite loop in qc_prep_pkts(),
although this has never been encountered so far.
This must be backported up to 3.1.
This fixes commit 2c7e05f80e3b ("MEDIUM: dns: don't call connect to dest
socket for AF_INET*"). If we fail to bind AF_INET sockets, or if the
address family of the nameserver protocol isn't what we expect, we need
to close the fd obtained by connect.
This fixes GitHub issue #3085.
This must be backported along with commit 2c7e05f80e3b.
It is possible to miss a synchronous write event in process_stream() if the
stream was woken up on a write event. In that case, it is possible to freeze
the stream until the next I/O event or timeout.
Concretely, the stream is woken up with CF_WRITE_EVENT on a channel. This
flag is removed from the channel when we leave process_stream(). But
before leaving process_stream(), when a synchronous send is tried on this
channel, the flag is removed and eventually set again on success. But
this event is masked by the previous one, and the channel is not resynced
as it should be.
To fix the bug, the CF_READ_EVENT and CF_WRITE_EVENT flags are removed
from a channel after the corresponding analysers' evaluation. This way,
we are able to detect a successful synchronous send and restart the
analysers' evaluation based on the new channel state. It is safe (or it
should be) to do so because these flags are only used by analysers and
tested to resync the stream inside process_stream().
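Schematically, after each analyser pass (channel names illustrative):

    /* Sketch of the move: clear the event flags right after the analysers
     * evaluation instead of when leaving process_stream(), so that a later
     * successful synchronous send sets CF_WRITE_EVENT again and is seen
     * when resyncing the stream. */
    req->flags &= ~(CF_READ_EVENT | CF_WRITE_EVENT);
    res->flags &= ~(CF_READ_EVENT | CF_WRITE_EVENT);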
It is a very old bug and I guess all versions are affected. It was
observed on 2.9 and higher, and with the master/worker only. But it could
affect any stream. It is tagged MAJOR because this area is really
sensitive to any change.
This patch should fix issue #3070. It should probably be backported to
all stable versions, but only after a period of observation and with
special care because this area is really sensitive to changes. It is
probably reasonable to backport it as far as 3.0 first and wait for older
versions.
Thanks to Valentine for her help on this issue!
This patch introduces a change of behavior in the configuration parsing.
Previously, the "ssl-f-use" lines were only applied to "ssl" bind lines
that did not have any "crt" configured.
Since there is no warning and bind lines with and without crt can be
mixed, this was really confusing.
This patch applies the "ssl-f-use" lines to every "ssl" bind line.
This was discussed in ticket #3082.
Must be backported to 3.2.
This bug impacts only the QUIC backends. It arrived with this commit:
MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())
which was supposed to be fixed by:
BUG/MEDIUM: quic: crash after quic_conn allocation failures
but this commit was not sufficient.
Such a crash could be reproduced with the -dMfail option. To reach it, the
<conn_id> object allocation must fail (from qc_new_conn()). So this is
relatively rare, except on systems with limited memory.
No need to backport.
A QUIC client must discard the Initial packet number space as soon as it
sends its first Handshake packet.
This patch implements this packet number space discarding, which was
missing.
An ABORT_NOW() was used during idle-ping debugging but was not removed
from the final code. This may cause a crash, in particular when mixing
idle-ping with shorter http-request/http-keep-alive timeout values.
Fix this situation by removing the ABORT_NOW() statement.
This should fix github issue #3079.
This must be backported up to 3.2.
Released version 3.3-dev7 with the following main changes :
- MINOR: quic: duplicate GSO unsupp status from listener to conn
- MINOR: quic: define QUIC_FL_CONN_IS_BACK flag
- MINOR: quic: prefer qc_is_back() usage over qc->target
- BUG/MINOR: cfgparse: immediately stop after hard error in srv_init()
- BUG/MINOR: cfgparse-listen: update err_code for fatal error on proxy directive
- BUG/MINOR: proxy: avoid NULL-deref in post_section_px_cleanup()
- MINOR: guid: add guid_get() helper
- MINOR: guid: add guid_count() function
- MINOR: clock: add clock_set_now_offset() helper
- MINOR: clock: add clock_get_now_offset() helper
- MINOR: init: add REGISTER_POST_DEINIT_MASTER() hook
- BUILD: restore USE_SHM_OPEN build option
- BUG/MINOR: stick-table: cap sticky counter idx with tune.nb_stk_ctr instead of MAX_SESS_STKCTR
- MINOR: sock: update broken accept4 detection for older hardwares.
- CI: vtest: add os name to OT cache key
- CI: vtest: add Ubuntu arm64 builds
- BUG/MEDIUM: ssl: Fix 0rtt to the server
- BUG/MEDIUM: ssl: fix build with AWS-LC
- MEDIUM: acme: use lowercase for challenge names in configuration
- BUG/MINOR: init: Initialize random seed earlier in the init process
- DOC: management: clarify usage of -V with -c
- MEDIUM: ssl/cli: relax crt insertion in crt-list of type directory
- MINOR: tools: implement ha_aligned_zalloc()
- CLEANUP: fd: make use of ha_aligned_alloc() for the fdtab
- MINOR: pools: distinguish the requested alignment from the type-specific one
- MINOR: pools: permit to optionally specify extra size and alignment
- MINOR: pools: always check that requested alignment matches the type's
- DOC: api: update the pools API with the alignment and typed declarations
- MEDIUM: tree-wide: replace most DECLARE_POOL with DECLARE_TYPED_POOL
- OPTIM: tasks: align task and tasklet pools to 64
- OPTIM: buffers: align the buffer pool to 64
- OPTIM: queue: align the pendconn pools to 64
- OPTIM: connection: align connection pools to 64
- OPTIM: server: start to use aligned allocs in server
- DOC: management: fix typo in commit f4f93c56
- DOC: config: recommend single quoting passwords
- MINOR: tools: also implement ha_aligned_alloc_typed()
- MEDIUM: server: introduce srv_alloc()/srv_free() to alloc/free a server
- MINOR: server: align server struct to 64 bytes
- MEDIUM: ring: always allocate properly aligned ring structures
- CI: Update to actions/checkout@v5
- MINOR: quic: implement qc_ssl_do_hanshake()
- BUG/MEDIUM: quic: listener connection stuck during handshakes (OpenSSL 3.5)
- BUG/MINOR: mux-h1: fix wrong lock label
- MEDIUM: dns: don't call connect to dest socket for AF_INET*
- BUG/MINOR: spoe: Properly detect and skip empty NOTIFY frames
- BUG/MEDIUM: cli: Report inbuf is no longer full when a line is consumed
- BUG/MEDIUM: quic: crash after quic_conn allocation failures
- BUG/MEDIUM: quic-be: do not initialize ->conn too early
- BUG/MEDIUM: mworker: more verbose error upon loading failure
- MINOR: xprt: Add recvmsg() and sendmsg() parameters to rcv_buf() and snd_buf().
- MINOR: ssl: Add a "flags" field to ssl_sock_ctx.
- MEDIUM: xprt: Add a "get_capability" method.
- MEDIUM: mux_h1/mux_pt: Use XPRT_CAN_SPLICE to decide if we should splice
- MINOR: cfgparse: Add a new "ktls" option to bind and server.
- MINOR: ssl: Define HAVE_VANILLA_OPENSSL if openssl is used.
- MINOR: build: Add a new option, USE_KTLS.
- MEDIUM: ssl: Add kTLS support for OpenSSL.
- MEDIUM: splice: Don't consider EINVAL to be a fatal error
- MEDIUM: ssl: Add splicing with SSL.
- MEDIUM: ssl: Add ktls support for AWS-LC.
- MEDIUM: ssl: Add support for ktls on TLS 1.3 with AWS-LC
- MEDIUM: ssl: Handle non-Application data record with AWS-LC
- MINOR: ssl: Add a way to globally disable ktls.
Add a new global option, "noktls", as well as a command line option,
"-dT", to totally disable ktls usage, even if it is activated on servers
or binds in the configuration.
That makes it easier to quickly figure out if a problem is related to
ktls or not.
Handle receiving and sending TLS records that are not application data
records.
When receiving, we ignore new session ticket records, we handle close
notify as a read0, and we consider any other record as a connection
error.
For sending, we only send close notify, so that the TLS connection is
properly closed.
AWS-LC added a new API in AWS-LC 1.54 that allows the user to retrieve
the keys for TLS 1.3 connections with SSL_get_read_traffic_secret(), so
use it to be able to use ktls with TLS 1.3 too.
Add ktls support for AWS-LC. As it does not know anything about ktls,
this means extracting the keys from the ssl lib and providing them to the
kernel, at which point we can use regular recvmsg()/sendmsg() calls.
This patch only provides support for TLS 1.2; AWS-LC provides a different
way to extract keys for TLS 1.3.
Note that this may work with BoringSSL too, but it has not been tested.
Implement the splicing methods in the SSL xprt (which will just call the
raw_sock methods if kTLS is enabled on the socket), and properly report
that a connection supports splicing if kTLS is configured on that
connection.
For OpenSSL, if the upper layer indicated that it wanted to start using
splicing by adding the CO_FL_WANT_SPLICING flag, make sure we don't read
any more data from the socket, and just drain what may be left in the
internal OpenSSL buffers before allowing splicing.
Don't consider EINVAL to be a fatal error when calling splice().
When splicing from a kTLS socket, splice() will set errno to EINVAL if
the next record to be read is not an application data record. This is not
a fatal error; it just means we have to use recvmsg() to read it, and we
can then potentially resume splicing.
It is unfortunate that EINVAL was used for that case, but we should never
get any other case of receiving EINVAL from splice(), so it should be
safe to treat it as non-fatal.
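The intended handling can be sketched as follows (not the haproxy code):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: returns the bytes moved, 0 when the caller should fall back
     * to recvmsg() or retry later, -1 on a real error. With kTLS, splice()
     * fails with EINVAL when the next TLS record is not application data;
     * that record must be read with recvmsg(), after which splicing can
     * potentially resume. */
    static ssize_t ktls_splice_in(int fd, int pipe_wr, size_t len)
    {
        ssize_t ret = splice(fd, NULL, pipe_wr, NULL, len,
                             SPLICE_F_MOVE | SPLICE_F_NONBLOCK);

        if (ret >= 0)
            return ret;
        if (errno == EINVAL)
            return 0;  /* control record pending: read it with recvmsg() */
        if (errno == EAGAIN)
            return 0;  /* no data yet: retry later */
        return -1;     /* genuine failure */
    }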
Modify the SSL code to enable kTLS with OpenSSL.
It mostly requires our internal BIO to be able to handle the various
kTLS-specific controls in ha_ssl_ctrl(), as well as being able to use
recvmsg() and sendmsg() from ha_ssl_read() and ha_ssl_write().
If we're using OpenSSL as our crypto library, add a define,
HAVE_VANILLA_OPENSSL, to make it easier to differentiate between the
various crypto libs.
Add a new "ktls" option to bind and server. Valid values are "on" and
"off".
It currently does nothing, but when kTLS will be implemented, it will
enable or disable kTLS for the corresponding sockets.
It is marked as experimental for now.
In both mux_h1 and mux_pt, use the new XPRT_CAN_SPLICE capability to
decide if we should attempt to use splicing or not.
If we receive XPRT_CONN_CAN_MAYBE_SPLICE, add a new flag on the
connection, CO_FL_WANT_SPLICING, to let the xprt know that we'd love to
be able to do splicing, so that it may get ready for that.
This should have no effect right now, and is required work for adding
kTLS support.
Add a new method to xprts, get_capability, that can be used to query if
an xprt supports something or not.
The first capability implemented is XPRT_CAN_SPLICE, to know if the xprt
will be able to use splicing for the provided connection.
The possible answers are XPRT_CONN_CAN_NOT_SPLICE, which indicates that
splicing will never be possible for that connection;
XPRT_CONN_COULD_SPLICE, which indicates that splicing is not usable right
now but may be in the future; and XPRT_CONN_CAN_SPLICE, which means we
can splice right away.
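Put together, the possible answers map to something like this (a sketch
matching the names in the text):

    /* Sketch: possible answers to the XPRT_CAN_SPLICE capability query. */
    enum {
        XPRT_CONN_CAN_NOT_SPLICE, /* splicing will never be possible */
        XPRT_CONN_COULD_SPLICE,   /* not usable right now, may be later */
        XPRT_CONN_CAN_SPLICE,     /* can splice right away */
    };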