The only purpose of tgroup_mask seems to be to calculate how many
tgroups share the same shard, but this is information we can compute
differently: we just have to increment the count when a new receiver is
added to the shard, and decrement it when one is detached from the
shard. Removing thread group masks will allow us to increase the
maximum number of thread groups past 64.
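As an illustration, the counting approach boils down to something like
the following sketch (the structure and function names here are
hypothetical, not the actual code):

    /* hypothetical per-shard bookkeeping */
    struct shard_info {
        unsigned int nb_receivers; /* receivers attached to this shard */
    };

    static inline void shard_add_receiver(struct shard_info *si)
    {
        si->nb_receivers++;  /* a new receiver is added to the shard */
    }

    static inline void shard_del_receiver(struct shard_info *si)
    {
        si->nb_receivers--;  /* a receiver is detached from the shard */
    }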
It's regularly needed to call getsockopt() on a connection, but each
time the calling code has to do all the work by itself. This commit adds
a "get_opt()" callback on the protocol struct, that directly calls
getsockopt() on the connection's FD. A generic implementation for
standard sockets is provided, though QUIC would likely require a
different approach, or maybe a mapping. Due to the overlap between
IP/TCP/socket option values, it is necessary for the caller to indicate
both the level and the option. An abstraction of the level could be
done, but the caller would nonetheless have to know the optname, which
is generally defined in the same include files. So for now we'll
consider that this callback is only for very specific use.
The levels and optnames are purposely passed as signed ints so that it
is possible to further extend the API by using negative levels for
internal namespaces.
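A rough sketch of the callback and its generic implementation for
standard sockets (the exact names and signature are assumptions; only
getsockopt() itself is standard):

    struct protocol {
        /* ... */
        int (*get_opt)(const struct connection *conn, int level,
                       int optname, void *optval, socklen_t *optlen);
    };

    /* generic implementation for standard sockets */
    static int sock_get_opt(const struct connection *conn, int level,
                            int optname, void *optval, socklen_t *optlen)
    {
        return getsockopt(conn->handle.fd, level, optname, optval, optlen);
    }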
This option enables TCP_SAVE_SYN on the listening socket, which will
cause the kernel to try to save a copy of the SYN packet headers (L2,
IP and TCP are supported). This makes it possible to check the source
MAC address of a client, or to find certain TCP options such as a source
address encapsulated using RFC7974. It could also be used as an
alternate approach to retrieving the source and destination addresses
and ports. For now only setting the option is implemented; sample fetch
functions and converters will be needed to extract the info.
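For reference, here is how the underlying Linux socket API is typically
used (a hedged sketch of the kernel interface, not of the haproxy code):

    int one = 1;
    char buf[256];
    socklen_t len = sizeof(buf);

    /* on the listening socket: ask the kernel to save incoming SYNs */
    setsockopt(lfd, IPPROTO_TCP, TCP_SAVE_SYN, &one, sizeof(one));

    /* on an accepted connection: retrieve the saved SYN headers */
    getsockopt(cfd, IPPROTO_TCP, TCP_SAVED_SYN, buf, &len);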
This makes a significant difference when loading large files and during
commit and clear operations, thanks to improved cache locality. In the
measurements below, master refers to the code before any of the changes
to the patterns code, not the code before this one commit.
Timing the replacement of 10M entries from the CLI with this command
which also reports timestamps at start, end of upload and end of clear:
$ (echo "prompt i"; echo "show activity"; echo "prepare acl #0";
awk '{print "add acl @1 #0",$0}' < bad-ip.map; echo "show activity";
echo "commit acl @1 #0"; echo "clear acl @0 #0";echo "show activity") |
socat -t 10 - /tmp/sock1 | grep ^uptim
master, on a 3.7 GHz EPYC, 3 samples:
uptime_now: 6.087030
uptime_now: 25.981777 => 19.9 sec insertion time
uptime_now: 29.286368 => 3.3 sec commit+clear
uptime_now: 5.748087
uptime_now: 25.740675 => 20.0s insertion time
uptime_now: 29.039023 => 3.3 s commit+clear
uptime_now: 7.065362
uptime_now: 26.769596 => 19.7s insertion time
uptime_now: 30.065044 => 3.3s commit+clear
And after this commit:
uptime_now: 6.119215
uptime_now: 25.023019 => 18.9 sec insertion time
uptime_now: 27.155503 => 2.1 sec commit+clear
uptime_now: 5.675931
uptime_now: 24.551035 => 18.9s insertion
uptime_now: 26.652352 => 2.1s commit+clear
uptime_now: 6.722256
uptime_now: 25.593952 => 18.9s insertion
uptime_now: 27.724153 => 2.1s commit+clear
Now timing the startup time with a 10M entries file (on another machine)
on master, 20 samples:
Standard Deviation, s: 0.061652677408033
Mean: 4.217
And after this commit:
Standard Deviation, s: 0.081821371548669
Mean: 3.78
Instead of a global list (and tree) of pattern reference elements, we
now have an intermediate pat_ref_gen structure and store the elements in
those. This simplifies the logic of some operations such as commit and
clear, and improves performance in some cases - numbers to be provided
in a subsequent commit after one important optimization is added.
A lot of the changes are due to adding an extra level of indirection,
changing many places where we iterate over all elements into an outer
loop iterating over the generations and an inner one iterating over the
elements of the current generation. It is therefore easier to read this
patch using 'git diff -w'.
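A simplified sketch of the new layout (the field names below are
assumptions based on the description above):

    /* one intermediate structure per generation */
    struct pat_ref_gen {
        unsigned int gen_id;  /* generation identifier */
        struct list head;     /* elements belonging to this generation */
        struct list list;     /* attachment to the pat_ref's generation list */
    };

    /* iteration becomes two nested loops: */
    list_for_each_entry(gen, &ref->gens, list)      /* outer: generations */
        list_for_each_entry(elt, &gen->head, list)  /* inner: elements */
            process(elt);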
Safe and non-functional changes that only add currently unused
structures, fields, functions and macros, in preparation for larger
changes that alter the way pattern reference elements are stored.
This includes code to create and lookup generation objects, and
macros to iterate over the generations of a pattern reference.
This guarantees that the compiler will not optimize away the memset()
call if it detects a dead store.
Use this to clear SSL passphrases.
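A classic way to provide this guarantee is to follow the memset() with
a compiler barrier that pretends to consume the buffer (a sketch of the
technique; the actual implementation may differ):

    static inline void memset_explicit_sketch(void *dst, int c, size_t n)
    {
        memset(dst, c, n);
        /* tell the compiler the memory is used so the store survives */
        __asm__ volatile("" : : "r"(dst) : "memory");
    }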
No backport needed.
Add a new global keyword, max-threads-per-group. It sets the maximum number of
threads a thread group can contain. Unless the number of thread groups
is fixed with "thread-groups", haproxy will just create more thread
groups as needed.
The default and maximum value is 64.
The RX_F_INHERITED flag was ambiguous, as it was used to mark both
listeners inherited from the parent process and listeners duplicated
from another local receiver. This could lead to incorrect behavior
concerning socket unbinding and suspension.
This commit refactors the handling of inherited listeners by splitting
the RX_F_INHERITED flag into two more specific flags:
- RX_F_INHERITED_FD: Indicates a listener inherited from the parent
process via its file descriptor. These listeners should not be unbound
by the master.
- RX_F_INHERITED_SOCK: Indicates a listener that shares a socket with
another one, either by being inherited from the parent or by being
duplicated from another local listener. These listeners should not be
suspended or resumed individually.
Previously, the sharding code was unconditionally using RX_F_INHERITED
when duplicating a file descriptor. In HAProxy versions prior to 3.1,
this led to a file descriptor leak for duplicated unix stats sockets in
the master process. This would eventually cause the master to crash with
a BUG_ON in fd_insert() once the file descriptor limit was reached.
This must be backported as far as 3.0. Branches earlier than 3.0 are
affected but would need a different patch as the logic is different.
- Convert additional cases to use the automatic alignment feature for
the THREAD_ALIGN(ED) macros. This includes some cases that are less
obviously correct where it seems we wanted to align only in the
USE_THREAD case but were not using the thread specific macros.
- Also move some alignment requirements to the structure definition
instead of having them on variable declarations.
- Use the automatic alignment feature instead of hardcoding 64 all over
the code.
- This also converts a few bare __attribute__((aligned(X))) to using the
ALIGNED macro.
- It is now possible to use the THREAD_ALIGN and THREAD_ALIGNED macros
without a parameter. In this case, we automatically align on the cache
line size (see the sketch after this list).
- The cache line size is set to 64 by default to match the current code,
but it can be overridden on the command line.
- This required moving the DEFVAL/DEFNULL/DEFZERO macros to compiler.h
instead of tools-t.h, to avoid namespace pollution if we included
tools-t.h from compiler.h.
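A minimal sketch of the idea behind the parameterless form (names and
exact mechanics below are illustrative; the real macros differ):

    /* default cache line size, overridable on the command line,
     * e.g. -DCACHELINE_SIZE=128 */
    #ifndef CACHELINE_SIZE
    #define CACHELINE_SIZE 64
    #endif

    #define ALIGNED(x) __attribute__((aligned(x)))

    #ifdef USE_THREAD
    /* without an argument, align on the cache line size */
    #define THREAD_ALIGNED_DFLT ALIGNED(CACHELINE_SIZE)
    #else
    #define THREAD_ALIGNED_DFLT /* no alignment needed without threads */
    #endif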
Add a new flag to connections, CO_FL_SSL_NO_CACHED_INFO, and set it for
checks.
It lets the ssl layer know that it should not use cached information,
such as the ALPN stored in the server, or cached sessions.
This will be used for checks, as checks may target different servers or
use a different SSL configuration, so we can't assume the stored
information is correct.
This should be backported to 3.3, and may be backported up to 2.8 if the
attempts by checks to resume sessions prove to be a problem.
Thanks to the previous patch, "BUG/MEDIUM: ssl: Don't reuse TLS session
if the connection's SNI differs", it is now useless to store the SNI of
cached TLS sessions. This SNI is no longer tested, since new connections
reusing a session must have the same SNI.
The main change here is for the ssl_sock_set_servername() function. It is no
longer possible to compare the SNI of the reused session with the one of the
new connection. So, the SNI is always set, with no other processing. Mainly,
the session is not destroyed when SNIs don't match. It means the commit
119a4084bf ("BUG/MEDIUM: ssl: for a handshake when server-side SNI changes")
is implicitly reverted.
It is good to note that it is unclear to me when and why the reused
session should be destroyed, because I'm unable to reproduce any issue
fixed by the commit above.
This patch could be backported as far as 3.0 with the commit above.
When a SNI is set on a new connection, its hash is now saved in the
connection itself. To do so, a dedicated field was added into the
connection structure, called sni_hash. For now, this value is only used
when the TLS session is cached.
For cached TLS sessions, in addition to the SNI itself, its hash is now also
saved. No changes are expected here because this hash is not used for now.
This commit relies on:
* MINOR: ssl: Add a function to hash SNIs
This patch only adds the function ssl_sock_sni_hash() that can be used to
get the hash value corresponding to an SNI. A global seed, sni_hash_seed, is
used.
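A hedged sketch of what such a helper may boil down to (the use of
XXH64 here is an assumption; only the function name and the global seed
come from this patch):

    uint64_t sni_hash_seed; /* global seed, e.g. randomized at boot */

    uint64_t ssl_sock_sni_hash(const char *sni)
    {
        return XXH64(sni, strlen(sni), sni_hash_seed);
    }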
This patch reverts the commit efe60745b ("MINOR: quic: remove connection arg
from qc_new_conn()"). The connection will be mandatory when the QUIC
connection is created on backend side to fix an issue when we try to reuse a
TLS session.
So, the connection is again an argument of qc_new_conn(), the 4th
argument. It is NULL for frontend QUIC connections but there is no special
check on it.
This patch impacts both the QUIC frontends and backends.
Note that "ssl-default-bind-ciphersuites" and "ssl-default-bind-curves"
are not ignored by QUIC for the frontends. This is also the case for the
backends with "ssl-default-server-ciphersuites" and "ssl-default-server-curves".
These settings are applied by ssl_sock_prepare_ctx() for the frontends and
by ssl_sock_prepare_srv_ssl_ctx() for the backends. But ssl_quic_initial_ctx()
first sets the default QUIC ciphersuites and curves (see <quic_ciphers> and
<quic_groups>) before these ssl_sock.c functions are called, leading some
TLS stacks to refuse them if they do not support them. This is the case for
some OpenSSL 3.5 stacks with FIPS support, which do not support X25519.
To fix this, set the default QUIC ciphersuites and curves only if not already
set by the settings mentioned above.
Rename <quic_ciphers> global variable to <default_quic_ciphersuites>
and <quic_groups> to <default_quic_curves> to reflect the OpenSSL API naming.
These options are taken into account by ssl_quic_initial_ctx(),
which inspects these four variables before calling SSL_CTX_set_ciphersuites()
with <default_quic_ciphersuites> as parameter and SSL_CTX_set_curves() with
<default_quic_curves> as parameter if needed, that is to say if no ciphersuites
and curves were set by "ssl-default-bind-ciphersuites" and "ssl-default-bind-curves"
as global options or "ciphersuites" and "curves" as "bind" line options.
Note that the bind_conf struct is not modified when no "ciphersuites" or
"curves" options are used on "bind" lines.
On backend side, rely on ssl_sock_init_srv() to set the server ciphersuites
and curves. This function is modified to use respectively <default_quic_ciphersuites>
and <default_quic_curves> if no ciphersuites and curves were set by
"ssl-default-server-ciphersuites", "ssl-default-server-curves" as global options
or "ciphersuites", "curves" as "server" line options.
Thanks to @rwagoner for having reported this issue in GH #3194 when
using an OpenSSL 3.5.4 stack with FIPS support.
Must be backported as far as 2.6.
This bug was revealed on the backend side by reg-tests/ssl/del_ssl_crt-list.vtc
when run with QUIC connections. As expected by the test, a TLS alert is
generated on the server side. The latter sends a CONNECTION_CLOSE frame with
a CRYPTO error (>= 0x100). In this case the client closes its QUIC
connection. But the stream connection was not informed. This leads the
connection to be closed only after the server timeout expiration, while it
should be closed asap. This is the reason why reg-tests/ssl/del_ssl_crt-list.vtc
could succeed or fail, but only after a 5 second delay.
To fix this, mimic the ssl_sock_io_cb() for TCP/SSL connections. Call
the same code this patch implements with ssl_sock_handle_hs_error()
to correctly handle the handshake errors. Note that some SSL counters
were not incremented for both the backends and frontends. After such
errors, ssl_sock_io_cb() starts the mux after the connection has been
flagged in error. This has the side effect of closing the stream
in conn_create_mux().
Must be backported to 3.3, only for backends. It is not clear at this
time whether this bug may impact the frontends.
Extend the QUIC server configuration so that the congestion algorithm
and maximum window size can be set on the server line. This can be
achieved using the quic-cc-algo keyword with a syntax similar to a bind
line.
This should be backported up to 3.3 as this feature is considered
necessary for full QUIC backend support. Note that this relies on the
series of previous commits which should be picked first.
Each QUIC congestion algorithm is defined as a structure with callbacks
in it. Every quic_conn has a member pointing to the configured
algorithm, inherited from the bind-conf keyword or set to the default
CUBIC value.
Convert all these definitions to const. This ensures that there never
will be an accidental modification of a globally shared structure. This
also requires marking the quic_cc_algo fields in bind_conf and quic_cc
as const.
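The change essentially boils down to this pattern (the callback names
below are illustrative only):

    /* before: struct quic_cc_algo quic_cc_algo_cubic = { ... };
     * after: the definition and every pointer to it are const */
    const struct quic_cc_algo quic_cc_algo_cubic = {
        .init   = cubic_init,    /* illustrative callbacks */
        .on_ack = cubic_on_ack,
    };

    /* in bind_conf and quic_cc: */
    const struct quic_cc_algo *algo;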
Each quic_conn instance is stored in a global list. Its purpose is to be
able to loop over all known connections during "show quic".
Split this into two separate lists for frontend and backend usage.
Another change is that closing backend connections are not moved into
the quic_conns_clo list. They remain instead in their original list. The
objective of this patch is to reduce the contention between the two
sides.
Note that this prevents backend connections from being listed in "show
quic" for now. This will be adjusted in a future patch.
QUIC CIDs are stored in a global tree. Prior to this patch, CIDs used on
both frontend and backend sides were mixed together.
This patch implements CID storage separation between the FE and BE
sides. The original quic_cid_trees tree is split into
quic_fe_cid_trees/quic_be_cid_trees.
This patch should reduce contention between frontend and backend usages.
Also, it should reduce the risk of random CID collision.
All of the other bandwidth-limiting code stores limits and intermediate
(byte) counters as unsigned integers. The exception here is
freq_ctr_overshoot_period which takes in unsigned values but returns a
signed value. While this has the benefit of letting the caller know how
far away from overshooting they are, this is not currently leveraged
anywhere in the codebase, and it has the downside of halving the positive
range of the result.
More concretely though, returning a signed integer when all intermediate
values are unsigned (and boundaries are not checked) could result in an
overflow, producing values that are at best unexpected. In the case of
flt_bwlim (the only usage of freq_ctr_overshoot_period in the codebase at
the time of writing), an overflow could cause the filter to wait for a
large number of milliseconds when in fact it shouldn't wait at all.
This is a niche possibility, because it requires that a bandwidth limit is
defined in the range [2^31, 2^32). In this case, the raw limit value would
not fit into a signed integer, and close to the end of the period, the
`(elapsed * freq)/period` calculation could produce a value which also
doesn't fit into a signed integer.
If at the same time `curr` (the number of events counted so far in the
current period) is small, then we could get a very large negative value
which overflows. This is undefined behaviour and could produce surprising
results. The most obvious outcome is flt_bwlim sometimes waiting for a
large amount of time in a case where it shouldn't wait at all, thereby
incorrectly slowing down the flow of data.
Converting just the return type from signed to unsigned (and checking for
the overflow) prevents this undefined behaviour. It also makes the range
of valid values consistent between the input and output of
freq_ctr_overshoot_period and with the input and output of other freq_ctr
functions, thereby reducing the potential for surprise in intermediate
calculations: now everything supports the full 0 - 2^32 range.
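A self-contained sketch of the hazard and the fix, written outside of
haproxy's own types:

    /* how much does <curr> overshoot the pro-rated budget? */
    static unsigned int overshoot(unsigned int curr, unsigned int freq,
                                  unsigned int elapsed, unsigned int period)
    {
        /* 64-bit intermediate: elapsed * freq may exceed 32 bits */
        unsigned long long budget = (unsigned long long)elapsed * freq / period;

        /* returning (int)(curr - budget) could overflow when budget
         * lies in [2^31, 2^32) and curr is small; clamping to an
         * unsigned result avoids the undefined behaviour */
        if (curr <= budget)
            return 0;
        return (unsigned int)(curr - budget);
    }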
When a multiplexer protocol is defined, it is now possible to specify
the ALPN it supports, in binary format. This info is optional. For now
only the h2 and h1 multiplexers define an ALPN because this will be
mandatory for a fix. But this could be used in the future for different
purposes.
This patch will be mandatory for the next fix.
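As an illustration, the binary (wire) format is a length-prefixed token
list, so an h2 definition could look like this (the field name is an
assumption):

    static const struct mux_proto_list mux_proto_h2 = {
        .token = IST("h2"),
        .alpn  = IST("\x02h2"), /* one length byte followed by "h2" */
        /* ... */
    };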
It's always a pain to guess the number of FDs that can be needed by
listeners, checks, threads, pollers etc. We have this estimate in
global.maxsock before calling set_global_maxconn(), but we lose it
the line after. Let's copy it into global.est_fd_usage and keep it.
This will be helpful to try to provide more accurate suggestions for
maxconn.
On the frontend side, QUIC transfers can be performed either via a
connection-owned FD or multiplexed on the listener one. When a quic_conn
is freed and converted to a quic_conn_closed instance, its FD, if open,
is closed and all exchanges are then multiplexed via the listener FD.
This is different for the backend, as connections only have the choice
of using their own FD. Thus, special care must be taken when freeing a
connection and converting it to a quic_conn_closed instance. In this
case, qc_release_fd() is delayed to the quic_conn_closed release.
Furthermore, when the FD is transferred, its iocb and owner fields are
updated to the new quic_conn_closed instance. Without it, a crash will
occur when accessing the freed quic_conn tasklet. A newly dedicated
handler quic_conn_closed_sock_fd_iocb is used to ensure access to
quic_conn_closed members only.
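The FD transfer essentially amounts to re-homing the descriptor in the
fd table (a sketch; the variable names are assumptions):

    /* make the FD report to the quic_conn_closed struct from now on */
    fdtab[qc->fd].owner = qc_closed;
    fdtab[qc->fd].iocb  = quic_conn_closed_sock_fd_iocb;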
Properly implement support for the max-reuse server keyword. This is done
by adding a total count of streams seen for the whole connection. This
value is used in the avail_streams callback.
Remove <ipv4> argument from qc_new_conn(). This parameter is unnecessary
as it can be derived from the family type of the addresses also passed
as argument.
The objective of this patch is to streamline qc_new_conn() usage so that
it is similar for frontend and backend sides.
Previously, several parameters were set only for frontend connections.
These arguments are replaced by a single quic_rx_packet argument, which
represents the INITIAL packet triggering the connection allocation on
the server side. For a QUIC client endpoint, it remains NULL. This usage
is considered more explicit.
As a minor change, <target> is moved as the first argument of the
function. This is considered useful as this argument determines whether
the connection is a frontend or backend entry.
Along with these changes, qc_new_conn() documentation has been reworded
so that it is now up-to-date with the newest usage.
quic_conn has two fields named <dcid> and <scid>. It may cause confusion
as it is not obvious how these fields are related to the connection
direction. Try to improve this by extending the documentation of these
two fields.
quic_newcid_from_hash64 is an external callback. If defined, it serves
as a CID generation method, as an alternative to the default random
implementation.
This mechanism was not correctly implemented on the backend side.
Indeed, the <hash64> quic_conn member is only set for frontend
connections. The simplest solution would be to properly define it also
for backend ones. However, quic_newcid_from_hash64 derivation is really
only useful for the frontend side for now. Thus, this patch disables
using it on the backend side in favor of the default random generator.
To implement this, quic_cid_generate() is split into two functions, one
for each CID generation method. It is the responsibility of the caller
to select the proper method. On the backend side, only the random
implementation is now used.
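The caller-side selection then looks roughly like this (the two
generator names are hypothetical; only quic_newcid_from_hash64 comes
from the code):

    if (!qc_is_back(qc) && quic_newcid_from_hash64)
        quic_cid_gen_from_hash64(cid, qc->hash64); /* frontend only */
    else
        quic_cid_gen_random(cid);                  /* default method */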
The shard keyword is already used by the peers and on the server lines. And
it is unrelated to the session keys distribution. So instead of talking
about shards for the session key hashing, we now use the term "bucket".
The purpose of this new BUG_ON goes beyond BUG_ON_HOT(). While BUG_ON_HOT()
is meant to be light but placed on very hot code paths, BUG_ON_STRESS()
might be heavy and is only used under stress-testing, to try to detect early
that something bad is starting to happen. This one is not even type-checked
when not defined, because we don't want to risk the compiler emitting the
slightest piece of code there in production mode, so as to give enough
freedom to the developers.
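A minimal sketch of how such a macro can skip type-checking entirely in
production builds (the real definition may differ):

    #if defined(DEBUG_STRESS) && (DEBUG_STRESS > 0)
    # define BUG_ON_STRESS(cond) BUG_ON(cond)
    #else
    /* <cond> is dropped by the preprocessor: nothing is parsed,
     * type-checked nor emitted by the compiler */
    # define BUG_ON_STRESS(cond) do { } while (0)
    #endif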
DEBUG_STRESS is currently used only to expose "stress-level". With this
patch, we go a bit further, by automatically forcing DEBUG_STRICT and
DEBUG_STRICT_ACTION to their highest values in order to enable all
BUG_ON levels, and make all of them result in a crash. In addition,
care is taken to always only have 0 or 1 in the macro, so that it can be
tested using "#if DEBUG_STRESS > 0" as well as "if (DEBUG_STRESS) { }"
everywhere.
The goal will be to ease insertion of extra tests for builds dedicated
to stress-testing that enable possibly expensive extra checks on certain
code paths that cannot reasonably be compiled in for production code
right now.
The target patch fixes a rare race condition which happens when a MUX IO
handler is working on a connection already moved into the purge list. In
this case, the handler would incorrectly move the connection back into
the idle list.
To fix this, conn_delete_from_tree() was extended to remove flags along
with the connection from the idle list. This was performed when the
connection is moved into the purge list. However, it introduces another
issue related to the idle server connection accounting. Thus it is
necessary to revert it prior to the incoming newer fix.
This patch must be backported to every version where the original commit
is.
AWS-LC features are not easily tested with just the openssl version
constant. AWS-LC uses its own API versioning stored in the
AWSLC_API_VERSION constant.
This patch adds the two awslc_api_atleast and awslc_api_before predicates
that help check the AWS-LC API version.
check-reuse-pool can only perform as expected if the reuse policy on the
backend is set to aggressive or higher. Update the documentation to
reflect this and implement a server diag warning.
The current ACME scheduler suffers from problems due to the way the
tasks are stored:
- MT_LIST is not scalable when having a lot of ACME tasks and having
to look for a specific one.
- the acme_task pointer was stored in the ckch_store in order to avoid
traversing the whole list. But a ckch_store can be updated, and the
pointer is then lost in the previous one.
- when a task fails, the pointer in the ckch_store was not removed,
because we only work with a copy of the original ckch_store; removing
it would require locking the ckchs_tree.
This patch fixes the issues by removing the MT_LIST-based architecture
and replacing it with a simple ebmbtree + rwlock design.
The pointer to the task is not stored anymore in the ckch_store, but
instead the task is stored in the acme_tasks tree. Finding a task is
done by doing a lookup on this tree with a RDLOCK. Instead of checking
whether store->acme_task is not NULL, a lookup is also done.
This allows removing the stuck "acme_task" pointer from the store,
which was preventing an acme task from being restarted when the
previous one failed for this specific certificate.
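Looking up a task then becomes a simple tree lookup under the read lock
(a sketch; the lock label and the key are assumptions):

    struct ebmb_node *node;

    HA_RWLOCK_RDLOCK(ACME_LOCK, &acme_tasks_lock);
    node = ebst_lookup(&acme_tasks, store_name); /* key: certificate name */
    HA_RWLOCK_RDUNLOCK(ACME_LOCK, &acme_tasks_lock);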
Must be backported in 3.2.
During 0-RTT sessions, some server transport parameters are reused
after having been saved from previous sessions. These parameters must
not be reduced when the server resends them. The client must check that
this is the case when some early data are accepted by the server. This
is what is implemented by this patch.
Implement qc_early_tranport_params_validate() which checks that the new
server parameters are not reduced.
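The validation boils down to comparisons of the remembered parameters,
along these lines (the parameter names follow RFC 9000; the struct
layout is an assumption):

    /* return 0 if the server reduced any remembered parameter */
    if (cur->initial_max_data                    < saved->initial_max_data ||
        cur->initial_max_stream_data_bidi_local  < saved->initial_max_stream_data_bidi_local ||
        cur->initial_max_stream_data_bidi_remote < saved->initial_max_stream_data_bidi_remote ||
        cur->initial_max_stream_data_uni         < saved->initial_max_stream_data_uni ||
        cur->initial_max_streams_bidi            < saved->initial_max_streams_bidi ||
        cur->initial_max_streams_uni             < saved->initial_max_streams_uni ||
        cur->active_connection_id_limit          < saved->active_connection_id_limit)
        return 0;
    return 1;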
Also implement qc_ssl_eary_data_accepted() which was not implemented
for TLS stacks without 0-RTT support (for instance wolfssl). That said,
this function was no longer used. This is why the compilation against
wolfssl could not fail.
This patch allows the use of the 0-RTT feature on QUIC server lines with
the "allow-0rtt" option. In fact 0-RTT is really enabled only if
ssl_sock_srv_try_reuse_sess() successfully manages to reuse the SSL
session and the chosen application protocol from previous connections.
Note that, at this time, 0-RTT works only with quictls and aws-lc as TLS
stacks. (0-RTT does not work at all with libressl, even for QUIC
frontends.)
This patch is required to make 0-RTT work. It modifies the prototype of
quic_build_post_handshake_frames() to send post handshake frames from a
list of frames in place of the application encryption level (used
as <qc->ael> local variable).
This patch does not modify the current QUIC stack behavior at all (even
for QUIC frontends). It must be considered as a preparation for the code
to come about 0-RTT support for QUIC backends.
This function is called for both TCP and QUIC connections to reuse SSL sessions
saved by ssl_sess_new_srv_cb() callback called upon new SSL session creation.
In addition to this, a QUIC SSL session must reuse the ALPN and some specific QUIC
transport parameters. This is what is added by this patch for QUIC 0-RTT sessions.
Note that from now on, ssl_sock_srv_try_reuse_sess() may fail for QUIC
connections if it does not manage to reuse the ALPN. The caller must be
informed of such an issue and must not enable 0-RTT for the current
session in this case. This is impossible without the ALPN, which is
required to start a mux.
ssl_sock_srv_try_reuse_sess() is modified to always succeed for TCP
connections.
For both TCP and QUIC connections, it is the ssl_sess_new_srv_cb()
callback which is called when a new SSL session is created. Its role is
to save the session to be reused for the next sessions.
This patch modifies this callback to save the QUIC parameters to be
reused for the next 0-RTT sessions (or during SSL session resumption).
The already existing path_params->nego_alpn member is used to store the
ALPN, as is done for TCP, alongside the new path_params->tps
quic_early_transport_params struct used to save the QUIC transport
parameters to be reused for 0-RTT sessions.