Many tests use the A128KW algorithm, which is not supported by AWS-LC.
Instead of removing those tests, we simply set a hardcoded value by
default in this case.
This converter checks the validity and decrypts the content of a JWE
token that uses an asymmetric "alg" algorithm (RSA). In such a case, we
must provide a path to an already loaded certificate and private key
that has the "jwt" option set to "on".
This converter checks the validity and decrypts the content of a JWE
token that uses a symmetric "alg" algorithm. In such a case, we only
require a secret as parameter in order to decrypt the token.
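A hypothetical usage sketch (the converter name "jwe_decrypt" and its
argument are illustrative placeholders, not taken from the patch):
  # hypothetical converter; the shared secret is the only parameter
  http-request set-var(txn.payload) req.hdr(x-jwe-token),jwe_decrypt(mysecret)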
This test mimics what was already done for the aes_gcm converters. Some
data is encrypted and immediately decrypted, and we ensure that the
output is unchanged.
These converters allow encrypting or decrypting data with AES in Cipher
Block Chaining (CBC) mode. They work the same way as the existing
aes_gcm_enc/_dec ones, except for the AEAD tag notion, which is not
supported in CBC mode.
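A hedged sketch, assuming the new converters mirror the
aes_gcm_enc/_dec signature (key size, IV variable, key variable) minus
the AEAD tag argument:
  # round-trip as in the accompanying test; variable names illustrative
  http-request set-var(txn.ct) var(txn.pt),aes_cbc_enc(128,txn.iv,txn.key)
  http-request set-var(txn.pt2) var(txn.ct),aes_cbc_dec(128,txn.iv,txn.key)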
The parameter parsing and processing and the actual crypto part of the
aes_gcm converter are interleaved. This patch puts the crypto parts in a
dedicated function for better reuse in the upcoming JWE processing.
A recent patch introduced a new state for proxies: unpublished
backends. Such backends won't be eligible for traffic, thus
use_backend/default_backend rules which target them won't match and
content switching rules processing will continue.
This patch defines a new frontend keyword, 'force-be-switch'. This
keyword allows switching rules to ignore the unpublished or disabled
state. Thus, use_backend/default_backend will match even if the target
backend is unpublished or disabled. This is useful to test a backend
instance before exposing it outside.
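A minimal configuration sketch (exact keyword placement in the
frontend section is assumed from the description above):
  frontend fe
      bind :8080
      force-be-switch
      # matches even while be_canary is unpublished or disabled
      use_backend be_canary if { hdr(x-canary) -m found }
      default_backend be_prod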
This new keyword is converted into a persist rule of the new type
PERSIST_TYPE_BE_SWITCH, stored in the proxy's persist_rules list
member. This is the only persist rule applicable to the frontend side.
Prior to this commit, the persist_rules list of pure frontend proxies
was always empty.
This new feature requires an adjustment in process_switching_rules().
Now, when a use_backend/default_backend rule matches a non-eligible
backend, the frontend's persist_rules are inspected to detect whether
a force-be-switch rule is present, in which case the backend may be
selected.
The utility function warnif_cond_conflicts() is used when parsing an
ACL. Previously, the function directly called ha_warning() to report
an error. Change the function so that it now takes the error message
as an argument. The caller can then output it as desired.
This change is necessary to use the function when parsing a keyword
registered as cfg_kw_list. The next patch will reuse it.
A previous patch defined a new proxy status: unpublished backends.
This patch extends it by changing the proxy status reported in the
stats. If unpublished is set, an extra "(UNPUB)" is added to the field.
The HTML stats page is also slightly updated. If a backend is up but
unpublished, its status is reported in orange.
Define a new pair of CLI commands, "publish backend <be>" and
"unpublish backend <be>". The objective is to be able to change the
status of a backend to unpublished. Such a backend is considered
ineligible for traffic: this allows skipping use_backend rules which
target it.
Note that contrary to disabled/stopped proxies, an unpublished backend
still has server checks running on it.
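Example invocations over the stats socket (socket path illustrative):
  echo "unpublish backend be_canary" | socat stdio /var/run/haproxy.sock
  echo "publish backend be_canary" | socat stdio /var/run/haproxy.sock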
Internally, a new proxy flag PR_FL_BE_UNPUBLISHED is defined. The CLI
command handlers for "publish backend" and "unpublish backend" are
executed under thread isolation. This guarantees that the flag can
safely be set or removed in the CLI handlers, and read during
content-switching processing.
A proxy can be marked as disabled using the keyword of the same name.
The doc mentions that it won't process any traffic. However, this is
not really the case for backends, as they may still be selected via
switching rules during stream processing.
In fact, access to disabled backends is currently conducted up to
assign_server(). However, no eligible server is found at this stage,
resulting in a connection closure or an HTTP 503, which is expected. So
in the end, servers in disabled backends won't receive any traffic, but
only because post-parsing steps are not performed on such backends.
Thus, this can be considered functional, but only via side effects.
This patch clarifies the handling of disabled backends, so that they
are never selected via switching rules. Now, process_switching_rules()
will ignore disabled backends and continue rules evaluation.
As this is a behavior change, this patch is labelled as medium. The
documentation for use_backend is updated accordingly.
Create a new test to ensure that switching rules selection works
correctly. Currently, this checks that dynamic backend switching works
as expected. If a matching rule resolves to a nonexistent backend, the
default backend is used instead.
This regtest should be useful as switching-rules will be extended in a
future set of patches to add new abilities on backends, linked to
dynamic backend support.
This commit rewrites the process_switching_rules() function. The
objective is to simplify backend selection so that a single unified
stream_set_backend() call is kept, for both the regular and default
backend cases.
This patch will be useful to add new capabilities on backends, in the
context of dynamic backend support implementation.
The force-persist proxy keyword is converted into a persist_rule,
stored in the proxy's persist_rules list member. Each new rule is
dynamically allocated during parsing.
This commit fixes the memory leak on deinit due to a missing free on
persist_rules list entries. This is done by modifying deinit_proxy().
Each rule in the list is freed, along with its associated ACL
condition.
This can be backported to every stable version.
If we want to be able to have more than 64 thread groups, we can no
longer store thread group masks in a single long.
One remaining place where this is done is in struct thread_set.
However, it is not really used as a mask anywhere; all we want is a
thread group counter, so convert that mask to a counter.
Fix the arithmetic when pre-filling non_empty_tgids while we still
have more than 32/64 thread groups left: to get the right index, we of
course have to divide the number of thread groups by the number of
bits in a long.
This bug was introduced by commit
7e1fed4b7a8b862bf7722117f002ee91a836beb5, but hopefully was never hit
because it requires having at least as many thread groups as there are
bits in a long, which is impossible on 64-bit machines, as MAX_TGROUPS
is still 32.
Now that it is unused, eliminate all_tgroups_mask, as we can't use
64-bit masks to represent thread groups if we want to be able to have
more than 64 thread groups.
In order to be able to have more than 64 thread groups, turn
non_empty_tgids into a long array, so that we have enough bits to
represent every thread group, and manipulate it with the ha_bit_*
functions.
As reported by GH user @Lzq-001 on issue #3245, the config below would
cause haproxy to SEGFAULT after having reported an error:
frontend 0000000
http-request set-map %[hdr(0000)0_
The root cause is simple: in parse_http_set_map(), we define the
release function (which is responsible for clearing the lf_expr
expressions used by the action) prior to initializing the expressions,
while the release function assumes the expressions are always
initialized.
For all similar actions, we already perform the init prior to setting
the related release function, but this was not the case for
parse_http_set_map(). We fix the bug by initializing the expressions
earlier.
Thanks to @Lzq-001 for having reported the issue and provided a simple
reproducer.
It should be backported to all stable versions. Note that for versions
prior to 3.0, lf_expr_init() should be replaced by LIST_INIT(), see
6810c41 ("MEDIUM: tree-wide: add logformat expressions wrapper").
Contrary to what was previously believed, there are corner cases where
the counters may not be allocated, and we may want to make them
optional at a later date, so we have to check whether those counters
are there.
However, just checking that shared.tg is non-NULL is enough; we can
then assume that shared.tg[tgid - 1] has properly been allocated too.
Also modify the various COUNTER_SHARED_* macros to make sure they check
for that too.
ACK frames are either of type 0x02 or 0x03, the latter indicating that
the frame contains extra ECN-related fields. In the haproxy QUIC
stack, this is treated as a distinct frame type, QUIC_FT_ACK_ECN, with
its own set of builder/parser functions.
This patch fixes the ACK ECN parsing function, which suffered from two
issues. First, the 'first ACK range' and 'ACK ranges' fields were
inverted. Then, the three remaining ECN fields were simply ignored by
the parsing function.
This issue can cause desynchronization in the frame parsing code, with
various possible outcomes. Most of the time, the connection will be
aborted by haproxy due to an invalid frame content read.
Note that this issue was not detected earlier because most clients do
not enable ECN support if the peer is not able to emit ACK ECN frames
first, which haproxy currently never does. Nevertheless, this is not
the case for every client implementation, thus correct ACK ECN parsing
is mandatory for proper QUIC stack support.
Fix this by adjusting the quic_parse_ack_ecn_frame() function. The
remaining ECN fields are now parsed to ensure correct packet parsing.
Currently, they are not used by the congestion controller.
This must be backported up to 2.6.
The code to parse the "thread" keyword on bind lines was changed to
check the thread numbers against the value provided with
max-threads-per-group, if any. However, at the time those "thread"
keywords are parsed, max-threads-per-group may not yet have been set,
which breaks the feature. So revert to checking against
MAX_THREADS_PER_GROUP instead; it should have no major impact.
Before updating counters, a few tests are made to check if the
counters exist, but those counters should always exist at this point,
so just remove the tests.
This commit should have no impact, but can easily be reverted with no
functional impact if various crashes appear.
Instead of statically allocating the per-thread group counters,
based on the max number of thread groups available, allocate
them dynamically, based on the number of thread groups actually
used. That way we can increase the maximum number of thread
groups without using an unreasonable amount of memory.
The IPv6 header contains a payload length that excludes the 40 bytes
of the IPv6 packet header, unlike IPv4's total length, which includes
it. As a result, the parser was wrong and would only see the IP part
and not the TCP one, unless enough options were present to cover it.
This issue came in 3.4-dev2 with recent commit e88e03a6e4 ("MINOR:
net_helper: add ip.fp() to build a simplified fingerprint of a SYN"),
so no backport is needed.
As reported by GH user @kanashimia in GH #3241, providing anything
other than a table to the Patref:add_bulk() method could cause a
segfault, because we were calling lua_next() on the lua object without
ensuring it actually is a table.
Let's add the missing lua_istable() check on the stack object before
calling lua_next() function on it.
It should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()")
In GH #3241, GH user @kanashimia reported that the Patref:add_bulk()
method would raise a Lua exception when called with more than 101
elements at once.
As identified by @kanashimia, there was an error in the way the
add_bulk() method was forced to yield after exactly 101 elements.
The yield is there to ensure Lua doesn't eat too many resources at
once and doesn't impact haproxy's core responsiveness, but the check
for the yield was misplaced, resulting in improper stack content upon
resume.
Thanks to user @kanashimia who even provided a reproducer which helped
a lot to troubleshoot the issue.
This fix should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()") where the bug was introduced.
Now that the tgid stored in the stats file has been increased to 16bits
by commit 022cb3ab7fdce74de2cf24bea865ecf7015e5754, don't forget to
increase the variable size when reading it from the file, too.
This should have no impact given the maximum thread group limit is still
32.
Increase the size of the stored tgid in the stats file from 8 bits to
16 bits, so that we can have more than 256 thread groups. 65536 should
be enough for some time.
This bumps the stats file minor version, as the structure changes.
Instead of always allocating MAX_TGROUPS members, allocate them
dynamically, using the number of thread groups we'll use, so that
increasing MAX_TGROUPS will not have a huge impact on the structure
size.
This flag is used as of commit dcce9369129f6ca9b8eed6b451c0e20c226af2e3
("MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag"). This patch
should be backported to 3.3. Apparently dcce9369129 has been backported
to 3.2 and 3.1 already, with that change already applied, so no need for a
backport there.
The fc_xxx info that is retrieved over tcp_info could previously not
be accessed before a stream was created, due to a test that verified
the existence of a stream. The rationale here was that the function
works both for frontends and backends. Let's always retrieve this info
from the session in the frontend case, so that it now becomes possible
to set variables at connection/session time. The doc did not mention
this limitation, so this could almost be considered a bug.
Some timers, like the handshake timer, are stored in the session and are
only copied to the logs struct when a stream is created. But this means
we can't measure it without a stream, nor store it once for all in a
variable at session creation time. Let's extend the sample fetch function
to retrieve it from the session when no stream is present. The doc did not
mention this limitation so this could almost be considered as a bug.
It was reported in GH #2956 and more recently in GH #3235 that some
hashes are way too slow. The former triggers watchdog warnings during
checks, the latter sees the config parsing take 20 seconds. This is
always due to the use of hash algorithms that are not suitable for
low-latency environments like the web. They might be fine for local
auth though. The difficulty, as explained by Philipp Hossner, is that
developers are not aware of this cost and adopt these algorithms
without suspecting any side effect.
The proposal here is to measure the crypt() call time and emit a warning
if it takes more than 10ms (which is already extreme). This was tested
by Philipp and confirmed to catch his case.
This is marked medium as it might start to report warnings on config
suffering from this problem without ever detecting it till now.
Patch dba4fd24 ("MEDIUM: ssl/ech: config and load keys") introduced
ECH configuration for bind lines, but the QUIC configuration parser
still suffers from not using the same code as the TCP/TLS one, so the
init for QUIC was missed.
Must be backported in 3.3.
The ECH job still fails to compile since the OpenSSL 4.0 deprecated
functions have not been removed yet. Let's remove ERR=1 temporarily.
We do know that there's a regression in OpenSSL 4.0 with these
reg-tests though:
Error: # top TEST reg-tests/ssl/set_ssl_crlfile.vtc FAILED (0.219) exit=2
Error: # top TEST reg-tests/ssl/set_ssl_cafile.vtc FAILED (0.236) exit=2
Error: # top TEST reg-tests/quic/set_ssl_crlfile.vtc FAILED (0.196) exit=2
OpenSSL changed the output from "Server Temp Key" in prior versions to
"Peer Temp Key" in recent ones.
a39dc27c25
It looks like this affects OpenSSL >= 3.5.0.
This broke the reg-test for e.g. Debian 13 builds, using OpenSSL 3.5.1
Fixes bug #3238
Could be backported to every branch.
Signed-off-by: Christian Ruppert <idl0r@qasl.de>
Discussed in issue #3187, the CLI help is confusing for the "show table"
command as it seems that the argument is mandatory.
This patch adds the arguments between square brackets to remove the
confusion.
In GH issue #3226, Sergey Fedorov (@barracuda156) reported that since
commit 10c14a1ed0 ("MINOR: proto_sockpair: send_fd_uxst: init iobuf,
cmsghdr, cmsgbuf to zeros"), macOS 10.6.8 with gcc 14.3.0 doesn't build
anymore:
src/proto_sockpair.c: In function 'send_fd_uxst':
src/proto_sockpair.c:246:49: error: variable-sized object may not be initialized except with an empty initializer
246 | char cmsgbuf[CMSG_SPACE(sizeof(int))] = {0};
| ^
src/proto_sockpair.c:247:45: error: variable-sized object may not be initialized except with an empty initializer
247 | char buf[CMSG_SPACE(sizeof(int))] = {0};
| ^
Upon investigation, it appears that the CMSG_SPACE() macro on this OS
looks too complex for gcc to evaluate as a constant, so it treats
these buffers as variable-length arrays and cannot initialize them.
Let's move to a simple memset() instead, which Sergey confirmed fixes
the problem.
This needs to be backported as far as 3.1. Thanks to Sergey for the
report, the bisect and testing the fix.
This patch covers issue https://github.com/haproxy/haproxy/issues/3221.
The parser for the "userlist" section did not use the standard keyword
registration mechanism. Instead, it relied on a series of strcmp()
comparisons to identify keywords such as "group" and "user".
This had two main drawbacks:
1. The keywords were not discoverable by the "-dKall" dump option,
making it difficult for users to see all available keywords for the
section.
2. The implementation was inconsistent with the parsers for other
sections, which have been progressively refactored to use the
standard cfg_kw_list infrastructure.
This patch refactors the userlist parser to align it with the project's
standard conventions.
The parsing logic for the "group" and "user" keywords has been extracted
from the if/else block in cfg_parse_users() into two new dedicated
functions:
- cfg_parse_users_group()
- cfg_parse_users_user()
These two keywords are now registered via a dedicated cfg_kw_list,
making them visible to the rest of the HAProxy ecosystem, including
the -dKall dump.
When an unknown keyword was used in the "userlist" section, the error
message mentioned the "users" section instead of "userlist".
Could be backported to every branch.
New gcc and clang versions from Fedora rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a
char *.
src/tools.c: In function ‘fgets_from_mem’:
src/tools.c:7200:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
7200 | new_pos = memchr(*position, '\n', size);
| ^
Strangely, -Wdiscarded-qualifiers does not seem to catch all the
memchr calls.
Should fix issue #3228.
This could be backported to previous versions.
New gcc and clang versions from Fedora rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a
char *.
src/ssl_sock.c: In function ‘SSL_CTX_keylog’:
src/ssl_sock.c:4475:17: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
4475 | lastarg = strrchr(line, ' ');
Strangely, -Wdiscarded-qualifiers does not seem to catch all the
strrchr calls.
Should fix issue #3228.
This could be backported to previous versions.
Released version 3.4-dev2 with the following main changes :
- BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
- BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
- BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
- BUG/MEDIUM: quic: Don't try to use hystart if not implemented
- CLEANUP: backend: Remove useless test on server's xprt
- CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
- CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
- REGTESTS: quic: fix a TLS stack usage
- REGTESTS: list all skipped tests including 'feature cmd' ones
- CI: github: remove openssl no-deprecated job
- CI: github: add a job to test the master branch of OpenSSL
- CI: github: openssl-master.yml misses actions/checkout
- BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
- CI: github: use git prefix for openssl-master.yml
- BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
- REGTESTS: fix error when no test are skipped
- MINOR: cpu-topo: Turn the cpu policy configuration into a struct
- MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
- MEDIUM: cpu-topo: Add a "cpu-affinity" option
- MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
- MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
- MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
- BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
- DOC: config: fix number of values for "cpu-affinity"
- MINOR: tools: add a secure implementation of memset
- MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
- MINOR: mux-h2: perform a graceful close at 75% glitches threshold
- MEDIUM: mux-h1: implement basic glitches support
- MINOR: mux-h1: perform a graceful close at 75% glitches threshold
- MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
- MINOR: cfgparse: remove useless checks on no server in backend
- OPTIM/MINOR: proxy: do not init proxy management task if unused
- MINOR: patterns: preliminary changes for reorganization
- MEDIUM: patterns: reorganize pattern reference elements
- CLEANUP: patterns: remove dead code
- OPTIM: patterns: cache the current generation
- MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
- MINOR: protocol: support a generic way to call getsockopt() on a connection
- MINOR: tcp: implement the get_opt() function
- MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
- BUG/MAJOR: set the correct generation ID in pat_ref_append().
- BUG/MINOR: backend: fix the conn_retries check for TFO
- BUG/MINOR: backend: inspect request not response buffer to check for TFO
- MINOR: net_helper: add sample converters to decode ethernet frames
- MINOR: net_helper: add sample converters to decode IP packet headers
- MINOR: net_helper: add sample converters to decode TCP headers
- MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
- MINOR: net_helper: prepare the ip.fp() converter to support more options
- MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
- MINOR: net_helper: add an option to ip.fp() to append the source address
- DOC: config: fix the length attribute name for stick tables of type binary / string
- MINOR: mworker/cli: only keep positive PIDs in proc_list
- CLEANUP: mworker: remove duplicate list.h include
- BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
- MINOR: mworker/cli: extract worker "show proc" row printer
- MINOR: cpu-topo: Factorize code
- MINOR: cpu-topo: Rename variables to better fit their usage
- BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
- BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
- MINOR: threads: Avoid using a thread group mask when stopping.
- MINOR: hlua: Add support for lua 5.5
- MEDIUM: cpu-topo: Add an optional directive for per-group affinity
- BUG/MEDIUM: mworker: can't use signals after a failed reload
- BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
- DOC: config: fix a few typos and refine cpu-affinity
- MINOR: receiver: Remove tgroup_mask from struct shard_info
- BUG/MINOR: quic: fix deprecated warning for window size keyword
QUIC configuration was cleaned up in the previous release. Several
global keyword names were changed to unify the configuration. For each
of them the older keyword is marked as deprecated, with a warning to
mention the newer alternative.
This patch fixes the warning for 'tune.quic.frontend.default-max-size'
as the alternative proposed was not correct. The proper value now is
'tune.quic.fe.cc.max-win-size'.
This must be backported up to 3.3.
The only purpose of tgroup_mask seems to be to compute how many thread
groups share the same shard, but this is information we can track
differently: we just have to increment the count when a new receiver
is added to the shard, and decrement it when one is detached from the
shard. Removing thread group masks will allow us to increase the
maximum number of thread groups past 64.
There were two typos in the recently updated parts about per-group.
Also, change the commas to ':' after the option values, as they could
sometimes be confusing. Last, place quotes around keyword names so
that they're explicitly referred to as language keywords. No backport
is needed.
The <kip> of the producer was not forwarded to the <kop> of the
consumer when zero-copy data forwarding was tried. Because of this
issue, the chunking of emitted H1 messages could be invalid.
To fix the bug, sc_ep_fwd_kip() must be called at this stage.
This fix is related to the previous one (529a8dbfb "BUG/MEDIUM: mux-h1: Take
care to update <kop> value during zero-copy forwarding"). Both are required
to fully fix the issue #3230.
This patch must be backported to 3.3.
In issue #3229 it was reported that the master couldn't reload after a
failed reload following a wrong configuration.
It is still possible to do a reload using the "reload" command of the
master CLI, but all signals are blocked.
The problem was introduced in 709cde6d0 ("BUG/MEDIUM: mworker: signals
inconsistencies during startup and reload") which fixes the blocking of
signals during the reload.
However the patch missed a case: run_master_in_recovery_mode() is not
called when the worker fails to parse the configuration; it is only
called when the master itself fails.
To handle this case, the mworker_unblock_signals() function must be
called from mworker_on_new_child_failure(). But since this is called
in a haproxy signal handler, it would mess with the signals.
Instead, the patch adds a task which is started by the signal handler,
and restores the signals outside of it.
This must be backported as far as 3.1.
When using per-group affinity, add an optional new directive. It
accepts the values "auto", where, when multiple thread groups are
created, the available CPUs are split equally across the groups (this
is the new default), and "loose", where all groups are bound to all
available CPUs (this was the old default).
Lua 5.5 adds an extra argument to lua_newstate(). Since there are
already a few other ifdefs in hlua.c checking for the Lua version,
and there's a single call site, let's do the same here. This should
be safe for backporting if needed.
Signed-off-by: Mike Lothian <mike@fireburn.co.uk>
Remove the "stopped_tgroup_mask" variable, that indicated which thread
groups were stopping, and instead just use "stopped_tgroups", a counter
indicating how many thread groups are stopping. We want to remove all
thread group masks, so that we can increase the maximum number of thread
groups past 64.
Since the extra field was removed from the HTX structure, a regression
was introduced in the forwarding of chunked messages. The <kop> value
was not decreased as it should be when data were sent via zero-copy
forwarding. Because of this bug, it was possible to announce a chunk
size larger than the chunk data sent.
To fix the bug, a helper function was added to properly update the
<kop> value when a chunk size is emitted. This function is now called
whenever a new chunk is announced, including during zero-copy
forwarding.
As a workaround, "tune.disable-zero-copy-forwarding" or just
"tune.h1.zero-copy-fwd-send off" can be set in the global section.
This patch should fix the issue #3230. It must be backported to 3.3.
When a shutdown was reported to a peer applet, the event was not
properly handled if it failed to receive data. The function
responsible for getting data was exiting too early if the applet
buffer was empty, without testing the sedesc status. Because of this
issue, it was possible to have frozen peer applets. For instance, it
happened on client timeout. With too many frozen applets, it was
possible to reach the maxconn.
This patch should fix the issue #3234. It must be backported to 3.3.
Rename "visited_tsid" and "visited_ccx" to "touse_tsid" and
"touse_ccx". They are not there to remember which tsid/ccx we
alreaday visited, contrarily to visited_ccx_set and
visited_cl_set, they are there to know which tsid/ccx we should
use, so make that clear.
Introduce cli_append_worker_row() to centralize formatting of a single
worker row. Also, replace duplicated row-printing code in both current
and old workers loops with the helper. Motivation: Reduces LOC and
improves readability by removing duplication.
After commit 594408cd612b5 ("BUG/MINOR: mworker/cli: 'show proc' is limited
by buffer size"), related to ticket #3204, the "show proc" logic
has been fixed to be able to print more than 202 processes. However, this
fix can lead to the omission of entries in case they have the same
timestamp.
To fix this, we use the unique reload counter instead of the timestamp.
On a partial flush, set ctx->next_reload = child->reloads.
On resume, skip entries with child->reloads >= ctx->next_reload.
Finally, we clear ctx->next_reload at the end of a complete dump so
that subsequent "show proc" invocations start from the top.
Could be backported in all stable branches.
Change mworker_env_to_proc_list() to check if (child->pid > 0) before
LIST_APPEND, avoiding invalid PIDs (0/-1) in the process list.
This has no functional impact beyond stricter validation and it aligns
with existing kill safeguards.
The stick-table doc was reworked and moved in 3.2 with commit da67a89f3
("DOC: config: move stick-tables and peers to their own section"), however
the optional length attribute for binary/string types was mistakenly
spelled "length" while it's "len".
This must be backported to 3.2.
It can make sense to support extra components in the fingerprint to ease
configuration, so let's change the 0/1 value to a bit field. We also turn
the current 1 (TCP options list) to 2 so that we'll reuse 1 for the TTL.
Here we collect all the stuff that depends on the sender's settings,
such as TOS, IP version, TTL range, presence of the DF bit or IP
options, presence of DATA in the SYN, CWR+ECE flags, TCP header
length, wscale, initial window, MSS, as well as the list of TCP
extension kinds. It's obviously fairly limited but can help avoid
blacklisting certain valid clients sharing the same IP address as a
misbehaving one.
It supports both a short and a long mode depending on the argument.
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
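A hedged usage sketch (the mode argument is omitted here; prepend
eth.data when the saved SYN includes an L2 header):
  frontend fe
      bind :8080 tcp-ss 1
      tcp-request connection set-var(sess.fp) fc_saved_syn,ip.fp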
This adds the following converters, used to decode fields
in an incoming tcp header:
tcp.dst, tcp.flags, tcp.seq, tcp.src, tcp.win,
tcp.options.mss, tcp.options.tsopt, tcp.options.tsval,
tcp.options.wscale, tcp.options_list.
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
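A hedged chaining sketch, assuming the saved SYN starts at the IP
header (insert eth.data first when an L2 header is included):
  tcp-request connection set-var(sess.mss) fc_saved_syn,ip.data,tcp.options.mss
  tcp-request connection set-var(sess.wsc) fc_saved_syn,ip.data,tcp.options.wscale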
This adds a few converters that help decode parts of IP packets:
- ip.data : returns the next header (typically TCP)
- ip.df : returns the dont-fragment flags
- ip.dst : returns the destination IPv4/v6 address
- ip.hdr : returns only the IP header
- ip.proto: returns the upper level protocol (udp/tcp)
- ip.src : returns the source IPv4/v6 address
- ip.tos : returns the TOS / TC field
- ip.ttl : returns the TTL/HL value
- ip.ver : returns the IP version (4 or 6)
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
This adds a few converters that help decode parts of ethernet frame
headers:
- eth.data : returns the next header (typically IP)
- eth.dst : returns the destination MAC address
- eth.hdr : returns only the ethernet header
- eth.proto: returns the ethernet proto
- eth.src : returns the source MAC address
- eth.vlan : returns the VLAN ID when present
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
In 2.6, do_connect_server() was introduced by commit 0a4dcb65f
("MINOR: stream-int/backend: Move si_connect() in the backend scope")
and changed the approach to work with a stream instead of a
stream-interface. However, si_oc(si) was wrongly turned into &s->res
instead of &s->req, which breaks TFO by always inspecting the response
channel to figure out whether there are data pending.
This fix can be backported to all versions till 2.6.
In 2.6, the retries counter on a stream was changed from retries left
to retries done via commit 731c8e6cf ("MINOR: stream: Simplify retries
counter calculation"). However, one comparison fell through the cracks
in order to detect whether or not we can use TFO (only first attempt),
resulting in TFO never working anymore.
This may be backported to all versions till 2.6.
We want to reset visited_ccx, as introduced by commit
8aef5bec1ef57eac449298823843d6cc08545745, each time we run the loop;
otherwise the chances of its content being correct are very low, and
threads will likely end up being bound to the wrong CPUs.
This was reported in github issue #3224.
This function retrieves the copy of a SYN packet that the system has
kept for us when the bind option "tcp-ss" was set to 1 or above. It's
recommended to copy it to a local variable because it will be freed
after being read. It allows inspecting all parts of an incoming SYN
packet, provided that it was preserved (e.g. not possible with SYN
cookies). The doc provides examples of how to use it.
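Following that recommendation, a minimal sketch storing the copy once
at connection time:
  # the raw SYN is freed after being read, so keep it in a variable
  tcp-request connection set-var(sess.syn) fc_saved_syn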
It's regularly needed to call getsockopt() on a connection, but each
time the calling code has to do all the work by itself. This commit adds
a "get_opt()" callback on the protocol struct, that directly calls
getsockopt() on the connection's FD. A generic implementation for
standard sockets is provided, though QUIC would likely require a
different approach, or maybe a mapping. Due to the overlap between
IP/TCP/socket option values, it is necessary for the caller to indicate
both the level and the option. An abstraction of the level could be
done, but the caller would nonetheless have to know the optname, which
is generally defined in the same include files. So for now we'll
consider that this callback is only for very specific use.
The levels and optnames are purposely passed as signed ints so that it
is possible to further extend the API by using negative levels for
internal namespaces.
This option enables TCP_SAVE_SYN on the listening socket, which will
cause the kernel to try to save a copy of the SYN packet header (L2,
IP and TCP are supported). This makes it possible to check the source
MAC address of a client, or to find certain TCP options such as a
source address encapsulated using RFC7974. It could also be used as an
alternate approach to retrieving the source and destination addresses
and ports. For now, only setting the option is implemented; sample
fetch functions and converters will be needed to extract the info.
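A minimal sketch enabling it on a bind line (per the fc_saved_syn
description earlier, a value of 1 or above enables saving):
  frontend fe
      bind :8080 tcp-ss 1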
This makes a significant difference when loading large files and during
commit and clear operations, thanks to improved cache locality. In the
measurements below, master refers to the code before any of the changes
to the patterns code, not the code before this one commit.
Timing the replacement of 10M entries from the CLI with this command
which also reports timestamps at start, end of upload and end of clear:
$ (echo "prompt i"; echo "show activity"; echo "prepare acl #0";
awk '{print "add acl @1 #0",$0}' < bad-ip.map; echo "show activity";
echo "commit acl @1 #0"; echo "clear acl @0 #0";echo "show activity") |
socat -t 10 - /tmp/sock1 | grep ^uptim
master, on a 3.7 GHz EPYC, 3 samples:
uptime_now: 6.087030
uptime_now: 25.981777 => 21.9 sec insertion time
uptime_now: 29.286368 => 3.3 sec commit+clear
uptime_now: 5.748087
uptime_now: 25.740675 => 20.0s insertion time
uptime_now: 29.039023 => 3.3 s commit+clear
uptime_now: 7.065362
uptime_now: 26.769596 => 19.7s insertion time
uptime_now: 30.065044 => 3.3s commit+clear
And after this commit:
uptime_now: 6.119215
uptime_now: 25.023019 => 18.9 sec insertion time
uptime_now: 27.155503 => 2.1 sec commit+clear
uptime_now: 5.675931
uptime_now: 24.551035 => 18.9s insertion
uptime_now: 26.652352 => 2.1s commit+clear
uptime_now: 6.722256
uptime_now: 25.593952 => 18.9s insertion
uptime_now: 27.724153 => 2.1s commit+clear
Now timing the startup time with a 10M entries file (on another machine)
on master, 20 samples:
Standard Deviation, s: 0.061652677408033
Mean: 4.217
And after this commit:
Standard Deviation, s: 0.081821371548669
Mean: 3.78
Situations where we are iterating over elements and find one with a
different generation ID cannot arise anymore since the elements are kept
per-generation.
Instead of a global list (and tree) of pattern reference elements, we
now have an intermediate pat_ref_gen structure and store the elements in
those. This simplifies the logic of some operations such as commit and
clear, and improves performance in some cases - numbers to be provided
in a subsequent commit after one important optimization is added.
A lot of the changes are due to adding an extra level of indirection,
changing many cases where we iterate over all elements to an outer loop
iterating over the generation and an inner one iterating over the
elements of the current generation. It is therefore easier to read this
patch using 'git diff -w'.
Safe and non-functional changes that only add currently unused
structures, fields, functions and macros, in preparation for larger
changes that alter the way pattern reference elements are stored.
This includes code to create and lookup generation objects, and
macros to iterate over the generations of a pattern reference.
Each proxy has its own task for internal purposes. Currently, it is
only used either by frontends or if a stick-table is present.
This commit makes the task allocation optional, restricting it to the
required cases. Thus, it is no longer allocated for backend-only
proxies without a stick-table.
A legacy check could be activated at compile time to reject backends
without servers. In practice this is not used anymore and does not
make much sense with the introduction of dynamic servers.
Each frontend/backend/listen proxy is assigned a unique ID. It can
either be set explicitly via the 'id' keyword, or automatically
assigned during post-parsing depending on the available values.
It was expected that the first automatically assigned value would
start at '1'. However, due to a legacy bug this is not the case, as
this value is always skipped. Thus, automatically assigned proxy IDs
always start at '2' or more.
To avoid breaking the existing state, this situation is now
acknowledged by the current patch. The code is rewritten with an
explicit warning to ensure that this won't be fixed without awareness
of the current status. A new regtest also ensures this.
We now count glitches for each parsing error, including those that
have been accepted via accept-unsafe-violations-*. Front and back are
considered, and the connection gets killed on error once the threshold
is reached or passed and the CPU usage is beyond the configured limit
(0 by default). This was tested with:
curl -ivH "host : blah" 0:4445{,,,,,,,,,}
which sends 10 requests to a configuration having a threshold of 5.
The global keywords are named similarly to H2 and quic:
tune.h1.be.glitches-threshold xxxx
tune.h1.fe.glitches-threshold xxxx
The glitches count of each connection is also reported when non-null
in the connection dumps (e.g. "show fd").
This avoids hitting the hard wall for connections with non-compliant
peers that would be accumulating errors over long connections. We now
permit recycling the connection early enough to reset the connection
counter.
This was tested artificially by adding this to h2c_frt_handle_headers():
h2c_report_glitch(h2c, 1, "new stream");
or this to h2_detach():
h2c_report_glitch(h2c, 1, "detaching");
and injecting using h2load -c 1 -n 1000 0:4445 on a config featuring
tune.h2.fe.glitches-threshold 1000:
finished in 8.74ms, 85802.54 req/s, 686.62MB/s
requests: 1000 total, 751 started, 751 done, 750 succeeded, 250 failed, 250 errored, 0 timeout
status codes: 750 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 6.00MB (6293303) total, 132.57KB (135750) headers (space savings 29.84%), 5.86MB (6144000) data
min max mean sd +/- sd
time for request: 9us 178us 10us 6us 99.47%
time for connect: 139us 139us 139us 0us 100.00%
time to 1st byte: 339us 339us 339us 0us 100.00%
req/s : 87477.70 87477.70 87477.70 0.00 100.00%
The failures are due to h2load not supporting reconnection.
One rare error case, where a protocol error was produced on the
stream because response headers could not be decoded, wasn't being
accounted as a glitch, so let's fix it.
This guarantees that the compiler will not optimize away the memset()
call if it detects a dead store.
Use this to clear SSL passphrases.
No backport needed.
src/cpu_topo.c:1325:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
1325 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^ ~
src/cpu_topo.c:1325:15: note: add parentheses after the '!' to evaluate the bitwise operator first
1325 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^
| ( )
src/cpu_topo.c:1325:15: note: add parentheses around left hand side expression to silence this warning
1325 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^
| ( )
src/cpu_topo.c:1533:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
1533 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^ ~
src/cpu_topo.c:1533:15: note: add parentheses after the '!' to evaluate the bitwise operator first
1533 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^
| ( )
src/cpu_topo.c:1533:15: note: add parentheses around left hand side expression to silence this warning
1533 | } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
| ^
| ( )
No backport needed.
Add a new cpu-affinity keyword, "per-thread".
If used, each thread will be bound to only one hardware thread of the
thread group.
If used in conjunction with the "threads-per-core 1" cpu-policy, each
thread will be bound to a different core.
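A hedged global-section sketch combining the two keywords as
described:
  global
      cpu-policy performance threads-per-core 1
      cpu-affinity per-thread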
Add a new global keyword, max-threads-per-group. It sets the maximum number of
threads a thread group can contain. Unless the number of thread groups
is fixed with "thread-groups", haproxy will just create more thread
groups as needed.
The default and maximum value is 64.
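For example, to cap each group at 32 threads and let haproxy create as
many groups as needed:
  global
      max-threads-per-group 32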
Add a new global option, "cpu-affinity", which controls how threads
are bound.
It currently accepts three values: "per-core", which will bind one
thread to each hardware thread of a given core; "per-group", which
will use all the available hardware threads of the thread group; and
"auto", the default, which will use "per-group" unless
"threads-per-core 1" has been specified in the cpu-policy, in which
case it will use "per-core".
Add a new, optional keyword to "cpu-policy": "threads-per-core".
It takes one argument, "1" or "auto". If "1" is used, then only one
thread per core will be created, no matter how many hardware threads
each core has. If "auto" is used, then one thread will be created per
hardware thread, as is the case by default.
For example: cpu-policy performance threads-per-core 1
Turn the cpu policy configuration into a struct. Right now it just
contains an int that represents the policy used, but it will get more
information soon.
Since commit 1ed2c9d ("REGTESTS: list all skipped tests including
'feature cmd' ones"), the script emits an error when trying to display
the list of skipped tests when there are none.
No backport needed.
In H2 the conditions to create a new stream differ for a client and a
server when a GOAWAY was exchanged. While on the server, any stream
whose ID is lower than or equal to the one advertised in GOAWAY is
valid, for a client it's forbidden to create any stream after receipt
of a GOAWAY, even if its ID is lower than or equal to the last one,
despite the server not being able to tell the difference from the
number of streams in flight.
Unfortunately, the logic in the code did not always reflect this
specificity of the client (the backend code in our case), and most
often considered that it was still permitted to create a new stream
until the max_id was greater than or equal to the advertised last_id.
This is for example what h2c_is_dead() and h2c_streams_left() do. In
other places, such as h2_avail_streams(), the rule is properly taken
into account. Very often the advertised last_id is the same, and this
is also what haproxy does (which explains why it's impossible to
reproduce the issue by chaining two haproxy layers), but a server may
wish to advertise any ID including 2^31-1 as mentioned in the spec,
and in this case the functions would behave differently.
This discrepancy results in a corner case where a GOAWAY received on
an idle connection will cause the next stream creation to be initially
accepted but then rejected via h2_avail_streams(), and the connection
left in a bad state, still attached to the session due to http-reuse
safe, but not reinserted into idle list, since the backend code
currently is not able to properly recover from this situation. Worse,
the idle flags are no longer on it but TASK_F_USR1 still is, and this
makes the recently added BUG_ON() rightfully trigger since this case
is not supposed to happen.
Admittedly more of the backend recovery code needs to be reworked,
however the mux must consistently decide whether or not a connection
may be reused or needs to be released.
This commit fixes the affected logic by introducing a new function
"h2c_reached_last_stream()" which says if a connection has reached its
last stream, regardless of the side, and using this one everywhere
max_id was compared to last_id. This is sufficient to address the
corner case that be_reuse_connection() currently cannot recover from.
This is in relation to GH issue #3215 and it should be sufficient to
fix the issue there. Thanks to Chris Staite for reporting the issue
and kudos to Amaury for spotting the events sequence that can lead
to this situation.
This patch must be backported to 3.3 first, then to older versions
later. It's worth noting that it's much more difficult to observe
the issue before 3.3 because the BUG_ON() is not there, and the
possibly non-released connection might end up being killed for other
reasons (timeouts etc). But one possible visible effect might be the
impossibility to delete a server (which Chris observed in 3.3).
Back in the mists of time, commit e91a526c8f decided that if we were
trying to stay on the same server as the previous request, and if
there was a connection available in the session, we'd remove its
CO_FL_SESS_IDLE flag.
The reason for doing that has long been lost; it probably fixed a bug
at some point, but it was most probably not the right place to do it.
And starting with 3.3, this triggers a BUG_ON() because that flag is
expected later on.
So just revert the commit; if the ancient bug shows up again, it will
be fixed another way.
This should be backported to 3.3. There is little reason to backport it
to previous versions, unless other patches depend on it.
vtest.yml only builds the releases of OpenSSL for now; there's no way
to check if we still have issues with the API before a pre-release
version is published.
This job builds the master branch of OpenSSL.
It is run every day at 3 AM.
Remove the openssl no-deprecated job which was used for the 1.1.0 API.
It's not useful anymore since it uses the OpenSSL version of the
distributions.
Checking deprecations in the API is still useful when using the newest
version of the library. A job for the OpenSSL master branch would be
more useful than that.
The script for running regression tests is modified to improve the
visibility of skipped tests.
Previously, the reasons for skipping tests were only visible during the
test discovery phase when grepping the vtc (REQUIRE, EXCLUDE, etc).
But reg-tests skipped by vtest with the 'feature cmd' keywords were not
listed.
This change introduces the following:
- vtest does not remove the logs itself anymore, because it is not
able to keep the log available when a test is skipped. So the -L
parameter is now always passed to vtest
- All skipped tests during the discovery phase are now logged to a
'skipped.log' file within the test directory
- The script now parses vtest logs to find tests that were skipped
due to missing features (via the 'feature cmd' in .vtc files)
and adds them to the skipped list.
This issue was reported in GH #3214, where the
quic/tls13_ssl_crt-list_filters.vtc QUIC reg test was run without
haproxy QUIC support because the OPENSSL_AWSLC feature was enabled.
This is due to the fact that when ssl/tls13_ssl_crt-list_filters.vtc
was ported to QUIC, the feature(OPENSSL) check was carelessly replaced
by feature(QUIC), leading the script to be run even without QUIC
support if the OR'ed OPENSSL_AWSLC feature is enabled.
A good method to port these feature() commands to QUIC would have been
to add a feature(QUIC) command separate from the one used for the
supported TLS stacks identified by the original underlying ssl reg
tests (in reg-tests/ssl). This is what is done by this patch.
Thank you to @idl0r for having reported this issue.
In ssl_sock_srv_try_reuse_sess(), the connection is always defined,
for both TCP and QUIC connections, so there is no reason to test it.
Because it is not so obvious for the QUIC part, a BUG_ON() could be
added here. For now, just remove the useless tests.
This patch should fix a Coverity report from #3213.
The xprt used to perform a healthcheck is always defined and cannot be NULL.
So there is no reason to test it. It could lead to wrong assumptions later
in the code.
This patch should fix a Coverity report from #3213.
The server's xprt is always defined and cannot be NULL. So there is no
reason to test it. It could lead to wrong assumptions later in the code.
This patch should fix a Coverity report from #3213.
Not every CC algo implements hystart, so only call the method if it is
actually there. Failing to do so will cause crashes if hystart is on
and the algo doesn't implement it.
This should fix github issue #3218
This should be backported up to 3.0.
The SC_FL_ABRT_DONE flag should never be set when SC_FL_EOS was
already set. Both flags were introduced to replace the old CF_SHUTR,
to have one flag for shuts driven by the stream and another for the
read0 received by the mux. So both flags must not be seen at the same
time on an SC. It is especially important because some processing is
performed when these flags are set, and wrong decisions may be made.
This patch must be backported as far as 2.8.
The first attempt to fix this issue (c672b2a29 "BUG/MINOR: http-ana:
Properly detect client abort when forwarding the response") was not
fully correct and could be responsible for false reports of client
aborts during the response forwarding. I guess it is possible to
truncate the response.
Instead, we must also take care that the client closed on its side, by
checking SC_FL_EOS flag on the front SC. Indeed, if the client has aborted,
this flag should be set.
This patch should be backported as far as 2.8.
The RX_F_INHERITED flag was ambiguous, as it was used to mark both
listeners inherited from the parent process and listeners duplicated
from another local receiver. This could lead to incorrect behavior
concerning socket unbinding and suspension.
This commit refactors the handling of inherited listeners by splitting
the RX_F_INHERITED flag into two more specific flags:
- RX_F_INHERITED_FD: Indicates a listener inherited from the parent
process via its file descriptor. These listeners should not be unbound
by the master.
- RX_F_INHERITED_SOCK: Indicates a listener that shares a socket with
another one, either by being inherited from the parent or by being
duplicated from another local listener. These listeners should not be
suspended or resumed individually.
Previously, the sharding code was unconditionally using RX_F_INHERITED
when duplicating a file descriptor. In HAProxy versions prior to 3.1,
this led to a file descriptor leak for duplicated unix stats sockets in
the master process. This would eventually cause the master to crash with
a BUG_ON in fd_insert() once the file descriptor limit was reached.
This must be backported as far as 3.0. Branches earlier than 3.0 are
affected but would need a different patch as the logic is different.
Released version 3.4-dev1 with the following main changes :
- BUG/MINOR: jwt: Missing "case" in switch statement
- DOC: configuration: ECH support details
- Revert "MINOR: quic: use dynamic cc_algo on bind_conf"
- MINOR: quic: define quic_cc_algo as const
- MINOR: quic: extract cc-algo parsing in a dedicated function
- MINOR: quic: implement cc-algo server keyword
- BUG/MINOR: quic-be: Missing keywords array NULL termination
- REGTESTS: ssl enable tls12_reuse.vtc for AWS-LC
- REGTESTS: ssl: split tls*_reuse in stateless and stateful resume tests
- BUG/MEDIUM: connection: fix "bc_settings_streams_limit" typo
- BUG/MEDIUM: config: ignore empty args in skipped blocks
- DOC: config: mention clearer that the cache's total-max-size is mandatory
- DOC: config: reorder the cache section's keywords
- BUG/MINOR: quic/ssl: crash in ClientHello callback ssl traces
- BUG/MINOR: quic-be: handshake errors without connection stream closure
- MINOR: quic: Add useful debugging traces in qc_idle_timer_do_rearm()
- REGTESTS: ssl: Move all the SSL certificates, keys, crt-lists inside "certs" directory
- REGTESTS: quic/ssl: ssl/del_ssl_crt-list.vtc supported by QUIC
- REGTESTS: quic: dynamic_server_ssl.vtc supported by QUIC
- REGTESTS: quic: issuers_chain_path.vtc supported by QUIC
- REGTESTS: quic: new_del_ssl_cafile.vtc supported by QUIC
- REGTESTS: quic: ocsp_auto_update.vtc supported by QUIC
- REGTESTS: quic: set_ssl_bug_2265.vtc supported by QUIC
- MINOR: quic: avoid code duplication in TLS alert callback
- BUG/MINOR: quic-be: missing connection stream closure upon TLS alert to send
- REGTESTS: quic: set_ssl_cafile.vtc supported by QUIC
- REGTESTS: quic: set_ssl_cert_noext.vtc supported by QUIC
- REGTESTS: quic: set_ssl_cert.vtc supported by QUIC
- REGTESTS: quic: set_ssl_crlfile.vtc supported by QUIC
- REGTESTS: quic: set_ssl_server_cert.vtc supported by QUIC
- REGTESTS: quic: show_ssl_ocspresponse.vtc supported by QUIC
- REGTESTS: quic: ssl_client_auth.vtc supported by QUIC
- REGTESTS: quic: ssl_client_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_default_server.vtc supported by QUIC
- REGTESTS: quic: new_del_ssl_crlfile.vtc supported by QUIC
- REGTESTS: quic: ssl_frontend_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_server_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_simple_crt-list.vtc supported by QUIC
- REGTESTS: quic: ssl_sni_auto.vtc code provision for QUIC
- REGTESTS: quic: ssl_curve_name.vtc supported by QUIC
- REGTESTS: quic: add_ssl_crt-list.vtc supported by QUIC
- REGTESTS: add ssl_ciphersuites.vtc (TCP & QUIC)
- BUG/MINOR: quic: do not set first the default QUIC curves
- REGTESTS: quic/ssl: Add ssl_curves_selection.vtc
- BUG/MINOR: ssl: Don't allow to set NULL sni
- MEDIUM: quic: Add connection as argument when qc_new_conn() is called
- MINOR: ssl: Add a function to hash SNIs
- MINOR: ssl: Store hash of the SNI for cached TLS sessions
- MINOR: ssl: Compare hashes instead of SNIs when a session is cached
- MINOR: connection/ssl: Store the SNI hash value in the connection itself
- MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
- BUG/MEDIUM: ssl: Don't reuse TLS session if the connection's SNI differs
- MEDIUM: ssl/server: No longer store the SNI of cached TLS sessions
- BUG/MINOR: log: Dump good %B and %U values in logs
- BUG/MEDIUM: http-ana: Don't close server connection on read0 in TUNNEL mode
- DOC: config: Fix description of the spop mode
- DOC: config: Improve spop mode documentation
- MINOR: ssl: Split ssl_crt-list_filters.vtc in two files by TLS version
- REGTESTS: quic: tls13_ssl_crt-list_filters.vtc supported by QUIC
- BUG/MEDIUM: h3: do not access QCS <sd> if not allocated
- CLEANUP: mworker/cli: remove useless variable
- BUG/MINOR: mworker/cli: 'show proc' is limited by buffer size
- BUG/MEDIUM: ssl: Always check the ALPN after handshake
- MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag
- BUG/MEDIUM: ssl: Don't store the ALPN for check connections
- BUG/MEDIUM: ssl: Don't resume session for check connections
- CLEANUP: improvements to the alignment macros
- CLEANUP: use the automatic alignment feature
- CLEANUP: more conversions and cleanups for alignment
- BUG/MEDIUM: h3: fix access to QCS <sd> definitely
- MINOR: h2/trace: emit a trace of the received RST_STREAM type
Right now we don't get any state trace when receiving an RST_STREAM, and
this is not convenient because RST_STREAM(0) is not visible at all, except
at developer level, where only the function's entry and exit are traced.
Let's extract the RST code first and always log it using TRACE_PRINTF()
(along with h2c/h2s) so that it's possible to detect certain codes being
used.
The previous patch tried to fix access to the QCS <sd> member, as the latter
is not always allocated anymore on the frontend side.
    a15f0461a016a664427f5aaad2227adcc622c882
    BUG/MEDIUM: h3: do not access QCS <sd> if not allocated
In particular, access was prevented after HEADERS parsing in case
h3_req_headers_to_htx() returned an error, which indicates that the
stream-endpoint allocation was not performed. However, this still is not
enough when the QCS instance is already closed at this step. Indeed, in this
case, h3_req_headers_to_htx() returns OK but stream-endpoint allocation
is skipped as an optimization, as no data exchange will be performed.
To definitely fix this kind of problem, add checks on the QCS <sd> member
before accessing it in the H3 layer. This method is the safest one to ensure
there is no NULL dereference.
This should fix github issue #3211.
This must be backported along with the above-mentioned patch.
- Convert additional cases to use the automatic alignment feature for
  the THREAD_ALIGN(ED) macros. This includes some cases that are less
  obviously correct, where it seems we wanted to align only in the
  USE_THREAD case but were not using the thread-specific macros.
- Also move some alignment requirements to the structure definition
  instead of having them on variable declarations.
- Use the automatic alignment feature instead of hardcoding 64 all over
the code.
- This also converts a few bare __attribute__((aligned(X))) to using the
ALIGNED macro.
- It is now possible to use the THREAD_ALIGN and THREAD_ALIGNED macros
without a parameter. In this case, we automatically align on the cache
line size.
- The cache line size is set to 64 by default to match the current code,
but it can be overridden on the command line.
- This required moving the DEFVAL/DEFNULL/DEFZERO macros to compiler.h
instead of tools-t.h, to avoid namespace pollution if we included
tools-t.h from compiler.h.
Don't attempt to use stored sessions when creating new check
connections, as the check SSL parameters might be different from the
server's ones.
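As an illustration, here is a hedged configuration sketch (names and
addresses are made up) of a server whose checks deliberately use different
SSL parameters than the regular traffic, which is exactly the situation
where a stored session must not be reused:
    backend be_app
        option httpchk
        server s1 10.0.0.1:443 ssl verify none sni str(app.example.org) alpn h2 \
            check check-ssl check-sni health.example.org check-alpn http/1.1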
This has not been proven to be a problem yet, but it doesn't mean it
can't be, and this should be backported up to 2.8 along with
dcce9369129f6ca9b8eed6b451c0e20c226af2e3 if it is.
When establishing check connections, do not store the negotiated ALPN
into the server's path_param if the connection is a check connection, as
it may use different SSL parameters than the regular connections. To do
so, only store it if the CO_FL_SSL_NO_CACHED_INFO flag is not set.
Otherwise, the check ALPN may be stored, and the wrong mux may be used
for regular connections, which will end up generating 502s.
This should fix Github issue #3207.
This should be backported to 3.3.
Add a new flag to connections, CO_FL_SSL_NO_CACHED_INFO, and set it for
checks.
It lets the ssl layer know that it should not use cached information,
such as the ALPN stored in the server, or cached sessions.
This will be used for checks, as checks may target different servers or
use a different SSL configuration, so we can't assume the stored
information is correct.
This should be backported to 3.3, and may be backported up to 2.8 if the
attempt to resume sessions for checks is proven to be a problem.
Move the code that is responsible for checking the ALPN, and updating
the one stored in the server's path_param, from after we created the
mux to after we did a handshake. Once we did it once, the mux will not
be created by the ssl code anymore: when we know which mux to use
thanks to the ALPN, it is done earlier in connect_server(), so in
the unlikely event it changes, we would not detect it anymore, and we'd
keep on creating the wrong mux.
This can be reproduced by doing a first request, and then changing the
ALPN of the server behind haproxy's back (i.e. without haproxy noticing
that the server went down).
This should be backported to 3.3.
In ticket #3204, it was reported that "show proc" is not able to display
more than 202 processes. Indeed the bufsize is 16k by default in the
master, and can't be changed anymore since 3.1.
This patch allows 'show proc' to resume dumping when the buffer is
full, based on the timestamp of the last PID it attempted to dump.
Using pointers or counting the number of processes might not be a good
idea since the list can change between calls.
Could be backported to all stable branches.
Since the following commit, allocation of QCS stream-endpoint on FE side
has been delayed. The objective is to allocate it only for QCS attached
to an upper stream object. Stream-endpoint allocation is now performed
on qcs_attach_sc() called during HEADERS parsing.
    commit e6064c561684d9b079e3b5725d38dc3b5c1b5cd5
    OPTIM: mux-quic: delay FE sedesc alloc to stream creation
Also, the stream-endpoint is accessed through the QCS instance after HEADERS
or DATA frame parsing, to update the known input payload length. The
above patch triggered regressions, as in some code paths the <sd> field is
dereferenced while still being NULL.
This patch fixes this by restricting access to the <sd> field behind
stricter conditions.
First, after HEADERS parsing, the known input length is only updated if
h3_req_headers_to_htx() previously returned a success value, which
guarantees that qcs_attach_sc() has been executed.
After DATA parsing, <sd> is only accessed after the frame validity
check. This ensures that HEADERS were already parsed, thus guaranteeing
that the stream-endpoint is allocated.
This should fix github issue #3211.
This must be backported up to 3.3. This is sufficient, unless above
patch is backported to previous releases, in which case the current one
must be picked with it.
ssl/tls13_ssl_crt-list_filters.vtc was renamed to ssl/tls13_ssl_crt-list_filters.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then tls13_ssl_crt-list_filters.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
Separate ssl_crt-list_filters.vtc, which supported both the TLS 1.2 and 1.3
versions, into tls12_ssl_crt-list_filters.vtc and tls13_ssl_crt-list_filters.vtc.
The spop mode description was a bit confusing. So let's improve it.
Thanks to @NickMRamirez.
This patch should fix issue #3206. It could be backported as far as 3.1.
It was mentioned that the spop mode turned the backend into a "log"
backend. This is obviously wrong. It turns the backend into a spop backend.
This patch should be backported as far as 3.1.
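For reference, a minimal sketch of what such an spop backend looks like
(agent address and names are made up):
    backend spoe-agents
        mode spop
        server agent1 127.0.0.1:12345 check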
It is a very old bug (2012), dating from the introduction of keep-alive
support in HAProxy. When a request is fully received, the SC on the backend
side is switched to NOHALF mode. It means that when a read0 is received from
the server, the server connection is immediately closed. It is expected to
do so at the end of a classical request. However, it must not be performed
if the session is switched to TUNNEL mode (after an HTTP/1 upgrade or a
CONNECT). The client may still have data to send to the server, and
brutally closing the server connection this way will be handled as an error
on the client side.
This bug is especially visible with an H2 connection on the client side,
because an RST_STREAM is emitted and an "SD--" is reported in logs.
Thanks to @chrisstaite
This patch should fix the issue #3205. It must be backported to all stable
versions.
When per-stream "bytes_in" and "bytes_out" counters were replaced in 3.3,
the wrong counters were used for %B and %U values in logs. In the
configuration manual and the commit message, it was specified that
"bytes_in" was replaced by "req_in" and "bytes_out" by "res_in", but in the
code, the wrong counters were used. It is now fixed.
This patch should fix the issue #3208. It must be backported to 3.3.
Thanks to the previous patch, "BUG/MEDIUM: ssl: Don't reuse TLS session
if the connection's SNI differs", it is now useless to store the SNI of
cached TLS sessions. This SNI is no longer tested, and new connections
reusing a session must have the same SNI.
The main change here is in the ssl_sock_set_servername() function. It is no
longer possible to compare the SNI of the reused session with the one of the
new connection. So the SNI is always set, with no other processing. Mainly,
the session is no longer destroyed when SNIs don't match. It means the commit
119a4084bf ("BUG/MEDIUM: ssl: for a handshake when server-side SNI changes")
is implicitly reverted.
It is good to note that it is unclear to me when and why the reused session
should be destroyed, because I'm unable to reproduce any issue fixed by the
commit above.
This patch could be backported as far as 3.0 with the commit above.
When a new SSL server connection is created, if no SNI is set, it is
possible to inherit the one of the reused TLS session. The bug was
introduced by the commit 95ac5fe4a ("MEDIUM: ssl_sock: always use the SSL's
server name, not the one from the tid"). The mixup is possible between
regular connections, but also with health-check connections.
But this is only the visible part of the bug. If the SNI of the cached TLS
session does not match the one of the new connection, no reuse must be
performed at all.
To fix the bug, the hash of the reused session's SNI is compared with the
one of the new connection. The TLS session is reused only if the hashes are
the same.
This patch should fix the issue #3195. It must be slowly backported as far
as 3.0. It relies on the following series:
* MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
* MINOR: connection/ssl: Store the SNI hash value in the connection itself
* MEDIUM: ssl: Store hash of the SNI for cached TLS sessions
* MINOR: ssl: Add a function to hash SNIs
* MEDIUM: quic: Add connection as argument when qc_new_conn() is called
* BUG/MINOR: ssl: Don't allow to set NULL sni
The SNI of a new connection is now retrieved earlier, before the
initialization of the SSL context. So, concretely, it is now performed
before calling conn_prepare(). The SNI is then set just after.
When an SNI is set on a new connection, its hash is now saved in the
connection itself. To do so, a dedicated field called sni_hash was added to
the connection structure. For now, this value is only used when the TLS
session is cached.
This patch relies on the commit "MINOR: ssl: Store hash of the SNI for
cached TLS sessions". We now use the hash of the SNIs instead of the SNIs
themselves to know if we must update the cached SNI or not.
For cached TLS sessions, in addition to the SNI itself, its hash is now also
saved. No changes are expected here because this hash is not used for now.
This commit relies on:
* MINOR: ssl: Add a function to hash SNIs
This patch only adds the function ssl_sock_sni_hash() that can be used to
get the hash value corresponding to an SNI. A global seed, sni_hash_seed, is
used.
This patch reverts the commit efe60745b ("MINOR: quic: remove connection arg
from qc_new_conn()"). The connection will be mandatory when the QUIC
connection is created on backend side to fix an issue when we try to reuse a
TLS session.
So, the connection is again an argument of qc_new_conn(), the 4th
argument. It is NULL for frontend QUIC connections but there is no special
check on it.
ssl_sock_set_servername() function was documented to support a NULL sni to
unset it. However, the man page of SSL_get_servername() does not mention
whether it is supported or not. And it is in fact not supported by WolfSSL,
and leads to a crash if we do so.
For now, this function is never called with a NULL sni, so it is better and
safer to forbid this case. Now, if the sni is NULL, the function does
nothing.
This patch could be backported to all stable versions.
This reg test ensures the curves may be correctly set for frontends
and backends by "ssl-default-bind-curves" and "ssl-default-server-curves"
as global options, or with the "curves" option on "bind" and "server" lines.
This test covers both the QUIC and TCP listeners.
Note that "ssl-default-bind-ciphersuites", "ssl-default-bind-curves",
are not ignored by QUIC by the frontend. This is also the case for the
backends with "ssl-default-server-ciphersuites" and "ssl-default-server-curves".
These settings are set by ssl_sock_prepare_ctx() for the frontends and
by ssl_sock_prepare_srv_ssl_ctx() for the backends. But ssl_quic_initial_ctx()
first sets the default QUIC frontends (see <quic_ciphers> and <quic_groups>)
before these ssl_sock.c function are called, leading some TLS stack to
refuse them if they do not support them. This is the case for some OpenSSL 3.5
stack with FIPS support. They do not support X25519.
To fix this, set the default QUIC ciphersuites and curves only if not already
set by the settings mentioned above.
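For instance, with a hedged configuration sketch like the following (the
values are illustrative), the explicitly configured ciphersuites and curves
now take precedence over the hard-coded QUIC defaults:
    global
        ssl-default-bind-ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
        ssl-default-bind-curves P-256:P-384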
Rename the <quic_ciphers> global variable to <default_quic_ciphersuites>
and <quic_groups> to <default_quic_curves> to reflect the OpenSSL API naming.
These options are taken into account by ssl_quic_initial_ctx(),
which inspects these four variables before calling SSL_CTX_set_ciphersuites()
with <default_quic_ciphersuites> as parameter and SSL_CTX_set_curves() with
<default_quic_curves> as parameter if needed, that is to say if no ciphersuites
or curves were set by "ssl-default-bind-ciphersuites" or "ssl-default-bind-curves"
as global options, or by "ciphersuites" or "curves" as "bind" line options.
Note that the bind_conf struct is not modified when no "ciphersuites" or
"curves" option is used on "bind" lines.
On the backend side, rely on ssl_sock_init_srv() to set the server ciphersuites
and curves. This function is modified to use respectively <default_quic_ciphersuites>
and <default_quic_curves> if no ciphersuites or curves were set by
"ssl-default-server-ciphersuites" or "ssl-default-server-curves" as global options,
or by "ciphersuites" or "curves" as "server" line options.
Thanks to @rwagoner for reporting this issue in GH #3194 when using
an OpenSSL 3.5.4 stack with FIPS support.
Must be backported as far as 2.6.
This reg test ensures the ciphersuites may be correctly set for frontends
and backends by "ssl-default-bind-ciphersuites" and "ssl-default-server-ciphersuites"
as global options, or with the "ciphersuites" option on "bind" and "server" lines.
ssl/add_ssl_crt-list.vtc was renamed to ssl/add_ssl_crt-list.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then add_ssl_crt-list.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_curve_name.vtc was renamed to ssl/ssl_curve_name.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_curve_name.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
Note that this script works by chance for QUIC because the curves
selection matches the default ones used by QUIC.
ssl/ssl_sni_auto.vtc was renamed to ssl/ssl_sni_auto.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_sni_auto.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
Mark the test as broken for QUIC
ssl/ssl_simple_crt-list.vtc was renamed to ssl/ssl_simple_crt-list.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_simple_crt-list.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_server_samples.vtc was renamed to ssl/ssl_server_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_server_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_frontend_samples.vtc was renamed to ssl/ssl_frontend_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_frontend_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/new_del_ssl_crlfile.vtc was renamed to ssl/new_del_ssl_crlfile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then new_del_ssl_crlfile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_default_server.vtc was renamed to ssl/ssl_default_server.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_default_server.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_client_samples.vtc was renamed to ssl/ssl_client_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_client_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ssl_client_auth.vtc was renamed to ssl/ssl_client_auth.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_client_auth.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/show_ssl_ocspresponse.vtc was renamed to ssl/show_ssl_ocspresponse.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then show_ssl_ocspresponse.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/set_ssl_server_cert.vtc was renamed to ssl/set_ssl_server_cert.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_server_cert.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/set_ssl_crlfile.vtc was renamed to ssl/set_ssl_crlfile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_crlfile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/set_ssl_cert.vtc was renamed to ssl/set_ssl_cert.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cert.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/set_ssl_cert_noext.vtc was renamed to ssl/set_ssl_cert_noext.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cert_noext.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/set_ssl_cafile.vtc was renamed to ssl/set_ssl_cafile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cafile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
This is the same issue as the one fixed by this commit:
    BUG/MINOR: quic-be: handshake errors without connection stream closure
But this time it happens when the client has to send an alert to the server.
The fix consists in creating the mux after having set the handshake connection
error flag and error_code.
This bug was revealed by the ssl/set_ssl_cafile.vtc reg test.
Depends on this commit:
    MINOR: quic: avoid code duplication in TLS alert callback
Must be backported to 3.3.
The OpenSSL QUIC API TLS alert callback ha_quic_ossl_alert() does exactly
the same thing as the one for the quictls API, even if the parameters have
different types.
Call the ha_quic_send_alert() quictls callback from the ha_quic_ossl_alert()
OpenSSL QUIC API callback to avoid such code duplication.
ssl/set_ssl_bug_2265.vtc was renamed to ssl/set_ssl_bug_2265.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_bug_2265.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/ocsp_auto_update.vtc was renamed to ssl/ocsp_auto_update.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ocsp_auto_update.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
ssl/new_del_ssl_cafile.vtc was renamed to ssl/new_del_ssl_cafile.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then new_del_ssl_cafile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC connections and "stream" for TCP connections).
ssl/issuers_chain_path.vtc was renamed to ssl/issuers_chain_path.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then issuers_chain_path.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC connections and "stream" for TCP connections).
ssl/dynamic_server_ssl.vtc was renamed to ssl/dynamic_server_ssl.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then dynamic_server_ssl.vtc files were created both under ssl and quic directories
to call the .vtci file with the correct VTC_SOCK_TYPE environment value.
Note that VTC_SOCK_TYPE may be resolved in haproxy -cli { } sections.
Extract from ssl/del_ssl_crt-list.vtc the common part to produce
ssl/del_ssl_crt-list.vtci which may be reused by QUIC and TCP
from respectively quic/del_ssl_crt-list.vtc and ssl/del_ssl_crt-list.vtc
thanks to "include" VTC command and VTC_SOCK_TYPE special vtest environment
variable.
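A hedged sketch of what such a quic/ wrapper may look like (the exact vtest
commands and the relative path are assumptions based on the description
above):
    varnishtest "del ssl crt-list over a QUIC listener"
    setenv VTC_SOCK_TYPE "quic"
    include ${testdir}/../ssl/del_ssl_crt-list.vtci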
Move all these files, and others for the OCSP tests found in reg-tests/ssl,
to reg-tests/ssl/certs, and adapt all the VTC files which use them.
This patch is needed by other tests which have to include the SSL tests.
Indeed, some VTC commands contain paths to these files which cannot
be customized with environment variables depending on the location the VTC file
is run from, because VTC does not resolve environment variables. Only macros
such as ${testdir} can be resolved.
For instance, this command run from a VTC file in the reg-tests/ssl directory
cannot be reused from another directory, except if we add a symbolic link for
each cert, key, etc.
    haproxy h1 -cli {
        send "del ssl crt-list ${testdir}/localhost.crt-list ${testdir}/common.pem:1"
    }
This is not what we want. Instead, we add a symbolic link to reg-tests/ssl/certs
in the directory and modify the command above as follows:
    haproxy h1 -cli {
        send "del ssl crt-list ${testdir}/certs/localhost.crt-list ${testdir}/certs/common.pem:1"
    }
Traces were missing in this function.
Also add information about the connection struct from qc->conn, when it is
initialized, to all the traces.
Should be easily backported as far as 2.6.
This bug was revealed on the backend side by reg-tests/ssl/del_ssl_crt-list.vtc when
run with QUIC connections. As expected by the test, a TLS alert is generated on the
server side. The latter sends a CONNECTION_CLOSE frame with a CRYPTO error
(>= 0x100). In this case the client closes its QUIC connection. But
the stream connection was not informed. This led the connection to
be closed only after the server timeout expiration, while it should be closed asap.
This is the reason why reg-tests/ssl/del_ssl_crt-list.vtc could succeed
or fail, but only after a 5 second delay.
To fix this, mimic ssl_sock_io_cb() for TCP/SSL connections. Call
the same code this patch implements with ssl_sock_handle_hs_error()
to correctly handle the handshake errors. Note that some SSL counters
were not incremented for both the backends and frontends. After such
errors, ssl_sock_io_cb() starts the mux after the connection has been
flagged in error. This has the side effect of closing the stream
in conn_create_mux().
Must be backported to 3.3, only for the backends. It is not sure at this time
whether this bug may impact the frontends.
Such crashes may occur for QUIC frontends only, when the SSL traces are enabled.
The ssl_sock_switchctx_cbk() ClientHello callback may be called without any
initialized connection (<conn>) for QUIC connections, leading to crashes when
passing conn->err_code to TRACE_ERROR().
Modify the TRACE_ERROR() statement to pass this parameter only when <conn> is
initialized.
Must be backported as far as 3.2.
Probably due to historical accumulation, keywords were in a random
order that doesn't help when looking them up. Let's just reorder them
in alphabetical order like other sections. This can be backported.
As reported by Christian Ruppert in GH issue #3203, we're having an
issue with checks for empty args in skipped blocks: the check is
performed after the line is tokenized, without considering the case
where it's disabled due to outer false .if/.else conditions. Because
of this, a test like this one:
    .if defined(SRV1_ADDR)
        server srv1 "$SRV1_ADDR"
    .endif
will fail when SRV1_ADDR is empty or not set, saying that this will
result in an empty arg on the line.
The solution consists in postponing this check until after the conditions
have been evaluated, so that disabled lines are already skipped. And for
this to be possible, we need to move "errptr" one level above so that it
remains accessible there.
This will need to be backported to 3.3 and wherever commit 1968731765
("BUG/MEDIUM: config: solve the empty argument problem again") is
backported. As such it is also related to GH issue #2367.
The keyword was correct in the doc but in the code it was spelled
with a missing 's' after 'settings', making it unavailable. Since
there was no other way to find this but reading the code, it's safe
to simply fix it and assume nobody relied on the wrong spelling.
In the worst case for older backports it can also be duplicated.
This must be backported to 3.0.
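For reference, a hedged example of the fetch in use now that the spelling
matches the doc (the log-format line and addresses are illustrative):
    frontend fe
        mode http
        bind :8080
        log-format "%ci:%cp streams-limit=%[bc_settings_streams_limit]"
        default_backend be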
Simplify ssl_reuse.vtci so it can be started with variables:
- SSL_CACHESIZE allows specifying the size of the frontend's session cache
- NO_TLS_TICKETS allows specifying the "no-tls-tickets" option on bind lines
It introduces these files:
- ssl/tls12_resume_stateful.vtc
- ssl/tls12_resume_stateless.vtc
- ssl/tls13_resume_stateless.vtc
- ssl/tls13_resume_stateful.vtc
- quic/tls13_resume_stateless.vtc
- quic/tls13_resume_stateful.vtc
- quic/tls13_0rtt_stateful.vtc
- quic/tls13_0rtt_stateless.vtc
Stateful files have "no-tls-tickets" + tune.ssl.cachesize 20000;
stateless files have "tls-tickets" + tune.ssl.cachesize 0.
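A minimal sketch of the two frontend variants (the certificate path is made
up):
    # stateful resumption: session cache only
    global
        tune.ssl.cachesize 20000
    frontend fe
        bind :8443 ssl crt /tmp/common.pem no-tls-tickets

    # stateless resumption: tickets only
    global
        tune.ssl.cachesize 0
    frontend fe
        bind :8443 ssl crt /tmp/common.pem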
This allows enabling AWS-LC on TCP TLS1.2 and TCP TLS1.3+tickets.
TLS1.2+stateless does not seem to work on WolfSSL.
The TLS resume test was never started with AWS-LC because the TLS1.3
part was not working. Since we split the reg-tests into a TLS1.2 part
and a TLS1.3 part, we can enable the TLS1.2 part for AWS-LC.
Extend the QUIC server configuration so that the congestion algorithm and
maximum window size can be set on the server line. This can be achieved
using the quic-cc-algo keyword, with a syntax similar to the bind line one.
This should be backported up to 3.3, as this feature is considered
necessary for full QUIC backend support. Note that this relies on the
series of previous commits, which should be picked first.
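A hedged sketch of what this can look like on a server line, assuming the
bind-line syntax carries over as described (the quic4@ address prefix, the
address and the window value are assumptions):
    backend quic_be
        mode http
        server srv1 quic4@192.0.2.10:443 ssl verify none alpn h3 quic-cc-algo cubic(10m)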
Extract the code related to the pure parsing of the quic-cc-algo keyword out
of bind_parse_quic_cc_algo(). The objective is to be able to quickly
duplicate this option on the server line.
This may need to be backported to 3.3 to support the QUIC congestion control
algorithm on the server line.
Each QUIC congestion algorithm is defined as a structure with callbacks
in it. Every quic_conn has a member pointing to the configured
algorithm, inherited from the bind-conf keyword or set to the default CUBIC
value.
Convert all these definitions to const. This ensures that there never
will be an accidental modification of a globally shared structure. This
also requires marking the quic_cc_algo fields in bind_conf and quic_cc as
const.
This reverts commit a6504c9cfb6bb48ae93babb76a2ab10ddb014a79.
Each supported QUIC algo is associated with a set of callbacks defined
in a quic_cc_algo structure. Originally, bind_conf would use a constant
pointer to one of these definitions.
During the pacing implementation, this field was transformed into a
dynamically allocated value copied from the original definition. The
idea was to be able to tweak settings at the listener level. However,
this was never used in practice. As such, revert to the original model.
This may need to be backported to 3.3 to support the QUIC congestion control
algorithm on the server line.
Because of a missing "case" keyword in front of the values in a switch
statement, the values were interpreted as goto labels and the switch
statement became useless.
This patch should fix GitHub issue #3200.
The fix should be backported up to 2.8.
Released version 3.3.0 with the following main changes :
- BUG/MINOR: acme: better challenge_ready processing
- BUG/MINOR: acme: warning ‘ctx’ may be used uninitialized
- MINOR: httpclient: complete the https log
- BUG/MEDIUM: server: do not use default SNI if manually set
- BUG/MINOR: freq_ctr: Prevent possible signed overflow in freq_ctr_overshoot_period
- DOC: ssl: Document the restrictions on 0RTT.
- DOC: ssl: Note that 0rtt works for QUIC with QuicTLS too.
- BUG/MEDIUM: quic: do not prevent sending if no BE token
- BUG/MINOR: quic/server: free quic_retry_token on srv drop
- MINOR: quic: split global CID tree between FE and BE sides
- MINOR: quic: use separate global quic_conns FE/BE lists
- MINOR: quic: add "clo" filter on show quic
- MINOR: quic: dump backend connections on show quic
- MINOR: quic: mark backend conns on show quic
- BUG/MINOR: quic: fix uninit list on show quic handler
- BUG/MINOR: quic: release BE quic_conn on connect failure
- BUG/MINOR: server: fix srv_drop() crash on partially init srv
- BUG/MINOR: h3: do no crash on forwarding multiple chained response
- BUG/MINOR: h3: handle properly buf alloc failure on response forwarding
- BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set
- BUG/MINOR: acme: fix ha_alert() call
- Revert "BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set"
- BUG/MINOR: sock-inet: ignore conntrack for transparent sockets on Linux
- DEV: patchbot: prepare for new version 3.4-dev
- DOC: update INSTALL with the range of gcc compilers and openssl versions
- MINOR: version: mention that 3.3 is stable now
As reported in github issue #3192, in certain situations with transparent
listeners, it is possible to get the incoming connection's destination
wrong via SO_ORIGINAL_DST. Two cases were identified thus far:
- incorrect conntrack configuration where NOTRACK is used only on
incoming packets, resulting in reverse connections being created
from response packets. It's then mostly a matter of timing, i.e.
whether or not the connection is confirmed before the source is
retrieved, but in this case the connection's destination address
as retrieved by SO_ORIGINAL_DST is the client's address.
- late outgoing retransmit that recreates a just expired conntrack
entry, in reverse direction as well. It's possible that combinations
of RST or FIN might play a role here in speeding up conntrack eviction,
as well as the rollover of source ports on the client whose new
connection matches an older one and simply refreshes it due to
nf_conntrack_tcp_loose being set by default.
TPROXY doesn't require conntrack, only REDIRECT, DNAT etc do. However
the system doesn't offer any option to know how a conntrack entry was
created (i.e. normally or via a response packet) to let us know that
it's pointless to check the original destination, nor does it permit
to access the local vs peer addresses in opposition to src/dst which
can be wrong in this case.
One alternate approach could consist in only checking SO_ORIGINAL_DST
for listening sockets not configured with the "transparent" option,
but the problem here is that our low-level API only works with FDs
without knowing their purpose, so it's unknown there that the fd
corresponds to a listener, let alone in transparent mode.
A (slightly more expensive) variant of this approach here consists in
checking on the socket itself that it was accepted in transparent mode
using IP_TRANSPARENT, and skip SO_ORIGINAL_DST if this is the case.
This does the job well enough (no more client addresses appearing in
the dst field) and remains a good compromise. A future improvement of
the API could permit to pass the transparent flag down the stack to
that function.
This should be backported to stable versions after some observation
in latest -dev.
For reference, here are some links to older conversations on that topic
that Lukas found during this analysis:
https://lists.openwall.net/netdev/2019/01/12/34
https://discourse.haproxy.org/t/send-proxy-not-modifying-some-traffic-with-proxy-ip-port-details/3336/9
https://www.mail-archive.com/haproxy@formilux.org/msg32199.html
https://lists.openwall.net/netdev/2019/01/23/114
This reverts commit de29000e602bda55d32c266252ef63824e838ac0.
The fix was in fact invalid. First, it is not supported by WolfSSL to call
SSL_set_tlsext_host_name with a NULL hostname. Then, it is not specified
as supported by other SSL libraries either.
But by reviewing the root cause of this bug, it appears there is an issue
with the reuse of TLS sessions. It must not be performed if the SNI does not
match. A TLS session created with an SNI must not be reused with another
SNI. The side effects are not clear, but functionally speaking, it is
invalid.
So, for now, the commit above was reverted because it is invalid and it
crashes with WolfSSL. Then the init of the SSL connection must be reworked
to get the SNI earlier, to be able to reuse or not an existing TLS
session.
When a new SSL server connection is created, if no SNI is set, it is
possible to inherit the one of the reused TLS session. The bug was
introduced by the commit 95ac5fe4a ("MEDIUM: ssl_sock: always use the SSL's
server name, not the one from the tid"). The mixup is possible between
regular connections, but also with health-check connections.
To fix the issue, when no SNI is set, for regular server connections and for
health-check connections, the SNI must explicitly be disabled by calling
ssl_sock_set_servername() with the hostname set to NULL.
Many thanks to Lukas for his detailed bug report.
This patch should fix the issue #3195. It must be backported as far as 3.0.
Replace the BUG_ON() for buffer alloc failure in h3_resp_headers_to_htx()
with proper error handling. An error status is reported, which should be
sufficient to initiate connection closure.
No need to backport.
h3_resp_headers_to_htx() is the function used to convert an HTTP/3
response into an HTX message. It was introduced in this release for QUIC
backend support.
A BUG_ON() would occur if multiple responses were forwarded
simultaneously on a stream without an rcv_buf in between. Fix this by
removing it. Instead, if the QCS HTX buffer is not empty when handling
a new response, prefer to pause the demux operation. It is restarted when
the buffer has been read and emptied by the upper stream layer.
No need to backport.
A recent patch has introduced a free operation for QUIC tokens stored in a
server. These values are located in the <per_thr> server array.
However, a server instance may be released prior to its full
initialization in case of a failure during the "add server" CLI command. The
mentioned patch would cause a srv_drop() crash due to an invalid usage
of the NULL <per_thr> member.
Fix this by adding a check on <per_thr> prior to dereferencing it in
srv_drop().
No need to backport.
If quic_connect_server() fails, the quic_conn FD will remain unopened, as it
is set to -1. Backend connections do not have a fallback socket for future
exchanges, contrary to frontend ones which can use the listener FD. As
such, it is better to release these connections early.
This patch adjusts such failures by extending quic_close(). This function
is called by the upper layer immediately after a connect issue. In this
case, immediately release a quic_conn backend instance if the FD is
unset, which means that connect has previously failed.
Also, quic_conn_release() is extended to ensure that such faulty
connections are immediately freed and not converted into a
quic_conn_closed instance.
Prior to this patch, a backend quic_conn without any FD would remain
allocated and possibly active. If its tasklet was executed, this resulted
in a crash due to access to an invalid FD.
No need to backport.
A recent patch has extended the "show quic" capability. It is now possible
to list a specific set of connections: either active frontend, closing
frontend or backend connections.
An issue was introduced as the list is local storage. As this command is
reentrant, the show quic context must be extended so that the currently
inspected list is also saved.
This issue was reported by GCC, which mentions an uninitialized value
depending on branching conditions.
Add an extra "(B)" marker when displaying a backend connection during a
"show quic". This is useful to differentiate them with the frontend side
when displaying all connections.
Add a new "be" filter to "show quic". Its purpose is to be able to
display backend connections. These connections can also be listed using
"all" filter.
Each quic_conn instance is stored in a global list. Its purpose is to be
able to loop over all known connections during "show quic".
Split this into two separate lists for frontend and backend usage.
Another change is that closing backend connections do not move into the
quic_conns_clo list. They remain in their original list instead. The
objective of this patch is to reduce the contention between the two
sides.
Note that this prevents backend connections from being listed in "show quic"
for now. This will be adjusted in a future patch.
QUIC CIDs are stored in a global tree. Prior to this patch, CIDs used on
both the frontend and backend sides were mixed together.
This patch implements CID storage separation between the FE and BE sides. The
original quic_cid_trees tree is split into
quic_fe_cid_trees/quic_be_cid_trees.
This patch should reduce contention between frontend and backend usages.
Also, it should reduce the risk of random CID collisions.
A recent patch has implemented caching of the QUIC token received in a
NEW_TOKEN frame into the server cache. This value is stored per thread
in a <quic_retry_token> field.
This field is an ist, first set to an empty string. Via
qc_try_store_new_token(), it is reallocated to fit the size of the newly
stored token. Prior to this patch, the field was never freed, which
causes a memory leak.
Fix this by using istfree() on the <quic_retry_token> field during
srv_drop().
No need to backport.
For QUIC client support, a token may be emitted along with INITIAL
packets during the handshake. The token is encoded during emission via
qc_enc_token(), called by qc_build_pkt().
The token may be provided from different sources. First, it can be
retrieved via the <retry_token> quic_conn member when a Retry packet was
received. If not present, a token may be reused from the server cache,
populated from NEW_TOKEN frames received on previous connections.
Prior to this patch, the last method could cause an issue. If the upper
connection instance is released prior to the handshake completion, this
prevents access to a possible server token. This is considered an error
by qc_enc_token(). The error is reported up to the calling functions,
preventing any emission from being performed. In the end, this prevented
either the full quic_conn release or the downsizing into quic_conn_closed
until the idle timeout expiration (30s by default). With abortonclose
now set by default on HTTP frontends, early client shutdowns can easily
cause excessive memory consumption.
To fix this, change qc_enc_token() so that if the connection is closed, no
token is encoded, but no error is reported either. This allows emission to
continue and permits early connection release.
No need to backport.
Document that with QUIC, 0RTT only works with OpenSSL >= 3.5.2 and
AWS-LC, and for TLS/TCP, it only works with OpenSSL, and frontends
require that an ALPN be sent by the client to use the early data before
the handshake.
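A minimal sketch of a TLS/TCP frontend accepting early data under these
constraints (the certificate path is made up):
    frontend fe
        mode http
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1 allow-0rtt
        default_backend be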
All of the other bandwidth-limiting code stores limits and intermediate
(byte) counters as unsigned integers. The exception here is
freq_ctr_overshoot_period which takes in unsigned values but returns a
signed value. While this has the benefit of letting the caller know how
far away from overshooting they are, this is not currently leveraged
anywhere in the codebase, and it has the downside of halving the positive
range of the result.
More concretely though, returning a signed integer when all intermediate
values are unsigned (and boundaries are not checked) could result in an
overflow, producing values that are at best unexpected. In the case of
flt_bwlim (the only usage of freq_ctr_overshoot_period in the codebase at
the time of writing), an overflow could cause the filter to wait for a
large number of milliseconds when in fact it shouldn't wait at all.
This is a niche possibility, because it requires that a bandwidth limit is
defined in the range [2^31, 2^32). In this case, the raw limit value would
not fit into a signed integer, and close to the end of the period, the
`(elapsed * freq)/period` calculation could produce a value which also
doesn't fit into a signed integer.
If at the same time `curr` (the number of events counted so far in the
current period) is small, then we could get a very large negative value
which overflows. This is undefined behaviour and could produce surprising
results. The most obvious outcome is flt_bwlim sometimes waiting for a
large amount of time in a case where it shouldn't wait at all, thereby
incorrectly slowing down the flow of data.
Converting just the return type from signed to unsigned (and checking for
the overflow) prevents this undefined behaviour. It also makes the range
of valid values consistent between the input and output of
freq_ctr_overshoot_period and with the input and output of other freq_ctr
functions, thereby reducing the potential for surprise in intermediate
calculations: now everything supports the full 0 - 2^32 range.
A new server feature "sni-auto" has been introduced recently. The
objective is to automatically set the SNI value to the Host header if no
SNI is explicitly set.
    668916c1a2fc2180028ae051aa805bb71c7b690b
    MEDIUM: server/ssl: Base the SNI value to the HTTP host header by default
There is an issue with it: the server SNI is currently always overwritten,
even if explicitly set in the configuration file. Adjust
check_config_validity() to ensure the default value is only used if
<sni_expr> is NULL.
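For instance, in a hedged sketch like the following (the address and
hostname are made up), the explicit "sni" expression must now be preserved
instead of being overwritten by the automatic Host-based default:
    backend be
        mode http
        server s1 10.0.0.1:443 ssl verify none sni str(app.example.org)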
This issue was detected via a memory leak on <sni_expr>, reported when an
SNI is explicitly set on a server line.
This patch is related to github feature request #3081.
No need to backport, unless the above patch is.
The httpsclient_log_format variable lacks a few values in the TLS fields
that are now available as fetches.
On the backend side we have:
"%[fc_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] %[ssl_fc_sni]/%sslv/%sslc"
We now have enough sample fetches to have this equivalent in the
httpclient:
"%[bc_err]/%[ssl_bc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_bc_is_resumed] %[ssl_bc_sni]/%[ssl_bc_protocol]/%[ssl_bc_cipher]"
Instead of the current:
"%[bc_err]/%[ssl_bc_err,hex]/-/-/%[ssl_bc_is_resumed] -/-/-"
Appease the compiler, which emits a maybe-uninitialized warning:
src/acme.c: In function ‘cli_acme_chall_ready_parse’:
include/haproxy/task.h:215:9: error: ‘ctx’ may be used uninitialized [-Werror=maybe-uninitialized]
215 | _task_wakeup(t, f, MK_CALLER(WAKEUP_TYPE_TASK_WAKEUP, 0, 0))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/acme.c:2903:17: note: in expansion of macro ‘task_wakeup’
2903 | task_wakeup(ctx->task, TASK_WOKEN_MSG);
| ^~~~~~~~~~~
src/acme.c:2862:26: note: ‘ctx’ was declared here
2862 | struct acme_ctx *ctx;
| ^~~
Backport to 3.2.
Improve the challenge_ready processing:
- do a lookup directly instead of looping over the task tree
- only do a task_wakeup when every challenge is ready, to avoid starting
  the task and stopping it just after
- compute the number of remaining challenges to set up
- output a message giving the number of remaining challenges to set up
  and whether the task started again.
Backport to 3.2.
Released version 3.3-dev14 with the following main changes :
- MINOR: stick-tables: Rename stksess shards to use buckets
- MINOR: quic: do not use quic_newcid_from_hash64 on BE side
- MINOR: quic: support multiple random CID generation for BE side
- MINOR: quic: try to clarify quic_conn CIDs fields direction
- MINOR: quic: refactor qc_new_conn() prototype
- MINOR: quic: remove <ipv4> arg from qc_new_conn()
- MEDIUM: mworker: set the mworker-max-reloads to 50
- BUG/MEDIUM: quic-be: prevent use of MUX for 0-RTT sessions without secrets
- CLEANUP: startup: move confusing msg variable
- BUG/MEDIUM: mworker: signals inconsistencies during startup and reload
- BUG/MINOR: mworker: wrong signals during startup
- BUG/MINOR: acme: P-256 doesn't work with openssl >= 3.0
- REGTESTS: ssl: split the SSL reuse test into TLS 1.2/1.3
- BUILD: Makefile: make install with admin tools
- CI: github: make install-bin instead of make install
- BUG/MINOR: ssl: remove dead code in ssl_sock_from_buf()
- BUG/MINOR: mux-quic: implement max-reuse server parameter
- MINOR: quic: fix trace on quic_conn_closed release
- BUG/MINOR: quic: do not decrement jobs for backend conns
- BUG/MINOR: quic: fix FD usage for quic_conn_closed on backend side
- BUILD: Makefile: remove halog from install-admin
- REGTESTS: ssl: add basic 0rtt tests for TLSv1.2, TLSv1.3 and QUIC
- REGTESTS: ssl: also verify that 0-rtt properly advertises early-data:1
- MINOR: quic/flags: add missing QUIC flags for flags dev tool.
- MINOR: quic: unneeded xprt context variable passed as parameter
- MINOR: limits: keep a copy of the rough estimate of needed FDs in global struct
- MINOR: limits: explain a bit better what to do when fd limits are exceeded
- BUG/MEDIUM: quic-be/ssl_sock: TLS callback called without connection
- BUG/MINOR: acme: alert when the map doesn't exist at startup
- DOC: acme: add details about the DNS-01 support
- DOC: acme: explain how to dump the certificates
- DOC: acme: configuring acme needs a crt file
- DOC: acme: add details about key pair generation in ACME section
- BUG/MEDIUM: queues: Don't forget to unlock the queue before exiting
- MINOR: muxes: Support an optional ALPN string when defining mux protocols
- MINOR: config: Do proto detection for listeners before checks about ALPN
- BUG/MEDIUM: config: Use the mux protocol ALPN by default for listeners if forced
- DOC: config: Add a note about conflict with ALPN/NPN settings and proto keyword
- MINOR: quic: store source address for backend conns
- BUG/MINOR: quic: flag conn with CO_FL_FDLESS on backend side
- ADMIN: dump-certs: let dry-run compare certificates
- BUG/MEDIUM: connection/ssl: also fix the ssl_sock_io_cb() regarding idle list
- DOC: http: document 413 response code
- MINOR: limits: display the computed maxconn using ha_notice()
- BUG/MEDIUM: applet: Fix conditions to detect spinning loop with the new API
- BUG/MEDIUM: cli: State the cli have no more data to deliver if it yields
- MINOR: h3: adjust sedesc update for known input payload len
- BUG/MINOR: mux-quic: fix sedesc leak on BE side
- OPTIM: mux-quic: delay FE sedesc alloc to stream creation
- BUG/MEDIUM: quic-be: quic_conn_closed buffer overflow
- BUG/MINOR: mux-quic: check access on qcs stream-endpoint
- BUG/MINOR: acme: handle multiple auth with the same name
- BUG/MINOR: acme: prevent creating map entries with dns-01
In the case of the dns-01 challenge, it is possible to have a domain
"example.com" and "*.example.com" in the same request. This will create
2 different auth objects, which need 2 different challenges.
However, the associated domain is "example.com" for both auth objects.
When doing a "challenge_ready", the algorithm will break at the first
domain found. But since you can have the same domain multiple times in
this case, breaking at the first one prevents having all auth objects in
a ready state.
This patch just removes the break so we can loop over every auth object.
Must be backported to 3.2.
Since the following commit, allocation of the stream-endpoint has been
delayed. The objective is to allocate it only for QCS instances attached to
an upper stream object.
    commit e6064c561684d9b079e3b5725d38dc3b5c1b5cd5
    OPTIM: mux-quic: delay FE sedesc alloc to stream creation
However, some MUX functions are unsafe, as qcs->sd is dereferenced
without any check on it, which will result in a crash. Fix this by
testing that qcs->sd is allocated before using it.
This does not need to be backported, unless the above patch is.
This bug impacts only the backends.
Recent commits have modified quic_rx_pkt_parse() for the QUIC backend to handle the
retry token and version negotiation. This function is called for the quic_conn
even in closing state (so for the quic_conn_closed struct). The quic_conn
struct and quic_conn_closed struct share some members thanks to the leading
QUIC_CONN_COMMON struct. The recent modifications impact some members which do not
exist in the quic_conn_closed struct, leading to buffer overflows if modified.
For the backends only, this patch:
1- silently drops the Retry packets (received/parsed only by backends)
2- silently drops the Initial packets received in closing state
This is safe for the Initial packets because in closing state the datagrams
are entirely skipped thanks to qc_rx_check_closing() in quic_dgram_parse().
No backport needed because the backend support arrived with the current dev.
On the frontend side, a stream-endpoint is allocated on every qcs_new()
invocation. However, it is only used for bidirectional request
streams.
This patch delays stream-endpoint allocation to qcs_attach_sc(), just
prior to the instantiation of the upper stream object. This does not bring
any behavior change, but is a nice optimization.
On the backend side, streams are instantiated prior to their QCS MUX
counterpart. Thus, QCS can reuse the stream-endpoint already allocated
with the stream, either in qmux_init() or during an attach operation.
However, a stream-endpoint is also always allocated on every qcs_new()
invocation. For backend QCS, it is thus overwritten on
qmux_init()/attach, which causes a memleak.
Fix this by restricting the allocation of the stream-endpoint to frontend
connections.
This does not need to be backported.
A regression was introduced by the commit 2d7e3ddd4 ("BUG/MEDIUM: cli: do
not return ACKs one char at a time"). When the CLI is processing a command
line, we no longer send the response immediately. It is especially useful for
clients sending a bunch of commands with very short responses.
However, in that state, the CLI applet must state that it has no more data to
deliver. Otherwise it will be woken up again and again, because data is
found in its output buffer with no blocking conditions. In the worst cases, if
the command rate is really high, this can trigger the watchdog.
This patch must be backported where the patch above is, so probably as far
as 3.0.
There was a mixup between read/send events and ability for an applet to
receive and send. The fix seems obvious by reading it. The call-rate must be
incremented when nothing was received from the applet while it was allowed
and nothing was sent to the applet while it was allowed.
This patch must be backported as far as 3.0.
The computed maxconn was only displayed in verbose or debug modes. This
is too bad because lots of users just don't know what they're starting
with and can be trapped when an environment changes. Let's use ha_notice()
instead of a conditional fprintf() so that it gets displayed right after
the other startup messages, hoping that users will get used to seeing it
and more easily spot anomalies. See github issue #3191 for more context.
The fix in commit 9481cef948 ("BUG/MEDIUM: connection: do not reinsert a
purgeable conn in idle list") is also needed for ssl_sock_io_cb() which
can also release an idle connection and must perform the same checks.
This fix must be backported to all stable versions containing the fix
above.
Let the --dry-run mode connect to the socket and compare the
certificates. It then exits the process just before trying to move
the previous certificate and replace it.
This allows having the "[NOTICE] (1234) XXX is already up to date" message
with dry-run.
The connection struct defines a handle which can point to either an FD or a
quic_conn. In the latter case, CO_FL_FDLESS must be set. This is already
the case on the frontend side.
This patch fixes QUIC backend support. Before setting the connection handle
member to a quic_conn instance, ensure that the CO_FL_FDLESS flag is set on
the connection.
Prior to this patch, a crash could occur in "show sess all".
No need to backport.
quic_conn has a local_addr member which is used to store the connection
source address. On backend side, this member is initialized to NULL as
the address is not yet known prior to connect. With this patch,
quic_connect_server() is extended so that local_addr is updated after
connect() success.
Also, quic_sock_get_src() is completed for the backend side, and now
returns the local_addr member. This step is necessary to properly support
the bc_src/bc_src_port fetches.
If a mux protocol is forced and incompatible ALPN or NPN settings are
used, connection errors may be experienced. There is no check performed
during HAProxy startup and it is not necessarily obvious. So a note is added
to warn users about this usage.
Since the commit 5003ac7fe ("MEDIUM: config: set useful ALPN defaults for
HTTPS and QUIC"), the ALPN is set by default to "h2,http/1.1" for HTTPS
listeners. However, it is in conflict with the forced mux protocol, if
any. Indeed, with the "proto" keyword, the mux can be forced. In that case,
some combinations with the default ALPN will trigger connection errors.
For instance, by setting "proto h2", it will not be possible to use the H1
multiplexer. So we must take care not to advertise it in the ALPN. Worse,
since the commit above, most modern HTTP clients will try to use H2
because it is advertised in the ALPN. Setting "proto h1" on the bind line
will make all the traffic be rejected in error.
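A hedged sketch of the problematic configuration described above (the
certificate path is made up):
    frontend fe
        mode http
        bind :443 ssl crt /etc/haproxy/site.pem proto h1
        # before this fix, the default "h2,http/1.1" ALPN was still
        # advertised here, so clients negotiating H2 failed against
        # the forced H1 multiplexer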
To fix the issue, and thanks to the previous commits, if it is defined, we
now rely on the ALPN defined by the mux protocol by default. The H1
multiplexer (the only one that can be forced) defines it as "http/1.1" while
the H2 multiplexer defines it as "h2". So by default, if one or the other of
these muxes is forced, and if no ALPN is set, the mux ALPN is used.
Other multiplexers do not define any default ALPN for now, because it is
useless. In addition, only the listeners are concerned because there is no
default ALPN on the server side. Finally, no test is performed if the
ALPN is forced on the bind line. It is the user's responsibility to properly
configure their listeners (at least for now).
This patch depends on:
* MINOR: config: Do proto detection for listeners before checks about ALPN
* MINOR: muxes: Support an optional ALPN string when defining mux protocols
The series must be backported as far as 2.8.
The verification of any forced mux protocol, via the "proto" keyword, for
listeners is now performed before any tests on the ALPN. It will be
mandatory to be able to force the default ALPN, if not forced on the bind
line.
This patch will be mandatory for the next fix.
When a multiplexer protocol is defined, it is now possible to specify the
ALPN it supports, in binary format. This info is optional. For now only
the h2 and h1 multiplexers define an ALPN because this will be mandatory
for a fix. But this could be used in the future for different purposes.
This patch will be mandatory for the next fix.
In assign_server_and_queue(), there's a rare case where the server was
full, so we created a pendconn; another server was considered but in the
meanwhile the pendconn was already unqueued, so we just left the
function. We did so, however, while still holding the queue lock, which
would ultimately lead to a deadlock, and the watchdog would kill the
process.
To fix that, just unlock the queue before leaving.
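A minimal sketch of the fix, assuming HAProxy's spinlock API; the
condition and field names are illustrative, not the exact code:

    /* the pendconn was unqueued in the meantime: release the queue
     * lock before leaving, otherwise the next taker deadlocks */
    if (pendconn_already_unqueued) {                 /* hypothetical test */
        HA_SPIN_UNLOCK(QUEUE_LOCK, &queue->lock);    /* was missing */
        return SRV_STATUS_OK;
    }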
This should be backported to 3.2.
When configuring an acme section with the 'map' keyword, the user must
use an existing map. If the map doesn't exist, a log is only emitted
when trying to add the challenge to the map.
This patch changes the behavior by checking at startup whether the map
exists, so that haproxy warns and refuses to start with a non-existing map.
This must be backported in 3.2.
Contrary to TCP, QUIC does not SSL_free() its SSL * object when its ->close()
XPRT callback is called. This has the side effect of triggering some
BUG_ON(!conn), with <conn> the connection from TLS callbacks registered at
configuration parsing time, after this <conn> has been released.
This is the case for instance with ssl_sock_srv_verifycbk() whose role is to
add some checks to the built-in server certificate verification process.
This patch prevents the <conn> pointer from being dereferenced inside several
callbacks shared between TCP and QUIC.
Thank you to @InputOutputZ for the report in GH #3188.
As the QUIC backend feature arrived with the current 3.3 dev, no need to backport.
As shown in github issue #3191, the error message shown when FD limits
are exceeded is not very useful as-is, since the current hard limit is
not displayed, and no suggestion is made about what to change in the
config. Let's explain about maxconn/ulimit-n/fd-hard-limit, suggest
dropping them or setting them to a context-based value at roughly 49%
of the current limit minus the known used FDs for listeners and checks.
This allows common "large" hard limits to report mostly round maxconns.
Example:
[ALERT] (25330) : [haproxy.main()] Cannot raise FD limit to 4001020,
current limit is 1024 and hard limit is 4096. You may prefer to let
HAProxy adjust the limit by itself; for this, please just drop any
'maxconn' and 'ulimit-n' from the global section, and possibly add
'fd-hard-limit' lower than this hard limit. You may also force a new
'maxconn' value that is a bit lower than half of the hard limit minus
listeners and checks. This results in roughly 1500 here.
It's always a pain to guess the number of FDs that can be needed by
listeners, checks, threads, pollers etc. We have this estimate in
global.maxsock before calling set_global_maxconn(), but we lose it
the line after. Let's copy it into global.est_fd_usage and keep it.
This will be helpful to try to provide more accurate suggestions for
maxconn.
The quic_conn ->xprt_ctx is passed to qc_send_ppkts(); the quic_conn is
retrieved from this context to be used inside the function, and the context
itself is not used at all otherwise.
This patch simply passes the quic_conn directly to qc_send_ppkts(), which is
all this function needs.
This patch completes the 0-rtt test to verify that early-data:1 is
properly emitted to the server in the relevant situations. We carefully
compare it with the expected values that are computed based on the TLS
version, the client and listener's support for 0-rtt and the resumption
status. A response header "x-early-data-test" is set to OK on success,
or KO on failure and the client tests this. The previous test is kept
as well. This was tested with quictls-1.1.1 and quictls-3.0.1 for TCP,
as well as aws-lc for QUIC.
These tests try all the combinations of {0,1}rtt <-> {0,1}rtt with
stateless and stateful tickets. They take into consideration the TLS
version to decide whether or not 0rtt should work. Since we cannot
use environment variables in the client, the tests are run in haproxy
itself where the frontends set a "x-early-rcvd-test" response header
that the client checks. At this stage, the test only verifies that
*some* early data were received.
Note that the tests are a bit complex because we need 4 listeners
for the various combinations of 0rtt/tickets, then we have to set
expectations based on the TLS version (1.2 vs 1.3), as well as the
session resumption status.
We have to set alpn on the server lines because currently our frontends
expect it for 0-rtt to work.
The dependency on the halog build provokes problems when changing CFLAGS and
LDFLAGS, because you're supposed to have the same flags during the build
and the install if there are still some things to build.
We probably need to store the flags somewhere to reuse them at another
step, but we need to do it cleanly. In the meantime it's better not to
have this dependency.
On the frontend side, QUIC transfers can be performed either via a
connection-owned FD or multiplexed on the listener one. When a quic_conn
is freed and converted to a quic_conn_closed instance, its FD, if open,
is closed and all exchanges are then multiplexed via the listener FD.
This is different for the backend as connections only have the choice to
use their owned FD. Thus, special care must be taken when freeing a
connection and converting it to a quic_conn_closed instance. In this
case, qc_release_fd() is delayed to the quic_conn_closed release.
Furthermore, when the FD is transferred, its iocb and owner fields are
updated to the new quic_conn_closed instance. Without this, a crash
would occur when accessing the freed quic_conn tasklet. A newly dedicated
handler quic_conn_closed_sock_fd_iocb is used to ensure access to
quic_conn_closed members only.
jobs is a global counter which serves to account activity through the
whole process. The soft-stop procedure will wait until this counter is
reset to zero.
jobs is not used for backend connections. Thus, as expected, it is not
incremented when a QUIC backend connection is instantiated. However, the
decrement was performed on both sides during quic_conn_release(). This
causes the counter to wrap.
Fix this by decrementing jobs only for frontend connections. Without
this patch, the soft-stop procedure would hang indefinitely if QUIC
backend connections were in use.
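A minimal sketch of the fix in quic_conn_release(), assuming that, as
elsewhere in the tree, objt_listener(qc->target) identifies a frontend
connection:

    /* "jobs" is only incremented for frontend connections, so only
     * decrement it for them to avoid wrapping the counter */
    if (objt_listener(qc->target))
        _HA_ATOMIC_DEC(&jobs);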
Properly implement support for the max-reuse server keyword. This is done
by adding a total count of streams seen over the whole connection. This
value is used in the avail_streams callback.
When haproxy is compiled with -O0, the SSL_get_max_early_data() symbol is
used in the generated assembly; however -O2 seems to remove this symbol
when optimizing the code.
It happens because `if conn_is_back(conn)` and `if
(objt_listener(conn->target))` are opposed conditions, which means we
never take the branch when objt_listener(conn->target) is true.
This patch removes the dead code. Bonus: SSL_get_max_early_data() is not
implemented in rustls, and that's the only thing preventing haproxy from
starting with it.
This can be backported to every stable branch.
make install now has a dependency on install-admin which has a
dependency on admin/halog/halog.
halog links haproxy .o together with its own objects, but those objects
when built with ASAN must also be linked with ASAN or it won't be
possible to link the binary.
We don't need an ASAN-ready halog, so let's just do an install-bin
instead that will just install haproxy.
QUIC and TLS don't use the same tests because QUIC only supports
TLS 1.3 while SSL tests both TLS 1.2 and 1.3, which complicates
the test scenarios.
This change extracts the core of the test into a single generic
ssl_reuse.vtci file and creates new high-level tests for TLSv1.2
over TCP, TLSv1.3 over TCP and TLSv1.3 over QUIC, which simply
include this file and set two variables. The test is now cleaner
and simpler.
When trying to use the P-256 curve in the acme configuration with
OpenSSL 3.x, the generation of the account was failing because OpenSSL
doesn't return a NIST or SECG curve name, but an ANSI X9.62 one.
Since the ANSI X9.62 curve names were not in the list, it couldn't match
anything supported.
This patch fixes the issue by adding both the prime192v1 and prime256v1
names to the struct curve array which is used during curve parsing.
Must be backported to 3.2.
Since the new master-worker model in 3.1, signals are registered in
step_init_3(). However, those signals were supposed to be registered
only for the worker or the standalone mode. The wrong callback could
be called in the master, even during configuration parsing.
The patch sets the signal handlers to NULL for the master so that
nothing is done until they really are registered.
Must be backported as far as 3.1.
Since haproxy 3.1, the master-worker mode changed to let the worker
parse the configuration instead of the master.
Previously, signals were blocked during configuration parsing and
unblocked before entering the polling loop of the master. This way it
was impossible to start a reload during the configuration parsing.
But with the new model, the polling loop is started in the master before
the configuration parsing is finished, and the signals are still
unblocked at this step, meaning that it is possible to start a reload
while the configuration is being parsed.
This patch reintroduces the behavior of blocking the signals during
configuration parsing, adapted to the new model (see the sketch after
this list):
- Before the exec() of the reload, signals are blocked.
- When entering the polling loop, SIGCHLD is unblocked because it is
required to catch a failure during configuration parsing in the worker.
- Once the configuration is parsed, upon success in _send_status() or
upon failure in run_master_in_recovery_mode(), all signals are unblocked.
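A minimal illustration of the principle with plain POSIX calls; this is
not HAProxy's exact code:

    sigset_t all, chld;

    sigfillset(&all);
    sigprocmask(SIG_BLOCK, &all, NULL);      /* before the reload exec() */

    sigemptyset(&chld);
    sigaddset(&chld, SIGCHLD);
    sigprocmask(SIG_UNBLOCK, &chld, NULL);   /* entering the polling loop */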
This patch must be backported as far as 3.1.
Move the char *msg variable declared in main() into a sub-block since
there are already multiple msg variables in other sub-blocks in this
function.
Also make it const.
The QUIC backend crashes when its peer does not support 0-RTT. In this case,
when the sessions are reused, no early-data level secrets are derived by
the TLS stack. This leads to crashes from qc_send_mux() which does not expect
that both the early-data level (qc->eel) and application level (qc->ael)
encryption levels could be uninitialized.
To fix this:
- prevent qc_send_mux() from sending data if these two encryption levels are
not initialized. In this case it returns QUIC_TX_ERR_NONE;
- avoid waking up the MUX from the XPRT ->start() callback if the MUX is ready
but there are no early-data level secrets to send data with;
- ensure the MUX is woken up by qc_ssl_do_handshake() after handshake
completion if it is ready, by calling qc_notify_send().
Thank you to @InputOutputZ for having reported this issue in GH #3188.
No need to backport because QUIC backend support is a current 3.3 development
feature.
There was no mworker-max-reload value by default; it was set to INT_MAX
so it was impossible to reach.
The default value is now 50, which is still high, but no worker should
undergo that many reloads. This means a worker will be killed with
SIGTERM if it reaches this many reloads.
Remove <ipv4> argument from qc_new_conn(). This parameter is unnecessary
as it can be derived from the family type of the addresses also passed
as argument.
The objective of this patch is to streamline qc_new_conn() usage so that
it is similar for frontend and backend sides.
Previously, several parameters were set only for frontend connections.
These arguments are replaced by a single quic_rx_packet argument, which
represents the INITIAL packet triggering the connection allocation on
the server side. For a QUIC client endpoint, it remains NULL. This usage
is considered more explicit.
As a minor change, <target> is moved as the first argument of the
function. This is considered useful as this argument determines whether
the connection is a frontend or backend entry.
Along with these changes, qc_new_conn() documentation has been reworded
so that it is now up-to-date with the newest usage.
quic_conn has two fields named <dcid> and <scid>. It may cause confusion
as it is not obvious how these fields are related to the connection
direction. Try to improve this by extending the documentation of these
two fields.
When a new backend connection is instantiated, a CID is first randomly
generated. It will serve as the first DCID for incoming packets from the
server. Prior to this patch, if the generated CID collided with an entry
from another connection, an error was reported and the connection could
not be allocated.
This patch improves this procedure by implementing retries when a
collision occurs. Now, at most three attempts will be performed before
giving up. This is the same procedure already performed for CIDs
instantiated after RETIRE_CONNECTION_ID frame parsing.
Along with this functional change, qc_new_conn() is refactored for
backend instantiation. The CID generation is extracted from it and the
value is passed as an argument. This is considered cleaner as the code
is more similar between frontend and backend sides.
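An illustrative sketch of the retry logic; the helper names are
assumptions, only the "three attempts then give up" behavior mirrors
the patch:

    struct quic_connection_id *conn_id = NULL;
    int retry;

    for (retry = 0; retry < 3; retry++) {
        conn_id = quic_cid_random_generate();     /* hypothetical helper */
        if (quic_cid_insert(conn_id, NULL) == 0)
            break;                  /* no collision, the CID is usable */
        quic_cid_release(conn_id);  /* collision: retry with a new value */
        conn_id = NULL;
    }
    if (!conn_id)
        goto err;                   /* give up after three collisions */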
quic_newcid_from_hash64 is an external callback. If defined, it serves
as a CID generation method, as an alternative to the default random
implementation.
This mechanism was not correctly implemented on the backend side.
Indeed, the <hash64> quic_conn member is only set for frontend
connections. The simplest solution would be to properly define it also
for backend ones. However, quic_newcid_from_hash64 derivation is really
only useful for the frontend side for now. Thus, this patch disables
using it on the backend side in favor of the default random generator.
To implement this, quic_cid_generate() is split into two functions, one
for each CID generation method. It is the responsibility of the caller
to select the proper method. On the backend side, only the random
implementation is now used.
The shard keyword is already used by the peers and on the server lines. And
it is unrelated to the session key distribution. So instead of talking
about shards for the session key hashing, we now use the term "bucket".
Released version 3.3-dev13 with the following main changes :
- BUG/MEDIUM: config: for word expansion, empty or non-existing are the same
- BUG/MINOR: quic: close connection on CID alloc failure
- MINOR: quic: adjust CID conn tree alloc in qc_new_conn()
- MINOR: quic: split CID alloc/generation function
- BUG/MEDIUM: quic: handle collision on CID generation
- MINOR: quic: extend traces on CID allocation
- MEDIUM/OPTIM: quic: alloc quic_conn after CID collision check
- MINOR: stats-proxy: ensure future-proof FN_AGE manipulation in me_generate_field()
- BUG/MEDIUM: stats-file: fix shm-stats-file preload not working anymore
- BUG/MINOR: do not account backend connections into maxconn
- BUG/MEDIUM: init: 'devnullfd' not properly closed for master
- BUG/MINOR: acme: more explicit error when BIO_new_file()
- BUG/MEDIUM: quic-be: do not launch the connection migration process
- MINOR: quic-be: Parse the NEW_TOKEN frame
- MEDIUM: quic-be: Parse, store and reuse tokens provided by NEW_TOKEN
- MINOR: quic-be: helper functions to save/restore transport params (0-RTT)
- MINOR: quic-be: helper quic_reuse_srv_params() function to reuse server params (0-RTT)
- MINOR: quic-be: Save the backend 0-RTT parameters
- MEDIUM: quic-be: modify ssl_sock_srv_try_reuse_sess() to reuse backend sessions (0-RTT)
- MINOR: quic-be: allow the preparation of 0-RTT packets
- MINOR: quic-be: Send post handshake frames from list of frames (0-RTT)
- MEDIUM: quic-be: qc_send_mux() adaptation for 0-RTT
- MINOR: quic-be: discard the 0-RTT keys
- MEDIUM: quic-be: enable the use of 0-RTT
- MINOR: quic-be: validate the 0-RTT transport parameters
- MINOR: quic-be: do not create the mux after handshake completion (for 0-RTT)
- MINOR: quic-be: avoid a useless I/O callback wakeup for 0-RTT sessions
- BUG/MEDIUM: acme: move from mt_list to a rwlock + ebmbtree
- BUG/MINOR: acme: can't override the default resolver
- MINOR: ssl/sample: expose ssl_*c_curve for AWS-LC
- MINOR: check: delay MUX init when SSL ALPN is used
- MINOR: cfgdiag: adjust diag on servers
- BUG/MINOR: check: only try connection reuse for http-check rulesets
- BUG/MINOR: check: fix reuse-pool if MUX inherited from server
- MINOR: check: clarify check-reuse-pool interaction with reuse policy
- DOC: configuration: add missing ssllib_name_startswith()
- DOC: configuration: add missing openssl_version predicates
- MINOR: cfgcond: add "awslc_api_atleast" and "awslc_api_before"
- REGTESTS: ssl: activate ssl_curve_name.vtc for AWS-LC
- BUILD: ech: fix clang warnings
- BUG/MEDIUM: stick-tables: Always return the good stksess from stktable_set_entry
- BUG/MINOR: stick-tables: Fix return value for __stksess_kill()
- CLEANUP: stick-tables: Don't needlessly compute shard number in stksess_free()
- MINOR: h1: h1_release() should return if it destroyed the connection
- BUG/MEDIUM: h1: prevent a crash on HTTP/2 upgrade
- MINOR: check: use auto SNI for QUIC checks
- MINOR: check: ensure QUIC checks configuration coherency
- CLEANUP: peers: remove an unneeded null check
- Revert "BUG/MEDIUM: connections: permit to permanently remove an idle conn"
- BUG/MEDIUM: connection: do not reinsert a purgeable conn in idle list
- DEBUG: extend DEBUG_STRESS to ease testing and turn on extra checks
- DEBUG: add BUG_ON_STRESS(): a BUG_ON() implemented only when DEBUG_STRESS > 0
- DEBUG: servers: add a few checks for stress-testing idle conns
- BUG/MINOR: check: fix QUIC check test when QUIC disabled
- BUG/MINOR: quic-be: missing version negotiation
- CLEANUP: quic: Missing succesful SSL handshake backend trace (OpenSSL 3.5)
- BUG/MINOR: quic-be: backend SSL session reuse fix (OpenSSL 3.5)
- REGTEST: quic: quic/ssl_reuse.vtc supports OpenSSL 3.5 QUIC API
This script is supported by the OpenSSL 3.5 QUIC API since this previous commit:
BUG/MINOR: quic: backend SSL session reuse fix (HAVE_OPENSSL_QUIC)
Should be backported where this commit is backported.
This bug impacts only the QUIC backends when haproxy is compiled against
OpenSSL 3.5 with the QUIC API (HAVE_OPENSSL_QUIC).
The QUIC clients could not reuse their SSL session because the TLS tickets
received from the servers could not be provided to the TLS stack. This should
be done when the stack calls ha_quic_ossl_crypto_recv_rcd()
(OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_RECV_RCD callback).
According to the OpenSSL team, an SSL_read() call must be done after the
handshake completion. It seems the correct location is at the same level as
SSL_process_quic_post_handshake() for quictls.
Thank you to @mattcaswell, @Sashan and @vdukhovni for having helped in solving
this issue.
Must be backported to 3.1.
This very minor issue impacts only the backend when compiled against OpenSSL
3.5 with the QUIC API (HAVE_OPENSSL_QUIC).
The "SSL handshake OK" trace was not dumped by a TRACE() call. This was very
annoying when debugging.
Modify the concerned code section, which was a bit ugly, and simplify it.
The TRACE() call is now done at a unique location.
Should be backported to 3.2 to ease any further backport.
This bug impacts only the QUIC clients (or backends). Version negotiation
was not supported at all for them. This is an oversight.
Contrary to the QUIC server, which chooses the negotiated version after having
received the transport parameters (in the ClientHello message), the client
selects the negotiated version from the first Initial packet version field.
Indeed, the server transport parameters are inside the ServerHello messages,
ciphered in Handshake packets.
This non-intrusive patch does not impact the QUIC server implementation.
It only selects the negotiated version from the first Initial packet
received from the server and consequently initializes the TLS cipher context.
Thank you to @InputOutputZ for having reported this issue in GH #3178.
No need to backport because the QUIC backends support arrives with 3.3.
Latest commit ef206d441c ("MINOR: check: ensure QUIC checks configuration
coherency") introduced a regression when QUIC is not compiled in. Indeed,
not specifying a check proto sets mux_proto to NULL, which also happens to
be the value of get_mux_proto("QUIC"), so it complains about QUIC. Let's
add a non-null check in addition to this.
No backport is needed.
The latest idle conns fix 9481cef948 ("BUG/MEDIUM: connection: do not
reinsert a purgeable conn in idle list") addresses a very hard-to-hit
case which manifests itself when an attempt to reuse a connection fails
because conn->mux is NULL:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000655410b8642c in conn_backend_get (reuse_mode=4, srv=srv@entry=0x6554378a7140,
sess=sess@entry=0x7cfe140948a0, is_safe=is_safe@entry=0,
hash=hash@entry=910818338996668161) at src/backend.c:1390
1390 if (conn->mux->takeover && conn->mux->takeover(conn, i, 0) == 0) {
However the condition that leads to this situation can be detected
earlier, by the presence of the connection in the toremove_list, whose
race window is much larger and easier to detect.
This patch adds a few BUG_ON_STRESS() at selected places that can detect
this condition. When built with -DDEBUG_STRESS and run under stress with
two distinct processes communicating over H2 over SSL, at 400-500k req/s,
the front process usually crashes in the first 10-30s, triggering in
_srv_add_idle() if the fix above is reverted (and it does not crash with
the fix).
This is mainly included to serve as an illustration of how to instrument
the code for seamless stress testing.
The purpose of this new BUG_ON goes beyond BUG_ON_HOT(). While BUG_ON_HOT()
is meant to be light but placed on very hot code paths, BUG_ON_STRESS()
might be heavy and only used under stress-testing, to try to detect early
that something bad is starting to happen. This one is not even type-checked
when not defined because we don't want to risk the compiler emitting the
slightest piece of code there in production mode, so as to give enough
freedom to the developers.
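A minimal sketch of how such a macro can be wired on top of the existing
BUG_ON(); not necessarily HAProxy's exact definition:

    #if defined(DEBUG_STRESS) && (DEBUG_STRESS > 0)
    # define BUG_ON_STRESS(cond)  BUG_ON(cond)
    #else
    /* expands to nothing: the condition is neither type-checked nor
     * evaluated, so no code can be emitted in production builds */
    # define BUG_ON_STRESS(cond)  do { } while (0)
    #endif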
DEBUG_STRESS is currently used only to expose "stress-level". With this
patch, we go a bit further, by automatically forcing DEBUG_STRICT and
DEBUG_STRICT_ACTION to their highest values in order to enable all
BUG_ON levels, and make all of them result in a crash. In addition,
care is taken to always only have 0 or 1 in the macro, so that it can be
tested using "#if DEBUG_STRESS > 0" as well as "if (DEBUG_STRESS) { }"
everywhere.
The goal will be to ease insertion of extra tests for builds dedicated
to stress-testing that enable possibly expensive extra checks on certain
code paths that cannot reasonably be compiled in for production code
right now.
A recent patch was introduced to fix a rare race condition in the idle
connection code which would result in a crash. The issue occurs when a
MUX IO handler runs on top of a connection moved into the purgeable
list. The connection would be considered as present in the idle list
instead, and reinserted into it at the end of the handler while still in
the purge list.
096999ee208b8ae306983bc3fd677517d05948d2
BUG/MEDIUM: connections: permit to permanently remove an idle conn
This patch solves the described issue. However, it introduces another
bug as it may clear connection flags when removing a connection from its
parent list. These flags now serve primarily as a status which indicates
that the connection is accounted by the server. When a backend
connection is freed, server idle/used counters are decremented
according to these flags. With the above patch, an incorrect counter
could be adjusted and thus wrapping would occur.
The first impact of this bug is that it may distort the estimated number
of connections needed by servers, which would result either in a poor
reuse rate or too many idle connections kept. Another noticeable impact
is that it may prevent server deletion.
The main problem of the original and current issues is that connection
flags are misinterpreted as telling whether a connection is present in
the idle list. As already described here, in fact these flags are solely
a status which indicates that the connection is accounted in server
counters. Thus, here are the definitive conclusions that can be drawn
here:
* (conn->flags & CO_FL_LIST_MASK) == 1:
  the connection is accounted by the server;
  it may or may not be present in the idle list
* (conn->flags & CO_FL_LIST_MASK) == 0:
  the connection is not accounted and not present in the idle list
The discussion above does not mention the session list, but a similar
pattern can be observed when the CO_FL_SESS_IDLE flag is set.
To keep the original issue solved and fix the current one, the IO MUX
handler prologues are rewritten. Now, flags are no longer checked for
list membership and the LIST_INLIST macro is used instead. This makes
the purpose of conn_in_list much clearer.
At the end of the IO MUX handlers, conn idle flags may be checked if
conn_in_list was true, to reinsert the connection either in the idle or
safe list. This is considered safe as no function should modify idle
flags when a connection is not stored in a list, except during the
conn_free() operation.
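A sketch of the reworked prologue, assuming the <idle_list> connection
member mentioned below; the surrounding locking is omitted:

    /* rely on actual list membership rather than CO_FL_LIST_MASK flags
     * to decide whether the conn must be removed then reinserted */
    conn_in_list = conn && LIST_INLIST(&conn->idle_list);
    if (conn_in_list)
        conn_delete_from_tree(conn);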
This patch must be backported to every stable version after the revert of
the above commit. It should be applicable up to 3.0 without any issue. On
2.8 and below, the <idle_list> connection member does not exist. It should
be safe to check the <leaf_p> tree node as a replacement.
The target patch fixes a rare race condition which happens when a MUX IO
handler is working on a connection already moved into the purge list. In
this case, the handler would incorrectly move the connection back into
the idle list.
To fix this, conn_delete_from_tree() was extended to remove flags along
with the connection from the idle list. This was performed when the
connection is moved into the purge list. However, it introduced another
issue related to idle server connection accounting. Thus it is
necessary to revert it prior to the incoming newer fix.
This patch must be backported to every version where the original commit
is.
Coverity reported in GH #3181 that a NULL test was useless, in
peers_trace(), which is true since the peer always belongs to a
peers section and it was already dereferenced. Let's just remove
the test to avoid the confusion.
QUIC is now supported on the backend side, thus it is possible to use it
with server checks. However, the check configuration can be quite
extensive, differing greatly from the server settings.
This patch ensures that QUIC checks are always performed under a
controlled context. The objectives are to avoid any crashes and ensure
that there is no surprise for users with respect to the configuration.
The first part of this patch ensures that QUIC checks can only be
activated on QUIC servers. Indeed, QUIC requires dedicated
initialization steps prior to its usage.
The other part of this patch disables QUIC usage when one or multiple
specific check connection settings are specified in the configuration,
diverging from the server settings. This is the simplest solution for
now and ensures that there is no hidden behavior for users. This means
that it's currently impossible to perform QUIC checks on endpoints other
than the server itself. However, for now there is no real use-case for
this scenario.
Along with these changes, check-proto documentation is updated to
clarify QUIC checks behavior.
By default, the check SNI is set to the Host header when an HTTPS check
is performed. This patch extends this mode so that it is also active
when QUIC checks are executed.
This patch should improve the reuse rate with checks. Indeed, the SNI is
also already automatically set for normal traffic. The same value must
be used during checks so that a connection hash match can be found.
Change h1_process() to return -2 when the mux is destroyed but the
connection is not, so that we can differentiate between "both mux and
connection were destroyed" and "only the mux was destroyed".
It can happen that only the mux gets destroyed, and the connection is
still alive, if we did upgrade it to HTTP/2.
In h1_wake(), if the connection is alive, then return 0, as the wake
methods should only return -1 if the connection is dead.
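A simplified sketch of the end of h1_wake() under this convention
(variable declarations omitted):

    ret = h1_process(h1c);
    if (ret == -2)
        return 0;   /* mux destroyed but connection still alive
                     * (HTTP/2 upgrade): do not report it as dead */
    return ret;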
This fixes a bug where the ssl xprt would consider the connection
destroyed, and thus would consider its tasklet should die, and return
NULL, and its TASK_RUNNING flag would never be removed, leading to an
infinite loop later on. This would happen anytime an HTTP/2 upgrade was
successful.
This should be backported up to 2.8. While the bug was revealed by commit
00f43b7c8b136515653bcb2fc014b0832ec32d61, it was only by chance that it
was not triggered before, and it exists in previous releases too.
h1_release() is called to destroy everything related to the h1 mux,
usually even the connection. However, it handles upgrades to HTTP/2 too,
in which case the h1 mux will be destroyed, but the connection will
still be alive. So make it return 0 if everything is destroyed, and -1
if the connection is still alive.
This should be backported up to 2.8, as a future bugfix will depend on
it.
Since commit 0bda33a3e ("MINOR: stick-tables: remove the uneeded read lock
in stksess_free()"), the lock on the shard is no longer acquired. So it is
useless to still compute the shard number. The result is never used and can
be safely removed.
The commit 9938fb9c7 ("BUG/MEDIUM: stick-tables: Fix race with peers when
killing a sticky session") introduced a regression.
__stksess_kill() must always return 0 if the session cannot be released. But
when the ref_cnt is tested under the update lock, a success is reported if
the session is still in use. 0 must be returned in that case.
This bug is harmless because callers never use the return value of
__stksess_kill() or stksess_kill().
This bug must be backported as far as 3.0.
In stktable_set_entry(), the return value of __stktable_store() is not
tested while it is possible to get an existing session with the same key
instead of the one we want to insert. It happens when we fail to upgrade
the read lock on the bucket to a write lock. In that case, we release the
lock for a short time to get a write lock.
So, to fix the bug, we must check the session returned by __stktable_store()
and take care to return this one.
The bug was introduced by the commit e62885237c ("MEDIUM: stick-table: make
stktable_set_entry() look up under a read lock"). It must be backported as
far as 2.8.
No impact as the state is either SHOW_ECH_SPECIFIC or SHOW_ECH_ALL but
never anything else.
src/ech.c:240:6: error: variable 'p' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
240 | if (ctx->state == SHOW_ECH_ALL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
src/ech.c:275:12: note: uninitialized use occurs here
275 | ctx->pp = p;
| ^
src/ech.c:240:2: note: remove the 'if' if its condition is always true
240 | if (ctx->state == SHOW_ECH_ALL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ech.c:228:17: note: initialize the variable 'p' to silence this warning
228 | struct proxy *p;
| ^
| = NULL
src/ech.c:240:6: error: variable 'bind_conf' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
240 | if (ctx->state == SHOW_ECH_ALL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
src/ech.c:276:11: note: uninitialized use occurs here
276 | ctx->b = bind_conf;
| ^~~~~~~~~
src/ech.c:240:2: note: remove the 'if' if its condition is always true
240 | if (ctx->state == SHOW_ECH_ALL) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ech.c:229:29: note: initialize the variable 'bind_conf' to silence this warning
229 | struct bind_conf *bind_conf;
| ^
| = NULL
2 errors generated.
make: *** [Makefile:1062: src/ech.o] Error 1
AWS-LC features are not easily tested with just the openssl version
constant. AWS-LC uses its own API versioning stored in the
AWSLC_API_VERSION constant.
This patch adds the two awslc_api_atleast and awslc_api_before predicates
that help check the AWS-LC API.
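Hypothetical usage in a configuration; the exact argument form of the
predicates is an assumption:

    .if awslc_api_atleast(35)
        # configuration parts requiring a recent AWS-LC API
    .endif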
Add missing openssl_version_atleast() and openssl_version_before()
predicates.
The predicates exist since 3aeb3f9347 ("MINOR: cfgcond: implements
openssl_version_atleast and openssl_version_before").
Must be backported to every stable version.
Add the missing ssllib_name_startswith() predicate in the documentation.
The predicate was introduced with b01179aa9 ("MINOR: ssl: Add
ssllib_name_startswith precondition").
Must be backported as far as 2.6.
check-reuse-pool can only perform as expected if the reuse policy on the
backend is set to aggressive or higher. Update the documentation to
reflect this and implement a server diag warning.
Check reuse is only performed if no specific check connect options are
specified in the configuration. This ensures that reuse won't be
performed when intending to use connection parameters different from the
default traffic.
This relies on tcpcheck_use_nondefault_connect() which indicates if the
check has any specific connection parameters. One of them is the check
<mux_proto> field. However, this field may be automatically set during
init_srv_check() in some specific conditions without any explicit
configuration, most notably when using http-check rulesets on an HTTP
backend. Thus, it prevents connection reuse for these checks.
This commit fixes this by adjusting tcpcheck_use_nondefault_connect().
Besides checking the check <mux_proto> field, it also detects whether it
differs from the server configuration. This is sufficient to know if
the value is derived from the configuration or automatically calculated
in init_srv_check().
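A sketch of the adjusted test, per the description above; the field
names are approximations:

    /* a mux_proto identical to the server's one was derived from the
     * configuration by init_srv_check(), not explicitly configured */
    if (check->mux_proto && check->mux_proto != srv->mux_proto)
        return 1;   /* explicitly different protocol: no reuse */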
Note that this patch introduces a small behavior change. Prior to it,
check reuse was never performed if "check-proto" was explicitly
configured. Now, check reuse will be performed if the configured value
is identical to the server MUX protocol. This is considered acceptable
as connection reuse is safe when using a similar MUX protocol.
This must be backported up to 3.2.
In 3.2, a new server keyword "check-reuse-pool" was introduced. It
allows a connection to be reused for a new check, instead of always
initializing a new one. This is only performed if the check does not
rely on specific connection parameters differing from the server.
This patch further restricts reuse so that it only applies to checks
when an HTTP ruleset is used at the backend level. Indeed, reusing a
connection outside of HTTP is an undefined behavior. The impact of this
bug is unknown and depends on the proxy/server configuration. In the
case of an HTTP backend with non-HTTP checks, check-reuse-pool would
probably cause a drop in the reuse rate.
Along with this change, implement a new diagnostic warning on servers to
report that check-reuse-pool cannot apply due to an incompatible check
type.
This must be backported up to 3.2.
Adjust the code dealing with diagnostics performed on servers. The
objective is to extract the check on duplicate cookies into a dedicated
function outside of the proxies/servers loop.
This does not have any noticeable impact. This patch is merely a code
improvement to ease the implementation of future diagnostics on servers.
When instantiating a new connection for a check, its MUX may be
initialized early. This was not performed though if SSL ALPN negotiation
is to be used, except if the check MUX is already fixed.
However, this method of initialization is problematic when the QUIC MUX
is used. Indeed, this multiplexer must only be instantiated after the
application protocol is known, which is derived from the ALPN
negotiation. If this is not the case, a crash will occur in qmux_init().
In fact, a similar problem was already encountered for normal traffic.
Thus, a change was performed in connect_server(): MUX early
initialization is now always skipped if SSL ALPN negotiation is active,
even if the MUX is already fixed. This patch introduces a similar change
for checks.
Without this patch, it is not possible to perform checks on QUIC servers
as expected. Indeed, when an http-check ruleset is active, a crash would
occur prior to it.
The underlying SSL_get_negotiated_group function has been backported
into AWS-LC [1], so expose the feature for users of this TLS stack
as well. Note that even though it was actually added in AWS-LC 1.56.0,
we require AWSLC_API_VERSION >= 35 which was released in AWS-LC 1.57.0,
because API version wasn't incremented after this change. As the delta
is one minor version (less than two weeks), I consider this acceptable
to avoid relying on a proxy constant like TLSEXT_nid_unknown which
might be removed at some point.
[1] d6a37244ad
httpclient_acme_init() was called in cfg_parse_acme(), which runs during
section parsing. httpclient_acme_init() also calls
httpclient_create_proxy() which could create a "default" resolvers
section if it doesn't exist.
If one tries to override the default resolvers section after an ACME
section, the resolvers section parsing will fail because the section was
already created by httpclient_create_proxy().
This patch fixes the issue by moving the initialization of the ACME
proxy to a pre_check callback, which is called just before
check_config_validity().
Must be backported in 3.2.
The current ACME scheduler suffers from problems due to the way the
tasks are stored:
- MT_LIST is not scalable when there are a lot of ACME tasks and a
  specific one must be looked up.
- the acme_task pointer was stored in the ckch_store in order to avoid
  walking through the whole list. But a ckch_store can be updated and
  the pointer lost in the previous one.
- when a task fails, the pointer in the ckch_store was not removed
  because we only work with a copy of the original ckch_store; it would
  require locking the ckchs_tree and removing this pointer.
This patch fixes the issues by removing the MT_LIST-based architecture,
and replacing it with a simple ebmbtree + rwlock design.
The pointer to the task is no longer stored in the ckch_store; instead
it is stored in the acme_tasks tree. Finding a task is done by
doing a lookup on this tree under a read lock.
Instead of checking if store->acme_task is not NULL, a lookup is also
done.
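A sketch of such a lookup under the read lock; the tree and lock names
are approximations of the design described above:

    struct ebmb_node *node;

    HA_RWLOCK_RDLOCK(OTHER_LOCK, &acme_tasks_lock);
    node = ebmb_lookup(&acme_tasks, store->path, strlen(store->path));
    HA_RWLOCK_RDUNLOCK(OTHER_LOCK, &acme_tasks_lock);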
This allows removing the stuck "acme_task" pointer from the store, which
was preventing an acme task from being restarted when the previous one
failed for this specific certificate.
Must be backported in 3.2.
For backends and 0-RTT sessions, this patch modifies the ->start() callback to
wake up the I/O callback only if the connection (and the mux) is not ready.
Note that connect_server() has been modified to call this xprt callback just
after having created and installed the mux. Contrary to 1-RTT sessions, for
0-RTT sessions the connections are always ready before calling this ->start
xprt callback.
This is required during connections with 0-RTT support, to prevent two mux
creations. Indeed, for 0-RTT sessions, the QUIC mux is already started very
early from connect_server() (src/backend.c).
During 0-RTT sessions, some server transport parameters are reused after
having been saved from previous sessions. These parameters must not be reduced
when the server resends them. The client must check this is the case when some
early data are accepted by the server. This is what is implemented by this
patch.
Implement qc_early_transport_params_validate() which checks that the new
server parameters are not reduced.
Also implement qc_ssl_early_data_accepted() which was not implemented for TLS
stacks without 0-RTT support (for instance wolfssl). That said, this function
was no longer used, which is why the compilation against wolfssl did not fail.
This patch allows the use of the 0-RTT feature on QUIC server lines with the
"allow-0rtt" option. In fact 0-RTT is really enabled only if
ssl_sock_srv_try_reuse_sess() successfully manages to reuse the SSL session
and the chosen application protocol from previous connections.
Note that, at this time, 0-RTT works only with quictls and aws-lc as TLS
stacks (0-RTT does not work at all, even for QUIC frontends, with libressl).
When entering this function, a selection is made of the encryption level
to be used to send data. For a client, the early-data encryption level
is used to send 0-RTT if this encryption level is initialized.
The Initial encryption level is also registered in the send list for clients
if there is Initial crypto data to send. This allows Initial and 0-RTT packets
to be coalesced into datagrams.
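An illustrative sketch of the selection, using the <eel>/<ael> fields
named elsewhere in this series; not the exact code:

    /* on the client side, send 0-RTT with the early-data keys when
     * they are available, else use the application level */
    if (!qc_is_listener(qc) && qc->eel && quic_tls_has_tx_sec(qc->eel))
        qel = qc->eel;
    else
        qel = qc->ael;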
This patch is required to make 0-RTT work. It modifies the prototype of
quic_build_post_handshake_frames() to send post-handshake frames from a
list of frames in place of the application encryption level (the
<qc->ael> local variable).
This patch does not modify the current QUIC stack behavior at all (even for
QUIC frontends). It must be considered as a preparation for the upcoming
code about 0-RTT support for QUIC backends.
A QUIC server never sends 0-RTT packets, contrary to the client.
This very simple modification allows the preparation of 0-RTT packets
with early data as the encryption level (->eel).
This function is called for both TCP and QUIC connections to reuse SSL
sessions saved by the ssl_sess_new_srv_cb() callback, called upon new SSL
session creation.
In addition to this, a QUIC SSL session must reuse the ALPN and some specific
QUIC transport parameters. This is what is added by this patch for QUIC 0-RTT
sessions.
Note that from now on, ssl_sock_srv_try_reuse_sess() may fail for QUIC
connections if it did not manage to reuse the ALPN. The caller must be
informed of such an issue and must not enable 0-RTT for the current session
in this case. This is impossible without the ALPN, which is required to start
a mux.
ssl_sock_srv_try_reuse_sess() is modified to always succeed for TCP
connections.
For both TCP and QUIC connections, it is the ssl_sess_new_srv_cb() callback
that is called when a new SSL session is created. Its role is to save the
session to be reused for the next sessions.
This patch modifies this callback to save the QUIC parameters to be reused
for the next 0-RTT sessions (or during SSL session resumption).
The already existing path_params->nego_alpn member is used to store the ALPN,
as is done for TCP, alongside the new path_params->tps
quic_early_transport_params struct used to save the QUIC transport parameters
to be reused for 0-RTT sessions.
Implement quic_reuse_srv_params() whose role is to reuse the ALPN negotiated
during a first connection to a QUIC backend alongside its transport
parameters.
Define the new quic_early_transport_params struct for QUIC transport
parameters related to 0-RTT. These parameters must be saved during a first
session to be reused for subsequent 0-RTT sessions.
qc_early_transport_params_cpy() copies the 0-RTT transport parameters to be
saved during a first connection to a backend. The copy is made from
a quic_transport_params struct to a quic_early_transport_params struct.
Conversely, qc_early_transport_params_reuse() copies the transport parameters
to be reused for a 0-RTT session from a previous one. The copy is made
from a quic_early_transport_params struct to a quic_transport_params struct.
Also add the QUIC_EV_EARLY_TRANSP_PARAMS trace event to dump such 0-RTT
transport parameters in traces.
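A hypothetical shape for this struct: the subset of the server's
transport parameters a client must remember for 0-RTT (RFC 9000
section 7.4.1 lists the relevant ones); not the actual definition:

    struct quic_early_transport_params {
        uint64_t initial_max_data;
        uint64_t initial_max_stream_data_bidi_local;
        uint64_t initial_max_stream_data_bidi_remote;
        uint64_t initial_max_stream_data_uni;
        uint64_t initial_max_streams_bidi;
        uint64_t initial_max_streams_uni;
        uint64_t active_connection_id_limit;
    };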
Add a per-thread ist struct to the srv_per_thread struct to store the QUIC
token to be reused for subsequent sessions.
Parse these tokens at packet level (from qc_parse_pkt_frms()) and store
them by calling the newly implemented qc_try_store_new_token() function. It
is this new function which does its best (it may fail) to update the tokens.
Modify qc_do_build_pkt() to resend these tokens by calling quic_enc_token()
implemented by this patch.
Rename the ->data qf_new_token struct field to ->w_data to distinguish it
from the new ->r_data field used to parse the NEW_TOKEN frame. Indeed, to
build the NEW_TOKEN frame we need to write it to a static buffer inside the
frame struct. To parse it we only need to store the address of the token
field in the RX buffer.
At this time, connection migration is not supported by QUIC backends.
This patch prevents this process from being launched for connections to QUIC
backends.
Furthermore, the connection migration process could be started systematically
when connecting a backend to INADDR_ANY, leading to crashes in
qc_handle_conn_migration() (when referencing qc->li).
Thank you to @InputOutputZ for having reported this issue in GH #3178.
This patch simply checks the connection type (listener or not) before
checking whether a connection migration must be started.
No need to backport because support for QUIC backends is available from 3.3.
Replace the error message of BIO_new_file() when the account-key cannot
be created on disk by "acme: cannot create the file '%s'". It was
previously "acme: out of memory.", which was unclear.
Must be backported to 3.2.
Since commit "1ec59d3 MINOR: init: Make devnullfd global and create it
earlier in init" the devnullfd pointing towards /dev/null gets created
early in the init process but it was closed after the call to
"mworker_run_master". The master process never got to the FD closing
code and we had an FD leak.
This patch does not need to be backported.
Remove QUIC backend connections from global actconn accounting. Indeed,
this counter is only used on the frontend side. This is required to
ensure maxconn coherence.
Due to recent commit 5c299dee ("MEDIUM: stats: consider that shared stats
pointers may be NULL"), shm-stats-file preloading suddenly stopped working.
In fact, preloading should be considered an initializing step, so the
counters may be assigned there without checking for NULL first.
Indeed, they are supposed to be NULL because preloading occurs before
counters_{fe,be}_shared_prepare(), which takes care of setting the pointers
for counters if they weren't set before.
Obviously this corner case was overlooked when 5c299dee was written and
tested. Thanks to Nick Ramirez for having reported the issue.
No backport needed, this issue is specific to 3.3.
Commit ad1bdc33 ("BUG/MAJOR: stats-file: fix crash on non-x86 platform
caused by unaligned cast") revealed an ambiguity in me_generate_field()
around FN_AGE manipulation. For now FN_AGE can only be stored as u32 or
s32, but in the future we could also support 64-bit FN_AGE, and the
current code assumes 32-bit types and performs an explicit unsigned int
cast. Instead we group the current 32-bit operations for the FF_U32 and
FF_S32 formats, and leave room for potential future formats for FN_AGE.
Commit ad1bdc33 also suggested that the fix was temporary and the approach
must change, but after a code review it turns out the current approach
(generic type manipulation under me_generate_field()) is legit. The
introduction of the shm-stats-file feature didn't change the logic which
was initially implemented in 3.0. It only extended it, and since shared
stats are now spread over thread-groups since 3.3, the use of atomic
operations made typecasting errors more visible; the structure mapping
change from d655ed5f14 ("BUG/MAJOR: stats-file: ensure
shm_stats_file_object struct mapping consistency (2nd attempt)") was in
fact the only change to blame for the crash on non-x86 platforms.
With the ambiguities removed in me_generate_field(), let's hope we don't
face similar bugs in the future. Indeed, with generic counters, and more
specifically shared ones (which leverage atomic ops), great care must be
taken when changing their underlying types, as me_generate_field() solely
relies on the stat_col descriptor to know how to read the stat from a
generic pointer, so any breaking change must be reflected in that function
as well.
No backport needed.
On Initial packet parsing, a new quic_conn instance is allocated via
qc_new_conn(). Then a CID is allocated with its value derived from the
client ODCID. On CID tree insertion, a collision can occur if another
thread was already parsing an Initial packet from the same client. In
this case, the connection is released and the packet will be requeued to
the other thread.
Originally, the CID collision check was performed prior to quic_conn
allocation. This was changed by the commit below, as this could cause an
issue on quic_conn alloc failure.
commit 4ae29be18c5b212dd2a1a8e9fa0ee2fcb9dbb4b3
BUG/MINOR: quic: Possible endless loop in quic_lstnr_dghdlr()
However, this procedure is less optimal. Indeed, qc_new_conn() performs
many steps, thus it is better to skip it on Initial CID collision,
which can happen frequently. This patch restores the older order of
operations, with the CID collision check prior to quic_conn allocation.
To ensure this does not cause the same bug again, the CID is removed in
case of quic_conn alloc failure. This should prevent any loop as it
ensures that a CID found in the global tree does not point to a NULL
quic_conn, unless the CID is attached to a foreign thread. When this
thread parses a re-enqueued packet, either the quic_conn is already
allocated or the CID has been removed, triggering a fresh CID and
quic_conn allocation procedure.
CIDs are provided by haproxy so that the peer can use them as DCID of
its packets. Their value is set via a random generator. It happens on
several occasions during connection lifetime:
* via ODCID derivation if haproxy is the server
* on quic_conn init if haproxy is the client
* during post-handshake if haproxy is the server
* on RETIRE_CONNECTION_ID frame parsing
CIDs are stored in a global tree. On ODCID derivation, a check is
performed to ensure the CID is not a duplicate value. This is mandatory
to properly handle multiple INITIAL packets from the same client on
different threads.
However, for the other cases, no check is performed for CID collision.
As _quic_cid_insert() is silent, the issue is not detected at all. This
results in a CID advertised to the peer but not stored in the global
tree. In the end, this may cause two issues. The first one is that
packets from the client which use the new CID will be rejected by
haproxy, most probably with a STATELESS_RESET. The second issue is that
it can cause a crash during quic_conn release. Indeed, the CID is stored
in the quic_conn local tree and thus eb_delete() for the global tree
will be performed. As the <leaf_p> member is uninitialized, this results
in a segfault.
Note that this issue is pretty rare. It can only be observed when running
with a high number of concurrent connections in parallel, so that the
random generator provides duplicate values. The patch is still labelled
MEDIUM as it modifies code paths used frequently.
To fix this, the unsafe _quic_cid_insert() function is completely removed.
Instead, quic_cid_insert() can be used, which reports an error code if a
collision happens. CIDs are then stored in the quic_conn tree only after a
successful global tree insertion. Here is the solution for each step if a
collision occurs:
* on init as client: the connection is completely released
* post-handshake: the CID is immediately released. The connection is
kept, but it will miss an extra CID.
* on RETIRE_CONNECTION_ID parsing: a loop is implemented to retry random
generation. If it fails several times, the connection is closed in
error.
A small convenience change is made to quic_cid_insert(). The output
parameter <new_tid> can now be NULL, which is useful as most of the
time callers do not care about it.
This must be backported up to 2.6.
Split the new_quic_cid() function into multiple ones. This patch should not
introduce any visible change. The objective is to make CID allocation
and generation more modular.
The first advantage of this patch is to bring code simplification. In
particular, the conn CID sequence number increment and insertion into the
connection tree is simpler than before. Another improvement is that
errors can now be handled more easily at each step of the CID init.
This patch is a prerequisite for the fix on CID collision, thus it must
be backported prior to it to every affected version.
Change qc_new_conn() so that the connection CID tree is allocated
earlier in the function. This patch does not introduce a behavior
change. Its objective is to facilitate future evolutions on CIDs
handling.
This patch is a prerequisite for the fix on CID collision, thus it must
be backported prior to it to every affected version.
During RETIRE_CONNECTION_ID frame parsing, a new connection ID is
immediately reallocated after the release of the previous one. This is
done to ensure that the peer will never run out of DCIDs.
Prior to this patch, a CID allocation failure was silently ignored.
This prevented the emission of a new CID, which could prevent the peer
from emitting packets if it had no other CIDs available for use. Now,
such an error is considered fatal to the connection. This is the safest
solution as it's better to close connections when memory is running low.
It must be backported up to 2.8.
Amaury reported a case where "${FOO[*]}" still produces an empty field.
It happens if the variable is defined but does not contain any non-space
characters. The reason is that we special-case word expansion only on
non-existing vars. Let's change the ordering of operations so that word-
expanded vars always pretend the current arg is not an empty quote, so
that we don't make any difference between a non-existing var and an
empty one.
No backport is needed unless commit 1968731765 ("BUG/MEDIUM: config:
solve the empty argument problem again") is.
Released version 3.3-dev12 with the following main changes :
- MINOR: quic: enable SSL on QUIC servers automatically
- MINOR: quic: reject conf with QUIC servers if not compiled
- OPTIM: quic: adjust automatic ALPN setting for QUIC servers
- MINOR: sample: optional AAD parameter support to aes_gcm_enc/dec
- REGTESTS: converters: check USE_OPENSSL in aes_gcm.vtc
- BUG/MINOR: resolvers: ensure fair round robin iteration
- BUG/MAJOR: stats-file: fix crash on non-x86 platform caused by unaligned cast
- OPTIM: backend: skip conn reuse for incompatible proxies
- SCRIPTS: build-ssl: allow to build a FIPS version without FIPS
- OPTIM: proxy: move atomically access fields out of the read-only ones
- SCRIPTS: build-ssl: fix rpath in AWS-LC install for openssl and bssl bin
- CI: github: update to macos-26
- BUG/MINOR: quic: fix crash on client handshake abort
- MINOR: quic: do not set conn member if ssl_sock_ctx
- MINOR: quic: remove connection arg from qc_new_conn()
- BUG/MEDIUM: server: Add a rwlock to path parameter
- BUG/MEDIUM: server: Also call srv_reset_path_parameters() on srv up
- BUG/MEDIUM: mux-h1: fix 414 / 431 status code reporting
- BUG/MEDIUM: mux-h2: make sure not to move a dead connection to idle
- BUG/MEDIUM: connections: permit to permanently remove an idle conn
- MEDIUM: cfgparse: deprecate 'master-worker' keyword alone
- MEDIUM: cfgparse: 'daemon' not compatible with -Ws
- DOC: configuration: deprecate the master-worker keyword
- MINOR: quic: remove <mux_state> field
- BUG/MEDIUM: stick-tables: Make sure we handle expiration on all tables
- MEDIUM: stick-tables: Optimize the expiration process a bit.
- MEDIUM: ssl/ckch: use ckch_store instead of ckch_data for ckch_conf_kws
- MINOR: acme: generate a temporary key pair
- MEDIUM: acme: generate a key pair when no file are available
- BUILD: ssl/ckch: wrong function name in ckch_conf_kws
- BUILD: acme: acme_gen_tmp_x509() signedness and unused variables
- BUG/MINOR: acme: fix initialization issue in acme_gen_tmp_x509()
- BUILD: ssl/ckch: fix ckch_conf_kws parsing without ACME
- MINOR: server: move the lock inside srv_add_idle()
- DOC: acme: crt-store allows you to start without a certificate
- BUG/MINOR: acme: allow 'key' when generating cert
- MINOR: stconn: Add counters to SC to know number of bytes received and sent
- MINOR: stream: Add samples to get number of bytes received or sent on each side
- MINOR: counters: Add req_in/req_out/res_in/res_out counters for fe/be/srv/li
- MINOR: stream: Remove bytes_in and bytes_out counters from stream
- MINOR: counters: Remove bytes_in and bytes_out counter from fe/be/srv/li
- MINOR: stats: Add stats about request and response bytes received and sent
- MINOR: applet: Add function to get amount of data in the output buffer
- MINOR: channel: Remove total field from channels
- DEBUG: stream: Add bytes_in/bytes_out value for both SC in session dump
- MEDIUM: stktables: Limit the number of stick counters to 100
- BUG/MINOR: config: Limit "tune.maxpollevents" parameter to 1000000
- BUG/MEDIUM: server: close a race around ready_srv when deleting a server
- BUG/MINOR: config: emit warning for empty args when *not* in discovery mode
- BUG/MEDIUM: config: solve the empty argument problem again
- MEDIUM: config: now reject configs with empty arguments
- MINOR: tools: add support for ist to the word fingerprinting functions
- MINOR: tools: add env_suggest() to suggest alternate variable names
- MINOR: tools: have parse_line's error pointer point to unknown variable names
- MINOR: cfgparse: try to suggest correct variable names on errors
- IMPORT: cebtree: Replace offset calculation with offsetof to avoid UB
- BUG/MINOR: acme: wrong dns-01 challenge in the log
- MEDIUM: backend: Defer conn_xprt_start() after mux creation
- MINOR: peers: Improve traces for peers
- MEDIUM: peers: No longer ack updates during a full resync
- MEDIUM: peers: Remove commitupdate field on stick-tables
- BUG/MEDIUM: peers: Fix update message parsing during a full resync
- MINOR: sample/stats: Add "bytes" in req_{in,out} and res_{in,out} names
- BUG/MEDIUM: stick-tables: Make sure updates are seen as local
- BUG/MEDIUM: proxy: use aligned allocations for struct proxy
- BUG/MEDIUM: proxy: use aligned allocations for struct proxy_per_tgroup
- BUG/MINOR: acme: avoid a possible crash on error paths
In acme_EVP_PKEY_gen(), an error message is printed if *errmsg is set.
However, since commit 546c67d13 ("MINOR: acme: generate a temporary key
pair"), errmsg is passed as NULL in at least one occurrence, leading
the compiler to issue a NULL-deref warning at -O3. And indeed, if such
errors are encountered, a crash will occur. No backport is needed.
In 3.2, commit f879b9a18 ("MINOR: proxies: Add a per-thread group field
to struct proxy") introduced struct proxy_per_tgroup that is declared as
thread_aligned, but is allocated using calloc(). Thus it is at risk of
crashing on machines using instructions requiring 64-byte alignment such
as AVX512. Let's use ha_aligned_zalloc_typed() instead of calloc().
For 3.2, we don't have aligned allocations, so instead the THREAD_ALIGNED()
will have to be removed from the struct definition. Alternately, we could
manually align it as is done for fdtab.
Commit fd012b6c5 ("OPTIM: proxy: move atomically access fields out of
the read-only ones") caused the proxy struct to be 64-byte aligned,
which allows the compiler to use optimizations such as AVX512 to zero
certain fields. However the struct was allocated using calloc() so it
was not necessarily aligned, causing segv on startup on compatible
machines. Let's just use ha_aligned_zalloc_typed() to allocate the
struct.
No backport is needed.
In stktable_touch_with_exp(), if it is a local update, add it to the
pending update list even if it's already in the tree as a remote update,
otherwise it will never be communicated to other peers.
It used to work before 3.2 because of the ordering of operations, but
it's been broken by adding an extra step with the pending update list,
so we now have to explicitly check for that.
This should be backported to 3.2.
The number of bytes received or sent by a client or a server is now
saved. Sample fetches and stats fields used to retrieve this information
are renamed to add "bytes" in their names, to avoid any ambiguity with
the number of requests and responses.
The commit 590c5ff2e ("MEDIUM: peers: No longer ack updates during a full
resync") introduced a regression. During a full resync, the ID of an
update message is not parsed at all. Thus, the parsing of the whole
message is desynchronized.
On full resync, the update ID itself is ignored so that it is not acked,
but it must still be parsed. It is now fixed.
It is a 3.3-specific bug, no backport needed.
This stick-table field was atomically updated with the last update ID
pushed and dumped on the CLI, but never used otherwise. And all peer
sessions share the same ID because it is stick-table information, so the
info in the peers dump is pretty limited.
So, let's remove it.
ACK messages received by a peer sending updates during a full resync are
ignored. So, on the other side, there is no reason to still send these
ACK messages. Let's skip them.
In addition, the updates received during this stage are not considered
as to be acked. It is important to be sure to properly emit ACK messages
once the full resync is finished.
Trace messages for peers were only protocol-oriented and the information
provided was quite light. With this patch, the traces are improved:
information about the peer, its applet and the section is dumped. Several
verbosities are now available and messages are dumped at different levels
depending on the context. It should be easier to track issues in the
peers.
In connect_server(), defer the call to conn_xprt_start() until after we
have had a chance to create the mux. The xprt can behave differently
depending on whether a mux is available at this point: if it is, it may
want to wait until some data comes from the mux.
This does not need to be backported.
Since 861fe532046 ("MINOR: acme: add the dns-01-record field to the
sink"), the dns-01 challenge is output in the dns_record trash, instead
of the global trash.
The send_log string was never updated with this change, and dumps some
data from the global trash instead. Since the last data emitted in the
trash seems to be the dns-01 token from the authorization object, it
looks like the response to the challenge.
This must be backported to 3.2.
This is the same as the equivalent fix in ebtree:
The C standard specifies that it's undefined behavior to dereference
NULL (even if you use & right after). The hand-rolled offsetof idiom
&(((s*)NULL)->f) is thus technically undefined. This clutters the
output of UBSan and is simple to fix: just use the real offsetof when
it's available.
This is cebtree commit 2d08958858c2b8a1da880061aed941324e20e748.
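For reference, here is the difference between the two forms, grounded in
the description above (the my_* names exist only for this sketch):

    #include <stddef.h>

    struct item_sk { int a; int b; };

    /* hand-rolled idiom: technically UB, as it dereferences NULL even
     * though & is applied right after; UBSan flags it */
    #define my_offsetof_ub(s, f)  ((size_t)&(((s *)NULL)->f))

    /* the standard macro, well-defined wherever <stddef.h> exists */
    #define my_offsetof_ok(s, f)  offsetof(s, f)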
When an empty argument comes from the use of a non-existing variable,
we'll now detect the difference with an empty variable (error pointer
points to the variable's name instead), and submit it to env_suggest()
to see if another variable looks likely to be the right one or not.
This can be quite useful to quickly figure out how to fix misspelled
variable names. Currently, only series of letters, digits and underscores
are attempted to be resolved as a name. A typical example is:
peer "${HAPROXY_LOCAL_PEER}" 127.0.0.1:10000
which produces:
[ALERT] (24231) : config : parsing [bug-argv4.cfg:2]: argument number 1 at position 13 is empty and marks the end of the argument list:
peer "${HAPROXY_LOCAL_PEER}" 127.0.0.1:10000
^
[NOTICE] (24231) : config : Hint: maybe you meant HAPROXY_LOCALPEER instead ?
When an argument is empty, parse_line() currently returns a pointer to
the empty string itself. This is convenient, but it's only actionable by
the user, who will see for example "${HAPROXY_LOCALPEER}" and figure out
what is wrong. Here we slightly change the reported pointer so that if an
empty argument results from the evaluation of an empty variable (meaning
that all variables in the string are empty and no other char is present),
then instead of pointing to the opening quote, we'll return a pointer to
the first character of the variable's name. This will allow to make a
difference between an empty variable and an unknown variable, and for
the caller to take action based on this.
I.e. before we would get:
log "${LOG_SERVER_IP}" local0
^
if LOG_SERVER_IP is not set, and now instead we'll get this:
log "${LOG_SERVER_IP}" local0
^
The purpose here is to look in the environment for a variable whose
name looks like the provided one. This will be used to try to auto-
correct misspelled environment variables that would silently be turned
to an empty string.
The word fingerprinting functions are used to compare similar words to
suggest a correctly spelled one that looks like what the user proposed.
Currently the functions only support const char*, but there's no reason
for this, and it would be convenient to support substrings extracted
from random pieces of configuration. Here we're adding new variants
"_with_len" that take these ISTs; they are in fact a slight change of
the original functions, which now rely on them.
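To illustrate the idea, here is a minimal, self-contained sketch of
fingerprint-based suggestion over the environment. It is not HAProxy's
actual code: the fingerprint, the scoring and all names (toy_*) are
illustrative only, and a real implementation would also apply a distance
threshold before daring a suggestion.

    #include <string.h>

    extern char **environ;

    /* toy fingerprint: fold character-pair transitions into 64 counters;
     * the real fingerprinting functions are more elaborate */
    static void toy_fingerprint(const char *w, size_t len, unsigned char fp[64])
    {
        memset(fp, 0, 64);
        for (size_t i = 0; i + 1 < len; i++)
            fp[(w[i] ^ w[i + 1]) & 63]++;
    }

    /* L1 distance between two fingerprints: lower means more similar */
    static unsigned toy_distance(const unsigned char a[64], const unsigned char b[64])
    {
        unsigned d = 0;
        for (int i = 0; i < 64; i++)
            d += (a[i] > b[i]) ? a[i] - b[i] : b[i] - a[i];
        return d;
    }

    /* return the environ entry ("NAME=value") whose name looks closest
     * to the first <len> chars of <name>, or NULL if environ is empty */
    static const char *toy_env_suggest(const char *name, size_t len)
    {
        unsigned char fp[64], cand[64];
        unsigned best = ~0u;
        const char *best_ent = NULL;

        toy_fingerprint(name, len, fp);
        for (char **e = environ; *e; e++) {
            const char *eq = strchr(*e, '=');
            size_t nlen = eq ? (size_t)(eq - *e) : strlen(*e);
            unsigned d;

            toy_fingerprint(*e, nlen, cand);
            d = toy_distance(fp, cand);
            if (d < best) {
                best = d;
                best_ent = *e;
            }
        }
        return best_ent;
    }

With HAPROXY_LOCALPEER exported, toy_env_suggest("HAPROXY_LOCAL_PEER", 18)
would typically land on it, which is exactly the hint shown in the
example above.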
As prepared during 3.2, we must error on empty arguments because they
mark the end of the line and cause subsequent arguments to be silently
ignored. It was too late in 3.2 to turn that into an error so it's a
warning, but for 3.3 it needed to be an alert.
This patch does that. It doesn't instantly break, instead it counts
one fatal error per violating line. This allows to emit several errors
at once, which can often be caused by the same variable being missed,
or a group of variables sharing a same misspelled prefix for example.
Tests show that it helps locate them better. It also explains what to
look for in the config manual for help with variables expansion.
This mostly reverts commit ff8db5a85 ("BUG/MINOR: config: Stopped parsing
upon unmatched environment variables").
As explained in issue #2367, the fix above finally proved incorrect, as
it causes other trouble such as this:
log "192.168.100.${NODE}" "local0"
being resolved to this:
log 192.168.100.local0
when NODE does not exist due to the loss of the spaces. In fact, while none
of us was well aware of this, when the user had:
server app 127.0.0.1:80 "${NO_CHECK}" weight 123
in fact they should have written it this way:
server app 127.0.0.1:80 "${NO_CHECK[*]}" weight 123
so that the variable is expanded to zero, one or multiple words, leaving
no empty arg (like in shell). This is supported since 2.3 with commit
fa41cb6 so the right fix is in the config, let's revert the fix and
properly address the issue.
Some changes are necessary however, since after that patch, the in_arg
checks were added and are now inserting an empty argument even for
proper error reporting. For example, the following statement:
acl foo path "/a" "${FOO[*]}" "/b"
would complain about an empty arg at FOO due to in_arg=1, while dropping
this in_arg=1 with the following config:
acl foo path "/a" "${FOO}" "/b"
would silently stop after "/a" instead of complaining about an empty
field. So the approach here consists in noting whether or not something
was written since the quotes were emitted, in order to decide whether
or not to produce an argument. This way, "" continues to be an explicitly
empty arg, just like the same with an unknown variable, while "${FOO[*]}"
is allowed to prevent the creation of an argument if empty.
This should be backported to *some* versions, but the risk that some
configs were altered to rely on the broken fix is not null. At least
recent LTS should be reverted. Note that this requires previous commit:
BUG/MINOR: config: emit warning for empty args when *not* in discovery mode
otherwise this will break again configs relying on HAPROXY_LOCALPEER and
maybe a few other variables set at the end of discovery.
This actually reverses the condition of commit 5f1fad1690 ("BUG/MINOR:
config: emit warning for empty args only in discovery mode"). Indeed,
some variables are not known in discovery mode (e.g. HAPROXY_LOCALPEER),
and statements like:
peer "${HAPROXY_LOCALPEER}" 127.0.0.1:10000
are broken during discovery mode. It turns out that the warning is
currently hidden by commit ff8db5a85d ("BUG/MINOR: config: Stopped
parsing upon unmatched environment variables") since it silently drops
empty args which is sufficient to hide the warning, but it also breaks
other configs and needs to be reverted, which will break configs like
above again.
In issue #2995 we were not fully decided about discovery mode or not,
and already suspected some possible issues without being able to guess
which ones. The only downside of not displaying them in discovery mode
is that certain empty fields on the rare keywords specific to master
mode might remain silent until used. Let's just flip the condition to
check for empty args in normal mode only.
This should be backported to 3.2 after some time of observation.
When a server is being disabled or deleted, in case it matches the
backend's ready_srv, this one is reset. However it's currently done in
a non-atomic way when the server goes down, and that could occasionally
reset the entry matching another server, but more importantly if in
parallel some requests are dequeued for that server, it may re-appear
there after having been removed, leading to a possible crash once it
is fully removed, as shown in issue #3177.
Let's make sure we reset the pointer when detaching the server from
the proxy, and use a CAS in both cases to only reset this server.
This fix needs to be backported to 3.2. There, srv_detach() is in
server.c instead of server.h. Thanks to Basha Mougamadou for the
detailed report and the useful backtraces.
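A rough illustration of the CAS-based reset described above; the field
and type names are assumptions made for this sketch, not the actual
HAProxy declarations:

    #include <stdatomic.h>
    #include <stddef.h>

    struct server;
    struct proxy_sk {
        _Atomic(struct server *) ready_srv;   /* last known ready server */
    };

    /* clear px->ready_srv only if it still designates <srv>: a plain
     * store could wipe out another server installed concurrently by the
     * dequeuing code, which is precisely the race described above */
    static void reset_ready_srv_sk(struct proxy_sk *px, struct server *srv)
    {
        struct server *expected = srv;

        atomic_compare_exchange_strong(&px->ready_srv, &expected, NULL);
        /* on failure, <expected> holds the other server: nothing to do */
    }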
"tune.maxpollevents" global parameter was not limited. It was possible to
set any integer value. But this value is used to allocate the array of
events used by epoll. With a huge value, it seems the allocation silently
fail, making haproxy totally unresponsive.
So let's to limit its value to 1 million. It is pretty high and it should
not be an issue to forbid greater values. The documentation was updated
accordingly.
This patch could be backported to all stable branches.
"tune.stick-counters" global parameter was accepting any positive integer
value. But the maximum value is incredibly high. Setting a huge value has
signitifcant impact on memory and CPU usage. To avoid any issue, this value
is now limited to 100. It should be greater enough to all usage.
It can be seen as a breaking change.
The <total> field in the channel structure is now useless, so it can be
removed. The <bytes_in> field from the SC is used instead.
This patch is related to issue #1617.
The helper function applet_output_data() returns the amount of data in the
output buffer of an applet. For applets using the new API, it is based on
data present in the outbuf buffer. For legacy applets, it is based on input
data present in the input channel's buffer. The HTX version,
applet_htx_output_data(), is also available.
This patch is related to issue #1617.
In previous patches, these counters were added per frontend, backend, server
and listener. With this patch, these counters are reported on stats,
including promex.
Note that the stats file minor version was incremented by one because the
shm_stats_file_object struct size has changed.
This patch is related to issue #1617.
bytes_in and bytes_out counters per frontend, backend, listener and
server were removed; we now rely, respectively, on the req_in and res_in
counters.
This patch is related to issue #1617.
The per-stream bytes_in and bytes_out counters were removed and replaced
by req.in and res.in. The corresponding samples still exist but rely on
the new counters.
This patch is related to issue #1617.
Thanks to the previous patch, and based on info available on the stream, it
is now possible to have counters for frontends, backends, servers and
listeners to report number of bytes received and sent on both sides.
This patch is related to issue #1617.
The req.in and req.out samples can now be used to get the number of bytes
received from a client and sent to the server. And the res.in and res.out
samples can be used to get the number of bytes received from a server and
sent to the client. This information is stored in the logs structure
inside a stream.
This patch is related to issue #1617.
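As a hedged usage example (assuming these fetches can be used in a
log-format expression like other internal-state samples; remember that a
later patch renames them to include "bytes"):
log-format "%ci:%cp req=%[req.in]/%[req.out] res=%[res.in]/%[res.out]"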
<bytes_in> and <bytes_out> counters were added to SC to count, respectively,
the number of bytes received from an endpoint or sent to an endpoint. These
counters are updated for connections and applets.
This patch is related to issue #1617.
If your acme certificate is declared in a crt-store, and the certificate
file does not exist on the disk, HAProxy will start with a temporary key
pair.
Almost all callers of _srv_add_idle() lock the list then call the
function. It's not the most efficient and it requires some care from
the caller to handle that lock. Let's change this a little bit by
having srv_add_idle() take the lock and call _srv_add_idle(), which is
now inlined. This way callers don't have to handle the lock themselves
anymore, and the lock is only taken around the sensitive parts, not the
function call+return.
Interestingly, perf tests show a small perf increase from 2.28-2.32M RPS
to 2.32-2.37M RPS on a 128-thread system.
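The pattern described here is the classic "locking wrapper around an
inline unlocked helper". A minimal generic sketch, with the list and lock
types heavily simplified compared to the real per-thread idle lists:

    #include <pthread.h>

    struct idle_conn { struct idle_conn *next; };

    static pthread_mutex_t idle_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct idle_conn *idle_head;

    /* unlocked version, now inlined: the caller must hold idle_lock */
    static inline void _srv_add_idle(struct idle_conn *conn)
    {
        conn->next = idle_head;
        idle_head = conn;
    }

    /* locking wrapper: the lock now only covers the insertion itself,
     * not the out-of-line call+return as before */
    static void srv_add_idle(struct idle_conn *conn)
    {
        pthread_mutex_lock(&idle_lock);
        _srv_add_idle(conn);
        pthread_mutex_unlock(&idle_lock);
    }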
src/acme.c: In function ‘acme_gen_tmp_x509’:
src/acme.c:2685:15: error: ‘digest’ may be used uninitialized [-Werror=maybe-uninitialized]
2685 | if (!(X509_sign(newcrt, pkey, digest)))
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/acme.c:2628:23: note: ‘digest’ was declared here
2628 | const EVP_MD *digest;
| ^~~~~~
When an acme keyword is associated with a crt and key, and the
corresponding files do not exist, HAProxy would not start.
This patch allows configuring acme without pre-generating a keypair
before starting HAProxy. If the files do not exist, it tries to generate
a unique keypair in memory, which will be used for every ACME certificate
that does not have a file on the disk yet.
This patch provides two functions, acme_gen_tmp_pkey() and
acme_gen_tmp_x509().
These functions generate a unique keypair and X509 certificate that
will be stored in tmp_x509 and tmp_pkey. If the key pair or certificate
was already generated, they return the existing one.
The key is an RSA-2048 key and the X509 is generated with an expiration
in the past. The CN is "expired".
These are just placeholders to be used if we don't have files.
This is an API change, instead of passing a ckch_data alone, the
ckch_conf_kws.func() is called with a ckch_store.
This allows the callback to access the whole ckch_store, with the
ckch_conf and the ckch_data. But it requires the ckch_conf to be
actually put in the ckch_store before.
In process_tables_expire(), if the table we're analyzing still has
entries, and thus should be put back into the tree, do not put it in the
mt_list to have it re-inserted into the tree the next time the task runs;
just put it back into the tree right away. There is no problem with doing
so, as either the next expiration is in the future, or we handled the
maximum number of expirations per task call and we're about to stop,
anyway.
This does not need to be backported.
In process_tables_expire(), when parsing all the tables with expiration
set, to check if any entry expired, make sure we start from the oldest
one; we can't just rely on eb32_first() because of sign issues on the
timestamp.
Not doing that may mean some tables are not considered for expiration.
This does not need to be backported.
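The sign issue comes from expiration keys being wrapping 32-bit ticks:
the numerically smallest key returned by eb32_first() is not necessarily
the oldest once the counter has wrapped. A small sketch of the wrap-aware
comparison idiom this relies on (the same principle as HAProxy's tick
helpers):

    #include <stdint.h>

    /* with wrapping ticks, "a is before b" is decided on the signed
     * difference: 0xFFFFFFF0 is *before* 0x10 after a wrap, even though
     * it is the larger raw value that eb32_first() would sort last */
    static inline int tick_is_before(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;
    }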
This patch removes the <mux_state> field from the quic_conn structure.
The purpose of this field was to indicate whether the MUX layer above the
quic_conn is not yet initialized, active, or already released.
It became tedious to properly set it as the initialization order of the
various quic_conn/conn/MUX layers now differs between the frontend and
backend sides, and also depends on whether 0-RTT is used or not.
Recently, a new change introduced in connect_server() will allow to
initialize the QUIC MUX earlier if the ALPN is cached on the server
structure. This added another level of complexity.
Thus, this patch removes the <mux_state> field completely. Instead, a new
flag QUIC_FL_CONN_XPRT_CLOSED is defined. It is set at a single place
only, on XPRT close callback invocation. It can be combined with the new
utility functions qc_wait_for_conn()/qc_is_conn_ready() to determine the
status of the conn/MUX layers without an extra quic_conn field.
Deprecate the 'master-worker' keyword in the global section.
Split the configuration of the 'no-exit-on-failure' subkeyword into
another section, which is not deprecated yet and explains that it is only
meant for debugging purposes.
Emit a warning when the 'daemon' keyword is used in master-worker mode
for systemd (-Ws). This never worked and was always ignored by setting
MODE_FOREGROUND during cmdline parsing.
Warn when the 'master-worker' keyword is used without
'no-exit-on-failure'.
Warn when the 'master-worker' keyword is used and -W and -Ws already set
the mode.
There's currently a function conn_delete_from_tree() which is used to
detach an idle connection from the tree it's currently attached to so
that it is no longer found. This function is used in three circumstances:
- when picking a new connection that no longer has any avail stream
- when temporarily working on the connection from an I/O handler,
in which case it's re-added at the end
- when killing a connection
The 2nd case above is quite specific, as it requires to preserve the
CO_FL_LIST_MASK flags so that the connection can be re-inserted into
the proper tree when leaving the handler. However, there's a catch.
When killing a connection, we want to be certain it will not be
reinserted into the tree. The flags preservation is causing a tiny
race if an I/O happens while the connection is in the kill list,
because in this case the I/O handler will note the connection flags,
do its work, then reinsert the connection where it believed it was,
then the connection gets purged, and another user can find it in the
tree.
The issue is very difficult to reproduce. On a 128-thread machine it
happens in H2 around 500k req/s after around 50M requests. In H1 it
happens after around 1 billion requests.
The fix here consists in passing an extra argument to the function to
indicate if the removal is permanent or not. When it's permanent, the
function will clear the associated flags. The callers were adjusted
so that all those dequeuing a connection in order to kill it do it
permanently and all other ones do it only temporarily.
A slightly different approach could have worked: the function could
always remove all flags, and the callers would need to restore them.
But this would require trickier modifications of the various call
places, compared to only passing 0/1 to indicate the permanent status.
This will need to be backported to all stable versions. The issue was
at least reproduced since 3.1 (not tested before). The patch will need
to be adjusted for 3.2 and older, because a 2nd argument "thr" was
added in 3.3, so the patch will not apply to older versions as-is.
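Schematically, the new argument looks like the sketch below. The flag
name follows the description above, everything else (names suffixed _sk)
is a simplified stand-in for the real code:

    #define CO_FL_LIST_MASK  0x00000003   /* assumed: records which tree the conn was in */

    struct connection_sk {
        unsigned int flags;
        /* ... tree node, owner, etc ... */
    };

    /* detach <conn> from its idle/avail tree. If <permanent> is set,
     * also clear the list flags, so an I/O handler running concurrently
     * has no memorized position left to re-insert the connection at
     * once it sits in a kill list. */
    static void conn_delete_from_tree_sk(struct connection_sk *conn, int permanent)
    {
        /* ... actual tree removal happens here ... */
        if (permanent)
            conn->flags &= ~CO_FL_LIST_MASK;
    }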
In h2_detach(), it looks possible to place a dead connection back to
the idle list, and to later call h2_release() on it once detected as
dead. It's not certain that it happens but nothing in the code shows
it is not possible, so better make sure it cannot happen.
This should be preventively backported to all versions.
The more detailed status code reporting introduced with bc967758a2
checks against the error state to determine whether the URL is too long
or the headers are too large. The check used always returns true, which
results in a 414, as the error state is only set at a later point.
This commit adjusts the check to use the current state instead to return
the intended status code.
This patch must be backported as far as 3.1.
Also call srv_reset_path_parameters() when the server changed states,
and got up. It is not enough to do it when the server goes down, because
there's a small race condition, and a connection could get established
just after we did it, and could have set the path parameters.
This does not need to be backported.
Add a rwlock to control the server's path_parameter, to make sure
multiple threads don't set it at the same time, and it can't be seen in
an inconsistent state.
Also don't set the parameters every time; only set them if they have
changed, to prevent needless writes.
This does not need to be backported.
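A minimal sketch of that scheme (structure and names are placeholders,
not the actual server fields): compare under the read lock first, and
only take the write lock when the value really changed:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    struct path_params_sk {
        pthread_rwlock_t lock;   /* assumed initialized with pthread_rwlock_init() */
        char path[256];
    };

    static void set_path_params_sk(struct path_params_sk *pp, const char *path)
    {
        pthread_rwlock_rdlock(&pp->lock);
        int same = (strcmp(pp->path, path) == 0);
        pthread_rwlock_unlock(&pp->lock);
        if (same)
            return;   /* unchanged: no needless write, no writer contention */

        pthread_rwlock_wrlock(&pp->lock);
        /* re-check under the write lock: another thread may have won */
        if (strcmp(pp->path, path) != 0)
            snprintf(pp->path, sizeof(pp->path), "%s", path);
        pthread_rwlock_unlock(&pp->lock);
    }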
This patch is similar to the previous one, this time dealing with
qc_new_conn(). This function was asymmetric between the frontend and
backend sides, as the connection argument was set only in the latter
case.
This was previously required due to the qc_alloc_ssl_sock_ctx()
signature. This has changed with the previous patch, thus qc_new_conn()
can also be realigned on both FE and BE sides. The <conn> member of the
quic_conn instance is always set outside of it, in qc_xprt_start() in the
backend case.
ssl_sock_ctx is a generic object used both on TCP/SSL and QUIC stacks.
Most notably it contains a <conn> member which is a pointer to struct
connection.
On the QUIC frontend side, this member is always set to NULL. Indeed, the
connection is only created after handshake completion. However, this has
changed on the backend side, where the connection is instantiated prior
to its quic_conn counterpart. Thus, the ssl_sock_ctx member would be set
in this case as a convenience for use later in qc_ssl_do_handshake().
However, this method was unsafe as the connection can be released without
resetting the ssl_sock_ctx member. Thus, the previous patch fixes this by
using the <conn> member of the quic_conn instance, which is the proper
way.
Thus, this patch resets the ssl_sock_ctx <conn> member to NULL. This is
deemed the cleanest method as it ensures that both frontend and backend
sides must not use it anymore.
On the backend side, a connection can be aborted and released prior to
handshake completion. This causes a crash in qc_ssl_do_handshake() as the
<conn> member of ssl_sock_ctx is not reset in this case.
To fix this, use the <conn> member of quic_conn instead. This is safe as
it is properly set to NULL when a connection is released.
No impact on the frontend side as the <conn> member is not accessed
there. Indeed, in this case the connection is most of the time allocated
after handshake completion.
No need to be backported.
macOS-15 images seem to have had difficulties running the reg-tests for a
few days, for an unknown reason. Rolling back both VTest2 and haproxy
doesn't seem to fix the problem, so this is probably related to a change
in GitHub Actions.
This patch switches the image to the new macos-26 images, which seems to
fix the problem.
Perf top showed that h1_snd_buf() was having great difficulties accessing
the proxy's server_id_hdr_name field in the middle of the headers loop.
Moving the assignment out of the loop to a local variable moved the
problem there as well:
| if (!(h1m->flags & H1_MF_RESP) && isttest(h1c->px->server_id_hdr_n
0.10 |20b0: mov -0x120(%rbp),%rdi
1.33 | mov 0x60(%rdi),%r10
0.01 | test %eax,%eax
0.18 | jne 2118
12.87 | mov 0x350(%r10),%rdi
0.01 | test %rdi,%rdi
0.05 | je 2118
| mov 0x358(%r10),%r11
It turns out that there are several atomically accessed fields in its
vicinity, causing the cache line to bounce all the time. Let's collect
the few frequently changed fields and place them together at the end
of the structure, and plug the 32-bit hole with another isolated field.
Doing so also reduced a little bit the cost of decrementing be->be_conn
in process_stream(), and overall the HTTP/1 performance increased by
about 1% both on ARM and x86_64.
build-ssl.sh always prepends a "v" to the version, preventing building a
FIPS version without FIPS enabled.
This patch checks if FIPS is in the version string to choose whether to
add the "v" or not.
Example:
AWS_LC_VERSION=AWS-LC-FIPS-3.0.0 BUILDSSL_DESTDIR=/opt/awslc-3.0.0 ./scripts/build-ssl.sh
When trying to reuse a backend connection, a connection hash is
calculated to match an entry with similar parameters. Previously, this
operation was skipped if the stream content wasn't based on HTTP, as it
would have been incompatible with http-reuse.
With the introduction of SPOP backends, this condition was removed, so
that they can also benefit from connection reuse. However, this means
that the hash calculation is now always performed when connecting to a
server, even for TCP or log backends. This is unnecessary as these
proxies cannot perform connection reuse.
Note also that the reuse mode is reset during postparsing for
incompatible backends. This at least guarantees that no tree lookup will
be performed via be_reuse_connection(). However, a connection lookup is
still performed in the session via session_get_conn(), which is another
unnecessary operation.
Thus, this patch restores the condition so that reuse operations are now
entirely skipped if a backend mode is incompatible. This is implemented
via a new utility function named be_supports_conn_reuse().
This could be backported up to 3.1, as this commit could be considered
as a performance regression for tcp/log backend modes.
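The utility function plausibly boils down to a backend mode check; a
sketch under the assumption that only HTTP and SPOP modes have a mux
capable of idle-connection reuse (the _sk names and enum values are
illustrative):

    enum pr_mode_sk { PR_MODE_TCP_SK, PR_MODE_HTTP_SK, PR_MODE_LOG_SK, PR_MODE_SPOP_SK };

    struct proxy_sk { enum pr_mode_sk mode; };

    /* TCP and log backends never reuse connections, so both the hash
     * calculation and the session lookup can be skipped for them */
    static inline int be_supports_conn_reuse_sk(const struct proxy_sk *be)
    {
        return be->mode == PR_MODE_HTTP_SK || be->mode == PR_MODE_SPOP_SK;
    }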
Since commit d655ed5f14 ("BUG/MAJOR: stats-file: ensure
shm_stats_file_object struct mapping consistency (2nd attempt)"), the
last_state_change field in the counters is a uint (to match how it's
reported). However, there are explicit casts in function
me_generate_field() to retrieve the value, which cause crashes on
aarch64 and likely other non-x86 64-bit platforms due to atomically
reading an unaligned 64-bit value, and may even randomly crash other
64-bit platforms when reading past the end of the structure.
The fix for now adapts the cast to match the one used by the accessed
type (i.e. unsigned int), but the approach must change, as there's
nothing there which allows to figure whether or not the type is correct
by just reading the code. At a minimum, a typeof() on a named field is
needed, but this requires more invasive changes, hence this temporary
fix.
No backport is needed, as stats-file is only in 3.3.
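Reduced to a standalone example, the shape of the bug is the following
(field names simplified; this is not the actual struct layout):

    #include <stdint.h>

    struct counters_sk {
        uint32_t last_state_change;   /* stored as a 32-bit value */
        /* possibly nothing after it in the struct */
    };

    static uint64_t read_wrong(struct counters_sk *c)
    {
        /* BUG: 64-bit atomic load from a 32-bit, possibly unaligned
         * slot: reads 4 bytes past the field and traps on
         * strict-alignment CPUs */
        return __atomic_load_n((uint64_t *)&c->last_state_change, __ATOMIC_RELAXED);
    }

    static uint64_t read_right(struct counters_sk *c)
    {
        /* the cast matches the accessed type; __typeof__ on the named
         * field would keep this correct if the type ever changes, as
         * suggested above */
        return __atomic_load_n(&c->last_state_change, __ATOMIC_RELAXED);
    }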
Previous fixes restored round robin iteration, but an imbalance remains
when the response tree contains record types other than A or AAAA. Let's
take the following example: the DNS answers two A records and a CNAME.
The response "tree" (which is actually flat, more like a list) may look
as follows, ordered by hash:
- 1st item: first A record with IP 1
- 2nd item: second A record with IP 2
- 3rd item: CNAME record
As a consequence, resolv_get_ip_from_response will iterate as follows,
while the TTL is still valid:
- 1st call: DNS request is done, response tree is created, iteration
starts at the first item, IP 1 is returned.
- 2nd call: cached response tree is used, iteration starts at the second
item, IP 2 is returned.
- 3rd call: cached response tree is used, iteration starts at the third
item, but it's a CNAME, so we continue to the next item, which restarts
iteration at the first item, and IP 1 is returned.
- 4th call: cached response tree is used and iteration restarts at the
beginning, returning IP 1 again.
The 1-2-1-1-2-1-1-2 sequence will repeat, so IP 1 will be used twice as
often as IP 2, creating a strong imbalance. Even with more IP addresses,
the first one by hashing order in the tree will always receive twice the
traffic of the others.
To fix this, set the next iteration item to the one following the selected
IP record, if any. This ensures we never use the same IP twice in a row.
This commit should be backported where 3023e9819 ("BUG/MINOR: resolvers:
Restore round-robin selection on records in DNS answers") is, so as far
as 2.6.
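Reduced to a toy loop over an ordered record array (the real code walks
an ebtree), the fix amounts to resuming after the record actually
returned, not merely one step past where the iteration started:

    enum rtype_sk { T_A_SK, T_CNAME_SK };
    struct rec_sk { enum rtype_sk type; int ip; };

    /* pick the next A record starting at *cursor, wrapping around; on
     * success the cursor is set just past the *selected* record, so the
     * same IP is never returned twice in a row */
    static int pick_ip_sk(const struct rec_sk *recs, int n, int *cursor)
    {
        for (int i = 0; i < n; i++) {
            int idx = (*cursor + i) % n;

            if (recs[idx].type == T_A_SK) {
                *cursor = (idx + 1) % n;   /* the fix: resume after the pick */
                return recs[idx].ip;
            }
        }
        return -1; /* no A record in the response */
    }

With the two A records and the CNAME from the example above, this yields
the balanced 1-2-1-2 sequence instead of 1-2-1-1-2.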
The aes_gcm_enc() and aes_gcm_dec() sample converters now accept an
optional fifth argument for Additional Authenticated Data (AAD). When
provided, the AAD value is base64-decoded and used during AES-GCM
encryption or decryption. Both string and variable forms are supported.
This enables use cases that require authentication of additional data.
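A hedged configuration example (the txn.* variable names are purely
illustrative; as described above, the AAD variable holds a base64-encoded
value and the fifth argument is optional):
http-request set-var(txn.ct) str(hello),aes_gcm_enc(128,txn.nonce,txn.key,txn.tag,txn.aad)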
If a QUIC server is declared without ALPN, the "h3" value is
automatically set during _srv_parse_finalize().
This patch adjusts this operation. Instead of relying on
ssl_sock_parse_alpn(), a plain strdup() is used. This is considered more
efficient as the ALPN string is constant in this case. This method is
already used for listeners on the frontend side.
Ensure that QUIC support is compiled into haproxy when a QUIC server is
configured. This check is performed during _srv_parse_finalize() so that
it is detected both on configuration parsing and when adding a dynamic
server via the CLI.
Note that this changes the behavior of srv_is_quic() utility function.
Previously, it always returned false when QUIC support wasn't compiled.
With this new check introduced, it is now guaranteed that a QUIC server
won't exist if compilation support is not active. Hence srv_is_quic()
does not rely anymore on USE_QUIC define.
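A plausible shape for such a finalize-time check, so that it fires both
for the configuration file and for a CLI "add server" (names suffixed _sk
are illustrative, and the real error path certainly differs):

    struct server_sk { int is_quic; const char *id; };

    static int srv_parse_finalize_sk(struct server_sk *srv, const char **errmsg)
    {
    #ifndef USE_QUIC
        if (srv->is_quic) {
            *errmsg = "QUIC servers require a build with USE_QUIC";
            return -1; /* guarantees srv_is_quic() never sees such a server */
        }
    #endif
        return 0;
    }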
Previously, QUIC servers were rejected if SSL was not explicitly
activated using the 'ssl' configuration keyword.
Change this behavior: now SSL is automatically activated for QUIC
servers when the keyword is missing. A warning is displayed, as it is
considered better to explicitly state that SSL is in use.
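For example, assuming the quic4@ address prefix already used for QUIC
listeners also applies to servers:
backend quic_pool
    server s1 quic4@192.0.2.10:8443
    # no 'ssl' keyword: SSL is now enabled automatically, with a
    # warning suggesting to state it explicitly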