27090 Commits

Author SHA1 Message Date
Willy Tarreau
5d26fe6082 [RELEASE] Released version 3.4-dev11
Released version 3.4-dev11 with the following main changes :
    - BUG/MEDIUM: acme: fix segfault on newOrder with empty authorizations
    - BUG/MINOR: acme: skip auth/challenge steps when newOrder returns a certificate
    - BUG/MINOR: sink: do not free existing sinks on allocation error
    - CLEANUP: net_helper: fix incorrect const pointers in writev_n16()
    - BUG/MINOR: vars: make parse_store() return error on var_set() failure
    - BUG/MINOR: vars: don't store the variable twice with set-var-fmt
    - BUG/MINOR: vars: only print first invalid char in fill_desc()
    - BUG/MINOR: hpack: validate idx > 0 in hpack_valid_idx()
    - MINOR: add an MPSC ring buffer implementation
    - OPTIM: quic: rework the QUIC RX code
    - MINOR: quic: store the DCID as an offset
    - OPTIM: quic: reduce the size of struct quic_dgram
    - BUG/MINOR: quic: handle cases where we don't have an address
    - BUG/MEDIUM: cli: fix master CLI connection slot leak on client disconnect
    - MEDIUM: mux-quic: extend shut to app proto layer
    - MINOR: h3/hq_interop: implement stream reset on shut abort/kill-conn
    - BUG/MINOR: acl: fix a possible arg corruption in smp_fetch_acl_parse()
    - BUG/MINOR: map: do not leak a map descriptor on load error
    - CLEANUP: map/cli: fix some map-related help messages
    - BUG/MINOR: pattern: release the reference on failure to load from file
    - CLEANUP: acl: remove duplicate test in parse_acl_expr() and unused variable
    - CI: github: add DEBUG_STRICT=2 to ASAN jobs
    - BUG/MINOR: quic: fix buffer overflow with sockaddr_in46
    - BUG/MEDIUM: acme: fix stalled renewal when opportunistic DNS check fails
    - BUG/MINOR: quic: fix trace crash on datagram receive
    - MINOR: quic: fix trace spacing when datagram is displayed
    - CLEANUP: mux-h2: remove the outdated condition to release h2c on timeout
    - BUILD: add an EXTRA_MAKE option to build addons easily
    - BUILD: otel: removed USE_OTEL, addon is now built via EXTRA_MAKE
    - CLEANUP: otel: move opentelemetry outside haproxy sources
    - BUG/MEDIUM: mux-h2: fix the body_len to check when parsing request trailers
    - BUG/MAJOR: mux-h2: preset MSGF_BODY_CL on H2_SF_DATA_CLEN in h2c_dec_hdrs()
    - DOC: otel: update the filter's status and URL in the docs
    - DOC: acme: document missing acme-vars and provider-name keywords
    - BUG/MINOR: dns: always validate the source address in responses
    - BUG/MINOR: tcpcheck: Properly report error for http health-checks
    - CLEANUP: resolvers: Remove duplicated line when resolvers proxy is initialized
    - BUG/MINOR: resolvers: Free new requester on error when linking a resolution
    - BUG/MINOR: resolvers: Fix lookup for a hostname in the state-file tree
    - BUG/MINOR: resolvers: Free opts on parse error in resolv_parse_do_resolve()
    - BUG/MAJOR: net_helper: also fix tcp_options_list for OOB write loop
    - BUG/MEDIUM: ssl/sample: check output buffer size in aes_cbc_enc converter
    - BUG/MAJOR: http-ana: fix private session retrieval on NTLM
    - REGTESTS: add a regtest to validate various NTLM transitions
    - BUG/MEDIUM: mworker/cli: fix user and operator permission via @@<pid> in master CLI
    - BUG/MINOR: mworker/cli: check ci_insert() return value in pcli_parse_request()
    - REGTESTS: http-messaging: always send RFC8441 client settings to use ext connect
    - BUG/MINOR: h2: add decoding for :protocol in traces
    - BUG/MINOR: mux-h2: condition the processing of 8441 extension to global setting
    - MINOR: mux-h2: add a new message flag to indicate ext connect support
    - BUG/MINOR: h2: only accept :protocol with extended CONNECT
    - BUG/MINOR: acme: contact mail should be optional, don't pass ToS bool
    - CLEANUP: http-fetch: Remove duplicated return statement in smp_fetch_stver()
    - CLEANUP: http-fetch: Adjust smp_fetch_url32_src() comment
    - CLEANUP: http-fetch: Fix indentation of sample_fetch_keywords
    - BUG/MINOR: http_fetch: Check return values of unchecked buffer operations
    - BUG/MINOR: http-fetch: Fix http_auth_bearer() when custom header is used
    - BUG/MEDIUM: h1_htx: Remove reserved block on error during contig chunks parsing
    - CLEANUP: haterm: Remove duplicated block to know if haterm must drain
    - BUG/MINOR: haterm: Immediately report error when draining the request
    - CLEANUP: haterm: Remove useless IS_HTX_SC() test
    - BUG/MINOR: haterm: Fix a possible integer overflow on the request body length
    - BUG/MEDIUM: haterm: Subscribe for receives until request was fully drained
    - BUG/MINOR: haterm: Don't set HTX_FL_EOM flag on 100-Continue responses
    - BUG/MEDIUM: haterm: Properly handle end of request and end of response
    - BUG/MEDIUM: haterm: Properly handle client timeout
    - BUG/MINOR: haterm: Fix condition to use direct data forwarding
    - BUG/MINOR: haterm: Report a 400-bad-request error on receive error
    - DEBUG: haterm: Add hstream flags in the trace messages
    - MINOR: haterm: Remove now useless req_body field from hstream
    - MINOR: mux_quic: reset stream after app shutdown for HTTP/0.9
    - MINOR: mux_quic: do not perform unnecessary timeout handling on BE side
    - BUG/MEDIUM: mux_quic: adjust qcc_is_dead() to account detached streams
    - MINOR: mux_quic: simplify MUX_CTL_GET_NBSTRM
    - MINOR: ssl: Export 'current_crtstore_name'
    - MINOR: ssl: Factorize code from "new/set ssl cert" CLI command
    - MINOR: ssl: Factorize ckch instance rebuild process
    - MEDIUM: ssl: Refactorize "commit ssl cert"
    - BUG/MINOR: ssl: Use the sequence number with kTLS and TLS 1.2
    - BUG/MINOR: mux_quic: fix max stream ID reuse estimation
    - MINOR: mux_quic: release BE conns if reuse definitely blocked
    - BUG/MINOR: mux_quic: refresh timeout only if I/O performed
    - MEDIUM: mux-h1: Return an error on h2 upgrade attempts if not allowed
    - BUG/MEDIUM: mux-h2: Properly consume padding for DATA frames
    - MEDIUM: tools: read_line_to_trash() handle empty files without \n
    - MINOR: jws: support HMAC in jws_b64_protected(), make nonce optional
    - MINOR: jws: introduce jws_b64_hmac_signature() function for HMAC signing
    - MINOR: acme: implement EAB - external account binding
    - MINOR: acme: allow specifying custom MAC alg for EAB
    - REGTESTS: Fix h1_to_h2_upgrade.vtc to force h2 on first bind line
    - MINOR: cli: allow specifying a tgid with show fd
    - Revert "BUG/MEDIUM: cli: fix master CLI connection slot leak on client disconnect"
    - BUILD: use Makefile.mk instead of Makefile.inc in EXTRA_MAKE
    - Revert "BUG/MINOR: mux-h2: condition the processing of 8441 extension to global setting"
    - BUG/MEDIUM: mux-h2: fix the detection of the ext connect support
    - MINOR: jwe: Add option to enable/disable algorithms or encryption algorithms for jwt_decrypt
    - MINOR: jwe: Disable 'RSA1_5' algorithm by default in jwt_decrypt converters
    - BUG/MEDIUM: jwe: Fix jwt.decrypt_alg_list to work correctly
    - BUG/MEDIUM: stick-table: properly check permissions on CLI's set/clear cmd
    - DOC: acme: EAB is now supported
v3.4-dev11
2026-05-08 05:22:55 +02:00
William Lallemand
815845f17e DOC: acme: EAB is now supported
Remove the line mentioning that External Account Binding is not
supported, since it was implemented in 3.4.
2026-05-07 18:50:54 +02:00
Willy Tarreau
d04a56e17d BUG/MEDIUM: stick-table: properly check permissions on CLI's set/clear cmd
The "set stick-table" CLI command's permissions are checked a bit too
late in the I/O handler, because the lookups performed at parsing time
can already cause an entry to be created at level "user" even though the
user does not have the permission to go further and to fill the data in.

Note that the impact remains pretty low since the entry is created without
data being touchable, and all within the table's settings (max entries,
expire etc). In addition it cannot even be used to periodically refresh
an entry and prevent it from expiring because only a creation is handled
at this point.

Let's add the check in cli_parse_table_req() so that these privileged
commands are entirely denied past the table lookup. This way it remains
possible to know that the table doesn't exist, like for the "show" command
but not more.

This should be backported to all stable branches, because the bug right
now cannot result in an accidental use (entries are not properly created
and deletion does not work).

Thanks to Omkhar Arasaratnam for finding and reporting this.
2026-05-07 18:46:44 +02:00
Olivier Houchard
81abfaa4df BUG/MEDIUM: jwe: Fix jwt.decrypt_alg_list to work correctly
Function jwe_parse_global_alg_enc_list() handles both
jwt.decrypt_alg_list and jwt.decrypt_enc_list, but to know which array
to use, between the algorithm and encryption ones, it was checking the
wrong string against jwe.supported_algorithms, so it always considered
we were dealing with encryption algorithms, and jwt.decrypt_alg_list
could not possibly work.
Fix that by checking the right string.
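The fix boils down to dispatching on the keyword actually being parsed. A minimal sketch of the corrected pattern, with illustrative names (pick_target_list is hypothetical, not HAProxy's actual identifier):

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the fixed dispatch, not HAProxy's actual code:
 * the array to fill must be chosen from the keyword being parsed, not
 * from an unrelated string comparison. */
static const char *pick_target_list(const char *kw)
{
    if (strcmp(kw, "jwt.decrypt_alg_list") == 0)
        return "alg"; /* fill the accepted-algorithms array */
    if (strcmp(kw, "jwt.decrypt_enc_list") == 0)
        return "enc"; /* fill the accepted-encryptions array */
    return NULL;      /* unknown keyword */
}
```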
2026-05-07 18:09:47 +02:00
Remi Tricot-Le Breton
495eb7b0e0 MINOR: jwe: Disable 'RSA1_5' algorithm by default in jwt_decrypt converters
RFC 8725, section 3.2, suggests to "Avoid all RSA-PKCS1 v1.5
encryption algorithms", so this algorithm gets disabled by default.
Tokens having this "alg" won't be decrypted unless it is explicitly
re-enabled thanks to the 'jwt.decrypt_alg_list' global option.

Thanks to Omkhar Arasaratnam for raising our awareness about this!
2026-05-07 18:00:29 +02:00
Remi Tricot-Le Breton
f82a242c8f MINOR: jwe: Add option to enable/disable algorithms or encryption algorithms for jwt_decrypt
Some users of the jwt_decrypt_XXX converters might want to reject JWT
tokens with a specific algorithm or encryption algorithm ("alg" or "enc"
field respectively) in order to avoid weak algorithms for instance.
This could be done from the configuration but would be tedious.

This patch adds the new 'jwt.decrypt_alg_list' and
'jwt.decrypt_enc_list' global options that can be used to define a
subset of accepted algorithms.
2026-05-07 18:00:27 +02:00
Willy Tarreau
00941af7b7 BUG/MEDIUM: mux-h2: fix the detection of the ext connect support
As reported by Huangbin Zhan (@zhanhb) in github issue #3355, latest
commit 96f7ff4fdd ("MINOR: mux-h2: add a new message flag to indicate
ext connect support") was not correct and can break RFC8441-compliant
clients, as it did with a variant of Chrome 142.

The problem is that while RFC9113 says that new pseudo-headers are only
permitted with *negotiated* extensions, and RFC8441 doesn't indicate
whether or not SETTINGS_ENABLE_CONNECT_PROTOCOL is needed from clients,
it only says that clients know that servers support the extension when
seeing it in their settings and can use it, which seems to imply that
they don't need to send it to indicate their willingness to use it.
This also means that the server cannot know if a client is expected to
use it or not by default. It only knows that a client is not allowed to
use it if the server didn't emit support mentioning it, which haproxy
can do using h2-workaround-bogus-websocket-clients.

Thus the fix proposed by @zhanhb is right: when presetting the flag for
the parser to indicate whether or not we're willing to accept RFC8441's
:protocol pseudo-header, we should:
  - consider the received setting on the backend side (though the
    pseudo-header is neither used nor supported there, but at least
    we pass the info regarding the support of the extension)
  - consider the configuration for the frontend (since it's the only
    place where we can decide on support or not)

This patch does just that and reverts the accompanying changes to the
regtests that made them want to see the client's setting. It must be
backported to 2.6.

In the meantime, placing this option in the global section will force
the clients to downgrade to h1:

    h2-workaround-bogus-websocket-clients

Many thanks again to @zhanhb for this feedback and for proposing a tested fix.
2026-05-07 17:34:39 +02:00
Willy Tarreau
b587ea1f27 Revert "BUG/MINOR: mux-h2: condition the processing of 8441 extension to global setting"
This reverts commit 9986ad65a4af0b5e4212f1d12e108090490a8c2d.

The protocol was not super clear on one point when compared to RFC9113
and our internal setting GTUNE_DISABLE_H2_WEBSOCKET. While RFC9113 says
that protocol extensions are negotiated, RFC8441 is only advertised by
the server, which thus doesn't know if the client supports it or not
until it faces it. In addition, GTUNE_DISABLE_H2_WEBSOCKET doesn't
apply to the protocol support as its name seems to imply, but to the
frontend only, since the corresponding option is
"h2-workaround-bogus-websocket-clients". As such, haproxy should not
expect the client to advertise anything regarding the setting, and
should not consider the option when receiving the server's setting.

This needs to be backported to 2.6 where the commit above was
backported.
2026-05-07 17:34:39 +02:00
William Lallemand
157e24272f BUILD: use Makefile.mk instead of Makefile.inc in EXTRA_MAKE
Use an external Makefile called Makefile.mk in order to build complex
addons.

    make TARGET=linux-glibc ... EXTRA_MAKE="/path/to/addon1" \
    EXTRA_MAKE+="/path/to/addon2"
2026-05-07 16:50:52 +02:00
Willy Tarreau
782336c21b Revert "BUG/MEDIUM: cli: fix master CLI connection slot leak on client disconnect"
This reverts commit 64383e655b23e1240dd0043a18ca020994c60022.

As reported by Alexander Stephan in issue #3351, it causes problems.
First, as seen in the issue, the "reload" operation, handled by an applet
local to the master process, is being interrupted by the timeout so that
the client never gets the result (though the timeout is applied). A fix
for this was found (ignore client-fin/server-fin on applets, as they make
no sense there), but it only hides a deeper problem. Indeed, issuing
"@1 debug dev delay 2000" still stops at 1s with an error, indicating
that commands are systematically being sent with a shutdown, and thus
that the server-fin always applies. This is a problem because it means
that any long command will now be interrupted after one second.

All of this needs to be put back into perspective before progressing
further on this issue, and the reason for sending the shutdown should
be reconsidered in the context of the current version, as it looks
like this was once necessary but no longer is.

In addition, the issue encountered by Alexander, of a frozen worker,
was essentially reported once in many years, so it's totally acceptable
to leave older versions unfixed and figure out the best solution for
modern versions only.

Let's just revert to the pre-fix situation so as to avoid causing
breakage everywhere. This revert should be backported to all versions
(2.4 included).
2026-05-07 16:37:33 +02:00
Maxime Henrion
da554b7ef7 MINOR: cli: allow specifying a tgid with show fd
This will become useful when we implement using unshare() to split fd
tables per thread group. For now, the tgid is parsed but completely
ignored.
2026-05-07 16:02:37 +02:00
Christopher Faulet
972d0a4183 REGTESTS: Fix h1_to_h2_upgrade.vtc to force h2 on first bind line
With VTEST, it seems possible to receive the H2 preface in 2 packets. So the
preface cannot be matched and the H1 to H2 upgrade is not performed as
expected. The script was fixed by forcing the H2 proto on the first bind
line.

The problem with the preface matching will be reviewed later.
2026-05-07 16:19:10 +02:00
Mia Kanashi
5f91cf1b7d MINOR: acme: allow specifying custom MAC alg for EAB
This implements configuration for custom mac alg in EAB.
I don't think there are any reasons to allow that TBH,
but it is something that exists in the spec.

Depends on the EAB impl.
No backport needed
2026-05-07 15:19:15 +02:00
Mia Kanashi
187b1250dd MINOR: acme: implement EAB - external account binding
Patch introduces ACME EAB support.

Configuring EAB requires two parts: Key ID and MAC Key.
The Key ID is an ASCII string that specifies the name of the record the
CA should look up. The MAC Key is a base64url-encoded key that is used
for JWS signing, using HS256 or other algorithms.
They are credentials, so they must be stored securely.

A thing about EAB is that it is required only during account creation
so it is unexpectedly complex to think about.
Some CAs provide an EAB credential pair that is reused between
multiple account order requests, for example ZeroSSL, but others like
Google Trusted Services require a unique EAB credential for each new
account creation request.

There are a lot of ways the config could be implemented; I decided to
make it so that Key ID and MAC Key are stored in separate files on
disk, a decision made because of security concerns.
The file-based approach in particular works well with systemd
credentials, works well with systems that have a world-readable or
immutable config, and is compatible with existing setups that specify
credentials in a file.

EAB is configured through options like this in an acme section:

eab-mac-alg HS512
eab-mac-key pebble.eab.mac-key
eab-key-id pebble.eab.key-id

I decided to not error out on empty files, but issue a log msg instead,
so that credentials can be removed without changing the haproxy config.

The read_line_to_trash() function from tools.c is used for reading the
files; it could be replaced by a dedicated function later.

No backport needed
2026-05-07 15:19:15 +02:00
Mia Kanashi
c9e76e5bb1 MINOR: jws: introduce jws_b64_hmac_signature() function for HMAC signing
New jws_b64_hmac_signature() duplicates the same functionality as
jws_b64_signature(), but for the use case of HMAC signing.
Intended to be used for ACME EAB.

OpenSSL allows using an EVP_PKEY for HMAC functionality, so
jws_b64_signature() could be reused. The problem is that, although this
API isn't deprecated, it was removed in BoringSSL, and was also removed
from AWS-LC (due to its BoringSSL roots) before being added back
because of "legacy clients" (citing them). For that reason alone, I say
that having a dedicated function for HMAC is better; the HMAC() macro
seems to be widely supported, unlike other ways of doing the same
thing. Another alternative would be the EVP_MAC API, but it was
introduced in OpenSSL 3.0, so it is not as widely supported.
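As a companion illustration of the encoding these jws_b64_* helpers deal with, here is a minimal base64url encoder (RFC 4648 section 5, no padding), which is the encoding JWS applies to its protected header and signatures. This is a self-contained sketch, not HAProxy's actual jws.c code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Encode <ilen> bytes from <in> as base64url without padding into
 * <out>, which must hold at least 4*((ilen+2)/3)+1 bytes. Returns the
 * encoded length. Sketch only, not HAProxy's helper. */
static size_t b64url_enc(const unsigned char *in, size_t ilen, char *out)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    size_t o = 0;

    for (size_t i = 0; i < ilen; i += 3) {
        unsigned int v = (unsigned int)in[i] << 16;

        if (i + 1 < ilen)
            v |= (unsigned int)in[i + 1] << 8;
        if (i + 2 < ilen)
            v |= in[i + 2];
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        if (i + 1 < ilen)
            out[o++] = tbl[(v >> 6) & 63];
        if (i + 2 < ilen)
            out[o++] = tbl[v & 63];
    }
    out[o] = '\0';
    return o;
}
```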
2026-05-07 15:19:15 +02:00
Mia Kanashi
6900278ac6 MINOR: jws: support HMAC in jws_b64_protected(), make nonce optional
This adds support for HMAC algorithms in jws_b64_protected(), but also
makes the nonce field optional, because it isn't needed in some cases
where HMAC is used; in particular, ACME EAB requires that the nonce
field must not exist.
2026-05-07 15:19:15 +02:00
Mia Kanashi
83e6ae3334 MEDIUM: tools: read_line_to_trash() handle empty files without \n
fgets() returns NULL when EOF is reached before any byte could be read,
as with an empty file; handle that as a success for consistency. The
current behaviour is arguably a bug; the API of fgets() is pretty weird
after all, so someone probably forgot.
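The intended behaviour can be sketched as follows. This is a simplified stand-in for read_line_to_trash() (the name read_one_line and the plain buffer are hypothetical; the real function fills a trash chunk), showing EOF with no byte read being treated as a successful empty line:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified sketch, not HAProxy's read_line_to_trash(): read one line
 * into <buf>. fgets() returns NULL only when no byte could be read,
 * which covers both a real error and a plain EOF (e.g. an empty file);
 * treat the latter as a successful empty line. Returns the line length
 * (newline stripped) or -1 on error. */
static int read_one_line(FILE *f, char *buf, size_t size)
{
    buf[0] = '\0';
    if (!fgets(buf, (int)size, f))
        return ferror(f) ? -1 : 0;

    size_t len = strlen(buf);
    if (len && buf[len - 1] == '\n')
        buf[--len] = '\0';
    return (int)len;
}
```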
2026-05-07 15:19:15 +02:00
Christopher Faulet
faf3e9ac3a BUG/MEDIUM: mux-h2: Properly consume padding for DATA frames
Since the commit 617592c9e ("MEDIUM: mux-h2: try to coalesce outgoing
WINDOW_UPDATE frames"), padding of DATA frames is no longer consumed.
Instead, this padding is left in the demux buffer and used as the header of
the next frame. Because all bytes of the padding must be zero, this leads
haproxy to erroneously think the peer sent a DATA frame for the stream-id 0
and to trigger a PROTOCOL_ERROR. This is true for a padding of 9 bytes or
more, but similar issues may be experienced with smaller padding.

Before the commit above, the padding was consumed in h2_process_demux to
restore the H2_CS_FRAME_H state at the end of the while loop processing
received frames.

However, it seems a bit strange to deal with the padding at this stage,
especially because it is not obvious at all. So to fix the issue, the
padding is now consumed at the end of h2_frt_transfer_data(), inside the
"end_transfer" label. At this stage, we know all the payload of the current
DATA frame was consumed and only the padding is still there, if any. We
must only take care not to consume more than what is available in the demux
buffer, since the padding may have been partially received.

This patch should fix the issue #3354. It must be backported as far as 2.8.
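The core of the fix reduces to a bounded consumption, sketched here with hypothetical names (the real code works on the h2c demux buffer): strip the announced padding, but never more than what the buffer currently holds, and keep the remainder for the next wake-up.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch with illustrative names: <pad_left> is the announced padding
 * not yet stripped for the current DATA frame, <buf_data> the bytes
 * currently available in the demux buffer. Returns how many bytes to
 * drop now; the rest is consumed once more data arrives, since the
 * padding itself may be split across packets. */
static size_t consume_padding(size_t *pad_left, size_t buf_data)
{
    size_t eat = *pad_left < buf_data ? *pad_left : buf_data;

    *pad_left -= eat; /* remaining padding for a later call */
    return eat;       /* bytes to drop from the demux buffer now */
}
```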
2026-05-07 14:59:28 +02:00
Christopher Faulet
72fd357814 MEDIUM: mux-h1: Return an error on h2 upgrade attempts if not allowed
If h1 to h2 upgrades are not allowed, a 405-method-not-allowed error is now
returned from the H1 multiplexer itself instead of treating the "PRI *
HTTP/2.0\r\n\r\n" request as a normal one.

Before, this kind of request was caught by the HTTP analyzers and a
400-bad-request was returned. This was added before the multiplexers era to
protect backend apps against unexpected H1 to H2 upgrades on the server side.

Now, it is possible to handle the error in the H1 multiplexer. One benefit
is to be able to increment the glitches counters. However, the error is
still handled in HTTP analyzers to be sure to detect unwanted upgrades that
can be hidden in H2 or H3 requests.

There is a special case: TCP > H1 > H2 upgrades. In that case, an H1 stream
exists, so we must report an error to the upper layer too.

A reg-test script was added to validate the feature. In addition,
tcp_to_http_upgrade.vtc was updated accordingly.
2026-05-07 14:59:28 +02:00
Amaury Denoyelle
419cc6e2f6 BUG/MINOR: mux_quic: refresh timeout only if I/O performed
Previously, QUIC MUX timeout was refreshed on every qcc_io_cb()
execution. This is not desired if no send/receive were performed, as in
this case the connection may be stuck.

This patch fixes this by refreshing timeout only if some progress is
performed during qcc_io_cb(). To implement this, return value of
qcc_io_recv() has been adjusted to return the number of newly decoded
bytes.

This patch is considered a bug fix because, without it, there is a risk
that the QUIC MUX inactivity timeout is less efficient and maintains a
connection for too long.

This should be backported up to 2.8, after a period of observation.
2026-05-07 14:34:29 +02:00
Amaury Denoyelle
b8961ee8b3 MINOR: mux_quic: release BE conns if reuse definitely blocked
The avail_streams callback serves to indicate how many streams can be
attached to a backend connection. On the QUIC mux, this relies on several
parameters: first on static limitations which only decrease over time, but
also on the flow control which is dynamically adjusted by the peer and can
be increased or decreased at will.

qcc_is_dead() on the other hand serves to determine if a connection can
be removed. First, it must be inactive (no request in progress). Then,
if a backend connection cannot be reused because one of the above
limitations was reached, it is definitely useless and should be removed as
soon as possible. However, prior to this patch, qcc_is_dead() did not
take into account the same set of parameters as avail_streams: only a
graceful shutdown initiated by the peer was considered.

The purpose of this patch is to link these two functions together.
The reuse calculation based on static limits is extracted from avail_streams()
into new qcc_be_is_reusable(). This function is used directly in
qcc_is_dead(), which now for example takes into account the server
max-reuse parameter.

This patch should ensure that a backend connection which cannot be
reused anymore is released as soon as possible. This could slightly
improve the reuse rate in some specific scenarios, as non-reusable
connections no longer pollute the idle cache.

Return value of QUIC avail_streams() is changed by this patch as server
max-reuse and max stream ID limits are now only taken into account when
already exceeded or if a single stream remains. However, this has no
consequence as callers of avail_streams() do not differentiate return
values of 2 or more.
2026-05-07 11:19:22 +02:00
Amaury Denoyelle
e586458ec0 BUG/MINOR: mux_quic: fix max stream ID reuse estimation
The following patch adjusts QUIC mux avail_streams() to ensure maximum
stream ID is never exceeded.

  commit 143d0034c912f1490812b6302f0dffb37f3ec02d
  BUG/MINOR: mux_quic: limit avail_streams() to 2^62

However, the calculation is incorrect, as the <next_bidi_l> member value is
set to the next available ID, not the last one in use. Also, when the last
stream is closed, it will be greater than QCS_ID_MAX_STRM_CL_BIDI,
resulting in a subtraction wrapping.

Fix this by using the simplest approach. Return value of avail_streams()
is only reduced if either the maximum stream ID limit is already
exceeded, or there is only a single stream still usable. In other cases,
return value is left as is.

Note that this bug is unlikely to have any impact as the maximum stream
ID is a very large value.

This should be backported up to 3.3.
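The corrected estimation avoids computing any ID difference at all; a sketch under assumed names (<next_id> standing in for <next_bidi_l>, and an illustrative cap rather than the exact QCS_ID_MAX_STRM_CL_BIDI value):

```c
#include <assert.h>
#include <stdint.h>

#define ID_CAP ((1ULL << 62) - 4) /* illustrative cap, not HAProxy's
                                   * exact QCS_ID_MAX_STRM_CL_BIDI */

/* <next_id> is the next stream ID to allocate; <avail> the estimate
 * from the other limits. No subtraction is ever performed, so no
 * wrapping can occur even once the cap has been exceeded. */
static uint64_t clamp_avail(uint64_t next_id, uint64_t avail)
{
    if (next_id > ID_CAP)
        return 0;             /* limit already exceeded */
    if (next_id == ID_CAP)
        return avail ? 1 : 0; /* a single stream remains usable */
    return avail;             /* far from the cap: leave as is */
}
```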
2026-05-07 10:53:56 +02:00
Olivier Houchard
753a282373 BUG/MINOR: ssl: Use the sequence number with kTLS and TLS 1.2
When using TLS 1.2 and kTLS, use the sequence number as the explicit
nonce (what the Linux kTLS API calls "iv"), as is strongly recommended
and done by most TLS implementations, instead of trying to generate a
pseudo-random number.
In practice, it changes nothing, because the kernel would override that
with the sequence number anyway, but there is no need for confusing
code that uses statistical_prng_range().

This should be backported to 3.3.
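The explicit nonce in question is just the 8-byte record sequence number in network byte order. A minimal sketch (local helper only; the real code fills the "iv" field of the kTLS crypto_info structure from <linux/tls.h>):

```c
#include <assert.h>
#include <stdint.h>

/* Write the TLS 1.2 record sequence number as the 8-byte explicit
 * nonce, big-endian, as most TLS stacks (and the kernel itself) do. */
static void seq_to_nonce(uint64_t seq, unsigned char nonce[8])
{
    for (int i = 7; i >= 0; i--) {
        nonce[i] = (unsigned char)(seq & 0xff);
        seq >>= 8;
    }
}
```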
2026-05-06 21:37:18 +02:00
Remi Tricot-Le Breton
2be6744189 MEDIUM: ssl: Refactorize "commit ssl cert"
In order for the code behind the "commit ssl cert" logic to be usable
outside of the CLI context, some new "ckch_store_update_" functions are
created. They allow all the operations on ckch_stores to be performed
without needing an appctx.
The first function being called is ckch_store_update_init which mainly
takes the ckch_store lock and checks that there is an ongoing
transaction with the proper path (which was already done in
cli_parse_commit_cert).
The main one is ckch_store_update_process which replicates the logic
that could be found in the cli_io_handler_commit_cert function. We
iterate over the ckch instances of an existing ckch store and duplicate
them in the new ckch store which is still detached from the tree, before
replacing the old store with the new one. This whole operation could
take some time so we were yielding every 10 instances or when
applet_putstr calls would fail. The actual ckch_store operations and the
applet-related calls are now decoupled in order to stop requiring an
appctx during the ckch store/instances processing.
The ckch_store_update_process will now update a "msg" buffer and a
"state" that allow processing messages to be sent to the caller as well as
keep the state of the processing "state machine".
When the ckch_store_update_process loop is over,
ckch_store_update_cleanup can be called to release the lock and free
some now useless structures.
2026-05-06 21:37:18 +02:00
Remi Tricot-Le Breton
53ecb81781 MINOR: ssl: Factorize ckch instance rebuild process
The ckch instances for a given ckch_store have to be rebuilt when a
certificate is updated during runtime (via cli or lua). The code was
duplicated in lua so factorizing the actual loop avoids future errors
if the code changes. The new 'ckch_store_rebuild_instances' will have a
dedicated 0 return code if it needs to be called again (because of the
yielding logic since ckch instance rebuild might take some time).
2026-05-06 21:37:18 +02:00
Remi Tricot-Le Breton
efe6c97488 MINOR: ssl: Factorize code from "new/set ssl cert" CLI command
This allows to perform the same kind of operation without the need for
an appctx.
2026-05-06 21:37:18 +02:00
Remi Tricot-Le Breton
acf1331ed8 MINOR: ssl: Export 'current_crtstore_name'
Make the 'current_crtstore_name' global variable visible during parsing.
2026-05-06 21:37:18 +02:00
Amaury Denoyelle
1614204d28 MINOR: mux_quic: simplify MUX_CTL_GET_NBSTRM
Since the previous patch, accounting of HTTP requests in progress on the
QUIC MUX has been simplified. Now the QCC <nb_hreq> counter identifies
them until the QCS is freed.

Thus, MUX_CTL_GET_NBSTRM can be simplified. Instead of relying on
<nb_sc> plus the <opening_list>, simply return the <nb_hreq> value,
which should be identical.
2026-05-06 10:21:16 +02:00
Amaury Denoyelle
3cfb08c07b BUG/MEDIUM: mux_quic: adjust qcc_is_dead() to account detached streams
Muxes are responsible for releasing connections once they are inactive and
won't be reusable. In the QUIC mux, such connections are detected via
qcc_is_dead(). The first precondition is that there is no more upper
streams attached. This was accounted via QCC <nb_sc> counter.

A special characteristic of QCS instances is that they can be in
a detached state: the upper stream has been removed but there is still data
to emit. Such QCS were not taken into account in qcc_is_dead(), so a
connection could be freed with some remaining data not yet emitted.

It is also not possible for QUIC MUX to simply look at the QCS tree to
determine if the connection is inactive. Indeed, some streams are opened
for protocol internal usage. This is the case for example with HTTP/3
unidirectional control stream or QPACK encoder/decoder streams. These
streams are never closed. In the end, only requests streams should be
taken into account for the connection activity.

This patch improves the situation by reworking the <nb_hreq> QCC counter.
Previously, it served for the http-request timeout implementation. However,
this timeout only relies on <opening_list> now. Thus, the scope of <nb_hreq>
is changed: it is now incremented via qcs_wait_http_req(), used by the app
protocol layer once a request stream is identified. Decrement is performed
on qcs_free(), so this guarantees that a connection cannot be freed anymore
if request streams still exist, unless the inactivity timeout fires. As
such, <nb_hreq> now supersedes <nb_sc> entirely, so qcc_is_dead() can now
rely on the former.

Along with this change, qcc_timeout_task() must be updated. The call to
qcc_is_dead() was unnecessary prior to this patch as timeout handling
was only active when no upper streams were attached. When tested, both
<nb_sc> and QCC <task> were already null, so a connection was always
released on timeout, as expected. With qcc_is_dead() now checking
<nb_hreq> instead, this is not always the case anymore. In fact, this
check is unnecessary as the inactivity timeout serves precisely to free a
stuck connection with remaining data to emit.

This patch also has some impact on http-keep-alive timeout. Previously,
this timeout could be armed if only detached streams remained. Now, it
is only applicable if all QCS request instances are closed and freed.
Thus, qcc_reset_idle_start() is now called directly from qcs_free().

Ideally this should be backported up to 2.6, or at least 2.8 as QUIC
experimental status was removed there.
2026-05-06 10:19:25 +02:00
Amaury Denoyelle
81eda41d5c MINOR: mux_quic: do not perform unnecessary timeout handling on BE side
MUX implements a timeout for HTTP keep-alive which monitors the delay
between two HTTP requests. This is only applicable for frontend
connections, as on the backend side idle connections can be kept in the
server pool. In QUIC mux, this timeout relies on QCC <idle_start> which
is refreshed when the last request is terminated.

This patch modifies the refresh operation so that it is only performed
for frontend connections. This is not strictly necessary but the timeout
management is now clearer and it eliminates an unnecessary
operation for backend connections.

Similarly, http-request timeout is also only applicable for frontend
connections. This relies on qcs_wait_http_req() function. A request QCS
is inserted in <opening_list> until the headers are received. This is
unnecessary on the backend side so this is excluded as well.
2026-05-06 08:57:35 +02:00
Amaury Denoyelle
af49294633 MINOR: mux_quic: reset stream after app shutdown for HTTP/0.9
HTTP/3 implements a GOAWAY frame for graceful shutdown. This allows
rejecting the opening of any new stream with a larger ID. This is
implemented via the
HTTP/3 attach() callback called by qcs_new().

When HTTP/0.9 is used, there is no similar mechanism. This renders some
features, such as server max-reuse, difficult to implement. This patch
now provides a method for such protocols with no graceful shutdown
support. Instead of invoking the attach() callback, a stream is now
immediately reset if the application protocol layer is already closed.

This patch does not change the behavior for HTTP/3. Only limited
protocols (currently only HTTP/0.9) without graceful shutdown are
impacted. These protocols are identified by their shutdown() callback
being null.

This change is only necessary for HTTP/0.9 as there is no equivalent of
HTTP/3 GOAWAY in this case.
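
A minimal sketch of that dispatch, assuming an ops table where the shutdown entry is left NULL for protocols lacking graceful shutdown (hypothetical names, not the real mux-quic code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical ops table: protocols with no graceful shutdown
 * (like HTTP/0.9 here) leave .shutdown at NULL. */
struct toy_app_ops {
    void (*shutdown)(void *ctx);
};

static void toy_h3_shutdown(void *ctx) { (void)ctx; }

static const struct toy_app_ops toy_h3  = { .shutdown = toy_h3_shutdown };
static const struct toy_app_ops toy_h09 = { .shutdown = NULL };

/* Returns 1 when a freshly opened stream must be reset at once: the
 * app layer is already closed and, lacking a GOAWAY-like mechanism,
 * cannot refuse the stream gracefully. */
static int toy_must_reset(const struct toy_app_ops *ops, int app_closed)
{
    return app_closed && ops->shutdown == NULL;
}
```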
2026-05-06 08:51:27 +02:00
Christopher Faulet
b71a0e7874 MINOR: haterm: Remove now useless req_body field from hstream
The req_body field is no longer used, except in trace messages. And in
fact, its value is not necessarily accurate when some data are received
along with the request headers. So there is no reason to keep it.
2026-05-05 19:07:59 +02:00
Christopher Faulet
a68b96ad36 DEBUG: haterm: Add hstream flags in the trace messages
It could be useful to know the hstream state during debugging sessions.
2026-05-05 19:07:59 +02:00
Christopher Faulet
6e7802ca36 BUG/MINOR: haterm: Report a 400-bad-request error on receive error
When an error is reported while reading request data, the hstream now
tries to send a 400-bad-request to the client. Before, the connection
was just closed with no error message.
2026-05-05 19:07:59 +02:00
Christopher Faulet
72e010fca3 BUG/MINOR: haterm: Fix condition to use direct data forwarding
The direct forwarding support relied only on the "hs->to_write" value.
But we must be sure to retry if fast-forwarded data are still present
in the I/O buffer.
2026-05-05 19:07:44 +02:00
Christopher Faulet
b6503f70e2 BUG/MEDIUM: haterm: Properly handle client timeout
No client timeout was set with haterm. It could be an issue with
unresponsive clients. So the I/O timeout of the SC is now initialized to
the frontend client timeout when the hstream is created. Then a read
activity is reported when data are received. This read activity is used
to set an expiration date on the hstream task, which is tested when the
hstream is woken up with the TASK_WOKEN_TIMER reason.

When a client timeout is detected, the hstream tries to send a 408 and
reports an error.
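
The expiry computation described above can be sketched as follows (a toy model: "no timeout" is represented by -1 here, and the names are not HAProxy's tick API):

```c
#include <assert.h>

#define TOY_NO_EXPIRY (-1L)

/* Next task expiration: last read activity plus the frontend client
 * timeout, or no expiry at all when the timeout is unset. */
static long toy_next_expire(long last_read, long client_timeout)
{
    if (client_timeout <= 0)
        return TOY_NO_EXPIRY;
    return last_read + client_timeout;
}

/* On a wakeup with a timer reason, the timeout fired iff the current
 * time has reached the expiration date. */
static int toy_timed_out(long now, long expire)
{
    return expire != TOY_NO_EXPIRY && now >= expire;
}
```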
2026-05-05 19:03:31 +02:00
Christopher Faulet
c0f137b704 BUG/MEDIUM: haterm: Properly handle end of request and end of response
There were several issues with the handling of the end of the request or
the end of the response. The main problem was about request draining.

To help fix these issues, two flags were introduced:

 * HS_ST_HTTP_EOM_RCVD: to know the request was fully received
 * HS_ST_HTTP_EOM_SENT: to know the response was fully sent

Thanks to these flags some parts were reviewed and simplified.

In the I/O callback function, outside of any error, the hstream task is
now woken up when one of the directions is not finished or when there
are still some data in a buffer.

The function hstream_must_drain() was reworked to properly drain
requests with no content-length before replying.

The condition to wake the hstream up to drain the request after replying
was also reworked, and moved outside of the else block. Indeed, it must
also be evaluated when the response was fully sent in one call, when the
request headers were processed.

Finally, the condition to shut the hstream was slightly adapted to use
the new flags. In addition, we now rely on se_shutdown().
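
The two flags combine into simple conditions; a sketch (the flag names follow the commit, the helpers and bit values are illustrative):

```c
#include <assert.h>

#define HS_ST_HTTP_EOM_RCVD 0x01  /* request fully received */
#define HS_ST_HTTP_EOM_SENT 0x02  /* response fully sent */

/* Both directions finished: the hstream may be shut down. */
static int toy_can_shut(unsigned int flags)
{
    unsigned int both = HS_ST_HTTP_EOM_RCVD | HS_ST_HTTP_EOM_SENT;
    return (flags & both) == both;
}

/* Wake the task when a direction is unfinished or data remain buffered. */
static int toy_must_wake(unsigned int flags, int buffered)
{
    return !toy_can_shut(flags) || buffered;
}
```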
2026-05-05 19:01:03 +02:00
Christopher Faulet
b945a3207b BUG/MINOR: haterm: Don't set HTX_FL_EOM flag on 100-Continue responses
A 100-Continue response is an intermediary message. So the end of the
message must not be announced.
2026-05-05 18:54:20 +02:00
Christopher Faulet
1bc050bc49 BUG/MEDIUM: haterm: Subscribe for receives until request was fully drained
When draining the request, if some data were received, no subscription
for receives was performed to get the remaining data. However, because
request data are just ignored, we must always subscribe until the
request is fully drained. Otherwise, haterm will never be woken up to
drain more data.
2026-05-05 18:54:20 +02:00
Christopher Faulet
999d71560d BUG/MINOR: haterm: Fix a possible integer overflow on the request body length
When request data were received, the request body length was decremented
accordingly, with no check to ensure it was actually set. However, it
remains equal to 0 for chunked requests or H2/H3 requests with no
content-length.

So now, it is only decremented when it is greater than 0.
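
The guarded decrement amounts to the following pattern (a generic sketch, not the actual haterm code):

```c
#include <assert.h>

/* Decrement the announced body length by the amount received, but only
 * when a length was actually set: chunked or H2/H3 requests without
 * content-length keep it at 0, and an unguarded subtraction would wrap
 * around to a huge unsigned value. */
static unsigned long long toy_body_dec(unsigned long long body_len,
                                       unsigned long long rcvd)
{
    if (body_len > 0)
        body_len -= (rcvd < body_len) ? rcvd : body_len;
    return body_len;
}
```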
2026-05-05 18:54:16 +02:00
Christopher Faulet
3f7b2023c9 CLEANUP: haterm: Remove useless IS_HTX_SC() test
Haterm is an HTTP endpoint. There is no reason to test whether its SC is
an HTX SC or not. Let's remove the IS_HTX_SC() test.
2026-05-05 18:36:34 +02:00
Christopher Faulet
f19312ab4b BUG/MINOR: haterm: Immediately report error when draining the request
When draining the request data, if an error was reported while some data
were received, the error was not processed immediately. This part was
copied from tcpchecks, where the response should be processed first. For
haterm, the request data are ignored. So there is no reason to wait to
handle the error. It could be an issue because the response may be sent
in the meantime.
2026-05-05 18:36:34 +02:00
Christopher Faulet
e373fd6319 CLEANUP: haterm: Remove duplicated block to know if haterm must drain
When haterm was waiting for request headers, there were two tests to
know if it had to drain the request data before replying. One of them
was useless and was thus removed.
2026-05-05 18:36:34 +02:00
Christopher Faulet
4af4feed33 BUG/MEDIUM: h1_htx: Remove reserved block on error during contig chunks parsing
In h1_parse_full_contig_chunks(), we first try to reserve the biggest
possible HTX DATA block. It is adjusted at the end of chunk parsing, or
removed if no data was copied. However, it should also be removed when a
parsing error is triggered. Otherwise it could be an issue for HTTP
health checks and haterm to properly handle errors.

This patch should be backported as far as 2.6.
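
The reserve-then-rollback pattern at stake can be sketched generically (not the real HTX API):

```c
#include <assert.h>
#include <stddef.h>

/* Reserve space up front, then either shrink it to what was actually
 * filled, or roll it back entirely on a parsing error or when nothing
 * was copied at all. */
struct toy_blk {
    size_t reserved;  /* bytes reserved up front */
    size_t used;      /* bytes actually filled */
    int live;         /* 0 once the block has been dropped */
};

static void toy_finish(struct toy_blk *b, size_t copied, int error)
{
    if (error || copied == 0) {
        b->live = 0;              /* drop the reserved block */
        b->reserved = b->used = 0;
        return;
    }
    b->used = copied;             /* shrink to the actual payload */
    b->reserved = copied;
}
```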
2026-05-05 18:36:34 +02:00
Christopher Faulet
9095785203 BUG/MINOR: http-fetch: Fix http_auth_bearer() when custom header is used
When the http_auth_bearer() sample fetch function is called with a
custom header and the header is not found or its type doesn't match
'Bearer', a mismatch must be reported instead of an empty string.

This patch should be backported as far as 2.6.
2026-05-05 18:36:04 +02:00
Willy Tarreau
9abfbbf0ba BUG/MINOR: http_fetch: Check return values of unchecked buffer operations
Several return values of chunk_istcat() or chunk_memcat() calls were not
tested. They are now checked, and 0 is returned on failure.

Concretely, for now, no error is expected to be triggered because the
result cannot exceed the buffer size: data are extracted from an HTX
message.

At first glance, there is no reason to backport it.
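
The pattern being enforced looks like this (a generic bounded-append sketch, not HAProxy's chunk API):

```c
#include <assert.h>
#include <string.h>

/* Append n bytes to a fixed-size buffer, reporting failure instead of
 * silently truncating, so callers can test the return value. */
static int toy_memcat(char *buf, size_t *len, size_t size,
                      const char *src, size_t n)
{
    if (n > size - *len)
        return 0;  /* would not fit: report the failure */
    memcpy(buf + *len, src, n);
    *len += n;
    return 1;
}

/* Small driver exercising the failure path on a full buffer. */
static int toy_demo(void)
{
    char buf[8];
    size_t len = 0;

    if (!toy_memcat(buf, &len, sizeof(buf), "abcd", 4))
        return -1;
    if (!toy_memcat(buf, &len, sizeof(buf), "efgh", 4))
        return -2;
    if (toy_memcat(buf, &len, sizeof(buf), "x", 1))
        return -3;  /* must fail: buffer is full */
    return (int)len;
}
```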
2026-05-05 18:36:04 +02:00
Christopher Faulet
51f8bb46af CLEANUP: http-fetch: Fix indentation of sample_fetch_keywords
Misplaced spaces before comma in 'urlp' keyword table entry.
2026-05-05 18:36:04 +02:00
Christopher Faulet
45ca881a6b CLEANUP: http-fetch: Adjust smp_fetch_url32_src() comment
The smp_fetch_base32() function was referenced instead of
smp_fetch_url32(). Let's fix it.
2026-05-05 18:36:04 +02:00
Christopher Faulet
e7482c4d0e CLEANUP: http-fetch: Remove duplicated return statement in smp_fetch_stver()
The return statement was needlessly repeated. Let's remove the second one.
2026-05-05 18:36:04 +02:00
Mia Kanashi
3fa0aa3664 BUG/MINOR: acme: contact mail should be optional, don't pass ToS bool
According to the ACME RFC, the contact email is optional.
Let's Encrypt used it a long time ago, but not today.
Currently, HAProxy always sets the value of the contact mail to a string
that is read from the config, but if that string is not specified,
the %s in mailto:%s is set to null, which causes the new account request
to fail in Pebble.

Also, HAProxy currently passes the termsOfServiceAgreed bool in requests
that contain onlyReturnExisting, which isn't needed according to the RFC
and other ACME implementations.

This patch dynamically builds the account request JSON to address that.

Can be backported to 3.2
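
The conditional JSON construction can be sketched as follows (a hypothetical helper, not the actual acme.c code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a newAccount payload: omit "contact" entirely when no mail is
 * configured, and send only onlyReturnExisting when probing for an
 * existing account (no termsOfServiceAgreed in that case). */
static int toy_acct_json(char *out, size_t size,
                         const char *mail, int only_existing)
{
    int n;

    if (only_existing)
        n = snprintf(out, size, "{\"onlyReturnExisting\":true}");
    else if (mail && *mail)
        n = snprintf(out, size,
                     "{\"termsOfServiceAgreed\":true,"
                     "\"contact\":[\"mailto:%s\"]}", mail);
    else
        n = snprintf(out, size, "{\"termsOfServiceAgreed\":true}");

    return (n > 0 && (size_t)n < size) ? n : -1;
}
```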
2026-05-05 18:04:19 +02:00