Commit Graph

19601 Commits

Frederic Lecaille
9831f596ea MINOR: quic-be: ->connect() protocol callback adaptations
Modify quic_connect_server() which is the ->connect() callback for QUIC protocol:
    - add a BUG_ON() run when entering this function: the <fd> socket must equal -1
    - conn->handle is a union. conn->handle.qc is used for the QUIC connection;
      conn->handle.fd must not be used to store the fd.
    - code alignment fix for the setsockopt(fd, SOL_SOCKET, (SO_SNDBUF|SO_RCVBUF))
      statements
    - remove the section of code which was duplicated from the ->connect() TCP callback
    - fd_insert() the new socket file descriptor created to connect to the QUIC
      server, with quic_conn_sock_fd_iocb() as callback for read events.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
52ec3430f2 MINOR: sock: Add protocol and socket types parameters to sock_create_server_socket()
This patch only adds a new <proto_type> proto_type enum parameter and a <sock_type>
socket type parameter to sock_create_server_socket() and adapts its callers.
This is to prepare the use of this function by QUIC servers/backends.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
9c84f64652 MINOR: quic-be: Add a function to initialize the QUIC client transport parameters
Implement qc_srv_params_init() to initialize the QUIC client transport parameters
for connections to haproxy servers/backends.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
f49bbd36b9 MINOR: quic-be: SSL sessions initializations
Modify qc_alloc_ssl_sock_ctx() to pass the connection object as parameter. It is
NULL for a QUIC listener, not NULL for a QUIC server. This connection object is
set as the value of the ->conn quic_conn struct member. Initialise the SSL session
object from this function for QUIC servers.
qc_ssl_set_quic_transport_params() is also modified to pass the SSL object as
parameter. This is the only parameter this function needs; the <qc> parameter is
used only for the trace.
SSL_do_handshake() must be called as soon as the SSL object is initialized for
the QUIC backend connection. This triggers the TLS CRYPTO data delivery.
tasklet_wakeup() is also called to send these CRYPTO data asap.
Modify the QUIC_EV_CONN_NEW event trace to dump the potential errors returned by
SSL_do_handshake().
2025-06-11 18:37:34 +02:00
Frederic Lecaille
1408d94bc4 MINOR: quic-be: ssl_sock contexts allocation and misc adaptations
Implement ssl_sock_new_ssl_ctx() to allocate an SSL server context, as is currently
done for TCP servers, and also for QUIC servers depending on the <is_quic> boolean
value passed as a new parameter. For QUIC servers, this function calls ssl_quic_srv_new_ssl_ctx()
which is specific to QUIC.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
7c76252d8a MINOR: quic-be: Correct the QUIC protocol lookup
From connect_server(), the QUIC protocol could not be retrieved by protocol_lookup()
because of the PROTO_TYPE_STREAM default passed as argument. Instead, to support
QUIC, srv->addr_type.proto_type may be safely passed.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
1e45690656 MINOR: quic-be: Add a function for the TLS context allocations
Implement ssl_quic_srv_new_ssl_ctx() whose aim is to allocate a TLS context
for QUIC servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
a4e1296208 MINOR: quic-be: QUIC server xprt already set when preparing their CTXs
The QUIC servers' xprts have already been set at server line parsing time.
This patch prevents the QUIC servers' xprts from being reset to the <ssl_sock> value,
which is the value used for SSL/TCP connections.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
24fc44c44d MINOR: quic-be: QUIC backend XPRT and transport parameters init during parsing
Add a new ->quic_params member to the server struct.
Also set the ->xprt member of the server being initialized and initialize its
transport parameters asap from _srv_parse_init().
2025-06-11 18:37:34 +02:00
Frederic Lecaille
0e67687ca9 MINOR: quic-be: Call ->prepare_srv() callback at parsing time
This XPRT callback is called from check_config_validity() after the configuration
has been parsed to initialize all the SSL server contexts.

This patch implements the same thing for the QUIC servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
5a711551a2 MINOR: quic-be: Version Information transport parameter check
Add a little check to verify that the version chosen by the server matches
the client one. Initialize the local transport parameters' ->negotiated_version
value with this version if this is the case. If not, return 0.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
990c9f95f7 MINOR: quic-be: Correct Version Information transp. param encoding
According to the RFC, a QUIC client must encode the QUIC versions it supports
into the "Available Versions" field of the "Version Information" transport
parameter, ordered by descending preference.

This is done by defining two new variable pointers, <quic_version_2> and
<quic_version_draft_29>, pointing to the corresponding elements of the
<quic_versions> array. A client announces its available versions as follows:
v1, v2, draft-29.
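
As a purely illustrative sketch (the struct and variable names below are
assumptions, not the actual haproxy code), the descending-preference ordering
described above could look like this, using the version numbers from RFC 9000,
RFC 9369 and draft-29:

  #include <stdint.h>

  /* minimal stand-in for a quic_versions array element (illustrative) */
  struct quic_version {
          uint32_t num;   /* wire encoding of the version */
  };

  static const struct quic_version quic_v1      = { .num = 0x00000001 }; /* RFC 9000 */
  static const struct quic_version quic_v2      = { .num = 0x6b3343cf }; /* RFC 9369 */
  static const struct quic_version quic_draft29 = { .num = 0xff00001d }; /* draft-29 */

  /* "Available Versions" content, most preferred first: v1, v2, draft-29 */
  static const struct quic_version *available_versions[] = {
          &quic_v1, &quic_v2, &quic_draft29,
  };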
2025-06-11 18:37:34 +02:00
Amaury Denoyelle
9c751a3cc1 MINOR: mux-quic-be: allow QUIC proto on backend side
Activate QUIC protocol support for MUX-QUIC on the backend side, in addition
to the current frontend support. This change is mandatory to be
able to implement QUIC on the backend side.

Without this modification, it is impossible to explicitly activate the QUIC
protocol on a server line, hence an error is reported :
  config : proxy 'xxxx' : MUX protocol 'quic' is not usable for server 'yyyy'
2025-06-11 18:37:34 +02:00
Amaury Denoyelle
f66b495f8e MINOR: server: mark QUIC support as experimental
Mark QUIC address support for servers as experimental on the backend
side. Previously, it was allowed but wouldn't function as expected. As
QUIC backend support requires several changes, it is better to declare
it as experimental first.
2025-06-11 18:37:33 +02:00
Amaury Denoyelle
1ecf2e9bab BUG/MINOR: config/server: reject QUIC addresses
QUIC is not implemented on the backend side. To prevent any issue, it is
better to reject any server configured to use it. This is done via
_srv_parse_init() which is used both for static and dynamic servers.

This should be backported up to all stable versions.
2025-06-11 18:37:17 +02:00
Christopher Faulet
9df380a152 MEDIUM: hlua: Update TCP applet functions to use the new applet API
The functions responsible to extract data from the applet input buffer or to
push data into the applet output buffer are now relying on the newly added
functions in the applet API. This simplifies the code a bit.
2025-06-10 08:16:10 +02:00
Frederic Lecaille
6b74633069 BUG/MINOR: quic: Missing SSL session object freeing
qc_alloc_ssl_sock_ctx() allocates an SSL_CTX object for each connection. It also
allocates an SSL object. When this function failed, it freed only the SSL_CTX object.
The correct way to free both of them is to call qc_free_ssl_sock_ctx().

Must be backported as far as 2.6.
2025-06-06 17:53:13 +02:00
Amaury Denoyelle
0cdf529720 BUG/MINOR: config: fix arg number reported on empty arg warning
If an empty argument is used in configuration, for example due to an
undefined environment variable, the rest of the line is not parsed. As
such, a warning is emitted to report this.

The warning was not totally correct as it reported the wrong argument
index. This patch fixes that. Note that there is still an issue with
the "^" indicator, but this is not as easy to fix yet.

This is related to github issue #2995.

This should be backported up to 3.2.
2025-06-06 17:03:02 +02:00
Amaury Denoyelle
5f1fad1690 BUG/MINOR: config: emit warning for empty args only in discovery mode
Hide the warning about empty arguments outside of discovery mode. This is
necessary, else the message would be displayed twice, which hampers the
readability of haproxy's output.

This should fix github issue #2995.

This should be backported up to 3.2.
2025-06-06 17:02:58 +02:00
Christopher Faulet
f5d41803d3 BUG/MEDIUM: cli: Properly parse empty lines and avoid crashes
Empty lines were not properly parsed and could lead to crashes because the
last argument was parsed outside of the cmdline buffer. Indeed, the last
argument is parsed to look for an eventual payload pattern. It is started
one character after the newline at the end of the command line. But it is
only valid for a non-empty command line.

So, now, this case is properly detected and we leave if an empty line is
detected.

This patch must be backported to 3.2.
2025-06-05 10:46:13 +02:00
Aurelien DARRAGON
16eb0fab31 MAJOR: counters: dispatch counters over thread groups
Most fe and be counters are good candidates for being shared between
processes. They are now grouped inside a "shared" struct sub-member under
be_counters and fe_counters.

Now that they are properly identified, they would greatly benefit from being
shared over thread groups to reduce the cost of atomic operations when
updating them. For this, we take the current tgid into account so each
thread group only updates its own counters. For this to work, it is
mandatory that the "shared" member from {fe,be}_counters is initialized
AFTER global.nbtgroups is known, because each shared counter causes the stat
to be allocated global.nbtgroups times. When updating a counter without
concurrency, the first counter from the array may be updated.

To consult the shared counters (which requires aggregation of per-tgid
individual counters), some helper functions were added to counter.h to
ease code maintenance and avoid computing errors.
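
A minimal sketch of that scheme (not the haproxy implementation; MAX_TGROUPS
and the helper names are assumptions): each thread group updates only its own
slot, and reading aggregates all slots:

  #include <stdatomic.h>

  #define MAX_TGROUPS 16   /* assumed bound, for illustration only */

  struct shared_counter {
          /* one slot per thread group, allocated global.nbtgroups times in haproxy */
          _Atomic unsigned long long slot[MAX_TGROUPS];
  };

  /* each thread group only touches its own slot, reducing atomic contention */
  static inline void counter_add(struct shared_counter *c, int tgid,
                                 unsigned long long v)
  {
          atomic_fetch_add_explicit(&c->slot[tgid - 1], v, memory_order_relaxed);
  }

  /* consulting the counter requires aggregating the per-tgid slots */
  static inline unsigned long long counter_read(struct shared_counter *c,
                                                int nbtgroups)
  {
          unsigned long long total = 0;

          for (int i = 0; i < nbtgroups; i++)
                  total += atomic_load_explicit(&c->slot[i], memory_order_relaxed);
          return total;
  }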
2025-06-05 09:59:38 +02:00
Aurelien DARRAGON
b599138842 MEDIUM: counters: manage shared counters using dedicated helpers
proxies, listeners and server shared counters are now managed via helpers
added in one of the previous commits.

When the guid is not set (i.e. not yet assigned), the shared counters pointer
is allocated using calloc() (local memory) and a flag is set on the shared
counters struct to know how to manipulate (and free) it. Else, if the guid is
set, it means that the counters may be shared, so while for now we don't
actually use a shared memory location the API is ready for that.

The way it works, for proxies and servers (for which the guid is not known
during creation), we first call counters_{fe,be}_shared_get with the guid not
set, which results in a local pointer being retrieved (as if we had just
manually called calloc() to retrieve a pointer). Later (during postparsing),
if the guid is set, we try to upgrade the pointer from local to shared.

Lastly, since the memory location for some objects (proxies and servers
counters) may change from creation to postparsing, let's update
counters->last_change member directly under counters_{fe,be}_shared_get()
so we don't miss it.

No change of behavior is expected, this is only preparation work.
2025-06-05 09:59:17 +02:00
Aurelien DARRAGON
aa53887398 MINOR: counters: add shared counters helpers to get and drop shared pointers
Create the include/haproxy/counters.h and src/counters.c files to anticipate
further helpers, as some counter-specific tasks need to be carried
out, and since counters are shared between multiple object types (ie:
listener, proxy, server..) we need generic helpers.

Add some shared counters helpers which are not yet used but will be used
in upcoming commits.
2025-06-05 09:59:04 +02:00
Aurelien DARRAGON
a0dcab5c45 MAJOR: counters: add shared counters base infrastructure
Shareable counters are now tagged as shared counters and are dynamically
allocated in a separate memory area as a prerequisite for being stored
in a shared memory area. For now, GUID and thread groups are not taken into
account, this is only a first step.

Also, we ensure all counters are now manipulated using atomic operations;
namely, the "last_change" counter is now read from and written to using atomic
ops.

Despite the numerous changes caused by the counters being moved away from
the counters struct, no change of behavior should be expected.
2025-06-05 09:58:58 +02:00
Aurelien DARRAGON
89b04f2191 CLEANUP: sink: remove useless cleanup in sink_new_from_logger()
As reported by Ilya in GH #2994, some cleanup parts in
sink_new_from_logger() function are not used.

We can actually simplify the cleanup logic to remove dead code, let's
do that by renaming "error_final" label to "error" and only making use
of the "error" label, because sink_free() already takes care of proper
cleanup for all sink members.
2025-06-05 09:58:50 +02:00
Christopher Faulet
8c4bb8cab3 BUG/MINOR: mux-spop: Fix null-pointer deref on SPOP stream allocation failure
When we try to allocate a new SPOP stream, if an error is encountered,
spop_strm_destroy() is called to release the possibly allocated
stream. But, it must only be called if a stream was allocated. If the
reported error is an SPOP stream allocation failure, we must just leave to
avoid a null-pointer dereference.
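
A generic sketch of the pattern behind the fix (illustrative only, not the
actual mux-spop code): the destroy/free helper must only run when the
allocation itself succeeded:

  #include <stdlib.h>

  struct spop_strm_sketch { int id; };   /* hypothetical stand-in for the SPOP stream */

  static struct spop_strm_sketch *strm_create(int fail_later)
  {
          struct spop_strm_sketch *strm = calloc(1, sizeof(*strm));

          if (!strm)
                  return NULL;      /* allocation failed: nothing to destroy, just leave */

          if (fail_later) {         /* some later initialization error */
                  free(strm);       /* only release a stream that was actually allocated */
                  return NULL;
          }
          return strm;
  }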

This patch should fix point 1 of the issue #2993. It must be backported as
far as 3.1.
2025-06-04 08:48:49 +02:00
Christopher Faulet
6786b05297 DEBUG: check: Add the healthcheck's expiration date in the trace messages
It could help to diagnose some issues about timeout processing. So let's add
it !
2025-06-03 15:06:12 +02:00
Christopher Faulet
7c788f0984 BUG/MEDIUM: check: Requeue healthchecks on I/O events to handle check timeout
When a healthcheck is processed, once the first wakeup has passed to start the
check, and as long as the expiration timer is not reached, only I/O events
are able to wake it up. It is an issue when there is a check timeout
defined.  Especially if the connect timeout is high and the check timeout is
low. In that case, the healthcheck's task is never requeued to handle any
timeout update. When the connection is established, the check timeout is set
to replace the connect timeout. It is thus possible to report a success
while a timeout should be reported.

So, now, when an I/O event is handled, the healthcheck is requeued, except if
a success or an abort is reported.

Thanks to Thierry Fournier for the report and the reproducer.

This patch must be backported to all stable versions.
2025-06-03 15:03:30 +02:00
Olivier Houchard
913b2d6c83 BUG/MAJOR: leastconn: Protect tree_elt with the lbprm lock
In fwlc_srv_reposition(), set the server's tree_elt while we still hold
the lbprm read lock. While it was protected from concurrent
fwlc_srv_reposition() calls by the server's lb_lock, it was not protected
from the dequeuing/requeuing that could occur if the server goes down/up or
its weight is changed, which would lead to inconsistencies and to the
watchdog killing the process because it is stuck in an infinite loop in
fwlc_get_next_server().

This hopefully fixes github issue #2990.

This should be backported to 3.2.
2025-06-03 04:42:47 +02:00
Aurelien DARRAGON
368d01361a MEDIUM: server: add and use srv_init() function
Rename the _srv_postparse() internal function to srv_init() and group
srv_init_per_thr() plus the idle conns list init inside it. This way we can
perform some simplifications as srv_init() performs multiple server
init steps after parsing.

The SRV_F_CHECKED flag was added; it is automatically set when srv_init()
runs successfully. If the flag is already set and srv_init() is called
again, nothing is done. This permits manually calling srv_init() earlier
than the default POST_CHECK hook when needed, without risking doing things
twice.
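
A sketch of the idempotent-init pattern described above; srv_init() and
SRV_F_CHECKED come from the commit message, but the struct, flag value and
body below are illustrative assumptions, not the actual haproxy code:

  #define SRV_F_CHECKED 0x0001   /* placeholder value, for illustration */

  struct server_sketch {
          unsigned int flags;
          /* ... */
  };

  static int srv_init(struct server_sketch *srv)
  {
          if (srv->flags & SRV_F_CHECKED)
                  return 0;               /* already initialized: calling again is a no-op */

          /* ... per-thread init, idle conns list init, requeue/slowstart setup ... */

          srv->flags |= SRV_F_CHECKED;    /* later POST_CHECK hooks will skip the work */
          return 0;
  }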
2025-06-02 17:51:33 +02:00
Aurelien DARRAGON
889ef6f67b MEDIUM: server: automatically add server to proxy list in new_server()
While new_server() takes the parent proxy as argument and even assigns
srv->proxy to the parent proxy, it didn't actually insert the server
into the parent proxy's server list on success.

The result is that sometimes we add the server to the list after
new_server() is called, and sometimes we don't.

This is really error-prone, and because of that, hooks such as
REGISTER_POST_SERVER_CHECK() which are run for all servers listed in
all proxies may not be relied upon for servers which are not actually
inserted in their parent proxy's server list. Plus it feels very strange
to have a server that points to a proxy, but then the proxy doesn't know
about it because it cannot find it in its server list.

To prevent errors and make the proxy->srv list reliable, we move the insertion
logic directly under new_server(). This requires knowing whether we are called
during parsing or at runtime to either insert or append the server to
the parent proxy list. For that we use the PR_FL_CHECKED flag from the parent
proxy (if the flag is set, then the proxy was checked so we are past the
init phase, thus we assume we are called during runtime).

This implies that during startup if new_server() has to be cancelled on
error paths we need to call srv_detach() (which is now exposed in server.h)
before srv_drop().

The consequence of this commit is that REGISTER_POST_SERVER_CHECK() should
now run reliably on all servers created using new_server() (without having
to manually loop on the global servers_list).
2025-06-02 17:51:30 +02:00
Aurelien DARRAGON
e262e4bbe4 MEDIUM: proxy: use global proxy list for REGISTER_POST_PROXY_CHECK() hook
REGISTER_POST_PROXY_CHECK() used to iterate over "main" proxies to run
registered callbacks. This means hidden proxies (and their servers) did
not get a chance to get post-checked and could cause issues if some post-
checks are expected to be executed on all proxies no matter their type.

Instead we now rely on the global proxies list. Another side effect is that
the REGISTER_POST_SERVER_CHECK() now runs as well for servers from proxies
that are not part of the main proxies list.
2025-06-02 17:51:27 +02:00
Aurelien DARRAGON
1f12e45b0a MINOR: log: only run postcheck_log_backend() checks on backend
postcheck_log_backend() checks are executed regardless of whether the proxy
actually has the backend capability, while the checks actually depend
on it.

Let's fix that by adding an extra condition to ensure that the BE
capability is set.

This issue is not tagged as a bug because for now it remains impossible
to have a syslog proxy without BE capability in the main proxy list, but
this may change in the future.
2025-06-02 17:51:24 +02:00
Aurelien DARRAGON
943958c3ff MINOR: proxy: add a true list containing all proxies
We have the global proxies_list pointer, which is announced as the list of
"all existing proxies", but in fact it only represents regular proxies
declared in the config file through the "listen", "frontend" or "backend" keywords.

It is ambiguous, and we currently don't have a straightforward method to
iterate over all proxies (either public or internal ones) within haproxy.

Instead we still have to manually iterate over multiple lists (main
proxies, log-forward proxies, peer proxies..) which is error-prone.

In this patch we add a struct list member (8 bytes) inside struct proxy
in order to store every proxy (except default ones) within a global
"proxies" list which is actually representative of all proxies existing
in the haproxy process, like we already have for servers.
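
The idea can be sketched as follows (member and variable names are
assumptions; haproxy's real struct list and insertion macros differ): every
proxy carries one list element linking it into a single process-wide list:

  /* doubly-linked list element, in the spirit of haproxy's struct list */
  struct list_sketch {
          struct list_sketch *n, *p;
  };

  struct proxy_sketch {
          /* ... existing members ... */
          struct list_sketch global_list;   /* links this proxy into <all_proxies> */
  };

  /* one list holding every proxy of the process, like servers_list for servers */
  static struct list_sketch all_proxies = { &all_proxies, &all_proxies };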
2025-06-02 17:51:21 +02:00
Aurelien DARRAGON
6ccf770fe2 MINOR: proxy: collect per-capability stat in proxy_cond_disable()
proxy_cond_disable() collects and prints cumulated connections for be and
fe proxies no matter their type. With shared stats it may cause issues
because depending on the proxy capabilities only fe or be counters may
be allocated.

In this patch we add some checks to ensure we only try to read from
valid memory locations, else we rely on default values (0).
2025-06-02 17:51:17 +02:00
Aurelien DARRAGON
c7c017ec3c MINOR: stats: add ME_NEW_COMMON() helper
Split ME_NEW_* helper into COMMON part and specific part so it becomes
easier to add alternative helpers without code duplication.
2025-06-02 17:51:12 +02:00
Aurelien DARRAGON
d04843167c MINOR: stats: add stat_col flags
Add stat_col flags member to store .generic bit and prepare for upcoming
flags. No functional change expected.
2025-06-02 17:51:08 +02:00
Aurelien DARRAGON
f0b40b49b8 MINOR: server: group postinit server tasks under _srv_postparse()
init_srv_requeue() and init_srv_slowstart() functions are called after
initial server parsing via REGISTER_POST_SERVER_CHECK() hook, and they
are also manually called for dynamic servers after the server is
initialized.

This may conflict with _srv_postparse() which is also registered via
REGISTER_POST_SERVER_CHECK() and called during dynamic server creation.

To ensure the functions don't conflict with each other, let's ensure they
are executed in the proper order by calling init_srv_requeue() and
init_srv_slowstart() from _srv_postparse() which now becomes the parent
function for server-related postparsing stuff. No change of behavior is
expected.
2025-06-02 17:51:05 +02:00
Willy Tarreau
b88164d9c0 BUILD: tools: properly define ha_dump_backtrace() to avoid a build warning
In resolve_sym_name() we declare a few symbols that we want to be able
to resolve. ha_dump_backtrace() was declared with a struct buffer instead
of a pointer to such a struct, which has no effect since we only want to
get the function's pointer, but produces a build warning with LTO, so
let's fix it.

This can be backported to 3.0.
2025-05-30 17:15:48 +02:00
Christopher Faulet
50fca6f0b7 BUG/MEDIUM: httpclient: Throw an error if an lua httpclient instance is reused
It is not expected/supported to reuse an httpclient instance to process
several requests. A new instance must be created for each request. However,
in lua, there is nothing to prevent a user from creating an httpclient object
and using it in a loop to process requests.

That's unfortunate because this will apparently work: the requests will be
sent and a response will be received and processed. However, internally some
resources will be allocated and never released. When the next response is
processed, the resources allocated for the previous one are definitively
lost.

In this patch we take care to check that the httpclient object was never
used when a request is sent from a lua script, by checking the
HTTPCLIENT_FS_STARTED flag. This flag is set when an httpclient applet is
spawned to process a request and is never removed after that. In lua, the
httpclient applet is created when the request is sent. So, it is the right
place to do this test.
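
A sketch of that guard (HTTPCLIENT_FS_STARTED is the flag name from the
commit message; the struct, flag value and helper below are illustrative
assumptions):

  #define HTTPCLIENT_FS_STARTED 0x01   /* placeholder value, for illustration */

  struct httpclient_sketch {
          unsigned int flags;
  };

  /* hypothetical check performed when lua sends a request through the instance */
  static int httpclient_may_start(struct httpclient_sketch *hc)
  {
          if (hc->flags & HTTPCLIENT_FS_STARTED)
                  return 0;    /* already used once: reject instead of leaking resources */

          hc->flags |= HTTPCLIENT_FS_STARTED;   /* set when the applet is spawned */
          return 1;
  }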

This patch should fix the issue #2986. It should be backported as far as
2.6.
2025-05-27 18:47:24 +02:00
Christopher Faulet
bc4c3c7969 BUG/MEDIUM: hlua: Fix receive API for TCP applets to properly handle shutdowns
An optional timeout was added to AppletTCP.receive() to interrupt calls after a
delay. It was mandatory to be able to implement interactive applets (like
trisdemo). However, this broke the API and made it impossible to differentiate
shutdowns from delay expirations. Indeed, in both cases, an empty
string was returned.

Because historically an empty string was used to notify a connection shutdown,
it should not be changed. So now, a 'nil' value is returned when no data was
available before the delay expiration.

The new AppletTCP:try_receive() function was also affected. To fix it, instead
of stating there is no delay when a receive is tried, an expired delay is
set. Concretely, TICK_ETERNITY was replaced by now_ms.

Finally, AppletTCP:getline() function is not concerned for now because there
is no way to interrupt it after some delay.

The documentation and trisdemo lua script were updated accordingly.

This patch depends on "BUG/MEDIUM: hlua: Properly detect shudowns for TCP
applets based on the new API". However, it is a 3.2-specific issue, so no
backport is needed.
2025-05-27 07:53:19 +02:00
Christopher Faulet
c0ecef71d7 BUG/MEDIUM: hlua: Fix getline() for TCP applets to work with applet's buffers
The commit e5e36ce09 ("BUG/MEDIUM: hlua/cli: Fix lua CLI commands to work
with applet's buffers") fixed the TCP applets API to work with applets using
their own buffers. However, the getline() function was not updated. It could be
an issue for anyone registering CLI commands reading lines.

This patch should be backported as far as 3.0.
2025-05-27 07:53:01 +02:00
Christopher Faulet
c64781c2c8 BUG/MEDIUM: hlua: Properly detect shutdowns for TCP applets based on the new API
The internal function responsible for receiving data for TCP applets with
internal buffers is buggy. Indeed, for these applets, the buffer API is used
to get data. So there are no tests on the SE to properly detect connection
shutdowns. So, this must be performed by hand after the call to b_getblk_nc().

This patch must be backported as far as 3.0.
2025-05-26 19:00:00 +02:00
Christopher Faulet
4d4da515f2 BUG/MEDIUM: cli/ring: Properly handle shutdown in "show event" I/O handler
The commit 03dc54d802 ("BUG/MINOR: ring: Fix I/O handler of "show event"
command to not rely on the SC") introduced a regression. By removing
dependencies on the SC, a test to detect client shutdowns was removed. So
now, the CLI applet is no longer released when the client shuts the
connection down during a "show event -w".

So of course, we should not use the SC to detect the shutdowns. But the SE
must be used instead.

It is a 3.2-specific issue, so no backport needed.
2025-05-26 19:00:00 +02:00
Christopher Faulet
99e755d673 MINOR: listeners: Add support for a label on bind line
It is now possible to set a label on a bind line. All sockets attached to
this bind line inherit this label. The idea is to be able to group
sockets. For now, there is no mechanism to create these groups; this must be
done by hand.
2025-05-26 19:00:00 +02:00
Christopher Faulet
e70c23e517 BUG/MEDIUM: h3: Declare absolute URI as normalized when a :authority is found
Since commit 2c3d656f8 ("MEDIUM: h3: use absolute URI form with
:authority"), the absolute URI form is used when a ':authority'
pseudo-header is found. However, this URI was not declared as normalized
internally.  So, when the request is reformatted to be sent to an h1 server,
the absolute-form is used instead of the origin-form. It is unexpected and
may be an issue for some servers that could reject the request.

So, now, we take care to set HTX_SL_F_HAS_AUTHORITY flag on the HTX message
when an authority was found and HTX_SL_F_NORMALIZED_URI flag is set for
"http" or "https" schemes.

No backport needed because the commit above must not be backported. It
should fix a regression reported on the 3.2-dev17 in issue #2977.

This commit depends on "BUG/MINOR: h3: Set HTX flags corresponding to the
scheme found in the request".
2025-05-26 11:47:23 +02:00
Christopher Faulet
da9792cca8 BUG/MINOR: h3: Set HTX flags corresponding to the scheme found in the request
When a ":scheme" pseudo-header is found in a h3 request, the
HTX_SL_F_HAS_SCHM flag must be set on the HTX message. And if the scheme is
'http' or 'https', the corresponding HTX flag must also be set. So,
respectively, HTX_SL_F_SCHM_HTTP or HTX_SL_F_SCHM_HTTPS.
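
A sketch of the flag derivation described above (the flag values and the
helper are illustrative assumptions; the real flags live in haproxy's HTX
headers):

  #include <stdint.h>
  #include <string.h>

  #define HTX_SL_F_HAS_SCHM   0x0010   /* placeholder values, for illustration */
  #define HTX_SL_F_SCHM_HTTP  0x0020
  #define HTX_SL_F_SCHM_HTTPS 0x0040

  /* hypothetical helper: derive HTX start-line flags from the :scheme value */
  static uint32_t h3_scheme_to_htx_flags(const char *scheme, size_t len)
  {
          uint32_t flags = HTX_SL_F_HAS_SCHM;

          if (len == 4 && memcmp(scheme, "http", 4) == 0)
                  flags |= HTX_SL_F_SCHM_HTTP;
          else if (len == 5 && memcmp(scheme, "https", 5) == 0)
                  flags |= HTX_SL_F_SCHM_HTTPS;
          return flags;
  }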

It is mainly used to send the right ":scheme" pseudo-header value to H2
server on backend side.

This patch could be backported as far as 2.6.
2025-05-26 11:38:29 +02:00
Remi Tricot-Le Breton
90441e9bfe BUG/MAJOR: cache: Crash because of wrong cache entry deleted
When "vary" is enabled, we can have multiple entries for a given primary
key in the cache tree. There is a limit to how many secondary entries
can be inserted for a given key. When we try to insert a new secondary
entry, if the limit is already reached, we can try to find expired
entries with the same primary key, and if the limit is still reached we
want to abort the current insertion and to remove the node that was just
inserted.

In commit "a29b073: MEDIUM: cache: Add refcount on cache_entry" though,
a regression was introduced. Instead of removing the entry just inserted
as the comments suggested, we removed the second to last entry and
returned NULL. We then reset the eb.key of the cache_entry in the caller
because we assumed that the entry was already removed from the tree.

This means that some entries with an empty key were wrongly kept in the
tree, and the last secondary entry, which keeps the count of secondary
entries for a given key, was removed.

This ended up causing some crashes later on when we tried to iterate
over the elements of this given key. The crash could occur in multiple
places, either when trying to retrieve an entry or to add some new ones.

This crash was raised in GitHub issue #2950.
The fix should be backported up to 3.0.
2025-05-23 22:38:54 +02:00
Willy Tarreau
84ffb3d0a9 MINOR: config: list recently added sections with -dKcfg
Newly added sections (crt-store, traces, acme) were not listed in
-dKcfg, let's add them. For now they have to be manually enumerated.
2025-05-23 10:49:33 +02:00
Willy Tarreau
28c7a22790 BUG/MEDIUM: server: fix potential null-deref after previous fix
A valid build warning was reported in the CI with latest commit b40ce97ecc
("BUG/MEDIUM: server: fix crash after duplicate GUID insertion"). Indeed,
if the first test in the function fails, we branch to the err label
with guid==NULL and will crash there. Let's just test guid before
dereferencing it for freeing.

This needs to be backported to 3.0 as well since the commit above was
meant to go there.
2025-05-22 18:09:12 +02:00
Amaury Denoyelle
b40ce97ecc BUG/MEDIUM: server: fix crash after duplicate GUID insertion
On "add server", if a GUID is defined, guid_insert() is used to add the
entry into the global GUID tree. If a similar entry already exists, GUID
insertion fails and the server creation is eventually aborted.

A crash could occur in this case because of an invalid memory access via
guid_remove(). The latter is called via free_server() as the server
insertion is rejected. The invalid access occurs on the GUID key.

The issue occurs because of guid_insert(). The function properly
deallocates the GUID key on duplicate insertion, but it failed to reset
<guid.node.key> to NULL. This caused the invalid memory access on
guid_remove(). To fix this, ensure that the key member is properly reset
on the guid_insert() error path.
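
The corrected error path can be sketched like this (illustrative only; the
real code manipulates haproxy's ebtree node, here replaced by a plain struct):

  #include <stdlib.h>

  struct guid_node_sketch {
          void *key;     /* stand-in for guid.node.key */
  };

  /* on duplicate insertion: free the key AND reset it, so that a later
   * guid_remove() called from free_server() does not touch freed memory
   */
  static void guid_insert_error_path(struct guid_node_sketch *guid)
  {
          free(guid->key);
          guid->key = NULL;
  }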

This must be backported up to 3.0.
2025-05-22 17:59:37 +02:00
Amaury Denoyelle
5e088e3f8e MINOR: server: use stress mode for "add server help"
Implement stress mode on "add server help". This ensures that the
command is fully reentrant on full output buffer.

For testing, it requires compilation with USE_STRESS and global setting
"stress-level 1".
2025-05-22 17:40:05 +02:00
Amaury Denoyelle
4de5090976 MINOR: server: implement "add server help"
Implement "help" as a sub-command for "add server" CLI. The objective is
to list all the keywords that are supported for dynamic servers. CLI IO
handler and add_srv_ctx are used to support reentrancy on full output
buffer.

Now that this command is implemented, the outdated keyword list on "add
server" from management documentation can be removed.
2025-05-22 17:40:05 +02:00
Amaury Denoyelle
2570892c41 MINOR: server: define CLI I/O handler for "add server"
Extend "add server" to support an IO handler function named
cli_io_handler_add_server(). A context object is also defined whose
usage will depend on IO handler capabilities.

IO handler is skipped when "add server" is run in default mode, i.e. on
a dynamic server creation. Thus, currently IO handler is unneeded.
However, it will become useful to support sub-commands for "add server".

Note that return value of "add server" parser has been changed on server
creation success. Previously, it was used incorrectly to report if
server was inserted or not. In fact, parser return value is used by CLI
generic code to detect if command processing has been completed, or
should continue to the IO handler. Now, "add server" always returns 1 to
signal that CLI processing is completed. This is necessary to preserve
CLI output emitted by parser, even now that IO handler is defined for
the command. Previously, output was emitted in every situation due to the
IO handler not being defined. See below code snippet from cli.c for a better
overview :

  if (kw->parse && kw->parse(args, payload, appctx, kw->private) != 0) {
          ret = 1;
          goto fail;
  }

  /* kw->parse could set its own io_handler or io_release handler */
  if (!appctx->cli_ctx.io_handler) {
          ret = 1;
          goto fail;
  }

  appctx->st0 = CLI_ST_CALLBACK;
  ret = 1;
  goto end;
2025-05-22 17:40:05 +02:00
Willy Tarreau
1c0f2e62ad MINOR: ssl: also provide the "tls-tickets" bind option
Currently there is "no-tls-tickets" that is also supported in the
ssl-default-bind-options directive, but there's no way to re-enable
them on a specific "bind" line. This patch simply provides the option
to re-enable them. Note that the flag is inverted because tickets are
enabled by default and the no-tls-ticket option sets the flag to
disable them.
2025-05-22 15:31:54 +02:00
Willy Tarreau
3494775a1f MINOR: ssl: support strict-sni in ssl-default-bind-options
Several users already reported that it would be nice to support
strict-sni in ssl-default-bind-options. However, in order to support
it, we also need an option to disable it.

This patch moves the setting of the option from the strict_sni field
to a flag in the ssl_options field so that it can be inherited from
the default bind options, and adds a new "no-strict-sni" directive to
allow to disable it on a specific "bind" line.

The test file "del_ssl_crt-list.vtc" which already tests both options
was updated to make use of the default option and the no- variant to
confirm everything continues to work.
2025-05-22 15:31:54 +02:00
Christopher Faulet
7244f16ac4 MINOR: promex: Add agent check status/code/duration metrics
In the Prometheus exporter, the last health check status is already exposed,
with its code and duration in seconds. The server status is also exposed.
But the information about the agent check are not available. It is not
really handy because when a server status is changed because of the agent,
it is not obvious by looking at the Prometheus metrics. Indeed, the server
may be reported as DOWN for instance, while the health check status still
reports a success. Being able to get the agent status in that case could be
valuable.

So now, the last agent check status is exposed, with its code and duration
in seconds. The following metrics can now be retrieved:

  * haproxy_server_agent_status
  * haproxy_server_agent_code
  * haproxy_server_agent_duration_seconds

Note that unlike the other metrics, no per-backend aggregated metric is
exposed.

This patch is related to issue #2983.
2025-05-22 09:50:10 +02:00
Willy Tarreau
a1577a89a0 MINOR: glitches: add global setting "tune.glitches.kill.cpu-usage"
It was mentioned during the development of glitches that it would be
nice to support not killing misbehaving connections below a certain
CPU usage so that poor implementations that routinely misbehave without
impact are not killed. This is now possible by setting a CPU usage
threshold under which we don't kill them via this parameter. It defaults
to zero so that we continue to kill them by default.
2025-05-21 15:47:42 +02:00
Willy Tarreau
eee57b4d3f CLEANUP: cfgparse: alphabetically sort the global keywords
The global keywords table was no longer sorted at all, let's fix it to
ease spotting the searched ones.
2025-05-21 15:47:42 +02:00
Amaury Denoyelle
01e3b2119a MINOR: quic: add some missing includes
Insert some missing include statements in QUIC source files. This was
detected with the next commit, which adjusts the include list used in the
quic_conn-t.h file.
2025-05-21 14:44:27 +02:00
Amaury Denoyelle
f286288471 MINOR: quic: refactor handling of streams after MUX release
The quic-conn layer has to handle STREAM frames itself after MUX release. If
the stream was already seen, it is probably only a retransmitted frame
which can be safely ignored. For other streams, an active closure may be
needed.

Thus it's necessary that the quic-conn layer knows the highest stream ID
already handled by the MUX after its release. Previously, this was done
via the <nb_streams> member array in the quic-conn structure.

Refactor this by replacing <nb_streams> by two members called
<stream_max_uni>/<stream_max_bidi>. Indeed, it is unnecessary for the
quic-conn layer to monitor locally opened uni streams, as the peer
cannot by definition emit a STREAM frame on them. Also, bidirectional
streams are always opened by the remote side.

Previously, <nb_streams> was set by the quic-stream layer. Now, the
<stream_max_uni>/<stream_max_bidi> members are only set one time, just
prior to QUIC MUX release. This is sufficient as quic-conn does not use
them if the MUX is available.

Note that previously, IDs were used relative to their type, thus
incremented by 1, after shifting the original value. For simplification,
use the plain stream ID, which is incremented by 4.
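
For reference, QUIC stream IDs encode the stream type in their two low-order
bits (RFC 9000), which is why consecutive streams of the same type differ by 4
and tracking the plain ID works. A couple of illustrative helpers (not the
haproxy code):

  #include <stdbool.h>
  #include <stdint.h>

  /* bit 0: initiator (0 = client, 1 = server); bit 1: 0 = bidi, 1 = uni */
  static inline bool quic_stream_is_uni(uint64_t id)
  {
          return id & 0x2;
  }

  static inline bool quic_stream_is_server_initiated(uint64_t id)
  {
          return id & 0x1;
  }

  /* next stream ID of the same type as <id> */
  static inline uint64_t quic_stream_next_same_type(uint64_t id)
  {
          return id + 4;
  }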
2025-05-21 14:26:45 +02:00
Amaury Denoyelle
07d41a043c MINOR: quic: move function to check stream type in utils
Move the general function that checks whether a stream is uni- or bidirectional
from QUIC MUX to the quic_utils module. This should prevent unnecessary
inclusion of the QUIC MUX header file in other sources.
2025-05-21 14:17:41 +02:00
Amaury Denoyelle
cf45bf1ad8 CLEANUP: quic: remove unused cbuf module
Cbuf are not used anymore. Remove the related source and header files,
as well as include statements in the rest of QUIC source files.
2025-05-21 14:16:37 +02:00
William Lallemand
8b121ab6f7 BUG/MINOR: acme: fix formatting issue in error and logs
Stop emitting \n in errmsg for intermediate error messages; this was
emitting multiline logs and was breaking to a new line in the middle of
sentences.

We don't need to emit them in acme_start_task() since the errmsg is
output in a send_log which already contains a \n, or on the CLI which
also emits it.
2025-05-21 11:41:28 +02:00
William Lallemand
156f4bd7a6 BUG/MEDIUM: acme: check if acme domains are configured
When starting the ACME task with a ckch_conf which does not contain the
domains, the ACME task would segfault because it would try to dereference
a NULL pointer in this case.

The patch fixes the issue by emitting a warning when no domains are
configured. It's not done at configuration parsing time because it is not
easy to emit the warning there: there is no callback system which
gives access to the whole ckch_conf once a line is parsed.

No backport needed.
2025-05-21 11:41:28 +02:00
Amaury Denoyelle
e399daa67e BUG/MEDIUM: mux-quic: fix BUG_ON() on rxbuf alloc error
RX buffer allocation has been reworked in current dev tree. The
objective is to support multiple buffers per QCS to improve upload
throughput.

RX buffer allocation failure is handled simply : the whole connection is
closed. This is done via qcc_set_error(), with INTERNAL_ERROR as error
code. This function contains a BUG_ON() to ensure it is called only one
time per connection instance.

On RX buffer alloc failure, the aforementioned BUG_ON() crashes due to a
double invocation of qcc_set_error(): first by qcs_get_rxbuf(), and
immediately after it by qcc_recv(), which is the caller of the previous
one. This regression was introduced by the following commit.

  60f64449fb
  MAJOR: mux-quic: support multiple QCS RX buffers

To fix this, simply remove the qcc_set_error() invocation in
qcs_get_rxbuf(). On buffer alloc failure, qcc_recv() is responsible for
setting the error.

This does not need to be backported.
2025-05-21 11:33:00 +02:00
Willy Tarreau
4b52d5e406 BUILD: acme: fix build issue on 32-bit archs with 64-bit time_t
The build failed on mips32 with a 64-bit time_t here:

  https://github.com/haproxy/haproxy/actions/runs/15150389164/job/42595310111

Let's just turn the "remain" variable used to show the remaining time
into a more portable ullong and use %llu for all format specifiers,
since long remains limited to 32-bit on 32-bit archs.

No backport needed.
2025-05-21 10:18:47 +02:00
Willy Tarreau
09d4c9519e BUILD: ssl: avoid possible printf format warning in traces
When building on MIPS-32 with gcc-9.5 and glibc-2.31, I got this:

  src/ssl_trace.c: In function 'ssl_trace':
  src/ssl_trace.c:118:42: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'ssize_t' {aka 'const int'} [-Wformat=]
    118 |     chunk_appendf(&trace_buf, " : size=%ld", *size);
        |                                        ~~^   ~~~~~
        |                                          |   |
        |                                          |   ssize_t {aka const int}
        |                                          long int
        |                                        %d

Let's just cast the type. No backport needed.
2025-05-21 10:01:14 +02:00
Willy Tarreau
3b2fb5cc15 CLEANUP: wdt: clarify the comments on the common exit path
The conditions in which we reach the checks for ha_panic() and
ha_stuck_warning() are not super clear; let's reformulate them.
2025-05-20 16:37:06 +02:00
Willy Tarreau
0a8bfb5b90 BUG/MEDIUM: wdt: always ignore the first watchdog wakeup
With commit a06c215f08 ("MEDIUM: wdt: always make the faulty thread
report its own warnings"), when the TH_FL_STUCK flag was flipped on,
we'd then go to the panic code instead of giving a second chance like
before the commit. This can trigger rare cases that only happen with
moderate loads like was addressed by commit 24ce001771 ("BUG/MEDIUM:
wdt: fix the stuck detection for warnings"). This is in fact due to
the loss of the common "goto update_and_leave" that used to serve
both the warning code and the flag setting for probation, and it's
apparently what hit Christian in issue #2980.

Let's make sure we exit naturally when turning the bit on for the
first time. Let's also update the confusing comment at the end of
the check that was left over by latest change.

Since the first commit was backported to 3.1, this commit should be
backported there as well.
2025-05-20 16:37:03 +02:00
Frederic Lecaille
08eee0d9cf MINOR: quic: OpenSSL 3.5 trick to support 0-RTT
For an unidentified reason, SSL_do_handshake() succeeds at its first call when 0-RTT
is enabled for the connection. This behavior looks very similar to the one encountered
with the AWS-LC stack. That said, it was documented by AWS-LC. This issue leads the
connection to stop sending handshake packets after having released the handshake
encryption level. In fact, no handshake packets could even be sent, leading
the handshake to always fail.

To fix this, this patch simulates a "handshake in progress" state waiting
for the application level read secret to be established by the TLS stack.
This may happen only after the QUIC listener has completed/confirmed the handshake
upon handshake CRYPTO data receipt from the peer.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
849a3af14e MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
A QUIC peer must send its transport parameters using a TLS custom extension. This
extension is reset by SSL_set_SSL_CTX(). It can be restored by calling
quic_ssl_set_tls_cbs() (which calls SSL_set_quic_tls_cbs()).
2025-05-20 15:00:06 +02:00
Frederic Lecaille
b3ac1a636c MINOR: quic: implement all remaining callbacks for OpenSSL 3.5 QUIC API
The quic_conn struct is modified for two reasons. The first one is to store
the encoded version of the local transport parameters, as this is done for
USE_QUIC_OPENSSL_COMPAT. Indeed, the local transport parameters "should remain
valid until after the parameters have been sent" as mentioned by the
SSL_set_quic_tls_cbs(3) manual. In our case, the buffer is a static buffer
attached to the quic_conn object. The role of the qc_ssl_set_quic_transport_params()
function is to call SSL_set_tls_quic_transport_params() (aliased by
SSL_set_quic_transport_params()) to set these local transport parameters into
the TLS stack from the buffer attached to the quic_conn struct.

The second quic_conn struct modification is the addition of the new ->prot_level
(SSL protection level) member added to the quic_conn struct to store "the most
recent write encryption level set via the OSSL_FUNC_SSL_QUIC_TLS_yield_secret_fn
callback (if it has been called)" as mentioned by the SSL_set_quic_tls_cbs(3) manual.

This patch finally implements the five remaining callbacks to make the haproxy
QUIC implementation work.

OSSL_FUNC_SSL_QUIC_TLS_crypto_send_fn() (ha_quic_ossl_crypto_send) is easy to
implement. It calls ha_quic_add_handshake_data() after having converted
qc->prot_level TLS protection level value to the correct ssl_encryption_level_t
(boringSSL API/quictls) value.

OSSL_FUNC_SSL_QUIC_TLS_crypto_recv_rcd_fn() (ha_quic_ossl_crypto_recv_rcd())
provides the non-contiguous addresses to the TLS stack, without releasing
them.

OSSL_FUNC_SSL_QUIC_TLS_crypto_release_rcd_fn() (ha_quic_ossl_crypto_release_rcd())
releases these non-contiguous buffers, relying on the fact that the list of
encryption levels (qc->qel_list) is correctly ordered by the SSL protection level
secret establishment order (by the TLS stack).

OSSL_FUNC_SSL_QUIC_TLS_yield_secret_fn() (ha_quic_ossl_yield_secret())
is a simple wrapping function over ha_quic_set_encryption_secrets() which is used
by the boringSSL/quictls API.

The role of OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn() (ha_quic_ossl_got_transport_params())
is to store the transport parameters received from the peer. It simply calls
quic_transport_params_store() and sets them into the TLS stack by calling
qc_ssl_set_quic_transport_params().

Also add some comments for all the OpenSSL 3.5 QUIC API callbacks.

This patch has no impact on the other uses of the QUIC API provided by the other TLS
stacks.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
dc6a3c329a MINOR: quic: Allow the use of the new OpenSSL 3.5.0 QUIC TLS API (to be completed)
This patch allows the use of the new OpenSSL 3.5.0 QUIC TLS API when it is
available and detected at compilation time. The detection relies on the presence of the
OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND macro from openssl-compat.h. Indeed this
macro has been defined by OpenSSL since version 3.5.0. It is not defined by quictls.
This helps in distinguishing these two TLS stacks. When the detection succeeds,
HAVE_OPENSSL_QUIC is also defined by openssl-compat.h. Then, this new macro
is used to detect the availability of the new OpenSSL 3.5.0 QUIC TLS API.

Note that this detection is done only if USE_QUIC_OPENSSL_COMPAT is not asked.
So, USE_QUIC_OPENSSL_COMPAT and HAVE_OPENSSL_QUIC are exclusive.
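
A condensed sketch of the detection logic described above (the real
openssl-compat.h conditions are more involved; this only restates the rule and
the exclusivity with USE_QUIC_OPENSSL_COMPAT):

  /* openssl-compat.h (sketch): OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND only exists
   * since OpenSSL 3.5.0 and is not defined by quictls, so its presence is used
   * to tell the two stacks apart, unless the compatibility layer was requested.
   */
  #if !defined(USE_QUIC_OPENSSL_COMPAT) && defined(OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND)
  #define HAVE_OPENSSL_QUIC
  #endif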

At the same location, in openssl-compat.h, the ssl_encryption_level_t enum is
defined. This enum was defined by quictls and is extensively used by the haproxy
QUIC implementation. SSL_set_quic_transport_params() is replaced by
SSL_set_quic_tls_transport_params(). SSL_set_quic_early_data_enabled() (quictls) is also replaced
by SSL_set_quic_tls_early_data_enabled() (OpenSSL). SSL_quic_read_level() (quictls)
is not defined by OpenSSL. It is only used by the traces to log the current
TLS stack decryption level (read). A macro makes it return -1, which is an
unused value.

Most of the differences between the quictls and OpenSSL QUIC APIs are in quic_ssl.c
where some callbacks must be defined for these two APIs. This is why this
patch modifies quic_ssl.c to define an array of OSSL_DISPATCH structs: <ha_quic_dispatch>.
Each element of this array defines a callback. So, this patch implements these
six callbacks:

  - ha_quic_ossl_crypto_send()
  - ha_quic_ossl_crypto_recv_rcd()
  - ha_quic_ossl_crypto_release_rcd()
  - ha_quic_ossl_yield_secret()
  - ha_quic_ossl_got_transport_params() and
  - ha_quic_ossl_alert().

But at this time, these implementations, which must return an int, return 0, interpreted
as a failure by the OpenSSL QUIC API, except for ha_quic_ossl_alert() which
is implemented the same way as for quictls. The five remaining functions above
will be implemented by the next patches to come.

ha_quic_set_encryption_secrets() and ha_quic_add_handshake_data() have been moved
to be defined for both quictls and OpenSSL QUIC API.

These callbacks are attached to the SSL objects (sessions) by calling the new
qc_ssl_set_cbs() function. The latter calls the correct function to attach the
correct callbacks to the SSL objects (defined by <ha_quic_method> for quictls,
and <ha_quic_dispatch> for OpenSSL).

The calls to SSL_provide_quic_data() and SSL_process_quic_post_handshake()
have also been disabled. These functions are not defined by the OpenSSL QUIC API.
At this time, the functions which call them are still defined when HAVE_OPENSSL_QUIC
is defined.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
894595b711 MINOR: quic: Add useful error traces about qc_ssl_sess_init() failures
There were no traces to diagnose qc_ssl_sess_init() failures from QUIC traces.
This patch adds calls to TRACE_DEVEL() into qc_ssl_sess_init() and its caller
(qc_alloc_ssl_sock_ctx()). This was useful at least to diagnose SSL context
initialization failures when porting QUIC to the new OpenSSL 3.5 QUIC API.

Should be easily backported as far as 2.6.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
a2822b1776 CLEANUP: quic: Useless BIO_METHOD initialization
This code has been there since the start of the QUIC implementation. It was
supposed to initialize <ha_quic_meth> as a static BIO_METHOD object. But this
BIO_METHOD is not used at all!

Should be backported as far as 2.6 to help integrate the next patches to come.
2025-05-20 15:00:06 +02:00
William Lallemand
e803385a6e MINOR: acme: renewal notification over the dpapi sink
Output a sink message when the certificate was renewed by the ACME
client.

The message is emitted on the "dpapi" sink, and ends by \n\0.
Since the message contains this binary character, the right -0 parameter
must be used when consulting the sink over the CLI:

Example:

	$ echo "show events dpapi -nw -0" | socat -t9999 /tmp/haproxy.sock -
	<0>2025-05-19T15:56:23.059755+02:00 acme newcert foobar.pem.rsa\n\0

When used with the master CLI, @@1 should be used instead of @1 in order
to keep the connection to the worker.

Example:

	$ echo "@@1 show events dpapi -nw -0" | socat -t9999 /tmp/master.sock -
	<0>2025-05-19T15:56:23.059755+02:00 acme newcert foobar.pem.rsa\n\0
2025-05-19 16:07:25 +02:00
Willy Tarreau
99d6c889d0 BUG/MAJOR: leastconn: never reuse the node after dropping the lock
On ARM with 80 cores and a single server, it's sometimes possible to see
a segfault in fwlc_get_next_server() around 600-700k RPS. It seldom
happens as well on x86 with 128 threads with the same config around 1M
rps. It turns out that in fwlc_get_next_server(), before calling
fwlc_srv_reposition(), we have to drop the lock and that one takes it
back again.

The problem is that anything can happen to our node during this time,
and it can be freed. Then when continuing our work, we later iterate
over it and its next to find a node with an acceptable key, and by
doing so we can visit either uninitialized memory or simply nodes that
are no longer in the tree.

A first attempt at fixing this consisted in artificially incrementing
the elements count before dropping the lock, but that turned out to be
even worse because other threads could loop forever on such an element
looking for an entry that does not exist. Maintaining a separate
refcount didn't work well either, and it required to deal with the
memory release while dropping it, which is really not convenient.

Here we're taking a different approach consisting in simply not
trusting this node anymore and going back to the beginning of the
loop, as is done at a few other places as well. This way we can
safely ignore the possibly released node, and the test runs reliably
both on the arm and the x86 platforms mentioned above. No performance
regression was observed either, likely because this operation is quite
rare.

No backport is needed since this appeared with the leastconn rework
in 3.2.
2025-05-19 16:05:03 +02:00
Amaury Denoyelle
d358da4d83 BUG/MINOR: quic: fix crash on quic_conn alloc failure
If there is an alloc failure during qc_new_conn(), cleaning is done via
quic_conn_release(). However, since the below commit, an unchecked
dereferencing of <qc.path> is performed in the latter.

  e841164a44
  MINOR: quic: account for global congestion window

To fix this, simply check <qc.path> before dereferencing it in
quic_conn_release(). This is safe as it is properly initialized to NULL
on qc_new_conn() first stage.

This does not need to be backported.
2025-05-19 11:03:48 +02:00
Willy Tarreau
099c1b2442 BUG/MAJOR: queue: properly keep count of the queue length
The queue length was moved to its own variable in commit 583303c48
("MINOR: proxies/servers: Calculate queueslength and use it."), however a
few places were missed in pendconn_unlink() and assign_server_and_queue()
resulting in never decreasing counts on aborted streams. This was
reproduced when injecting more connections than the total backend
could stand in TCP mode and letting some of them time out in the
queue. No backport is needed, this is only 3.2.
2025-05-17 10:46:10 +02:00
Willy Tarreau
6be02d1c6e BUG/MAJOR: leastconn: do not loop forever when facing saturated servers
Since commit 9fe72bba3 ("MAJOR: leastconn; Revamp the way servers are
ordered."), there's no way to escape the loop visiting the mt_list heads
in fwlc_get_next_server if all servers in the list are saturated,
resulting in a watchdog panic. It can be reproduced with this config
and injecting with more than 2 concurrent conns:

    balance leastconn
    server s1 127.0.0.1:8000 maxconn 1
    server s2 127.0.0.1:8000 maxconn 1

Here we count the number of saturated servers that were encountered, and
escape the loop once the number of remaining servers exceeds the number
of saturated ones. No backport is needed since this arrived in 3.2.
2025-05-17 10:44:36 +02:00
Willy Tarreau
ccc65012d3 IMPORT: slz: silence a build warning on non-x86 non-arm
Building with clang 16 on MIPS64 yields this warning:

  src/slz.c:931:24: warning: unused function 'crc32_uint32' [-Wunused-function]
  static inline uint32_t crc32_uint32(uint32_t data)
                         ^

Let's guard it using UNALIGNED_LE_OK which is the only case where it's
used. This saves us from introducing a possibly non-portable attribute.

This is libslz upstream commit f5727531dba8906842cb91a75c1ffa85685a6421.
2025-05-16 16:43:53 +02:00
Willy Tarreau
31ca29eee1 IMPORT: slz: fix header used for empty zlib message
Calling slz_rfc1950_finish() without emitting any data would result in
incorrectly emitting a gzip header (rfc1952) instead of a zlib header
(rfc1950) due to a copy-paste between the two wrappers. The impact is
almost inexistent since the zlib format is almost never used in this
context, and compressing totally empty messages is quite rare as well.
Let's take this opportunity for fixing another mistake on an RFC number
in a comment.

This is slz upstream commit 7f3fce4f33e8c2f5e1051a32a6bca58e32d4f818.
2025-05-16 16:43:53 +02:00
Willy Tarreau
411b04c7d3 IMPORT: slz: use a better hash for machines with a fast multiply
The current hash involves 3 simple shifts and additions so that it can
be mapped to a multiply on architectures having a fast multiply. This is
indeed what the compiler does on x86_64. A large range of values was
scanned to try to find more optimal factors on machines supporting such
a fast multiply, and it turned out that new factor 0x1af42f resulted in
smoother hashes that provided on average 0.4% better compression on both
the Silesia corpus and an mbox file composed of very compressible emails
and uncompressible attachments. It's even slightly better than CRC32C
while being faster on Skylake. This patch enables this factor on archs
with a fast multiply.
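
A hedged sketch of the idea (the hash width and shift below are assumptions,
not the exact slz code): on such machines the former shift-and-add hash folds
into a single multiply by the new factor:

  #include <stdint.h>

  #define HASH_BITS 12   /* assumed width, for illustration only */

  static inline uint32_t slz_hash_fast_mul(uint32_t data)
  {
          /* one multiply replaces the former three shifts and additions */
          return (data * 0x1af42fU) >> (32 - HASH_BITS);
  }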

This is slz upstream commit 82ad1e75c13245a835c1c09764c89f2f6e8e2a40.
2025-05-16 16:43:53 +02:00
Willy Tarreau
248bbec83c IMPORT: slz: support crc32c for lookup hash on sse4 but only if requested
If building for sse4 and USE_CRC32C_HASH is defined, then we can use
crc32c to calculate the lookup hash. By default we don't do it because
even on skylake it's slower than the current hash, which only involves
a short multiply (~5% slower). But the gains are marginal (0.3%).

This is slz upstream commit 44ae4f3f85eb275adba5844d067d281e727d8850.

Note: this is not used by default and only merged in order to avoid
divergence between the code bases.
2025-05-16 16:43:53 +02:00
Willy Tarreau
ea1b70900f IMPORT: slz: avoid multiple shifts on 64-bits
On 64-bit platforms, disassembling the code shows that send_huff() performs
a left shift followed by a right one, which are the result of integer
truncation and zero-extension caused solely by using different types at
different levels in the call chain. By making encode24() take a 64-bit
int on input and send_huff() take one optionally, we can remove one shift
in the hot path and gain 1% performance without affecting other platforms.

This is slz upstream commit fd165b36c4621579c5305cf3bb3a7f5410d3720b.
2025-05-16 16:43:53 +02:00
Willy Tarreau
df00164fdd BUG/MEDIUM: h1/h2/h3: reject forbidden chars in the Host header field
In continuation with 9a05c1f574 ("BUG/MEDIUM: h2/h3: reject some
forbidden chars in :authority before reassembly") and the discussion
in issue #2941, @DemiMarie rightfully suggested that Host should also
be sanitized, because it is sometimes used in concatenation, such as
this:

    http-request set-url https://%[req.hdr(host)]%[pathq]

which was proposed as a workaround for h2 upstream servers that require
:authority here:

    https://www.mail-archive.com/haproxy@formilux.org/msg43261.html

The current patch then adds the same check for forbidden chars in the
Host header, using the same function as for the patch above, since in
both cases we validate the host:port part of the authority. This way
we won't reconstruct ambiguous URIs by concatenating Host and path.

Just like the patch above, this can be backported after a period of
observation.
2025-05-16 15:13:17 +02:00
Willy Tarreau
b84762b3e0 BUG/MINOR: h3: don't insert more than one Host header
Let's make sure we drop extraneous Host headers after having compared
them. That also works when :authority was already present. This way,
like for h1 and h2, we only keep one copy of it, while still making
sure that Host matches :authority. This way, if a request has both
:authority and Host, only one Host header will be produced (from
:authority). Note that due to the different organization of the code
and wording along the evolving RFCs, here we also check that all
duplicates are identical, while h2 ignores them as per RFC7540, but
this will be re-unified later.

This should be backported to stable versions, at least 2.8, though
thanks to the existing checks the impact is probably null.
2025-05-16 15:13:17 +02:00
Christopher Faulet
f45a632bad BUG/MEDIUM: stconn: Disable 0-copy forwarding for filters altering the payload
It is especially a problem with Lua filters, but it is important to disable
the 0-copy forwarding if a filter alters the payload, or at least to be able
to disable it. While the filter is registered on data filtering, it is not
an issue (and it is the common case) because there is no way to fast-forward
data at all. But it may be an issue if a filter decides to alter the payload
and to unregister from data filtering. In that case, the 0-copy forwarding
can be re-enabled in a hardly predictable state.

To fix the issue, an SC flag was added to do so. The HTTP compression filter
sets it, and Lua filters do too if the body length is changed (via
HTTPMessage.set_body_len()).

Note that it is an issue because of a design flaw in the HTX. A lot of
information about the message is stored in the HTX structure itself. It must
be refactored to move several pieces of information to the stream-endpoint
descriptor. This should ease modifications at the stream level, from a
filter or TCP/HTTP rules.

This should be backported as far as 3.0. If necessary, it may be backported
on lower versions, as far as 2.6. In that case, it must be reviewed and
adapted.
2025-05-16 15:11:37 +02:00
Christopher Faulet
94055a5e73 MEDIUM: hlua: Add function to change the body length of an HTTP Message
There was no function for a Lua filter to change the body length of an HTTP
message, but it is mandatory to be able to alter the message payload. It is
not possible to directly update the message headers because the internal
state of the message must also be updated accordingly.

This is the purpose of the HTTPMessage.set_body_len() function. The new body
length must be passed as argument. If it is an integer, the proper
"Content-Length" header is set. If the "chunked" string is used, it forces
the message to be chunked-encoded, and in that case the "Transfer-Encoding"
header is set.

This patch should fix the issue #2837. It could be backported as far as 2.6.
2025-05-16 14:34:12 +02:00
Willy Tarreau
f2d7aa8406 BUG/MEDIUM: peers: also limit the number of incoming updates
There's a configurable limit to the number of messages sent to a
peer (tune.peers.max-updates-at-once), but this one is not applied to
the receive side. While it can usually be OK with default settings,
setups involving a large tune.bufsize (1MB and above) regularly
experience high latencies and even watchdogs during reloads because
the full learning process sends a lot of data that manages to fill
the entire buffer. Due to the compactness of the protocol, 1MB
of buffer can contain more than 100k updates, which means holding locks
etc. during all this time, which is not workable.

Let's make sure the receiving side also respects the max-updates-at-once
setting. For this it counts incoming updates, and refrains from
continuing once the limit is reached. It's a bit tricky to do because
after receiving updates we still have to send ours (and possibly some
ACKs) so we cannot just leave the loop.

This issue was reported on 3.1 but it should progressively be backported
to all versions having the max-updates-at-once option available.
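
A minimal illustration of the tuning involved (the value below is only an
illustrative figure, not a recommendation):

    global
        # cap on the number of peer updates handled per pass; with this
        # patch it is honoured on the receive side as well
        tune.peers.max-updates-at-once 200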
2025-05-15 16:57:21 +02:00
Aurelien DARRAGON
098a5e5c0b BUG/MINOR: sink: detect and warn when using "send-proxy" options with ring servers
using "send-proxy" or "send-proxy-v2" option on a ring server is not
relevant nor supported. Worse, on 2.4 it causes haproxy process to
crash as reported in GH #2965.

Let's be more explicit about the fact that this keyword is not supported
under "ring" context by ignoring the option and emitting a warning message
to inform the user about that.

Ideally, we should do the same for peers and log servers. The proper way
would be to check servers options during postparsing but we currently lack
proper cross-type server postparsing hooks. This will come later and thus
will give us a chance to perform the compatibilty checks for server
options depending on proxy type. But for now let's simply fix the "ring"
case since it is the only one that's known to cause a crash.

It may be backported to all stable versions.
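
For reference, a minimal sketch of a setup that now triggers the warning
instead of being silently accepted (names and sizes are made up):

    ring myring
        size 32k
        server log1 127.0.0.1:6514 send-proxy   # option is now ignored with a warning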
2025-05-15 16:18:31 +02:00
Christopher Faulet
e2ae8a74e8 DEBUG: mux-spop: Review some trace messages to adjust the message or the level
Some trace messages were not really accurate, reporting a CLOSED connection
while only an error was reported on it. In addition, a TRACE_ERROR() was
used to report a short read on HELLO/DISCONNECT frame headers. But it is not
an error. A TRACE_DEVEL() should be used instead.

This patch could be backported to 3.1 to ease future backports.
2025-05-14 11:52:10 +02:00
Christopher Faulet
6e46f0bf93 BUG/MEDIUM: mux-spop; Don't report a read error if there are pending data
When a read error is detected, no error must be reported on the SPOP
connection if there are still some data to parse. It is important to be sure
to process all data before reporting the error and be sure not to truncate
received frames. However, we must also take care to handle the short read
case so as not to wait for data that will never be received.

This patch must be backported to 3.1.
2025-05-14 11:51:58 +02:00
Christopher Faulet
16314bb93c BUG/MEDIUM: mux-spop: Properly detect truncated frames on demux to report error
There was no test in the demux part to detect truncated frames and to report
an error at the connection level. The SPOP streams were properly switched to
the half-closed state. But while waiting for the associated SPOE applets to
be woken up and released, the SPOP connection could be woken up several
times for nothing. I never triggered the watchdog in that case, but it is
not excluded.

Now, at the end of the demux function, a specific test was added to detect
truncated frames, report an error and close the connection.

This patch must be backported to 3.1.
2025-05-14 11:47:41 +02:00
Christopher Faulet
71feb49a9f BUG/MEDIUM: spop-conn: Report short read for partial frames payload
When a frame was not fully received, a short read must be reported on the
SPOP connection to help the demux to handle truncated frames. This was
performed for frames truncated on the header part but not on the payload
part. It is now properly detected.

This patch must be backported to 3.1.
2025-05-14 09:20:10 +02:00
Christopher Faulet
ddc5f8d92e BUG/MEDIUM: mux-spop: Properly handle CLOSING state
The CLOSING state was not handled at all by the SPOP multiplexer while it is
mandatory when a DISCONNECT frame was sent and the mux should wait for the
DISCONNECT frame in reply from the agent. Thanks to this patch, it should be
fixed.

In addition, if an error occurs during the AGENT HELLO frame parsing, the
SPOP connection is no longer switched to CLOSED state and remains in ERROR
state instead. It is important to be able to send the DISCONNECT frame to
the agent instead of closing the TCP connection immediately.

This patch depends on following commits:

  * BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
  * MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
  * BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
  * BUG/MINOR: mux-spop: Make the demux stream ID a signed integer

All the series must be backported to 3.1.
2025-05-14 09:14:12 +02:00
Christopher Faulet
a3940614c2 BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
SPOP_CS_FRAME_H and SPOP_CS_FRAME_P states, that were used to handle frame
parsing, were removed. The demux process now relies on the demux stream ID
to know if it is waiting for the frame header or the frame
payload. Concretely, when the demux stream ID is not set (dsi == -1), the
demuxer is waiting for the next frame header. Otherwise (dsi >= 0), it is
waiting for the frame payload. It is especially important to be able to
properly handle DISCONNECT frames sent by the agents.

The SPOP_CS_RUNNING state is introduced to indicate that the hello handshake
is finished and that the SPOP connection is able to open SPOP streams and
exchange NOTIFY/ACK
frames with the agents.

It depends on the following fixes:

  * MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
  * BUG/MINOR: mux-spop: Make the demux stream ID a signed integer

This change will be mandatory for the next fix. It must be backported to 3.1
with the commits above.
2025-05-13 19:51:40 +02:00
Christopher Faulet
6b0f7de4e3 MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
After the ACK frame was parsed, it is useless to set the SPOP connection
state to SPOP_CS_FRAME_H because this will be automatically handled by
the demux function. It is not an issue, but this will simplify changes
for the next commit.
2025-05-13 19:51:40 +02:00
Christopher Faulet
197eaaadfd BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
Till now, only fully closed SPOP connections or those with a TCP connection in
error were concerned. But available streams could also be reported for SPOP
connections in error or in closing state. In these states, no NOTIFY frames
will be sent and no ACK frames will be parsed. So, no new SPOP streams should be
opened.

This patch should be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
cbc10b896e BUG/MINOR: mux-spop: Make the demux stream ID a signed integer
The demux stream ID of a SPOP connection, used when received frames are
parsed, must be a signed integer because it is set to -1 when the SPOP
connection is initialized. It will be important for the next fix.

This patch must be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
6d68beace5 BUG/MINOR: mux-spop: Don't report error for stream if ACK was already received
When a SPOP connection was closed or was in error, an error was
systematically reported on all its SPOP streams. However, SPOP streams that
already received their ACK frame must be excluded. Otherwise, if an agent
sends an ACK and closes immediately, the ACK will be ignored because the SPOP
stream will handle the error first.

This patch must be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
1cd30c998b BUG/MINOR: spoe: Don't report error on applet release if filter is in DONE state
When the SPOE applet was released, if a SPOE filter context was still
attached to it, an error was reported to the filter. However, there is no
reason to report an error if the ACK message was already received. Because
of this bug, if the ACK message is received and the SPOE connection is
immediately closed, this prevents the ACK message from being processed.

This patch should be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
a5de0e1595 BUG/MINOR: hlua: Fix Channel:data() and Channel:line() to respect documentation
When the channel API was revisited, both functions above were added. An
offset can be passed as argument. However, this parameter could be reported
to be out of range if not enough input data had been received yet. It
is an issue, especially with a tcp rule, because more data could be
received. If an error is reported too early, this prevents the rule from
being reevaluated later. In fact, an error should only be reported if the
offset is part of the output data.

Another issue is about the conditions to report 'nil' instead of an empty
string. 'nil' was reported when no data was found. But it is not aligned
with the documentation. 'nil' must only be returned if no more data can
be received and there is no input data at all.

This patch should fix the issue #2716. It should be backported as far as 2.6.
2025-05-13 19:51:40 +02:00
Willy Tarreau
158da59c34 MEDIUM: cpu-topo: prefer grouping by CCX for "performance" and "efficiency"
Most of the time, machines made of multiple CPU types use the same L3
for them, and grouping CPUs by frequencies to form groups doesn't bring
any value and on the contrary can impair the incoming connection balancing.
This choice of grouping by cluster was made in order to constitute a good
choice on homogeneous machines as well, so better rely on the per-CCX
grouping than the per-cluster one in this case. This will create fewer
clusters on machines where it counts without affecting other ones.

It doesn't seem necessary to change anything for the "resource" policy
since it selects a single cluster.
2025-05-13 16:48:30 +02:00
Willy Tarreau
70b0dd6b0f MEDIUM: cpu-topo: change "efficiency" to consider per-core capacity
This is similar to the previous change to the "performance" policy but
it applies to the "efficiency" one. Here we're changing the sorting
method to sort CPU clusters by average per-CPU capacity, and we evict
clusters whose per-CPU capacity is above 125% of the previous one.
Per-core capacity makes it possible to detect discrepancies between CPU
cores, and to continue to focus on efficient ones as a priority.
2025-05-13 16:48:30 +02:00
Willy Tarreau
6c88e27cf4 MEDIUM: cpu-topo: change "performance" to consider per-core capacity
Running the "performance" policy on highly heterogenous systems yields
bad choices when there are sufficiently more small than big cores,
and/or when there are multiple cluster types, because on such setups,
the higher the frequency, the lower the number of cores, despite small
differences in frequencies. In such cases, we quickly end up with
"performance" only choosing the small or the medium cores, which is
contrary to the original intent, which was to select performance cores.
This is what happens on boards like the Orion O6 for example where only
the 4 medium cores and 2 big cores are chosen, evicting the 2 biggest
cores and the 4 smallest ones.

Here we're changing the sorting method to sort CPU clusters by average
per-CPU capacity, and we evict clusters whose per-CPU capacity falls
below 80% of the previous one. Per-core capacity makes it possible to detect
discrepancies between CPU cores, and to continue to focus on high
performance ones as a priority.
2025-05-13 16:48:30 +02:00
Willy Tarreau
5ab2c815f1 MINOR: cpu-topo: provide a function to sort clusters by average capacity
The current per-capacity sorting function acts on a whole cluster, but
in some setups having many small cores and few big ones, it becomes
easy to observe an inversion of metrics where the many small cores show
a globally higher total capacity than the few big ones. This does not
necessarily fit all use cases. Let's add a new function to sort clusters
by their per-cpu average capacity to cover more use cases.
2025-05-13 16:48:30 +02:00
Willy Tarreau
01df98adad MINOR: cpu-topo: add a new "group-by-ccx" CPU policy
This cpu-policy will only consider CCX and not clusters. This makes
a difference on machines with heterogeneous CPUs that generally share
the same L3 cache, where it's not desirable to create multiple groups
based on the CPU types, but instead create one with the different CPU
types. The variants "group-by-2/3/4-ccx" have also been added.

Let's also add some text explaining the difference between cluster
and CCX.
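
As a quick illustration, a hedged sketch of selecting this policy, assuming
it is exposed through the global "cpu-policy" keyword:

    global
        # group threads by shared L3 (CCX) rather than by cluster;
        # "group-by-2-ccx", "group-by-3-ccx" and "group-by-4-ccx" are the
        # variants mentioned above
        cpu-policy group-by-ccx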
2025-05-13 16:48:30 +02:00
Willy Tarreau
33d8b006d4 BUG/MINOR: cpu-topo: fix group-by-cluster policy for disordered clusters
Some (rare) boards have their clusters in an erratic order. This is
the case for the Radxa Orion O6 where one of the big cores appears as
CPU0 due to booting from it, then followed by the small cores, then the
medium cores, then the remaining big cores. This results in clusters
appearing in this order: 0,2,1,0.

The code in cpu_policy_group_by_cluster() expected ordered clusters,
and performs ordered comparisons to decide whether a CPU's cluster has
already been taken care of. On the board above this doesn't work, only
clusters 0 and 2 appear and 1 is skipped.

Let's replace the cluster number comparison with a cpuset to record
which clusters have been taken care of. Now the groups properly appear
like this:

  Tgrp/Thr  Tid        CPU set
  1/1-2     1-2        2: 0,11
  2/1-4     3-6        4: 1-4
  3/1-6     7-12       6: 5-10

No backport is needed, this is purely 3.2.
2025-05-13 16:48:30 +02:00
Amaury Denoyelle
f3b9676416 MINOR: quic: display stream age
Add a field to save the creation date of qc_stream_desc instance. This
is useful to display QUIC stream age in "show quic stream" output.
2025-05-13 15:44:22 +02:00
Amaury Denoyelle
dbf07c754e MINOR: quic: display QCS info on "show quic stream"
Complete stream output for "show quic" by displaying information from
its upper QCS. Note that QCS may be NULL if already released, so a
default output is also provided.
2025-05-13 15:43:28 +02:00
Amaury Denoyelle
cbadfa0163 MINOR: quic: add stream format for "show quic"
Add a new format for "show quic" command labelled as "stream". This is
an equivalent of "show sess", dedicated to the QUIC stack. Each active
QUIC stream is listed on a line with its related info.

The main objective of this command is to ensure there are no frozen
streams remaining after a transfer.
2025-05-13 15:41:51 +02:00
Amaury Denoyelle
1ccede211c MINOR: mux-quic: account Rx data per stream
Add counters to measure Rx buffer usage per QCS. This reuses the newly
defined bdata_ctr type already used for Tx accounting.

Note that for now, the <tot> value of bdata_ctr is not used. This is because
it is not easy to account for data across contiguous buffers.

These values are displayed both on log/traces and "show quic" output.
2025-05-13 15:41:51 +02:00
Amaury Denoyelle
a1dc9070e7 MINOR: quic: account Tx data per stream
Add accounting at qc_stream_desc level to be able to report the number
of allocated Tx buffers and the sum of their data. This represents data
ready for emission or already emitted and waiting on ACK.

To simplify this accounting, a new counter type bdata_ctr is defined in
quic_utils.h. This regroups both the buffer and data counters, plus a
maximum for the buffer value.

These values are now displayed on QCS info used both on logline and
traces, and also on "show quic" output.
2025-05-13 15:41:41 +02:00
Willy Tarreau
9a05c1f574 BUG/MEDIUM: h2/h3: reject some forbidden chars in :authority before reassembly
As discussed here:
   https://github.com/httpwg/http2-spec/pull/936
   https://github.com/haproxy/haproxy/issues/2941

It's important to take care of some special characters in the :authority
pseudo header before reassembling a complete URI, because after assembly
it's too late (e.g. the '/'). This patch does this, both for h2 and h3.

The impact on H2 was measured in the worst case at 0.3% of the request
rate, while the impact on H3 is around 1%, but H3 was about 1% faster
than H2 before and is now on par.

It may be backported after a period of observation, and in this case it
relies on this previous commit:

   MINOR: http: add a function to validate characters of :authority

Thanks to @DemiMarie for reviving this topic in issue #2941 and bringing
new potential interesting cases.
2025-05-12 18:02:47 +02:00
Aurelien DARRAGON
c40d6ac840 BUG/MINOR: server: perform lbprm deinit for dynamic servers
Last commit 7361515 ("BUG/MINOR: server: dont depend on proxy for server
cleanup in srv_drop()") introduced a regression because the lbprm
server_deinit is not evaluated anymore with dynamic servers, possibly
resulting in a memory leak.

To fix the issue, in addition to free_proxy(), the server deinit check
should be manually performed in cli_parse_delete_server() as well.

No backport needed.
2025-05-12 16:29:36 +02:00
Aurelien DARRAGON
736151556c BUG/MINOR: server: dont depend on proxy for server cleanup in srv_drop()
In commit b5ee8bebfc ("MINOR: server: always call ssl->destroy_srv when
available"), we made it so srv_drop() doesn't depend on proxy to perform
server cleanup.

It turns out this is now mandatory, because during deinit, free_proxy()
can occur before the final srv_drop(). This is the case when using Lua
scripts for instance.

In 2a9436f96 ("MINOR: lbprm: Add method to deinit server and proxy") we
added a freeing check under srv_drop() that depends on the proxy.
Because of that, a UAF may occur during deinit when using a Lua script that
manipulates server objects.

To fix the issue, let's perform the lbprm server deinit logic under
free_proxy() directly, where the DEINIT server hooks are evaluated.

Also, to prevent similar bugs in the future, let's explicitly document
in srv_drop() that server cleanups should assume that the proxy may
already be freed.

No backport needed unless 2a9436f96 is.
2025-05-12 16:17:26 +02:00
Willy Tarreau
be4d816be2 BUG/MINOR: cfgparse: improve the empty arg position report's robustness
OSS Fuzz found that the previous fix ebb19fb367 ("BUG/MINOR: cfgparse:
consider the special case of empty arg caused by \x00") was incomplete,
as the output can sometimes be larger than the input (due to variables
expansion) in which case the workaround to try to report a bad arg will
fail. While the parse_line() function has been made more robust now in
order to avoid this condition, let's fix the handling of this special
case anyway by just pointing to the beginning of the line if the supposed
error location is out of the line's buffer.

All details here:
   https://oss-fuzz.com/testcase-detail/5202563081502720

No backport is needed unless the fix above is backported.
2025-05-12 16:11:15 +02:00
Willy Tarreau
2b60e54fb1 BUG/MINOR: tools: improve parse_line()'s robustness against empty args
The fix in 10e6d0bd57 ("BUG/MINOR: tools: only fill first empty arg when
not out of range") was not that good. It focused on protecting against
<arg> becoming out of range to detect we haven't emitted anything, but
it's not the right way to detect this. We're always maintaining arg_start
as a copy of outpos, and that later one is incremented when emitting a
char, so instead of testing args[arg] against out+arg_start, we should
instead check outpos against arg_start, thereby eliminating the <out>
offset and the need to access args[]. This way we now always know if
we've emitted an empty arg without dereferencing args[].

There's no need to backport this unless the fix above is also backported.
2025-05-12 16:11:15 +02:00
Aurelien DARRAGON
7d057e56af BUG/MINOR: threads: fix soft-stop without multithreading support
When thread support is disabled ("USE_THREAD=" or "USE_THREAD=0" when
building), soft-stop doesn't work as haproxy never ends after stopping
the proxies.

This used to work fine in the past but suddenly stopped working with
ef422ced91 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") because the "break;" instruction under the stopping
condition is never executed when support for multithreading is disabled.

To fix the issue, let's add an "else" block to run the "break;"
instruction when USE_THREAD is not defined.

It should be backported up to 2.8
2025-05-12 14:18:39 +02:00
William Lallemand
8b0d1a4113 MINOR: ssl/ckch: warn when the same keyword was used twice
When using a crt-list or a crt-store, keywords mentioned twice on the
same line overwrite the previous value.

This patch emits a warning when the same keyword is found another time
on the same line.
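
For illustration, a hedged sketch of a crt-store line that would now emit
the warning because "alias" appears twice (file names are made up):

    crt-store
        load crt "foo.com.crt" key "foo.com.key" alias "foo" alias "bar"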
2025-05-09 19:18:38 +02:00
William Lallemand
9c0c05b7ba BUG/MINOR: ssl/ckch: always ha_freearray() the previous entry during parsing
The ckch_conf_parse() function is the generic function which parses
crt-store keywords from the crt-store section, and also from a
crt-list.

When the same keyword appears multiple times, the previous value leaks.
This patch ensures that the previous value is always freed before
overwriting it.

This is the same problem as the previous "BUG/MINOR: ssl/ckch: always
free() the previous entry during parsing" patch, however this one
applies to PARSE_TYPE_ARRAY_SUBSTR.

No backport needed.
2025-05-09 19:16:02 +02:00
William Lallemand
96b1f1fd26 MINOR: tools: ha_freearray() frees an array of string
ha_freearray() is a new function which frees an array of strings
terminated by a NULL entry.

The pointer to the array will be freed and set to NULL.
2025-05-09 19:12:05 +02:00
William Lallemand
311e0aa5c7 BUG/MINOR: ssl/ckch: always free() the previous entry during parsing
The ckch_conf_parse() function is the generic function which parses
crt-store keywords from the crt-store section, and also from a crt-list.

When the same keyword appears multiple times, the previous value leaks.
This patch ensures that the previous value is always freed before
overwriting it.

This patch should be backported as far as 3.0.
2025-05-09 19:01:28 +02:00
William Lallemand
9ce3fb35a2 BUG/MINOR: ssl: prevent multiple 'crt' on the same ssl-f-use line
The 'ssl-f-use' implementation doesn't prevent the 'crt' keyword from being
used multiple times, each occurrence overwriting the previous value. This
lets users think that it is possible to use multiple certificates on the
same line, which is not the case.

This patch emits an alert when setting the 'crt' keyword multiple times
on the same ssl-f-use line.

Should fix issue #2966.

No backport needed.
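
For illustration, a hedged sketch of an ssl-f-use line that now triggers the
alert because 'crt' is given twice (certificate names are made up):

    frontend mysite
        bind :443 ssl
        ssl-f-use crt "foo.com.crt" crt "bar.com.crt"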
2025-05-09 18:52:09 +02:00
William Lallemand
0c4abf5a22 BUG/MINOR: ssl: doesn't fill conf->crt with first arg
Commit c7f29afc ("MEDIUM: ssl: replace "crt" lines by "ssl-f-use"
lines") forgot to remove an the allocation of the crt field which was
done with the first argument.

Since ssl-f-use takes keywords, this would put the first keyword in
"crt" instead of the certificate name.
2025-05-09 18:23:06 +02:00
Willy Tarreau
8a96216847 MEDIUM: sock-inet: re-check IPv6 connectivity every 30s
IPv6 connectivity might be off at startup (e.g. network not fully up when
haproxy starts), so for features like resolvers, it would be nice to
periodically recheck.

With this change, instead of having the resolvers code rely on a variable
indicating connectivity, it will now call a function that will check for
how long a connectivity check hasn't been run, and will perform a new one
if needed. The age was set to 30s which seems reasonable considering that
the DNS will cache results anyway. There's no saving in spacing it more
since the syscall is very cheap (just a connect() without any packet being
emitted).

The variables remain exported so that we could present them in show info
or anywhere else.

This way, "dns-accept-family auto" will now stay up to date. Warning
though, it does perform some caching so even with a refreshed IPv6
connectivity, an older record may be returned anyway.
2025-05-09 15:45:44 +02:00
Willy Tarreau
1404f6fb7b DEBUG: pools: add a new integrity mode "backup" to copy the released area
This way we can preserve the entire contents of the released area for
later inspection. This automatically enables comparison at reallocation
time as well (like "integrity" does). If used in combination with
integrity, the comparison is disabled but the check of non-corruption
of the area mangled by integrity is still performed.
2025-05-09 14:57:00 +02:00
William Lallemand
e7574cd5f0 MINOR: acme: add the global option 'acme.scheduler'
The automatic scheduler is useful, but sometimes you don't want to use it,
or you want to schedule renewals manually.

This patch adds an 'acme.scheduler' option in the global section, which
can be set to either 'auto' or 'off' ('auto' is the default value).

This also changes the output of the 'acme status' command so it does not
show scheduled values. The state will be 'Stopped' instead of
'Scheduled'.
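
A minimal sketch disabling the automatic scheduler so that renewals are only
triggered by hand:

    global
        acme.scheduler off   # 'auto' (the default) re-enables the scheduler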
2025-05-09 14:00:39 +02:00
Willy Tarreau
0ae14beb2a DEBUG: pool: permit per-pool UAF configuration
The new MEM_F_UAF flag can be set just after a pool's creation to make
this pool UAF for debugging purposes. This makes it possible to maintain a
better overall performance required to reproduce issues while still having a
chance to catch UAF. It will only be used by developers who will manually
add it to areas worth being inspected, though.
2025-05-09 13:59:02 +02:00
Amaury Denoyelle
14e4f2b811 BUG/MEDIUM: mux-quic: fix crash on invalid fctl frame dereference
Emission of flow-control frames has been recently modified. Now, each
frame is sent one by one, via a single entry list. If a failure occurs,
emission is interrupted and frame is reinserted into the original
<qcc.lfctl.frms> list.

This code is incorrect as it only checks if qcc_send_frames() returns an
error code to perform the reinsert operation. However, an error here
does not always mean that the frame was not properly emitted by lower
quic-conn layer. As such, an extra test LIST_ISEMPTY() must be performed
prior to reinserting the frame.

This bug would cause a heap overflow. Indeed, the reinserted frame would
be a random value. A crash would occur as soon as it would be
dereferenced via <qcc.lfctl.frms> list.

This was reproduced by issuing a POST with a big file and interrupting it
after just a few seconds. This results in a crash in about a third of
the tests. Here is an example command using ngtcp2 :

 $ ngtcp2-client -q --no-quic-dump --no-http-dump \
   -m POST -d ~/infra/html/1g 127.0.0.1 20443 "http://127.0.0.1:20443/post"

Heap overflow was detected via a BUG_ON() statement from qc_frm_free()
via qcc_release() caller :

  FATAL: bug condition "!((&((*frm)->reflist))->n == (&((*frm)->reflist)))" matched at src/quic_frame.c:1270

This does not need to be backported.
2025-05-09 11:07:11 +02:00
Willy Tarreau
ebb19fb367 BUG/MINOR: cfgparse: consider the special case of empty arg caused by \x00
The reporting of the empty arg location added with commit 08d3caf30
("MINOR: cfgparse: visually show the input line on empty args") falls
victim to a special case detected by OSS Fuzz:

     https://issues.oss-fuzz.com/issues/415850462

In short, making an argument start with "\x00" doesn't make it empty for
the parser, but still emits an empty string which is detected and
displayed. Unfortunately in this case the error pointer is not set so
the sanitization function crashes.

What we're doing in this case is that we fall back to the position of
the output argument as an estimate of where it was located in the input.
It's clearly inexact (quoting etc) but will still help the user locate
the problem.

No backport is needed unless the commit above is backported.
2025-05-09 10:01:44 +02:00
Amaury Denoyelle
3fdb039a99 BUG/MEDIUM: quic: free stream_desc on all data acked
The following patch simplifies qc_stream_desc_ack(). The qc_stream_desc
instance is not freed anymore, even if all data were acknowledged. As
implied by the commit message, the caller is responsible for performing this
cleaning operation.
  f4a83fbb14
  MINOR: quic: do not remove qc_stream_desc automatically on ACK handling

However, despite the commit instruction, qc_stream_desc_free()
invocation was not moved into the caller. This commit fixes this by adding
it after stream ACK handling. This is performed only when a transfer is
completed : all data is acknowledged and qc_stream_desc has been
released by its MUX stream instance counterpart.

This bug may cause a significant increase in memory usage when dealing
with long-running connections. However, there is no memory leak, as every
qc_stream_desc attached to a connection is finally freed when the quic_conn
instance is released.

This must be backported up to 3.1.
2025-05-09 09:25:47 +02:00
Willy Tarreau
576e47fb9a BUG/MEDIUM: stick-table: always remove update before adding a new one
Since commit 388539faa ("MEDIUM: stick-tables: defer adding updates to a
tasklet"), between the entry creation and its arrival in the updates tree,
there is time for scheduling, and it now becomes possible for an stksess
entry to be requeued into the list while it's still in the tree as a remote
one. Only local updates were removed prior to being inserted. In this case
we would re-insert the entry, causing it to appear as the parent of two
distinct nodes or leaves, and to be visited from the first leaf during a
delete() after having already been removed and freed, causing a crash,
as Christian reported in issue #2959.

There's no reason to backport this as this appeared with the commit above
in 3.2-dev13.
2025-05-08 23:32:25 +02:00
Aurelien DARRAGON
f03e999912 MINOR: server: ensure server postparse tasks are run for dynamic servers
Commit 29b76cae4 ("BUG/MEDIUM: server/log: "mode log" after server
keyword causes crash") introduced some postparsing checks/tasks for
servers.

Initially they were mainly meant for "mode log" server postparsing, but
we already have a check dedicated to "tcp/http" servers (ie: only the tcp
proto is supported).

However when dynamic servers are added they bypass _srv_postparse() since
the REGISTER_POST_SERVER_CHECK() is only executed for servers defined in
the configuration.

To ensure consistency between dynamic and static servers, and ensure no
post-check init routine is missed, let's manually invoke _srv_postparse()
after creating a dynamic server added via the cli.
2025-05-08 02:03:50 +02:00
Aurelien DARRAGON
976e0bd32f BUG/MINOR: cli: fix too many args detection for commands
d3f928944 ("BUG/MINOR: cli: Issue an error when too many args are passed
for a command") added a new check to prevent the command to run when
too many arguments are provided. In this case an error is reported.

However it turns out this check (despite marked for backports) was
ineffective prior to 20ec1de21 ("MAJOR: cli: Refacor parsing and
execution of pipelined commands") as 'p' pointer was reset to the end of
the buffer before the check was executed.

Now since 20ec1de21, the check works, but we have another issue: we may
read past initialized bytes in the buffer because the 'p' pointer is always
incremented in a while loop without checking if we increment it past 'end'
(this was detected using valgrind).

To fix the issue introduced by 20ec1de21, let's only increment the 'p'
pointer if p < end.

For 3.2 this is it, now for older versions, since d3f928944 was marked for
backport, a slightly different approach is needed:

 - conditional p increment must be done in the loop (as in this patch)
 - the max arg check must be moved above the "fill unused slots" comment where p is
   assigned to the end of the buffer

This patch should be backported with d3f928944.
2025-05-08 02:03:43 +02:00
Willy Tarreau
0cee7b5b8d BUG/MEDIUM: stick-tables: close a tiny race in __stksess_kill()
It might be possible not to see the element in the tree, then not to
see it in the update list, thus not to take the lock before deleting.
But an element in the list could have moved to the tree during the
check, and be removed later without the updt_lock.

Let's delete prior to checking the presence in the tree to avoid
this situation. No backport is needed since this arrived in -dev13
with the update list.
2025-05-07 18:49:21 +02:00
Willy Tarreau
006a3acbde BUG/MEDIUM: peers: hold the refcnt until updating ts->seen
In peer_treat_updatemsg(), we call stktable_touch_remote() after
releasing the write lock on the TS, asking it to decrement the
refcnt, then we update ts->seen. Unfortunately this is racy and
causes the issue that Christian reported in issue #2959.

The sequence of events is very hard to trigger manually, but what happens
is the following:

 T1.  stktable_touch_remote(table, ts, 1);
      -> at this point the entry is in the mt_list, and the refcnt is zero.

      T2.  stktable_trash_oldest() or process_table_expire()
           -> these can run, because the refcnt is now zero.
              The entry is cleanly deleted and freed.

 T1.  HA_ATOMIC_STORE(&ts->seen, 1)
      -> we dereference freed memory.

A first attempt at a fix was made by keeping the refcnt held during
all the time the entry is in the mt_list, but this is expensive as
such entries cannot be purged, causing lots of skips during
trash_oldest_data(). This managed to trigger watchdogs, and was only
hiding the real cause of the problem.

The correct approach clearly is to maintain the ref_cnt until we
touch ->seen. That's what this patch does. It does not decrement
the refcnt while calling stktable_touch_remote(), and does it
manually after touching ->seen. With this the problem is gone.

Note that a reproducer involves the following:
  - a config with 10 stick-ctr tracking the same table with a
    random key between 10M and 100M depending on the machine.
  - the expiration should be between 10 and 20s. http_req_cnt
    is stored and shared with the peers.
  - 4 total processes with such a config on the local machine,
    each corresponding to a different peer. 3 of the peers are
    bound to half of the cores (all threads) and share the same
    threads; the last process is bound to the other half with
    its own threads.
  - injecting at full load, ~256 conn, on the shared listening
    port. After ~2x expiration time to 1 minute the lone process
    should segfault in pools code due to a corrupted by_lru list.

This problem already exists in earlier versions but the race looks
narrower. Given how difficult it is to trigger on a given machine
in its current form, it's likely that it only happens once in a
while on stable branches. The fix must be backported wherever the
code is similar, and there's no hope to reproduce it to validate
the backport.

Thanks again to Christian for his amazing help!
2025-05-07 18:49:21 +02:00
Amaury Denoyelle
4bc7aa548a BUG/MINOR: quic: reject invalid max_udp_payload size
Add a check on the received max_udp_payload transport parameter. As
defined per RFC 9000, values below 1200 are invalid, and thus the
connection must be closed with TRANSPORT_PARAMETER_ERROR code.

Prior to this patch, an invalid value was silently ignored.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:21:30 +02:00
Amaury Denoyelle
ffabfb0fc3 BUG/MINOR: quic: fix TP reject on invalid max-ack-delay
Checks are implemented on some received transport parameter values,
to reject invalid ones defined per RFC 9000. This is the case for
max_ack_delay parameter.

The check was not properly implemented as it only rejected values strictly
greater than the limit set to 2^14. Fix this by rejecting values of 2^14
and above. Also, the proper error code TRANSPORT_PARAMETER_ERROR is now
set.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:21:30 +02:00
Amaury Denoyelle
b60a17aad7 BUG/MINOR: quic: use proper error code on invalid received TP value
As per RFC 9000, checks must be implemented to reject invalid values for
received transport parameters. Such values are dependent on the
parameter type.

Checks were already implemented for ack_delay_exponent and
active_connection_id_limit, accordingly with the QUIC specification.
However, the connection was closed with an incorrect error code. Fix
this to ensure that TRANSPORT_PARAMETER_ERROR code is used as expected.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:21:30 +02:00
Amaury Denoyelle
10f1f1adce BUG/MINOR: quic: reject retry_source_cid TP on server side
Close the connection on error if retry_source_connection_id transport
parameter is received. This is specified by RFC 9000 as this parameter
must not be emitted by a client. Previously, it was silently ignored.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:21:30 +02:00
Amaury Denoyelle
a54fdd3d92 BUG/MINOR: quic: use proper error code on invalid server TP
This commit is similar to the previous one. It fixes the error code
reported when dealing with invalid received transport parameters. This
time, it handles reception of original_destination_connection_id,
preferred_address and stateless_reset_token which must not be emitted by
the client.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:20:06 +02:00
Amaury Denoyelle
df6bd4909e BUG/MINOR: quic: use proper error code on missing CID in TPs
Handle missing received transport parameter value
initial_source_connection_id / original_destination_connection_id.
Previously, such case would result in an error reported via
quic_transport_params_store(), which triggers a TLS alert converted as
expected as a CONNECTION_CLOSE. The issue is that the error code
reported in the frame was incorrect.

Fix this by returning QUIC_TP_DEC_ERR_INVAL for such conditions. This is
directly handled via quic_transport_params_store() which set the proper
TRANSPORT_PARAMETER_ERROR code for the CONNECTION_CLOSE. However, no
error is reported so the SSL handshake is properly terminated without a
TLS alert. This is enough to ensure that the CONNECTION_CLOSE frame will
be emitted as expected.

This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
2025-05-07 15:20:06 +02:00
Amaury Denoyelle
294bf26c06 MINOR: quic: extend return value during TP parsing
Extend API used for QUIC transport parameter decoding. This is done via
the introduction of a dedicated enum to report the various error
condition detected. No functional change should occur with this patch,
as the only returned code is QUIC_TP_DEC_ERR_TRUNC, which results in the
connection closure via a TLS alert.

This patch will be necessary to properly reject transport parameters
with the proper CONNECTION_CLOSE error code. As such, it should be
backported up to 2.6 with the following series.
2025-05-07 15:19:52 +02:00
Willy Tarreau
46b5dcad99 MINOR: stick-tables: add "ipv4" as an alias for the "ip" type
However the doc purposely says the opposite, to encourage migrating away
from "ip". The goal is that in the future we change "ip" to mean "ipv6",
which seems to be what most users naturally expect. But we cannot break
configurations in the LTS version so for now "ipv4" is the alias.

The reason for not changing it in the table is that the type name is
used at a few places (look for "].kw"):
  - dumps
  - promex

We'd rather not change that output for 3.2, but only do it in 3.3.
This way, 3.2 can be made future-proof by using "ipv4" in the config
without any other side effect.

Please see github issue #2962 for updates on this transition.
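
For reference, a quick sketch of a table made future-proof with the new
alias (table name, size and stored data are made up):

    backend st_src
        # "ipv4" is an alias for "ip" today and avoids ambiguity once "ip"
        # later comes to mean "ipv6"
        stick-table type ipv4 size 100k expire 30m store http_req_cnt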
2025-05-07 10:11:55 +02:00
Willy Tarreau
697a531516 MINOR: debug: bump the dump buffer to 8kB
Now with the improved backtraces, the lock history and details in the
mux layers, some dumps appear truncated or with some chars alone at
the beginning of the line. The issue is in fact caused by the limited
dump buffer size (2kB for stderr, 4kB for warning), that cannot hold
a complete trace anymore.

Let's bump them to 8kB, this will be plenty for a long time.
2025-05-07 10:02:58 +02:00
Willy Tarreau
10e6d0bd57 BUG/MINOR: tools: only fill first empty arg when not out of range
In commit 3f2c8af313 ("MINOR: tools: make parse_line() provide hints
about empty args") we've added the ability to record the position of
the first empty arg in parse_line(), but that check requires to
access the args[] array for the current arg, which is not valid in
case we stopped on too large an argument count. Let's just check the
arg's validity before doing so.

This was reported by OSS Fuzz:
  https://issues.oss-fuzz.com/issues/415850462

No backport is needed since this was in the latest dev branch.
2025-05-07 07:25:29 +02:00
William Lallemand
fbceabbccf BUG/MINOR: ssl: can't use crt-store some certificates in ssl-f-use
When declaring a certificate via the crt-store section, this certificate
can then be used 2 ways in a crt-list:
- only by using its name, without any crt-store options
- or by using the exact set of crt-list option that was defined in the
  crt-store

Since ssl-f-use is generating a crt-list, this is supposed to behave the
same. To achieve this, ckch_conf_parse() will parse the keywords related
to the ckch_conf on the ssl-f-use line and use ckch_conf_cmp() to
compare it to the previous declaration from the crt-store. This
comparaison is only done when any ckch_conf keyword are present.

However, ckch_conf_parse() was done for the crt-list, and the crt-list
does not use the "crt" parameter to declare the name of the certificate,
since it's the first element of the line. So when used with ssl-f-use,
ckch_conf_parse() will always see a "crt" keyword which is a ckch_conf
one, and consider that it will always need to have the exact same set of
parameters when using the same crt in a crt-store and an ssl-f-use line.

So a simple configuration like this:

   crt-store web
     load crt "foo.com.crt" key "foo.com.key" alias "foo"

   frontend mysite
     bind :443 ssl
     ssl-f-use crt "@web/foo" ssl-min-ver TLSv1.2

Would lead to an error like this:

    config : '@web/foo' in crt-list '(null)' line 0, is already defined with incompatible parameters:
    - different parameter 'key' : previously 'foo.com.key' vs '(null)'

In order to fix the issue, this patch parses the "crt" parameter itself
for ssl-f-use instead of using ckch_conf_parse(), so the keyword would
never be considered as a ckch_conf keyword to compare.

This patch also takes care of setting the CKCH_CONF_SET_CRTLIST flag only
if a ckch_conf keyword was found. This flag is used by ckch_conf_cmp()
to know if it has to compare or not.

No backport needed.
2025-05-06 21:36:29 +02:00
William Lallemand
b3b282d2ee MINOR: ssl: add filename and linenum for ssl-f-use errors
Fill cfg_crt_node with a filename and linenum so the post_section
callback can use it to emit errors.

This way the errors are emitted with the right filename and linenum
where ssl-f-use is used instead of (null):0
2025-05-06 21:36:29 +02:00
Willy Tarreau
99f5be5631 BUG/MAJOR: queue: lock around the call to pendconn_process_next_strm()
The extra call to pendconn_process_next_strm() made in commit cda7275ef5
("MEDIUM: queue: Handle the race condition between queue and dequeue
differently") was performed after releasing the server queue's lock,
which is incompatible with the calling convention for this function.
The result is random corruption of the server's streams list likely
due to picking old or incorrect pendconns from the queue, and in the
end infinitely looping on apparently already locked mt_list objects.
Just adding the lock fixes the problem.

It's very difficult to reproduce, it requires low maxconn values on
servers, stickiness on the servers (cookie), a long enough slowstart
(e.g. 10s), and regularly flipping servers up/down to re-trigger the
slowstart.

No backport is needed as this was only in 3.2.
2025-05-06 18:59:54 +02:00
William Lallemand
b7c4a68ecf MEDIUM: acme/ssl: remove 'acme ps' in favor of 'acme status'
Remove the 'acme ps' command which does not seem useful anymore with the
'acme status' command.

The big difference with the 'acme status' command is that it was only
displaying the running tasks instead of the status of all certificates.
2025-05-06 15:27:29 +02:00
William Lallemand
48f1ce77b7 MINOR: acme/cli: 'acme status' show the status acme-configured certificates
The "acme status" command, shows the status of every certificates
configured with ACME, not only the running task like "acme ps".

The IO handler loops on the ckch_store tree and outputs a line for each
ckch_store which has an acme section set. This is still done under the
ckch_store lock and doesn't support resuming when the buffer is full,
but we need to change that in the future.
2025-05-06 15:27:29 +02:00
Christopher Faulet
a3ce7d7772 Revert "BUG/MEDIUM: mux-spop: Handle CLOSING state and wait for AGENT DISCONNECT frame"
This reverts commit 53c3046898.

This patch introduced a regression leading to a loop on the frames
demultiplexing because a frame may be ignore but not consumed.

But outside this regression that can be fixed, there is a design issue that
was not totally fixed by the patch above. The SPOP connection state is mixed
with the status of the frames demultiplexer and this needlessly complexify
the connection management. Instead of fixing the fix, a better solution is
to revert it to work a a proper solution.

For the record, the idea is to deal with the spop connection state onlu
using 'state' field and to introduce a new field to handle the frames
demultiplexer state. This should ease the closing state management.

Another issue that must be fixed. We must take care to not abort a SPOP
stream when an error is detected on a SPOP connection or when the connection
is closed, if the ACK frame was already received for this stream. It is not
a common case, but it can be solved by saving the last known stream ID that
recieved a ACK.

This patch must be backported if the commit above is backported.
2025-05-06 13:43:59 +02:00
Aurelien DARRAGON
b39825ee45 BUG/MINOR: proxy: only use proxy_inc_fe_cum_sess_ver_ctr() with frontends
proxy_inc_fe_cum_sess_ver_ctr() was implemented in 9969adbc
("MINOR: stats: add by HTTP version cumulated number of sessions and
requests")

As its name suggests, it is meant to be called for frontends, not backends

Also, in 9969adbc, when used under h1_init(), a precaution is taken to
ensure that the function is only called with frontends.

However, this precaution was not applied in h2_init() and qc_init().

Due to this, it remains possible to have proxy_inc_fe_cum_sess_ver_ctr()
being called with a backend proxy as parameter. While it did not cause
known issues so far, it is not expected and could result in bugs in the
future. Better fix this by ensuring the function is only called with
frontends.

It may be backported up to 2.8
2025-05-06 11:01:39 +02:00
Willy Tarreau
3bb6eea6d5 DEBUG: threads: display held locks in threads dumps
Based on the lock history, we can spot some locks that are still held
by checking the last operation that happened on them: if it's not an
unlock, then we know the lock is held. In this case we append the list
after "locked:" with their label and state like below:

  U:QUEUE S:IDLE_CONNS U:IDLE_CONNS R:TASK_WQ U:TASK_WQ S:QUEUE S:QUEUE S:QUEUE locked: QUEUE(S)
  S:IDLE_CONNS U:IDLE_CONNS S:TASK_RQ U:TASK_RQ S:QUEUE U:QUEUE S:IDLE_CONNS locked: IDLE_CONNS(S)
  R:TASK_WQ S:TASK_WQ R:TASK_WQ S:TASK_WQ R:TASK_WQ S:TASK_WQ R:TASK_WQ locked: TASK_WQ(R)
  W:STK_TABLE W:STK_TABLE_UPDT U:STK_TABLE_UPDT W:STK_TABLE W:STK_TABLE_UPDT U:STK_TABLE_UPDT W:STK_TABLE W:STK_TABLE_UPDT locked: STK_TABLE(W) STK_TABLE_UPDT(W)

The format is slightly different (label(status)) so as to easily
differentiate them visually from the history.
2025-05-06 05:20:37 +02:00
Willy Tarreau
1f51f1c816 BUG/MINOR: tools: make parseline report the required space for the trailing 0
The fix in commit 09a325a4de ("BUG/MINOR: tools: always terminate empty
lines") is insufficient. While it properly addresses the lack of trailing
zero, it doesn't account for it in the returned outlen that is used to
allocate a larger line. This happens at boot if the very first line of
the test file is exactly a sharp with nothing else. In this case it will
return a length 0 and the caller (parse_cfg()) will try to re-allocate an
entry of size zero and will fail, bailing out on a lack of memory. This time
it should really be OK.

It doesn't need to be backported, unless the patch above would be.
2025-05-05 17:58:04 +02:00
Willy Tarreau
09a325a4de BUG/MINOR: tools: always terminate empty lines
Since latest commit 7e4a2f39ef ("BUG/MINOR: tools: do not create an empty
arg from trailing spaces"), an empty line will no longer produce an arg
and no longer append a trailing zero to them. This was not visible because
one is already present in the input string, however all the trailing args
are set to out+outpos-1, which now points one char before the buffer since
nothing was emitted, and was noticed by ASAN, and/or when parsing garbage.
Let's make sure to always emit the zero for empty lines as well to address
this issue. No backport is needed unless the patch above gets backported.
2025-05-05 17:33:22 +02:00
Willy Tarreau
08d3caf30e MINOR: cfgparse: visually show the input line on empty args
Now when an empty arg is found on a line, we emit the sanitized
input line and the position of the first empty arg so as to help
the user figure the cause (likely an empty environment variable).

Co-authored-by: Valentine Krasnobaeva <vkrasnobaeva@haproxy.com>
2025-05-05 16:17:24 +02:00
Willy Tarreau
3f2c8af313 MINOR: tools: make parse_line() provide hints about empty args
In order to help parse_line() callers report the position of empty
args to the user, let's decide that if no error is emitted, then
we'll stuff the errptr with the position of the first empty arg
without affecting the return value.

Co-authored-by: Valentine Krasnobaeva <vkrasnobaeva@haproxy.com>
2025-05-05 16:17:24 +02:00
Willy Tarreau
9d14f2c764 MEDIUM: config: warn about the consequences of empty arguments on a config line
For historical reasons, the config parser relies on the trailing '\0'
to detect the end of the line being parsed. When the lines started to be
tokenized into arguments, this principle has been preserved, and now all
the parsers rely on *args[arg]='\0' to detect the end of a line. But as
reported in issue #2944, while most of the time it breaks the parsing
like below:

     http-request deny if { path_dir '' }

it can also cause some elements to be silently ignored like below:

     acl bad_path path_sub '%2E' '' '%2F'

This may also subtly happen with environment variables that don't exist
or which are empty:

     acl bad_path path_sub '%2E' "$BAD_PATTERN" '%2F'

Fortunately, parse_line() returns the number of arguments found, so it's
easy from the callers to verify if any was empty. The goal of this commit
is not to perform sensitive changes, it's only to mention when parsing a
line that an empty argument was found and alert about its consequences
using a warning. Most of the time when this happens, the config does not
parse. But for examples such as the ACLs above, there could be consequences
that are better detected early.

This patch depends on this previous fix:
   BUG/MINOR: tools: do not create an empty arg from trailing spaces

Co-authored-by: Valentine Krasnobaeva <vkrasnobaeva@haproxy.com>
2025-05-05 16:17:24 +02:00
Willy Tarreau
7e4a2f39ef BUG/MINOR: tools: do not create an empty arg from trailing spaces
Trailing spaces on the lines of the config file create an empty arg
which makes it complicated to detect really empty args. Let's first
address this. Note that it is not user-visible but prevents from
fixing user-visible issues. No backport is needed.

The initial issue was introduced with this fix that already tried to
address it:

    8a6767d266 ("BUG/MINOR: config: don't count trailing spaces as empty arg (v2)")

The current patch properly addresses leading and trailing spaces by
only counting arguments if non-lws chars were found on the line. LWS
do not cause a transition to a new arg anymore but they complete the
current one. The whole new code relies on a state machine to detect
when to create an arg (!in_arg->in_arg), and when to close the current
arg. A special care was taken for word expansion in the form of
"${ARGS[*]}" which still continue to emit individual arguments past
the first LWS. This example works fine:

    ARGS="100 check inter 1000"
    server name 192.168.1."${ARGS[*]}"

It properly results in 6 args:

    "server", "name", "192.168.1.100", "check", "inter", "1000"

This fix should not have any visible user impact and is a bit tricky,
so it's best not to backport it, at least for a while.

Co-authored-by: Valentine Krasnobaeva <vkrasnobaeva@haproxy.com>
2025-05-05 16:16:54 +02:00
William Lallemand
af5bbce664 BUG/MINOR: acme/cli: don't output error on success
Previous patch 7251c13c7 ("MINOR: acme: move the acme task init in a dedicated
function") mistakenly returned the wrong error code when "acme renew" parsing
was successful, and tried to emit an error message.

This patch fixes the issue by returning 0 when the acme task was correctly
scheduled to start.

No backport needed.
2025-05-02 21:21:09 +02:00
Aurelien DARRAGON
0e6f968ee3 BUG/MEDIUM: stktable: fix sc_*(<ctr>) BUG_ON() regression with ctx > 9
As reported in GH #2958, commit 6c9b315 caused a regression with sc_*
fetches and tracked counter id > 9.

As such, the below configuration would cause a BUG_ON() to be triggered:

  global
    log stdout format raw local0
    tune.stick-counters 11

  defaults
    log global
    mode http

  frontend www
    bind *:8080

    acl track_me bool(true)
    http-request set-var(txn.track_var) str("a")
    http-request track-sc10 var(txn.track_var) table rate_table if track_me
    http-request set-var(txn.track_var_rate) sc_gpc_rate(0,10,rate_table)
    http-request return status 200

  backend rate_table
      stick-table type string size 1k expire 5m store gpc_rate(1,1m)

While in 6c9b315 the src_fetch logic was removed from
smp_fetch_sc_stkctr(), num > 9 is indeed not expected anymore as the
original num value. But what we didn't consider is that num is effectively
re-assigned for the generic sc_* variant.

Thus the BUG_ON() is misplaced, as it should only be evaluated for
non-generic fetches. This explains why it triggers with valid configurations.

Thanks to GH user @tkjaer for his detailed report and bug analysis.

No backport needed, this bug is specific to 3.2.
2025-05-02 16:57:45 +02:00
William Lallemand
7ad501e6a1 MINOR: acme: emit a log when the scheduler can't start the task
Emit an error log when the renewal scheduler can't start the task.
2025-05-02 16:12:41 +02:00
William Lallemand
7fe59ebb88 MEDIUM: acme: add a basic scheduler
This patch implements a very basic scheduler for the ACME tasks.

The scheduler is a task which is started from the postparser function
when at least one acme section was configured.

The scheduler will loop over the certificates in the ckchs_tree, and for
each certificate will start an ACME task if the notAfter date is past
curtime + (notAfter - notBefore) / 12, or 7 days if notBefore is not
available.

Once the lookup over all certificates is terminated, the task will sleep
and wake up again after 12 hours.
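
A small sketch of one way to evaluate that rule, assuming the intent is
to renew once less than a twelfth of the validity period (or 7 days as a
fallback) remains before notAfter; the helper name is made up:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical helper illustrating the rule above, under the
     * assumption stated in the text: renew when less than 1/12 of the
     * validity period (7 days if notBefore is unknown) remains.
     */
    static int cert_needs_renewal(time_t curtime, time_t notbefore, time_t notafter)
    {
        time_t margin;

        if (notbefore > 0 && notafter > notbefore)
            margin = (notafter - notbefore) / 12;  /* ~7.5 days for a 90-day cert */
        else
            margin = 7 * 24 * 3600;                /* fallback: 7 days */

        return curtime + margin >= notafter;
    }

    int main(void)
    {
        time_t now = time(NULL);
        time_t nb  = now - 85 * 24 * 3600;         /* issued 85 days ago */
        time_t na  = nb + 90 * 24 * 3600;          /* 90-day validity    */

        printf("renew now? %s\n", cert_needs_renewal(now, nb, na) ? "yes" : "no");
        return 0;
    }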
2025-05-02 16:01:32 +02:00
William Lallemand
7251c13c77 MINOR: acme: move the acme task init in a dedicated function
acme_start_task() is a dedicated function which starts an acme task
for a specified <store> certificate.

The initialization code was moved from the "acme renew" command parser to
this function, so that it can be called from a scheduler.
2025-05-02 16:01:32 +02:00
William Lallemand
626de9538e MINOR: ssl: add function to extract X509 notBefore date in time_t
Add x509_get_notbefore_time_t() which returns the notBefore date in
time_t format.
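
A possible shape for such a helper (a sketch, not necessarily the
haproxy implementation), assuming OpenSSL >= 1.1.1 for ASN1_TIME_to_tm()
and a platform providing timegm():

    #define _GNU_SOURCE          /* for timegm() on glibc */
    #include <time.h>
    #include <openssl/x509.h>

    /* Return the certificate's notBefore date as a time_t, or (time_t)-1
     * on failure.
     */
    static time_t notbefore_to_time_t(X509 *cert)
    {
        const ASN1_TIME *asn1 = X509_get0_notBefore(cert);
        struct tm tm;

        if (!asn1 || !ASN1_TIME_to_tm(asn1, &tm))
            return (time_t)-1;
        return timegm(&tm);      /* ASN.1 times are expressed in UTC */
    }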
2025-05-02 16:01:32 +02:00
Valentine Krasnobaeva
8a4b3216f9 MINOR: cfgparse-global: add explicit error messages in cfg_parse_global_env_opts
When the env variable name or value is not provided for setenv/presetenv,
it's not clear from the old error message shown on stderr what exactly is
missing. The user needs to search through their configuration.

Let's add more explicit error messages about these inconsistencies.

No need to be backported.
2025-05-02 15:37:45 +02:00
Olivier Houchard
994cc58576 MEDIUM: stick-tables: Limit the number of entries we expire
In process_table_expire(), limit the number of entries we remove in one
call, and just reschedule the task if there's more to do. Removing
entries requires taking the heavily contended update write lock, and we
don't want to hold it for too long.
This helps stick tables perform better under heavy load.
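
The general pattern, as a standalone sketch (names and the budget value
are illustrative, not haproxy's):

    #include <pthread.h>
    #include <stddef.h>

    #define EXPIRE_BUDGET 100   /* illustrative cap per invocation */

    struct entry { struct entry *next; };

    struct table {
        pthread_rwlock_t lock;
        struct entry *expired;      /* entries waiting to be purged */
    };

    /* Remove at most EXPIRE_BUDGET entries; return 1 if more remain so
     * the caller can reschedule its task instead of keeping the lock.
     */
    static int purge_some(struct table *t)
    {
        int budget = EXPIRE_BUDGET;
        int more;

        pthread_rwlock_wrlock(&t->lock);
        while (t->expired && budget--) {
            struct entry *e = t->expired;
            t->expired = e->next;
            /* the real code would free <e> here */
        }
        more = (t->expired != NULL);
        pthread_rwlock_unlock(&t->lock);
        return more;
    }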
2025-05-02 15:27:55 +02:00
Olivier Houchard
d2d4c3eb65 MEDIUM: stick-tables: Limit the number of old entries we remove
Limit the number of old entries we remove in one call of
stktable_trash_oldest(), as we do so while holding the heavily contended
update write lock, which we'd rather not hold for too long.
This helps stick tables perform better under heavy load.
2025-05-02 15:27:55 +02:00
Olivier Houchard
388539faa3 MEDIUM: stick-tables: defer adding updates to a tasklet
There is a lot of contention trying to add updates to the tree. So
instead of trying to add the updates to the tree right away, just add
them to a mt-list (with one mt-list per thread group, so that the
mt-list itself does not become too much of a new point of contention), and
create a tasklet dedicated to adding updates to the tree, in batches, to
avoid keeping the update lock for too long.
This helps stick tables perform better under heavy load.
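
The overall idea, in a simplified mutex-based sketch (haproxy actually
uses its lock-free mt-list and a tasklet; all names below are made up):

    #include <pthread.h>
    #include <stddef.h>

    struct upd { struct upd *next; };

    /* One pending list per thread group, so enqueueing does not become
     * the new single point of contention.
     */
    struct upd_group {
        pthread_mutex_t lk;
        struct upd *head;
    };

    /* Fast path: just queue the update, do not touch the update tree. */
    static void defer_update(struct upd_group *g, struct upd *u)
    {
        pthread_mutex_lock(&g->lk);
        u->next = g->head;
        g->head = u;
        pthread_mutex_unlock(&g->lk);
    }

    /* Slow path, run from a dedicated handler: grab the whole batch at
     * once, then take the contended tree lock a single time for it.
     */
    static void flush_updates(struct upd_group *g, pthread_rwlock_t *tree_lock)
    {
        struct upd *batch, *u;

        pthread_mutex_lock(&g->lk);
        batch = g->head;
        g->head = NULL;
        pthread_mutex_unlock(&g->lk);

        pthread_rwlock_wrlock(tree_lock);
        for (u = batch; u; u = u->next)
            ;   /* the real code would insert <u> into the update tree here */
        pthread_rwlock_unlock(tree_lock);
    }

Producers only touch their group's small list, and the shared update
lock is taken once per batch instead of once per update.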
2025-05-02 15:27:55 +02:00
Olivier Houchard
b3ad7b6371 MEDIUM: peers: Give up if we fail to take locks in hot path
In peer_send_msgs(), give up in order to retry later if we failed to get
the update read lock.
Similarly, in __process_running_peer_sync(), give up and just reschedule
the task if we failed to get the peer lock. There is heavy contention
on both of those locks, so we could spend a lot of time trying to get them.
This helps peers perform better under heavy load.
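
The give-up-and-retry pattern, sketched with a pthread rwlock standing
in for the haproxy locks (names are illustrative):

    #include <pthread.h>

    /* Return 1 if the work was done, 0 if the caller should simply let
     * the task be woken up again later because the lock was busy.
     */
    static int try_send_msgs(pthread_rwlock_t *updt_lock)
    {
        if (pthread_rwlock_tryrdlock(updt_lock) != 0)
            return 0;               /* contended: give up, retry later */

        /* ... build and send the pending messages here ... */

        pthread_rwlock_unlock(updt_lock);
        return 1;
    }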
2025-05-02 15:27:55 +02:00
Aurelien DARRAGON
7a8d1a3122 MINOR: hlua: ignore "tune.lua.bool-sample-conversion" if set after "lua-load"
tune.lua.bool-sample-conversion must be set before any lua-load or
lua-load-per-thread is used for it to be considered. Indeed, lua-load
directives are parsed on the fly and will cause some parts of the scripts
to be executed during init already (script body/init contexts).

As such, we cannot afford to have "tune.lua.bool-sample-conversion" set
after some Lua code was loaded, because it would mean that the setting
would be handled differently for Lua code executed during or after
config parsing.

To avoid ambiguities, the documentation now states that the setting must
be set before any lua-load(-per-thread) directive, and if the setting
is encountered after some Lua was already loaded, the directive is ignored
and a warning reports it.

It should fix GH #2957

It may be backported with 29b6d8af16 ("MINOR: hlua: rename
"tune.lua.preserve-smp-bool" to "tune.lua.bool-sample-conversion"")
2025-05-02 14:38:37 +02:00
Willy Tarreau
1ed238101a CLEANUP: tasks: use the local state, not t->state, to check for tasklets
There's no point reading t->state to check for a tasklet after we've
atomically read the state into the local "state" variable. Not only is it
more expensive, it's also less clear whether that state is supposed to
be atomic or not. And in any case, tasks and tasklets have their type
forever and the one reflected in state is correct and stable.
2025-05-02 11:09:28 +02:00
Willy Tarreau
45e83e8e81 BUG/MAJOR: tasks: fix task accounting when killed
After recent commit b81c9390f ("MEDIUM: tasks: Mutualize the TASK_KILLED
code between tasks and tasklets"), the task accounting was no longer
correct for killed tasks because the decrement of the tasks-in-list count
was no longer performed, resulting in infinite loops in process_runnable_tasks().
This just illustrates that this code remains complex and should be further
cleaned up. No backport is needed, as this was in 3.2.
2025-05-02 11:09:28 +02:00
Olivier Houchard
faa18c1ad8 BUG/MEDIUM: quic: Let it be known if the tasklet has been released.
quic_conn_release() may, or may not, free the tasklet associated with
the connection. So make it return 1 if the tasklet was freed, and 0
otherwise, so that if it was called from the tasklet handler itself, said
handler can act accordingly and return NULL when the tasklet was destroyed.
This should be backported if 9240cd4a27
is backported.
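
Schematically, the pattern this enables looks like this (signatures and
names are made up, not the exact haproxy ones):

    #include <stddef.h>

    struct tasklet;                         /* opaque for the sketch */

    /* Hypothetical stand-in for quic_conn_release(): frees the connection
     * and possibly its tasklet, returning 1 if the tasklet was released
     * and 0 if it is still alive.
     */
    static int conn_release(void *conn)
    {
        (void)conn;
        return 1;       /* pretend the tasklet went away with the conn */
    }

    /* Handler shape: only hand the tasklet back if it still exists. */
    static struct tasklet *io_handler(struct tasklet *t, void *conn, int fatal)
    {
        if (fatal && conn_release(conn))
            return NULL;    /* tasklet freed: the scheduler must not touch it */
        return t;
    }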
2025-05-02 11:09:28 +02:00
William Lallemand
f63ceeded0 MINOR: acme: delay of 5s after the finalize
Leave the server 5 seconds by default after the finalize step to generate
the certificate. Some servers do not send a Retry-After during
processing.
2025-05-02 10:34:48 +02:00
William Lallemand
2db4848fc8 MINOR: acme: emit a log when starting
Emit an administrative log when starting the ACME client for a
certificate.
2025-05-02 10:23:42 +02:00
William Lallemand
fbd740ef3e MINOR: acme: wait 5s before checking the challenges results
Wait 5 seconds before checking whether the challenges are ready, to give
the server time to execute them.
2025-05-02 10:18:24 +02:00
William Lallemand
f7cae0e55b MINOR: acme: allow a delay after a valid response
Use the retryafter value to set a delay before doing the next request
when the previous response was valid.
2025-05-02 10:16:12 +02:00
William Lallemand
24fbd1f724 BUG/MINOR: acme: reinit the retries only at next request
The retry counter was reinitialized incorrectly: it must only be reset
when we didn't retry. Previously, any valid response would reset the
number of retries.
2025-05-02 09:34:45 +02:00
William Lallemand
6626011720 MINOR: acme: does not leave task for next request
The next request was always leaving the task before initializing the
httpclient. This patch optimizes this by jumping to the next step at the
end of the current one. This way, only the httpclient does a
task_wakeup() to handle the response, and transitioning from a response
to the next request does not leave the task.
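
Schematically (state names below are made up, this is not the actual
acme code):

    /* Handle the response and fall through to issuing the next request
     * within the same wakeup; only the httpclient's completion wakes
     * the task up again.
     */
    enum step { STEP_RESPONSE, STEP_NEXT_REQUEST, STEP_WAIT };

    static enum step handle_step(enum step st)
    {
        while (1) {
            switch (st) {
            case STEP_RESPONSE:
                /* parse the response ... */
                st = STEP_NEXT_REQUEST;     /* no task_wakeup() in between */
                break;
            case STEP_NEXT_REQUEST:
                /* start the httpclient request; its completion will wake
                 * the task up to process the response later.
                 */
                return STEP_WAIT;
            default:
                return st;
            }
        }
    }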
2025-05-02 09:31:39 +02:00
William Lallemand
51f9415d5e MINOR: acme: retry label always do a request
Doing a retry always results in initializing a request again, so set
ACME_HTTP_REQ directly in the label instead of doing it for each step.
2025-05-02 09:15:07 +02:00
Olivier Houchard
81e4083efb BUILD/MEDIUM: quic: Make sure we build with recent changes
TASK_IN_LIST has been changed to TASK_QUEUED, but one occurrence was
missed in quic_conn.c, so fix that.
2025-04-30 18:00:56 +02:00
Olivier Houchard
b138eab302 BUG/MEDIUM: connections: Report connection closing in conn_create_mux()
Add an extra parameter to conn_create_mux(), "closed_connection".
If a pointer is provided, then use it to report whether the connection was
closed. Callers have no way to determine that otherwise, and we need to
know it, at least in ssl_sock_io_cb(): if the connection was closed we
must return NULL, as the tasklet was freed, otherwise this can lead
to memory corruption and crashes.
This should be backported if 9240cd4a27
is backported too.
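
A schematic sketch of the out-parameter contract (names are made up):

    #include <stddef.h>

    /* <closed> may be NULL when the caller does not care; otherwise it
     * is set to 1 whenever the connection had to be closed inside the
     * call, so a caller owning a tasklet knows it must return NULL
     * instead of the freed tasklet.
     */
    static int create_mux(void *conn, int *closed)
    {
        int failed = 1;                 /* pretend mux creation failed */

        if (closed)
            *closed = 0;
        if (failed) {
            /* the real code would close and release <conn> here */
            (void)conn;
            if (closed)
                *closed = 1;
            return -1;
        }
        return 0;
    }

On the caller side, the check then becomes: if the call failed and the
flag was raised, return NULL rather than the freed tasklet.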
2025-04-30 17:17:36 +02:00
Olivier Houchard
b81c9390f4 MEDIUM: tasks: Mutualize the TASK_KILLED code between tasks and tasklets
The code to handle a task/tasklet when it's been killed before it got a
chance to run is mostly identical, so move it out of the task- and
tasklet-specific code and into the common code.

This commit is just cosmetic, and should have no impact.
2025-04-30 17:09:14 +02:00
Olivier Houchard
2bab043c8c MEDIUM: tasks: Remove TASK_IN_LIST and use TASK_QUEUED instead.
TASK_QUEUED was used to mean "the task has been scheduled to run", and
TASK_IN_LIST was used to mean "the tasklet has been scheduled to run";
remove TASK_IN_LIST and just use TASK_QUEUED for tasklets instead.

This commit is just cosmetic, and should not have any impact.
2025-04-30 17:08:57 +02:00
Olivier Houchard
35df7cbe34 MEDIUM: tasks: More code factorization
There is some code that should run regardless of whether the task was
killed or not, and was needlessly duplicated, so only use one instance.
This also fixes a small bug where a tasklet killed before it could run
would still count as a tasklet that ran, when it should not, which just
means that we'd run one less useful task before going back to
the poller.
This commit is mostly cosmetic, and should not have any impact.
2025-04-30 17:08:57 +02:00
Olivier Houchard
438c000e9f MEDIUM: tasks: Mutualize code between tasks and tasklets.
The code that checks if we're currently running, and waits if so, was
identical between tasks and tasklets, so move it in code common to tasks
and tasklets.
This commit is just cosmetic, and should not have any impact.
2025-04-30 17:08:57 +02:00
William Lallemand
6462f183ad MINOR: acme: use acme_ctx_destroy() upon error
Use acme_ctx_destroy() instead of a simple free() upon error in the
"acme renew" error handling.

It's better to use this function to be sure that everything has been
freed.
2025-04-30 17:18:46 +02:00
William Lallemand
b8a5270334 MINOR: acme: acme_ctx_destroy() returns upon NULL
acme_ctx_destroy() returns immediately when its argument is NULL.
2025-04-30 17:17:58 +02:00
William Lallemand
563ca94ab8 MINOR: ssl/cli: "acme ps" shows the acme tasks
Implement a way to display the running acme tasks over the CLI.

It currently only displays a "Running" status with the certificate name
and the acme section from the configuration.

The displayed running tasks are limited to the size of a buffer for now;
it will require a backref list later so it can be called multiple times
to resume the listing.
2025-04-30 17:12:50 +02:00
Aurelien DARRAGON
7f418ac7d2 MINOR: hlua_fcn: enforce yield after *_get_stats() methods
{listener,proxy,server}_get_stats() methods are known to be expensive,
especially if used within an iteration. Indeed, while an automatic yield
is performed every X Lua instructions (defaulting to 10k), computing an
object's stats 10K times in a single CPU loop is not desirable and
could create contention.

In this patch we leverage hlua_yield_asap() at the end of *_get_stats()
methods in order to force the automatic yield to occur ASAP after the
method returns. Hopefully this should help in scenarios similar to the
one described in GH #2903.
2025-04-30 17:00:31 +02:00
Aurelien DARRAGON
97363015a5 MINOR: add hlua_yield_asap() helper
When called, this function will try to enforce a yield (if available) as
soon as possible. Indeed, automatic yield is already enforced every X
Lua instructions. However, there may be some cases where we know, after
running a heavy operation, that we should already yield to avoid taking
too much CPU at once.

This is what this function offers: instead of asking the user to manually
yield using "core.yield()" from Lua itself after using an expensive
Lua method offered by haproxy, we can directly enforce the yield without
the need to do it in the Lua script.
2025-04-30 17:00:27 +02:00
Amaury Denoyelle
df50d3e39f MINOR: mux-quic: limit emitted MSD frames count per qcs
The previous commit implemented a new calculation method for
MAX_STREAM_DATA frame emission. Now, a frame may be emitted as soon as a
buffer was consumed by a QCS instance.

This will probably increase the number of MAX_STREAM_DATA frame
emissions. It may even cause a series of frames emitted for the same
stream with increasing values under high load, which is completely
unnecessary.

To improve this, limit the number of MAX_STREAM_DATA frames built to one
per QCS instance. This is implemented by storing a reference to this
frame in the QCS structure via a new member <tx.msd_frm>.

Note that to properly reset the QCS msd_frm member, the emission of
flow-control frames has been changed. Now, each frame is emitted
individually. On one hand, this is better as it prevents emitting frames
related to different streams in a single datagram, which is not desirable
in case of packet loss. On the other hand, this can also increase the
number of sendto() syscall invocations.
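
The deduplication idea, as a small standalone sketch (field and function
names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    struct msd_frame {
        uint64_t max_stream_data;
    };

    struct stream_ctx {
        uint64_t rx_consumed_limit;     /* new limit to advertise          */
        struct msd_frame *pending_msd;  /* frame built but not yet emitted */
    };

    /* Build at most one MAX_STREAM_DATA frame per stream: if one is
     * already queued, only refresh its value.
     */
    static struct msd_frame *get_msd_frame(struct stream_ctx *s,
                                           struct msd_frame *storage)
    {
        if (s->pending_msd) {
            s->pending_msd->max_stream_data = s->rx_consumed_limit;
            return NULL;                    /* nothing new to queue */
        }
        storage->max_stream_data = s->rx_consumed_limit;
        s->pending_msd = storage;
        return storage;
    }

    /* To be called once the frame has been emitted (or dropped). */
    static void msd_frame_sent(struct stream_ctx *s)
    {
        s->pending_msd = NULL;
    }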
2025-04-30 16:08:47 +02:00
Amaury Denoyelle
14a3fb679f MEDIUM: mux-quic: increase flow-control on each bufsize
Recently, the QCS Rx buffer allocation method has been improved. It is now
possible to allocate multiple buffers per QCS instances, which was
necessary to improve HTTP/3 POST throughput.

However, a limitation remained related to the emission of
MAX_STREAM_DATA. These frames are only emitted once at least half of the
receive capacity has been consumed by its QCS instance. This may be too
restrictive when a client needs to upload a large payload.

Improve this by adjusting MAX_STREAM_DATA allocation. If QCS capacity is
still limited to 1 or 2 buffers max, the old calculation is still used.
This is necessary when the user has limited upload throughput via their
configuration. If QCS capacity is more than 2 buffers, a new frame is
emitted as soon as at least one buffer has been consumed.

This patch has reduced the number of STREAM_DATA_BLOCKED frames received
in POST tests with some specific clients.
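
The decision rule, sketched in plain C (names are illustrative, the
thresholds follow the description above):

    #include <stdint.h>

    /* Decide whether a MAX_STREAM_DATA frame should be emitted.
     * <bufsize>  : size of one receive buffer
     * <max_bufs> : receive capacity of the stream, in buffers
     * <consumed> : bytes consumed since the last advertised limit
     */
    static int should_emit_msd(uint64_t bufsize, uint64_t max_bufs,
                               uint64_t consumed)
    {
        if (max_bufs <= 2)
            return consumed >= (bufsize * max_bufs) / 2; /* old rule: half capacity */
        return consumed >= bufsize;                      /* new rule: one buffer    */
    }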
2025-04-30 16:08:47 +02:00
Christopher Faulet
2ccfebcebf BUG/MINOR: mux-spop: Use the right bitwise operator in spop_ctl()
Because of a typo, '||' was used instead of '|' to test the SPOP connection
flags and decide whether the mux is ready or not. The regression was
introduced in commit fd7ebf117 ("BUG/MEDIUM: mux-spop: Wait end of handshake to
declare a spop connection ready").

This patch must be backported to 3.1 with the commit above.
2025-04-30 16:01:36 +02:00
Remi Tricot-Le Breton
f191a830d8 BUILD: ssl: Fix wolfssl build
The newly added SSL traces require an extra 'conn' parameter to
ssl_sock_chose_sni_ctx, which was added in the "regular" code but not in
the wolfssl-specific one.
Wolfssl also has a different prototype for some getter functions
(SSL_get_servername for instance), which do not expect a const SSL while
the openssl version does.
2025-04-30 15:50:10 +02:00