When the command line parsing was refactored (20ec1de21 "MAJOR: cli: Refacor
parsing and execution of pipelined commands"), a regression was introduced.
When input data are consumed, the applet's input buffer state is no longer
updated to reflect that it is no longer full. So it is possible to freeze
the CLI applet. And a spinning loop may be encountered if a client shutdown
is detected in this state.
The fix is obvious. When data are consumed from the applet's input buffer,
the APPCTX_FL_INBLK_FULL flag is removed to notify that the input buffer is
no longer full and more data can be sent to the CLI applet.
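As a rough illustration of the fix (the exact helper used around the applet
code may differ), the idea is simply to clear the flag at the point where
input data are consumed:

/* data was just consumed from the applet's input buffer: it cannot be
 * "full" anymore, so let the stream layer feed it again
 */
appctx->flags &= ~APPCTX_FL_INBLK_FULL;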
This patch should fix the issue #3064. It must be backported to 3.2.
Since the SPOE was refactored, the detection of empty NOTIFY frames is
broken. So it is possible to send a NOTIFY frame to an agent with no
message at all. The bug happens because the frame type is now added to the
buffer before the messages are encoded, so the buffer is never really empty.
To fix the issue, the condition used to detect empty frames was adapted.
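A minimal sketch of one way to express such a condition (not necessarily
the exact code): remember the buffer length once the frame header has been
written, and compare it again after the messages have been encoded:

size_t hdr_len = b_data(buf);  /* frame type/header already present */

/* ... encode the NOTIFY messages into <buf> ... */

if (b_data(buf) == hdr_len) {
        /* no message was encoded: the frame is empty, don't send it */
        return 0;
}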
This patch must be backported as far as 3.1.
When we perform a connect() call for a datagram socket used to send DNS
requests, we set its default destination address to some given nameserver.
Then we simply use send(), as the destination address is already set. In
some use cases described in GitHub issues #3001 and #2654, this approach
becomes inefficient: nameservers change their IP addresses dynamically,
which triggers DNS resolution errors.
To fix this, let's perform the bind() on the wildcard address for the
datagram AF_INET* client socket. This way a port is allocated for it. Then
let's use sendto() instead of send().
If the nameserver is local and is listening on a UNIX domain socket, we
continue to use the existing approach (connect() and then send()).
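Below is a plain-socket sketch of the idea (illustrative only, not the
resolvers code itself): bind the UDP client socket once at setup time so a
local port gets allocated, then address every query explicitly with
sendto() so that a nameserver IP change only requires updating the
destination address:

#include <sys/socket.h>
#include <netinet/in.h>

/* setup: bind to the wildcard address, let the kernel pick the port */
static int dns_sock_setup(int fd)
{
        struct sockaddr_in local = {
                .sin_family      = AF_INET,
                .sin_addr.s_addr = htonl(INADDR_ANY),
                .sin_port        = 0,
        };
        return bind(fd, (struct sockaddr *)&local, sizeof(local));
}

/* send: pass the current nameserver address on every call */
static ssize_t dns_send(int fd, const void *query, size_t len,
                        const struct sockaddr_in *ns)
{
        return sendto(fd, query, len, 0,
                      (const struct sockaddr *)ns, sizeof(*ns));
}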
This fixes issues #3001 and #2654.
This may be backported to all stable versions.
A wrong lock label is used when manipulating the idle conns lock in
h1_timeout_task(). Fix this by replacing OTHER_LOCK with IDLE_CONNS_LOCK.
This only concerns thread debugging statistics.
This must be backported up to 2.4.
This issue was reported in GH #3071 by @famfo, where a wireshark capture
reveals that some handshakes could not complete after having received
two Initial packets. This could happen when the packets were parsed
in two passes, calling qc_ssl_provide_all_quic_data() twice.
This is due to the crypto data stream counter being incremented twice
(see the cstream->rx.offset += data statement around line 1223 in
quic_ssl.c): once by the callback which "receives" the crypto data, and
once by qc_ssl_provide_all_quic_data() itself.
Then, when parsing the second crypto data frame, the parser detected
that the crypto data were already provided.
To fix this, one could comment out the code which increments the crypto data
stream counter by <data>. That said, when using the OpenSSL 3.5 QUIC API,
one should not modify the crypto data stream outside of this API.
So, this patch stops calling qc_ssl_provide_all_quic_data() and
qc_ssl_provide_quic_data() and only calls qc_ssl_do_hanshake() after
having received some crypto data. In addition, as these functions
are no longer called when building haproxy against OpenSSL 3.5, this patch
disables their compilation (with #ifndef HAVE_OPENSSL_QUIC).
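A hedged sketch of the resulting dispatch (only the function names come
from this message, the argument lists and return conventions are
illustrative):

#ifdef HAVE_OPENSSL_QUIC
        /* OpenSSL 3.5 QUIC API: the library tracks the crypto data stream
         * offsets itself, so only run the handshake once crypto data have
         * been received.
         */
        if (qc_ssl_do_hanshake(qc, ctx) <= 0)
                goto err;
#else
        /* legacy path: haproxy itself provides the crypto data to the stack */
        if (!qc_ssl_provide_all_quic_data(qc, ctx))
                goto err;
#endif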
This patch depends on this previous one:
MINOR: quic: implement qc_ssl_do_hanshake()
Thank you to @famfo for this report.
Must be backported to 3.2.
The rings were manually padded to place the various areas that compose
them into different cache lines, provided that the allocator returned
a cache-aligned address, which until now was not guaranteed. Now, by
switching to the aligned API, we can finally have this guarantee and
hope for more consistent ring performance between tests. As previously,
the few carefully crafted THREAD_PAD() entries could simply be replaced
by a generic THREAD_ALIGN() that dictates the type's alignment.
This was the last user of THREAD_PAD() by the way.
It happens that we free servers at various places in the code, both
on error paths and at runtime thanks to the "server delete" feature. In
order to switch to an aligned struct, we'll need to change the calloc()
and free() calls. Let's first spot them and switch them to srv_alloc()
and srv_free() instead of using calloc() and either free() or ha_free().
An easy trap to fall into is that some of them are default-server
entries. The new srv_free() function also resets the pointer like
ha_free() does.
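A hedged sketch of the intended semantics (the real definitions may differ,
e.g. once aligned allocations are introduced):

static inline struct server *srv_alloc(void)
{
        return calloc(1, sizeof(struct server));
}

static inline void srv_free(struct server **srv)
{
        free(*srv);
        *srv = NULL; /* reset the caller's pointer, like ha_free() does */
}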
This was done by running the following coccinelle script all over the
code:
@@
struct server *srv;
@@
(
- free(srv)
+ srv_free(&srv)
|
- ha_free(&srv)
+ srv_free(&srv)
)
@@
struct server *srv;
expression e1;
expression e2;
@@
(
- srv = malloc(e1)
+ srv = srv_alloc()
|
- srv = calloc(e1, e2)
+ srv = srv_alloc()
)
This is marked medium because despite spotting all call places, we can
never rule out the possibility that some out-of-tree patches would
allocate their own servers and continue to use the old API... at their
own risk.
This one is a macro and will allocate a properly aligned and sized
object. This will help make sure that the alignment promised to the
compiler is respected.
When memstats is used, the type name is passed as a string into the
.extra field so that it can be displayed in "debug dev memstats". Two
tiny mistakes related to memstats macros were also fixed (calloc
instead of malloc for zalloc), and documentation was also added describing
how to use these calls.
This is currently for per-thread arrays like idle conns etc. We're
now cache-aligning the per-thread arrays so as to put an end to false
sharing. A comparative test between no alignment and alignment on a
simple config with round robin between 4 servers showed an average
rate of 1.75M/s vs 1.72M/s before for 100M requests. The gain seems
to be more commonly less than 1% however. This should mostly help
make measurements more reproducible across multiple runs.
The struct connection is used a lot by the muxes during many operations,
particularly at the beginning of the struct (flags, ctrl, xprt and mux).
We definitely want this one not to be falsely shared with another thread,
so let's align the pools to a cache line.
This struct is used by memcpy() and friends, particularly during the
early recv() and send(). By keeping it 64-byte aligned, we let the
underlying libs/kernel use optimal operations (e.g. AVX512) for memory
copies while right now it's just random (buffers are found to be equally
aligned to 32 and 64 in practice).
These structs are intensively used and really must not experience false
sharing, so let's declare them aligned to 64. We don't try to align the
structs themselves, as we don't want the compiler to expand them either.
This will make the pools size and alignment automatically inherit
the type declaration. It was done like this:
sed -i -e 's:DECLARE_POOL(\([^,]*,[^,]*,\s*\)sizeof(\([^)]*\))):DECLARE_TYPED_POOL(\1\2):g' $(git grep -lw DECLARE_POOL src addons)
sed -i -e 's:DECLARE_STATIC_POOL(\([^,]*,[^,]*,\s*\)sizeof(\([^)]*\))):DECLARE_STATIC_TYPED_POOL(\1\2):g' $(git grep -lw DECLARE_STATIC_POOL src addons)
81 replacements were made. The only remaining ones are those which set
their own size without depending on a structure. The few ones with an
extra size were manually handled.
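For illustration, a typical replacement performed by the scripts above
looks like this (taking the connection pool as an example):

/* before */
DECLARE_POOL(pool_head_connection, "connection", sizeof(struct connection));

/* after: the pool now knows the type, hence its size and alignment */
DECLARE_TYPED_POOL(pool_head_connection, "connection", struct connection);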
It also means that the requested alignments are now checked against the
type's. Given that none is specified for now, no issue is reported.
It was verified with "show pools detailed" that the definitions are
exactly the same, and that the binaries are similar.
For pool registrations that are created from the type declaration, we
now have the ability to verify that the requested alignment matches
the type's one. Let's not miss this opportunity, as we've met bugs in
the past that were caused by such mismatches. The principle is simple:
if the type alignment is known, we check that the configured alignment
is at least as large as the type's, otherwise we refuse to start (since
the code may crash at any moment). Obviously it doesn't crash for now!
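A hedged sketch of what such a startup check may look like (field names
are illustrative):

if (reg->type_align && reg->align < reg->type_align) {
        ha_alert("pool '%s': configured alignment (%u) is smaller than the "
                 "type's (%u), refusing to start.\n",
                 reg->name, reg->align, reg->type_align);
        exit(1);
}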
We've forcefully aligned the fdtab in commit 97ea9c49f1 ("BUG/MEDIUM:
fd: always align fdtab[] to 64 bytes"), but now we don't need such hacks
anymore thanks to ha_aligned_alloc(). Let's use it and get rid of
fdtab_addr.
In previous versions of haproxy, inserting a certificate in a crt-list
from the CLI required the directory's path to be part of the certificate's
path. This helped avoid cases where the certificate was not loaded upon a
reload because it was not at the right place.
However, since version 3.0 and crt-store, the name stored in the tree
could be an alias and not a path, so that does not make sense anymore.
Even when the path is right, the check is no longer correct in this
case.
The tool or user inserting the certificate must now check itself that
the certificate was placed at the right spot on the filesystem.
Reported in issue #3053.
Could be backported as far as haproxy 3.0.
The random seed used in the ha_random functions needs to be initialized
first by calling ha_random_boot(). This function was called rather late
in the init process, after the init functions (INITCALLS) are called and
after the configuration parsing for instance, which means that any
ha_random call in an init function would return 0. This was the case in
'vars_init' and 'cache_init', which tried to build seeds for specific
hash calculations but ended up not being seeded.
This patch can be backported to all stable branches.
Both the RFC and the IANA registry refer to challenge names in
lowercase. If we need to implement more challenges, it's better to
use the correct naming.
In order to keep compatibility with previous configurations, the
parsing does a strcasecmp() instead of a strcmp().
Also rename every occurrence in the code and doc to lowercase.
This was discussed in issue #1864.
AWS-LC doesn't provide SSL_in_before(), and doesn't provide an easy way
to know if we already started the handshake or not. So instead, just add
a new field in ssl_sock_ctx, "can_write_early_data", that will be
initialized to 1, and will be set to 0 as soon as we start the
handshake.
This should be backported up to 2.8 with
13aa5616c9f99dbca0711fd18f716bd6f48eb2ae.
In order to send early data, we have to make sure no handshake has been
initiated at all. To do that, we remove the CO_FL_SSL_WAIT_HS flag, so
that we won't attempt to start a handshake. However, by removing that
flag, we allow ssl_sock_to_buf() to call SSL_read(), as it's no longer
aware that no handshake has been done, and SSL_read() will begin the
handshake, thus preventing us from sending early data.
The fix is to just call SSL_in_before() to check if no handshake has
been done yet, in addition to checking CO_FL_SSL_WAIT_HS (both are
needed, as CO_FL_SSL_WAIT_HS may come back in case of renegotiation).
In ssl_sock_from_buf(), fix the check to see if we may attempt to send
early data. Use SSL_in_before() instead of SSL_is_init_finished(), as
SSL_is_init_finished() only reports a completed handshake, so it cannot
tell a handshake that was never started from one that has been started
but not terminated, and once the handshake has been started, we can no
longer send early data.
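A hedged sketch of the resulting test on the sending path (the context
variables are illustrative; SSL_in_before() and SSL_write_early_data() are
standard OpenSSL calls):

if (SSL_in_before(ctx->ssl)) {
        /* no handshake was initiated at all: early data may still be sent */
        size_t written = 0;
        ret = SSL_write_early_data(ctx->ssl, data, len, &written);
}
else {
        /* the handshake has already been started or completed:
         * it is too late to send early data
         */
}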
This fixes errors when attempting to send early data (as well as
actually sending early data).
This should be backported up to 2.8.
Cap the sticky counter index with tune.nb_stk_ctr instead of MAX_SESS_STKCTR
for sc-add-gpc. The same logic is already implemented for the sc-inc-gpc and
sc-set-gpt keywords, so it seems it was simply missed for sc-add-gpc.
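A hedged sketch of the bound that is now applied (variable names are
illustrative):

/* reject counter indexes beyond the runtime limit, not the build-time one */
if (idx >= global.tune.nb_stk_ctr) {
        memprintf(err, "invalid stick counter index %u (max %u)",
                  idx, global.tune.nb_stk_ctr - 1);
        return 0;
}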
This fixes issue #3061 reported on GitHub. Thanks to @ma311 for
sharing their analysis of the issue.
This should be backported to all versions down to 2.8 included.
Similar to REGISTER_POST_DEINIT() hook (which is invoked during deinit)
but for master process only, when haproxy was started in master-worker
mode. The goal is to be able to register cleanup functions that will
only run for the master process right before exiting.
Since now_offset is a static variable and is not exposed outside of
clock.c, let's add a helper so that it becomes possible to set its
value from another source file.
post_section_px_cleanup(), which was implemented in abcc73830
("MEDIUM: proxy: register a post-section cleanup function"), is called
for the current section even if the parsing was aborted due to
a fatal error. In this case, the curproxy pointer may be NULL,
yet post_section_px_cleanup() assumes the curproxy pointer is always valid,
which could lead to a NULL dereference.
For instance, the config below will cause SEGFAULT:
listen toto titi
To fix the issue, let's simply consider that the curproxy pointer may
be NULL in post_section_px_cleanup(), in which case we skip the cleanup
since there is nothing we can do.
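The gist of the fix, as a small sketch (the return value is illustrative):

/* in post_section_px_cleanup(): */
if (!curproxy)
        return 0; /* the section failed to parse: nothing was created, nothing to clean up */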
No backport needed
When improper arguments are provided on a proxy directive (listen,
frontend or backend), such an alert may be emitted:
"please use the 'bind' keyword for listening addresses"
This was introduced in 6e62fb6405 ("MEDIUM: cfgparse: check section
maximum number of arguments"). However, despite the error being reported
as an alert, the err_code isn't updated accordingly, which could make the
upper parser think there was no error while there is one.
In practice, since the proxy directive is ignored, the following
proxy-related directives should raise errors, so this didn't cause much
harm, but it is better to fix it anyway.
It could be backported to all stable versions.
Since 368d01361 (" MEDIUM: server: add and use srv_init() function"), in
case of srv_init() error, we simply increment cfgerr variable and keep
going.
It isn't enough, some treatment occuring later in check_config_validity()
assume that srv_init() succeeded for servers, and may cause undefined
behavior. To fix the issue, let's consider that if (srv_init() & ERR_CODE)
returns true, then we must stop checking the config immediately.
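A hedged sketch of the adjusted handling in check_config_validity() (the
label and exact call arguments are illustrative):

err_code |= srv_init(srv);
if (err_code & ERR_CODE) {
        cfgerr++;
        goto out; /* don't keep validating a half-initialized server */
}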
No backport needed unless 368d01361 is.
Previously the quic_conn <target> member was used to determine if a
quic_conn was used on the frontend side (as a server) or the backend side
(as a client). A new helper function can now be used to directly check the
QUIC_FL_CONN_IS_BACK flag.
This reduces the dependency between quic_conn and their relative
listener/server instances.
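Such a helper may look like this (the helper name is illustrative):

static inline int qc_is_back(const struct quic_conn *qc)
{
        return !!(qc->flags & QUIC_FL_CONN_IS_BACK);
}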
Define a new quic_conn flag, set if the connection is used on the
backend side. This is similar to other haproxy components such as struct
connection and the muxes.
This flag is set via qc_new_conn(). Also update quic traces to
mark the proxy side with an 'F' or 'B' suffix.
QUIC emission can use GSO to emit multiple datagrams with a single
syscall invocation. However, this feature relies on several kernel
parameters which are checked on haproxy process startup.
Even if these checks report no issue, GSO may still be unusable due to the
underlying network adapter. Thus, if an EIO error occurs on sendmsg() with
GSO, the listener is flagged to mark GSO as unsupported. This allows every
other QUIC connection to share the status and avoid using GSO on this
listener.
Previously, the listener flag was checked for every QUIC emission. This was
done using an atomic operation to prevent races. Improve this by
duplicating the GSO unsupported status at the connection level. This is done
in qc_new_conn() and also on thread rebinding if a new listener instance
is used.
The main benefit of this patch is to reduce the dependency between
quic_conn and listener instances.
Now pool_alloc_area() takes the alignment as argument and makes use
of ha_aligned_malloc() instead of malloc(). pool_alloc_area_uaf()
simply applies the alignment before returning the mapped area. The
pool_free() function calls ha_aligned_free() so as to permit using
a specific API for aligned alloc/free like mingw requires.
Note that it's possible to emit warnings about size or alignment
mismatches during pool_free() since we know both the pool and the type. In
pool_free, adding just this is sufficient to detect potential
offenders:
WARN_ON(__alignof__(*__ptr) > pool->align);
This adds an alignment argument to create_pool_from_loc() and
completes the existing low-level macros with new ones that expose
the alignment and permit specifying it. For now they're not used.
This will be used to declare aligned pools. For now it's not used,
but it's properly set from the various registrations that compose
a pool, and rounded up to the next power of 2, with a minimum of
sizeof(void*).
The alignment is returned in the "show pools" part that indicates
the entry size. E.g. "(56 bytes/8)" means 56 bytes, aligned by 8.
Just like previous patch, we want to retrieve the location of the caller.
For this we turn create_pool() into a macro that collects __FILE__ and
__LINE__ and passes them to the now renamed function create_pool_with_loc().
Now the remaining ~30 pools also have their location stored.
When pools are declared using DECLARE_POOL(), REGISTER_POOL etc, we
know where they are and it's trivial to retrieve the file name and line
number, so let's store them in the pool_registration, and display them
when known in "show pools detailed".
Now we're creating statically allocated registrations instead of
passing all the parameters and allocating them on the fly. Not only
this is simpler to extend (we're limited in number of INITCALL args),
but it also leaves all of these in the data segment where they are
easier to find when debugging.
This is already the case as all names are constant so that's fine. If
it would ever change, it's not very hard to just replace it in-situ
via an strdup() and set a flag to mention that it's dynamically
allocated. We just don't need this right now.
One immediately visible effect is in "show pools detailed" where the
names are no longer truncated.
We've recently introduced pool registrations to be able to enumerate
all pool creation requests with their respective parameters, but till
now they were only used for debugging ("show pools detailed"). Let's
go a step further and split create_pool() in two:
- the first half only allocates and sets the pool registration
- the second half creates the pool from the registration
This is what this patch does. This now opens the ability to pre-create
registrations and create pools directly from there.
The struct was mistakenly spelled flt_fcgi_ctx() in fcgi_flt_stop()
when it was introduced in 2.1 with commit 78fbb9f991 ("MEDIUM:
fcgi-app: Add FCGI application and filter"), causing build issues
when trying to get the alignment of the object in pool_free() for
debugging purposes. No backport is needed as it's just used to convey
a pointer.
This commit introduces a new sample converter, `le2dec`, to convert
little-endian binary input samples into their decimal representations.
The converter turns the input into a string containing unsigned
integer numbers, with each number derived from a specified number of
input bytes. The numbers are separated using a user-defined separator.
This is achieved by adding a parametrized sample_conv_2dec
function, unifying the logic of the be2dec and le2dec converters.
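Assuming le2dec accepts the same arguments as be2dec (separator, chunk
size, optional truncate flag), a hypothetical configuration snippet could
look like this:

http-request set-var(txn.le) bin(01020304050607),le2dec(:,2)
# txn.le -> "513:1027:1541:7" (each 2-byte chunk read little-endian)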
Co-authored-by: Christian Norbert Menges <christian.norbert.menges@sap.com>
[wt: tracked as GH issue #2915]
Signed-off-by: Willy Tarreau <w@1wt.eu>
In 358166a ("BUG/MINOR: hlua_fcn: restore server pairs iterator pointer
consistency"), I wrongly assumed that because the iterator was a temporary
object, no specific cleanup was needed for the watcher.
In fact watcher_detach() is not only relevant for the watcher itself, but
especially for its parent list to remove the current watcher from it.
As iterators are temporary objects, failing to remove their watchers from
the server watcher list causes the server watcher list to be corrupted.
On a normal iteration sequence, the last watcher_next() receives NULL
as target so it successfully detaches the last watcher from the list.
However the corner case here is with interrupted iterators: users are
free to break away from the iteration loop when a specific condition is
met for instance from the lua script, when this happens
hlua_listable_servers_pairs_iterator() doesn't get a chance to detach the
last iterator.
Also, Lua doesn't tell us that the loop was interrupted,
so to fix the issue we rely on the garbage collector to force a last
detach right before the object is freed. To achieve that, watcher_detach()
was slightly modified so that it becomes possible to call it without
knowing whether the watcher is already detached or not: if watcher_detach()
is called on a detached watcher, the function does nothing. This saves
the caller from having to track the watcher state and makes the API a
little more convenient to use. This way we now systematically call
watcher_detach() for server iterators right before they are garbage
collected.
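A hedged sketch of the relaxed watcher_detach() (list handling simplified,
field names illustrative):

void watcher_detach(struct watcher *w)
{
        if (!MT_LIST_INLIST(&w->el))
                return;          /* already detached: calling it again is a no-op */
        MT_LIST_DELETE(&w->el);  /* unlink from the server's watcher list */
}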
This was first reported in GH #3055. It can be observed when the server
list is browsed more than once from Lua for a given proxy and a previous
iteration was interrupted before the end. As the watcher list is corrupted,
the common symptom is watcher_attach() or watcher_next() never returning
due to the internal mt_list call looping forever.
Thanks to GH user @sabretus for the precious help.
It should be backported everywhere 358166a was.
a2base64url() can return a negative value if olen is too short to
accept ilen. This is not supposed to happen since the sha256 should
always fit in a buffer. But this is confusing since a2base64()
returns a signed integer which is put in output->data, which is unsigned.
Fix the issue by setting ret to 0 instead of -1 upon error, and return
an unsigned integer instead of a signed one.
This patch also checks the return value in the caller in order
to emit an error, instead of setting trash.data which is already done
within the function.
DNS-01 needs an external process which would register a TXT record at a
DNS provider, using a REST API or something else.
To achieve this, the process should read the dpapi sink and wait for
events. With the DNS-01 challenge, HAProxy will put the task to sleep
before asking the ACME server to validate the challenge. The task then
needs to be woken up, using the command implemented by this patch.
This patch implements the "acme challenge_ready" command, which should be
used by the agent once the challenge has been configured, in order to wake
the task up.
Example:
echo "@1 acme challenge_ready foobar.pem.rsa domain kikyo" | socat /tmp/master.sock -
This commit adds a new message to the dpapi sink which is emitted during
the new authorization request.
One message is emitted per challenge to resolve. The certificate name as
well as the thumbprint of the account key are on the first line of the
message. The JSON response for the challenge is then dumped, and the
message ends with a \0.
The agent consuming these messages MUST NOT access the URLs, and SHOULD
only use the thumbprint, dns and token to configure a challenge.
Example:
$ ( echo "@@1 show events dpapi -w -0"; cat - ) | socat /tmp/master.sock - | cat -e
<0>2025-08-01T16:23:14.797733+02:00 acme deploy foobar.pem.rsa thumbprint Gv7pmGKiv_cjo3aZDWkUPz5ZMxctmd-U30P2GeqpnCo$
{$
"status": "pending",$
"identifier": {$
"type": "dns",$
"value": "foobar.com"$
},$
"challenges": [$
{$
"type": "dns-01",$
"url": "https://0.0.0.0:14000/chalZ/1o7sxLnwcVCcmeriH1fbHJhRgn4UBIZ8YCbcrzfREZc",$
"token": "tvAcRXpNjbgX964ScRVpVL2NXPid1_V8cFwDbRWH_4Q",$
"status": "pending"$
},$
{$
"type": "dns-account-01",$
"url": "https://0.0.0.0:14000/chalZ/z2_WzibwTPvE2zzIiP3BF0zNy3fgpU_8Nj-V085equ0",$
"token": "UedIMFsI-6Y9Nq3oXgHcG72vtBFWBTqZx-1snG_0iLs",$
"status": "pending"$
},$
{$
"type": "tls-alpn-01",$
"url": "https://0.0.0.0:14000/chalZ/AHnQcRvZlFw6e7F6rrc7GofUMq7S8aIoeDileByYfEI",$
"token": "QhT4ejBEu6ZLl6pI1HsOQ3jD9piu__N0Hr8PaWaIPyo",$
"status": "pending"$
},$
{$
"type": "http-01",$
"url": "https://0.0.0.0:14000/chalZ/Q_qTTPDW43-hsPW3C60NHpGDm_-5ZtZaRfOYDsK3kY8",$
"token": "g5Y1WID1v-hZeuqhIa6pvdDyae7Q7mVdxG9CfRV2-t4",$
"status": "pending"$
}$
],$
"expires": "2025-08-01T15:23:14Z"$
}$
^@