This commit introduces a sample fetch, `le2dec`, to convert
little-endian binary input samples into their decimal representations.
The function converts the input into a string containing unsigned
integer numbers, with each number derived from a specified number of
input bytes. The numbers are separated using a user-defined separator.
This new converter is implemented by adding a parameterized sample_conv_2dec
function, unifying the logic of the be2dec and le2dec converters.
Co-authored-by: Christian Norbert Menges <christian.norbert.menges@sap.com>
[wt: tracked as GH issue #2915]
Signed-off-by: Willy Tarreau <w@1wt.eu>
In 358166a ("BUG/MINOR: hlua_fcn: restore server pairs iterator pointer
consistency"), I wrongly assumed that because the iterator was a temporary
object, no specific cleanup was needed for the watcher.
In fact watcher_detach() is not only relevant for the watcher itself, but
especially for its parent list to remove the current watcher from it.
As iterators are temporary objects, failing to remove their watchers from
the server watcher list causes the server watcher list to be corrupted.
On a normal iteration sequence, the last watcher_next() receives NULL
as target so it successfully detaches the last watcher from the list.
However the corner case here is with interrupted iterators: users are
free to break away from the iteration loop when a specific condition is
met, for instance from the Lua script. When this happens,
hlua_listable_servers_pairs_iterator() doesn't get a chance to detach the
last iterator.
Also, Lua doesn't tell us that the loop was interrupted,
so to fix the issue we rely on the garbage collector to force a last
detach right before the object is freed. To achieve that, watcher_detach()
was slightly modified so that it can be called without knowing whether
the watcher is still attached: if watcher_detach() is called on a
detached watcher, it does nothing. This saves the caller from having to
track the watcher state and makes the API a little more convenient to
use. We now systematically call watcher_detach() for server iterators
right before they are garbage collected.
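The resulting behavior can be sketched with a standalone example (HAProxy
actually uses mt_list; the plain doubly-linked list and member names below
are only illustrative):

  #include <stddef.h>

  struct watcher {
      struct watcher *prev, *next; /* illustrative plain list links */
  };

  /* Idempotent detach: safe to call whether or not <w> is attached,
   * so a GC finalizer may call it unconditionally.
   */
  static void watcher_detach(struct watcher *w)
  {
      if (!w->next)
          return;               /* already detached: do nothing */
      w->prev->next = w->next;  /* unlink from the parent list */
      w->next->prev = w->prev;
      w->prev = w->next = NULL; /* mark as detached */
  }

  int main(void)
  {
      struct watcher w = { &w, &w }; /* a one-element circular list */

      watcher_detach(&w); /* detaches the watcher */
      watcher_detach(&w); /* second call is a harmless no-op */
      return 0;
  }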
This was first reported in GH #3055. It can be observed when the server
list of a given proxy is browsed from Lua more than once and a previous
iteration was interrupted before the end. As the
watcher list is corrupted, the common symptom is watcher_attach() or
watcher_next() not ending due to the internal mt_list call looping
forever.
Thanks to GH user @sabretus for their precious help.
It should be backported everywhere 358166a was.
a2base64url() can return a negative value if olen is too short to
accept ilen. This is not supposed to happen since the sha256 should
always fit in a buffer. But this is confusing since a2base64()
returns a signed integer which is put in output->data which is unsigned.
Fix the issue by setting ret to 0 instead of -1 upon error, and return
an unsigned integer instead of a signed one.
This patch also checks the return value from the caller in order
to emit an error, instead of setting trash.data, which is already done
by the function.
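The confusion can be illustrated with a trivial standalone example
(hypothetical values, not the actual acme code):

  #include <stdio.h>

  int main(void)
  {
      long long ret = -1;            /* error code from a signed helper */
      unsigned long long data = ret; /* stored into an unsigned length */

      /* prints 18446744073709551615: the error silently becomes a
       * huge bogus length instead of being detected
       */
      printf("%llu\n", data);
      return 0;
  }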
DNS-01 needs an external process which would register a TXT record on a
DNS provider, using a REST API or something else.
To achieve this, the process should read the dpapi sink and wait for
events. With the DNS-01 challenge, HAProxy will put the task to sleep
before asking the ACME server to validate the challenge. The task then
needs to be woken up, using the command implemented by this patch.
This patch implements the "acme challenge_ready" command which should be
used by the agent once the challenge was configured in order to wake the
task up.
Example:
echo "@1 acme challenge_ready foobar.pem.rsa domain kikyo" | socat /tmp/master.sock -
This commit adds a new message to the dpapi sink which is emitted during
the new authorization request.
One message is emitted per challenge to resolve. The certificate name as
well as the thumbprint of the account key are on the first line of the
message. The JSON response for one challenge is then dumped, and the
message ends with a \0.
The agent consuming these messages MUST NOT access the URLs, and SHOULD
only use the thumbprint, dns and token to configure a challenge.
Example:
$ ( echo "@@1 show events dpapi -w -0"; cat - ) | socat /tmp/master.sock - | cat -e
<0>2025-08-01T16:23:14.797733+02:00 acme deploy foobar.pem.rsa thumbprint Gv7pmGKiv_cjo3aZDWkUPz5ZMxctmd-U30P2GeqpnCo$
{$
"status": "pending",$
"identifier": {$
"type": "dns",$
"value": "foobar.com"$
},$
"challenges": [$
{$
"type": "dns-01",$
"url": "https://0.0.0.0:14000/chalZ/1o7sxLnwcVCcmeriH1fbHJhRgn4UBIZ8YCbcrzfREZc",$
"token": "tvAcRXpNjbgX964ScRVpVL2NXPid1_V8cFwDbRWH_4Q",$
"status": "pending"$
},$
{$
"type": "dns-account-01",$
"url": "https://0.0.0.0:14000/chalZ/z2_WzibwTPvE2zzIiP3BF0zNy3fgpU_8Nj-V085equ0",$
"token": "UedIMFsI-6Y9Nq3oXgHcG72vtBFWBTqZx-1snG_0iLs",$
"status": "pending"$
},$
{$
"type": "tls-alpn-01",$
"url": "https://0.0.0.0:14000/chalZ/AHnQcRvZlFw6e7F6rrc7GofUMq7S8aIoeDileByYfEI",$
"token": "QhT4ejBEu6ZLl6pI1HsOQ3jD9piu__N0Hr8PaWaIPyo",$
"status": "pending"$
},$
{$
"type": "http-01",$
"url": "https://0.0.0.0:14000/chalZ/Q_qTTPDW43-hsPW3C60NHpGDm_-5ZtZaRfOYDsK3kY8",$
"token": "g5Y1WID1v-hZeuqhIa6pvdDyae7Q7mVdxG9CfRV2-t4",$
"status": "pending"$
}$
],$
"expires": "2025-08-01T15:23:14Z"$
}$
^@
This commit emits a log which outputs the TXT entry to create in case of
DNS-01. This is useful when you want to update your TXT entry
manually.
Example:
acme: foobar.pem.rsa: DNS-01 requires to set the "acme-challenge.example.com" TXT record to "7L050ytWm6ityJqolX-PzBPR0LndHV8bkZx3Zsb-FMg"
Files ending with '-t.h' are supposed to be used for structure
definitions and could be included in the same file to check API
definitions.
This patch removes TRACE_SOURCE from acme-t.h to avoid conflicts with
other TRACE_SOURCE definitions.
QUIC MUX may be initialized prior to handshake completion, when 0-RTT is
used. In this case, connection is flagged with CO_FL_EARLY_SSL_HS, which
is notably used by wait-for-hs http rule.
Early data may be subject to replay attacks. For this reason, haproxy
adds the header 'Early-data: 1' to all requests handled as TLS early
data. Thus the server can reject it if it is deemed unsafe. This header
injection is implemented by http-ana. However, it was not functional
with QUIC due to missing CO_FL_EARLY_DATA connection flag.
Fix this by ensuring that QUIC MUX sets CO_FL_EARLY_DATA when needed.
This is performed during qcc_recv() for STREAM frame reception. It is
only set if QC_CF_WAIT_HS is set, meaning that the handshake is not yet
completed. After this, the request is considered safe and Early-data
header is not necessary anymore.
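The logic may be sketched as follows (flag names taken from this commit,
bit values and surrounding code hypothetical):

  #include <stdio.h>

  #define QC_CF_WAIT_HS    0x01 /* MUX flag: handshake not completed */
  #define CO_FL_EARLY_DATA 0x02 /* conn flag: carries TLS early data */

  int main(void)
  {
      unsigned int qcc_flags = QC_CF_WAIT_HS, conn_flags = 0;

      /* sketch of the qcc_recv() logic on STREAM frame reception:
       * data received before handshake completion is early data
       */
      if (qcc_flags & QC_CF_WAIT_HS)
          conn_flags |= CO_FL_EARLY_DATA;

      printf("early data: %s\n",
             (conn_flags & CO_FL_EARLY_DATA) ? "yes" : "no");
      return 0;
  }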
This should fix github issue #3054.
This must be backported up to 3.2 at least. If possible, it should be
backported to all stable releases as well. On these versions, the
current patch relies on the following refactoring commit:
commit 0a53a008d0
MINOR: mux-quic: refactor wait-for-handshake support
Following the latest adjustment on session_add_conn() /
session_check_idle_conn(), detach muxes callbacks were rewritten for
private connection handling.
Nothing really fancy here: some more explicit comments and the removal
of a duplicate check on idle conn status for muxes with true
multiplexing support.
session_check_idle_conn() is called by muxes when a connection becomes
idle. It ensures that the session idle limit is not yet reached. Otherwise,
the connection is removed from the session and it can be freed.
Prior to this patch, session_check_idle_conn() was compatible with a
NULL session argument. In this case, it would return true, considering
that no limit was reached and connection not removed.
However, this renders the function error-prone and subject to future
bugs. This patch streamlines it by ensuring it is never called with a
NULL argument. Thus it now only returns true if the connection is kept
in the session, or false if it was removed, as first intended.
session_check_idle_conn() is called to flag a connection already
inserted in a session list as idle. If the session limit on the number
of idle connections (max-session-srv-conns) is exceeded, the connection
is removed from the session list.
In addition to the connection removal, session_check_idle_conn()
directly calls MUX destroy callback on the connection. This means the
connection is freed by the function itself and should not be used by the
caller anymore.
This is not practical when an alternative connection closure method
should be used, such as a graceful shutdown with QUIC. As such, remove
the MUX destroy invocation: it is now the responsibility of the caller to
either close or release the connection immediately.
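The new calling convention could look like this (a sketch only; the
stand-in below mimics the contract described above, member and helper
names are hypothetical):

  struct session;
  struct connection;

  /* stand-in with the new contract: returns non-zero if the connection
   * is kept in the session, zero if it was removed because the session
   * idle limit was exceeded (the real function lives in session.c)
   */
  static int session_check_idle_conn(struct session *sess,
                                     struct connection *conn)
  {
      (void)sess; (void)conn;
      return 0; /* pretend the limit was exceeded */
  }

  /* sketch of a MUX detach callback using it */
  static void mux_detach_sketch(struct session *sess, struct connection *conn)
  {
      if (!session_check_idle_conn(sess, conn)) {
          /* the connection was removed from the session: the caller now
           * chooses how to terminate it (immediate release, or a
           * graceful shutdown for QUIC) instead of an unconditional
           * MUX destroy
           */
      }
  }

  int main(void) { mux_detach_sketch(0, 0); return 0; }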
Add a BUG_ON() on session_check_idle_conn() to ensure the connection is
not already flagged as CO_FL_SESS_IDLE.
This checks that this function is only called once per connection
transition from active to idle. This is necessary to ensure that the
session idle counter is only incremented once per connection.
session_add_conn() uses three arguments: the connection and session
instances, plus a void pointer labelled as target. Typically, it
represents the server, but it can also be a backend instance (for example
on dispatch).
In fact, this argument is redundant as <target> is already a member of
the connection. This commit simplifies session_add_conn() by removing
it. A BUG_ON() on target is extended to ensure it is never NULL.
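A sketch of the prototype change (argument names illustrative):

  struct session;
  struct connection;

  /* before: <target> passed explicitly although it duplicates
   * conn->target
   *
   *   int session_add_conn(struct session *sess, struct connection *conn,
   *                        void *target);
   *
   * after: the target is taken from the connection itself
   */
  int session_add_conn(struct session *sess, struct connection *conn);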
This commit is the first one of a series to refactor insertion of backend
private connection into the session list.
session_add_conn() is used to attach a connection into a session list.
Previously, this function would report an error if the connection
specified was already attached to another session. However, this case
currently never happens and thus can be considered as buggy.
Remove this check and replace it with a BUG_ON(). This allows us to ensure
that session insertion remains consistent. The same check is also
transformed in session_check_idle_conn().
On stream detach on backend side, connection is inserted in the proper
server/session list to be able to reuse it later. If insertion fails and
the connection is idle, the connection can be removed immediately.
If this occurs on a QUIC connection, QUIC MUX implements graceful
shutdown to ensure the server is notified of the closure. However, the
connection instance is not freed. Change this to ensure that both
shutdown and release are performed.
This is preparation work for shared counters between co-processes. As
co-processes will need to share a common date. global_now_ms will be used
for that as it will point to the shm when sharing is enabled.
Thus in this patch we turn global_now_ms into a pointer (and adjust the
places where it is written to and read from; fortunately, atomic operations
through the pointer were already used, so the change is trivial).
For now global_now_ms points to process-local _global_now_ms which is a
fallback for when sharing through the shm is not enabled.
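A minimal sketch of the indirection (types simplified; HAProxy actually
goes through atomic operations):

  #include <stdint.h>
  #include <stdio.h>

  /* process-local fallback, used when shm sharing is not enabled */
  static uint32_t _global_now_ms;

  /* points either to _global_now_ms or into the shm when sharing is
   * enabled between co-processes
   */
  static uint32_t *global_now_ms = &_global_now_ms;

  int main(void)
  {
      /* readers and writers go through the pointer; in HAProxy these
       * are atomic operations, which keep working unchanged through
       * the extra indirection
       */
      *global_now_ms += 10;
      printf("now_ms = %u\n", (unsigned)*global_now_ms);
      return 0;
  }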
75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing the shared
stats directly in counters struct") took care of renaming
counters_fe_shared_init() but we forgot counters_be_shared_init().
Let's fix that for consistency
As discussed in GH #3051, default-path is not taken into account when
loading files using lua-load-per-thread. In fact, the initial
hlua_load_state() (performed on first thread which parses the config)
is successful, but other threads run hlua_load_state() later based
on config hints which were saved by the first thread, and those config
hints only contain the file path provided on the lua-load-per-thread
config line, not the absolute one. Indeed, `default-path` directive
changes the current working directory only for the thread parsing the
configuration.
To fix the issue, when storing config hints under hlua_load_per_thread()
we now make sure to save the absolute file path for `lua-load-per-thread'
argument.
Thanks to GH user @zhanhb for having reported the issue
It may be backported to all stable versions.
Implement traces for the ACME protocol.
-dt acme:data:complete will dump every input and output buffers,
including decoded buffers before being converted to JWS.
It will also dump certificates in the traces.
-dt acme:user:complete will only dump the state of the task handler.
Released version 3.3-dev5 with the following main changes :
- BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
- DOC: list missing global QUIC settings
Complete list of global keywords with missing QUIC entries.
This could be backported to stable versions. This requires taking into
account the version of introduction of each keyword.
* limited-quic, introduced in 2.8
* no-quic, introduced in 2.8
* tune.quic.cc.cubic.min-losses, introduced in 3.1
Following c24de07 ("OPTIM: stats: store fast sharded counters pointers
at session and stream level") some crashes were observed in
connect_server():
#0 0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
2101 _HA_ATOMIC_INC(&s->sv_tgcounters->connect);
Missing separate debuginfos, use: debuginfo-install glibc-2.17-325.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64 nss-softokn-freebl-3.67.0-3.el7_9.x86_64 pcre-8.32-17.el7.x86_64
(gdb) bt
#0 0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
#1 0x00000000007baff8 in back_try_conn_req (s=0x65117b0) at src/backend.c:2378
#2 0x00000000006c0e9f in process_stream (t=0x650f180, context=0x65117b0, state=8196) at src/stream.c:2366
#3 0x0000000000bd3e51 in run_tasks_from_lists (budgets=0x7ffd592752e0) at src/task.c:655
#4 0x0000000000bd49ef in process_runnable_tasks () at src/task.c:889
#5 0x0000000000851169 in run_poll_loop () at src/haproxy.c:2834
#6 0x0000000000851865 in run_thread_poll_loop (data=0x1a03580 <ha_thread_info>) at src/haproxy.c:3050
#7 0x0000000000852a53 in main (argc=7, argv=0x7ffd592755f8) at src/haproxy.c:3637
Here the crash occurs during the atomic inc of a sv_tgcounters metric from
the stream pointer, which tells us the pointer is likely garbage.
In fact, we assign s->sv_tgcounters each time the stream target is set to
a valid server. For that we use stream_set_srv_target() helper which does
the assignment for us. Reviewing the code, it turns out we forgot to call
stream_set_srv_target() in pendconn_dequeue(), where the stream target
is set to the server that picked the pendconn.
Let's fix the bug by using stream_set_srv_target() there.
No backport needed unless c24de07 is.
Released version 3.3-dev4 with the following main changes :
- CLEANUP: server: do not check for duplicates anymore in findserver()
- REORG: server: move findserver() from proxy.c to server.c
- MINOR: server: use the tree to look up the server name in findserver()
- CLEANUP: server: rename server_find_by_name() to server_find()
- CLEANUP: server: rename findserver() to server_find_by_name()
- CLEANUP: server: use server_find_by_name() where relevant
- CLEANUP: cfgparse: lookup proxy ID using existing functions
- CLEANUP: stream: lookup server ID using standard functions
- CLEANUP: server: simplify server_find_by_id()
- CLEANUP: server: add server_find_by_addr()
- CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
- CLEANUP: server: be sure never to compare src against a non-existing defsrv
- MEDIUM: proxy: take the defsrv out of the struct proxy
- MINOR: proxy: add checks for defsrv's validity
- MEDIUM: proxy: no longer allocate the default-server entry by default
- MEDIUM: proxy: register a post-section cleanup function
- MINOR: debug: report haproxy and operating system info in panic dumps
- BUG/MEDIUM: h3: do not overwrite interim with final response
- BUG/MINOR: h3: properly realloc buffer after interim response encoding
- BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
- MINOR: qmux: change API for snd_buf FIN transmission
- BUG/MEDIUM: h3: handle interim response properly on FE side
- BUG/MINOR: h3: properly handle interim response on BE side
- BUG/MINOR: quic: Wrong source address use on FreeBSD
- MINOR: h3: remove unused outbuf in h3_resp_headers_send()
- BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
- DEV: gdb: add a memprofile decoder to the debug tools
- MINOR: quic: Get rid of qc_is_listener()
- DOC: connection: explain the rules for idle/safe/avail connections
- BUG/MEDIUM: quic-be: CC buffer released from wrong pool
- BUG/MINOR: halog: exit with error when some output filters are set simultaneosly
- MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
- MINOR: cpu-topo: write thread-cpu bindings into trash buffer
- MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
- MINOR: debug: add thread-cpu bindings info in 'show dev' output
- MINOR: quic: Remove pool_head_quic_be_cc_buf pool
- BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
- BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
- BUG/MINOR: logs: fix log-steps extra log origins selection
- BUG/MINOR: hq-interop: fix FIN transmission
- MINOR: ssl: Add ciphers in ssl traces
- MINOR: ssl: Add curve id to curve name table and mapping functions
- MINOR: ssl: Add curves in ssl traces
- MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
- MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
- MINOR: h3: use smallbuf for request header emission
- MINOR: h3: add traces to h3_req_headers_send()
- BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
- MINOR: log: explicitly ignore "log-steps" on backends
- BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
- BUG/MINOR mux-quic: apply correctly timeout on output pending data
- BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
- MINOR: mux-quic: refactor timeout code
- MINOR: mux-quic: correctly implement backend timeout
- MINOR: mux-quic: disable glitch on backend side
- MINOR: mux-quic: store session in QCS instance
- MEDIUM: mux-quic: implement be connection reuse
- MINOR: mux-quic: do not reuse connection if app already shut
- MEDIUM: mux-quic: support backend private connection
- MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
- BUG/MINOR: acme: allow "processing" in challenge requests
- CLEANUP: acme: fix wrong spelling of "resources"
- CLEANUP: ssl: Use only NIDs in curve name to id table
- MINOR: acme: add ACME to the haproxy -vv feature list
- BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
- BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
- BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
- BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
- BUG/MEDIUM: Remove sync sends from streams to applets
- MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
- MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
- MEDIUM: hlua: Update the tcp applet to use its own buffers
- MINOR: hlua: Fill the request array on the first HTTP applet run
- MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
- MEDIUM: hlua: Update the http applet to use its own buffers
- BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
- BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
- MEDIUM: hlua: Update the socket applet to use its own buffers
- BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
- MEDIUM: dns: Update the dns_session applet to use its own buffers
- CLEANUP: http-client: Remove useless indentation when sending request body
- MINOR: http-client: Try to send request body with headers if possible
- MINOR: http-client: Trigger an error if first response block isn't a start-line
- BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
- MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
- MEDIUM: http-client: Update the http-client applet to use its own buffers
- MEDIUM: log: Update the log applet to use its own buffers
- MEDIUM: sink: Update the sink applets to use their own buffers
- MEDIUM: peers: Update the peer applet to use its own buffers
- MEDIUM: promex: Update the promex applet to use their own buffers
- MINOR: applet: Add support for flags on applets with a flag about the new API
- MEDIUM: applet: Emit a warning when a legacy applet is spawned
- BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
- MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
- CLEANUP: compiler: prefer char * over void * for pointer arithmetic
- CLEANUP: include: replace hand-rolled offsetof to avoid UB
- CLEANUP: peers: remove unused peer_session_target()
- OPTIM: stats: store fast sharded counters pointers at session and stream level
Following commit 75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing
the shared stats directly in counters struct"), in order to minimize the
impact of the recent sharded counters work, we try to push things a bit
further in this patch by storing and using "fast" pointers at the session
and stream levels when available to avoid costly indirections and
systematic "tgid" resolution (which can not be cached by the CPU due to
its THREAD-local nature).
Indeed, we know that a session/stream is tied to a given CPU; thanks to
this we know that the tgid for a given session/stream will never change.
Given that, we are able to store sharded frontend and listener counters
pointer at the session level (namely sess->fe_tgcounters and
sess->li_tgcounters), and once the backend and the server are selected,
we are also able to store backend and server sharded counters
pointer at the stream level (namely s->be_tgcounters and s->sv_tgcounters).
Everywhere we rely on these counters and the stream or session context is
available, we use the fast pointers instead of the indirect pointer
path to make the pointer resolution a bit faster.
This optimization proved to bring a few percent back, and together with
the previous 75e480d10 commit we now fixed the performance regression (we
are back on par with 3.2 stats performance).
Since commit 7293eb68 ("MEDIUM: peers: use server as stream target") the
peer session target always points to a server in order to benefit from
existing server transport options.
Thanks to that, it is no longer necessary to have the peer_session_target()
helper function, because all it does is return the pointer to the
server object. Let's get rid of it.
The C standard specifies that it's undefined behavior to dereference
NULL (even if you use & right after). The hand-rolled offsetof idiom
&(((s*)NULL)->f) is thus technically undefined. This clutters the
output of UBSan and is simple to fix: just use the real offsetof when
it's available.
Note that there's no clear statement about this point in the spec,
only several points which together converge to this:
- From N3220, 6.5.3.4:
A postfix expression followed by the -> operator and an identifier
designates a member of a structure or union object. The value is
that of the named member of the object to which the first expression
points, and is an lvalue.
- From N3220, 6.3.2.1:
An lvalue is an expression (with an object type other than void) that
potentially designates an object; if an lvalue does not designate an
object when it is evaluated, the behavior is undefined.
- From N3220, 6.5.4.4 p3:
The unary & operator yields the address of its operand. If the
operand has type "type", the result has type "pointer to type". If
the operand is the result of a unary * operator, neither that operator
nor the & operator is evaluated and the result is as if both were
omitted, except that the constraints on the operators still apply and
the result is not an lvalue. Similarly, if the operand is the result
of a [] operator, neither the & operator nor the unary * that is
implied by the [] is evaluated and the result is as if the & operator
were removed and the [] operator were changed to a + operator.
=> In short, this is saying that C guarantees these identities:
1. &(*p) is equivalent to p
2. &(p[n]) is equivalent to p + n
As a consequence, &(*p) doesn't result in the evaluation of *p, only
the evaluation of p (and similar for []). There is no corresponding
special carve-out for ->.
See also: https://pvs-studio.com/en/blog/posts/cpp/0306/
After this patch, HAProxy can run without crashing after building w/
clang-19 -fsanitize=undefined -fno-sanitize=function,alignment
This patch changes two instances of pointer arithmetic on void *
to use char * instead, to avoid UB. This is essentially to please
UB analyzers, though.
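The two cleanups can be summed up with this small example (the hand-rolled
macro name is illustrative):

  #include <stddef.h>
  #include <stdio.h>

  struct foo {
      int a;
      char b;
  };

  /* the hand-rolled idiom: forms an lvalue through a NULL pointer,
   * which is technically undefined and reported by UBSan
   */
  #define handrolled_offsetof(type, f) ((size_t)&(((type *)NULL)->f))

  int main(void)
  {
      struct foo x;

      /* the standard macro from <stddef.h> has no such problem */
      char *p = (char *)&x + offsetof(struct foo, b);

      /* char * (not void *, a GNU extension) for byte arithmetic */
      *p = 'b';
      printf("offset of b: %zu\n", offsetof(struct foo, b));
      return 0;
  }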
Between 3.2 and 3.3-dev we observed a noticeable performance regression
due to stats handling. After bisecting, Willy found out that recent
work to split stats computing across multiple thread groups (stats
sharding) was responsible for that performance regression. We're looking
at roughly 20% performance loss.
More precisely, it is the added indirections, multiplied by the number
of statistics that are updated for each request, which in the end causes
a significant amount of time being spent resolving pointers.
We noticed that the fe_counters_shared and be_counters_shared structures
which are currently allocated in dedicated memory since a0dcab5c
("MAJOR: counters: add shared counters base infrastructure")
are no longer huge since 16eb0fab31 ("MAJOR: counters: dispatch counters
over thread groups") because they now essentially hold flags plus the
per-thread group id pointer mapping, not the counters themselves.
As such we decided to try merging fe_counters_shared and
be_counters_shared in their parent structures. The cost is a slight memory
overhead for the parent structure, but it allows us to get rid of one
pointer indirection. This patch alone yields visible performance gains
and almost restores 3.2 stats performance.
counters_fe_shared_get() was renamed to counters_fe_shared_prepare() and
now returns either failure or success instead of a pointer: we no longer
need to retrieve a shared pointer, as the function takes care of
initializing the existing one.
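A sketch of the layout change (member names and sizes hypothetical):

  /* hypothetical minimal shared part: flags plus per-thread-group
   * pointer mapping, no longer the counters themselves
   */
  struct fe_counters_shared {
      unsigned int flags;
      void *tg[16 /* MAX_TGROUPS */];
  };

  /* before: one extra pointer load on every counter update
   *
   *   struct fe_counters {
   *       struct fe_counters_shared *shared;  (separate allocation)
   *   };
   *
   * after: the shared part is embedded in its parent
   */
  struct fe_counters {
      struct fe_counters_shared shared;
  };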
Since ccc43412 ("OPTIM: log: use thread local lf_buildctx to stop pushing
it on the stack"), recursively calling sess_build_logline_orig(), which
may for instance happen when leveraging %ID (or unique-id fetch) for the
first time, would lead to undefined behavior because the parent
sess_build_logline_orig() build context was shared between recursive calls
(only one build ctx per thread to avoid pushing it on the stack for each
call).
In short, the parent build ctx would be altered by the recursive calls,
which is obviously not expected and could result in log formatting errors.
To fix the issue but still avoid polluting the stack with a large
lf_buildctx struct, let's move the static 256-byte build buffer out of the
buildctx so that the buildctx is stored on the stack again (each function
invocation has its own dedicated build ctx). On the other hand, it's
acceptable to have only one 256-byte build buffer per thread because the
build buffer is not involved in recursive calls (unlike the build ctx).
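The fix can be sketched like this (field names hypothetical):

  /* the big scratch buffer stays per-thread: it is not used across
   * recursive calls, so sharing it is safe
   */
  static __thread char build_buf[256];

  struct lf_buildctx {
      int options;   /* hypothetical fields */
      char *buf;
  };

  void sess_build_logline_orig_sketch(void)
  {
      /* the ctx is back on the stack: each invocation, including
       * recursive ones (e.g. through %ID), gets its own copy
       */
      struct lf_buildctx ctx = { .options = 0, .buf = build_buf };

      (void)ctx;
  }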
Thanks to Willy and Vincent Gramer for spotting the bug and providing
useful repro.
It should be backported in 3.0 with ccc43412.
To motivate developers to support the new applets API, a warning is now
emitted when a legacy applet is spawned. To not flood users, this warning is
only emitted once per legacy applet. To do so, the applet flag
APPLET_FL_WARNED was added. It is set when the warning is emitted.
Note that test and set on this flag are not performed via atomic operations.
So it is possible to have more than one warning for a given applet if it is
spawned at the same time on several threads. At worst, there is one
warning per thread.
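A standalone sketch of the once-per-applet warning (flag values and the
warning helper are illustrative):

  #include <stdio.h>

  #define APPLET_FL_NEW_API 0x01 /* applet uses the new buffer API */
  #define APPLET_FL_WARNED  0x02 /* legacy warning already emitted */

  struct applet {
      const char *name;
      unsigned int flags;
  };

  /* warn once per legacy applet; the test-and-set is deliberately not
   * atomic, so at worst one warning per thread may be emitted
   */
  static void applet_warn_legacy(struct applet *app)
  {
      if (app->flags & (APPLET_FL_NEW_API | APPLET_FL_WARNED))
          return;
      app->flags |= APPLET_FL_WARNED;
      fprintf(stderr, "legacy applet '%s' spawned\n", app->name);
  }

  int main(void)
  {
      struct applet a = { "old-style", 0 };

      applet_warn_legacy(&a); /* warns */
      applet_warn_legacy(&a); /* silent */
      return 0;
  }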
A new field was added in the applet structure to be able to set flags on
the applets. The first one is related to the new API. APPLET_FL_NEW_API is
set for applets based on the new API. It was set on all HAProxy's applets.
Thanks to this patch, the promex applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
Thanks to this patch, the peer applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
Thanks to this patch, the sink applets are now using their own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
Thanks to this patch, the log applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
Thanks to this patch, the http-client applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
In the CLI I/O handler interacting with the HTTP client, in HTX mode, after
a dump of the HTX message, data must be removed. Instead of removing all
blocks one by one, we can call htx_reset() because the whole message must
be flushed.
In the CLI I/O handler interacting with the HTTP client, we must not try to
push raw headers in HTX mode, because there is no raw data in this
mode. This prevented the HTX dump at the end of the I/O handler.
It is a 3.3-specific issue. No backport needed.
The first HTX block of a response must be a start-line. There is no reason
to wait for something else. And if there are output data in the response
channel buffer, it means we must find the start-line.
There is no reason to yield after sending the request headers, except if the
request was fully sent. If there is a payload, it is better to send it as
well. However, when the whole request was sent, we can leave the I/O handler.
Thanks to this patch, the dns_session applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
The issue was introduced by commit 27236f221 ("BUG/MINOR: dns: add tempo
between 2 connection attempts for dns servers"). In this patch, to delay the
reconnection, a timer is used on the appctx when it is created. This
postpones the appctx initialization. However, once initialized, the
expiration time of the underlying task is not reset. So, it is always
considered as expired and the appctx is woken up in loop.
The fix is quite simple. In dns_session_init(), the expiration time of the
appctx's task is now always set to TICK_ETERNITY.
This patch must be backported everywhere the commit above was backported. So
as far as 2.8 for now but possibly to all stable versions.
Thanks to this patch, the lua cosocket applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
It is a fix similar to the previous one ("BUG/MEDIUM: hlua: Report to SC
when data were consumed on a lua socket"), but for the write side. The
writer must notify the cosocket that it needs more space in the request
buffer to produce more data by calling sc_need_room(). Otherwise, nothing
prevents the cosocket applet from being woken up again and again.
This patch must be backported as far as 2.8, and maybe to 2.6 too.
The lua cosockets are quite strange. There is an applet used to handle
the connection, with writers and readers subscribed to it to write or
read data. Writers and readers are tasks woken up by the cosocket applet
when data can be consumed or produced, depending on the channels buffers
state. Then the cosocket applet is woken up by writers and readers when
read or write events were performed.
It means the cosocket applet has only little information on what was produced
or consumed. It is the writers and readers responsibility to notify any
blocking. Among other things, the readers must take care to notify the
stream on top of the cosocket applet that some data was consumed. Otherwise,
it may remain blocked, waiting for a write event (a write event from the
stream point of view is a read event from the cosocket point of view).
This patch must be backported as far as 2.8, and maybe to 2.6 too.
Thanks to this patch, the lua HTTP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Functions to receive and send data have also been updated to
use the applet API and to remove any dependencies on the stream-connectors
and the channels.
The hlua_http_get_headers() function was using the HTTP message from the
stream TXN to retrieve headers from a message. However, this would be an
issue when updating the lua HTTP applet to use its own buffers. Indeed, in
that case, information from the channels will be unavailable. So
hlua_http_get_headers() now uses a buffer containing an HTX message. It
is just an API change because, internally, the function was already
manipulating an HTX message.
When a lua HTTP applet is created, a "request" object is created, filled
with the request information (method, path, headers...), to be able to
easily retrieve this information from the script. However, this was done
when the appctx was created, retrieving the info from the request channel.
To be able to update the applet to use its own buffer, it is now performed
on the first applet run. Indeed, when the applet is created, the info is
not forwarded yet and should not be accessed. Note that for now, the
information is still retrieved from the channel.
Thanks to this patch, the lua TCP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. Other changes are quite light. Mainly, end of stream and
errors are reported on the appctx instead of the stream-endpoint descriptor.
applet_get_inbuf() and applet_get_outbuf() functions were not testing if the
buffers were available. So, the caller had to check them before calling one
of these functions. It is not really handy. So now, these functions take
care to have a fully usable buffer before returning. Otherwise NULL is
returned.
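Usage then becomes (a sketch; the stand-in getter below only mimics the
described behavior):

  #include <stddef.h>

  struct buffer { char *area; size_t size; };
  struct appctx { struct buffer outbuf; int buf_ready; };

  /* stand-in for the real applet_get_outbuf(): it now returns NULL
   * itself when no fully usable buffer can be obtained
   */
  static struct buffer *applet_get_outbuf(struct appctx *appctx)
  {
      return appctx->buf_ready ? &appctx->outbuf : NULL;
  }

  static int applet_io_sketch(struct appctx *appctx)
  {
      struct buffer *buf = applet_get_outbuf(appctx);

      if (!buf)
          return 0; /* no buffer yet: no separate check needed anymore */
      /* ... produce data into <buf> ... */
      return 1;
  }

  int main(void)
  {
      struct appctx ctx = { { NULL, 0 }, 0 };

      return applet_io_sketch(&ctx);
  }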
It will be useful for HTX applets because available data in the input buffer and
available space in the output buffer are computed from the HTX message and not
the buffer itself. So now, applet_htx_input_data() and applet_htx_output_room()
functions can be used.
When the applet API was reviewed to use dedicated buffers, the support for
sends from the streams to applets was added. Unfortunately, it was not a
good idea because this way it is possible to deliver data to an applet and
release it just after, truncating data. Indeed, the release stage for
applets is related to the stream release itself. However, unlike the
multiplexers, the applets cannot survive a stream for now.
So, for now, the sync sends from the streams are removed for applets,
waiting for a better way to handle the applets release stage.
Note that this only concerns applets using their own buffers. And as of now,
the bug is harmless because all refactored applets are on server side and
consume data first. But this will be an issue with the HTTP client.
This patch should be backported as far as 3.0 after a period of observation.
The applet_getword() function returns one extra byte when a string is
returned because the "ret" variable is not reset before the loop on the
data. The patch also fixes applet_getline().
It is a 3.3-specific issue. No need to backport.
sc_is_send_allowed() function is used to know if an applet is able to
receive data from the stream. But this function was designed for applets
using the channels buffer. It is not adapted to applets using their own
buffers.
When the SE_FL_WAIT_DATA flag is set, it means the applet is waiting for
more data and should not be woken up without new data. For applets using
the channels buffer, just testing the flag is enough because
process_stream() will remove it when more data becomes available. For
applets using their own buffers, it is more complicated. Some data may be
blocked in the output channel buffer. In that case, and when the applet
input buffer can receive data, the applet can be woken up.
This patch must be backported as far as 3.0 after a period of observation.
When data are skipped from the input buffer of an applet, we must take care
to notify that the input buffer is no longer full. Otherwise, this could
prevent the stream from pushing data to the applet.
It is 3.3-specific. No backport needed.
When an HTTP applet tries to retrieve data, the request headers are still in
the buffer. But, instead of being silently removed, their size is removed
from the amount of data retrieved. When the request payload is fully
retrieved, it is not an issue. But it is a problem when a length is
specified. The data are shortened by the headers size.
So now, we take care to silently remove headers.
This patch must be backported to all stable versions.
The curve name to curve id mapping table was built out of multiple
internal tables found in openssl sources, namely the 'nid_to_group'
table found in 'ssl/t1_lib.c' which maps openssl specific NIDs to public
IANA curve identifiers. In this table, there were two instances of
EVP_PKEY_XXX ids being used while all the other ones are NID_XXX
identifiers.
Since the two EVP_PKEY are actually equal to their NID equivalent in
'include/openssl/evp.h' we can use NIDs all along for better coherence.
Allow the "processing" status in the challenge object when requesting
to do the challenge, in addition to "pending".
According to RFC 8555 https://datatracker.ietf.org/doc/html/rfc8555/#section-7.1.6
Challenge objects are created in the "pending" state. They
transition to the "processing" state when the client responds to the
challenge (see Section 7.5.1)
However some CA could respond with a "processing" state without ever
transitioning to "pending".
Must be backported to 3.2.
If a backend connection is private, it should not be reused outside of
its original attached session. As such, on stream detach operation, such
connection is never inserted into server idle/avail list. Instead, it is
stored directly on the session.
The purpose of this commit is to implement proper handling of private
backend connections via QUIC multiplexer.
QUIC connection graceful closure is performed in two steps. First, the
application layer is closed. In the context of HTTP/3, this is done with
a GOAWAY frame emission, which forbids opening of new streams. Then the
whole connection is terminated via CONNECTION_CLOSE which is the final
emitted frame.
This commit ensures that when the app layer is shut for a backend
connection, this connection is removed from the idle or avail server
tree. The objective is to prevent the stream layer from trying to reuse
a connection if no new stream can be attached to it.
New BUG_ON checks are inserted in qmux_strm_attach() and h3_attach() to
ensure that this assertion is always true.
Implement support for QUIC connection reuse on the backend side. The
main change is done during detach stream operation. If a connection is
idle, it is inserted in the server list. Else, it is stored in the
server avail tree if there is room for more streams.
For a non idle connection, qmux_avail_streams() is reused to check that
the stream flow-control limit is not yet reached. If the limit is reached,
the connection is not inserted in the avail tree, so it cannot be reused,
even if flow-control is unblocked later by the peer. This latter point
could be improved in the future.
Note that support for QUIC private connections is still missing. Reuse
code will evolve to fully support this case.
Add a new <sess> member into QCS structure. It is used to store the
parent session of the stream on attach operation. This is only done for
backend side.
This new member will become necessary when connection reuse will be
implemented. <owner> member of connection is not suitable as it could be
set to NULL, notably after a session_add_conn() failure.
Also, a single BE conn can be shared across different session instances,
in particular when using aggressive/always reuse mode. Thus it is
necessary to link each QCS instance with its session.
For now, the QUIC glitch limit counter is only available on the frontend
side. Thus, disable its incrementation on the backend side. Also, the
session is only reliably available as conn <owner> on the frontend side,
so the session_add_glitch_ctr() operation is also guarded.
qcc_refresh_timeout() is the function called on QUIC MUX activity. Its
purpose is to update the timeout by selecting the correct value
depending on the connection state.
Prior to this patch, backend connections were mostly ignored by the
function. However, the default server timeout was selected as a
fallback. This is incompatible with backend connection reuse.
This patch fixes timeout applied on backend connections. Only values
specific to frontend which are http-request and http-keep-alive timeouts
are now ignored for a backend connection. Also, fallback timeout is only
used for frontend connections.
This patch ensures that an idle backend connection won't be deleted due
to server timeout. This is necessary for proper connection reuse which
will be implemented in a future patch.
This commit is a small reorganization of condition used into
qcc_refresh_timeout(). Its objective is to render the code more logical
before the next patch which will ensure that timeout is properly set for
backend connections.
If a connection remains on a proxy currently disabled or stopped, a
special spread timeout is set if active close is configured. For QUIC
MUX, this is set via qcc_refresh_timeout() as with all other timeout
values.
Fix this closing timeout setting: it is now used as an override to any
other timeout that may have been chosen, if the calculated spread time is
lower than the previously selected value. This is done for backend
connections as well.
This should be backported up to 2.6 after a period of observation.
When no stream is attached, the mux layer is responsible for maintaining
a timeout. The first criterion is to apply the client/server timeout if
there is still data waiting for emission.
Previously, <hreq> qcc member was used to determine this state. However,
this only covers bidirectional streams. Fix this by testing if
<send_list> is empty or not. This is enough to take into account both
bidi and uni streams.
Theoretically, this should be backported to every stable version.
However, send-list is not available on 2.6 and there is no alternative
to quickly determine if there is waiting output data. Thus, it's better
to backport it up to 2.8 only.
The requests that checked the status of the challenge and retrieved
the certificate were done using a GET.
This is working with letsencrypt and other CA providers, but it might
not work everywhere. RFC 8555 specifies that only the directory and
newNonce resources MUST work with GET requests, but everything else
must use POST-as-GET.
Must be backported to 3.2.
"log-steps" was already ignored if directly defined in a backend section,
however, when defined in a defaults section it was inherited to all
proxies no matter their capability (ie: including backends).
As configurations often contain more backends than frontends, this would
result in wasted memory given that the log-steps setting is only
considered on frontends.
Let's fix that by preventing the inheritance from defaults section to
anything else than frontends. Also adjust the documentation to mention
that the setting is not relevant for backends.
Due to the introduction of smallbuf usage for HTTP/3 headers emission,
the ret variable may be used uninitialized if the buffer allocation fails
due to insufficient room in the QUIC connection window.
Fix this by setting ret value to 0.
Function variable declarations are also adjusted so that the pattern is
similar to h3_resp_headers_send(). Finally, outbuf buffer is also
removed as it is now unused.
No need to backport.
Similarly to HTTP/3 response encoding, a small buffer is first allocated
for the request encoding on the backend side. If this is not sufficient,
the smallbuf is replaced by a standard buffer and encoding is restarted.
This is useful to reduce the window usage over a connection of smaller
requests.
SSL libraries like wolfSSL that don't have the clienthello callback
mechanism enabled do not need to have the traces that are only called
from the said callback.
The code added to parse the ciphers relied on a function that was not
defined in wolfSSL (SSL_CIPHER_find).
The contents of the extensions were only dumped with verbosity
'complete' which meant that the 'advanced' verbosity was pretty much
useless despite what its name implies (it was the same as the 'simple'
one).
The 'advanced' verbosity is now the "maximum" one, using 'complete'
would not add any extra information yet, but it leaves more room for
some actually large traces to be dumped later on (some complete
ClientHello dumps for instance).
The SSL libraries like OpenSSL for instance do not seem to actually
provide a public mapping between IANA defined curve IDs and curve names,
or even a mapping between curve IDs and internal NIDs.
This new table regroups all this information in a single place so that
we can convert curve names (be it SECG or NIST format) to curve IDs or
NIDs.
The previously existing 'curves2nid' function now uses the new table,
and a new 'curveid2str' one is added.
Since the following patch, the app_ops layer is responsible for reporting
that the transmitted HTX block was the last one, so that STREAM FIN can be
set. This is mandatory to properly support HTTP 1xx interim responses.
f349df44b4
MINOR: qmux: change API for snd_buf FIN transmission
This change was correctly implemented in HTTP/3 code, however an issue
appeared on hq-interop transcoder in case zero-copy DATA transfer is
performed when HTX buffer is swapped. If this occured during the
transfer of the last HTX block, EOM is not detected and thus STREAM FIN
is never set.
Most of the times, QMUX shut callback is called immediately after. This
results in an emission of a RESET_STREAM to the client, which prevents
the data transfer.
To fix this, use the same method as HTTP/3 : HTX EOM flag status is
checked before any transfer, thus preserving it even after a zero-copy.
The severity of this bug is low as hq-interop is experimental and is mostly
used for interop testing.
This should fix github issue #3038.
This patch must be backported wherever the above one is.
Willy noticed that it was not possible to select extra log origins using
log-steps directive. Extra origins are the ones registered using
log_orig_register() such as http-req.
Reason was the error path was always executed during extra log origin
matching for log-steps parser, while it should only be executed if no
match was found.
It should be backported to 3.1.
Don't use the workaround to load libgcc_s on macOS. It is not needed
there, and it causes issues, as recent macOS dislikes processes that fork
after threads were created (and the workaround creates a temporary
thread). This fixes crashes on macOS at least when using master-worker,
and using the system resolver.
This should fix GitHub issue #3035.
This should be backported up to 2.8.
Not all platforms support thread-cpu bindings, so let's put
cpu_topo_dump_summary() under USE_CPU_AFFINITY guards.
Only needs to be backported if 1cc0e023ce ("MINOR: debug: add thread-cpu
bindings info in 'show dev' output") is backported.
This patch impacts the QUIC frontends. It reverts this patch
MINOR: quic-be: add a "CC connection" backend TX buffer pool
which adds <pool_head_quic_be_cc_buf> new pool to allocate CC (connection closed state)
TX buffers with bigger object size than the one for <pool_head_quic_cc_buf>.
Indeed the QUIC backends must be able to send Initial packets of at least
1200 bytes. From now on, both the QUIC frontends and backends use the same
pool with MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU) (1252 bytes)
as object size.
Align titles style of debug_parse_cli_show_dev() with
cpu_dump_topology(). We will call the latter inside of
debug_parse_cli_show_dev() to show thread-cpu bindings info.
cpu_dump_topology() prints details about each enabled CPU and a summary with
clusters info and thread-cpu bindings. The latter is often useful for
debugging and we want to add it to the 'show dev' output.
So, let's split cpu_dump_topology() in two parts: cpu_topo_debug() to print the
details about each enabled CPU; and cpu_topo_dump_summary() to print only the
summary.
In the next commit we will modify cpu_topo_dump_summary() to write into local
trash buffer and it could be easily called from debug_parse_cli_show_dev().
Exit with an error if multiple output filters (-ic, -srv, -st, -tc, -u*, etc.)
are used at the same time.
halog is designed to process and display output for only one filter at a time.
Using multiple filters simultaneously can cause a crash because the program is
not designed to manage multiple, separate result sets (e.g., one for
IP counts, another for URLs).
Supporting simultaneous filters would require a redesign to collect entries for
each filter in a separate ebtree. This would negatively impact performance and is
not requested for the moment. This patch prevents the crash by checking filter
combinations just after the command line parsing.
This issue was reported in GitHub #3031.
This should be backported in all stable versions.
The "connection close state" TX buffer is used to build the datagram with
basically a CONNECTION_CLOSE frame to notify the peer about the connection
closure. It allows the quic_conn memory release and its replacement by a lighter
quic_cc_conn struct.
For the QUIC backend, there is a dedicated pool to build such datagrams from
bigger TX buffers. But in quic_conn_release(), it was the pool dedicated
to the QUIC frontends that was used to release the QUIC backend TX buffers.
This patch simply adds a test about the target of the connection to release
the "connection close state" TX buffers from the correct pool.
No backport needed.
It's super difficult to find the rules that operate idle conns depending
on their idle/safe/avail/private status. Some are in lists, others not.
Some are in trees, others not. Some have a flag set, others not. This
documents the rules before the definitions in connection-t.h. It could
even be backported to help during backport sessions.
Replace all calls to qc_is_listener() (resp. !qc_is_listener()) by calls to
objt_listener() (resp. objt_server()).
Remove the qc_is_listener() implementation and QUIC_FL_CONN_LISTENER,
the flag it relied on.
When an appctx is initialized, there is a BUG_ON() to be sure the appctx is
really initialized on the right thread to avoid bugs on the thread
affinity. However, it is possible to not choose the thread when the appctx
is created and let it start on any thread. In that case, the thread
affinity is set when the appctx is initialized. So, we must take care not
to trigger the BUG_ON() in that case.
For now, we never hit the bug because the thread affinity is always set
during the appctx creation.
This patch must be backported as far as 2.8.
The bug is a listener-only one, and only occurred on FreeBSD.
The FreeBSD issue has been reported here:
https://forums.freebsd.org/threads/quic-http-3-with-haproxy.98443/
where QUIC traces could reveal that sendmsg() calls lead to EINVAL
syscall errnos.
Such a similar issue could be reproduced from a FreeBSD 14-2 VM
with reg-tests/quic/retry.vtc as reg test.
As noted by Olivier, this issue could be fixed within the VM binding
the listener socket to INADDR_ANY.
That said, the symptoms are not exactly the same as the ones reported by the user.
What could be observed from such a VM is that if the first recvmsg() call
returns the datagram destination address, and if the listener
listening address is bound to a specific address, the calls to
sendmsg() fail because of the IP_SENDSRCADDR ip option value
set by cmsg_set_saddr(). According to the ip(4) FreeBSD manual,
such an IP option must be used if the listening socket is
bound to a specific address. It is to be noted that in a VM
the first call to recvmsg() of the first connection does not return the
datagram destination address. This leads the first quic_conn to be
initialized without a ->local_addr value. This is the value used by the
IP_SENDSRCADDR ip option. In this case, the sendmsg() calls (without
IP_SENDSRCADDR) never fail. The issue appears at the second connection.
This patch replaces the conditions to use IP_SENDSRCADDR to a call to
qc_may_use_saddr(). This latter also checks that the listener listening
address is not INADDR_ANY to allow the use of the source address.
It is generalized to all the OSes. Indeed, there is no reason to set the source
address when the listener is bound to a specific address.
Must be backported as far as 2.8.
On the backend side, the H3 layer is responsible for decoding an HTTP/3
response into an HTX message. Multiple responses may be received on a
single stream, with interim status codes prior to the final one.
h3_resp_headers_to_htx() is the function, used solely on the backend side,
responsible for H3 response to HTX transcoding. This patch extends it to
be able to properly support interim responses. When such a response is
received, the new flag H3_SF_RECV_INTERIM is set. This is converted to
QMUX qcs flag QC_SF_EOI_SUSPENDED.
The objective of this latter flag is to prevent stream EOI to be
reported during stream rcv_buf callback, even if HTX message contains
EOM and is empty. QC_SF_EOI_SUSPENDED will be cleared when the final
response is finally converted, which unblocks stream EOI notification for
the next rcv_buf invocations. Note however that HTX EOM is untouched: it is
always set for both interim and final response reception.
As a minor adjustment, HTX_SL_F_BODYLESS is always set for interim
responses.
Contrary to frontend interim response handling, a flag is necessary on
QMUX layer. This is because H3 to HTX transcoding and rcv_buf callback
are two distinct operations, called under different context (MUX vs
stream tasklet).
Also note that H3 layer has two distinct flags for interim response
handling, one only used as a server (FE side) and the other as a client
(BE side). It was preferred to use two distinct flags, which is
considered less error-prone, contrary to a single unified flag which
would require to always check the proxy side to ensure it is relevant or
not.
No need to backport.
On the frontend side, the HTTP/3 layer is responsible for transcoding an
HTX response message into an HTTP/3 HEADERS frame. This operation is
handled via h3_resp_headers_send().
Prior to this patch, if HTX EOM was encountered in the HTX message after
response transcoding, <fin> was reported to the QMUX layer. This will in
turn cause FIN stream bit to be set when the response is emitted.
However, this is not correct as a single HTX response can be composed
of several interim messages, each delimited by an EOM block.
Most of the time, this bug will cause the client to close the connection
as it is invalid to receive an interim response with FIN bit set.
Fix this by now properly differentiating interim and final responses.
During interim response transcoding, the new flag H3_SF_SENT_INTERIM
will be set, which will prevent <fin> from being reported. Thus, <fin> will
only be notified for the final response.
This must be backported up to 2.6. Note that it relies on the previous
patch which also must be taken.
Previous patches have fixed interim response encoding via
h3_resp_headers_send(). However, it is still necessary to adjust h3
layer state-machine so that several successive HTTP responses are
accepted for a single stream.
Prior to this, QMUX was responsible for deciding that the final HTX
message was encoded so that FIN stream can be emitted. However, with interim
response, MUX is in fact unable to properly determine this. As such,
this is the responsibility of the application protocol layer. To reflect
this, app_ops snd_buf callback is modified so that a new output argument
<fin> is added to it.
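The adjusted callback could look roughly like this (a sketch; the exact
types and argument order are assumptions):

    /* sketch: snd_buf now reports <fin> back to the MUX itself */
    size_t (*snd_buf)(struct qcs *qcs, struct buffer *buf,
                      size_t count, char *fin);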
Note that for now this commit does not bring any functional change.
However, it will be necessary for the following patch. As such, it
should be backported prior to it to every version as necessary.
On the frontend side, the H3 layer transcodes the HTX status code into
an HTTP/3 HEADERS frame. This is done by calling
qpack_encode_int_status().
Prior to this patch, the latter function was also responsible for
rejecting an invalid value, which guaranteed that only valid codes are
encoded (values between 100 and 999). However, this is not practical as
it is impossible to differentiate an invalid code error from buffer room
exhaustion.
Change this so that the HTTP/3 layer now first ensures that the HTX code
is valid. The stream is closed with H3_INTERNAL_ERROR if an invalid
value is present. Thus, qpack_encode_int_status() will only report an
error due to buffer room exhaustion. If a small buffer is used, a
standard buffer will be reallocated, which should be sufficient to
encode the response.
The impact of this bug is minimal. Its main benefit is code clarity,
while also removing an unnecessary realloc when confronted with an
invalid HTTP code.
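For illustration, the new check on the H3 side may be sketched as
follows (the error-reporting helper is hypothetical):

    /* sketch: reject an invalid status before QPACK encoding */
    if (sl->info.res.status < 100 || sl->info.res.status > 999) {
        h3_report_error(h3s, H3_INTERNAL_ERROR); /* hypothetical */
        return -1;
    }
    /* past this point qpack_encode_int_status() may only fail on
     * buffer room exhaustion
     */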
This should be backported at least up to 3.1. Prior to it, smallbuf
mechanism isn't present, hence the impact of this patch is less
important. However, it may still be backported to older versions, which
should facilitate picking patches for HTTP 1xx interim response support.
The previous commit fixes encoding of several successive HTTP response
messages when interim status codes are first reported. However,
h3_resp_headers_send() was still unable to interrupt encoding if output
buffer room was not sufficient. This case is likely because small
buffers are used for headers encoding.
This commit fixes this situation. If the output buffer is not empty
prior to response encoding, this means that a previous interim response
message was already encoded before. In this case, and if the remaining
space is not sufficient, use the buffer release mechanism: this allows
response encoding to restart with a newer buffer. This process is
already used for DATA and trailers encoding.
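Schematically (a sketch with an assumed release helper, mirroring the
DATA/trailers path):

    /* sketch: a prior interim response already sits in the buffer
     * and the remaining room is too small: release and retry with
     * a fresh buffer
     */
    if (b_data(res) && b_room(res) < needed) {
        qcc_release_stream_txbuf(qcs); /* assumed release helper */
        goto retry;
    }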
This must be backported up to 2.6. However, note that buffer release
mechanism is not present for version 2.8 and lower. In this case, qcs
flag QC_SF_BLK_MROOM should be enough as a replacement.
An HTTP response may contain several interim response messages (1xx
status) prior to a final response message (all other status codes). This
may cause issues with h3_resp_headers_send(), called for response
encoding, which assumes that it is only called one time per stream, most
notably during output buffer handling.
This commit fixes output buffer handling when h3_resp_headers_send() is
called multiple times due to an interim response. Prior to it, the
interim response was overwritten by the newer response message. Most of
the time, this resulted in an error for the client due to QPACK decoding
failure. This is now fixed so that each response is encoded one after
the other.
Note that if the encoding of several responses is bigger than the output
buffer, an error is reported. This can definitely occur as small buffers
are used during header encoding. This situation will be improved by the
next patch.
This must be backported up to 2.6.
The goal is to help figure out the OS version (kernel and userland),
any virtualization/containers, and the haproxy version and build
features. Sometimes even reporters themselves can be mistaken about the
running version or environment. Also, printing this at the top helps
draw a visual delimitation between warnings and panic. Now we get
something like this:
PANIC! Thread 1 is about to kill the process.
HAProxy info:
version: 3.3-dev3-c863c0-18
features: +51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY (...)
Operating system info:
virtual machine: no
container: no
kernel: Linux 6.1.131 #1 SMP PREEMPT_DYNAMIC Fri Mar 14 01:04:55 CET 2025 x86_64
userland: Slackware 15.0 x86_64
* Thread 1 : id=0x7f615a8775c0 act=1 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
1/1 stuck=0 prof=0 harmless=0 isolated=0
cpu_ns: poll=1835010197 now=1835066102 diff=55905
(...)
For listen/frontend/backend, we now want to be able to clean up the
default-server directive that's no longer used past the end of the
section. For this we register a post-section function and perform the
cleanup there.
The default-server entry used to always be allocated. Now we postpone
its allocation until the first time we need it, i.e. during a
"default-server" directive, or when inheriting a defaults section which
has one. The memory savings are significant: on a large configuration
with 100k backends and no default-server directive, the memory usage
dropped from 800MB RSS to 420MB (380 MB saved). It should be possible to
also address configs using default-server by releasing this entry when
leaving the proxy section, which is not done yet.
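A sketch of the lazy allocation (the init helper name is an
assumption):

    /* sketch: allocate the default-server on first use only */
    if (!curproxy->defsrv) {
        curproxy->defsrv = calloc(1, sizeof(*curproxy->defsrv));
        if (!curproxy->defsrv)
            goto out_of_memory;
        srv_settings_init(curproxy->defsrv); /* hypothetical init */
    }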
Now we only copy the default server's settings if such a default server
exists, otherwise we only initialize it. At the moment it always exists.
The change is mostly performed in srv_settings_cpy() since that's where
each caller passes through, and there's no point duplicating that test
everywhere.
The server struct has grown huge over time (~3.8kB), and having a copy
of it in the defsrv field of the struct proxy costs a lot of RAM that
is not needed anymore at run time.
This patch replaces this struct with a dynamically allocated one. The
field is allocated and initialized during alloc_new_proxy() and is
freed when the proxy is destroyed for now. But the goal will be to
support freeing it after parsing the section.
The test in srv_ssl_settings_cpy() comparing src to the server's
proxy's default server does work, but it's a subtle trap. Indeed, no
check is made on srv->proxy being valid, and this only works because the
compiler is comparing pointer offsets. During boot, it's common to have
NULL here in srv->proxy, and of course in this case src does not match
that value which is NULL plus epsilon. But when trying to turn defsrv
into a dynamic pointer instead, the compiler is forced to dereference
this NULL srv->proxy and dies during init.
Let's always add the NULL check for srv->proxy before the test to avoid
this situation.
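The guarded test then becomes something like this (a sketch):

    /* sketch: don't dereference srv->proxy while it is still NULL */
    if (srv->proxy && src == srv->proxy->defsrv) {
        /* copying from our own proxy's default server */
    }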
No backport is needed since the problem cannot happen yet.
At a few places we're seeing some open-coding of the same function, likely
because it looks overkill for what it's supposed to do, due to extraneous
tests that are not needed (e.g. check of the backend's PR_CAP_BE etc).
Let's just remove all these superfluous tests and inline it so that it
feels more suitable for use everywhere it's needed.
The server lookup in sticking_rule_find_target() uses an open-coded
tree search while we have a function for this, server_find_by_id(). In
addition, due to the way it's coded, the stick-table lock also covers
the server lookup by accident instead of being released earlier. This is
not a real problem though since such a feature is rarely used nowadays.
Let's clean all this up by first retrieving the ID under the lock and
then looking up the corresponding server.
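Roughly (a sketch, assuming the usual lock labels and accessors):

    /* sketch: read the server ID under the lock, resolve it after */
    HA_RWLOCK_RDLOCK(STK_SESS_LOCK, &ts->lock);
    id = stktable_data_cast(ptr, std_t_sint);
    HA_RWLOCK_RDUNLOCK(STK_SESS_LOCK, &ts->lock);
    srv = server_find_by_id(px, id);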
The code used to detect proxy id conflicts uses an open-coded lookup
in the ID tree which is not necessary since we already have functions
for this. Let's switch to that instead.
This function doesn't just look at the name but also at the ID when the
argument starts with a '#'. So the name is not correct, which explains
why this function is not always used when only the name is needed,
and why the list-based findserver() is used instead. So let's just
call the function "server_find()", and rename its generation-id based
cousin "server_find_unique()".
Let's just use the tree-based lookup instead of walking through the list.
This function is used to find duplicates in "track" statements and a few
such places, so it's important not to waste too much time on large setups.
findserver() used to check for duplicate server names. These are no
longer accepted in 3.3 so let's get rid of that test and simplify the
code. Note that the function still only uses the list instead of the
tree.
Released version 3.3-dev3 with the following main changes :
- BUG/MINOR: quic-be: Wrong retry_source_connection_id check
- MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
- MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
- MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
- MINOR: mailers: warn if mailers are configured but not actually used
- BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
- MEDIUM: server: add and use a separate last_change variable for internal use
- MEDIUM: proxy: add and use a separate last_change variable for internal use
- MINOR: counters: rename last_change counter to last_state_change
- MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
- BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
- BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
- BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
- DOC: Fix 'jwt_verify' converter doc
- MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
- MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
- MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
- MINOR: ssl: Allow 'commit ssl cert' with no privkey
- MINOR: ssl: Prevent delete on certificate used by jwt_verify
- REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
- REGTESTS: jwt: Test update of certificate used in jwt_verify
- DOC: 'jwt_verify' converter now supports certificates
- REGTESTS: restrict execution to a single thread group
- MINOR: ssl: Introduce new smp_client_hello_parse() function
- MEDIUM: stats: add persistent state to typed output format
- BUG/MINOR: httpclient: wrongly named httpproxy flag
- MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
- MEDIUM: httpclient: split the CLI from the actual httpclient API
- MEDIUM: httpclient: implement a way to use directly htx data
- MINOR: httpclient/cli: add --htx option
- BUILD: dev/phash: remove the accidentally committed a.out file
- BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
- BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
- DOC: deviceatlas build clarifications
- BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
- MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
- BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
- BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
- BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
- MEDIUM: httpclient: add a Content-Length when the payload is known
- CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
- MINOR: pattern: add a counter of added/freed patterns
- CI: set DEBUG_STRICT=2 for coverity scan
- CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
- CI: github: add an OpenSSL 3.5.0 job
- CI: github: update the stable CI to ubuntu-24.04
- BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
- CI: github: update to OpenSSL 3.5.1
- BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
- BUG/MINOR: quic-be: Malformed coalesced Initial packets
- MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
- MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
- MINOR: quic-be: Set the backend alpn if not set by conf
- MINOR: quic-be: TLS version restriction to 1.3
- MINOR: cfgparse: enforce QUIC MUX compat on server line
- MINOR: server: support QUIC for dynamic servers
- CI: github: skip a ssl library version when latest is already in the list
- MEDIUM: resolvers: switch dns-accept-family to "auto" by default
- BUG/MINOR: resolvers: don't lower the case of binary DNS format
- MINOR: resolvers: do not duplicate the hostname_dn field
- MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
- BUG/MINOR: listener: really assign distinct IDs to shards
- MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
- BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
- REGTESTS: use two haproxy instances to distinguish the QUIC traces
- BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
- BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
- BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
- BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
- BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
- BUG/MINOR: http-client: Reject any 101-switching-protocols response
- BUG/MEDIUM: http-client: Drain the request if an early response is received
- BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
- BUG/MINOR: h3: fix https scheme request encoding for BE side
- MINOR: h1-htx: Add function to format an HTX message in its H1 representation
- BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
- BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
- MINOR: debug: add distro name and version in postmortem
Since 2012, systemd-compliant distributions contain the
/etc/os-release file. This file has a standardized format, see details
at https://www.freedesktop.org/software/systemd/man/latest/os-release.html.
Let's read it in feed_post_mortem_linux() to gather more info about the
distribution.
(cherry picked from commit f1594c41368baf8f60737b229e4359fa7e1289a9)
Signed-off-by: Willy Tarreau <w@1wt.eu>
The regression was introduced by commit 187ae28 ("MINOR: h1-htx: Add
function to format an HTX message in its H1 representation"). We must
make sure the flags variable is initialized in the h1_format_htx_msg()
function.
This patch must be backported with the commit above.
The H1 multiplexer is able to produce some errors on its own to report
early failures, before the stream is created. In that case, the error
files of the proxy were tested to detect empty files (or /dev/null) but
they were not used to produce the error itself.
But the documentation states that configured error files are used in all
cases. And in fact, it is not really a problem to use these files. We must
just format a full HTX message. Thanks to the previous patch, it is now
possible.
This patch should fix the issue #3032. It should be backported to 3.2. For
older versions, it must be discussed but it should be quite easy to do.
The function h1_format_htx_msg() can now be used to convert a valid HTX
message into its H1 representation. No validity test is performed; the
HTX message must be valid. Only trailers are silently ignored if the
message is not chunked. In addition, the destination buffer must be
empty. 1XX interim responses should be supported. But again, there are
no validity tests.
An HTTP/3 request must contain the :scheme pseudo-header. Currently,
only the "https" value is expected due to the QUIC transport layer in
use.
However, the https value is incorrectly encoded due to a QPACK index
value mismatch in qpack_encode_scheme(). Fix it to ensure that the
scheme is now properly set for HTTP/3 requests on the backend side.
No need to backport this.
When we leave the I/O handler with an unfinished request, we must
report that the applet has more data to deliver. Otherwise, when the
channel request buffer is emptied, the http-client applet is not always
woken up to forward the remaining request data.
This issue was probably revealed by commit "BUG/MEDIUM: http-client: Don't
wake http-client applet if nothing was xferred". It is only an issue with
large POSTs, when the payload is streamed.
This patch must be backported as far as 2.6 with the commit above. But on
older versions, the applet API may differ. So be careful.
When a large request is sent, it is possible to get a response before
the end of the request. It is valid from an HTTP perspective but it is
an issue with the current design of the http-client. Indeed, the request
and the response are handled sequentially. So the response will be
blocked, waiting for the end of the request. Most of the time, it is not
an issue, except when the request transfer is blocked. In that case, the
applet is blocked.
With the current API, it is not possible to handle an early response and
continue the request transfer. So this case cannot be handled. It seems
reasonable to instead drain the request if a response is received. This
way, the request transfer, from the caller's point of view, is never
blocked and the response can be properly processed.
To do so, the action flag HTTPCLIENT_FA_DRAIN_REQ is added to the
http-client. When it is set, the request payload is just dropped. In
that case, we take care not to report the end of input, to properly
report that the request was truncated, especially in logs.
It is only an issue with large POSTs, when the payload is streamed.
This patch must be backported as far as 2.6.
Protocol upgrades are not supported by the http-client. So report an
error if a 101-switching-protocols response is received. Of course, it
is unexpected because the API is not designed to support upgrades. But
it is better to properly handle this case.
This patch could be backported as far as 2.6. It depends on the commit
"BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode".
When the response is re-formatted as a raw message, the 1XX interim
responses must be skipped. Otherwise, the information of the first
interim response will be saved (status line and headers) and that of
the final response will be dropped.
Note that for now, in HTX-mode, the interim messages are removed.
This patch must be backported as far as 2.6.
When the htx_to_buf() function is called, if the HTX message is empty,
the buffer is reset. So HTX flags must not be tested afterwards because
the info may be lost.
So now, we take care to test the HTX_FL_EOM flag before calling
htx_to_buf().
This patch must be backported as far as 2.8.
When the request payload cannot be xferred to the channel because its
buffer is full, we must request more room by calling sc_need_room(). It
is important to make sure the httpclient applet will not be woken up in
a loop to push more data while it is not possible.
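Roughly (a sketch; the exact room computation is an assumption):

    /* sketch: the channel buffer is full, ask to be woken up only
     * once some room is available instead of spinning
     */
    if (channel_full(chn, 0)) {
        sc_need_room(sc, 0);
        goto out;
    }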
It is only an issue with large POSTs, when the payload is streamed.
This patch must be backported as far as 2.6. Note that on 2.6,
sc_need_room() only takes one argument.
When HTX blocks from the request are transferred into the channel
buffer, the return value of the htx_xfer_blks() function must not be
used to increment the channel input value, because metadata are counted
there while they are not part of input data. Because of this bug, it is
possible to forward more data than what is present in the channel
buffer.
Instead, we look at the input data before and after the transfer, and
the difference is added.
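Schematically (a sketch; the source HTX variable name is an
assumption):

    /* sketch: only the growth of the destination HTX counts as
     * channel input, not the HTX metadata accounted by the return
     * value of htx_xfer_blks()
     */
    uint32_t data_before = htx->data;
    htx_xfer_blks(htx, hc_htx, count, HTX_BLK_UNUSED);
    channel_add_input(chn, htx->data - data_before);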
It is only an issue with large POSTs, when the payload is streamed.
This patch must be backported as far as 2.6.
When data are transferred to or from the http-client, the applet is
systematically woken up, even when no data are transferred. This could
lead to needless wakeups. When called from a lua script, if data are
blocked for a while, this leads to a wakeup ping-pong loop where the
http-client applet is woken up by the lua script, which wakes back the
script.
To fix the issue, in httpclient_req_xfer() and httpclient_res_xfer()
functions, we now take care to not wake the http-client applet up when no
data are transferred.
This patch must be backported as far as 2.6.
The aim of this patch is to distinguish the QUIC traces between the QUIC
frontend and backend parts. Two haproxy instances are created. The
c(1|2) http clients connect to ha1 with TCP frontends and QUIC backends.
ha2 embeds two QUIC listeners with s1 as TCP backend. When the traces
are activated, they are dumped to stderr. Helpfully, they are prefixed
by the haproxy instance name (h1 or h2). This is very useful to identify
the QUIC instances.
Revert this patch, which is no longer useful since OpenSSL 3.5.1, to
remove the QUIC server callback restoration after SSL context switch:
MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
It was required for 3.5.0. That said, there was no CI for OpenSSL 3.5 at
the date of this commit. The CI recently revealed that the QUIC server
side could crash during QUIC reg tests just after having restored the
callbacks as implemented by the commit above.
Also revert this commit, which is no longer useful because it arrived
with the commit above:
BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
Must be backported to 3.2.
The QUIC listener part was impacted by the OpenSSL 3.5.0 new QUIC API,
with several issues which have been fixed by 3.5.1.
Add a #error to prevent the use of the OpenSSL 3.5 new QUIC API with
versions below 3.5.1.
Must be backported to 3.2.
A fix was made in 3.0 for the case where sharded listeners were using
the same ID, with commit 0db8b6034d ("BUG/MINOR: listener: always assign
distinct IDs to shards"). However, the fix is incorrect. By checking the
ID of the temporary node instead of the kept one in
bind_complete_thread_setup(), it ends up never inserting the used nodes
at this point, thus not reserving them. The side effect is that
assigning too close IDs to subsequent listeners results in the same ID
still being assigned twice, since it was not reserved. Example:
    global
        nbthread 20

    frontend foo
        bind :8000 shards by-thread id 10
        bind :8010 shards by-thread id 20
The first one will start a series from 10 to 29 and the second one a
series from 20 to 39. But since 20 is not inserted when creating the
shards, it remains available for the post-parsing phase that assigns all
unassigned IDs by filling holes, and two listeners end up with ID 20.
By checking the correct node, the problem disappears. The patch above
was marked for backporting to 2.6, so this fix should be backported that
far as well.
"HAVE_TCP_MD5SIG" feature is now registered if TCP MD5 signature is
supported. This will help the feature detection in the reg-test script
dedicated to this feature.
The hostdn.key field in the server contains a pure copy of hostname_dn
since commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers
and srv request or used SRV record"), which wanted to lowercase it.
Since it's not necessary, let's drop this useless copy. In addition, the
return value of strdup() was not tested, so it could theoretically crash
the process under heavy memory contention.
The server's "hostname_dn" is in Domain Name format, not a pure string, as
converted by resolv_str_to_dn_label(). It is made of lower-case string
components delimited by binary lengths, e.g. <0x03>www<0x07>haproxy<0x03)org.
As such it must not be lowercased again in srv_state_srv_update(), because
1) it's useless on the name components since already done, and 2) because
it would replace component lengths 97 and above by 32-char shorter ones.
Granted, not many domain names have that large components so the risk is
very low but the operation is always wrong anyway. This was brought in
2.5 by commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers
and srv request or used SRV record").
In the same vein, let's fix the confusing strcasecmp() that are applied
to this binary format, and use memcmp() instead. Here there's basically
no risk to incorrectly match the wrong record, but that test alone is
confusing enough to provoke the existence of the bug above.
Finally let's update the component for that field to mention that it's
in this format and already lower cased.
Better not backport this, the risk of facing this bug is almost zero, and
every time we touch such files something breaks for bad reasons.
Skip the job for the "latest" libssl version when this version is the
same as one already in the list.
This avoids having 2 jobs for OpenSSL 3.5.1, since no new dev version is
available for now and 3.5.1 is already in the list.
To properly support QUIC for dynamic servers, it is required to extend
the "add server" CLI handler:
* ensure conformity between the server address and proto
* automatically set proto to QUIC if not specified
* the prepare_srv callback must be called to initialize the required
  SSL context
Prior to this patch, crashes could occur when trying to use QUIC with
dynamic servers.
Also, destroy_srv callback must be called when a dynamic server is
deallocated. This ensures that there is no memory leak due to SSL
context.
No need to backport.
Add post-parsing checks to control server line conformity regarding
QUIC, both on the server address and the MUX protocol. An error is
reported in the following cases:
* proto quic is explicitly specified but the server address does not
  use the quic4/quic6 prefix
* another proto is explicitly specified but the server address uses
  the quic4/quic6 prefix
This patch skips the TLS version settings. Setting them has the side
effect of adding all the TLS version extensions to the ClientHello
message (TLS 1.0 to TLS 1.3). QUIC supports only TLS 1.3.
First simple VTC file for QUIC reg tests. Two listeners are configured,
one with Retry enabled and the other without. Two clients simply try to
connect to these listeners to make a basic H3 request.
Make the server line parsing fail when a QUIC backend is configured if
haproxy is built to use the OpenSSL stack compatibility module. The
latter does not support the QUIC client part.
This bug fix completes this patch, which was not sufficient:
MINOR: quic-be: Allow sending 1200 bytes Initial datagrams
That patch did not allow the build of well-formed Initial packets
coalesced with other (Handshake) packets. Indeed, the <padding>
parameter passed to qc_build_pkt() is deduced from a first value which
must be set to 1 for the last encryption level. As a client, the last
encryption level is always the Handshake encryption level. But
<padding> was always set to 1 for a QUIC client, leading the first
Initial packet to be malformed because it was considered as the second
one in the same datagram.
So, this patch sets the <padding> value passed to qc_build_pkt() to 1
only when there is no last encryption level at all, to allow the build
of Initial-only packets (not coalesced), or when there are frames to
send (coalesced packets).
No need to backport.
This bug impacts both QUIC backends and frontends with OpenSSL 3.5 as
QUIC API.
The connections to a haproxy QUIC listener from a haproxy QUIC backend
could not work at all: HelloRetryRequest TLS messages were emitted
asking the QUIC client to restart the handshake, followed by TLS
alerts:
conn. @(nil) OpenSSL error[0xa000098] read_state_machine: excessive message size
Furthermore, the Initial CRYPTO data sent by the client were big (about
two 1252-byte packets) (ClientHello TLS message). After analyzing the
packets, a key_share extension with <unknown> as value was long (more
than 1KB). This extension is related to the groups but does not belong
to the groups supported by QUIC.
That said, such connections could work with ngtcp2 as backend, built
against the same OSSL TLS stack API, but with a HelloRetryRequest.
ngtcp2 always sets the QUIC default cipher suites and groups, for all
the stacks it supports, as implemented by this patch.
So this patch configures both QUIC backend and frontend cipher suites
and groups, calling SSL_CTX_set_ciphersuites() and
SSL_CTX_set1_groups_list() with the correct argument, except for
SSL_CTX_set1_groups_list() which fails with QUIC TLS for an unknown
reason at this time.
The call to SSL_CTX_set_options() is useless from ssl_quic_initial_ctx()
for QUIC clients. One relies on ssl_sock_prepare_srv_ssl_ctx() to set
the options from now on.
This patch is effective for all the supported stacks without impact for
AWS-LC and QUIC TLS, and fixes the connections for haproxy QUIC
frontends and backends when built against the OpenSSL 3.5 QUIC API.
A new define HAVE_OPENSSL_QUICTLS has been added to openssl-compat.h to
distinguish the QUIC TLS stack.
Must be backported to 3.2.
This bug arrived with this commit:
MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
To make QUIC connections succeed with the OpenSSL 3.5 API, a call to
quic_ssl_set_tls_cbs() was needed from several callbacks which call
SSL_set_SSL_CTX(). This has the side effect of setting the QUIC
callbacks used by the OpenSSL 3.5 API.
But quic_ssl_set_tls_cbs() was also called for TCP sessions, leading the
SSL stack to run QUIC code if QUIC support is enabled.
To fix this, simply ignore the TCP connections by inspecting the
<ssl_qc_app_data_index> index value, which is NULL for such
connections.
Must be backported to 3.2.
OpenSSL 3.5.0 introduced experimental support for QUIC. This change
enables the use_quic option when a compatible version of OpenSSL is
detected, allowing QUIC-based functionality to be leveraged where
applicable. The feature remains disabled for earlier versions to ensure
compatibility.
Patterns are allocated when loading maps/acls from a file or dynamically
via the CLI, and are released only from the CLI (e.g. "clear map xxx").
They do not use pools and are much harder to monitor, e.g. in case a
script adds many and forgets to clear them.
Let's add a new pair of metrics, "PatternsAdded" and "PatternsFreed",
that report the number of added and freed patterns respectively. This
allows simply graphing both. The difference between the two normally
represents the number of allocated patterns. If Added grows without
Freed following, it can indicate a faulty script that doesn't perform
the needed cleanup. The metrics are also made available to Prometheus
as patterns_added_total and patterns_freed_total respectively.
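For example, they can be watched from the stats socket (an illustrative
session; the socket path and values are made up):

    $ echo "show info" | socat stdio /var/run/haproxy.sock | \
        grep -E 'Patterns(Added|Freed)'
    PatternsAdded: 1024
    PatternsFreed: 512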
This introduces a change of behavior in the httpclient API. When
generating a request with a payload buffer, the size of the payload
buffer is known and does not need to be streamed in chunks.
This patch forces the payload buffer to be sent using a Content-Length
header in the request; however, the behavior does not change if a
callback is used instead of a buffer.
When the "pause" action is parsed, if an expression is used instead of a
static value, the position of the current argument after the expression
evaluation is incremented while it should not. The sample_parse_expr()
function already take care of it. However, it should still be incremented
when an time value was parsed.
This patch must be backported to 3.2.
When the TCP MD5 signature is enabled, on a listening socket or an outgoing
one, the tcp_md5sig structure must be initialized first.
It is a 3.3-specific issue. No backport needed.
This patch adds support for RFC 2385 (Protection of BGP Sessions via
the TCP MD5 Signature Option) for listeners and servers. The feature is
only available on Linux. Keywords are not exposed otherwise.
By setting "tcp-md5sig <password>" option on a bind line, TCP segments of
all connections instantiated from the listening socket will be signed with a
16-byte MD5 digest. The same option can be set on a server line to protect
outgoing connections to the corresponding server.
The primary use case for this option is to allow BGP to protect itself
against the introduction of spoofed TCP segments into the connection
stream. But it can be useful for any very long-lived TCP connections.
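An illustrative configuration (addresses and password are made up):

    listen bgp-relay
        mode tcp
        bind :179 tcp-md5sig s3cr3t
        server peer1 192.0.2.1:179 tcp-md5sig s3cr3t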
A reg-test was added; it will be executed only on Linux. All other
targets are excluded.
Since patch 20718f40b6 ("MEDIUM: ssl/ckch: add filename and linenum
argument to crt-store parsing"), the definition of ocsp_update_init()
and its declaration do not share the same arguments.
Must be backported to 3.2.
We are reusing DEVICEATLAS_INC/DEVICEATLAS_LIB when the DeviceAtlas
library has been compiled and installed with the cmake and make install
targets. This works fine except when ldconfig is unaware of the path,
thus the need to add cflags/ldflags into the mix.
Ideally, to be backported down to the lowest stable branch.
TRACE_ENTER is crashing in ssl_sock_io_cb() in case an idle connection
is being stolen. Indeed, the function could be called with a NULL
context, and dereferencing it will crash.
This patch fixes the issue by initializing ctx only once it is usable,
and moving TRACE_ENTER after the initialization.
This must be backported to 3.2.
Commit 41f28b3c53 ("DEV: phash: Update 414 and 431 status codes to phash")
accidentally committed a.out, resulting in build/checkout issues when
locally rebuilt. Let's drop it.
This should be backported to 3.1.
Use the new HTTPCLIENT_O_RES_HTX flag when using the CLI httpclient
with --htx.
It allows processing the response directly in HTX; the htx_dump()
function is then used to display a debug output.
Example:
echo "httpclient --htx GET https://haproxy.org" | socat /tmp/haproxy.sock
htx=0x79fd72a2e200(size=16336,data=139,used=6,wrap=NO,flags=0x00000010,extra=0,first=0,head=0,tail=5,tail_addr=139,head_addr=0,end_addr=0)
[0] type=HTX_BLK_RES_SL - size=31 - addr=0 HTTP/2.0 301
[1] type=HTX_BLK_HDR - size=15 - addr=31 content-length: 0
[2] type=HTX_BLK_HDR - size=32 - addr=46 location: https://www.haproxy.org/
[3] type=HTX_BLK_HDR - size=25 - addr=78 alt-svc: h3=":443"; ma=3600
[4] type=HTX_BLK_HDR - size=35 - addr=103 set-cookie: served=2:TLSv1.3+TCP:IPv4
[5] type=HTX_BLK_EOH - size=1 - addr=138 <empty>
Add a HTTPCLIENT_O_RES_HTX flag which allows storing the HTX data
directly in the response buffer instead of extracting the data in raw
format.
This is useful when the data needs to be reused in another request.
This patch splits the httpclient code to prevent confusion between the
httpclient CLI command and the actual httpclient API.
Indeed, there was confusion between the flags used internally by the
CLI command and the actual httpclient API.
hc_cli_* functions as well as HC_C_F_* defines were moved to
httpclient_cli.c.
The ocsp-update uses the flags from the httpclient CLI, which are not
supposed to be used elsewhere since this is a state for the CLI.
This patch implements HC_OCSP flags for the ocsp-update.
The HC_F_HTTPPROXY flag was wrongly named and did not use the correct
value; indeed, this flag was meant to be used for the httpclient API,
not the httpclient CLI.
This patch fixes the problem by introducing HTTPCLIENT_FO_HTTPPROXY,
which must be set in hc->flags.
Also add a member 'options' in the httpclient structure, because the
member flags is reinitialized when starting.
Must be backported as far as 3.0.
Add a fourth character to the second column of the "typed output
format" to indicate whether the value results from a volatile or a
persistent metric ('V' or 'P' characters respectively). A persistent
metric means the value could possibly be preserved across reloads by
leveraging shared memory between multiple co-processes. Such metrics are
identified as "shared" in the code (since they are possibly shared
between multiple co-processes).
Some reg-tests were updated to take that change into account; also, some
outputs in the configuration manual were updated to reflect the current
behavior.
In this patch we introduce a new helper function called
smp_client_hello_parse() to extract information presented in a TLS
ClientHello handshake message. 7 sample fetches have also been modified
to use this helper function to do the common ClientHello parsing and use
the result for further processing of extensions/ciphers.
Fixes: #2532
When threads are enabled and running on a machine with multiple CCX
or multiple nodes, thread groups are now enabled since 3.3-dev2, causing
load-balancing algorithms to randomly fail due to incoming connections
spreading over multiple groups and using different load balancing indexes.
Let's just force "thread-groups 1" into all configs when threads are
enabled to avoid this.
Using certificates in the jwt_verify converter allows making use of the
CLI certificate updates, which is still impossible with public keys (the
legacy behavior).
The jwt_verify converter can now take public certificates as second
parameter, either with an actual certificate path (not previously
mentioned), from a predefined crt-store, or from a variable.
A ckch_store used in JWT verification might not have any ckch instances
or crt-list entries linked, but we don't want it to be removable via the
CLI anyway, since that would make all future jwt_verify calls using this
certificate fail.
The ckch_stores might be used to store public certificates only, so in
this case we won't provide private keys when updating the certificate
via the CLI.
If the ckch_store is actually used in a bind or server line, an error
will still be raised if the private key is missing.
The 'jwt_verify' converter could only be passed public keys as second
parameter instead of full-on public certificates. This patch allows
proper certificates to be used.
Those certificates can be loaded in ckch_stores like any other
certificate, which means that all the certificate-related operations
that can be made via the CLI can now benefit JWT validation as well.
We now have two ways JWT validation can work: the legacy one, which only
relies on public keys, which could not be stored in ckch_stores without
some in-depth changes in the way the ckch_stores are built. In this
legacy way, the public keys are fully stored in a cache dedicated to JWT
only, which does not have any CLI commands or any way to update them
during runtime. It also requires that all the public keys used are
passed at least once explicitly to the 'jwt_verify' converter so that
they can be loaded during init.
The new way uses actual certificates, either already stored in the
ckch_store tree (if predefined in a crt-store or already used previously
in the configuration) or loaded in the ckch_store tree during init if
they are explicitly used in the configuration like so:
var(txn.bearer),jwt_verify(txn.jwt_alg,"cert.pem")
When using a variable (or any other way that can only be resolved during
runtime) in place of the converter's <key> parameter, the first time we
encounter a new value (for which we don't have any entry in the jwt
tree) we will lock the ckch_store tree and try to perform a lookup in
it. If the lookup fails, an entry will still be inserted into the jwt
tree so that any following call with this value avoids performing the
ckch_store tree lookup.
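In pseudo-form, the runtime lookup flow could be sketched as follows
(the jwt tree helpers are hypothetical names):

    /* sketch: resolve a runtime key value once, then cache it */
    entry = jwt_tree_lookup(key);            /* hypothetical */
    if (!entry) {
        /* under the ckch lock */
        store = ckchs_lookup(key);
        entry = jwt_tree_insert(key, store); /* cached even if NULL */
    }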
Contrary to what the doc says, the jwt_verify converter only works with
a public key and not a full certificate for certificate-based algorithms
(everything but HMAC).
This patch should be backported up to 2.8.
When resolving variable values, the temporary trash chunks are used, so
when calling the 'jwt_verify' converter with two variable parameters
like in the following line, the input would be overwritten by the value
of the second parameter:
var(txn.bearer),jwt_verify(txn.jwt_alg,txn.cert)
Copying the values into dedicated allocated buffers prevents any new
call to get_trash_chunk() from erasing the data we need in the
converter.
This patch can be backported up to 2.8.
On the backend side, an error at connection level during preface
sending was not properly handled and could lead to a spinning loop on
process_stream() when the h2 stream on the client side was blocked, for
instance because of h2 flow control.
It appeared that no transition was performed from the PREFACE state to
an ERROR state on the H2 connection when an error occurred on the
underlying connection. In that case, the H2 connection was woken up in a
loop to try to receive data, waking up the upper stream at the same
time.
To fix the issue, an H2C error must be reported. Most state transitions
are handled by the demux function, so it is the right place to do so.
First, in PREFACE state and on the server side, if an error occurred on
the TCP connection, an error is now reported on the H2 connection. The
REFUSED_STREAM error code is used in that case. In addition, we also
take care to properly handle the connection shutdown.
This patch should fix the issue #3020. It must be backported to all stable
versions.
It was already forbidden to use HTTP sample fetch functions from lua
services. An error is triggered if it happens. However, the error must
be extended to any L6/L7 sample fetch functions.
Indeed, a lua service is an applet. It is totally unexpected for an
applet to access input data in a channel's buffer. These data have not
been analyzed yet and are still subject to change. An applet, lua or
not, must never access "not forwarded" data. Only output data are
available. For now, if a lua applet relies on any L6/L7 sample fetch
functions, the behavior is undefined and inconsistent.
So to fix the issue, the hlua flag HLUA_F_MAY_USE_HTTP is renamed to
HLUA_F_MAY_USE_CHANNELS_DATA. This flag is used to prevent any lua
applet from using L6/L7 sample fetch functions.
This patch could be backported to all stable versions.
Patch ed9b8fec49 ("BUG/MEDIUM: ssl: AWS-LC + TLSv1.3 won't do ECDSA in
RSA+ECDSA configuration") partly fixed a cipher selection problem with
AWS-LC. However, it no longer checked whether the ciphersuites were
available in haproxy, which is still a problem.
The problem was fixed in AWS-LC 1.46.0 with this PR:
https://github.com/aws/aws-lc/pull/2092.
This patch allows filtering the TLS13 ciphersuites again with recent
versions of AWS-LC. However, since there are no macros to check the
AWS-LC version, it is enabled at the next AWS-LC API version change
following the fix, in AWS-LC v1.50.0.
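The guard could be sketched as follows (the exact API version number is
an assumption, not necessarily the value used in the tree):

    /* sketch: no AWS-LC version macro exists, so gate on the API
     * version that was bumped after the upstream fix
     */
    #if defined(OPENSSL_IS_AWSLC) && AWSLC_API_VERSION >= 35
        /* TLS 1.3 ciphersuites may be filtered again */
    #endif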
This could be backported where ed9b8fec49 was backported.
Since the proxy and server structs already have an internal last_change
variable and we cannot merge it with the shared counter one, let's
rename the last_change counter to be more specific and prevent mixups
between the two.
The last_change counter is renamed to last_state_change, and unlike the
internal last_change, this one is a shared counter, so it is expected
to be updated by other processes behind our back.
However, when updating the last_state_change counter, we use the value
of the server/proxy last_change as the reference value.
Same motivation as the previous commit: proxy last_change is "abused"
because it is used for 2 different purposes, one for stats, and the
other one for process-local internal use.
Let's add a separate proxy-only last_change variable for internal use,
and leave the last_change shared (and thread-grouped) counter for
statistics.
The last_change server metric is used for 2 separate purposes. First it
is used to report the last server state change date for stats and other
related metrics. But it is also used internally, including in sensitive
paths, such as lb-related stuff, to take decisions or perform
computations (ie: in srv_dynamic_maxconn()).
Due to the last_change counter now being split over thread groups since
16eb0fa ("MAJOR: counters: dispatch counters over thread groups"),
reading the aggregated value has a cost, and we cannot afford to consult
the last_change value from srv_dynamic_maxconn() anymore. Moreover,
since the value is used to take decisions for the current process, we
don't want the variable to be updated by another process behind our
back.
To prevent performance regressions and sharing issues, let's instead add
a separate srv->last_change value, which is not updated atomically
(given how rare the updates are), and only serves for places where the
use of the aggregated last_change counter/stats (split over thread
groups) is too costly.
16eb0fa ("MAJOR: counters: dispatch counters over thread groups")
introduced some bugs: as a result of improper copy paste during
COUNTERS_SHARED_LAST() macro introduction, some functions such as
srv_downtime() which used to make use of the server last_change variable
now use the proxy one, which doesn't make sense and will likely cause
unexpected logical errors/bugs.
Let's fix them all at once by properly pointing to the server last_change
variable when relevant.
No backport needed.
Now that the native mailers configuration is only usable with Lua
mailers, Willy noticed that we lack a way to warn the user if mailers
were previously configured on an older version but Lua mailers were not
loaded, which could trick the user into thinking mailers keep working
when transitioning to 3.2 while they do not.
In this patch we add the 'core.use_native_mailers_config()' Lua function
which should be called in Lua script body before making use of
'Proxy:get_mailers()' function to retrieve legacy mailers configuration
from haproxy main config. This way haproxy effectively knows that the
native mailers config is actually being used from Lua (which indicates
user correctly migrated from native mailers to Lua mailers), else if
mailers are configured but not used from Lua then haproxy warns the user
about the fact that they will be ignored unless they are used from Lua.
(e.g.: using the provided 'examples/lua/mailers.lua' to ease transition)
_srv_check_proxy_mode() is currently executed during server init (from
_srv_parse_init()). While this used to be fine for current checks, it
occurs a bit too early to be usable for checks that depend on server
keywords having been evaluated, for instance.
As such, to make _srv_check_proxy_mode() more relevant and allow it to
be extended with additional checks in the future, let's call it later
during server finalization, once all server keywords have been
evaluated.
No change of behavior is expected.
This commit broke the QUIC backend connection to servers without
address validation or retry activated:
MINOR: quic-be: address validation support implementation (RETRY)
Indeed, the retry_source_connection_id transport parameter was checked
as if it was required, as if the peer (server) was always using address
validation. Furthermore, relying on ->odcid.len to ensure a retry token
was received is not correct.
This patch ensures the retry_source_connection_id transport parameter is
checked only when a retry token was received (->retry_token != NULL). In
this case, it also checks that this transport parameter is present
(tx_params->retry_source_connection_id.len != 0).
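A sketch of the corrected condition (using the fields named above):

    /* sketch: only require retry_source_connection_id after a Retry */
    if (qc->retry_token) {
        if (!tx_params->retry_source_connection_id.len)
            goto param_error; /* required after address validation */
    }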
No need to backport.