Compare commits

498 Commits

Author SHA1 Message Date
Alexander Stephan
ffbb3cc306 MINOR: sample: Add le2dec (little endian to decimal) sample fetch
This commit introduces a sample fetch, `le2dec`, to convert
little-endian binary input samples into their decimal representations.
The function converts the input into a string containing unsigned
integer numbers, with each number derived from a specified number of
input bytes. The numbers are separated using a user-defined separator.

This new converter is implemented by adding a parametrized sample_conv_2dec
function that unifies the logic of the be2dec and le2dec converters.

Co-authored-by: Christian Norbert Menges <christian.norbert.menges@sap.com>
[wt: tracked as GH issue #2915]
Signed-off-by: Willy Tarreau <w@1wt.eu>
2025-08-05 13:47:53 +02:00
Aurelien DARRAGON
aeff2a3b2a BUG/MEDIUM: hlua_fcn: ensure systematic watcher cleanup for server list iterator
In 358166a ("BUG/MINOR: hlua_fcn: restore server pairs iterator pointer
consistency"), I wrongly assumed that because the iterator was a temporary
object, no specific cleanup was needed for the watcher.

In fact watcher_detach() is not only relevant for the watcher itself, but
especially for its parent list to remove the current watcher from it.

As iterators are temporary objects, failing to remove their watchers from
the server watcher list causes the server watcher list to be corrupted.

On a normal iteration sequence, the last watcher_next() receives NULL
as target so it successfully detaches the last watcher from the list.
However the corner case here is with interrupted iterators: users are
free to break away from the iteration loop when a specific condition is
met, for instance from the lua script. When this happens,
hlua_listable_servers_pairs_iterator() doesn't get a chance to detach the
last iterator.

Also, Lua doesn't tell us that the loop was interrupted, so to fix the
issue we rely on the garbage collector to force a last detach right
before the object is freed. To achieve that, watcher_detach() was
slightly modified so that it can be called without knowing whether the
watcher is already detached or not: if watcher_detach() is called on a
detached watcher, the function does nothing. This saves the caller from
having to track the watcher state and makes the API a little more
convenient to use. We now systematically call watcher_detach() for
server iterators right before they are garbage collected.
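
As an illustration of the idempotent-detach idea, here is a simplified,
hypothetical sketch (not HAProxy's actual watcher/mt_list code):

    #include <stddef.h>

    struct watcher_sketch {
        struct watcher_sketch *next;    /* next watcher in the (simplified) list */
        struct watcher_sketch **pprev;  /* NULL when the watcher is detached */
    };

    /* Detach <w> from its list if it is attached, do nothing otherwise, so
     * that callers (e.g. the GC handler) never need to track the attach state.
     */
    static void watcher_detach_sketch(struct watcher_sketch *w)
    {
        if (!w->pprev)
            return;                     /* already detached: no-op */
        *w->pprev = w->next;
        if (w->next)
            w->next->pprev = w->pprev;
        w->next = NULL;
        w->pprev = NULL;                /* mark as detached */
    }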

This was first reported in GH #3055. It can be observed when the server
list of a given proxy is browsed more than once from Lua and a previous
iteration was interrupted before the end. As the
watcher list is corrupted, the common symptom is watcher_attach() or
watcher_next() not ending due to the internal mt_list call looping
forever.

Thanks to GH user @sabretus for their precious help.

It should be backported everywhere 358166a was.
2025-08-05 13:06:46 +02:00
William Lallemand
66f28dbd3f BUG/MINOR: acme: possible integer underflow in acme_txt_record()
a2base64url() can return a negative value if olen is too short to
hold ilen. This is not supposed to happen since the sha256 should
always fit in a buffer. But this is confusing since a2base64()
returns a signed integer which is put into output->data, which is unsigned.

Fix the issue by setting ret to 0 instead of -1 upon error, and by
returning an unsigned integer instead of a signed one.
This patch also checks the return value in the caller in order to emit
an error, instead of setting trash.data, which is already done by the
function.
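
A hedged sketch of the resulting calling convention (hypothetical names,
not the actual ACME code): the encoder returns an unsigned length, 0 on
error, and the caller reports the error itself.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical encoder following the new convention: it returns the
     * number of bytes written, or 0 when the output buffer is too small.
     */
    static size_t encode_sketch(const char *in, size_t ilen, char *out, size_t olen)
    {
        if (olen < ilen)            /* placeholder size check */
            return 0;
        memcpy(out, in, ilen);      /* placeholder "encoding" */
        return ilen;
    }

    int main(void)
    {
        char out[4];
        size_t ret = encode_sketch("sha256-digest", 13, out, sizeof(out));

        if (!ret) {                 /* the caller now reports the error itself */
            fprintf(stderr, "encoding failed: output buffer too small\n");
            return 1;
        }
        printf("encoded %zu bytes\n", ret);
        return 0;
    }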
2025-08-05 12:12:50 +02:00
William Lallemand
8afd3e588d MINOR: acme: update the log for DNS-01
Update the log for DNS-01 by mentioning the challenge_ready command
over the CLI.
2025-08-01 18:08:43 +02:00
William Lallemand
9ee14ed2d9 MEDIUM: acme: allow to wait and restart the task for DNS-01
DNS-01 needs an external process which would register a TXT record on a
DNS provider, using a REST API or something else.

To achieve this, the process should read the dpapi sink and wait for
events. With the DNS-01 challenge, HAProxy will put the task to sleep
before asking the ACME server to validate the challenge. The task then
needs to be woken up, using the command implemented by this patch.

This patch implements the "acme challenge_ready" command which should be
used by the agent once the challenge has been configured, in order to
wake the task up.

Example:
    echo "@1 acme challenge_ready foobar.pem.rsa domain kikyo" | socat /tmp/master.sock -
2025-08-01 18:07:12 +02:00
William Lallemand
3dde7626ba MINOR: acme: emit the DNS-01 challenge details on the dpapi sink
This commit adds a new message to the dpapi sink which is emitted during
the new authorization request.

One message is emitted per challenge to resolve. The certificate name as
well as the thumbprint of the account key are on the first line of the
message. The JSON response for one challenge is then dumped, and the
message ends with a \0.

The agent consuming these messages MUST NOT access the URLs, and SHOULD
only use the thumbprint, dns and token to configure a challenge.

Example:

    $ ( echo "@@1 show events dpapi -w -0"; cat - ) | socat /tmp/master.sock -  | cat -e
    <0>2025-08-01T16:23:14.797733+02:00 acme deploy foobar.pem.rsa thumbprint Gv7pmGKiv_cjo3aZDWkUPz5ZMxctmd-U30P2GeqpnCo$
    {$
       "status": "pending",$
       "identifier": {$
          "type": "dns",$
          "value": "foobar.com"$
       },$
       "challenges": [$
          {$
             "type": "dns-01",$
             "url": "https://0.0.0.0:14000/chalZ/1o7sxLnwcVCcmeriH1fbHJhRgn4UBIZ8YCbcrzfREZc",$
             "token": "tvAcRXpNjbgX964ScRVpVL2NXPid1_V8cFwDbRWH_4Q",$
             "status": "pending"$
          },$
          {$
             "type": "dns-account-01",$
             "url": "https://0.0.0.0:14000/chalZ/z2_WzibwTPvE2zzIiP3BF0zNy3fgpU_8Nj-V085equ0",$
             "token": "UedIMFsI-6Y9Nq3oXgHcG72vtBFWBTqZx-1snG_0iLs",$
             "status": "pending"$
          },$
          {$
             "type": "tls-alpn-01",$
             "url": "https://0.0.0.0:14000/chalZ/AHnQcRvZlFw6e7F6rrc7GofUMq7S8aIoeDileByYfEI",$
             "token": "QhT4ejBEu6ZLl6pI1HsOQ3jD9piu__N0Hr8PaWaIPyo",$
             "status": "pending"$
          },$
          {$
             "type": "http-01",$
             "url": "https://0.0.0.0:14000/chalZ/Q_qTTPDW43-hsPW3C60NHpGDm_-5ZtZaRfOYDsK3kY8",$
             "token": "g5Y1WID1v-hZeuqhIa6pvdDyae7Q7mVdxG9CfRV2-t4",$
             "status": "pending"$
          }$
       ],$
       "expires": "2025-08-01T15:23:14Z"$
    }$
    ^@
2025-08-01 16:48:22 +02:00
William Lallemand
365a69648c MINOR: acme: emit a log for DNS-01 challenge response
This commit emits a log which outputs the TXT entry to create in the
case of DNS-01. This is useful when you want to update your TXT entry
manually.

Example:

    acme: foobar.pem.rsa: DNS-01 requires to set the "acme-challenge.example.com" TXT record to "7L050ytWm6ityJqolX-PzBPR0LndHV8bkZx3Zsb-FMg"
2025-08-01 16:12:27 +02:00
William Lallemand
09275fd549 BUILD: acme: avoid declaring TRACE_SOURCE in acme-t.h
Files ending with '-t.h' are supposed to be used for structure
definitions and could be included in the same file to check API
definitions.

This patch removes TRACE_SOURCE from acme-t.h to avoid conflicts with
other TRACE_SOURCE definitions.
2025-07-31 16:03:28 +02:00
Amaury Denoyelle
a6e67e7b41 BUG/MEDIUM: mux-quic: ensure Early-data header is set
QUIC MUX may be initialized prior to handshake completion, when 0-RTT is
used. In this case, connection is flagged with CO_FL_EARLY_SSL_HS, which
is notably used by wait-for-hs http rule.

Early data may be subject to replay attacks. For this reason, haproxy
adds the header 'Early-data: 1' to all requests handled as TLS early
data. Thus the server can reject it if it is deemed unsafe. This header
injection is implemented by http-ana. However, it was not functional
with QUIC due to missing CO_FL_EARLY_DATA connection flag.

Fix this by ensuring that QUIC MUX sets CO_FL_EARLY_DATA when needed.
This is performed during qcc_recv() for STREAM frame reception. It is
only set if QC_CF_WAIT_HS is set, meaning that the handshake is not yet
completed. After this, the request is considered safe and the Early-data
header is not necessary anymore.
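
Schematically, the fix amounts to something like the following model
(illustrative flag values and structure names, not the actual MUX code):

    #include <stdio.h>

    #define QC_CF_WAIT_HS    0x01  /* illustrative value: handshake not completed yet */
    #define CO_FL_EARLY_DATA 0x02  /* illustrative value: request carried TLS early data */

    struct conn_sketch { unsigned int flags; };
    struct qcc_sketch  { unsigned int flags; struct conn_sketch *conn; };

    /* On STREAM frame reception, if the handshake is still pending, flag the
     * connection so that http-ana adds the "Early-data: 1" header.
     */
    static void qcc_recv_stream_sketch(struct qcc_sketch *qcc)
    {
        if (qcc->flags & QC_CF_WAIT_HS)
            qcc->conn->flags |= CO_FL_EARLY_DATA;
    }

    int main(void)
    {
        struct conn_sketch conn = { 0 };
        struct qcc_sketch qcc = { QC_CF_WAIT_HS, &conn };

        qcc_recv_stream_sketch(&qcc);
        printf("early data flagged: %s\n",
               (conn.flags & CO_FL_EARLY_DATA) ? "yes" : "no");
        return 0;
    }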

This should fix github issue #3054.

This must be backported up to 3.2 at least. If possible, it should be
backported to all stable releases as well. On these versions, the
current patch relies on the following refactoring commit:
  commit 0a53a008d0
  MINOR: mux-quic: refactor wait-for-handshake support
2025-07-31 15:25:59 +02:00
Amaury Denoyelle
697f7d1142 MINOR: muxes: refactor private connection detach
Following the latest adjustment on session_add_conn() /
session_check_idle_conn(), detach muxes callbacks were rewritten for
private connection handling.

Nothing really fancy here: some more explicit comments and the removal
of duplicate checks on the idle conn status for muxes with true
multiplexing support.
2025-07-30 16:14:00 +02:00
Amaury Denoyelle
2ecc5290f2 MINOR: session: streamline session_check_idle_conn() usage
session_check_idle_conn() is called by muxes when a connection becomes
idle. It ensures that the session idle limit is not yet reached. Else,
the connection is removed from the session and it can be freed.

Prior to this patch, session_check_idle_conn() was compatible with a
NULL session argument. In this case, it would return true, considering
that no limit was reached and connection not removed.

However, this renders the function error-prone and subject to future
bugs. This patch streamlines it by ensuring it is never called with a
NULL argument. Thus it now only returns true if the connection is kept
in the session, or false if it was removed, as originally intended.
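
As an illustration of the intended calling convention, here is a simplified
model (hypothetical types and names, not the actual session code): the
session may no longer be NULL, and the boolean return tells the caller
whether the connection was kept, the caller being responsible for closing
it otherwise.

    #include <stdbool.h>
    #include <stddef.h>
    #include <assert.h>

    struct sess_sketch { int idle_conns; int max_idle_conns; };
    struct conn_sketch { struct sess_sketch *owner; };

    static bool session_check_idle_conn_sketch(struct sess_sketch *sess,
                                               struct conn_sketch *conn)
    {
        assert(sess);                       /* callers may no longer pass NULL */
        if (sess->idle_conns >= sess->max_idle_conns) {
            conn->owner = NULL;             /* removed from the session list */
            return false;                   /* caller must close/release the conn itself */
        }
        sess->idle_conns++;                 /* kept as idle in the session */
        return true;
    }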
2025-07-30 16:13:30 +02:00
Amaury Denoyelle
dd9645d6b9 MINOR: session: do not release conn in session_check_idle_conn()
session_check_idle_conn() is called to flag a connection already
inserted in a session list as idle. If the session limit on the number
of idle connections (max-session-srv-conns) is exceeded, the connection
is removed from the session list.

In addition to the connection removal, session_check_idle_conn()
directly calls MUX destroy callback on the connection. This means the
connection is freed by the function itself and should not be used by the
caller anymore.

This is not practical when an alternative connection closure method
should be used, such as a graceful shutdown with QUIC. As such, remove
the MUX destroy invocation: it is now the responsibility of the caller
to either close or immediately release the connection.
2025-07-30 11:43:41 +02:00
Amaury Denoyelle
57e9425dbc MINOR: session: strengthen idle conn limit check
Add a BUG_ON() on session_check_idle_conn() to ensure the connection is
not already flagged as CO_FL_SESS_IDLE.

This checks that this function is only called one time per connection
transition from active to idle. This is necessary to ensure that session
idle counter is only incremented one time per connection.
2025-07-30 11:40:16 +02:00
Amaury Denoyelle
ec1ab8d171 MINOR: session: remove redundant target argument from session_add_conn()
session_add_conn() uses three arguments: the connection and session
instances, plus a void pointer labelled as target. Typically, it
represents the server, but can also be a backend instance (for example
on dispatch).

In fact, this argument is redundant as <target> is already a member of
the connection. This commit simplifies session_add_conn() by removing
it. A BUG_ON() on target is extended to ensure it is never NULL.
2025-07-30 11:39:57 +02:00
Amaury Denoyelle
668c2cfb09 MINOR: session: strengthen connection attach to session
This commit is the first one of a series to refactor the insertion of
backend private connections into the session list.

session_add_conn() is used to attach a connection into a session list.
Previously, this function would report an error if the connection
specified was already attached to another session. However, this case
currently never happens and thus can be considered as buggy.

Remove this check and replace it with a BUG_ON(). This ensures that
session insertion remains consistent. The same check is also
transformed in session_check_idle_conn().
2025-07-30 11:39:26 +02:00
Amaury Denoyelle
cfe9bec1ea MINOR: mux-quic: release conn after shutdown on BE reuse failure
On stream detach on backend side, connection is inserted in the proper
server/session list to be able to reuse it later. If insertion fails and
the connection is idle, the connection can be removed immediately.

If this occurs on a QUIC connection, QUIC MUX implements graceful
shutdown to ensure the server is notified of the closure. However, the
connection instance is not freed. Change this to ensure that both
shutdown and release are performed.
2025-07-30 10:04:19 +02:00
Aurelien DARRAGON
14966c856b MINOR: clock: make global_now_ns a pointer as well
Similar to the previous commit, but for global_now_ns.
2025-07-29 18:04:15 +02:00
Aurelien DARRAGON
4a20b3835a MINOR: clock: make global_now_ms a pointer
This is preparation work for shared counters between co-processes. As
co-processes will need to share a common date, global_now_ms will be used
for that, as it will point to the shm when sharing is enabled.

Thus in this patch we turn global_now_ms into a pointer (and adjust the
places where it is written to and read from; hopefully, atomic operations
through a pointer are already used, so the change is trivial).

For now global_now_ms points to process-local _global_now_ms which is a
fallback for when sharing through the shm is not enabled.
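
A minimal sketch of the pattern (illustrative, not the actual clock code,
with the atomic accesses omitted): the global symbol becomes a pointer
which defaults to a process-local variable and can later be re-pointed to
an shm-backed location.

    #include <stdio.h>

    /* Process-local fallback, used when sharing through the shm is not enabled. */
    static unsigned int _global_now_ms;

    /* All readers and writers go through the pointer; it may later be switched
     * to an shm-backed location when co-processes share a common date.
     */
    static unsigned int *global_now_ms = &_global_now_ms;

    int main(void)
    {
        *global_now_ms = 12345;               /* write through the pointer */
        printf("now_ms = %u\n", *global_now_ms);
        return 0;
    }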
2025-07-29 18:04:14 +02:00
Aurelien DARRAGON
713ebd2750 CLEANUP: counters: rename counters_be_shared_init to counters_be_shared_prepare
75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing the shared
stats directly in counters struct") took care of renaming
counters_fe_shared_init() but we forgot counters_be_shared_init().

Let's fix that for consistency
2025-07-29 18:00:13 +02:00
Aurelien DARRAGON
2ffe515d97 BUG/MINOR: hlua: take default-path into account with lua-load-per-thread
As discussed in GH #3051, default-path is not taken into account when
loading files using lua-load-per-thread. In fact, the initial
hlua_load_state() (performed on first thread which parses the config)
is successful, but other threads run hlua_load_state() later based
on config hints which were saved by the first thread, and those config
hints only contain the file path provided on the lua-load-per-thread
config line, not the absolute one. Indeed, `default-path` directive
changes the current working directory only for the thread parsing the
configuration.

To fix the issue, when storing config hints under hlua_load_per_thread()
we now make sure to save the absolute file path for `lua-load-per-thread'
argument.

Thanks to GH user @zhanhb for having reported the issue

It may be backported to all stable versions.
2025-07-29 17:58:28 +02:00
William Lallemand
83a335f925 MINOR: acme: implement traces
Implement traces for the ACME protocol.

 -dt acme:data:complete will dump every input and output buffers,
 including decoded buffers before being converted to JWS.
 It will also dump certificates in the traces.

 -dt acme:user:complete will only dump the state of the task handler.
2025-07-29 17:25:10 +02:00
Willy Tarreau
cedb4f0461 [RELEASE] Released version 3.3-dev5
Released version 3.3-dev5 with the following main changes :
    - BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
    - DOC: list missing global QUIC settings
2025-07-28 11:26:22 +02:00
Amaury Denoyelle
7fa812a1ac DOC: list missing global QUIC settings
Complete list of global keywords with missing QUIC entries.

This could be backported to stable versions. This requires taking into
account the version in which each keyword was introduced.
* limited-quic, introduced in 2.8
* no-quic, introduced in 2.8
* tune.quic.cc.cubic.min-losses, introduced in 3.1
2025-07-28 11:22:35 +02:00
Aurelien DARRAGON
021a0681be BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
Following c24de07 ("OPTIM: stats: store fast sharded counters pointers
at session and stream level") some crashes were observed in
connect_server():

  #0  0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
  2101                            _HA_ATOMIC_INC(&s->sv_tgcounters->connect);
  Missing separate debuginfos, use: debuginfo-install glibc-2.17-325.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64 nss-softokn-freebl-3.67.0-3.el7_9.x86_64 pcre-8.32-17.el7.x86_64
  (gdb) bt
  #0  0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
  #1  0x00000000007baff8 in back_try_conn_req (s=0x65117b0) at src/backend.c:2378
  #2  0x00000000006c0e9f in process_stream (t=0x650f180, context=0x65117b0, state=8196) at src/stream.c:2366
  #3  0x0000000000bd3e51 in run_tasks_from_lists (budgets=0x7ffd592752e0) at src/task.c:655
  #4  0x0000000000bd49ef in process_runnable_tasks () at src/task.c:889
  #5  0x0000000000851169 in run_poll_loop () at src/haproxy.c:2834
  #6  0x0000000000851865 in run_thread_poll_loop (data=0x1a03580 <ha_thread_info>) at src/haproxy.c:3050
  #7  0x0000000000852a53 in main (argc=7, argv=0x7ffd592755f8) at src/haproxy.c:3637

Here the crash occurs during the atomic inc of a sv_tgcounters metric from
the stream pointer, which tells us the pointer is likely garbage.

In fact, we assign s->sv_tgcounters each time the stream target is set to
a valid server. For that we use the stream_set_srv_target() helper which
does the assignment for us. By reviewing the code, it turns out we forgot
to call stream_set_srv_target() in pendconn_dequeue(), where the stream
target is set to the server that picked the pendconn.

Let's fix the bug by using stream_set_srv_target() there.

No backport needed unless c24de07 is.
2025-07-28 08:54:38 +02:00
Willy Tarreau
5d4ff9f02e [RELEASE] Released version 3.3-dev4
Released version 3.3-dev4 with the following main changes :
    - CLEANUP: server: do not check for duplicates anymore in findserver()
    - REORG: server: move findserver() from proxy.c to server.c
    - MINOR: server: use the tree to look up the server name in findserver()
    - CLEANUP: server: rename server_find_by_name() to server_find()
    - CLEANUP: server: rename findserver() to server_find_by_name()
    - CLEANUP: server: use server_find_by_name() where relevant
    - CLEANUP: cfgparse: lookup proxy ID using existing functions
    - CLEANUP: stream: lookup server ID using standard functions
    - CLEANUP: server: simplify server_find_by_id()
    - CLEANUP: server: add server_find_by_addr()
    - CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
    - CLEANUP: server: be sure never to compare src against a non-existing defsrv
    - MEDIUM: proxy: take the defsrv out of the struct proxy
    - MINOR: proxy: add checks for defsrv's validity
    - MEDIUM: proxy: no longer allocate the default-server entry by default
    - MEDIUM: proxy: register a post-section cleanup function
    - MINOR: debug: report haproxy and operating system info in panic dumps
    - BUG/MEDIUM: h3: do not overwrite interim with final response
    - BUG/MINOR: h3: properly realloc buffer after interim response encoding
    - BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
    - MINOR: qmux: change API for snd_buf FIN transmission
    - BUG/MEDIUM: h3: handle interim response properly on FE side
    - BUG/MINOR: h3: properly handle interim response on BE side
    - BUG/MINOR: quic: Wrong source address use on FreeBSD
    - MINOR: h3: remove unused outbuf in h3_resp_headers_send()
    - BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
    - DEV: gdb: add a memprofile decoder to the debug tools
    - MINOR: quic: Get rid of qc_is_listener()
    - DOC: connection: explain the rules for idle/safe/avail connections
    - BUG/MEDIUM: quic-be: CC buffer released from wrong pool
    - BUG/MINOR: halog: exit with error when some output filters are set simultaneosly
    - MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
    - MINOR: cpu-topo: write thread-cpu bindings into trash buffer
    - MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
    - MINOR: debug: add thread-cpu bindings info in 'show dev' output
    - MINOR: quic: Remove pool_head_quic_be_cc_buf pool
    - BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
    - BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
    - BUG/MINOR: logs: fix log-steps extra log origins selection
    - BUG/MINOR: hq-interop: fix FIN transmission
    - MINOR: ssl: Add ciphers in ssl traces
    - MINOR: ssl: Add curve id to curve name table and mapping functions
    - MINOR: ssl: Add curves in ssl traces
    - MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
    - MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
    - MINOR: h3: use smallbuf for request header emission
    - MINOR: h3: add traces to h3_req_headers_send()
    - BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
    - MINOR: log: explicitly ignore "log-steps" on backends
    - BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
    - BUG/MINOR mux-quic: apply correctly timeout on output pending data
    - BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
    - MINOR: mux-quic: refactor timeout code
    - MINOR: mux-quic: correctly implement backend timeout
    - MINOR: mux-quic: disable glitch on backend side
    - MINOR: mux-quic: store session in QCS instance
    - MEDIUM: mux-quic: implement be connection reuse
    - MINOR: mux-quic: do not reuse connection if app already shut
    - MEDIUM: mux-quic: support backend private connection
    - MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
    - BUG/MINOR: acme: allow "processing" in challenge requests
    - CLEANUP: acme: fix wrong spelling of "resources"
    - CLEANUP: ssl: Use only NIDs in curve name to id table
    - MINOR: acme: add ACME to the haproxy -vv feature list
    - BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
    - BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
    - BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
    - BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
    - BUG/MEDIUM: Remove sync sends from streams to applets
    - MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
    - MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
    - MEDIUM: hlua: Update the tcp applet to use its own buffers
    - MINOR: hlua: Fill the request array on the first HTTP applet run
    - MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
    - MEDIUM: hlua: Update the http applet to use its own buffers
    - BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
    - BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
    - MEDIUM: hlua: Update the socket applet to use its own buffers
    - BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
    - MEDIUM: dns: Update the dns_session applet to use its own buffers
    - CLEANUP: http-client: Remove useless indentation when sending request body
    - MINOR: http-client: Try to send request body with headers if possible
    - MINOR: http-client: Trigger an error if first response block isn't a start-line
    - BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
    - MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
    - MEDIUM: http-client: Update the http-client applet to use its own buffers
    - MEDIUM: log: Update the log applet to use its own buffers
    - MEDIUM: sink: Update the sink applets to use their own buffers
    - MEDIUM: peers: Update the peer applet to use its own buffers
    - MEDIUM: promex: Update the promex applet to use their own buffers
    - MINOR: applet: Add support for flags on applets with a flag about the new API
    - MEDIUM: applet: Emit a warning when a legacy applet is spawned
    - BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
    - MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
    - CLEANUP: compiler: prefer char * over void * for pointer arithmetic
    - CLEANUP: include: replace hand-rolled offsetof to avoid UB
    - CLEANUP: peers: remove unused peer_session_target()
    - OPTIM: stats: store fast sharded counters pointers at session and stream level
2025-07-26 09:55:26 +02:00
Aurelien DARRAGON
c24de077bd OPTIM: stats: store fast sharded counters pointers at session and stream level
Following commit 75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing
the shared stats directly in counters struct"), in order to minimize the
impact of the recent sharded counters work, we try to push things a bit
further in this patch by storing and using "fast" pointers at the session
and stream levels when available to avoid costly indirections and
systematic "tgid" resolution (which can not be cached by the CPU due to
its THREAD-local nature).

Indeed, we know that a session/stream is tied to a given CPU, thanks to
this we know that the tgid for a given session/stream will never change.

Given that, we are able to store the sharded frontend and listener
counters pointers at the session level (namely sess->fe_tgcounters and
sess->li_tgcounters), and once the backend and the server are selected,
we are also able to store the backend and server sharded counters
pointers at the stream level (namely s->be_tgcounters and
s->sv_tgcounters).

Everywhere we rely on these counters and the stream or session context is
available, we use the fast pointers instead of the indirect pointer path
to make the pointer resolution a bit faster.

This optimization proved to bring a few percent back, and together with
the previous 75e480d10 commit we now fixed the performance regression (we
are back on par with 3.2 stats performance).
2025-07-25 18:24:23 +02:00
Aurelien DARRAGON
cf8ba60c88 CLEANUP: peers: remove unused peer_session_target()
Since commit 7293eb68 ("MEDIUM: peers: use server as stream target") the
peer session target always points to the server in order to benefit from
existing server transport options.

Thanks to that, it is no longer necessary to have the peer_session_target()
helper function, because all it does is return the pointer to the
server object. Let's get rid of it.
2025-07-25 18:24:17 +02:00
Ben Kallus
1e48ec7f6c CLEANUP: include: replace hand-rolled offsetof to avoid UB
The C standard specifies that it's undefined behavior to dereference
NULL (even if you use & right after). The hand-rolled offsetof idiom
&(((s*)NULL)->f) is thus technically undefined. This clutters the
output of UBSan and is simple to fix: just use the real offsetof when
it's available.

Note that there's no clear statement about this point in the spec,
only several points which together converge to this:

- From N3220, 6.5.3.4:
  A postfix expression followed by the -> operator and an identifier
  designates a member of a structure or union object. The value is
  that of the named member of the object to which the first expression
  points, and is an lvalue.

- From N3220, 6.3.2.1:
  An lvalue is an expression (with an object type other than void) that
  potentially designates an object; if an lvalue does not designate an
  object when it is evaluated, the behavior is undefined.

- From N3220, 6.5.4.4 p3:
  The unary & operator yields the address of its operand. If the
  operand has type "type", the result has type "pointer to type". If
  the operand is the result of a unary * operator, neither that operator
  nor the & operator is evaluated and the result is as if both were
  omitted, except that the constraints on the operators still apply and
  the result is not an lvalue. Similarly, if the operand is the result
  of a [] operator, neither the & operator nor the unary * that is
  implied by the [] is evaluated and the result is as if the & operator
  were removed and the [] operator were changed to a + operator.

=> In short, this is saying that C guarantees these identities:
    1. &(*p) is equivalent to p
    2. &(p[n]) is equivalent to p + n

As a consequence, &(*p) doesn't result in the evaluation of *p, only
the evaluation of p (and similar for []). There is no corresponding
special carve-out for ->.

See also: https://pvs-studio.com/en/blog/posts/cpp/0306/
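
For illustration, a minimal example contrasting the hand-rolled idiom with
the standard macro:

    #include <stdio.h>
    #include <stddef.h>

    struct example {
        int  a;
        char b;
        long c;
    };

    /* Hand-rolled idiom: syntactically dereferences a NULL pointer, which
     * UBSan reports even though & is applied right after.
     */
    #define my_offsetof(type, field) ((size_t)&(((type *)NULL)->field))

    int main(void)
    {
        /* Same numeric result on common ABIs, but only offsetof() is
         * guaranteed by the standard to be well defined.
         */
        printf("hand-rolled: %zu\n", my_offsetof(struct example, c));
        printf("offsetof():  %zu\n", offsetof(struct example, c));
        return 0;
    }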

After this patch, HAProxy can run without crashing after building w/
clang-19 -fsanitize=undefined -fno-sanitize=function,alignment
2025-07-25 17:54:32 +02:00
Ben Kallus
d3b46cca7b CLEANUP: compiler: prefer char * over void * for pointer arithmetic
This patch changes two instances of pointer arithmetic on void *
to use char * instead, to avoid UB. This is essentially to please
UB analyzers, though.
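
A small self-contained illustration of the change (arithmetic on void * is
a GNU extension, not standard C):

    #include <stdio.h>

    int main(void)
    {
        int data[4] = { 1, 2, 3, 4 };
        void *base = data;

        /* Non-standard: arithmetic on void * (GNU extension, sizeof(void) == 1). */
        /* int *second_bad = (int *)(base + sizeof(int)); */

        /* Portable: cast to char * before doing byte-wise arithmetic. */
        int *second = (int *)((char *)base + sizeof(int));

        printf("%d\n", *second);   /* prints 2 */
        return 0;
    }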
2025-07-25 17:54:32 +02:00
Aurelien DARRAGON
75e480d107 MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
Between 3.2 and 3.3-dev we noticed a noticeable performance regression
due to stats handling. After bisecting, Willy found out that recent
work to split stats computing accross multiple thread groups (stats
sharding) was responsible for that performance regression. We're looking
at roughly 20% performance loss.

More precisely, it is the added indirections, multiplied by the number
of statistics that are updated for each request, which in the end causes
a significant amount of time being spent resolving pointers.

We noticed that the fe_counters_shared and be_counters_shared structures
which are currently allocated in dedicated memory since a0dcab5c
("MAJOR: counters: add shared counters base infrastructure")
are no longer huge since 16eb0fab31 ("MAJOR: counters: dispatch counters
over thread groups") because they now essentially hold flags plus the
per-thread group id pointer mapping, not the counters themselves.

As such we decided to try merging fe_counters_shared and
be_counters_shared into their parent structures. The cost is a slight
memory overhead for the parent structure, but it allows us to get rid of
one pointer indirection. This patch alone yields visible performance
gains and almost restores 3.2 stats performance.

counters_fe_shared_get() was renamed to counters_fe_shared_prepare() and
now returns either failure or success instead of a pointer, because we
don't need to retrieve a shared pointer anymore; the function takes care
of initializing the existing pointer.
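
Schematically, the layout change removing the indirection looks like this
(illustrative structures, not the actual counters definitions):

    /* Before: the shared part was a separately allocated object, so every
     * counter update paid an extra pointer dereference.
     */
    struct fe_counters_shared_old { unsigned int flags; void *tg_ptrs; };
    struct fe_counters_old        { struct fe_counters_shared_old *shared; };

    /* After: the (now small) shared part is embedded directly in the parent
     * counters structure; a slightly larger parent, one less indirection.
     */
    struct fe_counters_shared_new { unsigned int flags; void *tg_ptrs; };
    struct fe_counters_new        { struct fe_counters_shared_new shared; };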
2025-07-25 16:46:10 +02:00
Aurelien DARRAGON
31adfb6c15 BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
Since ccc43412 ("OPTIM: log: use thread local lf_buildctx to stop pushing
it on the stack"), recursively calling sess_build_logline_orig(), which
may for instance happen when leveraging %ID (or unique-id fetch) for the
first time, would lead to undefined behavior because the parent
sess_build_logline_orig() build context was shared between recursive calls
(only one build ctx per thread to avoid pushing it on the stack for each
call)

In short, the parent build ctx would be altered by the recursive calls,
which is obviously not expected and could result in log formatting errors.

To fix the issue but still avoid polluting the stack with the large
lf_buildctx struct, let's move the static 256-byte build buffer out of the
buildctx so that the buildctx is stored on the stack again (each function
invocation has its own dedicated build ctx). On the other hand, it's
acceptable to have only one 256-byte build buffer per thread because the
build buffer is not involved in recursive calls (unlike the build ctx).
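
Schematically, the change looks like the following sketch (simplified,
hypothetical names, not the actual log code):

    /* Before: one build context (including its 256-byte scratch buffer) per
     * thread, shared by recursive calls, so a nested call clobbered the
     * parent's state.
     *
     * After (sketch): only the scratch buffer stays thread-local; the
     * context itself lives on the stack of each invocation.
     */
    #define BUILD_BUF_SIZE 256

    static __thread char build_buf[BUILD_BUF_SIZE]; /* safe: not used across recursion */

    struct buildctx_sketch {
        char *buf;       /* points to the thread-local scratch buffer */
        int   in_text;   /* per-call formatting state (illustrative) */
    };

    static void build_logline_sketch(void)
    {
        struct buildctx_sketch ctx = { .buf = build_buf, .in_text = 0 };

        /* ... a nested call now gets its own <ctx> on its own stack ... */
        (void)ctx;
    }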

Thanks to Willy and Vincent Gramer for spotting the bug and providing
useful repro.

It should be backported in 3.0 with ccc43412.
2025-07-25 16:46:03 +02:00
Christopher Faulet
b8d5307bd9 MEDIUM: applet: Emit a warning when a legacy applet is spawned
To motivate developers to support the new applets API, a warning is now
emitted when a legacy applet is spawned. To not flood users, this warning is
only emitted once per legacy applet. To do so, the applet flag
APPLET_FL_WARNED was added. It is set when the warning is emitted.

Note that the test and set on this flag are not performed via atomic
operations. So it is possible to have more than one warning for a given
applet if it is spawned at the same time on several threads. At worst,
there is one warning per thread.
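
As an illustration, a simplified model of the warn-once logic (hypothetical
structure, illustrative flag value):

    #include <stdio.h>

    #define APPLET_FL_WARNED 0x01   /* illustrative flag value */

    struct applet_sketch { unsigned int flags; const char *name; };

    /* Emit the "legacy applet" warning only once per applet. The check and
     * the set are not atomic, so two threads spawning the same applet at
     * the same time may each emit the warning: at worst one per thread.
     */
    static void warn_legacy_applet(struct applet_sketch *app)
    {
        if (app->flags & APPLET_FL_WARNED)
            return;
        app->flags |= APPLET_FL_WARNED;
        fprintf(stderr, "applet '%s' uses the legacy API\n", app->name);
    }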
2025-07-25 15:53:33 +02:00
Christopher Faulet
337768656b MINOR: applet: Add support for flags on applets with a flag about the new API
A new field was added in the applet structure to be able to set flags on
the applets. The first one is related to the new API: APPLET_FL_NEW_API is
set for applets based on the new API. It was set on all of HAProxy's
applets.
2025-07-25 15:44:02 +02:00
Christopher Faulet
2e5e6cdf23 MEDIUM: promex: Update the promex applet to use their own buffers
Thanks to this patch, the promex applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
2025-07-24 12:13:42 +02:00
Christopher Faulet
a2cb0033bd MEDIUM: peers: Update the peer applet to use its own buffers
Thanks to this patch, the peer applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
576361c23e MEDIUM: sink: Update the sink applets to use their own buffers
Thanks to this patch, the sink applets are now using their own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
5da704b55f MEDIUM: log: Update the log applet to use its own buffers
Thanks to this patch, the log applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
6a2b354dea MEDIUM: http-client: Update the http-client applet to use its own buffers
Thanks to this patch, the http-client applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
2025-07-24 12:13:42 +02:00
Christopher Faulet
d05ff904bf MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
In the CLI I/O handler interacting with the HTTP client, in HTX mode, after
a dump of the HTX message, data must be removed. Instead of removing all
blocks one by one, we can call htx_reset() because the whole message must
be flushed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
1741bc4bf0 BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
In the CLI I/O handler interacting with the HTTP client, we must not try to
push raw headers in HTX mode, because there is no raw data in this
mode. This prevented the HTX dump at the end of the I/O handler.

It is a 3.3-specific issue. No backport needed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
88aa7a780c MINOR: http-client: Trigger an error if first response block isn't a start-line
The first HTX block of a response must be a start-line. There is no reason
to wait for something else. And if there are output data in the response
channel buffer, it means the start-line must be found there.
2025-07-24 12:13:42 +02:00
Christopher Faulet
c08a0dae30 MINOR: http-client: Try to send request body with headers if possible
There is no reason to yield after sending the request headers, except if the
request was fully sent. If there is a payload, it is better to send it as
well. However, when the whole request was sent, we can leave the I/O handler.
2025-07-24 12:13:42 +02:00
Christopher Faulet
96aa251d20 CLEANUP: http-client: Remove useless indentation when sending request body
It was useless to have an extra indentation level to handle the
HTTPCLIENT_S_REQ_BODY state in the http-client I/O handler.
2025-07-24 12:13:42 +02:00
Christopher Faulet
217da087fd MEDIUM: dns: Update the dns_session applet to use its own buffers
Thanks to this patch, the dns_session applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
765f14e0e3 BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
The issue was introduced by commit 27236f221 ("BUG/MINOR: dns: add tempo
between 2 connection attempts for dns servers"). In this patch, to delay the
reconnection, a timer is used on the appctx when it is created. This
postpones the appctx initialization. However, once initialized, the
expiration time of the underlying task is not reset. So, it is always
considered as expired and the appctx is woken up in a loop.

The fix is quite simple. In dns_session_init(), the expiration time of the
appctx's task is now always set to TICK_ETERNITY.

This patch must be backported everywhere the commit above was backported. So
as far as 2.8 for now but possibly to all stable versions.
2025-07-24 12:13:41 +02:00
Christopher Faulet
e542d2dfaa MEDIUM: hlua: Update the socket applet to use its own buffers
Thanks to this patch, the lua cosocket applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
7e96ff6b84 BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
It is a fix similar to the previous one ("BUG/MEDIUM: hlua: Report to SC
when data were consumed on a lua socket"), but for the write side. The
writer must notify the cosocket that it needs more space in the request
buffer to produce more data, by calling sc_need_room(). Otherwise, there is
nothing to prevent the cosocket applet from being woken up again and again.

This patch must be backported as far as 2.8, and maybe to 2.6 too.
2025-07-24 12:13:41 +02:00
Christopher Faulet
21e45a61d1 BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
The lua cosockets are quite strange. There is an applet used to handle the
connection, and writers and readers subscribed on it to write or read
data. Writers and readers are tasks woken up by the cosocket applet when
data can be consumed or produced, depending on the channel buffers'
state. Then the cosocket applet is woken up by writers and readers when read
or write events were performed.

It means the cosocket applet has only little information on what was
produced or consumed. It is the writers' and readers' responsibility to
notify any blocking. Among other things, the readers must take care to
notify the stream on top of the cosocket applet that some data was
consumed. Otherwise, it may remain blocked, waiting for a write event (a
write event from the stream point of view is a read event from the cosocket
point of view).

This patch must be backported as far as 2.8, and maybe to 2.6 too.
2025-07-24 12:13:41 +02:00
Christopher Faulet
48df877dab MEDIUM: hlua: Update the http applet to use its own buffers
Thanks to this patch, the lua HTTP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Functions to receive and send data have also been updated to
use the applet API and to remove any dependencies on the stream-connectors
and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
3e456be5ae MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
The hlua_http_get_headers() function was using the HTTP message from the
stream TXN to retrieve the headers of a message. However, this would be an
issue when updating the lua HTTP applet to use its own buffers. Indeed, in
that case, information from the channels will be unavailable. So
hlua_http_get_headers() is now using a buffer containing an HTX message. It
is just an API change because, internally, the function was already
manipulating an HTX message.
2025-07-24 12:13:41 +02:00
Christopher Faulet
15080d9aae MINOR: hlua: Fill the request array on the first HTTP applet run
When a lua HTTP applet is created, a "request" object is created, filled
with the request information (method, path, headers...), to be able to
easily retrieve this information from the script. However, this was done
when the appctx was created, retrieving the info from the request channel.

To be able to update the applet to use its own buffer, it is now performed
on the first applet run. Indeed, when the applet is created, the info is
not forwarded yet and should not be accessed. Note that for now, the
information is still retrieved from the channel.
2025-07-24 12:13:41 +02:00
Christopher Faulet
fdb66e6c5e MEDIUM: hlua: Update the tcp applet to use its own buffers
Thanks to this patch, the lua TCP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. Other changes are quite light. Mainly, end of stream and
errors are reported on the appctx instead of the stream-endpoint descriptor.
2025-07-24 12:13:41 +02:00
Christopher Faulet
1f9a1cbefc MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
applet_get_inbuf() and applet_get_outbuf() functions were not testing if the
buffers were available. So, the caller had to check them before calling one
of these functions. It is not really handy. So now, these functions take
care to have a fully usable buffer before returning. Otherwise NULL is
returned.
2025-07-24 12:13:41 +02:00
Christopher Faulet
44aae94ab9 MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
It will be useful for HTX applets because available data in the input buffer and
available space in the output buffer are computed from the HTX message and not
the buffer itself. So now, applet_htx_input_data() and applet_htx_output_room()
functions can be used.
2025-07-24 12:13:41 +02:00
Christopher Faulet
d9855102cf BUG/MEDIUM: Remove sync sends from streams to applets
When the applet API was reviewed to use dedicated buffers, support for
sync sends from the streams to applets was added. Unfortunately, it was not
a good idea because this way it is possible to deliver data to an applet
and release it just after, truncating data. Indeed, the release stage for
applets is related to the stream release itself. However, unlike the
multiplexers, the applets cannot survive a stream for now.

So, for now, the sync sends from the streams are removed for applets,
waiting for a better way to handle the applet release stage.

Note that this only concerns applets using their own buffers. And as of
now, the bug is harmless because all refactored applets are on the server
side and consume data first. But this will be an issue with the HTTP
client.

This patch should be backported as far as 3.0 after a period of observation.
2025-07-24 12:13:41 +02:00
Christopher Faulet
574d0d8211 BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
The applet_getword() function returns one extra byte when a string is
returned because the "ret" variable is not reset before the loop on the
data. The patch also fixes applet_getline().

It is a 3.3-specific issue. No need to backport.
2025-07-24 12:13:41 +02:00
Christopher Faulet
41a40680ce BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
The sc_is_send_allowed() function is used to know if an applet is able to
receive data from the stream. But this function was designed for applets
using the channel buffers. It is not adapted to applets using their own
buffers.

When the SE_FL_WAIT_DATA flag is set, it means the applet is waiting for
more data and should not be woken up without new data. For applets using
the channel buffers, just testing the flag is enough because
process_stream() will remove it when more data become available. For
applets using their own buffers, it is more complicated. Some data may be
blocked in the output channel buffer. In that case, and when the applet
input buffer can receive data, the applet can be woken up.

This patch must be backported as far as 3.0 after a period of observation.
2025-07-24 12:13:41 +02:00
Christopher Faulet
0d371d2729 BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
When data are skipped from the input buffer of an applet, we must take care
to notify that the input buffer is no longer full. Otherwise, this could
prevent the stream from pushing data to the applet.

It is 3.3-specific. No backport needed.
2025-07-24 12:13:41 +02:00
Christopher Faulet
5b5ecf848d BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
When an HTTP applet tries to retrieve data, the request headers are still
in the buffer. But, instead of being silently removed, their size is
deducted from the amount of data retrieved. When the request payload is
fully retrieved, it is not an issue. But it is a problem when a length is
specified: the data are shortened by the headers' size.

So now, we take care to silently remove the headers.

This patch must be backported to all stable versions.
2025-07-24 12:13:41 +02:00
William Lallemand
8258c8166a MINOR: acme: add ACME to the haproxy -vv feature list
Add "ACME" in the feature list in order to check if the support was
built successfully.
2025-07-24 11:49:11 +02:00
Remi Tricot-Le Breton
14615a8672 CLEANUP: ssl: Use only NIDs in curve name to id table
The curve name to curve id mapping table was built out of multiple
internal tables found in openssl sources, namely the 'nid_to_group'
table found in 'ssl/t1_lib.c' which maps openssl specific NIDs to public
IANA curve identifiers. In this table, there were two instances of
EVP_PKEY_XXX ids being used while all the other ones are NID_XXX
identifiers.
Since the two EVP_PKEY are actually equal to their NID equivalent in
'include/openssl/evp.h' we can use NIDs all along for better coherence.
2025-07-24 10:58:54 +02:00
Ilia Shipitsin
a2267fafcf CLEANUP: acme: fix wrong spelling of "resources"
"ressources" was used as a variable name, let's use English variant
to make spell check happier
2025-07-24 08:11:42 +02:00
William Lallemand
02db0e6b9f BUG/MINOR: acme: allow "processing" in challenge requests
Allow the "processing" status in the challenge object when requesting
to do the challenge, in addition to "pending".

According to RFC 8555 https://datatracker.ietf.org/doc/html/rfc8555/#section-7.1.6

   Challenge objects are created in the "pending" state.  They
   transition to the "processing" state when the client responds to the
   challenge (see Section 7.5.1)

However some CA could respond with a "processing" state without ever
transitioning to "pending".

Must be backported to 3.2.
2025-07-23 16:07:03 +02:00
William Lallemand
c103123c9e MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
acme_req_auth() is only a call to acme_post_as_get() now, there's no
reason to keep the function. This patch removes it.
2025-07-23 16:07:03 +02:00
Amaury Denoyelle
08d664b17c MEDIUM: mux-quic: support backend private connection
If a backend connection is private, it should not be reused outside of
its original attached session. As such, on stream detach operation, such
connection is never inserted into server idle/avail list. Instead, it is
stored directly on the session.

The purpose of this commit is to implement proper handling of private
backend connections via QUIC multiplexer.
2025-07-23 15:49:51 +02:00
Amaury Denoyelle
00d668549e MINOR: mux-quic: do not reuse connection if app already shut
QUIC connection graceful closure is performed in two steps. First, the
application layer is closed. In the context of HTTP/3, this is done with
a GOAWAY frame emission, which forbids opening of new streams. Then the
whole connection is terminated via CONNECTION_CLOSE which is the final
emitted frame.

This commit ensures that when the app layer is shut for a backend
connection, this connection is removed from either the idle or avail
server tree. The objective is to prevent the stream layer from trying to
reuse a connection if no new stream can be attached to it.

New BUG_ON checks are inserted in qmux_strm_attach() and h3_attach() to
ensure that this assertion is always true.
2025-07-23 15:45:18 +02:00
Amaury Denoyelle
3217835b1d MEDIUM: mux-quic: implement be connection reuse
Implement support for QUIC connection reuse on the backend side. The
main change is done during detach stream operation. If a connection is
idle, it is inserted in the server list. Else, it is stored in the
server avail tree if there is room for more streams.

For a non-idle connection, qmux_avail_streams() is reused to detect
whether the stream flow-control limit is already reached. If it is, the
connection is not inserted in the avail tree, so it cannot be reused,
even if flow-control is unblocked later by the peer. This latter point
could be improved in the future.

Note that support for QUIC private connections is still missing. The
reuse code will evolve to fully support this case.
2025-07-23 15:45:09 +02:00
Amaury Denoyelle
3bf37596ba MINOR: mux-quic: store session in QCS instance
Add a new <sess> member into QCS structure. It is used to store the
parent session of the stream on attach operation. This is only done for
backend side.

This new member will become necessary when connection reuse will be
implemented. <owner> member of connection is not suitable as it could be
set to NULL, notably after a session_add_conn() failure.

Also, a single BE conn can be shared across different session instances,
in particular when using aggressive/always reuse mode. Thus it is
necessary to link each QCS instance with its session.
2025-07-23 15:42:37 +02:00
Amaury Denoyelle
826f797bb0 MINOR: mux-quic: disable glitch on backend side
For now, the QUIC glitch limit counter is only available on the frontend
side. Thus, disable incrementation on the backend side for now. Also, the
session is only reliably available as the conn <owner> on the frontend
side, so the session_add_glitch_ctr() operation is also secured.
2025-07-23 14:39:18 +02:00
Amaury Denoyelle
89329b147d MINOR: mux-quic: correctly implement backend timeout
qcc_refresh_timeout() is the function called on QUIC MUX activity. Its
purpose is to update the timeout by selecting the correct value
depending on the connection state.

Prior to this patch, backend connections were mostly ignored by the
function. However, the default server timeout was selected as a
fallback. This is incompatible with backend connection reuse.

This patch fixes the timeout applied on backend connections. Only the
frontend-specific values, namely the http-request and http-keep-alive
timeouts, are now ignored for a backend connection. Also, the fallback
timeout is only used for frontend connections.

This patch ensures that an idle backend connection won't be deleted due
to server timeout. This is necessary for proper connection reuse which
will be implemented in a future patch.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
95cb763cd6 MINOR: mux-quic: refactor timeout code
This commit is a small reorganization of condition used into
qcc_refresh_timeout(). Its objective is to render the code more logical
before the next patch which will ensure that timeout is properly set for
backend connections.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
558532fc57 BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
If a connection remains on a proxy currently disabled or stopped, a
special spread timeout is set if active close is configured. For QUIC
MUX, this is set via qcc_refresh_timeout() as with all other timeout
values.

Fix this closing timeout setting: it is now used as an override to any
other timeout that may have been chosen, if the calculated spread time is
lower than the previously selected value. This is done for backend
connections as well.

This should be backported up to 2.6 after a period of observation.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
c5bcc3a21e BUG/MINOR mux-quic: apply correctly timeout on output pending data
When no stream is attached, mux layer is responsible to maintain a
timeout. The first criteria is to apply client/server timeout if there
is still data waiting for emission.

Previously, <hreq> qcc member was used to determine this state. However,
this only covers bidirectional streams. Fix this by testing if
<send_list> is empty or not. This is enough to take into account both
bidi and uni streams.

Theoretically, this should be backported to every stable version.
However, send-list is not available on 2.6 and there is no alternative
to quickly determine if there is waiting output data. Thus, it's better
to backport it up to 2.8 only.
2025-07-23 14:36:48 +02:00
William Lallemand
7139ebd676 BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
The requests that checked the status of the challenge and the retrieval
of the certificate were done using a GET.

This is working with letsencrypt and other CA providers, but it might
not work everywhere. RFC 8555 specifies that only the directory and
newNonce resources MUST work with GET requests, but everything else
must use POST-as-GET.

Must be backported to 3.2.
2025-07-23 12:42:23 +02:00
Aurelien DARRAGON
054fa05e1f MINOR: log: explicitly ignore "log-steps" on backends
"log-steps" was already ignored if directly defined in a backend section,
however, when defined in a defaults section it was inherited to all
proxies no matter their capability (ie: including backends).

As configurations often contain more backends than frontends, this would
result in wasted memory given that the log-steps setting is only
considered on frontends.

Let's fix that by preventing the inheritance from defaults section to
anything else than frontends. Also adjust the documentation to mention
that the setting is not relevant for backends.
2025-07-22 10:22:04 +02:00
Amaury Denoyelle
e02939108e BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
Due to the introduction of smallbuf usage for HTTP/3 headers emission,
the ret variable may be used uninitialized if the buffer allocation fails
due to a lack of room in the QUIC connection window.

Fix this by setting ret value to 0.

Function variable declarations are also adjusted so that the pattern is
similar to h3_resp_headers_send(). Finally, the outbuf buffer is also
removed as it is now unused.

No need to backport.
2025-07-22 09:42:52 +02:00
Amaury Denoyelle
cbbbf4ea43 MINOR: h3: add traces to h3_req_headers_send()
Add traces during HTTP/3 request encoding. This operation is performed
on the backend side.
2025-07-21 16:58:12 +02:00
Amaury Denoyelle
3126cba82e MINOR: h3: use smallbuf for request header emission
Similarly to HTTP/3 response encoding, a small buffer is first allocated
for the request encoding on the backend side. If this is not sufficient,
the smallbuf is replaced by a standard buffer and encoding is restarted.

This is useful to reduce the window usage over a connection for smaller
requests.
2025-07-21 16:58:12 +02:00
Remi Tricot-Le Breton
7fd849f4e0 MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
SSL libraries like wolfSSL that don't have the clienthello callback
mechanism enabled do not need to have the traces that are only called
from the said callback.
The code added to parse the ciphers relied on a function that was not
defined in wolfSSL (SSL_CIPHER_find).
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
665b7d4fa9 MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
The contents of the extensions were only dumped with verbosity
'complete' which meant that the 'advanced' verbosity was pretty much
useless despite what its name implies (it was the same as the 'simple'
one).
The 'advanced' verbosity is now the "maximum" one, using 'complete'
would not add any extra information yet, but it leaves more room for
some actually large traces to be dumped later on (some complete
ClientHello dumps for instance).
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
8f2b787241 MINOR: ssl: Add curves in ssl traces
Dump the ClientHello curves in the SSL traces.
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
d799a1b3b2 MINOR: ssl: Add curve id to curve name table and mapping functions
SSL libraries like OpenSSL, for instance, do not seem to actually
provide a public mapping between IANA-defined curve IDs and curve names,
or even a mapping between curve IDs and internal NIDs.
This new table regroups all this information in a single place so that
we can convert curve names (be it SECG or NIST format) to curve IDs or
NIDs.
The previously existing 'curves2nid' function now uses the new table,
and a new 'curveid2str' one is added.
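
For illustration only, here is a minimal standalone sketch of the idea (this is
not HAProxy's actual table; the entries are simply the well-known IANA group
IDs with their SECG and NIST names):

    #include <stdio.h>
    #include <string.h>

    /* illustrative mapping between IANA TLS group IDs and curve names */
    static const struct {
        int         id;    /* IANA "TLS Supported Groups" value */
        const char *secg;  /* SECG name */
        const char *nist;  /* NIST name, when one exists */
    } curves[] = {
        { 23, "secp256r1", "P-256" },
        { 24, "secp384r1", "P-384" },
        { 25, "secp521r1", "P-521" },
        { 29, "x25519",    NULL    },
        { 30, "x448",      NULL    },
    };

    static int curve_name_to_id(const char *name)
    {
        size_t i;

        for (i = 0; i < sizeof(curves) / sizeof(curves[0]); i++)
            if ((curves[i].secg && strcmp(name, curves[i].secg) == 0) ||
                (curves[i].nist && strcmp(name, curves[i].nist) == 0))
                return curves[i].id;
        return -1;
    }

    int main(void)
    {
        printf("P-256 -> %d\n", curve_name_to_id("P-256")); /* prints 23 */
        return 0;
    }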
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
f00d9bf12d MINOR: ssl: Add ciphers in ssl traces
Decode the contents of the ClientHello ciphers extension and dump a
human readable list in the ssl traces.
2025-07-21 16:44:50 +02:00
Amaury Denoyelle
b0fe453079 BUG/MINOR: hq-interop: fix FIN transmission
Since the following patch, the app_ops layer is now responsible for reporting
that the HTX block was the last one transmitted so that the STREAM FIN can be set.
This is mandatory to properly support HTTP 1xx interim responses.

  f349df44b4
  MINOR: qmux: change API for snd_buf FIN transmission

This change was correctly implemented in HTTP/3 code, however an issue
appeared on hq-interop transcoder in case zero-copy DATA transfer is
performed when HTX buffer is swapped. If this occured during the
transfer of the last HTX block, EOM is not detected and thus STREAM FIN
is never set.

Most of the time, the QMUX shut callback is called immediately after. This
results in an emission of a RESET_STREAM to the client, which prevents
the data transfer.

To fix this, use the same method as HTTP/3 : HTX EOM flag status is
checked before any transfer, thus preserving it even after a zero-copy.

The criticality of this bug is low as hq-interop is experimental and is mostly
used for interop testing.

This should fix github issue #3038.

This patch must be backported wherever the above one is.
2025-07-21 15:38:02 +02:00
Aurelien DARRAGON
563b4fafc2 BUG/MINOR: logs: fix log-steps extra log origins selection
Willy noticed that it was not possible to select extra log origins using
log-steps directive. Extra origins are the one registered using
log_orig_register() such as http-req.

The reason was that the error path was always executed during extra log origin
matching in the log-steps parser, while it should only be executed if no
match was found.

It should be backported to 3.1.
2025-07-21 15:33:55 +02:00
Olivier Houchard
f8e9545f70 BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
Don't use the workaround to load libgcc_s on macOS. It is not needed
there, and it causes issues, as recent macOS versions dislike processes that fork
after threads were created (and the workaround creates a temporary
thread). This fixes crashes on macOS at least when using master-worker,
and using the system resolver.

This should fix Github issue #3035

This should be backported up to 2.8.
2025-07-21 13:56:29 +02:00
Valentine Krasnobaeva
5b45251d19 BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
Not all platforms support thread-cpu bindings, so let's put
cpu_topo_dump_summary() under USE_CPU_AFFINITY guards.

Only needs to be backported if 1cc0e023ce ("MINOR: debug: add thread-cpu
bindings info in 'show dev' output") is backported.
2025-07-21 11:25:08 +02:00
Frederic Lecaille
14d0f74052 MINOR: quic: Remove pool_head_quic_be_cc_buf pool
This patch impacts the QUIC frontends. It reverts this patch

    MINOR: quic-be: add a "CC connection" backend TX buffer pool

which added the new <pool_head_quic_be_cc_buf> pool to allocate CC (connection closed state)
TX buffers with a bigger object size than the one for <pool_head_quic_cc_buf>.
Indeed, the QUIC backends must be able to send Initial packets of at least 1200 bytes.

From now on, both the QUIC frontends and backends use the same pool, with
MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU) (1252 bytes) as the object size.
2025-07-17 19:33:21 +02:00
Valentine Krasnobaeva
1cc0e023ce MINOR: debug: add thread-cpu bindings info in 'show dev' output
Add thread-cpu bindings info in 'show dev' output, as it can be useful for
debugging.
2025-07-17 19:08:13 +02:00
Valentine Krasnobaeva
ff461efc59 MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
Align titles style of debug_parse_cli_show_dev() with
cpu_dump_topology(). We will call the latter inside of
debug_parse_cli_show_dev() to show thread-cpu bindings info.
2025-07-17 19:08:06 +02:00
Valentine Krasnobaeva
9e11c852fe MINOR: cpu-topo: write thread-cpu bindings into trash buffer
Write the thread-cpu bindings and cluster summary into the provided trash buffer.
This way we can call this function from any place where this info is needed.
2025-07-17 19:07:58 +02:00
Valentine Krasnobaeva
2405283230 MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
cpu_dump_topology() prints details about each enabled CPU and a summary with
clusters info and thread-cpu bindings. The latter is often useful for
debugging and we want to add it to the 'show dev' output.

So, let's split cpu_dump_topology() in two parts: cpu_topo_debug() to print the
details about each enabled CPU; and cpu_topo_dump_summary() to print only the
summary.

In the next commit we will modify cpu_topo_dump_summary() to write into a local
trash buffer so that it can easily be called from debug_parse_cli_show_dev().
2025-07-17 19:07:46 +02:00
Valentine Krasnobaeva
254e4d59f7 BUG/MINOR: halog: exit with error when some output filters are set simultaneously
Exit with an error if multiple output filters (-ic, -srv, -st, -tc, -u*, etc.)
are used at the same time.

halog is designed to process and display output for only one filter at a time.
Using multiple filters simultaneously can cause a crash because the program is
not designed to manage multiple, separate result sets (e.g., one for
IP counts, another for URLs).

Supporting simultaneous filters would require a redesign to collect entries for
each filter in a separate ebtree. This would negatively impact performance and is
not requested for the moment. This patch prevents the crash by checking filter
combinations just after the command line parsing.

This issue was reported in GitHub #3031.
This should be backported in all stable versions.
2025-07-17 17:22:37 +02:00
Frederic Lecaille
4eef300a2c BUG/MEDIUM: quic-be: CC buffer released from wrong pool
The "connection close state" TX buffer is used to build the datagram with
basically a CONNECTION_CLOSE frame to notify the peer about the connection
closure. It allows the quic_conn memory release and its replacement by a lighter
quic_cc_conn struct.

For the QUIC backend, there is a dedicated pool to build such datagrams from
bigger TX buffers. But from quic_conn_release(), this is the pool dedicated
to the QUIC frontends which was used to release the QUIC backend TX buffers.

This patch simply adds a test about the target of the connection to release
the "connection close state" TX buffers from the correct pool.

No backport needed.
2025-07-17 11:48:41 +02:00
Willy Tarreau
b6d0ecd258 DOC: connection: explain the rules for idle/safe/avail connections
It's super difficult to find the rules that operate idle conns depending
on their idle/safe/avail/private status. Some are in lists, others not.
Some are in trees, others not. Some have a flag set, others not. This
documents the rules before the definitions in connection-t.h. It could
even be backported to help during backport sessions.
2025-07-16 18:53:57 +02:00
Frederic Lecaille
838024e07e MINOR: quic: Get rid of qc_is_listener()
Replace all calls to qc_is_listener() (resp. !qc_is_listener()) by calls to
objt_listener() (resp. objt_server()).
Remove the qc_is_listener() implementation and QUIC_FL_CONN_LISTENER, the flag it
relied on.
2025-07-16 16:42:21 +02:00
Willy Tarreau
d9701d312d DEV: gdb: add a memprofile decoder to the debug tools
"memprof_dump" will visit memprofile entries and dump them in a
synthetic format counting allocations/releases count/size, type
and calling address.
2025-07-16 15:33:33 +02:00
Christopher Faulet
4f7c26cbb3 BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
When an appctx is initialized, there is a BUG_ON() to be sure the appctx is
really initialized on the right thread to avoid bugs on the thread
affinity. However, it is possible to not choose the thread when the appctx
is created and let it starts on any thread. In that case, the thread
affinity is set when the appctx is initialized. So, we must take cate to not
trigger the BUG_ON() in that case.

For now, we never hit the bug because the thread affinity is always set
during the appctx creation.

This patch must be backported as far as 2.8.
2025-07-16 13:47:33 +02:00
Amaury Denoyelle
88c0422e49 MINOR: h3: remove unused outbuf in h3_resp_headers_send()
Cleanup h3_resp_headers_send() by removing outbuf buffer variable which
is not necessary anymore.
2025-07-16 10:30:59 +02:00
Frederic Lecaille
1c33756f78 BUG/MINOR: quic: Wrong source address use on FreeBSD
This bug only affects listeners, and it only occurred on FreeBSD.

The FreeBSD issue has been reported here:
https://forums.freebsd.org/threads/quic-http-3-with-haproxy.98443/
where QUIC traces could reveal that sendmsg() calls lead to EINVAL
syscall errnos.

A similar issue could be reproduced from a FreeBSD 14.2 VM
with reg-tests/quic/retry.vtc as a reg test.

As noted by Olivier, this issue could be fixed within the VM by binding
the listener socket to INADDR_ANY.

That said, the symptoms are not exactly the same as the ones reported by the user.
What could be observed from such a VM is that if the first recvmsg() call
returns the datagram destination address, and if the listener
listening address is bound to a specific address, the calls to
sendmsg() fail because of the IP_SENDSRCADDR ip option value
set by cmsg_set_saddr(). According to the FreeBSD ip(4) manual,
such an IP option must be used if the listening socket is
bound to a specific address. It is to be noted that in a VM
the first call to recvmsg() of the first connection does not return the datagram
destination address. This leads the first quic_conn to be initialized without
a ->local_addr value. This is the value used by the IP_SENDSRCADDR
ip option. In this case, the sendmsg() calls (without IP_SENDSRCADDR)
never fail. The issue appears starting from the second connection.

This patch replaces the conditions to use IP_SENDSRCADDR with a call to
qc_may_use_saddr(). The latter also checks that the listener's listening
address is not INADDR_ANY to allow the use of the source address.
It is generalized to all the OSes. Indeed, there is no reason to set the source
address when the listener is bound to a specific address.

Must be backported as far as 2.8.
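
For reference, here is a minimal FreeBSD-only sketch of how a source address is
attached to an outgoing datagram through an IP_SENDSRCADDR control message. It
is illustrative only (the send_from() helper is hypothetical, this is not
HAProxy's cmsg_set_saddr()), and per this patch such a cmsg would only be added
when qc_may_use_saddr() allows it:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <string.h>

    /* send <buf> to <dst> on UDP socket <fd>, forcing <src> as source address */
    ssize_t send_from(int fd, const struct sockaddr_in *dst,
                      const void *buf, size_t len, struct in_addr src)
    {
        union {
            char buf[CMSG_SPACE(sizeof(struct in_addr))];
            struct cmsghdr align;
        } u;
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        struct msghdr msg = {
            .msg_name = (void *)dst, .msg_namelen = sizeof(*dst),
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = IPPROTO_IP;
        cm->cmsg_type  = IP_SENDSRCADDR;          /* FreeBSD specific */
        cm->cmsg_len   = CMSG_LEN(sizeof(src));
        memcpy(CMSG_DATA(cm), &src, sizeof(src));
        return sendmsg(fd, &msg, 0);
    }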
2025-07-16 10:17:54 +02:00
Amaury Denoyelle
63586a8ab4 BUG/MINOR: h3: properly handle interim response on BE side
On the backend side, the H3 layer is responsible for decoding an HTTP/3 response
into an HTX message. Multiple responses may be received on a single
stream with interim status codes prior to the final one.

h3_resp_headers_to_htx() is the function used solely on the backend side,
responsible for H3 response to HTX transcoding. This patch extends it to
be able to properly support interim responses. When such a response is
received, the new flag H3_SF_RECV_INTERIM is set. This is converted to
QMUX qcs flag QC_SF_EOI_SUSPENDED.

The objective of this latter flag is to prevent stream EOI from being
reported during the stream rcv_buf callback, even if the HTX message contains
EOM and is empty. QC_SF_EOI_SUSPENDED will be cleared when the final
response is finally converted, which unblocks stream EOI notification for
the next rcv_buf invocations. Note however that HTX EOM is untouched: it is
always set for both interim and final response reception.

As a minor adjustment, HTX_SL_F_BODYLESS is always set for interim
responses.

Contrary to frontend interim response handling, a flag is necessary on
the QMUX layer. This is because H3 to HTX transcoding and the rcv_buf callback
are two distinct operations, called under different contexts (MUX vs
stream tasklet).

Also note that H3 layer has two distinct flags for interim response
handling, one only used as a server (FE side) and the other as a client
(BE side). It was preferred to use two distinct flags, which is
considered less error-prone than a single unified flag, which
would require always taking the proxy side into account to know whether it is
relevant or not.

No need to backport.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
e7b3a69c59 BUG/MEDIUM: h3: handle interim response properly on FE side
On the frontend side, the HTTP/3 layer is responsible for transcoding an HTX
response message into an HTTP/3 HEADERS frame. This operation is handled
via h3_resp_headers_send().

Prior to this patch, if HTX EOM was encountered in the HTX message after
response transcoding, <fin> was reported to the QMUX layer. This would in
turn cause the FIN stream bit to be set when the response is emitted.
However, this is not correct as a single HTX response can be constituted
of several interim messages, each delimited by an EOM block.

Most of the time, this bug will cause the client to close the connection
as it is invalid to receive an interim response with FIN bit set.

Fix this by now properly differentiating interim and final responses.
During interim response transcoding, the new flag H3_SF_SENT_INTERIM
will be set, which will prevent <fin> from being reported. Thus, <fin> will
only be notified for the final response.

This must be backported up to 2.6. Note that it relies on the previous
patch which also must be taken.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
f349df44b4 MINOR: qmux: change API for snd_buf FIN transmission
Previous patches have fixed interim response encoding via
h3_resp_headers_send(). However, it is still necessary to adjust h3
layer state-machine so that several successive HTTP responses are
accepted for a single stream.

Prior to this, QMUX was responsible for deciding that the final HTX message
was encoded so that the STREAM FIN could be emitted. However, with interim
responses, the MUX is in fact unable to properly determine this. As such,
this is the responsibility of the application protocol layer. To reflect
this, app_ops snd_buf callback is modified so that a new output argument
<fin> is added to it.

Note that for now this commit does not bring any functional change.
However, it will be necessary for the following patch. As such, it
should be backported prior to it to every version as necessary.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
d8b34459b5 BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
On the frontend side, the H3 layer transcodes the HTX status code into an HTTP/3
HEADERS frame. This is done by calling qpack_encode_int_status().

Prior to this patch, the latter function was also responsible for rejecting
an invalid value, which guaranteed that only valid codes were encoded
(values between 100 and 999). However, this is not practical as it is
impossible to differentiate between an invalid code error and buffer
room exhaustion.

Change this so that the HTTP/3 layer now first ensures that the HTX code is
valid. The stream is closed with H3_INTERNAL_ERROR if an invalid value is
present. Thus, qpack_encode_int_status() will only report an error due
to buffer room exhaustion. If a small buffer is used, a standard buffer
will be reallocated which should be sufficient to encode the response.

The impact of this bug is minimal. Its main benefit is code clarity,
while also removing an unnecessary realloc when confronted with an
invalid HTTP code.

This should be backported at least up to 3.1. Prior to it, smallbuf
mechanism isn't present, hence the impact of this patch is less
important. However, it may still be backported to older versions, which
should facilitate picking patches for HTTP 1xx interim response support.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
d59bdfb8ec BUG/MINOR: h3: properly realloc buffer after interim response encoding
The previous commit fixed the encoding of several successive HTTP response
messages when interim status codes are first reported. However,
h3_resp_headers_send() was still unable to interrupt encoding if output
buffer room was not sufficient. This case is likely because small
buffers are used for headers encoding.

This commit fixes this situation. If the output buffer is not empty prior to
response encoding, this means that a previous interim response message
was already encoded before. In this case, and if the remaining space is not
sufficient, use the buffer release mechanism: this allows restarting
response encoding with a new buffer. This process has already been
used for DATA and trailers encoding.

This must be backported up to 2.6. However, note that buffer release
mechanism is not present for version 2.8 and lower. In this case, qcs
flag QC_SF_BLK_MROOM should be enough as a replacement.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
1290fb731d BUG/MEDIUM: h3: do not overwrite interim with final response
An HTTP response may contain several interim response messages (1xx
status) prior to a final response message (all other status codes). This may
cause issues with h3_resp_headers_send(), called for response encoding,
which assumes that it is only called one time per stream, most notably
during output buffer handling.

This commit fixes output buffer handling when h3_resp_headers_send() is
called multiple times due to an interim response. Prior to it, the interim
response was overwritten with the newer response message. Most of the time,
this resulted in an error for the client due to a QPACK decoding failure.
This is now fixed so that each response is encoded one after the other.

Note that if the encoding of several responses is bigger than the output buffer,
an error is reported. This can definitely occur as small buffers are
used during header encoding. This situation will be improved by the next
patch.

This must be backported up to 2.6.
2025-07-15 18:39:23 +02:00
Willy Tarreau
110625bdb2 MINOR: debug: report haproxy and operating system info in panic dumps
The goal is to help figure out the OS version (kernel and userland), any
virtualization/containers, and the haproxy version and build features.
Sometimes even reporters themselves can be mistaken about the running
version or environment. Also, printing this at the top helps draw a
visual delimitation between warnings and panic. Now we get something
like this:

  PANIC! Thread 1 is about to kill the process.

  HAProxy info:
    version: 3.3-dev3-c863c0-18
    features: +51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY (...)

  Operating system info:
    virtual machine: no
    container: no
    kernel: Linux 6.1.131 #1 SMP PREEMPT_DYNAMIC Fri Mar 14 01:04:55 CET 2025 x86_64
    userland: Slackware 15.0 x86_64

  * Thread 1 : id=0x7f615a8775c0 act=1 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
        1/1    stuck=0 prof=0 harmless=0 isolated=0
               cpu_ns: poll=1835010197 now=1835066102 diff=55905
               (...)
2025-07-15 17:18:29 +02:00
Willy Tarreau
abcc73830f MEDIUM: proxy: register a post-section cleanup function
For listen/frontend/backend, we now want to be able to clean up the
default-server directive that's no longer used past the end of the
section. For this we register a post-section function and perform the
cleanup there.
2025-07-15 10:40:17 +02:00
Willy Tarreau
49a619acae MEDIUM: proxy: no longer allocate the default-server entry by default
The default-server entry used to always be allocated. Now we'll postpone
its allocation until the first time we need it, i.e. during a "default-server"
directive, or when inheriting a defaults section which has one. The memory
savings are significant, on a large configuration with 100k backends and
no default-server directive, the memory usage dropped from 800MB RSS to
420MB (380 MB saved). It should be possible to also address configs using
default-server by releasing this entry when leaving the proxy section,
which is not done yet.
2025-07-15 10:39:44 +02:00
Willy Tarreau
76828d4120 MINOR: proxy: add checks for defsrv's validity
Now we only copy the default server's settings if such a default server
exists, otherwise we only initialize it. At the moment it always exists.

The change is mostly performed in srv_settings_cpy() since that's where
each caller passes through, and there's no point duplicating that test
everywhere.
2025-07-15 10:36:58 +02:00
Willy Tarreau
4ac28f07d0 MEDIUM: proxy: take the defsrv out of the struct proxy
The server struct has grown huge over time (~3.8kB), and having a copy
of it in the defsrv section of the struct proxy costs a lot of RAM
that is not needed anymore at run time.

This patch replaces this struct with a dynamically allocated one. The
field is allocated and initialized during alloc_new_proxy() and is
freed when the proxy is destroyed for now. But the goal will be to
support freeing it after parsing the section.
2025-07-15 10:34:18 +02:00
Willy Tarreau
2414c5ce2f CLEANUP: server: be sure never to compare src against a non-existing defsrv
The test in srv_ssl_settings_cpy() comparing src to the server's proxy's
default server does work but it's a subtle trap. Indeed, no check is made
on srv->proxy being valid, and this only works because the comparison only
involves pointer offsets. During boot, it's common to have NULL here
in srv->proxy, and of course in this case srv does not match that value,
which is NULL plus an offset. But when trying to turn defsrv into a dynamic
pointer instead, the generated code is forced to dereference this NULL
srv->proxy and dies during init.

Let's always add the null check for srv->proxy before the test to avoid
this situation.

No backport is needed since the problem cannot happen yet.
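
A simplified sketch of the trap (illustrative structures, not the actual
HAProxy ones):

    #include <stdio.h>

    struct defsrv { int weight; };

    /* old layout: the default server is embedded in the proxy */
    struct proxy_embedded { struct defsrv defsrv; };
    /* new layout: the default server is a dynamically allocated pointer */
    struct proxy_dynamic  { struct defsrv *defsrv; };

    /* &px->defsrv is mere pointer arithmetic (px plus an offset): a NULL px is
     * never dereferenced, so the comparison "accidentally" works during boot.
     */
    int is_defsrv_embedded(const struct defsrv *src, const struct proxy_embedded *px)
    {
        return src == &px->defsrv;
    }

    /* px->defsrv requires dereferencing px: with a NULL px this crashes, hence
     * the explicit NULL check added by this patch.
     */
    int is_defsrv_dynamic(const struct defsrv *src, const struct proxy_dynamic *px)
    {
        return px && src == px->defsrv;
    }

    int main(void)
    {
        struct defsrv d = { 1 };
        struct proxy_dynamic p = { &d };

        printf("%d\n", is_defsrv_dynamic(&d, &p)); /* prints 1 */
        return 0;
    }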
2025-07-15 10:33:08 +02:00
Willy Tarreau
36f339d2fe CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
This makes this function a bit less of a mess by no longer manipulating
the low-level server address nodes nor the proxy lock.
2025-07-15 10:30:28 +02:00
Willy Tarreau
616c10f608 CLEANUP: server: add server_find_by_addr()
Server lookup by address requires locking and manipulation of the tree
from user code. Let's provide server_find_by_addr() which does that for
us.
2025-07-15 10:30:28 +02:00
Willy Tarreau
fda04994d9 CLEANUP: server: simplify server_find_by_id()
At a few places we're seeing some open-coding of the same function, likely
because it looks overkill for what it's supposed to do, due to extraneous
tests that are not needed (e.g. check of the backend's PR_CAP_BE etc).
Let's just remove all these superfluous tests and inline it so that it
feels more suitable for use everywhere it's needed.
2025-07-15 10:30:28 +02:00
Willy Tarreau
c8f0b69587 CLEANUP: stream: lookup server ID using standard functions
The server lookup in sticking_rule_find_target() uses an open-coded tree
search while we have a function for this, server_find_by_id(). In addition,
due to the way it's coded, the stick-table lock also covers the server
lookup by accident instead of being released earlier. This is not a real
problem though since such a feature is rarely used nowadays.

Let's clean all this stuff by first retrieving the ID under the lock and
then looking up the corresponding server.
2025-07-15 10:30:28 +02:00
Willy Tarreau
a3443db2eb CLEANUP: cfgparse: lookup proxy ID using existing functions
The code used to detect proxy id conflicts uses an open-coded lookup
in the ID tree which is not necessary since we already have functions
for this. Let's switch to that instead.
2025-07-15 10:30:28 +02:00
Willy Tarreau
31526f73e6 CLEANUP: server: use server_find_by_name() where relevant
Instead of open-coding a tree lookup, in sticking rules and server_find(),
let's just rely on server_find_by_name() which now does exactly the same.
2025-07-15 10:30:28 +02:00
Willy Tarreau
61acd15ea8 CLEANUP: server: rename findserver() to server_find_by_name()
Now it's more logical and matches what is done in the rest of these
functions. server_find() now relies on it.
2025-07-15 10:30:28 +02:00
Willy Tarreau
6ad9285796 CLEANUP: server: rename server_find_by_name() to server_find()
This function doesn't just look at the name but also the ID when the
argument starts with a '#'. So the name is not accurate, which explains
why this function is not always used when only the name is needed,
and why the list-based findserver() is used instead. So let's just
call the function "server_find()", and rename its generation-id based
cousin "server_find_unique()".
2025-07-15 10:30:28 +02:00
Willy Tarreau
5e78ab33cd MINOR: server: use the tree to look up the server name in findserver()
Let's just use the tree-based lookup instead of walking through the list.
This function is used to find duplicates in "track" statements and a few
such places, so it's important not to waste too much time on large setups.
2025-07-15 10:30:27 +02:00
Willy Tarreau
12a6a3bb3f REORG: server: move findserver() from proxy.c to server.c
The reason this function was overlooked is that it had mostly equivalent
ones in server.c, so let's move them together.
2025-07-15 10:30:27 +02:00
Willy Tarreau
732cd0dfa2 CLEANUP: server: do not check for duplicates anymore in findserver()
findserver() used to check for duplicate server names. These are no
longer accepted in 3.3 so let's get rid of that test and simplify the
code. Note that the function still only uses the list instead of the
tree.
2025-07-15 10:30:27 +02:00
Willy Tarreau
d4d72e2303 [RELEASE] Released version 3.3-dev3
Released version 3.3-dev3 with the following main changes :
    - BUG/MINOR: quic-be: Wrong retry_source_connection_id check
    - MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
    - MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
    - MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
    - MINOR: mailers: warn if mailers are configured but not actually used
    - BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
    - MEDIUM: server: add and use a separate last_change variable for internal use
    - MEDIUM: proxy: add and use a separate last_change variable for internal use
    - MINOR: counters: rename last_change counter to last_state_change
    - MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
    - BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
    - BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
    - BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
    - DOC: Fix 'jwt_verify' converter doc
    - MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
    - MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
    - MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
    - MINOR: ssl: Allow 'commit ssl cert' with no privkey
    - MINOR: ssl: Prevent delete on certificate used by jwt_verify
    - REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
    - REGTESTS: jwt: Test update of certificate used in jwt_verify
    - DOC: 'jwt_verify' converter now supports certificates
    - REGTESTS: restrict execution to a single thread group
    - MINOR: ssl: Introduce new smp_client_hello_parse() function
    - MEDIUM: stats: add persistent state to typed output format
    - BUG/MINOR: httpclient: wrongly named httpproxy flag
    - MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
    - MEDIUM: httpclient: split the CLI from the actual httpclient API
    - MEDIUM: httpclient: implement a way to use directly htx data
    - MINOR: httpclient/cli: add --htx option
    - BUILD: dev/phash: remove the accidentally committed a.out file
    - BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
    - BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
    - DOC: deviceatlas build clarifications
    - BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
    - MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
    - BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
    - BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
    - BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
    - MEDIUM: httpclient: add a Content-Length when the payload is known
    - CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
    - MINOR: pattern: add a counter of added/freed patterns
    - CI: set DEBUG_STRICT=2 for coverity scan
    - CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
    - CI: github: add an OpenSSL 3.5.0 job
    - CI: github: update the stable CI to ubuntu-24.04
    - BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
    - CI: github: update to OpenSSL 3.5.1
    - BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
    - BUG/MINOR: quic-be: Malformed coalesced Initial packets
    - MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
    - MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
    - MINOR: quic-be: Set the backend alpn if not set by conf
    - MINOR: quic-be: TLS version restriction to 1.3
    - MINOR: cfgparse: enforce QUIC MUX compat on server line
    - MINOR: server: support QUIC for dynamic servers
    - CI: github: skip a ssl library version when latest is already in the list
    - MEDIUM: resolvers: switch dns-accept-family to "auto" by default
    - BUG/MINOR: resolvers: don't lower the case of binary DNS format
    - MINOR: resolvers: do not duplicate the hostname_dn field
    - MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
    - BUG/MINOR: listener: really assign distinct IDs to shards
    - MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
    - BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
    - REGTESTS: use two haproxy instances to distinguish the QUIC traces
    - BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
    - BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
    - BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
    - BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
    - BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
    - BUG/MINOR: http-client: Reject any 101-switching-protocols response
    - BUG/MEDIUM: http-client: Drain the request if an early response is received
    - BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
    - BUG/MINOR: h3: fix https scheme request encoding for BE side
    - MINOR: h1-htx: Add function to format an HTX message in its H1 representation
    - BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
    - BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
    - CLEANUP: assorted typo fixes in the code, commits and doc
    - BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
    - MINOR: debug: add distro name and version in postmortem
2025-07-11 16:45:50 +02:00
Valentine Krasnobaeva
0c63883be1 MINOR: debug: add distro name and version in postmortem
Since 2012, systemd-compliant distributions contain an
/etc/os-release file. This file has a standardized format; see details at
https://www.freedesktop.org/software/systemd/man/latest/os-release.html.

Let's read it in feed_post_mortem_linux() to gather more info about the
distribution.
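
A minimal sketch of the idea (not the actual feed_post_mortem_linux() code):
open /etc/os-release and pick out, for example, the PRETTY_NAME field:

    #include <stdio.h>
    #include <string.h>

    /* print the PRETTY_NAME line of /etc/os-release, if any */
    int main(void)
    {
        char line[256];
        FILE *f = fopen("/etc/os-release", "r");

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "PRETTY_NAME=", 12) == 0)
                fputs(line + 12, stdout);
        }
        fclose(f);
        return 0;
    }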

(cherry picked from commit f1594c41368baf8f60737b229e4359fa7e1289a9)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2025-07-11 11:48:19 +02:00
Ilia Shipitsin
1888991e12 BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
QuicTLS in the master branch has migrated to CMake, so let's adapt the script to
it. The previous OpenSSL+QuicTLS patch is built as usual.
2025-07-11 05:04:31 +02:00
Ilia Shipitsin
0ee3d739b8 CLEANUP: assorted typo fixes in the code, commits and doc
Corrected various spelling and phrasing errors to improve clarity and consistency.
2025-07-10 19:49:48 +02:00
Christopher Faulet
516dfe16ff BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
The regression was introduced by commit 187ae28 ("MINOR: h1-htx: Add
function to format an HTX message in its H1 representation"). We must be
sure the flags variable is initialized in the h1_format_htx_msg() function.

This patch must be backported with the commit above.
2025-07-10 14:10:42 +02:00
Christopher Faulet
d252ec2beb BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
The H1 multiplexer is able to produce some errors on its own to report early
errors, before the stream is created. In that case, the error files of the
proxy were tested to detect empty files (or /dev/null) but they were not
used to produce the error itself.

But the documentation states that configured error files are used in all
cases. And in fact, it is not really a problem to use these files. We must
just format a full HTX message. Thanks to the previous patch, it is now
possible.

This patch should fix the issue #3032. It should be backported to 3.2. For
older versions, it must be discussed but it should be quite easy to do.
2025-07-10 10:29:49 +02:00
Christopher Faulet
187ae28cf4 MINOR: h1-htx: Add function to format an HTX message in its H1 representation
The function h1_format_htx_msg() can now be used to convert a valid HTX
message into its H1 representation. No validity test is performed, the HTX
message must be valid. Only trailers are silently ignored if the message is
not chunked. In addition, the destination buffer must be empty. 1XX interim
responses should be supported. But again, there are no validity tests.
2025-07-10 10:29:49 +02:00
Amaury Denoyelle
378c182192 BUG/MINOR: h3: fix https scheme request encoding for BE side
An HTTP/3 request must contain a :scheme pseudo-header. Currently, only
the "https" value is expected due to the QUIC transport layer in use.

However, the https value is incorrectly encoded due to a QPACK index value
mismatch in qpack_encode_scheme(). Fix it to ensure that the scheme is now
properly set for HTTP/3 requests on the backend side.

No need to backport this.
2025-07-09 17:41:34 +02:00
Christopher Faulet
0b97bf36fa BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
When we leave the I/O handler with an unfinished request, we must report that
the applet has more data to deliver. Otherwise, when the channel request buffer
is emptied, the http-client applet is not always woken up to forward the
remaining request data.

This issue was probably revealed by commit "BUG/MEDIUM: http-client: Don't
wake http-client applet if nothing was xferred". It is only an issue with
large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6 with the commit above. But on
older versions, the applet API may differ. So be careful.
2025-07-09 16:27:24 +02:00
Christopher Faulet
25b0625d5c BUG/MEDIUM: http-client: Drain the request if an early response is received
When a large request is sent, it is possible to have a response before the
end of the request. It is valid from an HTTP perspective but it is an issue
with the current design of the http-client. Indeed, the request and the
response are handled sequentially. So the response will be blocked, waiting
for the end of the request. Most of the time, it is not an issue, except when
the request transfer is blocked. In that case, the applet is blocked.

With the current API, it is not possible to handle an early response and
continue the request transfer. So, this case cannot be handled. In that case,
it seems reasonable to drain the request if a response is received. This
way, the request transfer, from the caller's point of view, is never blocked
and the response can be properly processed.

To do so, the action flag HTTPCLIENT_FA_DRAIN_REQ is added to the
http-client. When it is set, the request payload is just dropped. In that
case, we take care to not report the end of input to properly report the
request was truncated, especially in logs.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
8ba754108d BUG/MINOR: http-client: Reject any 101-switching-protocols response
Protocol upgrades are not supported by the http-client. So report an error if
a 101-switching-protocols response is received. Of course, it is unexpected
because the API is not designed to support upgrades. But it is better to
properly handle this case.

This patch could be backported as far as 2.6. It depends on the commit
"BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode".
2025-07-09 16:27:24 +02:00
Christopher Faulet
9d10be33ae BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
When the response is re-formatted as a raw message, the 1XX interim responses
must be skipped. Otherwise, the information of the first interim response will
be saved (status line and headers) and that from the final response will be
dropped.

Note that for now, in HTX-mode, the interim messages are removed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
4bdb2e5a26 BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
When the htx_to_buf() function is called, if the HTX message is empty, the
buffer is reset. So HTX flags must not be tested afterwards because the info may
be lost.

So now, we take care to test HTX_FL_EOM flag before calling htx_to_buf().

This patch must be backported as far as 2.8.
2025-07-09 16:27:24 +02:00
Christopher Faulet
e4a0d40c62 BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
When the request payload cannot be xferred to the channel because its buffer
is full, we must ask for more room by calling sc_need_room(). It is
important to be sure the httpclient applet will not be woken up in a loop to
push more data while it is not possible.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6. Note that on 2.6,
sc_need_room() only takes one argument.
2025-07-09 16:27:24 +02:00
Christopher Faulet
d9ca8f6b71 BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
When HTX blocks from the request are transferred into the channel buffer,
the return value of the htx_xfer_blks() function must not be used to increment
the channel input value because metadata are counted there while they are
not part of the input data. Because of this bug, it is possible to forward more
data than are present in the channel buffer.

Instead, we look at the input data before and after the transfer and the
difference is added.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
fffdac42df BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
When data are transferred to or from the http-client, the applet is
systematically woken up, even when no data are transferred. This could lead
to needless wakeups. When called from a lua script, if data are blocked
for a while, this leads to a wakeup ping-pong loop where the http-client
applet is woken up by the lua script and wakes the script back up.

To fix the issue, in httpclient_req_xfer() and httpclient_res_xfer()
functions, we now take care to not wake the http-client applet up when no
data are transferred.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Frederic Lecaille
479c9fb067 REGTESTS: use two haproxy instances to distinguish the QUIC traces
The aim of this patch is to identify the QUIC traces between the QUIC frontend
and backend parts. Two haproxy instances are created. The c(1|2) http clients
connect to ha1 with TCP frontends and QUIC backends. ha2 embeds two QUIC listeners
with s1 as TCP backend. When the traces are activated, they are dumped to stderr.
Hopefully, they are prefixed by the haproxy instance name (h1 or h2). This is very
useful to identify the QUIC instances.
2025-07-09 16:01:02 +02:00
Frederic Lecaille
45ac235baa BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
Revert this patch, which is no longer useful since OpenSSL 3.5.1, to remove the
QUIC server callback restoration after SSL context switch:

    MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset

It was required for 3.5.0. That said, there was no CI for OpenSSL 3.5 at the date
of this commit. The CI recently revealed that the QUIC server side could crash
during QUIC reg tests just after having restored the callbacks as implemented by
the commit above.

Also revert this commit, which is no longer useful because it arrived with the commit
above:

	BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.

Must be backported to 3.2.
2025-07-09 16:01:02 +02:00
Frederic Lecaille
c01eb1040e MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
The QUIC listener part was impacted by the 3.5.0 OpenSSL new QUIC API with several
issues which have been fixed by 3.5.1.

Add a #error to prevent such OpenSSL 3.5 new QUIC API use with versions below 3.5.1.

Must be backported to 3.2.
2025-07-09 16:01:02 +02:00
Willy Tarreau
dd49f1ee62 BUG/MINOR: listener: really assign distinct IDs to shards
A fix was made in 3.0 for the case where sharded listeners were using
the same ID, with commit 0db8b6034d ("BUG/MINOR: listener: always assign
distinct IDs to shards"). However, the fix is incorrect. By checking the
ID of the temporary node instead of the kept one in bind_complete_thread_setup(),
it ends up never inserting the used nodes at this point, thus not reserving
them. The side effect is that assigning too-close IDs to subsequent
listeners results in the same ID still being assigned twice since it is not
reserved. Example:

   global
       nbthread 20

   frontend foo
       bind :8000 shards by-thread id 10
       bind :8010 shards by-thread id 20

The first one will start a series from 10 to 29 and the second one a
series from 20 to 39. But 20 not being inserted when creating the shards,
it will remain available for the post-parsing phase that assigns all
unassigned IDs by filling holes, and two listeners will have ID 20.

By checking the correct node, the problem disappears. The patch above
was marked for backporting to 2.6, so this fix should be backported that
far as well.
2025-07-09 15:52:33 +02:00
Christopher Faulet
adba8ffb49 MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
"HAVE_TCP_MD5SIG" feature is now registered if TCP MD5 signature is
supported. This will help the feature detection in the reg-test script
dedicated to this feature.
2025-07-09 09:51:24 +02:00
Willy Tarreau
96da670cd7 MINOR: resolvers: do not duplicate the hostname_dn field
The hostdn.key field in the server contains a pure copy of the hostname_dn
since commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers and
srv request or used SRV record") which wanted to lowercase it. Since it's
not necessary, let's drop this useless copy. In addition, the return from
strdup() was not tested, so it could theoretically crash the process under
heavy memory contention.
2025-07-08 07:54:45 +02:00
Willy Tarreau
95cf518bfa BUG/MINOR: resolvers: don't lower the case of binary DNS format
The server's "hostname_dn" is in Domain Name format, not a pure string, as
converted by resolv_str_to_dn_label(). It is made of lower-case string
components delimited by binary lengths, e.g. <0x03>www<0x07>haproxy<0x03>org.
As such it must not be lowercased again in srv_state_srv_update(), because
1) it's useless on the name components since already done, and 2) because
it would replace component lengths 97 and above by 32-char shorter ones.
Granted, not many domain names have that large components so the risk is
very low but the operation is always wrong anyway. This was brought in
2.5 by commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers
and srv request or used SRV record").

In the same vein, let's fix the confusing strcasecmp() that are applied
to this binary format, and use memcmp() instead. Here there's basically
no risk to incorrectly match the wrong record, but that test alone is
confusing enough to provoke the existence of the bug above.

Finally let's update the comment for that field to mention that it's
in this format and already lower cased.

Better not backport this, the risk of facing this bug is almost zero, and
every time we touch such files something breaks for bad reasons.
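
As an illustration of the format only (a simplified sketch, not HAProxy's
resolv_str_to_dn_label()): each component is lowercased and prefixed with its
length, and the length bytes themselves must never be altered afterwards:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* convert "www.HAProxy.org" to "\x03www\x07haproxy\x03org" (length-prefixed
     * components, lowercased). Returns the output length, or -1 on overflow.
     */
    static int str_to_dn_label(const char *str, char *out, size_t out_sz)
    {
        size_t i, len = strlen(str), start = 0, pos = 0;

        for (i = 0; i <= len; i++) {
            if (str[i] == '.' || str[i] == '\0') {
                size_t complen = i - start;

                if (complen > 63 || pos + complen + 1 > out_sz)
                    return -1;
                out[pos++] = (char)complen;   /* length byte: never lowercased */
                while (start < i)
                    out[pos++] = (char)tolower((unsigned char)str[start++]);
                start = i + 1;
            }
        }
        return (int)pos;
    }

    int main(void)
    {
        char dn[256];
        int n = str_to_dn_label("www.HAProxy.org", dn, sizeof(dn));

        printf("%d bytes, first length byte = %d\n", n, dn[0]); /* 16 and 3 */
        return 0;
    }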
2025-07-08 07:54:45 +02:00
Willy Tarreau
54d36f3e65 MEDIUM: resolvers: switch dns-accept-family to "auto" by default
As noted in the 3.2 announcement [1], dns-accept-family needed to switch
to "auto" by default in 3.3. This is now done.

[1] https://www.mail-archive.com/haproxy@formilux.org/msg45917.html
2025-07-08 07:54:45 +02:00
William Lallemand
9e78859fb3 CI: github: skip a ssl library version when latest is already in the list
Skip the job for "latest" libssl version, when this version is the same
as a one already in the list.

This avoid having 2 jobs for OpenSSL 3.5.1 since no new dev version are
available for now and 3.5.1 is already in the list.
2025-07-07 19:46:07 +02:00
Amaury Denoyelle
42365f53e8 MINOR: server: support QUIC for dynamic servers
To properly support QUIC for dynamic servers, it is required to extend the
"add server" CLI handler:
* ensure conformity between server address and proto
* automatically set proto to QUIC if not specified
* prepare_srv callback must be called to initialize required SSL context

Prior to this patch, crashes may occur when trying to use QUIC with
dynamic servers.

Also, destroy_srv callback must be called when a dynamic server is
deallocated. This ensures that there is no memory leak due to SSL
context.

No need to backport.
2025-07-07 14:29:29 +02:00
Amaury Denoyelle
626cfd85aa MINOR: cfgparse: enforce QUIC MUX compat on server line
Add post-parsing checks to control server line conformity regarding QUIC,
both on the server address and the MUX protocol. An error is reported in
the following cases:
* proto quic is explicitly specified but the server address does not
  use the quic4/quic6 prefix
* another proto is explicitly specified but the server address uses
  the quic4/quic6 prefix
2025-07-07 14:29:24 +02:00
Frederic Lecaille
e76f1ad171 MINOR: quic-be: TLS version restriction to 1.3
This patch skips the TLS version settings. They have the side effect of adding
all the TLS version extensions to the ClientHello message (TLS 1.0 to TLS 1.3).
QUIC supports only TLS 1.3.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
93a94ba87b MINOR: quic-be: Set the backend alpn if not set by conf
Simply set the alpn string to "h3,hq_interop" if there is no "alpn" setting for
QUIC backends.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
a9b5a2eb90 MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
First simple VTC file for QUIC reg tests. Two listeners are configured, one with
Retry enabled and the other without. Two clients simply try to connect to these
listeners to make a basic H3 request.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
5a87f4673a MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
Make the server line parsing fail when a QUIC backend is configured if haproxy
is built to use the OpenSSL stack compatibility module. The latter does not
support the QUIC client part.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
87ada46f38 BUG/MINOR: quic-be: Malformed coalesced Initial packets
This bug fix completes this patch which was not sufficient:

   MINOR: quic-be: Allow sending 1200 bytes Initial datagrams

This patch could not allow the build of well-formed Initial packets coalesced with
other (Handshake) packets. Indeed, the <padding> parameter passed to qc_build_pkt()
is deduced from a first <padding> value and must be set to 1 for
the last encryption level. As a client, the last encryption level is always
the Handshake encryption level. But <padding> was always set to 1 for a QUIC
client, leading the first Initial packet to be malformed because it was considered
as the second one in the same datagram.

So, this patch sets the <padding> value passed to qc_build_pkt() to 1 only when there
is no last encryption level at all, to allow the build of Initial-only packets
(not coalesced), or when there are frames to send (coalesced packets).

No need to backport.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
6aebca7f2c BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
This bug impacts both QUIC backends and frontends with OpenSSL 3.5 as QUIC API.

The connections to a haproxy QUIC listener from a haproxy QUIC backend could not
work at all without HelloRetryRequest TLS messages emitted by the backend
asking the QUIC client to restart the handshake followed by TLS alerts:

    conn. @(nil) OpenSSL error[0xa000098] read_state_machine: excessive message size

Furthermore, the Initial CRYPTO data sent by the client were big (about two 1252-byte
packets) (ClientHello TLS message). After analyzing the packets, a key_share extension
with <unknown> as value was found to be long (more than 1KB). This extension is related to
the groups but does not belong to the groups supported by QUIC.

That said, such connections could work with ngtcp2 as a backend built against the same
OSSL TLS stack API, but with a HelloRetryRequest.

ngtcp2 always sets the QUIC default cipher suites and groups for all the stacks it
supports, as implemented by this patch.

So this patch configures both QUIC backend and frontend cipher suites and groups
by calling SSL_CTX_set_ciphersuites() and SSL_CTX_set1_groups_list() with the correct
arguments, except for SSL_CTX_set1_groups_list() which fails with QUIC TLS for
an unknown reason at this time.

The call to SSL_CTX_set_options() is useless from ssl_quic_initial_ctx() for the QUIC
clients. We rely on ssl_sock_prepare_srv_ssl_ctx() to set the options from now on.

This patch is effective for all the supported stacks, without impact for AWS-LC
and QUIC TLS, and fixes the connections for haproxy QUIC frontends and backends
when built against the OpenSSL 3.5 QUIC API.

A new define HAVE_OPENSSL_QUICTLS has been added to openssl-compat.h to distinguish
the QUIC TLS stack.

Must be backported to 3.2.
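
For reference, a minimal standalone sketch of what such calls look like with
OpenSSL; the exact cipher suite and group lists below are illustrative
assumptions, not the ones used by the patch:

    #include <openssl/ssl.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (!ctx)
            return 1;
        /* restrict to TLS 1.3 cipher suites and groups commonly used for QUIC */
        if (!SSL_CTX_set_ciphersuites(ctx, "TLS_AES_128_GCM_SHA256:"
                                           "TLS_AES_256_GCM_SHA384:"
                                           "TLS_CHACHA20_POLY1305_SHA256") ||
            !SSL_CTX_set1_groups_list(ctx, "X25519:P-256:P-384:P-521")) {
            SSL_CTX_free(ctx);
            return 1;
        }
        SSL_CTX_free(ctx);
        return 0;
    }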
2025-07-07 14:13:02 +02:00
William Lallemand
0efbe6da88 CI: github: update to OpenSSL 3.5.1
Update the OpenSSL 3.5 job to 3.5.1.

This must be backported to 3.2.
2025-07-07 13:58:38 +02:00
Frederic Lecaille
fb0324eb09 BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
This bug arrived with this commit:

    MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset

To make QUIC connections succeed with the OpenSSL 3.5 API, a call to quic_ssl_set_tls_cbs()
was needed from several callbacks which call SSL_set_SSL_CTX(). This has the side effect
of setting the QUIC callbacks used by the OpenSSL 3.5 API.

But quic_ssl_set_tls_cbs() was also called for TCP sessions, leading the SSL stack
to run QUIC code if QUIC support is enabled.

To fix this, simply ignore the TCP connections by inspecting the <ssl_qc_app_data_index>
index value, which is NULL for such connections.

Must be backported to 3.2.
2025-07-07 12:01:22 +02:00
William Lallemand
d0bd0595da CI: github: update the stable CI to ubuntu-24.04
Update the stable CI to ubuntu-24.04.

Must be backported to 3.2.
2025-07-07 09:29:33 +02:00
William Lallemand
b6fec27ef6 CI: github: add an OpenSSL 3.5.0 job
Add an OpenSSL 3.5.0 job to test USE_QUIC.

This must be backported to 3.2.
2025-07-07 09:27:17 +02:00
Ilia Shipitsin
d8c867a1e6 CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
OpenSSL 3.5.0 introduced experimental support for QUIC. This change enables the USE_QUIC option when a compatible version of OpenSSL is detected, allowing QUIC-based functionality to be leveraged where applicable. The feature remains disabled for earlier versions to ensure compatibility.
2025-07-07 09:02:11 +02:00
Ilia Shipitsin
198d422a31 CI: set DEBUG_STRICT=2 for coverity scan
Enabling DEBUG_STRICT=2 will enable BUG_ON_HOT() and help Coverity
with bug detection.

for the reference: https://github.com/haproxy/haproxy/issues/3008
2025-07-06 08:17:37 +02:00
Willy Tarreau
573143e0c8 MINOR: pattern: add a counter of added/freed patterns
Patterns are allocated when loading maps/acls from a file or dynamically
via the CLI, and are released only from the CLI (e.g. "clear map xxx").
These ones do not use pools and are much harder to monitor, e.g. in case
a script adds many and forgets to clear them, etc.

Let's add a new pair of metrics "PatternsAdded" and "PatternsFreed" that
will report the number of added and freed patterns respectively. This
can allow to simply graph both. The difference between the two normally
represents the number of allocated patterns. If Added grows without
Freed following, it can indicate a faulty script that doesn't perform
the needed cleanup. The metrics are also made available to Prometheus
as patterns_added_total and patterns_freed_total respectively.
2025-07-05 00:12:45 +02:00
Remi Tricot-Le Breton
a075d6928a CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
This header does not actually contain any structures so it's best to
remove the '-t' from the name for better consistency.
2025-07-04 15:21:50 +02:00
William Lallemand
f07f0ee21c MEDIUM: httpclient: add a Content-Length when the payload is known
This introduces a change of behavior in the httpclient API. When
generating a request with a payload buffer, the size of the buffer
payload is known and does not need to be streamed in chunks.

This patch forces the payload buffer to be sent using a Content-Length header
in the request; however, the behavior does not change if a callback is
still used instead of a buffer.
2025-07-04 15:21:50 +02:00
Christopher Faulet
5da4da0bb6 BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
When the "pause" action is parsed, if an expression is used instead of a
static value, the position of the current argument after the expression
evaluation is incremented while it should not. The sample_parse_expr()
function already take care of it. However, it should still be incremented
when an time value was parsed.

This patch must be backported to 3.2.
2025-07-04 14:38:32 +02:00
Christopher Faulet
3cc5991c9b BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
When the TCP MD5 signature is enabled, on a listening socket or an outgoing
one, the tcp_md5sig structure must be initialized first.

It is a 3.3-specific issue. No backport needed.
2025-07-04 08:32:06 +02:00
Christopher Faulet
45cb232062 BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
It is required with the musl library to be sure TCP_MD5SIG_MAXKEYLEN is
defined and to avoid build errors.
2025-07-03 16:30:15 +02:00
Christopher Faulet
5232df57ab MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
This patch adds support for RFC 2385 (Protection of BGP Sessions via
the TCP MD5 Signature Option) for the listeners and the servers. The
feature is only available on Linux. Keywords are not exposed otherwise.

By setting "tcp-md5sig <password>" option on a bind line, TCP segments of
all connections instantiated from the listening socket will be signed with a
16-byte MD5 digest. The same option can be set on a server line to protect
outgoing connections to the corresponding server.

The primary use case for this option is to allow BGP to protect itself
against the introduction of spoofed TCP segments into the connection
stream. But it can be useful for any very long-lived TCP connections.

A reg-test was added and it will be executed only on linux. All other
targets are excluded.
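
On Linux this maps to the TCP_MD5SIG socket option. A minimal illustrative
sketch (the set_tcp_md5sig() helper is hypothetical, this is not the HAProxy
code); note the zero-initialization of the structure, which the fix listed
above also addresses:

    #define _GNU_SOURCE            /* needed with musl for TCP_MD5SIG_MAXKEYLEN */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* sign all segments exchanged with <peer> on socket <fd> using <key> */
    int set_tcp_md5sig(int fd, const struct sockaddr_storage *peer, const char *key)
    {
        struct tcp_md5sig md5;
        size_t keylen = strlen(key);

        if (keylen > TCP_MD5SIG_MAXKEYLEN)
            return -1;
        memset(&md5, 0, sizeof(md5));            /* must be zeroed first */
        memcpy(&md5.tcpm_addr, peer, sizeof(md5.tcpm_addr));
        md5.tcpm_keylen = (unsigned short)keylen;
        memcpy(md5.tcpm_key, key, keylen);
        return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG, &md5, sizeof(md5));
    }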
2025-07-03 15:25:40 +02:00
William Lallemand
6f6c6fa4cb BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
Since patch 20718f40b6 ("MEDIUM: ssl/ckch: add filename and linenum
argument to crt-store parsing"), the definition of ocsp_update_init()
and its declaration do not share the same arguments.

Must be backported to 3.2.
2025-07-03 15:14:13 +02:00
David Carlier
e7c59a7a84 DOC: deviceatlas build clarifications
Update the related documentation accordingly, removing or clarifying
confusing parts, as it was more complicated than it needed to be.
2025-07-03 09:08:06 +02:00
David Carlier
0e8e20a83f BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
We reuse DEVICEATLAS_INC/DEVICEATLAS_LIB when the DeviceAtlas library has
been compiled and installed with the cmake and make install targets. This
works fine except when ldconfig is unaware of the path, hence the
cflags/ldflags are added into the mix.

Ideally, to be backported down to the lowest stable branch.
2025-07-03 09:08:06 +02:00
William Lallemand
720efd0409 BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
TRACE_ENTER is crashing in ssl_sock_io_cb() in case an idle connection is
being stolen. Indeed the function can be called with a NULL context
and dereferencing it will crash.

This patch fixes the issue by initializing ctx only once it is usable,
and moving TRACE_ENTER after the initialization.

This must be backported to 3.2.
2025-07-02 16:14:19 +02:00
Willy Tarreau
e34a0a50ae BUILD: dev/phash: remove the accidentally committed a.out file
Commit 41f28b3c53 ("DEV: phash: Update 414 and 431 status codes to phash")
accidentally committed a.out, resulting in build/checkout issues when
locally rebuilt. Let's drop it.

This should be backported to 3.1.
2025-07-02 10:55:13 +02:00
William Lallemand
0f1c206b8f MINOR: httpclient/cli: add --htx option
Use the new HTTPCLIENT_O_RES_HTX flag when using the CLI httpclient with
--htx.

It allows the response to be processed directly in HTX; the htx_dump()
function is then used to display a debug output.

Example:

echo "httpclient --htx GET https://haproxy.org" | socat /tmp/haproxy.sock
 htx=0x79fd72a2e200(size=16336,data=139,used=6,wrap=NO,flags=0x00000010,extra=0,first=0,head=0,tail=5,tail_addr=139,head_addr=0,end_addr=0)
		[0] type=HTX_BLK_RES_SL    - size=31     - addr=0     	HTTP/2.0 301
		[1] type=HTX_BLK_HDR       - size=15     - addr=31    	content-length: 0
		[2] type=HTX_BLK_HDR       - size=32     - addr=46    	location: https://www.haproxy.org/
		[3] type=HTX_BLK_HDR       - size=25     - addr=78    	alt-svc: h3=":443"; ma=3600
		[4] type=HTX_BLK_HDR       - size=35     - addr=103   	set-cookie: served=2:TLSv1.3+TCP:IPv4
		[5] type=HTX_BLK_EOH       - size=1      - addr=138   	<empty>
2025-07-01 16:33:38 +02:00
William Lallemand
3e05e20029 MEDIUM: httpclient: implement a way to use directly htx data
Add an HTTPCLIENT_O_RES_HTX flag which allows the HTX data to be stored
directly in the response buffer instead of extracting the data in raw
format.

This is useful when the data needs to be reused in another request.
2025-07-01 16:31:47 +02:00
William Lallemand
2f4219ed68 MEDIUM: httpclient: split the CLI from the actual httpclient API
This patch splits the httpclient code to prevent confusion between the
httpclient CLI command and the actual httpclient API.

Indeed there was confusion between the flag used internally by the
CLI command and the actual httpclient API.

hc_cli_* functions as well as HC_C_F_* defines were moved to
httpclient_cli.c.
2025-07-01 15:46:04 +02:00
William Lallemand
149f6a4879 MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
The ocsp-update uses the flags from the httpclient CLI, which are not
supposed to be used elsewhere since they hold state for the CLI.

This patch implements HC_OCSP flags for the ocsp-update.
2025-07-01 15:46:04 +02:00
William Lallemand
519abefb57 BUG/MINOR: httpclient: wrongly named httpproxy flag
The HC_F_HTTPPROXY flag was wrongly named and does not use the correct
value; indeed this flag was meant to be used for the httpclient API, not
the httpclient CLI.

This patch fixes the problem by introducing HTTPCLIENT_FO_HTTPPROXY
which must be set in hc->flags.

Also add an 'options' member in the httpclient structure, because the
'flags' member is reinitialized when starting.

Must be backported as far as 3.0.
2025-07-01 14:47:52 +02:00
Aurelien DARRAGON
747a812066 MEDIUM: stats: add persistent state to typed output format
Add a fourth character to the second column of the "typed output format"
to indicate whether the value results from a volatile or persistent metric
('V' or 'P' characters respectively). A persistent metric means the value
could possibly be preserved across reloads by leveraging shared memory
between multiple co-processes. Such metrics are identified as "shared" in
the code (since they are possibly shared between multiple co-processes).

Some reg-tests were updated to take that change into account; also, some
outputs in the configuration manual were updated to reflect current
behavior.
2025-07-01 14:15:03 +02:00
Mariam John
bd076f8619 MINOR: ssl: Introduce new smp_client_hello_parse() function
In this patch we introduce a new helper function called `smp_client_hello_parse()` to extract
information presented in a TLS client hello handshake message. Seven sample fetches have also been
modified to use this helper function to do the common client hello parsing and use the result
to do further processing of extensions/ciphers.

Fixes: #2532
2025-07-01 11:55:36 +02:00
Willy Tarreau
48d5ef363d REGTESTS: restrict execution to a single thread group
When threads are enabled and running on a machine with multiple CCX
or multiple nodes, thread groups are now enabled since 3.3-dev2, causing
load-balancing algorithms to randomly fail due to incoming connections
spreading over multiple groups and using different load balancing indexes.

Let's just force "thread-groups 1" into all configs when threads are
enabled to avoid this.
2025-06-30 18:54:35 +02:00
Remi Tricot-Le Breton
94d750421c DOC: 'jwt_verify' converter now supports certificates
The 'jwt_verify' converter can now accept certificates as a second
parameter, which can be updated via the CLI.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
db5ca5a106 REGTESTS: jwt: Test update of certificate used in jwt_verify
Using certificates in the jwt_verify converter allows making use of the
CLI certificate updates, which is still impossible with public keys (the
legacy behavior).
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
663ba093aa REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
The jwt_verify converter can now take public certificates as a second
parameter, either with an actual certificate path (not previously
mentioned), from a predefined crt-store, or from a variable.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
093a3ad7f2 MINOR: ssl: Prevent delete on certificate used by jwt_verify
A ckch_store used in JWT verification might not have any ckch instances
or crt-list entries linked but we don't want to be able to remove it via
the CLI anyway since it would make all future jwt_verify calls using
this certificate fail.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
31955e6e0a MINOR: ssl: Allow 'commit ssl cert' with no privkey
The ckch_stores might be used to store public certificates only so in
this case we won't provide private keys when updating the certificate
via the CLI.
If the ckch_store is actually used in a bind or server line an error
will still be raised if the private key is missing.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
522bca98e1 MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
The 'jwt_verify' converter could only be passed public keys as second
parameter instead of full-on public certificates. This patch allows
proper certificates to be used.
Those certificates can be loaded in ckch_stores like any other
certificate which means that all the certificate-related operations that
can be made via the CLI can now benefit JWT validation as well.

We now have two ways JWT validation can work: the legacy one, which only
relies on public keys that could not be stored in ckch_stores without
some in-depth changes in the way the ckch_stores are built. In this
legacy way, the public keys are fully stored in a cache dedicated to JWT
only, which does not have any CLI commands or any way to update them
during runtime. It also requires that all the public keys used are
passed at least once explicitly to the 'jwt_verify' converter so that
they can be loaded during init.
The new way uses actual certificates, either already stored in the
ckch_store tree (if predefined in a crt-store or already used previously
in the configuration) or loaded in the ckch_store tree during init if
they are explicitly used in the configuration like so:
    var(txn.bearer),jwt_verify(txn.jwt_alg,"cert.pem")

When using a variable (or any other way that can only be resolved during
runtime) in place of the converter's <key> parameter, the first time we
encounter a new value (for which we don't have any entry in the jwt
tree) we will lock the ckch_store tree and try to perform a lookup in
it. If the lookup fails, an entry will still be inserted into the jwt
tree so that any following call with this value avoids performing the
ckch_store tree lookup.
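
As an additional sketch (the file, section and variable names below are
only illustrative; txn.bearer and txn.jwt_alg are assumed to be set
earlier from the token), the certificate may also come from a predefined
crt-store entry:

    crt-store
        load crt "jwt_pub.pem"

    frontend fe
        bind :8080
        http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"jwt_pub.pem") -m int 1 }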
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
6e9f886c4d MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
The pubkey parameter in convert_ecdsa_sig was not actually used.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
cd89ce1766 MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
Rename the jwt_cert_tree_entry member pkey to pubkey to avoid any
confusion between private and public key.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
5c3d0a554b DOC: Fix 'jwt_verify' converter doc
Contrary to what the doc says, the jwt_verify converter only works with
a public key and not a full certificate for certificate based protocols
(everything but HMAC).

This patch should be backported up to 2.8.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
3465f88f8a BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
When resolving variable values, the temporary trash chunks are used, so
when calling the 'jwt_verify' converter with two variable parameters
like in the following line, the input would be overwritten by the value
of the second parameter:
    var(txn.bearer),jwt_verify(txn.jwt_alg,txn.cert)
Copying the values into dedicated alloc'ed buffers prevents any new call
to get_trash_chunk from erasing the data we need in the converter.

This patch can be backported up to 2.8.
2025-06-30 17:59:55 +02:00
Christopher Faulet
5ba0a2d527 BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
On backend side, an error at connection level during the preface sending was
not properly handled and could lead to a spinning loop on process_stream()
when the h2 stream on client side was blocked, for instance because of h2
flow control.

It appeared that no transition was performed from the PREFACE state to an
ERROR state on the H2 connection when an error occurred on the underlying
connection. In that case, the H2 connection was woken up in a loop to try
to receive data, waking up the upper stream at the same time.

To fix the issue, an H2C error must be reported. Most state transitions are
handled by the demux function. So it is the right place to do so. First, in
PREFACE state and on server side, if an error occurred on the TCP
connection, an error is now reported on the H2 connection. REFUSED_STREAM
error code is used in that case. In addition, in that case, we also take
care to properly handle the connection shutdown.

This patch should fix the issue #3020. It must be backported to all stable
versions.
2025-06-30 16:48:00 +02:00
Christopher Faulet
a2a142bf40 BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
It was already forbidden to use HTTP sample fetch functions from lua
services. An error is triggered if it happens. However, the error must be
extended to any L6/L7 sample fetch functions.

Indeed, a lua service is an applet. It is totally unexpected for an applet
to access input data in a channel's buffer. These data have not been
analyzed yet and are still subject to change. An applet, lua or not,
must never access "not forwarded" data. Only output data are
available. For now, if a lua applet relies on any L6/L7 sample fetch
functions, the behavior is undefined and not consistent.

So to fix the issue, hlua flag HLUA_F_MAY_USE_HTTP is renamed to
HLUA_F_MAY_USE_CHANNELS_DATA. This flag is used to prevent any lua applet
from using L6/L7 sample fetch functions.

This patch could be backported to all stable versions.
2025-06-30 16:47:59 +02:00
William Lallemand
7fc8ab0397 MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
Patch ed9b8fec49 ("BUG/MEDIUM: ssl: AWS-LC + TLSv1.3 won't do ECDSA in
RSA+ECDSA configuration") partly fixed a cipher selection problem with
AWS-LC. However it no longer checked whether the ciphersuites were
available in haproxy, which is still a problem.

The problem was fixed in AWS-LC 1.46.0 with this PR
https://github.com/aws/aws-lc/pull/2092.

This patch allows the TLS13 ciphersuites to be filtered again with recent
versions of AWS-LC. However, since there are no macros to check the
AWS-LC version, it is enabled at the next AWS-LC API version change
following the fix in AWS-LC v1.50.0.

This could be backported where ed9b8fec49 was backported.
2025-06-30 16:43:51 +02:00
Aurelien DARRAGON
4fcc9b5572 MINOR: counters: rename last_change counter to last_state_change
Since proxy and server struct already have an internal last_change
variable and we cannot merge it with the shared counter one, let's
rename the last_change counter to be more specific and prevent the
mixup between the two.

last_change counter is renamed to last_state_change, and unlike the
internal last_change, this one is a shared counter so it is expected
to be updated by other processes behind our back.

However, when updating the last_state_change counter, we use the value
of the server/proxy last_change as the reference value.
2025-06-30 16:26:38 +02:00
Aurelien DARRAGON
5b1480c9d4 MEDIUM: proxy: add and use a separate last_change variable for internal use
Same motivation as the previous commit: proxy last_change is "abused"
because it is used for two different purposes, one for stats and the
other for process-local internal use.

Let's add a separate proxy-only last_change variable for internal use,
and leave the last_change shared (and thread-grouped) counter for
statistics.
2025-06-30 16:26:31 +02:00
Aurelien DARRAGON
01dfe17acf MEDIUM: server: add and use a separate last_change variable for internal use
last_change server metric is used for 2 separate purposes. First it is
used to report last server state change date for stats and other related
metrics. But it is also used internally, including in sensitive paths,
such as lb related stuff to take decision or perform computations
(ie: in srv_dynamic_maxconn()).

Due to last_change counter now being split over thread groups since 16eb0fa
("MAJOR: counters: dispatch counters over thread groups"), reading the
aggregated value has a cost, and we cannot afford to consult last_change
value from srv_dynamic_maxconn() anymore. Moreover, since the value is
used to take decisions for the current process, we don't want the variable
to be updated by another process behind our back.

To prevent performance regression and sharing issues, let's instead add a
separate srv->last_change value, which is not updated atomically (given how
rare the  updates are), and only serves for places where the use of the
aggregated last_change counter/stats (split over thread groups) is too
costly.
2025-06-30 16:26:25 +02:00
Aurelien DARRAGON
9d3c73c9f2 BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
16eb0fa ("MAJOR: counters: dispatch counters over thread groups")
introduced some bugs: as a result of improper copy paste during
COUNTERS_SHARED_LAST() macro introduction, some functions such as
srv_downtime() which used to make use of the server last_change variable
now use the proxy one, which doesn't make sense and will likely cause
unexpected logical errors/bugs.

Let's fix them all at once by properly pointing to the server last_change
variable when relevant.

No backport needed.
2025-06-30 16:26:19 +02:00
Aurelien DARRAGON
837762e2ee MINOR: mailers: warn if mailers are configured but not actually used
Now that native mailers configuration is only usable with Lua mailers,
Willy noticed that we lack a way to warn the user if mailers were
previously configured on an older version but Lua mailers were not loaded,
which could trick the user into thinking mailers keep working when
transitioning to 3.2 while this is not the case.

In this patch we add the 'core.use_native_mailers_config()' Lua function
which should be called in the Lua script body before making use of the
'Proxy:get_mailers()' function to retrieve the legacy mailers
configuration from the haproxy main config. This way haproxy effectively
knows that the native mailers config is actually being used from Lua
(which indicates the user correctly migrated from native mailers to Lua
mailers). Otherwise, if mailers are configured but not used from Lua,
haproxy warns the user that they will be ignored unless they are used
from Lua.
(e.g.: using the provided 'examples/lua/mailers.lua' to ease transition)
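
A minimal sketch of the expected setup (the script path below is
hypothetical; its body is assumed to call core.use_native_mailers_config()
before Proxy:get_mailers()):

    global
        lua-load /etc/haproxy/my_mailers.lua

    mailers alert-mailers
        mailer smtp1 192.0.2.25:25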
2025-06-27 16:41:18 +02:00
Aurelien DARRAGON
c7c6d8d295 MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
This way the check is executed regardless of the section in which the
server is declared (i.e. not only under the "ring" section).
2025-06-27 16:41:13 +02:00
Aurelien DARRAGON
14d68c2ff7 MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
_srv_check_proxy_mode() is currently executed during server init (from
_srv_parse_init()). While this used to be fine for the current checks, it
occurs a bit too early to be usable for some checks that depend on server
keywords being evaluated, for instance.

As such, to make _srv_check_proxy_mode() more relevant and be extended
with additional checks in the future, let's call it later during server
finalization, once all server keywords were evaluated.

No change of behavior is expected.
2025-06-27 16:41:07 +02:00
Aurelien DARRAGON
23e5f18b8e MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
No change of behavior expected, but some compat checks will now be aware
that the proxy type is not TCP but SYSLOG instead.
2025-06-27 16:41:01 +02:00
Frederic Lecaille
1045623cb8 BUG/MINOR: quic-be: Wrong retry_source_connection_id check
This commit broke the QUIC backend connection to servers without address validation
or retry activated:

  MINOR: quic-be: address validation support implementation (RETRY)

Indeed the retry_source_connection_id transport parameter was already checked
as if it was required, as if the peer (server) was always using address validation.
Furthermore, relying on ->odcid.len to ensure a retry token was received is not
correct.

This patch ensures the retry_source_connection_id transport parameter is checked
only when a retry token was received (->retry_token != NULL). In this case
it also checks that this transport parameter is present when a retry token
has been received (tx_params->retry_source_connection_id.len != 0).

No need to backport.
2025-06-27 07:59:12 +02:00
Willy Tarreau
299a441110 [RELEASE] Released version 3.3-dev2
Released version 3.3-dev2 with the following main changes :
    - BUG/MINOR: config/server: reject QUIC addresses
    - MINOR: server: implement helper to identify QUIC servers
    - MINOR: server: mark QUIC support as experimental
    - MINOR: mux-quic-be: allow QUIC proto on backend side
    - MINOR: quic-be: Correct Version Information transp. param encoding
    - MINOR: quic-be: Version Information transport parameter check
    - MINOR: quic-be: Call ->prepare_srv() callback at parsing time
    - MINOR: quic-be: QUIC backend XPRT and transport parameters init during parsing
    - MINOR: quic-be: QUIC server xprt already set when preparing their CTXs
    - MINOR: quic-be: Add a function for the TLS context allocations
    - MINOR: quic-be: Correct the QUIC protocol lookup
    - MINOR: quic-be: ssl_sock contexts allocation and misc adaptations
    - MINOR: quic-be: SSL sessions initializations
    - MINOR: quic-be: Add a function to initialize the QUIC client transport parameters
    - MINOR: sock: Add protocol and socket types parameters to sock_create_server_socket()
    - MINOR: quic-be: ->connect() protocol callback adaptations
    - MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())
    - MINOR: quic-be: xprt ->init() adapatations
    - MINOR: quic-be: add field for max_udp_payload_size into quic_conn
    - MINOR: quic-be: Do not redispatch the datagrams
    - MINOR: quic-be: Datagrams and packet parsing support
    - MINOR: quic-be: Handshake packet number space discarding
    - MINOR: h3-be: Correctly retrieve h3 counters
    - MINOR: quic-be: Store asap the DCID
    - MINOR: quic-be: Build post handshake frames
    - MINOR: quic-be: Add the conn object to the server SSL context
    - MINOR: quic-be: Initial packet number space discarding.
    - MINOR: quic-be: I/O handler switch adaptation
    - MINOR: quic-be: Store the remote transport parameters asap
    - MINOR: quic-be: Missing callbacks initializations (USE_QUIC_OPENSSL_COMPAT)
    - MINOR: quic-be: Make the secret derivation works for QUIC backends (USE_QUIC_OPENSSL_COMPAT)
    - MINOR: quic-be: SSL_get_peer_quic_transport_params() not defined by OpenSSL 3.5 QUIC API
    - MINOR: quic-be: get rid of ->li quic_conn member
    - MINOR: quic-be: Prevent the MUX to send/receive data
    - MINOR: quic: define proper proto on QUIC servers
    - MEDIUM: quic-be: initialize MUX on handshake completion
    - BUG/MINOR: hlua: Don't forget the return statement after a hlua_yieldk()
    - BUILD: hlua: Fix warnings about uninitialized variables
    - BUILD: listener: fix 'for' loop inline variable declaration
    - BUILD: hlua: Fix warnings about uninitialized variables (2)
    - BUG/MEDIUM: mux-quic: adjust wakeup behavior
    - MEDIUM: backend: delay MUX init with ALPN even if proto is forced
    - MINOR: quic: mark ctrl layer as ready on quic_connect_server()
    - MINOR: mux-quic: improve documentation for snd/rcv app-ops
    - MINOR: mux-quic: define flag for backend side
    - MINOR: mux-quic: set expect data only on frontend side
    - MINOR: mux-quic: instantiate first stream on backend side
    - MINOR: quic: wakeup backend MUX on handshake completed
    - MINOR: hq-interop: decode response into HTX for backend side support
    - MINOR: hq-interop: encode request from HTX for backend side support
    - CLEANUP: quic-be: Add comments about qc_new_conn() usage
    - BUG/MINOR: quic-be: CID double free upon qc_new_conn() failures
    - MINOR: quic-be: Avoid SSL context unreachable code without USE_QUIC_OPENSSL_COMPAT
    - BUG/MINOR: quic: prevent crash on startup with -dt
    - MINOR: server: reject QUIC servers without explicit SSL
    - BUG/MINOR: quic: work around NEW_TOKEN parsing error on backend side
    - BUG/MINOR: http-ana: Properly handle keep-query redirect option if no QS
    - BUG/MINOR: quic: don't restrict reception on backend privileged ports
    - MINOR: hq-interop: handle HTX response forward if not enough space
    - BUG/MINOR: quic: Fix OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn callback (OpenSSL3.5)
    - BUG/MINOR: quic: fix ODCID initialization on frontend side
    - BUG/MEDIUM: cli: Don't consume data if outbuf is full or not available
    - MINOR: cli: handle EOS/ERROR first
    - BUG/MEDIUM: check: Set SOCKERR by default when a connection error is reported
    - BUG/MINOR: mux-quic: check sc_attach_mux return value
    - MINOR: h3: support basic HTX start-line conversion into HTTP/3 request
    - MINOR: h3: encode request headers
    - MINOR: h3: complete HTTP/3 request method encoding
    - MINOR: h3: complete HTTP/3 request scheme encoding
    - MINOR: h3: adjust path request encoding
    - MINOR: h3: adjust auth request encoding or fallback to host
    - MINOR: h3: prepare support for response parsing
    - MINOR: h3: convert HTTP/3 response into HTX for backend side support
    - MINOR: h3: complete response status transcoding
    - MINOR: h3: transcode H3 response headers into HTX blocks
    - MINOR: h3: use BUG_ON() on missing request start-line
    - MINOR: h3: reject invalid :status in response
    - DOC: config: prefer-last-server: add notes for non-deterministic algorithms
    - CLEANUP: connection: remove unused mux-ops dedicated to QUIC
    - BUG/MINOR: mux-quic/h3: properly handle too low peer fctl initial stream
    - MINOR: mux-quic: support max bidi streams value set by the peer
    - MINOR: mux-quic: abort conn if cannot create stream due to fctl
    - MEDIUM: mux-quic: implement attach for new streams on backend side
    - BUG/MAJOR: fwlc: Count an avoided server as unusable.
    - MINOR: fwlc: Factorize code.
    - BUG/MEDIUM: quic: do not release BE quic-conn prior to upper conn
    - MAJOR: cfgparse: turn the same proxy name warning to an error
    - MAJOR: cfgparse: make sure server names are unique within a backend
    - BUG/MINOR: tools: only reset argument start upon new argument
    - BUG/MINOR: stream: Avoid recursive evaluation for unique-id based on itself
    - BUG/MINOR: log: Be able to use %ID alias at anytime of the stream's evaluation
    - MINOR: hlua: emit a log instead of an alert for aborted actions due to unavailable yield
    - MAJOR: mailers: remove native mailers support
    - BUG/MEDIUM: ssl/clienthello: ECDSA with ssl-max-ver TLSv1.2 and no ECDSA ciphers
    - DOC: configuration: add details on prefer-client-ciphers
    - MINOR: ssl: Add "renegotiate" server option
    - DOC: remove the program section from the documentation
    - MAJOR: mworker: remove program section support
    - BUG/MINOR: quic: wrong QUIC_FT_CONNECTION_CLOSE(0x1c) frame encoding
    - MINOR: quic-be: add a "CC connection" backend TX buffer pool
    - MINOR: quic: Useless TX buffer size reduction in closing state
    - MINOR: quic-be: Allow sending 1200 bytes Initial datagrams
    - MINOR: quic-be: address validation support implementation (RETRY)
    - MEDIUM: proxy: deprecate the "transparent" and "option transparent" directives
    - REGTESTS: update http_reuse_be_transparent with "transparent" deprecated
    - REGTESTS: script: also add a line pointing to the log file
    - DOC: config: explain how to deal with "transparent" deprecation
    - MEDIUM: proxy: mark the "dispatch" directive as deprecated
    - DOC: config: crt-list clarify default cert + cert-bundle
    - MEDIUM: cpu-topo: switch to the "performance" cpu-policy by default
    - SCRIPTS: drop the HTML generation from announce-release
    - BUG/MINOR: tools: use my_unsetenv instead of unsetenv
    - CLEANUP: startup: move comment about nbthread where it's more appropriate
    - BUILD: qpack: fix a build issue on older compilers
2025-06-26 18:26:45 +02:00
Willy Tarreau
543b629427 BUILD: qpack: fix a build issue on older compilers
Got this on gcc-4.8:

  src/qpack-enc.c: In function 'qpack_encode_method':
  src/qpack-enc.c:168:3: error: 'for' loop initial declarations are only allowed in C99 mode
     for (size_t i = 0; i < istlen(other); ++i)
     ^

This came from commit a0912cf914 ("MINOR: h3: complete HTTP/3 request
method encoding"), no backport is needed.
2025-06-26 18:09:24 +02:00
Valentine Krasnobaeva
20110491d3 CLEANUP: startup: move comment about nbthread where it's more appropriate
Move the comment about non_global_section_parsed just above the line where
we reset it.
2025-06-26 18:02:16 +02:00
Valentine Krasnobaeva
a9afc10ae8 BUG/MINOR: tools: use my_unsetenv instead of unsetenv
Let's use our own implementation of unsetenv() instead of the one provided
in libc. The libc implementation may vary depending on the UNIX distro. The
implementation from libc.so.1 ported to Illumos (see the link below) has
caused an eternal loop in clean_env(), where we invoke unsetenv().

(https://github.com/illumos/illumos-gate/blob/master/usr/src/lib/libc/port/gen/getenv.c#L411C1-L456C1)

This is reported in GitHub #3018 and the reporter has proposed a patch, which
we really appreciate! But looking at his fix and at the implementations of
unsetenv() in FreeBSD libc and in Linux glibc 2.31, it seems that the algorithm
of clean_env() will perform better with our my_unsetenv() implementation.

This should be backported in versions 3.1 and 3.2.
2025-06-26 18:02:16 +02:00
Willy Tarreau
27baa3f9ff SCRIPTS: drop the HTML generation from announce-release
It has not been used over the last 5 years or so and systematically
requires manual removal. Let's just stop producing it. Also take
this opportunity to add the missing link to /discussions.
2025-06-26 18:02:16 +02:00
Willy Tarreau
b74336984d MEDIUM: cpu-topo: switch to the "performance" cpu-policy by default
As mentioned during the NUMA series development, the goal is to use
all available cores in the most efficient way by default, which
normally corresponds to "cpu-policy performance". The previous default
choice of "cpu-policy first-usable-node" was only meant to stay 100%
identical to before cpu-policy.

So let's switch the default cpu-policy to "performance" right now.
The doc was updated to reflect this.
2025-06-26 16:27:43 +02:00
Maximilian Moehl
5128178256 DOC: config: crt-list clarify default cert + cert-bundle
Clarify that HAProxy duplicates crt-list entries for multi-cert bundles,
which can create unexpected side-effects as only the very first
certificate after duplication is implicitly considered as the default.
2025-06-26 16:27:07 +02:00
Willy Tarreau
5c15ba5eff MEDIUM: proxy: mark the "dispatch" directive as deprecated
As mentioned in [1], the "dispatch" directive from haproxy 1.0 has long
outlived its original purpose and still suffers from a number of technical
limitations (no checks, no SSL, no idle conns, etc.) and still hinders some
internal evolutions. It's now time to mark it as deprecated, and to remove
it in 3.5 [2]. It was already recommended against in the documentation but
remained popular in raw TCP environments for being shorter to write.

The directive will now cause a warning to be emitted, suggesting an
alternate method involving "server". The warning can be silenced using
"expose-deprecated-directives". The rare configs from 1.0 where
"dispatch" is combined with sticky servers using cookies will just
need to set these servers' weights to zero to prevent them from
being selected by the load balancing algorithm. All of this is
explained in the doc with examples.
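
For illustration, a minimal sketch of such a migration (names, addresses
and the cookie setup below are only examples):

    backend legacy_be
        cookie SRVID insert indirect
        # was: dispatch 192.0.2.10:80
        server default_dst 192.0.2.10:80
        # sticky server still reachable via its cookie, but excluded
        # from load balancing thanks to weight 0
        server app1 192.0.2.11:80 cookie app1 weight 0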

Two reg tests were using this method, one purposely for this directive,
which now has expose-deprecated-directives, and another one to test the
behavior of idle connections, which was updated to use "server" and
extended to test both "http-reuse never" and "http-reuse always".

[1] https://github.com/orgs/haproxy/discussions/2921
[2] https://github.com/haproxy/wiki/wiki/Breaking-changes
2025-06-26 15:29:47 +02:00
Willy Tarreau
19140ca666 DOC: config: explain how to deal with "transparent" deprecation
The explanations for the "option transparent" keyword were a bit scarce
regarding deprecation, so let's explain how to replace it with a server
line that does the same.
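
As a minimal sketch, the suggested replacement looks like this (the
backend name is only illustrative):

    backend be_transparent
        # was: option transparent
        server original_dst 0.0.0.0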
2025-06-26 14:52:07 +02:00
Willy Tarreau
16f382f2d9 REGTESTS: script: also add a line pointing to the log file
I never counted the number of hours I've spent selecting then
copy-pasting the directory output and manually appending "/LOG" to read
a log file, but it amounts to tens or hundreds. Let's just add a direct
pointer to the log file at the end of the log for a failed run.
2025-06-26 14:33:09 +02:00
Willy Tarreau
1d3ab10423 REGTESTS: update http_reuse_be_transparent with "transparent" deprecated
With commit e93f3ea3f8 ("MEDIUM: proxy: deprecate the "transparent" and
"option transparent" directives") this one no longer works as the config
either has to be adjusted to use server 0.0.0.0 or to enable the deprecated
feature. The test used to validate a technical limitation ("transparent"
not supporting shared connections), indicated as being comparable to
"http-reuse never". Let's now duplicate the test for "http-reuse never"
and "http-reuse always" and validate both behaviors.

Take this opportunity to fix a few problems in this config:
  - use "nbthread 1": depending on the thread where the connection
    arrives, the connection may or may not be reused
  - add explicit URLs to the clients so that they can be recognized
    in the logs
  - add comments to make it clearer what to expect for each test
2025-06-26 14:32:20 +02:00
Willy Tarreau
e93f3ea3f8 MEDIUM: proxy: deprecate the "transparent" and "option transparent" directives
As discussed here [1], "transparent" (already deprecated) and
"option transparent" are horrible hacks which should really disappear
in favor of "server xxx 0.0.0.0" which doesn't rely on hackish code
path. This old feature is now deprecated in 3.3 and will disappear in
3.5, as indicated here [2]. A warning is emitted when used, explaining
how to proceed, and how to silence the warning using the global
"expose-deprecated-directives" if needed. The doc was updated to
reflect this new state.

[1] https://github.com/orgs/haproxy/discussions/2921
[2] https://github.com/haproxy/wiki/wiki/Breaking-changes
2025-06-26 11:55:47 +02:00
Frederic Lecaille
194e3bc2d5 MINOR: quic-be: address validation support implementation (RETRY)
- Add ->retry_token and ->retry_token_len new quic_conn struct members to store
  the retry tokens. These objects are allocated by quic_rx_packet_parse() and
  released by quic_conn_release().
- Add <pool_head_quic_retry_token> new pool for these tokens.
- Implement quic_retry_packet_check() to check the integrity tag of these tokens
  upon RETRY packets receipt. quic_tls_generate_retry_integrity_tag() is called
  by this new function. It has been modified to pass the address where the tag
  must be generated.
- Add a new <resend> parameter to quic_pktns_discard(). This function is called
  to discard the packet number spaces to which the already transmitted packets
  and frames are attached. <resend> allows the caller to prevent this function
  from releasing the in-flight TX packets/frames. The frames are requeued to be
  resent.
- Modify quic_rx_pkt_parse() to handle the RETRY packets. What must be done upon
  receipt of such packets is:
  - store the retry token,
  - store the new peer SCID as the DCID of the connection. Note that the peer will
    modify again its SCID. This is why this SCID is also stored as the ODCID
    which must be matched with the peer retry_source_connection_id transport parameter,
  - discard the Initial packet number space without flagging it as discarded and
    prevent retransmissions by calling qc_set_timer(),
  - modify the TLS cryptographic cipher contexts (RX/TX),
  - wakeup the I/O handler to send new Initial packets asap.
- Modify quic_transport_param_decode() to handle the retry_source_connection_id
  transport parameter as a QUIC client. Then its caller is modified to
  check that this transport parameter matches the SCID sent by the peer in
  the RETRY packet.
2025-06-26 09:48:00 +02:00
Frederic Lecaille
8a25fcd36e MINOR: quic-be: Allow sending 1200 bytes Initial datagrams
This easy-to-understand patch is not intrusive at all and cannot break the QUIC
listeners.

A QUIC client MUST always pad its datagrams containing Initial packets. A "!l"
(not a listener) test OR'ed with the existing ones is added to satisfy the
condition to allow the build of such datagrams.
2025-06-26 09:48:00 +02:00
Frederic Lecaille
c898b29e64 MINOR: quic: Useless TX buffer size reduction in closing state
There is no need to limit the size of the TX buffer to QUIC_MIN_CC_PKTSIZE bytes
when the connection is in closing state. There is already a test which limits the
number of bytes to be used from this TX buffer once this useless test is removed.
It limits this number of bytes to the size of the TX buffer itself:

    if (end > (unsigned char *)b_wrap(buf))
	    end = (unsigned char *)b_wrap(buf);

This is exactly what is needed when the connection is in closing state. Indeed,
the size of the TX buffers is limited to reduce the memory usage. The connection
only needs to send short datagrams with at most 2 packets carrying CONNECTION_CLOSE*
frames. They are built only once and backed up into a small TX buffer allocated
from a dedicated pool.
The size of this TX buffer is QUIC_MAX_CC_BUFSIZE which depends on QUIC_MIN_CC_PKTSIZE:

 #define QUIC_MIN_CC_PKTSIZE  128
 #define QUIC_MAX_CC_BUFSIZE (2 * (QUIC_MIN_CC_PKTSIZE + QUIC_DGRAM_HEADLEN))

This size is smaller than an MTU.

This patch should be backported as far as 2.9 to ease further backports to come.
2025-06-26 09:48:00 +02:00
Frederic Lecaille
9cb2acd2f2 MINOR: quic-be: add a "CC connection" backend TX buffer pool
A QUIC client must be able to close a connection while still sending Initial
packets. But QUIC client Initial datagrams must always be at least 1200 bytes
long. To reduce the memory use of the TX buffers of a connection in "closing"
state, a pool was dedicated to this purpose, but with a TX buffer size that was
too small (QUIC_MAX_CC_BUFSIZE).

This patch adds a "closing state connection" TX buffer pool with the same role
for QUIC backends.
2025-06-26 09:48:00 +02:00
Frederic Lecaille
1e6d8f199c BUG/MINOR: quic: wrong QUIC_FT_CONNECTION_CLOSE(0x1c) frame encoding
This is an old bug which was there since this commit:

     MINOR: quic: Avoid zeroing frame structures

It seems QUIC_FT_CONNECTION_CLOSE was confused with QUIC_FT_CONNECTION_CLOSE_APP
which does not include a "frame type" field. This field was not initialized
(so it had a random value), which prevented the packet from being built because
the packet builder assumes packets with such frames are very short.

Must be backported as far as 2.6.
2025-06-26 09:48:00 +02:00
William Lallemand
7cb6167d04 MAJOR: mworker: remove program section support
This patch completely removes support for the program section; the
parsing of the section as well as the internals in the mworker do not
support it anymore.

The program section was considered dysfunctional and not fully
compatible with the "mworker V3" model. Users that want to run an
external program must use their init system.

The documentation is cleaned up in another patch.
2025-06-25 16:11:34 +02:00
William Lallemand
9b5bf81f3c DOC: remove the program section from the documentation
The program section is obsolete and can be removed from the
documentation.
2025-06-25 15:42:57 +02:00
Remi Tricot-Le Breton
34fc73ba81 MINOR: ssl: Add "renegotiate" server option
This "renegotiate" option can be set on SSL backends to allow secure
renegotiation. It is mostly useful with SSL libraries that disable
secure renegotiation by default (such as AWS-LC).
The "no-renegotiate" one can be used the other way around, to disable
secure renegotiation that could be allowed by default.
Those two options can be set via "ssl-default-server-options" as well.
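
A minimal sketch of how these options may be combined (address and names
are only illustrative):

    global
        ssl-default-server-options renegotiate

    backend be_ssl
        # opt this particular server out of the default
        server s1 192.0.2.5:443 ssl verify none no-renegotiate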
2025-06-25 15:23:48 +02:00
William Lallemand
370a8cea4a DOC: configuration: add details on prefer-client-ciphers
prefer-client-ciphers does not work exactly the same way when used with
a dual algorithm stack (ECDSA + RSA). This patch details its behavior.

This patch must be backported in every maintained version.

Problem was discovered in #2988.
2025-06-25 14:41:45 +02:00
William Lallemand
4a298c6c5c BUG/MEDIUM: ssl/clienthello: ECDSA with ssl-max-ver TLSv1.2 and no ECDSA ciphers
Patch 23093c72 ("BUG/MINOR: ssl: suboptimal certificate selection with TLSv1.3
and dual ECDSA/RSA") introduced a problem when prioritizing the ECDSA
with TLSv1.3.

Indeed, when a client with TLSv1.3 capabilities announces a list of
ECDSA sigalgs, a list of TLSv1.3 ciphersuites compatible with ECDSA,
but only RSA ciphers for TLSv1.2, and haproxy is configured with
ssl-max-ver TLSv1.2, then haproxy would use the ECDSA keypair, but the
client wouldn't be able to process it because TLSv1.2 was negotiated.

HAProxy would be configured like that:

  ssl-default-bind-options ssl-max-ver TLSv1.2

And a client could be used this way:

  openssl s_client -connect localhost:8443 -cipher ECDHE-ECDSA-AES128-GCM-SHA256 \
          -ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256

This patch fixes the issue by checking if TLSv1.3 was configured before
allowing ECDSA if a TLSv1.3 ciphersuite is in the list.

This could be backported where 23093c72 ("BUG/MINOR: ssl: suboptimal
certificate selection with TLSv1.3 and dual ECDSA/RSA") was backported.
However this is quite sensitive and we should wait a bit before the
backport.

This should fix issue #2988
2025-06-25 14:25:14 +02:00
Aurelien DARRAGON
5694a98744 MAJOR: mailers: remove native mailers support
As mentioned in the 2.8 announcement on the mailing list [1] and on the
wiki [2], native mailers were deprecated and planned for removal in 3.3.
Now is the time to drop the legacy code for native mailers, which is based
on a tcpcheck "hack" and cannot be maintained. Lua mailers should be used
as a drop-in replacement. Indeed, "mailers" and associated config
directives are preserved because the mailers config is exposed to Lua,
which helps smooth the transition from native mailers to Lua-based ones.

As a reminder, to keep mailers configuration working as before without
making changes to the config file, simply add the line below to the global
section:

       lua-load examples/lua/mailers.lua

The mailers.lua script (provided in the git repository, adjust the path as
needed) may be customized by users familiar with Lua; by default it
emulates the behavior of the native (now removed) mailers.

[1]: https://www.mail-archive.com/haproxy@formilux.org/msg43600.html
[2]: https://github.com/haproxy/wiki/wiki/Breaking-changes
2025-06-24 10:55:58 +02:00
Aurelien DARRAGON
c0f6024854 MINOR: hlua: emit a log instead of an alert for aborted actions due to unavailable yield
As reported by Chris Staite in GH #3002, trying to yield from a Lua
action during a client disconnect causes the script to be interrupted
(which is expected) and an alert to be emitted with the error:
"Lua function '%s': yield not allowed".

While this error is well suited for cases where the yield is not expected
at all (ie: when context doesn't allow it) and results from a yield misuse
in the Lua script, it isn't the case when the yield is exceptionnally not
available due to an abort or error in the request/response processing.
Because of that we raise an alert but the user cannot do anything about it
(the script is correct), so it is confusing and polluting the logs.

In this patch we introduce the ACT_OPT_FINAL_EARLY flag which is a
complementary flag to ACT_OPT_FIRST. This flag is set when the
ACT_OPT_FIRST is set earlier than normal (due to error/abort).
hlua_action() then checks for this flag to decide whether an error (alert)
or a simple log message should be emitted when the yield is not available.

It should solve GH #3002. Thanks to Chris Staite (@chrisstaite-menlo) for
having reported the issue and suggested a solution.
2025-06-24 10:55:55 +02:00
Christopher Faulet
20a82027ce BUG/MINOR: log: Be able to use %ID alias at anytime of the stream's evaluation
In a log-format string, using "%[unique-id]" or "%ID" should be equivalent.
However, for the first one, the unique ID is generated when the sample fetch
function is called. For the alias, it is not true. In that case, the
stream's unique ID is generated when the log message is emitted. Otherwise,
by default, the unique id is automatically generated at the end of the HTTP
request analysis.

So, if the alias "%ID" is used in a log-format string anywhere before the end
of the request analysis, the evaluation fails and the ID is considered as
empty. It is not consistent and in contradiction with the "%ID"
documentation.

To fix the issue, instead of evaluating the unique ID when the log message
is emitted, it is now performed on demand when "%ID" format is evaluated.

This patch should fix the issue #3016. It should be backported to all stable
versions. It relies on the following commit:

  * BUG/MINOR: stream: Avoid recursive evaluation for unique-id based on itself
2025-06-24 08:04:50 +02:00
Christopher Faulet
fb7b5c8a53 BUG/MINOR: stream: Avoid recursive evaluation for unique-id based on itself
There is nothing that prevents a "unique-id-format" from referencing itself,
using '%ID' or '%[unique-id]'. If the sample fetch function is used, it
leads to an infinite loop, recursively calling the function responsible for
generating the unique ID.

One solution is to detect it during the configuration parsing to trigger an
error. With this patch, we just inhibit recursive calls by considering the
unique-id as empty during its evaluation. So the "id-%[unique-id]"
log-format string will be evaluated as "id-".

This patch must be backported to all stable versions.
2025-06-24 08:04:50 +02:00
Willy Tarreau
68c3eb3013 BUG/MINOR: tools: only reset argument start upon new argument
In issue #2995, Thomas Kjaer reported that empty argument position
reporting had been broken yet again. This time it was broken by this
latest fix: 2b60e54fb1 ("BUG/MINOR: tools: improve parse_line()'s
robustness against empty args"). It turns out that this fix is not
the culprit and it's in fact correct. The culprit was the original
commit of this series, 7e4a2f39ef ("BUG/MINOR: tools: do not create
an empty arg from trailing spaces"), which used to reset arg_start
to outpos for every new char in addition to doing it for every arg.
This resulted in the end of the line to be seen as always being in
error, thus reporting an incorrect position that the caller would
correct in a generic way designating the beginning of the line. It
didn't reveal prior to the upper fix above because the misassigned
value was almost not used by then.

Assigning the value before entering the loop fixes this problem and
doens't break the series of previous oss-fuzz reproducers. Hopefully
it's the last one again.

This must be backported to 3.2. Thanks to @tkjaer for reporting the
issue along with a reproducer.
2025-06-23 18:41:52 +02:00
Willy Tarreau
d7fad1320e MAJOR: cfgparse: make sure server names are unique within a backend
There was already a check for this but there used to be an exception
that allowed duplicate server names only in the case where their IDs were
explicit and different. This has been emitting a warning since 3.1 and
was planned for removal in 3.3, so let's do it now. The doc was updated,
though it never mentioned this unicity constraint, so that was added.

Only the check for the exception was removed; the rest of the code
that currently deals with duplicate server names was not cleaned up
yet (e.g. the tree doesn't need to support dups anymore, and
this could be done at insertion time). This may be a subject for future
cleanups.
2025-06-23 15:42:32 +02:00
Willy Tarreau
067be38c0e MAJOR: cfgparse: turn the same proxy name warning to an error
As warned since 3.1, it's no longer permitted to have a frontend and
a backend under the same name. This causes too many designation issues,
and causes trouble with stick-tables as well. Now each proxy name is
unique.

This commit only changes the check to return an error. Some code parts
currently exist to find the best candidates; these can be simplified in
future cleanup patches. The doc was updated.
2025-06-23 15:34:05 +02:00
Amaury Denoyelle
74b95922ef BUG/MEDIUM: quic: do not release BE quic-conn prior to upper conn
On the frontend side, quic_conn is only released if the MUX wasn't allocated,
either due to handshake abort, in which case upper layer is never
allocated, or after transfer completion when full conn + MUX layers are
already released.

On the backend side, initialization is not performed in the same order.
Indeed, in this case, the connection is first instantiated, then the
quic_conn is created to execute the handshake, while the MUX is still only
allocated on handshake completion. As such, it is not possible anymore
to free the quic_conn immediately on handshake failure. Otherwise, this
can cause a crash if the connection tries to access its transport layer
again after the quic_conn release.

Such a crash can easily be reproduced in case of a connection error to the
QUIC server. Here is an example of an observed backtrace.

Thread 1 "haproxy" received signal SIGSEGV, Segmentation fault.
  0x0000555555739733 in quic_close (conn=0x55555734c0d0, xprt_ctx=0x5555573a6e50) at src/xprt_quic.c:28
  28              qc->conn = NULL;
  [ ## gdb ## ] bt
  #0  0x0000555555739733 in quic_close (conn=0x55555734c0d0, xprt_ctx=0x5555573a6e50) at src/xprt_quic.c:28
  #1  0x00005555559c9708 in conn_xprt_close (conn=0x55555734c0d0) at include/haproxy/connection.h:162
  #2  0x00005555559c97d2 in conn_full_close (conn=0x55555734c0d0) at include/haproxy/connection.h:206
  #3  0x00005555559d01a9 in sc_detach_endp (scp=0x7fffffffd648) at src/stconn.c:451
  #4  0x00005555559d05b9 in sc_reset_endp (sc=0x55555734bf00) at src/stconn.c:533
  #5  0x000055555598281d in back_handle_st_cer (s=0x55555734adb0) at src/backend.c:2754
  #6  0x000055555588158a in process_stream (t=0x55555734be10, context=0x55555734adb0, state=516) at src/stream.c:1907
  #7  0x0000555555dc31d9 in run_tasks_from_lists (budgets=0x7fffffffdb30) at src/task.c:655
  #8  0x0000555555dc3dd3 in process_runnable_tasks () at src/task.c:889
  #9  0x0000555555a1daae in run_poll_loop () at src/haproxy.c:2865
  #10 0x0000555555a1e20c in run_thread_poll_loop (data=0x5555569d1c00 <ha_thread_info>) at src/haproxy.c:3081
  #11 0x0000555555a1f66b in main (argc=5, argv=0x7fffffffde18) at src/haproxy.c:3671

To fix this, change the condition prior to calling quic_conn release. If
<conn> member is not NULL, delay the release, similarly to the case when
MUX is allocated. This allows connection to be freed first, and detach
from quic_conn layer through close xprt operation.

No need to backport.
2025-06-20 17:46:10 +02:00
Olivier Houchard
ba5738489f MINOR: fwlc: Factorize code.
Always set unusable if we could not use a server, instead of doing it in
each branch.

This should be backported to 3.2 after e28e647fef
is backported.
2025-06-20 15:59:03 +02:00
Olivier Houchard
e28e647fef BUG/MAJOR: fwlc: Count an avoided server as unusable.
In fwlc_get_next_server(), if a server to avoid has been provided and
we have to ignore it, don't forget to increase the number of unusable
servers, otherwise we may end up ignoring it over and over, never
switching to another server, in an infinite loop until the process gets
killed.
This hopefully fixes Github issues #3004 and #3014.

This should be backported to 3.2.
2025-06-20 15:29:51 +02:00
Amaury Denoyelle
4527a2912b MEDIUM: mux-quic: implement attach for new streams on backend side
Implement attach and avail_streams mux-ops callbacks, which are used on
backend side for connection reuse.

Attach operation is used to initiate new streams on the connection
outside of the first one. It simply relies on qcc_init_stream_local() to
instantiate a new QCS instance, which is immediately linked to its
stream data layer.

Outside of attach, it is also necessary to implement avail_streams so
that the stream layer will try to initiate connection reuse. This method
reports the number of bidirectional streams which can still be opened
for the QUIC connection. It depends directly on the flow-control value
advertised by the peer. Thus, this ensures that attach won't cause any
flow control violation.
2025-06-18 17:25:27 +02:00
Amaury Denoyelle
81cfaab6b4 MINOR: mux-quic: abort conn if cannot create stream due to fctl
Prior to initiating the first stream on the backend side, ensure that the
peer flow control allows at least a single bidirectional stream to be
created. If this is not the case, abort the MUX init operation.

Before this patch, the flow-control limit was not checked. Hence, if the
peer does not allow any bidirectional stream, haproxy would violate it,
which would then cause the peer to close the connection.

Note that with the current situation, haproxy won't be able to talk to
servers which use 0 for initial max bidi streams. A proper solution
could be to pause the request until a MAX_STREAMS is received, under
timeout supervision to ensure the connection is closed if no frame is
received.
2025-06-18 17:25:27 +02:00
Amaury Denoyelle
06cab99a0e MINOR: mux-quic: support max bidi streams value set by the peer
Implement support for MAX_STREAMS frame. On frontend, this was mostly
useless as haproxy would never initiate new bidirectional streams.
However, this becomes necessary to control stream flow-control when
using QUIC as a client on the backend side.

Parsing of MAX_STREAMS is implemented via a new qcc_recv_max_streams()
function. This allows the <ms_uni>/<ms_bidi> QCC fields to be updated.

This patch is necessary to achieve QUIC backend connection reuse.
2025-06-18 17:25:27 +02:00
Amaury Denoyelle
805a070ab9 BUG/MINOR: mux-quic/h3: properly handle too low peer fctl initial stream
Previously, no check on peer flow-control was implemented prior to opening
a local QUIC stream. This was only a small problem for the frontend
implementation, as in this case haproxy as a server never opens
bidirectional streams.

On frontend, the only stream opened by haproxy in this case is for
HTTP/3 control unidirectional data. If the peer uses an initial value
for max uni streams set to 0, it would violate its flow control, and the
peer will probably close the connection. Note however that RFC 9114
mandates that each peer defines a minimal initial value so that at least
the control stream can be created.

This commit improves the situation of too low initial max uni streams
value. Now, on HTTP/3 layer initialization, haproxy preemptively checks
flow control limit on streams via a new function
qcc_fctl_avail_streams(). If credit is already expired due to a too
small initial value, haproxy preemptively closes the connection using
H3_ERR_GENERAL_PROTOCOL_ERROR. This behavior is better as haproxy is now
the initiator of the connection closure.

This should be backported up to 2.8.
2025-06-18 17:18:55 +02:00
Amaury Denoyelle
c807182ec9 CLEANUP: connection: remove unused mux-ops dedicated to QUIC
Remove avail_streams_bidi/avail_streams_uni mux_ops. These callbacks
were designed to be specific to QUIC. However, they won't be necessary,
as stream layer only cares about bidirectional streams.
2025-06-18 17:02:50 +02:00
Valentine Krasnobaeva
cdb2f8d780 DOC: config: prefer-last-server: add notes for non-deterministic algorithms
Add some notes on which load-balancing algorithms can be considered
deterministic or non-deterministic, and add some examples for each type.
This was asked for via the mailing list to clarify the usage of the
prefer-last-server option.

This can be backported to all stable versions.
2025-06-17 21:18:23 +02:00
Amaury Denoyelle
8fc0d2fbd5 MINOR: h3: reject invalid :status in response
Add checks to ensure that the :status pseudo-header received in an HTTP/3
response is valid. If the header is not provided, or it isn't a 3-digit
number, the response is considered invalid and the stream is rejected.
Also, the glitch counter is now incremented in any of these cases.

This should fix coverity report from github issue #3009.
2025-06-17 11:39:35 +02:00
Amaury Denoyelle
f972f7d9e9 MINOR: h3: use BUG_ON() on missing request start-line
Convert BUG_ON_HOT() statements to BUG_ON() if HTX start-line is either
missing or duplicated when transcoding into a HTTP/3 request. This
ensures that such abnormal conditions will be detected even on default
builds.

This is linked to coverity report #3008.
2025-06-17 11:39:35 +02:00
Amaury Denoyelle
2284aa0d6a MINOR: h3: transcode H3 response headers into HTX blocks
Finalize HTTP/3 response transcoding into HTX message. This patch
implements conversion of HTTP/3 headers provided by the server into HTX
blocks.

Special checks have been implemented to reject connection-specific
headers, causing the stream to be shut in error. Also, handling of
content-length requires that the body size is equal to the value
advertised in the header to prevent HTTP desync.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
d83255fdc3 MINOR: h3: complete response status transcoding
On the backend side, the HTTP/3 response from the server is transcoded
into an HTX message. Previously, a fixed value was used for the status
code.

Improve this by extracting the value specified by the server and setting
it in the HTX status line. This requires detecting the :status
pseudo-header in the HTTP/3 response.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
f79effa306 MINOR: h3: convert HTTP/3 response into HTX for backend side support
Implement basic support for HTTP/3 response transcoding into
HTX. This is done via a new dedicated function h3_resp_headers_to_htx().
A valid HTX status-line is allocated and stored. Status code is
hardcoded to 200 for now.

Following patches will be added to remove hardcoded status value and
also handle response headers provided by the server.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
0eb35029dc MINOR: h3: prepare support for response parsing
Refactor HTTP/3 request headers transcoding to HTX done in
h3_headers_to_htx(). Some operations are extracted into dedicated
functions, to check pseudo-headers and headers conformity, and also trim
the value of headers before encoding it in HTX.

The objective will be to simplify implementation of HTTP/3 response
transcoding by reusing these functions.

Also, h3_headers_to_htx() has been renamed to h3_req_headers_to_htx(),
to highlight that it is reserved to frontend usage.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
555ec99d43 MINOR: h3: adjust auth request encoding or fallback to host
Implement proper encoding of HTTP/3 authority pseudo-header during
request transcoding on the backend side. A pseudo-header :authority is
encoded if a value can be extracted from HTX start-line. A special check
is also implemented to ensure that a host header is not encoded if
:authority already is.

A new function qpack_encode_auth() is defined to implement QPACK
encoding of :authority header using literal field line with name ref.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
96183abfbd MINOR: h3: adjust path request encoding
Previously, HTTP/3 backend request :path was hardcoded to value '/'.
Change this so that we can now encode any path as requested by the
client. Path is extracted from the HTX URI. Also, qpack_encode_path() is
extended to support literal field line with name ref.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
235e818fa1 MINOR: h3: complete HTTP/3 request scheme encoding
Previously, the scheme was always set to https when transcoding an HTX
start-line into an HTTP/3 request. Change this so that the conversion is
now fully compliant.

If no scheme is specified by the client, which is what happens most of
the time with HTTP/1, https is set for the HTTP/3 request. Else, the
scheme requested by the client is reused.

If either https or http is set, qpack_encode_scheme will encode it using
an entry from the QPACK static table. Else, a full literal field line
with name ref is used instead, with the scheme value specified as-is.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
a0912cf914 MINOR: h3: complete HTTP/3 request method encoding
On the backend side, the HTX start-line is converted into an HTTP/3
request message. Previously, the GET method was hardcoded. Implement
proper method conversion, by extracting it from the HTX start-line.

qpack_encode_method() has also been extended, so that it is able to
encode any method, either using a static table entry, or with a literal
field line with name ref representation.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
f5342e0a96 MINOR: h3: encode request headers
Implement encoding of HTTP/3 request headers during HTX->H3 conversion
on the backend side. This simply relies on h3_encode_header().

Special check is implemented to ensure that connection-specific headers
are ignored. An HTTP/3 endpoint must never generate them, or the peer
will consider the message as malformed.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
7157adb154 MINOR: h3: support basic HTX start-line conversion into HTTP/3 request
This commit is the first one of a series whose aim is to implement
transcoding of an HTX request into HTTP/3, which is necessary for QUIC
backend support.

Transcoding is implemented via a new function h3_req_headers_send()
called when an HTX start-line is parsed. For now, most of the request
fields are hardcoded, using a GET method. This will be adjusted in the
following patches.
2025-06-16 18:11:09 +02:00
Amaury Denoyelle
fc1a17f169 BUG/MINOR: mux-quic: check sc_attach_mux return value
On backend side, QUIC MUX needs to initialize the first local stream
during MUX init operation. This is necessary so that the first transfer
can then be performed.

sc_attach_mux() is used to attach the created QCS instance to its stream
data layer. However, its return value was not checked, which may cause
issues on allocation error. This patch fixes it by returning an error on
the MUX init operation and freeing the QCS instance in case of
sc_attach_mux() error.

This fixes coverity report from github issue #3007.

No need to backport.
2025-06-16 18:11:09 +02:00
Christopher Faulet
54d74259e9 BUG/MEDIUM: check: Set SOCKERR by default when a connection error is reported
When a connection error is reported, we try to collect as much information
as possible on the connection status and the server status is adjusted
accordingly. However, the function does nothing if there is no connection
error and if the healthcheck has not expired yet. It is a problem when an
internal error occurs. It may happen in many places and it is hard to be
sure an error is reported on the connection. And in fact, it is already a
problem when the multiplexer allocation fails. In that case, the healthcheck
is not interrupted as it should be. Concretely, it can only happen when a
connection is established.

It is hard to predict the effects of this bug. It may be unimportant. But it
could probably lead to a crash. To avoid any issue, a SOCKERR status is now
set by default when a connection error is reported. There is no reason to
report a connection error for nothing. So a healthcheck failure must be
reported. There is no "internal error" status. So a socket error is
reported.

This patch must be backported to all stable versions.
2025-06-16 17:47:35 +02:00
Christopher Faulet
fb76655526 MINOR: cli: handle EOS/ERROR first
It is not especially a bug fix. But the APPCTX_FL_EOS and APPCTX_FL_ERROR
flags must be handled first. These flags are set by the applet itself and
should mark the end of all processing. So there is no reason to get the
output buffer in the first place.

This patch could be backported as far as 3.0.
2025-06-16 16:47:59 +02:00
Christopher Faulet
396f0252bf BUG/MEDIUM: cli: Don't consume data if outbuf is full or not available
The output buffer must be available to process a command, at least to be
able to emit error messages. When this buffer is full or cannot be
allocated, we must wait. In that case, we must take care to notify that the
SE will not consume input data. It is important to avoid waking up in a
loop, especially when the client aborts.

When the output buffer is available again and no longer full, and the CLI
applet is waiting for a command line, it must notify that it will consume
input data.

This patch must be backported as far as 3.0.
2025-06-16 16:47:59 +02:00
Amaury Denoyelle
96badf86a2 BUG/MINOR: quic: fix ODCID initialization on frontend side
QUIC support on the backend side has been implemented recently. This has
led to some adjustments in qc_new_conn() to handle both FE and BE sides,
with some of these changes performed by the following commit.

  29fb1aee57
  MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())

An issue was introduced during some code adjustment. Initialization of
the ODCID was incorrectly performed, which caused haproxy to emit invalid
transport parameters. Most clients detected this and immediately closed
the connection.

Fix this by adjusting the qc_lstnr_params_init() invocation : replace
<qc.dcid>, which in fact points to the received SCID, by <qc.odcid>
whose purpose is dedicated to original DCID storage.

This fixes github issue #3006. This issue also caused the majority of
tests in the interop to fail.

No backport needed.
2025-06-16 10:09:37 +02:00
Frederic Lecaille
5409a73721 BUG/MINOR: quic: Fix OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn callback (OpenSSL3.5)
This patch is OpenSSL 3.5 QUIC API specific. It fixes the
OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn() callback (see man(3) SSL_set_quic_tls_cb).

The role of this callback is to store the transport parameters received from the peer.
At this time it is never used by QUIC listeners because there is another callback
which is used to store the transport parameters. This latter callback is not specific
to the OpenSSL 3.5 QUIC API. As far as I know, the TLS stack calls only once
one of the callbacks which have been set to receive and store the transport parameters.

That said, OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn() is called for QUIC
backends to store the server transport parameters.

qc_ssl_set_quic_transport_params() is useless in this callback. It is dedicated
to storing the local transport parameters (which are sent to the peer). Furthermore
the <server> second parameter of quic_transport_params_store() must be 0 for a listener
(or QUIC server) which calls it, denoting it does not receive the transport parameters
of a QUIC server. It must be 1 for a QUIC backend (a QUIC client which receives
the transport parameters of a QUIC server).

Must be backported to 3.2.
2025-06-16 10:02:45 +02:00
Amaury Denoyelle
ab6895cc65 MINOR: hq-interop: handle HTX response forward if not enough space
On the backend side, the HTTP/0.9 response body is copied into the stream
data HTX buffer. Properly handle the case where the HTX out buffer space is
too small: only a partial copy of the HTTP response is performed.
Transcoding will be restarted when new room is available.
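
A minimal standalone sketch of the bounded-copy idea (illustrative only,
not the actual HTX code):

  #include <stdio.h>
  #include <string.h>

  /* Copy at most <room> bytes from <in> to <out>; the remainder stays
   * pending and transcoding is resumed later once the consumer frees room.
   */
  static size_t copy_bounded(char *out, size_t room, const char *in, size_t len)
  {
          size_t to_copy = len < room ? len : room;

          memcpy(out, in, to_copy);
          return to_copy; /* caller advances its input offset by this much */
  }

  int main(void)
  {
          char out[8];
          const char *body = "hello quic world";
          size_t done = copy_bounded(out, sizeof(out), body, strlen(body));

          printf("copied %zu of %zu bytes, rest deferred\n", done, strlen(body));
          return 0;
  }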
2025-06-13 17:41:13 +02:00
Amaury Denoyelle
46cee07931 BUG/MINOR: quic: don't restrict reception on backend privileged ports
When QUIC is used on the frontend side, communication is restricted with
clients using privileged ports. This is a simple protection against
DNS/NTP spoofing.

This feature should not be activated on the backend side, as in this
case it is quite frequent to exchange with servers running on privileged
ports. As such, a new parameter is added to quic_recv() so that it is
only active on the frontend side.

Without this patch, it is impossible to communicate with QUIC servers
running on privileged ports, as incoming datagrams would be silently
dropped.

No need to backport.
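
As an illustration, a minimal sketch of such a side-dependent check
(hypothetical names, not the actual quic_recv() code):

  #include <stdint.h>
  #include <stdio.h>

  /* Drop datagrams coming from privileged source ports (anti DNS/NTP
   * spoofing), but only when acting as a frontend: a backend legitimately
   * talks to servers bound on ports below 1024 (e.g. 443).
   */
  static int must_drop_dgram(uint16_t src_port, int is_frontend)
  {
          return is_frontend && src_port < 1024;
  }

  int main(void)
  {
          printf("frontend, src port 123: drop=%d\n", must_drop_dgram(123, 1));
          printf("backend,  src port 443: drop=%d\n", must_drop_dgram(443, 0));
          return 0;
  }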
2025-06-13 16:40:21 +02:00
Christopher Faulet
edb8f2bb60 BUG/MINOR: http-ana: Properly handle keep-query redirect option if no QS
The keep-query redirect option must do nothing if there is no query-string.
However, there is a bug. When there is no QS, an error is returned, leading
to a 500-internal-error being returned to the client.

To fix the bug, instead of returning 0 when there is no QS, we just skip the
QS processing.

This patch should fix the issue #3005. It must be backported as far as 3.1.
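
A minimal standalone sketch of the intended behavior (illustrative helper,
not the actual http-ana code):

  #include <stdio.h>
  #include <string.h>

  /* Append the query-string of <uri> to <out> only when one is present;
   * when there is none, the step is simply skipped instead of failing.
   */
  static void keep_query_string(char *out, size_t outsz, const char *uri)
  {
          const char *qs = strchr(uri, '?');

          if (!qs)
                  return; /* no query-string: nothing to keep, not an error */
          strncat(out, qs, outsz - strlen(out) - 1);
  }

  int main(void)
  {
          char loc1[64] = "/new-path";
          char loc2[64] = "/new-path";

          keep_query_string(loc1, sizeof(loc1), "/old?a=1&b=2");
          keep_query_string(loc2, sizeof(loc2), "/old");
          printf("%s\n%s\n", loc1, loc2); /* "/new-path?a=1&b=2" then "/new-path" */
          return 0;
  }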
2025-06-13 11:27:20 +02:00
Amaury Denoyelle
577fa44691 BUG/MINOR: quic: work around NEW_TOKEN parsing error on backend side
NEW_TOKEN frame is never emitted by a client, hence parsing was not
tested on frontend side.

On the backend side, an issue can occur, as the expected token length is
static, based on the token length used internally by haproxy. This is not
sufficient for most server implementations, which use larger tokens. This
causes a parsing error, which may cause skipping of following frames in
the same packet. This issue was detected using ngtcp2 as server.

As for now tokens are unused by haproxy, simply discard the test on token
length during NEW_TOKEN frame parsing. The token itself is merely
skipped without being stored. This is sufficient for now to continue
experimenting with the QUIC backend implementation.

This does not need to be backported.
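
For illustration, a minimal standalone sketch of skipping a length-prefixed
token (simplified QUIC varint decoding, not the actual haproxy frame parser):

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Decode a QUIC variable-length integer (RFC 9000, section 16). Returns
   * the number of bytes consumed, or 0 if the buffer is too short.
   */
  static size_t dec_varint(const uint8_t *buf, size_t len, uint64_t *val)
  {
          size_t sz, i;

          if (!len)
                  return 0;
          sz = (size_t)1 << (buf[0] >> 6); /* 1, 2, 4 or 8 bytes */
          if (len < sz)
                  return 0;
          *val = buf[0] & 0x3f;
          for (i = 1; i < sz; i++)
                  *val = (*val << 8) | buf[i];
          return sz;
  }

  /* Parse a NEW_TOKEN frame payload: read the token length, then merely
   * skip the token bytes instead of enforcing a fixed expected length.
   */
  static int parse_new_token(const uint8_t *buf, size_t len, size_t *consumed)
  {
          uint64_t toklen;
          size_t hdr = dec_varint(buf, len, &toklen);

          if (!hdr || len - hdr < toklen)
                  return -1; /* truncated frame */
          *consumed = hdr + (size_t)toklen;
          return 0;
  }

  int main(void)
  {
          /* token length = 4, followed by 4 opaque token bytes */
          const uint8_t frame[] = { 0x04, 0xde, 0xad, 0xbe, 0xef };
          size_t consumed;

          if (!parse_new_token(frame, sizeof(frame), &consumed))
                  printf("NEW_TOKEN skipped, %zu bytes consumed\n", consumed);
          return 0;
  }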
2025-06-12 17:47:15 +02:00
Amaury Denoyelle
830affc17d MINOR: server: reject QUIC servers without explicit SSL
Report an error during server configuration if QUIC is used but SSL is
not activated via the 'ssl' keyword. This is done in _srv_parse_finalize(),
which is used by both static and dynamic servers.

Note that contrary to listeners, an error is reported instead of a
warning, and SSL is not automatically activated if missing. This is
mainly due to the complex server configuration : _srv_parse_finalize()
is ideal to affect every server, including dynamic entries. However, it
is executed after server SSL context allocation performed via the
<prepare_srv> XPRT operation. A proper fix would be to move the SSL ctx
allocation into _srv_parse_finalize(), but this may have an unknown
impact. Thus, for now a simpler solution has been chosen.
2025-06-12 16:16:43 +02:00
Amaury Denoyelle
33cd96a5e9 BUG/MINOR: quic: prevent crash on startup with -dt
QUIC traces in ssl_quic_srv_new_ssl_ctx() are problematic as this
function is called early during startup. If traces are activated via the -dt
command-line argument, a crash occurs because the stderr sink is not yet
available.

Thus, traces from ssl_quic_srv_new_ssl_ctx() are simply removed.

No backport needed.
2025-06-12 15:15:56 +02:00
Frederic Lecaille
5a0ae9e9be MINOR: quic-be: Avoid SSL context unreachable code without USE_QUIC_OPENSSL_COMPAT
This commit added a "err" C label reachable only with USE_QUIC_OPENSSL_COMPAT:

   MINOR: quic-be: Missing callbacks initializations (USE_QUIC_OPENSSL_COMPAT)

leading coverity to warn this:

*** CID 1611481:         Control flow issues  (UNREACHABLE)
/src/quic_ssl.c: 802             in ssl_quic_srv_new_ssl_ctx()
796     		goto err;
797     #endif
798
799      leave:
800     	TRACE_LEAVE(QUIC_EV_CONN_NEW);
801     	return ctx;
>>>     CID 1611481:         Control flow issues  (UNREACHABLE)
>>>     This code cannot be reached: "err:
SSL_CTX_free(ctx);".
802      err:
803     	SSL_CTX_free(ctx);
804     	ctx = NULL;
805     	TRACE_DEVEL("leaving on error", QUIC_EV_CONN_NEW);
806     	goto leave;
807     }

The less intrusive (without #ifdef) way to fix this is to add a "goto err"
statement in the code part which is reachable without USE_QUIC_OPENSSL_COMPAT.

Thank you to @chipitsine for having reported this issue in GH #3003.
2025-06-12 11:45:21 +02:00
Frederic Lecaille
869fb457ed BUG/MINOR: quic-be: CID double free upon qc_new_conn() failures
This issue may occur when qc_new_conn() fails after having allocated
and attached <conn_cid> to its tree. This is the case when compiling
haproxy against WolfSSL, for an unknown reason at this time. In this
case the <conn_cid> is freed by pool_head_quic_connection_id(), then
freed again by quic_conn_release().

This bug arrived with this commit:

    MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())

So, the aim of this patch is to free <conn_cid> only for QUIC backends
and only if it is not attached to its tree. This is the case when the
<conn_id> local variable, passed with NULL value to qc_new_conn(), is then
initialized to the same <conn_cid> value.
2025-06-12 11:45:21 +02:00
Frederic Lecaille
dc3fb3a731 CLEANUP: quic-be: Add comments about qc_new_conn() usage
This patch should have come with this last commit for the latest qc_new_conn()
modifications for QUIC backends:

     MINOR: quic-be: get rid of ->li quic_conn member

qc_new_conn() must be passed NULL pointers for several variables as mentioned
by the comment. Some of these local variables are used to avoid too many
code modifications.
2025-06-12 11:45:21 +02:00
Amaury Denoyelle
603afd495b MINOR: hq-interop: encode request from HTX for backend side support
Implement transcoding of an HTX request into HTTP/0.9. This protocol is a
simplified version of HTTP. A request only supports the GET method, without
any header. As such, only a request line is written during the snd_buf
operation.
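
A minimal standalone sketch of such a request line encoder (illustrative
only, not the actual hq-interop code):

  #include <stdio.h>
  #include <string.h>

  /* Serialize an HTTP/0.9 request: a bare request line, GET only, no
   * headers and no version token. Returns the number of bytes written,
   * or -1 if the output buffer is too small.
   */
  static int h09_encode_request(char *out, size_t outsz, const char *path)
  {
          int ret = snprintf(out, outsz, "GET %s\r\n", path);

          return (ret < 0 || (size_t)ret >= outsz) ? -1 : ret;
  }

  int main(void)
  {
          char buf[128];
          int len = h09_encode_request(buf, sizeof(buf), "/index.html");

          if (len > 0)
                  fwrite(buf, 1, len, stdout); /* "GET /index.html\r\n" */
          return 0;
  }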
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
a286d5476b MINOR: hq-interop: decode response into HTX for backend side support
Implement transcoding of an HTTP/0.9 response into an HTX message.

HTTP/0.9 is a really simple subset of the HTTP spec. The response does
not have any status line and contains only the payload body. The response
is finished when the underlying connection/stream is closed.

A status line is generated to be compliant with HTX. This is performed
on the first invocation of rcv_buf for the current stream. The status code
is set to 200. The payload body, if present, is then copied using
htx_add_data().
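
A toy sketch of the decoding logic (printf stands in for the HTX calls;
this is not the actual hq-interop code):

  #include <stdio.h>
  #include <string.h>

  /* Per-stream state: HTTP/0.9 has no status line, so a synthetic 200
   * start-line is emitted once, on the first rcv_buf invocation, before
   * forwarding payload bytes. End of response is the stream close.
   */
  struct h09_stream {
          int started; /* synthetic status line already emitted */
  };

  static void h09_decode_response(struct h09_stream *h09s,
                                  const char *data, size_t len)
  {
          if (!h09s->started) {
                  /* stand-in for building the HTX start-line */
                  printf("HTTP/1.0 200 OK\r\n\r\n");
                  h09s->started = 1;
          }
          /* stand-in for htx_add_data(): forward the raw payload */
          fwrite(data, 1, len, stdout);
  }

  int main(void)
  {
          struct h09_stream h09s = { 0 };

          h09_decode_response(&h09s, "<html>hi", 8);
          h09_decode_response(&h09s, "</html>\n", 8);
          return 0;
  }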
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
4031bf7432 MINOR: quic: wakeup backend MUX on handshake completed
This commit is the second and final step to initiate QUIC MUX on the
backend side. On handshake completion, MUX is woken up just after its
creation. This step is necessary to notify the stream layer, via the QCS
instance pre-initialized on MUX init, so that the transfer can be
resumed.

This mode of operation is similar to TCP stack when TLS+ALPN are used,
which forces MUX initialization to be delayed after handshake
completion.
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
1efaca8a57 MINOR: mux-quic: instantiate first stream on backend side
Adjust qmux_init() to handle frontend and backend sides differently.
Most notably, on backend side, the first bidirectional stream is created
preemptively. This step is necessary as MUX layer will be woken up just
after handshake completion.
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
f8d096c05f MINOR: mux-quic: set expect data only on frontend side
Stream data layer is notified that data is expected when FIN is
received, which marks the end of the HTTP request. This prepares data
layer to be able to handle the expected HTTP response.

Thus, this step is only relevant on frontend side. On backend side, FIN
marks the end of the HTTP response. No further content is expected, thus
expect data should not be set in this case.

Note that the se_expect_data() invocation via qcs_attach_sc() is not
protected. This is because this function will only be called during
request headers parsing, which is performed on the frontend side.
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
e8775d51df MINOR: mux-quic: define flag for backend side
Mux connection is flagged with new QC_CF_IS_BACK if used on the backend
side. For now the only change is during traces, to be able to
differentiate frontend and backend usage.
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
93b904702f MINOR: mux-quic: improve documentation for snd/rcv app-ops
Complete the documentation for rcv_buf/snd_buf operations. In particular, the
return value is now explicitly defined. For the H3 layer, the associated
functions' documentation is also extended.
2025-06-12 11:28:54 +02:00
Amaury Denoyelle
e7f1db0348 MINOR: quic: mark ctrl layer as ready on quic_connect_server()
Use conn_ctrl_init() on the connection when quic_connect_server()
succeeds. This is necessary so that the connection is considered
completely initialized. Without this, the connect operation will be called
again if the connection is reused.
2025-06-12 11:25:12 +02:00
Amaury Denoyelle
a0db93f3d8 MEDIUM: backend: delay MUX init with ALPN even if proto is forced
On backend side, multiplexer layer is initialized during
connect_server(). However, this step is not performed if ALPN is used,
as the negotiated protocol may be unknown. Multiplexer initialization is
delayed after TLS handshake completion.

There are still exceptions though that force the MUX to be initialized
even if ALPN is used. One of them was when the <mux_proto> server field was
already set at this stage, which is the case when an explicit proto is
selected on the server line configuration. Remove this condition so that
the MUX init is now delayed with ALPN even if the proto is forced.

The scope of this change should be minimal. In fact, the only impact
concerns server config with both proto and ALPN set, which is pretty
unlikely as it is contradictory.

The main objective of this patch is to prepare QUIC support on the
backend side. Indeed, QUIC proto will be forced on the server if a QUIC
address is used, similarly to bind configuration. However, we still want
to delay MUX initialization after QUIC handshake completion. This is
mandatory to know the selected application protocol, required during
QUIC MUX init.
2025-06-12 11:21:32 +02:00
Amaury Denoyelle
044ad3a602 BUG/MEDIUM: mux-quic: adjust wakeup behavior
Change the wake callback behavior for QUIC MUX. This operation loops over
each QCS and notifies their stream data layer on certain events via the
internal helper qcc_wake_some_streams().

Previously, streams were notified only if an error occurred on the
connection. Change this to notify the stream data layer every time the wake
callback is used. This behavior is now identical to the H2 MUX.

qcc_wake_some_streams() is also renamed to qcc_wake_streams(), as it
better reflects its true behavior.

This change should not have a performance impact as the wake mux ops should
not be called frequently. Note that qcc_wake_streams() can also be
called directly via qcc_io_process() to ensure a new error is correctly
propagated. As the wake callback first uses qcc_io_process(), it will only
call qcc_wake_streams() if no error is present.

No known issue is associated with this commit. However, it could prevent
freezing transfers under certain conditions. As such, it is considered as
a bug fix worthy of backporting.

This should be backported after a period of observation.
2025-06-12 11:12:49 +02:00
Christopher Faulet
2c3f3eaaed BUILD: hlua: Fix warnings about uninitialized variables (2)
It was still failing on Ubuntu-24.04 with GCC+ASAN. So, instead of
understanding the code path the compiler followed to report uninitialized
variables, let's init them now.

No backport needed.
2025-06-12 10:49:54 +02:00
Aurelien DARRAGON
b5067a972c BUILD: listener: fix 'for' loop inline variable declaration
commit 16eb0fab3 ("MAJOR: counters: dispatch counters over thread groups")
introduced a build regression on some compilers:

  src/listener.c: In function 'listener_accept':
  src/listener.c:1095:3: error: 'for' loop initial declarations are only allowed in C99 mode
     for (int it = 0; it < global.nbtgroups; it++)
     ^
  src/listener.c:1095:3: note: use option -std=c99 or -std=gnu99 to compile your code
  src/listener.c:1101:4: error: 'for' loop initial declarations are only allowed in C99 mode
      for (int it = 0; it < global.nbtgroups; it++) {
      ^
  make: *** [src/listener.o] Error 1
  make: *** Waiting for unfinished jobs....

Let's fix that.
No backport needed
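
The fix is simply to hoist the declaration out of the loop; a minimal
illustration of the pattern (not the actual listener.c code):

  #include <stdio.h>

  int main(void)
  {
          int nbtgroups = 4;
          int it; /* declared outside the loop: valid even without -std=c99 */

          for (it = 0; it < nbtgroups; it++)
                  printf("tgroup %d\n", it + 1);
          return 0;
  }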
2025-06-12 08:46:36 +02:00
Christopher Faulet
01f011faeb BUILD: hlua: Fix warnings about uninitialized variables
In hlua_applet_tcp_recv_try() and hlua_applet_tcp_getline_yield(), GCC 14.2
reports warnings about the 'blk2' variable that may be used uninitialized. It
is a bit strange because the code is pretty similar to before. But to make it
happy and to avoid bugs if the API changes in the future, 'blk2' is now used
only when its length is greater than 0.

No need to backport.
2025-06-12 08:46:36 +02:00
Christopher Faulet
8c573deb9f BUG/MINOR: hlua: Don't forget the return statement after a hlua_yieldk()
In hlua_applet_tcp_getline_yield(), the function may yield if there is no
data available. However we must take care to add a return statement just
after the call to hlua_yieldk(). I don't know the details of the LUA API,
but at least, this return statement fixes a build error about uninitialized
variables that may be used.

It is a 3.3-specific issue. No backport needed.
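
For reference, this is the usual pattern with the standard Lua C API
(lua_yieldk(); hlua_yieldk() is haproxy's wrapper around it): the yield call
must be the last statement executed, so it is written as the function's
return value and nothing below it can run with uninitialized locals. This is
only an illustrative fragment meant to be registered into a Lua host:

  #include <lua.h>

  static int getline_cont(lua_State *L, int status, lua_KContext ctx);

  static int getline_try(lua_State *L)
  {
          int data_available = 0; /* stand-in for the applet buffer state */

          if (!data_available) {
                  /* Yield and resume later in getline_cont(); returning
                   * immediately guarantees no code below this point runs.
                   */
                  return lua_yieldk(L, 0, 0, getline_cont);
          }
          /* ... otherwise read the line and push the result ... */
          return 1;
  }

  static int getline_cont(lua_State *L, int status, lua_KContext ctx)
  {
          (void)status; (void)ctx;
          return getline_try(L);
  }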
2025-06-12 08:46:36 +02:00
Frederic Lecaille
bf6e576cfd MEDIUM: quic-be: initialize MUX on handshake completion
On the backend side, the MUX is instantiated after QUIC handshake completion.
This step is performed via qc_ssl_provide_quic_data(). First, connection
flags for handshake completion are reset. Then, the MUX is instantiated
via the conn_create_mux() function.
2025-06-11 18:37:34 +02:00
Amaury Denoyelle
cdcecb9b65 MINOR: quic: define proper proto on QUIC servers
Force QUIC as <mux_proto> for the server if a QUIC address is used. This is
similar to what is already done for bind instances on the frontend side.
This step ensures that conn_create_mux() will select the proper protocol.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
855fd63f90 MINOR: quic-be: Prevent the MUX to send/receive data
Such actions must be interrupted until the handshake completion.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
b9703cf711 MINOR: quic-be: get rid of ->li quic_conn member
Replace the ->li quic_conn member, a pointer to struct listener, by ->target
which is an object type enum, and adapt the code.
Use __objt_(listener|server)() where the object type is known. Typically
this is where the code is specific to one connection type (frontend/backend).
Remove the <server> parameter passed to qc_new_conn(). It is redundant with
the <target> parameter.
GSO is not supported at this time for the QUIC backend. qc_prep_pkts() is
modified to prevent it from building more than an MTU. As a consequence,
qc_send_ppkts() will not use GSO.
ssl_clienthello.c code is run only by listeners. This is why __objt_listener()
is used in place of ->li.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
f6ef3bbc8a MINOR: quic-be: SSL_get_peer_quic_transport_params() not defined by OpenSSL 3.5 QUIC API
Disable the code around SSL_get_peer_quic_transport_params() as this was done
for USE_QUIC_OPENSSL_COMPAT because SSL_get_peer_quic_transport_params() is not
defined by OpenSSL 3.5 QUIC API.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
034cf74437 MINOR: quic-be: Make the secret derivation works for QUIC backends (USE_QUIC_OPENSSL_COMPAT)
quic_tls_compat_keylog_callback() is the callback used by the QUIC OpenSSL
compatibility module to derive the TLS secrets from other secrets provided
by keylog. The <write> local variable of this function is initialized to denote
the direction (write to send, read to receive) the secret is supposed to be used
for. That said, as the QUIC cryptographic algorithms are symmetrical, the
direction is inverted between the peers: a secret which is used to
write/send/cipher data from one peer's point of view is also the secret which
is used by the other peer to read/receive/decipher data. This was confirmed by
the fact that without this patch, the TLS stack first provides the peer with
Handshake secrets to send/cipher data. The client could not use such a secret
to decipher the Handshake packets received from the server. This patch simply
reverses the direction stored in the <write> variable to make the secret
derivation work for the QUIC client.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
d1cd0bb987 MINOR: quic-be: Missing callbacks initializations (USE_QUIC_OPENSSL_COMPAT)
The quic_tls_compat_init() function is called from the OpenSSL QUIC
compatibility module (USE_QUIC_OPENSSL_COMPAT) to initialize the keylog
callback and the callback which stores the QUIC transport parameters as a TLS
extension into the stack.
These callbacks must also be initialized for QUIC backends.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
fc90964b55 MINOR: quic-be: Store the remote transport parameters asap
This is done from the TLS secrets derivation callback at Application level (the
last encryption level), calling SSL_get_peer_quic_transport_params() to have
access to the TLS transport parameters extension embedded into the Server Hello
TLS message. Then, quic_transport_params_store() is called to store a decoded
version of these transport parameters.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
8c2f2615f4 MINOR: quic-be: I/O handler switch adaptation
For connections to QUIC servers, this patch modifies the moment where the I/O
handler callback is switched to quic_conn_app_io_cb(). This is no longer
done, as for listeners, just after the handshake has completed, but just after
it has been confirmed.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
f085a2f5bf MINOR: quic-be: Initial packet number space discarding.
Discard the Initial packet number space as soon as possible. This is done
during handshakes in quic_conn_io_cb() as soon as a Handshake packet could
be successfully sent.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
a62098bfb0 MINOR: quic-be: Add the conn object to the server SSL context
The initialization of the <ssl_app_data_index> SSL user data index is required
to make all the SSL sessions to QUIC servers work, as is done for TCP
servers. The conn object is notably retrieved by SSL callbacks which are
server specific (e.g. ssl_sess_new_srv_cb()).
2025-06-11 18:37:34 +02:00
Frederic Lecaille
e226a7cb79 MINOR: quic-be: Build post handshake frames
This action is not specific to listeners. A QUIC client also has to send
NEW_CONNECTION_ID frames.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
2d076178c6 MINOR: quic-be: Store asap the DCID
Store the peer connection ID (SCID) as the connection DCID as soon as an
Initial packet is received.
Stop comparing the packet type to QUIC_PACKET_TYPE_0RTT if it already matched
QUIC_PACKET_TYPE_INITIAL.
A QUIC server must not send too short datagrams with ack-eliciting packets
inside. This cannot be done from quic_rx_pkt_parse() because one does not know
if there is an ack-eliciting frame in the Initial packets. If the packet must
be dropped, this is done after having parsed it!
2025-06-11 18:37:34 +02:00
Frederic Lecaille
b4a9b53515 MINOR: h3-be: Correctly retrieve h3 counters
This is done using the qc_counters() function which also supports QUIC servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
e27b7b4889 MINOR: quic-be: Handshake packet number space discarding
This is done for QUIC clients (or haproxy QUIC servers) when the handshake is
confirmed.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
43d88a44f1 MINOR: quic-be: Datagrams and packet parsing support
Modify quic_dgram_parse() to stop passing it a listener as third parameter.
Instead, the object type address of the connection socket owner is passed
to support the haproxy servers with QUIC as transport protocol.
qc_owner_obj_type() is implemented to return this address.
qc_counters() is also implemented to return the QUIC specific counters of
the proxy which owns the connection.
quic_rx_pkt_parse(), called by quic_dgram_parse(), is also modified to use
the object type address used by the latter as last parameter. It is
also modified to send Retry packets only from listeners. A QUIC client
(connection to haproxy QUIC servers) must drop the Initial packets with
a non-null token length. It is also not supposed to receive 0-RTT packets,
which are dropped.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
266b10b8a4 MINOR: quic-be: Do not redispatch the datagrams
The QUIC datagram redispatch is there to counter the race condition which
exists only for QUIC connections to listeners, where datagrams may arrive
on the wrong socket between the bind() and connect() calls.
Run this code part only for listeners.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
89d5a59933 MINOR: quic-be: add field for max_udp_payload_size into quic_conn
Add ->max_udp_payload_size new member to quic_conn struct.
Initialize it from qc_new_conn().
Adapt qc_snd_buf() to use it.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
f7c0f5ac1b MINOR: quic-be: xprt ->init() adapatations
Allocate a connection to connect to QUIC servers from qc_conn_init(), which is
the ->init() QUIC xprt callback.
Also initialize the ->prepare_srv and ->destroy_srv callbacks as is done for
TCP servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
29fb1aee57 MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())
For haproxy QUIC servers (or QUIC clients), the peer is considered as validated.
This is a property which is more specific to QUIC servers (haproxy QUIC listeners).
No <odcid> is used for the QUIC client connection. It is used only on the QUIC server side.
The <token_odcid> is also not used on the QUIC client side. It must be embedded into
the transport parameters only on the QUIC server side.
The quic_conn is created before the socket allocation. So, the local address is
zeroed.
Initialize the transport parameters with qc_srv_params_init().
Stop hardcoding the <server> parameter value passed to qc_new_isecs() to
correctly initialize the Initial secrets.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
9831f596ea MINOR: quic-be: ->connect() protocol callback adaptations
Modify quic_connect_server() which is the ->connect() callback for QUIC protocol:
    - add a BUG_ON() run when entering this function: the <fd> socket must equal -1
    - conn->handle is a union. conn->handle.qc is used for QUIC connections,
      conn->handle.fd must not be used to store the fd.
    - code alignment fix for setsockopt(fd, SOL_SOCKET, (SO_SNDBUF|SO_RCVBUF))
      statements
    - remove the section of code which was duplicated from the ->connect() TCP callback
    - fd_insert() the new socket file descriptor created to connect to the QUIC
      server with quic_conn_sock_fd_iocb() as callback for read events.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
52ec3430f2 MINOR: sock: Add protocol and socket types parameters to sock_create_server_socket()
This patch only adds <proto_type> new proto_type enum parameter and <sock_type>
socket type parameter to sock_create_server_socket() and adapts its callers.
This is to prepare the use of this function by QUIC servers/backends.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
9c84f64652 MINOR: quic-be: Add a function to initialize the QUIC client transport parameters
Implement qc_srv_params_init() to initialize the QUIC client transport parameters
in relation with connections to haproxy servers/backends.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
f49bbd36b9 MINOR: quic-be: SSL sessions initializations
Modify qc_alloc_ssl_sock_ctx() to pass the connection object as parameter. It is
NULL for a QUIC listener, not NULL for a QUIC server. This connection object is
set as value for the ->conn quic_conn struct member. Initialise the SSL session
object from this function for QUIC servers.
qc_ssl_set_quic_transport_params() is also modified to pass the SSL object as
parameter. This is the unique parameter this function needs. The <qc> parameter
is used only for the trace.
SSL_do_handshake() must be called as soon as the SSL object is initialized for
the QUIC backend connection. This triggers the TLS CRYPTO data delivery.
tasklet_wakeup() is also called to send these CRYPTO data asap.
Modify the QUIC_EV_CONN_NEW event trace to dump the potential errors returned by
SSL_do_handshake().
2025-06-11 18:37:34 +02:00
Frederic Lecaille
1408d94bc4 MINOR: quic-be: ssl_sock contexts allocation and misc adaptations
Implement ssl_sock_new_ssl_ctx() to allocate an SSL server context, as is
currently done for TCP servers and also for QUIC servers, depending on the
<is_quic> boolean value passed as new parameter. For QUIC servers, this
function calls ssl_quic_srv_new_ssl_ctx() which is specific to QUIC.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
7c76252d8a MINOR: quic-be: Correct the QUIC protocol lookup
From connect_server(), the QUIC protocol could not be retrieved by
protocol_lookup() because of the PROTO_TYPE_STREAM default passed as argument.
Instead, to support QUIC, srv->addr_type.proto_type may be safely passed.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
1e45690656 MINOR: quic-be: Add a function for the TLS context allocations
Implement ssl_quic_srv_new_ssl_ctx() whose aim is to allocate a TLS context
for QUIC servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
a4e1296208 MINOR: quic-be: QUIC server xprt already set when preparing their CTXs
The QUIC server xprts have already been set at server line parsing time.
This patch prevents the QUIC server xprts from being reset to the <ssl_sock>
value, which is the value used for SSL/TCP connections.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
24fc44c44d MINOR: quic-be: QUIC backend XPRT and transport parameters init during parsing
Add ->quic_params new member to server struct.
Also set the ->xprt member of the server being initialized and initialize asap its
transport parameters from _srv_parse_init().
2025-06-11 18:37:34 +02:00
Frederic Lecaille
0e67687ca9 MINOR: quic-be: Call ->prepare_srv() callback at parsing time
This XPRT callback is called from check_config_validity() after the configuration
has been parsed to initialize all the SSL server contexts.

This patch implements the same thing for the QUIC servers.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
5a711551a2 MINOR: quic-be: Version Information transport parameter check
Add a little check to verify that the version chosen by the server matches
the client one. Initialize the local transport parameters' ->negotiated_version
value with this version if this is the case. If not, 0 is returned.
2025-06-11 18:37:34 +02:00
Frederic Lecaille
990c9f95f7 MINOR: quic-be: Correct Version Information transp. param encoding
According to the RFC, a QUIC client must encode the QUIC versions it supports
into the "Available Versions" field of the "Version Information" transport
parameter, ordered by descending preference.

This is done by defining the <quic_version_2> and <quic_version_draft_29> new
pointer variables to the corresponding elements of the <quic_versions> array.
A client announces its available versions as follows: v1, v2, draft29.
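
For illustration, a minimal sketch of encoding the "Available Versions"
field in that order (illustrative only, not the actual haproxy encoder):

  #include <stdint.h>
  #include <stdio.h>

  /* QUIC version numbers: v1 (RFC 9000), v2 (RFC 9369), draft-29. */
  #define QUIC_V1      0x00000001u
  #define QUIC_V2      0x6b3343cfu
  #define QUIC_DRAFT29 0xff00001du

  int main(void)
  {
          /* ordered by descending preference: v1, then v2, then draft-29,
           * each encoded as a 32-bit big-endian value
           */
          const uint32_t avail[] = { QUIC_V1, QUIC_V2, QUIC_DRAFT29 };
          uint8_t buf[sizeof(avail)];
          size_t pos = 0, i;

          for (i = 0; i < sizeof(avail) / sizeof(avail[0]); i++) {
                  buf[pos++] = (uint8_t)(avail[i] >> 24);
                  buf[pos++] = (uint8_t)(avail[i] >> 16);
                  buf[pos++] = (uint8_t)(avail[i] >> 8);
                  buf[pos++] = (uint8_t)avail[i];
          }
          for (i = 0; i < pos; i++)
                  printf("%02x%s", buf[i], (i % 4 == 3) ? "\n" : "");
          return 0;
  }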
2025-06-11 18:37:34 +02:00
Amaury Denoyelle
9c751a3cc1 MINOR: mux-quic-be: allow QUIC proto on backend side
Activate QUIC protocol support for MUX-QUIC on the backend side, in
addition to the current frontend support. This change is mandatory to be
able to implement QUIC on the backend side.

Without this modification, it is impossible to explicitly activate the QUIC
protocol on a server line, hence an error is reported :
  config : proxy 'xxxx' : MUX protocol 'quic' is not usable for server 'yyyy'
2025-06-11 18:37:34 +02:00
Amaury Denoyelle
f66b495f8e MINOR: server: mark QUIC support as experimental
Mark QUIC address support for servers as experimental on the backend
side. Previously, it was allowed but wouldn't function as expected. As
QUIC backend support requires several changes, it is better to declare
it as experimental first.
2025-06-11 18:37:33 +02:00
Amaury Denoyelle
bdd5e58179 MINOR: server: implement helper to identify QUIC servers
Define srv_is_quic() which can be used to quickly identify whether a server
uses the QUIC protocol.
2025-06-11 18:37:19 +02:00
Amaury Denoyelle
1ecf2e9bab BUG/MINOR: config/server: reject QUIC addresses
QUIC is not implemented on the backend side. To prevent any issue, it is
better to reject any configured server which uses it. This is done via
_srv_parse_init() which is used both for static and dynamic servers.

This should be backported to all stable versions.
2025-06-11 18:37:17 +02:00
Christopher Faulet
b5525fe759 [RELEASE] Released version 3.3-dev1
Released version 3.3-dev1 with the following main changes :
    - BUILD: tools: properly define ha_dump_backtrace() to avoid a build warning
    - DOC: config: Fix a typo in 2.7 (Name format for maps and ACLs)
    - REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+ (5)
    - REGTESTS: Remove REQUIRE_VERSION=2.3 from all tests
    - REGTESTS: Remove REQUIRE_VERSION=2.4 from all tests
    - REGTESTS: Remove tests with REQUIRE_VERSION_BELOW=2.4
    - REGTESTS: Remove support for REQUIRE_VERSION and REQUIRE_VERSION_BELOW
    - MINOR: server: group postinit server tasks under _srv_postparse()
    - MINOR: stats: add stat_col flags
    - MINOR: stats: add ME_NEW_COMMON() helper
    - MINOR: proxy: collect per-capability stat in proxy_cond_disable()
    - MINOR: proxy: add a true list containing all proxies
    - MINOR: log: only run postcheck_log_backend() checks on backend
    - MEDIUM: proxy: use global proxy list for REGISTER_POST_PROXY_CHECK() hook
    - MEDIUM: server: automatically add server to proxy list in new_server()
    - MEDIUM: server: add and use srv_init() function
    - BUG/MAJOR: leastconn: Protect tree_elt with the lbprm lock
    - BUG/MEDIUM: check: Requeue healthchecks on I/O events to handle check timeout
    - CLEANUP: applet: Update comment for applet_put* functions
    - DEBUG: check: Add the healthcheck's expiration date in the trace messags
    - BUG/MINOR: mux-spop: Fix null-pointer deref on SPOP stream allocation failure
    - CLEANUP: sink: remove useless cleanup in sink_new_from_logger()
    - MAJOR: counters: add shared counters base infrastructure
    - MINOR: counters: add shared counters helpers to get and drop shared pointers
    - MINOR: counters: add common struct and flags to {fe,be}_counters_shared
    - MEDIUM: counters: manage shared counters using dedicated helpers
    - CLEANUP: counters: merge some common counters between {fe,be}_counters_shared
    - MINOR: counters: add local-only internal rates to compute some maxes
    - MAJOR: counters: dispatch counters over thread groups
    - BUG/MEDIUM: cli: Properly parse empty lines and avoid crashed
    - BUG/MINOR: config: emit warning for empty args only in discovery mode
    - BUG/MINOR: config: fix arg number reported on empty arg warning
    - BUG/MINOR: quic: Missing SSL session object freeing
    - MINOR: applet: Add API functions to manipulate input and output buffers
    - MINOR: applet: Add API functions to get data from the input buffer
    - CLEANUP: applet: Simplify a bit comments for applet_put* functions
    - MEDIUM: hlua: Update TCP applet functions to use the new applet API
    - BUG/MEDIUM: fd: Use the provided tgid in fd_insert() to get tgroup_info
    - BUG/MINIR: h1: Fix doc of 'accept-unsafe-...-request' about URI parsing
2025-06-11 14:31:33 +02:00
Christopher Faulet
b2f64af341 BUG/MINIR: h1: Fix doc of 'accept-unsafe-...-request' about URI parsing
The description of the tests performed on the URI in H1 when the
'accept-unsafe-violations-in-http-request' option is set is wrong. It states
that only characters below 32 and 127 are blocked when this option is set,
suggesting that otherwise, when it is not set, all invalid characters in the
URI, according to RFC 3986, are blocked.

But in fact, it is not true. By default all characters below 32 and above 127
are blocked. And when the 'accept-unsafe-violations-in-http-request' option is
set, characters above 127 (excluded) are accepted. But characters in
(33..126) are never checked, independently of this option.

This patch should fix the issue #2906. It should be backported as far as
3.0. For older versions, the documentation could also be clarified because
this part is not really clear.

Note the request URI validation is still under discussion because invalid
characters in (33..126) are never checked and some users request a stricter
parsing.
2025-06-10 19:17:56 +02:00
Olivier Houchard
6993981cd6 BUG/MEDIUM: fd: Use the provided tgid in fd_insert() to get tgroup_info
In fd_insert(), use the provided tgid to get the thread group info,
instead of using the one of the current thread, as we may call
fd_insert() from a thread of another thread group, which will happen at
least when binding the listeners. Otherwise we'd end up accessing the
thread mask containing the enabled threads of the wrong thread group, which
can lead to crashes if we're binding on threads not present in the
thread group.
This should fix Github issue #2991.

This should be backported up to 2.8.
2025-06-10 15:10:56 +02:00
Christopher Faulet
9df380a152 MEDIUM: hlua: Update TCP applet functions to use the new applet API
The functions responsible for extracting data from the applet input buffer or
for pushing data into the applet output buffer now rely on the newly added
functions in the applet API. This simplifies the code a bit.
2025-06-10 08:16:10 +02:00
Christopher Faulet
18f9c71041 CLEANUP: applet: Simplify a bit comments for applet_put* functions
Instead of repeating which buffer is used depending on the API used by the
applet, a reference to applet_get_outbuf() was added.
2025-06-10 08:16:10 +02:00
Christopher Faulet
79445766a3 MINOR: applet: Add API functions to get data from the input buffer
There were already functions to push data from the applet to the stream by
inserting them in the right buffer, depending on whether the applet was using
the legacy API or not. Here, functions to retrieve data pushed to the applet
by the stream were added:

  * applet_getchar   : Gets one character

  * applet_getblk    : Copies a full block of data

  * applet_getword   : Copies one text block representing a word using a
                       custom separator as delimiter

  * applet_getline   : Copies one text line

  * applet_getblk_nc : Get one or two blocks of data

  * applet_getword_nc: Gets one or two blocks of text representing a word
                       using a custom separator as delimiter

  * applet_getline_nc: Gets one or two blocks of text representing a line
2025-06-10 08:16:10 +02:00
Christopher Faulet
0d8ecb1edc MINOR: applet: Add API functions to manipulate input and output buffers
In this patch, some functions were added to ease input and output buffer
manipulation, regardless of whether the corresponding applet is using its own
buffers or relying on channel buffers. The following functions were added:

  * applet_get_inbuf  : Get the buffer containing data pushed to the applet
                        by the stream

  * applet_get_outbuf : Get the buffer containing data pushed by the applet
                        to the stream

  * applet_input_data : Return the amount of data in the input buffer

  * applet_skip_input : Skips <len> bytes from the input buffer

  * applet_reset_input: Skips all bytes from the input buffer

  * applet_output_room: Returns the amount of space available in the output
                        buffer

  * applet_need_room  : Indicates that the applet has more data to deliver
                        and needs more room in the output buffer to do
                        so
2025-06-10 08:16:10 +02:00
Frederic Lecaille
6b74633069 BUG/MINOR: quic: Missing SSL session object freeing
qc_alloc_ssl_sock_ctx() allocates an SSL_CTX object for each connection. It
also allocates an SSL object. When this function failed, only the SSL_CTX
object was freed. The correct way to free both of them is to call
qc_free_ssl_sock_ctx().

Must be backported as far as 2.6.
2025-06-06 17:53:13 +02:00
Amaury Denoyelle
0cdf529720 BUG/MINOR: config: fix arg number reported on empty arg warning
If an empty argument is used in the configuration, for example due to an
undefined environment variable, the rest of the line is not parsed. As
such, a warning is emitted to report this.

The warning was not totally correct as it reported the wrong argument
index. This patch fixes that. Note that there is still an issue with
the "^" indicator, but this is not as easy to fix yet.

This is related to github issue #2995.

This should be backported up to 3.2.
2025-06-06 17:03:02 +02:00
Amaury Denoyelle
5f1fad1690 BUG/MINOR: config: emit warning for empty args only in discovery mode
Hide the warning about empty arguments outside of discovery mode. This is
necessary, else the message will be displayed twice, which hampers the
readability of haproxy's output.

This should fix github issue #2995.

This should be backported up to 3.2.
2025-06-06 17:02:58 +02:00
Christopher Faulet
f5d41803d3 BUG/MEDIUM: cli: Properly parse empty lines and avoid crashed
Empty lines were not properly parsed and could lead to crashes because the
last argument was parsed outside of the cmdline buffer. Indeed, the last
argument is parsed to look for an eventual payload pattern. It is started
one character after the newline at the end of the command line. But it is
only valid for a non-empty command line.

So, now, this case is properly detected and we leave if an empty line is
detected.

This patch must be backported to 3.2.
2025-06-05 10:46:13 +02:00
Aurelien DARRAGON
16eb0fab31 MAJOR: counters: dispatch counters over thread groups
Most fe and be counters are good candidates for being shared between
processes. They are now grouped inside "shared" struct sub member under
be_counters and fe_counters.

Now they are properly identified, they would greatly benefit from being
shared over thread groups to reduce the cost of atomic operations when
updating them. For this, we take the current tgid into account so each
thread group only updates its own counters. For this to work, it is
mandatory that the "shared" member from {fe,be}_counters is initialized
AFTER global.nbtgroups is known, because each shared counter causes the stat
to be allocated global.nbtgroups times. When updating a counter without
concurrency, the first counter from the array may be updated.

To consult the shared counters (which requires aggregation of per-tgid
individual counters), some helper functions were added to counter.h to
ease code maintenance and avoid computing errors.
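
As an illustration of the update/consult split, here is a toy standalone
sketch (fixed group count, hypothetical names; haproxy's tgid is 1-based):

  #include <stdint.h>
  #include <stdio.h>

  #define NB_TGROUPS 4 /* stand-in for global.nbtgroups */

  /* One slot per thread group: each group only updates its own slot, so
   * the atomic increment never bounces cache lines between groups.
   */
  static _Atomic uint64_t cum_conn[NB_TGROUPS];

  static void count_conn(int tgid)
  {
          cum_conn[tgid - 1]++; /* tgid is 1-based */
  }

  /* Consulting the counter requires aggregating the per-group slots. */
  static uint64_t read_cum_conn(void)
  {
          uint64_t total = 0;
          int i;

          for (i = 0; i < NB_TGROUPS; i++)
                  total += cum_conn[i];
          return total;
  }

  int main(void)
  {
          count_conn(1);
          count_conn(2);
          count_conn(2);
          printf("total connections: %llu\n",
                 (unsigned long long)read_cum_conn());
          return 0;
  }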
2025-06-05 09:59:38 +02:00
Aurelien DARRAGON
12c3ffbb48 MINOR: counters: add local-only internal rates to compute some maxes
cps_max (max new connections received per second), sps_max (max new
sessions per second) and http.rps_max (maximum new http requests per
second) all rely on shared counters (namely conn_per_sec, sess_per_sec and
http.req_per_sec). The problem is that shared counters are about to be
distributed over thread groups, and we cannot afford to compute the
total (for all thread groups) each time we update the max counters.

Instead, since such max counters (relying on shared counters) are a very
few exceptions, let's add internal (sess,conn,req) per sec freq counters
that are dedicated to cps_max, sps_max and http.rps_max computing.

Thanks to that, related *_max counters shouldn't be negatively impacted
by the thread-group distribution, yet they will not benefit from it
either. Related internal freq counters are prefixed with "_" to emphasize
the fact that they should not be used for other purpose (the shared ones,
which are about to be distributed over thread groups in upcoming commits
are still available and must be used instead). The internal ones could
eventually be removed at any time if we find another way to compute the
{cps,sps,http.rps}_max counters.
2025-06-05 09:59:31 +02:00
Aurelien DARRAGON
b72a8bb138 CLEANUP: counters: merge some common counters between {fe,be}_counters_shared
Now that we have a common struct between fe and be shared counters struct
let's perform some cleanup to merge duplicate members into the common
struct part. This will ease code maintenance.
2025-06-05 09:59:24 +02:00
Aurelien DARRAGON
b599138842 MEDIUM: counters: manage shared counters using dedicated helpers
Proxy, listener and server shared counters are now managed via helpers
added in one of the previous commits.

When guid is not set (ie: when not yet assigned), shared counters pointer
is allocated using calloc() (local memory) and a flag is set on the shared
counters struct to know how to manipulate (and free it). Else if guid is
set, then it means that the counters may be shared so while for now we
don't actually use a shared memory location the API is ready for that.

The way it works, for proxies and servers (for which guid is not known
during creation), we first call counters_{fe,be}_shared_get with guid not
set, which results in a local pointer being retrieved (as if we just
manually called calloc() to retrieve a pointer). Later (during postparsing)
if guid is set we try to upgrade the pointer from local to shared.

Lastly, since the memory location for some objects (proxies and servers
counters) may change from creation to postparsing, let's update
counters->last_change member directly under counters_{fe,be}_shared_get()
so we don't miss it.

No change of behavior is expected, this is only preparation work.
2025-06-05 09:59:17 +02:00
Aurelien DARRAGON
c10ce1c85b MINOR: counters: add common struct and flags to {fe,be}_counters_shared
fe_counters_shared and be_counters_shared may share some common members
since they are quite similar, so we add a common struct part shared
between the two. struct counters_shared is added for convenience as
a generic pointer to manipulate common members from fe or be shared
counters pointer.

Also, the first common member is added: shared fe and be counters now
have a flags member.
2025-06-05 09:59:10 +02:00
Aurelien DARRAGON
aa53887398 MINOR: counters: add shared counters helpers to get and drop shared pointers
Create include/haproxy/counters.h and src/counters.c files to anticipate
further helpers, as some counter-specific tasks need to be carried out,
and since counters are shared between multiple object types (ie:
listener, proxy, server..) we need generic helpers.

Add some shared counters helper which are not yet used but will be updated
in upcoming commits.
2025-06-05 09:59:04 +02:00
Aurelien DARRAGON
a0dcab5c45 MAJOR: counters: add shared counters base infrastructure
Shareable counters are not tagged as shared counters and are dynamically
allocated in a separate memory area as a prerequisite for being stored
in a shared memory area. For now, GUID and thread groups are not taken into
account, this is only a first step.

Also, we ensure all counters are now manipulated using atomic operations;
namely, the "last_change" counter is now read from and written to using atomic
ops.

Despite the numerous changes caused by the counters being moved away from
counters struct, no change of behavior should be expected.
2025-06-05 09:58:58 +02:00
Aurelien DARRAGON
89b04f2191 CLEANUP: sink: remove useless cleanup in sink_new_from_logger()
As reported by Ilya in GH #2994, some cleanup parts in
sink_new_from_logger() function are not used.

We can actually simplify the cleanup logic to remove dead code, let's
do that by renaming "error_final" label to "error" and only making use
of the "error" label, because sink_free() already takes care of proper
cleanup for all sink members.
2025-06-05 09:58:50 +02:00
Christopher Faulet
8c4bb8cab3 BUG/MINOR: mux-spop: Fix null-pointer deref on SPOP stream allocation failure
When we try to allocate a new SPOP stream, if an error is encountered,
spop_strm_destroy() is called to release the eventually allocated
stream. But it must only be called if a stream was allocated. If the
reported error is an SPOP stream allocation failure, we must just leave to
avoid a null-pointer dereference.

This patch should fix point 1 of the issue #2993. It must be backported as
far as 3.1.
2025-06-04 08:48:49 +02:00
Christopher Faulet
6786b05297 DEBUG: check: Add the healthcheck's expiration date in the trace messags
It could help to diagnose some issues about timeout processing. So let's add
it !
2025-06-03 15:06:12 +02:00
Christopher Faulet
8ee650a88b CLEANUP: applet: Update comment for applet_put* functions
These functions were copied from the channel API and modified to work with
applets using the new API or the legacy one. However, the comments were not
updated accordingly. That is the purpose of this patch.
2025-06-03 15:03:30 +02:00
Christopher Faulet
7c788f0984 BUG/MEDIUM: check: Requeue healthchecks on I/O events to handle check timeout
When a healthcheck is processed, once the first wakeup has passed to start the
check, and as long as the expiration timer is not reached, only I/O events
are able to wake it up. It is an issue when there is a check timeout
defined, especially if the connect timeout is high and the check timeout is
low. In that case, the healthcheck's task is never requeued to handle any
timeout update. When the connection is established, the check timeout is set
to replace the connect timeout. It is thus possible to report a success
while a timeout should be reported.

So, now, when an I/O event is handled, the healthcheck is requeued, except if
a success or an abort is reported.

Thanks to Thierry Fournier for report and the reproducer.

This patch must be backported to all stable versions.
2025-06-03 15:03:30 +02:00
Olivier Houchard
913b2d6c83 BUG/MAJOR: leastconn: Protect tree_elt with the lbprm lock
In fwlc_srv_reposition(), set the server's tree_elt while we still hold
the lbprm read lock. While it was protected from concurrent
fwlc_srv_reposition() calls by the server's lb_lock, it was not protected from
the dequeuing/requeuing that could occur if the server goes down/up or its
weight is changed, which would lead to inconsistencies, and to the watchdog
killing the process because it is stuck in an infinite loop in
fwlc_get_next_server().

This hopefully fixes github issue #2990.

This should be backported to 3.2.
2025-06-03 04:42:47 +02:00
Aurelien DARRAGON
368d01361a MEDIUM: server: add and use srv_init() function
Rename the _srv_postparse() internal function to srv_init() and group
srv_init_per_thr() plus the idle conns list init inside it. This way we can
perform some simplifications as srv_init() performs multiple server
init steps after parsing.

The SRV_F_CHECKED flag was added; it is automatically set when srv_init()
runs successfully. If the flag is already set and srv_init() is called
again, nothing is done. This permits manually calling srv_init() earlier
than the default POST_CHECK hook when needed, without risking doing things
twice.
2025-06-02 17:51:33 +02:00
Aurelien DARRAGON
889ef6f67b MEDIUM: server: automatically add server to proxy list in new_server()
while new_server() takes the parent proxy as argument and even assigns
srv->proxy to the parent proxy, it didn't actually inserted the server
to the parent proxy server list on success.

The result is that sometimes we add the server to the list after
new_server() is called, and sometimes we don't.

This is really error-prone, and because of that, hooks such as
REGISTER_POST_SERVER_CHECK(), which are run for all servers listed in
all proxies, may not be relied upon for servers which are not actually
inserted in their parent proxy server list. Plus it feels very strange
to have a server that points to a proxy while the proxy doesn't know
about it because it cannot find it in its server list.

To prevent errors and make the proxy->srv list reliable, we move the
insertion logic directly under new_server(). This requires knowing whether
we are called during parsing or during runtime, to either insert or append
the server to the parent proxy list. For that we use the PR_FL_CHECKED flag
from the parent proxy (if the flag is set, then the proxy was checked, so
we are past the init phase and we assume we are called during runtime).

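The decision can be sketched like this (standalone, illustrative names; the
real code uses haproxy's own list primitives):

  #define PX_FL_CHECKED 0x0001          /* illustrative value */

  struct srv_node { struct srv_node *next; };

  struct px_sketch {
      unsigned int flags;
      struct srv_node *head;
      struct srv_node *tail;
  };

  static void px_add_server(struct px_sketch *px, struct srv_node *s)
  {
      s->next = NULL;
      if (px->flags & PX_FL_CHECKED) {
          /* past init (runtime "add server"): append to the tail */
          if (px->tail)
              px->tail->next = s;
          else
              px->head = s;
          px->tail = s;
      } else {
          /* config parsing: insert at the head */
          s->next = px->head;
          px->head = s;
          if (!px->tail)
              px->tail = s;
      }
  }
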
This implies that during startup if new_server() has to be cancelled on
error paths we need to call srv_detach() (which is now exposed in server.h)
before srv_drop().

The consequence of this commit is that REGISTER_POST_SERVER_CHECK() should
now run reliably on all servers created using new_server() (without having
to manually loop on the global servers_list).
2025-06-02 17:51:30 +02:00
Aurelien DARRAGON
e262e4bbe4 MEDIUM: proxy: use global proxy list for REGISTER_POST_PROXY_CHECK() hook
REGISTER_POST_PROXY_CHECK() used to iterate over "main" proxies to run
registered callbacks. This means hidden proxies (and their servers) did
not get a chance to get post-checked and could cause issues if some post-
checks are expected to be executed on all proxies no matter their type.

Instead we now rely on the global proxies list. Another side effect is that
the REGISTER_POST_SERVER_CHECK() now runs as well for servers from proxies
that are not part of the main proxies list.
2025-06-02 17:51:27 +02:00
Aurelien DARRAGON
1f12e45b0a MINOR: log: only run postcheck_log_backend() checks on backend
postcheck_log_backend() checks are executed regardless of whether the proxy
actually has the backend capability, even though the checks depend on it.

Let's fix that by adding an extra condition to ensure that the BE
capability is set.

This issue is not tagged as a bug because for now it remains impossible
to have a syslog proxy without BE capability in the main proxy list, but
this may change in the future.
2025-06-02 17:51:24 +02:00
Aurelien DARRAGON
943958c3ff MINOR: proxy: add a true list containing all proxies
We have a global proxies_list pointer which is announced as the list of
"all existing proxies", but in fact it only represents regular proxies
declared in the config file through the "listen", "frontend" or "backend"
keywords.

It is ambiguous, and we currently don't have a straightforward method to
iterate over all proxies (either public or internal ones) within haproxy.

Instead we still have to manually iterate over multiple lists (main
proxies, log-forward proxies, peer proxies...) which is error-prone.

In this patch we add a struct list member (8 bytes) inside struct proxy
in order to store every proxy (except default ones) within a global
"proxies" list which is actually representative of all proxies existing
under the haproxy process, like we already have for servers.
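
The idea can be sketched with a minimal intrusive list (standalone and
illustrative; haproxy's own struct list and macros are not reproduced here):

  struct list_sketch { struct list_sketch *next, *prev; };

  struct px_sketch {
      /* ... regular proxy members ... */
      struct list_sketch global_list;   /* link into the global list */
  };

  static struct list_sketch all_proxies = { &all_proxies, &all_proxies };

  /* append a proxy, whatever its type, to the global list */
  static void px_register(struct px_sketch *px)
  {
      px->global_list.next = &all_proxies;
      px->global_list.prev = all_proxies.prev;
      all_proxies.prev->next = &px->global_list;
      all_proxies.prev = &px->global_list;
  }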
2025-06-02 17:51:21 +02:00
Aurelien DARRAGON
6ccf770fe2 MINOR: proxy: collect per-capability stat in proxy_cond_disable()
proxy_cond_disable() collects and prints cumulated connections for be and
fe proxies no matter their type. With shared stats it may cause issues
because depending on the proxy capabilities only fe or be counters may
be allocated.

In this patch we add some checks to ensure we only try to read from
valid memory locations, else we rely on default values (0).
2025-06-02 17:51:17 +02:00
Aurelien DARRAGON
c7c017ec3c MINOR: stats: add ME_NEW_COMMON() helper
Split the ME_NEW_* helpers into a COMMON part and a specific part so it
becomes easier to add alternative helpers without code duplication.
2025-06-02 17:51:12 +02:00
Aurelien DARRAGON
d04843167c MINOR: stats: add stat_col flags
Add a stat_col flags member to store the .generic bit and prepare for
upcoming flags. No functional change expected.
2025-06-02 17:51:08 +02:00
Aurelien DARRAGON
f0b40b49b8 MINOR: server: group postinit server tasks under _srv_postparse()
init_srv_requeue() and init_srv_slowstart() functions are called after
initial server parsing via the REGISTER_POST_SERVER_CHECK() hook, and they
are also manually called for dynamic servers after the server is
initialized.

This may conflict with _srv_postparse() which is also registered via
REGISTER_POST_SERVER_CHECK() and called during dynamic server creation.

To ensure the functions don't conflict with each other, let's ensure they
are executed in the proper order by calling init_srv_requeue() and
init_srv_slowstart() from _srv_postparse(), which now becomes the parent
function for server-related postparsing stuff. No change of behavior is
expected.
2025-06-02 17:51:05 +02:00
Tim Duesterhus
8ee8b8a04d REGTESTS: Remove support for REQUIRE_VERSION and REQUIRE_VERSION_BELOW
This is no longer used since the migration to the native `haproxy -cc
'version_atleast(X)'` functionality.

see 8727614dc4
see 5efc48dcf1
2025-06-02 17:37:11 +02:00
Tim Duesterhus
d8951ec70f REGTESTS: Remove tests with REQUIRE_VERSION_BELOW=2.4
HAProxy 2.4 is the lowest supported version, thus this never matches.

see 18cd4746e5
2025-06-02 17:37:07 +02:00
Tim Duesterhus
534b09f2a2 REGTESTS: Remove REQUIRE_VERSION=2.4 from all tests
HAProxy 2.4 is the lowest supported version, thus this always matches.

see 7aff1bf6b9
2025-06-02 17:37:04 +02:00
Tim Duesterhus
239785fd27 REGTESTS: Remove REQUIRE_VERSION=2.3 from all tests
HAProxy 2.4 is the lowest supported version, thus this always matches.

see 7aff1bf6b9
2025-06-02 17:37:00 +02:00
Tim Duesterhus
294c47a5ef REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+ (5)
Introduced in:

25bcdb1d9 BUG/MAJOR: h1: Be stricter on request target validation during message parsing

see also:

fbbbc33df REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
2025-06-02 17:36:56 +02:00
Christopher Faulet
8e8cdf114b DOC: config: Fix a typo in 2.7 (Name format for maps and ACLs)
"identified" was used instead of "identifier". May be backported as far as
3.0
2025-06-02 09:19:38 +02:00
Willy Tarreau
b88164d9c0 BUILD: tools: properly define ha_dump_backtrace() to avoid a build warning
In resolve_sym_name() we declare a few symbols that we want to be able
to resolve. ha_dump_backtrace() was declared with a struct buffer instead
of a pointer to such a struct, which has no effect since we only want to
get the function's pointer, but produces a build warning with LTO, so
let's fix it.

This can be backported to 3.0.
2025-05-30 17:15:48 +02:00
Willy Tarreau
9f4cd435d3 [RELEASE] Released version 3.3-dev0
Released version 3.3-dev0 with the following main changes :
    - MINOR: version: mention that it's development again
2025-05-28 16:46:34 +02:00
Willy Tarreau
8809251ee0 MINOR: version: mention that it's development again
This essentially reverts a6458fd426.
2025-05-28 16:46:15 +02:00
Willy Tarreau
e134140d28 [RELEASE] Released version 3.2.0
Released version 3.2.0 with the following main changes :
    - MINOR: promex: Add agent check status/code/duration metrics
    - MINOR: ssl: support strict-sni in ssl-default-bind-options
    - MINOR: ssl: also provide the "tls-tickets" bind option
    - MINOR: server: define CLI I/O handler for "add server"
    - MINOR: server: implement "add server help"
    - MINOR: server: use stress mode for "add server help"
    - BUG/MEDIUM: server: fix crash after duplicate GUID insertion
    - BUG/MEDIUM: server: fix potential null-deref after previous fix
    - MINOR: config: list recently added sections with -dKcfg
    - BUG/MAJOR: cache: Crash because of wrong cache entry deleted
    - DOC: configuration: fix the example in crt-store
    - DOC: config: clarify the wording around single/double quotes
    - DOC: config: clarify the legacy cookie and header captures
    - DOC: config: fix alphabetical ordering of layer 7 sample fetch functions
    - DOC: config: fix alphabetical ordering of layer 6 sample fetch functions
    - DOC: config: fix alphabetical ordering of layer 5 sample fetch functions
    - DOC: config: fix alphabetical ordering of layer 4 sample fetch functions
    - DOC: config: fix alphabetical ordering of internal sample fetch functions
    - BUG/MINOR: h3: Set HTX flags corresponding to the scheme found in the request
    - BUG/MEDIUM: h3: Declare absolute URI as normalized when a :authority is found
    - DOC: config: mention in bytes_in and bytes_out that they're read on input
    - DOC: config: clarify the basics of ACLs (call point, multi-valued etc)
    - REGTESTS: Make the script testing conditional set-var compatible with Vtest2
    - REGTESTS: Explicitly allow failing shell commands in some scripts
    - MINOR: listeners: Add support for a label on bind line
    - BUG/MEDIUM: cli/ring: Properly handle shutdown in "show event" I/O handler
    - BUG/MEDIUM: hlua: Properly detect shutdowns for TCP applets based on the new API
    - BUG/MEDIUM: hlua: Fix getline() for TCP applets to work with applet's buffers
    - BUG/MEDIUM: hlua: Fix receive API for TCP applets to properly handle shutdowns
    - CI: vtest: Rely on VTest2 to run regression tests
    - CI: vtest: Fix the build script to properly work on MacOS
    - CI: combine AWS-LC and AWS-LC-FIPS by template
    - BUG/MEDIUM: httpclient: Throw an error if an lua httpclient instance is reused
    - DOC: hlua: Add a note to warn user about httpclient object reuse
    - DOC: hlua: fix a few typos in HTTPMessage.set_body_len() documentation
    - DEV: patchbot: prepare for new version 3.3-dev
    - MINOR: version: mention that it's 3.2 LTS now.
2025-05-28 16:35:14 +02:00
Willy Tarreau
a6458fd426 MINOR: version: mention that it's 3.2 LTS now.
The version will be maintained up to around Q2 2030. Let's
also update the INSTALL file to mention this.
2025-05-28 16:31:27 +02:00
Willy Tarreau
2502435eb3 DEV: patchbot: prepare for new version 3.3-dev
The bot will now load the prompt for the upcoming 3.2 version so we have
to rename the files and update their contents to match the current version.
2025-05-28 16:23:12 +02:00
Willy Tarreau
21ce685fcd DOC: hlua: fix a few typos in HTTPMessage.set_body_len() documentation
A few typos were noticed while gathering info for the 3.2 announce
messages, this fixes them, and will probably constitute the last
commit of this release. There's no need to backport it unless commit
94055a5e7 ("MEDIUM: hlua: Add function to change the body length of
an HTTP Message") is backported.
2025-05-27 19:33:49 +02:00
Christopher Faulet
cb7a2444d1 DOC: hlua: Add a note to warn user about httpclient object reuse
It is not supported to reuse a Lua httpclient instance to process several
requests. A new object must be created for each request. Thanks to the
previous patch ("BUG/MEDIUM: httpclient: Throw an error if an lua httpclient
instance is reused"), an error is now reported if this happens. But it is
not obvious for users. So the lua-api documentation was updated accordingly.

This patch is related to issue #2986. It should be backported with the
commit above.
2025-05-27 18:48:23 +02:00
Christopher Faulet
50fca6f0b7 BUG/MEDIUM: httpclient: Throw an error if an lua httpclient instance is reused
It is not expected/supported to reuse an httpclient instance to process
several requests. A new instance must be created for each request. However,
in Lua, there is nothing to prevent a user from creating an httpclient
object and using it in a loop to process requests.

That's unfortunate because this will apparently work: the requests will be
sent and a response will be received and processed. However, internally some
resources will be allocated and never released. When the next response is
processed, the resources allocated for the previous one are definitively
lost.

In this patch we take care to check that the httpclient object was never
used when a request is sent from a Lua script, by checking the
HTTPCLIENT_FS_STARTED flag. This flag is set when an httpclient applet is
spawned to process a request and is never removed after that. In Lua, the
httpclient applet is created when the request is sent, so it is the right
place to do this test.

This patch should fix the issue #2986. It should be backported as far as
2.6.
2025-05-27 18:47:24 +02:00
Ilya Shipitsin
94ded5523f CI: combine AWS-LC and AWS-LC-FIPS by template
Let's reduce code duplication by using workflow templates.
2025-05-27 15:06:58 +02:00
Christopher Faulet
508e074a32 CI: vtest: Fix the build script to properly work on MacOS
"config.h" header file is new in VTest2 and includes must be adapted to be
able to build VTest on MacOS. Let's add "-I." to make it work.
2025-05-27 14:48:53 +02:00
Christopher Faulet
6a18d28ba2 CI: vtest: Rely on VTest2 to run regression tests
VTest2 (https://github.com/vtest/VTest2) was released and is a replacement
for VTest. VTest was archived. So let's use the new version now.

If this commit is backported, the 2 following commits must also be
backported:

 * 2808e3577 ("REGTESTS: Explicitly allow failing shell commands in some scripts")
 * 82c291124 ("REGTESTS: Make the script testing conditional set-var compatible with Vtest2")
2025-05-27 14:38:46 +02:00
Christopher Faulet
bc4c3c7969 BUG/MEDIUM: hlua: Fix receive API for TCP applets to properly handle shutdowns
An optional timeout was added to AppletTCP.receive() to interrupt calls after
a delay. It was mandatory to be able to implement interactive applets (like
trisdemo). However, this broke the API and made it impossible to
differentiate shutdowns from delay expirations. Indeed, in both cases, an
empty string was returned.

Because historically an empty string was used to notify a connection
shutdown, this should not be changed. So now, a 'nil' value is returned when
no data was available before the delay expired.

The new AppletTCP:try_receive() function was also affected. To fix it,
instead of stating there is no delay when a receive is tried, an expired
delay is set. Concretely, TICK_ETERNITY was replaced by now_ms.

Finally, the AppletTCP:getline() function is not concerned for now because
there is no way to interrupt it after some delay.

The documentation and trisdemo lua script were updated accordingly.

This patch depends on "BUG/MEDIUM: hlua: Properly detect shudowns for TCP
applets based on the new API". However, it is a 3.2-specific issue, so no
backport is needed.
2025-05-27 07:53:19 +02:00
Christopher Faulet
c0ecef71d7 BUG/MEDIUM: hlua: Fix getline() for TCP applets to work with applet's buffers
The commit e5e36ce09 ("BUG/MEDIUM: hlua/cli: Fix lua CLI commands to work
with applet's buffers") fixed the TCP applets API to work with applets using
their own buffers. However, the getline() function was not updated. It could
be an issue for anyone registering a CLI command reading lines.

This patch should be backported as far as 3.0.
2025-05-27 07:53:01 +02:00
Christopher Faulet
c64781c2c8 BUG/MEDIUM: hlua: Properly detect shutdowns for TCP applets based on the new API
The internal function responsible for receiving data for TCP applets with
internal buffers is buggy. Indeed, for these applets, the buffer API is used
to get data, so there are no tests on the SE to properly detect connection
shutdowns. So, it must be performed by hand after the call to b_getblk_nc().

This patch must be backported as far as 3.0.
2025-05-26 19:00:00 +02:00
Christopher Faulet
4d4da515f2 BUG/MEDIUM: cli/ring: Properly handle shutdown in "show event" I/O handler
The commit 03dc54d802 ("BUG/MINOR: ring: Fix I/O handler of "show event"
command to not rely on the SC") introduced a regression. By removing
dependencies on the SC, a test to detect client shutdowns was removed. So
now, the CLI applet is no longer released when the client shuts the
connection during a "show event -w".

So of course, we should not use the SC to detect the shutdowns. But the SE
must be used instead.

It is a 3.2-specific issue, so no backport needed.
2025-05-26 19:00:00 +02:00
Christopher Faulet
99e755d673 MINOR: listeners: Add support for a label on bind line
It is now possible to set a label on a bind line. All sockets attached to
this bind line inherit this label. The idea is to be able to group sockets.
For now, there is no mechanism to create these groups; this must be done by
hand.
2025-05-26 19:00:00 +02:00
Christopher Faulet
2808e3577f REGTESTS: Explicitly allow failing shell commands in some scripts
VTest2, which should replace VTest in a few months, will reject any failing
command in shell blocks. However, some scripts execute commands expecting an
error, in order to parse the error output. So, now use "set +e" in those
scripts to explicitly state that failing commands are expected.

It is just used for non-final commands. At the end, the shell block must
still report a success.
2025-05-26 19:00:00 +02:00
Christopher Faulet
82c2911248 REGTESTS: Make the script testing conditional set-var compatible with Vtest2
VTest2 will replace VTest in a few months. There are not many changes
expected. One of them is that a User-Agent header is added by default in all
requests, except if a custom one is already set or if the "-nouseragent"
option is used. To remain compatible with VTest, it is not possible to use
that option to avoid the header addition. So, a custom user-agent is added
in the last test of "sample_fetches/cond_set_var.vtc" to be sure it will
pass with both VTest and VTest2. It is mandatory because the request length
is tested.
2025-05-26 19:00:00 +02:00
Willy Tarreau
5b937b7a97 DOC: config: clarify the basics of ACLs (call point, multi-valued etc)
This is essentially in order to address the concerns expressed in
issue #2226 where it is mentioned that the moment they are called is
not clear enough. Admittedly, re-reading the paragraph doesn't make
it obvious on a quick read that they behave like functions. This patch
adds an extra paragraph that makes the parallel with programming
languages' boolean functions and explains the fact that they can be
multi-valued. Hoping this is clearer now.
2025-05-26 16:25:22 +02:00
Willy Tarreau
ef9511be90 DOC: config: mention in bytes_in and bytes_out that they're read on input
Issue #2267 suggests that it's unclear what exactly the byte counts mean
(particularly when compression is involved). Let's clarify that the counts
are read on data input and that they also cover headers and a bit of
internal overhead.
2025-05-26 15:54:36 +02:00
Christopher Faulet
e70c23e517 BUG/MEDIUM: h3: Declare absolute URI as normalized when a :authority is found
Since commit 2c3d656f8 ("MEDIUM: h3: use absolute URI form with
:authority"), the absolute URI form is used when a ':authority'
pseudo-header is found. However, this URI was not declared as normalized
internally. So, when the request is reformatted to be sent to an h1 server,
the absolute-form is used instead of the origin-form. It is unexpected and
may be an issue for some servers that could reject the request.

So, now, we take care to set the HTX_SL_F_HAS_AUTHORITY flag on the HTX
message when an authority was found, and the HTX_SL_F_NORMALIZED_URI flag
is set for "http" or "https" schemes.

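A rough sketch of the flag logic (the flag names come from the message
above, their values here are purely illustrative):

  #define HTX_SL_F_HAS_AUTHORITY  0x0100   /* illustrative values */
  #define HTX_SL_F_SCHM_HTTP      0x0200
  #define HTX_SL_F_SCHM_HTTPS     0x0400
  #define HTX_SL_F_NORMALIZED_URI 0x0800

  static unsigned int h3_uri_flags(unsigned int flags, int authority_found)
  {
      if (!authority_found)
          return flags;
      flags |= HTX_SL_F_HAS_AUTHORITY;
      /* only plain "http"/"https" URIs are declared as normalized, so
       * they may be sent in origin-form to an h1 server */
      if (flags & (HTX_SL_F_SCHM_HTTP | HTX_SL_F_SCHM_HTTPS))
          flags |= HTX_SL_F_NORMALIZED_URI;
      return flags;
  }
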
No backport needed because the commit above must not be backported. It
should fix a regression reported on the 3.2-dev17 in issue #2977.

This commit depends on "BUG/MINOR: h3: Set HTX flags corresponding to the
scheme found in the request".
2025-05-26 11:47:23 +02:00
Christopher Faulet
da9792cca8 BUG/MINOR: h3: Set HTX flags corresponding to the scheme found in the request
When a ":scheme" pseudo-header is found in a h3 request, the
HTX_SL_F_HAS_SCHM flag must be set on the HTX message. And if the scheme is
'http' or 'https', the corresponding HTX flag must also be set. So,
respectively, HTX_SL_F_SCHM_HTTP or HTX_SL_F_SCHM_HTTPS.

It is mainly used to send the right ":scheme" pseudo-header value to H2
server on backend side.

This patch could be backported as far as 2.6.
2025-05-26 11:38:29 +02:00
Willy Tarreau
083708daf8 DOC: config: fix alphabetical ordering of internal sample fetch functions
Some misordering has been accumulating over time, making some of them
hard to spot. Also "uptime" was not indexed.
2025-05-26 09:36:23 +02:00
Willy Tarreau
52c2247d90 DOC: config: fix alphabetical ordering of layer 4 sample fetch functions
Some misordering has been accumulating over time, making some of them
hard to spot.
2025-05-26 09:33:17 +02:00
Willy Tarreau
770098f5e3 DOC: config: fix alphabetical ordering of layer 5 sample fetch functions
Some misordering has been accumulating over time, making some of them
hard to spot.
2025-05-26 09:26:11 +02:00
Willy Tarreau
5261e35b8f DOC: config: fix alphabetical ordering of layer 6 sample fetch functions
Some misordering has been accumulating over time, making some of them
hard to spot.
2025-05-26 09:26:11 +02:00
Willy Tarreau
e9248243e9 DOC: config: fix alphabetical ordering of layer 7 sample fetch functions
Some misordering has been accumulating over time, making some of them
hard to spot.
2025-05-26 09:26:11 +02:00
Willy Tarreau
38456f63a3 DOC: config: clarify the legacy cookie and header captures
As reported in issue #2195, cookie captures and header captures are no
longer the recommended way to proceed. Let's mention that this is the
legacy way and provide a few pointers to the recommended functions and
actions to use the modern methods.
2025-05-26 08:56:33 +02:00
Willy Tarreau
da8d6d1b2c DOC: config: clarify the wording around single/double quotes
As reported in issue #2327, the wording used in the section about quoting
can be read two ways due to the use of the two types of quotes to protect
each other. Better to stick to one type of quoting, without mixing the two,
when mentioning them.
2025-05-26 08:36:33 +02:00
William Lallemand
d607940915 DOC: configuration: fix the example in crt-store
Fix a bad example in the crt-store section. site1 does not use the "web"
crt-store but the global one.

Must be backported as far as 3.0; however, the section was numbered 3.12 in
previous versions.
2025-05-25 16:55:08 +02:00
Remi Tricot-Le Breton
90441e9bfe BUG/MAJOR: cache: Crash because of wrong cache entry deleted
When "vary" is enabled, we can have multiple entries for a given primary
key in the cache tree. There is a limit to how many secondary entries
can be inserted for a given key. When we try to insert a new secondary
entry, if the limit is already reached, we can try to find expired
entries with the same primary key, and if the limit is still reached we
want to abort the current insertion and to remove the node that was just
inserted.

In commit "a29b073: MEDIUM: cache: Add refcount on cache_entry" though,
a regression was introduced. Instead of removing the entry just inserted
as the comments suggested, we removed the second to last entry and
returned NULL. We then reset the eb.key of the cache_entry in the caller
because we assumed that the entry was already removed from the tree.

This means that some entries with an empty key were wrongly kept in the
tree and the last secondary entry, which keeps count of the number of
secondary entries for a given key, was removed.

This ended up causing some crashes later on when we tried to iterate
over the elements of this given key. The crash could occur in multiple
places, either when trying to retrieve an entry or to add some new ones.

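The corrected logic boils down to the following standalone sketch (a plain
list stands in for the ebtree here, names are illustrative):

  #include <stddef.h>

  struct sec_entry { struct sec_entry *next; };

  static struct sec_entry *insert_secondary(struct sec_entry **head,
                                            unsigned int count,
                                            unsigned int limit,
                                            struct sec_entry *new_entry)
  {
      new_entry->next = *head;
      *head = new_entry;                 /* tentative insertion */

      if (count + 1 > limit) {
          /* still over the limit: remove the entry just inserted */
          *head = new_entry->next;
          new_entry->next = NULL;
          return NULL;                   /* nothing was kept in the tree */
      }
      return new_entry;
  }
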
This crash was raised in GitHub issue #2950.
The fix should be backported up to 3.0.
2025-05-23 22:38:54 +02:00
Willy Tarreau
84ffb3d0a9 MINOR: config: list recently added sections with -dKcfg
Newly added sections (crt-store, traces, acme) were not listed in
-dKcfg, let's add them. For now they have to be manually enumerated.
2025-05-23 10:49:33 +02:00
Willy Tarreau
28c7a22790 BUG/MEDIUM: server: fix potential null-deref after previous fix
A valid build warning was reported in the CI with latest commit b40ce97ecc
("BUG/MEDIUM: server: fix crash after duplicate GUID insertion"). Indeed,
if the first test in the function fails, we branch to the err label
with guid==NULL and will crash there. Let's just test guid before
dereferencing it for freeing.

This needs to be backported to 3.0 as well since the commit above was
meant to go there.
2025-05-22 18:09:12 +02:00
Amaury Denoyelle
b40ce97ecc BUG/MEDIUM: server: fix crash after duplicate GUID insertion
On "add server", if a GUID is defined, guid_insert() is used to add the
entry into the global GUID tree. If a similar entry already exists, GUID
insertion fails and the server creation is eventually aborted.

A crash could occur in this case because of an invalid memory access via
guid_remove(). The latter is invoked via free_server() as the server
insertion is rejected. The invalid access occurs on the GUID key.

The issue occurs because of guid_insert(). The function properly
deallocates the GUID key on duplicate insertion, but it failed to reset
<guid.node.key> to NULL. This caused the invalid memory access in
guid_remove(). To fix this, ensure that the key member is properly reset
on the guid_insert() error path.

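The error path now follows the classic "free and reset" pattern, sketched
below (standalone, illustrative types, not the real guid code):

  #include <stdlib.h>
  #include <string.h>

  struct guid_sketch { char *key; };

  static int guid_insert_sketch(struct guid_sketch *node, const char *key,
                                int duplicate)
  {
      node->key = strdup(key);
      if (!node->key)
          return -1;

      if (duplicate) {
          free(node->key);
          node->key = NULL;   /* the missing reset: a later cleanup now
                               * sees NULL, not a dangling pointer */
          return -1;
      }
      return 0;
  }
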
This must be backported up to 3.0.
2025-05-22 17:59:37 +02:00
Amaury Denoyelle
5e088e3f8e MINOR: server: use stress mode for "add server help"
Implement stress mode on "add server help". This ensures that the
command is fully reentrant on full output buffer.

For testing, it requires compilation with USE_STRESS and global setting
"stress-level 1".
2025-05-22 17:40:05 +02:00
Amaury Denoyelle
4de5090976 MINOR: server: implement "add server help"
Implement "help" as a sub-command for "add server" CLI. The objective is
to list all the keywords that are supported for dynamic servers. CLI IO
handler and add_srv_ctx are used to support reentrancy on full output
buffer.

Now that this command is implemented, the outdated keyword list on "add
server" from management documentation can be removed.
2025-05-22 17:40:05 +02:00
Amaury Denoyelle
2570892c41 MINOR: server: define CLI I/O handler for "add server"
Extend "add server" to support an IO handler function named
cli_io_handler_add_server(). A context object is also defined whose
usage will depend on IO handler capabilities.

IO handler is skipped when "add server" is run in default mode, i.e. on
a dynamic server creation. Thus, currently IO handler is unneeded.
However, it will become useful to support sub-commands for "add server".

Note that the return value of the "add server" parser has been changed on
server creation success. Previously, it was used incorrectly to report
whether the server was inserted or not. In fact, the parser return value is
used by the CLI generic code to detect whether command processing has been
completed or should continue to the IO handler. Now, "add server" always
returns 1 to signal that CLI processing is completed. This is necessary to
preserve the CLI output emitted by the parser, even now that an IO handler
is defined for the command. Previously, output was emitted in every
situation because no IO handler was defined. See the code snippet below
from cli.c for a better overview:

  if (kw->parse && kw->parse(args, payload, appctx, kw->private) != 0) {
          ret = 1;
          goto fail;
  }

  /* kw->parse could set its own io_handler or io_release handler */
  if (!appctx->cli_ctx.io_handler) {
          ret = 1;
          goto fail;
  }

  appctx->st0 = CLI_ST_CALLBACK;
  ret = 1;
  goto end;
2025-05-22 17:40:05 +02:00
Willy Tarreau
1c0f2e62ad MINOR: ssl: also provide the "tls-tickets" bind option
Currently there is "no-tls-tickets", which is also supported in the
ssl-default-bind-options directive, but there's no way to re-enable tickets
on a specific "bind" line. This patch simply provides the option to
re-enable them. Note that the flag is inverted because tickets are enabled
by default and the "no-tls-tickets" option sets the flag to disable them.
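
The inverted-flag pattern can be sketched as follows (the bit value is
illustrative):

  #define SSL_O_NO_TLS_TICKETS 0x0100   /* illustrative value */

  /* tickets are on by default, so the stored bit means "disabled";
   * "tls-tickets" simply clears the bit set by "no-tls-tickets" */
  static void set_tickets_option(unsigned int *ssl_options, int no_variant)
  {
      if (no_variant)
          *ssl_options |= SSL_O_NO_TLS_TICKETS;    /* "no-tls-tickets" */
      else
          *ssl_options &= ~SSL_O_NO_TLS_TICKETS;   /* "tls-tickets" */
  }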
2025-05-22 15:31:54 +02:00
Willy Tarreau
3494775a1f MINOR: ssl: support strict-sni in ssl-default-bind-options
Several users already reported that it would be nice to support
strict-sni in ssl-default-bind-options. However, in order to support
it, we also need an option to disable it.

This patch moves the setting of the option from the strict_sni field
to a flag in the ssl_options field so that it can be inherited from
the default bind options, and adds a new "no-strict-sni" directive to
allow to disable it on a specific "bind" line.

The test file "del_ssl_crt-list.vtc" which already tests both options
was updated to make use of the default option and the no- variant to
confirm everything continues to work.
2025-05-22 15:31:54 +02:00
Christopher Faulet
7244f16ac4 MINOR: promex: Add agent check status/code/duration metrics
In the Prometheus exporter, the last health check status is already exposed,
with its code and duration in seconds. The server status is also exposed.
But the information about the agent check is not available. It is not
really handy because when a server status is changed because of the agent,
it is not obvious by looking at the Prometheus metrics. Indeed, the server
may be reported as DOWN for instance, while the health check status still
reports a success. Being able to get the agent status in that case could be
valuable.

So now, the last agent check status is exposed, with its code and duration
in seconds. The following metrics can now be grabbed:

  * haproxy_server_agent_status
  * haproxy_server_agent_code
  * haproxy_server_agent_duration_seconds

Note that unlike the other metrics, no per-backend aggregated metric is
exposed.

This patch is related to issue #2983.
2025-05-22 09:50:10 +02:00
Willy Tarreau
0ac41ff97e [RELEASE] Released version 3.2-dev17
Released version 3.2-dev17 with the following main changes :
    - DOC: configuration: explicit multi-choice on bind shards option
    - BUG/MINOR: sink: detect and warn when using "send-proxy" options with ring servers
    - BUG/MEDIUM: peers: also limit the number of incoming updates
    - MEDIUM: hlua: Add function to change the body length of an HTTP Message
    - BUG/MEDIUM: stconn: Disable 0-copy forwarding for filters altering the payload
    - BUG/MINOR: h3: don't insert more than one Host header
    - BUG/MEDIUM: h1/h2/h3: reject forbidden chars in the Host header field
    - DOC: config: properly index "table" and "stick-table" in their section
    - DOC: management: change reference to configuration manual
    - BUILD: debug: mark ha_crash_now() as attribute(noreturn)
    - IMPORT: slz: avoid multiple shifts on 64-bits
    - IMPORT: slz: support crc32c for lookup hash on sse4 but only if requested
    - IMPORT: slz: use a better hash for machines with a fast multiply
    - IMPORT: slz: fix header used for empty zlib message
    - IMPORT: slz: silence a build warning on non-x86 non-arm
    - BUG/MAJOR: leastconn: do not loop forever when facing saturated servers
    - BUG/MAJOR: queue: properly keep count of the queue length
    - BUG/MINOR: quic: fix crash on quic_conn alloc failure
    - BUG/MAJOR: leastconn: never reuse the node after dropping the lock
    - MINOR: acme: renewal notification over the dpapi sink
    - CLEANUP: quic: Useless BIO_METHOD initialization
    - MINOR: quic: Add useful error traces about qc_ssl_sess_init() failures
    - MINOR: quic: Allow the use of the new OpenSSL 3.5.0 QUIC TLS API (to be completed)
    - MINOR: quic: implement all remaining callbacks for OpenSSL 3.5 QUIC API
    - MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
    - MINOR: quic: OpenSSL 3.5 trick to support 0-RTT
    - DOC: update INSTALL for QUIC with OpenSSL 3.5 usages
    - DOC: management: update 'acme status'
    - BUG/MEDIUM: wdt: always ignore the first watchdog wakeup
    - CLEANUP: wdt: clarify the comments on the common exit path
    - BUILD: ssl: avoid possible printf format warning in traces
    - BUILD: acme: fix build issue on 32-bit archs with 64-bit time_t
    - DOC: management: precise some of the fields of "show servers conn"
    - BUG/MEDIUM: mux-quic: fix BUG_ON() on rxbuf alloc error
    - DOC: watchdog: update the doc to reflect the recent changes
    - BUG/MEDIUM: acme: check if acme domains are configured
    - BUG/MINOR: acme: fix formatting issue in error and logs
    - EXAMPLES: lua: avoid screen refresh effect in "trisdemo"
    - CLEANUP: quic: remove unused cbuf module
    - MINOR: quic: move function to check stream type in utils
    - MINOR: quic: refactor handling of streams after MUX release
    - MINOR: quic: add some missing includes
    - MINOR: quic: adjust quic_conn-t.h include list
    - CLEANUP: cfgparse: alphabetically sort the global keywords
    - MINOR: glitches: add global setting "tune.glitches.kill.cpu-usage"
2025-05-21 15:56:06 +02:00
Willy Tarreau
a1577a89a0 MINOR: glitches: add global setting "tune.glitches.kill.cpu-usage"
It was mentioned during the development of glitches that it would be
nice to support not killing misbehaving connections below a certain
CPU usage so that poor implementations that routinely misbehave without
impact are not killed. This is now possible by setting a CPU usage
threshold under which we don't kill them via this parameter. It defaults
to zero so that we continue to kill them by default.
2025-05-21 15:47:42 +02:00
Willy Tarreau
eee57b4d3f CLEANUP: cfgparse: alphabetically sort the global keywords
The global keywords table was no longer sorted at all, let's fix it to
ease spotting the searched ones.
2025-05-21 15:47:42 +02:00
Amaury Denoyelle
00d90e8839 MINOR: quic: adjust quic_conn-t.h include list
Adjust the include list in quic_conn-t.h. This file is included in many
QUIC sources, so it is useful to keep it as lightweight as possible. Note
that connection/QUIC MUX types are turned into forward declarations for
better layer separation.
2025-05-21 14:44:27 +02:00
Amaury Denoyelle
01e3b2119a MINOR: quic: add some missing includes
Insert some missing include statements in QUIC source files. This was
detected after the next commit, which adjusts the include list used in the
quic_conn-t.h file.
2025-05-21 14:44:27 +02:00
Amaury Denoyelle
f286288471 MINOR: quic: refactor handling of streams after MUX release
The quic-conn layer has to handle STREAM frames itself after MUX release.
If the stream was already seen, it is probably only a retransmitted frame
which can be safely ignored. For other streams, an active closure may be
needed.

Thus it's necessary that the quic-conn layer knows the highest stream ID
already handled by the MUX after its release. Previously, this was done via
the <nb_streams> member array in the quic-conn structure.

Refactor this by replacing <nb_streams> with two members called
<stream_max_uni>/<stream_max_bidi>. Indeed, it is unnecessary for the
quic-conn layer to monitor locally opened uni streams, as the peer cannot
by definition emit a STREAM frame on them. Also, bidirectional streams are
always opened by the remote side.

Previously, <nb_streams> was set by the quic-stream layer. Now, the
<stream_max_uni>/<stream_max_bidi> members are only set one time, just
prior to QUIC MUX release. This is sufficient as quic-conn does not use
them if the MUX is available.

Note that previously, IDs were used relative to their type, thus
incremented by 1 after shifting the original value. For simplification,
use the plain stream ID, which is incremented by 4.
2025-05-21 14:26:45 +02:00
Amaury Denoyelle
07d41a043c MINOR: quic: move function to check stream type in utils
Move the general function to check whether a stream is uni or bidirectional
from the QUIC MUX to the quic_utils module. This should prevent unnecessary
includes of the QUIC MUX header file in other sources.
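
Per RFC 9000, the stream type is encoded in the two low bits of the stream
ID (bit 0x1: server-initiated, bit 0x2: unidirectional), so the helper being
moved is essentially of this kind (illustrative sketch, not the exact
haproxy code):

  #include <stdint.h>

  static inline int quic_stream_is_uni(uint64_t id)
  {
      return (id & 0x2) != 0;           /* bit 0x2 set: unidirectional */
  }

  static inline int quic_stream_is_bidi(uint64_t id)
  {
      return (id & 0x2) == 0;
  }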
2025-05-21 14:17:41 +02:00
Amaury Denoyelle
cf45bf1ad8 CLEANUP: quic: remove unused cbuf module
Cbuf are not used anymore. Remove the related source and header files,
as well as include statements in the rest of QUIC source files.
2025-05-21 14:16:37 +02:00
Baptiste Assmann
b437094853 EXAMPLES: lua: avoid screen refresh effect in "trisdemo"
In current version of the game, there is a "screen refresh" effect: the
screen is cleared before being re-drawn.
I moved the clear right after the connection is opened and removed it
from rendering time.
2025-05-21 12:00:53 +02:00
William Lallemand
8b121ab6f7 BUG/MINOR: acme: fix formatting issue in error and logs
Stop emitting \n in errmsg for intermediate error messages; this was
emitting multiline logs and was jumping to a new line in the middle of
sentences.

We don't need to emit them in acme_start_task() since the errmsg is
output in a send_log, which already contains a \n, or on the CLI, which
also emits it.
2025-05-21 11:41:28 +02:00
William Lallemand
156f4bd7a6 BUG/MEDIUM: acme: check if acme domains are configured
When starting the ACME task with a ckch_conf which does not contain the
domains, the ACME task would segfault because it would try to dereference a
NULL pointer in this case.

The patch fixes the issue by emitting a warning when no domains are
configured. It is not done at configuration parsing time because it is not
easy to emit the warning there: there is no callback system which gives
access to the whole ckch_conf once a line is parsed.

No backport needed.
2025-05-21 11:41:28 +02:00
Willy Tarreau
f5ed309449 DOC: watchdog: update the doc to reflect the recent changes
The watchdog was improved and fixed a few months ago, but the doc had
not been updated to reflect this. That's now done.
2025-05-21 11:34:55 +02:00
Amaury Denoyelle
e399daa67e BUG/MEDIUM: mux-quic: fix BUG_ON() on rxbuf alloc error
RX buffer allocation has been reworked in current dev tree. The
objective is to support multiple buffers per QCS to improve upload
throughput.

RX buffer allocation failure is handled simply : the whole connection is
closed. This is done via qcc_set_error(), with INTERNAL_ERROR as error
code. This function contains a BUG_ON() to ensure it is called only one
time per connection instance.

On RX buffer alloc failure, the aforementioned BUG_ON() crashes due to a
double invocation of qcc_set_error(): first by qcs_get_rxbuf(), and
immediately after by qcc_recv(), which is the caller of the previous
one. This regression was introduced by the following commit.

  60f64449fb
  MAJOR: mux-quic: support multiple QCS RX buffers

To fix this, simply remove the qcc_set_error() invocation in
qcs_get_rxbuf(). On buffer alloc failure, qcc_recv() is responsible for
setting the error.

This does not need to be backported.
2025-05-21 11:33:00 +02:00
Willy Tarreau
5c628d4e09 DOC: management: precise some of the fields of "show servers conn"
As reported in issue #2970, the output of "show servers conn" is not
clear. It was essentially meant as a debugging tool during some changes
to idle connections management, but if some users want to monitor or
graph them, more info is needed. The doc mentions the currently known
list of fields, and reminds that this output is not meant to be stable
over time, but as long as it does not change, it can provide some useful
metrics to some users.
2025-05-21 10:45:07 +02:00
Willy Tarreau
4b52d5e406 BUILD: acme: fix build issue on 32-bit archs with 64-bit time_t
The build failed on mips32 with a 64-bit time_t here:

  https://github.com/haproxy/haproxy/actions/runs/15150389164/job/42595310111

Let's just turn the "remain" variable used to show the remaining time
into a more portable ullong and use %llu for all format specifiers,
since long remains limited to 32-bit on 32-bit archs.

No backport needed.
2025-05-21 10:18:47 +02:00
Willy Tarreau
09d4c9519e BUILD: ssl: avoid possible printf format warning in traces
When building on MIPS-32 with gcc-9.5 and glibc-2.31, I got this:

  src/ssl_trace.c: In function 'ssl_trace':
  src/ssl_trace.c:118:42: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'ssize_t' {aka 'const int'} [-Wformat=]
    118 |     chunk_appendf(&trace_buf, " : size=%ld", *size);
        |                                        ~~^   ~~~~~
        |                                          |   |
        |                                          |   ssize_t {aka const int}
        |                                          long int
        |                                        %d

Let's just cast the type. No backport needed.
2025-05-21 10:01:14 +02:00
Willy Tarreau
3b2fb5cc15 CLEANUP: wdt: clarify the comments on the common exit path
The conditions in which we reach the check for ha_panic() and
ha_stuck_warning() are not super clear; let's reformulate them.
2025-05-20 16:37:06 +02:00
Willy Tarreau
0a8bfb5b90 BUG/MEDIUM: wdt: always ignore the first watchdog wakeup
With commit a06c215f08 ("MEDIUM: wdt: always make the faulty thread
report its own warnings"), when the TH_FL_STUCK flag was flipped on,
we'd then go to the panic code instead of giving a second chance like
before the commit. This can trigger rare cases that only happen with
moderate loads like was addressed by commit 24ce001771 ("BUG/MEDIUM:
wdt: fix the stuck detection for warnings"). This is in fact due to
the loss of the common "goto update_and_leave" that used to serve
both the warning code and the flag setting for probation, and it's
apparently what hit Christian in issue #2980.

Let's make sure we exit naturally when turning the bit on for the
first time. Let's also update the confusing comment at the end of
the check that was left over by latest change.

Since the first commit was backported to 3.1, this commit should be
backported there as well.
2025-05-20 16:37:03 +02:00
William Lallemand
dcdf27af70 DOC: management: update 'acme status'
Update the 'acme status' section with the "Stopped" status and fix the
description.
2025-05-20 16:08:57 +02:00
Frederic Lecaille
bbe302087c DOC: update INSTALL for QUIC with OpenSSL 3.5 usages
Update the QUIC sections which mention the OpenSSL library use cases.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
08eee0d9cf MINOR: quic: OpenSSL 3.5 trick to support 0-RTT
For an unidentified reason, SSL_do_handshake() succeeds at its first call
when 0-RTT is enabled for the connection. This behavior looks very similar
to the one encountered with the AWS-LC stack; that said, it was documented
by AWS-LC. This issue leads the connection to stop sending handshake packets
after having released the handshake encryption level. In fact, no handshake
packets could even be sent, leading the handshake to always fail.

To fix this, this patch simulates a "handshake in progress" state, waiting
for the application level read secret to be established by the TLS stack.
This may happen only after the QUIC listener has completed/confirmed the
handshake upon handshake CRYPTO data receipt from the peer.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
849a3af14e MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
A QUIC connection must send its transport parameters using a TLS custom
extension. This extension is reset by SSL_set_SSL_CTX(). It can be restored
by calling quic_ssl_set_tls_cbs() (which calls SSL_set_quic_tls_cbs()).
2025-05-20 15:00:06 +02:00
Frederic Lecaille
b3ac1a636c MINOR: quic: implement all remaining callbacks for OpenSSL 3.5 QUIC API
The quic_conn struct is modified for two reasons. The first one is to store
the encoded version of the local transport parameters, as is done for
USE_QUIC_OPENSSL_COMPAT. Indeed, the local transport parameters "should
remain valid until after the parameters have been sent" as mentioned by the
SSL_set_quic_tls_cbs(3) manual. In our case, the buffer is a static buffer
attached to the quic_conn object. qc_ssl_set_quic_transport_params() is the
function whose role is to call SSL_set_tls_quic_transport_params() (aliased
by SSL_set_quic_transport_params()) to set these local transport parameters
into the TLS stack from the buffer attached to the quic_conn struct.

The second quic_conn struct modification is the addition of the new
->prot_level (SSL protection level) member, added to the quic_conn struct
to store "the most recent write encryption level set via the
OSSL_FUNC_SSL_QUIC_TLS_yield_secret_fn callback (if it has been called)" as
mentioned by the SSL_set_quic_tls_cbs(3) manual.

This patch finally implements the five remaining callbacks to make the
haproxy QUIC implementation work.

OSSL_FUNC_SSL_QUIC_TLS_crypto_send_fn() (ha_quic_ossl_crypto_send()) is easy
to implement. It calls ha_quic_add_handshake_data() after having converted
the qc->prot_level TLS protection level value to the correct
ssl_encryption_level_t (boringSSL API/quictls) value.

OSSL_FUNC_SSL_QUIC_TLS_crypto_recv_rcd_fn() (ha_quic_ossl_crypto_recv_rcd())
provides the non-contiguous addresses to the TLS stack, without releasing
them.

OSSL_FUNC_SSL_QUIC_TLS_crypto_release_rcd_fn() (ha_quic_ossl_crypto_release_rcd())
releases these non-contiguous buffers, relying on the fact that the list of
encryption levels (qc->qel_list) is correctly ordered by the SSL protection
level secret establishment order (by the TLS stack).

OSSL_FUNC_SSL_QUIC_TLS_yield_secret_fn() (ha_quic_ossl_yield_secret()) is a
simple wrapping function over ha_quic_set_encryption_secrets(), which is
used by the boringSSL/quictls API.

OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn() (ha_quic_ossl_got_transport_params())
has the role of storing the transport parameters received from the peer. It
simply calls quic_transport_params_store() and sets them into the TLS stack
by calling qc_ssl_set_quic_transport_params().

Also add some comments for all the OpenSSL 3.5 QUIC API callbacks.

This patch has no impact on the other uses of the QUIC API provided by the
other TLS stacks.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
dc6a3c329a MINOR: quic: Allow the use of the new OpenSSL 3.5.0 QUIC TLS API (to be completed)
This patch allows the use of the new OpenSSL 3.5.0 QUIC TLS API when it is
available and detected at compilation time. The detection relies on the
presence of the OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND macro from
openssl-compat.h. Indeed, this macro is defined by OpenSSL since the 3.5.0
version. It is not defined by quictls. This helps in distinguishing these
two TLS stacks. When the detection succeeds, HAVE_OPENSSL_QUIC is also
defined by openssl-compat.h. Then, it is this new macro which is used to
detect the availability of the new OpenSSL 3.5.0 QUIC TLS API.

Note that this detection is done only if USE_QUIC_OPENSSL_COMPAT is not
asked. So, USE_QUIC_OPENSSL_COMPAT and HAVE_OPENSSL_QUIC are exclusive.

At the same location, from openssl-compat.h, the ssl_encryption_level_t
enum is defined. This enum was defined by quictls and is extensively used
by the haproxy QUIC implementation. SSL_set_quic_transport_params() is
replaced by SSL_set_quic_tls_transport_params().
SSL_set_quic_early_data_enabled() (quictls) is also replaced by
SSL_set_quic_tls_early_data_enabled() (OpenSSL). SSL_quic_read_level()
(quictls) is not defined by OpenSSL. It is only used by the traces to log
the current TLS stack decryption level (read). A macro makes it return -1,
which is an unused value.

Most of the differences between the quictls and OpenSSL QUIC APIs are in
quic_ssl.c, where some callbacks must be defined for these two APIs. This
is why this patch modifies quic_ssl.c to define an array of OSSL_DISPATCH
structs: <ha_quic_dispatch>. Each element of this array defines a callback.
So, this patch implements these six callbacks:

  - ha_quic_ossl_crypto_send()
  - ha_quic_ossl_crypto_recv_rcd()
  - ha_quic_ossl_crypto_release_rcd()
  - ha_quic_ossl_yield_secret()
  - ha_quic_ossl_got_transport_params() and
  - ha_quic_ossl_alert().

But at this time, these implementations, which must return an int, return
0, which is interpreted as a failure by the OpenSSL QUIC API, except for
ha_quic_ossl_alert() which is implemented the same way as for quictls. The
five remaining functions above will be implemented by the next patches to
come.

ha_quic_set_encryption_secrets() and ha_quic_add_handshake_data() have been
moved to be defined for both the quictls and OpenSSL QUIC APIs.

These callbacks are attached to the SSL objects (sessions) by calling the
new qc_ssl_set_cbs() function. The latter calls the correct function to
attach the correct callbacks to the SSL objects (defined by
<ha_quic_method> for quictls, and <ha_quic_dispatch> for OpenSSL).

The calls to SSL_provide_quic_data() and SSL_process_quic_post_handshake()
have also been disabled. These functions are not defined by the OpenSSL
QUIC API. At this time, the functions which call them are still defined
when HAVE_OPENSSL_QUIC is defined.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
894595b711 MINOR: quic: Add useful error traces about qc_ssl_sess_init() failures
There were no traces to diagnose qc_ssl_sess_init() failures from the QUIC
traces. This patch adds calls to TRACE_DEVEL() into qc_ssl_sess_init() and
its caller
(qc_alloc_ssl_sock_ctx()). This was useful at least to diagnose SSL context
initialization failures when porting QUIC to the new OpenSSL 3.5 QUIC API.

Should be easily backported as far as 2.6.
2025-05-20 15:00:06 +02:00
Frederic Lecaille
a2822b1776 CLEANUP: quic: Useless BIO_METHOD initialization
This code has been there since the start of the QUIC implementation. It was
supposed to initialize <ha_quic_meth> as a static BIO_METHOD object. But
this BIO_METHOD is not used at all!

Should be backported as far as 2.6 to help integrate the next patches to come.
2025-05-20 15:00:06 +02:00
William Lallemand
e803385a6e MINOR: acme: renewal notification over the dpapi sink
Output a sink message when the certificate was renewed by the ACME
client.

The message is emitted on the "dpapi" sink, and ends with \n\0.
Since the message contains this binary character, the right -0 parameter
must be used when consulting the sink over the CLI:

Example:

	$ echo "show events dpapi -nw -0" | socat -t9999 /tmp/haproxy.sock -
	<0>2025-05-19T15:56:23.059755+02:00 acme newcert foobar.pem.rsa\n\0

When used with the master CLI, @@1 should be used instead of @1 in order
to keep the connection to the worker.

Example:

	$ echo "@@1 show events dpapi -nw -0" | socat -t9999 /tmp/master.sock -
	<0>2025-05-19T15:56:23.059755+02:00 acme newcert foobar.pem.rsa\n\0
2025-05-19 16:07:25 +02:00
Willy Tarreau
99d6c889d0 BUG/MAJOR: leastconn: never reuse the node after dropping the lock
On ARM with 80 cores and a single server, it's sometimes possible to see
a segfault in fwlc_get_next_server() around 600-700k RPS. It seldom
happens as well on x86 with 128 threads with the same config around 1M
rps. It turns out that in fwlc_get_next_server(), before calling
fwlc_srv_reposition(), we have to drop the lock and that one takes it
back again.

The problem is that anything can happen to our node during this time,
and it can be freed. Then when continuing our work, we later iterate
over it and its next to find a node with an acceptable key, and by
doing so we can visit either uninitialized memory or simply nodes that
are no longer in the tree.

A first attempt at fixing this consisted in artificially incrementing
the elements count before dropping the lock, but that turned out to be
even worse because other threads could loop forever on such an element
looking for an entry that does not exist. Maintaining a separate
refcount didn't work well either, and it required to deal with the
memory release while dropping it, which is really not convenient.

Here we're taking a different approach consisting in simply not
trusting this node anymore and going back to the beginning of the
loop, as is done at a few other places as well. This way we can
safely ignore the possibly released node, and the test runs reliably
both on the arm and the x86 platforms mentioned above. No performance
regression was observed either, likely because this operation is quite
rare.

No backport is needed since this appeared with the leastconn rework
in 3.2.
2025-05-19 16:05:03 +02:00
Amaury Denoyelle
d358da4d83 BUG/MINOR: quic: fix crash on quic_conn alloc failure
If there is an alloc failure during qc_new_conn(), cleaning is done via
quic_conn_release(). However, since the below commit, an unchecked
dereferencing of <qc.path> is performed in the latter.

  e841164a44
  MINOR: quic: account for global congestion window

To fix this, simply check <qc.path> before dereferencing it in
quic_conn_release(). This is safe as it is properly initialized to NULL
on qc_new_conn() first stage.

This does not need to be backported.
2025-05-19 11:03:48 +02:00
Willy Tarreau
099c1b2442 BUG/MAJOR: queue: properly keep count of the queue length
The queue length was moved to its own variable in commit 583303c48
("MINOR: proxies/servers: Calculate queueslength and use it."), however a
few places were missed in pendconn_unlink() and assign_server_and_queue()
resulting in never decreasing counts on aborted streams. This was
reproduced when injecting more connections than the total backend
could stand in TCP mode and letting some of them time out in the
queue. No backport is needed, this is only 3.2.
2025-05-17 10:46:10 +02:00
Willy Tarreau
6be02d1c6e BUG/MAJOR: leastconn: do not loop forever when facing saturated servers
Since commit 9fe72bba3 ("MAJOR: leastconn; Revamp the way servers are
ordered."), there's no way to escape the loop visiting the mt_list heads
in fwlc_get_next_server if all servers in the list are saturated,
resulting in a watchdog panic. It can be reproduced with this config
and injecting with more than 2 concurrent conns:

    balance leastconn
    server s1 127.0.0.1:8000 maxconn 1
    server s2 127.0.0.1:8000 maxconn 1

Here we count the number of saturated servers that were encountered, and
escape the loop once the number of saturated ones reaches the number of
remaining servers. No backport is needed since this arrived in 3.2.
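
The escape condition can be sketched like this (standalone, illustrative
types, not the actual fwlc code):

  #include <stddef.h>

  struct srv_elem { struct srv_elem *next; int served; int maxconn; };

  /* walk the candidates; stop once as many saturated servers have been
   * skipped as there are servers left to visit */
  static struct srv_elem *pick_server(struct srv_elem *elem,
                                      unsigned int remaining)
  {
      unsigned int saturated = 0;

      while (elem) {
          if (elem->maxconn && elem->served >= elem->maxconn) {
              if (++saturated >= remaining)
                  return NULL;          /* everything left is full */
              elem = elem->next;
              continue;
          }
          return elem;                  /* usable server found */
      }
      return NULL;
  }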
2025-05-17 10:44:36 +02:00
Willy Tarreau
ccc65012d3 IMPORT: slz: silence a build warning on non-x86 non-arm
Building with clang 16 on MIPS64 yields this warning:

  src/slz.c:931:24: warning: unused function 'crc32_uint32' [-Wunused-function]
  static inline uint32_t crc32_uint32(uint32_t data)
                         ^

Let's guard it using UNALIGNED_LE_OK which is the only case where it's
used. This saves us from introducing a possibly non-portable attribute.

This is libslz upstream commit f5727531dba8906842cb91a75c1ffa85685a6421.
2025-05-16 16:43:53 +02:00
Willy Tarreau
31ca29eee1 IMPORT: slz: fix header used for empty zlib message
Calling slz_rfc1950_finish() without emitting any data would result in
incorrectly emitting a gzip header (rfc1952) instead of a zlib header
(rfc1950) due to a copy-paste between the two wrappers. The impact is
almost inexistent since the zlib format is almost never used in this
context, and compressing totally empty messages is quite rare as well.
Let's take this opportunity for fixing another mistake on an RFC number
in a comment.

This is slz upstream commit 7f3fce4f33e8c2f5e1051a32a6bca58e32d4f818.
2025-05-16 16:43:53 +02:00
Willy Tarreau
411b04c7d3 IMPORT: slz: use a better hash for machines with a fast multiply
The current hash involves 3 simple shifts and additions so that it can
be mapped to a multiply on architectures having a fast multiply. This is
indeed what the compiler does on x86_64. A large range of values was
scanned to try to find more optimal factors on machines supporting such
a fast multiply, and it turned out that new factor 0x1af42f resulted in
smoother hashes that provided on average 0.4% better compression on both
the Silesia corpus and an mbox file composed of very compressible emails
and uncompressible attachments. It's even slightly better than CRC32C
while being faster on Skylake. This patch enables this factor on archs
with a fast multiply.

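The general shape of such a multiply-based hash is shown below; this is
only an illustration of the idea (table size and shift are made up), not
the exact slz code:

  #include <stdint.h>

  #define HASH_BITS 13                  /* illustrative table size */

  static inline uint32_t mul_hash(uint32_t x)
  {
      /* one fast multiply by the new factor, keep the top bits */
      return (x * 0x1af42fU) >> (32 - HASH_BITS);
  }
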
This is slz upstream commit 82ad1e75c13245a835c1c09764c89f2f6e8e2a40.
2025-05-16 16:43:53 +02:00
Willy Tarreau
248bbec83c IMPORT: slz: support crc32c for lookup hash on sse4 but only if requested
If building for sse4 and USE_CRC32C_HASH is defined, then we can use
crc32c to calculate the lookup hash. By default we don't do it because
even on skylake it's slower than the current hash, which only involves
a short multiply (~5% slower). But the gains are marginal (0.3%).

This is slz upstream commit 44ae4f3f85eb275adba5844d067d281e727d8850.

Note: this is not used by default and only merged in order to avoid
divergence between the code bases.
2025-05-16 16:43:53 +02:00
Willy Tarreau
ea1b70900f IMPORT: slz: avoid multiple shifts on 64-bits
On 64-bit platforms, disassembling the code shows that send_huff() performs
a left shift followed by a right one, which are the result of integer
truncation and zero-extension caused solely by using different types at
different levels in the call chain. By making encode24() take a 64-bit
int on input and send_huff() take one optionally, we can remove one shift
in the hot path and gain 1% performance without affecting other platforms.

This is slz upstream commit fd165b36c4621579c5305cf3bb3a7f5410d3720b.
2025-05-16 16:43:53 +02:00
Willy Tarreau
0a91c6dcae BUILD: debug: mark ha_crash_now() as attribute(noreturn)
Building on MIPS64 with clang16 incorrectly reports some uninitialized
value warnings in stats-proxy.c due to some calls to ABORT_NOW() where
the compiler didn't know the code wouldn't return. Let's properly mark
the function as noreturn, and take this opportunity for also marking it
unused to avoid possible warnings depending on the build options (if
ABORT_NOW is not used). No backport needed though it will not harm.
2025-05-16 16:43:53 +02:00
William Lallemand
1eebf98952 DOC: management: change reference to configuration manual
Since e24b77e7 ('DOC: config: move the extraneous sections out of the
"global" definition') the ACME section of the configuration manual was
move from 3.13 to 12.8.

Change the reference to that section in "acme renew".
2025-05-16 16:01:43 +02:00
Willy Tarreau
81e46be026 DOC: config: properly index "table and "stick-table" in their section
Tim reported in issue #2953 that "stick-table" and "table" were not
indexed as keywords. The issue was the indent level. Also let's make
sure to put a box around the "store" arguments as well.
2025-05-16 15:37:03 +02:00
Willy Tarreau
df00164fdd BUG/MEDIUM: h1/h2/h3: reject forbidden chars in the Host header field
In continuation with 9a05c1f574 ("BUG/MEDIUM: h2/h3: reject some
forbidden chars in :authority before reassembly") and the discussion
in issue #2941, @DemiMarie rightfully suggested that Host should also
be sanitized, because it is sometimes used in concatenation, such as
this:

    http-request set-url https://%[req.hdr(host)]%[pathq]

which was proposed as a workaround for h2 upstream servers that require
:authority here:

    https://www.mail-archive.com/haproxy@formilux.org/msg43261.html

The current patch then adds the same check for forbidden chars in the
Host header, using the same function as for the patch above, since in
both cases we validate the host:port part of the authority. This way
we won't reconstruct ambiguous URIs by concatenating Host and path.

Just like the patch above, this can be backported afer a period of
observation.
2025-05-16 15:13:17 +02:00
Willy Tarreau
b84762b3e0 BUG/MINOR: h3: don't insert more than one Host header
Let's make sure we drop extraneous Host headers after having compared
them. That also works when :authority was already present. This way,
like for h1 and h2, we only keep one copy of it, while still making
sure that Host matches :authority. This way, if a request has both
:authority and Host, only one Host header will be produced (from
:authority). Note that due to the different organization of the code
and wording along the evolving RFCs, here we also check that all
duplicates are identical, while h2 ignores them as per RFC7540, but
this will be re-unified later.

This should be backported to stable versions, at least 2.8, though
thanks to the existing checks the impact is probably nul.
2025-05-16 15:13:17 +02:00
Christopher Faulet
f45a632bad BUG/MEDIUM: stconn: Disable 0-copy forwarding for filters altering the payload
It is especially a problem with Lua filters, but it is important to disable
the 0-copy forwarding if a filter alters the payload, or at least to be able
to disable it. While the filter is registered on the data filtering, it is
not an issue (and it is the common case) because, there is now way to
fast-forward data at all. But it may be an issue if a filter decides to
alter the payload and to unregister from data filtering. In that case, the
0-copy forwarding can be re-enabled in a hardly precdictable state.

To fix the issue, a SC flags was added to do so. The HTTP compression filter
set it and lua filters too if the body length is changed (via
HTTPMessage.set_body_len()).

Note that it is an issue because of a bad design about the HTX. Many info
about the message are stored in the HTX structure itself. It must be
refactored to move several info to the stream-endpoint descriptor. This
should ease modifications at the stream level, from filter or a TCP/HTTP
rules.

This should be backported as far as 3.0. If necessary, it may be backported
on lower versions, as far as 2.6. In that case, it must be reviewed and
adapted.
2025-05-16 15:11:37 +02:00
Christopher Faulet
94055a5e73 MEDIUM: hlua: Add function to change the body length of an HTTP Message
There was no function for a lua filter to change the body length of an HTTP
Message. But it is mandatory to be able to alter the message payload. It is
not possible update to directly update the message headers because the
internal state of the message must also be updated accordingly.

It is the purpose of HTTPMessage.set_body_len() function. The new body
length myst be passed as argument. If it is an integer, the right
"Content-Length" header is set. If the "chunked" string is used, it forces
the message to be chunked-encoded and in that case the "Transfer-Encoding"
header.

This patch should fix the issue #2837. It could be backported as far as 2.6.
2025-05-16 14:34:12 +02:00
Willy Tarreau
f2d7aa8406 BUG/MEDIUM: peers: also limit the number of incoming updates
There's a configurable limit to the number of messages sent to a
peer (tune.peers.max-updates-at-once), but this one is not applied to
the receive side. While it can usually be OK with default settings,
setups involving a large tune.bufsize (1MB and above) regularly
experience high latencies and even watchdogs during reloads because
the full learning process sends a lot of data that manages to fill
the entire buffer, and due to the compactness of the protocol, 1MB
of buffer can contain more than 100k updates, meaning taking locks
etc during this time, which is not workable.

Let's make sure the receiving side also respects the max-updates-at-once
setting. For this it counts incoming updates, and refrains from
continuing once the limit is reached. It's a bit tricky to do because
after receiving updates we still have to send ours (and possibly some
ACKs) so we cannot just leave the loop.

This issue was reported on 3.1 but it should progressively be backported
to all versions having the max-updates-at-once option available.
2025-05-15 16:57:21 +02:00
Aurelien DARRAGON
098a5e5c0b BUG/MINOR: sink: detect and warn when using "send-proxy" options with ring servers
using "send-proxy" or "send-proxy-v2" option on a ring server is not
relevant nor supported. Worse, on 2.4 it causes haproxy process to
crash as reported in GH #2965.

Let's be more explicit about the fact that this keyword is not supported
under "ring" context by ignoring the option and emitting a warning message
to inform the user about that.

Ideally, we should do the same for peers and log servers. The proper way
would be to check servers options during postparsing but we currently lack
proper cross-type server postparsing hooks. This will come later and thus
will give us a chance to perform the compatibilty checks for server
options depending on proxy type. But for now let's simply fix the "ring"
case since it is the only one that's known to cause a crash.

It may be backported to all stable versions.
2025-05-15 16:18:31 +02:00
Basha Mougamadou
824bb93e18 DOC: configuration: explicit multi-choice on bind shards option
From the documentation, this wasn't clear enough that shards should
be followed by one of the options number / by-thread / by-group.
Align it with existing options in documentation so that it becomes
more explicit.
2025-05-14 19:41:38 +02:00
Willy Tarreau
17df04ff09 [RELEASE] Released version 3.2-dev16
Released version 3.2-dev16 with the following main changes :
    - BUG/MEDIUM: mux-quic: fix crash on invalid fctl frame dereference
    - DEBUG: pool: permit per-pool UAF configuration
    - MINOR: acme: add the global option 'acme.scheduler'
    - DEBUG: pools: add a new integrity mode "backup" to copy the released area
    - MEDIUM: sock-inet: re-check IPv6 connectivity every 30s
    - BUG/MINOR: ssl: doesn't fill conf->crt with first arg
    - BUG/MINOR: ssl: prevent multiple 'crt' on the same ssl-f-use line
    - BUG/MINOR: ssl/ckch: always free() the previous entry during parsing
    - MINOR: tools: ha_freearray() frees an array of string
    - BUG/MINOR: ssl/ckch: always ha_freearray() the previous entry during parsing
    - MINOR: ssl/ckch: warn when the same keyword was used twice
    - BUG/MINOR: threads: fix soft-stop without multithreading support
    - BUG/MINOR: tools: improve parse_line()'s robustness against empty args
    - BUG/MINOR: cfgparse: improve the empty arg position report's robustness
    - BUG/MINOR: server: dont depend on proxy for server cleanup in srv_drop()
    - BUG/MINOR: server: perform lbprm deinit for dynamic servers
    - MINOR: http: add a function to validate characters of :authority
    - BUG/MEDIUM: h2/h3: reject some forbidden chars in :authority before reassembly
    - MINOR: quic: account Tx data per stream
    - MINOR: mux-quic: account Rx data per stream
    - MINOR: quic: add stream format for "show quic"
    - MINOR: quic: display QCS info on "show quic stream"
    - MINOR: quic: display stream age
    - BUG/MINOR: cpu-topo: fix group-by-cluster policy for disordered clusters
    - MINOR: cpu-topo: add a new "group-by-ccx" CPU policy
    - MINOR: cpu-topo: provide a function to sort clusters by average capacity
    - MEDIUM: cpu-topo: change "performance" to consider per-core capacity
    - MEDIUM: cpu-topo: change "efficiency" to consider per-core capacity
    - MEDIUM: cpu-topo: prefer grouping by CCX for "performance" and "efficiency"
    - MEDIUM: config: change default limits to 1024 threads and 32 groups
    - BUG/MINOR: hlua: Fix Channel:data() and Channel:line() to respect documentation
    - DOC: config: Fix a typo in the "term_events" definition
    - BUG/MINOR: spoe: Don't report error on applet release if filter is in DONE state
    - BUG/MINOR: mux-spop: Don't report error for stream if ACK was already received
    - BUG/MINOR: mux-spop: Make the demux stream ID a signed integer
    - BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
    - MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
    - BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
    - BUG/MEDIUM: mux-spop: Properly handle CLOSING state
    - BUG/MEDIUM: spop-conn: Report short read for partial frames payload
    - BUG/MEDIUM: mux-spop: Properly detect truncated frames on demux to report error
    - BUG/MEDIUM: mux-spop; Don't report a read error if there are pending data
    - DEBUG: mux-spop: Review some trace messages to adjust the message or the level
    - DOC: config: move address formats definition to section 2
    - DOC: config: move stick-tables and peers to their own section
    - DOC: config: move the extraneous sections out of the "global" definition
    - CI: AWS-LC(fips): enable unit tests
    - CI: AWS-LC: enable unit tests
    - CI: compliance: limit run on forks only to manual + cleanup
    - CI: musl: enable unit tests
    - CI: QuicTLS (weekly): limit run on forks only to manual dispatch
    - CI: WolfSSL: enable unit tests
2025-05-14 17:01:46 +02:00
Ilia Shipitsin
12de9ecce5 CI: WolfSSL: enable unit tests
Run the new make unit-tests on the CI.
2025-05-14 17:00:31 +02:00
Ilia Shipitsin
75a1e40501 CI: QuicTLS (weekly): limit run on forks only to manual dispatch 2025-05-14 17:00:31 +02:00
Ilia Shipitsin
a8b1b08fd7 CI: musl: enable unit tests
Run the new make unit-tests on the CI.
2025-05-14 17:00:31 +02:00
Ilia Shipitsin
01225f9aa5 CI: compliance: limit run on forks only to manual + cleanup 2025-05-14 17:00:31 +02:00
Ilia Shipitsin
61b30a09c0 CI: AWS-LC: enable unit tests
Run the new make unit-tests on the CI.
2025-05-14 17:00:31 +02:00
Ilia Shipitsin
944a96156e CI: AWS-LC(fips): enable unit tests
Run the new make unit-tests on the CI.
2025-05-14 17:00:31 +02:00
Willy Tarreau
e24b77e765 DOC: config: move the extraneous sections out of the "global" definition
Due to some historic mistakes that have spread to newly added sections,
a number of of recently added small sections found themselves described
under section 3 "global parameters" which is specific to "global" section
keywords. This is highly confusing, especially given that sections 3.1,
3.2, 3.3 and 3.10 directly start with keywords valid in the global section,
while others start with keywords that describe a new section.

Let's just create a new chapter "12. other sections" and move them all
there. 3.10 "HTTPclient tuning" however was moved to 3.4 as it's really
a definition of the global options assigned to the HTTP client. The
"programs" that are going away in 3.3 were moved at the end to avoid a
renumbering later.

Another nice benefit is that it moves a lot of text that was previously
keeping the global and proxies sections apart.
2025-05-14 16:08:02 +02:00
Willy Tarreau
da67a89f30 DOC: config: move stick-tables and peers to their own section
As suggested by Tim in issue #2953, stick-tables really deserve their own
section to explain the configuration. And peers have to move there as well
since they're totally dedicated to stick-tables.

Now we introduce a new section "Stick-tables and Peers", explaining the
concepts, and under which there is one subsection for stick-tables
configuration and one for the peers (which mostly keeps the existing
peers section).
2025-05-14 16:08:02 +02:00
Willy Tarreau
423dffa308 DOC: config: move address formats definition to section 2
Section 2 describes the config file format, variables naming etc, so
there's no reason why the address format used in this file should be
in a separate section, let's bring it into section 2 as well.
2025-05-14 16:08:02 +02:00
Christopher Faulet
e2ae8a74e8 DEBUG: mux-spop: Review some trace messages to adjust the message or the level
Some trace messages were not really accurrate, reporting a CLOSED connection
while only an error was reported on it. In addition, an TRACE_ERROR() was
used to report a short read on HELLO/DISCONNECT frames header. But it is not
an error. a TRACE_DEVEL() should be used instead.

This patch could be backported to 3.1 to ease future backports.
2025-05-14 11:52:10 +02:00
Christopher Faulet
6e46f0bf93 BUG/MEDIUM: mux-spop; Don't report a read error if there are pending data
When an read error is detected, no error must be reported on the SPOP
connection is there are still some data to parse. It is important to be sure
to process all data before reporting the error and be sure to not truncate
received frames. However, we must also take care to handle short read case
to not wait data that will never be received.

This patch must be backported to 3.1.
2025-05-14 11:51:58 +02:00
Christopher Faulet
16314bb93c BUG/MEDIUM: mux-spop: Properly detect truncated frames on demux to report error
There was no test in the demux part to detect truncated frames and to report
an error at the connection level. The SPOP streams were properly switch to
half-closed state. But waiting the associated SPOE applets were woken up and
released, the SPOP connection could be woken up several times for nothing. I
never triggered the watchdog in that case, but it is not excluded.

Now, at the end of the demux function, if a specific test was added to
detect truncated frames to report an error and close the connection.

This patch must be backported to 3.1.
2025-05-14 11:47:41 +02:00
Christopher Faulet
71feb49a9f BUG/MEDIUM: spop-conn: Report short read for partial frames payload
When a frame was not fully received, a short read must be reported on the
SPOP connection to help the demux to handle truncated frames. This was
performed for frames truncated on the header part but not on the payload
part. It is now properly detected.

This patch must be backported to 3.1.
2025-05-14 09:20:10 +02:00
Christopher Faulet
ddc5f8d92e BUG/MEDIUM: mux-spop: Properly handle CLOSING state
The CLOSING state was not handled at all by the SPOP multiplexer while it is
mandatory when a DISCONNECT frame was sent and the mux should wait for the
DISCONNECT frame in reply from the agent. Thanks to this patch, it should be
fixed.

In addition, if an error occurres during the AGENT HELLO frame parsing, the
SPOP connection is no longer switched to CLOSED state and remains in ERROR
state instead. It is important to be able to send the DISCONNECT frame to
the agent instead of closing the TCP connection immediately.

This patch depends on following commits:

  * BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
  * MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
  * BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
  * BUG/MINOR: mux-spop: Make the demux stream ID a signed integer

All the series must be backported to 3.1.
2025-05-14 09:14:12 +02:00
Christopher Faulet
a3940614c2 BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
SPOP_CS_FRAME_H and SPOP_CS_FRAME_P states, that were used to handle frame
parsing, were removed. The demux process now relies on the demux stream ID
to know if it is waiting for the frame header or the frame
payload. Concretly, when the demux stream ID is not set (dsi == -1), the
demuxer is waiting for the next frame header. Otherwise (dsi >= 0), it is
waiting for the frame payload. It is especially important to be able to
properly handle DISCONNECT frames sent by the agents.

SPOP_CS_RUNNING state is introduced to know the hello handshake was finished
and the SPOP connection is able to open SPOP streams and exchange NOTIFY/ACK
frames with the agents.

It depends on the following fixes:

  * MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
  * BUG/MINOR: mux-spop: Make the demux stream ID a signed integer

This change will be mandatory for the next fix. It must be backported to 3.1
with the commits above.
2025-05-13 19:51:40 +02:00
Christopher Faulet
6b0f7de4e3 MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
After the ACK frame was parsed, it is useless to set the SPOP connection
state to SPOP_CS_FRAME_H state because this will be automatically handled by
the demux function. If it is not an issue, but this will simplify changes
for the next commit.
2025-05-13 19:51:40 +02:00
Christopher Faulet
197eaaadfd BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
Till now, only SPOP connections fully closed or those with a TCP connection on
error were concerned. But available streams could be reported for SPOP
connections in error or closing state. But in these states, no NOTIFY frames
will be sent and no ACK frames will be parsed. So, no new SPOP streams should be
opened.

This patch should be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
cbc10b896e BUG/MINOR: mux-spop: Make the demux stream ID a signed integer
The demux stream ID of a SPOP connection, used when received frames are
parsed, must be a signed integer because it is set to -1 when the SPOP
connection is initialized. It will be important for the next fix.

This patch must be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
6d68beace5 BUG/MINOR: mux-spop: Don't report error for stream if ACK was already received
When a SPOP connection was closed or was in error, an error was
systematically reported on all its SPOP streams. However, SPOP streams that
already received their ACK frame must be excluded. Otherwise if an agent
sends a ACK and close immediately, the ACK will be ignored because the SPOP
stream will handle the error first.

This patch must be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
1cd30c998b BUG/MINOR: spoe: Don't report error on applet release if filter is in DONE state
When the SPOE applet was released, if a SPOE filter context was still
attached to it, an error was reported to the filter. However, there is no
reason to report an error if the ACK message was already received. Because
of this bug, if the ACK message is received and the SPOE connection is
immediately closed, this prevents the ACK message to be processed.

This patch should be backported to 3.1.
2025-05-13 19:51:40 +02:00
Christopher Faulet
dcce02d6ed DOC: config: Fix a typo in the "term_events" definition
A space was missing before the colon.
2025-05-13 19:51:40 +02:00
Christopher Faulet
a5de0e1595 BUG/MINOR: hlua: Fix Channel:data() and Channel:line() to respect documentation
When the channel API was revisted, the both functions above was added. An
offset can be passed as argument. However, this parameter could be reported
to be out of range if there was not enough input data was received yet. It
is an issue, especially with a tcp rule, because more data could be
received. If an error is reported too early, this prevent the rule to be
reevaluated later. In fact, an error should only be reported if the offset
is part of the output data.

Another issue is about the conditions to report 'nil' instead of an empty
string. 'nil' was reported when no data was found. But it is not aligned
with the documentation. 'nil' must only be returned if no more data cannot
be received and there is no input data at all.

This patch should fix the issue #2716. It should be backported as far as 2.6.
2025-05-13 19:51:40 +02:00
Willy Tarreau
e049bd00ab MEDIUM: config: change default limits to 1024 threads and 32 groups
A test run on a dual-socket EPYC 9845 (2x160 cores) showed that we'll
be facing new limits during the lifetime of 3.2 with our current 16
groups and 256 threads max:

  $ cat test.cfg
  global
      cpu-policy perforamnce

  $ ./haproxy -dc -c -f test.cfg
  ...
  Thread CPU Bindings:
    Tgrp/Thr  Tid        CPU set
    1/1-32    1-32       32: 0-15,320-335
    2/1-32    33-64      32: 16-31,336-351
    3/1-32    65-96      32: 32-47,352-367
    4/1-32    97-128     32: 48-63,368-383
    5/1-32    129-160    32: 64-79,384-399
    6/1-32    161-192    32: 80-95,400-415
    7/1-32    193-224    32: 96-111,416-431
    8/1-32    225-256    32: 112-127,432-447

Raising the default limit to 1024 threads and 32 groups is sufficient
to buy us enough margin for a long time (hopefully, please don't laugh,
you, reader from the future):

  $ ./haproxy -dc -c -f test.cfg
  ...
  Thread CPU Bindings:
    Tgrp/Thr  Tid        CPU set
    1/1-32    1-32       32: 0-15,320-335
    2/1-32    33-64      32: 16-31,336-351
    3/1-32    65-96      32: 32-47,352-367
    4/1-32    97-128     32: 48-63,368-383
    5/1-32    129-160    32: 64-79,384-399
    6/1-32    161-192    32: 80-95,400-415
    7/1-32    193-224    32: 96-111,416-431
    8/1-32    225-256    32: 112-127,432-447
    9/1-32    257-288    32: 128-143,448-463
    10/1-32   289-320    32: 144-159,464-479
    11/1-32   321-352    32: 160-175,480-495
    12/1-32   353-384    32: 176-191,496-511
    13/1-32   385-416    32: 192-207,512-527
    14/1-32   417-448    32: 208-223,528-543
    15/1-32   449-480    32: 224-239,544-559
    16/1-32   481-512    32: 240-255,560-575
    17/1-32   513-544    32: 256-271,576-591
    18/1-32   545-576    32: 272-287,592-607
    19/1-32   577-608    32: 288-303,608-623
    20/1-32   609-640    32: 304-319,624-639

We can change this default now because it has no functional effect
without any configured cpu-policy, so this will only be an opt-in
and it's better to do it now than to have an effect during the
maintenance phase. A tiny effect is a doubling of the number of
pool buckets and stick-table shards internally, which means that
aside slightly reducing contention in these areas, a dump of tables
can enumerate keys in a different order (hence the adjustment in the
vtc).

The only really visible effect is a slightly higher static memory
consumption (29->35 MB on a small config), but that difference
remains even with 50k servers so that's pretty much acceptable.

Thanks to Erwan Velu for the quick tests and the insights!
2025-05-13 18:15:33 +02:00
Willy Tarreau
158da59c34 MEDIUM: cpu-topo: prefer grouping by CCX for "performance" and "efficiency"
Most of the time, machines made of multiple CPU types use the same L3
for them, and grouping CPUs by frequencies to form groups doesn't bring
any value and on the opposite can impair the incoming connection balancing.
This choice of grouping by cluster was made in order to constitute a good
choice on homogenous machines as well, so better rely on the per-CCX
grouping than the per-cluster one in this case. This will create less
clusters on machines where it counts without affecting other ones.

It doesn't seem necessary to change anything for the "resource" policy
since it selects a single cluster.
2025-05-13 16:48:30 +02:00
Willy Tarreau
70b0dd6b0f MEDIUM: cpu-topo: change "efficiency" to consider per-core capacity
This is similar to the previous change to the "performance" policy but
it applies to the "efficiency" one. Here we're changing the sorting
method to sort CPU clusters by average per-CPU capacity, and we evict
clusters whose per-CPU capacity is above 125% of the previous one.
Per-core capacity allows to detect discrepancies between CPU cores,
and to continue to focus on efficient ones as a priority.
2025-05-13 16:48:30 +02:00
Willy Tarreau
6c88e27cf4 MEDIUM: cpu-topo: change "performance" to consider per-core capacity
Running the "performance" policy on highly heterogenous systems yields
bad choices when there are sufficiently more small than big cores,
and/or when there are multiple cluster types, because on such setups,
the higher the frequency, the lower the number of cores, despite small
differences in frequencies. In such cases, we quickly end up with
"performance" only choosing the small or the medium cores, which is
contrary to the original intent, which was to select performance cores.
This is what happens on boards like the Orion O6 for example where only
the 4 medium cores and 2 big cores are choosen, evicting the 2 biggest
cores and the 4 smallest ones.

Here we're changing the sorting method to sort CPU clusters by average
per-CPU capacity, and we evict clusters whose per-CPU capacity falls
below 80% of the previous one. Per-core capacity allows to detect
discrepancies between CPU cores, and to continue to focus on high
performance ones as a priority.
2025-05-13 16:48:30 +02:00
Willy Tarreau
5ab2c815f1 MINOR: cpu-topo: provide a function to sort clusters by average capacity
The current per-capacity sorting function acts on a whole cluster, but
in some setups having many small cores and few big ones, it becomes
easy to observe an inversion of metrics where the many small cores show
a globally higher total capacity than the few big ones. This does not
necessarily fit all use cases. Let's add new a function to sort clusters
by their per-cpu average capacity to cover more use cases.
2025-05-13 16:48:30 +02:00
Willy Tarreau
01df98adad MINOR: cpu-topo: add a new "group-by-ccx" CPU policy
This cpu-policy will only consider CCX and not clusters. This makes
a difference on machines with heterogenous CPUs that generally share
the same L3 cache, where it's not desirable to create multiple groups
based on the CPU types, but instead create one with the different CPU
types. The variants "group-by-2/3/4-ccx" have also been added.

Let's also add some text explaining the difference between cluster
and CCX.
2025-05-13 16:48:30 +02:00
Willy Tarreau
33d8b006d4 BUG/MINOR: cpu-topo: fix group-by-cluster policy for disordered clusters
Some (rare) boards have their clusters in an erratic order. This is
the case for the Radxa Orion O6 where one of the big cores appears as
CPU0 due to booting from it, then followed by the small cores, then the
medium cores, then the remaining big cores. This results in clusters
appearing this order: 0,2,1,0.

The core in cpu_policy_group_by_cluster() expected ordered clusters,
and performs ordered comparisons to decide whether a CPU's cluster has
already been taken care of. On the board above this doesn't work, only
clusters 0 and 2 appear and 1 is skipped.

Let's replace the cluster number comparison with a cpuset to record
which clusters have been taken care of. Now the groups properly appear
like this:

  Tgrp/Thr  Tid        CPU set
  1/1-2     1-2        2: 0,11
  2/1-4     3-6        4: 1-4
  3/1-6     7-12       6: 5-10

No backport is needed, this is purely 3.2.
2025-05-13 16:48:30 +02:00
Amaury Denoyelle
f3b9676416 MINOR: quic: display stream age
Add a field to save the creation date of qc_stream_desc instance. This
is useful to display QUIC stream age in "show quic stream" output.
2025-05-13 15:44:22 +02:00
Amaury Denoyelle
dbf07c754e MINOR: quic: display QCS info on "show quic stream"
Complete stream output for "show quic" by displaying information from
its upper QCS. Note that QCS may be NULL if already released, so a
default output is also provided.
2025-05-13 15:43:28 +02:00
Amaury Denoyelle
cbadfa0163 MINOR: quic: add stream format for "show quic"
Add a new format for "show quic" command labelled as "stream". This is
an equivalent of "show sess", dedicated to the QUIC stack. Each active
QUIC streams are listed on a line with their related infos.

The main objective of this command is to ensure there is no freeze
streams remaining after a transfer.
2025-05-13 15:41:51 +02:00
Amaury Denoyelle
1ccede211c MINOR: mux-quic: account Rx data per stream
Add counters to measure Rx buffers usage per QCS. This reused the newly
defined bdata_ctr type already used for Tx accounting.

Note that for now, <tot> value of bdata_ctr is not used. This is because
it is not easy to account for data accross contiguous buffers.

These values are displayed both on log/traces and "show quic" output.
2025-05-13 15:41:51 +02:00
Amaury Denoyelle
a1dc9070e7 MINOR: quic: account Tx data per stream
Add accounting at qc_stream_desc level to be able to report the number
of allocated Tx buffers and the sum of their data. This represents data
ready for emission or already emitted and waiting on ACK.

To simplify this accounting, a new counter type bdata_ctr is defined in
quic_utils.h. This regroups both buffers and data counter, plus a
maximum on the buffer value.

These values are now displayed on QCS info used both on logline and
traces, and also on "show quic" output.
2025-05-13 15:41:41 +02:00
Willy Tarreau
9a05c1f574 BUG/MEDIUM: h2/h3: reject some forbidden chars in :authority before reassembly
As discussed here:
   https://github.com/httpwg/http2-spec/pull/936
   https://github.com/haproxy/haproxy/issues/2941

It's important to take care of some special characters in the :authority
pseudo header before reassembling a complete URI, because after assembly
it's too late (e.g. the '/'). This patch does this, both for h2 and h3.

The impact on H2 was measured in the worst case at 0.3% of the request
rate, while the impact on H3 is around 1%, but H3 was about 1% faster
than H2 before and is now on par.

It may be backported after a period of observation, and in this case it
relies on this previous commit:

   MINOR: http: add a function to validate characters of :authority

Thanks to @DemiMarie for reviving this topic in issue #2941 and bringing
new potential interesting cases.
2025-05-12 18:02:47 +02:00
Willy Tarreau
ebab479cdf MINOR: http: add a function to validate characters of :authority
As discussed here:
  https://github.com/httpwg/http2-spec/pull/936
  https://github.com/haproxy/haproxy/issues/2941

It's important to take care of some special characters in the :authority
pseudo header before reassembling a complete URI, because after assembly
it's too late (e.g. the '/').

This patch adds a specific function which was checks all such characters
and their ranges on an ist, and benefits from modern compilers
optimizations that arrange the comparisons into an evaluation tree for
faster match. That's the version that gave the most consistent performance
across various compilers, though some hand-crafted versions using bitmaps
stored in register could be slightly faster but super sensitive to code
ordering, suggesting that the results might vary with future compilers.
This one takes on average 1.2ns per character at 3 GHz (3.6 cycles per
char on avg). The resulting impact on H2 request processing time (small
requests) was measured around 0.3%, from 6.60 to 6.618us per request,
which is a bit high but remains acceptable given that the test only
focused on req rate.

The code was made usable both for H2 and H3.
2025-05-12 18:02:47 +02:00
Aurelien DARRAGON
c40d6ac840 BUG/MINOR: server: perform lbprm deinit for dynamic servers
Last commit 7361515 ("BUG/MINOR: server: dont depend on proxy for server
cleanup in srv_drop()") introduced a regression because the lbprm
server_deinit is not evaluated anymore with dynamic servers, possibly
resulting in a memory leak.

To fix the issue, in addition to free_proxy(), the server deinit check
should be manually performed in cli_parse_delete_server() as well.

No backport needed.
2025-05-12 16:29:36 +02:00
Aurelien DARRAGON
736151556c BUG/MINOR: server: dont depend on proxy for server cleanup in srv_drop()
In commit b5ee8bebfc ("MINOR: server: always call ssl->destroy_srv when
available"), we made it so srv_drop() doesn't depend on proxy to perform
server cleanup.

It turns out this is now mandatory, because during deinit, free_proxy()
can occur before the final srv_drop(). This is the case when using Lua
scripts for instance.

In 2a9436f96 ("MINOR: lbprm: Add method to deinit server and proxy") we
added a freeing check under srv_drop() that depends on the proxy.
Because of that UAF may occur during deinit when using a Lua script that
manipulate server objects.

To fix the issue, let's perform the lbprm server deinit logic under
free_proxy() directly, where the DEINIT server hooks are evaluated.

Also, to prevent similar bugs in the future, let's explicitly document
in srv_drop() that server cleanups should assume that the proxy may
already be freed.

No backport needed unless 2a9436f96 is.
2025-05-12 16:17:26 +02:00
Willy Tarreau
be4d816be2 BUG/MINOR: cfgparse: improve the empty arg position report's robustness
OSS Fuzz found that the previous fix ebb19fb367 ("BUG/MINOR: cfgparse:
consider the special case of empty arg caused by \x00") was incomplete,
as the output can sometimes be larger than the input (due to variables
expansion) in which case the work around to try to report a bad arg will
fail. While the parse_line() function has been made more robust now in
order to avoid this condition, let's fix the handling of this special
case anyway by just pointing to the beginning of the line if the supposed
error location is out of the line's buffer.

All details here:
   https://oss-fuzz.com/testcase-detail/5202563081502720

No backport is needed unless the fix above is backported.
2025-05-12 16:11:15 +02:00
Willy Tarreau
2b60e54fb1 BUG/MINOR: tools: improve parse_line()'s robustness against empty args
The fix in 10e6d0bd57 ("BUG/MINOR: tools: only fill first empty arg when
not out of range") was not that good. It focused on protecting against
<arg> becoming out of range to detect we haven't emitted anything, but
it's not the right way to detect this. We're always maintaining arg_start
as a copy of outpos, and that later one is incremented when emitting a
char, so instead of testing args[arg] against out+arg_start, we should
instead check outpos against arg_start, thereby eliminating the <out>
offset and the need to access args[]. This way we now always know if
we've emitted an empty arg without dereferencing args[].

There's no need to backport this unless the fix above is also backported.
2025-05-12 16:11:15 +02:00
Aurelien DARRAGON
7d057e56af BUG/MINOR: threads: fix soft-stop without multithreading support
When thread support is disabled ("USE_THREAD=" or "USE_THREAD=0" when
building), soft-stop doesn't work as haproxy never ends after stopping
the proxies.

This used to work fine in the past but suddenly stopped working with
ef422ced91 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") because the "break;" instruction under the stopping
condition is never executed when support for multithreading is disabled.

To fix the issue, let's add an "else" block to run the "break;"
instruction when USE_THREAD is not defined.

It should be backported up to 2.8
2025-05-12 14:18:39 +02:00
William Lallemand
8b0d1a4113 MINOR: ssl/ckch: warn when the same keyword was used twice
When using a crt-list or a crt-store, keywords mentionned twice on the
same line overwritte the previous value.

This patch emits a warning when the same keyword is found another time
on the same line.
2025-05-09 19:18:38 +02:00
William Lallemand
9c0c05b7ba BUG/MINOR: ssl/ckch: always ha_freearray() the previous entry during parsing
The ckch_conf_parse() function is the generic function which parses
crt-store keywords from the crt-store section, and also from a
crt-list.

When having multiple time the same keyword, a leak of the previous
value happens. This patch ensure that the previous value is always
freed before overwriting it.

This is the same problem as the previous "BUG/MINOR: ssl/ckch: always
free() the previous entry during parsing" patch, however this one
applies on PARSE_TYPE_ARRAY_SUBSTR.

No backport needed.
2025-05-09 19:16:02 +02:00
William Lallemand
96b1f1fd26 MINOR: tools: ha_freearray() frees an array of string
ha_freearray() is a new function which free() an array of strings
terminated by a NULL entry.

The pointer to the array will be free and set to NULL.
2025-05-09 19:12:05 +02:00
William Lallemand
311e0aa5c7 BUG/MINOR: ssl/ckch: always free() the previous entry during parsing
The ckch_conf_parse() function is the generic function which parses
crt-store keywords from the crt-store section, and also from a crt-list.

When having multiple time the same keyword, a leak of the previous value
happens. This patch ensure that the previous value is always freed
before overwriting it.

This patch should be backported as far as 3.0.
2025-05-09 19:01:28 +02:00
William Lallemand
9ce3fb35a2 BUG/MINOR: ssl: prevent multiple 'crt' on the same ssl-f-use line
The 'ssl-f-use' implementation doesn't prevent to have multiple time the
'crt' keyword, which overwrite the previous value. Letting users think
that is it possible to use multiple certificates on the same line, which
is not the case.

This patch emits an alert when setting the 'crt' keyword multiple times
on the same ssl-f-use line.

Should fix issue #2966.

No backport needed.
2025-05-09 18:52:09 +02:00
William Lallemand
0c4abf5a22 BUG/MINOR: ssl: doesn't fill conf->crt with first arg
Commit c7f29afc ("MEDIUM: ssl: replace "crt" lines by "ssl-f-use"
lines") forgot to remove an the allocation of the crt field which was
done with the first argument.

Since ssl-f-use takes keywords, this would put the first keyword in
"crt" instead of the certificate name.
2025-05-09 18:23:06 +02:00
Willy Tarreau
8a96216847 MEDIUM: sock-inet: re-check IPv6 connectivity every 30s
IPv6 connectivity might start off (e.g. network not fully up when
haproxy starts), so for features like resolvers, it would be nice to
periodically recheck.

With this change, instead of having the resolvers code rely on a variable
indicating connectivity, it will now call a function that will check for
how long a connectivity check hasn't been run, and will perform a new one
if needed. The age was set to 30s which seems reasonable considering that
the DNS will cache results anyway. There's no saving in spacing it more
since the syscall is very check (just a connect() without any packet being
emitted).

The variables remain exported so that we could present them in show info
or anywhere else.

This way, "dns-accept-family auto" will now stay up to date. Warning
though, it does perform some caching so even with a refreshed IPv6
connectivity, an older record may be returned anyway.
2025-05-09 15:45:44 +02:00
Willy Tarreau
1404f6fb7b DEBUG: pools: add a new integrity mode "backup" to copy the released area
This way we can preserve the entire contents of the released area for
later inspection. This automatically enables comparison at reallocation
time as well (like "integrity" does). If used in combination with
integrity, the comparison is disabled but the check of non-corruption
of the area mangled by integrity is still operated.
2025-05-09 14:57:00 +02:00
William Lallemand
e7574cd5f0 MINOR: acme: add the global option 'acme.scheduler'
The automatic scheduler is useful but sometimes you don't want to use,
or schedule manually.

This patch adds an 'acme.scheduler' option in the global section, which
can be set to either 'auto' or 'off'. (auto is the default value)

This also change the ouput of the 'acme status' command so it does not
shows scheduled values. The state will be 'Stopped' instead of
'Scheduled'.
2025-05-09 14:00:39 +02:00
Willy Tarreau
0ae14beb2a DEBUG: pool: permit per-pool UAF configuration
The new MEM_F_UAF flag can be set just after a pool's creation to make
this pool UAF for debugging purposes. This allows to maintain a better
overall performance required to reproduce issues while still having a
chance to catch UAF. It will only be used by developers who will manually
add it to areas worth being inspected, though.
2025-05-09 13:59:02 +02:00
Amaury Denoyelle
14e4f2b811 BUG/MEDIUM: mux-quic: fix crash on invalid fctl frame dereference
Emission of flow-control frames have been recently modified. Now, each
frame is sent one by one, via a single entry list. If a failure occurs,
emission is interrupted and frame is reinserted into the original
<qcc.lfctl.frms> list.

This code is incorrect as it only checks if qcc_send_frames() returns an
error code to perform the reinsert operation. However, an error here
does not always mean that the frame was not properly emitted by lower
quic-conn layer. As such, an extra test LIST_ISEMPTY() must be performed
prior to reinsert the frame.

This bug would cause a heap overflow. Indeed, the reinsert frame would
be a random value. A crash would occur as soon as it would be
dereferenced via <qcc.lfctl.frms> list.

This was reproduced by issuing a POST with a big file and interrupt it
after just a few seconds. This results in a crash in about a third of
the tests. Here is an example command using ngtcp2 :

 $ ngtcp2-client -q --no-quic-dump --no-http-dump \
   -m POST -d ~/infra/html/1g 127.0.0.1 20443 "http://127.0.0.1:20443/post"

Heap overflow was detected via a BUG_ON() statement from qc_frm_free()
via qcc_release() caller :

  FATAL: bug condition "!((&((*frm)->reflist))->n == (&((*frm)->reflist)))" matched at src/quic_frame.c:1270

This does not need to be backported.
2025-05-09 11:07:11 +02:00
443 changed files with 14359 additions and 7586 deletions

21
.github/matrix.py vendored
View File

@ -125,7 +125,7 @@ def main(ref_name):
# Ubuntu
if "haproxy-" in ref_name:
os = "ubuntu-22.04" # stable branch
os = "ubuntu-24.04" # stable branch
else:
os = "ubuntu-24.04" # development branch
@ -218,6 +218,7 @@ def main(ref_name):
"stock",
"OPENSSL_VERSION=1.0.2u",
"OPENSSL_VERSION=1.1.1s",
"OPENSSL_VERSION=3.5.1",
"QUICTLS=yes",
"WOLFSSL_VERSION=5.7.0",
"AWS_LC_VERSION=1.39.0",
@ -232,8 +233,7 @@ def main(ref_name):
for ssl in ssl_versions:
flags = ["USE_OPENSSL=1"]
if ssl == "BORINGSSL=yes" or ssl == "QUICTLS=yes" or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl:
flags.append("USE_QUIC=1")
skipdup=0
if "WOLFSSL" in ssl:
flags.append("USE_OPENSSL_WOLFSSL=1")
if "AWS_LC" in ssl:
@ -243,8 +243,23 @@ def main(ref_name):
flags.append("SSL_INC=${HOME}/opt/include")
if "LIBRESSL" in ssl and "latest" in ssl:
ssl = determine_latest_libressl(ssl)
skipdup=1
if "OPENSSL" in ssl and "latest" in ssl:
ssl = determine_latest_openssl(ssl)
skipdup=1
# if "latest" equals a version already in the list
if ssl in ssl_versions and skipdup == 1:
continue
openssl_supports_quic = False
try:
openssl_supports_quic = version.Version(ssl.split("OPENSSL_VERSION=",1)[1]) >= version.Version("3.5.0")
except:
pass
if ssl == "BORINGSSL=yes" or ssl == "QUICTLS=yes" or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl or openssl_supports_quic:
flags.append("USE_QUIC=1")
matrix.append(
{

View File

@ -5,82 +5,8 @@ on:
- cron: "0 0 * * 4"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install VTest
run: |
scripts/build-vtest.sh
- name: Determine latest AWS-LC release
id: get_aws_lc_release
run: |
result=$(cd .github && python3 -c "from matrix import determine_latest_aws_lc_fips; print(determine_latest_aws_lc_fips(''))")
echo $result
echo "result=$result" >> $GITHUB_OUTPUT
- name: Cache AWS-LC
id: cache_aws_lc
uses: actions/cache@v4
with:
path: '~/opt/'
key: ssl-${{ steps.get_aws_lc_release.outputs.result }}-Ubuntu-latest-gcc
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb
- name: Install AWS-LC
if: ${{ steps.cache_ssl.outputs.cache-hit != 'true' }}
run: env ${{ steps.get_aws_lc_release.outputs.result }} scripts/build-ssl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_OPENSSL_AWSLC=1 USE_QUIC=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/"
sudo make install
- name: Show HAProxy version
id: show-version
run: |
ldd $(which haproxy)
haproxy -vv
echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
- name: Install problem matcher for VTest
run: echo "::add-matcher::.github/vtest.json"
- name: Run VTest for HAProxy
id: vtest
run: |
# This is required for macOS which does not actually allow to increase
# the '-n' soft limit to the hard limit, thus failing to run.
ulimit -n 65536
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
for folder in ${TMPDIR:-/tmp}/haregtests-*/vtc.*; do
printf "::group::"
cat $folder/INFO
cat $folder/LOG
echo "::endgroup::"
done
exit 1
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
failed=false
shopt -s nullglob
for file in /tmp/core.*; do
failed=true
printf "::group::"
gdb -ex 'thread apply all bt full' ./haproxy $file
echo "::endgroup::"
done
if [ "$failed" = true ]; then
exit 1;
fi
uses: ./.github/workflows/aws-lc-template.yml
with:
command: "from matrix import determine_latest_aws_lc_fips; print(determine_latest_aws_lc_fips(''))"

103
.github/workflows/aws-lc-template.yml vendored Normal file
View File

@ -0,0 +1,103 @@
name: AWS-LC template
on:
workflow_call:
inputs:
command:
required: true
type: string
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v4
- name: Install VTest
run: |
scripts/build-vtest.sh
- name: Determine latest AWS-LC release
id: get_aws_lc_release
run: |
result=$(cd .github && python3 -c "${{ inputs.command }}")
echo $result
echo "result=$result" >> $GITHUB_OUTPUT
- name: Cache AWS-LC
id: cache_aws_lc
uses: actions/cache@v4
with:
path: '~/opt/'
key: ssl-${{ steps.get_aws_lc_release.outputs.result }}-Ubuntu-latest-gcc
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb jose
- name: Install AWS-LC
if: ${{ steps.cache_ssl.outputs.cache-hit != 'true' }}
run: env ${{ steps.get_aws_lc_release.outputs.result }} scripts/build-ssl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_OPENSSL_AWSLC=1 USE_QUIC=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/"
sudo make install
- name: Show HAProxy version
id: show-version
run: |
ldd $(which haproxy)
haproxy -vv
echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
- name: Install problem matcher for VTest
run: echo "::add-matcher::.github/vtest.json"
- name: Run VTest for HAProxy
id: vtest
run: |
# This is required for macOS which does not actually allow to increase
# the '-n' soft limit to the hard limit, thus failing to run.
ulimit -n 65536
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Run Unit tests
id: unittests
run: |
make unit-tests
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
for folder in ${TMPDIR:-/tmp}/haregtests-*/vtc.*; do
printf "::group::"
cat $folder/INFO
cat $folder/LOG
echo "::endgroup::"
done
exit 1
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
failed=false
shopt -s nullglob
for file in /tmp/core.*; do
failed=true
printf "::group::"
gdb -ex 'thread apply all bt full' ./haproxy $file
echo "::endgroup::"
done
if [ "$failed" = true ]; then
exit 1;
fi
- name: Show Unit-Tests results
if: ${{ failure() && steps.unittests.outcome == 'failure' }}
run: |
for result in ${TMPDIR:-/tmp}/ha-unittests-*/results/res.*; do
printf "::group::"
cat $result
echo "::endgroup::"
done
exit 1

View File

@ -5,82 +5,8 @@ on:
- cron: "0 0 * * 4"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install VTest
run: |
scripts/build-vtest.sh
- name: Determine latest AWS-LC release
id: get_aws_lc_release
run: |
result=$(cd .github && python3 -c "from matrix import determine_latest_aws_lc; print(determine_latest_aws_lc(''))")
echo $result
echo "result=$result" >> $GITHUB_OUTPUT
- name: Cache AWS-LC
id: cache_aws_lc
uses: actions/cache@v4
with:
path: '~/opt/'
key: ssl-${{ steps.get_aws_lc_release.outputs.result }}-Ubuntu-latest-gcc
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb
- name: Install AWS-LC
if: ${{ steps.cache_ssl.outputs.cache-hit != 'true' }}
run: env ${{ steps.get_aws_lc_release.outputs.result }} scripts/build-ssl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_OPENSSL_AWSLC=1 USE_QUIC=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/"
sudo make install
- name: Show HAProxy version
id: show-version
run: |
ldd $(which haproxy)
haproxy -vv
echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
- name: Install problem matcher for VTest
run: echo "::add-matcher::.github/vtest.json"
- name: Run VTest for HAProxy
id: vtest
run: |
# This is required for macOS which does not actually allow to increase
# the '-n' soft limit to the hard limit, thus failing to run.
ulimit -n 65536
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
for folder in ${TMPDIR:-/tmp}/haregtests-*/vtc.*; do
printf "::group::"
cat $folder/INFO
cat $folder/LOG
echo "::endgroup::"
done
exit 1
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
failed=false
shopt -s nullglob
for file in /tmp/core.*; do
failed=true
printf "::group::"
gdb -ex 'thread apply all bt full' ./haproxy $file
echo "::endgroup::"
done
if [ "$failed" = true ]; then
exit 1;
fi
uses: ./.github/workflows/aws-lc-template.yml
with:
command: "from matrix import determine_latest_aws_lc; print(determine_latest_aws_lc(''))"

View File

@ -11,13 +11,8 @@ permissions:
jobs:
h2spec:
name: h2spec
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- TARGET: linux-glibc
CC: gcc
os: ubuntu-latest
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v4
- name: Install h2spec
@ -28,12 +23,12 @@ jobs:
tar xvf h2spec.tar.gz
sudo install -m755 h2spec /usr/local/bin/h2spec
echo "version=${H2SPEC_VERSION}" >> $GITHUB_OUTPUT
- name: Compile HAProxy with ${{ matrix.CC }}
- name: Compile HAProxy with gcc
run: |
make -j$(nproc) all \
ERR=1 \
TARGET=${{ matrix.TARGET }} \
CC=${{ matrix.CC }} \
TARGET=linux-glibc \
CC=gcc \
DEBUG="-DDEBUG_POOL_INTEGRITY" \
USE_OPENSSL=1
sudo make install

View File

@ -38,7 +38,7 @@ jobs:
- name: Build with Coverity build tool
run: |
export PATH=`pwd`/coverity_tool/bin:$PATH
cov-build --dir cov-int make CC=clang TARGET=linux-glibc USE_ZLIB=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1 DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1 51DEGREES_SRC=addons/51degrees/dummy/pattern ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=1 DEBUG+=-DDEBUG_USE_ABORT=1
cov-build --dir cov-int make CC=clang TARGET=linux-glibc USE_ZLIB=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1 DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1 51DEGREES_SRC=addons/51degrees/dummy/pattern ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=2 DEBUG+=-DDEBUG_USE_ABORT=1
- name: Submit build result to Coverity Scan
run: |
tar czvf cov.tar.gz cov-int

View File

@ -22,11 +22,11 @@ jobs:
echo '/tmp/core/core.%h.%e.%t' > /proc/sys/kernel/core_pattern
- uses: actions/checkout@v4
- name: Install dependencies
run: apk add gcc gdb make tar git python3 libc-dev linux-headers pcre-dev pcre2-dev openssl-dev lua5.3-dev grep socat curl musl-dbg lua5.3-dbg
run: apk add gcc gdb make tar git python3 libc-dev linux-headers pcre-dev pcre2-dev openssl-dev lua5.3-dev grep socat curl musl-dbg lua5.3-dbg jose
- name: Install VTest
run: scripts/build-vtest.sh
- name: Build
run: make -j$(nproc) TARGET=linux-musl ARCH_FLAGS='-ggdb3' CC=cc V=1 USE_LUA=1 LUA_INC=/usr/include/lua5.3 LUA_LIB=/usr/lib/lua5.3 USE_OPENSSL=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_PROMEX=1
run: make -j$(nproc) TARGET=linux-musl DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" ARCH_FLAGS='-ggdb3' CC=cc V=1 USE_LUA=1 LUA_INC=/usr/include/lua5.3 LUA_LIB=/usr/lib/lua5.3 USE_OPENSSL=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_PROMEX=1
- name: Show version
run: ./haproxy -vv
- name: Show linked libraries
@ -37,6 +37,10 @@ jobs:
- name: Run VTest
id: vtest
run: make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Run Unit tests
id: unittests
run: |
make unit-tests
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
@ -60,3 +64,13 @@ jobs:
cat $folder/LOG
echo "::endgroup::"
done
- name: Show Unit-Tests results
if: ${{ failure() && steps.unittests.outcome == 'failure' }}
run: |
for result in ${TMPDIR:-/tmp}/ha-unittests-*/results/res.*; do
printf "::group::"
cat $result
echo "::endgroup::"
done
exit 1

View File

@ -15,6 +15,7 @@ permissions:
jobs:
test:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v4
- name: Install VTest

View File

@ -11,6 +11,7 @@ permissions:
jobs:
test:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v4
- name: Install VTest
@ -19,7 +20,7 @@ jobs:
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb
sudo apt-get --no-install-recommends -y install socat gdb jose
- name: Install WolfSSL
run: env WOLFSSL_VERSION=git-master WOLFSSL_DEBUG=1 scripts/build-ssl.sh
- name: Compile HAProxy
@ -27,7 +28,7 @@ jobs:
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_OPENSSL_WOLFSSL=1 USE_QUIC=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY" \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/" \
ARCH_FLAGS="-ggdb3 -fsanitize=address"
sudo make install
@ -48,6 +49,10 @@ jobs:
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Run Unit tests
id: unittests
run: |
make unit-tests
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
@ -72,3 +77,13 @@ jobs:
if [ "$failed" = true ]; then
exit 1;
fi
- name: Show Unit-Tests results
if: ${{ failure() && steps.unittests.outcome == 'failure' }}
run: |
for result in ${TMPDIR:-/tmp}/ha-unittests-*/results/res.*; do
printf "::group::"
cat $result
echo "::endgroup::"
done
exit 1

486
CHANGELOG
View File

@ -1,6 +1,492 @@
ChangeLog :
===========
2025/07/28 : 3.3-dev5
- BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
- DOC: list missing global QUIC settings
2025/07/26 : 3.3-dev4
- CLEANUP: server: do not check for duplicates anymore in findserver()
- REORG: server: move findserver() from proxy.c to server.c
- MINOR: server: use the tree to look up the server name in findserver()
- CLEANUP: server: rename server_find_by_name() to server_find()
- CLEANUP: server: rename findserver() to server_find_by_name()
- CLEANUP: server: use server_find_by_name() where relevant
- CLEANUP: cfgparse: lookup proxy ID using existing functions
- CLEANUP: stream: lookup server ID using standard functions
- CLEANUP: server: simplify server_find_by_id()
- CLEANUP: server: add server_find_by_addr()
- CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
- CLEANUP: server: be sure never to compare src against a non-existing defsrv
- MEDIUM: proxy: take the defsrv out of the struct proxy
- MINOR: proxy: add checks for defsrv's validity
- MEDIUM: proxy: no longer allocate the default-server entry by default
- MEDIUM: proxy: register a post-section cleanup function
- MINOR: debug: report haproxy and operating system info in panic dumps
- BUG/MEDIUM: h3: do not overwrite interim with final response
- BUG/MINOR: h3: properly realloc buffer after interim response encoding
- BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
- MINOR: qmux: change API for snd_buf FIN transmission
- BUG/MEDIUM: h3: handle interim response properly on FE side
- BUG/MINOR: h3: properly handle interim response on BE side
- BUG/MINOR: quic: Wrong source address use on FreeBSD
- MINOR: h3: remove unused outbuf in h3_resp_headers_send()
- BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
- DEV: gdb: add a memprofile decoder to the debug tools
- MINOR: quic: Get rid of qc_is_listener()
- DOC: connection: explain the rules for idle/safe/avail connections
- BUG/MEDIUM: quic-be: CC buffer released from wrong pool
- BUG/MINOR: halog: exit with error when some output filters are set simultaneosly
- MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
- MINOR: cpu-topo: write thread-cpu bindings into trash buffer
- MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
- MINOR: debug: add thread-cpu bindings info in 'show dev' output
- MINOR: quic: Remove pool_head_quic_be_cc_buf pool
- BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
- BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
- BUG/MINOR: logs: fix log-steps extra log origins selection
- BUG/MINOR: hq-interop: fix FIN transmission
- MINOR: ssl: Add ciphers in ssl traces
- MINOR: ssl: Add curve id to curve name table and mapping functions
- MINOR: ssl: Add curves in ssl traces
- MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
- MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
- MINOR: h3: use smallbuf for request header emission
- MINOR: h3: add traces to h3_req_headers_send()
- BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
- MINOR: log: explicitly ignore "log-steps" on backends
- BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
- BUG/MINOR mux-quic: apply correctly timeout on output pending data
- BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
- MINOR: mux-quic: refactor timeout code
- MINOR: mux-quic: correctly implement backend timeout
- MINOR: mux-quic: disable glitch on backend side
- MINOR: mux-quic: store session in QCS instance
- MEDIUM: mux-quic: implement be connection reuse
- MINOR: mux-quic: do not reuse connection if app already shut
- MEDIUM: mux-quic: support backend private connection
- MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
- BUG/MINOR: acme: allow "processing" in challenge requests
- CLEANUP: acme: fix wrong spelling of "resources"
- CLEANUP: ssl: Use only NIDs in curve name to id table
- MINOR: acme: add ACME to the haproxy -vv feature list
- BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
- BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
- BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
- BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
- BUG/MEDIUM: Remove sync sends from streams to applets
- MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
- MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
- MEDIUM: hlua: Update the tcp applet to use its own buffers
- MINOR: hlua: Fill the request array on the first HTTP applet run
- MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
- MEDIUM: hlua: Update the http applet to use its own buffers
- BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
- BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
- MEDIUM: hlua: Update the socket applet to use its own buffers
- BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
- MEDIUM: dns: Update the dns_session applet to use its own buffers
- CLEANUP: http-client: Remove useless indentation when sending request body
- MINOR: http-client: Try to send request body with headers if possible
- MINOR: http-client: Trigger an error if first response block isn't a start-line
- BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
- MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
- MEDIUM: http-client: Update the http-client applet to use its own buffers
- MEDIUM: log: Update the log applet to use its own buffers
- MEDIUM: sink: Update the sink applets to use their own buffers
- MEDIUM: peers: Update the peer applet to use its own buffers
- MEDIUM: promex: Update the promex applet to use their own buffers
- MINOR: applet: Add support for flags on applets with a flag about the new API
- MEDIUM: applet: Emit a warning when a legacy applet is spawned
- BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
- MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
- CLEANUP: compiler: prefer char * over void * for pointer arithmetic
- CLEANUP: include: replace hand-rolled offsetof to avoid UB
- CLEANUP: peers: remove unused peer_session_target()
- OPTIM: stats: store fast sharded counters pointers at session and stream level
2025/07/11 : 3.3-dev3
- BUG/MINOR: quic-be: Wrong retry_source_connection_id check
- MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
- MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
- MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
- MINOR: mailers: warn if mailers are configured but not actually used
- BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
- MEDIUM: server: add and use a separate last_change variable for internal use
- MEDIUM: proxy: add and use a separate last_change variable for internal use
- MINOR: counters: rename last_change counter to last_state_change
- MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
- BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
- BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
- BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
- DOC: Fix 'jwt_verify' converter doc
- MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
- MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
- MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
- MINOR: ssl: Allow 'commit ssl cert' with no privkey
- MINOR: ssl: Prevent delete on certificate used by jwt_verify
- REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
- REGTESTS: jwt: Test update of certificate used in jwt_verify
- DOC: 'jwt_verify' converter now supports certificates
- REGTESTS: restrict execution to a single thread group
- MINOR: ssl: Introduce new smp_client_hello_parse() function
- MEDIUM: stats: add persistent state to typed output format
- BUG/MINOR: httpclient: wrongly named httpproxy flag
- MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
- MEDIUM: httpclient: split the CLI from the actual httpclient API
- MEDIUM: httpclient: implement a way to use directly htx data
- MINOR: httpclient/cli: add --htx option
- BUILD: dev/phash: remove the accidentally committed a.out file
- BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
- BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
- DOC: deviceatlas build clarifications
- BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
- MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
- BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
- BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
- BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
- MEDIUM: httpclient: add a Content-Length when the payload is known
- CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
- MINOR: pattern: add a counter of added/freed patterns
- CI: set DEBUG_STRICT=2 for coverity scan
- CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
- CI: github: add an OpenSSL 3.5.0 job
- CI: github: update the stable CI to ubuntu-24.04
- BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
- CI: github: update to OpenSSL 3.5.1
- BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
- BUG/MINOR: quic-be: Malformed coalesced Initial packets
- MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
- MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
- MINOR: quic-be: Set the backend alpn if not set by conf
- MINOR: quic-be: TLS version restriction to 1.3
- MINOR: cfgparse: enforce QUIC MUX compat on server line
- MINOR: server: support QUIC for dynamic servers
- CI: github: skip a ssl library version when latest is already in the list
- MEDIUM: resolvers: switch dns-accept-family to "auto" by default
- BUG/MINOR: resolvers: don't lower the case of binary DNS format
- MINOR: resolvers: do not duplicate the hostname_dn field
- MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
- BUG/MINOR: listener: really assign distinct IDs to shards
- MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
- BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
- REGTESTS: use two haproxy instances to distinguish the QUIC traces
- BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
- BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
- BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
- BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
- BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
- BUG/MINOR: http-client: Reject any 101-switching-protocols response
- BUG/MEDIUM: http-client: Drain the request if an early response is received
- BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
- BUG/MINOR: h3: fix https scheme request encoding for BE side
- MINOR: h1-htx: Add function to format an HTX message in its H1 representation
- BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
- BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
- MINOR: debug: add distro name and version in postmortem
2025/06/26 : 3.3-dev2
- BUG/MINOR: config/server: reject QUIC addresses
- MINOR: server: implement helper to identify QUIC servers
- MINOR: server: mark QUIC support as experimental
- MINOR: mux-quic-be: allow QUIC proto on backend side
- MINOR: quic-be: Correct Version Information transp. param encoding
- MINOR: quic-be: Version Information transport parameter check
- MINOR: quic-be: Call ->prepare_srv() callback at parsing time
- MINOR: quic-be: QUIC backend XPRT and transport parameters init during parsing
- MINOR: quic-be: QUIC server xprt already set when preparing their CTXs
- MINOR: quic-be: Add a function for the TLS context allocations
- MINOR: quic-be: Correct the QUIC protocol lookup
- MINOR: quic-be: ssl_sock contexts allocation and misc adaptations
- MINOR: quic-be: SSL sessions initializations
- MINOR: quic-be: Add a function to initialize the QUIC client transport parameters
- MINOR: sock: Add protocol and socket types parameters to sock_create_server_socket()
- MINOR: quic-be: ->connect() protocol callback adaptations
- MINOR: quic-be: QUIC connection allocation adaptation (qc_new_conn())
- MINOR: quic-be: xprt ->init() adapatations
- MINOR: quic-be: add field for max_udp_payload_size into quic_conn
- MINOR: quic-be: Do not redispatch the datagrams
- MINOR: quic-be: Datagrams and packet parsing support
- MINOR: quic-be: Handshake packet number space discarding
- MINOR: h3-be: Correctly retrieve h3 counters
- MINOR: quic-be: Store asap the DCID
- MINOR: quic-be: Build post handshake frames
- MINOR: quic-be: Add the conn object to the server SSL context
- MINOR: quic-be: Initial packet number space discarding.
- MINOR: quic-be: I/O handler switch adaptation
- MINOR: quic-be: Store the remote transport parameters asap
- MINOR: quic-be: Missing callbacks initializations (USE_QUIC_OPENSSL_COMPAT)
- MINOR: quic-be: Make the secret derivation works for QUIC backends (USE_QUIC_OPENSSL_COMPAT)
- MINOR: quic-be: SSL_get_peer_quic_transport_params() not defined by OpenSSL 3.5 QUIC API
- MINOR: quic-be: get rid of ->li quic_conn member
- MINOR: quic-be: Prevent the MUX to send/receive data
- MINOR: quic: define proper proto on QUIC servers
- MEDIUM: quic-be: initialize MUX on handshake completion
- BUG/MINOR: hlua: Don't forget the return statement after a hlua_yieldk()
- BUILD: hlua: Fix warnings about uninitialized variables
- BUILD: listener: fix 'for' loop inline variable declaration
- BUILD: hlua: Fix warnings about uninitialized variables (2)
- BUG/MEDIUM: mux-quic: adjust wakeup behavior
- MEDIUM: backend: delay MUX init with ALPN even if proto is forced
- MINOR: quic: mark ctrl layer as ready on quic_connect_server()
- MINOR: mux-quic: improve documentation for snd/rcv app-ops
- MINOR: mux-quic: define flag for backend side
- MINOR: mux-quic: set expect data only on frontend side
- MINOR: mux-quic: instantiate first stream on backend side
- MINOR: quic: wakeup backend MUX on handshake completed
- MINOR: hq-interop: decode response into HTX for backend side support
- MINOR: hq-interop: encode request from HTX for backend side support
- CLEANUP: quic-be: Add comments about qc_new_conn() usage
- BUG/MINOR: quic-be: CID double free upon qc_new_conn() failures
- MINOR: quic-be: Avoid SSL context unreachable code without USE_QUIC_OPENSSL_COMPAT
- BUG/MINOR: quic: prevent crash on startup with -dt
- MINOR: server: reject QUIC servers without explicit SSL
- BUG/MINOR: quic: work around NEW_TOKEN parsing error on backend side
- BUG/MINOR: http-ana: Properly handle keep-query redirect option if no QS
- BUG/MINOR: quic: don't restrict reception on backend privileged ports
- MINOR: hq-interop: handle HTX response forward if not enough space
- BUG/MINOR: quic: Fix OSSL_FUNC_SSL_QUIC_TLS_got_transport_params_fn callback (OpenSSL3.5)
- BUG/MINOR: quic: fix ODCID initialization on frontend side
- BUG/MEDIUM: cli: Don't consume data if outbuf is full or not available
- MINOR: cli: handle EOS/ERROR first
- BUG/MEDIUM: check: Set SOCKERR by default when a connection error is reported
- BUG/MINOR: mux-quic: check sc_attach_mux return value
- MINOR: h3: support basic HTX start-line conversion into HTTP/3 request
- MINOR: h3: encode request headers
- MINOR: h3: complete HTTP/3 request method encoding
- MINOR: h3: complete HTTP/3 request scheme encoding
- MINOR: h3: adjust path request encoding
- MINOR: h3: adjust auth request encoding or fallback to host
- MINOR: h3: prepare support for response parsing
- MINOR: h3: convert HTTP/3 response into HTX for backend side support
- MINOR: h3: complete response status transcoding
- MINOR: h3: transcode H3 response headers into HTX blocks
- MINOR: h3: use BUG_ON() on missing request start-line
- MINOR: h3: reject invalid :status in response
- DOC: config: prefer-last-server: add notes for non-deterministic algorithms
- CLEANUP: connection: remove unused mux-ops dedicated to QUIC
- BUG/MINOR: mux-quic/h3: properly handle too low peer fctl initial stream
- MINOR: mux-quic: support max bidi streams value set by the peer
- MINOR: mux-quic: abort conn if cannot create stream due to fctl
- MEDIUM: mux-quic: implement attach for new streams on backend side
- BUG/MAJOR: fwlc: Count an avoided server as unusable.
- MINOR: fwlc: Factorize code.
- BUG/MEDIUM: quic: do not release BE quic-conn prior to upper conn
- MAJOR: cfgparse: turn the same proxy name warning to an error
- MAJOR: cfgparse: make sure server names are unique within a backend
- BUG/MINOR: tools: only reset argument start upon new argument
- BUG/MINOR: stream: Avoid recursive evaluation for unique-id based on itself
- BUG/MINOR: log: Be able to use %ID alias at anytime of the stream's evaluation
- MINOR: hlua: emit a log instead of an alert for aborted actions due to unavailable yield
- MAJOR: mailers: remove native mailers support
- BUG/MEDIUM: ssl/clienthello: ECDSA with ssl-max-ver TLSv1.2 and no ECDSA ciphers
- DOC: configuration: add details on prefer-client-ciphers
- MINOR: ssl: Add "renegotiate" server option
- DOC: remove the program section from the documentation
- MAJOR: mworker: remove program section support
- BUG/MINOR: quic: wrong QUIC_FT_CONNECTION_CLOSE(0x1c) frame encoding
- MINOR: quic-be: add a "CC connection" backend TX buffer pool
- MINOR: quic: Useless TX buffer size reduction in closing state
- MINOR: quic-be: Allow sending 1200 bytes Initial datagrams
- MINOR: quic-be: address validation support implementation (RETRY)
- MEDIUM: proxy: deprecate the "transparent" and "option transparent" directives
- REGTESTS: update http_reuse_be_transparent with "transparent" deprecated
- REGTESTS: script: also add a line pointing to the log file
- DOC: config: explain how to deal with "transparent" deprecation
- MEDIUM: proxy: mark the "dispatch" directive as deprecated
- DOC: config: crt-list clarify default cert + cert-bundle
- MEDIUM: cpu-topo: switch to the "performance" cpu-policy by default
- SCRIPTS: drop the HTML generation from announce-release
- BUG/MINOR: tools: use my_unsetenv instead of unsetenv
- CLEANUP: startup: move comment about nbthread where it's more appropriate
- BUILD: qpack: fix a build issue on older compilers
2025/06/11 : 3.3-dev1
- BUILD: tools: properly define ha_dump_backtrace() to avoid a build warning
- DOC: config: Fix a typo in 2.7 (Name format for maps and ACLs)
- REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+ (5)
- REGTESTS: Remove REQUIRE_VERSION=2.3 from all tests
- REGTESTS: Remove REQUIRE_VERSION=2.4 from all tests
- REGTESTS: Remove tests with REQUIRE_VERSION_BELOW=2.4
- REGTESTS: Remove support for REQUIRE_VERSION and REQUIRE_VERSION_BELOW
- MINOR: server: group postinit server tasks under _srv_postparse()
- MINOR: stats: add stat_col flags
- MINOR: stats: add ME_NEW_COMMON() helper
- MINOR: proxy: collect per-capability stat in proxy_cond_disable()
- MINOR: proxy: add a true list containing all proxies
- MINOR: log: only run postcheck_log_backend() checks on backend
- MEDIUM: proxy: use global proxy list for REGISTER_POST_PROXY_CHECK() hook
- MEDIUM: server: automatically add server to proxy list in new_server()
- MEDIUM: server: add and use srv_init() function
- BUG/MAJOR: leastconn: Protect tree_elt with the lbprm lock
- BUG/MEDIUM: check: Requeue healthchecks on I/O events to handle check timeout
- CLEANUP: applet: Update comment for applet_put* functions
- DEBUG: check: Add the healthcheck's expiration date in the trace messags
- BUG/MINOR: mux-spop: Fix null-pointer deref on SPOP stream allocation failure
- CLEANUP: sink: remove useless cleanup in sink_new_from_logger()
- MAJOR: counters: add shared counters base infrastructure
- MINOR: counters: add shared counters helpers to get and drop shared pointers
- MINOR: counters: add common struct and flags to {fe,be}_counters_shared
- MEDIUM: counters: manage shared counters using dedicated helpers
- CLEANUP: counters: merge some common counters between {fe,be}_counters_shared
- MINOR: counters: add local-only internal rates to compute some maxes
- MAJOR: counters: dispatch counters over thread groups
- BUG/MEDIUM: cli: Properly parse empty lines and avoid crashed
- BUG/MINOR: config: emit warning for empty args only in discovery mode
- BUG/MINOR: config: fix arg number reported on empty arg warning
- BUG/MINOR: quic: Missing SSL session object freeing
- MINOR: applet: Add API functions to manipulate input and output buffers
- MINOR: applet: Add API functions to get data from the input buffer
- CLEANUP: applet: Simplify a bit comments for applet_put* functions
- MEDIUM: hlua: Update TCP applet functions to use the new applet API
- BUG/MEDIUM: fd: Use the provided tgid in fd_insert() to get tgroup_info
- BUG/MINIR: h1: Fix doc of 'accept-unsafe-...-request' about URI parsing
2025/05/28 : 3.3-dev0
- MINOR: version: mention that it's development again
2025/05/28 : 3.2.0
- MINOR: promex: Add agent check status/code/duration metrics
- MINOR: ssl: support strict-sni in ssl-default-bind-options
- MINOR: ssl: also provide the "tls-tickets" bind option
- MINOR: server: define CLI I/O handler for "add server"
- MINOR: server: implement "add server help"
- MINOR: server: use stress mode for "add server help"
- BUG/MEDIUM: server: fix crash after duplicate GUID insertion
- BUG/MEDIUM: server: fix potential null-deref after previous fix
- MINOR: config: list recently added sections with -dKcfg
- BUG/MAJOR: cache: Crash because of wrong cache entry deleted
- DOC: configuration: fix the example in crt-store
- DOC: config: clarify the wording around single/double quotes
- DOC: config: clarify the legacy cookie and header captures
- DOC: config: fix alphabetical ordering of layer 7 sample fetch functions
- DOC: config: fix alphabetical ordering of layer 6 sample fetch functions
- DOC: config: fix alphabetical ordering of layer 5 sample fetch functions
- DOC: config: fix alphabetical ordering of layer 4 sample fetch functions
- DOC: config: fix alphabetical ordering of internal sample fetch functions
- BUG/MINOR: h3: Set HTX flags corresponding to the scheme found in the request
- BUG/MEDIUM: h3: Declare absolute URI as normalized when a :authority is found
- DOC: config: mention in bytes_in and bytes_out that they're read on input
- DOC: config: clarify the basics of ACLs (call point, multi-valued etc)
- REGTESTS: Make the script testing conditional set-var compatible with Vtest2
- REGTESTS: Explicitly allow failing shell commands in some scripts
- MINOR: listeners: Add support for a label on bind line
- BUG/MEDIUM: cli/ring: Properly handle shutdown in "show event" I/O handler
- BUG/MEDIUM: hlua: Properly detect shudowns for TCP applets based on the new API
- BUG/MEDIUM: hlua: Fix getline() for TCP applets to work with applet's buffers
- BUG/MEDIUM: hlua: Fix receive API for TCP applets to properly handle shutdowns
- CI: vtest: Rely on VTest2 to run regression tests
- CI: vtest: Fix the build script to properly work on MaOS
- CI: combine AWS-LC and AWS-LC-FIPS by template
- BUG/MEDIUM: httpclient: Throw an error if an lua httpclient instance is reused
- DOC: hlua: Add a note to warn user about httpclient object reuse
- DOC: hlua: fix a few typos in HTTPMessage.set_body_len() documentation
- DEV: patchbot: prepare for new version 3.3-dev
- MINOR: version: mention that it's 3.2 LTS now.
2025/05/21 : 3.2-dev17
- DOC: configuration: explicit multi-choice on bind shards option
- BUG/MINOR: sink: detect and warn when using "send-proxy" options with ring servers
- BUG/MEDIUM: peers: also limit the number of incoming updates
- MEDIUM: hlua: Add function to change the body length of an HTTP Message
- BUG/MEDIUM: stconn: Disable 0-copy forwarding for filters altering the payload
- BUG/MINOR: h3: don't insert more than one Host header
- BUG/MEDIUM: h1/h2/h3: reject forbidden chars in the Host header field
- DOC: config: properly index "table and "stick-table" in their section
- DOC: management: change reference to configuration manual
- BUILD: debug: mark ha_crash_now() as attribute(noreturn)
- IMPORT: slz: avoid multiple shifts on 64-bits
- IMPORT: slz: support crc32c for lookup hash on sse4 but only if requested
- IMPORT: slz: use a better hash for machines with a fast multiply
- IMPORT: slz: fix header used for empty zlib message
- IMPORT: slz: silence a build warning on non-x86 non-arm
- BUG/MAJOR: leastconn: do not loop forever when facing saturated servers
- BUG/MAJOR: queue: properly keep count of the queue length
- BUG/MINOR: quic: fix crash on quic_conn alloc failure
- BUG/MAJOR: leastconn: never reuse the node after dropping the lock
- MINOR: acme: renewal notification over the dpapi sink
- CLEANUP: quic: Useless BIO_METHOD initialization
- MINOR: quic: Add useful error traces about qc_ssl_sess_init() failures
- MINOR: quic: Allow the use of the new OpenSSL 3.5.0 QUIC TLS API (to be completed)
- MINOR: quic: implement all remaining callbacks for OpenSSL 3.5 QUIC API
- MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset
- MINOR: quic: OpenSSL 3.5 trick to support 0-RTT
- DOC: update INSTALL for QUIC with OpenSSL 3.5 usages
- DOC: management: update 'acme status'
- BUG/MEDIUM: wdt: always ignore the first watchdog wakeup
- CLEANUP: wdt: clarify the comments on the common exit path
- BUILD: ssl: avoid possible printf format warning in traces
- BUILD: acme: fix build issue on 32-bit archs with 64-bit time_t
- DOC: management: precise some of the fields of "show servers conn"
- BUG/MEDIUM: mux-quic: fix BUG_ON() on rxbuf alloc error
- DOC: watchdog: update the doc to reflect the recent changes
- BUG/MEDIUM: acme: check if acme domains are configured
- BUG/MINOR: acme: fix formatting issue in error and logs
- EXAMPLES: lua: avoid screen refresh effect in "trisdemo"
- CLEANUP: quic: remove unused cbuf module
- MINOR: quic: move function to check stream type in utils
- MINOR: quic: refactor handling of streams after MUX release
- MINOR: quic: add some missing includes
- MINOR: quic: adjust quic_conn-t.h include list
- CLEANUP: cfgparse: alphabetically sort the global keywords
- MINOR: glitches: add global setting "tune.glitches.kill.cpu-usage"
2025/05/14 : 3.2-dev16
- BUG/MEDIUM: mux-quic: fix crash on invalid fctl frame dereference
- DEBUG: pool: permit per-pool UAF configuration
- MINOR: acme: add the global option 'acme.scheduler'
- DEBUG: pools: add a new integrity mode "backup" to copy the released area
- MEDIUM: sock-inet: re-check IPv6 connectivity every 30s
- BUG/MINOR: ssl: doesn't fill conf->crt with first arg
- BUG/MINOR: ssl: prevent multiple 'crt' on the same ssl-f-use line
- BUG/MINOR: ssl/ckch: always free() the previous entry during parsing
- MINOR: tools: ha_freearray() frees an array of string
- BUG/MINOR: ssl/ckch: always ha_freearray() the previous entry during parsing
- MINOR: ssl/ckch: warn when the same keyword was used twice
- BUG/MINOR: threads: fix soft-stop without multithreading support
- BUG/MINOR: tools: improve parse_line()'s robustness against empty args
- BUG/MINOR: cfgparse: improve the empty arg position report's robustness
- BUG/MINOR: server: dont depend on proxy for server cleanup in srv_drop()
- BUG/MINOR: server: perform lbprm deinit for dynamic servers
- MINOR: http: add a function to validate characters of :authority
- BUG/MEDIUM: h2/h3: reject some forbidden chars in :authority before reassembly
- MINOR: quic: account Tx data per stream
- MINOR: mux-quic: account Rx data per stream
- MINOR: quic: add stream format for "show quic"
- MINOR: quic: display QCS info on "show quic stream"
- MINOR: quic: display stream age
- BUG/MINOR: cpu-topo: fix group-by-cluster policy for disordered clusters
- MINOR: cpu-topo: add a new "group-by-ccx" CPU policy
- MINOR: cpu-topo: provide a function to sort clusters by average capacity
- MEDIUM: cpu-topo: change "performance" to consider per-core capacity
- MEDIUM: cpu-topo: change "efficiency" to consider per-core capacity
- MEDIUM: cpu-topo: prefer grouping by CCX for "performance" and "efficiency"
- MEDIUM: config: change default limits to 1024 threads and 32 groups
- BUG/MINOR: hlua: Fix Channel:data() and Channel:line() to respect documentation
- DOC: config: Fix a typo in the "term_events" definition
- BUG/MINOR: spoe: Don't report error on applet release if filter is in DONE state
- BUG/MINOR: mux-spop: Don't report error for stream if ACK was already received
- BUG/MINOR: mux-spop: Make the demux stream ID a signed integer
- BUG/MINOR: mux-spop: Don't open new streams for SPOP connection on error
- MINOR: mux-spop: Don't set SPOP connection state to FRAME_H after ACK parsing
- BUG/MEDIUM: mux-spop: Remove frame parsing states from the SPOP connection state
- BUG/MEDIUM: mux-spop: Properly handle CLOSING state
- BUG/MEDIUM: spop-conn: Report short read for partial frames payload
- BUG/MEDIUM: mux-spop: Properly detect truncated frames on demux to report error
- BUG/MEDIUM: mux-spop; Don't report a read error if there are pending data
- DEBUG: mux-spop: Review some trace messages to adjust the message or the level
- DOC: config: move address formats definition to section 2
- DOC: config: move stick-tables and peers to their own section
- DOC: config: move the extraneous sections out of the "global" definition
- CI: AWS-LC(fips): enable unit tests
- CI: AWS-LC: enable unit tests
- CI: compliance: limit run on forks only to manual + cleanup
- CI: musl: enable unit tests
- CI: QuicTLS (weekly): limit run on forks only to manual dispatch
- CI: WolfSSL: enable unit tests
2025/05/09 : 3.2-dev15
- BUG/MEDIUM: stktable: fix sc_*(<ctr>) BUG_ON() regression with ctx > 9
- BUG/MINOR: acme/cli: don't output error on success

INSTALL

@ -237,7 +237,7 @@ to forcefully enable it using "USE_LIBCRYPT=1".
-----------------
For SSL/TLS, it is necessary to use a cryptography library. HAProxy currently
supports the OpenSSL library, and is known to build and work with branches
1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, and 3.0 to 3.4. It is recommended to use
1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, and 3.0 to 3.5. It is recommended to use
at least OpenSSL 1.1.1 to have support for all SSL keywords and configuration
in HAProxy. OpenSSL follows a long-term support cycle similar to HAProxy's,
and each of the branches above receives its own fixes, without forcing you to
@ -259,10 +259,10 @@ reported to work as well. While there are some efforts from the community to
ensure they work well, OpenSSL remains the primary target and this means that
in case of conflicting choices, OpenSSL support will be favored over other
options. Note that QUIC is not fully supported when haproxy is built with
OpenSSL. In this case, QUICTLS is the preferred alternative. As of writing
this, the QuicTLS project follows OpenSSL very closely and provides update
simultaneously, but being a volunteer-driven project, its long-term future does
not look certain enough to convince operating systems to package it, so it
an OpenSSL version earlier than 3.5. In this case, QUICTLS is the preferred
alternative. As of writing this, the QuicTLS project follows OpenSSL very
closely and provides updates simultaneously, but being a volunteer-driven
project, its long-term future does not look certain enough to convince
operating systems to package it, so it needs to be built locally. See the
section about QUIC in this document.
A fifth option is wolfSSL (https://github.com/wolfSSL/wolfssl). It is the only
@ -500,10 +500,11 @@ QUIC is the new transport layer protocol and is required for HTTP/3. This
protocol stack is currently supported as an experimental feature in haproxy on
the frontend side. In order to enable it, use "USE_QUIC=1 USE_OPENSSL=1".
Note that QUIC is not fully supported by the OpenSSL library. Indeed QUIC 0-RTT
cannot be supported by OpenSSL contrary to others libraries with full QUIC
support. The preferred option is to use QUICTLS. This is a fork of OpenSSL with
a QUIC-compatible API. Its repository is available at this location:
Note that QUIC is not always fully supported by the OpenSSL library, depending
on its version. Indeed, QUIC 0-RTT cannot be supported with OpenSSL versions
earlier than 3.5, contrary to other libraries with full QUIC support. The
preferred option is to use QUICTLS. This is a fork of OpenSSL with a
QUIC-compatible API. Its repository is available at this location:
https://github.com/quictls/openssl
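For illustration only, a build against a QUICTLS installation located under
/opt/quictls (an example prefix) could look like this, reusing the same
SSL_INC/SSL_LIB/LDFLAGS pattern as the wolfSSL example below:

  $ make TARGET=generic USE_OPENSSL=1 USE_QUIC=1 \
         SSL_INC=/opt/quictls/include SSL_LIB=/opt/quictls/lib \
         LDFLAGS="-Wl,-rpath,/opt/quictls/lib"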
@ -531,14 +532,18 @@ way assuming that wolfSSL was installed in /opt/wolfssl-5.6.0 as shown in 4.5:
SSL_INC=/opt/wolfssl-5.6.0/include SSL_LIB=/opt/wolfssl-5.6.0/lib
LDFLAGS="-Wl,-rpath,/opt/wolfssl-5.6.0/lib"
As last resort, haproxy may be compiled against OpenSSL as follows:
As a last resort, haproxy may be compiled against OpenSSL as follows, starting
from version 3.5, with 0-RTT support:
$ make TARGET=generic USE_OPENSSL=1 USE_QUIC=1
or as follows for all OpenSSL versions, but without 0-RTT support:
$ make TARGET=generic USE_OPENSSL=1 USE_QUIC=1 USE_QUIC_OPENSSL_COMPAT=1
Note that QUIC 0-RTT is not supported by haproxy QUIC stack when built against
OpenSSL. In addition to this compilation requirements, the QUIC listener
bindings must be explicitly enabled with a specific QUIC tuning parameter.
(see "limited-quic" global parameter of haproxy Configuration Manual).
In addition to these requirements, the QUIC listener bindings must be
explicitly enabled with a specific QUIC tuning parameter (see the "limited-quic"
global parameter in the haproxy Configuration Manual).
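To make this concrete, here is a minimal configuration sketch for such a build,
assuming the "limited-quic" tuning parameter is indeed required (typically the
USE_QUIC_OPENSSL_COMPAT case above); addresses, certificate path and backend
are placeholders:

  global
      limited-quic

  defaults
      mode http
      timeout connect 5s
      timeout client  30s
      timeout server  30s

  frontend https-in
      bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
      bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3
      default_backend app

  backend app
      server s1 127.0.0.1:8080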
5) How to build HAProxy


@ -660,7 +660,7 @@ OPTIONS_OBJS += src/mux_quic.o src/h3.o src/quic_rx.o src/quic_tx.o \
src/quic_cc_nocc.o src/quic_cc.o src/quic_pacing.o \
src/h3_stats.o src/quic_stats.o src/qpack-enc.o \
src/qpack-tbl.o src/quic_cc_drs.o src/quic_fctl.o \
src/cbuf.o src/quic_enc.o
src/quic_enc.o
endif
ifneq ($(USE_QUIC_OPENSSL_COMPAT:0=),)
@ -982,9 +982,9 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/cfgcond.o src/proto_udp.o src/lb_fwlc.o src/ebmbtree.o \
src/proto_uxdg.o src/cfgdiag.o src/sock_unix.o src/sha1.o \
src/lb_fas.o src/clock.o src/sock_inet.o src/ev_select.o \
src/lb_map.o src/shctx.o src/mworker-prog.o src/hpack-dec.o \
src/lb_map.o src/shctx.o src/hpack-dec.o \
src/arg.o src/signal.o src/fix.o src/dynbuf.o src/guid.o \
src/cfgparse-tcp.o src/lb_ss.o src/chunk.o \
src/cfgparse-tcp.o src/lb_ss.o src/chunk.o src/counters.o \
src/cfgparse-unix.o src/regex.o src/fcgi.o src/uri_auth.o \
src/eb64tree.o src/eb32tree.o src/eb32sctree.o src/lru.o \
src/limits.o src/ebimtree.o src/wdt.o src/hpack-tbl.o \
@ -992,7 +992,7 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/ebsttree.o src/freq_ctr.o src/systemd.o src/init.o \
src/http_acl.o src/dict.o src/dgram.o src/pipe.o \
src/hpack-huff.o src/hpack-enc.o src/ebtree.o src/hash.o \
src/version.o
src/httpclient_cli.o src/version.o
ifneq ($(TRACE),)
OBJS += src/calltrace.o


@ -1,2 +1,2 @@
$Format:%ci$
2025/05/09
2025/07/28


@ -1 +1 @@
3.2-dev15
3.3-dev5


@ -5,7 +5,8 @@ CXX := c++
CXXLIB := -lstdc++
ifeq ($(DEVICEATLAS_SRC),)
OPTIONS_LDFLAGS += -lda
OPTIONS_CFLAGS += -I$(DEVICEATLAS_INC)
OPTIONS_LDFLAGS += -Wl,-rpath,$(DEVICEATLAS_LIB) -L$(DEVICEATLAS_LIB) -lda
else
DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)


@ -389,6 +389,9 @@ listed below. Metrics from extra counters are not listed.
| haproxy_server_max_connect_time_seconds |
| haproxy_server_max_response_time_seconds |
| haproxy_server_max_total_time_seconds |
| haproxy_server_agent_status |
| haproxy_server_agent_code |
| haproxy_server_agent_duration_seconds |
| haproxy_server_internal_errors_total |
| haproxy_server_unsafe_idle_connections_current |
| haproxy_server_safe_idle_connections_current |


@ -32,7 +32,7 @@
/* Prometheus exporter flags (ctx->flags) */
#define PROMEX_FL_METRIC_HDR 0x00000001
/* unused: 0x00000002 */
#define PROMEX_FL_BODYLESS_RESP 0x00000002
/* unused: 0x00000004 */
/* unused: 0x00000008 */
/* unused: 0x00000010 */


@ -173,6 +173,8 @@ const struct ist promex_st_metric_desc[ST_I_PX_MAX] = {
[ST_I_PX_CTIME] = IST("Avg. connect time for last 1024 successful connections."),
[ST_I_PX_RTIME] = IST("Avg. response time for last 1024 successful connections."),
[ST_I_PX_TTIME] = IST("Avg. total time for last 1024 successful connections."),
[ST_I_PX_AGENT_STATUS] = IST("Status of last agent check, per state label value."),
[ST_I_PX_AGENT_DURATION] = IST("Total duration of the latest server agent check, in seconds."),
[ST_I_PX_QT_MAX] = IST("Maximum observed time spent in the queue"),
[ST_I_PX_CT_MAX] = IST("Maximum observed time spent waiting for a connection to complete"),
[ST_I_PX_RT_MAX] = IST("Maximum observed time spent waiting for a server response"),
@ -425,9 +427,8 @@ static int promex_dump_global_metrics(struct appctx *appctx, struct htx *htx)
static struct ist prefix = IST("haproxy_process_");
struct promex_ctx *ctx = appctx->svcctx;
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!stats_fill_info(stat_line_info, ST_I_INF_MAX, 0))
@ -493,7 +494,6 @@ static int promex_dump_global_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
return ret;
full:
@ -510,9 +510,8 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
struct proxy *px = ctx->p[0];
struct stats_module *mod = ctx->p[1];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
enum promex_front_state state;
@ -692,7 +691,6 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current stats module) of the current context */
@ -714,9 +712,8 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
struct listener *li = ctx->p[1];
struct stats_module *mod = ctx->p[2];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
enum li_status status;
@ -897,7 +894,6 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current listener, 2=current stats module) of the current context */
ctx->p[0] = px;
@ -919,9 +915,8 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
struct stats_module *mod = ctx->p[1];
struct server *sv;
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
double secs;
@ -1183,7 +1178,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current stats module) of the current context */
ctx->p[0] = px;
@ -1204,9 +1198,8 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
struct server *sv = ctx->p[1];
struct stats_module *mod = ctx->p[2];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
double secs;
@ -1342,6 +1335,7 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
secs = (double)sv->check.duration / 1000.0;
val = mkf_flt(FN_DURATION, secs);
break;
case ST_I_PX_REQ_TOT:
if (px->mode != PR_MODE_HTTP) {
sv = NULL;
@ -1364,6 +1358,36 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
labels[lb_idx+1].value = promex_hrsp_code[ctx->field_num - ST_I_PX_HRSP_1XX];
break;
case ST_I_PX_AGENT_STATUS:
if ((sv->agent.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) != CHK_ST_ENABLED)
goto next_sv;
for (; ctx->obj_state < HCHK_STATUS_SIZE; ctx->obj_state++) {
if (get_check_status_result(ctx->obj_state) < CHK_RES_FAILED)
continue;
val = mkf_u32(FO_STATUS, sv->agent.status == ctx->obj_state);
check_state = get_check_status_info(ctx->obj_state);
labels[lb_idx+1].name = ist("state");
labels[lb_idx+1].value = ist(check_state);
if (!promex_dump_ts(appctx, prefix, name, desc,
type,
&val, labels, &out, max))
goto full;
}
ctx->obj_state = 0;
goto next_sv;
case ST_I_PX_AGENT_CODE:
if ((sv->agent.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) != CHK_ST_ENABLED)
goto next_sv;
val = mkf_u32(FN_OUTPUT, (sv->agent.status < HCHK_STATUS_L57DATA) ? 0 : sv->agent.code);
break;
case ST_I_PX_AGENT_DURATION:
if (sv->agent.status < HCHK_STATUS_CHECKED)
goto next_sv;
secs = (double)sv->agent.duration / 1000.0;
val = mkf_flt(FN_DURATION, secs);
break;
default:
break;
}
@ -1474,7 +1498,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Decrement server refcount if it was saved through ctx.p[1]. */
@ -1570,9 +1593,8 @@ static int promex_dump_ref_modules_metrics(struct appctx *appctx, struct htx *ht
{
struct promex_ctx *ctx = appctx->svcctx;
struct promex_module_ref *ref = ctx->p[0];
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!ref) {
@ -1596,7 +1618,6 @@ static int promex_dump_ref_modules_metrics(struct appctx *appctx, struct htx *ht
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
ctx->p[0] = ref;
return ret;
@ -1611,9 +1632,8 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
{
struct promex_ctx *ctx = appctx->svcctx;
struct promex_module *mod = ctx->p[0];
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!mod) {
@ -1637,7 +1657,6 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
ctx->p[0] = mod;
return ret;
@ -1652,7 +1671,7 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
* Uses <appctx.ctx.stats.px> as a pointer to the current proxy and <sv>/<li>
* as pointers to the current server/listener respectively.
*/
static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct htx *htx)
static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
{
struct promex_ctx *ctx = appctx->svcctx;
int ret;
@ -1776,7 +1795,7 @@ static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct
return 1;
full:
sc_need_room(sc, channel_htx_recv_max(sc_ic(appctx_sc(appctx)), htx) + 1);
applet_have_more_data(appctx);
return 0;
error:
/* unrecoverable error */
@ -1789,12 +1808,11 @@ static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct
/* Parse the query string of request URI to filter the metrics. It returns 1 on
* success and -1 on error. */
static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
static int promex_parse_uri(struct appctx *appctx)
{
struct promex_ctx *ctx = appctx->svcctx;
struct channel *req = sc_oc(sc);
struct channel *res = sc_ic(sc);
struct htx *req_htx, *res_htx;
struct buffer *outbuf;
struct htx *req_htx;
struct htx_sl *sl;
char *p, *key, *value;
const char *end;
@ -1804,10 +1822,13 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
int len;
/* Get the query-string */
req_htx = htxbuf(&req->buf);
req_htx = htxbuf(DISGUISE(applet_get_inbuf(appctx)));
sl = http_get_stline(req_htx);
if (!sl)
goto error;
goto bad_req_error;
if (sl->info.req.meth == HTTP_METH_HEAD)
ctx->flags |= PROMEX_FL_BODYLESS_RESP;
p = http_find_param_list(HTX_SL_REQ_UPTR(sl), HTX_SL_REQ_ULEN(sl), '?');
if (!p)
goto end;
@ -1840,27 +1861,27 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
*p = 0;
len = url_decode(key, 1);
if (len == -1)
goto error;
goto bad_req_error;
/* decode value */
if (value) {
while (p < end && *p != '=' && *p != '&' && *p != '#')
++p;
if (*p == '=')
goto error;
goto bad_req_error;
if (*p == '&')
*(p++) = 0;
else if (*p == '#')
*p = 0;
len = url_decode(value, 1);
if (len == -1)
goto error;
goto bad_req_error;
}
if (strcmp(key, "scope") == 0) {
default_scopes = 0; /* at least a scope defined, unset default scopes */
if (!value)
goto error;
goto bad_req_error;
else if (*value == 0)
ctx->flags &= ~PROMEX_FL_SCOPE_ALL;
else if (*value == '*' && *(value+1) == 0)
@ -1891,14 +1912,14 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
}
}
if (!(ctx->flags & PROMEX_FL_SCOPE_MODULE))
goto error;
goto bad_req_error;
}
}
else if (strcmp(key, "metrics") == 0) {
struct ist args;
if (!value)
goto error;
goto bad_req_error;
for (args = ist(value); istlen(args); args = istadv(istfind(args, ','), 1)) {
struct eb32_node *node;
@ -1949,30 +1970,28 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
ctx->flags |= (default_scopes | default_metrics_filter);
return 1;
error:
bad_req_error:
err = &http_err_chunks[HTTP_ERR_400];
channel_erase(res);
res->buf.data = b_data(err);
memcpy(res->buf.area, b_head(err), b_data(err));
res_htx = htx_from_buf(&res->buf);
channel_add_input(res, res_htx->data);
return -1;
goto error;
internal_error:
err = &http_err_chunks[HTTP_ERR_400];
channel_erase(res);
res->buf.data = b_data(err);
memcpy(res->buf.area, b_head(err), b_data(err));
res_htx = htx_from_buf(&res->buf);
channel_add_input(res, res_htx->data);
err = &http_err_chunks[HTTP_ERR_500];
goto error;
error:
outbuf = DISGUISE(applet_get_outbuf(appctx));
b_reset(outbuf);
outbuf->data = b_data(err);
memcpy(outbuf->area, b_head(err), b_data(err));
applet_set_eoi(appctx);
applet_set_eos(appctx);
return -1;
}
/* Send HTTP headers of the response. It returns 1 on success and 0 if <htx> is
* full. */
static int promex_send_headers(struct appctx *appctx, struct stconn *sc, struct htx *htx)
static int promex_send_headers(struct appctx *appctx, struct htx *htx)
{
struct channel *chn = sc_ic(sc);
struct htx_sl *sl;
unsigned int flags;
@ -1987,11 +2006,10 @@ static int promex_send_headers(struct appctx *appctx, struct stconn *sc, struct
!htx_add_endof(htx, HTX_BLK_EOH))
goto full;
channel_add_input(chn, htx->data);
return 1;
full:
htx_reset(htx);
sc_need_room(sc, 0);
applet_have_more_data(appctx);
return 0;
}
@ -2045,52 +2063,51 @@ static void promex_appctx_release(struct appctx *appctx)
/* The main I/O handler for the promex applet. */
static void promex_appctx_handle_io(struct appctx *appctx)
{
struct stconn *sc = appctx_sc(appctx);
struct stream *s = __sc_strm(sc);
struct channel *req = sc_oc(sc);
struct channel *res = sc_ic(sc);
struct htx *req_htx, *res_htx;
struct promex_ctx *ctx = appctx->svcctx;
struct buffer *outbuf;
struct htx *res_htx;
int ret;
res_htx = htx_from_buf(&res->buf);
if (unlikely(se_fl_test(appctx->sedesc, (SE_FL_EOS|SE_FL_ERROR|SE_FL_SHR|SE_FL_SHW))))
if (unlikely(applet_fl_test(appctx, APPCTX_FL_EOS|APPCTX_FL_ERROR)))
goto out;
/* Check if the input buffer is available. */
if (!b_size(&res->buf)) {
sc_need_room(sc, 0);
outbuf = applet_get_outbuf(appctx);
if (outbuf == NULL) {
applet_have_more_data(appctx);
goto out;
}
res_htx = htx_from_buf(outbuf);
switch (appctx->st0) {
case PROMEX_ST_INIT:
if (!co_data(req)) {
if (!applet_get_inbuf(appctx) || !applet_htx_input_data(appctx)) {
applet_need_more_data(appctx);
goto out;
break;
}
ret = promex_parse_uri(appctx, sc);
ret = promex_parse_uri(appctx);
if (ret <= 0) {
if (ret == -1)
goto error;
goto out;
applet_set_error(appctx);
break;
}
appctx->st0 = PROMEX_ST_HEAD;
appctx->st1 = PROMEX_DUMPER_INIT;
__fallthrough;
case PROMEX_ST_HEAD:
if (!promex_send_headers(appctx, sc, res_htx))
goto out;
appctx->st0 = ((s->txn->meth == HTTP_METH_HEAD) ? PROMEX_ST_DONE : PROMEX_ST_DUMP);
if (!promex_send_headers(appctx, res_htx))
break;
appctx->st0 = ((ctx->flags & PROMEX_FL_BODYLESS_RESP) ? PROMEX_ST_DONE : PROMEX_ST_DUMP);
__fallthrough;
case PROMEX_ST_DUMP:
ret = promex_dump_metrics(appctx, sc, res_htx);
ret = promex_dump_metrics(appctx, res_htx);
if (ret <= 0) {
if (ret == -1)
goto error;
goto out;
applet_set_error(appctx);
break;
}
appctx->st0 = PROMEX_ST_DONE;
__fallthrough;
@ -2104,41 +2121,36 @@ static void promex_appctx_handle_io(struct appctx *appctx)
*/
if (htx_is_empty(res_htx)) {
if (!htx_add_endof(res_htx, HTX_BLK_EOT)) {
sc_need_room(sc, sizeof(struct htx_blk) + 1);
goto out;
applet_have_more_data(appctx);
break;
}
channel_add_input(res, 1);
}
res_htx->flags |= HTX_FL_EOM;
se_fl_set(appctx->sedesc, SE_FL_EOI);
applet_set_eoi(appctx);
appctx->st0 = PROMEX_ST_END;
__fallthrough;
case PROMEX_ST_END:
se_fl_set(appctx->sedesc, SE_FL_EOS);
applet_set_eos(appctx);
}
htx_to_buf(res_htx, outbuf);
out:
htx_to_buf(res_htx, &res->buf);
/* eat the whole request */
if (co_data(req)) {
req_htx = htx_from_buf(&req->buf);
co_htx_skip(req, req_htx, co_data(req));
}
applet_reset_input(appctx);
return;
error:
se_fl_set(appctx->sedesc, SE_FL_ERROR);
goto out;
}
struct applet promex_applet = {
.obj_type = OBJ_TYPE_APPLET,
.flags = APPLET_FL_NEW_API,
.name = "<PROMEX>", /* used for logging */
.init = promex_appctx_init,
.release = promex_appctx_release,
.fct = promex_appctx_handle_io,
.rcv_buf = appctx_htx_rcv_buf,
.snd_buf = appctx_htx_snd_buf,
};
static enum act_parse_ret service_parse_prometheus_exporter(const char **args, int *cur_arg, struct proxy *px,
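For context, here is a minimal configuration sketch exposing this applet
through the documented "prometheus-exporter" service (the listening address and
the /metrics path are arbitrary examples). With the PROMEX_FL_BODYLESS_RESP
handling introduced above, a HEAD request such as "curl -I" should now receive
the response headers without the metrics body:

  frontend prometheus
      mode http
      bind :8405
      http-request use-service prometheus-exporter if { path /metrics }

  $ curl -sI http://127.0.0.1:8405/metrics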


@ -123,6 +123,22 @@ struct url_stat {
#define FILT2_PRESERVE_QUERY 0x02
#define FILT2_EXTRACT_CAPTURE 0x04
#define FILT_OUTPUT_FMT (FILT_COUNT_ONLY| \
FILT_COUNT_STATUS| \
FILT_COUNT_SRV_STATUS| \
FILT_COUNT_COOK_CODES| \
FILT_COUNT_TERM_CODES| \
FILT_COUNT_URL_ONLY| \
FILT_COUNT_URL_COUNT| \
FILT_COUNT_URL_ERR| \
FILT_COUNT_URL_TAVG| \
FILT_COUNT_URL_TTOT| \
FILT_COUNT_URL_TAVGO| \
FILT_COUNT_URL_TTOTO| \
FILT_COUNT_URL_BAVG| \
FILT_COUNT_URL_BTOT| \
FILT_COUNT_IP_COUNT)
unsigned int filter = 0;
unsigned int filter2 = 0;
unsigned int filter_invert = 0;
@ -192,7 +208,7 @@ void help()
" you can also use -n to start from earlier then field %d\n"
" -query preserve the query string for per-URL (-u*) statistics\n"
"\n"
"Output format - only one may be used at a time\n"
"Output format - **only one** may be used at a time\n"
" -c only report the number of lines that would have been printed\n"
" -pct output connect and response times percentiles\n"
" -st output number of requests per HTTP status code\n"
@ -898,6 +914,9 @@ int main(int argc, char **argv)
if (!filter && !filter2)
die("No action specified.\n");
if ((filter & FILT_OUTPUT_FMT) & ((filter & FILT_OUTPUT_FMT) - 1))
die("Please, set only one output filter.\n");
if (filter & FILT_ACC_COUNT && !filter_acc_count)
filter_acc_count=1;
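The new check above relies on the classic single-bit test: for a non-zero mask
m, m & (m - 1) clears the lowest set bit, so the expression is non-zero exactly
when more than one output-format bit is set. A tiny standalone illustration
(hypothetical values, not part of halog):

  #include <stdio.h>

  /* returns 1 when more than one bit is set in <m>, 0 otherwise */
  static int more_than_one_bit(unsigned int m)
  {
          return (m & (m - 1)) != 0;
  }

  int main(void)
  {
          printf("%d\n", more_than_one_bit(0x04));        /* 0: a single filter */
          printf("%d\n", more_than_one_bit(0x04 | 0x40)); /* 1: two filters combined */
          return 0;
  }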

dev/gdb/memprof.dbg (new file)

@ -0,0 +1,19 @@
# show non-null memprofile entries with method, alloc/free counts/tot and caller
define memprof_dump
set $i = 0
set $meth={ "UNKN", "MALL", "CALL", "REAL", "STRD", "FREE", "P_AL", "P_FR", "STND", "VALL", "ALAL", "PALG", "MALG", "PVAL" }
while $i < sizeof(memprof_stats) / sizeof(memprof_stats[0])
if memprof_stats[$i].alloc_calls || memprof_stats[$i].free_calls
set $m = memprof_stats[$i].method
printf "m:%s ac:%u fc:%u at:%u ft:%u ", $meth[$m], \
memprof_stats[$i].alloc_calls, memprof_stats[$i].free_calls, \
memprof_stats[$i].alloc_tot, memprof_stats[$i].free_tot
output/a memprof_stats[$i].caller
printf "\n"
end
set $i = $i + 1
end
end
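A possible usage sketch, assuming gdb is attached to a haproxy process (or has
a core loaded) with symbols, and that memory profiling was enabled so the
counters are non-zero:

  (gdb) source dev/gdb/memprof.dbg
  (gdb) memprof_dump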


@ -0,0 +1,70 @@
BEGININPUT
BEGINCONTEXT
HAProxy's development cycle consists in one development branch, and multiple
maintenance branches.
All development is done exclusively in the development branch. This mostly
includes new features, doc updates, cleanups and, of course, fixes.
The maintenance branches, also called stable branches, never see any
development, and only receive ultra-safe fixes for bugs that affect them,
that are picked from the development branch.
Branches are numbered in 0.1 increments. Every 6 months, upon a new major
release, the development branch enters maintenance and a new development branch
is created with a new, higher version. The current development branch is
3.3-dev, and maintenance branches are 3.2 and below.
Fixes created in the development branch for issues that were introduced in an
earlier branch are applied in descending order to each and every version down
to the branch that introduced the issue: 3.2 first, then 3.1, then 3.0, then 2.9
and so on. This operation is called "backporting". A fix for an issue is never
backported beyond the branch that introduced the issue. An important point is
that the project maintainers really aim at zero regression in maintenance
branches, so they're never willing to take any risk backporting patches that
are not deemed strictly necessary.
Fixes consist of patches managed using the Git version control tool and are
identified by a Git commit ID and a commit message. For this reason we
indistinctly talk about backporting fixes, commits, or patches; all mean the
same thing. When mentioning commit IDs, developers always use a short form
made of the first 8 characters only, and expect the AI assistant to do the
same.
It seldom happens that some fixes depend on changes that were brought by other
patches that were not in some branches and that will need to be backported as
well for the fix to work. In this case, such information is explicitly provided
in the commit message by the patch's author in natural language.
Developers are serious and always indicate if a patch needs to be backported.
Sometimes they omit the exact target branch, or they will say that the patch is
"needed" in some older branch, but it means the same. If a commit message
doesn't mention any backport instructions, it means that the commit does not
have to be backported. And patches that are not strictly bug fixes nor doc
improvements are normally not backported. For example, fixes for design
limitations, architectural improvements and performance optimizations are
considered too risky for a backport. Finally, all bug fixes are tagged as
"BUG" at the beginning of their subject line. Patches that are not tagged as
such are not bugs, and must never be backported unless their commit message
explicitly requests so.
ENDCONTEXT
A developer is reviewing the development branch, trying to spot which commits
need to be backported to maintenance branches. This person is already an expert
on HAProxy and everything related to Git, patch management, and the risks
associated with backports, so he doesn't want to be told how to proceed nor to
review the contents of the patch.
The goal for this developer is to get some help from the AI assistant to save
some precious time on this tedious review work. In order to do a better job, he
needs an accurate summary of the information and instructions found in each
commit message. Specifically he needs to figure if the patch fixes a problem
affecting an older branch or not, if it needs to be backported, if so to which
branches, and if other patches need to be backported along with it.
The indented text block below, which follows an "id" line and starts with a
Subject line, is a commit message from the HAProxy development branch that
describes a patch applied to that branch. Please read it carefully.

View File

@ -0,0 +1,29 @@
ENDINPUT
BEGININSTRUCTION
You are an AI assistant that follows instructions extremely well. Help as much
as you can, responding to a single question using a single response.
The developer wants to know if he needs to backport the patch above to fix
maintenance branches, for which branches, and what possible dependencies might
be mentioned in the commit message. Carefully study the commit message and its
backporting instructions if any (otherwise it should probably not be backported),
then provide a very concise and short summary that will help the developer decide
to backport it, or simply to skip it.
Start by explaining in one or two sentences what you recommend for this one and why.
Finally, based on your analysis, give your general conclusion as "Conclusion: X"
where X is a single word among:
- "yes", if you recommend to backport the patch right now either because
it explicitly states this or because it's a fix for a bug that affects
a maintenance branch (3.2 or lower);
- "wait", if this patch explicitly mentions that it must be backported, but
only after waiting some time;
- "no", if nothing clearly indicates a necessity to backport this patch (e.g.
lack of explicit backport instructions, or it's just an improvement);
- "uncertain" otherwise, for cases not covered above.
ENDINSTRUCTION
Explanation:

Binary file not shown.

View File

@ -3,7 +3,9 @@ DeviceAtlas Device Detection
In order to add DeviceAtlas Device Detection support, you would need to download
the API source code from https://deviceatlas.com/deviceatlas-haproxy-module.
Once extracted :
Once extracted, two modes are supported :
1/ Build HAProxy and DeviceAtlas in one command
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=<path to the API root folder>
@ -14,10 +16,6 @@ directory. Also, in the case the api cache support is not needed and/or a C++ to
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=<path to the API root folder> DEVICEATLAS_NOCACHE=1
However, if the API had been installed beforehand, DEVICEATLAS_SRC
can be omitted. Note that the DeviceAtlas C API version supported is from the 3.x
releases series (3.2.1 minimum recommended).
For HAProxy developers who need to verify that their changes didn't accidentally
break the DeviceAtlas code, it is possible to build a dummy library provided in
the addons/deviceatlas/dummy directory and to use it as an alternative for the
@ -27,6 +25,29 @@ validate API changes :
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=$PWD/addons/deviceatlas/dummy
2/ Build and install DeviceAtlas according to https://docs.deviceatlas.com/apis/enterprise/c/<release version>/README.html
For example :
In the deviceatlas library folder :
$ cmake .
$ make
$ sudo make install
In the HAProxy folder :
$ make TARGET=<target> USE_DEVICEATLAS=1
Note that if the -DCMAKE_INSTALL_PREFIX cmake option had been used, it is also necessary to set DEVICEATLAS_LIB and
DEVICEATLAS_INC as follows :
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_INC=<CMAKE_INSTALL_PREFIX value>/include DEVICEATLAS_LIB=<CMAKE_INSTALL_PREFIX value>/lib
For example :
$ cmake -DCMAKE_INSTALL_PREFIX=/opt/local
$ make
$ sudo make install
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_INC=/opt/local/include DEVICEATLAS_LIB=/opt/local/lib
Note that DEVICEATLAS_SRC is omitted in this case.
These are supported DeviceAtlas directives (see doc/configuration.txt) :
- deviceatlas-json-file <path to the DeviceAtlas JSON data file>.
- deviceatlas-log-level <number> (0 to 3, level of information returned by

File diff suppressed because it is too large.

View File

@ -204,6 +204,14 @@ the cache, when this option is set, objects are picked from the cache from the
oldest one instead of the freshest one. This way even late memory corruptions
have a chance to be detected.
Another non-destructive approach is to use "-dMbackup". A full copy of the
object is made after its end, which eases inspection (e.g. of the parts
scratched by the pool_item elements), and a comparison is made upon allocation
of that object, just like with "-dMintegrity", causing a crash on mismatch. The
initial 4 words corresponding to the list are ignored as well. Note that when
both "-dMbackup" and "-dMintegrity" are used, the copy is performed before
being scratched, and the comparison is done by "-dMintegrity" only.
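As a hedged illustration only (the configuration path below is made up), these
boot-time options are passed as a comma-delimited list after "-dM", so a
developer chasing a late memory corruption could start the process with:

    $ haproxy -f /etc/haproxy/haproxy.cfg -dMbackup,integrity,no-merge,cold-first,tag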
When build option DEBUG_MEMORY_POOLS is set, or the boot-time option "-dMtag"
is passed on the executable's command line, pool objects are allocated with
one extra pointer compared to the requested size, so that the bytes that follow
@ -342,7 +350,9 @@ struct pool_head *create_pool(char *name, uint size, uint flags)
"-dMno-merge" is passed on the executable's command line, the pools
also need to have the exact same name to be merged. In addition, unless
MEM_F_EXACT is set in <flags>, the object size will usually be rounded
up to the size of pointers (16 or 32 bytes). The name that will appear
up to the size of pointers (16 or 32 bytes). MEM_F_UAF may be set on a
per-pool basis to enable the UAF detection only for this specific pool,
saving the massive overhead of global usage. The name that will appear
in the pool upon merging is the name of the first created pool. The
returned pointer is the new (or reused) pool head, or NULL upon error.
Pools created this way must be destroyed using pool_destroy().

View File

@ -21,7 +21,7 @@ falls back to CLOCK_REALTIME. The former is more accurate as it really counts
the time spent in the process, while the latter might also account for time
stuck on paging in etc.
Then wdt_ping() is called to arm the timer. t's set to trigger every
Then wdt_ping() is called to arm the timer. It's set to trigger every
<wdt_warn_blocked_traffic_ns> interval. It is also called by wdt_handler()
to reprogram a new wakeup after it has ticked.
@ -37,15 +37,18 @@ If the thread was not marked as stuck, it's verified that no progress was made
for at least one second, in which case the TH_FL_STUCK flag is set. The lack of
progress is measured by the distance between the thread's current cpu_time and
its prev_cpu_time. If the lack of progress is at least as large as the warning
threshold and no context switch happened since last call, ha_stuck_warning() is
called to emit a warning about that thread. In any case the context switch
counter for that thread is updated.
threshold, then the signal is bounced to the faulty thread if it's not the
current one. Since this bounce is based on the time spent without update, it
already doesn't happen often.
If the thread was already marked as stuck, then the thread is considered as
definitely stuck. Then ha_panic() is directly called if the thread is the
current one, otherwise ha_kill() is used to resend the signal directly to the
target thread, which will in turn go through this handler and handle the panic
itself.
Once on the faulty thread, two checks are performed:
1) if the thread was already marked as stuck, then the thread is considered
as definitely stuck, and ha_panic() is called. It will not return.
2) a check is made to verify if the scheduler is still ticking, by reading
and setting a variable that only the scheduler can clear when leaving a
task. If the scheduler didn't make any progress, ha_stuck_warning() is
called to emit a warning about that thread.
Most of the time there's no panic of course, and a wdt_ping() is performed
before leaving the handler to reprogram a check for that thread.
@ -61,12 +64,12 @@ set TAINTED_WARN_BLOCKED_TRAFFIC.
ha_panic() uses the current thread's trash buffer to produce the messages, as
we don't care about its contents since that thread will never return. However
ha_stuck_warning() instead uses a local 4kB buffer in the thread's stack.
ha_stuck_warning() instead uses a local 8kB buffer in the thread's stack.
ha_panic() will call ha_thread_dump_fill() for each thread, to complete the
buffer being filled with each thread's dump messages. ha_stuck_warning() only
calls the function for the current thread. In both cases the message is then
directly sent to fd #2 (stderr) and ha_thread_dump_one() is called to release
the dumped thread.
calls ha_thread_dump_one(), which works on the current thread. In both cases
the message is then directly sent to fd #2 (stderr) and ha_thread_dump_done()
is called to release the dumped thread.
Both print a few extra messages, but ha_panic() just ends by looping on abort()
until the process dies.
@ -110,13 +113,19 @@ ha_dump_backtrace() before returning.
ha_dump_backtrace() produces a backtrace into a local buffer (100 entries max),
then dumps the code bytes near the crashing instruction, dumps pointers and
tries to resolve function names, and sends all of that into the target buffer.
On some architectures (x86_64, arm64), it will also try to detect and decode
call instructions and resolve them to called functions.
3. Improvements
---------------
The symbols resolution is extremely expensive, particularly for the warnings
which should be fast. But we need it, it's just unfortunate that it strikes at
the wrong moment.
the wrong moment. At least ha_dump_backtrace() does disable signals while it's
resolving, in order to avoid unwanted re-entrance. In addition, the called
function resolve_sym_name() uses some locking and refrains from calling the
dladdr family of functions in a re-entrant way (in the worst case only well
known symbols will be resolved).
In an ideal case, ha_dump_backtrace() would dump the pointers to a local array,
which would then later be resolved asynchronously in a tasklet. This can work

View File

@ -1,7 +1,7 @@
-----------------------
HAProxy Starter Guide
-----------------------
version 3.2
version 3.3
This document is an introduction to HAProxy for all those who don't know it, as

View File

@ -893,7 +893,9 @@ Core class
**context**: init, task, action
This function returns a new object of a *httpclient* class.
This function returns a new object of a *httpclient* class. An *httpclient*
object must be used to process one and only one request. It must never be
reused to process several requests.
:returns: A :ref:`httpclient_class` object.
@ -933,7 +935,7 @@ Core class
Give back the hand at the HAProxy scheduler. Unlike :js:func:`core.yield`
the task will not be woken up automatically to resume as fast as possible.
Instead, it will wait for an event to wake the task. If milliseconds argument
is provided then the Lua excecution will be automatically resumed passed this
is provided then the Lua execution will be automatically resumed past this
delay even if no event caused the task to wake itself up.
:param integer milliseconds: automatic wakeup past this delay. (optional)
@ -943,7 +945,7 @@ Core class
**context**: task, action
Give back the hand at the HAProxy scheduler. It is used when the LUA
processing consumes a lot of processing time. Lua excecution will be resumed
processing consumes a lot of processing time. Lua execution will be resumed
automatically (automatic reschedule).
.. js:function:: core.parse_addr(address)
@ -1087,18 +1089,13 @@ Core class
perform the heavy job in a dedicated task and allow remaining events to be
processed more quickly.
.. js:function:: core.disable_legacy_mailers()
.. js:function:: core.use_native_mailers_config()
**LEGACY**
**context**: body
**context**: body, init
Disable the sending of email alerts through the legacy email sending
function when mailers are used in the configuration.
Use this when sending email alerts directly from lua.
:see: :js:func:`Proxy.get_mailers()`
Inform haproxy that the script will make use of the native "mailers"
config section (although legacy). In other words, inform haproxy that
:js:func:`Proxy.get_mailers()` will be used later in the program.
.. _proxy_class:
@ -1227,8 +1224,14 @@ Proxy class
**LEGACY**
Returns a table containing mailers config for the current proxy or nil
if mailers are not available for the proxy.
Returns a table containing legacy mailers config (from haproxy configuration
file) for the current proxy or nil if mailers are not available for the proxy.
.. warning::
When relying on :js:func:`Proxy.get_mailers()` to retrieve mailers
configuration, :js:func:`core.use_native_mailers_config()` must be called
first from body or init context to inform haproxy that Lua makes use of the
legacy mailers config.
:param class_proxy px: A :ref:`proxy_class` which indicates the manipulated
proxy.
@ -1245,10 +1248,6 @@ ProxyMailers class
This class provides mailers config for a given proxy.
If sending emails directly from lua, please consider
:js:func:`core.disable_legacy_mailers()` to disable the email sending from
haproxy. (Or email alerts will be sent twice...)
.. js:attribute:: ProxyMailers.track_server_health
Boolean set to true if the option "log-health-checks" is configured on
@ -2581,7 +2580,9 @@ HTTPClient class
.. js:class:: HTTPClient
The httpclient class allows issue of outbound HTTP requests through a simple
API without the knowledge of HAProxy internals.
API without the knowledge of HAProxy internals. Any instance must be used to
process one and only one request. It must never be reused to process several
requests.
.. js:function:: HTTPClient.get(httpclient, request)
.. js:function:: HTTPClient.head(httpclient, request)
@ -3916,21 +3917,25 @@ AppletTCP class
*size* is missing, the function tries to read all the content of the stream
until the end. An optional timeout may be specified in milliseconds. In this
case the function will return no longer than this delay, with the amount of
available data (possibly none).
available data, or nil if there is no data. An empty string is returned if the
connection is closed.
:param class_AppletTCP applet: An :ref:`applettcp_class`
:param integer size: the required read size.
:returns: always return a string, the string can be empty if the connection is
closed.
:returns: return nil if the timeout has expired and no data was available but
can still be received. Otherwise, a string is returned, possibly an empty
string if the connection is closed.
.. js:function:: AppletTCP.try_receive(applet)
Reads available data from the TCP stream and returns immediately. Returns a
string containing read bytes that may possibly be empty if no bytes are
available at that time.
string containing read bytes or nil if no bytes are available at that time. An
empty string is returned if the connection is closed.
:param class_AppletTCP applet: An :ref:`applettcp_class`
:returns: always return a string, the string can be empty.
:returns: return nil if no data was available but can still be
received. Otherwise, a string is returned, possibly an empty string if the
connection is closed.
.. js:function:: AppletTCP.send(appletmsg)
@ -4607,6 +4612,27 @@ HTTPMessage class
data by default.
:returns: an integer containing the amount of bytes copied or -1.
.. js:function:: HTTPMessage.set_body_len(http_msg, length)
This function changes the expected payload length of the HTTP message
**http_msg**. **length** can be an integer value. In that case, a
"Content-Length" header is added with the given value. It is also possible to
pass the **"chunked"** string instead of an integer value to force the HTTP
message to be chunk-encoded. In that case, a "Transfer-Encoding" header is
added with the "chunked" value. In both cases, all existing "Content-Length"
and "Transfer-Encoding" headers are removed.
This function should be used in the filter context to be able to alter the
payload of the HTTP message. The internal state of the HTTP message is updated
accordingly. :js:func:`HTTPMessage.add_header()` or
:js:func:`HTTPMessage.set_header()` functions must be used in that case.
:param class_httpmessage http_msg: The manipulated HTTP message.
:param type length: The new payload length to set. It can be an integer or
the string "chunked".
:returns: true if the payload length was successfully updated, false
otherwise.
.. js:function:: HTTPMessage.set_eom(http_msg)
This function sets the end of message for the HTTP message **http_msg**.

View File

@ -1,7 +1,7 @@
------------------------
HAProxy Management Guide
------------------------
version 3.2
version 3.3
This document describes how to start, stop, manage, and troubleshoot HAProxy,
@ -325,6 +325,16 @@ list of options is :
last released. This works best with "no-merge", "cold-first" and "tag".
Enabling this option will slightly increase the CPU usage.
- backup / no-backup:
This option performs a copy of each released object at release time,
allowing developers to inspect them. It also performs a comparison at
allocation time to detect if anything changed in between, indicating a
use-after-free condition. This doubles the memory usage and slightly
increases the CPU usage (similar to "integrity"). If combined with
"integrity", it still duplicates the contents but doesn't perform the
comparison (which is performed by "integrity"). Just like "integrity",
it works best with "no-merge", "cold-first" and "tag".
- no-global / global:
Depending on the operating system, a process-wide global memory cache
may be enabled if it is estimated that the standard allocator is too
@ -1336,9 +1346,10 @@ The first column designates the object or metric being dumped. Its format is
specific to the command producing this output and will not be described in this
section. Usually it will consist in a series of identifiers and field names.
The second column contains 3 characters respectively indicating the origin, the
nature and the scope of the value being reported. The first character (the
origin) indicates where the value was extracted from. Possible characters are :
The second column contains 4 characters respectively indicating the origin, the
nature, the scope and the persistence state of the value being reported. The
first character (the origin) indicates where the value was extracted from.
Possible characters are :
M The value is a metric. It is valid at one instant and may change depending
on its nature.
@ -1454,7 +1465,16 @@ characters are currently supported :
current date or resource usage. At the moment this scope is not used by
any metric.
Consumers of these information will generally have enough of these 3 characters
The fourth character (persistence state) indicates whether the value (the metric)
is volatile or persistent across reloads. The following characters are expected :
V The metric is volatile because it is local to the current process so
the value will be lost when reloading.
P The metric is persistent because it may be shared with other co-processes
so that the value is preserved across reloads.
Consumers of this information will generally have enough of these 4 characters
to determine how to accurately report aggregated information across multiple
processes.
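As a hedged example (the stats socket path is hypothetical, and string values
containing extra colons are ignored for simplicity), the persistence character
makes it easy to keep only the metrics preserved across reloads by matching a
trailing 'P' in the second colon-separated column:

    $ echo "show stat typed" | socat stdio /var/run/haproxy.sock | \
      awk -F: '$2 ~ /P$/ { print $1, $4 }'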
@ -1643,8 +1663,8 @@ abort ssl crl-file <crlfile>
acme renew <certificate>
Starts an ACME certificate generation task with the given certificate name.
The certificate must be linked to an acme section, see section 3.13. of the
configuration manual. See also "acme status".
The certificate must be linked to an acme section, see section 12.8 "ACME"
of the configuration manual. See also "acme status".
acme status
Show the status of every certificates that were configured with ACME.
@ -1652,10 +1672,10 @@ acme status
This command outputs, separated by a tab:
- The name of the certificate configured in haproxy
- The acme section used in the configuration
- The state of the acme task, either "Running" or "Scheduled"
- The state of the acme task, either "Running", "Scheduled" or "Stopped"
- The UTC expiration date of the certificate in ISO8601 format
- The relative expiration time (0d if expired)
- The UTC expiration date of the certificate in ISO8601 format
- The UTC scheduled date of the certificate in ISO8601 format
- The relative schedule time (0d if Running)
Example:
@ -1714,8 +1734,9 @@ add server <backend>/<server> [args]*
The <server> name must not be already used in the backend. A special
restriction is put on the backend which must use a dynamic load-balancing
algorithm. A subset of keywords from the server config file statement can be
used to configure the server behavior. Also note that no settings will be
reused from an hypothetical 'default-server' statement in the same backend.
used to configure the server behavior (see "add server help" to list them).
Also note that no settings will be reused from an hypothetical
'default-server' statement in the same backend.
Currently a dynamic server is statically initialized with the "none"
init-addr method. This means that no resolution will be undertaken if a FQDN
@ -1745,78 +1766,10 @@ add server <backend>/<server> [args]*
servers. Please refer to the "u-limit" global keyword documentation in this
case.
Here is the list of the currently supported keywords :
- agent-addr
- agent-check
- agent-inter
- agent-port
- agent-send
- allow-0rtt
- alpn
- addr
- backup
- ca-file
- check
- check-alpn
- check-proto
- check-send-proxy
- check-sni
- check-ssl
- check-via-socks4
- ciphers
- ciphersuites
- cookie
- crl-file
- crt
- disabled
- downinter
- error-limit
- fall
- fastinter
- force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- id
- init-state
- inter
- maxconn
- maxqueue
- minconn
- no-ssl-reuse
- no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- no-tls-tickets
- npn
- observe
- on-error
- on-marked-down
- on-marked-up
- pool-low-conn
- pool-max-conn
- pool-purge-delay
- port
- proto
- proxy-v2-options
- rise
- send-proxy
- send-proxy-v2
- send-proxy-v2-ssl
- send-proxy-v2-ssl-cn
- slowstart
- sni
- source
- ssl
- ssl-max-ver
- ssl-min-ver
- tfo
- tls-tickets
- track
- usesrc
- verify
- verifyhost
- weight
- ws
Their syntax is similar to the server line from the configuration file,
please refer to their individual documentation for details.
add server help
List the keywords supported for dynamic servers by the current haproxy
version. Keyword syntax is similar to the server line from the configuration
file, please refer to their individual documentation for details.
add ssl ca-file <cafile> <payload>
Add a new certificate to a ca-file. This command is useful when you reached
@ -2356,7 +2309,7 @@ help [<command>]
the requested one. The same help screen is also displayed for unknown
commands.
httpclient <method> <URI>
httpclient [--htx] <method> <URI>
Launch an HTTP client request and print the response on the CLI. Only
supported on a CLI connection running in expert mode (see "expert-mode on").
It's only meant for debugging. The httpclient is able to resolve a server
@ -2365,6 +2318,9 @@ httpclient <method> <URI>
able to resolve an host from /etc/hosts if you don't use a local dns daemon
which can resolve those.
The --htx option allows the use of the haproxy internal htx representation via
the htx_dump() function, mainly used for debugging.
new ssl ca-file <cafile>
Create a new empty CA file tree entry to be filled with a set of CA
certificates and added to a crt-list. This command should be used in
@ -2415,7 +2371,7 @@ prompt [help | n | i | p | timed]*
Without any option, this will cycle through prompt mode then non-interactive
mode. In non-interactive mode, the connection is closed after the last
command of the current line compltes. In interactive mode, the connection is
command of the current line completes. In interactive mode, the connection is
not closed after a command completes, so that a new one can be entered. In
prompt mode, the interactive mode is still in use, and a prompt will appear
at the beginning of the line, indicating to the user that the interpreter is
@ -3046,18 +3002,19 @@ show info [typed|json] [desc] [float]
(...)
> show info typed
0.Name.1:POS:str:HAProxy
1.Version.1:POS:str:1.7-dev1-de52ea-146
2.Release_date.1:POS:str:2016/03/11
3.Nbproc.1:CGS:u32:1
4.Process_num.1:KGP:u32:1
5.Pid.1:SGP:u32:28105
6.Uptime.1:MDP:str:0d 0h00m08s
7.Uptime_sec.1:MDP:u32:8
8.Memmax_MB.1:CLP:u32:0
9.PoolAlloc_MB.1:MGP:u32:0
10.PoolUsed_MB.1:MGP:u32:0
11.PoolFailed.1:MCP:u32:0
0.Name.1:POSV:str:HAProxy
1.Version.1:POSV:str:3.1-dev0-7c653d-2466
2.Release_date.1:POSV:str:2025/07/01
3.Nbthread.1:CGSV:u32:1
4.Nbproc.1:CGSV:u32:1
5.Process_num.1:KGPV:u32:1
6.Pid.1:SGPV:u32:638069
7.Uptime.1:MDPV:str:0d 0h00m07s
8.Uptime_sec.1:MDPV:u32:7
9.Memmax_MB.1:CLPV:u32:0
10.PoolAlloc_MB.1:MGPV:u32:0
11.PoolUsed_MB.1:MGPV:u32:0
12.PoolFailed.1:MCPV:u32:0
(...)
In the typed format, the presence of the process ID at the end of the
@ -3264,11 +3221,11 @@ show quic [<format>] [<filter>]
An optional argument can be specified to control the verbosity. Its value can
be interpreted in different ways. The first possibility is to use predefined
values, "oneline" for the default format and "full" to display all
information. Alternatively, a list of comma-delimited fields can be specified
to restrict output. Currently supported values are "tp", "sock", "pktns",
"cc" and "mux". Finally, "help" in the format will instead show a more
detailed help message.
values, "oneline" for the default format, "stream" to list every active
stream and "full" to display all information. Alternatively, a list of
comma-delimited fields can be specified to restrict output. Currently
supported values are "tp", "sock", "pktns", "cc" and "mux". Finally, "help"
in the format will instead show a more detailed help message.
The final argument is used to restrict or extend the connection list. By
default, connections on closing or draining state are not displayed. Use the
@ -3283,7 +3240,29 @@ show servers conn [<backend>]
The output consists in a header line showing the fields titles, then one
server per line with for each, the backend name and ID, server name and ID,
the address, port and a series of values. The number of fields varies
depending on thread count.
depending on thread count. The exact format of the output may vary slightly
across versions and depending on the number of threads. One needs to pay
attention to the header line to match columns when extracting output values,
and to the number of threads as the last columns are per-thread:
bkname/svname Backend name '/' server name
bkid/svid Backend ID '/' server ID
addr Server's IP address
port Server's port (or zero if none)
- Unused field, serves as a visual delimiter
purge_delay Interval between connection purges, in milliseconds
used_cur Number of connections currently in use
used_max Highest value of used_cur since the process started
need_est Floating estimate of total needed connections
unsafe_nb Number of idle connections considered as "unsafe"
safe_nb Number of idle connections considered as "safe"
idle_lim Configured maximum number of idle connections
idle_cur Total of the per-thread currently idle connections
idle_per_thr[NB] Idle conns per thread for each one of the NB threads
HAProxy will kill a portion of <idle_cur> every <purge_delay> when the total
of <idle_cur> + <used_cur> exceeds the estimate <need_est>. This estimate
varies based on connection activity.
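As a hedged illustration with made-up numbers: with <purge_delay> set to 5000
(5 seconds), <used_cur> at 10, <idle_cur> at 30 and <need_est> at 25, the total
10 + 30 = 40 exceeds the estimate, so part of the 30 idle connections is closed
at each 5-second purge tick until <idle_cur> + <used_cur> drops back below
<need_est>.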
Given the threaded nature of idle connections, it's important to understand
that some values may change once read, and that as such, consistency within a
@ -3516,10 +3495,11 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
The rest of the line starting after the first colon follows the "typed output
format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature and scope of the variable. The third
column indicates the field type, among "s32", "s64", "u32", "u64", "flt' and
"str". Then the fourth column is the value itself, which the consumer knows
how to parse thanks to column 3 and how to process thanks to column 2.
first ':') indicates the origin, nature, scope and persistence state of the
variable. The third column indicates the field type, among "s32", "s64",
"u32", "u64", "flt' and "str". Then the fourth column is the value itself,
which the consumer knows how to parse thanks to column 3 and how to process
thanks to column 2.
When "desc" is appended to the command, one extra colon followed by a quoted
string is appended with a description for the metric. At the time of writing,
@ -3532,37 +3512,32 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
Here's an example of typed output format :
$ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
F.2.0.0.pxname.1:MGP:str:private-frontend
F.2.0.1.svname.1:MGP:str:FRONTEND
F.2.0.8.bin.1:MGP:u64:0
F.2.0.9.bout.1:MGP:u64:0
F.2.0.40.hrsp_2xx.1:MGP:u64:0
L.2.1.0.pxname.1:MGP:str:private-frontend
L.2.1.1.svname.1:MGP:str:sock-1
L.2.1.17.status.1:MGP:str:OPEN
L.2.1.73.addr.1:MGP:str:0.0.0.0:8001
S.3.13.60.rtime.1:MCP:u32:0
S.3.13.61.ttime.1:MCP:u32:0
S.3.13.62.agent_status.1:MGP:str:L4TOUT
S.3.13.64.agent_duration.1:MGP:u64:2001
S.3.13.65.check_desc.1:MCP:str:Layer4 timeout
S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout
S.3.13.67.check_rise.1:MCP:u32:2
S.3.13.68.check_fall.1:MCP:u32:3
S.3.13.69.check_health.1:SGP:u32:0
S.3.13.70.agent_rise.1:MaP:u32:1
S.3.13.71.agent_fall.1:SGP:u32:1
S.3.13.72.agent_health.1:SGP:u32:1
S.3.13.73.addr.1:MCP:str:1.255.255.255:8888
S.3.13.75.mode.1:MAP:str:http
B.3.0.0.pxname.1:MGP:str:private-backend
B.3.0.1.svname.1:MGP:str:BACKEND
B.3.0.2.qcur.1:MGP:u32:0
B.3.0.3.qmax.1:MGP:u32:0
B.3.0.4.scur.1:MGP:u32:0
B.3.0.5.smax.1:MGP:u32:0
B.3.0.6.slim.1:MGP:u32:1000
B.3.0.55.lastsess.1:MMP:s32:-1
F.2.0.0.pxname.1:KNSV:str:dummy
F.2.0.1.svname.1:KNSV:str:FRONTEND
F.2.0.4.scur.1:MGPV:u32:0
F.2.0.5.smax.1:MMPV:u32:0
F.2.0.6.slim.1:CLPV:u32:524269
F.2.0.7.stot.1:MCPP:u64:0
F.2.0.8.bin.1:MCPP:u64:0
F.2.0.9.bout.1:MCPP:u64:0
F.2.0.10.dreq.1:MCPP:u64:0
F.2.0.11.dresp.1:MCPP:u64:0
F.2.0.12.ereq.1:MCPP:u64:0
F.2.0.17.status.1:SGPV:str:OPEN
F.2.0.26.pid.1:KGPV:u32:1
F.2.0.27.iid.1:KGSV:u32:2
F.2.0.28.sid.1:KGSV:u32:0
F.2.0.32.type.1:CGSV:u32:0
F.2.0.33.rate.1:MRPP:u32:0
F.2.0.34.rate_lim.1:CLPV:u32:0
F.2.0.35.rate_max.1:MMPV:u32:0
F.2.0.46.req_rate.1:MRPP:u32:0
F.2.0.47.req_rate_max.1:MMPV:u32:0
F.2.0.48.req_tot.1:MCPP:u64:0
F.2.0.51.comp_in.1:MCPP:u64:0
F.2.0.52.comp_out.1:MCPP:u64:0
F.2.0.53.comp_byp.1:MCPP:u64:0
F.2.0.54.comp_rsp.1:MCPP:u64:0
(...)
In the typed format, the presence of the process ID at the end of the
@ -3573,20 +3548,20 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
$ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
B.3.0.0.pxname.1:MGP:str:private-backend
B.3.0.0.pxname.2:MGP:str:private-backend
B.3.0.1.svname.1:MGP:str:BACKEND
B.3.0.1.svname.2:MGP:str:BACKEND
B.3.0.2.qcur.1:MGP:u32:0
B.3.0.2.qcur.2:MGP:u32:0
B.3.0.3.qmax.1:MGP:u32:0
B.3.0.3.qmax.2:MGP:u32:0
B.3.0.4.scur.1:MGP:u32:0
B.3.0.4.scur.2:MGP:u32:0
B.3.0.5.smax.1:MGP:u32:0
B.3.0.5.smax.2:MGP:u32:0
B.3.0.6.slim.1:MGP:u32:1000
B.3.0.6.slim.2:MGP:u32:1000
B.3.0.0.pxname.1:KNSV:str:private-backend
B.3.0.0.pxname.2:KNSV:str:private-backend
B.3.0.1.svname.1:KNSV:str:BACKEND
B.3.0.1.svname.2:KNSV:str:BACKEND
B.3.0.2.qcur.1:MGPV:u32:0
B.3.0.2.qcur.2:MGPV:u32:0
B.3.0.3.qmax.1:MMPV:u32:0
B.3.0.3.qmax.2:MMPV:u32:0
B.3.0.4.scur.1:MGPV:u32:0
B.3.0.4.scur.2:MGPV:u32:0
B.3.0.5.smax.1:MMPV:u32:0
B.3.0.5.smax.2:MMPV:u32:0
B.3.0.6.slim.1:CLPV:u32:1000
B.3.0.6.slim.2:CLPV:u32:1000
(...)
The format of JSON output is described in a schema which may be output
@ -4571,9 +4546,6 @@ show proc [debug]
1271 worker 1 0d00h00m00s 2.5-dev13
# old workers
1233 worker 3 0d00h00m43s 2.0-dev3-6019f6-289
# programs
1244 foo 0 0d00h00m00s -
1255 bar 0 0d00h00m00s -
In this example, the master has been reloaded 5 times but one of the old
worker is still running and survived 3 reloads. You could access the CLI of

View File

@ -3,7 +3,7 @@
-- Provides a pure lua alternative to tcpcheck mailers.
--
-- To be loaded using "lua-load" from haproxy configuration to handle
-- email-alerts directly from lua and disable legacy tcpcheck implementation.
-- email-alerts directly from lua
local SYSLOG_LEVEL = {
["EMERG"] = 0,
@ -364,9 +364,9 @@ local function srv_event_add(event, data)
mailers_track_server_events(data.reference)
end
-- disable legacy email-alerts since email-alerts will be sent from lua directly
core.disable_legacy_mailers()
-- tell haproxy that we do use the legacy native "mailers" config section
-- which allows us to retrieve mailers configuration using Proxy:get_mailers()
core.use_native_mailers_config()
-- event subscriptions are purposely performed in an init function to prevent
-- email alerts from being generated too early (when process is starting up)

View File

@ -112,7 +112,7 @@ local function rotate_piece(piece, piece_id, px, py, board)
end
function render(applet, board, piece, piece_id, px, py, score)
local output = clear_screen .. cursor_home
local output = cursor_home
output = output .. game_name .. " - Lines: " .. score .. "\r\n"
output = output .. "+" .. string.rep("-", board_width * 2) .. "+\r\n"
for y = 1, board_height do
@ -160,6 +160,7 @@ function handler(applet)
end
applet:send(cursor_hide)
applet:send(clear_screen)
-- fall the piece by one line every delay
local function fall_piece()
@ -214,7 +215,7 @@ function handler(applet)
local input = applet:receive(1, delay)
if input then
if input == "q" then
if input == "" or input == "q" then
game_over = true
elseif input == "\27" then
local a = applet:receive(1, delay)

View File

@ -31,7 +31,7 @@ struct acme_cfg {
};
enum acme_st {
ACME_RESSOURCES = 0,
ACME_RESOURCES = 0,
ACME_NEWNONCE,
ACME_CHKACCOUNT,
ACME_NEWACCOUNT,
@ -51,9 +51,11 @@ enum http_st {
};
struct acme_auth {
struct ist dns; /* dns entry */
struct ist auth; /* auth URI */
struct ist chall; /* challenge URI */
struct ist token; /* token */
int ready; /* is the challenge ready ? */
void *next;
};
@ -70,7 +72,7 @@ struct acme_ctx {
struct ist newNonce;
struct ist newAccount;
struct ist newOrder;
} ressources;
} resources;
struct ist nonce;
struct ist kid;
struct ist order;
@ -79,6 +81,20 @@ struct acme_ctx {
X509_REQ *req;
struct ist finalize;
struct ist certificate;
struct task *task;
struct mt_list el;
};
#define ACME_EV_SCHED (1ULL << 0) /* scheduling wakeup */
#define ACME_EV_NEW (1ULL << 1) /* new task */
#define ACME_EV_TASK (1ULL << 2) /* Task handler */
#define ACME_EV_REQ (1ULL << 3) /* HTTP Request */
#define ACME_EV_RES (1ULL << 4) /* HTTP Response */
#define ACME_VERB_CLEAN 1
#define ACME_VERB_MINIMAL 2
#define ACME_VERB_SIMPLE 3
#define ACME_VERB_ADVANCED 4
#define ACME_VERB_COMPLETE 5
#endif

View File

@ -66,7 +66,8 @@ enum act_parse_ret {
enum act_opt {
ACT_OPT_NONE = 0x00000000, /* no flag */
ACT_OPT_FINAL = 0x00000001, /* last call, cannot yield */
ACT_OPT_FIRST = 0x00000002, /* first call for this action */
ACT_OPT_FINAL_EARLY = 0x00000002, /* set in addition to ACT_OPT_FINAL if last call occurs earlier than normal due to unexpected IO/error */
ACT_OPT_FIRST = 0x00000004, /* first call for this action */
};
/* Flags used to describe the action. */

View File

@ -81,9 +81,13 @@ static forceinline char *appctx_show_flags(char *buf, size_t len, const char *de
#undef _
}
#define APPLET_FL_NEW_API 0x00000001 /* Set if the applet is based on the new API (using applet's buffers) */
#define APPLET_FL_WARNED 0x00000002 /* Set when warning was already emitted about a legacy applet */
/* Applet descriptor */
struct applet {
enum obj_type obj_type; /* object type = OBJ_TYPE_APPLET */
unsigned int flags; /* APPLET_FL_* flags */
/* 3 unused bytes here */
char *name; /* applet's name to report in logs */
int (*init)(struct appctx *); /* callback to init resources, may be NULL.

View File

@ -116,7 +116,7 @@ static inline int appctx_init(struct appctx *appctx)
* the appctx will be fully initialized. The session and the stream will
* eventually be created. The affinity must be set now !
*/
BUG_ON(appctx->t->tid != tid);
BUG_ON(appctx->t->tid != -1 && appctx->t->tid != tid);
task_set_thread(appctx->t, tid);
if (appctx->applet->init)
@ -282,6 +282,120 @@ static inline void applet_expect_data(struct appctx *appctx)
se_fl_clr(appctx->sedesc, SE_FL_EXP_NO_DATA);
}
/* Returns the buffer containing data pushed to the applet by the stream. For
* applets using their own buffers it is the appctx input buffer. For legacy
* applets, it is the output channel buffer.
*/
static inline struct buffer *applet_get_inbuf(struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (applet_fl_test(appctx, APPCTX_FL_INBLK_ALLOC) || !appctx_get_buf(appctx, &appctx->inbuf))
return NULL;
return &appctx->inbuf;
}
else
return sc_ob(appctx_sc(appctx));
}
/* Returns the buffer containing data pushed by the applets to the stream. For
* applets using their own buffer it is the appctx output buffer. For legacy
* applets, it is the input channel buffer.
*/
static inline struct buffer *applet_get_outbuf(struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (applet_fl_test(appctx, APPCTX_FL_OUTBLK_ALLOC|APPCTX_FL_OUTBLK_FULL) ||
!appctx_get_buf(appctx, &appctx->outbuf))
return NULL;
return &appctx->outbuf;
}
else
return sc_ib(appctx_sc(appctx));
}
/* Returns the amount of data in the input buffer (see applet_get_inbuf) */
static inline size_t applet_input_data(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return b_data(&appctx->inbuf);
else
return co_data(sc_oc(appctx_sc(appctx)));
}
/* Returns the amount of HTX data in the input buffer (see applet_get_inbuf) */
static inline size_t applet_htx_input_data(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return htx_used_space(htxbuf(&appctx->inbuf));
else
return co_data(sc_oc(appctx_sc(appctx)));
}
/* Skips <len> bytes from the input buffer (see applet_get_inbuf).
*
* This is useful when data have been read directly from the buffer. It is
* illegal to call this function with <len> causing a wrapping at the end of the
* buffer. It's the caller's responsibility to ensure that <len> is never larger
* than available output data.
*/
static inline void applet_skip_input(struct appctx *appctx, size_t len)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
b_del(&appctx->inbuf, len);
applet_fl_clr(appctx, APPCTX_FL_INBLK_FULL);
}
else
co_skip(sc_oc(appctx_sc(appctx)), len);
}
/* Removes all bytes from the input buffer (see applet_get_inbuf).
*/
static inline void applet_reset_input(struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
b_reset(&appctx->inbuf);
applet_fl_clr(appctx, APPCTX_FL_INBLK_FULL);
}
else
co_skip(sc_oc(appctx_sc(appctx)), co_data(sc_oc(appctx_sc(appctx))));
}
/* Returns the amount of space available in the output buffer (see applet_get_outbuf).
*/
static inline size_t applet_output_room(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return b_room(&appctx->outbuf);
else
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Returns the amount of space available in the HTX output buffer (see applet_get_outbuf).
*/
static inline size_t applet_htx_output_room(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return htx_free_data_space(htxbuf(&appctx->outbuf));
else
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Indicates that the applet has more data to deliver and needs more room in
* the output buffer to do so (see applet_get_outbuf).
*
* For applets using their own buffers, <room_needed> is not used and only
* <appctx> flags are updated. For legacy applets, the amount of free space
* required must be specified. In this last case, it is the caller's
* responsibility to ensure <room_needed> is valid.
*/
static inline void applet_need_room(struct appctx *appctx, size_t room_needed)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
applet_have_more_data(appctx);
else
sc_need_room(appctx_sc(appctx), room_needed);
}
/* Should only be used via wrappers applet_putchk() / applet_putchk_stress(). */
static inline int _applet_putchk(struct appctx *appctx, struct buffer *chunk,
int stress)
@ -318,9 +432,10 @@ static inline int _applet_putchk(struct appctx *appctx, struct buffer *chunk,
return ret;
}
/* writes chunk <chunk> into the input channel of the stream attached to this
* appctx's endpoint, and marks the SC_FL_NEED_ROOM on a channel full error.
* See ci_putchk() for the list of return codes.
/* writes chunk <chunk> into the applet output buffer (see applet_get_outbuf).
*
* Returns the number of written bytes on success or -1 on error (lack of space,
* shutdown, invalid call...)
*/
static inline int applet_putchk(struct appctx *appctx, struct buffer *chunk)
{
@ -333,9 +448,10 @@ static inline int applet_putchk_stress(struct appctx *appctx, struct buffer *chu
return _applet_putchk(appctx, chunk, 1);
}
/* writes <len> chars from <blk> into the input channel of the stream attached
* to this appctx's endpoint, and marks the SC_FL_NEED_ROOM on a channel full
* error. See ci_putblk() for the list of return codes.
/* writes <len> chars from <blk> into the applet output buffer (see applet_get_outbuf).
*
* Returns the number of written bytes on success or -1 on error (lack of space,
* shutdown, invalid call...)
*/
static inline int applet_putblk(struct appctx *appctx, const char *blk, int len)
{
@ -367,10 +483,11 @@ static inline int applet_putblk(struct appctx *appctx, const char *blk, int len)
return ret;
}
/* writes chars from <str> up to the trailing zero (excluded) into the input
* channel of the stream attached to this appctx's endpoint, and marks the
* SC_FL_NEED_ROOM on a channel full error. See ci_putstr() for the list of
* return codes.
/* writes chars from <str> up to the trailing zero (excluded) into the applet
* output buffer (see applet_get_outbuf).
*
* Returns the number of written bytes on success or -1 on error (lack of space,
* shutdown, invalid call...)
*/
static inline int applet_putstr(struct appctx *appctx, const char *str)
{
@ -403,9 +520,10 @@ static inline int applet_putstr(struct appctx *appctx, const char *str)
return ret;
}
/* writes character <chr> into the input channel of the stream attached to this
* appctx's endpoint, and marks the SC_FL_NEED_ROOM on a channel full error.
* See ci_putchr() for the list of return codes.
/* writes character <chr> into the applet's output buffer (see applet_get_outbuf).
*
* Returns the number of written bytes on success or -1 on error (lack of space,
* shutdown, invalid call...)
*/
static inline int applet_putchr(struct appctx *appctx, char chr)
{
@ -438,6 +556,283 @@ static inline int applet_putchr(struct appctx *appctx, char chr)
return ret;
}
static inline int applet_may_get(const struct appctx *appctx, size_t len)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (len > b_data(&appctx->inbuf)) {
if (se_fl_test(appctx->sedesc, SE_FL_SHW))
return -1;
return 0;
}
}
else {
const struct stconn *sc = appctx_sc(appctx);
if ((sc->flags & SC_FL_SHUT_DONE) || len > co_data(sc_oc(sc))) {
if (sc->flags & (SC_FL_SHUT_DONE|SC_FL_SHUT_WANTED))
return -1;
return 0;
}
}
return 1;
}
/* Gets one char from the applet input buffer (see applet_get_inbuf).
*
* Return values :
* 1 : number of bytes read, equal to requested size.
* =0 : not enough data available. <c> is left undefined.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getchar(const struct appctx *appctx, char *c)
{
int ret;
ret = applet_may_get(appctx, 1);
if (ret <= 0)
return ret;
*c = ((appctx->flags & APPCTX_FL_INOUT_BUFS)
? *(b_head(&appctx->inbuf))
: *(co_head(sc_oc(appctx_sc(appctx)))));
return 1;
}
/* Copies one full block of data from the applet input buffer (see
* applet_get_inbuf).
*
* <len> bytes are copied, starting at the offset <offset>.
*
* Return values :
* >0 : number of bytes read, equal to requested size.
* =0 : not enough data available. <blk> is left undefined.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getblk(const struct appctx *appctx, char *blk, int len, int offset)
{
const struct buffer *buf;
int ret;
ret = applet_may_get(appctx, len+offset);
if (ret <= 0)
return ret;
buf = ((appctx->flags & APPCTX_FL_INOUT_BUFS)
? &appctx->inbuf
: sc_ob(appctx_sc(appctx)));
return b_getblk(buf, blk, len, offset);
}
/* Gets one text block representing a word from the applet input buffer (see
* applet_get_inbuf).
*
* The separator is waited for as long as some data can still be received and the
* destination is not full. Otherwise, the string may be returned as is, without
* the separator.
*
* Return values :
* >0 : number of bytes read. Includes the separator if present before len or end.
* =0 : no separator before end found. <str> is left undefined.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getword(const struct appctx *appctx, char *str, int len, char sep)
{
const struct buffer *buf;
char *p;
size_t input, max = len;
int ret = 0;
ret = applet_may_get(appctx, 1);
if (ret <= 0)
goto out;
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
buf = &appctx->inbuf;
input = b_data(buf);
}
else {
struct stconn *sc = appctx_sc(appctx);
buf = sc_ob(sc);
input = co_data(sc_oc(sc));
}
if (max > input) {
max = input;
str[max-1] = 0;
}
p = b_head(buf);
ret = 0;
while (max) {
*str++ = *p;
ret++;
max--;
if (*p == sep)
goto out;
p = b_next(buf, p);
}
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (ret < len && (ret < input || b_room(buf)) &&
!se_fl_test(appctx->sedesc, SE_FL_SHW))
ret = 0;
}
else {
struct stconn *sc = appctx_sc(appctx);
if (ret < len && (ret < input || channel_may_recv(sc_oc(sc))) &&
!(sc->flags & (SC_FL_SHUT_DONE|SC_FL_SHUT_WANTED)))
ret = 0;
}
out:
if (max)
*str = 0;
return ret;
}
/* Gets one text block representing a line from the applet input buffer (see
* applet_get_inbuf).
*
* The '\n' is waited for as long as some data can still be received and the
* destination is not full. Otherwise, the string may be returned as is, without
* the '\n'.
*
* Return values :
* >0 : number of bytes read. Includes the \n if present before len or end.
* =0 : no '\n' before end found. <str> is left undefined.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getline(const struct appctx *appctx, char *str, int len)
{
return applet_getword(appctx, str, len, '\n');
}
/* Gets one or two blocks of data at once from the applet input buffer (see applet_get_inbuf).
*
* Data are not copied.
*
* Return values :
* >0 : number of blocks filled (1 or 2). blk1 is always filled before blk2.
* =0 : not enough data available. <blk*> are left undefined.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getblk_nc(const struct appctx *appctx, const char **blk1, size_t *len1, const char **blk2, size_t *len2)
{
const struct buffer *buf;
size_t max;
int ret;
ret = applet_may_get(appctx, 1);
if (ret <= 0)
return ret;
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
buf = &appctx->inbuf;
max = b_data(buf);
}
else {
struct stconn *sc = appctx_sc(appctx);
buf = sc_ob(sc);
max = co_data(sc_oc(sc));
}
return b_getblk_nc(buf, blk1, len1, blk2, len2, 0, max);
}
/* Gets one or two blocks of text representing a word from the applet input
* buffer (see applet_get_inbuf).
*
* Data are not copied. The separator is waited for as long as some data can
* still be received and the destination is not full. Otherwise, the string may
* be returned as is, without the separator.
*
* Return values :
* >0 : number of blocks filled (1 or 2). The separator, when found, ends the last returned block.
* =0 : no separator found yet and more data may still arrive. The blocks must not be used.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getword_nc(const struct appctx *appctx, const char **blk1, size_t *len1, const char **blk2, size_t *len2, char sep)
{
int ret;
size_t l;
ret = applet_getblk_nc(appctx, blk1, len1, blk2, len2);
if (unlikely(ret <= 0))
return ret;
for (l = 0; l < *len1 && (*blk1)[l] != sep; l++);
if (l < *len1 && (*blk1)[l] == sep) {
*len1 = l + 1;
return 1;
}
if (ret >= 2) {
for (l = 0; l < *len2 && (*blk2)[l] != sep; l++);
if (l < *len2 && (*blk2)[l] == sep) {
*len2 = l + 1;
return 2;
}
}
/* If we have found no separator and the buffer is full or the SC is shut, then
* the resulting string is made of the concatenation of the pending
* blocks (1 or 2).
*/
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (b_full(&appctx->inbuf) || se_fl_test(appctx->sedesc, SE_FL_SHW))
return ret;
}
else {
struct stconn *sc = appctx_sc(appctx);
if (!channel_may_recv(sc_oc(sc)) || sc->flags & (SC_FL_SHUT_DONE|SC_FL_SHUT_WANTED))
return ret;
}
/* No separator yet and not shut yet */
return 0;
}
/* Gets one or two blocks of text representing a line from the applet input
* buffer (see applet_get_inbuf).
*
* Data are not copied. The '\n' is waited for as long as some data can still be
* received and the destination is not full. Otherwise, the string may be
* returned as is, without the '\n'.
*
* Return values :
* >0 : number of blocks filled (1 or 2). The '\n', when found, ends the last returned block.
* =0 : no '\n' found yet and more data may still arrive. The blocks must not be used.
* <0 : no more bytes readable because output is shut.
*
* The status of the corresponding buffer is not changed. The caller must call
* applet_skip_input() to update it.
*/
static inline int applet_getline_nc(const struct appctx *appctx, const char **blk1, size_t *len1, const char **blk2, size_t *len2)
{
return applet_getword_nc(appctx, blk1, len1, blk2, len2, '\n');
}
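/* Hedged usage sketch, not part of the patch: a hypothetical line-echo handler
 * built only from the helpers introduced above, to show how they hide the
 * difference between legacy channel buffers and the applet's own in/out
 * buffers. Yielding, partial writes and error reporting are ignored here.
 */
static inline void applet_echo_one_line(struct appctx *appctx)
{
	char line[256];
	int ret;

	/* grab one full line: 0 means "come back when more data arrives",
	 * a negative value means the producing side is already shut.
	 */
	ret = applet_getline(appctx, line, sizeof(line));
	if (ret <= 0)
		return;

	/* echo it back, and only consume the input once the copy succeeded */
	if (applet_putblk(appctx, line, ret) != -1)
		applet_skip_input(appctx, ret);
	else
		applet_need_room(appctx, ret);
}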
#endif /* _HAPROXY_APPLET_H */
/*

View File

@ -86,7 +86,7 @@ static inline int be_usable_srv(struct proxy *be)
/* set the time of last session on the backend */
static inline void be_set_sess_last(struct proxy *be)
{
be->be_counters.last_sess = ns_to_sec(now_ns);
HA_ATOMIC_STORE(&be->be_counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}
/* This function returns non-zero if the designated server will be

View File

@ -68,7 +68,7 @@
#else // not x86
/* generic implementation, causes a segfault */
static inline __attribute((always_inline)) void ha_crash_now(void)
static inline __attribute((always_inline,noreturn,unused)) void ha_crash_now(void)
{
#if __GNUC_PREREQ__(5, 0)
#pragma GCC diagnostic push

View File

@ -1,46 +0,0 @@
/*
* include/haprox/cbuf-t.h
* This file contains definition for circular buffers.
*
* Copyright 2021 HAProxy Technologies, Frederic Lecaille <flecaille@haproxy.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation, version 2.1
* exclusively.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _HAPROXY_CBUF_T_H
#define _HAPROXY_CBUF_T_H
#ifdef USE_QUIC
#ifndef USE_OPENSSL
#error "Must define USE_OPENSSL"
#endif
#endif
#include <stddef.h>
#include <haproxy/list-t.h>
extern struct pool_head *pool_head_cbuf;
struct cbuf {
/* buffer */
unsigned char *buf;
/* buffer size */
size_t sz;
/* Writer index */
size_t wr;
/* Reader index */
size_t rd;
};
#endif /* _HAPROXY_CBUF_T_H */

View File

@ -1,136 +0,0 @@
/*
* include/haprox/cbuf.h
* This file contains definitions and prototypes for circular buffers.
* Inspired from Linux circular buffers (include/linux/circ_buf.h).
*
* Copyright 2021 HAProxy Technologies, Frederic Lecaille <flecaille@haproxy.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation, version 2.1
* exclusively.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _HAPROXY_CBUF_H
#define _HAPROXY_CBUF_H
#ifdef USE_QUIC
#ifndef USE_OPENSSL
#error "Must define USE_OPENSSL"
#endif
#endif
#include <haproxy/atomic.h>
#include <haproxy/list.h>
#include <haproxy/cbuf-t.h>
struct cbuf *cbuf_new(unsigned char *buf, size_t sz);
void cbuf_free(struct cbuf *cbuf);
/* Amount of data between <rd> and <wr> */
#define CBUF_DATA(wr, rd, size) (((wr) - (rd)) & ((size) - 1))
/* Return the writer position in <cbuf>.
* To be used only by the writer!
*/
static inline unsigned char *cb_wr(struct cbuf *cbuf)
{
return cbuf->buf + cbuf->wr;
}
/* Reset the reader index.
* To be used by a reader!
*/
static inline void cb_rd_reset(struct cbuf *cbuf)
{
cbuf->rd = 0;
}
/* Reset the writer index.
* To be used by a writer!
*/
static inline void cb_wr_reset(struct cbuf *cbuf)
{
cbuf->wr = 0;
}
/* Increase <cbuf> circular buffer data by <count>.
* To be used by a writer!
*/
static inline void cb_add(struct cbuf *cbuf, size_t count)
{
cbuf->wr = (cbuf->wr + count) & (cbuf->sz - 1);
}
/* Return the reader position in <cbuf>.
* To be used only by the reader!
*/
static inline unsigned char *cb_rd(struct cbuf *cbuf)
{
return cbuf->buf + cbuf->rd;
}
/* Skip <count> byte in <cbuf> circular buffer.
* To be used by a reader!
*/
static inline void cb_del(struct cbuf *cbuf, size_t count)
{
cbuf->rd = (cbuf->rd + count) & (cbuf->sz - 1);
}
/* Return the amount of data left in <cbuf>.
* To be used only by the writer!
*/
static inline size_t cb_data(struct cbuf *cbuf)
{
size_t rd;
rd = HA_ATOMIC_LOAD(&cbuf->rd);
return CBUF_DATA(cbuf->wr, rd, cbuf->sz);
}
/* Return the amount of room left in <cbuf> minus 1 to distinguish
* the case where the buffer is full from the case where is is empty
* To be used only by the write!
*/
static inline size_t cb_room(struct cbuf *cbuf)
{
size_t rd;
rd = HA_ATOMIC_LOAD(&cbuf->rd);
return CBUF_DATA(rd, cbuf->wr + 1, cbuf->sz);
}
/* Return the amount of contiguous data left in <cbuf>.
* To be used only by the reader!
*/
static inline size_t cb_contig_data(struct cbuf *cbuf)
{
size_t end, n;
end = cbuf->sz - cbuf->rd;
n = (HA_ATOMIC_LOAD(&cbuf->wr) + end) & (cbuf->sz - 1);
return n < end ? n : end;
}
/* Return the amount of contiguous space left in <cbuf>.
* To be used only by the writer!
*/
static inline size_t cb_contig_space(struct cbuf *cbuf)
{
size_t end, n;
end = cbuf->sz - 1 - cbuf->wr;
n = (HA_ATOMIC_LOAD(&cbuf->rd) + end) & (cbuf->sz - 1);
return n <= end ? n : end + 1;
}
#endif /* _HAPROXY_CBUF_H */

View File

@ -28,7 +28,7 @@
extern struct timeval start_date; /* the process's start date in wall-clock time */
extern struct timeval ready_date; /* date when the process was considered ready */
extern ullong start_time_ns; /* the process's start date in internal monotonic time (ns) */
extern volatile ullong global_now_ns; /* common monotonic date between all threads, in ns (wraps every 585 yr) */
extern volatile ullong *global_now_ns;/* common monotonic date between all threads, in ns (wraps every 585 yr) */
extern THREAD_LOCAL ullong now_ns; /* internal monotonic date derived from real clock, in ns (wraps every 585 yr) */
extern THREAD_LOCAL struct timeval date; /* the real current date (wall-clock time) */

View File

@ -350,7 +350,7 @@
* <type> which has its member <name> stored at address <ptr>.
*/
#ifndef container_of
#define container_of(ptr, type, name) ((type *)(((void *)(ptr)) - ((long)&((type *)0)->name)))
#define container_of(ptr, type, name) ((type *)(((char *)(ptr)) - offsetof(type, name)))
#endif
/* returns a pointer to the structure of type <type> which has its member <name>
@ -359,7 +359,7 @@
#ifndef container_of_safe
#define container_of_safe(ptr, type, name) \
({ void *__p = (ptr); \
__p ? (type *)(__p - ((long)&((type *)0)->name)) : (type *)0; \
__p ? (type *)((char *)__p - offsetof(type, name)) : (type *)0; \
})
#endif
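
For readers less familiar with these macros, here is a small hedged illustration of container_of(); struct item, struct my_list and item_of() are hypothetical stand-ins, not part of the header above:

#include <stddef.h>

struct my_list { struct my_list *n, *p; };   /* stand-in for the real list type */

struct item {
	int value;
	struct my_list list;                 /* embedded list element */
};

/* Recover the enclosing struct item from a pointer to its <list> member.
 * container_of_safe() would additionally tolerate a NULL pointer.
 */
static inline struct item *item_of(struct my_list *lh)
{
	return container_of(lh, struct item, list);
}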

View File

@ -68,6 +68,50 @@ struct ssl_sock_ctx;
* conn_cond_update_polling().
*/
/* A bit of explanation is required for backend connection reuse. A connection
* may be shared between multiple streams of the same thread (e.g. h2, fcgi,
* quic) and may be reused by subsequent streams of a different thread if it
* is totally idle (i.e. not used at all). In order to permit other streams
* to find a connection, it has to appear in lists and/or trees that reflect
* its current state. If the connection is full and cannot be shared anymore,
* it is not present in any of these places. The various states are the following:
*
* - private: a private connection is not visible to other threads. It is
* attached via its <idle_list> member to the <conn_list> head of a
* sess_priv_conns struct specific to the server, itself attached to the
* session. Only other streams of the same session may find this connection.
* Such connections include totally idle connections as well as connections
* with available slots left. The <hash_node> part is still used to store
* the hash key but the tree node part is otherwise left unused.
*
* - avail: an available connection is a connection that has at least one
* stream in use and at least one slot available for a new stream. Such a
* connection is indexed in the server's <avail_conns> member based on the
* key of the hash_node. It cannot be used by other threads, and is not
* present in the server's <idle_conn_list>, so its <idle_list> member is
* always empty. Since this connection is in use by a single thread and
* cannot be taken over, it doesn't require any locking to enter/leave the
* tree.
*
* - safe: a safe connection is an idle connection that has proven that it
* could reliably be reused. Such a connection may be taken over at any
* instant by other threads, and must only be manipulated under the server's
* <idle_lock>. It is indexed in the server's <safe_conns> member based on
* the key of the hash_node. It is attached to the server's <idle_conn_list>
* via its <idle_list> member. It may be purged after too long inactivity,
* though the thread responsible for doing this will first take it over. Such
* a connection has (conn->flags & CO_FL_LIST_MASK) = CO_FL_SAFE_LIST.
*
* - idle: a purely idle connection has not yet proven that it could reliably
* be reused. Such a connection may be taken over at any instant by other
* threads, and must only be manipulated under the server's <idle_lock>. It
* is indexed in the server's <idle_conns> member based on the key of the
* hash_node. It is attached to the server's <idle_conn_list> via its
* <idle_list> member. It may be purged after too long inactivity, though the
* thread responsible for doing this will first take it over. Such a
* connection has (conn->flags & CO_FL_LIST_MASK) = CO_FL_IDLE_LIST.
*/
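As a hedged illustration of the state split described above (the helper name is hypothetical; only the safe/idle cases are encoded in conn->flags, while private and avail connections are recognized by the list or tree that holds them):

/* Return a human-readable name for the reuse state carried in the flags. */
static inline const char *conn_reuse_flag_str(const struct connection *conn)
{
	switch (conn->flags & CO_FL_LIST_MASK) {
	case CO_FL_SAFE_LIST: return "safe";
	case CO_FL_IDLE_LIST: return "idle";
	default:              return "private/avail (no list flag)";
	}
}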
/* flags for use in connection->flags. Please also update the conn_show_flags()
* function below in case of changes.
*/
@ -449,8 +493,6 @@ struct mux_ops {
int (*unsubscribe)(struct stconn *sc, int event_type, struct wait_event *es); /* Unsubscribe <es> from events */
int (*sctl)(struct stconn *sc, enum mux_sctl_type mux_sctl, void *arg); /* Provides information about the mux stream */
int (*avail_streams)(struct connection *conn); /* Returns the number of streams still available for a connection */
int (*avail_streams_bidi)(struct connection *conn); /* Returns the number of bidirectional streams still available for a connection */
int (*avail_streams_uni)(struct connection *conn); /* Returns the number of unidirectional streams still available for a connection */
int (*used_streams)(struct connection *conn); /* Returns the number of streams in use on a connection. */
void (*destroy)(void *ctx); /* Let the mux know one of its users left, so it may have to disappear */
int (*ctl)(struct connection *conn, enum mux_ctl_type mux_ctl, void *arg); /* Provides information about the mux connection */

View File

@ -25,108 +25,144 @@
#include <haproxy/freq_ctr-t.h>
#define COUNTERS_SHARED_F_NONE 0x0000
#define COUNTERS_SHARED_F_LOCAL 0x0001 // shared counter struct is actually process-local
// common to fe_counters_shared and be_counters_shared
#define COUNTERS_SHARED \
struct { \
uint16_t flags; /* COUNTERS_SHARED_F flags */\
};
#define COUNTERS_SHARED_TG \
struct { \
unsigned long last_state_change; /* last time the state was changed */\
long long srv_aborts; /* aborted responses during DATA phase caused by the server */\
long long cli_aborts; /* aborted responses during DATA phase caused by the client */\
long long internal_errors; /* internal processing errors */\
long long failed_rewrites; /* failed rewrites (warning) */\
long long bytes_out; /* number of bytes transferred from the server to the client */\
long long bytes_in; /* number of bytes transferred from the client to the server */\
long long denied_resp; /* blocked responses because of security concerns */\
long long denied_req; /* blocked requests because of security concerns */\
long long cum_sess; /* cumulated number of accepted connections */\
/* compression counters, index 0 for requests, 1 for responses */\
long long comp_in[2]; /* input bytes fed to the compressor */\
long long comp_out[2]; /* output bytes emitted by the compressor */\
long long comp_byp[2]; /* input bytes that bypassed the compressor (cpu/ram/bw limitation) */\
struct freq_ctr sess_per_sec; /* sessions per second on this server */\
}
// for convenience (generic pointer)
struct counters_shared {
COUNTERS_SHARED;
struct {
COUNTERS_SHARED_TG;
} *tg[MAX_TGROUPS];
};
/* counters used by listeners and frontends */
struct fe_counters {
unsigned int conn_max; /* max # of active sessions */
long long cum_conn; /* cumulated number of received connections */
long long cum_sess; /* cumulated number of accepted connections */
long long cum_sess_ver[3]; /* cumulated number of h1/h2/h3 sessions */
struct fe_counters_shared_tg {
COUNTERS_SHARED_TG;
unsigned int cps_max; /* maximum of new connections received per second */
unsigned int sps_max; /* maximum of new connections accepted per second (sessions) */
long long bytes_in; /* number of bytes transferred from the client to the server */
long long bytes_out; /* number of bytes transferred from the server to the client */
/* compression counters, index 0 for requests, 1 for responses */
long long comp_in[2]; /* input bytes fed to the compressor */
long long comp_out[2]; /* output bytes emitted by the compressor */
long long comp_byp[2]; /* input bytes that bypassed the compressor (cpu/ram/bw limitation) */
long long denied_req; /* blocked requests because of security concerns */
long long denied_resp; /* blocked responses because of security concerns */
long long failed_req; /* failed requests (eg: invalid or timeout) */
long long denied_conn; /* denied connection requests (tcp-req-conn rules) */
long long denied_sess; /* denied session requests (tcp-req-sess rules) */
long long failed_rewrites; /* failed rewrites (warning) */
long long internal_errors; /* internal processing errors */
long long cli_aborts; /* aborted responses during DATA phase caused by the client */
long long srv_aborts; /* aborted responses during DATA phase caused by the server */
long long denied_conn; /* denied connection requests (tcp-req-conn rules) */
long long intercepted_req; /* number of monitoring or stats requests intercepted by the frontend */
long long cum_conn; /* cumulated number of received connections */
struct freq_ctr conn_per_sec; /* received connections per second on the frontend */
struct freq_ctr req_per_sec; /* HTTP requests per second on the frontend */
long long cum_sess_ver[3]; /* cumulated number of h1/h2/h3 sessions */
union {
struct {
long long cum_req[4]; /* cumulated number of processed other/h1/h2/h3 requests */
long long comp_rsp; /* number of compressed responses */
unsigned int rps_max; /* maximum of new HTTP requests per second observed */
long long rsp[6]; /* http response codes */
long long cache_lookups;/* cache lookups */
long long cache_hits; /* cache hits */
long long cache_lookups;/* cache lookups */
long long comp_rsp; /* number of compressed responses */
long long rsp[6]; /* http response codes */
} http;
} p; /* protocol-specific stats */
struct freq_ctr sess_per_sec; /* sessions per second on this server */
struct freq_ctr req_per_sec; /* HTTP requests per second on the frontend */
struct freq_ctr conn_per_sec; /* received connections per second on the frontend */
long long failed_req; /* failed requests (eg: invalid or timeout) */
};
unsigned long last_change; /* last time the state was changed */
struct fe_counters_shared {
COUNTERS_SHARED;
struct fe_counters_shared_tg *tg[MAX_TGROUPS];
};
struct fe_counters {
struct fe_counters_shared shared; /* shared counters */
unsigned int conn_max; /* max # of active sessions */
unsigned int cps_max; /* maximum of new connections received per second */
unsigned int sps_max; /* maximum of new connections accepted per second (sessions) */
struct freq_ctr _sess_per_sec; /* sessions per second on this frontend, used to compute sps_max (internal use only) */
struct freq_ctr _conn_per_sec; /* connections per second on this frontend, used to compute cps_max (internal use only) */
union {
struct {
unsigned int rps_max; /* maximum of new HTTP requests per second observed */
struct freq_ctr _req_per_sec; /* HTTP requests per second on the frontend, only used to compute rps_max */
} http;
} p; /* protocol-specific stats */
};
struct be_counters_shared_tg {
COUNTERS_SHARED_TG;
long long cum_lbconn; /* cumulated number of sessions processed by load balancing (BE only) */
long long connect; /* number of connection establishment attempts */
long long reuse; /* number of connection reuses */
unsigned long last_sess; /* last session time */
long long failed_checks, failed_hana; /* failed health checks and health analyses for servers */
long long down_trans; /* up->down transitions */
union {
struct {
long long cum_req; /* cumulated number of processed HTTP requests */
long long cache_hits; /* cache hits */
long long cache_lookups;/* cache lookups */
long long comp_rsp; /* number of compressed responses */
long long rsp[6]; /* http response codes */
} http;
} p; /* protocol-specific stats */
long long redispatches; /* retried and redispatched connections (BE only) */
long long retries; /* retried and redispatched connections (BE only) */
long long failed_resp; /* failed responses (BE only) */
long long failed_conns; /* failed connect() attempts (BE only) */
};
struct be_counters_shared {
COUNTERS_SHARED;
struct be_counters_shared_tg *tg[MAX_TGROUPS];
};
/* counters used by servers and backends */
struct be_counters {
struct be_counters_shared shared; /* shared counters */
unsigned int conn_max; /* max # of active sessions */
long long cum_sess; /* cumulated number of accepted connections */
long long cum_lbconn; /* cumulated number of sessions processed by load balancing (BE only) */
unsigned int cps_max; /* maximum of new connections received per second */
unsigned int sps_max; /* maximum of new connections accepted per second (sessions) */
unsigned int nbpend_max; /* max number of pending connections with no server assigned yet */
unsigned int cur_sess_max; /* max number of currently active sessions */
long long bytes_in; /* number of bytes transferred from the client to the server */
long long bytes_out; /* number of bytes transferred from the server to the client */
/* compression counters, index 0 for requests, 1 for responses */
long long comp_in[2]; /* input bytes fed to the compressor */
long long comp_out[2]; /* output bytes emitted by the compressor */
long long comp_byp[2]; /* input bytes that bypassed the compressor (cpu/ram/bw limitation) */
long long denied_req; /* blocked requests because of security concerns */
long long denied_resp; /* blocked responses because of security concerns */
long long connect; /* number of connection establishment attempts */
long long reuse; /* number of connection reuses */
long long failed_conns; /* failed connect() attempts (BE only) */
long long failed_resp; /* failed responses (BE only) */
long long cli_aborts; /* aborted responses during DATA phase caused by the client */
long long srv_aborts; /* aborted responses during DATA phase caused by the server */
long long retries; /* retried and redispatched connections (BE only) */
long long redispatches; /* retried and redispatched connections (BE only) */
long long failed_rewrites; /* failed rewrites (warning) */
long long internal_errors; /* internal processing errors */
long long failed_checks, failed_hana; /* failed health checks and health analyses for servers */
long long down_trans; /* up->down transitions */
struct freq_ctr _sess_per_sec; /* sessions per second on this frontend, used to compute sps_max (internal use only) */
unsigned int q_time, c_time, d_time, t_time; /* sums of conn_time, queue_time, data_time, total_time */
unsigned int qtime_max, ctime_max, dtime_max, ttime_max; /* maximum of conn_time, queue_time, data_time, total_time observed */
union {
struct {
long long cum_req; /* cumulated number of processed HTTP requests */
long long comp_rsp; /* number of compressed responses */
unsigned int rps_max; /* maximum of new HTTP requests per second observed */
long long rsp[6]; /* http response codes */
long long cache_lookups;/* cache lookups */
long long cache_hits; /* cache hits */
} http;
} p; /* protocol-specific stats */
struct freq_ctr sess_per_sec; /* sessions per second on this server */
unsigned long last_sess; /* last session time */
unsigned long last_change; /* last time the state was changed */
};
#endif /* _HAPROXY_COUNTERS_T_H */

include/haproxy/counters.h Normal file
View File

@ -0,0 +1,102 @@
/*
* include/haproxy/counters.h
* objects counters management
*
* Copyright 2025 HAProxy Technologies
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation, version 2.1
* exclusively.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _HAPROXY_COUNTERS_H
# define _HAPROXY_COUNTERS_H
#include <stddef.h>
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h>
int counters_fe_shared_prepare(struct fe_counters_shared *counters, const struct guid_node *guid);
int counters_be_shared_prepare(struct be_counters_shared *counters, const struct guid_node *guid);
void counters_fe_shared_drop(struct fe_counters_shared *counters);
void counters_be_shared_drop(struct be_counters_shared *counters);
/* time oriented helper: get last time (relative to current time) on a given
* <scounter> array, for <elem> member (one member per thread group) which is
* assumed to be unsigned long type.
*
* wrapping is handled by taking the lowest diff between now and last counter.
* But since wrapping is expected once every ~136 years (starting 01/01/1970),
* perhaps it's not worth the extra CPU cost.. let's see.
*/
#define COUNTERS_SHARED_LAST_OFFSET(scounters, type, offset) \
({ \
unsigned long last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
unsigned long now_seconds = ns_to_sec(now_ns); \
int it; \
\
for (it = 1; it < global.nbtgroups; it++) { \
unsigned long cur = HA_ATOMIC_LOAD((type *)((char *)scounters[it] + offset));\
if ((now_seconds - cur) < (now_seconds - last)) \
last = cur; \
} \
last; \
})
#define COUNTERS_SHARED_LAST(scounters, elem) \
({ \
int offset = offsetof(typeof(**scounters), elem); \
unsigned long last = COUNTERS_SHARED_LAST_OFFSET(scounters, typeof(scounters[0]->elem), offset); \
\
last; \
})
/* generic unsigned integer addition for all <elem> members from
* <scounters> array (one member per thread group)
* <rfunc> is function taking pointer as parameter to read from the memory
* location pointed to scounters[it].elem
*/
#define COUNTERS_SHARED_TOTAL_OFFSET(scounters, type, offset, rfunc) \
({ \
uint64_t __ret = 0; \
int it; \
\
for (it = 0; it < global.nbtgroups; it++) \
__ret += rfunc((type *)((char *)scounters[it] + offset)); \
__ret; \
})
#define COUNTERS_SHARED_TOTAL(scounters, elem, rfunc) \
({ \
int offset = offsetof(typeof(**scounters), elem); \
uint64_t __ret = COUNTERS_SHARED_TOTAL_OFFSET(scounters, typeof(scounters[0]->elem), offset, rfunc);\
\
__ret; \
})
/* same as COUNTERS_SHARED_TOTAL but with <rfunc> taking 2 extras arguments:
* <arg1> and <arg2>
*/
#define COUNTERS_SHARED_TOTAL_ARG2(scounters, elem, rfunc, arg1, arg2) \
({ \
uint64_t __ret = 0; \
int it; \
\
for (it = 0; it < global.nbtgroups; it++) \
__ret += rfunc(&scounters[it]->elem, arg1, arg2); \
__ret; \
})
#endif /* _HAPROXY_COUNTERS_H */
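
As a hedged example of how these aggregation macros could be consumed (the helper names are illustrative; HA_ATOMIC_LOAD is passed as <rfunc> so each thread group's slot is read atomically):

/* Sum the cumulated accepted connections over all thread groups. */
static inline uint64_t fe_shared_total_cum_sess(struct fe_counters_shared *shared)
{
	return COUNTERS_SHARED_TOTAL(shared->tg, cum_sess, HA_ATOMIC_LOAD);
}

/* Most recent "last session" timestamp across thread groups. */
static inline unsigned long be_shared_last_sess(struct be_counters_shared *shared)
{
	return COUNTERS_SHARED_LAST(shared->tg, last_sess);
}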

View File

@ -2,6 +2,7 @@
#define _HAPROXY_CPU_TOPO_H
#include <haproxy/api.h>
#include <haproxy/chunk.h>
#include <haproxy/cpuset-t.h>
#include <haproxy/cpu_topo-t.h>
@ -55,7 +56,12 @@ int cpu_map_configured(void);
/* Dump the CPU topology <topo> for up to cpu_topo_maxcpus CPUs for
* debugging purposes. Offline CPUs are skipped.
*/
void cpu_dump_topology(const struct ha_cpu_topo *topo);
void cpu_topo_debug(const struct ha_cpu_topo *topo);
/* Dump the summary of CPU topology <topo>, i.e. clusters info and thread-cpu
* bindings.
*/
void cpu_topo_dump_summary(const struct ha_cpu_topo *topo, struct buffer *trash);
/* re-order a CPU topology array by locality to help form groups. */
void cpu_reorder_by_locality(struct ha_cpu_topo *topo, int entries);

View File

@ -44,7 +44,7 @@
* doesn't engage us too far.
*/
#ifndef MAX_TGROUPS
#define MAX_TGROUPS 16
#define MAX_TGROUPS 32
#endif
#define MAX_THREADS_PER_GROUP __WORDSIZE
@ -53,7 +53,7 @@
* long bits if more tgroups are enabled.
*/
#ifndef MAX_THREADS
#define MAX_THREADS ((((MAX_TGROUPS) > 1) ? 4 : 1) * (MAX_THREADS_PER_GROUP))
#define MAX_THREADS ((((MAX_TGROUPS) > 1) ? 16 : 1) * (MAX_THREADS_PER_GROUP))
#endif
#endif // USE_THREAD
@ -115,6 +115,10 @@
// via standard input.
#define MAX_CFG_SIZE 10485760
// may be handy for some system config files, where we just need to find
// some specific values (read with fgets)
#define MAX_LINES_TO_READ 32
// max # args on a configuration line
#define MAX_LINE_ARGS 64
@ -349,6 +353,11 @@
#define SRV_CHK_INTER_THRES 1000
#endif
/* INET6 connectivity caching interval (in ms) */
#ifndef INET6_CONNECTIVITY_CACHE_TIME
#define INET6_CONNECTIVITY_CACHE_TIME 30000
#endif
/* Specifies the string used to report the version and release date on the
* statistics page. May be defined to the empty string ("") to permanently
* disable the feature.

View File

@ -499,6 +499,7 @@ static inline long fd_clr_running(int fd)
static inline void fd_insert(int fd, void *owner, void (*iocb)(int fd), int tgid, unsigned long thread_mask)
{
extern void sock_conn_iocb(int);
struct tgroup_info *tginfo = &ha_tgroup_info[tgid - 1];
int newstate;
/* conn_fd_handler should support edge-triggered FDs */
@ -528,7 +529,7 @@ static inline void fd_insert(int fd, void *owner, void (*iocb)(int fd), int tgid
BUG_ON(fdtab[fd].state != 0);
BUG_ON(tgid < 1 || tgid > MAX_TGROUPS);
thread_mask &= tg->threads_enabled;
thread_mask &= tginfo->threads_enabled;
BUG_ON(thread_mask == 0);
fd_claim_tgid(fd, tgid);

View File

@ -197,6 +197,7 @@ struct global {
int pattern_cache; /* max number of entries in the pattern cache. */
int sslcachesize; /* SSL cache size in session, defaults to 20000 */
int comp_maxlevel; /* max HTTP compression level */
uint glitch_kill_maxidle; /* have glitches kill only below this level of idle */
int pool_low_ratio; /* max ratio of FDs used before we stop using new idle connections */
int pool_high_ratio; /* max ratio of FDs used before we start killing idle connections when creating new connections */
int pool_low_count; /* max number of opened fd before we stop using new idle connections */

View File

@ -65,6 +65,7 @@ int h1_format_htx_reqline(const struct htx_sl *sl, struct buffer *chk);
int h1_format_htx_stline(const struct htx_sl *sl, struct buffer *chk);
int h1_format_htx_hdr(const struct ist n, const struct ist v, struct buffer *chk);
int h1_format_htx_data(const struct ist data, struct buffer *chk, int chunked);
int h1_format_htx_msg(const struct htx *htx, struct buffer *outbuf);
#endif /* _HAPROXY_H1_HTX_H */

View File

@ -72,8 +72,8 @@ struct stream;
#define HLUA_NOYIELD 0x00000020
#define HLUA_BUSY 0x00000040
#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_HTTP 0x02
#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_CHANNELS_DATA 0x02
/* HLUA TXN flags */
#define HLUA_TXN_NOTERM 0x00000001

View File

@ -232,6 +232,52 @@ static inline int http_path_has_forbidden_char(const struct ist ist, const char
return 0;
}
/* Checks whether the :authority pseudo header contains dangerous chars that
* might affect its reassembly. We want to catch anything below 0x21, above
* 0x7e, as well as '@', '[', ']', '/', '?', '#', '\', CR, LF, NUL. Then we
* fall back to the slow path and decide. Brackets are used for IP-literals and
* deserve a special case that is better handled in the slow path. The function
* returns 0 if no forbidden char is present, non-zero otherwise.
*/
static inline int http_authority_has_forbidden_char(const struct ist ist)
{
size_t ofs, len = istlen(ist);
const char *p = istptr(ist);
int brackets = 0;
uchar c;
/* Many attempts with various methods have shown that moderately recent
* compilers (gcc >= 9, clang >= 13) will arrange the code below as an
* evaluation tree that remains efficient at -O2 and above (~1.2ns per
* char). The immediate next efficient one is the bitmap from 64-bit
* registers but it's extremely sensitive to code arrangements and
* optimization.
*/
for (ofs = 0; ofs < len; ofs++) {
c = p[ofs];
if (unlikely(c < 0x21 || c > 0x7e ||
c == '#' || c == '/' || c == '?' || c == '@' ||
c == '[' || c == '\\' || c == ']')) {
/* all of them must be rejected, except '[' which may
* only appear at the beginning, and ']' which may
* only appear at the end or before a colon.
*/
if ((c == '[' && ofs == 0) ||
(c == ']' && (ofs == len - 1 || p[ofs + 1] == ':'))) {
/* that's an IP-literal (see RFC3986#3.2), it's
* OK for now.
*/
brackets ^= 1;
} else {
return 1;
}
}
}
/* there must be no opening bracket left nor lone closing one */
return brackets;
}
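A hedged caller-side sketch showing how the check might be used (the wrapper name is illustrative):

/* e.g. "example.com:8443" and "[2001:db8::1]:443" pass, while
 * "user@example.com" or an unbalanced "[2001:db8::1" are refused.
 */
static inline int authority_is_acceptable(const struct ist auth)
{
	return istlen(auth) && !http_authority_has_forbidden_char(auth);
}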
/* Checks status code array <array> for the presence of status code <status>.
* Returns non-zero if the code is present, zero otherwise. Any status code is
* permitted.

View File

@ -32,6 +32,7 @@ struct httpclient {
int timeout_server; /* server timeout in ms */
void *caller; /* ptr of the caller */
unsigned int flags; /* other flags */
unsigned int options; /* options */
struct proxy *px; /* proxy for special cases */
struct server *srv_raw; /* server for clear connections */
#ifdef USE_OPENSSL
@ -42,11 +43,16 @@ struct httpclient {
/* Action (FA) to do */
#define HTTPCLIENT_FA_STOP 0x00000001 /* stops the httpclient at the next IO handler call */
#define HTTPCLIENT_FA_AUTOKILL 0x00000002 /* sets the applet to destroy the httpclient struct itself */
#define HTTPCLIENT_FA_DRAIN_REQ 0x00000004 /* drains the request */
/* status (FS) */
#define HTTPCLIENT_FS_STARTED 0x00010000 /* the httpclient was started */
#define HTTPCLIENT_FS_ENDED 0x00020000 /* the httpclient is stopped */
/* options */
#define HTTPCLIENT_O_HTTPPROXY 0x00000001 /* the request must use an absolute URI */
#define HTTPCLIENT_O_RES_HTX 0x00000002 /* response is stored in HTX */
/* States of the HTTP Client Appctx */
enum {
HTTPCLIENT_S_REQ = 0,
@ -59,12 +65,4 @@ enum {
#define HTTPCLIENT_USERAGENT "HAProxy"
/* What kind of data we need to read */
#define HC_F_RES_STLINE 0x01
#define HC_F_RES_HDR 0x02
#define HC_F_RES_BODY 0x04
#define HC_F_RES_END 0x08
#define HC_F_HTTPPROXY 0x10
#endif /* ! _HAPROXY_HTTCLIENT__T_H */

View File

@ -64,8 +64,17 @@ enum jwt_elt {
JWT_ELT_MAX
};
enum jwt_entry_type {
JWT_ENTRY_DFLT,
JWT_ENTRY_STORE,
JWT_ENTRY_PKEY,
JWT_ENTRY_INVALID, /* already tried looking into ckch_store tree (unsuccessful) */
};
struct jwt_cert_tree_entry {
EVP_PKEY *pkey;
EVP_PKEY *pubkey;
struct ckch_store *ckch_store;
int type; /* jwt_entry_type */
struct ebmb_node node;
char path[VAR_ARRAY];
};
@ -78,7 +87,8 @@ enum jwt_vrfy_status {
JWT_VRFY_UNMANAGED_ALG = -2,
JWT_VRFY_INVALID_TOKEN = -3,
JWT_VRFY_OUT_OF_MEMORY = -4,
JWT_VRFY_UNKNOWN_CERT = -5
JWT_VRFY_UNKNOWN_CERT = -5,
JWT_VRFY_INTERNAL_ERR = -6
};
#endif /* USE_OPENSSL */

View File

@ -28,10 +28,13 @@
#ifdef USE_OPENSSL
enum jwt_alg jwt_parse_alg(const char *alg_str, unsigned int alg_len);
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int *item_num);
int jwt_tree_load_cert(char *path, int pathlen, char **err);
int jwt_tree_load_cert(char *path, int pathlen, const char *file, int line, char **err);
enum jwt_vrfy_status jwt_verify(const struct buffer *token, const struct buffer *alg,
const struct buffer *key);
void jwt_replace_ckch_store(struct ckch_store *old_ckchs, struct ckch_store *new_ckchs);
#endif /* USE_OPENSSL */
#endif /* _HAPROXY_JWT_H */

View File

@ -97,7 +97,7 @@
* since it's used only once.
* Example: LIST_ELEM(cur_node->args.next, struct node *, args)
*/
#define LIST_ELEM(lh, pt, el) ((pt)(((const char *)(lh)) - ((size_t)&((pt)NULL)->el)))
#define LIST_ELEM(lh, pt, el) ((pt)(((const char *)(lh)) - offsetof(typeof(*(pt)NULL), el)))
/* checks if the list head <lh> is empty or not */
#define LIST_ISEMPTY(lh) ((lh)->n == (lh))
@ -284,10 +284,11 @@ static __inline void watcher_attach(struct watcher *w, void *target)
MT_LIST_APPEND(list, &w->el);
}
/* Untracks target via <w> watcher. Invalid if <w> is not attached first. */
/* Untracks target via <w> watcher. Does nothing if <w> is not attached */
static __inline void watcher_detach(struct watcher *w)
{
BUG_ON_HOT(!MT_LIST_INLIST(&w->el));
if (!MT_LIST_INLIST(&w->el))
return;
*w->pptr = NULL;
MT_LIST_DELETE(&w->el);
}
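Since watcher_detach() is now tolerant of an already-detached watcher, a cleanup path may call it unconditionally; a minimal hedged sketch (the destructor name is hypothetical):

static inline void obj_watcher_release(struct watcher *w)
{
	/* safe whether or not <w> is still attached to a target list */
	watcher_detach(w);
}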

View File

@ -121,6 +121,7 @@ enum li_status {
#define BC_SSL_O_NONE 0x0000
#define BC_SSL_O_NO_TLS_TICKETS 0x0100 /* disable session resumption tickets */
#define BC_SSL_O_PREF_CLIE_CIPH 0x0200 /* prefer client ciphers */
#define BC_SSL_O_STRICT_SNI 0x0400 /* refuse negotiation if sni doesn't match a certificate */
#endif
struct tls_version_filter {
@ -169,7 +170,6 @@ struct bind_conf {
unsigned long long ca_ignerr_bitfield[IGNERR_BF_SIZE]; /* ignored verify errors in handshake if depth > 0 */
unsigned long long crt_ignerr_bitfield[IGNERR_BF_SIZE]; /* ignored verify errors in handshake if depth == 0 */
void *initial_ctx; /* SSL context for initial negotiation */
int strict_sni; /* refuse negotiation if sni doesn't match a certificate */
int ssl_options; /* ssl options */
struct eb_root sni_ctx; /* sni_ctx tree of all known certs full-names sorted by name */
struct eb_root sni_w_ctx; /* sni_ctx tree of all known certs wildcards sorted by name */
@ -193,6 +193,7 @@ struct bind_conf {
unsigned int analysers; /* bitmap of required protocol analysers */
int maxseg; /* for TCP, advertised MSS */
int tcp_ut; /* for TCP, user timeout */
char *tcp_md5sig; /* TCP MD5 signature password (RFC2385) */
int idle_ping; /* MUX idle-ping interval in ms */
int maxaccept; /* if set, max number of connections accepted at once (-1 when disabled) */
unsigned int backlog; /* if set, listen backlog */
@ -244,7 +245,7 @@ struct listener {
struct fe_counters *counters; /* statistics counters */
struct mt_list wait_queue; /* link element to make the listener wait for something (LI_LIMITED) */
char *name; /* listener's name */
char *label; /* listener's label */
unsigned int thr_conn[MAX_THREADS_PER_GROUP]; /* number of connections per thread for the group */
struct list by_fe; /* chaining in frontend's list of listeners */

View File

@ -71,20 +71,5 @@ struct mailers {
} timeout;
};
struct email_alert {
struct list list;
struct tcpcheck_rules rules;
struct server *srv;
};
struct email_alertq {
struct list email_alerts;
struct check check; /* Email alerts are implemented using existing check
* code even though they are not checks. This structure
* is passed as a parameter to the check code.
* Each check corresponds to a mailer */
__decl_thread(HA_SPINLOCK_T lock);
};
#endif /* _HAPROXY_MAILERS_T_H */

View File

@ -31,13 +31,10 @@
#include <haproxy/proxy-t.h>
#include <haproxy/server-t.h>
extern int mailers_used_from_lua;
extern struct mailers *mailers;
extern int send_email_disabled;
int init_email_alert(struct mailers *mailers, struct proxy *p, char **err);
void free_email_alert(struct proxy *p);
void send_email_alert(struct server *s, int priority, const char *format, ...)
__attribute__ ((format(printf, 3, 4)));
#endif /* _HAPROXY_MAILERS_H */

View File

@ -17,6 +17,8 @@
#include <haproxy/quic_frame-t.h>
#include <haproxy/quic_pacing-t.h>
#include <haproxy/quic_stream-t.h>
#include <haproxy/quic_utils-t.h>
#include <haproxy/session-t.h>
#include <haproxy/stconn-t.h>
#include <haproxy/task-t.h>
#include <haproxy/time-t.h>
@ -27,9 +29,6 @@ enum qcs_type {
QCS_SRV_BIDI,
QCS_CLT_UNI,
QCS_SRV_UNI,
/* Must be the last one */
QCS_MAX_TYPES
};
enum qcc_app_st {
@ -68,6 +67,9 @@ struct qcc {
/* flow-control fields set by the peer which we must respect. */
struct {
uint64_t ms_uni; /* max sub-ID of uni stream allowed by the peer */
uint64_t ms_bidi; /* max sub-ID of bidi stream allowed by the peer */
uint64_t md; /* connection flow control limit updated on MAX_DATA frames reception */
uint64_t msd_bidi_l; /* initial max-stream-data from peer on local bidi streams */
uint64_t msd_bidi_r; /* initial max-stream-data from peer on remote bidi streams */
@ -145,6 +147,7 @@ struct qc_stream_rxbuf {
struct qcs {
struct qcc *qcc;
struct session *sess; /* only set for backend conns */
struct sedesc *sd;
uint32_t flags; /* QC_SF_* */
enum qcs_state st; /* QC_SS_* state */
@ -157,6 +160,7 @@ struct qcs {
struct buffer app_buf; /* receive buffer used by stconn layer */
uint64_t msd; /* current max-stream-data limit to enforce */
uint64_t msd_base; /* max-stream-data previous to latest update */
struct bdata_ctr data; /* data utilization counter. Note that <tot> is now used for now as accounting may be difficult with ncbuf. */
} rx;
struct {
struct quic_fctl fc; /* stream flow control applied on sending */
@ -203,11 +207,11 @@ struct qcc_app_ops {
/* Initialize <qcs> stream app context or leave it to NULL if rejected. */
int (*attach)(struct qcs *qcs, void *conn_ctx);
/* Convert received HTTP payload to HTX. */
/* Convert received HTTP payload to HTX. Returns amount of decoded bytes from <b> or a negative error code. */
ssize_t (*rcv_buf)(struct qcs *qcs, struct buffer *b, int fin);
/* Convert HTX to HTTP payload for sending. */
size_t (*snd_buf)(struct qcs *qcs, struct buffer *b, size_t count);
size_t (*snd_buf)(struct qcs *qcs, struct buffer *b, size_t count, char *fin);
/* Negotiate and commit fast-forward data from opposite MUX. */
size_t (*nego_ff)(struct qcs *qcs, size_t count);
@ -233,7 +237,7 @@ struct qcc_app_ops {
#define QC_CF_ERRL 0x00000001 /* fatal error detected locally, connection should be closed soon */
#define QC_CF_ERRL_DONE 0x00000002 /* local error properly handled, connection can be released */
/* unused 0x00000004 */
#define QC_CF_IS_BACK 0x00000004 /* backend side */
#define QC_CF_CONN_FULL 0x00000008 /* no stream buffers available on connection */
/* unused 0x00000010 */
#define QC_CF_ERR_CONN 0x00000020 /* fatal error reported by transport layer */
@ -273,6 +277,7 @@ static forceinline char *qcc_show_flags(char *buf, size_t len, const char *delim
#define QC_SF_TO_STOP_SENDING 0x00000200 /* a STOP_SENDING must be sent */
#define QC_SF_UNKNOWN_PL_LENGTH 0x00000400 /* HTX EOM may be missing from the stream layer */
#define QC_SF_RECV_RESET 0x00000800 /* a RESET_STREAM was received */
#define QC_SF_EOI_SUSPENDED 0x00001000 /* EOI must not be reported even if HTX EOM is present - useful when transferring HTTP interim responses */
/* This function is used to report flags in debugging tools. Please reflect
* below any single-bit flag addition above in the same order via the

View File

@ -19,6 +19,7 @@
void qcc_set_error(struct qcc *qcc, int err, int app);
int _qcc_report_glitch(struct qcc *qcc, int inc);
int qcc_fctl_avail_streams(const struct qcc *qcc, int bidi);
struct qcs *qcc_init_stream_local(struct qcc *qcc, int bidi);
void qcs_send_metadata(struct qcs *qcs);
int qcs_attach_sc(struct qcs *qcs, struct buffer *buf, char fin);
@ -43,6 +44,7 @@ int qcc_recv(struct qcc *qcc, uint64_t id, uint64_t len, uint64_t offset,
char fin, char *data);
int qcc_recv_max_data(struct qcc *qcc, uint64_t max);
int qcc_recv_max_stream_data(struct qcc *qcc, uint64_t id, uint64_t max);
int qcc_recv_max_streams(struct qcc *qcc, uint64_t max, int bidi);
int qcc_recv_reset_stream(struct qcc *qcc, uint64_t id, uint64_t err, uint64_t final_size);
int qcc_recv_stop_sending(struct qcc *qcc, uint64_t id, uint64_t err);
@ -51,15 +53,7 @@ static inline int qmux_stream_rx_bufsz(void)
return global.tune.bufsize - NCB_RESERVED_SZ;
}
/* Bit shift to get the stream sub ID for internal use which is obtained
* shifting the stream IDs by this value, knowing that the
* QCS_ID_TYPE_SHIFT least significant bits identify the stream ID
* types (client initiated bidirectional, server initiated bidirectional,
* client initiated unidirectional, server initiated unidirectional).
* Note that there is no reference to such stream sub IDs in the RFC.
*/
#define QCS_ID_TYPE_MASK 0x3
#define QCS_ID_TYPE_SHIFT 2
/* The least significant bit of a stream ID is set for a server initiated stream */
#define QCS_ID_SRV_INTIATOR_BIT 0x1
/* This bit is set for unidirectional streams */
@ -82,16 +76,6 @@ static inline int quic_stream_is_remote(struct qcc *qcc, uint64_t id)
return !quic_stream_is_local(qcc, id);
}
static inline int quic_stream_is_uni(uint64_t id)
{
return id & QCS_ID_DIR_BIT;
}
static inline int quic_stream_is_bidi(uint64_t id)
{
return !quic_stream_is_uni(id);
}
static inline char *qcs_st_to_str(enum qcs_state st)
{
switch (st) {

View File

@ -87,10 +87,9 @@ static forceinline char *spop_strm_show_flags(char *buf, size_t len, const char
/* SPOP connection state (spop_conn->state) */
enum spop_conn_st {
SPOP_CS_HA_HELLO = 0, /* init done, waiting for sending HELLO frame */
SPOP_CS_AGENT_HELLO, /* HELLO frame sent, waiting for agent HELLO frame to define the connection settings */
SPOP_CS_FRAME_H, /* HELLO handshake finished, waiting for a frame header */
SPOP_CS_FRAME_P, /* Frame header received, waiting for a frame data */
SPOP_CS_HA_HELLO = 0, /* init done, waiting for sending HELLO frame */
SPOP_CS_AGENT_HELLO, /* HELLO frame sent, waiting for agent HELLO frame to define the connection settings */
SPOP_CS_RUNNING, /* HELLO handshake finished, exchange NOTIFY/ACK frames */
SPOP_CS_ERROR, /* send DISCONNECT frame to be able to close the connection */
SPOP_CS_CLOSING, /* DISCONNECT frame sent, waiting for the agent DISCONNECT frame before closing */
SPOP_CS_CLOSED, /* Agent DISCONNECT frame received and close the connection ASAP */
@ -103,8 +102,7 @@ static inline const char *spop_conn_st_to_str(enum spop_conn_st st)
switch (st) {
case SPOP_CS_HA_HELLO : return "HHL";
case SPOP_CS_AGENT_HELLO: return "AHL";
case SPOP_CS_FRAME_H : return "FRH";
case SPOP_CS_FRAME_P : return "FRP";
case SPOP_CS_RUNNING : return "RUN";
case SPOP_CS_ERROR : return "ERR";
case SPOP_CS_CLOSING : return "CLI";
case SPOP_CS_CLOSED : return "CLO";

View File

@ -21,7 +21,7 @@
#define PROC_O_TYPE_MASTER 0x00000001
#define PROC_O_TYPE_WORKER 0x00000002
#define PROC_O_TYPE_PROG 0x00000004
/* 0x00000004 unused */
/* 0x00000008 unused */
#define PROC_O_LEAVING 0x00000010 /* this process should be leaving */
/* state of the newly forked worker process, which hasn't sent yet its READY message to master */

View File

@ -42,8 +42,6 @@ void mworker_cleanlisteners(void);
int mworker_child_nb(void);
int mworker_ext_launch_all(void);
void mworker_kill_max_reloads(int sig);
struct mworker_proc *mworker_proc_new();

View File

@ -46,8 +46,35 @@
#ifdef USE_QUIC_OPENSSL_COMPAT
#include <haproxy/quic_openssl_compat.h>
#else
#define HAVE_OPENSSL_QUIC_CLIENT_SUPPORT
#if defined(OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND)
/* This macro is defined by the new OpenSSL 3.5.0 QUIC TLS API and it is not
* defined by quictls.
*/
#if defined(USE_QUIC) && (OPENSSL_VERSION_NUMBER < 0x30500010L)
#error "OpenSSL 3.5 QUIC API should only be used with OpenSSL 3.5.1 version and newer"
#endif
#define HAVE_OPENSSL_QUIC
#define SSL_set_quic_transport_params SSL_set_quic_tls_transport_params
#define SSL_set_quic_early_data_enabled SSL_set_quic_tls_early_data_enabled
#define SSL_quic_read_level(arg) -1
enum ssl_encryption_level_t {
ssl_encryption_initial = 0,
ssl_encryption_early_data,
ssl_encryption_handshake,
ssl_encryption_application
};
#else
/* QUIC TLS API */
#define HAVE_OPENSSL_QUICTLS
#endif
#endif /* USE_QUIC_OPENSSL_COMPAT */
#if defined(OPENSSL_IS_AWSLC)
#define OPENSSL_NO_DH
#define SSL_CTX_set1_sigalgs_list SSL_CTX_set1_sigalgs_list

View File

@ -33,6 +33,9 @@
extern const char *const pat_match_names[PAT_MATCH_NUM];
extern int const pat_match_types[PAT_MATCH_NUM];
extern unsigned long long patterns_added;
extern unsigned long long patterns_freed;
extern int (*const pat_parse_fcts[PAT_MATCH_NUM])(const char *, struct pattern *, int, char **);
extern int (*const pat_index_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pattern *, char **);
extern void (*const pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);

View File

@ -40,10 +40,5 @@ int peers_register_table(struct peers *, struct stktable *table);
void peers_setup_frontend(struct proxy *fe);
void peers_register_keywords(struct peers_kw_list *pkwl);
static inline enum obj_type *peer_session_target(struct peer *p, struct stream *s)
{
return &p->srv->obj_type;
}
#endif /* _HAPROXY_PEERS_H */

View File

@ -27,6 +27,7 @@
#define MEM_F_SHARED 0x1
#define MEM_F_EXACT 0x2
#define MEM_F_UAF 0x4
/* A special pointer for the pool's free_list that indicates someone is
* currently manipulating it. Serves as a short-lived lock.
@ -51,6 +52,7 @@
#define POOL_DBG_TAG 0x00000080 // place a tag at the end of the area
#define POOL_DBG_POISON 0x00000100 // poison memory area on pool_alloc()
#define POOL_DBG_UAF 0x00000200 // enable use-after-free protection
#define POOL_DBG_BACKUP 0x00000400 // backup the object contents on free()
/* This is the head of a thread-local cache */

View File

@ -311,6 +311,10 @@ struct proxy {
char flags; /* bit field PR_FL_* */
enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */
char cap; /* supported capabilities (PR_CAP_*) */
unsigned long last_change; /* internal use only: last time the proxy state was changed */
struct list global_list; /* list member for global proxy list */
unsigned int maxconn; /* max # of active streams on the frontend */
int options; /* PR_O_REDISP, PR_O_TRANSP, ... */
@ -348,7 +352,7 @@ struct proxy {
#ifdef USE_QUIC
struct list quic_init_rules; /* quic-initial rules */
#endif
struct server *srv, defsrv; /* known servers; default server configuration */
struct server *srv, *defsrv; /* known servers; default server configuration */
struct lbprm lbprm; /* load-balancing parameters */
int srv_act, srv_bck; /* # of servers eligible for LB (UP|!checked) AND (enabled+weight!=0) */
int served; /* # of active sessions currently being served */

View File

@ -33,6 +33,7 @@
#include <haproxy/thread.h>
extern struct proxy *proxies_list;
extern struct list proxies;
extern struct eb_root used_proxy_id; /* list of proxy IDs in use */
extern unsigned int error_snapshot_id; /* global ID assigned to each error then incremented */
extern struct eb_root proxy_by_name; /* tree of proxies sorted by name */
@ -60,7 +61,6 @@ void proxy_store_name(struct proxy *px);
struct proxy *proxy_find_by_id(int id, int cap, int table);
struct proxy *proxy_find_by_name(const char *name, int cap, int table);
struct proxy *proxy_find_best_match(int cap, const char *name, int id, int *diff);
struct server *findserver(const struct proxy *px, const char *name);
int proxy_cfg_ensure_no_http(struct proxy *curproxy);
int proxy_cfg_ensure_no_log(struct proxy *curproxy);
void init_new_proxy(struct proxy *p);
@ -135,22 +135,24 @@ static inline void proxy_reset_timeouts(struct proxy *proxy)
/* increase the number of cumulated connections received on the designated frontend */
static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
{
_HA_ATOMIC_INC(&fe->fe_counters.cum_conn);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->cum_conn);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
update_freq_ctr(&fe->fe_counters.conn_per_sec, 1));
update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
}
/* increase the number of cumulated connections accepted by the designated frontend */
static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
{
_HA_ATOMIC_INC(&fe->fe_counters.cum_sess);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->cum_sess);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
update_freq_ctr(&fe->fe_counters.sess_per_sec, 1));
update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
}
/* increase the number of cumulated HTTP sessions on the designated frontend.
@ -160,20 +162,21 @@ static inline void proxy_inc_fe_cum_sess_ver_ctr(struct listener *l, struct prox
unsigned int http_ver)
{
if (http_ver == 0 ||
http_ver > sizeof(fe->fe_counters.cum_sess_ver) / sizeof(*fe->fe_counters.cum_sess_ver))
http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver))
return;
_HA_ATOMIC_INC(&fe->fe_counters.cum_sess_ver[http_ver - 1]);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->cum_sess_ver[http_ver - 1]);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
}
/* increase the number of cumulated streams on the designated backend */
static inline void proxy_inc_be_ctr(struct proxy *be)
{
_HA_ATOMIC_INC(&be->be_counters.cum_sess);
_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
update_freq_ctr(&be->be_counters.sess_per_sec, 1));
update_freq_ctr(&be->be_counters._sess_per_sec, 1));
}
/* increase the number of cumulated requests on the designated frontend.
@ -183,14 +186,15 @@ static inline void proxy_inc_be_ctr(struct proxy *be)
static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
unsigned int http_ver)
{
if (http_ver >= sizeof(fe->fe_counters.p.http.cum_req) / sizeof(*fe->fe_counters.p.http.cum_req))
if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req))
return;
_HA_ATOMIC_INC(&fe->fe_counters.p.http.cum_req[http_ver]);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->p.http.cum_req[http_ver]);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
update_freq_ctr(&fe->fe_counters.req_per_sec, 1));
update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
}
/* Returns non-zero if the proxy is configured to retry a request if we got that status, 0 otherwise */

View File

@ -1,12 +1,17 @@
#ifndef QPACK_ENC_H_
#define QPACK_ENC_H_
#include <haproxy/http-t.h>
#include <haproxy/istbuf.h>
struct buffer;
int qpack_encode_field_section_line(struct buffer *out);
int qpack_encode_int_status(struct buffer *out, unsigned int status);
int qpack_encode_method(struct buffer *out, enum http_meth_t meth, struct ist other);
int qpack_encode_scheme(struct buffer *out, const struct ist scheme);
int qpack_encode_path(struct buffer *out, const struct ist path);
int qpack_encode_auth(struct buffer *out, const struct ist auth);
int qpack_encode_header(struct buffer *out, const struct ist n, const struct ist v);
#endif /* QPACK_ENC_H_ */

View File

@ -28,22 +28,22 @@
#include <sys/socket.h>
#include <haproxy/cbuf-t.h>
#include <haproxy/list.h>
#include <haproxy/show_flags-t.h>
#include <import/ebtree-t.h>
#include <haproxy/api-t.h>
#include <haproxy/buf-t.h>
#include <haproxy/listener-t.h>
#include <haproxy/openssl-compat.h>
#include <haproxy/mux_quic-t.h>
#include <haproxy/quic_cid-t.h>
#include <haproxy/quic_cc-t.h>
#include <haproxy/quic_loss-t.h>
#include <haproxy/quic_frame-t.h>
#include <haproxy/quic_openssl_compat-t.h>
#include <haproxy/quic_stats-t.h>
#include <haproxy/quic_tls-t.h>
#include <haproxy/quic_tp-t.h>
#include <haproxy/task.h>
#include <import/ebtree-t.h>
#include <haproxy/show_flags-t.h>
#include <haproxy/ssl_sock-t.h>
#include <haproxy/task-t.h>
typedef unsigned long long ull;
@ -228,6 +228,9 @@ struct quic_version {
extern const struct quic_version quic_versions[];
extern const size_t quic_versions_nb;
extern const struct quic_version *preferred_version;
extern const struct quic_version *quic_version_draft_29;
extern const struct quic_version *quic_version_1;
extern const struct quic_version *quic_version_2;
/* unused: 0x01 */
/* Flag the packet number space as requiring an ACK frame to be sent. */
@ -248,7 +251,7 @@ extern const struct quic_version *preferred_version;
/* The maximum number of bytes of CRYPTO data in flight during handshakes. */
#define QUIC_CRYPTO_IN_FLIGHT_MAX 4096
/* Status of the connection/mux layer. This defines how to handle app data.
/* Status of the MUX layer. This defines how to handle app data.
*
* During a standard quic_conn lifetime it transitions like this :
* QC_MUX_NULL -> QC_MUX_READY -> QC_MUX_RELEASED
@ -279,6 +282,10 @@ struct quic_conn_cntrs {
long long streams_blocked_uni; /* total number of times STREAMS_BLOCKED_UNI frame was received */
};
struct connection;
struct qcc;
struct qcc_app_ops;
#define QUIC_CONN_COMMON \
struct { \
/* Connection owned socket FD. */ \
@ -301,6 +308,7 @@ struct quic_conn_cntrs {
/* Number of received bytes. */ \
uint64_t rx; \
} bytes; \
size_t max_udp_payload; \
/* First DCID used by client on its Initial packet. */ \
struct quic_cid odcid; \
/* DCID of our endpoint - not updated when a new DCID is used */ \
@ -311,7 +319,7 @@ struct quic_conn_cntrs {
* with a connection \
*/ \
struct eb_root *cids; \
struct listener *li; /* only valid for frontend connections */ \
enum obj_type *target; \
/* Idle timer task */ \
struct task *idle_timer_task; \
unsigned int idle_expire; \
@ -334,7 +342,10 @@ struct quic_conn {
int tps_tls_ext;
int state;
enum qc_mux_state mux_state; /* status of the connection/mux layer */
#ifdef USE_QUIC_OPENSSL_COMPAT
#ifdef HAVE_OPENSSL_QUIC
uint32_t prot_level;
#endif
#if defined(USE_QUIC_OPENSSL_COMPAT) || defined(HAVE_OPENSSL_QUIC)
unsigned char enc_params[QUIC_TP_MAX_ENCLEN]; /* encoded QUIC transport parameters */
size_t enc_params_len;
#endif
@ -345,6 +356,10 @@ struct quic_conn {
*/
uint64_t hash64;
/* QUIC client only retry token received from servers RETRY packet */
unsigned char *retry_token;
size_t retry_token_len;
/* Initial encryption level */
struct quic_enc_level *iel;
/* 0-RTT encryption level */
@ -383,10 +398,10 @@ struct quic_conn {
/* RX buffer */
struct buffer buf;
struct list pkt_list;
struct {
/* Number of open or closed streams */
uint64_t nb_streams;
} strms[QCS_MAX_TYPES];
/* first unhandled streams ID, set by MUX after release */
uint64_t stream_max_uni;
uint64_t stream_max_bidi;
} rx;
struct {
struct quic_tls_kp prv_rx;
@ -433,7 +448,7 @@ struct quic_conn_closed {
#define QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED (1U << 0)
#define QUIC_FL_CONN_SPIN_BIT (1U << 1) /* Spin bit set by remote peer */
#define QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS (1U << 2) /* HANDSHAKE_DONE must be sent */
#define QUIC_FL_CONN_LISTENER (1U << 3)
/* gap here */
#define QUIC_FL_CONN_ACCEPT_REGISTERED (1U << 4)
/* gap here */
#define QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ (1U << 6)
@ -449,6 +464,7 @@ struct quic_conn_closed {
#define QUIC_FL_CONN_HPKTNS_DCD (1U << 16) /* Handshake packet number space discarded */
#define QUIC_FL_CONN_PEER_VALIDATED_ADDR (1U << 17) /* Peer address is considered as validated for this connection. */
#define QUIC_FL_CONN_NO_TOKEN_RCVD (1U << 18) /* Client did not send any token */
#define QUIC_FL_CONN_SCID_RECEIVED (1U << 19) /* client only: first Initial received */
/* gap here */
#define QUIC_FL_CONN_TO_KILL (1U << 24) /* Unusable connection, to be killed */
#define QUIC_FL_CONN_TX_TP_RECEIVED (1U << 25) /* Peer transport parameters have been received (used for the transmitting part) */
@ -472,7 +488,6 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
_(QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED,
_(QUIC_FL_CONN_SPIN_BIT,
_(QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS,
_(QUIC_FL_CONN_LISTENER,
_(QUIC_FL_CONN_ACCEPT_REGISTERED,
_(QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ,
_(QUIC_FL_CONN_RETRANS_NEEDED,
@ -492,7 +507,7 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
_(QUIC_FL_CONN_EXP_TIMER,
_(QUIC_FL_CONN_CLOSING,
_(QUIC_FL_CONN_DRAINING,
_(QUIC_FL_CONN_IMMEDIATE_CLOSE))))))))))))))))))))))));
_(QUIC_FL_CONN_IMMEDIATE_CLOSE)))))))))))))))))))))));
/* epilogue */
_(~0U);
return buf;

View File

@ -69,7 +69,8 @@ struct quic_conn *qc_new_conn(const struct quic_version *qv, int ipv4,
struct quic_connection_id *conn_id,
struct sockaddr_storage *local_addr,
struct sockaddr_storage *peer_addr,
int server, int token, void *owner);
int token, void *owner,
struct connection *conn);
int quic_build_post_handshake_frames(struct quic_conn *qc);
const struct quic_version *qc_supported_version(uint32_t version);
int quic_peer_validated_addr(struct quic_conn *qc);
@ -81,11 +82,6 @@ void qc_check_close_on_released_mux(struct quic_conn *qc);
int quic_stateless_reset_token_cpy(unsigned char *pos, size_t len,
const unsigned char *salt, size_t saltlen);
static inline int qc_is_listener(struct quic_conn *qc)
{
return qc->flags & QUIC_FL_CONN_LISTENER;
}
/* Free the CIDs attached to <conn> QUIC connection. */
static inline void free_quic_conn_cids(struct quic_conn *conn)
{
@ -163,6 +159,22 @@ static inline void quic_free_ncbuf(struct ncbuf *ncbuf)
*ncbuf = NCBUF_NULL;
}
/* Return the address of the QUIC counters attached to the proxy of
* the owner of the connection whose object type address is <o>, for
* listeners and servers, or NULL for other object types.
*/
static inline void *qc_counters(enum obj_type *o, const struct stats_module *m)
{
struct proxy *p;
struct listener *l = objt_listener(o);
struct server *s = objt_server(o);
p = l ? l->bind_conf->frontend :
s ? s->proxy : NULL;
return p ? EXTRA_COUNTERS_GET(p->extra_counters_fe, m) : NULL;
}
void chunk_frm_appendf(struct buffer *buf, const struct quic_frame *frm);
void quic_set_connection_close(struct quic_conn *qc, const struct quic_err err);
void quic_set_tls_alert(struct quic_conn *qc, int alert);

View File

@ -13,6 +13,8 @@
#include <haproxy/quic_rx-t.h>
#include <haproxy/quic_sock-t.h>
extern struct pool_head *pool_head_quic_retry_token;
struct listener;
int quic_generate_retry_token(unsigned char *token, size_t len,
@ -28,6 +30,9 @@ int quic_retry_token_check(struct quic_rx_packet *pkt,
struct listener *l,
struct quic_conn *qc,
struct quic_cid *odcid);
int quic_retry_packet_check(struct quic_conn *qc, struct quic_rx_packet *pkt,
const unsigned char *beg, const unsigned char *end,
const unsigned char *pos, size_t *retry_token_len);
#endif /* USE_QUIC */
#endif /* _HAPROXY_QUIC_RETRY_H */

View File

@ -26,7 +26,7 @@
#include <haproxy/quic_rx-t.h>
int quic_dgram_parse(struct quic_dgram *dgram, struct quic_conn *from_qc,
struct listener *li);
enum obj_type *obj_type);
int qc_treat_rx_pkts(struct quic_conn *qc);
int qc_parse_hd_form(struct quic_rx_packet *pkt,
unsigned char **pos, const unsigned char *end);

View File

@ -31,7 +31,9 @@
#include <haproxy/api.h>
#include <haproxy/connection-t.h>
#include <haproxy/fd-t.h>
#include <haproxy/listener-t.h>
#include <haproxy/obj_type.h>
#include <haproxy/quic_conn-t.h>
#include <haproxy/quic_sock-t.h>
@ -77,7 +79,8 @@ static inline char qc_test_fd(struct quic_conn *qc)
*/
static inline int qc_fd(struct quic_conn *qc)
{
return qc_test_fd(qc) ? qc->fd : qc->li->rx.fd;
/* TODO: check this: For backends, qc->fd is always initialized */
return qc_test_fd(qc) ? qc->fd : __objt_listener(qc->target)->rx.fd;
}
/* Try to increment <l> handshake current counter. If listener limit is

View File

@ -34,7 +34,8 @@
#include <haproxy/ssl_sock-t.h>
int ssl_quic_initial_ctx(struct bind_conf *bind_conf);
int qc_alloc_ssl_sock_ctx(struct quic_conn *qc);
SSL_CTX *ssl_quic_srv_new_ssl_ctx(void);
int qc_alloc_ssl_sock_ctx(struct quic_conn *qc, struct connection *conn);
int qc_ssl_provide_all_quic_data(struct quic_conn *qc, struct ssl_sock_ctx *ctx);
static inline void qc_free_ssl_sock_ctx(struct ssl_sock_ctx **ctx)

View File

@ -7,6 +7,7 @@
#include <haproxy/buf-t.h>
#include <haproxy/list-t.h>
#include <haproxy/quic_utils-t.h>
/* A QUIC STREAM buffer used for Tx.
*
@ -43,6 +44,8 @@ struct qc_stream_desc {
uint64_t ack_offset; /* last acknowledged offset */
struct eb_root buf_tree; /* list of active and released buffers */
struct bdata_ctr data; /* data utilization counter */
ullong origin_ts; /* timestamp for creation date of current stream instance */
int flags; /* QC_SD_FL_* values */

View File

@ -74,7 +74,8 @@ int quic_tls_decrypt(unsigned char *buf, size_t len,
const unsigned char *key, const unsigned char *iv);
int quic_tls_generate_retry_integrity_tag(unsigned char *odcid, unsigned char odcid_len,
unsigned char *buf, size_t len,
const unsigned char *buf, size_t len,
unsigned char *tag,
const struct quic_version *qv);
int quic_tls_derive_keys(const QUIC_AEAD *aead, const EVP_CIPHER *hp,
@ -291,6 +292,29 @@ static inline struct quic_enc_level **ssl_to_qel_addr(struct quic_conn *qc,
}
}
#ifdef HAVE_OPENSSL_QUIC
/* Simple helper function which translates an OpenSSL SSL protection level
* to a quictls SSL encryption level. This way the code which uses the OpenSSL
* QUIC API can reuse the code written for the quictls API.
*/
static inline enum ssl_encryption_level_t ssl_prot_level_to_enc_level(struct quic_conn *qc,
uint32_t prot_level)
{
switch (prot_level) {
case OSSL_RECORD_PROTECTION_LEVEL_NONE:
return ssl_encryption_initial;
case OSSL_RECORD_PROTECTION_LEVEL_EARLY:
return ssl_encryption_early_data;
case OSSL_RECORD_PROTECTION_LEVEL_HANDSHAKE:
return ssl_encryption_handshake;
case OSSL_RECORD_PROTECTION_LEVEL_APPLICATION:
return ssl_encryption_application;
default:
return -1;
}
}
#endif
/* Return the address of the QUIC TLS encryption level associated to <level> internal
* encryption level and attached to <qc> QUIC connection if succeeded, or
* NULL if failed.
@ -513,7 +537,8 @@ static inline int quic_pktns_init(struct quic_conn *qc, struct quic_pktns **p)
return 1;
}
static inline void quic_pktns_tx_pkts_release(struct quic_pktns *pktns, struct quic_conn *qc)
static inline void quic_pktns_tx_pkts_release(struct quic_pktns *pktns,
struct quic_conn *qc, int resend)
{
struct eb64_node *node;
@ -534,7 +559,11 @@ static inline void quic_pktns_tx_pkts_release(struct quic_pktns *pktns, struct q
qc_frm_unref(frm, qc);
LIST_DEL_INIT(&frm->list);
quic_tx_packet_refdec(frm->pkt);
qc_frm_free(qc, &frm);
if (!resend)
qc_frm_free(qc, &frm);
else
LIST_APPEND(&pktns->tx.frms, &frm->list);
}
eb64_delete(&pkt->pn_node);
quic_tx_packet_refdec(pkt);
@ -549,9 +578,12 @@ static inline void quic_pktns_tx_pkts_release(struct quic_pktns *pktns, struct q
* connection.
* Note that all the non acknowledged TX packets and their frames are freed.
* Always succeeds.
* <resend> boolean must be 1 to resend the frames which are in flight.
* This is only used to resend the Initial packet frames upon a RETRY
* packet receipt (backend only option).
*/
static inline void quic_pktns_discard(struct quic_pktns *pktns,
struct quic_conn *qc)
struct quic_conn *qc, int resend)
{
TRACE_ENTER(QUIC_EV_CONN_PHPKTS, qc);
@ -567,7 +599,7 @@ static inline void quic_pktns_discard(struct quic_pktns *pktns,
pktns->tx.loss_time = TICK_ETERNITY;
pktns->tx.pto_probe = 0;
pktns->tx.in_flight = 0;
quic_pktns_tx_pkts_release(pktns, qc);
quic_pktns_tx_pkts_release(pktns, qc, resend);
TRACE_LEAVE(QUIC_EV_CONN_PHPKTS, qc);
}
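
The new <resend> flag only matters for the backend RETRY case described in the comment above. Below is a minimal, hypothetical sketch of that call; the helper name and the qc->ipktns field are assumptions, not taken from this diff.

```
/* Hypothetical sketch, not actual haproxy code: on RETRY receipt, discard the
 * Initial packet number space but re-queue its in-flight frames so they can
 * be re-emitted in new Initial packets carrying the retry token.
 */
static void qc_on_retry_received_sketch(struct quic_conn *qc)
{
	/* assumption: qc->ipktns points to the Initial packet number space */
	struct quic_pktns *ipktns = qc->ipktns;

	/* resend=1: frames of non-acknowledged TX packets are appended back to
	 * pktns->tx.frms instead of being freed.
	 */
	quic_pktns_discard(ipktns, qc, 1);
}
```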

View File

@ -26,6 +26,9 @@ int qc_lstnr_params_init(struct quic_conn *qc,
const unsigned char *dcid, size_t dcidlen,
const unsigned char *scid, size_t scidlen,
const struct quic_cid *token_odcid);
void qc_srv_params_init(struct quic_conn *qc,
const struct quic_transport_params *srv_params,
const unsigned char *scid, size_t scidlen);
/* Dump <cid> transport parameter connection ID value if present (non null length).
* Used only for debugging purposes.

View File

@ -99,5 +99,6 @@ struct quic_rx_crypto_frm {
#define QUIC_EV_CONN_KP (1ULL << 50)
#define QUIC_EV_CONN_SSL_COMPAT (1ULL << 51)
#define QUIC_EV_CONN_BIND_TID (1ULL << 52)
#define QUIC_EV_CONN_RELEASE_RCD (1ULL << 53)
#endif /* _HAPROXY_QUIC_TRACE_T_H */

View File

@ -3,7 +3,7 @@
#define QUIC_MIN_CC_PKTSIZE 128
#define QUIC_DGRAM_HEADLEN (sizeof(uint16_t) + sizeof(void *))
#define QUIC_MAX_CC_BUFSIZE (2 * (QUIC_MIN_CC_PKTSIZE + QUIC_DGRAM_HEADLEN))
#define QUIC_MAX_CC_BUFSIZE MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)
/* Sendmsg input buffer cannot be bigger than 65535 bytes. This comes from UDP
* header which uses a 2-bytes length field. QUIC datagrams are limited to 1252

View File

@ -23,6 +23,7 @@
#include <haproxy/buf-t.h>
#include <haproxy/list-t.h>
#include <haproxy/pool.h>
#include <haproxy/quic_conn-t.h>
#include <haproxy/quic_tls-t.h>
#include <haproxy/quic_pacing-t.h>

View File

@ -0,0 +1,17 @@
#ifndef _HAPROXY_QUIC_UTILS_T_H
#define _HAPROXY_QUIC_UTILS_T_H
#ifdef USE_QUIC
#include <haproxy/api-t.h>
/* Counter which can be used to measure data amount across several buffers. */
struct bdata_ctr {
uint64_t tot; /* sum of data present in all underlying buffers */
uint8_t bcnt; /* current number of allocated underlying buffers */
uint8_t bmax; /* max number of allocated buffers during stream lifetime */
};
#endif /* USE_QUIC */
#endif /* _HAPROXY_QUIC_UTILS_T_H */

View File

@ -0,0 +1,59 @@
#ifndef _HAPROXY_QUIC_UTILS_H
#define _HAPROXY_QUIC_UTILS_H
#ifdef USE_QUIC
#include <haproxy/quic_utils-t.h>
#include <haproxy/buf-t.h>
#include <haproxy/chunk.h>
static inline int quic_stream_is_uni(uint64_t id)
{
return id & QCS_ID_DIR_BIT;
}
static inline int quic_stream_is_bidi(uint64_t id)
{
return !quic_stream_is_uni(id);
}
static inline void bdata_ctr_init(struct bdata_ctr *ctr)
{
ctr->tot = 0;
ctr->bcnt = 0;
ctr->bmax = 0;
}
static inline void bdata_ctr_binc(struct bdata_ctr *ctr)
{
++ctr->bcnt;
ctr->bmax = MAX(ctr->bcnt, ctr->bmax);
}
static inline void bdata_ctr_bdec(struct bdata_ctr *ctr)
{
--ctr->bcnt;
}
static inline void bdata_ctr_add(struct bdata_ctr *ctr, size_t data)
{
ctr->tot += data;
}
static inline void bdata_ctr_del(struct bdata_ctr *ctr, size_t data)
{
ctr->tot -= data;
}
static inline void bdata_ctr_print(struct buffer *chunk,
const struct bdata_ctr *ctr,
const char *prefix)
{
chunk_appendf(chunk, " %s%d(%d)/%llu",
prefix, ctr->bcnt, ctr->bmax, (ullong)ctr->tot);
}
#endif /* USE_QUIC */
#endif /* _HAPROXY_QUIC_UTILS_H */
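
To show the intended accounting pattern of the bdata_ctr helpers above, here is a hedged sketch; the function is hypothetical and exists only for illustration.

```
/* Hypothetical sketch: account two buffers and some payload on a counter,
 * then dump it into a chunk the way debugging output would.
 */
#include <haproxy/chunk.h>
#include <haproxy/quic_utils.h>

static void bdata_ctr_demo(struct buffer *out)
{
	struct bdata_ctr ctr;

	bdata_ctr_init(&ctr);       /* tot=0, bcnt=0, bmax=0 */

	bdata_ctr_binc(&ctr);       /* first buffer allocated: bcnt=1, bmax=1 */
	bdata_ctr_add(&ctr, 1200);  /* 1200 bytes queued */

	bdata_ctr_binc(&ctr);       /* second buffer: bcnt=2, bmax=2 */
	bdata_ctr_add(&ctr, 800);

	bdata_ctr_bdec(&ctr);       /* first buffer released: bcnt=1, bmax stays 2 */
	bdata_ctr_del(&ctr, 1200);

	/* prints something like " buf=1(2)/800" into <out> */
	bdata_ctr_print(out, &ctr, "buf=");
}
```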

View File

@ -348,15 +348,6 @@ static inline void sc_sync_send(struct stconn *sc)
{
if (sc_ep_test(sc, SE_FL_T_MUX))
sc_conn_sync_send(sc);
else if (sc_ep_test(sc, SE_FL_T_APPLET)) {
sc_applet_sync_send(sc);
if (sc_oc(sc)->flags & CF_WRITE_EVENT) {
/* Data was send, wake the applet up. It is safe to do so because sc_applet_sync_send()
* removes CF_WRITE_EVENT flag from the channel before trying to send data to the applet.
*/
task_wakeup(__sc_appctx(sc)->t, TASK_WOKEN_OTHER);
}
}
}
/* Combines both sc_update_rx() and sc_update_tx() at once */
@ -395,7 +386,19 @@ static inline int sc_is_send_allowed(const struct stconn *sc)
if (sc->flags & SC_FL_SHUT_DONE)
return 0;
return !sc_ep_test(sc, SE_FL_WAIT_DATA | SE_FL_WONT_CONSUME);
if (!sc_appctx(sc) || !(__sc_appctx(sc)->flags & APPCTX_FL_INOUT_BUFS))
return !sc_ep_test(sc, SE_FL_WAIT_DATA | SE_FL_WONT_CONSUME);
if (sc_ep_test(sc, SE_FL_WONT_CONSUME))
return 0;
if (sc_ep_test(sc, SE_FL_WAIT_DATA)) {
if (__sc_appctx(sc)->flags & (APPCTX_FL_INBLK_FULL|APPCTX_FL_INBLK_ALLOC))
return 0;
if (!co_data(sc_oc(sc)))
return 0;
}
return 1;
}
static inline int sc_rcv_may_expire(const struct stconn *sc)

View File

@ -171,6 +171,7 @@ enum srv_init_state {
#define SRV_F_DEFSRV_USE_SSL 0x4000 /* default-server uses SSL */
#define SRV_F_DELETED 0x8000 /* srv is deleted but not yet purged */
#define SRV_F_STRICT_MAXCONN 0x10000 /* maxconn is to be strictly enforced, as a limit of outbound connections */
#define SRV_F_CHECKED 0x20000 /* set once server was postparsed */
/* configured server options for send-proxy (server->pp_opts) */
#define SRV_PP_V1 0x0001 /* proxy protocol version 1 */
@ -303,6 +304,13 @@ struct srv_pp_tlv_list {
unsigned char type;
};
/* Renegotiate mode */
enum renegotiate_mode {
SSL_RENEGOTIATE_DFLT = 0, /* Use the SSL library's default behavior */
SSL_RENEGOTIATE_OFF, /* Disable secure renegotiation */
SSL_RENEGOTIATE_ON /* Enable secure renegotiation */
};
struct proxy;
struct server {
/* mostly config or admin stuff, doesn't change often */
@ -347,6 +355,7 @@ struct server {
short onmarkedup; /* what to do when marked up: one of HANA_ONMARKEDUP_* */
int slowstart; /* slowstart time in seconds (ms in the conf) */
int idle_ping; /* MUX idle-ping interval in ms */
unsigned long last_change; /* internal use only (not for stats purpose): last time the server state was changed, doesn't change often, not updated atomically on purpose */
char *id; /* just for identification */
uint32_t rid; /* revision: if id has been reused for a new server, rid won't match */
@ -422,6 +431,7 @@ struct server {
int puid; /* proxy-unique server ID, used for SNMP, and "first" LB algo */
int tcp_ut; /* for TCP, user timeout */
char *tcp_md5sig; /* TCP MD5 signature password (RFC2385) */
int do_check; /* temporary variable used during parsing to denote if health checks must be enabled */
int do_agent; /* temporary variable used during parsing to denote if an auxiliary agent check must be enabled */
@ -434,7 +444,7 @@ struct server {
char *lastaddr; /* the address string provided by the server-state file */
struct resolv_options resolv_opts;
int hostname_dn_len; /* string length of the server hostname in Domain Name format */
char *hostname_dn; /* server hostname in Domain Name format */
char *hostname_dn; /* server hostname in Domain Name format (name is lower cased) */
char *hostname; /* server hostname */
struct sockaddr_storage init_addr; /* plain IP address specified on the init-addr line */
unsigned int init_addr_methods; /* initial address setting, 3-bit per method, ends at 0, enough to store 10 entries */
@ -476,7 +486,11 @@ struct server {
int npn_len; /* NPN protocol string length */
char *alpn_str; /* ALPN protocol string */
int alpn_len; /* ALPN protocol string length */
int renegotiate; /* Renegotiate mode (SSL_RENEGOTIATE_ flag) */
} ssl_ctx;
#ifdef USE_QUIC
struct quic_transport_params quic_params; /* QUIC transport parameters */
#endif
struct resolv_srvrq *srvrq; /* Pointer representing the DNS SRV request, if any */
struct list srv_rec_item; /* to attach server to a srv record item */
struct list ip_rec_item; /* to attach server to a A or AAAA record item */

View File

@ -59,10 +59,11 @@ const char *srv_update_addr_port(struct server *s, const char *addr, const char
const char *server_inetaddr_updater_by_to_str(enum server_inetaddr_updater_by by);
const char *srv_update_check_addr_port(struct server *s, const char *addr, const char *port);
const char *srv_update_agent_addr_port(struct server *s, const char *addr, const char *port);
struct server *server_find_by_id(struct proxy *bk, int id);
struct server *server_find_by_id_unique(struct proxy *bk, int id, uint32_t rid);
struct server *server_find_by_name(struct proxy *bk, const char *name);
struct server *server_find_by_name_unique(struct proxy *bk, const char *name, uint32_t rid);
struct server *server_find_by_name(struct proxy *px, const char *name);
struct server *server_find_by_addr(struct proxy *px, const char *addr);
struct server *server_find(struct proxy *bk, const char *name);
struct server *server_find_unique(struct proxy *bk, const char *name, uint32_t rid);
struct server *server_find_best_match(struct proxy *bk, char *name, int id, int *diff);
void apply_server_state(void);
void srv_compute_all_admin_states(struct proxy *px);
@ -73,7 +74,7 @@ struct server *new_server(struct proxy *proxy);
void srv_take(struct server *srv);
struct server *srv_drop(struct server *srv);
void srv_free_params(struct server *srv);
int srv_init_per_thr(struct server *srv);
int srv_init(struct server *srv);
void srv_set_ssl(struct server *s, int use_ssl);
const char *srv_adm_st_chg_cause(enum srv_adm_st_chg_cause cause);
const char *srv_op_st_chg_cause(enum srv_op_st_chg_cause cause);
@ -181,15 +182,16 @@ const struct mux_ops *srv_get_ws_proto(struct server *srv);
/* increase the number of cumulated streams on the designated server */
static inline void srv_inc_sess_ctr(struct server *s)
{
_HA_ATOMIC_INC(&s->counters.cum_sess);
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&s->counters.sps_max,
update_freq_ctr(&s->counters.sess_per_sec, 1));
update_freq_ctr(&s->counters._sess_per_sec, 1));
}
/* set the time of last session on the designated server */
static inline void srv_set_sess_last(struct server *s)
{
s->counters.last_sess = ns_to_sec(now_ns);
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}
/* returns the current server throttle rate between 0 and 100% */
@ -319,6 +321,51 @@ static inline int srv_is_transparent(const struct server *srv)
(srv->flags & SRV_F_MAPPORTS);
}
/* Detach server from proxy list. It is supported to call this
* even if the server is not yet in the list.
* Must be called under thread isolation or when it is safe to assume
* that the parent proxy is not skimming through the server list.
*/
static inline void srv_detach(struct server *srv)
{
struct proxy *px = srv->proxy;
if (px->srv == srv)
px->srv = srv->next;
else {
struct server *prev;
for (prev = px->srv; prev && prev->next != srv; prev = prev->next)
;
BUG_ON(!prev);
prev->next = srv->next;
}
}
/* Returns a pointer to the first server matching id <id> in backend <bk>.
* NULL is returned if no match is found.
*/
static inline struct server *server_find_by_id(struct proxy *bk, int id)
{
struct eb32_node *eb32;
eb32 = eb32_lookup(&bk->conf.used_server_id, id);
return eb32 ? container_of(eb32, struct server, conf.id) : NULL;
}
static inline int srv_is_quic(const struct server *srv)
{
#ifdef USE_QUIC
return srv->addr_type.proto_type == PROTO_TYPE_DGRAM &&
srv->addr_type.xprt_type == PROTO_TYPE_STREAM;
#else
return 0;
#endif
}
#endif /* _HAPROXY_SERVER_H */
/*
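
A rough usage sketch for the new server helpers above: look up a server by id, then unlink it from the proxy list under thread isolation. The wrapper function is hypothetical; thread_isolate()/thread_release() are the existing haproxy primitives.

```
/* Hypothetical sketch: find a server by its puid in backend <bk> and detach
 * it from the proxy's singly-linked server list under thread isolation.
 */
static struct server *take_server_out_sketch(struct proxy *bk, int id)
{
	struct server *srv;

	thread_isolate();                /* srv_detach() requires isolation */
	srv = server_find_by_id(bk, id); /* eb32 lookup on conf.used_server_id */
	if (srv)
		srv_detach(srv);         /* unlink from px->srv list */
	thread_release();

	return srv;
}
```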

View File

@ -61,6 +61,8 @@ struct session {
struct list priv_conns; /* list of private conns */
struct sockaddr_storage *src; /* source address (pool), when known, otherwise NULL */
struct sockaddr_storage *dst; /* destination address (pool), when known, otherwise NULL */
struct fe_counters_shared_tg *fe_tgcounters; /* pointer to current thread group shared frontend counters */
struct fe_counters_shared_tg *li_tgcounters; /* pointer to current thread group shared listener counters */
};
/*

View File

@ -171,25 +171,31 @@ static inline void session_unown_conn(struct session *sess, struct connection *c
}
}
/* Add the connection <conn> to the private conns list of session <sess>. This
* function is called only if the connection is private. Nothing is performed
* if the connection is already in the session list or if the session does not
* owned the connection.
/* Add the connection <conn> to the private conns list of session <sess>. Each
* connection is indexed by its respective target in the session. Nothing is
* performed if the connection is already in the session list.
*
* Returns true if conn is inserted or already present, else false if a failure
* occurs during insertion.
*/
static inline int session_add_conn(struct session *sess, struct connection *conn, void *target)
static inline int session_add_conn(struct session *sess, struct connection *conn)
{
struct sess_priv_conns *pconns = NULL;
struct server *srv = objt_server(conn->target);
int found = 0;
BUG_ON(objt_listener(conn->target));
/* Connection target is used to index it in the session. Only BE conns are expected in session list. */
BUG_ON(!conn->target || objt_listener(conn->target));
/* Already attach to the session or not the connection owner */
if (!LIST_ISEMPTY(&conn->sess_el) || (conn->owner && conn->owner != sess))
/* A connection cannot be attached already to another session. */
BUG_ON(conn->owner && conn->owner != sess);
/* Already attached to the session */
if (!LIST_ISEMPTY(&conn->sess_el))
return 1;
list_for_each_entry(pconns, &sess->priv_conns, sess_el) {
if (pconns->target == target) {
if (pconns->target == conn->target) {
found = 1;
break;
}
@ -199,7 +205,7 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
pconns = pool_alloc(pool_head_sess_priv_conns);
if (!pconns)
return 0;
pconns->target = target;
pconns->target = conn->target;
LIST_INIT(&pconns->conn_list);
LIST_APPEND(&sess->priv_conns, &pconns->sess_el);
@ -219,25 +225,34 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
return 1;
}
/* Returns 0 if the session can keep the idle conn, -1 if it was destroyed. The
* connection must be private.
/* Check that session <sess> is able to keep idle connection <conn>. This must
* be called each time a connection stored in a session becomes idle.
*
* Returns 0 if the connection is kept, else non-zero if the connection was
* explicitly removed from the session.
*/
static inline int session_check_idle_conn(struct session *sess, struct connection *conn)
{
/* Another session owns this connection */
if (conn->owner != sess)
/* Connection must be attached to session prior to this function call. */
BUG_ON(!conn->owner || conn->owner != sess);
/* Connection is not attached to a session. */
if (!conn->owner)
return 0;
/* Ensure conn is not already accounted as idle to prevent sess idle count excess increment. */
BUG_ON(conn->flags & CO_FL_SESS_IDLE);
if (sess->idle_conns >= sess->fe->max_out_conns) {
session_unown_conn(sess, conn);
conn->owner = NULL;
conn->flags &= ~CO_FL_SESS_IDLE;
conn->mux->destroy(conn->ctx);
return -1;
} else {
}
else {
conn->flags |= CO_FL_SESS_IDLE;
sess->idle_conns++;
}
return 0;
}
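
A hedged sketch of the calling sequence these two helpers now expect when a private backend connection becomes idle; the wrapper function and its error handling are hypothetical, only the two session_* calls come from this diff.

```
/* Hypothetical sketch: hand an idle private backend connection back to its
 * owning session using the updated API.
 */
static int mux_release_idle_conn_sketch(struct session *sess, struct connection *conn)
{
	/* Index the connection in the session by conn->target; returns false
	 * only on allocation failure.
	 */
	if (!session_add_conn(sess, conn))
		return -1;

	/* Account the connection as idle; the session destroys it (via
	 * conn->mux->destroy()) when fe->max_out_conns is exceeded.
	 */
	if (session_check_idle_conn(sess, conn) != 0)
		return -1; /* connection was purged, do not touch it anymore */

	return 0;
}
```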

View File

@ -28,9 +28,11 @@
#include <haproxy/api.h>
#include <haproxy/connection-t.h>
#include <haproxy/listener-t.h>
#include <haproxy/protocol-t.h>
#include <haproxy/sock-t.h>
int sock_create_server_socket(struct connection *conn, struct proxy *be, int *stream_err);
int sock_create_server_socket(struct connection *conn, struct proxy *be,
enum proto_type proto_type, int sock_type, int *stream_err);
void sock_enable(struct receiver *rx);
void sock_disable(struct receiver *rx);
void sock_unbind(struct receiver *rx);
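
To illustrate the extended sock_create_server_socket() signature above, here is a hypothetical sketch choosing datagram vs stream parameters; the wrapper name and its is_dgram argument are assumptions for illustration only.

```
/* Hypothetical sketch: create the outgoing server socket with the protocol
 * and socket types matching the transport in use.
 */
static int create_be_socket_sketch(struct connection *conn, struct proxy *be,
                                   int is_dgram, int *stream_err)
{
	if (is_dgram)
		return sock_create_server_socket(conn, be, PROTO_TYPE_DGRAM,
		                                 SOCK_DGRAM, stream_err);

	return sock_create_server_socket(conn, be, PROTO_TYPE_STREAM,
	                                 SOCK_STREAM, stream_err);
}
```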

View File

@ -31,6 +31,7 @@ extern int sock_inet6_v6only_default;
extern int sock_inet_tcp_maxseg_default;
extern int sock_inet6_tcp_maxseg_default;
extern int sock_inet6_seems_reachable;
extern uint last_inet6_check;
#ifdef HA_HAVE_MPTCP
extern int sock_inet_mptcp_maxseg_default;
@ -54,5 +55,6 @@ int sock_inet_is_foreign(int fd, sa_family_t family);
int sock_inet4_make_foreign(int fd);
int sock_inet6_make_foreign(int fd);
int sock_inet_bind_receiver(struct receiver *rx, char **errmsg);
int is_inet6_reachable(void);
#endif /* _HAPROXY_SOCK_INET_H */

View File

@ -73,6 +73,8 @@ struct ckch_conf {
} acme;
};
struct jwt_cert_tree_entry;
/*
* this is used to store 1 to SSL_SOCK_NUM_KEYTYPES cert_key_and_chain and
* metadata.
@ -88,6 +90,7 @@ struct ckch_store {
struct list crtlist_entry; /* list of entries which use this store */
struct ckch_conf conf;
struct task *acme_task;
struct jwt_cert_tree_entry *jwt_entry;
struct ebmb_node node;
char path[VAR_ARRAY];
};

View File

@ -62,7 +62,7 @@ struct ckch_inst *ckch_inst_new();
int ckch_inst_new_load_store(const char *path, struct ckch_store *ckchs, struct bind_conf *bind_conf,
struct ssl_bind_conf *ssl_conf, char **sni_filter, int fcount, int is_default, struct ckch_inst **ckchi, char **err);
int ckch_inst_new_load_srv_store(const char *path, struct ckch_store *ckchs,
struct ckch_inst **ckchi, char **err);
struct ckch_inst **ckchi, char **err, int is_quic);
int ckch_inst_rebuild(struct ckch_store *ckch_store, struct ckch_inst *ckchi,
struct ckch_inst **new_inst, char **err);

View File

@ -316,6 +316,12 @@ struct global_ssl {
int disable;
} ocsp_update;
#endif
#ifdef HAVE_ACME
int acme_scheduler;
#endif
int renegotiate; /* Renegotiate mode (SSL_RENEGOTIATE_ flag) */
};
/* The order here matters for picking a default context,

View File

@ -1,18 +1,7 @@
/*
* include/haproxy/ssl_trace-t.h
* Definitions for SSL traces internal types, constants and flags.
*
* Copyright (C) 2025
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
*/
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#ifndef _HAPROXY_SSL_TRACE_T_H
#define _HAPROXY_SSL_TRACE_T_H
#ifndef _HAPROXY_SSL_TRACE_H
#define _HAPROXY_SSL_TRACE_H
#include <haproxy/trace-t.h>
@ -33,7 +22,16 @@ extern struct trace_source trace_ssl;
#define SSL_EV_CONN_SWITCHCTX_CB (1ULL << 12)
#define SSL_EV_CONN_CHOOSE_SNI_CTX (1ULL << 13)
#define SSL_EV_CONN_SIGALG_EXT (1ULL << 14)
#define SSL_EV_CONN_CIPHERS_EXT (1ULL << 15)
#define SSL_EV_CONN_CURVES_EXT (1ULL << 16)
#define SSL_VERB_CLEAN 1
#define SSL_VERB_MINIMAL 2
#define SSL_VERB_SIMPLE 3
#define SSL_VERB_ADVANCED 4
#define SSL_VERB_COMPLETE 5
#define TRACE_SOURCE &trace_ssl
#endif /* _HAPROXY_SSL_TRACE_T_H */
#endif /* _HAPROXY_SSL_TRACE_H */

View File

@ -55,6 +55,7 @@ time_t x509_get_notbefore_time_t(X509 *cert);
int curves2nid(const char *curve);
const char *nid2nist(int nid);
const char *sigalg2str(int sigalg);
const char *curveid2str(int curve_id);
#endif /* _HAPROXY_SSL_UTILS_H */
#endif /* USE_OPENSSL */

View File

@ -337,11 +337,18 @@ enum stat_idx_info {
ST_I_INF_CURR_STRM,
ST_I_INF_CUM_STRM,
ST_I_INF_WARN_BLOCKED,
ST_I_INF_PATTERNS_ADDED,
ST_I_INF_PATTERNS_FREED,
/* must always be the last one */
ST_I_INF_MAX
};
/* Flags for stat_col.flags */
#define STAT_COL_FL_NONE 0x00
#define STAT_COL_FL_GENERIC 0x01 /* stat is generic if set */
#define STAT_COL_FL_SHARED 0x02 /* stat may be shared between co-processes if set */
/* Represent an exposed statistic. */
struct stat_col {
const char *name; /* short name, used notably in CSV headers */
@ -350,8 +357,8 @@ struct stat_col {
uint32_t type; /* combination of field_nature and field_format */
uint8_t cap; /* mask of stats_domain_px_cap to restrain metrics to an object types subset */
uint8_t generic; /* bit set if generic */
/* 2 bytes hole */
/* 1 byte hole */
uint16_t flags; /* STAT_COL_FL_* flags */
/* used only for generic metrics */
struct {

View File

@ -73,13 +73,13 @@ int stats_dump_stat_to_buffer(struct stconn *sc, struct buffer *buf, struct htx
int stats_emit_raw_data_field(struct buffer *out, const struct field *f);
int stats_emit_typed_data_field(struct buffer *out, const struct field *f);
int stats_emit_field_tags(struct buffer *out, const struct field *f,
char delim);
int persistent, char delim);
/* Returns true if <col> is fully defined, false if only used as name-desc. */
static inline int stcol_is_generic(const struct stat_col *col)
{
return col->generic;
return col->flags & STAT_COL_FL_GENERIC;
}
static inline enum field_format stcol_format(const struct stat_col *col)

View File

@ -206,6 +206,7 @@ enum sc_flags {
SC_FL_EOS = 0x00040000, /* End of stream was reached (from down side to up side) */
SC_FL_HAVE_BUFF = 0x00080000, /* A buffer is ready, flag will be cleared once allocated */
SC_FL_NO_FASTFWD = 0x00100000, /* disable data fast-forwarding */
};
/* This function is used to report flags in debugging tools. Please reflect

Some files were not shown because too many files have changed in this diff.