Compare commits

...

204 Commits

Author SHA1 Message Date
Alexander Stephan
ffbb3cc306 MINOR: sample: Add le2dec (little endian to decimal) sample fetch
This commit introduces a sample fetch, `le2dec`, to convert
little-endian binary input samples into their decimal representations.
The function converts the input into a string containing unsigned
integer numbers, with each number derived from a specified number of
input bytes. The numbers are separated using a user-defined separator.

This new converter is implemented by adding a parameterized sample_conv_2dec
function, unifying the logic of the be2dec and le2dec converters.
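
For illustration, here is a minimal, hypothetical sketch of the
little-endian decoding idea (not the actual sample_conv_2dec code; the
helper name and signature are made up):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Decode <len> little-endian bytes as unsigned integers of <chunk>
     * bytes each and print them into <out>, separated by <sep>.
     * <out> is assumed large enough for the result.
     */
    static void le2dec_sketch(const unsigned char *in, size_t len,
                              size_t chunk, char sep, char *out)
    {
        char *p = out;

        for (size_t i = 0; i + chunk <= len; i += chunk) {
            uint64_t v = 0;

            /* least significant byte comes first in little-endian input */
            for (size_t j = 0; j < chunk; j++)
                v |= (uint64_t)in[i + j] << (8 * j);

            if (i)
                *p++ = sep;
            p += sprintf(p, "%llu", (unsigned long long)v);
        }
        *p = '\0';
    }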

Co-authored-by: Christian Norbert Menges <christian.norbert.menges@sap.com>
[wt: tracked as GH issue #2915]
Signed-off-by: Willy Tarreau <w@1wt.eu>
2025-08-05 13:47:53 +02:00
Aurelien DARRAGON
aeff2a3b2a BUG/MEDIUM: hlua_fcn: ensure systematic watcher cleanup for server list iterator
In 358166a ("BUG/MINOR: hlua_fcn: restore server pairs iterator pointer
consistency"), I wrongly assumed that because the iterator was a temporary
object, no specific cleanup was needed for the watcher.

In fact watcher_detach() is not only relevant for the watcher itself, but
especially for its parent list, from which the current watcher must be
removed.

As iterators are temporary objects, failing to remove their watchers from
the server watcher list leaves that list corrupted.

On a normal iteration sequence, the last watcher_next() receives NULL
as target so it successfully detaches the last watcher from the list.
However the corner case here is with interrupted iterators: users are
free to break out of the iteration loop from the Lua script when a
specific condition is met. When this happens,
hlua_listable_servers_pairs_iterator() doesn't get a chance to detach the
iterator's watcher.

Also, Lua doesn't tell us that the loop was interrupted,
so to fix the issue we rely on the garbage collector to force a last
detach right before the object is freed. To achieve that, watcher_detach()
was slightly modified so that it can be called without knowing whether the
watcher is already detached or not: when called on a detached watcher, the
function does nothing. This saves the caller from having to track the
watcher state and makes the API a little more convenient to use. We now
systematically call watcher_detach() for server iterators right before
they are garbage collected.
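
As an illustration, a minimal sketch of the idempotent-detach idea follows
(simplified types, not the real watcher/mt_list API):

    /* Calling detach on an already-detached node is a harmless no-op,
     * so a GC finalizer can call it unconditionally.
     */
    struct node {
        struct node *prev, *next;   /* both NULL when not in any list */
    };

    static void detach_sketch(struct node *n)
    {
        if (!n->prev && !n->next)
            return;                 /* already detached: nothing to do */
        if (n->prev)
            n->prev->next = n->next;
        if (n->next)
            n->next->prev = n->prev;
        n->prev = n->next = NULL;   /* mark as detached for future calls */
    }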

This was first reported in GH #3055. It can be observed when the server
list of a given proxy is browsed more than once from Lua after a previous
iteration was interrupted before the end. As the
watcher list is corrupted, the common symptom is watcher_attach() or
watcher_next() not ending due to the internal mt_list call looping
forever.

Thanks to GH user @sabretus for their precious help.

It should be backported everywhere 358166a was.
2025-08-05 13:06:46 +02:00
William Lallemand
66f28dbd3f BUG/MINOR: acme: possible integer underflow in acme_txt_record()
a2base64url() can return a negative value if olen is too short to
hold ilen. This is not supposed to happen since the sha256 should
always fit in the buffer. But this is confusing since a2base64()
returns a signed integer which is put in output->data, which is unsigned.

Fix the issue by setting ret to 0 instead of -1 upon error, and return
an unsigned integer instead of a signed one.
This patch also checks the return value from the caller in order
to emit an error, instead of setting trash.data, which is already done
by the function.
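
To illustrate the pattern (with made-up names, not the actual acme code):
a signed error return stored into an unsigned length field silently wraps
to a huge value, so the encoder now returns 0 on error and uses an
unsigned return type.

    #include <stddef.h>

    /* Dummy hex encoder: returns the number of bytes written, or 0 when
     * the output buffer is too short (was -1 before the fix, which would
     * wrap to SIZE_MAX once stored in an unsigned length field).
     */
    static size_t encode_sketch(const unsigned char *in, size_t ilen,
                                char *out, size_t olen)
    {
        if (olen < ilen * 2)
            return 0;

        for (size_t i = 0; i < ilen; i++) {
            out[2 * i]     = "0123456789abcdef"[in[i] >> 4];
            out[2 * i + 1] = "0123456789abcdef"[in[i] & 0xf];
        }
        return ilen * 2;
    }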
2025-08-05 12:12:50 +02:00
William Lallemand
8afd3e588d MINOR: acme: update the log for DNS-01
Update the log for DNS-01 by mentioning the challenge_ready command
available over the CLI.
2025-08-01 18:08:43 +02:00
William Lallemand
9ee14ed2d9 MEDIUM: acme: allow to wait and restart the task for DNS-01
DNS-01 needs an external process which would register a TXT record on a
DNS provider, using a REST API or something else.

To achieve this, the process should read the dpapi sink and wait for
events. With the DNS-01 challenge, HAProxy will put the task to sleep
before asking the ACME server to achieve the challenge. The task then
needs to be woken up, using the command implemented by this patch.

This patch implements the "acme challenge_ready" command, which should be
used by the agent once the challenge has been configured in order to wake
the task up.

Example:
    echo "@1 acme challenge_ready foobar.pem.rsa domain kikyo" | socat /tmp/master.sock -
2025-08-01 18:07:12 +02:00
William Lallemand
3dde7626ba MINOR: acme: emit the DNS-01 challenge details on the dpapi sink
This commit adds a new message to the dpapi sink which is emitted during
the new authorization request.

One message is emitted per challenge to resolve. The certificate name as
well as the thumbprint of the account key are on the first line of the
message. The JSON response for one challenge is then dumped, and the
message ends with a \0.

The agent consuming these messages MUST NOT access the URLs, and SHOULD
only use the thumbprint, dns and token to configure a challenge.

Example:

    $ ( echo "@@1 show events dpapi -w -0"; cat - ) | socat /tmp/master.sock -  | cat -e
    <0>2025-08-01T16:23:14.797733+02:00 acme deploy foobar.pem.rsa thumbprint Gv7pmGKiv_cjo3aZDWkUPz5ZMxctmd-U30P2GeqpnCo$
    {$
       "status": "pending",$
       "identifier": {$
          "type": "dns",$
          "value": "foobar.com"$
       },$
       "challenges": [$
          {$
             "type": "dns-01",$
             "url": "https://0.0.0.0:14000/chalZ/1o7sxLnwcVCcmeriH1fbHJhRgn4UBIZ8YCbcrzfREZc",$
             "token": "tvAcRXpNjbgX964ScRVpVL2NXPid1_V8cFwDbRWH_4Q",$
             "status": "pending"$
          },$
          {$
             "type": "dns-account-01",$
             "url": "https://0.0.0.0:14000/chalZ/z2_WzibwTPvE2zzIiP3BF0zNy3fgpU_8Nj-V085equ0",$
             "token": "UedIMFsI-6Y9Nq3oXgHcG72vtBFWBTqZx-1snG_0iLs",$
             "status": "pending"$
          },$
          {$
             "type": "tls-alpn-01",$
             "url": "https://0.0.0.0:14000/chalZ/AHnQcRvZlFw6e7F6rrc7GofUMq7S8aIoeDileByYfEI",$
             "token": "QhT4ejBEu6ZLl6pI1HsOQ3jD9piu__N0Hr8PaWaIPyo",$
             "status": "pending"$
          },$
          {$
             "type": "http-01",$
             "url": "https://0.0.0.0:14000/chalZ/Q_qTTPDW43-hsPW3C60NHpGDm_-5ZtZaRfOYDsK3kY8",$
             "token": "g5Y1WID1v-hZeuqhIa6pvdDyae7Q7mVdxG9CfRV2-t4",$
             "status": "pending"$
          }$
       ],$
       "expires": "2025-08-01T15:23:14Z"$
    }$
    ^@
2025-08-01 16:48:22 +02:00
William Lallemand
365a69648c MINOR: acme: emit a log for DNS-01 challenge response
This commit emits a log which outputs the TXT entry to create in case of
DNS-01. This is useful in cases where you want to update your TXT entry
manually.

Example:

    acme: foobar.pem.rsa: DNS-01 requires to set the "acme-challenge.example.com" TXT record to "7L050ytWm6ityJqolX-PzBPR0LndHV8bkZx3Zsb-FMg"
2025-08-01 16:12:27 +02:00
William Lallemand
09275fd549 BUILD: acme: avoid declaring TRACE_SOURCE in acme-t.h
Files ending with '-t.h' are supposed to be used for structure
definitions and could be included in the same file to check API
definitions.

This patch removes TRACE_SOURCE from acme-t.h to avoid conflicts with
other TRACE_SOURCE definitions.
2025-07-31 16:03:28 +02:00
Amaury Denoyelle
a6e67e7b41 BUG/MEDIUM: mux-quic: ensure Early-data header is set
QUIC MUX may be initialized prior to handshake completion, when 0-RTT is
used. In this case, connection is flagged with CO_FL_EARLY_SSL_HS, which
is notably used by wait-for-hs http rule.

Early data may be subject to replay attacks. For this reason, haproxy
adds the header 'Early-data: 1' to all requests handled as TLS early
data. Thus the server can reject it if it is deemed unsafe. This header
injection is implemented by http-ana. However, it was not functional
with QUIC due to missing CO_FL_EARLY_DATA connection flag.

Fix this by ensuring that QUIC MUX sets CO_FL_EARLY_DATA when needed.
This is performed during qcc_recv() for STREAM frame reception. It is
only set if QC_CF_WAIT_HS is set, meaning that the handshake is not yet
completed. After this, the request is considered safe and the Early-data
header is not necessary anymore.
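
Conceptually, the fix boils down to something like the following sketch
(illustrative only, with made-up flag values, not the real qcc_recv()
code):

    #define QC_CF_WAIT_HS    0x01   /* handshake not yet completed */
    #define CO_FL_EARLY_DATA 0x02   /* request carried TLS early data */

    /* While the handshake is still pending, any received STREAM data is
     * 0-RTT early data: flag the connection so that http-ana later
     * injects the "Early-data: 1" header.
     */
    static void mark_early_data(unsigned int qcc_flags,
                                unsigned int *conn_flags)
    {
        if (qcc_flags & QC_CF_WAIT_HS)
            *conn_flags |= CO_FL_EARLY_DATA;
    }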

This should fix github issue #3054.

This must be backported up to 3.2 at least. If possible, it should be
backported to all stable releases as well. On these versions, the
current patch relies on the following refactoring commit :
  commit 0a53a008d0
  MINOR: mux-quic: refactor wait-for-handshake support
2025-07-31 15:25:59 +02:00
Amaury Denoyelle
697f7d1142 MINOR: muxes: refactor private connection detach
Following the latest adjustment on session_add_conn() /
session_check_idle_conn(), detach muxes callbacks were rewritten for
private connection handling.

Nothing really fancy here: some more explicit comments and the removal
of a duplicate check on idle conn status for muxes with true
multiplexing support.
2025-07-30 16:14:00 +02:00
Amaury Denoyelle
2ecc5290f2 MINOR: session: streamline session_check_idle_conn() usage
session_check_idle_conn() is called by muxes when a connection becomes
idle. It ensures that the session idle limit is not yet reached; otherwise,
the connection is removed from the session and it can be freed.

Prior to this patch, session_check_idle_conn() was compatible with a
NULL session argument. In this case, it would return true, considering
that no limit was reached and connection not removed.

However, this renders the function error-prone and subject to future
bugs. This patch streamlines it by ensuring it is never called with a
NULL argument. Thus it now only returns true if the connection is kept
in the session, or false if it was removed, as first intended.
2025-07-30 16:13:30 +02:00
Amaury Denoyelle
dd9645d6b9 MINOR: session: do not release conn in session_check_idle_conn()
session_check_idle_conn() is called to flag a connection already
inserted in a session list as idle. If the session limit on the number
of idle connections (max-session-srv-conns) is exceeded, the connection
is removed from the session list.

In addition to the connection removal, session_check_idle_conn()
directly calls MUX destroy callback on the connection. This means the
connection is freed by the function itself and should not be used by the
caller anymore.

This is not practical when an alternative connection closure method
should be used, such as a graceful shutdown with QUIC. As such, remove
the MUX destroy invocation: it is now the responsibility of the caller to
either close or immediately release the connection.
2025-07-30 11:43:41 +02:00
Amaury Denoyelle
57e9425dbc MINOR: session: strengthen idle conn limit check
Add a BUG_ON() in session_check_idle_conn() to ensure the connection is
not already flagged as CO_FL_SESS_IDLE.

This ensures that this function is only called once per connection
transition from active to idle. This is necessary to ensure that the session
idle counter is only incremented once per connection.
2025-07-30 11:40:16 +02:00
Amaury Denoyelle
ec1ab8d171 MINOR: session: remove redundant target argument from session_add_conn()
session_add_conn() uses three arguments: connection and session
instances, plus a void pointer labelled as target. Typically, it
represents the server, but can also be a backend instance (for example
on dispatch).

In fact, this argument is redundant as <target> is already a member of
the connection. This commit simplifies session_add_conn() by removing
it. A BUG_ON() on target is extended to ensure it is never NULL.
2025-07-30 11:39:57 +02:00
Amaury Denoyelle
668c2cfb09 MINOR: session: strengthen connection attach to session
This commit is the first one of a series to refactor the insertion of
backend private connections into the session list.

session_add_conn() is used to attach a connection into a session list.
Previously, this function would report an error if the connection
specified was already attached to another session. However, this case
currently never happens and thus can be considered as buggy.

Remove this check and replace it with a BUG_ON(). This allows us to ensure
that session insertion remains consistent. The same check is also
transformed in session_check_idle_conn().
2025-07-30 11:39:26 +02:00
Amaury Denoyelle
cfe9bec1ea MINOR: mux-quic: release conn after shutdown on BE reuse failure
On stream detach on backend side, connection is inserted in the proper
server/session list to be able to reuse it later. If insertion fails and
the connection is idle, the connection can be removed immediately.

If this occurs on a QUIC connection, QUIC MUX implements graceful
shutdown to ensure the server is notified of the closure. However, the
connection instance is not freed. Change this to ensure that both
shutdown and release are performed.
2025-07-30 10:04:19 +02:00
Aurelien DARRAGON
14966c856b MINOR: clock: make global_now_ns a pointer as well
Similar to the previous commit, but for global_now_ns.
2025-07-29 18:04:15 +02:00
Aurelien DARRAGON
4a20b3835a MINOR: clock: make global_now_ms a pointer
This is preparation work for shared counters between co-processes. As
co-processes will need to share a common date. global_now_ms will be used
for that as it will point to the shm when sharing is enabled.

Thus in this patch we turn global_now_ms into a pointer (and adjust the
places where it is written to and read from; fortunately, atomic operations
through a pointer are already used so the change is trivial).

For now global_now_ms points to process-local _global_now_ms which is a
fallback for when sharing through the shm is not enabled.
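
A minimal sketch of the pointer-with-local-fallback pattern (simplified,
not the actual clock code):

    /* All accesses go through the pointer, which can later be re-pointed
     * at a shared-memory location when counter sharing is enabled.
     */
    static unsigned int local_now_ms;                /* process-local fallback */
    static unsigned int *global_now_ms_ptr = &local_now_ms;

    static unsigned int read_now_ms(void)
    {
        return __atomic_load_n(global_now_ms_ptr, __ATOMIC_RELAXED);
    }

    static void write_now_ms(unsigned int v)
    {
        __atomic_store_n(global_now_ms_ptr, v, __ATOMIC_RELAXED);
    }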
2025-07-29 18:04:14 +02:00
Aurelien DARRAGON
713ebd2750 CLEANUP: counters: rename counters_be_shared_init to counters_be_shared_prepare
75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing the shared
stats directly in counters struct") took care of renaming
counters_fe_shared_init() but we forgot counters_be_shared_init().

Let's fix that for consistency
2025-07-29 18:00:13 +02:00
Aurelien DARRAGON
2ffe515d97 BUG/MINOR: hlua: take default-path into account with lua-load-per-thread
As discussed in GH #3051, default-path is not taken into account when
loading files using lua-load-per-thread. In fact, the initial
hlua_load_state() (performed on the first thread which parses the config)
is successful, but other threads run hlua_load_state() later based
on config hints which were saved by the first thread, and those config
hints only contain the file path provided on the lua-load-per-thread
config line, not the absolute one. Indeed, `default-path` directive
changes the current working directory only for the thread parsing the
configuration.

To fix the issue, when storing config hints under hlua_load_per_thread()
we now make sure to save the absolute file path for `lua-load-per-thread'
argument.

Thanks to GH user @zhanhb for having reported the issue

It may be backported to all stable versions.
2025-07-29 17:58:28 +02:00
William Lallemand
83a335f925 MINOR: acme: implement traces
Implement traces for the ACME protocol.

 -dt acme:data:complete will dump every input and output buffers,
 including decoded buffers before being converted to JWS.
 It will also dump certificates in the traces.

 -dt acme:user:complete will only dump the state of the task handler.
2025-07-29 17:25:10 +02:00
Willy Tarreau
cedb4f0461 [RELEASE] Released version 3.3-dev5
Released version 3.3-dev5 with the following main changes :
    - BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
    - DOC: list missing global QUIC settings
2025-07-28 11:26:22 +02:00
Amaury Denoyelle
7fa812a1ac DOC: list missing global QUIC settings
Complete list of global keywords with missing QUIC entries.

This could be backported to stable versions. This requires taking into
account the version in which each keyword was introduced.
* limited-quic, introduced in 2.8
* no-quic, introduced in 2.8
* tune.quic.cc.cubic.min-losses, introduced in 3.1
2025-07-28 11:22:35 +02:00
Aurelien DARRAGON
021a0681be BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
Following c24de07 ("OPTIM: stats: store fast sharded counters pointers
at session and stream level") some crashes were observed in
connect_server():

  #0  0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
  2101                            _HA_ATOMIC_INC(&s->sv_tgcounters->connect);
  Missing separate debuginfos, use: debuginfo-install glibc-2.17-325.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64 nss-softokn-freebl-3.67.0-3.el7_9.x86_64 pcre-8.32-17.el7.x86_64
  (gdb) bt
  #0  0x00000000007ba39c in connect_server (s=0x65117b0) at src/backend.c:2101
  #1  0x00000000007baff8 in back_try_conn_req (s=0x65117b0) at src/backend.c:2378
  #2  0x00000000006c0e9f in process_stream (t=0x650f180, context=0x65117b0, state=8196) at src/stream.c:2366
  #3  0x0000000000bd3e51 in run_tasks_from_lists (budgets=0x7ffd592752e0) at src/task.c:655
  #4  0x0000000000bd49ef in process_runnable_tasks () at src/task.c:889
  #5  0x0000000000851169 in run_poll_loop () at src/haproxy.c:2834
  #6  0x0000000000851865 in run_thread_poll_loop (data=0x1a03580 <ha_thread_info>) at src/haproxy.c:3050
  #7  0x0000000000852a53 in main (argc=7, argv=0x7ffd592755f8) at src/haproxy.c:3637

Here the crash occurs during the atomic inc of a sv_tgcounters metric from
the stream pointer, which tells us the pointer is likely garbage.

In fact, we assign s->sv_tgcounters each time the stream target is set to
a valid server. For that we use the stream_set_srv_target() helper which
does the assignment for us. By reviewing the code, it turns out we forgot
to call stream_set_srv_target() in pendconn_dequeue(), where the stream
target is set to the server that picked the pendconn.

Let's fix the bug by using stream_set_srv_target() there.

No backport needed unless c24de07 is.
2025-07-28 08:54:38 +02:00
Willy Tarreau
5d4ff9f02e [RELEASE] Released version 3.3-dev4
Released version 3.3-dev4 with the following main changes :
    - CLEANUP: server: do not check for duplicates anymore in findserver()
    - REORG: server: move findserver() from proxy.c to server.c
    - MINOR: server: use the tree to look up the server name in findserver()
    - CLEANUP: server: rename server_find_by_name() to server_find()
    - CLEANUP: server: rename findserver() to server_find_by_name()
    - CLEANUP: server: use server_find_by_name() where relevant
    - CLEANUP: cfgparse: lookup proxy ID using existing functions
    - CLEANUP: stream: lookup server ID using standard functions
    - CLEANUP: server: simplify server_find_by_id()
    - CLEANUP: server: add server_find_by_addr()
    - CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
    - CLEANUP: server: be sure never to compare src against a non-existing defsrv
    - MEDIUM: proxy: take the defsrv out of the struct proxy
    - MINOR: proxy: add checks for defsrv's validity
    - MEDIUM: proxy: no longer allocate the default-server entry by default
    - MEDIUM: proxy: register a post-section cleanup function
    - MINOR: debug: report haproxy and operating system info in panic dumps
    - BUG/MEDIUM: h3: do not overwrite interim with final response
    - BUG/MINOR: h3: properly realloc buffer after interim response encoding
    - BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
    - MINOR: qmux: change API for snd_buf FIN transmission
    - BUG/MEDIUM: h3: handle interim response properly on FE side
    - BUG/MINOR: h3: properly handle interim response on BE side
    - BUG/MINOR: quic: Wrong source address use on FreeBSD
    - MINOR: h3: remove unused outbuf in h3_resp_headers_send()
    - BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
    - DEV: gdb: add a memprofile decoder to the debug tools
    - MINOR: quic: Get rid of qc_is_listener()
    - DOC: connection: explain the rules for idle/safe/avail connections
    - BUG/MEDIUM: quic-be: CC buffer released from wrong pool
    - BUG/MINOR: halog: exit with error when some output filters are set simultaneosly
    - MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
    - MINOR: cpu-topo: write thread-cpu bindings into trash buffer
    - MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
    - MINOR: debug: add thread-cpu bindings info in 'show dev' output
    - MINOR: quic: Remove pool_head_quic_be_cc_buf pool
    - BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
    - BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
    - BUG/MINOR: logs: fix log-steps extra log origins selection
    - BUG/MINOR: hq-interop: fix FIN transmission
    - MINOR: ssl: Add ciphers in ssl traces
    - MINOR: ssl: Add curve id to curve name table and mapping functions
    - MINOR: ssl: Add curves in ssl traces
    - MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
    - MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
    - MINOR: h3: use smallbuf for request header emission
    - MINOR: h3: add traces to h3_req_headers_send()
    - BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
    - MINOR: log: explicitly ignore "log-steps" on backends
    - BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
    - BUG/MINOR mux-quic: apply correctly timeout on output pending data
    - BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
    - MINOR: mux-quic: refactor timeout code
    - MINOR: mux-quic: correctly implement backend timeout
    - MINOR: mux-quic: disable glitch on backend side
    - MINOR: mux-quic: store session in QCS instance
    - MEDIUM: mux-quic: implement be connection reuse
    - MINOR: mux-quic: do not reuse connection if app already shut
    - MEDIUM: mux-quic: support backend private connection
    - MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
    - BUG/MINOR: acme: allow "processing" in challenge requests
    - CLEANUP: acme: fix wrong spelling of "resources"
    - CLEANUP: ssl: Use only NIDs in curve name to id table
    - MINOR: acme: add ACME to the haproxy -vv feature list
    - BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
    - BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
    - BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
    - BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
    - BUG/MEDIUM: Remove sync sends from streams to applets
    - MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
    - MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
    - MEDIUM: hlua: Update the tcp applet to use its own buffers
    - MINOR: hlua: Fill the request array on the first HTTP applet run
    - MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
    - MEDIUM: hlua: Update the http applet to use its own buffers
    - BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
    - BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
    - MEDIUM: hlua: Update the socket applet to use its own buffers
    - BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
    - MEDIUM: dns: Update the dns_session applet to use its own buffers
    - CLEANUP: http-client: Remove useless indentation when sending request body
    - MINOR: http-client: Try to send request body with headers if possible
    - MINOR: http-client: Trigger an error if first response block isn't a start-line
    - BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
    - MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
    - MEDIUM: http-client: Update the http-client applet to use its own buffers
    - MEDIUM: log: Update the log applet to use its own buffers
    - MEDIUM: sink: Update the sink applets to use their own buffers
    - MEDIUM: peers: Update the peer applet to use its own buffers
    - MEDIUM: promex: Update the promex applet to use their own buffers
    - MINOR: applet: Add support for flags on applets with a flag about the new API
    - MEDIUM: applet: Emit a warning when a legacy applet is spawned
    - BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
    - MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
    - CLEANUP: compiler: prefer char * over void * for pointer arithmetic
    - CLEANUP: include: replace hand-rolled offsetof to avoid UB
    - CLEANUP: peers: remove unused peer_session_target()
    - OPTIM: stats: store fast sharded counters pointers at session and stream level
2025-07-26 09:55:26 +02:00
Aurelien DARRAGON
c24de077bd OPTIM: stats: store fast sharded counters pointers at session and stream level
Following commit 75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing
the shared stats directly in counters struct"), in order to minimize the
impact of the recent sharded counters work, we try to push things a bit
further in this patch by storing and using "fast" pointers at the session
and stream levels when available to avoid costly indirections and
systematic "tgid" resolution (which can not be cached by the CPU due to
its THREAD-local nature).

Indeed, we know that a session/stream is tied to a given CPU, thanks to
this we know that the tgid for a given session/stream will never change.

Given that, we are able to store sharded frontend and listener counters
pointer at the session level (namely sess->fe_tgcounters and
sess->li_tgcounters), and once the backend and the server are selected,
we are also able to store backend and server sharded counters
pointer at the stream level (namely s->be_tgcounters and s->sv_tgcounters)

Everywhere we rely on these counters and the stream or session context is
available, we use the fast pointers instead of the indirect pointer
path to make the pointer resolution a bit faster.

This optimization proved to bring a few percent back, and together with
the previous 75e480d10 commit we have now fixed the performance regression
(we are back on par with 3.2 stats performance).
2025-07-25 18:24:23 +02:00
Aurelien DARRAGON
cf8ba60c88 CLEANUP: peers: remove unused peer_session_target()
Since commit 7293eb68 ("MEDIUM: peers: use server as stream target") the peer
session target always points to the server in order to benefit from existing
server transport options.

Thanks to that, it is no longer necessary to have the peer_session_target()
helper function, because all it does is return the pointer to the
server object. Let's get rid of it.
2025-07-25 18:24:17 +02:00
Ben Kallus
1e48ec7f6c CLEANUP: include: replace hand-rolled offsetof to avoid UB
The C standard specifies that it's undefined behavior to dereference
NULL (even if you use & right after). The hand-rolled offsetof idiom
&(((s*)NULL)->f) is thus technically undefined. This clutters the
output of UBSan and is simple to fix: just use the real offsetof when
it's available.

Note that there's no clear statement about this point in the spec,
only several points which together converge to this:

- From N3220, 6.5.3.4:
  A postfix expression followed by the -> operator and an identifier
  designates a member of a structure or union object. The value is
  that of the named member of the object to which the first expression
  points, and is an lvalue.

- From N3220, 6.3.2.1:
  An lvalue is an expression (with an object type other than void) that
  potentially designates an object; if an lvalue does not designate an
  object when it is evaluated, the behavior is undefined.

- From N3220, 6.5.4.4 p3:
  The unary & operator yields the address of its operand. If the
  operand has type "type", the result has type "pointer to type". If
  the operand is the result of a unary * operator, neither that operator
  nor the & operator is evaluated and the result is as if both were
  omitted, except that the constraints on the operators still apply and
  the result is not an lvalue. Similarly, if the operand is the result
  of a [] operator, neither the & operator nor the unary * that is
  implied by the [] is evaluated and the result is as if the & operator
  were removed and the [] operator were changed to a + operator.

=> In short, this is saying that C guarantees these identities:
    1. &(*p) is equivalent to p
    2. &(p[n]) is equivalent to p + n

As a consequence, &(*p) doesn't result in the evaluation of *p, only
the evaluation of p (and similar for []). There is no corresponding
special carve-out for ->.
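
For example (the struct here is just an illustration):

    #include <stddef.h>

    struct foo {
        int  a;
        char b;
    };

    /* Hand-rolled idiom: dereferences a NULL pointer, technically UB and
     * flagged by UBSan even though & is applied right after. */
    #define MY_OFFSETOF_UB(type, member) ((size_t)&(((type *)NULL)->member))

    /* Preferred: the standard macro from <stddef.h>, well-defined. */
    static const size_t off_b = offsetof(struct foo, b);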

See also: https://pvs-studio.com/en/blog/posts/cpp/0306/

After this patch, HAProxy can run without crashing after building w/
clang-19 -fsanitize=undefined -fno-sanitize=function,alignment
2025-07-25 17:54:32 +02:00
Ben Kallus
d3b46cca7b CLEANUP: compiler: prefer char * over void * for pointer arithmetic
This patch changes two instances of pointer arithmetic on void *
to use char * instead, to avoid UB. This is essentially to please
UB analyzers, though.
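
For example (illustrative helper, not one of the changed call sites):

    #include <stddef.h>

    static void *advance(void *base, size_t off)
    {
        /* before: return base + off;  -- void * arithmetic, UB per the
         * standard (only a GCC extension makes it work) */
        return (char *)base + off;     /* well-defined, 1-byte units */
    }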
2025-07-25 17:54:32 +02:00
Aurelien DARRAGON
75e480d107 MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
Between 3.2 and 3.3-dev we observed a noticeable performance regression
due to stats handling. After bisecting, Willy found out that recent
work to split stats computing accross multiple thread groups (stats
sharding) was responsible for that performance regression. We're looking
at roughly 20% performance loss.

More precisely, it is the added indirections, multiplied by the number
of statistics that are updated for each request, which in the end causes
a significant amount of time being spent resolving pointers.

We noticed that the fe_counters_shared and be_counters_shared structures
which are currently allocated in dedicated memory since a0dcab5c
("MAJOR: counters: add shared counters base infrastructure")
are no longer huge since 16eb0fab31 ("MAJOR: counters: dispatch counters
over thread groups") because they now essentially hold flags plus the
per-thread group id pointer mapping, not the counters themselves.

As such we decided to try merging fe_counters_shared and
be_counters_shared in their parent structures. The cost is a slight memory
overhead for the parent structure, but it allows us to get rid of one
pointer indirection. This patch alone yields visible performance gains
and almost restores 3.2 stats performance.
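
A simplified sketch of the change (hypothetical field names, not the real
structures): instead of a separately allocated shared block reached
through a pointer, the small shared part is embedded directly in the
parent counters struct, removing one pointer dereference on every counter
update.

    struct fe_counters_shared_sketch {
        unsigned int flags;
        void **tg;                    /* per-thread-group pointer mapping */
    };

    struct fe_counters_sketch {
        /* before: struct fe_counters_shared_sketch *shared;  (extra hop) */
        struct fe_counters_shared_sketch shared;   /* embedded: no hop */
        /* ... local counters ... */
    };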

counters_fe_shared_get() was renamed to counters_fe_shared_prepare() and
now returns either failure or success instead of a pointer because we
don't need to retrieve a shared pointer anymore, the function takes care
of initializing existing pointer.
2025-07-25 16:46:10 +02:00
Aurelien DARRAGON
31adfb6c15 BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
Since ccc43412 ("OPTIM: log: use thread local lf_buildctx to stop pushing
it on the stack"), recursively calling sess_build_logline_orig(), which
may for instance happen when leveraging %ID (or unique-id fetch) for the
first time, would lead to undefined behavior because the parent
sess_build_logline_orig() build context was shared between recursive calls
(only one build ctx per thread to avoid pushing it on the stack for each
call)

In short, the parent build ctx would be altered by the recursive calls,
which is obviously not expected and could result in log formatting errors.

To fix the issue but still avoid polluting the stack with the large
lf_buildctx struct, let's move the static 256-byte build buffer out of the
buildctx so that the buildctx is now stored on the stack again (each
function invocation has its own dedicated build ctx). On the other hand,
it's acceptable to have only one 256-byte build buffer per thread because
the build buffer is not involved in recursive calls (unlike the build ctx).

Thanks to Willy and Vincent Gramer for spotting the bug and providing
useful repro.

It should be backported in 3.0 with ccc43412.
2025-07-25 16:46:03 +02:00
Christopher Faulet
b8d5307bd9 MEDIUM: applet: Emit a warning when a legacy applet is spawned
To motivate developers to support the new applets API, a warning is now
emitted when a legacy applet is spawned. To not flood users, this warning is
only emitted once per legacy applet. To do so, the applet flag
APPLET_FL_WARNED was added. It is set when the warning is emitted.

Note that the test and set on this flag are not performed via atomic
operations, so it is possible to have more than one warning for a given
applet if it is spawned at the same time on several threads. At worst,
there is one warning per thread.
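
A self-contained sketch of the warn-once logic (simplified, not the actual
applet code):

    #include <stdio.h>

    #define APPLET_FL_WARNED 0x01

    struct applet_sketch {
        const char  *name;
        unsigned int flags;
    };

    /* The test and set are not atomic: threads racing on the same applet
     * may each emit one warning at worst, which is considered acceptable.
     */
    static void warn_legacy_applet(struct applet_sketch *app)
    {
        if (app->flags & APPLET_FL_WARNED)
            return;
        app->flags |= APPLET_FL_WARNED;
        fprintf(stderr, "applet '%s' uses the legacy API\n", app->name);
    }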
2025-07-25 15:53:33 +02:00
Christopher Faulet
337768656b MINOR: applet: Add support for flags on applets with a flag about the new API
A new field was added in the applet structure to be able to set flags on the
applets. The first one is related to the new API. APPLET_FL_NEW_API is set
for applets based on the new API. It was set on all of HAProxy's applets.
2025-07-25 15:44:02 +02:00
Christopher Faulet
2e5e6cdf23 MEDIUM: promex: Update the promex applet to use their own buffers
Thanks to this patch, the promex applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
2025-07-24 12:13:42 +02:00
Christopher Faulet
a2cb0033bd MEDIUM: peers: Update the peer applet to use its own buffers
Thanks to this patch, the peer applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
576361c23e MEDIUM: sink: Update the sink applets to use their own buffers
Thanks to this patch, the sink applets are now using their own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
5da704b55f MEDIUM: log: Update the log applet to use its own buffers
Thanks to this patch, the log applet is now using its own buffers. .rcv_buf
and .snd_buf callback functions are now defined to use the default raw
functions. The applet API is now used and any dependencies on the
stream-connectors and the channels were removed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
6a2b354dea MEDIUM: http-client: Update the http-client applet to use its own buffers
Thanks to this patch, the http-client applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Parts to receive and send data have also been updated to use
the applet API and to remove any dependencies on the stream-connectors and
the channels.
2025-07-24 12:13:42 +02:00
Christopher Faulet
d05ff904bf MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
In the CLI I/O handler interacting with the HTTP client, in HTX mode, after
a dump of the HTX message, data must be removed. Instead of removing all
blocks one by one, we can call htx_reset() because the whole message must be
flushed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
1741bc4bf0 BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
In the CLI I/O handler interacting with the HTTP client, we must not try to
push raw headers in HTX mode, because there is no raw data in this
mode. Doing so prevented the HTX dump at the end of the I/O handler.

It is a 3.3-specific issue. No backport needed.
2025-07-24 12:13:42 +02:00
Christopher Faulet
88aa7a780c MINOR: http-client: Trigger an error if first response block isn't a start-line
The first HTX block of a response must be a start-line. There is no reason
to wait for something else. And if there are output data in the response
channel buffer, it means the start-line must be found there.
2025-07-24 12:13:42 +02:00
Christopher Faulet
c08a0dae30 MINOR: http-client: Try to send request body with headers if possible
There is no reason to yield after sending the request headers, except if the
request was fully sent. If there is a payload, it is better to send it as
well. However, when the whole request was sent, we can leave the I/O handler.
2025-07-24 12:13:42 +02:00
Christopher Faulet
96aa251d20 CLEANUP: http-client: Remove useless indentation when sending request body
It was useless to have an extra indentation level to handle the
HTTPCLIENT_S_REQ_BODY state in the http-client I/O handler.
2025-07-24 12:13:42 +02:00
Christopher Faulet
217da087fd MEDIUM: dns: Update the dns_session applet to use its own buffers
Thanks to this patch, the dns_session applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
765f14e0e3 BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
The issue was introduced by commit 27236f221 ("BUG/MINOR: dns: add tempo
between 2 connection attempts for dns servers"). In this patch, to delay the
reconnection, a timer is used on the appctx when it is created. This
postpones the appctx initialization. However, once initialized, the
expiration time of the underlying task is not reset. So, it is always
considered as expired and the appctx is woken up in loop.

The fix is quite simple. In dns_session_init(), the expiration time of the
appctx's task is now always set to TICK_ETERNITY.

This patch must be backported everywhere the commit above was backported. So
as far as 2.8 for now but possibly to all stable versions.
2025-07-24 12:13:41 +02:00
Christopher Faulet
e542d2dfaa MEDIUM: hlua: Update the socket applet to use its own buffers
Thanks to this patch, the lua cosocket applet is now using its own
buffers. .rcv_buf and .snd_buf callback functions are now defined to use the
default raw functions. Functions to receive and send data have also been
updated to use the applet API and to remove any dependencies on the
stream-connectors and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
7e96ff6b84 BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
It is a fix similar to the previous one ("BUG/MEDIUM: hlua: Report to SC
when data were consumed on a lua socket"), but for the write side. The
writer must notify the cosocket that it needs more space in the request
buffer to produce more data, by calling sc_need_room(). Otherwise, nothing
prevents the cosocket applet from being woken up again and again.

This patch must be backported as far as 2.8, and maybe to 2.6 too.
2025-07-24 12:13:41 +02:00
Christopher Faulet
21e45a61d1 BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
The Lua cosockets are quite strange. There is an applet used to handle the
connection, and writers and readers subscribed on it to write or read
data. Writers and readers are tasks woken up by the cosocket applet when
data can be consumed or produced, depending on the channels buffers
state. Then the cosocket applet is woken up by writers and readers when read
or write events were performed.

It means the cosocket applet has only little information about what was
produced or consumed. It is the writers' and readers' responsibility to
notify any
blocking. Among other things, the readers must take care to notify the
stream on top of the cosocket applet that some data was consumed. Otherwise,
it may remain blocked, waiting for a write event (a write event from the
stream point of view is a read event from the cosocket point of view).

This patch must be backported as far as 2.8, and maybe to 2.6 too.
2025-07-24 12:13:41 +02:00
Christopher Faulet
48df877dab MEDIUM: hlua: Update the http applet to use its own buffers
Thanks to this patch, the lua HTTP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
HTX functions. Functions to receive and send data have also been updated to
use the applet API and to remove any dependencies on the stream-connectors
and the channels.
2025-07-24 12:13:41 +02:00
Christopher Faulet
3e456be5ae MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
The hlua_http_get_headers() function was using the HTTP message from the
stream TXN to retrieve headers from a message. However, this will be an
issue when updating the Lua HTTP applet to use its own buffers. Indeed, in
that case, information from the channels will be unavailable. So now,
hlua_http_get_headers() uses a buffer containing an HTX message. It is just
an API change because, internally, the function was already manipulating an
HTX message.
2025-07-24 12:13:41 +02:00
Christopher Faulet
15080d9aae MINOR: hlua: Fill the request array on the first HTTP applet run
When a Lua HTTP applet is created, a "request" object is created, filled
with the request information (method, path, headers...), to be able to
easily retrieve this information from the script. However, this was done
when the appctx was created, retrieving the info from the request channel.

To be able to update the applet to use its own buffer, it is now performed
on the first applet run. Indeed, when the applet is created, the info is
not forwarded yet and should not be accessed. Note that for now, the
information is still retrieved from the channel.
2025-07-24 12:13:41 +02:00
Christopher Faulet
fdb66e6c5e MEDIUM: hlua: Update the tcp applet to use its own buffers
Thanks to this patch, the lua TCP applet is now using its own buffers.
.rcv_buf and .snd_buf callback functions are now defined to use the default
raw functions. Other changes are quite light. Mainly, end of stream and
errors are reported on the appctx instead of the stream-endpoint descriptor.
2025-07-24 12:13:41 +02:00
Christopher Faulet
1f9a1cbefc MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
The applet_get_inbuf() and applet_get_outbuf() functions were not testing
whether the buffers were available, so the caller had to check them before
calling one of these functions, which was not really handy. Now, these
functions take care to have a fully usable buffer before returning;
otherwise NULL is returned.
2025-07-24 12:13:41 +02:00
Christopher Faulet
44aae94ab9 MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
It will be useful for HTX applets because available data in the input buffer and
available space in the output buffer are computed from the HTX message and not
the buffer itself. So now, applet_htx_input_data() and applet_htx_output_room()
functions can be used.
2025-07-24 12:13:41 +02:00
Christopher Faulet
d9855102cf BUG/MEDIUM: Remove sync sends from streams to applets
When the applet API was reviewed to use dedicated buffers, the support for
sync sends from the streams to applets was added. Unfortunately, it was not
a good idea because this way it is possible to deliver data to an applet and
release it just after, truncating the data. Indeed, the release stage for
applets is related to the stream release itself. However, unlike the
multiplexers, the applets cannot outlive a stream for now.

So, for now, the sync sends from the streams are removed for applets, waiting
for a better way to handle the applets release stage.

Note that this only concerns applets using their own buffers. And as of now,
the bug is harmless because all refactored applets are on the server side and
consume data first. But this will be an issue with the HTTP client.

This patch should be backported as far as 3.0 after a period of observation.
2025-07-24 12:13:41 +02:00
Christopher Faulet
574d0d8211 BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
The applet_getword() function returns one extra byte when a string is
returned, because the "ret" variable is not reset before the loop on the
data. The patch also fixes applet_getline().

It is a 3.3-specific issue. No need to backport.
2025-07-24 12:13:41 +02:00
Christopher Faulet
41a40680ce BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
sc_is_send_allowed() function is used to know if an applet is able to
receive data from the stream. But this function was designed for applets
using the channels buffer. It is not adapted to applets using their own
buffers.

When the SE_FL_WAIT_DATA flag is set, it means the applet is waiting for
more data and should not be woken up without new data. For applets using
the channels buffer, just testing the flag is enough because process_stream()
will remove it when more data becomes available. For applets using their own
buffers, it is more complicated. Some data may be blocked in the output
channel buffer. In that case, and when the applet input buffer can receive
data, the applet can be woken up.

This patch must be backported as far as 3.0 after a period of observation.
2025-07-24 12:13:41 +02:00
Christopher Faulet
0d371d2729 BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
When data are skipped from the input buffer of an applet, we must take care
to notify that the input buffer is no longer full. Otherwise, this could
prevent the stream from pushing data to the applet.

It is 3.3-specific. No backport needed.
2025-07-24 12:13:41 +02:00
Christopher Faulet
5b5ecf848d BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
When an HTTP applet tries to retrieve data, the request headers are still in
the buffer. But, instead of being silently removed, their size is removed
from the amount of data retrieved. When the request payload is fully
retrieved, it is not an issue. But it is a problem when a length is
specified: the data are shortened by the headers' size.

So now, we take care to silently remove headers.

This patch must be backported to all stable versions.
2025-07-24 12:13:41 +02:00
William Lallemand
8258c8166a MINOR: acme: add ACME to the haproxy -vv feature list
Add "ACME" in the feature list in order to check if the support was
built successfully.
2025-07-24 11:49:11 +02:00
Remi Tricot-Le Breton
14615a8672 CLEANUP: ssl: Use only NIDs in curve name to id table
The curve name to curve id mapping table was built out of multiple
internal tables found in openssl sources, namely the 'nid_to_group'
table found in 'ssl/t1_lib.c' which maps openssl specific NIDs to public
IANA curve identifiers. In this table, there were two instances of
EVP_PKEY_XXX ids being used while all the other ones are NID_XXX
identifiers.
Since the two EVP_PKEY are actually equal to their NID equivalent in
'include/openssl/evp.h' we can use NIDs all along for better coherence.
2025-07-24 10:58:54 +02:00
Ilia Shipitsin
a2267fafcf CLEANUP: acme: fix wrong spelling of "resources"
"ressources" was used as a variable name, let's use English variant
to make the spell checker happier.
2025-07-24 08:11:42 +02:00
William Lallemand
02db0e6b9f BUG/MINOR: acme: allow "processing" in challenge requests
Allow the "processing" status in the challenge object when requesting
to do the challenge, in addition to "pending".

According to RFC 8555 https://datatracker.ietf.org/doc/html/rfc8555/#section-7.1.6

   Challenge objects are created in the "pending" state.  They
   transition to the "processing" state when the client responds to the
   challenge (see Section 7.5.1)

However, some CAs could respond with a "processing" state without ever
transitioning to "pending".
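
A minimal sketch of the relaxed check (hypothetical helper, not the actual
acme parsing code):

    #include <string.h>

    /* Both "pending" and "processing" are now accepted before answering
     * the challenge; some CAs report "processing" right away. */
    static int challenge_status_ok(const char *status)
    {
        return strcmp(status, "pending") == 0 ||
               strcmp(status, "processing") == 0;
    }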

Must be backported to 3.2.
2025-07-23 16:07:03 +02:00
William Lallemand
c103123c9e MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
acme_req_auth() is only a call to acme_post_as_get() now, so there's no
reason to keep the function. This patch removes it.
2025-07-23 16:07:03 +02:00
Amaury Denoyelle
08d664b17c MEDIUM: mux-quic: support backend private connection
If a backend connection is private, it should not be reused outside of
its original attached session. As such, on stream detach operation, such a
connection is never inserted into the server idle/avail list. Instead, it is
stored directly on the session.

The purpose of this commit is to implement proper handling of private
backend connections via QUIC multiplexer.
2025-07-23 15:49:51 +02:00
Amaury Denoyelle
00d668549e MINOR: mux-quic: do not reuse connection if app already shut
QUIC connection graceful closure is performed in two steps. First, the
application layer is closed. In the context of HTTP/3, this is done with
a GOAWAY frame emission, which forbids opening of new streams. Then the
whole connection is terminated via CONNECTION_CLOSE which is the final
emitted frame.

This commit ensures that when the app layer is shut for a backend
connection, this connection is removed from either the idle or avail server
tree. The objective is to prevent the stream layer from trying to reuse a
connection if no new stream can be attached to it.

New BUG_ON checks are inserted in qmux_strm_attach() and h3_attach() to
ensure that this assertion is always true.
2025-07-23 15:45:18 +02:00
Amaury Denoyelle
3217835b1d MEDIUM: mux-quic: implement be connection reuse
Implement support for QUIC connection reuse on the backend side. The
main change is done during detach stream operation. If a connection is
idle, it is inserted in the server list. Else, it is stored in the
server avail tree if there is room for more streams.

For a non-idle connection, qmux_avail_streams() is reused to detect that the
stream flow-control limit is not yet reached. If this is the case, the
connection is not inserted in the avail tree, so it cannot be reused,
even if flow-control is unblocked later by the peer. This latter point
could be improved in the future.

Note that support for QUIC private connections is still missing. The reuse
code will evolve to fully support this case.
2025-07-23 15:45:09 +02:00
Amaury Denoyelle
3bf37596ba MINOR: mux-quic: store session in QCS instance
Add a new <sess> member into QCS structure. It is used to store the
parent session of the stream on attach operation. This is only done for
backend side.

This new member will become necessary when connection reuse will be
implemented. <owner> member of connection is not suitable as it could be
set to NULL, notably after a session_add_conn() failure.

Also, a single BE conn can be shared across different session instances,
in particular when using aggressive/always reuse mode. Thus it is
necessary to link each QCS instance with its session.
2025-07-23 15:42:37 +02:00
Amaury Denoyelle
826f797bb0 MINOR: mux-quic: disable glitch on backend side
For now, the QUIC glitch limit counter is only available on the frontend
side. Thus, disable incrementation on the backend side for now. Also, the
session is only reliably available as conn <owner> on the frontend side,
so the session_add_glitch_ctr() operation is also secured.
2025-07-23 14:39:18 +02:00
Amaury Denoyelle
89329b147d MINOR: mux-quic: correctly implement backend timeout
qcc_refresh_timeout() is the function called on QUIC MUX activity. Its
purpose is to update the timeout by selecting the correct value
depending on the connection state.

Prior to this patch, backend connections were mostly ignored by the
function. However, the default server timeout was selected as a
fallback. This is incompatible with backend connection reuse.

This patch fixes the timeout applied to backend connections. Only values
specific to the frontend, namely the http-request and http-keep-alive
timeouts, are now ignored for a backend connection. Also, the fallback
timeout is only used for frontend connections.

This patch ensures that an idle backend connection won't be deleted due
to server timeout. This is necessary for proper connection reuse which
will be implemented in a future patch.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
95cb763cd6 MINOR: mux-quic: refactor timeout code
This commit is a small reorganization of the conditions used in
qcc_refresh_timeout(). Its objective is to render the code more logical
before the next patch, which will ensure that the timeout is properly set
for backend connections.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
558532fc57 BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
If a connection remains on a proxy currently disabled or stopped, a
special spread timeout is set if active close is configured. For QUIC
MUX, this is set via qcc_refresh_timeout() as with all other timeout
values.

Fix this closing timeout setting: it is now used as an override of any
other timeout that may have been chosen, if the calculated spread time is
lower than the previously selected value. This is done for backend
connections as well.

This should be backported up to 2.6 after a period of observation.
2025-07-23 14:36:48 +02:00
Amaury Denoyelle
c5bcc3a21e BUG/MINOR mux-quic: apply correctly timeout on output pending data
When no stream is attached, the mux layer is responsible for maintaining a
timeout. The first criterion is to apply the client/server timeout if there
is still data waiting for emission.

Previously, <hreq> qcc member was used to determine this state. However,
this only covers bidirectional streams. Fix this by testing if
<send_list> is empty or not. This is enough to take into account both
bidi and uni streams.

Theoretically, this should be backported to every stable version.
However, send-list is not available on 2.6 and there is no alternative
to quickly determine if there is waiting output data. Thus, it's better
to backport it up to 2.8 only.
2025-07-23 14:36:48 +02:00
William Lallemand
7139ebd676 BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
The requests that checked the status of the challenge and the retrieval
of the certificate were done using a GET.

This is working with letsencrypt and other CA providers, but it might
not work everywhere. RFC 8555 specifies that only the directory and
newNonce resources MUST work with GET requests, but everything else
must use POST-as-GET.

Must be backported to 3.2.
2025-07-23 12:42:23 +02:00
Aurelien DARRAGON
054fa05e1f MINOR: log: explicitly ignore "log-steps" on backends
"log-steps" was already ignored if directly defined in a backend section,
however, when defined in a defaults section it was inherited by all
proxies no matter their capability (i.e. including backends).

As configurations often contain more backends than frontends, this would
result in wasted memory given that the log-steps setting is only
considered on frontends.

Let's fix that by preventing the inheritance from defaults section to
anything else than frontends. Also adjust the documentation to mention
that the setting is not relevant for backends.
2025-07-22 10:22:04 +02:00
Amaury Denoyelle
e02939108e BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
Due to the introduction of smallbuf usage for HTTP/3 headers emission, the
ret variable may be used uninitialized if buffer allocation fails due to
insufficient room in the QUIC connection window.

Fix this by setting ret value to 0.

Function variable declarations are also adjusted so that the pattern is
similar to h3_resp_headers_send(). Finally, outbuf buffer is also
removed as it is now unused.

No need to backport.
2025-07-22 09:42:52 +02:00
Amaury Denoyelle
cbbbf4ea43 MINOR: h3: add traces to h3_req_headers_send()
Add traces during HTTP/3 request encoding. This operation is performed
on the backend side.
2025-07-21 16:58:12 +02:00
Amaury Denoyelle
3126cba82e MINOR: h3: use smallbuf for request header emission
Similarly to HTTP/3 response encoding, a small buffer is first allocated
for the request encoding on the backend side. If this is not sufficient,
the smallbuf is replaced by a standard buffer and encoding is restarted.

This is useful to reduce the window usage over a connection for smaller
requests.
2025-07-21 16:58:12 +02:00
Remi Tricot-Le Breton
7fd849f4e0 MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
SSL libraries like wolfSSL that don't have the clienthello callback
mechanism enabled do not need to have the traces that are only called
from the said callback.
The code added to parse the ciphers relied on a function that was not
defined in wolfSSL (SSL_CIPHER_find).
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
665b7d4fa9 MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
The contents of the extensions were only dumped with verbosity
'complete' which meant that the 'advanced' verbosity was pretty much
useless despite what its name implies (it was the same as the 'simple'
one).
The 'advanced' verbosity is now the "maximum" one, using 'complete'
would not add any extra information yet, but it leaves more room for
some actually large traces to be dumped later on (some complete
ClientHello dumps for instance).
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
8f2b787241 MINOR: ssl: Add curves in ssl traces
Dump the ClientHello curves in the SSL traces.
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
d799a1b3b2 MINOR: ssl: Add curve id to curve name table and mapping functions
The SSL libraries like OpenSSL for instance do not seem to actually
provide a public mapping between IANA defined curve IDs and curve names,
or even a mapping between curve IDs and internal NIDs.
This new table regroups all this information in a single place so that
we can convert curve names (be it SECG or NIST format) to curve IDs or
NIDs.
The previously existing 'curves2nid' function now uses the new table,
and a new 'curveid2str' one is added.
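
A hedged sketch of what such a mapping can look like; the entries below
use well-known IANA group identifiers, but the field names and the exact
table layout are illustrative, not the actual haproxy structure:

    #include <stddef.h>

    struct curve_entry {
        unsigned int iana_id;   /* TLS supported_groups identifier */
        const char  *secg_name; /* SECG name, e.g. "secp256r1" */
        const char  *nist_name; /* NIST name, e.g. "P-256", or NULL */
    };

    static const struct curve_entry curves[] = {
        { 23, "secp256r1", "P-256" },
        { 24, "secp384r1", "P-384" },
        { 25, "secp521r1", "P-521" },
        { 29, "x25519",    NULL    },
        { 30, "x448",      NULL    },
    };

    /* illustrative equivalent of the new 'curveid2str' helper */
    static const char *curveid2str(unsigned int id)
    {
        for (size_t i = 0; i < sizeof(curves) / sizeof(curves[0]); i++)
            if (curves[i].iana_id == id)
                return curves[i].secg_name;
        return NULL;
    }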
2025-07-21 16:44:50 +02:00
Remi Tricot-Le Breton
f00d9bf12d MINOR: ssl: Add ciphers in ssl traces
Decode the contents of the ClientHello ciphers extension and dump a
human readable list in the ssl traces.
2025-07-21 16:44:50 +02:00
Amaury Denoyelle
b0fe453079 BUG/MINOR: hq-interop: fix FIN transmission
Since the following patch, the app_ops layer is now responsible for
reporting that the HTX block was the last one transmitted so that FIN
STREAM can be set. This is mandatory to properly support HTTP 1xx
interim responses.

  f349df44b4
  MINOR: qmux: change API for snd_buf FIN transmission

This change was correctly implemented in the HTTP/3 code; however, an
issue appeared in the hq-interop transcoder when a zero-copy DATA
transfer is performed and the HTX buffer is swapped. If this occurred
during the transfer of the last HTX block, EOM is not detected and thus
STREAM FIN is never set.

Most of the time, the QMUX shut callback is called immediately after.
This results in the emission of a RESET_STREAM to the client, which
prevents the data transfer.

To fix this, use the same method as HTTP/3: the HTX EOM flag status is
checked before any transfer, thus preserving it even after a zero-copy.

The criticality of this bug is low as hq-interop is experimental and is
mostly used for interop testing.

This should fix github issue #3038.

This patch must be backported wherever the above one is.
2025-07-21 15:38:02 +02:00
Aurelien DARRAGON
563b4fafc2 BUG/MINOR: logs: fix log-steps extra log origins selection
Willy noticed that it was not possible to select extra log origins using
the log-steps directive. Extra origins are the ones registered using
log_orig_register(), such as http-req.

The reason was that the error path was always executed during extra log
origin matching in the log-steps parser, while it should only be
executed if no match was found.

It should be backported to 3.1.
2025-07-21 15:33:55 +02:00
Olivier Houchard
f8e9545f70 BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
Don't use the workaround to load libgcc_s on macOS. It is not needed
there, and it causes issues, as recent macOS versions dislike processes
that fork after threads were created (and the workaround creates a
temporary thread). This fixes crashes on macOS at least when using
master-worker and the system resolver.

This should fix Github issue #3035

This should be backported up to 2.8.
2025-07-21 13:56:29 +02:00
Valentine Krasnobaeva
5b45251d19 BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
Not all platforms support thread-cpu bindings, so let's put
cpu_topo_dump_summary() under USE_CPU_AFFINITY guards.

Only needs to be backported if 1cc0e023ce ("MINOR: debug: add thread-cpu
bindings info in 'show dev' output") is backported.
2025-07-21 11:25:08 +02:00
Frederic Lecaille
14d0f74052 MINOR: quic: Remove pool_head_quic_be_cc_buf pool
This patch impacts the QUIC frontends. It reverts this patch

    MINOR: quic-be: add a "CC connection" backend TX buffer pool

which adds <pool_head_quic_be_cc_buf> new pool to allocate CC (connection closed state)
TX buffers with bigger object size than the one for <pool_head_quic_cc_buf>.
Indeed the QUIC backends must be able to send at least 1200 bytes Initial packets.

From now on, both the QUIC frontends and backends use the same pool with
MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU) (1252 bytes) as object size.
2025-07-17 19:33:21 +02:00
Valentine Krasnobaeva
1cc0e023ce MINOR: debug: add thread-cpu bindings info in 'show dev' output
Add thread-cpu bindings info in 'show dev' output, as it can be useful for
debugging.
2025-07-17 19:08:13 +02:00
Valentine Krasnobaeva
ff461efc59 MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
Align titles style of debug_parse_cli_show_dev() with
cpu_dump_topology(). We will call the latter inside of
debug_parse_cli_show_dev() to show thread-cpu bindings info.
2025-07-17 19:08:06 +02:00
Valentine Krasnobaeva
9e11c852fe MINOR: cpu-topo: write thread-cpu bindings into trash buffer
Write thread-cpu bindings and cluster summary into provided trash buffer.
Like this we can call this function in any place, when this info is needed.
2025-07-17 19:07:58 +02:00
Valentine Krasnobaeva
2405283230 MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
cpu_dump_topology() prints details about each enabled CPU and a summary with
clusters info and thread-cpu bindings. The latter is often useful for
debugging and we want to add it to the 'show dev' output.

So, let's split cpu_dump_topology() in two parts: cpu_topo_debug() to print the
details about each enabled CPU; and cpu_topo_dump_summary() to print only the
summary.

In the next commit we will modify cpu_topo_dump_summary() to write into a
local trash buffer so that it can easily be called from debug_parse_cli_show_dev().
2025-07-17 19:07:46 +02:00
Valentine Krasnobaeva
254e4d59f7 BUG/MINOR: halog: exit with error when some output filters are set simultaneously
Exit with an error if multiple output filters (-ic, -srv, -st, -tc, -u*, etc.)
are used at the same time.

halog is designed to process and display output for only one filter at a time.
Using multiple filters simultaneously can cause a crash because the program is
not designed to manage multiple, separate result sets (e.g., one for
IP counts, another for URLs).

Supporting simultaneous filters would require a redesign to collect entries for
each filter in a separate ebtree. This would negatively impact performance and is
not requested for the moment. This patch prevents the crash by checking filter
combinations just after the command line parsing.
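
A hedged sketch of the kind of check performed after option parsing
(variable names below are illustrative, not halog's actual ones):

    #include <stdio.h>
    #include <stdlib.h>

    /* each variable is non-zero when the corresponding output filter was
     * requested on the command line */
    static int filter_ip, filter_srv, filter_status, filter_tc, filter_url;

    static void check_filter_combination(void)
    {
        int nb = !!filter_ip + !!filter_srv + !!filter_status +
                 !!filter_tc + !!filter_url;

        if (nb > 1) {
            fprintf(stderr, "Only one output filter may be used at a time\n");
            exit(1);
        }
    }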

This issue was reported in GitHub #3031.
This should be backported in all stable versions.
2025-07-17 17:22:37 +02:00
Frederic Lecaille
4eef300a2c BUG/MEDIUM: quic-be: CC buffer released from wrong pool
The "connection close state" TX buffer is used to build the datagram with
basically a CONNECTION_CLOSE frame to notify the peer about the connection
closure. It allows the quic_conn memory release and its replacement by a lighter
quic_cc_conn struct.

For the QUIC backend, there is a dedicated pool to build such datagrams from
bigger TX buffers. But in quic_conn_release(), the pool dedicated to the QUIC
frontends was used to release the QUIC backend TX buffers.

This patch simply adds a test about the target of the connection to release
the "connection close state" TX buffers from the correct pool.

No backport needed.
2025-07-17 11:48:41 +02:00
Willy Tarreau
b6d0ecd258 DOC: connection: explain the rules for idle/safe/avail connections
It's super difficult to find the rules that operate idle conns depending
on their idle/safe/avail/private status. Some are in lists, others not.
Some are in trees, others not. Some have a flag set, others not. This
documents the rules before the definitions in connection-t.h. It could
even be backported to help during backport sessions.
2025-07-16 18:53:57 +02:00
Frederic Lecaille
838024e07e MINOR: quic: Get rid of qc_is_listener()
Replace all calls to qc_is_listener() (resp. !qc_is_listener()) by calls to
objt_listener() (resp. objt_server()).
Remove the qc_is_listener() implementation and QUIC_FL_CONN_LISTENER, the
flag it relied on.
2025-07-16 16:42:21 +02:00
Willy Tarreau
d9701d312d DEV: gdb: add a memprofile decoder to the debug tools
"memprof_dump" will visit memprofile entries and dump them in a
synthetic format counting allocations/releases count/size, type
and calling address.
2025-07-16 15:33:33 +02:00
Christopher Faulet
4f7c26cbb3 BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
When an appctx is initialized, there is a BUG_ON() to make sure the appctx
is really initialized on the right thread, to avoid thread affinity bugs.
However, it is possible to not choose the thread when the appctx is created
and let it start on any thread. In that case, the thread affinity is set
when the appctx is initialized. So, we must take care not to trigger the
BUG_ON() in that case.

For now, we never hit the bug because the thread affinity is always set
during the appctx creation.

This patch must be backported as far as 2.8.
2025-07-16 13:47:33 +02:00
Amaury Denoyelle
88c0422e49 MINOR: h3: remove unused outbuf in h3_resp_headers_send()
Cleanup h3_resp_headers_send() by removing outbuf buffer variable which
is not necessary anymore.
2025-07-16 10:30:59 +02:00
Frederic Lecaille
1c33756f78 BUG/MINOR: quic: Wrong source address use on FreeBSD
The bug is a listener-only one, and only occurred on FreeBSD.

The FreeBSD issue has been reported here:
https://forums.freebsd.org/threads/quic-http-3-with-haproxy.98443/
where QUIC traces could reveal that sendmsg() calls lead to EINVAL
syscall errnos.

Such a similar issue could be reproduced from a FreeBSD 14-2 VM
with reg-tests/quic/retry.vtc as reg test.

As noted by Olivier, this issue could be fixed within the VM binding
the listener socket to INADDR_ANY.

That said, the symptoms are not exactly the same as the ones reported by the user.
What could be observed from such a VM is that if the first recvmsg() call
returns the datagram destination address, and if the listener's
listening address is bound to a specific address, the calls to
sendmsg() fail because of the IP_SENDSRCADDR IP option value
set by cmsg_set_saddr(). According to the FreeBSD ip(4) manual,
such an IP option must not be used when the listening socket is
bound to a specific address. It is to be noted that in a VM
the first call to recvmsg() of the first connection does not return the datagram
destination address. This leads the first quic_conn to be initialized without
a ->local_addr value. This is the value used by the IP_SENDSRCADDR
IP option. In this case, the sendmsg() calls (without IP_SENDSRCADDR)
never fail. The issue appears with the second connection.

This patch replaces the conditions for using IP_SENDSRCADDR with a call to
qc_may_use_saddr(). The latter also checks that the listener's listening
address is INADDR_ANY before allowing the use of the source address.
This is generalized to all OSes. Indeed, there is no reason to set the source
address when the listener is bound to a specific address.
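
A hedged sketch of such a condition (function and field names are
illustrative, not the actual qc_may_use_saddr() implementation):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* only use the learned source address when it is known and the
     * listening socket is bound to the wildcard address */
    static int may_use_saddr(const struct sockaddr_storage *local,
                             const struct sockaddr_storage *bind_addr)
    {
        if (local->ss_family == AF_UNSPEC)
            return 0; /* destination address never learned from recvmsg() */

        if (bind_addr->ss_family == AF_INET) {
            const struct sockaddr_in *in = (const struct sockaddr_in *)bind_addr;
            return in->sin_addr.s_addr == htonl(INADDR_ANY);
        }
        if (bind_addr->ss_family == AF_INET6) {
            const struct sockaddr_in6 *in6 = (const struct sockaddr_in6 *)bind_addr;
            return memcmp(&in6->sin6_addr, &in6addr_any, sizeof(in6->sin6_addr)) == 0;
        }
        return 0;
    }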

Must be backported as far as 2.8.
2025-07-16 10:17:54 +02:00
Amaury Denoyelle
63586a8ab4 BUG/MINOR: h3: properly handle interim response on BE side
On backend side, H3 layer is responsible to decode a HTTP/3 response
into an HTX message. Multiple responses may be received on a single
stream with interim status codes prior to the final one.

h3_resp_headers_to_htx() is the function used solely on backend side
responsible for H3 response to HTX transcoding. This patch extends it to
be able to properly support interim responses. When such a response is
received, the new flag H3_SF_RECV_INTERIM is set. This is converted to
QMUX qcs flag QC_SF_EOI_SUSPENDED.

The objective of this latter flag is to prevent stream EOI from being
reported during the stream rcv_buf callback, even if the HTX message
contains EOM and is empty. QC_SF_EOI_SUSPENDED will be cleared when the
final response is finally converted, which unblocks stream EOI
notification for the next rcv_buf invocations. Note however that HTX EOM
is untouched: it is always set for both interim and final response
reception.

As a minor adjustment, HTX_SL_F_BODYLESS is always set for interim
responses.

Contrary to frontend interim response handling, a flag is necessary on
QMUX layer. This is because H3 to HTX transcoding and rcv_buf callback
are two distinct operations, called under different context (MUX vs
stream tasklet).

Also note that the H3 layer has two distinct flags for interim response
handling, one only used as a server (FE side) and the other as a client
(BE side). Two distinct flags were preferred as this is considered less
error-prone than a single unified flag, which would require always
checking the proxy side to ensure it is relevant or not.

No need to backport.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
e7b3a69c59 BUG/MEDIUM: h3: handle interim response properly on FE side
On the frontend side, the HTTP/3 layer is responsible for transcoding an
HTX response message into an HTTP/3 HEADERS frame. This operation is
handled via h3_resp_headers_send().

Prior to this patch, if HTX EOM was encountered in the HTX message after
response transcoding, <fin> was reported to the QMUX layer. This in turn
causes the FIN stream bit to be set when the response is emitted.
However, this is not correct as a single HTX response can be constituted
of several interim messages, each delimited by an EOM block.

Most of the time, this bug will cause the client to close the connection
as it is invalid to receive an interim response with FIN bit set.

Fix this by now properly differentiating interim and final responses.
During interim response transcoding, the new flag H3_SF_SENT_INTERIM
will be set, which will prevent <fin> from being reported. Thus, <fin>
will only be notified for the final response.

This must be backported up to 2.6. Note that it relies on the previous
patch which also must be taken.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
f349df44b4 MINOR: qmux: change API for snd_buf FIN transmission
Previous patches have fixed interim response encoding via
h3_resp_headers_send(). However, it is still necessary to adjust the h3
layer state machine so that several successive HTTP responses are
accepted for a single stream.

Prior to this, QMUX was responsible for deciding that the final HTX
message was encoded so that the stream FIN could be emitted. However,
with interim responses, the MUX is in fact unable to properly determine
this. As such, this is the responsibility of the application protocol
layer. To reflect this, the app_ops snd_buf callback is modified so that
a new output argument <fin> is added to it.

Note that for now this commit does not bring any functional change.
However, it will be necessary for the following patch. As such, it
should be backported prior to it to every version as necessary.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
d8b34459b5 BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
On frontend side, H3 layer transcodes HTX status code into HTTP/3
HEADERS frame. This is done by calling qpack_encode_int_status().

Prior to this patch, the latter function was also responsible for
rejecting an invalid value, which guaranteed that only valid codes were
encoded (values between 100 and 999). However, this is not practical as
it is impossible to differentiate between an invalid code error and
buffer room exhaustion.

Change this so that the HTTP/3 layer now first ensures that the HTX
status code is valid. The stream is closed with H3_INTERNAL_ERROR if an
invalid value is present. Thus, qpack_encode_int_status() will only
report an error due to buffer room exhaustion. If a small buffer is
used, a standard buffer will be reallocated, which should be sufficient
to encode the response.

The impact of this bug is minimal. The main benefit of the fix is code
clarity, while also removing an unnecessary realloc when confronted with
an invalid HTTP code.

This should be backported at least up to 3.1. Prior to it, smallbuf
mechanism isn't present, hence the impact of this patch is less
important. However, it may still be backported to older versions, which
should facilitate picking patches for HTTP 1xx interim response support.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
d59bdfb8ec BUG/MINOR: h3: properly realloc buffer after interim response encoding
The previous commit fixed the encoding of several successive HTTP
response messages when interim status codes are first reported. However,
h3_resp_headers_send() was still unable to interrupt encoding if the
output buffer room was not sufficient. This case is likely because small
buffers are used for header encoding.

This commit fixes this situation. If the output buffer is not empty
prior to response encoding, this means that a previous interim response
message was already encoded before. In this case, and if the remaining
space is not sufficient, use the buffer release mechanism: this allows
response encoding to be restarted with a new buffer. This process is
already used for DATA and trailers encoding.

This must be backported up to 2.6. However, note that buffer release
mechanism is not present for version 2.8 and lower. In this case, qcs
flag QC_SF_BLK_MROOM should be enough as a replacement.
2025-07-15 18:39:23 +02:00
Amaury Denoyelle
1290fb731d BUG/MEDIUM: h3: do not overwrite interim with final response
An HTTP response may contain several interim response messages (1xx
status) prior to a final response message (all other status codes). This
may cause issues with h3_resp_headers_send(), called for response
encoding, which assumes that it is only called one time per stream, most
notably during output buffer handling.

This commit fixes output buffer handling when h3_resp_headers_send() is
called multiple times due to an interim response. Prior to it, interim
response was overwritten with newer response message. Most of the time,
this resulted in error for the client due to QPACK decoding failure.
This is now fixed so that each response is encoded one after the other.

Note that if the encoding of several responses is bigger than the output
buffer, an error is reported. This can definitely occur as small buffers
are used during header encoding. This situation will be improved by the
next patch.

This must be backported up to 2.6.
2025-07-15 18:39:23 +02:00
Willy Tarreau
110625bdb2 MINOR: debug: report haproxy and operating system info in panic dumps
The goal is to help figure out the OS version (kernel and userland), any
virtualization/containers, and the haproxy version and build features.
Sometimes even reporters themselves can be mistaken about the running
version or environment. Also, printing this at the top helps draw a
visual delimitation between warnings and the panic. Now we get something
like this:

  PANIC! Thread 1 is about to kill the process.

  HAProxy info:
    version: 3.3-dev3-c863c0-18
    features: +51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY (...)

  Operating system info:
    virtual machine: no
    container: no
    kernel: Linux 6.1.131 #1 SMP PREEMPT_DYNAMIC Fri Mar 14 01:04:55 CET 2025 x86_64
    userland: Slackware 15.0 x86_64

  * Thread 1 : id=0x7f615a8775c0 act=1 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
        1/1    stuck=0 prof=0 harmless=0 isolated=0
               cpu_ns: poll=1835010197 now=1835066102 diff=55905
               (...)
2025-07-15 17:18:29 +02:00
Willy Tarreau
abcc73830f MEDIUM: proxy: register a post-section cleanup function
For listen/frontend/backend, we now want to be able to clean up the
default-server directive that's no longer used past the end of the
section. For this we register a post-section function and perform the
cleanup there.
2025-07-15 10:40:17 +02:00
Willy Tarreau
49a619acae MEDIUM: proxy: no longer allocate the default-server entry by default
The default-server entry used to always be allocated. Now we'll postpone
its allocation for the first time we need it, i.e. during a "default-server"
directive, or when inheriting a defaults section which has one. The memory
savings are significant, on a large configuration with 100k backends and
no default-server directive, the memory usage dropped from 800MB RSS to
420MB (380 MB saved). It should be possible to also address configs using
default-server by releasing this entry when leaving the proxy section,
which is not done yet.
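
A minimal sketch of the lazy-allocation idea with stand-in types (the
real struct proxy/server and helper names differ):

    #include <stdlib.h>

    struct server { char settings[3800]; };     /* stand-in for the ~3.8kB struct */
    struct proxy  { struct server *defsrv; };   /* now a pointer, not an embedded copy */

    /* allocate the default-server only when a "default-server" directive
     * or defaults inheritance actually needs it */
    static struct server *proxy_get_defsrv(struct proxy *px)
    {
        if (!px->defsrv)
            px->defsrv = calloc(1, sizeof(*px->defsrv));
        return px->defsrv;
    }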
2025-07-15 10:39:44 +02:00
Willy Tarreau
76828d4120 MINOR: proxy: add checks for defsrv's validity
Now we only copy the default server's settings if such a default server
exists, otherwise we only initialize it. At the moment it always exists.

The change is mostly performed in srv_settings_cpy() since that's where
each caller passes through, and there's no point duplicating that test
everywhere.
2025-07-15 10:36:58 +02:00
Willy Tarreau
4ac28f07d0 MEDIUM: proxy: take the defsrv out of the struct proxy
The server struct has gone huge over time (~3.8kB), and having a copy
of it in the defsrv section of the struct proxy costs a lot of RAM,
that is not needed anymore at run time.

This patch replaces this struct with a dynamically allocated one. The
field is allocated and initialized during alloc_new_proxy() and is
freed when the proxy is destroyed for now. But the goal will be to
support freeing it after parsing the section.
2025-07-15 10:34:18 +02:00
Willy Tarreau
2414c5ce2f CLEANUP: server: be sure never to compare src against a non-existing defsrv
The test in srv_ssl_settings_cpy() comparing src to the server's proxy's
default server does work but it's a subtle trap. Indeed, no check is made
on srv->proxy to be valid, and this only works because the compiler is
comparing pointer offsets. During the boot, it's common to have NULL here
in srv->proxy and of course in this case srv does not match that value,
which is NULL plus epsilon. But when trying to turn defsrv into a dynamic
pointer instead, the code is forced to dereference this NULL
srv->proxy and dies during init.

Let's always add the null check for srv->proxy before the test to avoid
this situation.

No backport is needed since the problem cannot happen yet.
2025-07-15 10:33:08 +02:00
Willy Tarreau
36f339d2fe CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
This makes this function a bit less of a mess by no longer manipulating
the low-level server address nodes nor the proxy lock.
2025-07-15 10:30:28 +02:00
Willy Tarreau
616c10f608 CLEANUP: server: add server_find_by_addr()
Server lookup by address requires locking and manipulation of the tree
from user code. Let's provide server_find_by_addr() which does that for
us.
2025-07-15 10:30:28 +02:00
Willy Tarreau
fda04994d9 CLEANUP: server: simplify server_find_by_id()
At a few places we're seeing some open-coding of the same function, likely
because it looks overkill for what it's supposed to do, due to extraneous
tests that are not needed (e.g. check of the backend's PR_CAP_BE etc).
Let's just remove all these superfluous tests and inline it so that it
feels more suitable for use everywhere it's needed.
2025-07-15 10:30:28 +02:00
Willy Tarreau
c8f0b69587 CLEANUP: stream: lookup server ID using standard functions
The server lookup in sticking_rule_find_target() uses an open-coded tree
search while we have a function for this server_find_by_id(). In addition,
due to the way it's coded, the stick-table lock also covers the server
lookup by accident instead of being released earlier. This is not a real
problem though since such feature is rarely used nowadays.

Let's clean all this stuff by first retrieving the ID under the lock and
then looking up the corresponding server.
2025-07-15 10:30:28 +02:00
Willy Tarreau
a3443db2eb CLEANUP: cfgparse: lookup proxy ID using existing functions
The code used to detect proxy id conflicts uses an open-coded lookup
in the ID tree which is not necessary since we already have functions
for this. Let's switch to that instead.
2025-07-15 10:30:28 +02:00
Willy Tarreau
31526f73e6 CLEANUP: server: use server_find_by_name() where relevant
Instead of open-coding a tree lookup, in sticking rules and server_find(),
let's just rely on server_find_by_name() which now does exactly the same.
2025-07-15 10:30:28 +02:00
Willy Tarreau
61acd15ea8 CLEANUP: server: rename findserver() to server_find_by_name()
Now it's more logical and matches what is done in the rest of these
functions. server_find() now relies on it.
2025-07-15 10:30:28 +02:00
Willy Tarreau
6ad9285796 CLEANUP: server: rename server_find_by_name() to server_find()
This function doesn't just look at the name but also the ID when the
argument starts with a '#'. So the name is not correct and explains
why this function is not always used when the name only is needed,
and why the list-based findserver() is used instead. So let's just
call the function "server_find()", and rename its generation-id based
cousin "server_find_unique()".
2025-07-15 10:30:28 +02:00
Willy Tarreau
5e78ab33cd MINOR: server: use the tree to look up the server name in findserver()
Let's just use the tree-based lookup instead of walking through the list.
This function is used to find duplicates in "track" statements and a few
such places, so it's important not to waste too much time on large setups.
2025-07-15 10:30:27 +02:00
Willy Tarreau
12a6a3bb3f REORG: server: move findserver() from proxy.c to server.c
The reason this function was overlooked is that it had mostly equivalent
ones in server.c; let's move them together.
2025-07-15 10:30:27 +02:00
Willy Tarreau
732cd0dfa2 CLEANUP: server: do not check for duplicates anymore in findserver()
findserver() used to check for duplicate server names. These are no
longer accepted in 3.3 so let's get rid of that test and simplify the
code. Note that the function still only uses the list instead of the
tree.
2025-07-15 10:30:27 +02:00
Willy Tarreau
d4d72e2303 [RELEASE] Released version 3.3-dev3
Released version 3.3-dev3 with the following main changes :
    - BUG/MINOR: quic-be: Wrong retry_source_connection_id check
    - MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
    - MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
    - MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
    - MINOR: mailers: warn if mailers are configured but not actually used
    - BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
    - MEDIUM: server: add and use a separate last_change variable for internal use
    - MEDIUM: proxy: add and use a separate last_change variable for internal use
    - MINOR: counters: rename last_change counter to last_state_change
    - MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
    - BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
    - BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
    - BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
    - DOC: Fix 'jwt_verify' converter doc
    - MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
    - MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
    - MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
    - MINOR: ssl: Allow 'commit ssl cert' with no privkey
    - MINOR: ssl: Prevent delete on certificate used by jwt_verify
    - REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
    - REGTESTS: jwt: Test update of certificate used in jwt_verify
    - DOC: 'jwt_verify' converter now supports certificates
    - REGTESTS: restrict execution to a single thread group
    - MINOR: ssl: Introduce new smp_client_hello_parse() function
    - MEDIUM: stats: add persistent state to typed output format
    - BUG/MINOR: httpclient: wrongly named httpproxy flag
    - MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
    - MEDIUM: httpclient: split the CLI from the actual httpclient API
    - MEDIUM: httpclient: implement a way to use directly htx data
    - MINOR: httpclient/cli: add --htx option
    - BUILD: dev/phash: remove the accidentally committed a.out file
    - BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
    - BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
    - DOC: deviceatlas build clarifications
    - BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
    - MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
    - BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
    - BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
    - BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
    - MEDIUM: httpclient: add a Content-Length when the payload is known
    - CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
    - MINOR: pattern: add a counter of added/freed patterns
    - CI: set DEBUG_STRICT=2 for coverity scan
    - CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
    - CI: github: add an OpenSSL 3.5.0 job
    - CI: github: update the stable CI to ubuntu-24.04
    - BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
    - CI: github: update to OpenSSL 3.5.1
    - BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
    - BUG/MINOR: quic-be: Malformed coalesced Initial packets
    - MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
    - MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
    - MINOR: quic-be: Set the backend alpn if not set by conf
    - MINOR: quic-be: TLS version restriction to 1.3
    - MINOR: cfgparse: enforce QUIC MUX compat on server line
    - MINOR: server: support QUIC for dynamic servers
    - CI: github: skip a ssl library version when latest is already in the list
    - MEDIUM: resolvers: switch dns-accept-family to "auto" by default
    - BUG/MINOR: resolvers: don't lower the case of binary DNS format
    - MINOR: resolvers: do not duplicate the hostname_dn field
    - MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
    - BUG/MINOR: listener: really assign distinct IDs to shards
    - MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
    - BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
    - REGTESTS: use two haproxy instances to distinguish the QUIC traces
    - BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
    - BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
    - BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
    - BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
    - BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
    - BUG/MINOR: http-client: Reject any 101-switching-protocols response
    - BUG/MEDIUM: http-client: Drain the request if an early response is received
    - BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
    - BUG/MINOR: h3: fix https scheme request encoding for BE side
    - MINOR: h1-htx: Add function to format an HTX message in its H1 representation
    - BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
    - BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
    - CLEANUP: assorted typo fixes in the code, commits and doc
    - BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
    - MINOR: debug: add distro name and version in postmortem
2025-07-11 16:45:50 +02:00
Valentine Krasnobaeva
0c63883be1 MINOR: debug: add distro name and version in postmortem
Since 2012, systemd-compliant distributions contain an /etc/os-release
file. This file has a standardized format; see details at
https://www.freedesktop.org/software/systemd/man/latest/os-release.html.

Let's read it in feed_post_mortem_linux() to gather more info about the
distribution.
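
A hedged sketch of reading that file (the actual feed_post_mortem_linux()
code may extract different keys and use haproxy's own helpers):

    #include <stdio.h>
    #include <string.h>

    /* fetch PRETTY_NAME from /etc/os-release, e.g. "Slackware 15.0" */
    static int read_distro(char *out, size_t outlen)
    {
        FILE *f = fopen("/etc/os-release", "r");
        char line[256];
        int found = 0;

        if (!f)
            return 0;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "PRETTY_NAME=", 12) == 0) {
                char *v = line + 12;

                v[strcspn(v, "\n")] = 0;            /* strip trailing newline */
                if (*v == '"') {                    /* strip surrounding quotes */
                    v++;
                    v[strcspn(v, "\"")] = 0;
                }
                snprintf(out, outlen, "%s", v);
                found = 1;
                break;
            }
        }
        fclose(f);
        return found;
    }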

(cherry picked from commit f1594c41368baf8f60737b229e4359fa7e1289a9)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2025-07-11 11:48:19 +02:00
Ilia Shipitsin
1888991e12 BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
QuicTLS in its master branch has migrated to CMake, so let's adapt the
script to it. The previous OpenSSL+QuicTLS patch is built as usual.
2025-07-11 05:04:31 +02:00
Ilia Shipitsin
0ee3d739b8 CLEANUP: assorted typo fixes in the code, commits and doc
Corrected various spelling and phrasing errors to improve clarity and consistency.
2025-07-10 19:49:48 +02:00
Christopher Faulet
516dfe16ff BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
The regression was introduced by commit 187ae28 ("MINOR: h1-htx: Add
function to format an HTX message in its H1 representation"). We must be
sure the flags variable is initialized in the h1_format_htx_msg() function.

This patch must be backported with the commit above.
2025-07-10 14:10:42 +02:00
Christopher Faulet
d252ec2beb BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
The H1 multiplexer is able to produce some errors on its own to report early
errors, before the stream is created. In that case, the error files of the
proxy were tested to detect empty files (or /dev/null) but they were not
used to produce the error itself.

But the documentation states that configured error files are used in all
cases. And in fact, it is not really a problem to use these files. We must
just format a full HTX message. Thanks to the previous patch, it is now
possible.

This patch should fix the issue #3032. It should be backported to 3.2. For
older versions, it must be discussed but it should be quite easy to do.
2025-07-10 10:29:49 +02:00
Christopher Faulet
187ae28cf4 MINOR: h1-htx: Add function to format an HTX message in its H1 representation
The function h1_format_htx_msg() can now be used to convert a valid HTX
message into its H1 representation. No validity test is performed, so the
HTX message must be valid. Only trailers are silently ignored if the
message is not chunked. In addition, the destination buffer must be
empty. 1XX interim responses should be supported. But again, there are no
validity tests.
2025-07-10 10:29:49 +02:00
Amaury Denoyelle
378c182192 BUG/MINOR: h3: fix https scheme request encoding for BE side
An HTTP/3 request must contain the :scheme pseudo-header. Currently, only
the "https" value is expected due to the QUIC transport layer in use.

However, https value is incorrectly encoded due to a QPACK index value
mismatch in qpack_encode_scheme(). Fix it to ensure that scheme is now
properly set for HTTP/3 requests on the backend side.

No need to backport this.
2025-07-09 17:41:34 +02:00
Christopher Faulet
0b97bf36fa BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
When we leave the I/O handler with an unfinished request, we must report the
applet has more data to deliver. Otherwise, when the channel request buffer
is emptied, the http-client applet is not always woken up to forward the
remaining request data.

This issue was probably revealed by commit "BUG/MEDIUM: http-client: Don't
wake http-client applet if nothing was xferred". It is only an issue with
large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6 with the commit above. But on
older versions, the applet API may differ. So be careful.
2025-07-09 16:27:24 +02:00
Christopher Faulet
25b0625d5c BUG/MEDIUM: http-client: Drain the request if an early response is received
When a large request is sent, it is possible to receive a response before
the end of the request. It is valid from an HTTP perspective but it is an
issue with the current design of the http-client. Indeed, the request and
the response are handled sequentially. So the response will be blocked,
waiting for the end of the request. Most of the time, it is not an issue,
except when the request transfer is blocked. In that case, the applet is
blocked.

With the current API, it is not possible to handle an early response and
continue the request transfer. So, this case cannot be handled. In that
case, it seems reasonable to drain the request if a response is received.
This way, the request transfer, from the caller's point of view, is never
blocked and the response can be properly processed.

To do so, the action flag HTTPCLIENT_FA_DRAIN_REQ is added to the
http-client. When it is set, the request payload is just dropped. In that
case, we take care to not report the end of input to properly report the
request was truncated, especially in logs.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
8ba754108d BUG/MINOR: http-client: Reject any 101-switching-protocols response
Protocol upgrades are not supported by the http-client, so report an
error if a 101-switching-protocols response is received. Of course, it is
unexpected because the API is not designed to support upgrades. But it is
better to properly handle this case.

This patch could be backported as far as 2.6. It depends on the commit
"BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode".
2025-07-09 16:27:24 +02:00
Christopher Faulet
9d10be33ae BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
When the response is re-formatted as a raw message, the 1XX interim
responses must be skipped. Otherwise, the information of the first interim
response (status line and headers) will be saved and that of the final
response will be dropped.

Note that for now, in HTX-mode, the interim messages are removed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
4bdb2e5a26 BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
When the htx_to_buf() function is called, if the HTX message is empty, the
buffer is reset. So HTX flags must not be tested afterwards because the
info may be lost.

So now, we take care to test HTX_FL_EOM flag before calling htx_to_buf().

This patch must be backported as far as 2.8.
2025-07-09 16:27:24 +02:00
Christopher Faulet
e4a0d40c62 BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
When the request payload cannot be xferred to the channel because its
buffer is full, we must request more room by calling sc_need_room(). It is
important to be sure the httpclient applet will not be woken up in a loop
to push more data while it is not possible.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6. Note that on 2.6,
sc_need_room() only takes one argument.
2025-07-09 16:27:24 +02:00
Christopher Faulet
d9ca8f6b71 BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
When HTX blocks from the request are transferred into the channel buffer,
the return value of the htx_xfer_blks() function must not be used to
increment the channel input value because metadata are counted here while
they are not part of the input data. Because of this bug, it is possible
to forward more data than those present in the channel buffer.

Instead, we look at the input data before and after the transfer and the
difference is added.

It is only an issue with large POSTs, when the payload is streamed.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Christopher Faulet
fffdac42df BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
When data are transferred to or from the http-client, the applet is
systematically woken up, even when no data are transferred. This could
lead to needless wakeups. When called from a lua script, if data are
blocked for a while, this leads to a wakeup ping-pong loop where the
http-client applet is woken up by the lua script and wakes the script
back.

To fix the issue, in httpclient_req_xfer() and httpclient_res_xfer()
functions, we now take care to not wake the http-client applet up when no
data are transferred.

This patch must be backported as far as 2.6.
2025-07-09 16:27:24 +02:00
Frederic Lecaille
479c9fb067 REGTESTS: use two haproxy instances to distinguish the QUIC traces
The aim of this patch is to distinguish the QUIC traces between the QUIC
frontend and backend parts. Two haproxy instances are created. The c(1|2)
http clients connect to ha1 with TCP frontends and QUIC backends. ha2
embeds two QUIC listeners with s1 as TCP backend. When the traces are
activated, they are dumped to stderr. Helpfully, they are prefixed by the
haproxy instance name (h1 or h2), which is very useful to identify the
QUIC instances.
2025-07-09 16:01:02 +02:00
Frederic Lecaille
45ac235baa BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
Revert this patch, which is no longer useful since OpenSSL 3.5.1, to remove
the QUIC server callback restoration after an SSL context switch:

    MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset

It was required for 3.5.0. That said, there was no CI for OpenSSL 3.5 at the date
of this commit. The CI recently revealed that the QUIC server side could crash
during QUIC reg tests just after having restored the callbacks as implemented by
the commit above.

Also revert this commit, which is no longer useful because it arrived with
the commit above:

	BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.

Must be backported to 3.2.
2025-07-09 16:01:02 +02:00
Frederic Lecaille
c01eb1040e MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
The QUIC listener part was impacted by the 3.5.0 OpenSSL new QUIC API with several
issues which have been fixed by 3.5.1.

Add a #error to prevent such OpenSSL 3.5 new QUIC API use with version below 3.5.1.

Must be backported to 3.2.
2025-07-09 16:01:02 +02:00
Willy Tarreau
dd49f1ee62 BUG/MINOR: listener: really assign distinct IDs to shards
A fix was made in 3.0 for the case where sharded listeners were using
the same ID with commit 0db8b6034d ("BUG/MINOR: listener: always assign
distinct IDs to shards"). However, the fix is incorrect. By checking the
ID of the temporary node instead of the kept one in
bind_complete_thread_setup(), it ends up never inserting the used nodes
at this point, thus not reserving them. The side effect is that assigning
too close IDs to subsequent listeners results in the same ID still being
assigned twice since it was not reserved. Example:

   global
       nbthread 20

   frontend foo
       bind :8000 shards by-thread id 10
       bind :8010 shards by-thread id 20

The first one will start a series from 10 to 29 and the second one a
series from 20 to 39. But 20 not being inserted when creating the shards,
it will remain available for the post-parsing phase that assigns all
unassigned IDs by filling holes, and two listeners will have ID 20.

By checking the correct node, the problem disappears. The patch above
was marked for backporting to 2.6, so this fix should be backported that
far as well.
2025-07-09 15:52:33 +02:00
Christopher Faulet
adba8ffb49 MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
"HAVE_TCP_MD5SIG" feature is now registered if TCP MD5 signature is
supported. This will help the feature detection in the reg-test script
dedicated to this feature.
2025-07-09 09:51:24 +02:00
Willy Tarreau
96da670cd7 MINOR: resolvers: do not duplicate the hostname_dn field
The hostdn.key field in the server contains a pure copy of the hostname_dn
since commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers and
srv request or used SRV record") which wanted to lowercase it. Since it's
not necessary, let's drop this useless copy. In addition, the return from
strdup() was not tested, so it could theoretically crash the process under
heavy memory contention.
2025-07-08 07:54:45 +02:00
Willy Tarreau
95cf518bfa BUG/MINOR: resolvers: don't lower the case of binary DNS format
The server's "hostname_dn" is in Domain Name format, not a pure string, as
converted by resolv_str_to_dn_label(). It is made of lower-case string
components delimited by binary lengths, e.g. <0x03>www<0x07>haproxy<0x03)org.
As such it must not be lowercased again in srv_state_srv_update(), because
1) it's useless on the name components since already done, and 2) because
it would replace component lengths 97 and above by 32-char shorter ones.
Granted, not many domain names have that large components so the risk is
very low but the operation is always wrong anyway. This was brought in
2.5 by commit 3406766d57 ("MEDIUM: resolvers: add a ref between servers
and srv request or used SRV record").
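
For reference, a hedged sketch of the string-to-DN-label conversion (the
real resolv_str_to_dn_label() differs in its error handling and output
sizing):

    #include <ctype.h>
    #include <string.h>

    /* "www.haproxy.org" -> "\x03www\x07haproxy\x03org", lower-cased;
     * returns the encoded length or -1 on error */
    static int str_to_dn_label(const char *str, char *dn, size_t dn_len)
    {
        size_t len = strlen(str);
        size_t label_start = 0;

        if (dn_len < len + 1)
            return -1;
        for (size_t i = 0; i <= len; i++) {
            if (str[i] == '.' || str[i] == '\0') {
                size_t label_len = i - label_start;

                if (label_len == 0 || label_len > 63)
                    return -1;
                dn[label_start] = (char)label_len;
                label_start = i + 1;
            } else {
                dn[i + 1] = (char)tolower((unsigned char)str[i]);
            }
        }
        return (int)(len + 1);
    }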

In the same vein, let's fix the confusing strcasecmp() that are applied
to this binary format, and use memcmp() instead. Here there's basically
no risk to incorrectly match the wrong record, but that test alone is
confusing enough to provoke the existence of the bug above.

Finally, let's update the comment for that field to mention that it's
in this format and already lower-cased.

Better not backport this, the risk of facing this bug is almost zero, and
every time we touch such files something breaks for bad reasons.
2025-07-08 07:54:45 +02:00
Willy Tarreau
54d36f3e65 MEDIUM: resolvers: switch dns-accept-family to "auto" by default
As notified in the 3.2 announcement [1], dns-accept-family needed to switch
to "auto" by default in 3.3. This is now done.

[1] https://www.mail-archive.com/haproxy@formilux.org/msg45917.html
2025-07-08 07:54:45 +02:00
William Lallemand
9e78859fb3 CI: github: skip a ssl library version when latest is already in the list
Skip the job for "latest" libssl version, when this version is the same
as a one already in the list.

This avoid having 2 jobs for OpenSSL 3.5.1 since no new dev version are
available for now and 3.5.1 is already in the list.
2025-07-07 19:46:07 +02:00
Amaury Denoyelle
42365f53e8 MINOR: server: support QUIC for dynamic servers
To properly support QUIC for dynamic servers, it is required to extend
the 'add server' CLI handler:
* ensure conformity between the server address and proto
* automatically set proto to QUIC if not specified
* the prepare_srv callback must be called to initialize the required SSL context

Prior to this patch, crashes may occur when trying to use QUIC with
dynamic servers.

Also, destroy_srv callback must be called when a dynamic server is
deallocated. This ensures that there is no memory leak due to SSL
context.

No need to backport.
2025-07-07 14:29:29 +02:00
Amaury Denoyelle
626cfd85aa MINOR: cfgparse: enforce QUIC MUX compat on server line
Add post-parsing checks to control server line conformity regarding QUIC,
both on the server address and the MUX protocol. An error is reported in
the following cases:
* proto quic is explicitly specified but the server address does not
  use a quic4/quic6 prefix
* another proto is explicitly specified but the server address uses a
  quic4/quic6 prefix
2025-07-07 14:29:24 +02:00
Frederic Lecaille
e76f1ad171 MINOR: quic-be: TLS version restriction to 1.3
This patch skips the TLS version settings. They have the side effect of
adding all the TLS version extensions to the ClientHello message (TLS 1.0
to TLS 1.3), while QUIC supports only TLS 1.3.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
93a94ba87b MINOR: quic-be: Set the backend alpn if not set by conf
Simply set the alpn string to "h3,hq_interop" if there is no "alpn" setting for
QUIC backends.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
a9b5a2eb90 MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
First simple VTC file for QUIC reg tests. Two listeners are configured, one
with Retry enabled and the other without. Two clients simply try to connect
to these listeners to make a basic H3 request.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
5a87f4673a MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
Make the server line parsing fail when a QUIC backend is configured if
haproxy is built to use the OpenSSL stack compatibility module. The latter
does not support the QUIC client part.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
87ada46f38 BUG/MINOR: quic-be: Malformed coalesced Initial packets
This bug fix completes this patch which was not sufficient:

   MINOR: quic-be: Allow sending 1200 bytes Initial datagrams

This patch could not allow the build of well-formed Initial packets
coalesced with other (Handshake) packets. Indeed, the <padding> parameter
passed to qc_build_pkt() is deduced from a first <padding> value and must
be set to 1 for the last encryption level. As a client, the last
encryption level is always the Handshake encryption level. But <padding>
was always set to 1 for a QUIC client, leading the first Initial packet to
be malformed because it was considered as the second one in the same datagram.

So, this patch sets the <padding> value passed to qc_build_pkt() to 1 only
when there is no last encryption level at all, to allow the build of
Initial-only packets (not coalesced), or when it has frames to send
(coalesced packets).

No need to backport.
2025-07-07 14:13:02 +02:00
Frederic Lecaille
6aebca7f2c BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
This bug impacts both QUIC backends and frontends with OpenSSL 3.5 as QUIC API.

The connections to a haproxy QUIC listener from a haproxy QUIC backend could not
work at all without HelloRetryRequest TLS messages emitted by the backend
asking the QUIC client to restart the handshake followed by TLS alerts:

    conn. @(nil) OpenSSL error[0xa000098] read_state_machine: excessive message size

Furthermore, the Initial CRYPTO data sent by the client (ClientHello TLS
message) were big (about two 1252-byte packets). After analyzing the
packets, a key_share extension with <unknown> as value was long (more than
1KB). This extension is related to the groups but does not belong to the
groups supported by QUIC.

That said such connections could work with ngtcp2 as backend built against the same
OSSL TLS stack API but with a HelloRetryRequest.

ngtcp2 always sets the default QUIC cipher suites and groups for all the
stacks it supports, as implemented by this patch.

So this patch configures both the QUIC backend and frontend cipher suites
and groups by calling SSL_CTX_set_ciphersuites() and
SSL_CTX_set1_groups_list() with the correct arguments, except for
SSL_CTX_set1_groups_list() which fails with QUIC TLS for an unknown reason
at this time.
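
As an illustration, a hedged sketch of such an initialization (the exact
suite and group strings used by haproxy/ngtcp2 are assumptions here):

    #include <openssl/ssl.h>

    /* restrict the context to the TLS 1.3 suites and groups QUIC expects */
    static int quic_tls13_init(SSL_CTX *ctx)
    {
        if (!SSL_CTX_set_ciphersuites(ctx,
                                      "TLS_AES_128_GCM_SHA256:"
                                      "TLS_AES_256_GCM_SHA384:"
                                      "TLS_CHACHA20_POLY1305_SHA256"))
            return 0;
        if (!SSL_CTX_set1_groups_list(ctx, "X25519:P-256:P-384:P-521"))
            return 0;
        return 1;
    }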

The call to SSL_CTX_set_options() is useless in ssl_quic_initial_ctx() for
QUIC clients. We rely on ssl_sock_prepare_srv_ssl_ctx() to set them from
now on.

This patch is effective for all the supported stacks, without impact for
AWS-LC and QUIC TLS, and fixes the connections for haproxy QUIC frontends
and backends when built against the OpenSSL 3.5 QUIC API.

A new define HAVE_OPENSSL_QUICTLS has been added to openssl-compat.h to distinguish
the QUIC TLS stack.

Must be backported to 3.2.
2025-07-07 14:13:02 +02:00
William Lallemand
0efbe6da88 CI: github: update to OpenSSL 3.5.1
Update the OpenSSL 3.5 job to 3.5.1.

This must be backported to 3.2.
2025-07-07 13:58:38 +02:00
Frederic Lecaille
fb0324eb09 BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
This bug arrived with this commit:

    MINOR: quic: OpenSSL 3.5 internal QUIC custom extension for transport parameters reset

To make QUIC connections succeed with the OpenSSL 3.5 API, a call to
quic_ssl_set_tls_cbs() was needed from several callbacks which call
SSL_set_SSL_CTX(). This has the side effect of setting the QUIC callbacks
used by the OpenSSL 3.5 API.

But quic_ssl_set_tls_cbs() was also called for TCP sessions, leading the
SSL stack to run QUIC code when QUIC support is enabled.

To fix this, simply ignore the TCP connections by inspecting the
<ssl_qc_app_data_index> index value, which is NULL for such connections.

Must be backported to 3.2.
2025-07-07 12:01:22 +02:00
William Lallemand
d0bd0595da CI: github: update the stable CI to ubuntu-24.04
Update the stable CI to ubuntu-24.04.

Must be backported to 3.2.
2025-07-07 09:29:33 +02:00
William Lallemand
b6fec27ef6 CI: github: add an OpenSSL 3.5.0 job
Add an OpenSSL 3.5.0 job to test USE_QUIC.

This must be backported to 3.2.
2025-07-07 09:27:17 +02:00
Ilia Shipitsin
d8c867a1e6 CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
OpenSSL 3.5.0 introduced experimental support for QUIC. This change enables the USE_QUIC option when a compatible version of OpenSSL is detected, allowing QUIC-based functionality to be leveraged where applicable. The feature remains disabled for earlier versions to ensure compatibility.
2025-07-07 09:02:11 +02:00
Ilia Shipitsin
198d422a31 CI: set DEBUG_STRICT=2 for coverity scan
Enabling DEBUG_STRICT=2 will enable BUG_ON_HOT() and help Coverity with
bug detection.

for the reference: https://github.com/haproxy/haproxy/issues/3008
2025-07-06 08:17:37 +02:00
Willy Tarreau
573143e0c8 MINOR: pattern: add a counter of added/freed patterns
Patterns are allocated when loading maps/acls from a file or dynamically
via the CLI, and are released only from the CLI (e.g. "clear map xxx").
These do not use pools and are much harder to monitor, e.g. in case a
script adds many and forgets to clear them.

Let's add a new pair of metrics "PatternsAdded" and "PatternsFreed" that
will report the number of added and freed patterns respectively. This
allows both to be simply graphed. The difference between the two normally
represents the number of allocated patterns. If Added grows without
Freed following, it can indicate a faulty script that doesn't perform
the needed cleanup. The metrics are also made available to Prometheus
as patterns_added_total and patterns_freed_total respectively.
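
A minimal sketch of the underlying counters (the names below mirror the
metrics; the actual haproxy variables and atomic helpers may differ):

    /* monotonic counters; "currently allocated" = added - freed */
    static unsigned long long patterns_added;
    static unsigned long long patterns_freed;

    static void count_pattern_added(void)
    {
        __atomic_add_fetch(&patterns_added, 1, __ATOMIC_RELAXED);
    }

    static void count_pattern_freed(void)
    {
        __atomic_add_fetch(&patterns_freed, 1, __ATOMIC_RELAXED);
    }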
2025-07-05 00:12:45 +02:00
Remi Tricot-Le Breton
a075d6928a CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
This header does not actually contain any structures so it's best to
remove the '-t' from the name for better consistency.
2025-07-04 15:21:50 +02:00
William Lallemand
f07f0ee21c MEDIUM: httpclient: add a Content-Length when the payload is known
This introduces a change of behavior in the httpclient API. When
generating a request with a payload buffer, the size of the payload
buffer is known and does not need to be streamed in chunks.

This patch forces the payload buffer to be sent using a Content-Length
header in the request; however, the behavior does not change if a
callback is used instead of a buffer.
2025-07-04 15:21:50 +02:00
Christopher Faulet
5da4da0bb6 BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
When the "pause" action is parsed, if an expression is used instead of a
static value, the position of the current argument after the expression
evaluation is incremented while it should not. The sample_parse_expr()
function already take care of it. However, it should still be incremented
when an time value was parsed.

This patch must be backported to 3.2.
2025-07-04 14:38:32 +02:00
Christopher Faulet
3cc5991c9b BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
When the TCP MD5 signature is enabled, on a listening socket or an outgoing
one, the tcp_md5sig structure must be initialized first.

It is a 3.3-specific issue. No backport needed.
2025-07-04 08:32:06 +02:00
Christopher Faulet
45cb232062 BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
It is required for the musl library to be sure TCP_MD5SIG_MAXKEYLEN is
defined and to avoid build errors.
2025-07-03 16:30:15 +02:00
Christopher Faulet
5232df57ab MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
This patch adds support for RFC 2385 (Protection of BGP Sessions via the
TCP MD5 Signature Option) for listeners and servers. The feature is only
available on Linux. The keywords are not exposed otherwise.

By setting "tcp-md5sig <password>" option on a bind line, TCP segments of
all connections instantiated from the listening socket will be signed with a
16-byte MD5 digest. The same option can be set on a server line to protect
outgoing connections to the corresponding server.

The primary use case for this option is to allow BGP to protect itself
against the introduction of spoofed TCP segments into the connection
stream. But it can be useful for any very long-lived TCP connections.
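
A hedged sketch of what enabling this boils down to on Linux (haproxy's
actual code path and helper names differ):

    #define _GNU_SOURCE   /* for TCP_MD5SIG_MAXKEYLEN with musl */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* sign all segments exchanged with <peer> on socket <fd> using <key> */
    static int sock_set_tcp_md5sig(int fd, const struct sockaddr_storage *peer,
                                   socklen_t peerlen, const char *key)
    {
        struct tcp_md5sig md5;
        size_t klen = strlen(key);

        memset(&md5, 0, sizeof(md5));
        if (klen > TCP_MD5SIG_MAXKEYLEN || peerlen > sizeof(md5.tcpm_addr))
            return -1;
        memcpy(&md5.tcpm_addr, peer, peerlen);
        md5.tcpm_keylen = klen;
        memcpy(md5.tcpm_key, key, klen);
        return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG, &md5, sizeof(md5));
    }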

A reg-test was added and it will be executed only on linux. All other
targets are excluded.
2025-07-03 15:25:40 +02:00
William Lallemand
6f6c6fa4cb BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
Since patch 20718f40b6 ("MEDIUM: ssl/ckch: add filename and linenum
argument to crt-store parsing"), the definition of ocsp_update_init()
and its declaration does not share the same arguments.

Must be backported to 3.2.
2025-07-03 15:14:13 +02:00
David Carlier
e7c59a7a84 DOC: deviceatlas build clarifications
Update the related documentation accordingly, removing/clarifying confusing
parts, as it was more complicated than it needed to be.
2025-07-03 09:08:06 +02:00
David Carlier
0e8e20a83f BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
We are reusing DEVICEATLAS_INC/DEVICEATLAS_LIB when the DeviceAtlas
library has been compiled and installed with the cmake and make install
targets. This works fine except when ldconfig is unaware of the path, thus
adding cflags/ldflags into the mix.

Ideally, to be backported down to the lowest stable branch.
2025-07-03 09:08:06 +02:00
William Lallemand
720efd0409 BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
TRACE_ENTER is crashing in ssl_sock_io_cb() in case an idle connection is
being stolen. Indeed, the function could be called with a NULL context
and dereferencing it will crash.

This patch fixes the issue by initializing ctx only once it is usable,
and moving TRACE_ENTER after the initialization.

This must be backported to 3.2.
2025-07-02 16:14:19 +02:00
Willy Tarreau
e34a0a50ae BUILD: dev/phash: remove the accidentally committed a.out file
Commit 41f28b3c53 ("DEV: phash: Update 414 and 431 status codes to phash")
accidentally committed a.out, resulting in build/checkout issues when
locally rebuilt. Let's drop it.

This should be backported to 3.1.
2025-07-02 10:55:13 +02:00
William Lallemand
0f1c206b8f MINOR: httpclient/cli: add --htx option
Use the new HTTPCLIENT_O_RES_HTX flag when using the CLI httpclient with
--htx.

It allows the response to be processed directly in HTX; the htx_dump()
function is then used to display a debug output.

Example:

echo "httpclient --htx GET https://haproxy.org" | socat /tmp/haproxy.sock
 htx=0x79fd72a2e200(size=16336,data=139,used=6,wrap=NO,flags=0x00000010,extra=0,first=0,head=0,tail=5,tail_addr=139,head_addr=0,end_addr=0)
		[0] type=HTX_BLK_RES_SL    - size=31     - addr=0     	HTTP/2.0 301
		[1] type=HTX_BLK_HDR       - size=15     - addr=31    	content-length: 0
		[2] type=HTX_BLK_HDR       - size=32     - addr=46    	location: https://www.haproxy.org/
		[3] type=HTX_BLK_HDR       - size=25     - addr=78    	alt-svc: h3=":443"; ma=3600
		[4] type=HTX_BLK_HDR       - size=35     - addr=103   	set-cookie: served=2:TLSv1.3+TCP:IPv4
		[5] type=HTX_BLK_EOH       - size=1      - addr=138   	<empty>
2025-07-01 16:33:38 +02:00
William Lallemand
3e05e20029 MEDIUM: httpclient: implement a way to use directly htx data
Add an HTTPCLIENT_O_RES_HTX flag which allows storing the HTX data
directly in the response buffer instead of extracting the data in raw
format.

This is useful when the data needs to be reused in another request.
2025-07-01 16:31:47 +02:00
William Lallemand
2f4219ed68 MEDIUM: httpclient: split the CLI from the actual httpclient API
This patch splits the httpclient code to prevent confusion between the
httpclient CLI command and the actual httpclient API.

Indeed, there was confusion between the flag used internally by the
CLI command and the actual httpclient API.

hc_cli_* functions as well as HC_C_F_* defines were moved to
httpclient_cli.c.
2025-07-01 15:46:04 +02:00
William Lallemand
149f6a4879 MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
The ocsp-update uses the flags from the httpclient CLI, which are not
supposed to be used elsewhere since this is a state for the CLI.

This patch implements HC_OCSP flags for the ocsp-update.
2025-07-01 15:46:04 +02:00
William Lallemand
519abefb57 BUG/MINOR: httpclient: wrongly named httpproxy flag
The HC_F_HTTPPROXY flag was wrongly named and does not use the correct
value; indeed, this flag was meant to be used for the httpclient API, not
the httpclient CLI.

This patch fixes the problem by introducing HTTPCLIENT_FO_HTTPPROXY,
which must be set in hc->flags.

Also add a member 'options' in the httpclient structure, because the
member 'flags' is reinitialized when starting.

Must be backported as far as 3.0.
2025-07-01 14:47:52 +02:00
Aurelien DARRAGON
747a812066 MEDIUM: stats: add persistent state to typed output format
Add a fourth character to the second column of the "typed output format"
to indicate whether the value results from a volatile or persistent metric
('V' or 'P' characters respectively). A persistent metric means the value
could possibly be preserved across reloads by leveraging shared memory
between multiple co-processes. Such metrics are identified as "shared" in
the code (since they are possibly shared between multiple co-processes).

Some reg-tests were updated to take that change into account, also, some
outputs in the configuration manual were updated to reflect current
behavior.
2025-07-01 14:15:03 +02:00
Mariam John
bd076f8619 MINOR: ssl: Introduce new smp_client_hello_parse() function
In this patch we introduce a new helper function called `smp_client_hello_parse()` to extract
information presented in a TLS client hello handshake message. Seven sample fetches have also
been modified to use this helper function to do the common client hello parsing and use the
result to do further processing of extensions/ciphers.

Fixes: #2532
2025-07-01 11:55:36 +02:00
Willy Tarreau
48d5ef363d REGTESTS: restrict execution to a single thread group
When threads are enabled and running on a machine with multiple CCX
or multiple nodes, thread groups are now enabled since 3.3-dev2, causing
load-balancing algorithms to randomly fail due to incoming connections
spreading over multiple groups and using different load balancing indexes.

Let's just force "thread-groups 1" into all configs when threads are
enabled to avoid this.
2025-06-30 18:54:35 +02:00
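For reference, the directive forced into the generated test configurations
looks like this (sketch):

    global
        thread-groups 1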
Remi Tricot-Le Breton
94d750421c DOC: 'jwt_verify' converter now supports certificates
The 'jwt_verify' converter can now accept certificates as a second
parameter, which can be updated via the CLI.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
db5ca5a106 REGTESTS: jwt: Test update of certificate used in jwt_verify
Using certificates in the jwt_verify converter allows making use of the
CLI certificate updates, which is still impossible with public keys (the
legacy behavior).
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
663ba093aa REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
The jwt_verify converter can now take public certificates as a second
parameter, either as an actual certificate path (not previously mentioned),
from a predefined crt-store, or from a variable.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
093a3ad7f2 MINOR: ssl: Prevent delete on certificate used by jwt_verify
A ckch_store used in JWT verification might not have any ckch instances
or crt-list entries linked but we don't want to be able to remove it via
the CLI anyway since it would make all future jwt_verify calls using
this certificate fail.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
31955e6e0a MINOR: ssl: Allow 'commit ssl cert' with no privkey
The ckch_stores might be used to store public certificates only so in
this case we won't provide private keys when updating the certificate
via the CLI.
If the ckch_store is actually used in a bind or server line an error
will still be raised if the private key is missing.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
522bca98e1 MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
The 'jwt_verify' converter could only be passed public keys as a second
parameter instead of full-on public certificates. This patch allows
proper certificates to be used.
Those certificates can be loaded in ckch_stores like any other
certificate, which means that all the certificate-related operations that
can be made via the CLI can now benefit JWT validation as well.

There are now two ways JWT validation can work. The legacy one relies
only on public keys, which could not be stored in ckch_stores without
some in-depth changes in the way the ckch_stores are built. In this
legacy way, the public keys are fully stored in a cache dedicated to JWT
only, which offers no CLI commands and no way to update them
during runtime. It also requires that all the public keys used are
passed at least once explicitly to the 'jwt_verify' converter so that
they can be loaded during init.
The new way uses actual certificates, either already stored in the
ckch_store tree (if predefined in a crt-store or already used previously
in the configuration) or loaded into the ckch_store tree during init if
they are explicitly used in the configuration like so:
    var(txn.bearer),jwt_verify(txn.jwt_alg,"cert.pem")

When using a variable (or any other way that can only be resolved during
runtime) in place of the converter's <key> parameter, the first time we
encounter a new value (for which we don't have any entry in the jwt
tree) we will lock the ckch_store tree and try to perform a lookup in
it. If the lookup fails, an entry will still be inserted into the jwt
tree so that any following call with this value avoids performing the
ckch_store tree lookup.
2025-06-30 17:59:55 +02:00
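A minimal sketch of how this is typically wired up (the variable names and
the http_auth_bearer/jwt_header_query fetch/converter are the usual JWT
examples from the configuration manual, not part of this patch; "cert.pem"
is the certificate mentioned above):

    http-request set-var(txn.bearer) http_auth_bearer
    http-request set-var(txn.jwt_alg) var(txn.bearer),jwt_header_query('$.alg')
    http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"cert.pem") -m int 1 }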
Remi Tricot-Le Breton
6e9f886c4d MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
The pubkey parameter in convert_ecdsa_sig was not actually used.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
cd89ce1766 MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
Rename the jwt_cert_tree_entry member pkey to pubkey to avoid any
confusion between private and public key.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
5c3d0a554b DOC: Fix 'jwt_verify' converter doc
Contrary to what the doc says, the jwt_verify converter only works with
a public key and not a full certificate for certificate-based algorithms
(everything but HMAC).

This patch should be backported up to 2.8.
2025-06-30 17:59:55 +02:00
Remi Tricot-Le Breton
3465f88f8a BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
When resolving variable values, the temporary trash chunks are used, so
when calling the 'jwt_verify' converter with two variable parameters
like in the following line, the input would be overwritten by the value
of the second parameter:
    var(txn.bearer),jwt_verify(txn.jwt_alg,txn.cert)
Copying the values into dedicated alloc'ed buffers prevents any new call
to get_trash_chunk from erasing the data we need in the converter.

This patch can be backported up to 2.8.
2025-06-30 17:59:55 +02:00
Christopher Faulet
5ba0a2d527 BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
On the backend side, an error at the connection level during preface sending
was not properly handled and could lead to a spinning loop on process_stream()
when the h2 stream on the client side was blocked, for instance because of h2
flow control.

It appeared that no transition was performed from the PREFACE state to an
ERROR state on the H2 connection when an error occurred on the underlying
connection. In that case, the H2 connection was woken up in a loop to try to
receive data, waking up the upper stream at the same time.

To fix the issue, an H2C error must be reported. Most state transitions are
handled by the demux function. So it is the right place to do so. First, in
PREFACE state and on server side, if an error occurred on the TCP
connection, an error is now reported on the H2 connection. REFUSED_STREAM
error code is used in that case. In addition, we also take
care to properly handle the connection shutdown.

This patch should fix the issue #3020. It must be backported to all stable
versions.
2025-06-30 16:48:00 +02:00
Christopher Faulet
a2a142bf40 BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
It was already forbidden to use HTTP sample fetch functions from lua
services. An error is triggered if it happens. However, the error must be
extended to any L6/L7 sample fetch functions.

Indeed, a lua service is an applet. It is totally unexpected for an applet to
access input data in a channel's buffer. These data have not been
analyzed yet and are still subject to change. An applet, lua or not,
must never access "not forwarded" data. Only output data are
available. For now, if a lua applet relies on any L6/L7 sample fetch
functions, the behavior is undefined and inconsistent.

So to fix the issue, the hlua flag HLUA_F_MAY_USE_HTTP is renamed to
HLUA_F_MAY_USE_CHANNELS_DATA. This flag is used to prevent any lua applet
from using L6/L7 sample fetch functions.

This patch could be backported to all stable versions.
2025-06-30 16:47:59 +02:00
William Lallemand
7fc8ab0397 MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
Patch ed9b8fec49 ("BUG/MEDIUM: ssl: AWS-LC + TLSv1.3 won't do ECDSA in
RSA+ECDSA configuration") partly fixed a cipher selection problem with
AWS-LC. However, this no longer checked whether the ciphersuites were
available in haproxy, which is still a problem.

The problem was fixed in AWS-LC 1.46.0 with this PR
https://github.com/aws/aws-lc/pull/2092.

This patch allows filtering the TLS1.3 ciphersuites again with recent
versions of AWS-LC. However, since there are no macros to check the
AWS-LC version, it is enabled at the next AWS-LC API version change
following the fix in AWS-LC v1.50.0.

This could be backported where ed9b8fec49 was backported.
2025-06-30 16:43:51 +02:00
Aurelien DARRAGON
4fcc9b5572 MINOR: counters: rename last_change counter to last_state_change
Since the proxy and server structs already have an internal last_change
variable and we cannot merge it with the shared counter one, let's
rename the last_change counter to be more specific and prevent any
mixup between the two.

The last_change counter is renamed to last_state_change, and unlike the
internal last_change, this one is a shared counter so it is expected
to be updated by other processes behind our back.

However, when updating the last_state_change counter, we use the value
of the server/proxy last_change as the reference value.
2025-06-30 16:26:38 +02:00
Aurelien DARRAGON
5b1480c9d4 MEDIUM: proxy: add and use a separate last_change variable for internal use
Same motivation as the previous commit: proxy last_change is "abused" because
it is used for 2 different purposes, one for stats, and the other one
for process-local internal use.

Let's add a separate proxy-only last_change variable for internal use,
and leave the last_change shared (and thread-grouped) counter for
statistics.
2025-06-30 16:26:31 +02:00
Aurelien DARRAGON
01dfe17acf MEDIUM: server: add and use a separate last_change variable for internal use
The last_change server metric is used for 2 separate purposes. First, it is
used to report the last server state change date for stats and other related
metrics. But it is also used internally, including in sensitive paths,
such as lb-related code that takes decisions or performs computations
(i.e. in srv_dynamic_maxconn()).

Due to the last_change counter now being split over thread groups since 16eb0fa
("MAJOR: counters: dispatch counters over thread groups"), reading the
aggregated value has a cost, and we cannot afford to consult the last_change
value from srv_dynamic_maxconn() anymore. Moreover, since the value is
used to take decisions for the current process, we don't want the variable
to be updated by another process behind our back.

To prevent performance regressions and sharing issues, let's instead add a
separate srv->last_change value, which is not updated atomically (given how
rare the updates are), and only serves for places where the use of the
aggregated last_change counter/stats (split over thread groups) is too
costly.
2025-06-30 16:26:25 +02:00
Aurelien DARRAGON
9d3c73c9f2 BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
16eb0fa ("MAJOR: counters: dispatch counters over thread groups")
introduced some bugs: as a result of improper copy paste during
COUNTERS_SHARED_LAST() macro introduction, some functions such as
srv_downtime() which used to make use of the server last_change variable
now use the proxy one, which doesn't make sense and will likely cause
unexpected logical errors/bugs.

Let's fix them all at once by properly pointing to the server last_change
variable when relevant.

No backport needed.
2025-06-30 16:26:19 +02:00
Aurelien DARRAGON
837762e2ee MINOR: mailers: warn if mailers are configured but not actually used
Now that the native mailers configuration is only usable with Lua mailers,
Willy noticed that we lack a way to warn the user if mailers were
previously configured on an older version but Lua mailers were not loaded,
which could trick the user into thinking mailers keep working when
transitioning to 3.2 while they do not.

In this patch we add the 'core.use_native_mailers_config()' Lua function,
which should be called in the Lua script body before making use of the
'Proxy:get_mailers()' function to retrieve the legacy mailers configuration
from the haproxy main config. This way haproxy effectively knows that the
native mailers config is actually being used from Lua (which indicates the
user correctly migrated from native mailers to Lua mailers). Otherwise, if
mailers are configured but not used from Lua, haproxy warns the user
that they will be ignored unless they are used from Lua
(e.g. using the provided 'examples/lua/mailers.lua' to ease the transition).
2025-06-27 16:41:18 +02:00
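A minimal Lua sketch of the intended usage (the backend name is made up;
core.use_native_mailers_config() and Proxy:get_mailers() are the functions
described above):

    -- script body: declare that the legacy "mailers" sections will be used from Lua
    core.use_native_mailers_config()

    core.register_task(function()
        -- "mybackend" is an illustrative proxy name
        local mailers = core.proxies["mybackend"]:get_mailers()
        if mailers then
            -- send alerts using the retrieved mailers configuration...
        end
    end)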
Aurelien DARRAGON
c7c6d8d295 MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
This way the check is executed regardless of the section where the server
is declared (i.e. not only under the "ring" section).
2025-06-27 16:41:13 +02:00
Aurelien DARRAGON
14d68c2ff7 MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
_srv_check_proxy_mode() is currently executed during server init (from
_srv_parse_init()). While this used to be fine for the current checks, it
seems it occurs a bit too early to be usable for checks that depend on
server keywords having been evaluated, for instance.

As such, to make _srv_check_proxy_mode() more relevant and allow it to be
extended with additional checks in the future, let's call it later during
server finalization, once all server keywords have been evaluated.

No change of behavior is expected.
2025-06-27 16:41:07 +02:00
Aurelien DARRAGON
23e5f18b8e MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
No change of behavior expected, but some compat checks will now be aware
that the proxy type is not TCP but SYSLOG instead.
2025-06-27 16:41:01 +02:00
Frederic Lecaille
1045623cb8 BUG/MINOR: quic-be: Wrong retry_source_connection_id check
This commit broke the QUIC backend connection to servers without address validation
or retry activated:

  MINOR: quic-be: address validation support implementation (RETRY)

Indeed, the retry_source_connection_id transport parameter was already checked
as if it was required, as if the peer (server) was always using address validation.
Furthermore, relying on ->odcid.len to ensure a retry token was received is not
correct.

This patch ensures the retry_source_connection_id transport parameter is checked
only when a retry token was received (->retry_token != NULL). In this case
it also checks that this transport parameter is present when a retry token
has been received (tx_params->retry_source_connection_id.len != 0).

No need to backport.
2025-06-27 07:59:12 +02:00
362 changed files with 6106 additions and 2577 deletions

21
.github/matrix.py vendored
View File

@ -125,7 +125,7 @@ def main(ref_name):
# Ubuntu
if "haproxy-" in ref_name:
os = "ubuntu-22.04" # stable branch
os = "ubuntu-24.04" # stable branch
else:
os = "ubuntu-24.04" # development branch
@ -218,6 +218,7 @@ def main(ref_name):
"stock",
"OPENSSL_VERSION=1.0.2u",
"OPENSSL_VERSION=1.1.1s",
"OPENSSL_VERSION=3.5.1",
"QUICTLS=yes",
"WOLFSSL_VERSION=5.7.0",
"AWS_LC_VERSION=1.39.0",
@ -232,8 +233,7 @@ def main(ref_name):
for ssl in ssl_versions:
flags = ["USE_OPENSSL=1"]
if ssl == "BORINGSSL=yes" or ssl == "QUICTLS=yes" or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl:
flags.append("USE_QUIC=1")
skipdup=0
if "WOLFSSL" in ssl:
flags.append("USE_OPENSSL_WOLFSSL=1")
if "AWS_LC" in ssl:
@ -243,8 +243,23 @@ def main(ref_name):
flags.append("SSL_INC=${HOME}/opt/include")
if "LIBRESSL" in ssl and "latest" in ssl:
ssl = determine_latest_libressl(ssl)
skipdup=1
if "OPENSSL" in ssl and "latest" in ssl:
ssl = determine_latest_openssl(ssl)
skipdup=1
# if "latest" equals a version already in the list
if ssl in ssl_versions and skipdup == 1:
continue
openssl_supports_quic = False
try:
openssl_supports_quic = version.Version(ssl.split("OPENSSL_VERSION=",1)[1]) >= version.Version("3.5.0")
except:
pass
if ssl == "BORINGSSL=yes" or ssl == "QUICTLS=yes" or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl or openssl_supports_quic:
flags.append("USE_QUIC=1")
matrix.append(
{

View File

@ -38,7 +38,7 @@ jobs:
- name: Build with Coverity build tool
run: |
export PATH=`pwd`/coverity_tool/bin:$PATH
cov-build --dir cov-int make CC=clang TARGET=linux-glibc USE_ZLIB=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1 DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1 51DEGREES_SRC=addons/51degrees/dummy/pattern ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=1 DEBUG+=-DDEBUG_USE_ABORT=1
cov-build --dir cov-int make CC=clang TARGET=linux-glibc USE_ZLIB=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1 DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1 51DEGREES_SRC=addons/51degrees/dummy/pattern ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=2 DEBUG+=-DDEBUG_USE_ABORT=1
- name: Submit build result to Coverity Scan
run: |
tar czvf cov.tar.gz cov-int

186
CHANGELOG
View File

@ -1,6 +1,192 @@
ChangeLog :
===========
2025/07/28 : 3.3-dev5
- BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
- DOC: list missing global QUIC settings
2025/07/26 : 3.3-dev4
- CLEANUP: server: do not check for duplicates anymore in findserver()
- REORG: server: move findserver() from proxy.c to server.c
- MINOR: server: use the tree to look up the server name in findserver()
- CLEANUP: server: rename server_find_by_name() to server_find()
- CLEANUP: server: rename findserver() to server_find_by_name()
- CLEANUP: server: use server_find_by_name() where relevant
- CLEANUP: cfgparse: lookup proxy ID using existing functions
- CLEANUP: stream: lookup server ID using standard functions
- CLEANUP: server: simplify server_find_by_id()
- CLEANUP: server: add server_find_by_addr()
- CLEANUP: stream: use server_find_by_addr() in sticking_rule_find_target()
- CLEANUP: server: be sure never to compare src against a non-existing defsrv
- MEDIUM: proxy: take the defsrv out of the struct proxy
- MINOR: proxy: add checks for defsrv's validity
- MEDIUM: proxy: no longer allocate the default-server entry by default
- MEDIUM: proxy: register a post-section cleanup function
- MINOR: debug: report haproxy and operating system info in panic dumps
- BUG/MEDIUM: h3: do not overwrite interim with final response
- BUG/MINOR: h3: properly realloc buffer after interim response encoding
- BUG/MINOR: h3: ensure that invalid status code are not encoded (FE side)
- MINOR: qmux: change API for snd_buf FIN transmission
- BUG/MEDIUM: h3: handle interim response properly on FE side
- BUG/MINOR: h3: properly handle interim response on BE side
- BUG/MINOR: quic: Wrong source address use on FreeBSD
- MINOR: h3: remove unused outbuf in h3_resp_headers_send()
- BUG/MINOR: applet: Don't trigger BUG_ON if the tid is not on appctx init
- DEV: gdb: add a memprofile decoder to the debug tools
- MINOR: quic: Get rid of qc_is_listener()
- DOC: connection: explain the rules for idle/safe/avail connections
- BUG/MEDIUM: quic-be: CC buffer released from wrong pool
- BUG/MINOR: halog: exit with error when some output filters are set simultaneosly
- MINOR: cpu-topo: split cpu_dump_topology() to show its summary in show dev
- MINOR: cpu-topo: write thread-cpu bindings into trash buffer
- MINOR: debug: align output style of debug_parse_cli_show_dev with cpu_dump_topology
- MINOR: debug: add thread-cpu bindings info in 'show dev' output
- MINOR: quic: Remove pool_head_quic_be_cc_buf pool
- BUILD: debug: add missed guard USE_CPU_AFFINITY to show cpu bindings
- BUG/MEDIUM: threads: Disable the workaround to load libgcc_s on macOS
- BUG/MINOR: logs: fix log-steps extra log origins selection
- BUG/MINOR: hq-interop: fix FIN transmission
- MINOR: ssl: Add ciphers in ssl traces
- MINOR: ssl: Add curve id to curve name table and mapping functions
- MINOR: ssl: Add curves in ssl traces
- MINOR: ssl: Dump ciphers and sigalgs details in trace with 'advanced' verbosity
- MINOR: ssl: Remove ClientHello specific traces if !HAVE_SSL_CLIENT_HELLO_CB
- MINOR: h3: use smallbuf for request header emission
- MINOR: h3: add traces to h3_req_headers_send()
- BUG/MINOR: h3: fix uninitialized value in h3_req_headers_send()
- MINOR: log: explicitly ignore "log-steps" on backends
- BUG/MEDIUM: acme: use POST-as-GET instead of GET for resources
- BUG/MINOR mux-quic: apply correctly timeout on output pending data
- BUG/MINOR: mux-quic: ensure close-spread-time is properly applied
- MINOR: mux-quic: refactor timeout code
- MINOR: mux-quic: correctly implement backend timeout
- MINOR: mux-quic: disable glitch on backend side
- MINOR: mux-quic: store session in QCS instance
- MEDIUM: mux-quic: implement be connection reuse
- MINOR: mux-quic: do not reuse connection if app already shut
- MEDIUM: mux-quic: support backend private connection
- MINOR: acme: remove acme_req_auth() and use acme_post_as_get() instead
- BUG/MINOR: acme: allow "processing" in challenge requests
- CLEANUP: acme: fix wrong spelling of "resources"
- CLEANUP: ssl: Use only NIDs in curve name to id table
- MINOR: acme: add ACME to the haproxy -vv feature list
- BUG/MINOR: hlua: Skip headers when a receive is performed on an HTTP applet
- BUG/MEDIUM: applet: State inbuf is no longer full if input data are skipped
- BUG/MEDIUM: stconn: Fix conditions to know an applet can get data from stream
- BUG/MINOR: applet: Fix applet_getword() to not return one extra byte
- BUG/MEDIUM: Remove sync sends from streams to applets
- MINOR: applet: Add HTX versions for applet_input_data() and applet_output_room()
- MINOR: applet: Improve applet API to take care of inbuf/outbuf alloc failures
- MEDIUM: hlua: Update the tcp applet to use its own buffers
- MINOR: hlua: Fill the request array on the first HTTP applet run
- MINOR: hlua: Use the buffer instead of the HTTP message to get HTTP headers
- MEDIUM: hlua: Update the http applet to use its own buffers
- BUG/MEDIUM: hlua: Report to SC when data were consumed on a lua socket
- BUG/MEDIUM: hlua: Report to SC when output data are blocked on a lua socket
- MEDIUM: hlua: Update the socket applet to use its own buffers
- BUG/MEDIUM: dns: Reset reconnect tempo when connection is finally established
- MEDIUM: dns: Update the dns_session applet to use its own buffers
- CLEANUP: http-client: Remove useless indentation when sending request body
- MINOR: http-client: Try to send request body with headers if possible
- MINOR: http-client: Trigger an error if first response block isn't a start-line
- BUG/MINOR: httpclient-cli: Don't try to dump raw headers in HTX mode
- MINOR: httpclient-cli: Reset httpclient HTX buffer instead of removing blocks
- MEDIUM: http-client: Update the http-client applet to use its own buffers
- MEDIUM: log: Update the log applet to use its own buffers
- MEDIUM: sink: Update the sink applets to use their own buffers
- MEDIUM: peers: Update the peer applet to use its own buffers
- MEDIUM: promex: Update the promex applet to use their own buffers
- MINOR: applet: Add support for flags on applets with a flag about the new API
- MEDIUM: applet: Emit a warning when a legacy applet is spawned
- BUG/MEDIUM: logs: fix sess_build_logline_orig() recursion with options
- MEDIUM: stats: avoid 1 indirection by storing the shared stats directly in counters struct
- CLEANUP: compiler: prefer char * over void * for pointer arithmetic
- CLEANUP: include: replace hand-rolled offsetof to avoid UB
- CLEANUP: peers: remove unused peer_session_target()
- OPTIM: stats: store fast sharded counters pointers at session and stream level
2025/07/11 : 3.3-dev3
- BUG/MINOR: quic-be: Wrong retry_source_connection_id check
- MEDIUM: sink: change the sink mode type to PR_MODE_SYSLOG
- MEDIUM: server: move _srv_check_proxy_mode() checks from server init to finalize
- MINOR: server: move send-proxy* incompatibility check in _srv_check_proxy_mode()
- MINOR: mailers: warn if mailers are configured but not actually used
- BUG/MEDIUM: counters/server: fix server and proxy last_change mixup
- MEDIUM: server: add and use a separate last_change variable for internal use
- MEDIUM: proxy: add and use a separate last_change variable for internal use
- MINOR: counters: rename last_change counter to last_state_change
- MINOR: ssl: check TLS1.3 ciphersuites again in clienthello with recent AWS-LC
- BUG/MEDIUM: hlua: Forbid any L6/L7 sample fetche functions from lua services
- BUG/MEDIUM: mux-h2: Properly handle connection error during preface sending
- BUG/MINOR: jwt: Copy input and parameters in dedicated buffers in jwt_verify converter
- DOC: Fix 'jwt_verify' converter doc
- MINOR: jwt: Rename pkey to pubkey in jwt_cert_tree_entry struct
- MINOR: jwt: Remove unused parameter in convert_ecdsa_sig
- MAJOR: jwt: Allow certificate instead of public key in jwt_verify converter
- MINOR: ssl: Allow 'commit ssl cert' with no privkey
- MINOR: ssl: Prevent delete on certificate used by jwt_verify
- REGTESTS: jwt: Add test with actual certificate passed to jwt_verify
- REGTESTS: jwt: Test update of certificate used in jwt_verify
- DOC: 'jwt_verify' converter now supports certificates
- REGTESTS: restrict execution to a single thread group
- MINOR: ssl: Introduce new smp_client_hello_parse() function
- MEDIUM: stats: add persistent state to typed output format
- BUG/MINOR: httpclient: wrongly named httpproxy flag
- MINOR: ssl/ocsp: stop using the flags from the httpclient CLI
- MEDIUM: httpclient: split the CLI from the actual httpclient API
- MEDIUM: httpclient: implement a way to use directly htx data
- MINOR: httpclient/cli: add --htx option
- BUILD: dev/phash: remove the accidentally committed a.out file
- BUG/MINOR: ssl: crash in ssl_sock_io_cb() with SSL traces and idle connections
- BUILD/MEDIUM: deviceatlas: fix when installed in custom locations.
- DOC: deviceatlas build clarifications
- BUG/MINOR: ssl/ocsp: fix definition discrepancies with ocsp_update_init()
- MINOR: proto-tcp: Add support for TCP MD5 signature for listeners and servers
- BUILD: cfgparse-tcp: Add _GNU_SOURCE for TCP_MD5SIG_MAXKEYLEN
- BUG/MINOR: proto-tcp: Take care to initialized tcp_md5sig structure
- BUG/MINOR: http-act: Fix parsing of the expression argument for pause action
- MEDIUM: httpclient: add a Content-Length when the payload is known
- CLEANUP: ssl: Rename ssl_trace-t.h to ssl_trace.h
- MINOR: pattern: add a counter of added/freed patterns
- CI: set DEBUG_STRICT=2 for coverity scan
- CI: enable USE_QUIC=1 for OpenSSL versions >= 3.5.0
- CI: github: add an OpenSSL 3.5.0 job
- CI: github: update the stable CI to ubuntu-24.04
- BUG/MEDIUM: quic: SSL/TCP handshake failures with OpenSSL 3.5
- CI: github: update to OpenSSL 3.5.1
- BUG/MINOR: quic: Missing TLS 1.3 QUIC cipher suites and groups inits (OpenSSL 3.5 QUIC API)
- BUG/MINOR: quic-be: Malformed coalesced Initial packets
- MINOR: quic: Prevent QUIC backend use with the OpenSSL QUIC compatibility module (USE_OPENSS_COMPAT)
- MINOR: reg-tests: first QUIC+H3 reg tests (QUIC address validation)
- MINOR: quic-be: Set the backend alpn if not set by conf
- MINOR: quic-be: TLS version restriction to 1.3
- MINOR: cfgparse: enforce QUIC MUX compat on server line
- MINOR: server: support QUIC for dynamic servers
- CI: github: skip a ssl library version when latest is already in the list
- MEDIUM: resolvers: switch dns-accept-family to "auto" by default
- BUG/MINOR: resolvers: don't lower the case of binary DNS format
- MINOR: resolvers: do not duplicate the hostname_dn field
- MINOR: proto-tcp: Register a feature to report TCP MD5 signature support
- BUG/MINOR: listener: really assign distinct IDs to shards
- MINOR: quic: Prevent QUIC build with OpenSSL 3.5 new QUIC API version < 3.5.1
- BUG/MEDIUM: quic: Crash after QUIC server callbacks restoration (OpenSSL 3.5)
- REGTESTS: use two haproxy instances to distinguish the QUIC traces
- BUG/MEDIUM: http-client: Don't wake http-client applet if nothing was xferred
- BUG/MEDIUM: http-client: Properly inc input data when HTX blocks are xferred
- BUG/MEDIUM: http-client: Ask for more room when request data cannot be xferred
- BUG/MEDIUM: http-client: Test HTX_FL_EOM flag before commiting the HTX buffer
- BUG/MINOR: http-client: Ignore 1XX interim responses in non-HTX mode
- BUG/MINOR: http-client: Reject any 101-switching-protocols response
- BUG/MEDIUM: http-client: Drain the request if an early response is received
- BUG/MEDIUM: http-client: Notify applet has more data to deliver until the EOM
- BUG/MINOR: h3: fix https scheme request encoding for BE side
- MINOR: h1-htx: Add function to format an HTX message in its H1 representation
- BUG/MINOR: mux-h1: Use configured error files if possible for early H1 errors
- BUG/MINOR: h1-htx: Don't forget to init flags in h1_format_htx_msg function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUILD: adjust scripts/build-ssl.sh to modern CMake system of QuicTLS
- MINOR: debug: add distro name and version in postmortem
2025/06/26 : 3.3-dev2
- BUG/MINOR: config/server: reject QUIC addresses
- MINOR: server: implement helper to identify QUIC servers

View File

@ -992,7 +992,7 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/ebsttree.o src/freq_ctr.o src/systemd.o src/init.o \
src/http_acl.o src/dict.o src/dgram.o src/pipe.o \
src/hpack-huff.o src/hpack-enc.o src/ebtree.o src/hash.o \
src/version.o
src/httpclient_cli.o src/version.o
ifneq ($(TRACE),)
OBJS += src/calltrace.o

View File

@ -1,2 +1,2 @@
$Format:%ci$
2025/06/26
2025/07/28

View File

@ -1 +1 @@
3.3-dev2
3.3-dev5

View File

@ -5,7 +5,8 @@ CXX := c++
CXXLIB := -lstdc++
ifeq ($(DEVICEATLAS_SRC),)
OPTIONS_LDFLAGS += -lda
OPTIONS_CFLAGS += -I$(DEVICEATLAS_INC)
OPTIONS_LDFLAGS += -Wl,-rpath,$(DEVICEATLAS_LIB) -L$(DEVICEATLAS_LIB) -lda
else
DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)

View File

@ -32,7 +32,7 @@
/* Prometheus exporter flags (ctx->flags) */
#define PROMEX_FL_METRIC_HDR 0x00000001
/* unused: 0x00000002 */
#define PROMEX_FL_BODYLESS_RESP 0x00000002
/* unused: 0x00000004 */
/* unused: 0x00000008 */
/* unused: 0x00000010 */

View File

@ -427,9 +427,8 @@ static int promex_dump_global_metrics(struct appctx *appctx, struct htx *htx)
static struct ist prefix = IST("haproxy_process_");
struct promex_ctx *ctx = appctx->svcctx;
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!stats_fill_info(stat_line_info, ST_I_INF_MAX, 0))
@ -495,7 +494,6 @@ static int promex_dump_global_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
return ret;
full:
@ -512,9 +510,8 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
struct proxy *px = ctx->p[0];
struct stats_module *mod = ctx->p[1];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
enum promex_front_state state;
@ -694,7 +691,6 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current stats module) of the current context */
@ -716,9 +712,8 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
struct listener *li = ctx->p[1];
struct stats_module *mod = ctx->p[2];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
enum li_status status;
@ -899,7 +894,6 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current listener, 2=current stats module) of the current context */
ctx->p[0] = px;
@ -921,9 +915,8 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
struct stats_module *mod = ctx->p[1];
struct server *sv;
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
double secs;
@ -1185,7 +1178,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Save pointers (0=current proxy, 1=current stats module) of the current context */
ctx->p[0] = px;
@ -1206,9 +1198,8 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
struct server *sv = ctx->p[1];
struct stats_module *mod = ctx->p[2];
struct field val;
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist name, desc, out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
struct field *stats = stat_lines[STATS_DOMAIN_PROXY];
int ret = 1;
double secs;
@ -1507,7 +1498,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
/* Decrement server refcount if it was saved through ctx.p[1]. */
@ -1603,9 +1593,8 @@ static int promex_dump_ref_modules_metrics(struct appctx *appctx, struct htx *ht
{
struct promex_ctx *ctx = appctx->svcctx;
struct promex_module_ref *ref = ctx->p[0];
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!ref) {
@ -1629,7 +1618,6 @@ static int promex_dump_ref_modules_metrics(struct appctx *appctx, struct htx *ht
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
ctx->p[0] = ref;
return ret;
@ -1644,9 +1632,8 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
{
struct promex_ctx *ctx = appctx->svcctx;
struct promex_module *mod = ctx->p[0];
struct channel *chn = sc_ic(appctx_sc(appctx));
struct ist out = ist2(trash.area, 0);
size_t max = htx_get_max_blksz(htx, channel_htx_recv_max(chn, htx));
size_t max = htx_get_max_blksz(htx, applet_htx_output_room(appctx));
int ret = 1;
if (!mod) {
@ -1670,7 +1657,6 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
if (out.len) {
if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */
channel_add_input(chn, out.len);
}
ctx->p[0] = mod;
return ret;
@ -1685,7 +1671,7 @@ static int promex_dump_all_modules_metrics(struct appctx *appctx, struct htx *ht
* Uses <appctx.ctx.stats.px> as a pointer to the current proxy and <sv>/<li>
* as pointers to the current server/listener respectively.
*/
static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct htx *htx)
static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
{
struct promex_ctx *ctx = appctx->svcctx;
int ret;
@ -1809,7 +1795,7 @@ static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct
return 1;
full:
sc_need_room(sc, channel_htx_recv_max(sc_ic(appctx_sc(appctx)), htx) + 1);
applet_have_more_data(appctx);
return 0;
error:
/* unrecoverable error */
@ -1822,12 +1808,11 @@ static int promex_dump_metrics(struct appctx *appctx, struct stconn *sc, struct
/* Parse the query string of request URI to filter the metrics. It returns 1 on
* success and -1 on error. */
static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
static int promex_parse_uri(struct appctx *appctx)
{
struct promex_ctx *ctx = appctx->svcctx;
struct channel *req = sc_oc(sc);
struct channel *res = sc_ic(sc);
struct htx *req_htx, *res_htx;
struct buffer *outbuf;
struct htx *req_htx;
struct htx_sl *sl;
char *p, *key, *value;
const char *end;
@ -1837,10 +1822,13 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
int len;
/* Get the query-string */
req_htx = htxbuf(&req->buf);
req_htx = htxbuf(DISGUISE(applet_get_inbuf(appctx)));
sl = http_get_stline(req_htx);
if (!sl)
goto error;
goto bad_req_error;
if (sl->info.req.meth == HTTP_METH_HEAD)
ctx->flags |= PROMEX_FL_BODYLESS_RESP;
p = http_find_param_list(HTX_SL_REQ_UPTR(sl), HTX_SL_REQ_ULEN(sl), '?');
if (!p)
goto end;
@ -1873,27 +1861,27 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
*p = 0;
len = url_decode(key, 1);
if (len == -1)
goto error;
goto bad_req_error;
/* decode value */
if (value) {
while (p < end && *p != '=' && *p != '&' && *p != '#')
++p;
if (*p == '=')
goto error;
goto bad_req_error;
if (*p == '&')
*(p++) = 0;
else if (*p == '#')
*p = 0;
len = url_decode(value, 1);
if (len == -1)
goto error;
goto bad_req_error;
}
if (strcmp(key, "scope") == 0) {
default_scopes = 0; /* at least a scope defined, unset default scopes */
if (!value)
goto error;
goto bad_req_error;
else if (*value == 0)
ctx->flags &= ~PROMEX_FL_SCOPE_ALL;
else if (*value == '*' && *(value+1) == 0)
@ -1924,14 +1912,14 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
}
}
if (!(ctx->flags & PROMEX_FL_SCOPE_MODULE))
goto error;
goto bad_req_error;
}
}
else if (strcmp(key, "metrics") == 0) {
struct ist args;
if (!value)
goto error;
goto bad_req_error;
for (args = ist(value); istlen(args); args = istadv(istfind(args, ','), 1)) {
struct eb32_node *node;
@ -1982,30 +1970,28 @@ static int promex_parse_uri(struct appctx *appctx, struct stconn *sc)
ctx->flags |= (default_scopes | default_metrics_filter);
return 1;
error:
bad_req_error:
err = &http_err_chunks[HTTP_ERR_400];
channel_erase(res);
res->buf.data = b_data(err);
memcpy(res->buf.area, b_head(err), b_data(err));
res_htx = htx_from_buf(&res->buf);
channel_add_input(res, res_htx->data);
return -1;
goto error;
internal_error:
err = &http_err_chunks[HTTP_ERR_400];
channel_erase(res);
res->buf.data = b_data(err);
memcpy(res->buf.area, b_head(err), b_data(err));
res_htx = htx_from_buf(&res->buf);
channel_add_input(res, res_htx->data);
err = &http_err_chunks[HTTP_ERR_500];
goto error;
error:
outbuf = DISGUISE(applet_get_outbuf(appctx));
b_reset(outbuf);
outbuf->data = b_data(err);
memcpy(outbuf->area, b_head(err), b_data(err));
applet_set_eoi(appctx);
applet_set_eos(appctx);
return -1;
}
/* Send HTTP headers of the response. It returns 1 on success and 0 if <htx> is
* full. */
static int promex_send_headers(struct appctx *appctx, struct stconn *sc, struct htx *htx)
static int promex_send_headers(struct appctx *appctx, struct htx *htx)
{
struct channel *chn = sc_ic(sc);
struct htx_sl *sl;
unsigned int flags;
@ -2020,11 +2006,10 @@ static int promex_send_headers(struct appctx *appctx, struct stconn *sc, struct
!htx_add_endof(htx, HTX_BLK_EOH))
goto full;
channel_add_input(chn, htx->data);
return 1;
full:
htx_reset(htx);
sc_need_room(sc, 0);
applet_have_more_data(appctx);
return 0;
}
@ -2078,52 +2063,51 @@ static void promex_appctx_release(struct appctx *appctx)
/* The main I/O handler for the promex applet. */
static void promex_appctx_handle_io(struct appctx *appctx)
{
struct stconn *sc = appctx_sc(appctx);
struct stream *s = __sc_strm(sc);
struct channel *req = sc_oc(sc);
struct channel *res = sc_ic(sc);
struct htx *req_htx, *res_htx;
struct promex_ctx *ctx = appctx->svcctx;
struct buffer *outbuf;
struct htx *res_htx;
int ret;
res_htx = htx_from_buf(&res->buf);
if (unlikely(se_fl_test(appctx->sedesc, (SE_FL_EOS|SE_FL_ERROR|SE_FL_SHR|SE_FL_SHW))))
if (unlikely(applet_fl_test(appctx, APPCTX_FL_EOS|APPCTX_FL_ERROR)))
goto out;
/* Check if the input buffer is available. */
if (!b_size(&res->buf)) {
sc_need_room(sc, 0);
outbuf = applet_get_outbuf(appctx);
if (outbuf == NULL) {
applet_have_more_data(appctx);
goto out;
}
res_htx = htx_from_buf(outbuf);
switch (appctx->st0) {
case PROMEX_ST_INIT:
if (!co_data(req)) {
if (!applet_get_inbuf(appctx) || !applet_htx_input_data(appctx)) {
applet_need_more_data(appctx);
goto out;
break;
}
ret = promex_parse_uri(appctx, sc);
ret = promex_parse_uri(appctx);
if (ret <= 0) {
if (ret == -1)
goto error;
goto out;
applet_set_error(appctx);
break;
}
appctx->st0 = PROMEX_ST_HEAD;
appctx->st1 = PROMEX_DUMPER_INIT;
__fallthrough;
case PROMEX_ST_HEAD:
if (!promex_send_headers(appctx, sc, res_htx))
goto out;
appctx->st0 = ((s->txn->meth == HTTP_METH_HEAD) ? PROMEX_ST_DONE : PROMEX_ST_DUMP);
if (!promex_send_headers(appctx, res_htx))
break;
appctx->st0 = ((ctx->flags & PROMEX_FL_BODYLESS_RESP) ? PROMEX_ST_DONE : PROMEX_ST_DUMP);
__fallthrough;
case PROMEX_ST_DUMP:
ret = promex_dump_metrics(appctx, sc, res_htx);
ret = promex_dump_metrics(appctx, res_htx);
if (ret <= 0) {
if (ret == -1)
goto error;
goto out;
applet_set_error(appctx);
break;
}
appctx->st0 = PROMEX_ST_DONE;
__fallthrough;
@ -2137,41 +2121,36 @@ static void promex_appctx_handle_io(struct appctx *appctx)
*/
if (htx_is_empty(res_htx)) {
if (!htx_add_endof(res_htx, HTX_BLK_EOT)) {
sc_need_room(sc, sizeof(struct htx_blk) + 1);
goto out;
applet_have_more_data(appctx);
break;
}
channel_add_input(res, 1);
}
res_htx->flags |= HTX_FL_EOM;
se_fl_set(appctx->sedesc, SE_FL_EOI);
applet_set_eoi(appctx);
appctx->st0 = PROMEX_ST_END;
__fallthrough;
case PROMEX_ST_END:
se_fl_set(appctx->sedesc, SE_FL_EOS);
applet_set_eos(appctx);
}
htx_to_buf(res_htx, outbuf);
out:
htx_to_buf(res_htx, &res->buf);
/* eat the whole request */
if (co_data(req)) {
req_htx = htx_from_buf(&req->buf);
co_htx_skip(req, req_htx, co_data(req));
}
applet_reset_input(appctx);
return;
error:
se_fl_set(appctx->sedesc, SE_FL_ERROR);
goto out;
}
struct applet promex_applet = {
.obj_type = OBJ_TYPE_APPLET,
.flags = APPLET_FL_NEW_API,
.name = "<PROMEX>", /* used for logging */
.init = promex_appctx_init,
.release = promex_appctx_release,
.fct = promex_appctx_handle_io,
.rcv_buf = appctx_htx_rcv_buf,
.snd_buf = appctx_htx_snd_buf,
};
static enum act_parse_ret service_parse_prometheus_exporter(const char **args, int *cur_arg, struct proxy *px,

View File

@ -123,6 +123,22 @@ struct url_stat {
#define FILT2_PRESERVE_QUERY 0x02
#define FILT2_EXTRACT_CAPTURE 0x04
#define FILT_OUTPUT_FMT (FILT_COUNT_ONLY| \
FILT_COUNT_STATUS| \
FILT_COUNT_SRV_STATUS| \
FILT_COUNT_COOK_CODES| \
FILT_COUNT_TERM_CODES| \
FILT_COUNT_URL_ONLY| \
FILT_COUNT_URL_COUNT| \
FILT_COUNT_URL_ERR| \
FILT_COUNT_URL_TAVG| \
FILT_COUNT_URL_TTOT| \
FILT_COUNT_URL_TAVGO| \
FILT_COUNT_URL_TTOTO| \
FILT_COUNT_URL_BAVG| \
FILT_COUNT_URL_BTOT| \
FILT_COUNT_IP_COUNT)
unsigned int filter = 0;
unsigned int filter2 = 0;
unsigned int filter_invert = 0;
@ -192,7 +208,7 @@ void help()
" you can also use -n to start from earlier then field %d\n"
" -query preserve the query string for per-URL (-u*) statistics\n"
"\n"
"Output format - only one may be used at a time\n"
"Output format - **only one** may be used at a time\n"
" -c only report the number of lines that would have been printed\n"
" -pct output connect and response times percentiles\n"
" -st output number of requests per HTTP status code\n"
@ -898,6 +914,9 @@ int main(int argc, char **argv)
if (!filter && !filter2)
die("No action specified.\n");
if ((filter & FILT_OUTPUT_FMT) & ((filter & FILT_OUTPUT_FMT) - 1))
die("Please, set only one output filter.\n");
if (filter & FILT_ACC_COUNT && !filter_acc_count)
filter_acc_count=1;

19
dev/gdb/memprof.dbg Normal file
View File

@ -0,0 +1,19 @@
# show non-null memprofile entries with method, alloc/free counts/tot and caller
define memprof_dump
set $i = 0
set $meth={ "UNKN", "MALL", "CALL", "REAL", "STRD", "FREE", "P_AL", "P_FR", "STND", "VALL", "ALAL", "PALG", "MALG", "PVAL" }
while $i < sizeof(memprof_stats) / sizeof(memprof_stats[0])
if memprof_stats[$i].alloc_calls || memprof_stats[$i].free_calls
set $m = memprof_stats[$i].method
printf "m:%s ac:%u fc:%u at:%u ft:%u ", $meth[$m], \
memprof_stats[$i].alloc_calls, memprof_stats[$i].free_calls, \
memprof_stats[$i].alloc_tot, memprof_stats[$i].free_tot
output/a memprof_stats[$i].caller
printf "\n"
end
set $i = $i + 1
end
end

Binary file not shown.

View File

@ -3,7 +3,9 @@ DeviceAtlas Device Detection
In order to add DeviceAtlas Device Detection support, you would need to download
the API source code from https://deviceatlas.com/deviceatlas-haproxy-module.
Once extracted :
Once extracted, two modes are supported :
1/ Build HAProxy and DeviceAtlas in one command
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=<path to the API root folder>
@ -14,10 +16,6 @@ directory. Also, in the case the api cache support is not needed and/or a C++ to
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=<path to the API root folder> DEVICEATLAS_NOCACHE=1
However, if the API had been installed beforehand, DEVICEATLAS_SRC
can be omitted. Note that the DeviceAtlas C API version supported is from the 3.x
releases series (3.2.1 minimum recommended).
For HAProxy developers who need to verify that their changes didn't accidentally
break the DeviceAtlas code, it is possible to build a dummy library provided in
the addons/deviceatlas/dummy directory and to use it as an alternative for the
@ -27,6 +25,29 @@ validate API changes :
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_SRC=$PWD/addons/deviceatlas/dummy
2/ Build and install DeviceAtlas according to https://docs.deviceatlas.com/apis/enterprise/c/<release version>/README.html
For example :
In the deviceatlas library folder :
$ cmake .
$ make
$ sudo make install
In the HAProxy folder :
$ make TARGET=<target> USE_DEVICEATLAS=1
Note that if the -DCMAKE_INSTALL_PREFIX cmake option had been used, it is necessary to set as well DEVICEATLAS_LIB and
DEVICEATLAS_INC as follow :
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_INC=<CMAKE_INSTALL_PREFIX value>/include DEVICEATLAS_LIB=<CMAKE_INSTALL_PREFIX value>/lib
For example :
$ cmake -DCMAKE_INSTALL_PREFIX=/opt/local
$ make
$ sudo make install
$ make TARGET=<target> USE_DEVICEATLAS=1 DEVICEATLAS_INC=/opt/local/include DEVICEATLAS_LIB=/opt/local/lib
Note that DEVICEATLAS_SRC is omitted in this case.
These are supported DeviceAtlas directives (see doc/configuration.txt) :
- deviceatlas-json-file <path to the DeviceAtlas JSON data file>.
- deviceatlas-log-level <number> (0 to 3, level of information returned by

View File

@ -3,7 +3,7 @@
Configuration Manual
----------------------
version 3.3
2025/06/26
2025/07/28
This document covers the configuration language as implemented in the version
@ -1744,6 +1744,7 @@ The following keywords are supported in the "global" section :
- insecure-setuid-wanted
- issuers-chain-path
- key-base
- limited-quic
- localpeer
- log
- log-send-hostname
@ -1753,6 +1754,7 @@ The following keywords are supported in the "global" section :
- lua-prepend-path
- mworker-max-reloads
- nbthread
- no-quic
- node
- numa-cpu-mapping
- ocsp-update.disable
@ -1882,6 +1884,7 @@ The following keywords are supported in the "global" section :
- tune.pool-low-fd-ratio
- tune.pt.zero-copy-forwarding
- tune.quic.cc-hystart
- tune.quic.cc.cubic.min-losses
- tune.quic.disable-tx-pacing
- tune.quic.disable-udp-gso
- tune.quic.frontend.glitches-threshold
@ -2280,7 +2283,7 @@ cpu-policy <policy>
respected. This is recommended on multi-socket and NUMA
systems, as well as CPUs with bad inter-CCX latencies.
On most server machines, clusters and CCX are the same,
but on heterogenous machines ("performance" vs
but on heterogeneous machines ("performance" vs
"efficiency" or "big" vs "little"), a cluster will
generally be made of only a part of a CCX composed only
of very similar CPUs (same type, +/-5% frequency
@ -2435,8 +2438,9 @@ dns-accept-family <family>[,...]
The result of the last check is cached for 30 seconds.
When a single family is used, no request will be sent to resolvers for the
other family, and any response for the othe family will be ignored. The
default value is "ipv4,ipv6", which effectively enables both families.
other family, and any response for the other family will be ignored. The
default value since 3.3 is "auto", which effectively enables both families
only once IPv6 has been proven to be routable, otherwise sticks to IPv4.
See also: "resolve-prefer", "do-resolve"
expose-deprecated-directives
@ -8777,6 +8781,8 @@ log-steps <steps>
with log-profiles is really interesting to have fine-grained control over
logs automatically generated by haproxy during transaction processing.
This setting is only relevant on frontends, it is ignored on backends.
See also : "log-profile"
log-tag <string>
@ -9734,7 +9740,7 @@ no option http-drop-request-trailers
RFC9110#section-6.5.1 stated that trailer fields could be merged into the
header fields. It should be done on purpose, but it may be a problem for some
applications, espcially if malicious clients hide sensitive header fields in
applications, especially if malicious clients hide sensitive header fields in
the trailers part and some intermediaries merge them with headers with no
specific checks. In that case, this option can be enabled on the backend to
drop any trailer fields found in requests before sending them to the server.
@ -10654,7 +10660,7 @@ no option prefer-last-server
It may be useful to precise here, which load balancing algorithms are
considered deterministic. Deterministic algorithms will always select the same
server for a given client data, assuming the set of available servers has not
changed. In general, deterministic algorithms involve hasing or lookups on the
changed. In general, deterministic algorithms involve hashing or lookups on the
incoming requests to choose the target server. However, this is not always the
case; "static-rr", for example, can be also considered as deterministic because
the server choice is based on the server's static weight, making the selection
@ -15140,7 +15146,7 @@ pause { <timeout> | <expr> }
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
- | - | - | - | - | X | X | -
This suspends the message analysis for the sepcified number of milliseconds.
This suspends the message analysis for the specified number of milliseconds.
The timeout can be specified in milliseconds or with any other unit if the
number is suffixed by the unit as explained at the top of this document. It
is also possible to write an expression which must return a number
@ -16561,7 +16567,7 @@ crt-list <file>
Server Name Indication field matching one of the SNI filters, or the CN and
SAN of a <crtfile>. The matching algorithm first looks for a positive domain
entry in the list, if not found it will try to look for a wildcard in the
list. If a wilcard match, haproxy checks for a negative filter from the
list. If a wildcard match, haproxy checks for a negative filter from the
same line and unmatch if necessary. In case of multiple key algorithms
(RSA,ECDSA,DSA), HAProxy will try to match one certificate per type and
chose the right one depending on what is supported by the client.
@ -16576,7 +16582,7 @@ crt-list <file>
certificate, either from crt or crt-list option.
It is also possible to declare a '*' filter, which will add this
certificate to the list of default certificates. To clarify the
configuration, the default certificates could be explicited (with a '*'
configuration, the default certificates could be explicit (with a '*'
filter) at the beginning of the list, so an implicit default is not added
before.
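As an illustration of the matching rules above, a crt-list file could look
like this (file names and domains are placeholders):

    default.pem *
    wildcard.pem *.example.com !static.example.com
    static.pem static.example.com

Here "static.example.com" is excluded from the wildcard entry by the negative
filter and served by its own certificate, while "default.pem" is explicitly
declared as a default certificate at the beginning of the list.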
Due to multi-cert bundles being duplicated for each algorithm in the
@ -17079,6 +17085,16 @@ strict-sni
disabled on a "bind" line using "no-strict-sni". See the "crt" option for
more information. See "add ssl crt-list" command in the management guide.
tcp-md5sig <password>
Enables the TCP MD5 signature (RFC 2385 Protection of BGP Sessions via the
TCP MD5 Signature Option) for all incoming connections instantiated from this
listening socket. This option is only available on Linux. When enabled,
the <password> string is used to sign every TCP segment with a 16-byte MD5
digest. This will protect the TCP connection against spoofing. The primary
use case for this option is to allow BGP to protect itself against the
introduction of spoofed TCP segments into the connection stream. But it can
be useful for any very long-lived TCP connections.
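A minimal sketch of a listening socket protected this way (address, port and
password are placeholders; both ends of the connection must be configured with
the same password):

    frontend bgp_in
        mode tcp
        bind 192.0.2.1:179 tcp-md5sig "s3cr3t"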
tcp-ut <delay>
Sets the TCP User Timeout for all incoming connections instantiated from this
listening socket. This option is available on Linux since version 2.6.37. It
@ -17170,7 +17186,7 @@ tls-tickets
This setting is only available when support for OpenSSL was built in. It
enables the stateless session resumption (RFC 5077 TLS Ticket extension). It
is the default, but it may be needed to selectively re-enable the feature on
a "bind" line if it had been globaly disabled via "no-tls-tickets" mentioned
a "bind" line if it had been globally disabled via "no-tls-tickets" mentioned
in "ssl-default-bind-options". See also the "no-tls-tickets" bind keyword.
tls-ticket-keys <keyfile>
@ -18067,10 +18083,10 @@ no-renegotiate
This setting is only available when support for OpenSSL was built in. It
disables the renegotiation mechanisms, be it the legacy unsafe one or the
more recent "secure renegotation" one (RFC 5746 TLS Renegotiation Indication
more recent "secure renegotiation" one (RFC 5746 TLS Renegotiation Indication
Extension) for the given SSL backend. This option is also available on global
statement "ssl-default-server-options".
Renegotiation is not posible anymore in TLS 1.3.
Renegotiation is not possible anymore in TLS 1.3.
If neither "renegotiate" nor "no-renegotiate" is specified, the SSL library's
default behavior is kept.
Note that for instance OpenSSL library enables secure renegotiation by
@ -18414,7 +18430,7 @@ renegotiate
backends to renegotiate when servers request it. It still requires that the
underlying SSL library actually supports renegotiation.
This option is also available on global statement "ssl-default-server-options".
Renegotiation is not posible anymore in TLS 1.3.
Renegotiation is not possible anymore in TLS 1.3.
If neither "renegotiate" nor "no-renegotiate" is specified, the SSL library's
default behavior is kept.
Note that for instance OpenSSL library enables secure renegotiation by
@ -18766,6 +18782,18 @@ socks4 <addr>:<port>
server. Using this option won't force the health check to go via socks4 by
default. You will have to use the keyword "check-via-socks4" to enable it.
tcp-md5sig <password>
May be used in the following contexts: tcp, http, log, peers, ring
Enables the TCP MD5 signature (RFC 2385 Protection of BGP Sessions via the
TCP MD5 Signature Option) for all outgoing connections to this server. This
option is only available on Linux. When enabled, the <password> string is used
to sign every TCP segment with a 16-byte MD5 digest. This will protect the TCP
connection against spoofing. The primary use case for this option is to allow
BGP to protect itself against the introduction of spoofed TCP segments into
the connection stream. But it can be useful for any very long-lived TCP
connections.
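A minimal sketch on the server side (addresses and password are placeholders
and must match what the remote peer expects):

    backend bgp_out
        mode tcp
        server router1 192.0.2.254:179 tcp-md5sig "s3cr3t"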
tcp-ut <delay>
May be used in the following contexts: tcp, http, log, peers, ring
@ -19873,6 +19901,7 @@ and(value) integer integer
b64dec string binary
base64 binary string
be2dec(separator,chunk_size[,truncate]) binary string
le2dec(separator,chunk_size[,truncate]) binary string
be2hex([separator[,chunk_size[,truncate]]]) binary string
bool integer boolean
bytes(offset[,length]) binary binary
@ -20113,6 +20142,19 @@ be2dec(<separator>,<chunk_size>[,<truncate>])
bin(01020304050607),be2dec(,2,1) # 2587721286
bin(7f000001),be2dec(.,1) # 127.0.0.1
le2dec(<separator>,<chunk_size>[,<truncate>])
Converts little-endian binary input sample to a string containing an unsigned
integer number per <chunk_size> input bytes. <separator> is inserted every
<chunk_size> binary input bytes if specified. The <truncate> flag indicates
whether the binary input is truncated at <chunk_size> boundaries. The maximum
value for <chunk_size> is limited by the size of long long int (8 bytes).
Example:
bin(01020304050607),le2dec(:,2) # 513:1027:1541:7
bin(01020304050607),le2dec(-,2,1) # 513-1027-1541
bin(01020304050607),le2dec(,2,1) # 51310271541
bin(7f000001),le2dec(.,1) # 127.0.0.1
be2hex([<separator>[,<chunk_size>[,<truncate>]]])
Converts big-endian binary input sample to a hex string containing two hex
digits per input byte. It is used to log or transfer hex dumps of some
@ -20518,11 +20560,19 @@ jwt_payload_query([<json_path>[,<output_type>]])
jwt_verify(<alg>,<key>)
Performs a signature verification for the JSON Web Token (JWT) given in input
by using the <alg> algorithm and the <key> parameter, which should either
hold a secret or a path to a public certificate. Returns 1 in case of
verification success, 0 in case of verification error and a strictly negative
value for any other error. Because of all those non-null error return values,
the result of this converter should never be converted to a boolean. See
below for a full list of the possible return values.
hold a secret, a path to a public key or a path to a public certificate. When
using a public key, it should either be in the PKCS#1 format (for RSA keys,
starting with BEGIN RSA PUBLIC KEY) or SPKI format (Subject Public Key Info,
starting with BEGIN PUBLIC KEY). Certificates should be a regular PEM
certificate (starting with BEGIN CERTIFICATE). If a full-on certificate is
used, it can either be used directly in the converter or passed via a
variable if it was already known by haproxy (previously loaded in a crt-store
for instance).
Returns 1 in case of verification success, 0 in case of verification failure
and a strictly negative value for any other error. Because of all those
non-null error return values, the result of this converter should never be
converted to a boolean. See below for a full list of the possible return
values.
For now, only JWS tokens using the Compact Serialization format can be
processed (three dot-separated base64-url encoded strings). All the
@ -20531,16 +20581,19 @@ jwt_verify(<alg>,<key>)
If the used algorithm is of the HMAC family, <key> should be the secret used
in the HMAC signature calculation. Otherwise, <key> should be the path to the
public certificate that can be used to validate the token's signature. All
the certificates that might be used to verify JWTs must be known during init
in order to be added into a dedicated certificate cache so that no disk
access is required during runtime. For this reason, any used certificate must
be mentioned explicitly at least once in a jwt_verify call. Passing an
intermediate variable as second parameter is then not advised.
public key or certificate that can be used to validate the token's signature.
All the public keys and certificates that might be used to verify JWTs must
be known during init in order to be added into a dedicated cache so that no
disk access is required during runtime. For this reason, any used public key
must be mentioned explicitly at least once in a jwt_verify call and every
certificate used must be loaded by haproxy (in a crt-store or mentioned
explicitly in a 'jwt_verify' call). Passing a variable as second parameter is
then not advised unless you only use certificates that fill one of those
prerequisites.
This converter only verifies the signature of the token and does not perform
a full JWT validation as specified in section 7.2 of RFC7519. We do not
ensure that the header and payload contents are fully valid JSON's once
ensure that the header and payload contents are fully valid JSONs once
decoded for instance, and no checks are performed regarding their respective
contents.
@ -20556,6 +20609,7 @@ jwt_verify(<alg>,<key>)
| -3 | "Invalid token" |
| -4 | "Out of memory" |
| -5 | "Unknown certificate" |
| -6 | "Internal error" |
+----+----------------------------------------------------------------------+
Please note that this converter is only available when HAProxy has been
@ -20567,7 +20621,7 @@ jwt_verify(<alg>,<key>)
http-request set-var(txn.bearer) http_auth_bearer
http-request set-var(txn.jwt_alg) var(txn.bearer),jwt_header_query('$.alg')
http-request deny unless { var(txn.jwt_alg) -m str "RS256" }
http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"/path/to/crt.pem") 1 }
http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"/path/to/cert.pem") 1 }
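As a sketch of the variable-based form discussed above (only workable because
"/path/to/cert.pem" is already referenced explicitly in a jwt_verify call or
otherwise loaded by haproxy, e.g. in a crt-store; the variable name is an
arbitrary choice):

    http-request set-var(txn.jwt_cert) str(/path/to/cert.pem)
    http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,txn.jwt_cert) 1 }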
language(<value>[,<default>])
Returns the value with the highest q-factor from a list as extracted from the
@ -29655,7 +29709,7 @@ table <tablename> type {ip | integer | string [len <length>] | binary [len <leng
The sections described below are less commonly used and usually support only a
few parameters. There is no implicit relation between any of them. They're all
started using a single keyword. None of them is permitted before a "global"
section. The support for some of them might be conditionned by build options
section. The support for some of them might be conditioned by build options
(e.g. anything SSL-related).
12.1. Traces

View File

@ -935,7 +935,7 @@ Core class
Give back the hand at the HAProxy scheduler. Unlike :js:func:`core.yield`
the task will not be woken up automatically to resume as fast as possible.
Instead, it will wait for an event to wake the task. If milliseconds argument
is provided then the Lua excecution will be automatically resumed passed this
is provided then the Lua execution will be automatically resumed passed this
delay even if no event caused the task to wake itself up.
:param integer milliseconds: automatic wakeup passed this delay. (optional)
@ -945,7 +945,7 @@ Core class
**context**: task, action
Give back the hand at the HAProxy scheduler. It is used when the LUA
processing consumes a lot of processing time. Lua excecution will be resumed
processing consumes a lot of processing time. Lua execution will be resumed
automatically (automatic reschedule).
.. js:function:: core.parse_addr(address)
@ -1089,6 +1089,14 @@ Core class
perform the heavy job in a dedicated task and allow remaining events to be
processed more quickly.
.. js:function:: core.use_native_mailers_config()
**context**: body
Inform haproxy that the script will make use of the native "mailers"
config section (although legacy). In other words, inform haproxy that
:js:func:`Proxy.get_mailers()` will be used later in the program.
.. _proxy_class:
Proxy class
@ -1216,8 +1224,14 @@ Proxy class
**LEGACY**
Returns a table containing mailers config for the current proxy or nil
if mailers are not available for the proxy.
Returns a table containing legacy mailers config (from haproxy configuration
file) for the current proxy or nil if mailers are not available for the proxy.
.. warning::
When relying on :js:func:`Proxy.get_mailers()` to retrieve mailers
configuration, :js:func:`core.use_native_mailers_config()` must be called
first from body or init context to inform haproxy that Lua makes use of the
legacy mailers config.
:param class_proxy px: A :ref:`proxy_class` which indicates the manipulated
proxy.

View File

@ -1346,9 +1346,10 @@ The first column designates the object or metric being dumped. Its format is
specific to the command producing this output and will not be described in this
section. Usually it will consist in a series of identifiers and field names.
The second column contains 3 characters respectively indicating the origin, the
nature and the scope of the value being reported. The first character (the
origin) indicates where the value was extracted from. Possible characters are :
The second column contains 4 characters respectively indicating the origin, the
nature, the scope and the persistence state of the value being reported. The
first character (the origin) indicates where the value was extracted from.
Possible characters are :
M The value is a metric. It is valid at one instant and may change depending
on its nature.
@ -1464,7 +1465,16 @@ characters are currently supported :
current date or resource usage. At the moment this scope is not used by
any metric.
Consumers of these information will generally have enough of these 3 characters
The fourth character (persistence state) indicates whether the value (the metric)
is volatile or persistent across reloads. The following characters are expected :
V The metric is volatile because it is local to the current process so
the value will be lost when reloading.
P The metric is persistent because it may be shared with other co-processes
so that the value is preserved across reloads.
Consumers of this information will generally have enough of these 4 characters
to determine how to accurately report aggregated information across multiple
processes.
@ -2299,7 +2309,7 @@ help [<command>]
the requested one. The same help screen is also displayed for unknown
commands.
httpclient <method> <URI>
httpclient [--htx] <method> <URI>
Launch an HTTP client request and print the response on the CLI. Only
supported on a CLI connection running in expert mode (see "expert-mode on").
It's only meant for debugging. The httpclient is able to resolve a server
@ -2308,6 +2318,9 @@ httpclient <method> <URI>
able to resolve a host from /etc/hosts if you don't use a local dns daemon
which can resolve those.
The --htx option allows using the haproxy internal htx representation via
the htx_dump() function, mainly used for debugging.
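For example, assuming a stats socket at /var/run/haproxy.sock (the path is
illustrative), the HTX dump of a response can be obtained with:

    $ echo "expert-mode on; httpclient --htx GET https://www.haproxy.org/" | \
        socat /var/run/haproxy.sock -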
new ssl ca-file <cafile>
Create a new empty CA file tree entry to be filled with a set of CA
certificates and added to a crt-list. This command should be used in
@ -2358,7 +2371,7 @@ prompt [help | n | i | p | timed]*
Without any option, this will cycle through prompt mode then non-interactive
mode. In non-interactive mode, the connection is closed after the last
command of the current line compltes. In interactive mode, the connection is
command of the current line completes. In interactive mode, the connection is
not closed after a command completes, so that a new one can be entered. In
prompt mode, the interactive mode is still in use, and a prompt will appear
at the beginning of the line, indicating to the user that the interpreter is
@ -2989,18 +3002,19 @@ show info [typed|json] [desc] [float]
(...)
> show info typed
0.Name.1:POS:str:HAProxy
1.Version.1:POS:str:1.7-dev1-de52ea-146
2.Release_date.1:POS:str:2016/03/11
3.Nbproc.1:CGS:u32:1
4.Process_num.1:KGP:u32:1
5.Pid.1:SGP:u32:28105
6.Uptime.1:MDP:str:0d 0h00m08s
7.Uptime_sec.1:MDP:u32:8
8.Memmax_MB.1:CLP:u32:0
9.PoolAlloc_MB.1:MGP:u32:0
10.PoolUsed_MB.1:MGP:u32:0
11.PoolFailed.1:MCP:u32:0
0.Name.1:POSV:str:HAProxy
1.Version.1:POSV:str:3.1-dev0-7c653d-2466
2.Release_date.1:POSV:str:2025/07/01
3.Nbthread.1:CGSV:u32:1
4.Nbproc.1:CGSV:u32:1
5.Process_num.1:KGPV:u32:1
6.Pid.1:SGPV:u32:638069
7.Uptime.1:MDPV:str:0d 0h00m07s
8.Uptime_sec.1:MDPV:u32:7
9.Memmax_MB.1:CLPV:u32:0
10.PoolAlloc_MB.1:MGPV:u32:0
11.PoolUsed_MB.1:MGPV:u32:0
12.PoolFailed.1:MCPV:u32:0
(...)
In the typed format, the presence of the process ID at the end of the
@ -3481,10 +3495,11 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
The rest of the line starting after the first colon follows the "typed output
format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature and scope of the variable. The third
column indicates the field type, among "s32", "s64", "u32", "u64", "flt' and
"str". Then the fourth column is the value itself, which the consumer knows
how to parse thanks to column 3 and how to process thanks to column 2.
first ':') indicates the origin, nature, scope and persistence state of the
variable. The third column indicates the field type, among "s32", "s64",
"u32", "u64", "flt' and "str". Then the fourth column is the value itself,
which the consumer knows how to parse thanks to column 3 and how to process
thanks to column 2.
When "desc" is appended to the command, one extra colon followed by a quoted
string is appended with a description for the metric. At the time of writing,
@ -3497,37 +3512,32 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
Here's an example of typed output format :
$ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
F.2.0.0.pxname.1:MGP:str:private-frontend
F.2.0.1.svname.1:MGP:str:FRONTEND
F.2.0.8.bin.1:MGP:u64:0
F.2.0.9.bout.1:MGP:u64:0
F.2.0.40.hrsp_2xx.1:MGP:u64:0
L.2.1.0.pxname.1:MGP:str:private-frontend
L.2.1.1.svname.1:MGP:str:sock-1
L.2.1.17.status.1:MGP:str:OPEN
L.2.1.73.addr.1:MGP:str:0.0.0.0:8001
S.3.13.60.rtime.1:MCP:u32:0
S.3.13.61.ttime.1:MCP:u32:0
S.3.13.62.agent_status.1:MGP:str:L4TOUT
S.3.13.64.agent_duration.1:MGP:u64:2001
S.3.13.65.check_desc.1:MCP:str:Layer4 timeout
S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout
S.3.13.67.check_rise.1:MCP:u32:2
S.3.13.68.check_fall.1:MCP:u32:3
S.3.13.69.check_health.1:SGP:u32:0
S.3.13.70.agent_rise.1:MaP:u32:1
S.3.13.71.agent_fall.1:SGP:u32:1
S.3.13.72.agent_health.1:SGP:u32:1
S.3.13.73.addr.1:MCP:str:1.255.255.255:8888
S.3.13.75.mode.1:MAP:str:http
B.3.0.0.pxname.1:MGP:str:private-backend
B.3.0.1.svname.1:MGP:str:BACKEND
B.3.0.2.qcur.1:MGP:u32:0
B.3.0.3.qmax.1:MGP:u32:0
B.3.0.4.scur.1:MGP:u32:0
B.3.0.5.smax.1:MGP:u32:0
B.3.0.6.slim.1:MGP:u32:1000
B.3.0.55.lastsess.1:MMP:s32:-1
F.2.0.0.pxname.1:KNSV:str:dummy
F.2.0.1.svname.1:KNSV:str:FRONTEND
F.2.0.4.scur.1:MGPV:u32:0
F.2.0.5.smax.1:MMPV:u32:0
F.2.0.6.slim.1:CLPV:u32:524269
F.2.0.7.stot.1:MCPP:u64:0
F.2.0.8.bin.1:MCPP:u64:0
F.2.0.9.bout.1:MCPP:u64:0
F.2.0.10.dreq.1:MCPP:u64:0
F.2.0.11.dresp.1:MCPP:u64:0
F.2.0.12.ereq.1:MCPP:u64:0
F.2.0.17.status.1:SGPV:str:OPEN
F.2.0.26.pid.1:KGPV:u32:1
F.2.0.27.iid.1:KGSV:u32:2
F.2.0.28.sid.1:KGSV:u32:0
F.2.0.32.type.1:CGSV:u32:0
F.2.0.33.rate.1:MRPP:u32:0
F.2.0.34.rate_lim.1:CLPV:u32:0
F.2.0.35.rate_max.1:MMPV:u32:0
F.2.0.46.req_rate.1:MRPP:u32:0
F.2.0.47.req_rate_max.1:MMPV:u32:0
F.2.0.48.req_tot.1:MCPP:u64:0
F.2.0.51.comp_in.1:MCPP:u64:0
F.2.0.52.comp_out.1:MCPP:u64:0
F.2.0.53.comp_byp.1:MCPP:u64:0
F.2.0.54.comp_rsp.1:MCPP:u64:0
(...)
In the typed format, the presence of the process ID at the end of the
@ -3538,20 +3548,20 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
$ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
B.3.0.0.pxname.1:MGP:str:private-backend
B.3.0.0.pxname.2:MGP:str:private-backend
B.3.0.1.svname.1:MGP:str:BACKEND
B.3.0.1.svname.2:MGP:str:BACKEND
B.3.0.2.qcur.1:MGP:u32:0
B.3.0.2.qcur.2:MGP:u32:0
B.3.0.3.qmax.1:MGP:u32:0
B.3.0.3.qmax.2:MGP:u32:0
B.3.0.4.scur.1:MGP:u32:0
B.3.0.4.scur.2:MGP:u32:0
B.3.0.5.smax.1:MGP:u32:0
B.3.0.5.smax.2:MGP:u32:0
B.3.0.6.slim.1:MGP:u32:1000
B.3.0.6.slim.2:MGP:u32:1000
B.3.0.0.pxname.1:KNSV:str:private-backend
B.3.0.0.pxname.2:KNSV:str:private-backend
B.3.0.1.svname.1:KNSV:str:BACKEND
B.3.0.1.svname.2:KNSV:str:BACKEND
B.3.0.2.qcur.1:MGPV:u32:0
B.3.0.2.qcur.2:MGPV:u32:0
B.3.0.3.qmax.1:MMPV:u32:0
B.3.0.3.qmax.2:MMPV:u32:0
B.3.0.4.scur.1:MGPV:u32:0
B.3.0.4.scur.2:MGPV:u32:0
B.3.0.5.smax.1:MMPV:u32:0
B.3.0.5.smax.2:MMPV:u32:0
B.3.0.6.slim.1:CLPV:u32:1000
B.3.0.6.slim.2:CLPV:u32:1000
(...)
The format of JSON output is described in a schema which may be output

View File

@ -364,6 +364,10 @@ local function srv_event_add(event, data)
mailers_track_server_events(data.reference)
end
-- tell haproxy that we do use the legacy native "mailers" config section
-- which allows us to retrieve mailers configuration using Proxy:get_mailers()
core.use_native_mailers_config()
-- event subscriptions are purposely performed in an init function to prevent
-- email alerts from being generated too early (when process is starting up)
core.register_init(function()

View File

@ -31,7 +31,7 @@ struct acme_cfg {
};
enum acme_st {
ACME_RESSOURCES = 0,
ACME_RESOURCES = 0,
ACME_NEWNONCE,
ACME_CHKACCOUNT,
ACME_NEWACCOUNT,
@ -51,9 +51,11 @@ enum http_st {
};
struct acme_auth {
struct ist dns; /* dns entry */
struct ist auth; /* auth URI */
struct ist chall; /* challenge URI */
struct ist token; /* token */
int ready; /* is the challenge ready ? */
void *next;
};
@ -70,7 +72,7 @@ struct acme_ctx {
struct ist newNonce;
struct ist newAccount;
struct ist newOrder;
} ressources;
} resources;
struct ist nonce;
struct ist kid;
struct ist order;
@ -79,6 +81,20 @@ struct acme_ctx {
X509_REQ *req;
struct ist finalize;
struct ist certificate;
struct task *task;
struct mt_list el;
};
#define ACME_EV_SCHED (1ULL << 0) /* scheduling wakeup */
#define ACME_EV_NEW (1ULL << 1) /* new task */
#define ACME_EV_TASK (1ULL << 2) /* Task handler */
#define ACME_EV_REQ (1ULL << 3) /* HTTP Request */
#define ACME_EV_RES (1ULL << 4) /* HTTP Response */
#define ACME_VERB_CLEAN 1
#define ACME_VERB_MINIMAL 2
#define ACME_VERB_SIMPLE 3
#define ACME_VERB_ADVANCED 4
#define ACME_VERB_COMPLETE 5
#endif

View File

@ -81,9 +81,13 @@ static forceinline char *appctx_show_flags(char *buf, size_t len, const char *de
#undef _
}
#define APPLET_FL_NEW_API 0x00000001 /* Set if the applet is based on the new API (using applet's buffers) */
#define APPLET_FL_WARNED 0x00000002 /* Set when warning was already emitted about a legacy applet */
/* Applet descriptor */
struct applet {
enum obj_type obj_type; /* object type = OBJ_TYPE_APPLET */
unsigned int flags; /* APPLET_FL_* flags */
/* 3 unused bytes here */
char *name; /* applet's name to report in logs */
int (*init)(struct appctx *); /* callback to init resources, may be NULL.

View File

@ -116,7 +116,7 @@ static inline int appctx_init(struct appctx *appctx)
* the appctx will be fully initialized. The session and the stream will
* eventually be created. The affinity must be set now !
*/
BUG_ON(appctx->t->tid != tid);
BUG_ON(appctx->t->tid != -1 && appctx->t->tid != tid);
task_set_thread(appctx->t, tid);
if (appctx->applet->init)
@ -288,8 +288,11 @@ static inline void applet_expect_data(struct appctx *appctx)
*/
static inline struct buffer *applet_get_inbuf(struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (applet_fl_test(appctx, APPCTX_FL_INBLK_ALLOC) || !appctx_get_buf(appctx, &appctx->inbuf))
return NULL;
return &appctx->inbuf;
}
else
return sc_ob(appctx_sc(appctx));
}
@ -300,8 +303,12 @@ static inline struct buffer *applet_get_inbuf(struct appctx *appctx)
*/
static inline struct buffer *applet_get_outbuf(struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
if (applet_fl_test(appctx, APPCTX_FL_OUTBLK_ALLOC|APPCTX_FL_OUTBLK_FULL) ||
!appctx_get_buf(appctx, &appctx->outbuf))
return NULL;
return &appctx->outbuf;
}
else
return sc_ib(appctx_sc(appctx));
}
@ -315,6 +322,15 @@ static inline size_t applet_input_data(const struct appctx *appctx)
return co_data(sc_oc(appctx_sc(appctx)));
}
/* Returns the amount of HTX data in the input buffer (see applet_get_inbuf) */
static inline size_t applet_htx_input_data(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return htx_used_space(htxbuf(&appctx->inbuf));
else
return co_data(sc_oc(appctx_sc(appctx)));
}
/* Skips <len> bytes from the input buffer (see applet_get_inbuf).
*
* This is useful when data have been read directly from the buffer. It is
@ -324,8 +340,10 @@ static inline size_t applet_input_data(const struct appctx *appctx)
*/
static inline void applet_skip_input(struct appctx *appctx, size_t len)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
if (appctx->flags & APPCTX_FL_INOUT_BUFS) {
b_del(&appctx->inbuf, len);
applet_fl_clr(appctx, APPCTX_FL_INBLK_FULL);
}
else
co_skip(sc_oc(appctx_sc(appctx)), len);
}
@ -352,6 +370,16 @@ static inline size_t applet_output_room(const struct appctx *appctx)
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Returns the amount of space available in the HTX output buffer (see applet_get_outbuf).
*/
static inline size_t applet_htx_output_room(const struct appctx *appctx)
{
if (appctx->flags & APPCTX_FL_INOUT_BUFS)
return htx_free_data_space(htxbuf(&appctx->outbuf));
else
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Indicates that the applet has more data to deliver and needs more room in
* the output buffer to do so (see applet_get_outbuf).
*
@ -643,7 +671,7 @@ static inline int applet_getword(const struct appctx *appctx, char *str, int len
}
p = b_head(buf);
ret = 0;
while (max) {
*str++ = *p;
ret++;

View File

@ -86,7 +86,7 @@ static inline int be_usable_srv(struct proxy *be)
/* set the time of last session on the backend */
static inline void be_set_sess_last(struct proxy *be)
{
HA_ATOMIC_STORE(&be->be_counters.shared->tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
HA_ATOMIC_STORE(&be->be_counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}
/* This function returns non-zero if the designated server will be

View File

@ -28,7 +28,7 @@
extern struct timeval start_date; /* the process's start date in wall-clock time */
extern struct timeval ready_date; /* date when the process was considered ready */
extern ullong start_time_ns; /* the process's start date in internal monotonic time (ns) */
extern volatile ullong global_now_ns; /* common monotonic date between all threads, in ns (wraps every 585 yr) */
extern volatile ullong *global_now_ns;/* common monotonic date between all threads, in ns (wraps every 585 yr) */
extern THREAD_LOCAL ullong now_ns; /* internal monotonic date derived from real clock, in ns (wraps every 585 yr) */
extern THREAD_LOCAL struct timeval date; /* the real current date (wall-clock time) */

View File

@ -350,7 +350,7 @@
* <type> which has its member <name> stored at address <ptr>.
*/
#ifndef container_of
#define container_of(ptr, type, name) ((type *)(((void *)(ptr)) - ((long)&((type *)0)->name)))
#define container_of(ptr, type, name) ((type *)(((char *)(ptr)) - offsetof(type, name)))
#endif
/* returns a pointer to the structure of type <type> which has its member <name>
@ -359,7 +359,7 @@
#ifndef container_of_safe
#define container_of_safe(ptr, type, name) \
({ void *__p = (ptr); \
__p ? (type *)(__p - ((long)&((type *)0)->name)) : (type *)0; \
__p ? (type *)((char *)__p - offsetof(type, name)) : (type *)0; \
})
#endif

View File

@ -68,6 +68,50 @@ struct ssl_sock_ctx;
* conn_cond_update_polling().
*/
/* A bit of explanation is required for backend connection reuse. A connection
* may be shared between multiple streams of the same thread (e.g. h2, fcgi,
* quic) and may be reused by subsequent streams of a different thread if it
* is totally idle (i.e. not used at all). In order to permit other streams
* to find a connection, it has to appear in lists and/or trees that reflect
* its current state. If the connection is full and cannot be shared anymore,
* it is not in any of such places. The various states are the following:
*
* - private: a private connection is not visible to other threads. It is
* attached via its <idle_list> member to the <conn_list> head of a
* sess_priv_conns struct specific to the server, itself attached to the
* session. Only other streams of the same session may find this connection.
* Such connections include totally idle connections as well as connections
* with available slots left. The <hash_node> part is still used to store
* the hash key but the tree node part is otherwise left unused.
*
* - avail: an available connection is a connection that has at least one
* stream in use and at least one slot available for a new stream. Such a
* connection is indexed in the server's <avail_conns> member based on the
* key of the hash_node. It cannot be used by other threads, and is not
* present in the server's <idle_conn_list>, so its <idle_list> member is
* always empty. Since this connection is in use by a single thread and
* cannot be taken over, it doesn't require any locking to enter/leave the
* tree.
*
* - safe: a safe connection is an idle connection that has proven that it
* could reliably be reused. Such a connection may be taken over at any
* instant by other threads, and must only be manipulated under the server's
* <idle_lock>. It is indexed in the server's <safe_conns> member based on
* the key of the hash_node. It is attached to the server's <idle_conn_list>
* via its <idle_list> member. It may be purged after too long inactivity,
* though the thread responsible for doing this will first take it over. Such
* a connection has (conn->flags & CO_FL_LIST_MASK) = CO_FL_SAFE_LIST.
*
* - idle: a purely idle connection has not yet proven that it could reliably
* be reused. Such a connection may be taken over at any instant by other
* threads, and must only be manipulated under the server's <idle_lock>. It
* is indexed in the server's <idle_conns> member based on the key of the
* hash_node. It is attached to the server's <idle_conn_list> via its
* <idle_list> member. It may be purged after too long inactivity, though the
* thread responsible for doing this will first take it over. Such a
* connection has (conn->flags & CO_FL_LIST_MASK) = CO_FL_IDLE_LIST.
*/
/* flags for use in connection->flags. Please also update the conn_show_flags()
* function below in case of changes.
*/

View File

@ -35,7 +35,7 @@
};
#define COUNTERS_SHARED_TG \
struct { \
unsigned long last_change; /* last time, when the state was changed */\
unsigned long last_state_change; /* last time, when the state was changed */\
long long srv_aborts; /* aborted responses during DATA phase caused by the server */\
long long cli_aborts; /* aborted responses during DATA phase caused by the client */\
long long internal_errors; /* internal processing errors */\
@ -92,7 +92,7 @@ struct fe_counters_shared {
};
struct fe_counters {
struct fe_counters_shared *shared; /* shared counters */
struct fe_counters_shared shared; /* shared counters */
unsigned int conn_max; /* max # of active sessions */
unsigned int cps_max; /* maximum of new connections received per second */
@ -145,7 +145,7 @@ struct be_counters_shared {
/* counters used by servers and backends */
struct be_counters {
struct be_counters_shared *shared; /* shared counters */
struct be_counters_shared shared; /* shared counters */
unsigned int conn_max; /* max # of active sessions */
unsigned int cps_max; /* maximum of new connections received per second */

View File

@ -27,8 +27,8 @@
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h>
struct fe_counters_shared *counters_fe_shared_get(const struct guid_node *guid);
struct be_counters_shared *counters_be_shared_get(const struct guid_node *guid);
int counters_fe_shared_prepare(struct fe_counters_shared *counters, const struct guid_node *guid);
int counters_be_shared_prepare(struct be_counters_shared *counters, const struct guid_node *guid);
void counters_fe_shared_drop(struct fe_counters_shared *counters);
void counters_be_shared_drop(struct be_counters_shared *counters);

View File

@ -2,6 +2,7 @@
#define _HAPROXY_CPU_TOPO_H
#include <haproxy/api.h>
#include <haproxy/chunk.h>
#include <haproxy/cpuset-t.h>
#include <haproxy/cpu_topo-t.h>
@ -55,7 +56,12 @@ int cpu_map_configured(void);
/* Dump the CPU topology <topo> for up to cpu_topo_maxcpus CPUs for
* debugging purposes. Offline CPUs are skipped.
*/
void cpu_dump_topology(const struct ha_cpu_topo *topo);
void cpu_topo_debug(const struct ha_cpu_topo *topo);
/* Dump the summary of CPU topology <topo>, i.e. clusters info and thread-cpu
* bindings.
*/
void cpu_topo_dump_summary(const struct ha_cpu_topo *topo, struct buffer *trash);
/* re-order a CPU topology array by locality to help form groups. */
void cpu_reorder_by_locality(struct ha_cpu_topo *topo, int entries);

View File

@ -115,6 +115,10 @@
// via standard input.
#define MAX_CFG_SIZE 10485760
// may be handy for some system config files, where we just need to find
// some specific values (read with fgets)
#define MAX_LINES_TO_READ 32
// max # args on a configuration line
#define MAX_LINE_ARGS 64

View File

@ -65,6 +65,7 @@ int h1_format_htx_reqline(const struct htx_sl *sl, struct buffer *chk);
int h1_format_htx_stline(const struct htx_sl *sl, struct buffer *chk);
int h1_format_htx_hdr(const struct ist n, const struct ist v, struct buffer *chk);
int h1_format_htx_data(const struct ist data, struct buffer *chk, int chunked);
int h1_format_htx_msg(const struct htx *htx, struct buffer *outbuf);
#endif /* _HAPROXY_H1_HTX_H */

View File

@ -72,8 +72,8 @@ struct stream;
#define HLUA_NOYIELD 0x00000020
#define HLUA_BUSY 0x00000040
#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_HTTP 0x02
#define HLUA_F_AS_STRING 0x01
#define HLUA_F_MAY_USE_CHANNELS_DATA 0x02
/* HLUA TXN flags */
#define HLUA_TXN_NOTERM 0x00000001

View File

@ -32,6 +32,7 @@ struct httpclient {
int timeout_server; /* server timeout in ms */
void *caller; /* ptr of the caller */
unsigned int flags; /* other flags */
unsigned int options; /* options */
struct proxy *px; /* proxy for special cases */
struct server *srv_raw; /* server for clear connections */
#ifdef USE_OPENSSL
@ -42,11 +43,16 @@ struct httpclient {
/* Action (FA) to do */
#define HTTPCLIENT_FA_STOP 0x00000001 /* stops the httpclient at the next IO handler call */
#define HTTPCLIENT_FA_AUTOKILL 0x00000002 /* sets the applet to destroy the httpclient struct itself */
#define HTTPCLIENT_FA_DRAIN_REQ 0x00000004 /* drains the request */
/* status (FS) */
#define HTTPCLIENT_FS_STARTED 0x00010000 /* the httpclient was started */
#define HTTPCLIENT_FS_ENDED 0x00020000 /* the httpclient is stopped */
/* options */
#define HTTPCLIENT_O_HTTPPROXY 0x00000001 /* the request must be use an absolute URI */
#define HTTPCLIENT_O_RES_HTX 0x00000002 /* response is stored in HTX */
/* States of the HTTP Client Appctx */
enum {
HTTPCLIENT_S_REQ = 0,
@ -59,12 +65,4 @@ enum {
#define HTTPCLIENT_USERAGENT "HAProxy"
/* What kind of data we need to read */
#define HC_F_RES_STLINE 0x01
#define HC_F_RES_HDR 0x02
#define HC_F_RES_BODY 0x04
#define HC_F_RES_END 0x08
#define HC_F_HTTPPROXY 0x10
#endif /* ! _HAPROXY_HTTCLIENT__T_H */

View File

@ -64,8 +64,17 @@ enum jwt_elt {
JWT_ELT_MAX
};
enum jwt_entry_type {
JWT_ENTRY_DFLT,
JWT_ENTRY_STORE,
JWT_ENTRY_PKEY,
JWT_ENTRY_INVALID, /* already tried looking into ckch_store tree (unsuccessful) */
};
struct jwt_cert_tree_entry {
EVP_PKEY *pkey;
EVP_PKEY *pubkey;
struct ckch_store *ckch_store;
int type; /* jwt_entry_type */
struct ebmb_node node;
char path[VAR_ARRAY];
};
@ -78,7 +87,8 @@ enum jwt_vrfy_status {
JWT_VRFY_UNMANAGED_ALG = -2,
JWT_VRFY_INVALID_TOKEN = -3,
JWT_VRFY_OUT_OF_MEMORY = -4,
JWT_VRFY_UNKNOWN_CERT = -5
JWT_VRFY_UNKNOWN_CERT = -5,
JWT_VRFY_INTERNAL_ERR = -6
};
#endif /* USE_OPENSSL */

View File

@ -28,10 +28,13 @@
#ifdef USE_OPENSSL
enum jwt_alg jwt_parse_alg(const char *alg_str, unsigned int alg_len);
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int *item_num);
int jwt_tree_load_cert(char *path, int pathlen, char **err);
int jwt_tree_load_cert(char *path, int pathlen, const char *file, int line, char **err);
enum jwt_vrfy_status jwt_verify(const struct buffer *token, const struct buffer *alg,
const struct buffer *key);
void jwt_replace_ckch_store(struct ckch_store *old_ckchs, struct ckch_store *new_ckchs);
#endif /* USE_OPENSSL */
#endif /* _HAPROXY_JWT_H */

View File

@ -97,7 +97,7 @@
* since it's used only once.
* Example: LIST_ELEM(cur_node->args.next, struct node *, args)
*/
#define LIST_ELEM(lh, pt, el) ((pt)(((const char *)(lh)) - ((size_t)&((pt)NULL)->el)))
#define LIST_ELEM(lh, pt, el) ((pt)(((const char *)(lh)) - offsetof(typeof(*(pt)NULL), el)))
/* checks if the list head <lh> is empty or not */
#define LIST_ISEMPTY(lh) ((lh)->n == (lh))
@ -284,10 +284,11 @@ static __inline void watcher_attach(struct watcher *w, void *target)
MT_LIST_APPEND(list, &w->el);
}
/* Untracks target via <w> watcher. Invalid if <w> is not attached first. */
/* Untracks target via <w> watcher. Does nothing if <w> is not attached */
static __inline void watcher_detach(struct watcher *w)
{
BUG_ON_HOT(!MT_LIST_INLIST(&w->el));
if (!MT_LIST_INLIST(&w->el))
return;
*w->pptr = NULL;
MT_LIST_DELETE(&w->el);
}

View File

@ -193,6 +193,7 @@ struct bind_conf {
unsigned int analysers; /* bitmap of required protocol analysers */
int maxseg; /* for TCP, advertised MSS */
int tcp_ut; /* for TCP, user timeout */
char *tcp_md5sig; /* TCP MD5 signature password (RFC2385) */
int idle_ping; /* MUX idle-ping interval in ms */
int maxaccept; /* if set, max number of connections accepted at once (-1 when disabled) */
unsigned int backlog; /* if set, listen backlog */

View File

@ -31,6 +31,7 @@
#include <haproxy/proxy-t.h>
#include <haproxy/server-t.h>
extern int mailers_used_from_lua;
extern struct mailers *mailers;
int init_email_alert(struct mailers *mailers, struct proxy *p, char **err);

View File

@ -18,6 +18,7 @@
#include <haproxy/quic_pacing-t.h>
#include <haproxy/quic_stream-t.h>
#include <haproxy/quic_utils-t.h>
#include <haproxy/session-t.h>
#include <haproxy/stconn-t.h>
#include <haproxy/task-t.h>
#include <haproxy/time-t.h>
@ -146,6 +147,7 @@ struct qc_stream_rxbuf {
struct qcs {
struct qcc *qcc;
struct session *sess; /* only set for backend conns */
struct sedesc *sd;
uint32_t flags; /* QC_SF_* */
enum qcs_state st; /* QC_SS_* state */
@ -209,7 +211,7 @@ struct qcc_app_ops {
ssize_t (*rcv_buf)(struct qcs *qcs, struct buffer *b, int fin);
/* Convert HTX to HTTP payload for sending. */
size_t (*snd_buf)(struct qcs *qcs, struct buffer *b, size_t count);
size_t (*snd_buf)(struct qcs *qcs, struct buffer *b, size_t count, char *fin);
/* Negotiate and commit fast-forward data from opposite MUX. */
size_t (*nego_ff)(struct qcs *qcs, size_t count);
@ -275,6 +277,7 @@ static forceinline char *qcc_show_flags(char *buf, size_t len, const char *delim
#define QC_SF_TO_STOP_SENDING 0x00000200 /* a STOP_SENDING must be sent */
#define QC_SF_UNKNOWN_PL_LENGTH 0x00000400 /* HTX EOM may be missing from the stream layer */
#define QC_SF_RECV_RESET 0x00000800 /* a RESET_STREAM was received */
#define QC_SF_EOI_SUSPENDED 0x00001000 /* EOI must not be reported even if HTX EOM is present - useful when transferring HTTP interim responses */
/* This function is used to report flags in debugging tools. Please reflect
* below any single-bit flag addition above in the same order via the

View File

@ -47,10 +47,16 @@
#ifdef USE_QUIC_OPENSSL_COMPAT
#include <haproxy/quic_openssl_compat.h>
#else
#define HAVE_OPENSSL_QUIC_CLIENT_SUPPORT
#if defined(OSSL_FUNC_SSL_QUIC_TLS_CRYPTO_SEND)
/* This macro is defined by the new OpenSSL 3.5.0 QUIC TLS API and it is not
* defined by quictls.
*/
#if defined(USE_QUIC) && (OPENSSL_VERSION_NUMBER < 0x30500010L)
#error "OpenSSL 3.5 QUIC API should only be used with OpenSSL 3.5.1 version and newer"
#endif
#define HAVE_OPENSSL_QUIC
#define SSL_set_quic_transport_params SSL_set_quic_tls_transport_params
#define SSL_set_quic_early_data_enabled SSL_set_quic_tls_early_data_enabled
@ -63,6 +69,9 @@ enum ssl_encryption_level_t {
ssl_encryption_application
};
#else
/* QUIC TLS API */
#define HAVE_OPENSSL_QUICTLS
#endif
#endif /* USE_QUIC_OPENSSL_COMPAT */

View File

@ -33,6 +33,9 @@
extern const char *const pat_match_names[PAT_MATCH_NUM];
extern int const pat_match_types[PAT_MATCH_NUM];
extern unsigned long long patterns_added;
extern unsigned long long patterns_freed;
extern int (*const pat_parse_fcts[PAT_MATCH_NUM])(const char *, struct pattern *, int, char **);
extern int (*const pat_index_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pattern *, char **);
extern void (*const pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);

View File

@ -40,10 +40,5 @@ int peers_register_table(struct peers *, struct stktable *table);
void peers_setup_frontend(struct proxy *fe);
void peers_register_keywords(struct peers_kw_list *pkwl);
static inline enum obj_type *peer_session_target(struct peer *p, struct stream *s)
{
return &p->srv->obj_type;
}
#endif /* _HAPROXY_PEERS_H */

View File

@ -311,7 +311,7 @@ struct proxy {
char flags; /* bit field PR_FL_* */
enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */
char cap; /* supported capabilities (PR_CAP_*) */
/* 4-bytes hole */
unsigned long last_change; /* internal use only: last time the proxy state was changed */
struct list global_list; /* list member for global proxy list */
@ -352,7 +352,7 @@ struct proxy {
#ifdef USE_QUIC
struct list quic_init_rules; /* quic-initial rules */
#endif
struct server *srv, defsrv; /* known servers; default server configuration */
struct server *srv, *defsrv; /* known servers; default server configuration */
struct lbprm lbprm; /* load-balancing parameters */
int srv_act, srv_bck; /* # of servers eligible for LB (UP|!checked) AND (enabled+weight!=0) */
int served; /* # of active sessions currently being served */

View File

@ -61,7 +61,6 @@ void proxy_store_name(struct proxy *px);
struct proxy *proxy_find_by_id(int id, int cap, int table);
struct proxy *proxy_find_by_name(const char *name, int cap, int table);
struct proxy *proxy_find_best_match(int cap, const char *name, int id, int *diff);
struct server *findserver(const struct proxy *px, const char *name);
int proxy_cfg_ensure_no_http(struct proxy *curproxy);
int proxy_cfg_ensure_no_log(struct proxy *curproxy);
void init_new_proxy(struct proxy *p);
@ -136,10 +135,10 @@ static inline void proxy_reset_timeouts(struct proxy *proxy)
/* increase the number of cumulated connections received on the designated frontend */
static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
{
_HA_ATOMIC_INC(&fe->fe_counters.shared->tg[tgid - 1]->cum_conn);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->shared->tg[tgid - 1]->cum_conn);
update_freq_ctr(&fe->fe_counters.shared->tg[tgid - 1]->conn_per_sec, 1);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
}
@ -148,10 +147,10 @@ static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
{
_HA_ATOMIC_INC(&fe->fe_counters.shared->tg[tgid - 1]->cum_sess);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->shared->tg[tgid - 1]->cum_sess);
update_freq_ctr(&fe->fe_counters.shared->tg[tgid - 1]->sess_per_sec, 1);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
}
@ -163,19 +162,19 @@ static inline void proxy_inc_fe_cum_sess_ver_ctr(struct listener *l, struct prox
unsigned int http_ver)
{
if (http_ver == 0 ||
http_ver > sizeof(fe->fe_counters.shared->tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared->tg[tgid - 1]->cum_sess_ver))
http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver))
return;
_HA_ATOMIC_INC(&fe->fe_counters.shared->tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->shared->tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
}
/* increase the number of cumulated streams on the designated backend */
static inline void proxy_inc_be_ctr(struct proxy *be)
{
_HA_ATOMIC_INC(&be->be_counters.shared->tg[tgid - 1]->cum_sess);
update_freq_ctr(&be->be_counters.shared->tg[tgid - 1]->sess_per_sec, 1);
_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
update_freq_ctr(&be->be_counters._sess_per_sec, 1));
}
@ -187,13 +186,13 @@ static inline void proxy_inc_be_ctr(struct proxy *be)
static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
unsigned int http_ver)
{
if (http_ver >= sizeof(fe->fe_counters.shared->tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared->tg[tgid - 1]->p.http.cum_req))
if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req))
return;
_HA_ATOMIC_INC(&fe->fe_counters.shared->tg[tgid - 1]->p.http.cum_req[http_ver]);
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (l && l->counters)
_HA_ATOMIC_INC(&l->counters->shared->tg[tgid - 1]->p.http.cum_req[http_ver]);
update_freq_ctr(&fe->fe_counters.shared->tg[tgid - 1]->req_per_sec, 1);
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
}

View File

@ -448,7 +448,7 @@ struct quic_conn_closed {
#define QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED (1U << 0)
#define QUIC_FL_CONN_SPIN_BIT (1U << 1) /* Spin bit set by remote peer */
#define QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS (1U << 2) /* HANDSHAKE_DONE must be sent */
#define QUIC_FL_CONN_LISTENER (1U << 3)
/* gap here */
#define QUIC_FL_CONN_ACCEPT_REGISTERED (1U << 4)
/* gap here */
#define QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ (1U << 6)
@ -488,7 +488,6 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
_(QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED,
_(QUIC_FL_CONN_SPIN_BIT,
_(QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS,
_(QUIC_FL_CONN_LISTENER,
_(QUIC_FL_CONN_ACCEPT_REGISTERED,
_(QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ,
_(QUIC_FL_CONN_RETRANS_NEEDED,
@ -508,7 +507,7 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
_(QUIC_FL_CONN_EXP_TIMER,
_(QUIC_FL_CONN_CLOSING,
_(QUIC_FL_CONN_DRAINING,
_(QUIC_FL_CONN_IMMEDIATE_CLOSE))))))))))))))))))))))));
_(QUIC_FL_CONN_IMMEDIATE_CLOSE)))))))))))))))))))))));
/* epilogue */
_(~0U);
return buf;

View File

@ -82,11 +82,6 @@ void qc_check_close_on_released_mux(struct quic_conn *qc);
int quic_stateless_reset_token_cpy(unsigned char *pos, size_t len,
const unsigned char *salt, size_t saltlen);
static inline int qc_is_listener(struct quic_conn *qc)
{
return qc->flags & QUIC_FL_CONN_LISTENER;
}
/* Free the CIDs attached to <conn> QUIC connection. */
static inline void free_quic_conn_cids(struct quic_conn *conn)
{

View File

@ -37,7 +37,6 @@ int ssl_quic_initial_ctx(struct bind_conf *bind_conf);
SSL_CTX *ssl_quic_srv_new_ssl_ctx(void);
int qc_alloc_ssl_sock_ctx(struct quic_conn *qc, struct connection *conn);
int qc_ssl_provide_all_quic_data(struct quic_conn *qc, struct ssl_sock_ctx *ctx);
int quic_ssl_set_tls_cbs(SSL *ssl);
static inline void qc_free_ssl_sock_ctx(struct ssl_sock_ctx **ctx)
{

View File

@ -3,8 +3,7 @@
#define QUIC_MIN_CC_PKTSIZE 128
#define QUIC_DGRAM_HEADLEN (sizeof(uint16_t) + sizeof(void *))
#define QUIC_MAX_CC_BUFSIZE (2 * (QUIC_MIN_CC_PKTSIZE + QUIC_DGRAM_HEADLEN))
#define QUIC_BE_MAX_CC_BUFSIZE MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)
#define QUIC_MAX_CC_BUFSIZE MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)
/* Sendmsg input buffer cannot be bigger than 65535 bytes. This comes from UDP
* header which uses a 2-bytes length field. QUIC datagrams are limited to 1252
@ -22,7 +21,6 @@
extern struct pool_head *pool_head_quic_tx_packet;
extern struct pool_head *pool_head_quic_cc_buf;
extern struct pool_head *pool_head_quic_be_cc_buf;
/* Flag a sent packet as being an ack-eliciting packet. */
#define QUIC_FL_TX_PACKET_ACK_ELICITING (1UL << 0)

View File

@ -348,15 +348,6 @@ static inline void sc_sync_send(struct stconn *sc)
{
if (sc_ep_test(sc, SE_FL_T_MUX))
sc_conn_sync_send(sc);
else if (sc_ep_test(sc, SE_FL_T_APPLET)) {
sc_applet_sync_send(sc);
if (sc_oc(sc)->flags & CF_WRITE_EVENT) {
/* Data was send, wake the applet up. It is safe to do so because sc_applet_sync_send()
* removes CF_WRITE_EVENT flag from the channel before trying to send data to the applet.
*/
task_wakeup(__sc_appctx(sc)->t, TASK_WOKEN_OTHER);
}
}
}
/* Combines both sc_update_rx() and sc_update_tx() at once */
@ -395,7 +386,19 @@ static inline int sc_is_send_allowed(const struct stconn *sc)
if (sc->flags & SC_FL_SHUT_DONE)
return 0;
return !sc_ep_test(sc, SE_FL_WAIT_DATA | SE_FL_WONT_CONSUME);
if (!sc_appctx(sc) || !(__sc_appctx(sc)->flags & APPCTX_FL_INOUT_BUFS))
return !sc_ep_test(sc, SE_FL_WAIT_DATA | SE_FL_WONT_CONSUME);
if (sc_ep_test(sc, SE_FL_WONT_CONSUME))
return 0;
if (sc_ep_test(sc, SE_FL_WAIT_DATA)) {
if (__sc_appctx(sc)->flags & (APPCTX_FL_INBLK_FULL|APPCTX_FL_INBLK_ALLOC))
return 0;
if (!co_data(sc_oc(sc)))
return 0;
}
return 1;
}
static inline int sc_rcv_may_expire(const struct stconn *sc)

View File

@ -355,6 +355,7 @@ struct server {
short onmarkedup; /* what to do when marked up: one of HANA_ONMARKEDUP_* */
int slowstart; /* slowstart time in seconds (ms in the conf) */
int idle_ping; /* MUX idle-ping interval in ms */
unsigned long last_change; /* internal use only (not for stats purpose): last time the server state was changed, doesn't change often, not updated atomically on purpose */
char *id; /* just for identification */
uint32_t rid; /* revision: if id has been reused for a new server, rid won't match */
@ -430,6 +431,7 @@ struct server {
int puid; /* proxy-unique server ID, used for SNMP, and "first" LB algo */
int tcp_ut; /* for TCP, user timeout */
char *tcp_md5sig; /* TCP MD5 signature password (RFC2385) */
int do_check; /* temporary variable used during parsing to denote if health checks must be enabled */
int do_agent; /* temporary variable used during parsing to denote if an auxiliary agent check must be enabled */
@ -442,7 +444,7 @@ struct server {
char *lastaddr; /* the address string provided by the server-state file */
struct resolv_options resolv_opts;
int hostname_dn_len; /* string length of the server hostname in Domain Name format */
char *hostname_dn; /* server hostname in Domain Name format */
char *hostname_dn; /* server hostname in Domain Name format (name is lower cased) */
char *hostname; /* server hostname */
struct sockaddr_storage init_addr; /* plain IP address specified on the init-addr line */
unsigned int init_addr_methods; /* initial address setting, 3-bit per method, ends at 0, enough to store 10 entries */

View File

@ -59,10 +59,11 @@ const char *srv_update_addr_port(struct server *s, const char *addr, const char
const char *server_inetaddr_updater_by_to_str(enum server_inetaddr_updater_by by);
const char *srv_update_check_addr_port(struct server *s, const char *addr, const char *port);
const char *srv_update_agent_addr_port(struct server *s, const char *addr, const char *port);
struct server *server_find_by_id(struct proxy *bk, int id);
struct server *server_find_by_id_unique(struct proxy *bk, int id, uint32_t rid);
struct server *server_find_by_name(struct proxy *bk, const char *name);
struct server *server_find_by_name_unique(struct proxy *bk, const char *name, uint32_t rid);
struct server *server_find_by_name(struct proxy *px, const char *name);
struct server *server_find_by_addr(struct proxy *px, const char *addr);
struct server *server_find(struct proxy *bk, const char *name);
struct server *server_find_unique(struct proxy *bk, const char *name, uint32_t rid);
struct server *server_find_best_match(struct proxy *bk, char *name, int id, int *diff);
void apply_server_state(void);
void srv_compute_all_admin_states(struct proxy *px);
@ -181,8 +182,8 @@ const struct mux_ops *srv_get_ws_proto(struct server *srv);
/* increase the number of cumulated streams on the designated server */
static inline void srv_inc_sess_ctr(struct server *s)
{
_HA_ATOMIC_INC(&s->counters.shared->tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared->tg[tgid - 1]->sess_per_sec, 1);
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
HA_ATOMIC_UPDATE_MAX(&s->counters.sps_max,
update_freq_ctr(&s->counters._sess_per_sec, 1));
}
@ -190,7 +191,7 @@ static inline void srv_inc_sess_ctr(struct server *s)
/* set the time of last session on the designated server */
static inline void srv_set_sess_last(struct server *s)
{
HA_ATOMIC_STORE(&s->counters.shared->tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}
/* returns the current server throttle rate between 0 and 100% */
@ -343,6 +344,18 @@ static inline void srv_detach(struct server *srv)
}
}
/* Returns a pointer to the first server matching id <id> in backend <bk>.
* NULL is returned if no match is found.
*/
static inline struct server *server_find_by_id(struct proxy *bk, int id)
{
struct eb32_node *eb32;
eb32 = eb32_lookup(&bk->conf.used_server_id, id);
return eb32 ? container_of(eb32, struct server, conf.id) : NULL;
}
static inline int srv_is_quic(const struct server *srv)
{
#ifdef USE_QUIC

View File

@ -61,6 +61,8 @@ struct session {
struct list priv_conns; /* list of private conns */
struct sockaddr_storage *src; /* source address (pool), when known, otherwise NULL */
struct sockaddr_storage *dst; /* destination address (pool), when known, otherwise NULL */
struct fe_counters_shared_tg *fe_tgcounters; /* pointer to current thread group shared frontend counters */
struct fe_counters_shared_tg *li_tgcounters; /* pointer to current thread group shared listener counters */
};
/*

View File

@ -171,25 +171,31 @@ static inline void session_unown_conn(struct session *sess, struct connection *c
}
}
/* Add the connection <conn> to the private conns list of session <sess>. This
* function is called only if the connection is private. Nothing is performed
* if the connection is already in the session list or if the session does not
* owned the connection.
/* Add the connection <conn> to the private conns list of session <sess>. Each
* connection is indexed by its target in the session. Nothing is
* performed if the connection is already in the session list.
*
* Returns true if conn is inserted or already present else false if a failure
* occurs during insertion.
*/
static inline int session_add_conn(struct session *sess, struct connection *conn, void *target)
static inline int session_add_conn(struct session *sess, struct connection *conn)
{
struct sess_priv_conns *pconns = NULL;
struct server *srv = objt_server(conn->target);
int found = 0;
BUG_ON(objt_listener(conn->target));
/* Connection target is used to index it in the session. Only BE conns are expected in session list. */
BUG_ON(!conn->target || objt_listener(conn->target));
/* Already attach to the session or not the connection owner */
if (!LIST_ISEMPTY(&conn->sess_el) || (conn->owner && conn->owner != sess))
/* A connection cannot be attached already to another session. */
BUG_ON(conn->owner && conn->owner != sess);
/* Already attach to the session */
if (!LIST_ISEMPTY(&conn->sess_el))
return 1;
list_for_each_entry(pconns, &sess->priv_conns, sess_el) {
if (pconns->target == target) {
if (pconns->target == conn->target) {
found = 1;
break;
}
@ -199,7 +205,7 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
pconns = pool_alloc(pool_head_sess_priv_conns);
if (!pconns)
return 0;
pconns->target = target;
pconns->target = conn->target;
LIST_INIT(&pconns->conn_list);
LIST_APPEND(&sess->priv_conns, &pconns->sess_el);
@ -219,25 +225,34 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
return 1;
}
/* Returns 0 if the session can keep the idle conn, -1 if it was destroyed. The
* connection must be private.
/* Check that session <sess> is able to keep idle connection <conn>. This must
* be called each time a connection stored in a session becomes idle.
*
* Returns 0 if the connection is kept, else non-zero if the connection was
* explicitly removed from the session.
*/
static inline int session_check_idle_conn(struct session *sess, struct connection *conn)
{
/* Another session owns this connection */
if (conn->owner != sess)
/* Connection must be attached to session prior to this function call. */
BUG_ON(!conn->owner || conn->owner != sess);
/* Connection is not attached to a session. */
if (!conn->owner)
return 0;
/* Ensure conn is not already accounted as idle, to prevent the session idle count from being incremented twice. */
BUG_ON(conn->flags & CO_FL_SESS_IDLE);
if (sess->idle_conns >= sess->fe->max_out_conns) {
session_unown_conn(sess, conn);
conn->owner = NULL;
conn->flags &= ~CO_FL_SESS_IDLE;
conn->mux->destroy(conn->ctx);
return -1;
} else {
}
else {
conn->flags |= CO_FL_SESS_IDLE;
sess->idle_conns++;
}
return 0;
}
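A sketch of the intended call pattern once a private backend connection turns idle; exactly where a mux performs this check is an assumption here:
/* Ask the session whether the idle connection may be kept. A non-zero
 * return means it was removed from the session and destroyed by the
 * helper, so it must not be dereferenced afterwards. */
if (session_check_idle_conn(sess, conn) != 0)
    return; /* conn is gone */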

View File

@ -73,6 +73,8 @@ struct ckch_conf {
} acme;
};
struct jwt_cert_tree_entry;
/*
* this is used to store 1 to SSL_SOCK_NUM_KEYTYPES cert_key_and_chain and
* metadata.
@ -88,6 +90,7 @@ struct ckch_store {
struct list crtlist_entry; /* list of entries which use this store */
struct ckch_conf conf;
struct task *acme_task;
struct jwt_cert_tree_entry *jwt_entry;
struct ebmb_node node;
char path[VAR_ARRAY];
};

View File

@ -1,18 +1,7 @@
/*
* include/haproxy/ssl_trace-t.h
* Definitions for SSL traces internal types, constants and flags.
*
* Copyright (C) 2025
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
*/
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#ifndef _HAPROXY_SSL_TRACE_T_H
#define _HAPROXY_SSL_TRACE_T_H
#ifndef _HAPROXY_SSL_TRACE_H
#define _HAPROXY_SSL_TRACE_H
#include <haproxy/trace-t.h>
@ -33,7 +22,16 @@ extern struct trace_source trace_ssl;
#define SSL_EV_CONN_SWITCHCTX_CB (1ULL << 12)
#define SSL_EV_CONN_CHOOSE_SNI_CTX (1ULL << 13)
#define SSL_EV_CONN_SIGALG_EXT (1ULL << 14)
#define SSL_EV_CONN_CIPHERS_EXT (1ULL << 15)
#define SSL_EV_CONN_CURVES_EXT (1ULL << 16)
#define SSL_VERB_CLEAN 1
#define SSL_VERB_MINIMAL 2
#define SSL_VERB_SIMPLE 3
#define SSL_VERB_ADVANCED 4
#define SSL_VERB_COMPLETE 5
#define TRACE_SOURCE &trace_ssl
#endif /* _HAPROXY_SSL_TRACE_T_H */
#endif /* _HAPROXY_SSL_TRACE_H */
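Assuming these constants map onto the usual trace verbosity keywords (an assumption, not something this diff states), the new ssl trace source would typically be driven from the CLI, for example:
trace ssl verbosity advanced
trace ssl start now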

View File

@ -55,6 +55,7 @@ time_t x509_get_notbefore_time_t(X509 *cert);
int curves2nid(const char *curve);
const char *nid2nist(int nid);
const char *sigalg2str(int sigalg);
const char *curveid2str(int curve_id);
#endif /* _HAPROXY_SSL_UTILS_H */
#endif /* USE_OPENSSL */

View File

@ -337,6 +337,8 @@ enum stat_idx_info {
ST_I_INF_CURR_STRM,
ST_I_INF_CUM_STRM,
ST_I_INF_WARN_BLOCKED,
ST_I_INF_PATTERNS_ADDED,
ST_I_INF_PATTERNS_FREED,
/* must always be the last one */
ST_I_INF_MAX

View File

@ -73,7 +73,7 @@ int stats_dump_stat_to_buffer(struct stconn *sc, struct buffer *buf, struct htx
int stats_emit_raw_data_field(struct buffer *out, const struct field *f);
int stats_emit_typed_data_field(struct buffer *out, const struct field *f);
int stats_emit_field_tags(struct buffer *out, const struct field *f,
char delim);
int persistent, char delim);
/* Returns true if <col> is fully defined, false if only used as name-desc. */

View File

@ -226,7 +226,7 @@ struct stktable {
unsigned int update; /* uses updt_lock */
unsigned int localupdate; /* uses updt_lock */
unsigned int commitupdate;/* used to identify the latest local updates pending for sync, uses updt_lock */
struct tasklet *updt_task;/* tasklet responsable for pushing the pending updates into the tree */
struct tasklet *updt_task;/* tasklet responsible for pushing the pending updates into the tree */
THREAD_ALIGN(64);
/* this lock is heavily used and must be on its own cache line */

View File

@ -340,6 +340,8 @@ struct stream {
int hostname_dn_len; /* size of hostname_dn */
/* 4 unused bytes here, recoverable via packing if needed */
} resolv_ctx; /* context information for DNS resolution */
struct be_counters_shared_tg *be_tgcounters; /* pointer to current thread group shared backend counters */
struct be_counters_shared_tg *sv_tgcounters; /* pointer to current thread group shared server counters */
};
#endif /* _HAPROXY_STREAM_T_H */

View File

@ -362,8 +362,8 @@ static inline void stream_choose_redispatch(struct stream *s)
s->scb->state = SC_ST_REQ;
} else {
if (objt_server(s->target))
_HA_ATOMIC_INC(&__objt_server(s->target)->counters.shared->tg[tgid - 1]->retries);
_HA_ATOMIC_INC(&s->be->be_counters.shared->tg[tgid - 1]->retries);
_HA_ATOMIC_INC(&s->sv_tgcounters->retries);
_HA_ATOMIC_INC(&s->be_tgcounters->retries);
s->scb->state = SC_ST_ASS;
}
@ -432,6 +432,11 @@ static inline void stream_report_term_evt(struct stconn *sc, enum strm_term_even
sc->term_evts_log = tevt_report_event(sc->term_evts_log, loc, type);
}
static inline void stream_set_srv_target(struct stream *s, struct server *srv)
{
s->target = &srv->obj_type;
s->sv_tgcounters = srv->counters.shared.tg[tgid - 1];
}
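A hedged sketch of the call pattern this helper enables; the server selection step is hypothetical and only there to make the example complete:
/* Instead of assigning s->target directly, callers go through the helper
 * so that s->sv_tgcounters points at the chosen server's counters for
 * the current thread group. */
struct server *srv = pick_server_for_stream(s); /* hypothetical selection helper */
if (srv)
    stream_set_srv_target(s, srv);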
int stream_set_timeout(struct stream *s, enum act_timeout_name name, int timeout);
void stream_retnclose(struct stream *s, const struct buffer *msg);

View File

@ -217,6 +217,7 @@ enum lock_label {
QC_CID_LOCK,
CACHE_LOCK,
GUID_LOCK,
JWT_LOCK,
OTHER_LOCK,
/* WT: make sure never to use these ones outside of development,
* we need them for lock profiling!

View File

@ -64,7 +64,7 @@
/* currently updated and stored in time.c */
extern THREAD_LOCAL unsigned int now_ms; /* internal date in milliseconds (may wrap) */
extern volatile unsigned int global_now_ms;
extern volatile unsigned int *global_now_ms;
/* return 1 if tick is set, otherwise 0 */
static inline int tick_isset(int expire)

View File

@ -87,7 +87,7 @@ struct mt_list {
*
* return MT_LIST_ELEM(cur_node->args.next, struct node *, args)
*/
#define MT_LIST_ELEM(a, t, m) ((t)(size_t)(((size_t)(a)) - ((size_t)&((t)NULL)->m)))
#define MT_LIST_ELEM(a, t, m) ((t)(size_t)(((size_t)(a)) - offsetof(typeof(*(t)NULL), m)))
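For illustration, a small self-contained sketch of what the offsetof-based form computes, assuming the mt_list header providing struct mt_list and MT_LIST_ELEM is included; the struct and function names are made up for the example:
struct node {
    int val;
    struct mt_list list; /* embedded list element */
};
/* Recover the enclosing struct node from a pointer to its <list> member;
 * this is the classical container_of() pattern. */
static inline struct node *node_of(struct mt_list *elt)
{
    return MT_LIST_ELEM(elt, struct node *, list);
}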
/* Returns a pointer of type <t> to a structure following the element which

View File

@ -21,6 +21,11 @@ server s1 {
} -start
haproxy h1 -arg "-L A" -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -47,6 +47,11 @@ server s1 {
} -start
haproxy h1 -arg "-L A" -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -22,6 +22,11 @@ server s4 {
} -repeat 2 -start
haproxy h1 -arg "-L A" -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -22,6 +22,11 @@ server s4 {
} -repeat 5 -start
haproxy h1 -arg "-L A" -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -22,6 +22,11 @@ server s4 {
} -repeat 2 -start
haproxy h1 -arg "-L A" -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -23,6 +23,11 @@ server s1 {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -113,6 +113,10 @@ server s2 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -39,6 +39,10 @@ server s3 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -39,6 +39,10 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -24,6 +24,10 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -21,6 +21,11 @@ server s1 {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -43,6 +43,10 @@ server s3 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -165,6 +165,10 @@ server s2 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -91,6 +91,10 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

View File

@ -56,6 +56,11 @@ server s38 {} -start
server s39 {} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -238,6 +238,11 @@ server s37 {} -start
server s39 {} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -92,6 +92,11 @@ syslog S4 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -55,6 +55,11 @@ server s2 {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -15,6 +15,11 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -24,6 +24,11 @@ syslog S1 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -108,6 +108,11 @@ syslog S6 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -90,6 +90,11 @@ syslog S1 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -7,6 +7,11 @@ varnishtest "Test the HTTP directive monitor-uri"
feature ignore_unknown_macro
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -50,6 +50,11 @@ syslog S4 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -63,6 +63,11 @@ syslog S5 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -48,6 +48,11 @@ syslog S4 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -29,6 +29,11 @@ syslog S2 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -64,6 +64,11 @@ syslog S5 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -48,6 +48,11 @@ syslog S4 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -24,6 +24,10 @@ syslog S3 -level notice {
haproxy htst -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif

View File

@ -16,6 +16,10 @@ syslog S_ok -level notice {
haproxy htst -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
ssl-default-bind-options ssl-min-ver TLSv1.2 ssl-max-ver TLSv1.3
defaults

View File

@ -29,6 +29,10 @@ syslog S4 -level notice {
haproxy htst -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif

View File

@ -30,6 +30,11 @@ server s2 {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -19,6 +19,11 @@ server s1 {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -30,6 +30,11 @@ syslog S1 -level notice {
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode tcp
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -34,6 +34,10 @@ syslog S1 -level notice {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif
@ -85,6 +89,10 @@ syslog S6 -level notice {
haproxy h2 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif

View File

@ -177,6 +177,11 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

View File

@ -106,6 +106,10 @@ server s1 {
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.

Some files were not shown because too many files have changed in this diff.