Compare commits


189 Commits

Christopher Faulet
d1c7e56585 BUG/MINOR: config: Properly test warnif_misplaced_* return values
warnif_misplaced_* functions return 1 when a warning is reported and 0
otherwise. So the caller must properly handle the return value.

When parsing a proxy, the ERR_WARN code must be added to the error code, not
the raw return value. When a warning was reported, ERR_RETRYABLE (1) was
added instead of ERR_WARN.

And when TCP rules were parsed, warnings were ignored: the messages were
emitted but the return values were discarded.
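The correct calling pattern can be sketched as follows (ERR_WARN and ERR_RETRYABLE are real HAProxy error-code names, but the bit values and the helper below are illustrative stand-ins, not HAProxy's code):

```c
#include <assert.h>
#include <stdio.h>

/* illustrative bit values; only the distinction matters here */
#define ERR_RETRYABLE 0x1
#define ERR_WARN      0x2

/* mimics a warnif_misplaced_* checker: returns 1 when a warning was reported */
static int warnif_misplaced(int misplaced)
{
    if (misplaced) {
        fprintf(stderr, "warning: rule is misplaced\n");
        return 1;
    }
    return 0;
}

static int parse_proxy_rule(int misplaced)
{
    int err_code = 0;

    /* buggy form: err_code |= warnif_misplaced(misplaced);
     * this ORs in bit 1, i.e. ERR_RETRYABLE, whenever a warning is emitted */
    if (warnif_misplaced(misplaced))
        err_code |= ERR_WARN;  /* correct: translate "warned" into ERR_WARN */
    return err_code;
}
```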

This patch should be backported to all stable versions.
2026-03-27 07:35:25 +01:00
Christopher Faulet
4e99cddde4 BUG/MINOR: config: Warn only if warnif_cond_conflicts report a conflict
When warnif_cond_conflicts() is called, we must take care to emit a warning
only when a conflict is reported. We cannot rely on the err_code variable
because some warnings may have been already reported. We now rely on the
errmsg variable: if it contains something, a warning is emitted. It is good
enough because warnif_cond_conflicts() only reports warnings.

This patch should fix the issue #3305. It is a 3.4-dev specific issue. No
backport needed.
2026-03-27 07:35:25 +01:00
Olivier Houchard
0e36267aac MEDIUM: server: remove a useless memset() in srv_update_check_addr_port.
Remove a memset() that should not be there and that tries to zero a NULL pointer.
2026-03-26 16:43:48 +01:00
Olivier Houchard
1b0dfff552 MEDIUM: connections: Enforce mux protocol requirements
When picking a mux, pay attention to its MX_FL_FRAMED flag. If it is set,
it means we explicitly want QUIC, so don't use that mux for any protocol
that is not QUIC.
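The selection rule can be sketched like this (MX_FL_FRAMED is the real flag name; the mux descriptor and the helper are simplified stand-ins for illustration):

```c
#include <assert.h>

#define MX_FL_FRAMED 0x1  /* illustrative bit: mux requires a framed (QUIC) transport */

/* simplified stand-in for the real mux descriptor */
struct mux_ops { unsigned int flags; };

/* a mux flagged MX_FL_FRAMED may only be picked for a QUIC protocol */
static int mux_matches_proto(const struct mux_ops *mux, int proto_is_quic)
{
    if ((mux->flags & MX_FL_FRAMED) && !proto_is_quic)
        return 0;
    return 1;
}
```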
2026-03-26 15:09:13 +01:00
Olivier Houchard
d3ad730d5f MINOR: protocols: Add a new proto_is_quic() function
Add a new function, proto_is_quic(), that returns true if the protocol
is QUIC (i.e. it uses a datagram socket but provides a stream transport).
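The criterion above (datagram socket, stream transport) can be sketched as follows; the enum and struct layout are simplified illustrations, not HAProxy's actual definitions:

```c
#include <assert.h>

enum proto_type { PROTO_TYPE_STREAM, PROTO_TYPE_DGRAM };

/* simplified stand-in for the real protocol descriptor */
struct protocol {
    enum proto_type sock_type;  /* type of the underlying socket */
    enum proto_type xprt_type;  /* type of transport exposed above it */
};

/* QUIC is the only case of a datagram socket carrying a stream transport */
static int proto_is_quic(const struct protocol *proto)
{
    return proto->sock_type == PROTO_TYPE_DGRAM &&
           proto->xprt_type == PROTO_TYPE_STREAM;
}
```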
2026-03-26 15:09:13 +01:00
Olivier Houchard
cca9245416 MINOR: checks: Store the protocol to be used in struct check
When parsing the check address, store the associated proto too.
That way a notation like quic4@address can be used, and the right
protocol will be selected. It is possible for checks to use a different
protocol than the server, i.e. we can have a QUIC server but want to run
TCP checks, so we can't just reuse whatever the server uses.
WIP: store the protocol in checks
2026-03-26 15:09:13 +01:00
Olivier Houchard
07edaed191 BUG/MEDIUM: check: Don't reuse the server xprt if we should not
Don't assume the check will reuse the server's xprt. It may not be true
if some settings, such as the ALPN, have been set and differ from the
server's. If the server is QUIC and we want to use TCP for checks,
we certainly don't want to reuse its XPRT.
2026-03-26 15:09:13 +01:00
William Lallemand
1c1d9d2500 BUG/MINOR: acme: permission checks on the CLI
Permission checks on the CLI for ACME are missing.

This patch adds a check on the ACME commands
so they can only be run in admin mode.

ACME is still a feature in experimental-mode.

Initial report by Cameron Brown.

Must be backported to 3.2 and later.
2026-03-25 18:37:47 +01:00
William Lallemand
47987ccbd9 BUG/MINOR: ech: permission checks on the CLI
Permission checks on the CLI for ECH are missing.

This patch adds a check for "(add|set|del|show) ssl ech" commands
so they can only be run in admin mode.

ECH is still a feature in experimental-mode and is not compiled by
default.

Initial report by Cameron Brown.

Must be backported to 3.3.
2026-03-25 18:37:06 +01:00
William Lallemand
33041fe91f BUILD: tools: potential null pointer dereference in dl_collect_libs_cb
This patch fixes a warning that can be reproduced with gcc-8.5 on RHEL8
(gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28)).

This should fix issue #3303.

Must be backported everywhere 917e82f283 ("MINOR: debug: copy debug
symbols from /usr/lib/debug when present") was backported, which is
to branch 3.2 for now.
2026-03-23 21:52:56 +01:00
William Lallemand
8e250bba8f BUG/MINOR: acme/cli: fix argument check and error in 'acme challenge_ready'
Fix the argument check of the 'acme challenge_ready' command, which
was checking whether all arguments are NULL instead of any one of them.

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
c7564c19a2 BUG/MINOR: acme: replace atol with len-bounded __strl2uic() for retry-after
Replace atol() with __strl2uic() in the cases where the input is an IST
when parsing the retry-after header. There's no risk of error since it
will stop at the first non-digit.
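Why this matters: an IST is a pointer-plus-length slice that may not be NUL-terminated, so atol() can read past the end. A len-bounded parser in the spirit of __strl2uic() looks like this (a sketch, not the actual implementation):

```c
#include <assert.h>

/* parse an unsigned decimal from a non-NUL-terminated slice,
 * stopping at the first non-digit or at the end of the slice */
static unsigned int strl2uic_sketch(const char *s, int len)
{
    unsigned int ret = 0;

    while (len-- > 0 && *s >= '0' && *s <= '9')
        ret = ret * 10 + (unsigned int)(*s++ - '0');
    return ret;
}
```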

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
efbf0f8ed1 BUG/MINOR: acme: free() DER buffer on a2base64url error path
In acme_req_finalize(), the data buffer is only freed when a2base64url
succeeds. This patch moves the allocation so that the DER buffer is
free()d in all cases.

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
52d8ee85e7 BUG/MINOR: acme: NULL check on my_strndup()
Add a NULL check on my_strndup().

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
Christopher Faulet
38a7d8599d DOC: config: Reorder params for 'tcp-check expect' directive
The order of parameters for the 'tcp-check expect' directive is changed to
be the same as for 'http-check expect'.
2026-03-23 14:02:43 +01:00
Christopher Faulet
82afd36b6c DOC: config: Add missing 'status-code' param for 'http-check expect' directive
In the documentation of 'http-check expect' directive, the parameter
'status-code' was missing. Let's add it.

This patch could be backported to all stable versions.
2026-03-23 14:02:43 +01:00
Christopher Faulet
ada33006ef MINOR: proxy: Add use-small-buffers option to set where to use small buffers
Thanks to previous commits, it is possible to use small buffers at different
places: to store the request when a connection is queued or when L7 retries
are enabled, or for health-checks requests. However, there was no
configuration parameter to fine tune small buffer use.

It is now possible, thanks to the proxy option "use-small-buffers".
Documentation was updated accordingly.
2026-03-23 14:02:43 +01:00
Christopher Faulet
163eba5c8c DOC: config: Fix alphabetical ordering of external-check directives
external-check directives were not at the right place. Let's fix it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
61d68f14b2 DOC: config: Fix alphabetical ordering of proxy options
external-check and idle-close-on-response options were not at the right
place. Let's fix it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
125cbecfa9 MINOR: proxy: Review options flags used to configure healthchecks
When healthchecks were configured for a proxy, an enum-like value was used
to specify the check's type. The idea was to reserve some values for future
types of healthcheck. But it is overkill: I doubt we will ever have
something else than tcp and external checks. So the corresponding PR_O2
flags were slightly reviewed and a hole was filled.

Thanks to this change, some bits were released in options2 bitfield.
2026-03-23 14:02:43 +01:00
Christopher Faulet
a61ea0f414 MEDIUM: tcpcheck: Use small buffer if possible for healthchecks
If support for small buffers is enabled, we now try to use them for
healthcheck requests. First, we check whether the tcpcheck ruleset may use
small buffers: send rules using LF strings or too-large data are excluded.
The ability to use small buffers is set on the ruleset, and all send rules
of the ruleset must be compatible. This info is then transferred to the
server's healthchecks relying on this ruleset.

Then, when a healthcheck is running and a send rule is evaluated, we try to
use a small buffer if possible. On error, the ability to use small buffers
is removed and we retry with a regular buffer. This means that on the first
error, the support is disabled for the healthcheck and all subsequent runs
will use regular buffers.
2026-03-23 14:02:43 +01:00
Christopher Faulet
cd363e0246 MEDIUM: mux-h2: Stop dealing with HTX flags transfer in h2_rcv_buf()
In h2_rcv_buf(), HTX flags are transferred with the data when htx_xfer() is
called, so there is no reason to continue to deal with them in the H2
mux. In addition, there is no reason to set the SE_FL_EOI flag when a
parsing error is reported. This part was added before the stconn era.
Nowadays, when an HTX parsing error is reported, an error on the sedesc
should also be reported.
2026-03-23 14:02:43 +01:00
Christopher Faulet
d257dd4563 Revert "BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream"
This reverts commit 44932b6c417e472d25039ec3d7b8bf14e07629bc.

The patch above was only necessary to handle partial headers or trailers
parsing. There was nothing to prevent the H2 multiplexer to start to add
headers or trailers in an HTX message and to stop the processing on error,
leaving the HTX message with no EOH/EOT block.

From the HTX API point of view, it is unexpected. And this was fixed thanks
to commit ba7dc46a9 ("BUG/MINOR: h2/h3: Never insert partial
headers/trailers in an HTX message").

So this patch can be reverted. It is important to not report a parsing
error too early, while there are still data to transfer to the upper layer.

This patch must be backported where 44932b6c4 was backported, but only
after backporting ba7dc46a9 first.
2026-03-23 14:02:43 +01:00
Christopher Faulet
39121ceca6 MEDIUM: tree-wide: Rely on htx_xfer() instead of htx_xfer_blks()
The htx_xfer() function replaced htx_xfer_blks(). So let's use it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
c9a9fa813b MEDIUM: stconn: Use a small buffer if possible for L7 retries
When L7 retries are enabled and the request is small enough, a small buffer
is used instead of a regular one.
2026-03-23 14:02:43 +01:00
Christopher Faulet
181cd8ba8a MEDIUM: stream: Try to use small buffer when TCP stream is queued
The same was already done when an HTX stream was queued: small requests
were moved into small buffers. Here we do the same, but for TCP streams.
2026-03-23 14:02:42 +01:00
Christopher Faulet
5acdda4eed MEDIUM: stream: Try to use a small buffer for HTTP request on queuing
When a HTX stream is queued, if the request is small enough, it is moved
into a small buffer. This should save memory on instances intensively using
queues.

The applet and connection receive functions were updated to block receives
when a small buffer is in use.
2026-03-23 14:02:42 +01:00
Christopher Faulet
92a24a4e87 MEDIUM: chunk: Add support for small chunks
In the same way support for large chunks was added to properly work with
large buffers, we now add support for small chunks because it is possible
to process small buffers.

So a dedicated memory pool is added to allocate small chunks.
alloc_small_trash_chunk() must be used to allocate a small chunk.
alloc_trash_chunk_sz() and free_trash_chunk() were updated to support
small chunks.

In addition, small trash buffers are also created, using the same mechanism
as for regular trash buffers. So three thread-local trash buffers are
created. get_small_trash_chunk() must be used to get a small trash buffer.
And get_trash_chunk_sz() was updated to also deal with small buffers.
2026-03-23 14:02:42 +01:00
Christopher Faulet
467f911cea MINOR: http-ana: Use HTX API to move to a large buffer
Use htx_move_to_large_buffer() to move a regular HTX message to a large
buffer when we are waiting for a huge payload.
2026-03-23 14:02:42 +01:00
Christopher Faulet
0213dd70c9 MINOR: htx: Add helper functions to xfer a message to smaller or larger one
htx_move_to_small_buffer()/htx_move_to_large_buffer() and
htx_copy_to_small_buffer()/htx_copy_to_large_buffer() functions can now be
used to move or copy blocks from a default buffer to a small or large
buffer. The destination buffer is allocated and then each block is
transferred into it.

These functions rely on the htx_xfer() function.
2026-03-23 14:02:42 +01:00
Christopher Faulet
5ead611cc2 MEDIUM: htx: Add htx_xfer function to replace htx_xfer_blks
The htx_xfer() function should replace htx_xfer_blks(). It will be a bit easier to
maintain and to use. The behavior of htx_xfer() can be changed by calling it
with specific flags:

  * HTX_XFER_KEEP_SRC_BLKS: Blocks from the source message are just copied
  * HTX_XFER_PARTIAL_HDRS_COPY: It is allowed to partially xfer headers or trailers
  * HTX_XFER_HDRS_ONLY: only headers are xferred

By default (HTX_XFER_DEFAULT or 0), all blocks from the source message are
moved into the destination message, i.e. copied into the destination
message and removed from the source message.

The caller must still define the maximum amount of data (including meta-data)
that can be xferred.

It is no longer necessary to specify a block type to stop the copy. Most of
the time, with htx_xfer_blks(), this parameter was set to HTX_BLK_UNUSED;
otherwise it was only specified to transfer headers.

It is important to note that the caller is responsible for verifying that
the original HTX message is well-formed. Especially, it must make sure the
headers and trailers parts are complete (terminated by an EOH/EOT block).

For now, htx_xfer_blks() is not removed, for compatibility reasons. But it
is deprecated.
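A toy model of the move-vs-copy semantics described above (the real htx_xfer() works on HTX blocks inside HTX messages; this toy on plain integers only illustrates what HTX_XFER_KEEP_SRC_BLKS changes relative to the default move):

```c
#include <assert.h>
#include <string.h>

#define XFER_KEEP_SRC 0x1  /* stand-in for HTX_XFER_KEEP_SRC_BLKS */

struct toybuf { int data[8]; int len; };

/* transfer up to <max> items; by default items are moved (copied into
 * <dst> and removed from <src>); with XFER_KEEP_SRC they are only copied */
static int toy_xfer(struct toybuf *dst, struct toybuf *src, int max,
                    unsigned int flags)
{
    int n = src->len < max ? src->len : max;

    memcpy(dst->data + dst->len, src->data, n * sizeof(*src->data));
    dst->len += n;
    if (!(flags & XFER_KEEP_SRC)) {
        memmove(src->data, src->data + n, (src->len - n) * sizeof(*src->data));
        src->len -= n;
    }
    return n;
}
```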
2026-03-23 14:02:42 +01:00
Christopher Faulet
41c89e4fb6 MINOR: config: Report the warning when invalid large buffer size is set
When an invalid large buffer size was found in the configuration, a warning
was emitted but it was not reported via the error code. It is now fixed.
2026-03-23 14:02:42 +01:00
Christopher Faulet
b71f70d548 MINOR: config: Relax tests on the configured size of small buffers
When the small buffer size was greater than the default buffer size, an
error was triggered. We now do the same as for large buffers: a warning is
emitted and the small buffer size is set to 0 to disable small buffer
allocation.
2026-03-23 14:02:42 +01:00
Christopher Faulet
01b9b67d5c MINOR: quic: Use b_alloc_small() to allocate a small buffer
Rely on b_alloc_small() to allocate a small buffer.
2026-03-23 14:02:42 +01:00
Christopher Faulet
f8c96bf9cb MINOR: dynbuf: Add helper functions to alloc large and small buffers
b_alloc_small() and b_alloc_large() can now be used to allocate small or
large buffers. For now, unlike default buffers, buffer_wait lists are not
used.
2026-03-23 14:02:42 +01:00
Christopher Faulet
4d6cba03f2 MINOR: buffers: Move small buffers management from quic to dynbuf part
Because small buffers were only used by QUIC streams, the pool used to
allocate these buffers was located in the quic code. However, their usage
will be extended to other parts, so the small buffers pool was moved into
the dynbuf part.
2026-03-23 14:02:42 +01:00
Amaury Denoyelle
1c379cad88 BUG/MINOR: http_htx: fix null deref in http-errors config check
http-errors parsing has been refactored in a recent series of patches.
However, a null deref was introduced by the following patch in case a
non-existent http-errors section is referenced by an "errorfiles"
directive.

  commit 2ca7601c2d6781f455cf205e4f3b52f5beb16e41
  MINOR/OPTIM: http_htx: lookup once http_errors section on check/init

Fix this by delaying ha_free() so that it is called after ha_alert().

No need to backport.
2026-03-23 13:55:48 +01:00
William Lallemand
3d9865a12c BUG/MINOR: acme/cli: wrong argument check in 'acme renew'
The argument check should be on args[2] instead of args[1], which is always
'renew'.

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
William Lallemand
d72be950bd BUG/MINOR: acme: wrong error when checking for duplicate section
The cfg_parse_acme() function checks whether an 'acme' section already
exists in the configuration with cur_acme->linenum > 0. But the wrong
filename and line number are displayed in the error message.

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
William Lallemand
5a0fbbf1ca BUG/MINOR: acme: leak of ext_san upon insertion error
This patch fixes a leak of the ext_san structure when
sk_X509_EXTENSION_push() fails. sk_X509_EXTENSION_pop_free() is already
supposed to free it, so ext_san must be set to NULL upon success to avoid
a double-free.
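The pattern, stripped of the OpenSSL specifics (the stack and the free counter below are toy stand-ins): once the push succeeds, the container owns the element, so the local reference must be dropped or the error-path cleanup would free it twice.

```c
#include <assert.h>
#include <stdlib.h>

static int frees;                   /* counts frees, to prove single-free */
static void item_free(void *p) { frees++; free(p); }

struct stack { void *item[4]; int n; };

static int stack_push(struct stack *sk, void *p, int fail)
{
    if (fail)
        return 0;                   /* simulate sk_X509_EXTENSION_push() failure */
    sk->item[sk->n++] = p;
    return 1;
}

static void stack_pop_free(struct stack *sk)
{
    while (sk->n > 0)
        item_free(sk->item[--sk->n]);
}

/* returns 0 on success, -1 on error; never leaks, never double-frees */
static int build(struct stack *sk, int fail_push)
{
    void *ext_san = malloc(16);
    int ret = -1;

    if (!ext_san)
        goto out;
    if (!stack_push(sk, ext_san, fail_push))
        goto out;
    ext_san = NULL;                 /* ownership transferred to the stack */
    ret = 0;
out:
    if (ext_san)
        item_free(ext_san);         /* error path: we still own it */
    stack_pop_free(sk);             /* frees everything the stack owns */
    return ret;
}
```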

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
Amaury Denoyelle
c6fc53aa99 MEDIUM: proxy: remove http-errors limitation for dynamic backends
Use proxy_check_http_errors() on defaults proxy instances. This will
emit alert messages for errorfiles directives referencing a non-existing
http-errors section, or a warning if an explicitly listed status code
is not present in the target section.

This is a small behavior change, as previously this was only performed
for regular proxies. Thus, errorfile/errorfiles directives in an unused
defaults section were never checked.

This may prevent startup of haproxy with a configuration file previously
considered as valid. However, this change is considered as necessary to
be able to use http-errors with dynamic backends. Any invalid defaults
will be detected on startup, rather than having to discover it at
runtime via an "add backend" invocation.

Thus, any restriction on http-errors usage is now lifted for the
creation of dynamic backends.
2026-03-23 11:14:07 +01:00
Amaury Denoyelle
2ca7601c2d MINOR/OPTIM: http_htx: lookup once http_errors section on check/init
The previous patch split the original proxy_check_errors() function in
two, so that the check and init steps are performed separately.
However, this renders the code inefficient for "errorfiles" directive as
tree lookup on http-errors section is performed twice.

Optimize this by adding a reference to the section in conf_errors
structure. This is resolved during proxy_check_http_errors() and
proxy_finalize_http_errors() can reuse it.

No need to backport.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
d250b381dc MINOR: http_htx: split check/init of http_errors
Function proxy_check_errors() is used when configuration parsing is
over. This patch splits it in two newly named ones.

The first function is named proxy_check_http_errors(). It is responsible
for checking the validity of any "errorfiles" directive, which could
reference a non-existent http-errors section or a code not defined in such
a section. This function is now called via proxy_finalize().

The second function is named proxy_finalize_http_errors(). It converts
each conf_errors type used during parsing into a proper http_reply type
for runtime usage. This function is still called via post-proxy-check,
after proxy_finalize().

This patch does not bring any functional change. However, it will become
necessary to ensure http-errors can be used as expected with dynamic
backends.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
5b184e4178 MINOR: http_htx: rename fields in struct conf_errors
This patch is the second part of the refactoring for http-errors
parsing. It renames some fields in <conf_errors> structure to clarify
their usage. In particular, the union variants are renamed "inl"/"section",
which better highlights the link with the newly defined enum
http_err_directive.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
fedaf054c4 MINOR: http_htx: use enum for arbitrary values in conf_errors
In the conf_errors struct, arbitrary integer values were used for both the
<type> field and the <status> array. This renders the code difficult to
follow.

Replace these values with proper enum types, one for each of these fields.
The first one represents the directive type, derived from the keyword used
(errorfile vs errorfiles). This directly indicates which part of the <info>
union should be manipulated.

The second enum is used for an errorfiles directive with a reference to an
http-errors section. It indicates whether a status code should be imported
from this section, and whether this import is explicit or implicit.
2026-03-23 10:51:33 +01:00
David Carlier
8e469ebf2e BUG/MEDIUM: acme: fix multiple resource leaks in acme_x509_req()
Several resources were leaked on both success and error paths:

- X509_NAME *nm was never freed. X509_REQ_set_subject_name() makes
  an internal copy, so nm must be freed separately by the caller.
- str_san allocated via my_strndup() was never freed on either path.
- On error paths after allocation, x (X509_REQ) and exts
  (STACK_OF(X509_EXTENSION)) were also leaked.

Fix this by adding proper cleanup of all allocated resources in both
the success and error paths. Also move sk_X509_EXTENSION_pop_free()
after X509_REQ_sign() so it is not skipped when signing fails, and
initialize nm to NULL to make early error paths safe.

Must be backported as far as 3.2.
2026-03-23 10:44:42 +01:00
Willy Tarreau
ff7b06badb BUILD: sched: fix leftover of debugging test in single-run changes
There was a leftover of "activity[tid].ctr1++" in commit 7d40b3134
("MEDIUM: sched: do not run a same task multiple times in series")
that unfortunately only builds in development mode :-(
2026-03-23 07:29:43 +01:00
Willy Tarreau
5d0f5f8168 MINOR: mux-h2: assign a limited frames processing budget
This introduces 3 new settings: tune.h2.be.max-frames-at-once and
tune.h2.fe.max-frames-at-once, which limit the number of frames that
will be processed at once for backend and frontend side respectively,
and tune.h2.fe.max-rst-at-once which limits the number of RST_STREAM
frames processed at once on the frontend.

We can now yield when reading too many frames at once, which makes it
possible to limit the latency caused by processing too many frames in
large buffers.
However if we stop due to the RST budget being depleted, it's most likely
the sign of a protocol abuse, so we make the tasklet go to BULK since
the goal is to punish it.

By limiting the number of RST per loop to 1, the SSL response time drops
from 95ms to 1.6ms during an H2 RST flood attack, and the maximum SSL
connection rate only drops from 35.5k to 28.0k instead of 11.8k. A moderate
SSL load that shows 1ms response time and 23kcps increases to 2ms with
15kcps versus 95ms and 800cps before. The average loop time goes down
from 270-280us to 160us, while still doubling the attack absorption
rate with the same CPU capacity.

This patch may usefully be backported to 3.3 and 3.2. Note that to be
effective, this relies on the following patches:

  MEDIUM: sched: do not run a same task multiple times in series
  MINOR: sched: do not requeue a tasklet into the current queue
  MINOR: sched: do not punish self-waking tasklets anymore
  MEDIUM: sched: do not punish self-waking tasklets if TASK_WOKEN_ANY
  MEDIUM: sched: change scheduler budgets to lower TL_BULK
2026-03-23 07:14:22 +01:00
Willy Tarreau
ed6a4bc807 MEDIUM: sched: change scheduler budgets to lower TL_BULK
Having less yielding tasks in TL_BULK and more in TL_NORMAL, we need
to rebalance these queues' priorities. Tests have shown that raising
TL_NORMAL to 40% and lowering TL_BULK to 3% seems to give about the
best tradeoffs.
2026-03-23 06:58:37 +01:00
Willy Tarreau
282b9b7d16 MEDIUM: sched: do not punish self-waking tasklets if TASK_WOKEN_ANY
Self-waking tasklets are currently punished and go to the BULK list.
However it's a problem with muxes or the stick-table purge that just
yield and wake themselves up to limit the latency they cause to the
rest of the process, because by doing so to help others, they punish
themselves. Let's check if any TASK_WOKEN_ANY flag is present on
the tasklet and stop sending tasks presenting such a flag to TL_BULK.
Since tasklet_wakeup() by default passes TASK_WOKEN_OTHER, it means
that such tasklets will no longer be punished. However, tasks which
only want a best-effort wakeup can simply pass 0.

It's worth noting that a comparison was made between going into
TL_BULK at all and only setting the TASK_SELF_WAKING flag, and
it shows that the average latencies are ~10% better when entirely
avoiding TL_BULK in this case.
2026-03-23 06:57:12 +01:00
Willy Tarreau
6982c2539f MINOR: sched: do not punish self-waking tasklets anymore
Nowadays, due to yields etc., it's counter-productive to permanently
punish self-waking tasklets. Let's abandon this principle as it prevents
finer task priority handling.

We continue to check for the TASK_SELF_WAKING flag to place a task
into TL_BULK in case some code wants to make use of it in the future
(similarly to TASK_HEAVY), but no code sets it anymore. It could
possible make sense in the future to replace this flag with a one-shot
variant requesting low-priority.
2026-03-23 06:55:31 +01:00
Willy Tarreau
9852d5be26 MINOR: sched: do not requeue a tasklet into the current queue
As found by Christopher, the concept of waking a tasklet up into the
current queue is totally flawed, because if a task is in TL_BULK or
TL_HEAVY, all the tasklets it will wake up will end up in the same
queue. Not only this will clobber such queues, but it will also
reduce their quality of service, and this can contaminate other
tasklets due to the numerous wakeups there are now with the subscribe
mechanism between layers.
2026-03-23 06:54:42 +01:00
Willy Tarreau
7d40b3134a MEDIUM: sched: do not run a same task multiple times in series
There's always a risk that some tasks run multiple times if they wake
each other up. Now we include the loop counter in the task struct and
stop processing the queue it's in when meeting a task that has already
run. We only pick 16 bits since that's only what remains free in the
task common part, so from time to time (once every 65536) it will be
possible to wrongly match a task as having already run and stop evaluating
its queue, but it's rare enough that we don't care, because this will
be OK on the next iteration.
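A sketch of the 16-bit run-counter check (the field names are invented for illustration; HAProxy's task struct differs), including the benign once-per-65536 false positive the message mentions:

```c
#include <assert.h>
#include <stdint.h>

/* only 16 bits remain free in the task's common part */
struct task_sketch { uint16_t run_loop; };

static uint32_t loop_ctr;  /* full-width scheduler loop counter */

static void mark_ran(struct task_sketch *t)
{
    t->run_loop = (uint16_t)loop_ctr;
}

/* true if the task already ran during the current loop iteration
 * (or, once every 65536 iterations, a harmless false positive) */
static int already_ran(const struct task_sketch *t)
{
    return t->run_loop == (uint16_t)loop_ctr;
}
```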
2026-03-23 06:52:24 +01:00
Frederic Lecaille
8f6cb8f452 BUG/MINOR: qpack: fix 62-bit overflow and 1-byte OOB reads in decoding
This patch improves the robustness of the QPACK varint decoder and fixes
potential 1-byte out-of-bounds reads in qpack_decode_fs().

In qpack_decode_fs(), two 1-byte OOB reads were possible on truncated
streams between two varint decodings. These occurred when trying to read
the byte containing the Huffman bit <h> and the Value Length prefix
immediately following an Index or a Name Length.

Note that these OOB are limited to a single byte because
qpack_get_varint() already ensures that its input length is non-zero
before consuming any data.

The fixes in qpack_decode_fs() are:
- When decoding an index, we now verify that at least one byte remains
  to safely access the following <h> bit and value length.
- When decoding a literal, we now check len < name_len + 1 to ensure
  the byte starting the header value is reachable.

In qpack_get_varint(), the maximum value is now strictly capped at 2^62-1
as per RFC. This is enforced using a budget-based check:

   (v & 127) > (limit - ret) >> shift

This prevents values from overflowing into the 63rd or 64th bit, which
would otherwise break subsequent signed comparisons (e.g., if (len < name_len))
by interpreting the length as a negative value, leading to false
positives.
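A self-contained sketch of a capped prefix-integer decoder: the budget test is the one quoted above, and the encoding follows the HPACK/QPACK prefix-integer scheme. The function and macro names are illustrative, not HAProxy's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define QPACK_VARINT_MAX (((uint64_t)1 << 62) - 1)

/* decode a prefix-encoded integer; returns the number of bytes consumed,
 * or -1 on truncated input or a value exceeding 2^62 - 1 */
static int decode_varint(const uint8_t *buf, size_t len, unsigned int prefix,
                         uint64_t *out)
{
    const uint8_t *start = buf;
    uint64_t ret;
    unsigned int shift = 0;
    uint8_t v;

    if (!len)
        return -1;
    ret = *buf++ & ((1u << prefix) - 1);
    len--;
    if (ret < ((1u << prefix) - 1)) {
        *out = ret;
        return (int)(buf - start);
    }
    do {
        if (!len--)
            return -1;              /* truncated stream */
        v = *buf++;
        /* budget check: adding (v & 127) << shift must not pass the cap */
        if ((uint64_t)(v & 127) > ((QPACK_VARINT_MAX - ret) >> shift))
            return -1;
        ret += (uint64_t)(v & 127) << shift;
        shift += 7;
    } while (v & 128);
    *out = ret;
    return (int)(buf - start);
}
```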

Thank you to @jming912 for having reported this issue in GH #3302.

Must be backported as far as 2.6
2026-03-20 19:40:11 +01:00
Egor Shestakov
60c9e2975b BUG/MINOR: sock: adjust accept() error messages for ENFILE and ENOMEM
In the ENFILE and ENOMEM cases, when accept() fails, an irrelevant
global.maxsock value was printed that doesn't reflect system limits.
Now actconn is printed, which gives a hint about the failure reasons.

Should be backported in all stable branches.
2026-03-20 16:51:47 +01:00
Aurelien DARRAGON
5617e47f91 MINOR: log: support optional 'profile <log_profile_name>' argument to do-log action
We anticipated that the do-log action would be expanded with optional
arguments at some point. Now that we have heard of multiple use-cases
that could be achieved with the do-log action, but that are limited by the
fact that all do-log statements inherit the implicit log-profile defined
on the logger, we need to provide a way for the user to specify a custom
log-profile to be used by each do-log action individually.

This is what we try to achieve in this commit, by leveraging the
prerequisite work performed by the last 2 commits.
2026-03-20 11:42:48 +01:00
Aurelien DARRAGON
042b7ab763 MINOR: log: provide a way to override logger->profile from process_send_log_ctx
In process_send_log(), now also consider the ctx if ctx->profile != NULL.

In that case, we do as if logger->prof was set, but we consider
ctx->profile in priority over the logger one. What this means is that
it will become possible to set ctx.profile to a profile that will be
used no matter what to generate the log payload.

This is a prerequisite to implementing the optional "profile" argument
for the do-log action.
2026-03-20 11:42:40 +01:00
Aurelien DARRAGON
7466f64c56 MINOR: log: split do_log() in do_log() + do_log_ctx()
do_log() is just a wrapper to use do_log_ctx() with pre-filled ctx, but
we now have the low-level do_log_ctx() variant which can be used to
pass specific ctx parameters instead.
2026-03-20 11:41:06 +01:00
Willy Tarreau
15b005fd1e [RELEASE] Released version 3.4-dev7
Released version 3.4-dev7 with the following main changes :
    - BUG/MINOR: stconn: Increase SC bytes_out value in se_done_ff()
    - BUG/MINOR: ssl-sample: Fix sample_conv_sha2() by checking EVP_Digest* failures
    - BUG/MINOR: backend: Don't get proto to use for webscoket if there is no server
    - BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check
    - MINOR: flt_http_comp: define and use proxy_get_comp() helper function
    - MEDIUM: flt_http_comp: split "compression" filter in 2 distinct filters
    - CLEANUP: flt_http_comp: comp_state doesn't bother about the direction anymore
    - BUG/MINOR: admin: haproxy-reload use explicit socat address type
    - MEDIUM: admin: haproxy-reload conversion to POSIX sh
    - BUG/MINOR: admin: haproxy-reload rename -vv long option
    - SCRIPTS: git-show-backports: hide the common ancestor warning in quiet mode
    - SCRIPTS: git-show-backports: add a restart-from-last option
    - MINOR: mworker: add a BUG_ON() on mproxy_li in _send_status
    - BUG/MINOR: mworker: don't set the PROC_O_LEAVING flag on master process
    - Revert "BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check"
    - MINOR: jwt: Improve 'jwt_tokenize' function
    - MINOR: jwt: Convert EC JWK to EVP_PKEY
    - MINOR: jwt: Parse ec-specific fields in jose header
    - MINOR: jwt: Manage ECDH-ES algorithm in jwt_decrypt_jwk function
    - MINOR: jwt: Add ecdh-es+axxxkw support in jwt_decrypt_jwk converter
    - MINOR: jwt: Manage ec certificates in jwt_decrypt_cert
    - DOC: jwt: Add ECDH support in jwt_decrypt converters
    - MINOR: stconn: Call sc_conn_process from the I/O callback if TASK_WOKEN_MSG state was set
    - MINOR: mux-h2: Rely on h2s_notify_send() when resuming h2s for sending
    - MINOR: mux-spop: Rely on spop_strm_notify_send() when resuming streams for sending
    - MINOR: muxes: Wakup the data layer from a mux stream with TASK_WOKEN_IO state
    - MAJOR: muxes: No longer use app_ops .wake() callback function from muxes
    - MINOR: applet: Call sc_applet_process() instead of .wake() callback function
    - MINOR: connection: Call sc_conn_process() instead of .wake() callback function
    - MEDIUM: stconn: Remove .wake() callback function from app_ops
    - MINOR: check: Remove wake_srv_chk() function
    - MINOR: haterm: Remove hstream_wake() function
    - MINOR: stconn: Wakup the SC with TASK_WOKEN_IO state from opposite side
    - MEDIUM: stconn: Merge all .chk_rcv() callback functions in sc_chk_rcv()
    - MINOR: stconn: Remove .chk_rcv() callback functions
    - MEDIUM: stconn: Merge all .chk_snd() callback functions in sc_chk_snd()
    - MINOR: stconn: Remove .chk_snd() callback functions
    - MEDIUM: stconn: Merge all .abort() callback functions in sc_abort()
    - MINOR: stconn: Remove .abort() callback functions
    - MEDIUM: stconn: Merge all .shutdown() callback functions in sc_shutdown()
    - MINOR: stconn: Remove .shutdown() callback functions
    - MINOR: stconn: Totally app_ops from the stconns
    - MINOR: stconn: Simplify sc_abort/sc_shutdown by merging calls to se_shutdown
    - DEBUG: stconn: Add a CHECK_IF() when I/O are performed on a orphan SC
    - MEDIUM: mworker: exiting when couldn't find the master mworker_proc element
    - BUILD: ssl: use ASN1_STRING accessors for OpenSSL 4.0 compatibility
    - BUILD: ssl: make X509_NAME usage OpenSSL 4.0 ready
    - BUG/MINOR: tcpcheck: Fix typo in error message for `http-check expect`
    - BUG/MINOR: jws: fix memory leak in jws_b64_signature
    - DOC: configuration: http-check expect example typo
    - DOC/CLEANUP: config: update mentions of the old "Global parameters" section
    - BUG/MEDIUM: ssl: Handle receiving early data with BoringSSL/AWS-LC
    - BUG/MINOR: mworker: always stop the receiving listener
    - BUG/MEDIUM: ssl: Don't report read data as early data with AWS-LC
    - BUILD: makefile: fix range build without test command
    - BUG/MINOR: memprof: avoid a small memory leak in "show profiling"
    - BUG/MINOR: proxy: do not forget to validate quic-initial rules
    - MINOR: activity: use dynamic allocation for "show profiling" entries
    - MINOR: tools: extend the pointer hashing code to ease manipulations
    - MINOR: tools: add a new pointer hash function that also takes an argument
    - MINOR: memprof: attempt different retry slots for different hashes on collision
    - MINOR: tinfo: start to add basic thread_exec_ctx
    - MINOR: memprof: prepare to consider exec_ctx in reporting
    - MINOR: memprof: also permit to sort output by calling context
    - MINOR: tools: add a function to write a thread execution context.
    - MINOR: debug: report the execution context on thread dumps
    - MINOR: memprof: report the execution context on profiling output
    - MINOR: initcall: record the file and line declaration of an INITCALL
    - MINOR: tools: decode execution context TH_EX_CTX_INITCALL
    - MINOR: tools: support decoding ha_caller type exec context
    - MINOR: sample: store location for fetch/conv via initcalls
    - MINOR: sample: also report contexts registered directly
    - MINOR: tools: support an execution context that is just a function
    - MINOR: actions: store the location of keywords registered via initcalls
    - MINOR: actions: also report execution contexts registered directly
    - MINOR: filters: set the exec context to the current filter config
    - MINOR: ssl: set the thread execution context during message callbacks
    - MINOR: connection: track mux calls to report their allocation context
    - MINOR: task: set execution context on task/tasklet calls
    - MINOR: applet: set execution context on applet calls
    - MINOR: cli: keep the info of the current keyword being processed in the appctx
    - MINOR: cli: keep track of the initcall context since kw registration
    - MINOR: cli: implement execution context for manually registered keywords
    - MINOR: activity: support aggregating by caller also for memprofile
    - MINOR: activity: raise the default number of memprofile buckets to 4k
    - DOC: internals: short explanation on how thread_exec_ctx works
    - BUG/MINOR: mworker: only match worker processes when looking for unspawned proc
    - MINOR: traces: defer processing of "-dt" options
    - BUG/MINOR: mworker: fix typo &= instead of & in proc list serialization
    - BUG/MINOR: mworker: set a timeout on the worker socketpair read at startup
    - BUG/MINOR: mworker: avoid passing NULL version in proc list serialization
    - BUG/MINOR: sockpair: set FD_CLOEXEC on fd received via SCM_RIGHTS
    - BUG/MEDIUM: stconn: Don't forget to wakeup applets on shutdown
    - BUG/MINOR: spoe: Properly switch SPOE filter to WAITING_ACK state
    - BUG/MEDIUM: spoe: Properly abort processing on client abort
    - BUG/MEDIUM: stconn: Fix abort on close when a large buffer is used
    - BUG/MEDIUM: stconn: Don't perform L7 retries with large buffer
    - BUG/MINOR: h2/h3: Only test number of trailers inserted in HTX message
    - MINOR: htx: Add function to truncate all blocks after a specific block
    - BUG/MINOR: h2/h3: Never insert partial headers/trailers in an HTX message
    - BUG/MINOR: http-ana: Swap L7 buffer with request buffer by hand
    - BUG/MINOR: stream: Fix crash in stream dump if the current rule has no keyword
    - BUG/MINOR: mjson: make mystrtod() length-aware to prevent out-of-bounds reads
    - MEDIUM: stats-file/clock: automatically update now_offset based on shared clock
    - MINOR: promex: export "haproxy_sticktable_local_updates" metric
    - BUG/MINOR: spoe: Fix condition to abort processing on client abort
    - BUILD: spoe: Remove unused variable
    - MINOR: tools: add a function to create a tar file header
    - MINOR: tools: add a function to load a file into a tar archive
    - MINOR: config: support explicit "on" and "off" for "set-dumpable"
    - MINOR: debug: read all libs in memory when set-dumpable=libs
    - DEV: gdb: add a new utility to extract libs from a core dump: libs-from-core
    - MINOR: debug: copy debug symbols from /usr/lib/debug when present
    - MINOR: debug: opportunistically load libthread_db.so.1 with set-dumpable=libs
    - BUG/MINOR: mworker: don't try to access an initializing process
    - BUG/MEDIUM: peers: enforce check on incoming table key type
    - BUG/MINOR: mux-h2: properly ignore R bit in GOAWAY stream ID
    - BUG/MINOR: mux-h2: properly ignore R bit in WINDOW_UPDATE increments
    - OPTIM: haterm: use chunk builders for generated response headers
    - BUG/MAJOR: h3: check body size with content-length on empty FIN
    - BUG/MEDIUM: h3: reject unaligned frames except DATA
    - BUG/MINOR: mworker/cli: fix show proc pagination losing entries on resume
    - CI: github: treat vX.Y.Z release tags as stable like haproxy-* branches
    - MINOR: freq_ctr: add a function to add values with a peak
    - MINOR: task: maintain a per-thread indicator of the peak run-queue size
    - MINOR: mux-h2: store the concurrent streams hard limit in the h2c
    - MINOR: mux-h2: permit to moderate the advertised streams limit depending on load
    - MINOR: mux-h2: permit to fix a minimum value for the advertised streams limit
    - BUG/MINOR: mworker: fix sort order of mworker_proc in 'show proc'
    - CLEANUP: mworker: fix tab/space mess in mworker_env_to_proc_list()
2026-03-20 10:14:59 +01:00
William Lallemand
f1e8173a43 CLEANUP: mworker: fix tab/space mess in mworker_env_to_proc_list()
The previous patch messed up the indentation in
mworker_env_to_proc_list().
2026-03-19 18:01:06 +01:00
William Lallemand
4c61e9028c BUG/MINOR: mworker: fix sort order of mworker_proc in 'show proc'
Since version 3.1, the display order of old workers in 'show proc' was
accidentally reversed. The oldest worker was shown first and the newest
last, which was not the intended behavior. This regression was introduced
during the master-worker rework.

Fix this by sorting the list during deserialization in
mworker_env_to_proc_list().

An alternative fix would have been to iterate the list in reverse order
in the show proc function, but that approach risks introducing
inconsistencies when backporting to older versions.

Must be backported to 3.1 and later.
2026-03-19 17:51:28 +01:00
Willy Tarreau
932d77e287 MINOR: mux-h2: permit to fix a minimum value for the advertised streams limit
When using rq-load on tune.h2.fe.max-concurrent-streams, it's easy to
reach a situation where only one stream is allowed. There's nothing
wrong with this but it turns out that slightly higher values do not
necessarily cause significantly higher loads and will improve the user
experience. For this reason the keyword now also supports "min" to
specify a minimum value. Experimentation shows that values from 5 to 15 remain
very effective at protecting the run queue while allowing a great level
of parallelism that keeps a site fluid.
2026-03-19 16:24:32 +01:00
Willy Tarreau
c238965b27 MINOR: mux-h2: permit to moderate the advertised streams limit depending on load
Global setting tune.h2.fe.max-concurrent-streams now supports an optional
"rq-load" option to pass either a target load, or a keyword among "auto"
and "ignore". These are used to quadratically reduce the advertised streams
limit when the thread's run queue size goes beyond the configured value,
and automatically reduce the load on the process from new connections.
With "auto", instead of taking an explicit value, it uses as a target the
"tune.runqueue-depth" setting (which might be automatic). Tests have shown
that values between 50 and 100 are already very effective at reducing the
loads during attacks from 100000 to around 1500. By default, "ignore"
is in effect, which means that the dynamic tuning is not enabled.
2026-03-19 16:24:31 +01:00
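The commit above describes a quadratic reduction of the advertised streams limit relative to the run-queue load target, without showing the formula. The following is a minimal sketch of that idea; the function name `adv_streams_limit()` and the exact scaling are assumptions for illustration, not HAProxy's actual code:

```c
#include <assert.h>

/* Illustrative sketch (made-up name): when the run-queue size exceeds
 * the configured target, divide the advertised max-concurrent-streams
 * by the square of the overload ratio, never advertising zero. */
static unsigned int adv_streams_limit(unsigned int hard_limit,
                                      unsigned int rq_size,
                                      unsigned int rq_target)
{
    unsigned long long limit;

    if (!rq_target || rq_size <= rq_target)
        return hard_limit;      /* not overloaded: advertise the hard limit */

    /* quadratic reduction: limit * (target / size)^2 */
    limit = (unsigned long long)hard_limit * rq_target * rq_target
            / ((unsigned long long)rq_size * rq_size);
    return limit ? (unsigned int)limit : 1; /* never advertise zero */
}
```

With a target of 100, doubling the run queue to 200 divides the advertised limit by 4, which matches the strongly load-dependent behavior the commit message reports.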
Willy Tarreau
b63492e4f4 MINOR: mux-h2: store the concurrent streams hard limit in the h2c
The hard limit on the number of concurrent streams is currently
determined only by configuration and returned by
h2c_max_concurrent_streams(). However this doesn't permit changing
such settings on the fly without risking breaking connections,
and it doesn't allow a connection to pick a different value, which
could be desirable for example to try to slow abuse down.

Let's store a copy of h2c_max_concurrent_streams() at connection
creation time into the h2c as streams_hard_limit. This inflates
the h2c size from 1324 to 1328 (0.3%) which is acceptable for the
expected benefits.
2026-03-19 16:24:31 +01:00
Willy Tarreau
b3a84800b4 MINOR: task: maintain a per-thread indicator of the peak run-queue size
The new field th_ctx->rq_tot_peak contains the computed peak run queue
length averaged over the last 512 calls. This is computed when entering
process_runnable_tasks. It will not take into account new tasks that are
created or woken up during this round nor those which are evicted, which
is the reason why we're using a peak measurement to increase chances to
observe transient high values. Tests have shown that 512 samples are good
to provide a relatively smooth average measurement while still fading
away in a matter of milliseconds at high loads. Since this value is
only updated once per round, it cannot be used as a statistic and
shouldn't be exposed, it's only for internal use (self-regulation).
2026-03-19 16:24:31 +01:00
Willy Tarreau
eec60f14dd MINOR: freq_ctr: add a function to add values with a peak
Sometimes it's desirable to observe fading away peak values, where a new
value that is higher than the historical one instantly replaces it,
while a lower one simply contributes to it. This is convenient when
trying to observe certain phenomena like peak queue sizes. The new function
swrate_add_peak_local() does that to a private variable (no atomic ops
involved as it's not worth the cost since such use cases are typically
local).
2026-03-19 16:24:31 +01:00
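The peak-tracking behavior described above can be sketched as follows. This is illustrative only (hence the `_sketch` suffix): it follows HAProxy's sliding-average convention of storing the sum scaled by the window size n, but is not the actual swrate_add_peak_local() implementation:

```c
#include <assert.h>

/* Peak-tracking sliding average over n samples: a value above the
 * current average instantly replaces it, while lower values make the
 * average fade away as in a regular sliding window. The "sum" holds
 * the average scaled by n. */
static unsigned int swrate_add_peak_sketch(unsigned int *sum, unsigned int n,
                                           unsigned int v)
{
    if ((unsigned long long)v * n > *sum)
        *sum = v * n;                 /* new peak: replace instantly */
    else
        *sum = *sum - *sum / n + v;   /* otherwise fade toward v */
    return *sum;
}
```

Reading back the average is then just `*sum / n`, and since the variable is private to the thread no atomic operations are needed, as the commit message notes.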
William Lallemand
fc38ebb079 CI: github: treat vX.Y.Z release tags as stable like haproxy-* branches
Add detection of release tags matching the vX.Y.Z pattern so they use
the same stable CI configuration as haproxy-* branches, rather than the
development one.

This prevents stable tags from triggering the CI with docker images and
SSL libraries that are only used for development.

Must be backported to stable releases.
2026-03-19 15:58:24 +01:00
Alexander Stephan
10e78d9246 BUG/MINOR: mworker/cli: fix show proc pagination losing entries on resume
After commit 594408cd612b5 ("BUG/MINOR: mworker/cli: fix show proc
pagination using reload counter"), the old-workers pagination stores
ctx->next_reload = child->reloads on flush failure, then skips entries
with child->reloads >= ctx->next_reload on resume.

The >= comparison is direction-dependent: it assumes the list is in
descending reload order (newest first). On current master, proc_list
is in ascending order (oldest first) because mworker_env_to_proc_list()
appends deserialized entries before mworker_prepare_master() appends
the new worker. This means the skip logic is inverted and can miss
entries or loop incorrectly depending on the version.

We fix this by renaming the context field to resume_reload and changing its
semantics: it now tracks the reload count of the last *successfully
flushed* row rather than the failed one. On flush failure, resume_reload
is left unchanged so the failed row is replayed on the next call. On
resume, entries are skipped by walking the list until the marker entry is
found (exact == match), which works regardless of list direction.

Additionally, we have to handle the unlikely case where the marker entry
is deleted from proc_list between handler calls (e.g. the process exits and
SIGCHLD processing removes it). Detect this by tracking the previous
LEAVING entry's reload count during the skip phase: if two consecutive
entries straddle the skip value (one > skip, the other < skip), the
deleted entry's former position has been crossed, so skipping stops and
the current entry is emitted.

This should be backported to all stable branches. On branches where
proc_list is in descending order (2.9, 3.0), the fix applies the
same way since the skip logic is now direction-agnostic.
2026-03-19 14:46:15 +01:00
Amaury Denoyelle
4e937e0391 BUG/MEDIUM: h3: reject unaligned frames except DATA
HTTP/3 parser cannot deal with unaligned frames, except for DATA. As it
was expected that such case would not occur, a simple BUG_ON() was
written to protect HEADERS parsing.

First, this BUG_ON() was incorrectly written due to an incorrect operator
('>=' vs '>') when checking if data wraps. This patch corrects it.

However this correction is not sufficient as it is still possible to handle
a large unaligned HEADERS frame, which would trigger this BUG_ON(). This
is very unlikely as HEADERS is the first received frame on a request
stream, but not completely impossible. As HTTP/3 frame header (type +
length) is parsed first and removed, this leaves a small gap at the
buffer beginning. If this small gap is then filled with the remaining
frame payload, it would result in unaligned data. Trailers are also
sensitive here, as in this case a HEADERS frame is handled after other
frames.

The objective of this patch is to ensure that an unaligned frame is now
handled in a safe way. This is extended to all HTTP/3 frames (except DATA)
and not only to HEADERS type. Parsing is interrupted if frame payload is
wrapping in the buffer. This should never happen except maybe with some
weird clients, so the connection is closed with H3_EXCESSIVE_LOAD error.

This approach is considered the safest one, in particular for backport
purpose. In the future, realign operation via copy may be implemented
instead if considered as useful.

This must be backported up to 2.6.
2026-03-19 10:40:25 +01:00
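The '>=' vs '>' off-by-one described in the commit above can be illustrated with a minimal wrap check on a ring buffer (names here are made up, not HAProxy's buffer API):

```c
#include <assert.h>
#include <stddef.h>

/* A payload of <len> bytes starting at offset <head> in a ring buffer
 * of <size> bytes is contiguous as long as head + len <= size. The
 * original BUG_ON() used '>=' and thus also fired on a frame ending
 * exactly at the buffer's upper bound, which is a valid aligned case. */
static int frame_wraps(size_t head, size_t len, size_t size)
{
    return head + len > size; /* '>' : ending exactly at size is fine */
}
```

With the corrected operator, parsing can be safely interrupted only when the payload genuinely wraps, which is what the patch now treats as an H3_EXCESSIVE_LOAD condition.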
Amaury Denoyelle
05a295441c BUG/MAJOR: h3: check body size with content-length on empty FIN
In QUIC, a STREAM frame may be received with no data but with FIN bit
set. This situation is tedious to handle and haproxy parsing code has
changed several times to deal with this situation. Now, H3 and H09
layers parsing code are skipped in favor of the shared function
qcs_http_handle_standalone_fin() used to handle the HTX EOM emission.

However, this shortcut bypasses an important HTTP/3 validation check on
the received body size vs the announced content-length header. Under
some conditions, this could cause a desynchronization with the backend
server which could be exploited for request smuggling.

Fix HTTP/3 parsing code by adding a call to h3_check_body_size() prior
to qcs_http_handle_standalone_fin() if content-length header has been
found. If the body size is incorrect, the stream is immediately reset
with H3_MESSAGE_ERROR code and the error is forwarded to the stream
layer.

Thanks to Martino Spagnuolo for his detailed report on this issue and
for having contacted us about it via the security mailing list.

This must be backported up to 2.6.
2026-03-19 10:38:46 +01:00
Aleksandar Lazic
4e57516c9a OPTIM: haterm: use chunk builders for generated response headers
hstream_build_http_resp() currently uses snprintf() to build the
status code and the generated X-req/X-rsp header values.

These strings are short and are fully derived from already parsed request
state, so they can be assembled directly in the HAProxy trash buffer using
`chunk_strcat()` and `ultoa_o()`.

This keeps the generated output unchanged while removing the remaining
`snprintf()` calls from the response-building path.

No functional change is expected.

Signed-off-by: Aleksandar Lazic <al-haproxy@none.at>
2026-03-19 07:42:33 +01:00
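The snprintf()-free formatting mentioned above relies on an ultoa_o()-style helper that writes a number's decimal form into a caller-provided area and returns the end pointer, so values can be concatenated like plain strings. Below is a minimal sketch of that pattern; it does not reproduce HAProxy's chunk API or the exact semantics of ultoa_o():

```c
#include <assert.h>
#include <string.h>

/* Writes the decimal form of n (plus a trailing NUL) into dst if it
 * fits in size bytes, and returns a pointer to the trailing NUL so the
 * caller can keep appending. Returns NULL when there is not enough room. */
static char *ultoa_sketch(unsigned long n, char *dst, size_t size)
{
    char tmp[21];                     /* enough for 64-bit decimal + NUL */
    char *p = tmp + sizeof(tmp);
    size_t len;

    *--p = '\0';
    do {
        *--p = '0' + n % 10;          /* emit digits from the right */
        n /= 10;
    } while (n);
    len = tmp + sizeof(tmp) - p;      /* digits + NUL */
    if (len > size)
        return NULL;                  /* not enough room */
    memcpy(dst, p, len);
    return dst + len - 1;             /* points at the '\0' */
}
```

Building a status code or an X-req counter then becomes a couple of copies into the trash buffer instead of a formatted-print call, which is the optimization the commit performs.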
Willy Tarreau
e31640368a BUG/MINOR: mux-h2: properly ignore R bit in WINDOW_UPDATE increments
The window size increments are 31 bits and the topmost bit is reserved
and should be ignored, however it was not masked, so a peer sending it
set would emit a negative value which could actually reduce the current
window instead of increasing it. Note that the window cannot reach zero
as there's already a test for this, but transfers could slow down to
the same speed as if an initial window of just a few bytes had been
advertised. Let's just mask the reserved bit before processing.

This should be backported to all stable versions.
2026-03-19 07:21:47 +01:00
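The masking described above is a one-liner; the helper name below is illustrative, but the 31-bit mask itself comes straight from the HTTP/2 framing rules (bit 31 of the increment is reserved and must be ignored on receipt):

```c
#include <assert.h>
#include <stdint.h>

/* A WINDOW_UPDATE increment is a 31-bit value; masking the reserved
 * top bit prevents a peer that sets it from making us interpret the
 * increment as a negative number and shrink the window. */
static int32_t h2_wu_increment(uint32_t wire_value)
{
    return (int32_t)(wire_value & 0x7fffffffU);
}
```

The same masking applies to the stream ID carried in GOAWAY frames, which is the subject of the companion fix below.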
Willy Tarreau
0e231bbd7c BUG/MINOR: mux-h2: properly ignore R bit in GOAWAY stream ID
The stream ID indicated in GOAWAY frames must have its bit 31 (R) ignored
and this wasn't the case. The effect is that if this bit was present, the
GOAWAY frame would mark the last acceptable stream as negative, which is
the default situation (unlimited), thus would basically result in this
GOAWAY frame being ignored since it would replace a negative last_sid
with another negative one. The impact is thus basically that if a peer
would emit anything non-zero in the R bit, the GOAWAY frame would be
ignored and new streams would still be initiated on the backend, before
being rejected by the server.

Thanks to Haruto Kimura (Stella) for finding and reporting this bug.

This fix needs to be backported to all stable versions.
2026-03-19 07:11:54 +01:00
Willy Tarreau
1696cfaa19 BUG/MEDIUM: peers: enforce check on incoming table key type
The key type received over the peers protocol is not checked for
validity and as a result can crash the process when passed through
peer_int_key_type[] in peer_treat_definemsg(). The risk remains
very low since only trusted peers may exchange tables, however it
represents a risk the day haproxy supports new key types, because
mixing old and new versions could then cause the old ones to crash.
Let's add the required check in peer_treat_definemsg().

It is also worth noting that in this function a few protocol identifiers
of type int are read directly from a var_int via intdecode() and that some
protocol aliasing may occur (e.g. table_id, table_id_len etc). This is
not supposed to be a problem but it could hide implementation bugs and
cause interoperability issues once fixed, so these should be addressed
in a future commit that will not be marked for backporting.

Thanks to Haruto Kimura (Stella) for finding and reporting this bug.

This fix needs to be backported to all stable versions.
2026-03-19 07:03:10 +01:00
William Lallemand
c6221db375 BUG/MINOR: mworker: don't try to access an initializing process
In pcli_prefix_to_pid(), when resolving a worker by absolute pid
(@!<pid>) or by relative pid (@1), a worker that still has PROC_O_INIT
set (i.e. not yet ready, still initializing) could be returned as a
valid target.

During a reload, if a client connects to the master CLI and sends a
command targeting a worker (e.g. @@1 or @@!<pid>), the master resolves
the target pid and attempts to forward the command by transferring a fd
over the worker's sockpair. If the worker is still initializing and has
not yet sent its READY signal, its end of the sockpair is not usable,
causing send_fd_uxst() to fail with EPIPE. This results in the
following alert being repeated in a loop:

  [ALERT] (550032) : socketpair: Cannot transfer the fd 13 over sockpair@5. Giving up.

The situation is even worse if the initializing worker has already
exited (e.g. due to a bind failure) but has not yet been removed from
the process list: in that case the sockpair's remote end is already
closed, making the failure immediate and unrecoverable until the dead
worker is cleaned up.

This was not possible before 3.1 because the master's polling loop only
started once all workers were fully ready, making it impossible to
receive CLI connections while a worker was still initializing.

Fix this by skipping workers with PROC_O_INIT set in both the absolute
and relative pid resolution paths of pcli_prefix_to_pid(), so that
only fully initialized workers can be targeted.

Must be backported to 3.1 and later.
2026-03-18 17:08:30 +01:00
Willy Tarreau
b93137ce67 MINOR: debug: opportunistically load libthread_db.so.1 with set-dumpable=libs
When loading libs into the core dump, let's also try to load
libthread_db.so.1 that gdb usually requires. It can significantly help
decoding the threads for systems which require it, and the file is quite
small. It can appear at a few different locations and is generally next
to libpthread.so, or alternatively libc, so we first look where we found
them, and fall back to a few other common places. The file is really
small, a few tens of kB usually.
2026-03-18 15:30:39 +01:00
Willy Tarreau
e07c9ee575 MINOR: debug: copy debug symbols from /usr/lib/debug when present
When set-dumpable=libs, let's also pick the debug symbols for the libs
we're loading. For now we only try /usr/lib/debug/<path>, which is quite
common and easy to guess. Build IDs could also be used but are more
complex to deal with, so let's stay simple for now.
2026-03-18 15:30:39 +01:00
Willy Tarreau
de4f7eaeed DEV: gdb: add a new utility to extract libs from a core dump: libs-from-core
This utility takes as argument the path to a core dump, and it looks
for the archive signature of libraries embedded with "set-dumpable libs",
and either emits the offset and size on stdout, or directly dumps the
contents so that the tar file can be extracted directly by piping the
output to tar xf.
2026-03-18 15:30:39 +01:00
Willy Tarreau
e1738b665d MINOR: debug: read all libs in memory when set-dumpable=libs
When "set-dumpable" is set to "libs", in addition to marking the process
dumpable, haproxy also reads the binary and shared objects into memory as
a tar archive in a page-aligned location so that these files are easily
extractable from a future core dump. The goal here is to always have
access to the exact same binary and libs as those which caused the core
to happen. It's indeed very frequent to miss some of these, or to get
mismatching files due to a local update that didn't experience a reload,
or to get those of a host system instead of the container.

The in-memory tar file presents everything under a directory called
"core-%d" where %d corresponds to the PID of the worker process. In
order to ease the finding of these data in the core dump, the memory
area is contiguous and surrounded by PROT_NONE pages so that it appears
in its own segment in the core file. The total size used by this is a
few tens of MB, which is not a problem on large systems.
2026-03-18 15:30:39 +01:00
Willy Tarreau
6152a4eef5 MINOR: config: support explicit "on" and "off" for "set-dumpable"
The global "set-dumpable" keyword currently is only positional. Let's
extend its syntax to support arguments. For now we support both "on"
and "off" to explicitly enable or disable it.
2026-03-18 15:30:39 +01:00
Willy Tarreau
94a4578ccf MINOR: tools: add a function to load a file into a tar archive
New function load_file_into_tar() concatenates a file into an in-memory
tar archive and grows its size. Only the base name and a provided prefix
are used to name the file. If the file cannot be loaded, it's added as
size zero and permissions 0 to show that it failed to load. This will
be used to load post-mortem information so it needs to remain simple.
2026-03-18 15:30:39 +01:00
Willy Tarreau
c1dfea3ab3 MINOR: tools: add a function to create a tar file header
The purpose here is to create a tar file header in memory from a known
file name, prefix, size and mode. It will be used to prepare archives
of libs in use for improved debugging, but may probably be useful for
other purposes due to its simplicity.
2026-03-18 15:30:34 +01:00
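The header creation described above follows the POSIX ustar layout: a 512-byte block whose fields are NUL-padded octal ASCII, with an 8-byte checksum computed over the header while the checksum field is treated as eight spaces. The sketch below illustrates that format; it is not HAProxy's actual implementation and the function name is made up:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Fills a 512-byte ustar header for a regular file. Field offsets
 * (name@0, mode@100, size@124, chksum@148, typeflag@156, magic@257)
 * follow the ustar interchange format. */
static void tar_header_sketch(unsigned char hdr[512], const char *name,
                              unsigned long size, unsigned int mode)
{
    unsigned int sum = 0;
    int i;

    memset(hdr, 0, 512);
    snprintf((char *)hdr, 100, "%s", name);            /* file name        */
    snprintf((char *)hdr + 100, 8, "%07o", mode);      /* mode, octal      */
    snprintf((char *)hdr + 124, 12, "%011lo", size);   /* size, octal      */
    hdr[156] = '0';                                    /* typeflag: file   */
    memcpy(hdr + 257, "ustar", 6);                     /* magic            */
    memcpy(hdr + 263, "00", 2);                        /* version          */
    memset(hdr + 148, ' ', 8);                         /* chksum as spaces */
    for (i = 0; i < 512; i++)                          /* sum all bytes    */
        sum += hdr[i];
    snprintf((char *)hdr + 148, 8, "%06o", sum);       /* 6 digits + NUL   */
}
```

A zero-size, zero-mode header produced by the same routine is how load_file_into_tar() (previous commit) marks files that failed to load.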
Christopher Faulet
15cdcab1fc BUILD: spoe: Remove unused variable
Since 7a1382da7 ("BUG/MINOR: spoe: Fix condition to abort processing on
client abort"), the chn variable is no longer used in
spoe_process_event(). Let's remove it.

This patch must be backported with the commit above, as far as 3.1.
2026-03-18 11:28:33 +01:00
Christopher Faulet
7a1382da79 BUG/MINOR: spoe: Fix condition to abort processing on client abort
The test to detect client aborts in the SPOE, introduced by commit b3be3b94a
("BUG/MEDIUM: spoe: Properly abort processing on client abort"), was not
correct. Producer flags must not be tested. Only the frontend SC must be
tested when the abortonclose option is set.

Because of this bug, when a client aborted, the SPOE processing was aborted
too, regardless of the abortonclose option.

This patch must be backported with the commit above, so as far as 3.1.
2026-03-18 11:24:49 +01:00
Aurelien DARRAGON
8fe0950511 MINOR: promex: export "haproxy_sticktable_local_updates" metric
haproxy_sticktable_local_updates corresponds to the table->localupdate
counter, which is used internally by the peers protocol to identify
update messages in order to send and ack them among peers.

Here we decide to expose this information, as it is already the case in
"show peers" output, because it turns out that this value, which is
cumulative and grows in sync with the number of updates triggered on the
table due to changes initiated by the current process, can be used to
compute the update rate of the table. Computing the update rate of the
table (from the process point of view, ie: updates sent by the process and
not those received by the process), can be a great load indicator in order
to properly scale the infrastructure that is intended to handle the
table updates.

Note that there is a pitfall, which is that the value will eventually
wrap since it is stored using unsigned 32bits integer. Scripts or system
making use of this value must take wrapping into account between two
readings to properly compute the effective number of updates that were
performed between two readings. Also, they must ensure that the "polling"
rate between readings is small enough so that the value cannot wrap behind
their back.
2026-03-18 11:18:37 +01:00
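The wrap pitfall described above has a standard remedy: as long as fewer than 2^32 updates occur between two readings, computing the delta in unsigned 32-bit arithmetic yields the correct count even across a wrap. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Number of updates between two readings of a cumulative 32-bit
 * counter: modular (unsigned) subtraction absorbs the wrap, provided
 * the polling interval is short enough that fewer than 2^32 updates
 * can happen between readings. */
static uint32_t updates_between(uint32_t prev, uint32_t cur)
{
    return cur - prev;
}
```

This is the computation a monitoring script consuming haproxy_sticktable_local_updates would perform to derive an update rate.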
Aurelien DARRAGON
4319c20363 MEDIUM: stats-file/clock: automatically update now_offset based on shared clock
We no longer rely on now_offset stored in the shm-stats-file. Instead
haproxy automatically computes the now_offset relative to the monotonic
clock and the shared global clock.

Indeed, the previous model based on static now_offset when monotonic
clock is available proved to be insufficient when used in
combination with shm-stats-file (that is when monotonic clock is shared
between multiple co-processes). In ideal situation co-processes would
correctly apply the offset to their local monotonic clock and end up
with consistent now_ns. But when restarting from an existing
shm-stats-file from a previous session (ie: prior to reboot), then the
local monotonic clock would no longer be consistent with the one used
to update the file previously, so applying a static offset would fail
to restore clock consistency.

For this specific issue, a workaround was brought by 09bf116
("BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file")
but the solution implemented there was deemed too fragile, because there
is a 60sec window where the fix would fail to detect inconsistent clock
and would leave haproxy with a broken clock ranging from 0 to 60 seconds,
which can be huge.

By simply recomputing the now_offset each time we learn from another
process (through the shared map by reading global_now_ns), we simply
recompute our local offset (difference between OUR monotonic clock
and the SHARED one). Also, in clock_update_global_date(), we make
sure we always recompute the now_offset as now_ms may have been
updated from shared clock if shared clock was ahead of us.

Thanks to that new logic, interrupted processes, resumed processes,
processes started with a shm-stats-file from a previous session now
correctly recover from those various situations, and multiple
co-processes with diverging clocks on startup end up converging to
the same values.

Since it is no longer relevant to save now_offset in the map, it was
removed but to prevent shm-stats-file incompatibility with previous
versions, 8-byte hole was forced, and we didn't bump the shm-stats-file
version on purpose.

This patch may be backported in 3.3 after a solid period of observation
to ensure we didn't break things.
2026-03-18 11:18:33 +01:00
William Lallemand
29592cb330 BUG/MINOR: mjson: make mystrtod() length-aware to prevent out-of-bounds reads
mystrtod() was not length-aware and relied on null-termination or a
non-numeric character to stop. The fix adds a length parameter as a
strict upper bound for all pointer accesses.

The practical impact in haproxy is essentially null: all callers embed
the JSON payload inside a large haproxy buffer, so the speculative read
past the last digit lands on memory that is still within the same
allocation. ASAN cannot detect it in a normal haproxy run for the same
reason — the overread never escapes the enclosing buffer. Triggering a
detectable fault requires placing the JSON payload at the exact end of
an allocation.

Note: the 'path' buffer was using a null-terminated string so the result
of strlen is passed to it, this part was not at risk.

Thanks to Kamil Frankowicz for the original bug report.

This patch must be backported to all maintained versions.
2026-03-17 17:08:28 +01:00
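The shape of the length-aware fix can be shown with a simplified bounded parser: every access is gated by an explicit length instead of relying on a terminating NUL or a non-digit byte. This sketch only parses an unsigned integer prefix, whereas mystrtod() itself also handles sign, fraction and exponent:

```c
#include <assert.h>
#include <stddef.h>

/* Parses the leading decimal digits of s, never reading at or past
 * s[len]. Reports how many bytes were consumed so the caller can
 * resume after the number. */
static double bounded_atoi(const char *s, size_t len, size_t *consumed)
{
    double v = 0;
    size_t i = 0;

    while (i < len && s[i] >= '0' && s[i] <= '9') { /* bound checked first */
        v = v * 10 + (s[i] - '0');
        i++;
    }
    *consumed = i;
    return v;
}
```

The key point is the order of the tests in the loop condition: the length bound is checked before the byte is dereferenced, so a payload placed at the exact end of an allocation can no longer trigger an overread.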
Christopher Faulet
8dae4f7c0b BUG/MINOR: stream: Fix crash in stream dump if the current rule has no keyword
The commit 9f1e9ee0e ("DEBUG: stream: Display the currently running rule in
stream dump") revealed a bug. When a stream is dumped, if it is blocked on a
rule, we must take care the rule has a keyword to display its name.

Indeed, some action parsings are inlined with the rule parser. In that case,
there is no keyword attached to the rule.

Because of this bug, crashes can be experienced when a stream is
dumped. Now, when there is no keyword, "?" is displayed instead.

This patch must be backported as far as 2.6.
2026-03-17 08:39:49 +01:00
Christopher Faulet
ef2a292585 BUG/MINOR: http-ana: Swap L7 buffer with request buffer by hand
When a L7 retry is performed, we should not rely on b_xfer() to swap the L7
buffer with the request buffer. When it is performed the request buffer is
not allocated. b_xfer() must not be called with an unallocated destination
buffer. The swap remains an optimization. For instance, it is not performed on
buffers of different size. So the caller is responsible to provide an
allocated destination buffer with enough free space to transfer data.

However, when a L7 retry is performed, we cannot allocate a request buffer,
because we cannot yield. An error was reported, if we wait for a buffer, the
error will be handled by process_stream(). But we can swap the buffers by
hand. At this stage, we know there is no request buffer, so we can easily
swap it with the L7 buffer.

Note there is no real bug for now.

This patch could be backported to all stable versions.
2026-03-17 07:48:02 +01:00
Christopher Faulet
ba7dc46a92 BUG/MINOR: h2/h3: Never insert partial headers/trailers in an HTX message
In HTX, headers and trailers parts must always be complete. It is unexpected
to find header blocks without the EOH block or trailer blocks without the
EOT block. So, during H2/H3 message parsing, we must take care to remove any
HEADER/TRAILER block inserted when an error is encountered. It is mandatory
to be sure to properly report parsing errors to the upper layer.

It is now performed by calling htx_truncat_blk() function on the error
path. The tail block is saved before converting any HEADERS/TRAILERS frame
to HTX. It is used to remove all inserted block on error.

This patch rely on the following one:

  "MINOR: htx: Add function to truncate all blocks after a specific block"

It should be backported with the commit above to all stable versions for
the H2 part and as far as 2.8 for h3 one.
2026-03-17 07:48:02 +01:00
Christopher Faulet
fbdb0a991a MINOR: htx: Add function to truncate all blocks after a specific block
The htx_truncate_blk() function does the same as htx_truncate(), except that
data are truncated relative to a block in the message instead of an offset.
2026-03-17 07:48:02 +01:00
Christopher Faulet
3250ec6e9c BUG/MINOR: h2/h3: Only test number of trailers inserted in HTX message
When H2 or H3 trailers are inserted in an HTX message, we must take care not
to exceed the maximum number of trailers allowed in a message (the same as
the maximum number of headers, i.e. tune.http.maxhdr). However, all HTX
blocks in the HTX message were counted. Only TRAILERS HTX blocks must be
considered.

To fix the issue, in h2_make_htx_trailers(), we rely on the "idx" variable
at the end of the for loop. In h3_trailers_to_htx(), we rely on the
"hdr_idx" variable.

This patch must be backported to all stable versions for the H2 part and as
far as 2.8 for the H3 one.

2026-03-17 07:48:02 +01:00
Christopher Faulet
9c0aeb3af4 BUG/MEDIUM: stconn: Don't perform L7 retries with large buffer
L7 retries are buggy when a large buffer is used on the request channel. A
memcpy is used to copy data from the request buffer into the L7 buffer. The
L7 buffer is for now always a standard buffer. So if a larger buffer is
used, this leads to a buffer overflow and crashes the process.

The best way to fix the issue is to disable L7 retries when a large buffer
was allocated for the request channel. In that case, we don't want to
allocate an extra large buffer.

No backport needed.
2026-03-17 07:48:02 +01:00
Christopher Faulet
cd91838042 BUG/MEDIUM: stconn: Fix abort on close when a large buffer is used
When a large buffer is used on a channel, once we've started to send data to
the opposite side, receives are temporarily blocked to be sure to flush the
large buffer ASAP and be able to fall back on regular buffers. This was
performed by skipping calls to the endpoint (connection or applet). However,
doing so broke abortonclose and, more generally, masked any shut or error
events reported by the lower layer.

To fix the issue, instead of skipping receives, we now try a receive but
with a requested size set to 0.

No backport needed
2026-03-17 07:48:01 +01:00
Christopher Faulet
b3be3b94a0 BUG/MEDIUM: spoe: Properly abort processing on client abort
A client abort, when abortonclose is configured, was ignored when messages
were sent on events, while it worked properly when messages were sent via a
"send-spoe-group" action.

To fix the issue, when the SPOE filter is waiting for the SPOE applet
response, it must check if a client abort was reported and if so, must
interrupt its processing.

This patch should be backported as far as 3.1.
2026-03-17 07:48:01 +01:00
Christopher Faulet
d10fc3d265 BUG/MINOR: spoe: Properly switch SPOE filter to WAITING_ACK state
When the SPOE applet is created, the SPOE filter is set in SENDING_MSGS
state. When the applet has transferred data, it should switch the filter to
WAITING_ACK state. Concretely, there is no bug; at best, this saves some
useless applet wakeups.

This patch should be backported as far as 3.1
2026-03-17 07:47:52 +01:00
Christopher Faulet
00bea05a14 BUG/MEDIUM: stconn: Don't forget to wakeup applets on shutdown
When SC's shutdown callback functions were merged, a regression was
introduced: the applet was no longer woken up. Because of this bug, an
applet could remain blocked, waiting for an I/O event or a timeout.

This patch should fix the issue #3301.

No backport needed.
2026-03-17 07:38:57 +01:00
William Lallemand
ab7acdcc3a BUG/MINOR: sockpair: set FD_CLOEXEC on fd received via SCM_RIGHTS
FDs received through recv_fd_uxst() do not have FD_CLOEXEC set.
The equivalent sock_accept_conn() already handles this correctly:
any FD accepted or received in the master must be marked close-on-exec
to avoid leaking it across the execvp() performed on soft-reload.

This is currently triggering a leak in the master since 3.1: the worker
sends a socketpair fd to the master to issue the _send_status CLI
command, and recv_fd_uxst() receives it without setting FD_CLOEXEC. If a
re-exec happens before the master has had the chance to close that fd, it
survives execvp() and appears as an untracked unnamed AF_UNIX socket in
the new master generation.

This must be backported to all maintained branches.
2026-03-16 16:31:58 +01:00
William Lallemand
a3bf0de651 BUG/MINOR: mworker: avoid passing NULL version in proc list serialization
Add a NULL guard for the version field. This has no functional impact
since the master process never uses this field for its own mworker_proc
element, which should be the only one impacted. This avoids seeing "(null)"
in the version field when debugging.

Must be backported to 3.1 and later.
2026-03-13 20:26:53 +01:00
William Lallemand
51d6f1ca4f BUG/MINOR: mworker: set a timeout on the worker socketpair read at startup
During a soft reload, a starting worker sends sock_pair[0] to the master
via send_fd_uxst(), then reads on sock_pair[1] waiting for the master to
acknowledge receipt. Because of a documented macOS sendmsg(2) bug, the
worker must keep sock_pair[0] open until the master confirms the fd was
received by the CLI applet. This means the read() on sock_pair[1] will
never return 0 (EOF), since the worker itself still holds a reference to
sock_pair[0]. The worker can only unblock when the master actively sends
a byte back. If the master crashes before doing so, the worker blocks
indefinitely in read().

Fix this by setting a 2-second SO_RCVTIMEO on sock_pair[1] before the
read(), so the worker can unblock and continue regardless of the master's
state.

This was introduced by d7f6819161c ("BUG/MEDIUM: mworker: fix startup
and reload on macOS").

This should be backported to 3.1 and later.
2026-03-13 18:45:58 +01:00
William Lallemand
cb51c8729d BUG/MINOR: mworker: fix typo &= instead of & in proc list serialization
In mworker_proc_list_to_env(), a typo used '&=' instead of '&' when
checking PROC_O_TYPE_WORKER in child->options. This would corrupt the
options field by clearing all bits except PROC_O_TYPE_WORKER, but since
the function is called right before the master re-execs itself during a
reload, the corruption has no actual effect: the in-memory proc_list is
discarded by the exec, and the options field is not serialized to the
environment anyway.

This should be backported to all maintained versions.
2026-03-13 18:38:24 +01:00
Maxime Henrion
a390daaee4 MINOR: traces: defer processing of "-dt" options
We defer processing of the "-dt" options until after the configuration
file has been read. This will be useful if we ever allow trace sources
to be registered later, for instance from Lua.

No backport needed.
2026-03-13 09:13:24 +01:00
William Lallemand
d172f7b923 BUG/MINOR: mworker: only match worker processes when looking for unspawned proc
In master-worker mode, when a freshly forked worker looks up its own
entry in proc_list to send its "READY" status to the master, the loop
was breaking on the first process with pid == -1 regardless of its
type. If a non-worker process (e.g. a master or program) also had
pid == -1, the wrong entry could be selected, causing send_fd_uxst()
to use an invalid ipc_fd.

Fix this by adding a PROC_O_TYPE_WORKER check to the loop condition,
and add a BUG_ON() assertion to catch any case where the loop exits
without finding a valid worker entry.

Must be backported to 3.1.
2026-03-13 09:13:11 +01:00
Willy Tarreau
4e8cf26ab6 DOC: internals: short explanation on how thread_exec_ctx works
The goal is to have enough info to be able to automatically enable the
feature on future rulesets or subsystems.
2026-03-12 18:28:09 +01:00
Willy Tarreau
f7820bcbaa MINOR: activity: raise the default number of memprofile buckets to 4k
It was set to 1k by default but with the refinement of exec_ctx it's
becoming short, so let's raise it now.
2026-03-12 18:06:38 +01:00
Willy Tarreau
892adf3cc1 MINOR: activity: support aggregating by caller also for memprofile
"show profiling" supports "aggr" for tasks but it was ignored for
memory. Now that we're having many more entries, it makes sense to
have it to ignore the call path and merge similar operations.
2026-03-12 18:06:38 +01:00
Willy Tarreau
17cbec485a MINOR: cli: implement execution context for manually registered keywords
Keywords registered out of an initcall will have a TH_EX_CTX_CLI_KWL
execution context pointing to the keyword list. The report will indicate
the first 5 words of the first command of the list, e.g.:

     exec_ctx: cli kwl starting with 'debug counters   '

This should also work for CLI keywords registered in Lua.
2026-03-12 18:06:38 +01:00
Willy Tarreau
5cd71f69ba MINOR: cli: keep track of the initcall context since kw registration
Now CLI keywords registered via an initcall will be tracked during
execution, by keeping a link to their initcall location. "show threads"
now shows "exec_ctx: kw registered at @debug.c:3093" which indeed
corresponds to the initcall for the debugging commands.
2026-03-12 18:06:38 +01:00
Willy Tarreau
8139795c64 MINOR: cli: keep the info of the current keyword being processed in the appctx
Till now the CLI didn't know what keyword was being processed after it
was parsed. In order to report the execution context, we'll need to
store it. And this may even help for post-mortem analysis to know the
exact keyword being processed, so let's store the pointer in the cli_ctx
part of the appctx.
2026-03-12 18:06:38 +01:00
Willy Tarreau
9cb11d0859 MINOR: applet: set execution context on applet calls
It allows knowing when a thread is currently running inside an applet.
For example, "show threads" will now show "applet '<CLI>'" for the thread
issuing this command.
2026-03-12 18:06:38 +01:00
Willy Tarreau
c0bf395cde MINOR: task: set execution context on task/tasklet calls
The execution context now appears almost everywhere due to callbacks (e.g.
ssl_sock_io_cb).
Muxes also become visible now on memory profiling. A small test on h1+ssl
yields 838 lines of statistics. The number of buckets should definitely
be increased, and more grouping criteria should be added.

A performance test was conducted to observe the possible effect of
setting the execution context on each task switch, and it didn't change
at all, remaining at about 1.01 billion ctxsw/s on a 128-thread EPYC.
2026-03-12 18:06:38 +01:00
Willy Tarreau
ec7b07b650 MINOR: connection: track mux calls to report their allocation context
Most calls to mux ops were instrumented with a CALL_MUX_WITH_RET() or
CALL_MUX_NO_RET() macro in order to make the current thread's context
point to the called mux and be able to track its allocations. Only
a bunch of harmless mux_ctl() and ->subscribe/unsubscribe calls were
left untouched since useless. But destroy/detach/shut/init/snd_buf
and rcv_buf are now tracked.

It will not show allocations performed in IO callback via tasklet
wakeups however.

In order to ease reading of the output, cmp_memprof_ctx() knows about
muxes and sorts based on the .subscribe function address instead of
the mux_ops address so as to keep various callers grouped.
2026-03-12 18:06:38 +01:00
Willy Tarreau
e8e4449985 MINOR: ssl: set the thread execution context during message callbacks
In order to be able to track memory allocation performed from message
callbacks, let's set the thread execution context to a generic function
pointing to them during their call. This allows for example to observe
the share of SSL allocations caused by ssl_sock_parse_clienthello() when
SSL captures are enabled.

The release calls are automatic from the SSL library for these, and are
registered directly via SSL_get_ex_new_index(). Maybe we should improve
the internal API to wrap that function and systematically track free
calls as well. In this case, maybe even registering the message callback
registration could take both the callback and the release function.
There are few such users however, essentially capture and keylog.
2026-03-12 18:06:38 +01:00
Willy Tarreau
3fb8659d04 MINOR: filters: set the exec context to the current filter config
Doing this allows reporting the allocations/releases performed by filters
when running with memory profiling enabled. The flt_conf pointer is kept
and the report shows the filter name.
2026-03-12 18:06:38 +01:00
Willy Tarreau
43b56c22c7 MINOR: actions: also report execution contexts registered directly
This now reports directly registered actions using new type
TH_EX_CTX_ACTION which will report the first keyword of the
list.
2026-03-12 18:06:38 +01:00
Willy Tarreau
861d1111c3 MINOR: actions: store the location of keywords registered via initcalls
A bit similar to what was done for sample fetch functions and converters,
we now store with each action keyword the location of the initcall when
they're registered this way. Since there are many functions only calling
a LIST_APPEND() (one per ruleset), we now implement a dedicated function
to store the context in all keywords before doing the append.

However that's not sufficient, because keywords are not mandatory for
actions, so we cannot safely rely on rule->kw. Thus we then set the
exec_ctx per rule when they are all scanned in check_action_rules(),
based on the keyword if it exists, otherwise we make a context from
the action_ptr function if it is set (which it should be).

Finally at all call points we now check rule->exec_ctx.
2026-03-12 18:06:38 +01:00
Willy Tarreau
261cae3b6d MINOR: tools: support an execution context that is just a function
The purpose here is to be able to spot certain callbacks, such as the
SSL message callbacks, which are difficult to associate to anything.
Thus we introduce a new context type, TH_EX_CTX_FUNC, for which the
context is just the function pointed to by the void *pointer. One
difficulty with callbacks is that the allocation and release contexts
will likely be different, so the code should be properly structured
to allow proper tracking, either by instrumenting all calls, or by
making sure that the free calls are easy to spot in a report.
2026-03-12 18:06:38 +01:00
Willy Tarreau
aa4d5dd217 MINOR: sample: also report contexts registered directly
With the two new context types TH_EX_CTX_SMPF/CONV, we can now also
report contexts corresponding to direct calls to sample_register_fetches()
and sample_register_convs(). In this case, the first word of the keyword
list is reported.
2026-03-12 18:06:38 +01:00
Willy Tarreau
6e819dc4fa MINOR: sample: store location for fetch/conv via initcalls
Now keywords are registered with an exec_ctx and this one is passed
when calling ->process. The ctx is of type INITCALL when passed via
an initcall where we know the file name and line number.

This was tested with an extra "malloc(15)" added in smp_fetch_path()
which shows that it works:

  $ socat /tmp/sock1 - <<< "show profiling memory"|grep via
           Calls         |         Tot Bytes           |       Caller and method  [via]
      1893399           0       60592592              0|         0x78b2ec task_run_applet+0x3339c malloc(32) [via initcall @http_fetch.c:2416]
2026-03-12 18:06:38 +01:00
Willy Tarreau
2cd0cd84c6 MINOR: tools: support decoding ha_caller type exec context
The TH_EX_CTX_CALLER type takes an ha_caller pointer which allows a
caller to mark its caller's location using MK_CALLER().
2026-03-12 18:06:38 +01:00
Willy Tarreau
6e75da7a91 MINOR: tools: decode execution context TH_EX_CTX_INITCALL
When the execution context is set to TH_EX_CTX_INITCALL, the pointer
points to a valid initcall, and the decoder will show "kw registered
at %s:%d" with file and line number of the initcall declaration. It's
up to the caller to make the initcall pointer point to the one that was
set during the initcall. The purpose here is to be able to preserve and
pass that knowledge of an initcall down the chain so that future calls
to functions registered via the initcall are still assigned to it.
2026-03-12 18:06:38 +01:00
Willy Tarreau
33c928c745 MINOR: initcall: record the file and line declaration of an INITCALL
The INITCALL macros will now store the file and line number where they
are declared into the initcall struct, and RUN_INITCALLS() will assign
them to the global caller_file and caller_line variables, and will even
set caller_initcall to the current initcall so that at any instant such
functions know where their caller declared them. This will help with
error messages and traces where a bit of context will be welcome.
2026-03-12 18:06:38 +01:00
Willy Tarreau
3f3a0609e3 MINOR: memprof: report the execution context on profiling output
This leads to the context pointer being reported in "show profiling
memory" when known, as "[via other ctx XXX]" for example.
2026-03-12 18:06:38 +01:00
Willy Tarreau
998ed00729 MINOR: debug: report the execution context on thread dumps
Now we have one extra line saying "exec_ctx: something" in thread dumps
when it's known. It may help with warnings and panics to figure what
is ongoing.
2026-03-12 18:06:37 +01:00
Willy Tarreau
5d3246205b MINOR: tools: add a function to write a thread execution context.
The new function chunk_append_thread_ctx() appends to a buffer the given
execution context based on its type and pointer. The goal is to easily
use it in profiling output and thread dumps. For now it only handles
TH_EX_CTX_NONE (which prints nothing) and TH_EX_CTX_OTHER (which indicates
"other ctx" followed by the pointer). It will be extended by new types as
they arrive.
2026-03-12 18:06:37 +01:00
Willy Tarreau
13c89bf20d MINOR: memprof: also permit to sort output by calling context
By passing "byctx" to "show profiling memory", it's possible to sort by
the calling context first, which could help group certain calls by
subsystem and ease the interpretation of the output.
2026-03-12 18:06:37 +01:00
Willy Tarreau
2dfc8417cf MINOR: memprof: prepare to consider exec_ctx in reporting
This now allows to report the same function in multiple bins based on the
th_ctx's exec_ctx discriminant. It's also worth noting that the context is
not atomically committed, but this shouldn't be a problem since a single
entry can get it. In the worst case, a second thread trying to create the
same context in parallel would create a different bin just for this call,
which is harmless. The same situation already exists with the caller
pointer.
2026-03-12 18:06:37 +01:00
Willy Tarreau
b7c8fab507 MINOR: tinfo: start to add basic thread_exec_ctx
We have the struct made of a type and a pointer in the th_ctx and a
function to switch it for the current thread. Two macros are provided
to enclose a callee within a temporary context. For now only type OTHER
is supported (only a generic pointer).
2026-03-12 18:06:37 +01:00
Willy Tarreau
fb7e5e1696 MINOR: memprof: attempt different retry slots for different hashes on collision
When two pointers hash to the same memprofile bin, we currently retry
with the same bin until we find a spare one or reach the limit of 16.
Olivier suggested trying a different step for different pointers so
as to limit the number of bins to visit in such a case, so let's split
the pointer hash calculation so that we keep the raw hash before reduction
and use its lowest bits as the retry step. We force the lowest bit to 1 to
avoid integral multiples that would oscillate between only a few positions.

Quick tests with h1+h2 requests show that for ~744 distinct entries, we
used to have 1.17 retries per lookup before and 0.6 now so we're halving
the cost of hash collisions. A heavier workload that used to produce 920
entries with 2.01 retries per lookup now reaches 966 entries (94.3% usage
vs 89.8% before) with only 1.44 retries per lookup.

This should be safe to backport, but depends on this previous commit:

    MINOR: tools: extend the pointer hashing code to ease manipulations
2026-03-12 18:06:37 +01:00
Willy Tarreau
3b4275b072 MINOR: tools: add a new pointer hash function that also takes an argument
The purpose here is to combine two pointers and a long argument instead
of having the caller perform the mixing. It's also cleaner and more
efficient this way, as the arg is mixed after the multiplications, and
modern processors are efficient at multiplying then adding.
2026-03-12 18:06:37 +01:00
Willy Tarreau
825e5611ba MINOR: tools: extend the pointer hashing code to ease manipulations
We'll need to further extend the pointer hashing code to pass extra
parameters and to retrieve the dropped bits, so let's first split the
part that hashes the pointer from the part that reduces the hash to
the desired size.
2026-03-12 18:06:37 +01:00
Willy Tarreau
01457979b6 MINOR: activity: use dynamic allocation for "show profiling" entries
Historically, the data manipulated by "show profiling" were copied
onto the stack for sorting and aggregating, but not only does this limit
the number of entries we can keep, it also has an impact on CPU
usage (having to redo the whole copy+sort upon each resume) and on the
output accuracy (if sorting changes lines, the resume may happen from an
incorrect one).

Instead, let's dynamically allocate the work buffer and place it into
the service context. We only allocate it immediately before needing it
and release it immediately afterwards so that it doesn't stay long. It
also requires a release handler to release buffers allocated by interrupted
dumps, but that's all. The overall result is now much cleaner, more
accurate, faster and safer.

This patch may be backported to older LTS releases.
2026-03-12 18:06:37 +01:00
Willy Tarreau
07655da068 BUG/MINOR: proxy: do not forget to validate quic-initial rules
In check_config_validity() and proxy_finalize() we check the consistency
of all rule sets, but the quic_initial rules were not placed there. This
currently has little to no impact, however we're going to use that to
also finalize certain debugging info so better call the function. This
can be backported to 3.1 (proxy_finalize is 3.4-only).
2026-03-12 18:06:37 +01:00
Willy Tarreau
ed44adc3ca BUG/MINOR: memprof: avoid a small memory leak in "show profiling"
In 3.1, per-DSO statistics were added to the memprofile output by
commit 401fb0e87a ("MINOR: activity/memprofile: show per-DSO stats").
However an strdup() is performed there on the .info field, that is
never freed when leaving the function. Let's do it each time we leave
it. Ironically, this was found thanks to "show profiling" showing
itself as an unbalanced caller of strdup().

This needs to be backported to 3.0 since that commit was backported
there.
2026-03-12 18:06:37 +01:00
Willy Tarreau
4d5a91b8af BUILD: makefile: fix range build without test command
In 3.3, the "make range" target adopted a test command via the TEST_CMD
variable, with commit 90b70b61b1 ("BUILD: makefile: implement support
for running a command in range"). However now it breaks the script when
TEST_CMD is not set due to the shell expansion leaving two '||' operators
side by side. Let's fix this by passing the contents of the makefile
variable in positional arguments before executing them.
2026-03-12 18:06:37 +01:00
Olivier Houchard
4102461dd6 BUG/MEDIUM: ssl: Don't report read data as early data with AWS-LC
To read early data with AWS-LC (and BoringSSL), we have to use
SSL_read(). But SSL_read() will also try to do the handshake if it
hasn't been done yet, and at some point will do the handshake and will
return data that are actually not early data. So use SSL_in_early_data()
to make sure that the data we received are actually early data, and only
if so, add the CO_FL_EARLY_DATA flag. Otherwise any data first received would
be considered early, and an Early-Data header would be added.
As this bug was introduced by 76ba026548975a6d1bc23d1344807c64d994bf1e,
it should be backported with it.
2026-03-12 17:31:12 +01:00
William Lallemand
13d13691b5 BUG/MINOR: mworker: always stop the receiving listener
Upon _send_status, always stop the listener from which the request
was received, rather than looking it up from the proc_list entry via
fdtab[proc->ipc_fd[0]].owner.

A BUG_ON is added to verify that the listener which received the
request is the one expected for the reported PID.

This means it is no longer possible to send "_send_status READY XXX"
manually through the master CLI for testing, as that would trigger
the BUG_ON.

Must be backported as far as 3.1.
2026-03-12 17:29:50 +01:00
Olivier Houchard
76ba026548 BUG/MEDIUM: ssl: Handle receiving early data with BoringSSL/AWS-LC
The API for early data is a bit different with BoringSSL and AWS-LC than
it is for OpenSSL. As it was implemented, early data would be accepted,
but would not be processed until the handshake is done. Change that by
doing something similar to what OpenSSL does, and, if 0RTT has been
enabled on the listener, use SSL_read() to try to get early data before
starting the handshake, and if there's any, provide them to the mux the
same way it is done for OpenSSL.
That replaces a bunch of #ifdef SSL_READ_EARLY_DATA_SUCCESS in places
where something specific to OpenSSL has to be done.
This should be backported to 3.3.
2026-03-12 14:14:51 +01:00
Egor Shestakov
f24ed2a5d1 DOC/CLEANUP: config: update mentions of the old "Global parameters" section
The name of "Global section" was changed only in the summary, not in the
text itself. The names of some related refs were also updated.

Should be backported as far as 3.2.
2026-03-12 09:25:01 +01:00
Tom Braarup
b837b2b86c DOC: configuration: http-check expect example typo
On the http-check expect example
(https://docs.haproxy.org/dev/configuration.html#4.2-http-check%20expect)
there is a typo

-http-check expect header name "set-cookie" value -m beg "sessid="
+http-check expect hdr name "set-cookie" value -m beg "sessid="
2026-03-12 09:20:32 +01:00
Mia Kanashi
b6e28bb4d7 BUG/MINOR: jws: fix memory leak in jws_b64_signature
EVP_MD_CTX is allocated using EVP_MD_CTX_new() but was never freed.
ctx should be initialized to NULL, otherwise EVP_MD_CTX_free(ctx) could
segfault on an early error path.

Must be backported as far as 3.2.
2026-03-12 09:18:42 +01:00
Tim Duesterhus
760fef1fc0 BUG/MINOR: tcpcheck: Fix typo in error error message for http-check expect
With a config:

    backend bk_app
    	http-check expect status 200 string "status: ok"

This now correctly emits the error:

    config : parsing [./patch.cfg:2] : 'http-check expect' : only one pattern expected.

The line containing the typo has been unchanged since at least HAProxy 2.2;
the
patch should be backported into all supported branches.
2026-03-12 09:10:45 +01:00
William Lallemand
73732abfb2 BUILD: ssl: make X509_NAME usage OpenSSL 4.0 ready
Starting with OpenSSL 4.0, X509_get_subject_name(), X509_get_issuer_name(),
and X509_CRL_get_issuer() return a const-qualified X509_NAME pointer.
Similarly, X509_NAME_get_entry() returns a const X509_NAME_ENTRY *, and
X509_NAME_ENTRY_get_data() returns a const ASN1_STRING *.

Introduce the __X509_NAME_CONST__ macro (defined to 'const' for OpenSSL
>= 4.0.0, empty for WolfSSL and older OpenSSL versions, which lack const
on these APIs) and use it to qualify X509_NAME * variables and the
parameters of the three DN helper functions ssl_sock_get_dn_entry(),
ssl_sock_get_dn_formatted(), and ssl_sock_get_dn_oneline(). This avoids
both const-qualifier warnings on OpenSSL 4.0 and discarded-qualifier
warnings on WolfSSL, without needing explicit casts at call sites.

In ssl_sock.c (ssl_get_client_ca_file) and ssl_gencert.c
(ssl_sock_do_create_cert), a __X509_NAME_CONST__ X509_NAME * variable was
being reused to store the result of X509_NAME_dup() and then passed to
mutating functions (X509_NAME_add_entry_by_txt, X509_NAME_free). Introduce
separate X509_NAME * variables (xn_dup, subject) to hold the mutable
duplicate.

Original patch from Alexandr Nedvedicky <sashan@openssl.org>:
https://www.mail-archive.com/haproxy@formilux.org/msg46696.html
2026-03-11 17:00:59 +01:00
William Lallemand
e82f03dd88 BUILD: ssl: use ASN1_STRING accessors for OpenSSL 4.0 compatibility
In OpenSSL 4.0, the ASN1_STRING struct was made opaque and direct access
to its members (->data, ->length, ->type) no longer compiles. Replace
these accesses in ssl_sock_get_serial(), ssl_sock_get_time(), and
asn1_generalizedtime_to_epoch() with the proper accessor functions
ASN1_STRING_get0_data(), ASN1_STRING_length(), and ASN1_STRING_type().

The old direct access is preserved under USE_OPENSSL_WOLFSSL since
WolfSSL does not provide these accessor functions.

Original patch from Alexandr Nedvedicky <sashan@openssl.org>:
https://www.mail-archive.com/haproxy@formilux.org/msg46696.html
2026-03-11 16:59:54 +01:00
William Lallemand
6d14fd0b29 MEDIUM: mworker: exiting when couldn't find the master mworker_proc element
When a master process is reloading, the HAPROXY_PROCESSES variable is
deserialized. In older versions of the master-worker mode (< 1.9), no
master element existed in this variable.

This is not supposed to happen anymore, and could have caused problems
in the master anyway.

This patch changes the behavior by exiting the master with an alert if
no master element was found in this variable.
2026-03-10 15:57:21 +01:00
Christopher Faulet
00563233b7 DEBUG: stconn: Add a CHECK_IF() when I/O are performed on an orphan SC
When no endpoint is attached to a SC, it is unexpected to have I/O (receive
or send). But we honestly don't know if it happens or not. So a CHECK_IF()
is added to be able to track such calls.
2026-03-10 15:10:34 +01:00
Christopher Faulet
b2b0d1a8be MINOR: stconn: Simplify sc_abort/sc_shutdown by merging calls to se_shutdown
Calls to se_shutdown() were not the same between applets and mux endpoints;
only the SHUTW flag differed. However, only the multiplexers are
sensitive to the true SHUTW flag; the applets handle all of them the same
way. So the calls to se_shutdown() from sc_abort() and sc_shutdown() can be
merged to always use the multiplexer version.
2026-03-10 15:10:34 +01:00
Christopher Faulet
fb1bc592f5 MINOR: stconn: Totally remove app_ops from the stconns
The stconn app_ops structure is now empty and can be safely removed. So let's do
so.
2026-03-10 15:10:34 +01:00
Christopher Faulet
990456462f MINOR: stconn: Remove .shutdown() callback functions
These callback functions are no longer used, so they can safely be
removed. In addition, the field was removed from the app_ops structure.
2026-03-10 15:10:34 +01:00
Christopher Faulet
c65526ad57 MEDIUM: stconn: Merge all .shutdown() callback functions in sc_shutdown()
sc_shutdown() is no longer relying on .shutdown() callback functions.
Everything was merged in sc_shutdown() with a test on the app type.
2026-03-10 15:10:34 +01:00
Christopher Faulet
9dfff87b69 MINOR: stconn: Remove .abort() callback functions
These callback functions are no longer used, so they can safely be
removed. In addition, the field was removed from the app_ops structure.
2026-03-10 15:10:34 +01:00
Christopher Faulet
0fc6884bc7 MEDIUM: stconn: Merge all .abort() callback functions in sc_abort()
sc_abort() is no longer relying on .abort() callback functions. Everything
was merged in sc_abort() with a test on the app type.
2026-03-10 15:10:34 +01:00
Christopher Faulet
0c9741b70a MINOR: stconn: Remove .chk_snd() callback functions
These callback functions are no longer used, so they can safely be
removed. In addition, the field was removed from the app_ops structure.
2026-03-10 15:10:34 +01:00
Christopher Faulet
e33dfc4f26 MEDIUM: stconn: Merge all .chk_snd() callback functions in sc_chk_snd()
sc_chk_snd() is no longer relying on .chk_snd() callback functions.
Everything was merged in sc_chk_snd() with a test on the app type.
2026-03-10 15:10:34 +01:00
Christopher Faulet
5aa67f0587 MINOR: stconn: Remove .chk_rcv() callback functions
These callback functions are no longer used, so they can safely be
removed. In addition, the field was removed from the app_ops structure.
2026-03-10 15:10:34 +01:00
Christopher Faulet
aef7afbe65 MEDIUM: stconn: Merge all .chk_rcv() callback functions in sc_chk_rcv()
sc_chk_rcv() is no longer relying on .chk_rcv() callback functions.
Everything was merged in sc_chk_rcv() with a test on the app type.
2026-03-10 15:10:34 +01:00
Christopher Faulet
7c895092a7 MINOR: stconn: Wake up the SC with TASK_WOKEN_IO state from opposite side
When an SC is woken up by the opposite side, in inter-stream-connector
calls, the TASK_WOKEN_IO state is now used.
2026-03-10 15:10:34 +01:00
Christopher Faulet
aaa97c4441 MINOR: haterm: Remove hstream_wake() function
This function is no longer used, so it can be safely removed.
2026-03-10 15:10:34 +01:00
Christopher Faulet
d491329de9 MINOR: check: Remove wake_srv_chk() function
The wake_srv_chk() function is now only used by srv_chk_io_cb(), the
health-check I/O callback function. So let's remove it. The code of the
function was moved into srv_chk_io_cb().
2026-03-10 15:10:34 +01:00
Christopher Faulet
9c7c669d7a MEDIUM: stconn: Remove .wake() callback function from app_ops
.wake() callback function is no longer used by endpoints. So it can be
removed from the app_ops structure.
2026-03-10 15:10:34 +01:00
Christopher Faulet
a33b42035b MINOR: connection: Call sc_conn_process() instead of .wake() callback function
When we fail to create a mux, in conn_create_mux(), instead of calling the
app_ops .wake() callback function, we can directly call sc_conn_process().
At this stage, we know we are using a connection, so it is safe to do so.
2026-03-10 15:10:34 +01:00
Christopher Faulet
7be95eb892 MINOR: applet: Call sc_applet_process() instead of .wake() callback function
At the end of task_run_applet() and task_process_applet(), instead of
calling the app_ops .wake() callback function, we can directly call
sc_applet_process(). At this stage, we know we are using an applet, so it is
safe to do so.
2026-03-10 15:10:34 +01:00
Christopher Faulet
64d997ebfc MAJOR: muxes: No longer use app_ops .wake() callback function from muxes
Thanks to previous commits, it is now possible to wake the data layer up,
via a tasklet_wakeup, instead of using the app_ops .wake() callback
function.

When a data layer must be notified of a mux event (an error for instance),
we now always perform a tasklet_wakeup(). TASK_WOKEN_MSG state is used by
default. TASK_WOKEN_IO is additionally set if the data layer was subscribed
to receives or sends.

The changes are not trivial at all: a synchronous call to the
sc_conn_process() function was replaced with a tasklet_wakeup().
2026-03-10 15:10:34 +01:00
Christopher Faulet
26a0817c1a MINOR: muxes: Wake up the data layer from a mux stream with TASK_WOKEN_IO state
Now, when a mux stream is waking its data layer up for receives or sends, it
uses the TASK_WOKEN_IO state. The state is not used by the stconn I/O
callback function for now.
2026-03-10 15:10:34 +01:00
Christopher Faulet
376487cca9 MINOR: mux-spop: Rely on spop_strm_notify_send() when resuming streams for sending
In spop_resume_each_sending_spop_strm(), there was exactly the same code as
in spop_strm_notify_send(). So let's use spop_strm_notify_send() instead of
duplicating the code.
2026-03-10 15:10:34 +01:00
Christopher Faulet
aea0d38fdd MINOR: mux-h2: Rely on h2s_notify_send() when resuming h2s for sending
In h2_resume_each_sending_h2s(), there was exactly the same code as in
h2s_notify_send(). So let's use h2s_notify_send() instead of duplicating
the code.
2026-03-10 15:10:34 +01:00
Christopher Faulet
7abb7c4c79 MINOR: stconn: Call sc_conn_process from the I/O callback if TASK_WOKEN_MSG state was set
It is the first commit of a series to refactor the SC app_ops. The first
step is to remove the .wake() callback function from the app_ops to replace
all uses by a wakeup of the SC tasklet.

Here, when the SC is woken up, the state is now tested and if TASK_WOKEN_MSG
is set, sc_conn_process() is called.
2026-03-10 15:10:34 +01:00
Remi Tricot-Le Breton
924a92200f DOC: jwt: Add ECDH support in jwt_decrypt converters
The jwt_decrypt_jwk and jwt_decrypt_cert converters now manage
algorithms in the ECDH family.
2026-03-10 14:58:48 +01:00
Remi Tricot-Le Breton
31bbc1f0f1 MINOR: jwt: Manage ec certificates in jwt_decrypt_cert
This patch adds the support of algorithms in the ECDH family in the
jwt_decrypt_cert converter.
2026-03-10 14:58:47 +01:00
Remi Tricot-Le Breton
3925bb8efc MINOR: jwt: Add ecdh-es+axxxkw support in jwt_decrypt_jwk converter
This builds on the ECDH-ES processing and simply requires an extra AES
Key Wrap operation between the built key and the token's CEK.
2026-03-10 14:58:47 +01:00
Remi Tricot-Le Breton
32d9af559f MINOR: jwt: Manage ECDH-ES algorithm in jwt_decrypt_jwk function
When ECDH-ES algorithm is used in a JWE token, no cek is provided and
one must be built in order to decrypt the contents of the token. The
decrypting key is built by deriving a temporary key out of a public key
provided in the token and the private key provided by the user and
performing a concatKDF operation.
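
As an aside, the underlying ECDH exchange can be sketched with the openssl
CLI (illustrative only; haproxy does this through the OpenSSL library, and
the concatKDF step applied to the shared secret Z is not shown):

```shell
#!/bin/sh
# Both sides derive the same shared secret Z from their own private key and
# the peer's public key; ECDH-ES then feeds Z into a concatKDF (not shown).
set -e
tmp=$(mktemp -d); trap 'rm -rf "$tmp"' EXIT
openssl ecparam -name prime256v1 -genkey -noout -out "$tmp/a.pem"
openssl ecparam -name prime256v1 -genkey -noout -out "$tmp/b.pem"
openssl ec -in "$tmp/a.pem" -pubout -out "$tmp/a.pub" 2>/dev/null
openssl ec -in "$tmp/b.pem" -pubout -out "$tmp/b.pub" 2>/dev/null
za=$(openssl pkeyutl -derive -inkey "$tmp/a.pem" -peerkey "$tmp/b.pub" | od -An -tx1 | tr -d ' \n')
zb=$(openssl pkeyutl -derive -inkey "$tmp/b.pem" -peerkey "$tmp/a.pub" | od -An -tx1 | tr -d ' \n')
[ "$za" = "$zb" ] && echo "shared secret matches"
```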
2026-03-10 14:58:47 +01:00
Remi Tricot-Le Breton
026652a7eb MINOR: jwt: Parse ec-specific fields in jose header
When the encoding is of the ECDH family, the optional "apu" and "apv"
fields of the JOSE header must be parsed, as well as the mandatory "epk"
field that contains an EC public key used to derive a key that allows
either to decrypt the contents of the token (in case of ECDH-ES) or to
decrypt the content encoding key (cek) when using ECDH-ES+AES Key Wrap.
2026-03-10 14:58:46 +01:00
Remi Tricot-Le Breton
3d9764f4c3 MINOR: jwt: Convert EC JWK to EVP_PKEY
Convert a JWK with the "EC" key type ("kty") into an EVP_PKEY. The JWK
can either represent a public key if it only contains the "x" and "y"
fields, or a private key if it also contains the "d" field.
2026-03-10 14:58:46 +01:00
Remi Tricot-Le Breton
e34b633be3 MINOR: jwt: Improve 'jwt_tokenize' function
The 'jwt_tokenize' function that can be used to split a JWT token into
its subparts can either fully process the token (from beginning to end)
when we need to check its signature, or only partially when using the
jwt_header_query or jwt_member_query converters. In this case we relied
on the fact that the return value of the 'jwt_tokenize' function was not
checked because a '-1' was returned (which was not actually an error).

In order to make this logic more explicit, the 'jwt_tokenize' function
now has a way to warn the caller that the token was invalid (less
subparts than the specified 'item_num') or that the token was not
processed in full (enough subparts found without parsing the token all
the way).
The function will now only return 0 if we found strictly the same number
of subparts as 'item_num'.
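
An illustrative sketch of the three outcomes (not the C implementation;
simple subpart counting stands in for the real tokenizer):

```shell
#!/bin/sh
# exact: found strictly item_num subparts; partial: token not processed in
# full (more subparts remain); invalid: fewer subparts than requested.
tokenize() { # $1=token $2=item_num
	n=$(printf '%s\n' "$1" | awk -F. '{print NF}')
	if [ "$n" -lt "$2" ]; then echo invalid
	elif [ "$n" -gt "$2" ]; then echo partial
	else echo exact
	fi
}
tokenize "hdr.payload.sig" 3   # exact
tokenize "hdr.payload.sig" 2   # partial: enough subparts without reading all
tokenize "hdr.payload" 3       # invalid: fewer subparts than item_num
```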
2026-03-10 14:20:42 +01:00
William Lallemand
1babe8cb1b Revert "BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check"
This reverts commit 5e14904fef4856372433fc5b7e8d3de9450ec1b5.

The patch is broken, a better implementation is needed.
2026-03-09 16:53:06 +01:00
William Lallemand
1cbd1163f0 BUG/MINOR: mworker: don't set the PROC_O_LEAVING flag on master process
The master process in the proc_list mustn't set the PROC_O_LEAVING flag
since the reload doesn't mean the master will leave.

Could be backported as far as 3.1.
2026-03-09 16:51:56 +01:00
William Lallemand
bd3983b595 MINOR: mworker: add a BUG_ON() on mproxy_li in _send_status
mproxy_li is supposed to be used in _send_status to stop the sockpair FD
between the master and the new worker, which is a listener.

This can only work if the listener has been stored as the fdtab owner, and
there is no reason it shouldn't be.
2026-03-09 16:51:56 +01:00
Willy Tarreau
520faedda0 SCRIPTS: git-show-backports: add a restart-from-last option
It's always a bit tricky to avoid already backported patches when they
just got a different ID (e.g. a critical fix in a topic branch). Most
often with stable topic branches we just want to pick all stable commits
since the last backported one. New option -L instead of -m does exactly
this: it enumerates only commits that were added to the reference branch
after its most recent backport.
2026-03-09 15:36:05 +01:00
Willy Tarreau
459835d535 SCRIPTS: git-show-backports: hide the common ancestor warning in quiet mode
It's annoying to always see that warning in quiet mode when backporting
upstream to topic branches, let's hide it.
2026-03-09 15:36:02 +01:00
William Lallemand
9b3345237a BUG/MINOR: admin: haproxy-reload rename -vv long option
The -vv option used --verbose as its long form, which was identical to
the long form of -v. Since the case statement matches top-to-bottom,
--verbose would always trigger -v (VERBOSE=2), making -vv unreachable
via its long option. The long form is renamed to --verbose=all to avoid
the conflict, and the usage string is updated accordingly.
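
A minimal sketch of the shadowing (standalone functions, not the real
script):

```shell
#!/bin/sh
# case patterns are tried top to bottom: the first match wins, so a pattern
# repeated in a later branch is dead code.
parse() {
	case "$1" in
		-v|--verbose) echo "VERBOSE=2" ;;
		-vv|--verbose) echo "VERBOSE=3" ;;  # "--verbose" here is unreachable
	esac
}
parse --verbose             # always yields VERBOSE=2

# the fix: give -vv a distinct long form
parse_fixed() {
	case "$1" in
		-v|--verbose) echo "VERBOSE=2" ;;
		-vv|--verbose=all) echo "VERBOSE=3" ;;
	esac
}
parse_fixed --verbose=all   # now reachable: VERBOSE=3
```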

Must be backported to 3.3.
2026-03-08 01:37:56 +01:00
William Lallemand
2a0cf52cfc MEDIUM: admin: haproxy-reload conversion to POSIX sh
The script relied on a bash-specific process substitution (< <(...)) to
feed socat's output into the read loop. This is replaced with a standard
POSIX pipe into a command group.

The response parsing is also simplified: instead of iterating over each
line with a while loop and echoing them individually, the status line is
read first, the "--" separator consumed, and the remaining output is
streamed to stderr or discarded as a whole depending on the verbosity
level.
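
A minimal sketch of the substitution, with a dummy producer standing in for
socat. Note that the command group runs in a subshell, so the result must
leave it via the exit status or output, not via a variable set inside:

```shell
#!/bin/sh
# POSIX replacement for bash's `while read ...; done < <(producer)`:
# pipe the producer into a command group.
produce() { printf 'Success=1\n--\nsome detail line\n'; }

produce | {
	read -r status           # first line: the status
	read -r _                # consume the "--" separator
	cat >/dev/null           # discard (or stream) the remaining output
	[ "$status" = "Success=1" ]
}
echo "reload exit status: $?"
```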

Could be backported to 3.3 as it makes the script more portable, but it
introduces a slight change in the error format.
2026-03-08 01:37:52 +01:00
William Lallemand
551e5f5fd4 BUG/MINOR: admin: haproxy-reload use explicit socat address type
socat was used with the ${MASTER_SOCKET} variable directly, letting it
auto-detect the network protocol. However, when given a plain filename
that does not point to a UNIX socket, socat would create a file at that
path instead of reporting an error.

To fix this, the address type is now determined explicitly: if
MASTER_SOCKET points to an existing UNIX socket file (checked with -S),
UNIX-CONNECT: is used; if it matches a <host>:<port> pattern, TCP: is
used; otherwise an error is reported. The socat_addr variable is also
properly scoped as local to the reload() function.

Could be backported in 3.3.
2026-03-08 01:33:29 +01:00
Aurelien DARRAGON
2a2989bb23 CLEANUP: flt_http_comp: comp_state doesn't bother about the direction anymore
There is no need to have duplicated comp_ctx and comp_algo members for the
request vs the response in the comp_state struct because, thanks to the
previous commit, a compression filter is oriented either to the request or
to the response, and 2 distinct filters are instantiated when we need to
handle both request and response compression.

Thus we can do away with the duplicated struct members and related
operations.
2026-03-06 13:55:41 +01:00
Aurelien DARRAGON
cbebdb4ba8 MEDIUM: flt_http_comp: split "compression" filter in 2 distinct filters
Existing "compression" filter is a multi-purpose filter that will try
to compress both requests and responses according to "compression"
settings, such as "compression direction".

One of the pre-requisite work identified to implement decompression
filter is that we needed a way to manually define the sequence of
enabled filters to chain them in the proper order to make
compression and decompression chains work as expected in regard
to the intended use-case.

Due to the current nature of the "compression" filter this was not
possible, because the filter has a combined action as it will try
to compress both requests and responses, and as we are about to
implement "filter-sequence" directive, we will not be able to
change the order of execution of the compression filter between
requests and responses.

A possible solution we identified to solve this issue is to split the
existing "compression" filter into 2 distinct filters, one which is
request-oriented, "comp-req", and another one which is response-oriented
"comp-res". This is what we are doing in this commit. Compression logic
in itself is unchanged, "comp-req" will only aim to compress the request
while "comp-res" will try to compress the response. Both filters will
still be invoked on request and responses hooks, but they only do their
part of the job.

From now on, to compress both requests and responses, both filters have to
be enabled on the proxy. To preserve the original behavior, the
"compression" filter is still supported: it implicitly instantiates both
the "comp-req" and "comp-res" filters, as the compression filter is now
effectively split into 2 separate filters under the hood.

When using "comp-res" and "comp-req" filters explicitly, the use of the
"compression direction" setting is not relevant anymore. Indeed, the
compression direction is assumed as soon as one or both filters are
enabled. Thus "compression direction" is kept as a legacy option in
order to configure the "compression" generic filter.

Documentation was updated.
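
A hedged configuration sketch (assuming the filter keywords match the
filter names given above):

```
frontend fe
    bind :8080
    # legacy combined form, still supported (implicitly enables both):
    #filter compression
    # split form: enable each direction explicitly
    filter comp-req
    filter comp-res
    compression algo gzip
    default_backend be
```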
2026-03-06 13:55:31 +01:00
Aurelien DARRAGON
9549b05b94 MINOR: flt_http_comp: define and use proxy_get_comp() helper function
The proxy_get_comp() function can be used to retrieve the proxy->comp
options, or to allocate and initialize them if missing.

For now, it is solely used by parse_compression_options(), but the goal is
to be able to use this helper from multiple origins.
2026-03-06 13:55:24 +01:00
Remi Tricot-Le Breton
5e14904fef BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check
There was a "jwt_tokenize" call whose return value was not checked.

This was found by coverity and raised in GitHub #3277.
This patch can be backported to all stable branches.
2026-03-06 09:52:19 +01:00
Christopher Faulet
af6b9a0967 BUG/MINOR: backend: Don't get proto to use for websocket if there is no server
In connect_server(), it is possible to have no server defined (dispatch
mode or transparent backend). In that case, we must be careful to check the
srv variable in all calls involving the server. This was not done in one
place, where the protocol to use for websockets is retrieved. This must not
be done when there is no server.

This patch should fix the first report in #3144. It must be backported to
all stable versions.
2026-03-06 09:24:32 +01:00
Christopher Faulet
bfe5a2c3d7 BUG/MINOR: ssl-sample: Fix sample_conv_sha2() by checking EVP_Digest* failures
In sample_conv_sha2(), calls to EVP_Digest* can fail. So we must check the
return value of each call, report an error on failure and release the
digest context.

This patch should fix the issue #3274. It should be backported as far as
2.6.
2026-03-06 09:07:16 +01:00
Christopher Faulet
b48c9a1465 BUG/MINOR: stconn: Increase SC bytes_out value in se_done_ff()
When data are sent via the zero-copy data forwarding, we must not forget to
increase the stconn bytes_out value.

This patch must be backported to 3.3.
2026-03-05 16:17:33 +01:00
129 changed files with 4820 additions and 1813 deletions

11
.github/matrix.py vendored
View File

@ -19,9 +19,10 @@ from packaging import version
#
# this CI is used for both development and stable branches of HAProxy
#
# naming convention used, if branch name matches:
# naming convention used, if branch/tag name matches:
#
# "haproxy-" - stable branches
# "vX.Y.Z" - release tags
# otherwise - development branch (i.e. "latest" ssl variants, "latest" github images)
#
@ -120,11 +121,13 @@ def clean_compression(compression):
def main(ref_name):
print("Generating matrix for branch '{}'.".format(ref_name))
is_stable = "haproxy-" in ref_name or re.match(r'^v\d+\.\d+\.\d+$', ref_name)
matrix = []
# Ubuntu
if "haproxy-" in ref_name:
if is_stable:
os = "ubuntu-24.04" # stable branch
os_arm = "ubuntu-24.04-arm" # stable branch
else:
@ -228,7 +231,7 @@ def main(ref_name):
# "BORINGSSL=yes",
]
if "haproxy-" not in ref_name: # development branch
if not is_stable: # development branch
ssl_versions = ssl_versions + [
"OPENSSL_VERSION=latest",
"LIBRESSL_VERSION=latest",
@ -276,7 +279,7 @@ def main(ref_name):
)
# macOS on dev branches
if "haproxy-" not in ref_name:
if not is_stable:
os = "macos-26" # development branch
TARGET = "osx"

132
CHANGELOG
View File

@ -1,6 +1,138 @@
ChangeLog :
===========
2026/03/20 : 3.4-dev7
- BUG/MINOR: stconn: Increase SC bytes_out value in se_done_ff()
- BUG/MINOR: ssl-sample: Fix sample_conv_sha2() by checking EVP_Digest* failures
- BUG/MINOR: backend: Don't get proto to use for websocket if there is no server
- BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check
- MINOR: flt_http_comp: define and use proxy_get_comp() helper function
- MEDIUM: flt_http_comp: split "compression" filter in 2 distinct filters
- CLEANUP: flt_http_comp: comp_state doesn't bother about the direction anymore
- BUG/MINOR: admin: haproxy-reload use explicit socat address type
- MEDIUM: admin: haproxy-reload conversion to POSIX sh
- BUG/MINOR: admin: haproxy-reload rename -vv long option
- SCRIPTS: git-show-backports: hide the common ancestor warning in quiet mode
- SCRIPTS: git-show-backports: add a restart-from-last option
- MINOR: mworker: add a BUG_ON() on mproxy_li in _send_status
- BUG/MINOR: mworker: don't set the PROC_O_LEAVING flag on master process
- Revert "BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check"
- MINOR: jwt: Improve 'jwt_tokenize' function
- MINOR: jwt: Convert EC JWK to EVP_PKEY
- MINOR: jwt: Parse ec-specific fields in jose header
- MINOR: jwt: Manage ECDH-ES algorithm in jwt_decrypt_jwk function
- MINOR: jwt: Add ecdh-es+axxxkw support in jwt_decrypt_jwk converter
- MINOR: jwt: Manage ec certificates in jwt_decrypt_cert
- DOC: jwt: Add ECDH support in jwt_decrypt converters
- MINOR: stconn: Call sc_conn_process from the I/O callback if TASK_WOKEN_MSG state was set
- MINOR: mux-h2: Rely on h2s_notify_send() when resuming h2s for sending
- MINOR: mux-spop: Rely on spop_strm_notify_send() when resuming streams for sending
- MINOR: muxes: Wake up the data layer from a mux stream with TASK_WOKEN_IO state
- MAJOR: muxes: No longer use app_ops .wake() callback function from muxes
- MINOR: applet: Call sc_applet_process() instead of .wake() callback function
- MINOR: connection: Call sc_conn_process() instead of .wake() callback function
- MEDIUM: stconn: Remove .wake() callback function from app_ops
- MINOR: check: Remove wake_srv_chk() function
- MINOR: haterm: Remove hstream_wake() function
- MINOR: stconn: Wake up the SC with TASK_WOKEN_IO state from opposite side
- MEDIUM: stconn: Merge all .chk_rcv() callback functions in sc_chk_rcv()
- MINOR: stconn: Remove .chk_rcv() callback functions
- MEDIUM: stconn: Merge all .chk_snd() callback functions in sc_chk_snd()
- MINOR: stconn: Remove .chk_snd() callback functions
- MEDIUM: stconn: Merge all .abort() callback functions in sc_abort()
- MINOR: stconn: Remove .abort() callback functions
- MEDIUM: stconn: Merge all .shutdown() callback functions in sc_shutdown()
- MINOR: stconn: Remove .shutdown() callback functions
- MINOR: stconn: Totally remove app_ops from the stconns
- MINOR: stconn: Simplify sc_abort/sc_shutdown by merging calls to se_shutdown
- DEBUG: stconn: Add a CHECK_IF() when I/O are performed on a orphan SC
- MEDIUM: mworker: exiting when couldn't find the master mworker_proc element
- BUILD: ssl: use ASN1_STRING accessors for OpenSSL 4.0 compatibility
- BUILD: ssl: make X509_NAME usage OpenSSL 4.0 ready
- BUG/MINOR: tcpcheck: Fix typo in error error message for `http-check expect`
- BUG/MINOR: jws: fix memory leak in jws_b64_signature
- DOC: configuration: http-check expect example typo
- DOC/CLEANUP: config: update mentions of the old "Global parameters" section
- BUG/MEDIUM: ssl: Handle receiving early data with BoringSSL/AWS-LC
- BUG/MINOR: mworker: always stop the receiving listener
- BUG/MEDIUM: ssl: Don't report read data as early data with AWS-LC
- BUILD: makefile: fix range build without test command
- BUG/MINOR: memprof: avoid a small memory leak in "show profiling"
- BUG/MINOR: proxy: do not forget to validate quic-initial rules
- MINOR: activity: use dynamic allocation for "show profiling" entries
- MINOR: tools: extend the pointer hashing code to ease manipulations
- MINOR: tools: add a new pointer hash function that also takes an argument
- MINOR: memprof: attempt different retry slots for different hashes on collision
- MINOR: tinfo: start to add basic thread_exec_ctx
- MINOR: memprof: prepare to consider exec_ctx in reporting
- MINOR: memprof: also permit to sort output by calling context
- MINOR: tools: add a function to write a thread execution context.
- MINOR: debug: report the execution context on thread dumps
- MINOR: memprof: report the execution context on profiling output
- MINOR: initcall: record the file and line declaration of an INITCALL
- MINOR: tools: decode execution context TH_EX_CTX_INITCALL
- MINOR: tools: support decoding ha_caller type exec context
- MINOR: sample: store location for fetch/conv via initcalls
- MINOR: sample: also report contexts registered directly
- MINOR: tools: support an execution context that is just a function
- MINOR: actions: store the location of keywords registered via initcalls
- MINOR: actions: also report execution contexts registered directly
- MINOR: filters: set the exec context to the current filter config
- MINOR: ssl: set the thread execution context during message callbacks
- MINOR: connection: track mux calls to report their allocation context
- MINOR: task: set execution context on task/tasklet calls
- MINOR: applet: set execution context on applet calls
- MINOR: cli: keep the info of the current keyword being processed in the appctx
- MINOR: cli: keep track of the initcall context since kw registration
- MINOR: cli: implement execution context for manually registered keywords
- MINOR: activity: support aggregating by caller also for memprofile
- MINOR: activity: raise the default number of memprofile buckets to 4k
- DOC: internals: short explanation on how thread_exec_ctx works
- BUG/MINOR: mworker: only match worker processes when looking for unspawned proc
- MINOR: traces: defer processing of "-dt" options
- BUG/MINOR: mworker: fix typo &= instead of & in proc list serialization
- BUG/MINOR: mworker: set a timeout on the worker socketpair read at startup
- BUG/MINOR: mworker: avoid passing NULL version in proc list serialization
- BUG/MINOR: sockpair: set FD_CLOEXEC on fd received via SCM_RIGHTS
- BUG/MEDIUM: stconn: Don't forget to wakeup applets on shutdown
- BUG/MINOR: spoe: Properly switch SPOE filter to WAITING_ACK state
- BUG/MEDIUM: spoe: Properly abort processing on client abort
- BUG/MEDIUM: stconn: Fix abort on close when a large buffer is used
- BUG/MEDIUM: stconn: Don't perform L7 retries with large buffer
- BUG/MINOR: h2/h3: Only test number of trailers inserted in HTX message
- MINOR: htx: Add function to truncate all blocks after a specific block
- BUG/MINOR: h2/h3: Never insert partial headers/trailers in an HTX message
- BUG/MINOR: http-ana: Swap L7 buffer with request buffer by hand
- BUG/MINOR: stream: Fix crash in stream dump if the current rule has no keyword
- BUG/MINOR: mjson: make mystrtod() length-aware to prevent out-of-bounds reads
- MEDIUM: stats-file/clock: automatically update now_offset based on shared clock
- MINOR: promex: export "haproxy_sticktable_local_updates" metric
- BUG/MINOR: spoe: Fix condition to abort processing on client abort
- BUILD: spoe: Remove unused variable
- MINOR: tools: add a function to create a tar file header
- MINOR: tools: add a function to load a file into a tar archive
- MINOR: config: support explicit "on" and "off" for "set-dumpable"
- MINOR: debug: read all libs in memory when set-dumpable=libs
- DEV: gdb: add a new utility to extract libs from a core dump: libs-from-core
- MINOR: debug: copy debug symbols from /usr/lib/debug when present
- MINOR: debug: opportunistically load libthread_db.so.1 with set-dumpable=libs
- BUG/MINOR: mworker: don't try to access an initializing process
- BUG/MEDIUM: peers: enforce check on incoming table key type
- BUG/MINOR: mux-h2: properly ignore R bit in GOAWAY stream ID
- BUG/MINOR: mux-h2: properly ignore R bit in WINDOW_UPDATE increments
- OPTIM: haterm: use chunk builders for generated response headers
- BUG/MAJOR: h3: check body size with content-length on empty FIN
- BUG/MEDIUM: h3: reject unaligned frames except DATA
- BUG/MINOR: mworker/cli: fix show proc pagination losing entries on resume
- CI: github: treat vX.Y.Z release tags as stable like haproxy-* branches
- MINOR: freq_ctr: add a function to add values with a peak
- MINOR: task: maintain a per-thread indicator of the peak run-queue size
- MINOR: mux-h2: store the concurrent streams hard limit in the h2c
- MINOR: mux-h2: permit to moderate the advertised streams limit depending on load
- MINOR: mux-h2: permit to fix a minimum value for the advertised streams limit
- BUG/MINOR: mworker: fix sort order of mworker_proc in 'show proc'
- CLEANUP: mworker: fix tab/space mess in mworker_env_to_proc_list()
2026/03/05 : 3.4-dev6
- CLEANUP: acme: remove duplicate includes
- BUG/MINOR: proxy: detect strdup error on server auto SNI

View File

@ -1043,7 +1043,7 @@ IGNORE_OPTS=help install install-man install-doc install-bin \
uninstall clean tags cscope tar git-tar version update-version \
opts reg-tests reg-tests-help unit-tests admin/halog/halog dev/flags/flags \
dev/haring/haring dev/ncpu/ncpu dev/poll/poll dev/tcploop/tcploop \
dev/term_events/term_events dev/gdb/pm-from-core
dev/term_events/term_events dev/gdb/pm-from-core dev/gdb/libs-from-core
ifneq ($(TARGET),)
ifeq ($(filter $(firstword $(MAKECMDGOALS)),$(IGNORE_OPTS)),)
@ -1077,6 +1077,9 @@ admin/dyncookie/dyncookie: admin/dyncookie/dyncookie.o
dev/flags/flags: dev/flags/flags.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
dev/gdb/libs-from-core: dev/gdb/libs-from-core.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
dev/gdb/pm-from-core: dev/gdb/pm-from-core.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
@ -1178,7 +1181,7 @@ distclean: clean
$(Q)rm -f admin/dyncookie/dyncookie
$(Q)rm -f dev/haring/haring dev/ncpu/ncpu{,.so} dev/poll/poll dev/tcploop/tcploop
$(Q)rm -f dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht
$(Q)rm -f dev/qpack/decode dev/gdb/pm-from-core
$(Q)rm -f dev/qpack/decode dev/gdb/pm-from-core dev/gdb/libs-from-core
tags:
$(Q)find src include \( -name '*.c' -o -name '*.h' \) -print0 | \
@ -1332,7 +1335,8 @@ range:
echo "[ $$index/$$count ] $$commit #############################"; \
git checkout -q $$commit || die 1; \
$(MAKE) all || die 1; \
[ -z "$(TEST_CMD)" ] || $(TEST_CMD) || die 1; \
set -- $(TEST_CMD); \
[ "$$#" -eq 0 ] || "$$@" || die 1; \
index=$$((index + 1)); \
done; \
echo;echo "Done! $${count} commit(s) built successfully for RANGE $${RANGE}" ; \

View File

@ -1,2 +1,2 @@
$Format:%ci$
2026/03/05
2026/03/20

View File

@ -1 +1 @@
3.4-dev6
3.4-dev7

View File

@ -407,6 +407,7 @@ listed below. Metrics from extra counters are not listed.
+----------------------------------------------------+
| haproxy_sticktable_size |
| haproxy_sticktable_used |
| haproxy_sticktable_local_updates |
+----------------------------------------------------+
* Resolvers metrics

View File

@ -1,11 +1,10 @@
#!/bin/bash
#!/bin/sh
set -e
export VERBOSE=1
export TIMEOUT=90
export MASTER_SOCKET=${MASTER_SOCKET:-/var/run/haproxy-master.sock}
export RET=
export MASTER_SOCKET="${MASTER_SOCKET:-/var/run/haproxy-master.sock}"
alert() {
if [ "$VERBOSE" -ge "1" ]; then
@ -15,32 +14,38 @@ alert() {
reload() {
while read -r line; do
if [ "$line" = "Success=0" ]; then
RET=1
elif [ "$line" = "Success=1" ]; then
RET=0
elif [ "$line" = "Another reload is still in progress." ]; then
alert "$line"
elif [ "$line" = "--" ]; then
continue;
else
if [ "$RET" = 1 ] && [ "$VERBOSE" = "2" ]; then
echo "$line" >&2
elif [ "$VERBOSE" = "3" ]; then
echo "$line" >&2
fi
fi
done < <(echo "reload" | socat -t"${TIMEOUT}" "${MASTER_SOCKET}" -)
if [ -z "$RET" ]; then
alert "Couldn't finish the reload before the timeout (${TIMEOUT})."
return 1
if [ -S "$MASTER_SOCKET" ]; then
socat_addr="UNIX-CONNECT:${MASTER_SOCKET}"
else
case "$MASTER_SOCKET" in
*:[0-9]*)
socat_addr="TCP:${MASTER_SOCKET}"
;;
*)
alert "Invalid master socket address '${MASTER_SOCKET}': expected a UNIX socket file or <host>:<port>"
return 1
;;
esac
fi
return "$RET"
echo "reload" | socat -t"${TIMEOUT}" "$socat_addr" - | {
read -r status || { alert "No status received (connection error or timeout after ${TIMEOUT}s)."; exit 1; }
case "$status" in
"Success=1") ret=0 ;;
"Success=0") ret=1 ;;
*) alert "Unexpected response: '$status'"; exit 1 ;;
esac
read -r _ # consume "--"
if [ "$VERBOSE" -ge 3 ] || { [ "$ret" = 1 ] && [ "$VERBOSE" -ge 2 ]; }; then
cat >&2
else
cat >/dev/null
fi
exit "$ret"
}
}
usage() {
@ -52,12 +57,12 @@ usage() {
echo " EXPERIMENTAL script!"
echo ""
echo "Options:"
echo " -S, --master-socket <path> Use the master socket at <path> (default: ${MASTER_SOCKET})"
echo " -S, --master-socket <addr> Unix socket path or <host>:<port> (default: ${MASTER_SOCKET})"
echo " -d, --debug Debug mode, set -x"
echo " -t, --timeout Timeout (socat -t) (default: ${TIMEOUT})"
echo " -s, --silent Silent mode (no output)"
echo " -v, --verbose Verbose output (output from haproxy on failure)"
echo " -vv Even more verbose output (output from haproxy on success and failure)"
echo " -vv --verbose=all Very verbose output (output from haproxy on success and failure)"
echo " -h, --help This help"
echo ""
echo "Examples:"
@ -84,7 +89,7 @@ main() {
VERBOSE=2
shift
;;
-vv|--verbose)
-vv|--verbose=all)
VERBOSE=3
shift
;;

162
dev/gdb/libs-from-core.c Normal file
View File

@ -0,0 +1,162 @@
/*
* Extracts the libs archives from a core dump
*
* Copyright (C) 2026 Willy Tarreau <w@1wt.eu>
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/* Note: builds with no option under glibc, and can be built as a minimal
* uploadable static executable using nolibc as well:
gcc -o libs-from-core -nostdinc -nostdlib -s -Os -static -fno-ident \
-fno-exceptions -fno-asynchronous-unwind-tables -fno-unwind-tables \
-Wl,--gc-sections,--orphan-handling=discard,-znoseparate-code \
-I /path/to/nolibc-sysroot/include libs-from-core.c
*/
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/stat.h>
#include <elf.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
void usage(const char *progname)
{
const char *slash = strrchr(progname, '/');
if (slash)
progname = slash + 1;
fprintf(stderr,
"Usage: %s [-q] <core_file>\n"
"Locate a libs archive from an haproxy core dump and dump it to stdout.\n"
"Arguments:\n"
" -q Query mode: only report offset and length, do not dump\n"
" core_file Core dump produced by haproxy\n",
progname);
}
int main(int argc, char **argv)
{
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
struct stat st;
uint8_t *mem;
int i, fd;
const char *fname;
int quiet = 0;
int arg;
for (arg = 1; arg < argc; arg++) {
if (*argv[arg] != '-')
break;
if (strcmp(argv[arg], "-q") == 0)
quiet = 1;
else if (strcmp(argv[arg], "--") == 0) {
arg++;
break;
}
}
if (arg < argc) {
fname = argv[arg];
} else {
usage(argv[0]);
exit(1);
}
fd = open(fname, O_RDONLY);
if (fd < 0) {
perror("open()");
exit(1);
}
/* Let's just map the core dump as an ELF header */
fstat(fd, &st);
mem = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (mem == MAP_FAILED) {
perror("mmap()");
exit(1);
}
/* get the program headers */
ehdr = (Elf64_Ehdr *)mem;
/* check that it's really a core. Should be "\x7fELF" */
if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
fprintf(stderr, "ELF magic not found.\n");
exit(1);
}
if (ehdr->e_ident[EI_CLASS] != ELFCLASS64) {
fprintf(stderr, "Only 64-bit ELF supported.\n");
exit(1);
}
if (ehdr->e_type != ET_CORE) {
fprintf(stderr, "ELF type %d, not a core dump.\n", ehdr->e_type);
exit(1);
}
/* OK we can safely go with program headers */
phdr = (Elf64_Phdr *)(mem + ehdr->e_phoff);
for (i = 0; i < ehdr->e_phnum; i++) {
uint64_t size = phdr[i].p_filesz;
uint64_t offset = phdr[i].p_offset;
int ret = 0;
if (phdr[i].p_type != PT_LOAD)
continue;
//fprintf(stderr, "Scanning segment %d...\n", ehdr->e_phnum);
//fprintf(stderr, "\r%-5d: off=%lx va=%lx sz=%lx ", i, (long)offset, (long)phdr[i].p_vaddr, (long)size);
if (!size)
continue;
if (size < 512) // minimum for a tar header
continue;
/* tar magic */
if (memcmp(mem + offset + 257, "ustar\0""00", 8) != 0)
continue;
/* uid, gid */
if (memcmp(mem + offset + 108, "0000000\0""0000000\0", 16) != 0)
continue;
/* link name */
if (memcmp(mem + offset + 157, "haproxy-libs-dump\0", 18) != 0)
continue;
/* OK that's really it */
if (quiet)
printf("offset=%#lx size=%#lx\n", offset, size);
else
ret = (write(1, mem + offset, size) == size) ? 0 : 1;
return ret;
}
//fprintf(stderr, "\r%75s\n", "\r");
fprintf(stderr, "libs archive not found. Was 'set-dumpable' set to 'libs'?\n");
return 1;
}
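For clarity, the three signature checks performed by the tool above can be sketched in Python; this is a re-implementation of the same byte comparisons for illustration, not part of haproxy:

```python
def looks_like_haproxy_libs_dump(block: bytes) -> bool:
    """Mirror the three header checks done by the C tool above on a
    candidate 512-byte tar header block (illustrative sketch only)."""
    if len(block) < 512:                                  # minimum for a tar header
        return False
    return (block[257:265] == b"ustar\x0000"              # tar magic + version
            and block[108:124] == b"0000000\x00" * 2      # uid and gid fields
            and block[157:175] == b"haproxy-libs-dump\x00")  # link name marker
```

A segment passing these three checks is taken as the start of the embedded libs archive.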

@ -3,7 +3,7 @@
Configuration Manual
----------------------
version 3.4
2026/03/05
2026/03/20
This document covers the configuration language as implemented in the version
@ -1725,7 +1725,7 @@ Assuming haproxy is in $PATH, test these configurations in a shell with:
$ sudo haproxy -f configuration.conf -c
3. Global parameters
3. Global section
--------------------
Parameters in the "global" section are process-wide and often OS-specific. They
@ -1886,10 +1886,13 @@ The following keywords are supported in the "global" section :
- tune.h2.be.glitches-threshold
- tune.h2.be.initial-window-size
- tune.h2.be.max-concurrent-streams
- tune.h2.be.max-frames-at-once
- tune.h2.be.rxbuf
- tune.h2.fe.glitches-threshold
- tune.h2.fe.initial-window-size
- tune.h2.fe.max-concurrent-streams
- tune.h2.fe.max-frames-at-once
- tune.h2.fe.max-rst-at-once
- tune.h2.fe.max-total-streams
- tune.h2.fe.rxbuf
- tune.h2.header-table-size
@ -3148,10 +3151,29 @@ server-state-file <file>
configuration. See also "server-state-base" and "show servers state",
"load-server-state-from-file" and "server-state-file-name"
set-dumpable
set-dumpable [ on | off | libs ]
This option helps choose the core dump behavior in case of process crash.
Available options are:
- on this enables core dumping at the process level if it was
previously disabled.
- off this disables a previously enabled core dumping.
- libs this enables core dumping with an embedded copy of the binaries and
libraries that are required for debugging. This may be requested by
developers. In this case haproxy will try to load the libraries it
depends on into memory and keep them preciously. If the process
crashes, they will be dumped into the core so there is no need for
retrieving them from the file system anymore and no risk that they
do not match the core. This takes a few megabytes to a few tens of
megabytes of additional RAM, so it is better not to use it on small
systems.
This option is better left disabled by default and enabled only upon a
developer's request. By default it is disabled. Without argument, it defaults
to "on". If it has been enabled, it may still be forcibly disabled by prefixing
it with the "no" keyword or by setting it to "off". It has no impact on
performance nor stability but will try hard to re-enable core dumps that were
possibly disabled by file size limitations (ulimit -f), core size limitations
(ulimit -c), or "dumpability" of a process after changing its UID/GID (such
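As an illustration, enabling the libs dump could be done in the global section like this (hedged sketch, other global settings omitted):

```
global
set-dumpable libs
```

The embedded archive can later be extracted from a resulting core with the libs-from-core tool shown earlier on this page.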
@ -4143,8 +4165,11 @@ tune.bufsize.small <size>
If however a small buffer is not sufficient, a reallocation is automatically
done to switch to a standard size buffer.
For the moment, it is automatically used only by HTTP/3 protocol to emit the
response headers. Otherwise, small buffers support can be enabled for
specific proxies via the "use-small-buffers" option.
See also: option use-small-buffers
tune.comp.maxlevel <number>
Sets the maximum compression level. The compression level affects CPU
@ -4349,6 +4374,13 @@ tune.h2.be.max-concurrent-streams <number>
case). It is highly recommended not to increase this value; some might find
it optimal to run at low values (1..5 typically).
tune.h2.be.max-frames-at-once <number>
Sets the maximum number of HTTP/2 incoming frames that will be processed at
once on a backend connection. It can be useful to set this to a low value
(a few tens to a few hundreds) when dealing with very large buffers in order
to maintain a low latency and a better fairness between multiple connections.
The default value is zero, which means that no limitation is enforced.
tune.h2.be.rxbuf <size>
Sets the HTTP/2 receive buffer size for outgoing connections, in bytes. This
size will be rounded up to the next multiple of tune.bufsize and will be
@ -4399,7 +4431,7 @@ tune.h2.fe.initial-window-size <number>
See also: tune.h2.initial-window-size.
tune.h2.fe.max-concurrent-streams <number>
tune.h2.fe.max-concurrent-streams <number> [args...]
Sets the HTTP/2 maximum number of concurrent streams per incoming connection
(i.e. the number of outstanding requests on a single connection from a
client). When not set, the default set by tune.h2.max-concurrent-streams
@ -4407,7 +4439,56 @@ tune.h2.fe.max-concurrent-streams <number>
the page load time for complex sites with lots of small objects over high
latency networks but can also result in using more memory by allowing a
client to allocate more resources at once. The default value of 100 is
generally good and it is recommended not to change this value. A larger
concurrency also has an impact on the processing load and latency when
dealing with large numbers of connections which are themselves using many
streams, and it may lower the barrier to denial of service attacks. The
command supports the following optional arguments after the number:
- rq-load { <number> | auto | ignore }:
The optional argument "rq-load" permits to dynamically adjust the
advertised concurrency based on the executing thread's run-queue load:
as long as the thread's load remains below the indicated threshold, the
configured streams limit will be advertised. When the thread's load
increases beyond the configured limit, the advertised streams limit will be
decreased proportionally to the square of the excess ratio. Target load
levels between 50 and 100 generally show very good moderation under heavy
loads. Alternately, instead of specifying an explicit number, the keyword
accepts "ignore", which is the default and means that the thread's
run-queue load will not be considered to moderate the advertised streams
limit, and "auto", which sets the limit to the "tune.runqueue-depth"
value, which generally provides good results without having to tweak
the configuration any further.
- min <number>:
This sets the minimum advertised concurrency level when rq-load is used,
even if this results in a higher load than the configured target. This
allows to maintain a good level of interactivity on a site under very
heavy load. The minimum and default value is 1, but values between 5
and 15 can improve user experience.
Example:
tune.h2.fe.max-concurrent-streams 100 rq-load auto min 15
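The moderation rule described above can be illustrated with a small Python sketch. The manual only says the limit decreases "proportionally to the square of the excess ratio", so this is an assumed interpretation of that rule, not haproxy's actual code:

```python
def advertised_streams(configured: int, rq_load: int, target: int,
                       minimum: int = 1) -> int:
    """Assumed interpretation of the documented rq-load moderation: below
    the target load the configured limit is advertised as-is; above it, the
    limit is divided by the square of the excess ratio, floored at 'minimum'."""
    if rq_load <= target:
        return configured
    excess = rq_load / target
    return max(minimum, int(configured / (excess * excess)))
```

Under this reading, with a limit of 100 and a target of 100, a thread loaded at 200 would advertise about 25 streams, and a heavily overloaded thread would fall back to the configured minimum.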
tune.h2.fe.max-frames-at-once <number>
Sets the maximum number of HTTP/2 incoming frames that will be processed at
once on a frontend connection. It can be useful to set this to a low value
(a few tens to a few hundreds) when dealing with very large buffers in order
to maintain a low latency and a better fairness between multiple connections.
The default value is zero, which means that no limitation is enforced.
tune.h2.fe.max-rst-at-once <number>
Sets the maximum number of HTTP/2 incoming RST_STREAM that will be processed
at once on a frontend connection. Once the specified number of RST_STREAM
frames are received, the connection handler will be placed in a low priority
queue and be processed after all other tasks. It can be useful to set this to
a very low value (1 or a few units) to significantly reduce the impact of
RST_STREAM floods. RST_STREAM frames do happen when a user clicks the Stop
button in their browser, but the few extra milliseconds caused by this
requeuing are generally unnoticeable, and they are quite effective at
lowering the load caused by such floods. The default value is zero, which
means that no limitation is enforced.
tune.h2.fe.max-total-streams <number>
Sets the HTTP/2 maximum number of total streams processed per incoming
@ -5893,6 +5974,8 @@ errorloc302 X X X X
-- keyword -------------------------- defaults - frontend - listen -- backend -
errorloc303 X X X X
error-log-format X X X -
external-check command X - X X
external-check path X - X X
force-persist - - X X
force-be-switch - X X -
filter - X X X
@ -5938,6 +6021,7 @@ option disable-h2-upgrade (*) X X X -
option dontlog-normal (*) X X X -
option dontlognull (*) X X X -
-- keyword -------------------------- defaults - frontend - listen -- backend -
option external-check X - X X
option forwardfor X X X X
option forwarded (*) X - X X
option h1-case-adjust-bogus-client (*) X X X -
@ -5956,9 +6040,9 @@ option httpchk X - X X
option httpclose (*) X X X X
option httplog X X X -
option httpslog X X X -
option idle-close-on-response (*) X X X -
option independent-streams (*) X X X X
option ldap-check X - X X
option external-check X - X X
option log-health-checks (*) X - X X
option log-separate-errors (*) X X X -
option logasap (*) X X X -
@ -5985,9 +6069,7 @@ option tcp-smart-connect (*) X - X X
option tcpka X X X X
option tcplog X X X -
option transparent (deprecated) (*) X - X X
option idle-close-on-response (*) X X X -
external-check command X - X X
external-check path X - X X
option use-small-buffers (*) X - X X
persist rdp-cookie X - X X
quic-initial X (!) X X -
rate-limit sessions X X X -
@ -6935,12 +7017,16 @@ compression offload
See also : "compression type", "compression algo", "compression direction"
compression direction <direction>
compression direction <direction> (deprecated)
Makes haproxy able to compress both requests and responses.
Valid values are "request", to compress only requests, "response", to
compress only responses, or "both", when you want to compress both.
The default value is "response".
This directive is only relevant when the legacy "filter compression" is
enabled, as with explicit "comp-req" and "comp-res" filters the compression
direction is redundant.
May be used in the following contexts: http
See also : "compression type", "compression algo", "compression offload"
@ -7646,6 +7732,96 @@ force-persist { if | unless } <condition>
and section 7 about ACL usage.
external-check command <command>
Executable to run when performing an external-check
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<command> is the external command to run
The arguments passed to the command are:
<proxy_address> <proxy_port> <server_address> <server_port>
The <proxy_address> and <proxy_port> are derived from the first listener
that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
listener the proxy_address will be the path of the socket and the
<proxy_port> will be the string "NOT_USED". In a backend section, it's not
possible to determine a listener, and both <proxy_address> and <proxy_port>
will have the string value "NOT_USED".
Some values are also provided through environment variables.
Environment variables :
HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
applicable, for example in a "backend" section).
HAPROXY_PROXY_ID The backend id.
HAPROXY_PROXY_NAME The backend name.
HAPROXY_PROXY_PORT The first bind port if available (or empty if not
applicable, for example in a "backend" section or
for a UNIX socket).
HAPROXY_SERVER_ADDR The server address.
HAPROXY_SERVER_CURCONN The current number of connections on the server.
HAPROXY_SERVER_ID The server id.
HAPROXY_SERVER_MAXCONN The server max connections.
HAPROXY_SERVER_NAME The server name.
HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
socket).
HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used
HAPROXY_SERVER_PROTO The protocol used by this server, which can be one
of "cli" (the haproxy CLI), "syslog" (syslog TCP
server), "peers" (peers TCP server), "h1" (HTTP/1.x
server), "h2" (HTTP/2 server), or "tcp" (any other
TCP server).
PATH The PATH environment variable used when executing
the command may be set using "external-check path".
If the command executes and exits with a zero status then the check is
considered to have passed, otherwise the check is considered to have
failed.
Example :
external-check command /bin/true
See also : "external-check", "option external-check", "external-check path"
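For illustration, a minimal external check could be written as follows. This is a hypothetical script relying only on the environment variables documented above; it is not shipped with haproxy, and a real script would end with sys.exit(run_check()):

```python
import os
import socket

def run_check() -> int:
    """Return 0 (check passed) if a TCP connection to the checked server
    succeeds, 1 otherwise, using the documented HAPROXY_SERVER_* variables."""
    addr = os.environ.get("HAPROXY_SERVER_ADDR", "")
    port = os.environ.get("HAPROXY_SERVER_PORT", "")
    if not addr or not port:      # e.g. UNIX socket servers have no port
        return 1
    try:
        with socket.create_connection((addr, int(port)), timeout=2):
            return 0
    except (OSError, ValueError):
        return 1
```

Since haproxy maps the exit status to the check result, returning 0 marks the server as healthy and any non-zero value marks it as failed.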
external-check path <path>
The value of the PATH environment variable used when running an external-check
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<path> is the path used when executing external command to run
The default path is "".
Example :
external-check path "/usr/bin:/bin"
See also : "external-check", "option external-check",
"external-check command"
force-be-switch { if | unless } <condition>
Allow content switching to select a backend instance even if it is disabled
or unpublished. This rule can be used by admins to test traffic to services
@ -8150,6 +8326,11 @@ http-check expect [min-recv <int>] [comment <msg>]
occurred during the expect rule evaluation. <fmt> is a
Custom log format string (see section 8.2.6).
status-code <expr> is optional and can be used to set the check status code
reported in logs, on success or on error. <expr> is a
standard HAProxy expression formed by a sample-fetch
followed by some converters.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "status", "rstatus", "hdr",
"fhdr", "string", or "rstring". The keyword may be preceded by an
@ -8259,7 +8440,7 @@ http-check expect [min-recv <int>] [comment <msg>]
http-check expect status 200,201,300-310
# be sure a sessid cookie is set
http-check expect header name "set-cookie" value -m beg "sessid="
http-check expect hdr name "set-cookie" value -m beg "sessid="
# consider SQL errors as errors
http-check expect ! string SQL\ Error
@ -9867,6 +10048,24 @@ no option dontlognull
See also : "log", "http-ignore-probes", "monitor-uri", and
section 8 about logging.
option external-check
Use external processes for server health checks
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
It is possible to test the health of a server using an external command.
This is achieved by running the executable set using "external-check
command".
Requires the "external-check" global to be set.
See also : "external-check", "external-check command", "external-check path"
option forwarded [ proto ]
[ host | host-expr <host_expr> ]
[ by | by-expr <by_expr> ] [ by_port | by_port-expr <by_port_expr>]
@ -10657,6 +10856,39 @@ option httpslog
See also : section 8 about logging.
option idle-close-on-response
no option idle-close-on-response
Avoid closing idle frontend connections if a soft stop is in progress
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry on
write errors, this can result in errors while haproxy is reloading. Even
though a proper implementation should retry on connection/write errors, this
option was introduced to support backwards compatibility with haproxy prior
to version 2.4. Indeed before v2.4, haproxy used to wait for a last request
and response to add a "connection: close" header before closing, thus
notifying the client that the connection would not be reusable.
In a real life example, this behavior was seen in AWS using the ALB in front
of a haproxy. The end result was ALB sending 502 during haproxy reloads.
Users are warned that using this option may increase the number of old
processes if connections remain idle for too long. Adjusting the client
timeouts and/or the "hard-stop-after" parameter accordingly might be
needed in case of frequent reloads.
See also: "timeout client", "timeout client-fin", "timeout http-request",
"hard-stop-after"
option independent-streams
no option independent-streams
Enable or disable independent timeout processing for both directions
@ -10721,56 +10953,6 @@ option ldap-check
See also : "option httpchk"
option log-health-checks
no option log-health-checks
Enable or disable logging of health checks status updates
@ -11732,95 +11914,35 @@ no option transparent (deprecated)
"transparent" option of the "bind" keyword.
option use-small-buffers [ queue | l7-retries | check ]*
Enable support for small buffers for the given categories.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
This option can be used to enable the small buffers support at different
places to save memory. By default, with no parameter, small buffers are used
as far as possible at all possible places. Otherwise, it is possible to limit
it to the following places:
- queue: When set, small buffers will be used to store the requests, if
small enough, when the connection is queued.
- l7-retries: When set, small buffers will be used to save the requests
when L7 retries are enabled.
- check: When set, small buffers will be used for the health-checks
requests.
When enabled, small buffers are used, but only if it is possible. Otherwise,
when data are too large, a regular buffer is automatically used. The size of
small buffers is configurable via the "tune.bufsize.small" global setting.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
See also: tune.bufsize.small
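For instance, restricting small buffers to queued requests and L7 retries in a backend could be written as follows (hedged sketch; the backend name is hypothetical):

```
backend app
option use-small-buffers queue l7-retries
```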
persist rdp-cookie
persist rdp-cookie(<name>)
@ -13579,13 +13701,6 @@ tcp-check expect [min-recv <int>] [comment <msg>]
does not match, the check will wait for more data. If set to 0,
the evaluation result is always conclusive.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "string", "rstring", "binary" or
"rbinary".
The keyword may be preceded by an exclamation mark ("!") to negate
the match. Spaces are allowed between the exclamation mark and the
keyword. See below for more details on the supported keywords.
ok-status <st> is optional and can be used to set the check status if
the expect rule is successfully evaluated and if it is
the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
@ -13633,6 +13748,13 @@ tcp-check expect [min-recv <int>] [comment <msg>]
standard HAProxy expression formed by a sample-fetch
followed by some converters.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "string", "rstring", "binary" or
"rbinary".
The keyword may be preceded by an exclamation mark ("!") to negate
the match. Spaces are allowed between the exclamation mark and the
keyword. See below for more details on the supported keywords.
<pattern> is the pattern to look for. It may be a string or a regular
expression. If the pattern contains spaces, they must be escaped
with the usual backslash ('\').
@ -15329,15 +15451,14 @@ disable-l7-retry
reason than a connection failure. This can be useful for example to make
sure POST requests aren't retried on failure.
do-log
do-log [profile <log_profile>]
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
X | X | X | X | X | X | X | X
This action manually triggers a log emission on the proxy. This means
log options on the proxy will be considered (including formatting options
such as "log-format"), but it will not interfere with the logs automatically
generated by the proxy during transaction handling.
Using "log-profile", it is possible to precisely describe how the log should
be emitted for each of the available contexts where the action may be used.
@ -15347,15 +15468,28 @@ do-log
Also, they will be properly reported when using "%OG" logformat alias.
Optional "profile" argument may be used to specify the name of a log-profile
section that should be used for this do-log action specifically instead of
the one associated to the current logger that applies by default.
Example:
log-profile my-dft-prof
on tcp-req-conn format "Connect: %ci"
log-profile my-local-prof
on tcp-req-conn format "Local Connect: %ci"
frontend myfront
log stdout format rfc5424 profile my-dft-prof local0
log-format "log generated using proxy logformat, from '%OG'"
acl local src 127.0.0.1
# on connection use either log-profile from the logger (my-dft-prof) or
# explicit my-local-prof if source ip is localhost
tcp-request connection do-log if !local
tcp-request connection do-log profile my-local-prof if local
# on content use proxy logformat, since no override was specified
# in my-dft-prof
tcp-request content do-log
do-resolve(<var>,<resolvers>[,ipv4|ipv6]) <expr>
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
@ -21499,7 +21633,8 @@ jwt_decrypt_cert(<cert>)
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP, RSA-OAEP-256,
ECDH-ES, ECDH-ES+A128KW, ECDH-ES+A192KW or ECDH-ES+A256KW.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
@ -21515,8 +21650,9 @@ jwt_decrypt_jwk(<jwk>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and return its content
decrypted thanks to the provided JSON Web Key (RFC7517).
The <jwk> parameter must be a valid JWK of type 'oct', 'EC' or 'RSA' ('kty'
field of the JSON key) that can be provided either as a string or via a
variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
@ -21524,11 +21660,16 @@ jwt_decrypt_jwk(<jwk>)
This converter can be used to decode token that have a symmetric-type
algorithm ("alg" field of the JOSE header) among the following: A128KW,
A192KW, A256KW, A128GCMKW, A192GCMKW, A256GCMKW, dir. In this case, we expect
the provided JWK to be of the 'oct' type.
This converter also manages tokens that have an algorithm ("alg" field of the
JOSE header) in the RSA family (RSA1_5, RSA-OAEP or RSA-OAEP-256) when
provided an 'RSA' JWK, or in the ECDH family (ECDH-ES, ECDH-ES+A128KW,
ECDH-ES+A192KW or ECDH-ES+A256KW) when provided an 'EC' JWK.
Please note that the A128KW and A192KW algorithms are not available on AWS-LC
so the A128KW, A192KW, ECDH-ES+A128KW and ECDH-ES+A192KW algorithms won't
work.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
@ -21542,7 +21683,7 @@ jwt_decrypt_jwk(<jwk>)
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_secret(\'{\"kty\":\"oct\",\"k\":\"wAsgsg\"}\')
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_jwk(\'{\"kty\":\"oct\",\"k\":\"wAsgsg\"}\')
# or via a variable
http-request set-var(txn.bearer) http_auth_bearer
@ -28053,7 +28194,7 @@ Detailed fields description :
limits have been reached. For instance, if actconn is close to 512 when
multiple connection errors occur, chances are high that the system limits
the process to use a maximum of 1024 file descriptors and that all of them
are used. See section 3 "Global parameters" to find how to tune the system.
are used. See section 3 "Global section" to find how to tune the system.
- "feconn" is the total number of concurrent connections on the frontend when
the session was logged. It is useful to estimate the amount of resource
@ -28293,7 +28434,7 @@ Detailed fields description :
limits have been reached. For instance, if actconn is close to 512 or 1024
when multiple connection errors occur, chances are high that the system
limits the process to use a maximum of 1024 file descriptors and that all
of them are used. See section 3 "Global parameters" to find how to tune the
of them are used. See section 3 "Global section" to find how to tune the
system.
- "feconn" is the total number of concurrent connections on the frontend when
@ -29904,7 +30045,21 @@ a server by adding some latencies in the processing.
9.2. HTTP compression
---------------------
filter compression
filter comp-req
Enables a filter that explicitly tries to compress HTTP requests according to
the "compression" settings. Implicitly sets "compression direction request".
filter comp-res
Enables a filter that explicitly tries to compress HTTP responses according to
the "compression" settings. Implicitly sets "compression direction response".
filter compression (deprecated)
Alias kept for backward compatibility, functionally equivalent to enabling both
the "comp-req" and "comp-res" filters. The "compression" keyword must be used
to configure the appropriate behavior:
HTTP compression was moved into a filter in HAProxy 1.7. The "compression"
keyword must still be used to enable and configure the HTTP compression. And

View File

@ -539,10 +539,22 @@ message. These functions are used by HTX analyzers or by multiplexers.
with the first block not removed, or NULL if everything was removed, and
the amount of data drained.
- htx_xfer_blks() transfers HTX blocks from an HTX message to another,
stopping after the first block of a specified type is transferred or when
a specific amount of bytes, including meta-data, was moved. If the tail
block is a DATA block, it may be partially moved. All other blocks are
- htx_xfer() transfers HTX blocks from an HTX message to another, stopping
when a specific amount of bytes, including meta-data, was copied. If the
tail block is a DATA block, it may be partially copied. All other blocks
are transferred at once. By default, copied blocks are removed from the
original HTX message and headers and trailers parts cannot be partially
copied. But flags can be set to change the default behavior:
- HTX_XFER_KEEP_SRC_BLKS: source blocks are not removed
- HTX_XFER_PARTIAL_HDRS_COPY: partial headers and trailers
part can be xferred
- HTX_XFER_HDRS_ONLY: Only the headers part is xferred
- htx_xfer_blks() [DEPRECATED] transfers HTX blocks from an HTX message to
another, stopping after the first block of a specified type is transferred
or when a specific amount of bytes, including meta-data, was moved. If the
tail block is a DATA block, it may be partially moved. All other blocks are
transferred at once or kept. This function returns a mixed value, with the
last block moved, or NULL if nothing was moved, and the amount of data
transferred. When HEADERS or TRAILERS blocks must be transferred, this

View File

@ -0,0 +1,50 @@
2026-03-12 - thread execution context
Thread execution context (thread_exec_ctx) is a combination of type and pointer
that is set in the current running thread at th_ctx->exec_ctx when entering
certain processing (tasks, sample fetch functions, actions, CLI keywords, etc.).
They're refined along execution, so that a task such as process_stream could
temporarily switch to a converter while evaluating an expression and switch
back to process_stream. They are reported in thread dumps and are mixed with
caller locations for memory profiling. As such they are intentionally not too
precise in order to avoid an explosion of the number of buckets. At the moment,
the level of granularity it provides is sufficient to try to narrow a
misbehaving origin down to a list of keywords. The context types can currently
be:
- something registered via an initcall, with the initcall's location
- something registered via an ha_caller, with the caller's location
- an explicit sample fetch / converter / action / CLI keyword list
- an explicit function (mainly used for actions without keywords)
- a task / tasklet (no distinction is made), using the ->process pointer
- a filter (e.g. compression), via flt_conf, reporting the name
- a mux (via the mux_ops, reporting the name)
- an applet (e.g. cache, stats, CLI)
A macro EXEC_CTX_MAKE(type, pointer) makes a thread_exec_ctx from such
values.
A macro EXEC_CTX_NO_RET(ctx, statement) calls a void statement under the
specified context.
A macro EXEC_CTX_WITH_RET(ctx, expr) calls an expression under the specified
context.
Most locations were modified to directly use these macros on the fly, by
retrieving the context from where it was set on the element being evaluated
(e.g. an action rule contains the context inherited by the action keyword
that was used to create it).
In tools.c, chunk_append_thread_ctx() tries to decode the given exec_ctx and
appends it into the provided buffer. It's used by ha_thread_dump_one() and
cli_io_handler_show_activity() for memory profiling. In this latter case,
the detected thread_ctx are reported in the output under brackets prefixed
with "[via ...]" to distinguish call paths to the same allocators.
A good way to test if a context is properly reported is to place a bleeding
malloc() call into one of the monitored functions, e.g.:
DISGUISE(malloc(8));
and issue "show profiling memory" after stressing the function. Its context
must appear on the right with the number of calls.

View File

@ -1740,10 +1740,7 @@ add backend <name> from <defproxy> [mode <mode>] [guid <guid>] [ EXPERIMENTAL ]
All named default proxies can be used, given that they validate the same
inheritance rules applied during configuration parsing. There are some
exceptions though, for example when the mode is neither TCP nor HTTP. Another
exception is that it is not yet possible to use default proxies which
reference custom HTTP errors, for example via the errorfiles or http-rules
keywords.
exceptions though, for example when the mode is neither TCP nor HTTP.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it
@ -3359,7 +3356,7 @@ show pools [byname|bysize|byusage] [detailed] [match <pfx>] [<nb>]
- Pool quic_conn_c (152 bytes) : 1337 allocated (203224 bytes), ...
Total: 15 pools, 109578176 bytes allocated, 109578176 used ...
show profiling [{all | status | tasks | memory}] [byaddr|bytime|aggr|<max_lines>]*
show profiling [{all | status | tasks | memory}] [byaddr|bytime|byctx|aggr|<max_lines>]*
Dumps the current profiling settings, one per line, as well as the command
needed to change them. When tasks profiling is enabled, some per-function
statistics collected by the scheduler will also be emitted, with a summary
@ -3368,14 +3365,15 @@ show profiling [{all | status | tasks | memory}] [byaddr|bytime|aggr|<max_lines>
allocations/releases and their sizes will be reported. It is possible to
limit the dump to only the profiling status, the tasks, or the memory
profiling by specifying the respective keywords; by default all profiling
information is dumped. It is also possible to limit the number of lines
information is dumped. It is also possible to limit the number of lines of
of output of each category by specifying a numeric limit. It is possible to
request that the output is sorted by address or by total execution time
instead of usage, e.g. to ease comparisons between subsequent calls or to
check what needs to be optimized, and to aggregate task activity by called
function instead of seeing the details. Please note that profiling is
essentially aimed at developers since it gives hints about where CPU cycles
or memory are wasted in the code. There is nothing useful to monitor there.
request that the output is sorted by address, by total execution time, or by
calling context instead of usage, e.g. to ease comparisons between subsequent
calls or to check what needs to be optimized, and to aggregate task activity
by called function instead of seeing the details. Please note that profiling
is essentially aimed at developers since it gives hints about where CPU
cycles or memory are wasted in the code. There is nothing useful to monitor
there.
show resolvers [<resolvers section id>]
Dump statistics for the given resolvers section, or all resolvers sections

View File

@ -198,6 +198,11 @@ struct act_rule {
struct server *srv; /* target server to attach the connection */
struct sample_expr *name; /* used to differentiate idle connections */
} attach_srv; /* 'attach-srv' rule */
struct {
enum log_orig_id orig;
char *profile_name;
struct log_profile *profile;
} do_log; /* 'do-log' action */
struct {
int value;
struct sample_expr *expr;
@ -206,6 +211,7 @@ struct act_rule {
void *p[4];
} act; /* generic pointers to be used by custom actions */
} arg; /* arguments used by some actions */
struct thread_exec_ctx exec_ctx; /* execution context */
struct {
char *file; /* file name where the rule appears (or NULL) */
int line; /* line number where the rule appears */
@ -217,7 +223,9 @@ struct action_kw {
enum act_parse_ret (*parse)(const char **args, int *cur_arg, struct proxy *px,
struct act_rule *rule, char **err);
int flags;
/* 4 bytes here */
void *private;
struct thread_exec_ctx exec_ctx; /* execution context */
};
struct action_kw_list {

View File

@ -35,6 +35,7 @@ int act_resolution_cb(struct resolv_requester *requester, struct dns_counters *c
int act_resolution_error_cb(struct resolv_requester *requester, int error_code);
const char *action_suggest(const char *word, const struct list *keywords, const char **extra);
void free_act_rule(struct act_rule *rule);
void act_add_list(struct list *head, struct action_kw_list *kw_list);
static inline struct action_kw *action_lookup(struct list *keywords, const char *kw)
{

View File

@ -24,6 +24,7 @@
#include <haproxy/api-t.h>
#include <haproxy/freq_ctr-t.h>
#include <haproxy/tinfo-t.h>
/* bit fields for the "profiling" global variable */
#define HA_PROF_TASKS_OFF 0x00000000 /* per-task CPU profiling forced disabled */
@ -84,6 +85,7 @@ struct memprof_stats {
unsigned long long alloc_tot;
unsigned long long free_tot;
void *info; // for pools, ptr to the pool
struct thread_exec_ctx exec_ctx;
};
#endif

View File

@ -130,6 +130,7 @@ struct appctx {
int (*io_handler)(struct appctx *appctx); /* used within the cli_io_handler when st0 = CLI_ST_CALLBACK */
void (*io_release)(struct appctx *appctx); /* used within the cli_io_handler when st0 = CLI_ST_CALLBACK,
if the command is terminated or the session released */
struct cli_kw *kw; /* the keyword being processed */
} cli_ctx; /* context dedicated to the CLI applet */
struct buffer_wait buffer_wait; /* position in the list of objects waiting for a buffer */

View File

@ -62,6 +62,13 @@ ssize_t applet_append_line(void *ctx, struct ist v1, struct ist v2, size_t ofs,
static forceinline void applet_fl_set(struct appctx *appctx, uint on);
static forceinline void applet_fl_clr(struct appctx *appctx, uint off);
/* macros to switch the calling context to the applet during a call. There's
* one with a return value for most calls, and one without for the few like
* fct(), shut(), or release() with no return.
*/
#define CALL_APPLET_WITH_RET(applet, func) EXEC_CTX_WITH_RET(EXEC_CTX_MAKE(TH_EX_CTX_APPLET, (applet)), (applet)->func)
#define CALL_APPLET_NO_RET(applet, func) EXEC_CTX_NO_RET(EXEC_CTX_MAKE(TH_EX_CTX_APPLET, (applet)), (applet)->func)
static forceinline uint appctx_app_test(const struct appctx *appctx, uint test)
{
@ -126,7 +133,7 @@ static inline int appctx_init(struct appctx *appctx)
task_set_thread(appctx->t, tid);
if (appctx->applet->init)
return appctx->applet->init(appctx);
return CALL_APPLET_WITH_RET(appctx->applet, init(appctx));
return 0;
}

View File

@ -59,6 +59,7 @@ enum chk_result {
#define CHK_ST_FASTINTER 0x0400 /* force fastinter check */
#define CHK_ST_READY 0x0800 /* check ready to migrate or run, see below */
#define CHK_ST_SLEEPING 0x1000 /* check was sleeping, i.e. not currently bound to a thread, see below */
#define CHK_ST_USE_SMALL_BUFF 0x2000 /* Use small buffers if possible for the request */
/* 4 possible states for CHK_ST_SLEEPING and CHK_ST_READY:
* SLP RDY State Description
@ -188,6 +189,7 @@ struct check {
char **envp; /* the environment to use if running a process-based check */
struct pid_list *curpid; /* entry in pid_list used for current process-based test, or -1 if not in test */
struct sockaddr_storage addr; /* the address to check */
struct protocol *proto; /* protocol used for check, may be different from the server's one */
char *pool_conn_name; /* conn name used on reuse */
char *sni; /* Server name */
char *alpn_str; /* ALPN to use for checks */

View File

@ -78,12 +78,11 @@ struct task *process_chk(struct task *t, void *context, unsigned int state);
struct task *srv_chk_io_cb(struct task *t, void *ctx, unsigned int state);
int check_buf_available(void *target);
struct buffer *check_get_buf(struct check *check, struct buffer *bptr);
struct buffer *check_get_buf(struct check *check, struct buffer *bptr, unsigned int small_buffer);
void check_release_buf(struct check *check, struct buffer *bptr);
const char *init_check(struct check *check, int type);
void free_check(struct check *check);
void check_purge(struct check *check);
int wake_srv_chk(struct stconn *sc);
int init_srv_check(struct server *srv);
int init_srv_agent_check(struct server *srv);

View File

@ -33,6 +33,7 @@
extern struct pool_head *pool_head_trash;
extern struct pool_head *pool_head_large_trash;
extern struct pool_head *pool_head_small_trash;
/* function prototypes */
@ -48,6 +49,7 @@ int chunk_strcmp(const struct buffer *chk, const char *str);
int chunk_strcasecmp(const struct buffer *chk, const char *str);
struct buffer *get_trash_chunk(void);
struct buffer *get_large_trash_chunk(void);
struct buffer *get_small_trash_chunk(void);
struct buffer *get_trash_chunk_sz(size_t size);
struct buffer *get_larger_trash_chunk(struct buffer *chunk);
int init_trash_buffers(int first);
@ -133,6 +135,29 @@ static forceinline struct buffer *alloc_large_trash_chunk(void)
return chunk;
}
/*
* Allocate a small trash chunk from the reentrant pool. The buffer starts at
* the end of the chunk. This chunk must be freed using free_trash_chunk(). This
* call may fail and the caller is responsible for checking that the returned
* pointer is not NULL.
*/
static forceinline struct buffer *alloc_small_trash_chunk(void)
{
struct buffer *chunk;
if (!pool_head_small_trash)
return NULL;
chunk = pool_alloc(pool_head_small_trash);
if (chunk) {
char *buf = (char *)chunk + sizeof(struct buffer);
*buf = 0;
chunk_init(chunk, buf,
pool_head_small_trash->size - sizeof(struct buffer));
}
return chunk;
}
/*
Allocate a trash chunk according to the requested size. This chunk must be
* freed using free_trash_chunk(). This call may fail and the caller is
@ -140,7 +165,9 @@ static forceinline struct buffer *alloc_large_trash_chunk(void)
*/
static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
{
if (likely(size <= pool_head_trash->size))
if (pool_head_small_trash && size <= pool_head_small_trash->size)
return alloc_small_trash_chunk();
else if (size <= pool_head_trash->size)
return alloc_trash_chunk();
else if (pool_head_large_trash && size <= pool_head_large_trash->size)
return alloc_large_trash_chunk();
@ -153,10 +180,12 @@ static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
*/
static forceinline void free_trash_chunk(struct buffer *chunk)
{
if (likely(chunk && chunk->size == pool_head_trash->size - sizeof(struct buffer)))
pool_free(pool_head_trash, chunk);
else
if (pool_head_small_trash && chunk && chunk->size == pool_head_small_trash->size - sizeof(struct buffer))
pool_free(pool_head_small_trash, chunk);
else if (pool_head_large_trash && chunk && chunk->size == pool_head_large_trash->size - sizeof(struct buffer))
pool_free(pool_head_large_trash, chunk);
else
pool_free(pool_head_trash, chunk);
}
/* copies chunk <src> into <chk>. Returns 0 in case of failure. */

View File

@ -23,6 +23,7 @@
#define _HAPROXY_CLI_T_H
#include <haproxy/applet-t.h>
#include <haproxy/tinfo-t.h>
/* Access level for a stats socket (appctx->cli_ctx.level) */
#define ACCESS_LVL_NONE 0x0000
@ -120,6 +121,8 @@ struct cli_kw {
void (*io_release)(struct appctx *appctx);
void *private;
int level; /* this is the level needed to show the keyword usage and to use it */
/* 4-byte hole here */
struct thread_exec_ctx exec_ctx; /* execution context */
};
struct cli_kw_list {

View File

@ -34,6 +34,7 @@
#include <haproxy/listener-t.h>
#include <haproxy/obj_type.h>
#include <haproxy/pool-t.h>
#include <haproxy/protocol.h>
#include <haproxy/server.h>
#include <haproxy/session-t.h>
#include <haproxy/task-t.h>
@ -49,6 +50,13 @@ extern struct mux_stopping_data mux_stopping_data[MAX_THREADS];
#define IS_HTX_CONN(conn) ((conn)->mux && ((conn)->mux->flags & MX_FL_HTX))
/* macros to switch the calling context to the mux during a call. There's one
* with a return value for most calls, and one without for the few like shut(),
* detach() or destroy() with no return.
*/
#define CALL_MUX_WITH_RET(mux, func) EXEC_CTX_WITH_RET(EXEC_CTX_MAKE(TH_EX_CTX_MUX, (mux)), (mux)->func)
#define CALL_MUX_NO_RET(mux, func) EXEC_CTX_NO_RET(EXEC_CTX_MAKE(TH_EX_CTX_MUX, (mux)), (mux)->func)
/* receive a PROXY protocol header over a connection */
int conn_recv_proxy(struct connection *conn, int flag);
int conn_send_proxy(struct connection *conn, unsigned int flag);
@ -480,7 +488,7 @@ static inline int conn_install_mux(struct connection *conn, const struct mux_ops
conn->mux = mux;
conn->ctx = ctx;
ret = mux->init ? mux->init(conn, prx, sess, &BUF_NULL) : 0;
ret = mux->init ? CALL_MUX_WITH_RET(mux, init(conn, prx, sess, &BUF_NULL)) : 0;
if (ret < 0) {
conn->mux = NULL;
conn->ctx = NULL;
@ -602,13 +610,13 @@ void list_mux_proto(FILE *out);
*/
static inline const struct mux_proto_list *conn_get_best_mux_entry(
const struct ist mux_proto,
int proto_side, int proto_mode)
int proto_side, int proto_is_quic, int proto_mode)
{
struct mux_proto_list *item;
struct mux_proto_list *fallback = NULL;
list_for_each_entry(item, &mux_proto_list.list, list) {
if (!(item->side & proto_side) || !(item->mode & proto_mode))
if (!(item->side & proto_side) || !(item->mode & proto_mode) || (proto_is_quic && !(item->mux->flags & MX_FL_FRAMED)))
continue;
if (istlen(mux_proto) && isteq(mux_proto, item->token))
return item;
@ -633,7 +641,7 @@ static inline const struct mux_ops *conn_get_best_mux(struct connection *conn,
{
const struct mux_proto_list *item;
item = conn_get_best_mux_entry(mux_proto, proto_side, proto_mode);
item = conn_get_best_mux_entry(mux_proto, proto_side, proto_is_quic(conn->ctrl), proto_mode);
return item ? item->mux : NULL;
}

View File

@ -536,6 +536,11 @@
#define TIME_STATS_SAMPLES 512
#endif
/* number of samples used to measure the load in the run queue */
#ifndef RQ_LOAD_SAMPLES
#define RQ_LOAD_SAMPLES 512
#endif
/* max ocsp cert id asn1 encoded length */
#ifndef OCSP_MAX_CERTID_ASN1_LENGTH
#define OCSP_MAX_CERTID_ASN1_LENGTH 128
@ -601,7 +606,7 @@
* store stats.
*/
#ifndef MEMPROF_HASH_BITS
# define MEMPROF_HASH_BITS 10
# define MEMPROF_HASH_BITS 12
#endif
#define MEMPROF_HASH_BUCKETS (1U << MEMPROF_HASH_BITS)

View File

@ -37,6 +37,7 @@
extern struct pool_head *pool_head_buffer;
extern struct pool_head *pool_head_large_buffer;
extern struct pool_head *pool_head_small_buffer;
int init_buffer(void);
void buffer_dump(FILE *o, struct buffer *b, int from, int to);
@ -66,6 +67,12 @@ static inline int b_is_large_sz(size_t sz)
return (pool_head_large_buffer && sz == pool_head_large_buffer->size);
}
/* Return 1 if <sz> is the size of a small buffer */
static inline int b_is_small_sz(size_t sz)
{
return (pool_head_small_buffer && sz == pool_head_small_buffer->size);
}
/* Return 1 if <buf> is a default buffer */
static inline int b_is_default(struct buffer *buf)
{
@ -78,6 +85,12 @@ static inline int b_is_large(struct buffer *buf)
return b_is_large_sz(b_size(buf));
}
/* Return 1 if <buf> is a small buffer */
static inline int b_is_small(struct buffer *buf)
{
return b_is_small_sz(b_size(buf));
}
/**************************************************/
/* Functions below are used for buffer allocation */
/**************************************************/
@ -172,6 +185,8 @@ static inline char *__b_get_emergency_buf(void)
* than the default buffers */ \
if (unlikely(b_is_large_sz(sz))) \
pool_free(pool_head_large_buffer, area); \
else if (unlikely(b_is_small_sz(sz))) \
pool_free(pool_head_small_buffer, area); \
else if (th_ctx->emergency_bufs_left < global.tune.reserved_bufs) \
th_ctx->emergency_bufs[th_ctx->emergency_bufs_left++] = area; \
else \
@ -185,6 +200,35 @@ static inline char *__b_get_emergency_buf(void)
__b_free((_buf)); \
} while (0)
static inline struct buffer *b_alloc_small(struct buffer *buf)
{
char *area = NULL;
if (!buf->size) {
area = pool_alloc(pool_head_small_buffer);
if (!area)
return NULL;
buf->area = area;
buf->size = global.tune.bufsize_small;
}
return buf;
}
static inline struct buffer *b_alloc_large(struct buffer *buf)
{
char *area = NULL;
if (!buf->size) {
area = pool_alloc(pool_head_large_buffer);
if (!area)
return NULL;
buf->area = area;
buf->size = global.tune.bufsize_large;
}
return buf;
}
/* Offer one or multiple buffers currently belonging to target <from> to whoever
* needs one. Any pointer is valid for <from>, including NULL. Its purpose is
* to avoid passing a buffer to oneself in case of failed allocations (e.g.

View File

@ -28,7 +28,9 @@
#include <haproxy/stream-t.h>
extern const char *trace_flt_id;
extern const char *http_comp_flt_id;
extern const char *http_comp_req_flt_id;
extern const char *http_comp_res_flt_id;
extern const char *cache_store_flt_id;
extern const char *spoe_filter_id;
extern const char *fcgi_flt_id;

View File

@ -403,6 +403,25 @@ static inline uint swrate_add_scaled_opportunistic(uint *sum, uint n, uint v, ui
return new_sum;
}
/* Like swrate_add() except that if <v> is beyond the current average, the
* average is replaced by the peak. This is essentially used to measure peak
* loads in the scheduler, reason why it is provided as a local variant that
* does not involve atomic operations.
*/
static inline uint swrate_add_peak_local(uint *sum, uint n, uint v)
{
uint old_sum, new_sum;
old_sum = *sum;
if (v * n > old_sum)
new_sum = v * n;
else
new_sum = old_sum - (old_sum + n - 1) / n + v;
*sum = new_sum;
return new_sum;
}
/* Returns the average sample value for the sum <sum> over a sliding window of
* <n> samples. Better if <n> is a power of two. It must be the same <n> as the
* one used above in all additions.

View File

@ -79,7 +79,7 @@
#define GTUNE_DISABLE_H2_WEBSOCKET (1<<21)
#define GTUNE_DISABLE_ACTIVE_CLOSE (1<<22)
#define GTUNE_QUICK_EXIT (1<<23)
/* (1<<24) unused */
#define GTUNE_COLLECT_LIBS (1<<24)
/* (1<<25) unused */
#define GTUNE_USE_FAST_FWD (1<<26)
#define GTUNE_LISTENER_MQ_FAIR (1<<27)

View File

@ -58,6 +58,10 @@ extern int devnullfd;
extern int fileless_mode;
extern struct cfgfile fileless_cfg;
/* storage for collected libs */
extern void *lib_storage;
extern size_t lib_size;
struct proxy;
struct server;
int main(int argc, char **argv);

View File

@ -5,7 +5,6 @@
#include <haproxy/hstream-t.h>
struct task *sc_hstream_io_cb(struct task *t, void *ctx, unsigned int state);
int hstream_wake(struct stconn *sc);
void hstream_shutdown(struct stconn *sc);
void *hstream_new(struct session *sess, struct stconn *sc, struct buffer *input);

View File

@ -93,4 +93,22 @@ struct http_errors {
struct list list; /* http-errors list */
};
/* Indicates the keyword origin of an http-error definition. This is used in
* <conf_errors> type to indicate which part of the internal union should be
* manipulated.
*/
enum http_err_directive {
HTTP_ERR_DIRECTIVE_SECTION = 0, /* "errorfiles" keyword referencing a http-errors section */
HTTP_ERR_DIRECTIVE_INLINE, /* "errorfile" keyword with inline error definition */
};
/* Used with "errorfiles" directives. It indicates for each known HTTP error
* status codes if they are defined in the target http-errors section.
*/
enum http_err_import {
HTTP_ERR_IMPORT_NO = 0,
HTTP_ERR_IMPORT_IMPLICIT, /* import every errcode defined in a section */
HTTP_ERR_IMPORT_EXPLICIT, /* import a specific errcode from a section */
};
#endif /* _HAPROXY_HTTP_HTX_T_H */

View File

@ -78,6 +78,7 @@ struct buffer *http_load_errorfile(const char *file, char **errmsg);
struct buffer *http_load_errormsg(const char *key, const struct ist msg, char **errmsg);
struct buffer *http_parse_errorfile(int status, const char *file, char **errmsg);
struct buffer *http_parse_errorloc(int errloc, int status, const char *url, char **errmsg);
int proxy_check_http_errors(struct proxy *px);
int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx, char **errmsg);
void proxy_release_conf_errors(struct proxy *px);

View File

@ -37,6 +37,7 @@ struct htx_blk *htx_add_blk(struct htx *htx, enum htx_blk_type type, uint32_t bl
struct htx_blk *htx_remove_blk(struct htx *htx, struct htx_blk *blk);
struct htx_ret htx_find_offset(struct htx *htx, uint32_t offset);
void htx_truncate(struct htx *htx, uint32_t offset);
void htx_truncate_blk(struct htx *htx, struct htx_blk *blk);
struct htx_ret htx_drain(struct htx *htx, uint32_t max);
struct htx_blk *htx_replace_blk_value(struct htx *htx, struct htx_blk *blk,
@ -56,6 +57,16 @@ size_t htx_add_data(struct htx *htx, const struct ist data);
struct htx_blk *htx_add_last_data(struct htx *htx, struct ist data);
void htx_move_blk_before(struct htx *htx, struct htx_blk **blk, struct htx_blk **ref);
int htx_append_msg(struct htx *dst, const struct htx *src);
struct buffer *htx_move_to_small_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_move_to_large_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_copy_to_small_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_copy_to_large_buffer(struct buffer *dst, struct buffer *src);
#define HTX_XFER_DEFAULT 0x00000000 /* Default XFER: no partial xfer / remove blocks from source */
#define HTX_XFER_KEEP_SRC_BLKS 0x00000001 /* Don't remove xfer blocks from source messages during xfer */
#define HTX_XFER_PARTIAL_HDRS_COPY 0x00000002 /* Allow partial copy of headers and trailers part */
#define HTX_XFER_HDRS_ONLY 0x00000003 /* Only transfer header blocks (start-line, header and EOH) */
size_t htx_xfer(struct htx *dst, struct htx *src, size_t count, unsigned int flags);
/* Functions and macros to get parts of the start-line or length of these
* parts. Request and response start-lines are both composed of 3 parts.

View File

@ -20,6 +20,11 @@ extern struct list server_deinit_list;
extern struct list per_thread_free_list;
extern struct list per_thread_deinit_list;
/* initcall caller location */
extern const struct initcall *caller_initcall;
extern const char *caller_file;
extern int caller_line;
void hap_register_pre_check(int (*fct)());
void hap_register_post_check(int (*fct)());
void hap_register_post_proxy_check(int (*fct)(struct proxy *));

View File

@ -77,6 +77,8 @@ struct initcall {
void *arg1;
void *arg2;
void *arg3;
const char *loc_file; /* file where the call is declared, or NULL */
int loc_line; /* line where the call is declared, or NULL */
#if defined(USE_OBSOLETE_LINKER)
void *next;
#endif
@ -107,6 +109,8 @@ struct initcall {
.arg1 = (void *)(a1), \
.arg2 = (void *)(a2), \
.arg3 = (void *)(a3), \
.loc_file = __FILE__, \
.loc_line = linenum, \
} : NULL
@ -131,6 +135,8 @@ __attribute__((constructor)) static void __initcb_##linenum() \
.arg1 = (void *)(a1), \
.arg2 = (void *)(a2), \
.arg3 = (void *)(a3), \
.loc_file = __FILE__, \
.loc_line = linenum, \
}; \
if (stg < STG_SIZE) { \
entry.next = __initstg[stg]; \
@ -229,8 +235,15 @@ extern struct initcall *__initstg[STG_SIZE];
const struct initcall **ptr; \
if (stg >= STG_SIZE) \
break; \
FOREACH_INITCALL(ptr, stg) \
FOREACH_INITCALL(ptr, stg) { \
caller_initcall = *ptr; \
caller_file = (*ptr)->loc_file; \
caller_line = (*ptr)->loc_line; \
(*ptr)->fct((*ptr)->arg1, (*ptr)->arg2, (*ptr)->arg3); \
caller_initcall = NULL; \
caller_file = NULL; \
caller_line = 0; \
} \
} while (0)
#else // USE_OBSOLETE_LINKER
@ -243,8 +256,15 @@ extern struct initcall *__initstg[STG_SIZE];
const struct initcall *ptr; \
if (stg >= STG_SIZE) \
break; \
FOREACH_INITCALL(ptr, stg) \
FOREACH_INITCALL(ptr, stg) { \
caller_initcall = ptr; \
caller_file = (ptr)->loc_file; \
caller_line = (ptr)->loc_line; \
(ptr)->fct((ptr)->arg1, (ptr)->arg2, (ptr)->arg3); \
caller_initcall = NULL; \
caller_file = NULL; \
caller_line = 0; \
} \
} while (0)
#endif // USE_OBSOLETE_LINKER

View File

@ -27,7 +27,7 @@
#ifdef USE_OPENSSL
enum jwt_alg jwt_parse_alg(const char *alg_str, unsigned int alg_len);
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int *item_num);
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int item_num);
int jwt_tree_load_cert(char *path, int pathlen, int tryload_cert, const char *file, int line, char **err);
enum jwt_vrfy_status jwt_verify(const struct buffer *token, const struct buffer *alg,

View File

@ -394,6 +394,12 @@ static inline unsigned long ERR_peek_error_func(const char **func)
#define __OPENSSL_110_CONST__
#endif
#if (HA_OPENSSL_VERSION_NUMBER >= 0x40000000L) && (!defined(USE_OPENSSL_WOLFSSL))
#define __X509_NAME_CONST__ const
#else
#define __X509_NAME_CONST__
#endif
/* ERR_remove_state() was deprecated in 1.0.0 in favor of
* ERR_remove_thread_state(), which was in turn deprecated in
* 1.1.0 and does nothing anymore. Let's simply silently kill

View File

@ -124,6 +124,12 @@ static inline int real_family(int ss_family)
return fam ? fam->real_family : AF_UNSPEC;
}
static inline int proto_is_quic(const struct protocol *proto)
{
return (proto->proto_type == PROTO_TYPE_DGRAM &&
proto->xprt_type == PROTO_TYPE_STREAM);
}
#endif /* _HAPROXY_PROTOCOL_H */
/*

View File

@ -156,14 +156,17 @@ enum PR_SRV_STATE_FILE {
#define PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP 0x01000000 /* preserve request header names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_MASK 0x01c00000 /* mask for restrict-http-header-names option */
/* unused : 0x02000000 ... 0x08000000 */
/* server health checks */
#define PR_O2_CHK_NONE 0x00000000 /* no L7 health checks configured (TCP by default) */
#define PR_O2_TCPCHK_CHK 0x90000000 /* use TCPCHK check for server health */
#define PR_O2_EXT_CHK 0xA0000000 /* use external command for server health */
/* unused: 0xB0000000 to 0xF0000000, reserved for health checks */
#define PR_O2_CHK_ANY 0xF0000000 /* Mask to cover any check */
#define PR_O2_CHK_NONE 0x00000000 /* no L7 health checks configured (TCP by default) */
#define PR_O2_TCPCHK_CHK 0x02000000 /* use TCPCHK check for server health */
#define PR_O2_EXT_CHK 0x04000000 /* use external command for server health */
#define PR_O2_CHK_ANY 0x06000000 /* Mask to cover any check */
#define PR_O2_USE_SBUF_QUEUE 0x08000000 /* use small buffer for request when streams are queued */
#define PR_O2_USE_SBUF_L7_RETRY 0x10000000 /* use small buffer for request when L7 retries are enabled */
#define PR_O2_USE_SBUF_CHECK 0x20000000 /* use small buffer for request's healthchecks */
#define PR_O2_USE_SBUF_ALL 0x38000000 /* mask covering all small-buffer usage flags */
/* unused : 0x40000000 ... 0x80000000 */
/* end of proxy->options2 */
/* bits for proxy->options3 */

View File

@ -24,6 +24,7 @@
#define _HAPROXY_SAMPLE_T_H
#include <haproxy/api-t.h>
#include <haproxy/tinfo-t.h>
#include <haproxy/sample_data-t.h>
/* input and output sample types
@ -265,6 +266,7 @@ struct sample_conv {
unsigned int in_type; /* expected input sample type */
unsigned int out_type; /* output sample type */
void *private; /* private values. only used by maps and Lua */
struct thread_exec_ctx exec_ctx; /* execution context */
};
/* sample conversion expression */
@ -288,6 +290,7 @@ struct sample_fetch {
unsigned int use; /* fetch source (SMP_USE_*) */
unsigned int val; /* fetch validity (SMP_VAL_*) */
void *private; /* private values. only used by Lua */
struct thread_exec_ctx exec_ctx; /* execution context */
};
/* sample expression */

View File

@ -36,6 +36,10 @@
void sc_update_rx(struct stconn *sc);
void sc_update_tx(struct stconn *sc);
void sc_abort(struct stconn *sc);
void sc_shutdown(struct stconn *sc);
void sc_chk_rcv(struct stconn *sc);
struct task *sc_conn_io_cb(struct task *t, void *ctx, unsigned int state);
int sc_conn_sync_recv(struct stconn *sc);
int sc_conn_sync_send(struct stconn *sc);
@ -360,38 +364,6 @@ static inline int sc_is_recv_allowed(const struct stconn *sc)
return !(sc->flags & (SC_FL_WONT_READ|SC_FL_NEED_BUFF|SC_FL_NEED_ROOM));
}
/* This is to be used after making some room available in a channel. It will
* return without doing anything if the stream connector's RX path is blocked.
* It will automatically mark the stream connector as busy processing the end
* point in order to avoid useless repeated wakeups.
* It will then call ->chk_rcv() to enable receipt of new data.
*/
static inline void sc_chk_rcv(struct stconn *sc)
{
if (sc_ep_test(sc, SE_FL_APPLET_NEED_CONN) &&
sc_state_in(sc_opposite(sc)->state, SC_SB_RDY|SC_SB_EST|SC_SB_DIS|SC_SB_CLO)) {
sc_ep_clr(sc, SE_FL_APPLET_NEED_CONN);
sc_ep_report_read_activity(sc);
}
if (!sc_is_recv_allowed(sc))
return;
if (!sc_state_in(sc->state, SC_SB_RDY|SC_SB_EST))
return;
sc_ep_set(sc, SE_FL_HAVE_NO_DATA);
if (likely(sc->app_ops->chk_rcv))
sc->app_ops->chk_rcv(sc);
}
/* Calls chk_snd on the endpoint using the data layer */
static inline void sc_chk_snd(struct stconn *sc)
{
if (likely(sc->app_ops->chk_snd))
sc->app_ops->chk_snd(sc);
}
/* Perform a synchronous receive using the right version, depending on whether
 * the endpoint is a connection or an applet.
@ -536,24 +508,10 @@ static inline void sc_schedule_abort(struct stconn *sc)
sc->flags |= SC_FL_ABRT_WANTED;
}
/* Abort the SC and notify the endpoint using the data layer */
static inline void sc_abort(struct stconn *sc)
{
if (likely(sc->app_ops->abort))
sc->app_ops->abort(sc);
}
/* Schedule a shutdown for the SC */
static inline void sc_schedule_shutdown(struct stconn *sc)
{
sc->flags |= SC_FL_SHUT_WANTED;
}
/* Shutdown the SC and notify the endpoint using the data layer */
static inline void sc_shutdown(struct stconn *sc)
{
if (likely(sc->app_ops->shutdown))
sc->app_ops->shutdown(sc);
}
#endif /* _HAPROXY_SC_STRM_H */


@ -34,10 +34,10 @@ int cert_get_pkey_algo(X509 *crt, struct buffer *out);
int ssl_sock_get_serial(X509 *crt, struct buffer *out);
int ssl_sock_crt2der(X509 *crt, struct buffer *out);
int ssl_sock_get_time(ASN1_TIME *tm, struct buffer *out);
int ssl_sock_get_dn_entry(X509_NAME *a, const struct buffer *entry, int pos,
int ssl_sock_get_dn_entry(__X509_NAME_CONST__ X509_NAME *a, const struct buffer *entry, int pos,
struct buffer *out);
int ssl_sock_get_dn_formatted(X509_NAME *a, const struct buffer *format, struct buffer *out);
int ssl_sock_get_dn_oneline(X509_NAME *a, struct buffer *out);
int ssl_sock_get_dn_formatted(__X509_NAME_CONST__ X509_NAME *a, const struct buffer *format, struct buffer *out);
int ssl_sock_get_dn_oneline(__X509_NAME_CONST__ X509_NAME *a, struct buffer *out);
X509* ssl_sock_get_peer_certificate(SSL *ssl);
X509* ssl_sock_get_verified_chain_root(SSL *ssl);
unsigned int openssl_version_parser(const char *version);


@ -36,7 +36,7 @@ struct shm_stats_file_hdr {
/* 2 bytes hole */
uint global_now_ms; /* global monotonic date (ms) common to all processes using the shm */
ullong global_now_ns; /* global monotonic date (ns) common to all processes using the shm */
llong now_offset; /* offset applied to global monotonic date on startup */
ALWAYS_PAD(8); // 8 bytes hole
/* each process uses one slot and is identified using its pid, max 64 in order
* to be able to use bitmask to refer to a process and then look its pid in the
* "slots.pid" map


@ -349,19 +349,6 @@ struct sedesc {
unsigned long long kop; /* Known outgoing payload length (see above) */
};
/* sc_app_ops describes the application layer's operations and notification
* callbacks used when I/O activity is reported and to perform shutr/shutw.
* There are very few combinations in practice (strm/chk <-> none/mux/applet).
*/
struct sc_app_ops {
void (*chk_rcv)(struct stconn *); /* chk_rcv function, may not be null */
void (*chk_snd)(struct stconn *); /* chk_snd function, may not be null */
void (*abort)(struct stconn *); /* abort function, may not be null */
void (*shutdown)(struct stconn *); /* shutdown function, may not be null */
int (*wake)(struct stconn *); /* data-layer callback to report activity */
char name[8]; /* data layer name, zero-terminated */
};
/*
* This structure describes the elements of a connection relevant to a stream
*/
@ -383,7 +370,6 @@ struct stconn {
struct wait_event wait_event; /* We're in a wait list */
struct sedesc *sedesc; /* points to the stream endpoint descriptor */
enum obj_type *app; /* points to the applicative point (stream or check) */
const struct sc_app_ops *app_ops; /* general operations used at the app layer */
struct sockaddr_storage *src; /* source address (pool), when known, otherwise NULL */
struct sockaddr_storage *dst; /* destination address (pool), when known, otherwise NULL */
};


@ -57,6 +57,8 @@ void sc_destroy(struct stconn *sc);
int sc_reset_endp(struct stconn *sc);
struct appctx *sc_applet_create(struct stconn *sc, struct applet *app);
int sc_applet_process(struct stconn *sc);
int sc_conn_process(struct stconn *sc);
void sc_conn_prepare_endp_upgrade(struct stconn *sc);
void sc_conn_abort_endp_upgrade(struct stconn *sc);
@ -349,16 +351,6 @@ static inline struct hstream *sc_hstream(const struct stconn *sc)
return NULL;
}
/* Returns the name of the stconn's application layer,
* or "NONE" when none is attached.
*/
static inline const char *sc_get_data_name(const struct stconn *sc)
{
if (!sc->app_ops)
return "NONE";
return sc->app_ops->name;
}
/* Returns non-zero if the stream connector's Rx path is blocked because of
* lack of room in the input buffer. This usually happens after applets failed
* to deliver data into the channel's buffer and reported it via sc_need_room().
@ -460,7 +452,7 @@ static inline size_t se_nego_ff(struct sedesc *se, struct buffer *input, size_t
goto end;
}
ret = mux->nego_fastfwd(se->sc, input, count, flags);
ret = CALL_MUX_WITH_RET(mux, nego_fastfwd(se->sc, input, count, flags));
if (se->iobuf.flags & IOBUF_FL_FF_BLOCKED) {
sc_ep_report_blocked_send(se->sc, 0);
@ -493,7 +485,7 @@ static inline size_t se_done_ff(struct sedesc *se)
size_t to_send = se_ff_data(se);
BUG_ON(!mux->done_fastfwd);
ret = mux->done_fastfwd(se->sc);
ret = CALL_MUX_WITH_RET(mux, done_fastfwd(se->sc));
if (ret) {
/* Something was forwarded, unblock the zero-copy forwarding.
* If all data was sent, report and send activity.
@ -525,7 +517,7 @@ static inline size_t se_done_ff(struct sedesc *se)
}
}
}
se->sc->bytes_out += ret;
return ret;
}


@ -130,20 +130,22 @@ struct notification {
* on return.
*/
#define TASK_COMMON \
struct { \
unsigned int state; /* task state : bitfield of TASK_ */ \
int tid; /* tid of task/tasklet. <0 = local for tasklet, unbound for task */ \
struct task *(*process)(struct task *t, void *ctx, unsigned int state); /* the function which processes the task */ \
void *context; /* the task's context */ \
const struct ha_caller *caller; /* call place of last wakeup(); 0 on init, -1 on free */ \
uint32_t wake_date; /* date of the last task wakeup */ \
unsigned int calls; /* number of times process was called */ \
TASK_DEBUG_STORAGE; \
}
unsigned int state; /* task state : bitfield of TASK_ */ \
int tid; /* tid of task/tasklet. <0 = local for tasklet, unbound for task */ \
struct task *(*process)(struct task *t, void *ctx, unsigned int state); /* the function which processes the task */ \
void *context; /* the task's context */ \
const struct ha_caller *caller; /* call place of last wakeup(); 0 on init, -1 on free */ \
uint32_t wake_date; /* date of the last task wakeup */ \
unsigned int calls; /* number of times process was called */ \
TASK_DEBUG_STORAGE; \
short last_run; /* 16-bit now_ms of last run */
/* a 16- or 48-bit hole remains here and is used by task */
/* The base for all tasks */
struct task {
TASK_COMMON; /* must be at the beginning! */
short nice; /* task prio from -1024 to +1024 */
int expire; /* next expiration date for this task, in ticks */
struct eb32_node rq; /* ebtree node used to hold the task in the run queue */
/* WARNING: the struct task is often aliased as a struct tasklet when
* it is NOT in the run queue. The tasklet has its struct list here
@ -151,14 +153,12 @@ struct task {
* ever reorder these fields without taking this into account!
*/
struct eb32_node wq; /* ebtree node used to hold the task in the wait queue */
int expire; /* next expiration date for this task, in ticks */
short nice; /* task prio from -1024 to +1024 */
/* 16-bit hole here */
};
/* lightweight tasks, without priority, mainly used for I/Os */
struct tasklet {
TASK_COMMON; /* must be at the beginning! */
/* 48-bit hole here */
struct list list;
/* WARNING: the struct task is often aliased as a struct tasklet when
* it is not in the run queue. The task has its struct rq here where


@ -121,6 +121,7 @@ enum tcpcheck_rule_type {
/* Unused 0x000000A0..0x00000FF0 (reserved for future proto) */
#define TCPCHK_RULES_TCP_CHK 0x00000FF0
#define TCPCHK_RULES_PROTO_CHK 0x00000FF0 /* Mask to cover protocol check */
#define TCPCHK_RULES_MAY_USE_SBUF 0x00001000 /* checks may try to use small buffers if possible for the request */
struct check;
struct tcpcheck_connect {


@ -75,6 +75,41 @@ enum {
/* we have 4 buffer-wait queues, in highest to lowest emergency order */
#define DYNBUF_NBQ 4
/* execution context, for tracing resource usage or warning origins */
enum thread_exec_ctx_type {
TH_EX_CTX_NONE = 0, /* context not filled */
TH_EX_CTX_OTHER, /* context only known by a generic pointer */
TH_EX_CTX_INITCALL, /* the pointer is an initcall providing file:line */
TH_EX_CTX_CALLER, /* the pointer is an ha_caller of the caller providing file:line etc */
TH_EX_CTX_SMPF, /* directly registered sample fetch function, using .smpf_kwl */
TH_EX_CTX_CONV, /* directly registered converter function, using .conv_kwl */
TH_EX_CTX_FUNC, /* hopefully recognizable function/callback, using .pointer */
TH_EX_CTX_ACTION, /* directly registered action function, using .action_kwl */
TH_EX_CTX_FLT, /* filter whose config is in .flt_conf */
TH_EX_CTX_MUX, /* mux whose mux_ops is in .mux_ops */
TH_EX_CTX_TASK, /* task or tasklet whose function is in .task */
TH_EX_CTX_APPLET, /* applet whose applet is in .applet */
TH_EX_CTX_CLI_KWL, /* CLI keyword list, using .cli_kwl */
};
struct thread_exec_ctx {
enum thread_exec_ctx_type type;
/* 32-bit hole here on 64-bit platforms */
union {
const void *pointer; /* generic pointer (for other) */
const struct initcall *initcall; /* used with TH_EX_CTX_INITCALL */
const struct ha_caller *ha_caller; /* used with TH_EX_CTX_CALLER */
const struct sample_fetch_kw_list *smpf_kwl; /* used with TH_EX_CTX_SMPF */
const struct sample_conv_kw_list *conv_kwl; /* used with TH_EX_CTX_CONV */
const struct action_kw_list *action_kwl; /* used with TH_EX_CTX_ACTION */
const struct flt_conf *flt_conf; /* used with TH_EX_CTX_FLT */
const struct mux_ops *mux_ops; /* used with TH_EX_CTX_MUX */
struct task *(*task)(struct task *, void *, unsigned int); /* used with TH_EX_CTX_TASK */
const struct applet *applet; /* used with TH_EX_CTX_APPLET */
const struct cli_kw_list *cli_kwl; /* used with TH_EX_CTX_CLI_KWL */
};
};
/* Thread group information. This defines a base and a count of global thread
* IDs which belong to it, and which can be looked up into thread_info/ctx. It
* is set up during parsing and is stable during operation. Thread groups start
@ -172,8 +207,7 @@ struct thread_ctx {
uint64_t curr_mono_time; /* latest system wide monotonic time (leaving poll) */
ulong lock_history; /* history of used locks, see thread.h for more details */
/* around 56 unused bytes here */
struct thread_exec_ctx exec_ctx; /* current execution context when known, or EXEC_CTX_NONE */
// fourth cache line here on 64 bits: accessed mostly using atomic ops
ALWAYS_ALIGN(64);
@ -199,6 +233,7 @@ struct thread_ctx {
struct buffer *last_dump_buffer; /* Copy of last buffer used for a dump; may be NULL or invalid; for post-mortem only */
unsigned long long total_streams; /* Total number of streams created on this thread */
unsigned int stream_cnt; /* Number of streams attached to this thread */
unsigned int rq_tot_peak; /* total run queue size last call */
// around 68 bytes here for shared variables


@ -117,4 +117,42 @@ static inline void thread_set_pin_grp1(struct thread_set *ts, ulong mask)
ts->rel[i] = 0;
}
/* switches the current execution context to <ctx> and returns the previous one
* so that this may even be used to save and restore. Setting EXEC_CTX_NONE
* resets it. It's efficient because it uses a pair of registers on input and
* output.
*/
static inline struct thread_exec_ctx switch_exec_ctx(const struct thread_exec_ctx ctx)
{
const struct thread_exec_ctx prev = th_ctx->exec_ctx;
th_ctx->exec_ctx = ctx;
return prev;
}
/* used to reset the execution context */
#define EXEC_CTX_NONE ((struct thread_exec_ctx){ .type = 0, .pointer = NULL })
/* make an execution context from a type and a pointer */
#define EXEC_CTX_MAKE(_type, _pointer) ((struct thread_exec_ctx){ .type = (_type), .pointer = (_pointer) })
/* execute expression <expr> under context <new_ctx> then restore the previous
* one, and return the expression's return value.
*/
#define EXEC_CTX_WITH_RET(new_ctx, expr) ({ \
const struct thread_exec_ctx __prev_ctx = switch_exec_ctx(new_ctx); \
typeof(expr) __ret = (expr); \
switch_exec_ctx(__prev_ctx); \
__ret; \
})
/* execute expression <expr> under context <new_ctx> then restore the previous
* one. This one has no return value.
*/
#define EXEC_CTX_NO_RET(new_ctx, expr) do { \
const struct thread_exec_ctx __prev_ctx = switch_exec_ctx(new_ctx); \
do { expr; } while (0); \
switch_exec_ctx(__prev_ctx); \
} while (0)
#endif /* _HAPROXY_TINFO_H */


@ -1147,10 +1147,13 @@ void dump_hex(struct buffer *out, const char *pfx, const void *buf, int len, int
int may_access(const void *ptr);
const void *resolve_sym_name(struct buffer *buf, const char *pfx, const void *addr);
const void *resolve_dso_name(struct buffer *buf, const char *pfx, const void *addr);
void make_tar_header(char *output, const char *pfx, const char *fname, const char *link, size_t size, mode_t mode);
int load_file_into_tar(char **storage, size_t *size, const char *pfx, const char *fname, const char *input, const char *link);
const char *get_exec_path(void);
void *get_sym_curr_addr(const char *name);
void *get_sym_next_addr(const char *name);
int dump_libs(struct buffer *output, int with_addr);
void collect_libs(void);
/* Note that this may result in opening libgcc() on first call, so it may need
* to have been called once before chrooting.
@ -1324,6 +1327,62 @@ static inline uint statistical_prng_range(uint range)
return mul32hi(statistical_prng(), range ? range - 1 : 0);
}
/* The functions below are used to hash one or two pointers together and reduce
* the result to fit into a given number of bits. The first part is made of a
* multiplication (and possibly an addition) by one or two prime numbers giving
* a 64-bit number whose center bits are the most distributed, and the second
* part will reuse this value and return a mix of the most variable bits that
* fits in the requested size. The most convenient approach is to directly
* call ptr_hash() / ptr2_hash(), though for some specific use cases where a
* second value could be useful, one may prefer to call the lower level
* operations instead.
*/
/* reduce a 64-bit pointer hash to <bits> bits */
static forceinline uint _ptr_hash_reduce(unsigned long long x, const int bits)
{
if (!bits)
return 0;
if (sizeof(long) == 4)
x ^= x >> 32;
else
x >>= 31 - (bits + 1) / 2;
return x & (~0U >> (-bits & 31));
}
/* single-pointer version, low-level, use ptr_hash() instead */
static forceinline ullong _ptr_hash(const void *p)
{
unsigned long long x = (unsigned long)p;
x *= 0xacd1be85U;
return x;
}
/* two-pointer version, low-level, use ptr2_hash() instead */
static forceinline ullong _ptr2_hash(const void *p1, const void *p2)
{
unsigned long long x = (unsigned long)p1;
unsigned long long y = (unsigned long)p2;
x *= 0xacd1be85U;
y *= 0x9d28e4e9U;
return x ^ y;
}
/* two-pointer plus arg version, low-level, use ptr2_hash_arg() instead */
static forceinline ullong _ptr2_hash_arg(const void *p1, const void *p2, ulong arg)
{
unsigned long long x = (unsigned long)p1;
unsigned long long y = (unsigned long)p2;
x *= 0xacd1be85U;
x += arg;
y *= 0x9d28e4e9U;
return x ^ y;
}
/* returns a hash on <bits> bits of pointer <p> that is suitable for being used
* to compute statistic buckets, in that it's fast and reasonably distributed
* thanks to mixing the bits via a multiplication by a prime number and using
@ -1337,17 +1396,7 @@ static inline uint statistical_prng_range(uint range)
*/
static forceinline uint ptr_hash(const void *p, const int bits)
{
unsigned long long x = (unsigned long)p;
if (!bits)
return 0;
x *= 0xacd1be85U;
if (sizeof(long) == 4)
x ^= x >> 32;
else
x >>= 31 - (bits + 1) / 2;
return x & (~0U >> (-bits & 31));
return _ptr_hash_reduce(_ptr_hash(p), bits);
}
/* Same as above but works on two pointers. It will return the same values
@ -1355,20 +1404,15 @@ static forceinline uint ptr_hash(const void *p, const int bits)
*/
static forceinline uint ptr2_hash(const void *p1, const void *p2, const int bits)
{
unsigned long long x = (unsigned long)p1;
unsigned long long y = (unsigned long)p2;
return _ptr_hash_reduce(_ptr2_hash(p1, p2), bits);
}
if (!bits)
return 0;
x *= 0xacd1be85U;
y *= 0x9d28e4e9U;
x ^= y;
if (sizeof(long) == 4)
x ^= x >> 32;
else
x >>= 33 - bits / 2;
return x & (~0U >> (-bits & 31));
/* Same as above but works on two pointers and a long argument. It will return
* the same values if the second pointer is NULL.
*/
static forceinline uint ptr2_hash_arg(const void *p1, const void *p2, ulong arg, const int bits)
{
return _ptr_hash_reduce(_ptr2_hash_arg(p1, p2, arg), bits);
}
@ -1499,4 +1543,6 @@ void ha_freearray(char ***array);
void ha_memset_s(void *s, int c, size_t n);
void chunk_append_thread_ctx(struct buffer *output, const struct thread_exec_ctx *ctx, const char *pfx, const char *sfx);
#endif /* _HAPROXY_TOOLS_H */


@ -190,7 +190,8 @@ void trace_no_cb(enum trace_level level, uint64_t mask, const struct trace_sourc
void trace_register_source(struct trace_source *source);
int trace_parse_cmd(const char *arg_src, char **errmsg);
int trace_add_cmd(const char *arg_src, char **errmsg);
void trace_parse_cmds(void);
/* return a single char to describe a trace state */
static inline char trace_state_char(enum trace_state st)


@ -0,0 +1,13 @@
-----BEGIN CERTIFICATE-----
MIICBTCCAaugAwIBAgIUN+Ne3W00v5RwrlIBqhub+WHgq3kwCgYIKoZIzj0EAwIw
VzELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGElu
dGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEQMA4GA1UEAwwHZm9vLmJhcjAgFw0yNjAy
MjYxMDM5MjhaGA8yMDUzMDcxNDEwMzkyOFowVzELMAkGA1UEBhMCQVUxEzARBgNV
BAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0IFdpZGdpdHMgUHR5IEx0
ZDEQMA4GA1UEAwwHZm9vLmJhcjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABGwE
ope2KbYUXi5bSLGiQmkaxO17SwkVbTqRrXTztIx99xj9qfSrVKFqN3lnaNDXAclG
GnfmU/j7xsEocZdYmPujUzBRMB0GA1UdDgQWBBQZSL9UUhRofXo5X9BoS0XBug4i
DzAfBgNVHSMEGDAWgBQZSL9UUhRofXo5X9BoS0XBug4iDzAPBgNVHRMBAf8EBTAD
AQH/MAoGCCqGSM49BAMCA0gAMEUCIQDFDrvj5p9R7wmMRoJGUuEJu7I2xYtXDcOP
lLE0quJtvwIgWW7vuM3B+ruCslhIrMMqD+DYeguxAxi+aHRVMnBig/c=
-----END CERTIFICATE-----


@ -0,0 +1,5 @@
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQg6qbbYYII1zqqmlDH
hTwJt+JYBe+ELI02yAecAx+nD4yhRANCAARsBKKXtim2FF4uW0ixokJpGsTte0sJ
FW06ka1087SMffcY/an0q1Shajd5Z2jQ1wHJRhp35lP4+8bBKHGXWJj7
-----END PRIVATE KEY-----


@ -16,7 +16,7 @@ feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL)'"
feature cmd "command -v socat"
feature ignore_unknown_macro
server s1 -repeat 27 {
server s1 -repeat 40 {
rxreq
txresp
} -start
@ -542,3 +542,44 @@ client c27 -connect ${h1_mainfe_sock} {
expect resp.http.x-jwt-verify-RS256-var2 == "1"
} -run
client c28 -connect ${h1_mainfe_sock} {
# Token content : {"alg":"none"}
# {"iss":"joe", "exp":1300819380, "http://example.com/is_root":true}
txreq -url "/none" -hdr "Authorization: Bearer eyJhbGciOiJub25lIn0.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ."
rxresp
expect resp.status == 200
expect resp.http.x-jwt-alg == "none"
expect resp.http.x-jwt-verify == "1"
} -run
client c29 -connect ${h1_mainfe_sock} {
# Invalid Token : too many subparts
txreq -url "/errors" -hdr "Authorization: Bearer eyJhbGciOiJub25lIn0.aa.aa.aa"
rxresp
expect resp.status == 200
expect resp.http.x-jwt-alg == "none"
expect resp.http.x-jwt-verify == "-3"
# Invalid Token : too many subparts
txreq -url "/errors" -hdr "Authorization: Bearer eyJhbGciOiJub25lIn0.aa.aa."
rxresp
expect resp.status == 200
expect resp.http.x-jwt-alg == "none"
expect resp.http.x-jwt-verify == "-3"
# Invalid Token : too few subparts
txreq -url "/errors" -hdr "Authorization: Bearer eyJhbGciOiJub25lIn0.aa"
rxresp
expect resp.status == 200
expect resp.http.x-jwt-alg == "none"
expect resp.http.x-jwt-verify == "-3"
# Invalid Token : no signature but alg different than "none"
txreq -url "/errors" -hdr "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ."
rxresp
expect resp.status == 200
expect resp.http.x-jwt-alg == "RS256"
expect resp.http.x-jwt-verify == "-3"
} -run


@ -19,7 +19,7 @@ feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(1.1.1)'"
feature ignore_unknown_macro
server s1 -repeat 20 {
server s1 -repeat 30 {
rxreq
txresp
} -start
@ -53,6 +53,10 @@ haproxy h1 -conf {
# { "kty": "RSA", "e": "AQAB", "n": "wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w", "kid": "ff3c5c96-392e-46ef-a839-6ff16027af78", "d": "b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ", "p": "8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0", "q": "zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M", "dp": "1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE", "dq": "kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM", "qi": "j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk" }
load crt rsa_oeap.pem key rsa_oeap.key jwt on
# Private key built out of the following JWK:
# {"crv":"P-256","d":"6qbbYYII1zqqmlDHhTwJt-JYBe-ELI02yAecAx-nD4w","kty":"EC","x":"bASil7YpthReLltIsaJCaRrE7XtLCRVtOpGtdPO0jH0","y":"9xj9qfSrVKFqN3lnaNDXAclGGnfmU_j7xsEocZdYmPs"}
load crt ec_decrypt.crt key ec_decrypt.key jwt on
listen main-fe
bind "fd@${mainfe}"
@ -84,6 +88,11 @@ haproxy h1 -conf {
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_cert(txn.pem)
.if ssllib_name_startswith(AWS-LC)
acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m end "A128KW" -m end "A192KW"
http-request set-var(txn.decrypted) str("AWS-LC UNMANAGED") if aws_unmanaged
.endif
http-after-response set-header X-Decrypted %[var(txn.decrypted)]
server s1 ${s1_addr}:${s1_port}
@ -95,7 +104,7 @@ haproxy h1 -conf {
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_jwk(txn.jwk)
.if ssllib_name_startswith(AWS-LC)
acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m str "A128KW"
acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m end "A128KW" -m end "A192KW"
http-request set-var(txn.decrypted) str("AWS-LC UNMANAGED") if aws_unmanaged
.endif
@ -262,3 +271,53 @@ client c8 -connect ${h1_mainfe_sock} {
expect resp.http.x-decrypted == ""
} -run
# ECDH-ES
client c9 -connect ${h1_mainfe_sock} {
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiRUNESC1FUyIsICJlbmMiOiAiQTI1NkdDTSIsICJlcGsiOiB7Imt0eSI6ICJFQyIsICJjcnYiOiAiUC0yNTYiLCAieCI6ICJZVUg1VlJweURCb3FiUjVCUWlSMGV4anJ0bDdLb24yeTZkejFnM3NuNzZJIiwgInkiOiAiZld2RHFHb3pYNjJnMnRTS19oSkctWkVKNkFCcFhYTS1Tc1hPeE5KUXFtUSJ9fQ..0tN70AQ3P_4uEV4t.zkv7KfnUlDTKjJ82zKCMK_z7OEFk_euXGuJemShf8mnOeEUE4UN8wS5cRJzMQWxcY9d3dIvUCYx0HhzeoXnKqnkEU6be659IVtKpqtceLYKcIkpjj0XiaEalVqIKKXTU2NG2ldNsYwnEDN_XxMnIUPFOy3yJqpOfjf8v98ABYuTWfJVwk3tK9vYCj-ScCf2NK7cEIti_09VCsxMg7z0kvco5UaTXvDjEbPhj_EVfHoPlmDE6EuaO5OX5t3reOoJ1vsM2PEpADiYfmvSZxeWAmmtAH7cvrRIUCcy4Q5pNczh1Pmt0y-uJKtme16YWq8PxVtnb7lY9HDTuPeaMVqvMV6PlQ9vnfsirjpz72qx3ArAeXkIGJsPOGKfgCoW6sAWHQxCzvq8ek7zOaqTAo169PSdtxfBL4MJWxoLg38pODy4cjEGR71YYirthejEMgRs7G1A8ksxgs2bkYGInunUD_iAWkQzxYZhFlLRntWP1ikOKmx9gbqR6K9UiqCK1UG4NXF3o4OV34m-jw-cXMDF2JkekVK2-rhxTbXmqP-VhDrkQ2ANdk7fTW9elFYNisVzE1QjdClMKGhO1fdKiSJ9xSPo3W6pMuquYYN-XT1fLiu3GDtO4ELZWVdwmiucsxv9H2jzPwbhvbvlXwXsmyCBtvumcEUbiYCOIYvlddhTGjZHplvDU73O5SkxUYJTYh7H0DcSiZ-6tcWdRCs605xVZMJ_X91_gZ1tb2_df73lYT_tVo39kw78m3GVFBeK2Zy4JeLheo0fHE7n8lg13uwG77SHwrWSV61KKWhBPZR0bWGi8YvVHnqX0GWklIjpqjbIjYAk4baFv4MO4OvEkPxnGm64NNZWrGEA0U8eEHCgjF1ZagQFNb674Crgd-tRA0QPEAOc9NsnlK1Q-47KIgqNbwoc3VpbpHNLVJT4aKWV5q187YNxarbpeDqguh75M9AgbpT5bSDFhjF83f1kiEDgLdNTkAd-CPAzgtzaEAfxD1K4ViZZZ2DqXgw0PFTFZAWrWqv8Ydi61r5MJ.Srleju8Bifrc_6bqFPUF_w" \
-hdr "X-JWK: {\"kty\":\"EC\",\"crv\":\"P-256\",\"x\":\"DxaAKzwruXJh4IkdieycIJER6w8M1TYMCV3qOa-l9CM\",\"y\":\"_kRI1aD7-PMFwhUpXmcRzw6hALF_xdKwADuKOM-xsak\",\"d\":\"SOu5eRc40yn5yVrg069VjWNH4wsoErN8_AxmH4cI88s\"}"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTIiwiZW5jIjoiQTI1NkNCQy1IUzUxMiIsImVwayI6eyJjcnYiOiJQLTUyMSIsImt0eSI6IkVDIiwieCI6IkFYelVIT0hFdXU1RzlxSHBQOGdXelBub3FQNUpSVmNrYWZGb1VxV0FUQVd6b3FUN0tmX3V5WW5HSElaNkVXdUx0U0NrUnREUHE4WDJlcV9lSDc0QVRQZGwiLCJ5IjoiQVRxelFNWV9PUE9lWUZYNlpGN0l4ZkgwQ2x6RlRjZjVhaE1UTERmMHJYRkczNmdHN1lDMjR1Q2hrR2ZoZHlBT1RRY09kN1ZyQlM4clNZeC03R0hLbzNWNSJ9fQ..kTaw9v3MWCN78jq5OXTWZA.w4o_19dlHEFEhQ0GXI08x-vJnImL_mtZ_oHXTCvfCj_aCEDL4UuiaAU7-yvtM60G3HjNO6TTvmCdvHOTz6Ynrg.H-EbBpTyi5YXNT5DHSFiNBeBcdjmClR_LDARvak4qng" \
-hdr "X-JWK: {\"alg\":\"ECDH-ES\",\"crv\":\"P-521\",\"d\":\"AVBp1yn67_t0C8WfJnrhZsgy4TDkA9XktZnwAHcCTUMWTBCURXOjCNCIaCyE65xzIQbZUc9rO-B93XKFO81u8myd\",\"key_ops\":[\"wrapKey\",\"unwrapKey\"],\"kty\":\"EC\",\"x\":\"AByuEl5P9ledNRyj4EjTtQwDcsIYpbNzUqjri5o8GPGLzeTWUzjDBVt1ZyxKfK8VMVQbj8sIrBHncYUqM1Re3pSA\",\"y\":\"AW171IiyQSWx95A9uT1m76XPcAss3jeE7lHgw8mU7yIxSi_SItDYFixJ5Xtf2Vu2BLlmpR0on6VV1UUNIyPk6qwb\"}"
rxresp
expect resp.http.x-decrypted == "Random test message for ECDH-ES encrypted tokens"
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTIiwiZW5jIjoiQTEyOENCQy1IUzI1NiIsImVwayI6eyJjcnYiOiJQLTI1NiIsImt0eSI6IkVDIiwieCI6InhEcUZveF9oR3Q5VjZWSWZjRUpaU1VVTm1uT0V5Qk1BYzZybHlOV09lcjgiLCJ5IjoiUVZmdkstcVJ0V0J1Uk9XVzRnMmlVampqMFN3U1BzYjB6ZE10R0c2czBFUSJ9fQ..0ykoqdP2WMKra2VugMQMzg.dyCI6QGNIf-x4n0DIaXgVnGtoSCOD3sOX7I01djrFdNRRSmPnITcQiJn1lw1LbiZyqZxOLf_mJHw7BRrcgPxBG6gsP3oFBnLeXllcD6kuLtllVofaPDEKdr66W9dp6Cr.002j4NUlGTYz8d_0mTM38A" \
-hdr "X-PEM: ${testdir}/ec_decrypt.crt"
rxresp
expect resp.http.x-decrypted == "Random test message for ECDH-ES encrypted token (with some extra padding for good measure)"
} -run
# ECDH-ES+A___KW
client c10 -connect ${h1_mainfe_sock} {
# ECDH-ES+A128KW
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTK0ExMjhLVyIsImVuYyI6IkExMjhDQkMtSFMyNTYiLCJlcGsiOnsiY3J2IjoiUC0yNTYiLCJrdHkiOiJFQyIsIngiOiJtc2poQktWNW5oNnBjdjhoRnR0UDlFVXRzaURzWG83T3RCekVZYkVJM1EwIiwieSI6IloxQ3FPQlEya1RNR1lENWdMUWJCaHB0MzRKRkR3dW5TX2ZzSmhsMlc1OWcifX0.5l7YaATvAWFJnWK_HsBPmawJ0RMqrkiwyZ9xAuiYCFSiqWWSr8D82A.0sa1s5V2RcDf0FW6hA1lig.z2DVLxtHeY1fPp6dJHiHEuHLVIQHQ10GfYXeFxwNE7JGyto-D3K1elHQn0Yq4Pitaheja21gnXkJajXhOA0rwQ.YmpToFWmj8XQrXMeXTa9eQ" \
-hdr "X-JWK: {\"crv\":\"P-256\",\"d\":\"6qbbYYII1zqqmlDHhTwJt-JYBe-ELI02yAecAx-nD4w\",\"kty\":\"EC\",\"x\":\"bASil7YpthReLltIsaJCaRrE7XtLCRVtOpGtdPO0jH0\",\"y\":\"9xj9qfSrVKFqN3lnaNDXAclGGnfmU_j7xsEocZdYmPs\"}"
rxresp
expect resp.http.x-decrypted ~ "(Random test message for ECDH-ES encrypted tokens|AWS-LC UNMANAGED)"
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTK0ExMjhLVyIsImVuYyI6IkExMjhDQkMtSFMyNTYiLCJlcGsiOnsiY3J2IjoiUC0yNTYiLCJrdHkiOiJFQyIsIngiOiJKeUJzcDZtZjVCMWROYUk0ZGppX2pnMm9NdFRRQUd4akxTekdGbGJ0dXZZIiwieSI6IlFfZnlBUmZiMGpjWXFtSTZUNEdXTXA1U2dGYXZiQ3lGUGF3OHhab1BIYzAifX0.JKrKKRF9QdxUyv0KX-MV11eHpP2Vz8Amdh8j3ipd_QP57jkN-OWRCQ.CjpnSVRVV51C10cUwCTaXA.bliaBk7mGYIOGdvgiMg481iC8GiOarRrjIkUgEBuqiSJENmOi90IXgnoVp4qQdi70bJVBNuCYP7Q9sLzZc4X2g.C_TCuAfAH5020v-NdR91BA" \
-hdr "X-PEM: ${testdir}/ec_decrypt.crt"
rxresp
expect resp.http.x-decrypted ~ "(Random test message for ECDH-ES\\+A128KW encrypted token|AWS-LC UNMANAGED)"
# ECDH-ES+A192KW
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTK0ExOTJLVyIsImVuYyI6IkExOTJDQkMtSFMzODQiLCJlcGsiOnsiY3J2IjoiUC0zODQiLCJrdHkiOiJFQyIsIngiOiJDcTd3Y0MzUm92VFRZSTMzLU9DcXBocjFlN1NzeEZWY0dOQXhEOEpWZHBRQmROaGg3Z2dLNTJKVkJ1RF9uZXVHIiwieSI6IjlaLU1MV09TQ3VZd0JZVTEtcTd2YUREWUZ1WFhqc1EwSmxpWllLVmdOU0dqVHVLY3VXQnJHemV2RzZEeGgyRHQifX0.75lt6Ixq6UhlN8uiaEphy8SiqEVsuD4Rc3QbFcmP7MJUTyt15LcZ3y-M7TJeNBh3Ajy_6K2WooU.cO9tUaQ2eVo0tIuOqb5_Bw.HQ6DqnLhW2Ad0c78WFGgwCStefYdL37xmh2Fa2mCsVNW5q0K3-xeDHYuIP9Q5xBYEY70U6wV5a0iVN87ii_iMA.feLteQh1ickYVJ2ZZ2whoVzNGRHgUpjp" \
-hdr "X-JWK: {\"alg\":\"ECDH-ES+A192KW\",\"crv\":\"P-384\",\"d\":\"pj6xIezfwtUakkkLtbRQ9FmN6uN1YJ-TSBkWn4awuDfWiHgqpQHA7_L95Hjks1cK\",\"key_ops\":[\"wrapKey\",\"unwrapKey\"],\"kty\":\"EC\",\"x\":\"JO3ojbUYOzoSb-7lAy-c7VhDIjhEtg4zrPn_NJKuGhat-cuI1c4LvOj3n8p3j4bn\",\"y\":\"CA3i4pN7t6liWxQXyxdDp9t79B8uWuubGADJuGn_2_yl6pufhnQ30OBA590fOtEm\"}"
rxresp
expect resp.http.x-decrypted ~ "(Random test message for ECDH-ES\\+A192KW encrypted tokens|AWS-LC UNMANAGED)"
# ECDH-ES+A256KW
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJFQ0RILUVTK0EyNTZLVyIsImVuYyI6IkEyNTZDQkMtSFM1MTIiLCJlcGsiOnsiY3J2IjoiUC01MjEiLCJrdHkiOiJFQyIsIngiOiJBTFZuZXN6Tl93WVJSWVYtblp3dy1sSkVDTXB2eE1iSENXX3BjY3EyWlF2eFdsNzVKdm5TM3lKbjgzcTE1MlpnWU4zTTB4SUhzQmw1empWZS02OGR4TThwIiwieSI6IkFUX2pGel94RGt0VFY4WWYzZlo1MnRvbE5QWkwwNXlwa0dVTThPWFRNZTBaaVNfYnIzaS0xNHFlWG1OcjA3TFFjNUZMX1VTQkE5WmlyWGRaZkVLUnFqNmEifX0.MqGFvMzpIlwQHeXgPucBkXmS2BaXr2ByUugzD31XrPtxwlWw96vOmfcjSHvda2FGJ1u6InaMMVZMMp75P6AF0kvk8vuM7QF2.kHYblcqwHgXv0xRQrLHwoA.gwFUyTx3RRHWvmqyUL5N6W8HcwbNc1hPTImQPoCNPv6rkhzV1obikVj7sNuTh3Po0nBu2QCKrt-GjJTlD4Q5kw.Q_YZWSkVVxv1rcpySgENN3ZPp-chIYoCGC070kkqiXc" \
-hdr "X-JWK: {\"alg\":\"ECDH-ES+A256KW\",\"crv\":\"P-521\",\"d\":\"AGGLpIzSL1jE34wGa-owWCVt2rgk8j3jqh33QQFKwYCJ9abp3vROyQ-dNv6j6PjrnF1EFyY9dDzChNpWmzoOZAp3\",\"key_ops\":[\"wrapKey\",\"unwrapKey\"],\"kty\":\"EC\",\"x\":\"AD0EIUE6Bt_TDcyOPM6VchRocp7AFSeVd6XkVALWf8AFebeMgKIvJsCsGeRdPTO3vWWrR5AOvvpiBfurb9M9Tus-\",\"y\":\"AOeI5d0iF463g3DolhmVFn6MWk764ONuXRexLApjN-Q6_RkcnCieRSZzqqSPMYuEn-N3i4aYfiEPZV0jk8oZKQMQ\"}"
rxresp
expect resp.http.x-decrypted == "Random test message for ECDH-ES+A256KW encrypted tokens"
} -run

View File

@ -28,7 +28,7 @@
# show-backports -q -m -r hapee-r2 hapee-r1
USAGE="Usage: ${0##*/} [-q] [-H] [-m] [-u] [-r reference] [-l logexpr] [-s subject] [-b base] {branch|range} [...] [-- file*]"
USAGE="Usage: ${0##*/} [-q] [-H] [-m] [-u] [-L] [-r reference] [-l logexpr] [-s subject] [-b base] {branch|range} [...] [-- file*]"
BASES=( )
BRANCHES=( )
REF=
@ -39,6 +39,7 @@ SUBJECT=
MISSING=
UPSTREAM=
BODYHASH=
SINCELAST=
die() {
[ "$#" -eq 0 ] || echo "$*" >&2
@ -70,7 +71,7 @@ dump_commit_matrix() {
count=0
# now look up commits
while read ref subject; do
if [ -n "$MISSING" -a "${subject:0:9}" = "[RELEASE]" ]; then
if [ -n "$MISSING" -o -n "$SINCELAST" ] && [ "${subject:0:9}" = "[RELEASE]" ]; then
continue
fi
@ -153,6 +154,7 @@ while [ -n "$1" -a -z "${1##-*}" ]; do
-m) MISSING=1 ; shift ;;
-u) UPSTREAM=1 ; shift ;;
-H) BODYHASH=1 ; shift ;;
-L) SINCELAST=1 ; shift ;;
-h|--help) quit "$USAGE" ;;
*) die "$USAGE" ;;
esac
@ -255,7 +257,7 @@ if [ -z "$BASE" -a -n "$MISSING" ]; then
fi
if [ -z "$BASE" ]; then
err "Warning! No base specified, looking for common ancestor."
[ "$QUIET" != "" ] || err "Warning! No base specified, looking for common ancestor."
BASE=$(git merge-base --all "$REF" "${BRANCHES[@]}")
if [ -z "$BASE" ]; then
die "Couldn't find a common ancestor between these branches"
@ -297,9 +299,23 @@ dump_commit_matrix | column -t | \
(
left_commits=( )
right_commits=( )
since_last=( )
last_bkp=$BASE
while read line; do
# append the subject at the end of the line
set -- $line
if [ -n "$SINCELAST" ]; then
if [ "${line::1}" = ":" ]; then
continue
fi
if [ "$2" != "-" ]; then
last_bkp="$1"
since_last=( )
else
since_last[${#since_last[@]}]="$1"
fi
continue
fi
echo -n "$line "
if [ "${line::1}" = ":" ]; then
echo "---- Subject ----"
@ -315,7 +331,14 @@ dump_commit_matrix | column -t | \
right_commits[${#right_commits[@]}]="$comm"
fi
done
if [ -n "$MISSING" -a ${#left_commits[@]} -eq 0 ]; then
if [ -n "$SINCELAST" -a ${#since_last[@]} -eq 0 ]; then
echo "No new commit upstream since last commit $last_bkp."
elif [ -n "$SINCELAST" ]; then
echo "Found ${#since_last[@]} commit(s) added to branch $REF since last backported commit $last_bkp:"
echo
echo " git cherry-pick -sx ${since_last[@]}"
echo
elif [ -n "$MISSING" -a ${#left_commits[@]} -eq 0 ]; then
echo "No missing commit to apply."
elif [ -n "$MISSING" ]; then
echo

View File

@ -15,6 +15,7 @@
#include <haproxy/acme-t.h>
#include <haproxy/base64.h>
#include <haproxy/intops.h>
#include <haproxy/cfgparse.h>
#include <haproxy/cli.h>
#include <haproxy/errors.h>
@ -266,7 +267,6 @@ static int cfg_parse_acme(const char *file, int linenum, char **args, int kwm)
mark_tainted(TAINTED_CONFIG_EXP_KW_DECLARED);
if (strcmp(args[0], "acme") == 0) {
struct acme_cfg *tmp_acme = acme_cfgs;
if (alertif_too_many_args(1, file, linenum, args, &err_code))
goto out;
@ -292,7 +292,7 @@ static int cfg_parse_acme(const char *file, int linenum, char **args, int kwm)
* name */
err_code |= ERR_ALERT | ERR_FATAL;
ha_alert("parsing [%s:%d]: acme section '%s' already exists (%s:%d).\n",
file, linenum, args[1], tmp_acme->filename, tmp_acme->linenum);
file, linenum, args[1], cur_acme->filename, cur_acme->linenum);
goto out;
}
@ -1188,7 +1188,7 @@ int acme_res_certificate(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1261,7 +1261,7 @@ int acme_res_chkorder(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1344,7 +1344,6 @@ int acme_req_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
csr->data = ret;
chunk_printf(req_in, "{ \"csr\": \"%.*s\" }", (int)csr->data, csr->area);
OPENSSL_free(data);
if (acme_jws_payload(req_in, ctx->nonce, ctx->finalize, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
@ -1358,6 +1357,7 @@ int acme_req_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
error:
memprintf(errmsg, "couldn't request the finalize URL");
out:
OPENSSL_free(data);
free_trash_chunk(req_in);
free_trash_chunk(req_out);
free_trash_chunk(csr);
@ -1391,7 +1391,7 @@ int acme_res_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1492,7 +1492,7 @@ enum acme_ret acme_res_challenge(struct task *task, struct acme_ctx *ctx, struct
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1618,7 +1618,7 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1849,7 +1849,7 @@ int acme_res_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
/* get the order URL */
if (isteqi(hdr->n, ist("Location"))) {
@ -2009,7 +2009,7 @@ int acme_res_account(struct task *task, struct acme_ctx *ctx, int newaccount, ch
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
if (isteqi(hdr->n, ist("Replay-Nonce"))) {
istfree(&ctx->nonce);
@ -2526,9 +2526,9 @@ X509_REQ *acme_x509_req(EVP_PKEY *pkey, char **san)
{
struct buffer *san_trash = NULL;
X509_REQ *x = NULL;
X509_NAME *nm;
X509_NAME *nm = NULL;
STACK_OF(X509_EXTENSION) *exts = NULL;
X509_EXTENSION *ext_san;
X509_EXTENSION *ext_san = NULL;
char *str_san = NULL;
int i = 0;
@ -2559,26 +2559,36 @@ X509_REQ *acme_x509_req(EVP_PKEY *pkey, char **san)
for (i = 0; san[i]; i++) {
chunk_appendf(san_trash, "%sDNS:%s", i ? "," : "", san[i]);
}
str_san = my_strndup(san_trash->area, san_trash->data);
if ((str_san = my_strndup(san_trash->area, san_trash->data)) == NULL)
goto error;
if ((ext_san = X509V3_EXT_conf_nid(NULL, NULL, NID_subject_alt_name, str_san)) == NULL)
goto error;
if (!sk_X509_EXTENSION_push(exts, ext_san))
goto error;
ext_san = NULL; /* handle double-free upon error */
if (!X509_REQ_add_extensions(x, exts))
goto error;
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
if (!X509_REQ_sign(x, pkey, EVP_sha256()))
goto error;
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
X509_NAME_free(nm);
free(str_san);
free_trash_chunk(san_trash);
return x;
error:
X509_EXTENSION_free(ext_san);
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
X509_REQ_free(x);
X509_NAME_free(nm);
free(str_san);
free_trash_chunk(san_trash);
return NULL;
@ -2721,7 +2731,10 @@ static int cli_acme_renew_parse(char **args, char *payload, struct appctx *appct
struct ckch_store *store = NULL;
char *errmsg = NULL;
if (!*args[1]) {
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[2]) {
memprintf(&errmsg, ": not enough parameters\n");
goto err;
}
@ -2760,8 +2773,11 @@ static int cli_acme_chall_ready_parse(char **args, char *payload, struct appctx
int remain = 0;
struct ebmb_node *node = NULL;
if (!*args[2] && !*args[3] && !*args[4]) {
memprintf(&msg, ": not enough parameters\n");
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[2] || !*args[3] || !*args[4]) {
memprintf(&msg, "Not enough parameters: \"acme challenge_ready <certfile> domain <domain>\"\n");
goto err;
}
@ -2882,8 +2898,12 @@ end:
return 1;
}
static int cli_acme_ps(char **args, char *payload, struct appctx *appctx, void *private)
static int cli_acme_parse_status(char **args, char *payload, struct appctx *appctx, void *private)
{
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
return 0;
}
@ -2891,7 +2911,7 @@ static int cli_acme_ps(char **args, char *payload, struct appctx *appctx, void *
static struct cli_kw_list cli_kws = {{ },{
{ { "acme", "renew", NULL }, "acme renew <certfile> : renew a certificate using the ACME protocol", cli_acme_renew_parse, NULL, NULL, NULL, 0 },
{ { "acme", "status", NULL }, "acme status : show status of certificates configured with ACME", cli_acme_ps, cli_acme_status_io_handler, NULL, NULL, 0 },
{ { "acme", "status", NULL }, "acme status : show status of certificates configured with ACME", cli_acme_parse_status, cli_acme_status_io_handler, NULL, NULL, 0 },
{ { "acme", "challenge_ready", NULL }, "acme challenge_ready <certfile> domain <domain> : notify HAProxy that the ACME challenge is ready", cli_acme_chall_ready_parse, NULL, NULL, NULL, 0 },
{ { NULL }, NULL, NULL, NULL }
}};

View File

@ -25,7 +25,8 @@
/* Check an action ruleset's validity. It returns the number of errors encountered
* and err_code is updated if a warning is emitted.
* and err_code is updated if a warning is emitted. It also takes this
* opportunity to fill the execution context based on available info.
*/
int check_action_rules(struct list *rules, struct proxy *px, int *err_code)
{
@ -40,6 +41,13 @@ int check_action_rules(struct list *rules, struct proxy *px, int *err_code)
}
*err_code |= warnif_tcp_http_cond(px, rule->cond);
ha_free(&errmsg);
if (!rule->exec_ctx.type) {
if (rule->kw && rule->kw->exec_ctx.type)
rule->exec_ctx = rule->kw->exec_ctx;
else if (rule->action_ptr)
rule->exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FUNC, rule->action_ptr);
}
}
return err;
@ -378,3 +386,24 @@ void dump_act_rules(const struct list *rules, const char *pfx)
(akwn->flags & KWF_MATCH_PREFIX) ? "*" : "");
}
}
/* adds the keyword list kw_list to the head <head> */
void act_add_list(struct list *head, struct action_kw_list *kw_list)
{
int i;
for (i = 0; kw_list->kw[i].kw != NULL; i++) {
/* store declaration file/line if known */
if (kw_list->kw[i].exec_ctx.type)
continue;
if (caller_initcall) {
kw_list->kw[i].exec_ctx.type = TH_EX_CTX_INITCALL;
kw_list->kw[i].exec_ctx.initcall = caller_initcall;
} else {
kw_list->kw[i].exec_ctx.type = TH_EX_CTX_ACTION;
kw_list->kw[i].exec_ctx.action_kwl = kw_list;
}
}
LIST_APPEND(head, &kw_list->list);
}

View File

@ -29,8 +29,11 @@ struct show_prof_ctx {
int dump_step; /* 0,1,2,4,5,6; see cli_iohandler_show_profiling() */
int linenum; /* next line to be dumped (starts at 0) */
int maxcnt; /* max line count per step (0=not set) */
int by_what; /* 0=sort by usage, 1=sort by address, 2=sort by time */
int by_what; /* 0=sort by usage, 1=sort by address, 2=sort by time, 3=sort by ctx */
int aggr; /* 0=dump raw, 1=aggregate on callee */
/* 4-byte hole here */
struct sched_activity *tmp_activity; /* dynamically allocated during dumps */
struct memprof_stats *tmp_memstats; /* dynamically allocated during dumps */
};
/* CLI context for the "show activity" command */
@ -299,13 +302,18 @@ struct memprof_stats *memprof_get_bin(const void *ra, enum memprof_method meth)
int retries = 16; // up to 16 consecutive entries may be tested.
const void *old;
unsigned int bin;
ullong hash;
if (unlikely(!ra)) {
bin = MEMPROF_HASH_BUCKETS;
goto leave;
}
bin = ptr_hash(ra, MEMPROF_HASH_BITS);
for (; memprof_stats[bin].caller != ra; bin = (bin + 1) & (MEMPROF_HASH_BUCKETS - 1)) {
hash = _ptr2_hash_arg(ra, th_ctx->exec_ctx.pointer, th_ctx->exec_ctx.type);
for (bin = _ptr_hash_reduce(hash, MEMPROF_HASH_BITS);
memprof_stats[bin].caller != ra ||
memprof_stats[bin].exec_ctx.type != th_ctx->exec_ctx.type ||
memprof_stats[bin].exec_ctx.pointer != th_ctx->exec_ctx.pointer;
bin = (bin + (hash | 1)) & (MEMPROF_HASH_BUCKETS - 1)) {
if (!--retries) {
bin = MEMPROF_HASH_BUCKETS;
break;
@ -314,6 +322,7 @@ struct memprof_stats *memprof_get_bin(const void *ra, enum memprof_method meth)
old = NULL;
if (!memprof_stats[bin].caller &&
HA_ATOMIC_CAS(&memprof_stats[bin].caller, &old, ra)) {
memprof_stats[bin].exec_ctx = th_ctx->exec_ctx;
memprof_stats[bin].method = meth;
break;
}
@ -918,6 +927,14 @@ static int cmp_memprof_stats(const void *a, const void *b)
return -1;
else if (l->alloc_tot + l->free_tot < r->alloc_tot + r->free_tot)
return 1;
else if (l->exec_ctx.type > r->exec_ctx.type)
return -1;
else if (l->exec_ctx.type < r->exec_ctx.type)
return 1;
else if (l->exec_ctx.pointer > r->exec_ctx.pointer)
return -1;
else if (l->exec_ctx.pointer < r->exec_ctx.pointer)
return 1;
else
return 0;
}
@ -931,6 +948,47 @@ static int cmp_memprof_addr(const void *a, const void *b)
return -1;
else if (l->caller < r->caller)
return 1;
else if (l->exec_ctx.type > r->exec_ctx.type)
return -1;
else if (l->exec_ctx.type < r->exec_ctx.type)
return 1;
else if (l->exec_ctx.pointer > r->exec_ctx.pointer)
return -1;
else if (l->exec_ctx.pointer < r->exec_ctx.pointer)
return 1;
else
return 0;
}
static int cmp_memprof_ctx(const void *a, const void *b)
{
const struct memprof_stats *l = (const struct memprof_stats *)a;
const struct memprof_stats *r = (const struct memprof_stats *)b;
const void *ptrl = l->exec_ctx.pointer;
const void *ptrr = r->exec_ctx.pointer;
/* in case of a mux, we'll use the always-present ->subscribe()
* function as a sorting key so that mux-ops and other mux functions
* appear grouped together.
*/
if (l->exec_ctx.type == TH_EX_CTX_MUX)
ptrl = l->exec_ctx.mux_ops->subscribe;
if (r->exec_ctx.type == TH_EX_CTX_MUX)
ptrr = r->exec_ctx.mux_ops->subscribe;
if (ptrl > ptrr)
return -1;
else if (ptrl < ptrr)
return 1;
else if (l->exec_ctx.type > r->exec_ctx.type)
return -1;
else if (l->exec_ctx.type < r->exec_ctx.type)
return 1;
else if (l->caller > r->caller)
return -1;
else if (l->caller < r->caller)
return 1;
else
return 0;
}
@ -992,9 +1050,9 @@ struct sched_activity *sched_activity_entry(struct sched_activity *array, const
static int cli_io_handler_show_profiling(struct appctx *appctx)
{
struct show_prof_ctx *ctx = appctx->svcctx;
struct sched_activity tmp_activity[SCHED_ACT_HASH_BUCKETS];
struct sched_activity *tmp_activity = ctx->tmp_activity;
#ifdef USE_MEMORY_PROFILING
struct memprof_stats tmp_memstats[MEMPROF_HASH_BUCKETS + 1];
struct memprof_stats *tmp_memstats = ctx->tmp_memstats;
unsigned long long tot_alloc_calls, tot_free_calls;
unsigned long long tot_alloc_bytes, tot_free_bytes;
#endif
@ -1035,7 +1093,20 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
if ((ctx->dump_step & 3) != 1)
goto skip_tasks;
memcpy(tmp_activity, sched_activity, sizeof(tmp_activity));
if (tmp_activity)
goto tasks_resume;
/* first call for show profiling tasks: we have to allocate a tmp
* array for sorting and processing, and possibly perform some
* sorting and aggregation.
*/
tmp_activity = ha_aligned_alloc(__alignof__(*tmp_activity), sizeof(sched_activity));
if (!tmp_activity)
goto end_tasks;
ctx->tmp_activity = tmp_activity;
memcpy(tmp_activity, sched_activity, sizeof(sched_activity));
/* for addr sort and for callee aggregation we have to first sort by address */
if (ctx->aggr || ctx->by_what == 1) // sort by addr
qsort(tmp_activity, SCHED_ACT_HASH_BUCKETS, sizeof(tmp_activity[0]), cmp_sched_activity_addr);
@ -1060,6 +1131,7 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
else if (ctx->by_what == 2) // by cpu_tot
qsort(tmp_activity, SCHED_ACT_HASH_BUCKETS, sizeof(tmp_activity[0]), cmp_sched_activity_cpu);
tasks_resume:
if (!ctx->linenum)
chunk_appendf(&trash, "Tasks activity over %.3f sec till %.3f sec ago:\n"
" function calls cpu_tot cpu_avg lkw_avg lkd_avg mem_avg lat_avg\n",
@ -1123,6 +1195,8 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
return 0;
}
end_tasks:
ha_free(&ctx->tmp_activity);
ctx->linenum = 0; // reset first line to dump
if ((ctx->dump_step & 4) == 0)
ctx->dump_step++; // next step
@ -1133,16 +1207,57 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
if ((ctx->dump_step & 3) != 2)
goto skip_mem;
memcpy(tmp_memstats, memprof_stats, sizeof(tmp_memstats));
if (ctx->by_what)
if (tmp_memstats)
goto memstats_resume;
/* first call for show profiling memory: we have to allocate a tmp
* array for sorting and processing, and possibly perform some sorting
* and aggregation.
*/
tmp_memstats = ha_aligned_alloc(__alignof__(*tmp_memstats), sizeof(memprof_stats));
if (!tmp_memstats)
goto end_memstats;
ctx->tmp_memstats = tmp_memstats;
memcpy(tmp_memstats, memprof_stats, sizeof(memprof_stats));
if (ctx->by_what == 1)
qsort(tmp_memstats, MEMPROF_HASH_BUCKETS+1, sizeof(tmp_memstats[0]), cmp_memprof_addr);
else if (ctx->by_what == 3)
qsort(tmp_memstats, MEMPROF_HASH_BUCKETS+1, sizeof(tmp_memstats[0]), cmp_memprof_ctx);
else
qsort(tmp_memstats, MEMPROF_HASH_BUCKETS+1, sizeof(tmp_memstats[0]), cmp_memprof_stats);
if (ctx->aggr) {
/* merge entries for the same caller and reset the exec_ctx */
for (i = j = 0; i < MEMPROF_HASH_BUCKETS; i++) {
if ((tmp_memstats[i].alloc_calls | tmp_memstats[i].free_calls) == 0)
continue;
for (j = i + 1; j < MEMPROF_HASH_BUCKETS; j++) {
if ((tmp_memstats[j].alloc_calls | tmp_memstats[j].free_calls) == 0)
continue;
if (tmp_memstats[j].caller != tmp_memstats[i].caller ||
tmp_memstats[j].method != tmp_memstats[i].method ||
tmp_memstats[j].info != tmp_memstats[i].info)
continue;
tmp_memstats[i].locked_calls += tmp_memstats[j].locked_calls;
tmp_memstats[i].alloc_calls += tmp_memstats[j].alloc_calls;
tmp_memstats[i].free_calls += tmp_memstats[j].free_calls;
tmp_memstats[i].alloc_tot += tmp_memstats[j].alloc_tot;
tmp_memstats[i].free_tot += tmp_memstats[j].free_tot;
/* don't dump the ctx */
tmp_memstats[i].exec_ctx.type = 0;
/* don't dump the merged entry */
tmp_memstats[j].alloc_calls = tmp_memstats[j].free_calls = 0;
}
}
}
memstats_resume:
if (!ctx->linenum)
chunk_appendf(&trash,
"Alloc/Free statistics by call place over %.3f sec till %.3f sec ago:\n"
" Calls | Tot Bytes | Caller and method\n"
" Calls | Tot Bytes | Caller, method, extra info\n"
"<- alloc -> <- free ->|<-- alloc ---> <-- free ---->|\n",
(prof_mem_start_ns ? (prof_mem_stop_ns ? prof_mem_stop_ns : now_ns) - prof_mem_start_ns : 0) / 1000000000.0,
(prof_mem_stop_ns ? now_ns - prof_mem_stop_ns : 0) / 1000000000.0);
@ -1200,6 +1315,7 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
(int)((1000ULL * entry->locked_calls / tot_calls) % 10));
}
chunk_append_thread_ctx(&trash, &entry->exec_ctx, " [via ", "]");
chunk_appendf(&trash, "\n");
if (applet_putchk(appctx, &trash) == -1)
@ -1309,9 +1425,15 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
tot_alloc_calls - tot_free_calls,
tot_alloc_bytes - tot_free_bytes);
/* release optional buffer name */
for (i = 0; i < max; i++)
ha_free(&tmp_memstats[i].info);
if (applet_putchk(appctx, &trash) == -1)
return 0;
end_memstats:
ha_free(&ctx->tmp_memstats);
ctx->linenum = 0; // reset first line to dump
if ((ctx->dump_step & 4) == 0)
ctx->dump_step++; // next step
@ -1322,6 +1444,15 @@ static int cli_io_handler_show_profiling(struct appctx *appctx)
return 1;
}
/* release structs allocated by "show profiling" */
static void cli_release_show_profiling(struct appctx *appctx)
{
struct show_prof_ctx *ctx = appctx->svcctx;
ha_free(&ctx->tmp_activity);
ha_free(&ctx->tmp_memstats);
}
/* parse a "show profiling" command. It returns 1 on failure, 0 if it starts to dump.
* - cli.i0 is set to the first state (0=all, 4=status, 5=tasks, 6=memory)
* - cli.o1 is set to 1 if the output must be sorted by addr instead of usage
@ -1354,6 +1485,9 @@ static int cli_parse_show_profiling(char **args, char *payload, struct appctx *a
else if (strcmp(args[arg], "bytime") == 0) {
ctx->by_what = 2; // sort output by total time instead of usage
}
else if (strcmp(args[arg], "byctx") == 0) {
ctx->by_what = 3; // sort output by caller context instead of usage
}
else if (strcmp(args[arg], "aggr") == 0) {
ctx->aggr = 1; // aggregate output by callee
}
@ -1361,7 +1495,7 @@ static int cli_parse_show_profiling(char **args, char *payload, struct appctx *a
ctx->maxcnt = atoi(args[arg]); // number of entries to dump
}
else
return cli_err(appctx, "Expects either 'all', 'status', 'tasks', 'memory', 'byaddr', 'bytime', 'aggr' or a max number of output lines.\n");
return cli_err(appctx, "Expects either 'all', 'status', 'tasks', 'memory', 'byaddr', 'bytime', 'byctx', 'aggr' or a max number of output lines.\n");
}
return 0;
}
@ -1705,7 +1839,7 @@ INITCALL1(STG_REGISTER, cfg_register_keywords, &cfg_kws);
static struct cli_kw_list cli_kws = {{ },{
{ { "set", "profiling", NULL }, "set profiling <what> {auto|on|off} : enable/disable resource profiling (tasks,memory)", cli_parse_set_profiling, NULL },
{ { "show", "activity", NULL }, "show activity [-1|0|thread_num] : show per-thread activity stats (for support/developers)", cli_parse_show_activity, cli_io_handler_show_activity, NULL },
{ { "show", "profiling", NULL }, "show profiling [<what>|<#lines>|<opts>]*: show profiling state (all,status,tasks,memory)", cli_parse_show_profiling, cli_io_handler_show_profiling, NULL },
{ { "show", "profiling", NULL }, "show profiling [<what>|<#lines>|<opts>]*: show profiling state (all,status,tasks,memory)", cli_parse_show_profiling, cli_io_handler_show_profiling, cli_release_show_profiling },
{ { "show", "tasks", NULL }, "show tasks : show running tasks", NULL, cli_io_handler_show_tasks, NULL },
{{},}
}};

View File

@ -31,7 +31,6 @@ unsigned int nb_applets = 0;
DECLARE_TYPED_POOL(pool_head_appctx, "appctx", struct appctx);
/* trace source and events */
static void applet_trace(enum trace_level level, uint64_t mask,
const struct trace_source *src,
@ -417,7 +416,7 @@ void appctx_shut(struct appctx *appctx)
TRACE_ENTER(APPLET_EV_RELEASE, appctx);
if (appctx->applet->release)
appctx->applet->release(appctx);
CALL_APPLET_NO_RET(appctx->applet, release(appctx));
applet_fl_set(appctx, APPCTX_FL_SHUTDOWN);
b_dequeue(&appctx->buffer_wait);
@ -512,7 +511,7 @@ size_t appctx_htx_rcv_buf(struct appctx *appctx, struct buffer *buf, size_t coun
goto out;
}
htx_xfer_blks(buf_htx, appctx_htx, count, HTX_BLK_UNUSED);
htx_xfer(buf_htx, appctx_htx, count, HTX_XFER_DEFAULT);
buf_htx->flags |= (appctx_htx->flags & (HTX_FL_PARSING_ERROR|HTX_FL_PROCESSING_ERROR));
if (htx_is_empty(appctx_htx)) {
buf_htx->flags |= (appctx_htx->flags & HTX_FL_EOM);
@ -551,7 +550,7 @@ size_t appctx_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, unsig
if (flags & CO_RFL_BUF_FLUSH)
applet_fl_set(appctx, APPCTX_FL_FASTFWD);
ret = appctx->applet->rcv_buf(appctx, buf, count, flags);
ret = CALL_APPLET_WITH_RET(appctx->applet, rcv_buf(appctx, buf, count, flags));
if (ret)
applet_fl_clr(appctx, APPCTX_FL_OUTBLK_FULL);
@ -609,7 +608,7 @@ size_t appctx_htx_snd_buf(struct appctx *appctx, struct buffer *buf, size_t coun
goto end;
}
htx_xfer_blks(appctx_htx, buf_htx, count, HTX_BLK_UNUSED);
htx_xfer(appctx_htx, buf_htx, count, HTX_XFER_DEFAULT);
if (htx_is_empty(buf_htx)) {
appctx_htx->flags |= (buf_htx->flags & HTX_FL_EOM);
}
@ -659,7 +658,7 @@ size_t appctx_snd_buf(struct stconn *sc, struct buffer *buf, size_t count, unsig
goto end;
}
ret = appctx->applet->snd_buf(appctx, buf, count, flags);
ret = CALL_APPLET_WITH_RET(appctx->applet, snd_buf(appctx, buf, count, flags));
if (applet_fl_test(appctx, (APPCTX_FL_ERROR|APPCTX_FL_ERR_PENDING)))
se_report_term_evt(appctx->sedesc, se_tevt_type_snd_err);
@ -716,7 +715,7 @@ int appctx_fastfwd(struct stconn *sc, unsigned int count, unsigned int flags)
}
b_add(sdo->iobuf.buf, sdo->iobuf.offset);
ret = appctx->applet->fastfwd(appctx, sdo->iobuf.buf, len, 0);
ret = CALL_APPLET_WITH_RET(appctx->applet, fastfwd(appctx, sdo->iobuf.buf, len, 0));
b_sub(sdo->iobuf.buf, sdo->iobuf.offset);
sdo->iobuf.data += ret;
@ -853,7 +852,7 @@ struct task *task_run_applet(struct task *t, void *context, unsigned int state)
* already called)
*/
if (!se_fl_test(app->sedesc, SE_FL_SHR) || !se_fl_test(app->sedesc, SE_FL_SHW))
app->applet->fct(app);
CALL_APPLET_NO_RET(app->applet, fct(app));
TRACE_POINT(APPLET_EV_PROCESS, app);
@ -900,7 +899,7 @@ struct task *task_run_applet(struct task *t, void *context, unsigned int state)
stream_dump_and_crash(&app->obj_type, read_freq_ctr(&app->call_rate));
}
sc->app_ops->wake(sc);
sc_applet_process(sc);
channel_release_buffer(ic, &app->buffer_wait);
TRACE_LEAVE(APPLET_EV_PROCESS, app);
return t;
@ -954,7 +953,7 @@ struct task *task_process_applet(struct task *t, void *context, unsigned int sta
* already called)
*/
if (!applet_fl_test(app, APPCTX_FL_SHUTDOWN))
app->applet->fct(app);
CALL_APPLET_NO_RET(app->applet, fct(app));
TRACE_POINT(APPLET_EV_PROCESS, app);
@ -993,7 +992,7 @@ struct task *task_process_applet(struct task *t, void *context, unsigned int sta
stream_dump_and_crash(&app->obj_type, read_freq_ctr(&app->call_rate));
}
sc->app_ops->wake(sc);
sc_applet_process(sc);
appctx_release_buffers(app);
TRACE_LEAVE(APPLET_EV_PROCESS, app);
return t;

View File

@ -1396,7 +1396,7 @@ check_tgid:
tree = search_tree ? &srv->per_thr[i].safe_conns : &srv->per_thr[i].idle_conns;
conn = srv_lookup_conn(tree, hash);
while (conn) {
if (conn->mux->takeover && conn->mux->takeover(conn, i, 0) == 0) {
if (conn->mux->takeover && CALL_MUX_WITH_RET(conn->mux, takeover(conn, i, 0)) == 0) {
conn_delete_from_tree(conn, i);
_HA_ATOMIC_INC(&activity[tid].fd_takeover);
found = 1;
@ -1498,7 +1498,7 @@ takeover_random_idle_conn(struct ceb_root **root, int curtid)
conn = ceb64_item_first(root, hash_node.node, hash_node.key, struct connection);
while (conn) {
if (conn->mux->takeover && conn->mux->takeover(conn, curtid, 1) == 0) {
if (conn->mux->takeover && CALL_MUX_WITH_RET(conn->mux, takeover(conn, curtid, 1)) == 0) {
conn_delete_from_tree(conn, curtid);
return conn;
}
@ -1555,7 +1555,7 @@ kill_random_idle_conn(struct server *srv)
*/
_HA_ATOMIC_INC(&srv->curr_used_conns);
}
conn->mux->destroy(conn->ctx);
CALL_MUX_NO_RET(conn->mux, destroy(conn->ctx));
return 1;
}
return 0;
@ -1765,7 +1765,7 @@ int be_reuse_connection(int64_t hash, struct session *sess,
}
if (avail >= 1) {
if (srv_conn->mux->attach(srv_conn, sc->sedesc, sess) == -1) {
if (CALL_MUX_WITH_RET(srv_conn->mux, attach(srv_conn, sc->sedesc, sess)) == -1) {
if (sc_reset_endp(sc) < 0)
goto err;
sc_ep_clr(sc, ~SE_FL_DETACHED);
@ -1879,7 +1879,7 @@ int connect_server(struct stream *s)
* It will in turn call srv_release_conn through
* conn_free which also uses it.
*/
tokill_conn->mux->destroy(tokill_conn->ctx);
CALL_MUX_NO_RET(tokill_conn->mux, destroy(tokill_conn->ctx));
}
else {
HA_SPIN_UNLOCK(IDLE_CONNS_LOCK, &idle_conns[tid].idle_conns_lock);
@ -2204,7 +2204,7 @@ int connect_server(struct stream *s)
*/
if (may_start_mux_now) {
const struct mux_ops *alt_mux =
likely(!(s->flags & SF_WEBSOCKET)) ? NULL : srv_get_ws_proto(srv);
likely(!(s->flags & SF_WEBSOCKET) || !srv) ? NULL : srv_get_ws_proto(srv);
if (conn_install_mux_be(srv_conn, s->scb, s->sess, alt_mux) < 0) {
conn_full_close(srv_conn);
return SF_ERR_INTERNAL;

View File

@ -626,7 +626,7 @@ cache_store_check(struct proxy *px, struct flt_conf *fconf)
return 1;
}
}
else if (f->id == http_comp_flt_id)
else if (f->id == http_comp_req_flt_id || f->id == http_comp_res_flt_id)
comp = 1;
else if (f->id == fcgi_flt_id)
continue;

View File

@ -89,12 +89,23 @@ int cfg_parse_global(const char *file, int linenum, char **args, int kwm)
global.tune.options |= GTUNE_BUSY_POLLING;
}
else if (strcmp(args[0], "set-dumpable") == 0) { /* "no set-dumpable" or "set-dumpable" */
if (alertif_too_many_args(0, file, linenum, args, &err_code))
if (alertif_too_many_args(1, file, linenum, args, &err_code))
goto out;
if (kwm == KWM_NO)
if (kwm == KWM_NO) {
global.tune.options &= ~GTUNE_SET_DUMPABLE;
else
global.tune.options |= GTUNE_SET_DUMPABLE;
goto out;
}
if (!*args[1] || strcmp(args[1], "on") == 0)
global.tune.options |= GTUNE_SET_DUMPABLE;
else if (strcmp(args[1], "libs") == 0)
global.tune.options |= GTUNE_SET_DUMPABLE | GTUNE_COLLECT_LIBS;
else if (strcmp(args[1], "off") == 0)
global.tune.options &= ~GTUNE_SET_DUMPABLE;
else {
ha_alert("parsing [%s:%d] : '%s' only supports 'on', 'libs' and 'off' as an argument, found '%s'.\n", file, linenum, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
}
else if (strcmp(args[0], "h2-workaround-bogus-websocket-clients") == 0) { /* "no h2-workaround-bogus-websocket-clients" or "h2-workaround-bogus-websocket-clients" */
if (alertif_too_many_args(0, file, linenum, args, &err_code))

View File

@ -1358,14 +1358,15 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
goto out;
}
err_code |= warnif_misplaced_http_req(curproxy, file, linenum, args[0], NULL);
if (warnif_misplaced_http_req(curproxy, file, linenum, args[0], NULL))
err_code |= ERR_WARN;
if (curproxy->cap & PR_CAP_FE)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_req_rules, &rule->list);
@ -1400,7 +1401,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_res_rules, &rule->list);
@ -1434,7 +1435,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_after_res_rules, &rule->list);
@ -1491,14 +1492,15 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
LIST_APPEND(&curproxy->redirect_rules, &rule->list);
err_code |= warnif_misplaced_redirect(curproxy, file, linenum, args[0], NULL);
if (warnif_misplaced_redirect(curproxy, file, linenum, args[0], NULL))
err_code |= ERR_WARN;
if (curproxy->cap & PR_CAP_FE)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
}
else if (strcmp(args[0], "use_backend") == 0) {
@ -1528,7 +1530,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
}
else if (*args[2]) {
@ -1591,7 +1593,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1646,7 +1648,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
* where force-persist is applied.
*/
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_REQ_CNT, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1814,7 +1816,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_STO_RUL, &errmsg);
else
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1872,7 +1874,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1952,7 +1954,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s'.\n", file, linenum, errmsg);
LIST_APPEND(&curproxy->uri_auth->http_req_rules, &rule->list);
@ -2200,6 +2202,42 @@ stats_error_parsing:
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
else if (strcmp(args[1], "use-small-buffers") == 0) {
unsigned int flags = PR_O2_USE_SBUF_ALL;
if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL)) {
err_code |= ERR_WARN;
goto out;
}
if (*(args[2])) {
int cur_arg;
flags = 0;
for (cur_arg = 2; *(args[cur_arg]); cur_arg++) {
if (strcmp(args[cur_arg], "queue") == 0)
flags |= PR_O2_USE_SBUF_QUEUE;
else if (strcmp(args[cur_arg], "l7-retries") == 0)
flags |= PR_O2_USE_SBUF_L7_RETRY;
else if (strcmp(args[cur_arg], "check") == 0)
flags |= PR_O2_USE_SBUF_CHECK;
else {
ha_alert("parsing [%s:%d] : invalid parameter '%s'. option '%s' expects 'queue', 'l7-retries' or 'check' value.\n",
file, linenum, args[cur_arg], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
}
}
if (kwm == KWM_STD) {
curproxy->options2 &= ~PR_O2_USE_SBUF_ALL;
curproxy->options2 |= flags;
}
else if (kwm == KWM_NO) {
curproxy->options2 &= ~flags;
}
goto out;
}
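The hunk above accumulates per-feature flags from optional sub-keywords, defaulting to "all consumers" when none is given, then applies them according to the keyword mode (standard mode sets, "no" mode clears). A minimal sketch of that parsing pattern, with illustrative flag names that are not HAProxy's real `PR_O2_*` constants:

```c
#include <assert.h>
#include <string.h>

/* Illustrative flag values mirroring the PR_O2_USE_SBUF_* pattern */
#define SBUF_QUEUE    0x1
#define SBUF_L7_RETRY 0x2
#define SBUF_CHECK    0x4
#define SBUF_ALL      (SBUF_QUEUE | SBUF_L7_RETRY | SBUF_CHECK)

/* Parse optional sub-keywords into a flag mask. With no argument the
 * option applies everywhere (SBUF_ALL). Returns -1 on an unknown
 * keyword, mimicking the fatal-error branch in the hunk above.
 */
static int parse_sbuf_flags(char **args, int nargs)
{
	int flags = SBUF_ALL;
	int i;

	if (nargs > 0) {
		flags = 0;
		for (i = 0; i < nargs; i++) {
			if (strcmp(args[i], "queue") == 0)
				flags |= SBUF_QUEUE;
			else if (strcmp(args[i], "l7-retries") == 0)
				flags |= SBUF_L7_RETRY;
			else if (strcmp(args[i], "check") == 0)
				flags |= SBUF_CHECK;
			else
				return -1; /* unknown parameter */
		}
	}
	return flags;
}
```

The two-step apply (clear the whole mask, then OR in the parsed bits for `KWM_STD`; only clear the parsed bits for `KWM_NO`) is what lets repeated option lines override each other cleanly.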
if (kwm != KWM_STD) {
ha_alert("parsing [%s:%d]: negation/default is not supported for option '%s'.\n",
@ -2557,7 +2595,8 @@ stats_error_parsing:
goto out;
}
err_code |= warnif_misplaced_monitor(curproxy, file, linenum, args[0], args[1]);
if (warnif_misplaced_monitor(curproxy, file, linenum, args[0], args[1]))
err_code |= ERR_WARN;
if ((cond = build_acl_cond(file, linenum, &curproxy->acl, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
ha_alert("parsing [%s:%d] : error detected while parsing a '%s %s' condition : %s.\n",
file, linenum, args[0], args[1], errmsg);
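The repeated fix in this file maps the boolean return of the `warnif_*` helpers onto the `ERR_WARN` bit instead of OR-ing the raw return value into `err_code`. A minimal sketch of why the old form was wrong (the flag values here are illustrative, not HAProxy's real `ERR_*` constants):

```c
#include <assert.h>

/* Illustrative error flags, not HAProxy's actual values */
#define ERR_RETRYABLE 0x01
#define ERR_WARN      0x10

/* warnif-style helper: returns 1 when a warning was emitted, 0 otherwise */
static int warnif_something(int misplaced)
{
	return misplaced ? 1 : 0;
}

/* Correct accumulation: translate the boolean into the ERR_WARN bit.
 * The buggy form was "err_code |= warnif_something(...)", which ORs in
 * the value 1 (ERR_RETRYABLE) instead of ERR_WARN.
 */
static int accumulate_warn(int err_code, int warned)
{
	if (warned)
		err_code |= ERR_WARN;
	return err_code;
}
```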

View File

@ -63,6 +63,7 @@
#include <haproxy/global.h>
#include <haproxy/http_ana.h>
#include <haproxy/http_rules.h>
#include <haproxy/http_htx.h>
#include <haproxy/lb_chash.h>
#include <haproxy/lb_fas.h>
#include <haproxy/lb_fwlc.h>
@ -2318,6 +2319,18 @@ int check_config_validity()
"Please fix either value to remove this warning.\n",
global.tune.bufsize_large, global.tune.bufsize);
global.tune.bufsize_large = 0;
err_code |= ERR_WARN;
}
}
if (global.tune.bufsize_small > 0) {
if (global.tune.bufsize_small == global.tune.bufsize)
global.tune.bufsize_small = 0;
else if (global.tune.bufsize_small > global.tune.bufsize) {
ha_warning("invalid small buffer size %d bytes which is greater than the default bufsize %d bytes.\n",
global.tune.bufsize_small, global.tune.bufsize);
global.tune.bufsize_small = 0;
err_code |= ERR_WARN;
}
}
@ -2377,6 +2390,9 @@ int check_config_validity()
cfgerr += check_action_rules(&defpx->http_req_rules, defpx, &err_code);
cfgerr += check_action_rules(&defpx->http_res_rules, defpx, &err_code);
cfgerr += check_action_rules(&defpx->http_after_res_rules, defpx, &err_code);
#ifdef USE_QUIC
cfgerr += check_action_rules(&defpx->quic_init_rules, defpx, &err_code);
#endif
err = NULL;
i = smp_resolve_args(defpx, &err);
@ -2389,6 +2405,8 @@ int check_config_validity()
else {
cfgerr += acl_find_targets(defpx);
}
err_code |= proxy_check_http_errors(defpx);
}
/* starting to initialize the main proxies list */

View File

@ -1045,13 +1045,12 @@ int httpchk_build_status_header(struct server *s, struct buffer *buf)
/**************************************************************************/
/***************** Health-checks based on connections *********************/
/**************************************************************************/
/* This function is used only for server health-checks. It handles connection
* status updates including errors. If necessary, it wakes the check task up.
* It returns 0 on normal cases, <0 if at least one close() has happened on the
* connection (eg: reconnect). It relies on tcpcheck_main().
/* This function handles connection status updates including errors. If
* necessary, it wakes the check task up.
*/
int wake_srv_chk(struct stconn *sc)
struct task *srv_chk_io_cb(struct task *t, void *ctx, unsigned int state)
{
struct stconn *sc = ctx;
struct connection *conn;
struct check *check = __sc_check(sc);
int ret = 0;
@ -1098,15 +1097,6 @@ int wake_srv_chk(struct stconn *sc)
end:
TRACE_LEAVE(CHK_EV_HCHK_WAKE, check);
return ret;
}
/* This function checks if any I/O is wanted, and if so, attempts to do so */
struct task *srv_chk_io_cb(struct task *t, void *ctx, unsigned int state)
{
struct stconn *sc = ctx;
wake_srv_chk(sc);
return t;
}
@ -1525,13 +1515,15 @@ int check_buf_available(void *target)
/*
* Allocate a buffer. If it fails, it adds the check in buffer wait queue.
*/
struct buffer *check_get_buf(struct check *check, struct buffer *bptr)
struct buffer *check_get_buf(struct check *check, struct buffer *bptr, unsigned int small_buffer)
{
struct buffer *buf = NULL;
if (likely(!LIST_INLIST(&check->buf_wait.list)) &&
unlikely((buf = b_alloc(bptr, DB_CHANNEL)) == NULL)) {
b_queue(DB_CHANNEL, &check->buf_wait, check, check_buf_available);
if (small_buffer == 0 || (buf = b_alloc_small(bptr)) == NULL) {
if (likely(!LIST_INLIST(&check->buf_wait.list)) &&
unlikely((buf = b_alloc(bptr, DB_CHANNEL)) == NULL)) {
b_queue(DB_CHANNEL, &check->buf_wait, check, check_buf_available);
}
}
return buf;
}
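The rewritten `check_get_buf()` above tries the small-buffer pool first when the check opted in, and falls back to the default-size allocation (queueing on the wait list on failure) otherwise. A minimal sketch of that two-tier allocation pattern, with hypothetical helper names standing in for `b_alloc_small()`/`b_alloc()`:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical small-pool allocator: hands out buffers while the pool
 * has capacity, then returns NULL to force the fallback path.
 */
static char *alloc_small(int *small_avail)
{
	if (*small_avail <= 0)
		return NULL;
	(*small_avail)--;
	return malloc(1024);
}

/* Prefer a small buffer when the caller opted in; otherwise, or when
 * the small pool is exhausted, fall back to the default-sized pool.
 */
static char *get_buf(int want_small, int *small_avail)
{
	char *buf = NULL;

	if (want_small)
		buf = alloc_small(small_avail);
	if (!buf)
		buf = malloc(16384); /* default-size fallback */
	return buf;
}
```

Note how the real code only queues on the wait list in the fallback branch: running out of small buffers is not a reason to stall the check, since a regular buffer still works.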
@ -1543,8 +1535,11 @@ struct buffer *check_get_buf(struct check *check, struct buffer *bptr)
void check_release_buf(struct check *check, struct buffer *bptr)
{
if (bptr->size) {
int defbuf = b_is_default(bptr);
b_free(bptr);
offer_buffers(check->buf_wait.target, 1);
if (defbuf)
offer_buffers(check->buf_wait.target, 1);
}
}
@ -1664,7 +1659,6 @@ int start_check_task(struct check *check, int mininter,
*/
static int start_checks()
{
struct proxy *px;
struct server *s;
char *errmsg = NULL;
@ -1691,6 +1685,10 @@ static int start_checks()
*/
for (px = proxies_list; px; px = px->next) {
for (s = px->srv; s; s = s->next) {
if ((px->options2 & PR_O2_USE_SBUF_CHECK) &&
(s->check.tcpcheck_rules->flags & TCPCHK_RULES_MAY_USE_SBUF))
s->check.state |= CHK_ST_USE_SMALL_BUFF;
if (s->check.state & CHK_ST_CONFIGURED) {
nbcheck++;
if ((srv_getinter(&s->check) >= SRV_CHK_INTER_THRES) &&
@ -1815,7 +1813,15 @@ int init_srv_check(struct server *srv)
* specified.
*/
if (!srv->check.port && !is_addr(&srv->check.addr)) {
if (!srv->check.use_ssl && srv->use_ssl != -1)
/*
* If any setting is set for the check, then we can't
* assume we'll use the same XPRT as the server: the
* server may be QUIC, but we want a TCP check.
*/
if (!srv->check.use_ssl && srv->use_ssl != -1 &&
!srv->check.via_socks4 && !srv->check.send_proxy &&
(!srv->check.alpn_len || (srv->check.alpn_len == srv->ssl_ctx.alpn_len && !strncmp(srv->check.alpn_str, srv->ssl_ctx.alpn_str, srv->check.alpn_len))) &&
(!srv->check.mux_proto || srv->check.mux_proto != srv->mux_proto))
srv->check.xprt = srv->xprt;
else if (srv->check.use_ssl == 1)
srv->check.xprt = xprt_get(XPRT_SSL);
@ -2066,6 +2072,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
char **errmsg)
{
struct sockaddr_storage *sk;
struct protocol *proto;
int port1, port2, err_code = 0;
@ -2074,7 +2081,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
goto error;
}
sk = str2sa_range(args[*cur_arg+1], NULL, &port1, &port2, NULL, NULL, NULL, errmsg, NULL, NULL, NULL,
sk = str2sa_range(args[*cur_arg+1], NULL, &port1, &port2, NULL, &proto, NULL, errmsg, NULL, NULL, NULL,
PA_O_RESOLVE | PA_O_PORT_OK | PA_O_STREAM | PA_O_CONNECT);
if (!sk) {
memprintf(errmsg, "'%s' : %s", args[*cur_arg], *errmsg);
@ -2082,6 +2089,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
}
srv->check.addr = *sk;
srv->check.proto = proto;
/* if agentaddr was never set, we can use addr */
if (!(srv->flags & SRV_F_AGENTADDR))
srv->agent.addr = *sk;
@ -2111,7 +2119,11 @@ static int srv_parse_agent_addr(char **args, int *cur_arg, struct proxy *curpx,
goto error;
}
set_srv_agent_addr(srv, &sk);
/* Agent currently only uses TCP */
if (sk.ss_family == AF_INET)
srv->agent.proto = &proto_tcpv4;
else
srv->agent.proto = &proto_tcpv6;
out:
return err_code;
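The agent-addr hunk above pins the agent's protocol to TCP, choosing between the v4 and v6 variants purely by address family. A minimal sketch of that selection, with string placeholders standing in for HAProxy's `struct protocol` instances:

```c
#include <assert.h>
#include <sys/socket.h>

/* Placeholders for &proto_tcpv4 / &proto_tcpv6 */
static const char *proto_tcpv4 = "tcpv4";
static const char *proto_tcpv6 = "tcpv6";

/* The agent currently only speaks TCP, so the protocol follows the
 * address family of the configured agent address.
 */
static const char *agent_proto_for_family(int family)
{
	return (family == AF_INET) ? proto_tcpv4 : proto_tcpv6;
}
```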

View File

@ -53,6 +53,22 @@ struct pool_head *pool_head_large_trash __read_mostly = NULL;
/* this is used to drain data, and as a temporary large buffer */
THREAD_LOCAL struct buffer trash_large = { };
/* small trash chunks used for various conversions */
static THREAD_LOCAL struct buffer *small_trash_chunk;
static THREAD_LOCAL struct buffer small_trash_chunk1;
static THREAD_LOCAL struct buffer small_trash_chunk2;
/* small trash buffers used for various conversions */
static int small_trash_size __read_mostly = 0;
static THREAD_LOCAL char *small_trash_buf1 = NULL;
static THREAD_LOCAL char *small_trash_buf2 = NULL;
/* the trash pool for reentrant allocations */
struct pool_head *pool_head_small_trash __read_mostly = NULL;
/* this is used to drain data, and as a temporary small buffer */
THREAD_LOCAL struct buffer trash_small = { };
/*
* Returns a pre-allocated and initialized trash chunk that can be used for any
* type of conversion. Two chunks and their respective buffers are alternatively
@ -103,14 +119,40 @@ struct buffer *get_large_trash_chunk(void)
return large_trash_chunk;
}
/* Similar to get_trash_chunk() but returns a pre-allocated small chunk
* instead. Because small buffers are not enabled by default, this function may
* return NULL.
*/
struct buffer *get_small_trash_chunk(void)
{
char *small_trash_buf;
if (!small_trash_size)
return NULL;
if (small_trash_chunk == &small_trash_chunk1) {
small_trash_chunk = &small_trash_chunk2;
small_trash_buf = small_trash_buf2;
}
else {
small_trash_chunk = &small_trash_chunk1;
small_trash_buf = small_trash_buf1;
}
*small_trash_buf = 0;
chunk_init(small_trash_chunk, small_trash_buf, small_trash_size);
return small_trash_chunk;
}
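`get_small_trash_chunk()` above follows the same two-buffer alternation as `get_trash_chunk()`: a caller can hold the previously returned chunk while a nested call obtains a fresh one. A minimal, self-contained sketch of that pattern (thread-local storage omitted for brevity):

```c
#include <assert.h>
#include <stddef.h>

#define TRASH_SZ 64

/* Simplified chunk mirroring how struct buffer is used here */
struct chunk {
	char *area;
	size_t size;
};

static struct chunk *cur; /* last chunk handed out */
static struct chunk c1, c2;
static char buf1[TRASH_SZ], buf2[TRASH_SZ];

/* Alternate between two static chunks so two consecutive calls never
 * return the same storage; the third call recycles the first chunk.
 */
static struct chunk *get_chunk(void)
{
	if (cur == &c1) {
		cur = &c2;
		cur->area = buf2;
	} else {
		cur = &c1;
		cur->area = buf1;
	}
	cur->area[0] = 0; /* reset content, like *small_trash_buf = 0 */
	cur->size = TRASH_SZ;
	return cur;
}
```

This is why the real code keeps `small_trash_chunk1`/`small_trash_chunk2` and their two backing buffers: one level of reentrancy is safe, deeper nesting silently clobbers the oldest chunk.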
/* Returns a trash chunk according to the requested size. This function may
* fail if the requested size is too big or if large chunks are not
* configured.
*/
struct buffer *get_trash_chunk_sz(size_t size)
{
if (likely(size <= trash_size))
return get_trash_chunk();
if (likely(size > small_trash_size && size <= trash_size))
return get_trash_chunk();
else if (small_trash_size && size <= small_trash_size)
return get_small_trash_chunk();
else if (large_trash_size && size <= large_trash_size)
return get_large_trash_chunk();
else
@ -122,17 +164,20 @@ struct buffer *get_trash_chunk_sz(size_t size)
*/
struct buffer *get_larger_trash_chunk(struct buffer *chk)
{
struct buffer *chunk;
struct buffer *chunk = NULL;
if (!chk)
return get_trash_chunk();
if (!chk || chk->size == small_trash_size) {
/* no chunk or a small one, use a regular buffer */
chunk = get_trash_chunk();
}
else if (large_trash_size && chk->size <= large_trash_size) {
/* a regular buffer, use a large buffer if possible */
chunk = get_large_trash_chunk();
}
/* No large buffers or current chunk is already a large trash chunk */
if (!large_trash_size || chk->size == large_trash_size)
return NULL;
if (chk && chunk)
b_xfer(chunk, chk, b_data(chk));
chunk = get_large_trash_chunk();
b_xfer(chunk, chk, b_data(chk));
return chunk;
}
@ -166,9 +211,29 @@ static int alloc_large_trash_buffers(int bufsize)
return trash_large.area && large_trash_buf1 && large_trash_buf2;
}
/* allocates the small trash buffers if necessary. Returns 0 in case of
* failure. Unlike alloc_trash_buffers(), this function is not expected to be
* called multiple times. Small buffers are not used during configuration parsing.
*/
static int alloc_small_trash_buffers(int bufsize)
{
small_trash_size = bufsize;
if (!small_trash_size)
return 1;
BUG_ON(trash_small.area && small_trash_buf1 && small_trash_buf2);
chunk_init(&trash_small, my_realloc2(trash_small.area, bufsize), bufsize);
small_trash_buf1 = (char *)my_realloc2(small_trash_buf1, bufsize);
small_trash_buf2 = (char *)my_realloc2(small_trash_buf2, bufsize);
return trash_small.area && small_trash_buf1 && small_trash_buf2;
}
static int alloc_trash_buffers_per_thread()
{
return alloc_trash_buffers(global.tune.bufsize) && alloc_large_trash_buffers(global.tune.bufsize_large);
return (alloc_trash_buffers(global.tune.bufsize) &&
alloc_large_trash_buffers(global.tune.bufsize_large) &&
alloc_small_trash_buffers(global.tune.bufsize_small));
}
static void free_trash_buffers_per_thread()
@ -180,6 +245,10 @@ static void free_trash_buffers_per_thread()
chunk_destroy(&trash_large);
ha_free(&large_trash_buf2);
ha_free(&large_trash_buf1);
chunk_destroy(&trash_small);
ha_free(&small_trash_buf2);
ha_free(&small_trash_buf1);
}
/* Initialize the trash buffers. It returns 0 if an error occurred. */
@ -207,6 +276,14 @@ int init_trash_buffers(int first)
if (!pool_head_large_trash)
return 0;
}
if (!first && global.tune.bufsize_small) {
pool_head_small_trash = create_pool("small_trash",
sizeof(struct buffer) + global.tune.bufsize_small,
MEM_F_EXACT);
if (!pool_head_small_trash)
return 0;
}
return 1;
}

View File

@ -400,6 +400,21 @@ struct cli_kw* cli_find_kw_exact(char **args)
void cli_register_kw(struct cli_kw_list *kw_list)
{
struct cli_kw *kw;
for (kw = &kw_list->kw[0]; kw->str_kw[0]; kw++) {
/* store declaration file/line if known */
if (kw->exec_ctx.type)
continue;
if (caller_initcall) {
kw->exec_ctx.type = TH_EX_CTX_INITCALL;
kw->exec_ctx.initcall = caller_initcall;
} else {
kw->exec_ctx.type = TH_EX_CTX_CLI_KWL;
kw->exec_ctx.cli_kwl = kw_list;
}
}
LIST_APPEND(&cli_keywords.list, &kw_list->list);
}
@ -849,6 +864,7 @@ static int cli_process_cmdline(struct appctx *appctx)
else if (kw->level == ACCESS_EXPERIMENTAL)
mark_tainted(TAINTED_CLI_EXPERIMENTAL_MODE);
appctx->cli_ctx.kw = kw;
appctx->cli_ctx.io_handler = kw->io_handler;
appctx->cli_ctx.io_release = kw->io_release;
@ -868,6 +884,7 @@ static int cli_process_cmdline(struct appctx *appctx)
goto end;
fail:
appctx->cli_ctx.kw = NULL;
appctx->cli_ctx.io_handler = NULL;
appctx->cli_ctx.io_release = NULL;
@ -1209,17 +1226,19 @@ void cli_io_handler(struct appctx *appctx)
case CLI_ST_CALLBACK: /* use custom pointer */
if (appctx->cli_ctx.io_handler)
if (appctx->cli_ctx.io_handler(appctx)) {
if (EXEC_CTX_WITH_RET(appctx->cli_ctx.kw->exec_ctx, appctx->cli_ctx.io_handler(appctx))) {
appctx->t->expire = TICK_ETERNITY;
appctx->st0 = CLI_ST_PROMPT;
if (appctx->cli_ctx.io_release) {
appctx->cli_ctx.io_release(appctx);
EXEC_CTX_NO_RET(appctx->cli_ctx.kw->exec_ctx, appctx->cli_ctx.io_release(appctx));
appctx->cli_ctx.io_release = NULL;
appctx->cli_ctx.kw = NULL;
/* some release handlers might have
* pending output to print.
*/
continue;
}
appctx->cli_ctx.kw = NULL;
}
break;
default: /* abnormal state */
@ -1325,8 +1344,9 @@ static void cli_release_handler(struct appctx *appctx)
free_trash_chunk(appctx->cli_ctx.cmdline);
if (appctx->cli_ctx.io_release) {
appctx->cli_ctx.io_release(appctx);
EXEC_CTX_NO_RET(appctx->cli_ctx.kw->exec_ctx, appctx->cli_ctx.io_release(appctx));
appctx->cli_ctx.io_release = NULL;
appctx->cli_ctx.kw = NULL;
}
else if (appctx->st0 == CLI_ST_PRINT_DYN || appctx->st0 == CLI_ST_PRINT_DYNERR) {
struct cli_print_ctx *ctx = applet_reserve_svcctx(appctx, sizeof(*ctx));
@ -2614,8 +2634,9 @@ static int cli_parse_echo(char **args, char *payload, struct appctx *appctx, voi
static int _send_status(char **args, char *payload, struct appctx *appctx, void *private)
{
struct listener *mproxy_li;
struct mworker_proc *proc;
struct stconn *sc = appctx_sc(appctx);
struct listener *mproxy_li = strm_li(__sc_strm(sc));
char *msg = "READY\n";
int pid;
@ -2625,12 +2646,18 @@ static int _send_status(char **args, char *payload, struct appctx *appctx, void
pid = atoi(args[2]);
list_for_each_entry(proc, &proc_list, list) {
/* update status of the new worker */
if (proc->pid == pid) {
proc->options &= ~PROC_O_INIT;
mproxy_li = fdtab[proc->ipc_fd[0]].owner;
stop_listener(mproxy_li, 0, 0, 0);
/* the proxy used to receive the _send_status must be
* the one corresponding to the PID we received as
* argument */
BUG_ON(proc->ipc_fd[0] < 0);
BUG_ON(mproxy_li != fdtab[proc->ipc_fd[0]].owner);
}
/* send TERM to workers, which have exceeded max_reloads counter */
if (max_reloads != -1) {
if ((proc->options & PROC_O_TYPE_WORKER) &&
@ -2642,6 +2669,15 @@ static int _send_status(char **args, char *payload, struct appctx *appctx, void
}
}
/* the sockpair between the master and the worker is
* used temporarily as a listener to receive
* _send_status. Once it is received, we don't want to
* use this FD as a listener anymore, but only as a
* server, to allow only connections from the master to
* the worker for the master CLI */
BUG_ON(mproxy_li == NULL);
stop_listener(mproxy_li, 0, 0, 0);
/* At this point we are sure, that newly forked worker is started,
* so we can write our PID in a pidfile, if provided. Master doesn't
* perform chroot.
@ -2840,7 +2876,7 @@ static int pcli_prefix_to_pid(const char *prefix)
if (*errtol != '\0')
return -1;
list_for_each_entry(child, &proc_list, list) {
if (!(child->options & PROC_O_TYPE_WORKER))
if (!(child->options & PROC_O_TYPE_WORKER) || (child->options & PROC_O_INIT))
continue;
if (child->pid == proc_pid){
return child->pid;
@ -2863,7 +2899,7 @@ static int pcli_prefix_to_pid(const char *prefix)
/* choose the right process, the current one is the one with the
least number of reloads */
list_for_each_entry(child, &proc_list, list) {
if (!(child->options & PROC_O_TYPE_WORKER))
if (!(child->options & PROC_O_TYPE_WORKER) || (child->options & PROC_O_INIT))
continue;
if (child->reloads == 0)
return child->pid;

View File

@ -266,6 +266,7 @@ void clock_update_global_date()
{
ullong old_now_ns;
uint old_now_ms;
int now_ns_changed = 0;
/* now that we have bounded the local time, let's check if it's
* realistic regarding the global date, which only moves forward,
@ -275,8 +276,10 @@ void clock_update_global_date()
old_now_ms = _HA_ATOMIC_LOAD(global_now_ms);
do {
if (now_ns < old_now_ns)
if (now_ns < old_now_ns) {
now_ns_changed = 1;
now_ns = old_now_ns;
}
/* now <now_ns> is expected to be the most accurate date,
* equal to <global_now_ns> or newer. Updating the global
@ -295,8 +298,11 @@ void clock_update_global_date()
if (unlikely(now_ms == TICK_ETERNITY))
now_ms++;
if (!((now_ns ^ old_now_ns) & ~0x7FFFULL))
if (!((now_ns ^ old_now_ns) & ~0x7FFFULL)) {
if (now_ns_changed)
goto end;
return;
}
/* let's try to update the global_now_ns (both in nanoseconds
* and ms forms) or loop again.
@ -305,6 +311,7 @@ void clock_update_global_date()
(now_ms != old_now_ms && !_HA_ATOMIC_CAS(global_now_ms, &old_now_ms, now_ms))) &&
__ha_cpu_relax());
end:
if (!th_ctx->curr_mono_time) {
/* Only update the offset when monotonic time is not available.
* <now_ns> and <now_ms> are now updated to the last value of
@ -314,6 +321,16 @@ void clock_update_global_date()
*/
HA_ATOMIC_STORE(&now_offset, now_ns - tv_to_ns(&date));
}
else if (global_now_ns != &_global_now_ns) {
/*
* Or global_now_ns is shared with other processes: now_offset then
* needs to self-adjust so that it stays consistent with the offset
* used by the other processes, since we may have learned a new
* global_now_ns that was paired with a different offset than ours
*/
HA_ATOMIC_STORE(&now_offset, now_ns - th_ctx->curr_mono_time);
}
}
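The loop at the top of `clock_update_global_date()` enforces that the global date only moves forward: a locally sampled time older than the published one is clamped up, and a newer one is published via compare-and-swap. A minimal sketch of that invariant, using C11 atomics rather than HAProxy's `_HA_ATOMIC_*` wrappers:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Published global date in nanoseconds; it only ever moves forward. */
static _Atomic uint64_t global_now_ns;

/* Reconcile a locally sampled time with the global one: a stale sample
 * is clamped up to the published value, a newer sample is published.
 * Returns the reconciled time.
 */
static uint64_t update_global(uint64_t now_ns)
{
	uint64_t old = atomic_load(&global_now_ns);

	do {
		if (now_ns < old)
			now_ns = old; /* never step backwards */
	} while (now_ns != old &&
	         !atomic_compare_exchange_weak(&global_now_ns, &old, now_ns));
	return now_ns;
}
```

The extra `now_ns_changed` flag added by the hunk tracks exactly the clamping branch: when the local sample had to be raised, the offset self-adjustment at `end:` must still run even if the cheap "same 32µs window" shortcut would otherwise return early.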
/* must be called once at boot to initialize some global variables */

View File

@ -141,7 +141,7 @@ int conn_create_mux(struct connection *conn, int *closed_connection)
fail:
/* let the upper layer know the connection failed */
if (sc) {
sc->app_ops->wake(sc);
sc_conn_process(sc);
}
else if (conn_reverse_in_preconnect(conn)) {
struct listener *l = conn_active_reverse_listener(conn);
@ -232,14 +232,14 @@ int conn_notify_mux(struct connection *conn, int old_flags, int forced_wake)
HA_SPIN_UNLOCK(IDLE_CONNS_LOCK, &idle_conns[tid].idle_conns_lock);
}
ret = conn->mux->wake(conn);
ret = CALL_MUX_WITH_RET(conn->mux, wake(conn));
if (ret < 0)
goto done;
if (conn_in_list) {
if (srv && (srv->cur_admin & SRV_ADMF_MAINT)) {
/* Do not store an idle conn if server in maintenance. */
conn->mux->destroy(conn->ctx);
CALL_MUX_NO_RET(conn->mux, destroy(conn->ctx));
ret = -1;
goto done;
}
@ -247,7 +247,7 @@ int conn_notify_mux(struct connection *conn, int old_flags, int forced_wake)
if (conn->flags & CO_FL_SESS_IDLE) {
if (!session_reinsert_idle_conn(conn->owner, conn)) {
/* session add conn failure */
conn->mux->destroy(conn->ctx);
CALL_MUX_NO_RET(conn->mux, destroy(conn->ctx));
ret = -1;
}
}
@ -291,7 +291,7 @@ int conn_upgrade_mux_fe(struct connection *conn, void *ctx, struct buffer *buf,
old_mux_ctx = conn->ctx;
conn->mux = new_mux;
conn->ctx = ctx;
if (new_mux->init(conn, bind_conf->frontend, conn->owner, buf) == -1) {
if (CALL_MUX_WITH_RET(new_mux, init(conn, bind_conf->frontend, conn->owner, buf)) == -1) {
/* The mux upgrade failed, so restore the old mux */
conn->ctx = old_mux_ctx;
conn->mux = old_mux;
@ -300,7 +300,7 @@ int conn_upgrade_mux_fe(struct connection *conn, void *ctx, struct buffer *buf,
/* The mux was upgraded, destroy the old one */
*buf = BUF_NULL;
old_mux->destroy(old_mux_ctx);
CALL_MUX_NO_RET(old_mux, destroy(old_mux_ctx));
return 0;
}
@ -658,7 +658,7 @@ void conn_free(struct connection *conn)
void conn_release(struct connection *conn)
{
if (conn->mux) {
conn->mux->destroy(conn->ctx);
CALL_MUX_NO_RET(conn->mux, destroy(conn->ctx));
}
else {
conn_stop_tracking(conn);
@ -3034,7 +3034,7 @@ static struct task *mux_stopping_process(struct task *t, void *ctx, unsigned int
list_for_each_entry_safe(conn, back, &mux_stopping_data[tid].list, stopping_list) {
if (conn->mux && conn->mux->wake)
conn->mux->wake(conn);
CALL_MUX_NO_RET(conn->mux, wake(conn));
}
return t;

View File

@ -367,6 +367,9 @@ void ha_thread_dump_one(struct buffer *buf, int is_caller)
(now - th_ctx->sched_call_date));
}
/* report the execution context when known */
chunk_append_thread_ctx(buf, &th_ctx->exec_ctx, " exec_ctx: ", "\n");
/* this is the end of what we can dump from outside the current thread */
chunk_appendf(buf, " curr_task=");

View File

@ -24,6 +24,7 @@
struct pool_head *pool_head_buffer __read_mostly;
struct pool_head *pool_head_large_buffer __read_mostly = NULL;
struct pool_head *pool_head_small_buffer __read_mostly;
/* perform minimal initializations, report 0 in case of error, 1 if OK. */
int init_buffer()
@ -43,6 +44,12 @@ int init_buffer()
return 0;
}
if (global.tune.bufsize_small) {
pool_head_small_buffer = create_aligned_pool("small_buffer", global.tune.bufsize_small, 64, MEM_F_SHARED|MEM_F_EXACT);
if (!pool_head_small_buffer)
return 0;
}
/* make sure any change to the queues assignment isn't overlooked */
BUG_ON(DB_PERMANENT - DB_UNLIKELY - 1 != DYNBUF_NBQ);
BUG_ON(DB_MUX_RX_Q < DB_SE_RX_Q || DB_MUX_RX_Q >= DYNBUF_NBQ);

View File

@ -136,6 +136,10 @@ static int cli_parse_show_ech(char **args, char *payload,
{
struct show_ech_ctx *ctx = applet_reserve_svcctx(appctx, sizeof(*ctx));
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
/* no parameter, shows only file list */
if (*args[3]) {
SSL_CTX *sctx = NULL;
@ -297,6 +301,9 @@ static int cli_parse_add_ech(char **args, char *payload, struct appctx *appctx,
OSSL_ECHSTORE *es = NULL;
BIO *es_in = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3] || !payload)
return cli_err(appctx, "syntax: add ssl ech <name> <PEM file content>");
if (cli_find_ech_specific_ctx(args[3], &sctx) != 1)
@ -324,6 +331,9 @@ static int cli_parse_set_ech(char **args, char *payload, struct appctx *appctx,
OSSL_ECHSTORE *es = NULL;
BIO *es_in = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3] || !payload)
return cli_err(appctx, "syntax: set ssl ech <name> <PEM file content>");
if (cli_find_ech_specific_ctx(args[3], &sctx) != 1)
@ -351,6 +361,9 @@ static int cli_parse_del_ech(char **args, char *payload, struct appctx *appctx,
char success_message[ECH_SUCCESS_MSG_MAX];
OSSL_ECHSTORE *es = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3])
return cli_err(appctx, "syntax: del ssl ech <name>");
if (*args[4])

View File

@ -221,7 +221,8 @@ static int fcgi_flt_check(struct proxy *px, struct flt_conf *fconf)
}
list_for_each_entry(f, &px->filter_configs, list) {
if (f->id == http_comp_flt_id || f->id == cache_store_flt_id)
if (f->id == http_comp_req_flt_id || f->id == http_comp_res_flt_id ||
f->id == cache_store_flt_id)
continue;
else if ((f->id == fconf->id) && f->conf != fcgi_conf) {
ha_alert("proxy '%s' : only one fcgi-app supported per backend.\n",

View File

@ -450,7 +450,8 @@ flt_stream_add_filter(struct stream *s, struct flt_conf *fconf, unsigned int fla
f->flags |= flags;
if (FLT_OPS(f)->attach) {
int ret = FLT_OPS(f)->attach(s, f);
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, f->config);
int ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(f)->attach(s, f));
if (ret <= 0) {
pool_free(pool_head_filter, f);
return ret;
@ -506,8 +507,10 @@ flt_stream_release(struct stream *s, int only_backend)
list_for_each_entry_safe(filter, back, &strm_flt(s)->filters, list) {
if (!only_backend || (filter->flags & FLT_FL_IS_BACKEND_FILTER)) {
filter->calls++;
if (FLT_OPS(filter)->detach)
FLT_OPS(filter)->detach(s, filter);
if (FLT_OPS(filter)->detach) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
EXEC_CTX_NO_RET(exec_ctx, FLT_OPS(filter)->detach(s, filter));
}
LIST_DELETE(&filter->list);
LIST_DELETE(&filter->req_list);
LIST_DELETE(&filter->res_list);
@ -530,8 +533,10 @@ flt_stream_start(struct stream *s)
list_for_each_entry(filter, &strm_flt(s)->filters, list) {
if (FLT_OPS(filter)->stream_start) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
filter->calls++;
if (FLT_OPS(filter)->stream_start(s, filter) < 0) {
if (EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->stream_start(s, filter) < 0)) {
s->last_entity.type = STRM_ENTITY_FILTER;
s->last_entity.ptr = filter;
return -1;
@ -556,8 +561,10 @@ flt_stream_stop(struct stream *s)
list_for_each_entry(filter, &strm_flt(s)->filters, list) {
if (FLT_OPS(filter)->stream_stop) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
filter->calls++;
FLT_OPS(filter)->stream_stop(s, filter);
EXEC_CTX_NO_RET(exec_ctx, FLT_OPS(filter)->stream_stop(s, filter));
}
}
}
@ -573,8 +580,10 @@ flt_stream_check_timeouts(struct stream *s)
list_for_each_entry(filter, &strm_flt(s)->filters, list) {
if (FLT_OPS(filter)->check_timeouts) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
filter->calls++;
FLT_OPS(filter)->check_timeouts(s, filter);
EXEC_CTX_NO_RET(exec_ctx, FLT_OPS(filter)->check_timeouts(s, filter));
}
}
}
@ -601,8 +610,10 @@ flt_set_stream_backend(struct stream *s, struct proxy *be)
end:
list_for_each_entry(filter, &strm_flt(s)->filters, list) {
if (FLT_OPS(filter)->stream_set_backend) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
filter->calls++;
if (FLT_OPS(filter)->stream_set_backend(s, filter, be) < 0) {
if (EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->stream_set_backend(s, filter, be) < 0)) {
s->last_entity.type = STRM_ENTITY_FILTER;
s->last_entity.ptr = filter;
return -1;
@ -650,9 +661,11 @@ flt_http_end(struct stream *s, struct http_msg *msg)
continue;
if (FLT_OPS(filter)->http_end) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->http_end(s, filter, msg);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->http_end(s, filter, msg));
if (ret <= 0) {
resume_filter_list_break(s, msg->chn, filter, ret);
goto end;
@ -681,9 +694,11 @@ flt_http_reset(struct stream *s, struct http_msg *msg)
for (filter = flt_list_start(s, msg->chn); filter;
filter = flt_list_next(s, msg->chn, filter)) {
if (FLT_OPS(filter)->http_reset) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
FLT_OPS(filter)->http_reset(s, filter, msg);
EXEC_CTX_NO_RET(exec_ctx, FLT_OPS(filter)->http_reset(s, filter, msg));
}
}
DBG_TRACE_LEAVE(STRM_EV_STRM_ANA|STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
@ -701,9 +716,11 @@ flt_http_reply(struct stream *s, short status, const struct buffer *msg)
DBG_TRACE_ENTER(STRM_EV_STRM_ANA|STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s, s->txn, msg);
list_for_each_entry(filter, &strm_flt(s)->filters, list) {
if (FLT_OPS(filter)->http_reply) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
FLT_OPS(filter)->http_reply(s, filter, status, msg);
EXEC_CTX_NO_RET(exec_ctx, FLT_OPS(filter)->http_reply(s, filter, status, msg));
}
}
DBG_TRACE_LEAVE(STRM_EV_STRM_ANA|STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
@ -732,6 +749,7 @@ flt_http_payload(struct stream *s, struct http_msg *msg, unsigned int len)
DBG_TRACE_ENTER(STRM_EV_STRM_ANA|STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s, s->txn, msg);
for (filter = flt_list_start(s, msg->chn); filter;
filter = flt_list_next(s, msg->chn, filter)) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
unsigned long long *flt_off = &FLT_OFF(filter, msg->chn);
unsigned int offset = *flt_off - *strm_off;
@ -745,7 +763,7 @@ flt_http_payload(struct stream *s, struct http_msg *msg, unsigned int len)
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->http_payload(s, filter, msg, out + offset, data - offset);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->http_payload(s, filter, msg, out + offset, data - offset));
if (ret < 0) {
resume_filter_list_break(s, msg->chn, filter, ret);
goto end;
@ -815,9 +833,11 @@ flt_start_analyze(struct stream *s, struct channel *chn, unsigned int an_bit)
FLT_OFF(filter, chn) = 0;
if (FLT_OPS(filter)->channel_start_analyze) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->channel_start_analyze(s, filter, chn);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->channel_start_analyze(s, filter, chn));
if (ret <= 0) {
resume_filter_list_break(s, chn, filter, ret);
goto end;
@ -852,9 +872,11 @@ flt_pre_analyze(struct stream *s, struct channel *chn, unsigned int an_bit)
for (filter = resume_filter_list_start(s, chn); filter;
filter = resume_filter_list_next(s, chn, filter)) {
if (FLT_OPS(filter)->channel_pre_analyze && (filter->pre_analyzers & an_bit)) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->channel_pre_analyze(s, filter, chn, an_bit);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->channel_pre_analyze(s, filter, chn, an_bit));
if (ret <= 0) {
resume_filter_list_break(s, chn, filter, ret);
goto check_result;
@ -889,9 +911,11 @@ flt_post_analyze(struct stream *s, struct channel *chn, unsigned int an_bit)
for (filter = flt_list_start(s, chn); filter;
filter = flt_list_next(s, chn, filter)) {
if (FLT_OPS(filter)->channel_post_analyze && (filter->post_analyzers & an_bit)) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->channel_post_analyze(s, filter, chn, an_bit);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->channel_post_analyze(s, filter, chn, an_bit));
if (ret < 0) {
resume_filter_list_break(s, chn, filter, ret);
break;
@ -922,9 +946,11 @@ flt_analyze_http_headers(struct stream *s, struct channel *chn, unsigned int an_
for (filter = resume_filter_list_start(s, chn); filter;
filter = resume_filter_list_next(s, chn, filter)) {
if (FLT_OPS(filter)->http_headers) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_HTTP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->http_headers(s, filter, msg);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->http_headers(s, filter, msg));
if (ret <= 0) {
resume_filter_list_break(s, chn, filter, ret);
goto check_result;
@ -973,9 +999,11 @@ flt_end_analyze(struct stream *s, struct channel *chn, unsigned int an_bit)
unregister_data_filter(s, chn, filter);
if (FLT_OPS(filter)->channel_end_analyze) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->channel_end_analyze(s, filter, chn);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->channel_end_analyze(s, filter, chn));
if (ret <= 0) {
resume_filter_list_break(s, chn, filter, ret);
goto end;
@ -1042,6 +1070,7 @@ flt_tcp_payload(struct stream *s, struct channel *chn, unsigned int len)
DBG_TRACE_ENTER(STRM_EV_TCP_ANA|STRM_EV_FLT_ANA, s);
for (filter = flt_list_start(s, chn); filter;
filter = flt_list_next(s, chn, filter)) {
struct thread_exec_ctx exec_ctx = EXEC_CTX_MAKE(TH_EX_CTX_FLT, filter->config);
unsigned long long *flt_off = &FLT_OFF(filter, chn);
unsigned int offset = *flt_off - *strm_off;
@ -1055,7 +1084,7 @@ flt_tcp_payload(struct stream *s, struct channel *chn, unsigned int len)
DBG_TRACE_DEVEL(FLT_ID(filter), STRM_EV_TCP_ANA|STRM_EV_FLT_ANA, s);
filter->calls++;
ret = FLT_OPS(filter)->tcp_payload(s, filter, chn, out + offset, data - offset);
ret = EXEC_CTX_WITH_RET(exec_ctx, FLT_OPS(filter)->tcp_payload(s, filter, chn, out + offset, data - offset));
if (ret < 0) {
resume_filter_list_break(s, chn, filter, ret);
goto end;


@ -27,17 +27,15 @@
#define COMP_STATE_PROCESSING 0x01
const char *http_comp_flt_id = "compression filter";
const char *http_comp_req_flt_id = "comp-req filter";
const char *http_comp_res_flt_id = "comp-res filter";
struct flt_ops comp_ops;
struct flt_ops comp_req_ops;
struct flt_ops comp_res_ops;
struct comp_state {
/*
* For both comp_ctx and comp_algo, COMP_DIR_REQ is the index
* for requests, and COMP_DIR_RES for responses
*/
struct comp_ctx *comp_ctx[2]; /* compression context */
struct comp_algo *comp_algo[2]; /* compression algorithm if not NULL */
struct comp_ctx *comp_ctx; /* compression context */
struct comp_algo *comp_algo; /* compression algorithm if not NULL */
unsigned int flags; /* COMP_STATE_* */
};
@ -76,10 +74,8 @@ comp_strm_init(struct stream *s, struct filter *filter)
if (st == NULL)
return -1;
st->comp_algo[COMP_DIR_REQ] = NULL;
st->comp_algo[COMP_DIR_RES] = NULL;
st->comp_ctx[COMP_DIR_REQ] = NULL;
st->comp_ctx[COMP_DIR_RES] = NULL;
st->comp_algo = NULL;
st->comp_ctx = NULL;
st->flags = 0;
filter->ctx = st;
@ -100,10 +96,8 @@ comp_strm_deinit(struct stream *s, struct filter *filter)
return;
/* release any possible compression context */
if (st->comp_algo[COMP_DIR_REQ])
st->comp_algo[COMP_DIR_REQ]->end(&st->comp_ctx[COMP_DIR_REQ]);
if (st->comp_algo[COMP_DIR_RES])
st->comp_algo[COMP_DIR_RES]->end(&st->comp_ctx[COMP_DIR_RES]);
if (st->comp_algo)
st->comp_algo->end(&st->comp_ctx);
pool_free(pool_head_comp_state, st);
filter->ctx = NULL;
}
@ -172,9 +166,9 @@ comp_prepare_compress_request(struct comp_state *st, struct stream *s, struct ht
if (txn->meth == HTTP_METH_HEAD)
return;
if (s->be->comp && s->be->comp->algo_req != NULL)
st->comp_algo[COMP_DIR_REQ] = s->be->comp->algo_req;
st->comp_algo = s->be->comp->algo_req;
else if (strm_fe(s)->comp && strm_fe(s)->comp->algo_req != NULL)
st->comp_algo[COMP_DIR_REQ] = strm_fe(s)->comp->algo_req;
st->comp_algo = strm_fe(s)->comp->algo_req;
else
goto fail; /* no algo selected: nothing to do */
@ -189,43 +183,34 @@ comp_prepare_compress_request(struct comp_state *st, struct stream *s, struct ht
goto fail;
/* initialize compression */
if (st->comp_algo[COMP_DIR_REQ]->init(&st->comp_ctx[COMP_DIR_REQ], global.tune.comp_maxlevel) < 0)
if (st->comp_algo->init(&st->comp_ctx, global.tune.comp_maxlevel) < 0)
goto fail;
return;
fail:
st->comp_algo[COMP_DIR_REQ] = NULL;
st->comp_algo = NULL;
}
static int
comp_http_headers(struct stream *s, struct filter *filter, struct http_msg *msg)
comp_req_http_headers(struct stream *s, struct filter *filter, struct http_msg *msg)
{
struct comp_state *st = filter->ctx;
int comp_flags = 0;
if (!strm_fe(s)->comp && !s->be->comp)
goto end;
if (strm_fe(s)->comp)
comp_flags |= strm_fe(s)->comp->flags;
if (s->be->comp)
comp_flags |= s->be->comp->flags;
if (!(comp_flags & COMP_FL_DIR_REQ))
goto end;
if (!(msg->chn->flags & CF_ISRESP)) {
if (comp_flags & COMP_FL_DIR_REQ) {
comp_prepare_compress_request(st, s, msg);
if (st->comp_algo[COMP_DIR_REQ]) {
if (!set_compression_header(st, s, msg))
goto end;
register_data_filter(s, msg->chn, filter);
st->flags |= COMP_STATE_PROCESSING;
}
}
if (comp_flags & COMP_FL_DIR_RES)
select_compression_request_header(st, s, msg);
} else if (comp_flags & COMP_FL_DIR_RES) {
/* Response headers have already been checked in
* comp_http_post_analyze callback. */
if (st->comp_algo[COMP_DIR_RES]) {
comp_prepare_compress_request(st, s, msg);
if (st->comp_algo) {
if (!set_compression_header(st, s, msg))
goto end;
register_data_filter(s, msg->chn, filter);
@ -238,8 +223,43 @@ comp_http_headers(struct stream *s, struct filter *filter, struct http_msg *msg)
}
static int
comp_http_post_analyze(struct stream *s, struct filter *filter,
struct channel *chn, unsigned an_bit)
comp_res_http_headers(struct stream *s, struct filter *filter, struct http_msg *msg)
{
struct comp_state *st = filter->ctx;
int comp_flags = 0;
if (!strm_fe(s)->comp && !s->be->comp)
goto end;
if (strm_fe(s)->comp)
comp_flags |= strm_fe(s)->comp->flags;
if (s->be->comp)
comp_flags |= s->be->comp->flags;
if (!(comp_flags & COMP_FL_DIR_RES))
goto end;
if (!(msg->chn->flags & CF_ISRESP))
select_compression_request_header(st, s, msg);
else {
/* Response headers have already been checked in
* comp_res_http_post_analyze callback. */
if (st->comp_algo) {
if (!set_compression_header(st, s, msg))
goto end;
register_data_filter(s, msg->chn, filter);
st->flags |= COMP_STATE_PROCESSING;
}
}
end:
return 1;
}
static int
comp_res_http_post_analyze(struct stream *s, struct filter *filter,
struct channel *chn, unsigned an_bit)
{
struct http_txn *txn = s->txn;
struct http_msg *msg = &txn->rsp;
@ -259,19 +279,13 @@ comp_http_post_analyze(struct stream *s, struct filter *filter,
static int
comp_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
unsigned int offset, unsigned int len)
unsigned int offset, unsigned int len, int dir)
{
struct comp_state *st = filter->ctx;
struct htx *htx = htxbuf(&msg->chn->buf);
struct htx_ret htxret = htx_find_offset(htx, offset);
struct htx_blk *blk, *next;
int ret, consumed = 0, to_forward = 0, last = 0;
int dir;
if (msg->chn->flags & CF_ISRESP)
dir = COMP_DIR_RES;
else
dir = COMP_DIR_REQ;
blk = htxret.blk;
offset = htxret.ret;
@ -361,7 +375,7 @@ comp_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
if (to_forward != consumed)
flt_update_offsets(filter, msg->chn, to_forward - consumed);
if (st->comp_ctx[dir] && st->comp_ctx[dir]->cur_lvl > 0) {
if (st->comp_ctx && st->comp_ctx->cur_lvl > 0) {
update_freq_ctr(&global.comp_bps_in, consumed);
if (s->sess->fe_tgcounters) {
_HA_ATOMIC_ADD(&s->sess->fe_tgcounters->comp_in[dir], consumed);
@ -384,14 +398,33 @@ comp_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
return -1;
}
static int
comp_req_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
unsigned int offset, unsigned int len)
{
if (msg->chn->flags & CF_ISRESP)
return 0;
return comp_http_payload(s, filter, msg, offset, len, COMP_DIR_REQ);
}
static int
comp_http_end(struct stream *s, struct filter *filter,
struct http_msg *msg)
comp_res_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
unsigned int offset, unsigned int len)
{
if (!(msg->chn->flags & CF_ISRESP))
return 0;
return comp_http_payload(s, filter, msg, offset, len, COMP_DIR_RES);
}
static int
comp_res_http_end(struct stream *s, struct filter *filter,
struct http_msg *msg)
{
struct comp_state *st = filter->ctx;
if (!(msg->chn->flags & CF_ISRESP) || !st || !st->comp_algo[COMP_DIR_RES])
if (!(msg->chn->flags & CF_ISRESP) || !st || !st->comp_algo)
goto end;
if (strm_fe(s)->mode == PR_MODE_HTTP && s->sess->fe_tgcounters)
@ -411,18 +444,12 @@ set_compression_header(struct comp_state *st, struct stream *s, struct http_msg
struct htx_sl *sl;
struct http_hdr_ctx ctx, last_vary;
struct comp_algo *comp_algo;
int comp_index;
if (msg->chn->flags & CF_ISRESP)
comp_index = COMP_DIR_RES;
else
comp_index = COMP_DIR_REQ;
sl = http_get_stline(htx);
if (!sl)
goto error;
comp_algo = st->comp_algo[comp_index];
comp_algo = st->comp_algo;
/* add "Transfer-Encoding: chunked" header */
if (!(msg->flags & HTTP_MSGF_TE_CHNK)) {
@ -496,8 +523,8 @@ set_compression_header(struct comp_state *st, struct stream *s, struct http_msg
return 1;
error:
st->comp_algo[comp_index]->end(&st->comp_ctx[comp_index]);
st->comp_algo[comp_index] = NULL;
st->comp_algo->end(&st->comp_ctx);
st->comp_algo = NULL;
return 0;
}
@ -525,7 +552,7 @@ select_compression_request_header(struct comp_state *st, struct stream *s, struc
*(ctx.value.ptr + 30) < '6' ||
(*(ctx.value.ptr + 30) == '6' &&
(ctx.value.len < 54 || memcmp(ctx.value.ptr + 51, "SV1", 3) != 0)))) {
st->comp_algo[COMP_DIR_RES] = NULL;
st->comp_algo = NULL;
return 0;
}
@ -579,7 +606,7 @@ select_compression_request_header(struct comp_state *st, struct stream *s, struc
for (comp_algo = comp_algo_back; comp_algo; comp_algo = comp_algo->next) {
if (*(ctx.value.ptr) == '*' ||
word_match(ctx.value.ptr, toklen, comp_algo->ua_name, comp_algo->ua_name_len)) {
st->comp_algo[COMP_DIR_RES] = comp_algo;
st->comp_algo = comp_algo;
best_q = q;
break;
}
@ -588,7 +615,7 @@ select_compression_request_header(struct comp_state *st, struct stream *s, struc
}
/* remove all occurrences of the header when "compression offload" is set */
if (st->comp_algo[COMP_DIR_RES]) {
if (st->comp_algo) {
if ((s->be->comp && (s->be->comp->flags & COMP_FL_OFFLOAD)) ||
(strm_fe(s)->comp && (strm_fe(s)->comp->flags & COMP_FL_OFFLOAD))) {
http_remove_header(htx, &ctx);
@ -604,13 +631,13 @@ select_compression_request_header(struct comp_state *st, struct stream *s, struc
(strm_fe(s)->comp && (comp_algo_back = strm_fe(s)->comp->algos_res))) {
for (comp_algo = comp_algo_back; comp_algo; comp_algo = comp_algo->next) {
if (comp_algo->cfg_name_len == 8 && memcmp(comp_algo->cfg_name, "identity", 8) == 0) {
st->comp_algo[COMP_DIR_RES] = comp_algo;
st->comp_algo = comp_algo;
return 1;
}
}
}
st->comp_algo[COMP_DIR_RES] = NULL;
st->comp_algo = NULL;
return 0;
}
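select_compression_request_header() walks the client's Accept-Encoding tokens and keeps the configured algorithm with the best q-value, with "*" matching anything. A simplified model of that negotiation, using flat arrays instead of HAProxy's header parsing (all names here are hypothetical):

```c
#include <string.h>
#include <stddef.h>

struct enc_pref { const char *name; int q; }; /* q scaled by 1000 */

/* Return the supported encoding with the highest client q-value, or
 * NULL if none match ("*" matches any supported encoding). Simplified:
 * HAProxy additionally walks its configured algo list in order and
 * skips q=0 entries. */
static const char *pick_encoding(const struct enc_pref *prefs, size_t nprefs,
                                 const char *const *supported, size_t nsup)
{
        const char *best = NULL;
        int best_q = 0;

        for (size_t i = 0; i < nprefs; i++) {
                if (prefs[i].q <= best_q)
                        continue; /* cannot improve on current best */
                for (size_t j = 0; j < nsup; j++) {
                        if (strcmp(prefs[i].name, "*") == 0 ||
                            strcmp(prefs[i].name, supported[j]) == 0) {
                                best = supported[j];
                                best_q = prefs[i].q;
                                break;
                        }
                }
        }
        return best;
}
```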
@ -627,7 +654,7 @@ select_compression_response_header(struct comp_state *st, struct stream *s, stru
unsigned int comp_minsize = 0;
/* no common compression algorithm was found in request header */
if (st->comp_algo[COMP_DIR_RES] == NULL)
if (st->comp_algo == NULL)
goto fail;
/* compression already in progress */
@ -725,13 +752,13 @@ select_compression_response_header(struct comp_state *st, struct stream *s, stru
goto fail;
/* initialize compression */
if (st->comp_algo[COMP_DIR_RES]->init(&st->comp_ctx[COMP_DIR_RES], global.tune.comp_maxlevel) < 0)
if (st->comp_algo->init(&st->comp_ctx, global.tune.comp_maxlevel) < 0)
goto fail;
msg->flags |= HTTP_MSGF_COMPRESSING;
return 1;
fail:
st->comp_algo[COMP_DIR_RES] = NULL;
st->comp_algo = NULL;
return 0;
}
@ -754,7 +781,7 @@ htx_compression_buffer_add_data(struct comp_state *st, const char *data, size_t
struct buffer *out, int dir)
{
return st->comp_algo[dir]->add_data(st->comp_ctx[dir], data, len, out);
return st->comp_algo->add_data(st->comp_ctx, data, len, out);
}
static int
@ -762,26 +789,58 @@ htx_compression_buffer_end(struct comp_state *st, struct buffer *out, int end, i
{
if (end)
return st->comp_algo[dir]->finish(st->comp_ctx[dir], out);
return st->comp_algo->finish(st->comp_ctx, out);
else
return st->comp_algo[dir]->flush(st->comp_ctx[dir], out);
return st->comp_algo->flush(st->comp_ctx, out);
}
/***********************************************************************/
struct flt_ops comp_ops = {
struct flt_ops comp_req_ops = {
.init = comp_flt_init,
.attach = comp_strm_init,
.detach = comp_strm_deinit,
.channel_post_analyze = comp_http_post_analyze,
.http_headers = comp_http_headers,
.http_payload = comp_http_payload,
.http_end = comp_http_end,
.http_headers = comp_req_http_headers,
.http_payload = comp_req_http_payload,
};
struct flt_ops comp_res_ops = {
.init = comp_flt_init,
.attach = comp_strm_init,
.detach = comp_strm_deinit,
.channel_post_analyze = comp_res_http_post_analyze,
.http_headers = comp_res_http_headers,
.http_payload = comp_res_http_payload,
.http_end = comp_res_http_end,
};
/* returns compression options from <proxy> proxy or allocates them if
* needed
*
* When compression options are created, flags will be set to <defaults>
*
* Returns NULL in case of memory error
*/
static inline struct comp *proxy_get_comp(struct proxy *proxy, int defaults)
{
struct comp *comp;
if (proxy->comp == NULL) {
comp = calloc(1, sizeof(*comp));
if (unlikely(!comp))
return NULL;
comp->flags = defaults;
proxy->comp = comp;
}
return proxy->comp;
}
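proxy_get_comp() is a get-or-create accessor: the defaults are applied only when the struct is first allocated, so a later caller requesting different defaults gets the existing struct unchanged. A standalone sketch of the same pattern, with trimmed-down struct definitions:

```c
#include <stdlib.h>

/* Standalone sketch of the proxy_get_comp() get-or-create pattern:
 * allocate the options struct on first use, applying <defaults> only
 * then; subsequent callers receive the existing struct as-is.
 * Struct layouts are reduced to the bare minimum for illustration. */
struct comp { int flags; };
struct proxy { struct comp *comp; };

static struct comp *proxy_get_comp(struct proxy *px, int defaults)
{
        if (px->comp == NULL) {
                struct comp *c = calloc(1, sizeof(*c));

                if (!c)
                        return NULL;
                c->flags = defaults; /* defaults apply on creation only */
                px->comp = c;
        }
        return px->comp;
}
```

This is why parse_http_comp_req_flt() and parse_http_comp_res_flt() can both call proxy_get_comp(px, 0) and then OR in their own direction flag without clobbering each other.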
static int
parse_compression_options(char **args, int section, struct proxy *proxy,
const struct proxy *defpx, const char *file, int line,
@ -791,19 +850,13 @@ parse_compression_options(char **args, int section, struct proxy *proxy,
int ret = 0;
const char *res;
if (proxy->comp == NULL) {
comp = calloc(1, sizeof(*comp));
if (unlikely(!comp)) {
memprintf(err, "'%s': out of memory.", args[0]);
ret = -1;
goto end;
}
/* Always default to compress responses */
comp->flags = COMP_FL_DIR_RES;
proxy->comp = comp;
/* always default to compress responses */
comp = proxy_get_comp(proxy, COMP_FL_DIR_RES);
if (comp == NULL) {
memprintf(err, "'%s': out of memory.", args[0]);
ret = -1;
goto end;
}
else
comp = proxy->comp;
if (strcmp(args[1], "algo") == 0 || strcmp(args[1], "algo-res") == 0) {
struct comp_ctx *ctx;
@ -970,27 +1023,109 @@ parse_http_comp_flt(char **args, int *cur_arg, struct proxy *px,
struct flt_conf *fconf, char **err, void *private)
{
struct flt_conf *fc, *back;
struct flt_conf *fconf_res;
list_for_each_entry_safe(fc, back, &px->filter_configs, list) {
if (fc->id == http_comp_flt_id) {
if (fc->id == http_comp_req_flt_id || fc->id == http_comp_res_flt_id) {
memprintf(err, "%s: Proxy supports only one compression filter\n", px->id);
return -1;
}
}
fconf->id = http_comp_flt_id;
fconf->id = http_comp_req_flt_id;
fconf->conf = NULL;
fconf->ops = &comp_ops;
fconf->ops = &comp_req_ops;
/* The filter API prepared a single filter_conf struct, as it is meant
* to initialize exactly one fconf per keyword. But for backward
* compatibility, the "compression" keyword must keep its historical
* behavior of compressing both requests and responses, so we manually
* initialize the comp-res filter as well.
*/
fconf_res = calloc(1, sizeof(*fconf_res));
if (!fconf_res) {
memprintf(err, "'%s' : out of memory", args[0]);
return -1;
}
fconf_res->id = http_comp_res_flt_id;
fconf_res->conf = NULL;
fconf_res->ops = &comp_res_ops;
/* manually add the fconf_res to the list because filter API doesn't
* know about it
*/
LIST_APPEND(&px->filter_configs, &fconf_res->list);
(*cur_arg)++;
return 0;
}
static int
parse_http_comp_req_flt(char **args, int *cur_arg, struct proxy *px,
struct flt_conf *fconf, char **err, void *private)
{
struct flt_conf *fc, *back;
struct comp *comp;
list_for_each_entry_safe(fc, back, &px->filter_configs, list) {
if (fc->id == http_comp_req_flt_id) {
memprintf(err, "%s: Proxy supports only one comp-req filter\n", px->id);
return -1;
}
}
comp = proxy_get_comp(px, 0);
if (comp == NULL) {
memprintf(err, "memory failure\n");
return -1;
}
comp->flags |= COMP_FL_DIR_REQ;
fconf->id = http_comp_req_flt_id;
fconf->conf = NULL;
fconf->ops = &comp_req_ops;
(*cur_arg)++;
return 0;
}
static int
parse_http_comp_res_flt(char **args, int *cur_arg, struct proxy *px,
struct flt_conf *fconf, char **err, void *private)
{
struct flt_conf *fc, *back;
struct comp *comp;
list_for_each_entry_safe(fc, back, &px->filter_configs, list) {
if (fc->id == http_comp_res_flt_id) {
memprintf(err, "%s: Proxy supports only one comp-res filter\n", px->id);
return -1;
}
}
comp = proxy_get_comp(px, 0);
if (comp == NULL) {
memprintf(err, "memory failure\n");
return -1;
}
comp->flags |= COMP_FL_DIR_RES;
fconf->id = http_comp_res_flt_id;
fconf->conf = NULL;
fconf->ops = &comp_res_ops;
(*cur_arg)++;
return 0;
}
int
check_implicit_http_comp_flt(struct proxy *proxy)
{
struct flt_conf *fconf;
struct flt_conf *fconf_req = NULL;
struct flt_conf *fconf_res = NULL;
int explicit = 0;
int comp = 0;
int err = 0;
@ -999,7 +1134,7 @@ check_implicit_http_comp_flt(struct proxy *proxy)
goto end;
if (!LIST_ISEMPTY(&proxy->filter_configs)) {
list_for_each_entry(fconf, &proxy->filter_configs, list) {
if (fconf->id == http_comp_flt_id)
if (fconf->id == http_comp_req_flt_id || fconf->id == http_comp_res_flt_id)
comp = 1;
else if (fconf->id == cache_store_flt_id) {
if (comp) {
@ -1027,17 +1162,25 @@ check_implicit_http_comp_flt(struct proxy *proxy)
/* Implicit declaration of the compression filter is always the last
* one */
fconf = calloc(1, sizeof(*fconf));
if (!fconf) {
fconf_req = calloc(1, sizeof(*fconf));
fconf_res = calloc(1, sizeof(*fconf));
if (!fconf_req || !fconf_res) {
ha_alert("config: %s '%s': out of memory\n",
proxy_type_str(proxy), proxy->id);
ha_free(&fconf_req);
ha_free(&fconf_res);
err++;
goto end;
}
fconf->id = http_comp_flt_id;
fconf->conf = NULL;
fconf->ops = &comp_ops;
LIST_APPEND(&proxy->filter_configs, &fconf->list);
fconf_req->id = http_comp_req_flt_id;
fconf_req->conf = NULL;
fconf_req->ops = &comp_req_ops;
LIST_APPEND(&proxy->filter_configs, &fconf_req->list);
fconf_res->id = http_comp_res_flt_id;
fconf_res->conf = NULL;
fconf_res->ops = &comp_res_ops;
LIST_APPEND(&proxy->filter_configs, &fconf_res->list);
end:
return err;
}
@ -1072,7 +1215,7 @@ smp_fetch_res_comp_algo(const struct arg *args, struct sample *smp,
return 0;
list_for_each_entry(filter, &strm_flt(smp->strm)->filters, list) {
if (FLT_ID(filter) != http_comp_flt_id)
if (FLT_ID(filter) != http_comp_res_flt_id)
continue;
if (!(st = filter->ctx))
@ -1080,8 +1223,8 @@ smp_fetch_res_comp_algo(const struct arg *args, struct sample *smp,
smp->data.type = SMP_T_STR;
smp->flags = SMP_F_CONST;
smp->data.u.str.area = st->comp_algo[COMP_DIR_RES]->cfg_name;
smp->data.u.str.data = st->comp_algo[COMP_DIR_RES]->cfg_name_len;
smp->data.u.str.area = st->comp_algo->cfg_name;
smp->data.u.str.data = st->comp_algo->cfg_name_len;
return 1;
}
return 0;
@ -1099,6 +1242,8 @@ INITCALL1(STG_REGISTER, cfg_register_keywords, &cfg_kws);
/* Declare the filter parser for "compression" keyword */
static struct flt_kw_list filter_kws = { "COMP", { }, {
{ "compression", parse_http_comp_flt, NULL },
{ "comp-req", parse_http_comp_req_flt, NULL },
{ "comp-res", parse_http_comp_res_flt, NULL },
{ NULL, NULL, NULL },
}
};


@ -516,8 +516,11 @@ static void spoe_handle_appctx(struct appctx *appctx)
appctx->st0 = SPOE_APPCTX_ST_END;
applet_set_error(appctx);
}
else if (!spoe_handle_receiving_frame_appctx(appctx))
break;
else {
SPOE_APPCTX(appctx)->spoe_ctx->state = SPOE_CTX_ST_WAITING_ACK;
if (!spoe_handle_receiving_frame_appctx(appctx))
break;
}
goto switchstate;
case SPOE_APPCTX_ST_EXIT:
@ -1109,6 +1112,16 @@ static int spoe_process_event(struct stream *s, struct spoe_context *ctx,
agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code, ctx->stats.t_process,
agent->counters.nb_errors, agent->counters.nb_processed);
}
else if (ret == 0) {
if ((s->scf->flags & SC_FL_ERROR) ||
((s->scf->flags & (SC_FL_EOS|SC_FL_ABRT_DONE)) && proxy_abrt_close_def(s->be, 1))) {
ctx->status_code = SPOE_CTX_ERR_INTERRUPT;
spoe_stop_processing(agent, ctx);
spoe_handle_processing_error(s, agent, ctx, dir);
ret = 1;
}
}
return ret;
}


@ -319,6 +319,7 @@ static struct htx_sl *h2_prepare_htx_reqline(uint32_t fields, struct ist *phdr,
*/
int h2_make_htx_request(struct http_hdr *list, struct htx *htx, unsigned int *msgf, unsigned long long *body_len, int relaxed)
{
struct htx_blk *tailblk = htx_get_tail_blk(htx);
struct ist phdr_val[H2_PHDR_NUM_ENTRIES];
uint32_t fields; /* bit mask of H2_PHDR_FND_* */
uint32_t idx;
@ -533,6 +534,7 @@ int h2_make_htx_request(struct http_hdr *list, struct htx *htx, unsigned int *ms
return ret;
fail:
htx_truncate_blk(htx, tailblk);
return -1;
}
@ -637,6 +639,7 @@ static struct htx_sl *h2_prepare_htx_stsline(uint32_t fields, struct ist *phdr,
*/
int h2_make_htx_response(struct http_hdr *list, struct htx *htx, unsigned int *msgf, unsigned long long *body_len, char *upgrade_protocol)
{
struct htx_blk *tailblk = htx_get_tail_blk(htx);
struct ist phdr_val[H2_PHDR_NUM_ENTRIES];
uint32_t fields; /* bit mask of H2_PHDR_FND_* */
uint32_t idx;
@ -793,6 +796,7 @@ int h2_make_htx_response(struct http_hdr *list, struct htx *htx, unsigned int *m
return ret;
fail:
htx_truncate_blk(htx, tailblk);
return -1;
}
@ -812,6 +816,7 @@ int h2_make_htx_response(struct http_hdr *list, struct htx *htx, unsigned int *m
*/
int h2_make_htx_trailers(struct http_hdr *list, struct htx *htx)
{
struct htx_blk *tailblk = htx_get_tail_blk(htx);
const char *ctl;
struct ist v;
uint32_t idx;
@ -861,8 +866,8 @@ int h2_make_htx_trailers(struct http_hdr *list, struct htx *htx)
goto fail;
}
/* Check the number of blocks against "tune.http.maxhdr" value before adding EOT block */
if (htx_nbblks(htx) > global.tune.max_http_hdr)
/* Check the number of trailers against "tune.http.maxhdr" value before adding EOT block */
if (idx > global.tune.max_http_hdr)
goto fail;
if (!htx_add_endof(htx, HTX_BLK_EOT))
@ -871,5 +876,6 @@ int h2_make_htx_trailers(struct http_hdr *list, struct htx *htx)
return 1;
fail:
htx_truncate_blk(htx, tailblk);
return -1;
}
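The h2_make_htx_* fixes above record the HTX tail block on entry and truncate back to it on the fail path, so a partially built message is rolled back instead of leaking half-added blocks into the output. The same pattern, sketched against a toy structure standing in for HTX:

```c
#include <assert.h>

/* Sketch of the rollback pattern applied in h2_make_htx_request(),
 * h2_make_htx_response() and h2_make_htx_trailers(): note the tail
 * position before appending, and truncate back to it on failure so
 * partially-built output is discarded. A plain int array stands in
 * for HTX blocks here. */
struct htx_like { int blks[16]; int nb; };

static void truncate_to(struct htx_like *h, int tail)
{
        h->nb = tail;
}

static int append_all(struct htx_like *h, const int *in, int n)
{
        int tail = h->nb;            /* remember position on entry */

        for (int i = 0; i < n; i++) {
                if (h->nb >= 16 || in[i] < 0) { /* simulated failure */
                        truncate_to(h, tail);   /* roll back additions */
                        return -1;
                }
                h->blks[h->nb++] = in[i];
        }
        return 0;
}
```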


@ -641,7 +641,7 @@ static ssize_t h3_req_headers_to_htx(struct qcs *qcs, const struct buffer *buf,
/* TODO support trailer parsing in this function */
/* TODO support buffer wrapping */
BUG_ON(b_head(buf) + len >= b_wrap(buf));
BUG_ON(b_head(buf) + len > b_wrap(buf));
ret = qpack_decode_fs((const unsigned char *)b_head(buf), len, tmp,
list, sizeof(list) / sizeof(list[0]));
if (ret < 0) {
@ -1111,6 +1111,7 @@ static ssize_t h3_resp_headers_to_htx(struct qcs *qcs, const struct buffer *buf,
struct buffer *tmp = get_trash_chunk();
struct htx *htx = NULL;
struct htx_sl *sl;
struct htx_blk *tailblk = NULL;
struct http_hdr list[global.tune.max_http_hdr * 2];
unsigned int flags = HTX_SL_F_NONE;
struct ist status = IST_NULL;
@ -1141,7 +1142,7 @@ static ssize_t h3_resp_headers_to_htx(struct qcs *qcs, const struct buffer *buf,
TRACE_ENTER(H3_EV_RX_FRAME|H3_EV_RX_HDR, qcs->qcc->conn, qcs);
/* TODO support buffer wrapping */
BUG_ON(b_head(buf) + len >= b_wrap(buf));
BUG_ON(b_head(buf) + len > b_wrap(buf));
ret = qpack_decode_fs((const unsigned char *)b_head(buf), len, tmp,
list, sizeof(list) / sizeof(list[0]));
if (ret < 0) {
@ -1161,7 +1162,7 @@ static ssize_t h3_resp_headers_to_htx(struct qcs *qcs, const struct buffer *buf,
}
BUG_ON(!b_size(appbuf)); /* TODO */
htx = htx_from_buf(appbuf);
tailblk = htx_get_tail_blk(htx);
/* Only handle one HEADERS frame at a time. Thus if HTX buffer is too
* small, it happens solely from a single frame and the only option is
* to close the stream.
@ -1351,8 +1352,11 @@ static ssize_t h3_resp_headers_to_htx(struct qcs *qcs, const struct buffer *buf,
}
out:
if (appbuf)
if (appbuf) {
if ((ssize_t)len < 0)
htx_truncate_blk(htx, tailblk);
htx_to_buf(htx, appbuf);
}
TRACE_LEAVE(H3_EV_RX_FRAME|H3_EV_RX_HDR, qcs->qcc->conn, qcs);
return len;
@ -1376,6 +1380,7 @@ static ssize_t h3_trailers_to_htx(struct qcs *qcs, const struct buffer *buf,
struct buffer *appbuf = NULL;
struct htx *htx = NULL;
struct htx_sl *sl;
struct htx_blk *tailblk = NULL;
struct http_hdr list[global.tune.max_http_hdr * 2];
int hdr_idx, ret;
const char *ctl;
@ -1386,7 +1391,7 @@ static ssize_t h3_trailers_to_htx(struct qcs *qcs, const struct buffer *buf,
TRACE_ENTER(H3_EV_RX_FRAME|H3_EV_RX_HDR, qcs->qcc->conn, qcs);
/* TODO support buffer wrapping */
BUG_ON(b_head(buf) + len >= b_wrap(buf));
BUG_ON(b_head(buf) + len > b_wrap(buf));
ret = qpack_decode_fs((const unsigned char *)b_head(buf), len, tmp,
list, sizeof(list) / sizeof(list[0]));
if (ret < 0) {
@ -1406,6 +1411,7 @@ static ssize_t h3_trailers_to_htx(struct qcs *qcs, const struct buffer *buf,
}
BUG_ON(!b_size(appbuf)); /* TODO */
htx = htx_from_buf(appbuf);
tailblk = htx_get_tail_blk(htx);
if (!h3s->data_len) {
/* Notify that no body is present. This can only happen if
@ -1505,7 +1511,7 @@ static ssize_t h3_trailers_to_htx(struct qcs *qcs, const struct buffer *buf,
}
/* Check the number of blocks against "tune.http.maxhdr" value before adding EOT block */
if (htx_nbblks(htx) > global.tune.max_http_hdr) {
if (hdr_idx > global.tune.max_http_hdr) {
len = -1;
goto out;
}
@ -1521,8 +1527,11 @@ static ssize_t h3_trailers_to_htx(struct qcs *qcs, const struct buffer *buf,
out:
/* HTX may be non NULL if error before previous htx_to_buf(). */
if (appbuf)
if (appbuf) {
if ((ssize_t)len < 0)
htx_truncate_blk(htx, tailblk);
htx_to_buf(htx, appbuf);
}
TRACE_LEAVE(H3_EV_RX_FRAME|H3_EV_RX_HDR, qcs->qcc->conn, qcs);
return len;
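The BUG_ON() relaxation from `>=` to `>` in the three functions above reflects that data ending exactly at the buffer's wrapping pointer is still contiguous; only data crossing the wrapping point actually wraps. A minimal model of that boundary test (an illustrative struct, not HAProxy's struct buffer):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the off-by-one the BUG_ON fix addresses: in a circular
 * buffer, <len> bytes starting at head are contiguous as long as they
 * do not cross the wrapping pointer; ending exactly on it is fine. */
struct ring { char area[8]; size_t head; };

static const char *wrap_ptr(const struct ring *r)
{
        return r->area + sizeof(r->area);
}

static const char *head_ptr(const struct ring *r)
{
        return r->area + r->head;
}

/* 1 if <len> bytes from head are contiguous (no wrap needed) */
static int is_contiguous(const struct ring *r, size_t len)
{
        /* '>' is correct: head + len == wrap is the last valid
         * contiguous position; '>=' would wrongly reject it. */
        return !(head_ptr(r) + len > wrap_ptr(r));
}
```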
@ -1745,6 +1754,14 @@ static ssize_t h3_rcv_buf(struct qcs *qcs, struct buffer *b, int fin)
if (!b_data(b) && fin && quic_stream_is_bidi(qcs->id)) {
TRACE_PROTO("received FIN without data", H3_EV_RX_FRAME, qcs->qcc->conn, qcs);
/* FIN received, ensure body length is conform to any content-length header. */
if ((h3s->flags & H3_SF_HAVE_CLEN) && h3_check_body_size(qcs, 1)) {
qcc_abort_stream_read(qcs);
qcc_reset_stream(qcs, h3s->err);
goto done;
}
if (qcs_http_handle_standalone_fin(qcs)) {
TRACE_ERROR("cannot set EOM", H3_EV_RX_FRAME, qcs->qcc->conn, qcs);
qcc_set_error(qcs->qcc, H3_ERR_INTERNAL_ERROR, 1);
@ -1801,10 +1818,11 @@ static ssize_t h3_rcv_buf(struct qcs *qcs, struct buffer *b, int fin)
flen = h3s->demux_frame_len;
ftype = h3s->demux_frame_type;
/* Do not demux incomplete frames except H3 DATA which can be
* fragmented in multiple HTX blocks.
/* The HTTP/3 parser can currently only parse fully received
* and aligned frames. The only exception is DATA frames, as
* they can frequently be larger than bufsize.
*/
if (flen > b_data(b) && ftype != H3_FT_DATA) {
if (ftype != H3_FT_DATA) {
/* Reject frames bigger than bufsize.
*
* TODO HEADERS should in complement be limited with H3
@ -1817,7 +1835,20 @@ static ssize_t h3_rcv_buf(struct qcs *qcs, struct buffer *b, int fin)
qcc_report_glitch(qcs->qcc, 1);
goto err;
}
break;
/* TODO extend parser to support the realignment of a frame. */
if (b_head(b) + b_data(b) > b_wrap(b)) {
TRACE_ERROR("cannot parse unaligned data frame", H3_EV_RX_FRAME, qcs->qcc->conn, qcs);
qcc_set_error(qcs->qcc, H3_ERR_EXCESSIVE_LOAD, 1);
qcc_report_glitch(qcs->qcc, 1);
goto err;
}
/* Only parse full HTTP/3 frames. */
if (flen > b_data(b)) {
TRACE_PROTO("pause parsing on incomplete payload", H3_EV_RX_FRAME, qcs->qcc->conn, qcs);
break;
}
}
last_stream_frame = (fin && flen == b_data(b));
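The demux logic above gates non-DATA frames three ways: frames that can never fit in the buffer are rejected as excessive load, payloads stored non-contiguously are rejected until the parser supports realignment, and merely incomplete frames pause parsing until more data arrives. A condensed sketch of that decision, with assumed parameter names:

```c
#include <assert.h>
#include <stddef.h>

enum h3_gate { H3_PARSE, H3_PAUSE, H3_REJECT };

/* Condensed model of the non-DATA frame gating added in h3_rcv_buf():
 * <flen> is the announced frame length, <bdata>/<bsize> the buffered
 * amount and buffer capacity, <wraps> whether the payload wraps, and
 * <is_data> whether this is an H3 DATA frame. Names are illustrative. */
static enum h3_gate gate_frame(size_t flen, size_t bdata, size_t bsize,
                               int wraps, int is_data)
{
        if (is_data)
                return H3_PARSE;  /* DATA may be consumed in fragments */
        if (flen > bsize)
                return H3_REJECT; /* can never fit: excessive load */
        if (wraps)
                return H3_REJECT; /* parser needs an aligned payload */
        if (flen > bdata)
                return H3_PAUSE;  /* wait until the full frame arrived */
        return H3_PARSE;
}
```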


@ -271,6 +271,10 @@ unsigned int tainted = 0;
unsigned int experimental_directives_allowed = 0;
unsigned int deprecated_directives_allowed = 0;
/* mapped storage for collected libs */
void *lib_storage = NULL;
size_t lib_size = 0;
int check_kw_experimental(struct cfg_keyword *kw, const char *file, int linenum,
char **errmsg)
{
@ -1635,18 +1639,11 @@ void haproxy_init_args(int argc, char **argv)
argc--; argv++;
}
ret = trace_parse_cmd(arg, &err_msg);
if (ret <= -1) {
if (ret < -1) {
ha_alert("-dt: %s.\n", err_msg);
ha_free(&err_msg);
exit(EXIT_FAILURE);
}
else {
printf("%s\n", err_msg);
ha_free(&err_msg);
exit(0);
}
ret = trace_add_cmd(arg, &err_msg);
if (ret) {
ha_alert("-dt: %s.\n", err_msg);
ha_free(&err_msg);
exit(EXIT_FAILURE);
}
}
#ifdef HA_USE_KTLS
@ -2523,6 +2520,10 @@ static void step_init_2(int argc, char** argv)
chunk_appendf(&trash, "TARGET='%s'", pm_target_opts);
post_mortem_add_component("haproxy", haproxy_version, cc, cflags, opts, argv[0]);
if ((global.tune.options & (GTUNE_SET_DUMPABLE | GTUNE_COLLECT_LIBS)) ==
(GTUNE_SET_DUMPABLE | GTUNE_COLLECT_LIBS))
collect_libs();
}
/* This is a third part of the late init sequence, where we register signals for
@ -3478,6 +3479,7 @@ int main(int argc, char **argv)
list_for_each_entry_safe(cfg, cfg_tmp, &cfg_cfgfiles, list)
ha_free(&cfg->content);
trace_parse_cmds();
usermsgs_clr(NULL);
}
@ -3771,6 +3773,7 @@ int main(int argc, char **argv)
char *msg = NULL;
char c;
int r __maybe_unused;
struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
if (socketpair(PF_UNIX, SOCK_STREAM, 0, sock_pair) == -1) {
ha_alert("[%s.main()] Cannot create socketpair to update the new worker state\n",
@ -3780,10 +3783,12 @@ int main(int argc, char **argv)
}
list_for_each_entry(proc, &proc_list, list) {
if (proc->pid == -1)
if (proc->pid == -1 && proc->options & PROC_O_TYPE_WORKER)
break;
}
BUG_ON(!(proc->options & PROC_O_TYPE_WORKER));
if (send_fd_uxst(proc->ipc_fd[1], sock_pair[0]) == -1) {
ha_alert("[%s.main()] Cannot transfer connection fd %d over the sockpair@%d\n",
argv[0], sock_pair[0], proc->ipc_fd[1]);
@ -3807,6 +3812,7 @@ int main(int argc, char **argv)
* we make sure that the fd is received correctly.
*/
shutdown(sock_pair[1], SHUT_WR);
setsockopt(sock_pair[1], SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
r = read(sock_pair[1], &c, 1);
close(sock_pair[1]);
close(sock_pair[0]);
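The added SO_RCVTIMEO keeps the master from blocking forever on the socketpair read if the worker never answers: after roughly two seconds, read() fails instead of hanging. A self-contained sketch of that bounded-read pattern:

```c
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Sketch of the bounded-read pattern the SO_RCVTIMEO addition gives:
 * read one byte with a receive timeout; returns 1 on data, 0 on
 * orderly shutdown, -1 on timeout or error. */
static int read_with_timeout(int fd, char *c, int sec)
{
        struct timeval tv = { .tv_sec = sec, .tv_usec = 0 };

        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1)
                return -1;
        return (int)read(fd, c, 1);
}
```

Without the timeout, a dead or wedged worker would leave the reload path stuck in read() indefinitely; with it, the read returns an error after the deadline and the caller can report the failure.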


@ -250,7 +250,7 @@ static int hstream_htx_buf_rcv(struct connection *conn, struct hstream *hs)
htx_reset(htxbuf(&hs->req));
max = (IS_HTX_SC(hs->sc) ? htx_free_space(htxbuf(&hs->req)) : b_room(&hs->req));
sc_ep_clr(hs->sc, SE_FL_WANT_ROOM);
read = conn->mux->rcv_buf(hs->sc, &hs->req, max, 0);
read = CALL_MUX_WITH_RET(conn->mux, rcv_buf(hs->sc, &hs->req, max, 0));
cur_read += read;
if (!htx_expect_more(htxbuf(&hs->req))) {
fin = 1;
@ -313,7 +313,7 @@ static int hstream_htx_buf_snd(struct connection *conn, struct hstream *hs)
goto out;
}
nret = conn->mux->snd_buf(hs->sc, &hs->res, htxbuf(&hs->res)->data, 0);
nret = CALL_MUX_WITH_RET(conn->mux, snd_buf(hs->sc, &hs->res, htxbuf(&hs->res)->data, 0));
if (nret <= 0) {
if (hs->flags & HS_ST_CONN_ERROR ||
conn->flags & CO_FL_ERROR || sc_ep_test(sc, SE_FL_ERROR)) {
@ -447,15 +447,6 @@ err:
goto leave;
}
int hstream_wake(struct stconn *sc)
{
struct hstream *hs = __sc_hstream(sc);
TRACE_STATE("waking up task", HS_EV_HSTRM_IO_CB, hs);
task_wakeup(hs->task, TASK_WOKEN_IO);
return 0;
}
/* Add data to HTX response buffer from pre-built responses */
static void hstream_add_data(struct htx *htx, struct hstream *hs)
{
@ -510,11 +501,14 @@ static int hstream_build_http_resp(struct hstream *hs)
struct htx *htx;
unsigned int flags = HTX_SL_F_IS_RESP | HTX_SL_F_XFER_LEN | (!hs->req_chunked ? HTX_SL_F_CLEN : 0);
struct htx_sl *sl;
char hdrbuf[128];
char *end;
TRACE_ENTER(HS_EV_HSTRM_RESP, hs);
snprintf(hdrbuf, sizeof(hdrbuf), "%d", hs->req_code);
chunk_reset(&trash);
end = ultoa_o(hs->req_code, trash.area, trash.size);
if (!end)
goto err;
buf = hstream_get_buf(hs, &hs->res);
if (!buf) {
TRACE_ERROR("could not allocate response buffer", HS_EV_HSTRM_RESP, hs);
@ -524,7 +518,7 @@ static int hstream_build_http_resp(struct hstream *hs)
htx = htx_from_buf(buf);
sl = htx_add_stline(htx, HTX_BLK_RES_SL, flags,
!(hs->ka & 4) ? ist("HTTP/1.0") : ist("HTTP/1.1"),
ist(hdrbuf), IST_NULL);
ist2(trash.area, end - trash.area), IST_NULL);
if (!sl) {
TRACE_ERROR("could not add HTX start line", HS_EV_HSTRM_RESP, hs);
goto err;
@ -559,18 +553,46 @@ static int hstream_build_http_resp(struct hstream *hs)
}
/* XXX TODO time? XXX */
snprintf(hdrbuf, sizeof(hdrbuf), "time=%ld ms", 0L);
if (!htx_add_header(htx, ist("X-req"), ist(hdrbuf))) {
chunk_reset(&trash);
if (!chunk_strcat(&trash, "time=0 ms") ||
!htx_add_header(htx, ist("X-req"), ist2(trash.area, trash.data))) {
TRACE_ERROR("could not add x-req HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
/* XXX TODO time? XXX */
snprintf(hdrbuf, sizeof(hdrbuf), "id=%s, code=%d, cache=%d,%s size=%lld, time=%d ms (%ld real)",
"dummy", hs->req_code, hs->req_cache,
hs->req_chunked ? " chunked," : "",
hs->req_size, 0, 0L);
if (!htx_add_header(htx, ist("X-rsp"), ist(hdrbuf))) {
chunk_reset(&trash);
if (!chunk_strcat(&trash, "id=dummy, code=")) {
TRACE_ERROR("could not build x-rsp HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
end = ultoa_o(hs->req_code, trash.area + trash.data, trash.size - trash.data);
if (!end)
goto err;
trash.data = end - trash.area;
if (!chunk_strcat(&trash, ", cache=")) {
TRACE_ERROR("could not build x-rsp HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
end = ultoa_o(hs->req_cache, trash.area + trash.data, trash.size - trash.data);
if (!end)
goto err;
trash.data = end - trash.area;
if (hs->req_chunked && !chunk_strcat(&trash, ", chunked,")) {
TRACE_ERROR("could not build x-rsp HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
if (!chunk_strcat(&trash, " size=")) {
TRACE_ERROR("could not build x-rsp HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
end = ultoa_o(hs->req_size, trash.area + trash.data, trash.size - trash.data);
if (!end)
goto err;
trash.data = end - trash.area;
if (!chunk_strcat(&trash, ", time=0 ms (0 real)") ||
!htx_add_header(htx, ist("X-rsp"), ist2(trash.area, trash.data))) {
TRACE_ERROR("could not add x-rsp HTX header", HS_EV_HSTRM_RESP, hs);
goto err;
}
@ -882,7 +904,7 @@ static struct task *process_hstream(struct task *t, void *context, unsigned int
out:
if (!hs->to_write && !hs->req_body && htx_is_empty(htxbuf(&hs->res))) {
TRACE_DEVEL("shutting down stream", HS_EV_HSTRM_SEND, hs);
conn->mux->shut(hs->sc, SE_SHW_SILENT|SE_SHW_NORMAL, NULL);
CALL_MUX_NO_RET(conn->mux, shut(hs->sc, SE_SHW_SILENT|SE_SHW_NORMAL, NULL));
}
if (hs->flags & HS_ST_CONN_ERROR ||


@ -4926,7 +4926,7 @@ __LJMP static int hlua_run_sample_fetch(lua_State *L)
/* Run the sample fetch process. */
smp_set_owner(&smp, hsmp->p, hsmp->s->sess, hsmp->s, hsmp->dir & SMP_OPT_DIR);
if (!f->process(args, &smp, f->kw, f->private)) {
if (!EXEC_CTX_WITH_RET(f->exec_ctx, f->process(args, &smp, f->kw, f->private))) {
if (hsmp->flags & HLUA_F_AS_STRING)
lua_pushstring(L, "");
else
@ -5059,7 +5059,7 @@ __LJMP static int hlua_run_sample_conv(lua_State *L)
}
/* Run the sample conversion process. */
if (!conv->process(args, &smp, conv->private)) {
if (!EXEC_CTX_WITH_RET(conv->exec_ctx, conv->process(args, &smp, conv->private))) {
if (hsmp->flags & HLUA_F_AS_STRING)
lua_pushstring(L, "");
else


@ -1224,11 +1224,14 @@ static __inline int do_l7_retry(struct stream *s, struct stconn *sc)
}
b_free(&req->buf);
/* Swap the L7 buffer with the channel buffer */
/* We know we stored the co_data as b_data, so get it there */
co_data = b_data(&s->txn->l7_buffer);
b_set_data(&s->txn->l7_buffer, b_size(&s->txn->l7_buffer));
b_xfer(&req->buf, &s->txn->l7_buffer, b_data(&s->txn->l7_buffer));
req->buf = s->txn->l7_buffer;
s->txn->l7_buffer = BUF_NULL;
co_set_data(req, co_data);
DBG_TRACE_DEVEL("perform a L7 retry", STRM_EV_STRM_ANA|STRM_EV_HTTP_ANA, s, s->txn);
@ -2881,7 +2884,8 @@ static enum rule_result http_req_get_intercept_rule(struct proxy *px, struct lis
s->waiting_entity.ptr = NULL;
}
switch (rule->action_ptr(rule, px, sess, s, act_opts)) {
switch (EXEC_CTX_WITH_RET(rule->exec_ctx,
rule->action_ptr(rule, px, sess, s, act_opts))) {
case ACT_RET_CONT:
break;
case ACT_RET_STOP:
@ -3073,7 +3077,8 @@ resume_execution:
s->waiting_entity.ptr = NULL;
}
switch (rule->action_ptr(rule, px, sess, s, act_opts)) {
switch (EXEC_CTX_WITH_RET(rule->exec_ctx,
rule->action_ptr(rule, px, sess, s, act_opts))) {
case ACT_RET_CONT:
break;
case ACT_RET_STOP:
@ -4332,20 +4337,10 @@ enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn,
}
if (channel_htx_full(chn, htx, global.tune.maxrewrite) || sc_waiting_room(chn_prod(chn))) {
struct buffer lbuf;
char *area;
struct buffer lbuf = BUF_NULL;
if (large_buffer == 0 || b_is_large(&chn->buf))
goto end; /* don't use large buffer or large buffer is full */
/* normal buffer is full, allocate a large one
*/
area = pool_alloc(pool_head_large_buffer);
if (!area)
goto end; /* Allocation failure: TODO must be improved to use buffer_wait */
lbuf = b_make(area, global.tune.bufsize_large, 0, 0);
htx_xfer_blks(htx_from_buf(&lbuf), htx, htx_used_space(htx), HTX_BLK_UNUSED);
htx_to_buf(htx, &chn->buf);
if (large_buffer == 0 || b_is_large(&chn->buf) || !htx_move_to_large_buffer(&lbuf, &chn->buf))
goto end; /* don't use large buffer or already a large buffer */
b_free(&chn->buf);
offer_buffers(s, 1);
chn->buf = lbuf;


@ -604,10 +604,7 @@ void httpclient_applet_io_handler(struct appctx *appctx)
htx_to_buf(htx, outbuf);
b_xfer(outbuf, &hc->req.buf, b_data(&hc->req.buf));
} else {
struct htx_ret ret;
ret = htx_xfer_blks(htx, hc_htx, htx_used_space(hc_htx), HTX_BLK_UNUSED);
if (!ret.ret) {
if (!htx_xfer(htx, hc_htx, htx_used_space(hc_htx), HTX_XFER_DEFAULT)) {
applet_have_more_data(appctx);
goto out;
}
@ -711,7 +708,6 @@ void httpclient_applet_io_handler(struct appctx *appctx)
if (hc->options & HTTPCLIENT_O_RES_HTX) {
/* HTX mode transfers the header to the hc buffer */
struct htx *hc_htx;
struct htx_ret ret;
if (!b_alloc(&hc->res.buf, DB_MUX_TX)) {
applet_wont_consume(appctx);
@ -720,8 +716,7 @@ void httpclient_applet_io_handler(struct appctx *appctx)
hc_htx = htxbuf(&hc->res.buf);
/* xfer the headers */
ret = htx_xfer_blks(hc_htx, htx, htx_used_space(htx), HTX_BLK_EOH);
if (!ret.ret) {
if (!htx_xfer(hc_htx, htx, htx_used_space(htx), HTX_XFER_HDRS_ONLY)) {
applet_need_more_data(appctx);
goto out;
}
@ -811,12 +806,10 @@ void httpclient_applet_io_handler(struct appctx *appctx)
if (hc->options & HTTPCLIENT_O_RES_HTX) {
/* HTX mode transfers the header to the hc buffer */
struct htx *hc_htx;
struct htx_ret ret;
hc_htx = htxbuf(&hc->res.buf);
ret = htx_xfer_blks(hc_htx, htx, htx_used_space(htx), HTX_BLK_UNUSED);
if (!ret.ret)
if (!htx_xfer(hc_htx, htx, htx_used_space(htx), HTX_XFER_DEFAULT))
applet_wont_consume(appctx);
else
applet_fl_clr(appctx, APPCTX_FL_INBLK_FULL);


@ -41,17 +41,18 @@ struct list http_replies_list = LIST_HEAD_INIT(http_replies_list);
/* The declaration of errorfiles/errorfile directives. Used during config
* parsing only. */
struct conf_errors {
char type; /* directive type (0: errorfiles, 1: errorfile) */
enum http_err_directive directive; /* directive type: inline (errorfile <code> <file>) / section (errorfiles <section>) */
union {
struct {
int status; /* the status code associated to this error */
struct http_reply *reply; /* the http reply for the errorfile */
} errorfile; /* describe an "errorfile" directive */
} inl; /* for HTTP_ERR_DIRECTIVE_INLINE only */
struct {
char *name; /* the http-errors section name */
char status[HTTP_ERR_SIZE]; /* list of status to import (0: ignore, 1: implicit import, 2: explicit import) */
} errorfiles; /* describe an "errorfiles" directive */
} info;
struct http_errors *resolved; /* resolved section pointer set via proxy_check_http_errors() */
enum http_err_import status[HTTP_ERR_SIZE]; /* list of status to import */
} section; /* for HTTP_ERR_DIRECTIVE_SECTION only */
} type;
char *file; /* file where the directive appears */
int line; /* line where the directive appears */
@ -2034,9 +2035,9 @@ static int proxy_parse_errorloc(char **args, int section, struct proxy *curpx,
ret = -1;
goto out;
}
conf_err->type = 1;
conf_err->info.errorfile.status = status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = status;
conf_err->type.inl.reply = reply;
conf_err->file = strdup(file);
conf_err->line = line;
@ -2105,9 +2106,9 @@ static int proxy_parse_errorfile(char **args, int section, struct proxy *curpx,
ret = -1;
goto out;
}
conf_err->type = 1;
conf_err->info.errorfile.status = status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = status;
conf_err->type.inl.reply = reply;
conf_err->file = strdup(file);
conf_err->line = line;
LIST_APPEND(&curpx->conf.errors, &conf_err->list);
@ -2146,12 +2147,12 @@ static int proxy_parse_errorfiles(char **args, int section, struct proxy *curpx,
memprintf(err, "%s : out of memory.", args[0]);
goto error;
}
conf_err->type = 0;
conf_err->info.errorfiles.name = name;
conf_err->directive = HTTP_ERR_DIRECTIVE_SECTION;
conf_err->type.section.name = name;
if (!*(args[2])) {
for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
conf_err->info.errorfiles.status[rc] = 1;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_IMPLICIT;
}
else {
int cur_arg, status;
@ -2160,7 +2161,7 @@ static int proxy_parse_errorfiles(char **args, int section, struct proxy *curpx,
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (http_err_codes[rc] == status) {
conf_err->info.errorfiles.status[rc] = 2;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_EXPLICIT;
break;
}
}
@ -2231,16 +2232,16 @@ static int proxy_parse_http_error(char **args, int section, struct proxy *curpx,
if (reply->type == HTTP_REPLY_ERRFILES) {
int rc = http_get_status_idx(reply->status);
conf_err->type = 2;
conf_err->info.errorfiles.name = reply->body.http_errors;
conf_err->info.errorfiles.status[rc] = 2;
conf_err->directive = HTTP_ERR_DIRECTIVE_SECTION;
conf_err->type.section.name = reply->body.http_errors;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_EXPLICIT;
reply->body.http_errors = NULL;
release_http_reply(reply);
}
else {
conf_err->type = 1;
conf_err->info.errorfile.status = reply->status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = reply->status;
conf_err->type.inl.reply = reply;
LIST_APPEND(&http_replies_list, &reply->list);
}
conf_err->file = strdup(file);
@ -2260,60 +2261,46 @@ static int proxy_parse_http_error(char **args, int section, struct proxy *curpx,
}
/* Check "errorfiles" proxy keyword */
static int proxy_check_errors(struct proxy *px)
/* Converts the <conf_errors> entries initialized during config parsing for the
 * <px> proxy. Each of them is transformed into an http_reply and stored in the
 * proxy's replies array. The original <conf_errors> entries then become
 * unneeded and are removed and freed.
*/
static int proxy_finalize_http_errors(struct proxy *px)
{
struct conf_errors *conf_err, *conf_err_back;
struct http_errors *http_errs;
int rc, err = ERR_NONE;
int rc;
list_for_each_entry_safe(conf_err, conf_err_back, &px->conf.errors, list) {
if (conf_err->type == 1) {
/* errorfile */
rc = http_get_status_idx(conf_err->info.errorfile.status);
px->replies[rc] = conf_err->info.errorfile.reply;
switch (conf_err->directive) {
case HTTP_ERR_DIRECTIVE_INLINE:
rc = http_get_status_idx(conf_err->type.inl.status);
px->replies[rc] = conf_err->type.inl.reply;
/* For proxy, to rely on default replies, just don't reference a reply */
if (px->replies[rc]->type == HTTP_REPLY_ERRMSG && !px->replies[rc]->body.errmsg)
px->replies[rc] = NULL;
}
else {
/* errorfiles */
list_for_each_entry(http_errs, &http_errors_list, list) {
if (strcmp(http_errs->id, conf_err->info.errorfiles.name) == 0)
break;
}
break;
/* unknown http-errors section */
if (&http_errs->list == &http_errors_list) {
ha_alert("proxy '%s': unknown http-errors section '%s' (at %s:%d).\n",
px->id, conf_err->info.errorfiles.name, conf_err->file, conf_err->line);
err |= ERR_ALERT | ERR_FATAL;
free(conf_err->info.errorfiles.name);
goto next;
}
free(conf_err->info.errorfiles.name);
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->info.errorfiles.status[rc] > 0) {
case HTTP_ERR_DIRECTIVE_SECTION:
http_errs = conf_err->type.section.resolved;
if (http_errs) {
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->type.section.status[rc] == HTTP_ERR_IMPORT_NO)
continue;
if (http_errs->replies[rc])
px->replies[rc] = http_errs->replies[rc];
else if (conf_err->info.errorfiles.status[rc] == 2)
ha_warning("config: proxy '%s' : status '%d' not declared in"
" http-errors section '%s' (at %s:%d).\n",
px->id, http_err_codes[rc], http_errs->id,
conf_err->file, conf_err->line);
}
}
}
next:
LIST_DELETE(&conf_err->list);
free(conf_err->file);
free(conf_err);
}
out:
return err;
return ERR_NONE;
}
static int post_check_errors()
@ -2343,6 +2330,55 @@ static int post_check_errors()
return err_code;
}
/* Checks the validity of conf_errors stored in <px> proxy after the
* configuration is completely parsed.
*
* Returns ERR_NONE on success and a combination of ERR_CODE on failure.
*/
int proxy_check_http_errors(struct proxy *px)
{
struct http_errors *http_errs;
struct conf_errors *conf_err;
int section_found;
int rc, err = ERR_NONE;
list_for_each_entry(conf_err, &px->conf.errors, list) {
if (conf_err->directive == HTTP_ERR_DIRECTIVE_SECTION) {
section_found = 0;
list_for_each_entry(http_errs, &http_errors_list, list) {
if (strcmp(http_errs->id, conf_err->type.section.name) == 0) {
section_found = 1;
break;
}
}
if (!section_found) {
ha_alert("proxy '%s': unknown http-errors section '%s' (at %s:%d).\n",
px->id, conf_err->type.section.name, conf_err->file, conf_err->line);
ha_free(&conf_err->type.section.name);
err |= ERR_ALERT | ERR_FATAL;
continue;
}
conf_err->type.section.resolved = http_errs;
ha_free(&conf_err->type.section.name);
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->type.section.status[rc] == HTTP_ERR_IMPORT_EXPLICIT &&
!http_errs->replies[rc]) {
ha_warning("config: proxy '%s' : status '%d' not declared in"
" http-errors section '%s' (at %s:%d).\n",
px->id, http_err_codes[rc], http_errs->id,
conf_err->file, conf_err->line);
err |= ERR_WARN;
}
}
}
}
return err;
}
int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx, char **errmsg)
{
struct conf_errors *conf_err, *new_conf_err = NULL;
@ -2354,19 +2390,22 @@ int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx
memprintf(errmsg, "unable to duplicate default errors (out of memory).");
goto out;
}
new_conf_err->type = conf_err->type;
if (conf_err->type == 1) {
new_conf_err->info.errorfile.status = conf_err->info.errorfile.status;
new_conf_err->info.errorfile.reply = conf_err->info.errorfile.reply;
}
else {
new_conf_err->info.errorfiles.name = strdup(conf_err->info.errorfiles.name);
if (!new_conf_err->info.errorfiles.name) {
new_conf_err->directive = conf_err->directive;
switch (conf_err->directive) {
case HTTP_ERR_DIRECTIVE_INLINE:
new_conf_err->type.inl.status = conf_err->type.inl.status;
new_conf_err->type.inl.reply = conf_err->type.inl.reply;
break;
case HTTP_ERR_DIRECTIVE_SECTION:
new_conf_err->type.section.name = strdup(conf_err->type.section.name);
if (!new_conf_err->type.section.name) {
memprintf(errmsg, "unable to duplicate default errors (out of memory).");
goto out;
}
memcpy(&new_conf_err->info.errorfiles.status, &conf_err->info.errorfiles.status,
sizeof(conf_err->info.errorfiles.status));
memcpy(&new_conf_err->type.section.status, &conf_err->type.section.status,
sizeof(conf_err->type.section.status));
break;
}
new_conf_err->file = strdup(conf_err->file);
new_conf_err->line = conf_err->line;
@ -2385,8 +2424,8 @@ void proxy_release_conf_errors(struct proxy *px)
struct conf_errors *conf_err, *conf_err_back;
list_for_each_entry_safe(conf_err, conf_err_back, &px->conf.errors, list) {
if (conf_err->type == 0)
free(conf_err->info.errorfiles.name);
if (conf_err->directive == HTTP_ERR_DIRECTIVE_SECTION)
free(conf_err->type.section.name);
LIST_DELETE(&conf_err->list);
free(conf_err->file);
free(conf_err);
@ -2505,7 +2544,7 @@ static struct cfg_kw_list cfg_kws = {ILH, {
}};
INITCALL1(STG_REGISTER, cfg_register_keywords, &cfg_kws);
REGISTER_POST_PROXY_CHECK(proxy_check_errors);
REGISTER_POST_PROXY_CHECK(proxy_finalize_http_errors);
REGISTER_POST_CHECK(post_check_errors);
REGISTER_CONFIG_SECTION("http-errors", cfg_parse_http_errors, NULL);


@ -52,17 +52,17 @@ struct action_kw_list http_after_res_keywords = {
void http_req_keywords_register(struct action_kw_list *kw_list)
{
LIST_APPEND(&http_req_keywords.list, &kw_list->list);
act_add_list(&http_req_keywords.list, kw_list);
}
void http_res_keywords_register(struct action_kw_list *kw_list)
{
LIST_APPEND(&http_res_keywords.list, &kw_list->list);
act_add_list(&http_res_keywords.list, kw_list);
}
void http_after_res_keywords_register(struct action_kw_list *kw_list)
{
LIST_APPEND(&http_after_res_keywords.list, &kw_list->list);
act_add_list(&http_after_res_keywords.list, kw_list);
}
/*

src/htx.c

@ -11,6 +11,7 @@
*/
#include <haproxy/chunk.h>
#include <haproxy/dynbuf.h>
#include <haproxy/global.h>
#include <haproxy/htx.h>
#include <haproxy/net_helper.h>
@ -469,6 +470,18 @@ void htx_truncate(struct htx *htx, uint32_t offset)
blk = htx_remove_blk(htx, blk);
}
/* Removes all blocks after <blk>, excluding it. If <blk> is NULL, all blocks
* are removed.
*/
void htx_truncate_blk(struct htx *htx, struct htx_blk *blk)
{
if (!blk) {
htx_drain(htx, htx->data);
return;
}
for (blk = htx_get_next_blk(htx, blk); blk; blk = htx_remove_blk(htx, blk));
}
/* Drains <count> bytes from the HTX message <htx>. If the last block is a DATA
 * block, it will be cut if necessary. Other blocks will be removed at once if
* <count> is large enough. The function returns an htx_ret with the first block
@ -707,10 +720,154 @@ struct htx_blk *htx_replace_blk_value(struct htx *htx, struct htx_blk *blk,
return blk;
}
/* Transfer HTX blocks from <src> to <dst>, stopping if <count> bytes were
* transferred (including payload and meta-data). It returns the number of bytes
 * copied. By default, copied blocks are removed from <src>, and headers and
 * trailers parts can only be moved whole. <flags> can be set to change the
* default behavior:
* - HTX_XFER_KEEP_SRC_BLKS: source blocks are not removed
* - HTX_XFER_PARTIAL_HDRS_COPY: partial headers and trailers part can be xferred
* - HTX_XFER_HDRS_ONLY: Only the headers part is xferred
*/
size_t htx_xfer(struct htx *dst, struct htx *src, size_t count, unsigned int flags)
{
struct htx_blk *blk, *last_dstblk;
size_t ret = 0;
int dst_full = 0;
last_dstblk = NULL;
for (blk = htx_get_head_blk(src); blk && count; blk = htx_get_next_blk(src, blk)) {
struct ist v;
enum htx_blk_type type;
uint32_t sz;
/* Ignore unused block */
type = htx_get_blk_type(blk);
if (type == HTX_BLK_UNUSED)
continue;
if ((flags & HTX_XFER_HDRS_ONLY) &&
type != HTX_BLK_REQ_SL && type != HTX_BLK_RES_SL &&
type != HTX_BLK_HDR && type != HTX_BLK_EOH)
break;
sz = htx_get_blksz(blk);
switch (type) {
case HTX_BLK_DATA:
v = htx_get_blk_value(src, blk);
if (v.len > count)
v.len = count;
v.len = htx_add_data(dst, v);
if (!v.len) {
dst_full = 1;
goto stop;
}
last_dstblk = htx_get_tail_blk(dst);
count -= sizeof(*blk) + v.len;
ret += sizeof(*blk) + v.len;
if (v.len != sz) {
dst_full = 1;
goto stop;
}
break;
default:
if (sz > count) {
dst_full = 1;
goto stop;
}
last_dstblk = htx_add_blk(dst, type, sz);
if (!last_dstblk) {
dst_full = 1;
goto stop;
}
last_dstblk->info = blk->info;
htx_memcpy(htx_get_blk_ptr(dst, last_dstblk), htx_get_blk_ptr(src, blk), sz);
count -= sizeof(*blk) + sz;
ret += sizeof(*blk) + sz;
break;
}
last_dstblk = NULL; /* Reset last_dstblk because it was fully copied */
}
stop:
/* Here, if not NULL, <blk> points to the first not fully copied block in
 * <src>. And <last_dstblk>, if defined, is the last not fully copied
 * block in <dst>. So we have:
 * - <blk> == NULL: everything was copied. <last_dstblk> must be NULL
 * - <blk> != NULL && <last_dstblk> == NULL: partial copy but the last block was fully copied
 * - <blk> != NULL && <last_dstblk> != NULL: partial copy and the last block was partially copied (DATA block only)
*/
if (!(flags & HTX_XFER_PARTIAL_HDRS_COPY)) {
/* Partial headers/trailers copy is not supported */
struct htx_blk *dstblk;
enum htx_blk_type type = HTX_BLK_UNUSED;
dstblk = htx_get_tail_blk(dst);
if (dstblk)
type = htx_get_blk_type(dstblk);
/* the last copied block is a start-line, a header or a trailer */
if (type == HTX_BLK_REQ_SL || type == HTX_BLK_RES_SL || type == HTX_BLK_HDR || type == HTX_BLK_TLR) {
/* <src> cannot have a partial headers or trailers part */
BUG_ON(blk == NULL);
/* Remove partial headers/trailers from <dst> and roll back in <src> so they are not removed later */
while (type == HTX_BLK_REQ_SL || type == HTX_BLK_RES_SL || type == HTX_BLK_HDR || type == HTX_BLK_TLR) {
BUG_ON(type != htx_get_blk_type(blk));
ret -= sizeof(*blk) + htx_get_blksz(blk);
htx_remove_blk(dst, dstblk);
dstblk = htx_get_tail_blk(dst);
blk = htx_get_prev_blk(src, blk);
if (!dstblk)
break;
type = htx_get_blk_type(dstblk);
}
/* Report if the xfer was interrupted because <dst> was
 * full even though it was originally empty
*/
if (dst_full && htx_is_empty(dst))
src->flags |= HTX_FL_PARSING_ERROR;
}
}
if (!(flags & HTX_XFER_KEEP_SRC_BLKS)) {
/* True xfer performed, remove copied block from <src> */
struct htx_blk *blk2;
/* Remove all fully copied blocks */
if (!blk)
htx_drain(src, src->data);
else {
for (blk2 = htx_get_head_blk(src); blk2 && blk2 != blk; blk2 = htx_remove_blk(src, blk2));
/* If copy was stopped on a DATA block and the last destination
* block is not NULL, it means a partial copy was performed. So
* cut the source block accordingly
*/
if (last_dstblk && blk2 && htx_get_blk_type(blk2) == HTX_BLK_DATA) {
htx_cut_data_blk(src, blk2, htx_get_blksz(last_dstblk));
}
}
}
/* Everything was copied, transfer terminal HTX flags too */
if (!blk) {
dst->flags |= (src->flags & (HTX_FL_EOM|HTX_FL_PARSING_ERROR|HTX_FL_PROCESSING_ERROR));
src->flags = 0;
}
return ret;
}
/* Transfer HTX blocks from <src> to <dst>, stopping once the first block of the
* type <mark> is transferred (typically EOH or EOT) or when <count> bytes were
* moved (including payload and meta-data). It returns the number of bytes moved
* and the last HTX block inserted in <dst>.
*
* DEPRECATED
*/
struct htx_ret htx_xfer_blks(struct htx *dst, struct htx *src, uint32_t count,
enum htx_blk_type mark)
@ -1169,3 +1326,68 @@ int htx_append_msg(struct htx *dst, const struct htx *src)
htx_truncate(dst, offset);
return 0;
}
/* If possible, transfer HTX blocks from <src> to a small buffer. This function
 * allocates the small buffer and makes <dst> point to it. If <dst> is not empty
 * or if <src> contains too much data, NULL is returned. If the allocation
 * fails, NULL is returned. Otherwise <dst> is returned. <flags> instructs how
 * the transfer must be performed.
*/
struct buffer *__htx_xfer_to_small_buffer(struct buffer *dst, struct buffer *src, unsigned int flags)
{
struct htx *dst_htx;
struct htx *src_htx = htxbuf(src);
size_t sz = (sizeof(struct htx) + htx_used_space(src_htx));
if (dst->size || sz > global.tune.bufsize_small || !b_alloc_small(dst))
return NULL;
dst_htx = htx_from_buf(dst);
htx_xfer(dst_htx, src_htx, src_htx->size, flags);
htx_to_buf(dst_htx, dst);
return dst;
}
/* If possible, transfer HTX blocks from <src> to a large buffer. This function
 * allocates the large buffer and makes <dst> point to it. If <dst> is not empty
 * or if <src> contains too much data, NULL is returned. If the allocation
 * fails, NULL is returned. Otherwise <dst> is returned. <flags> instructs how
 * the transfer must be performed.
*/
struct buffer *__htx_xfer_to_large_buffer(struct buffer *dst, struct buffer *src, unsigned int flags)
{
struct htx *dst_htx;
struct htx *src_htx = htxbuf(src);
size_t sz = (sizeof(struct htx) + htx_used_space(src_htx));
if (dst->size || sz > global.tune.bufsize_large || !b_alloc_large(dst))
return NULL;
dst_htx = htx_from_buf(dst);
htx_xfer(dst_htx, src_htx, src_htx->size, flags);
htx_to_buf(dst_htx, dst);
return dst;
}
/* Move HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_small_buffer() */
struct buffer *htx_move_to_small_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_small_buffer(dst, src, HTX_XFER_DEFAULT);
}
/* Move HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_large_buffer() */
struct buffer *htx_move_to_large_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_large_buffer(dst, src, HTX_XFER_DEFAULT);
}
/* Copy HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_small_buffer() */
struct buffer *htx_copy_to_small_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_small_buffer(dst, src, HTX_XFER_KEEP_SRC_BLKS);
}
/* Copy HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_large_buffer() */
struct buffer *htx_copy_to_large_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_large_buffer(dst, src, HTX_XFER_KEEP_SRC_BLKS);
}


@ -89,6 +89,11 @@ struct list per_thread_free_list = LIST_HEAD_INIT(per_thread_free_list);
* valgrind mostly happy. */
struct list per_thread_deinit_list = LIST_HEAD_INIT(per_thread_deinit_list);
/* location of current INITCALL declaration being processed, or NULL */
const struct initcall *caller_initcall = NULL;
const char *caller_file = NULL;
int caller_line = 0;
/* used to register some initialization functions to call before the checks. */
void hap_register_pre_check(int (*fct)())
{

src/jwe.c

File diff suppressed because it is too large


@ -356,7 +356,7 @@ out:
*/
size_t jws_b64_signature(EVP_PKEY *pkey, enum jwt_alg alg, char *b64protected, char *b64payload, char *dst, size_t dsize)
{
EVP_MD_CTX *ctx;
EVP_MD_CTX *ctx = NULL;
const EVP_MD *evp_md = NULL;
int ret = 0;
struct buffer *sign = NULL;
@ -450,6 +450,7 @@ size_t jws_b64_signature(EVP_PKEY *pkey, enum jwt_alg alg, char *b64protected, c
ret = a2base64url(sign->area, sign->data, dst, dsize);
out:
EVP_MD_CTX_free(ctx);
free_trash_chunk(sign);
if (ret > 0)


@ -94,26 +94,30 @@ enum jwt_alg jwt_parse_alg(const char *alg_str, unsigned int alg_len)
* now, we don't need to manage more than three subparts in the tokens.
* See section 3.1 of RFC7515 for more information about JWS Compact
* Serialization.
* Returns 0 in case of success.
* Returns -1 in case of error, 0 if the token has exactly <item_num> parts, a
* positive value otherwise.
*/
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int *item_num)
int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int item_num)
{
char *ptr = jwt->area;
char *jwt_end = jwt->area + jwt->data;
unsigned int index = 0;
unsigned int length = 0;
if (index < *item_num) {
items[index].start = ptr;
items[index].length = 0;
}
if (item_num == 0)
return -1;
while (index < *item_num && ptr < jwt_end) {
items[index].start = ptr;
items[index].length = 0;
while (ptr < jwt_end) {
if (*ptr++ == '.') {
items[index++].length = length;
/* We found enough items, no need to keep looking for
* separators. */
if (index == item_num)
return 1;
if (index == *item_num)
return -1;
items[index].start = ptr;
items[index].length = 0;
length = 0;
@ -121,10 +125,11 @@ int jwt_tokenize(const struct buffer *jwt, struct jwt_item *items, unsigned int
++length;
}
if (index < *item_num)
items[index].length = length;
/* We might not have found enough items */
if (index < item_num - 1)
return -1;
*item_num = (index+1);
items[index].length = length;
return (ptr != jwt_end);
}
@ -493,13 +498,9 @@ enum jwt_vrfy_status jwt_verify(const struct buffer *token, const struct buffer
if (ctx.alg == JWT_ALG_DEFAULT)
return JWT_VRFY_UNKNOWN_ALG;
if (jwt_tokenize(token, items, &item_num))
if (jwt_tokenize(token, items, item_num))
return JWT_VRFY_INVALID_TOKEN;
if (item_num != JWT_ELT_MAX)
if (ctx.alg != JWS_ALG_NONE || item_num != JWT_ELT_SIG)
return JWT_VRFY_INVALID_TOKEN;
ctx.jose = items[JWT_ELT_JOSE];
ctx.claims = items[JWT_ELT_CLAIMS];
ctx.signature = items[JWT_ELT_SIG];

src/log.c

@ -2913,6 +2913,7 @@ static inline void __send_log_set_metadata_sd(struct ist *metadata, char *sd, si
struct process_send_log_ctx {
struct session *sess;
struct stream *stream;
struct log_profile *profile;
struct log_orig origin;
};
@ -2942,6 +2943,10 @@ static inline void _process_send_log_override(struct process_send_log_ctx *ctx,
enum log_orig_id orig = (ctx) ? ctx->origin.id : LOG_ORIG_UNSPEC;
uint16_t orig_fl = (ctx) ? ctx->origin.flags : LOG_ORIG_FL_NONE;
/* ctx->profile gets priority over logger profile */
if (ctx && ctx->profile)
prof = ctx->profile;
BUG_ON(!prof);
if (!b_is_null(&prof->log_tag))
@ -3095,8 +3100,8 @@ static void process_send_log(struct process_send_log_ctx *ctx,
nblogger += 1;
/* logger may use a profile to override a few things */
if (unlikely(logger->prof))
/* caller or default logger may use a profile to override a few things */
if (unlikely(logger->prof || (ctx && ctx->profile)))
_process_send_log_override(ctx, logger, hdr, message, size, nblogger);
else
_process_send_log_final(logger, hdr, message, size, nblogger);
@ -5200,17 +5205,11 @@ out:
}
/*
* opportunistic log when at least the session is known to exist
* <s> may be NULL
*
* Will not log if the frontend has no log defined. By default it will
* try to emit the log as INFO, unless the stream already exists and
* set-log-level was used.
*/
void do_log(struct session *sess, struct stream *s, struct log_orig origin)
static void do_log_ctx(struct process_send_log_ctx *ctx)
{
struct process_send_log_ctx ctx;
struct stream *s = ctx->stream;
struct session *sess = ctx->sess;
struct log_orig origin = ctx->origin;
int size;
int sd_size = 0;
int level = -1;
@ -5242,11 +5241,27 @@ void do_log(struct session *sess, struct stream *s, struct log_orig origin)
size = sess_build_logline_orig(sess, s, logline, global.max_syslog_len, &sess->fe->logformat, origin);
__send_log(ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
}
/*
* opportunistic log when at least the session is known to exist
* <s> may be NULL
*
* Will not log if the frontend has no log defined. By default it will
* try to emit the log as INFO, unless the stream already exists and
* set-log-level was used.
*/
void do_log(struct session *sess, struct stream *s, struct log_orig origin)
{
struct process_send_log_ctx ctx;
ctx.origin = origin;
ctx.sess = sess;
ctx.stream = s;
__send_log(&ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
ctx.profile = NULL;
do_log_ctx(&ctx);
}
/*
@ -5297,6 +5312,7 @@ void strm_log(struct stream *s, struct log_orig origin)
ctx.origin = origin;
ctx.sess = sess;
ctx.stream = s;
ctx.profile = NULL;
__send_log(&ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
s->logs.logwait = 0;
@ -5364,6 +5380,7 @@ void _sess_log(struct session *sess, int embryonic)
ctx.origin = orig;
ctx.sess = sess;
ctx.stream = NULL;
ctx.profile = NULL;
__send_log(&ctx, &sess->fe->loggers,
&sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
@ -6910,24 +6927,87 @@ static int px_parse_log_steps(char **args, int section_type, struct proxy *curpx
static enum act_return do_log_action(struct act_rule *rule, struct proxy *px,
struct session *sess, struct stream *s, int flags)
{
struct process_send_log_ctx ctx;
/* do_log() expects valid session pointer */
BUG_ON(sess == NULL);
do_log(sess, s, log_orig(rule->arg.expr_int.value, LOG_ORIG_FL_NONE));
ctx.origin = log_orig(rule->arg.do_log.orig, LOG_ORIG_FL_NONE);
ctx.sess = sess;
ctx.stream = s;
ctx.profile = rule->arg.do_log.profile;
do_log_ctx(&ctx);
return ACT_RET_CONT;
}
/* Parse a "do_log" action. It doesn't take any argument
static int do_log_action_check(struct act_rule *rule, struct proxy *px, char **err)
{
if (rule->arg.do_log.profile_name) {
struct log_profile *prof;
prof = log_profile_find_by_name(rule->arg.do_log.profile_name);
if (!prof) {
memprintf(err, "do-log action: profile '%s' is invalid", rule->arg.do_log.profile_name);
ha_free(&rule->arg.do_log.profile_name);
return 0;
}
ha_free(&rule->arg.do_log.profile_name);
if (!log_profile_postcheck(px, prof, err)) {
memprintf(err, "do-log action on %s %s uses incompatible log-profile '%s': %s", proxy_type_str(px), px->id, prof->id, *err);
return 0;
}
rule->arg.do_log.profile = prof;
}
return 1; // success
}
static void do_log_action_release(struct act_rule *rule)
{
ha_free(&rule->arg.do_log.profile_name);
}
/* Parse a "do_log" action. It takes optional "log-profile" argument to
* specifically use a given log-profile when generating the log message
*
* May be used from places where per-context actions are usually registered
*/
enum act_parse_ret do_log_parse_act(enum log_orig_id id,
const char **args, int *orig_arg, struct proxy *px,
struct act_rule *rule, char **err)
{
int cur_arg = *orig_arg;
rule->action_ptr = do_log_action;
rule->action = ACT_CUSTOM;
rule->release_ptr = NULL;
rule->arg.expr_int.value = id;
rule->check_ptr = do_log_action_check;
rule->release_ptr = do_log_action_release;
rule->arg.do_log.orig = id;
while (*args[*orig_arg]) {
if (!strcmp(args[*orig_arg], "profile")) {
if (!*args[*orig_arg + 1]) {
memprintf(err,
"action '%s': 'profile' expects argument.",
args[cur_arg-1]);
return ACT_RET_PRS_ERR;
}
rule->arg.do_log.profile_name = strdup(args[*orig_arg + 1]);
if (!rule->arg.do_log.profile_name) {
memprintf(err,
"action '%s': memory error when setting 'profile'",
args[cur_arg-1]);
return ACT_RET_PRS_ERR;
}
*orig_arg += 2;
}
else
break;
}
return ACT_RET_PRS_OK;
}

View File

@ -24,7 +24,7 @@
#include <import/mjson.h>
static double mystrtod(const char *str, char **end);
static double mystrtod(const char *str, int len, char **end);
static int mjson_esc(int c, int esc) {
const char *p, *esc1 = "\b\f\n\r\t\\\"", *esc2 = "bfnrt\\\"";
@ -101,7 +101,7 @@ int mjson(const char *s, int len, mjson_cb_t cb, void *ud) {
tok = MJSON_TOK_FALSE;
} else if (c == '-' || ((c >= '0' && c <= '9'))) {
char *end = NULL;
mystrtod(&s[i], &end);
mystrtod(&s[i], len - i, &end);
if (end != NULL) i += (int) (end - &s[i] - 1);
tok = MJSON_TOK_NUMBER;
} else if (c == '"') {
@ -212,7 +212,7 @@ static int mjson_get_cb(int tok, const char *s, int off, int len, void *ud) {
} else if (tok == '[') {
if (data->d1 == data->d2 && data->path[data->pos] == '[') {
data->i1 = 0;
data->i2 = (int) mystrtod(&data->path[data->pos + 1], NULL);
data->i2 = (int) mystrtod(&data->path[data->pos + 1], strlen(&data->path[data->pos + 1]), NULL);
if (data->i1 == data->i2) {
data->d2++;
data->pos += 3;
@ -272,7 +272,7 @@ int mjson_get_number(const char *s, int len, const char *path, double *v) {
const char *p;
int tok, n;
if ((tok = mjson_find(s, len, path, &p, &n)) == MJSON_TOK_NUMBER) {
if (v != NULL) *v = mystrtod(p, NULL);
if (v != NULL) *v = mystrtod(p, n, NULL);
}
return tok == MJSON_TOK_NUMBER ? 1 : 0;
}
@ -343,57 +343,56 @@ static int is_digit(int c) {
}
/* NOTE: strtod() implementation by Yasuhiro Matsumoto. */
static double mystrtod(const char *str, char **end) {
static double mystrtod(const char *str, int len, char **end) {
double d = 0.0;
int sign = 1, __attribute__((unused)) n = 0;
const char *p = str, *a = str;
const char *end_p = str + len;
/* decimal part */
if (*p == '-') {
if (p < end_p && *p == '-') {
sign = -1;
++p;
} else if (*p == '+') {
} else if (p < end_p && *p == '+') {
++p;
}
if (is_digit(*p)) {
if (p < end_p && is_digit(*p)) {
d = (double) (*p++ - '0');
while (*p && is_digit(*p)) {
while (p < end_p && is_digit(*p)) {
d = d * 10.0 + (double) (*p - '0');
++p;
++n;
}
a = p;
} else if (*p != '.') {
} else if (p >= end_p || *p != '.') {
goto done;
}
d *= sign;
/* fraction part */
if (*p == '.') {
if (p < end_p && *p == '.') {
double f = 0.0;
double base = 0.1;
++p;
if (is_digit(*p)) {
while (*p && is_digit(*p)) {
f += base * (*p - '0');
base /= 10.0;
++p;
++n;
}
while (p < end_p && is_digit(*p)) {
f += base * (*p - '0');
base /= 10.0;
++p;
++n;
}
d += f * sign;
a = p;
}
/* exponential part */
if ((*p == 'E') || (*p == 'e')) {
if (p < end_p && ((*p == 'E') || (*p == 'e'))) {
double exp, f;
int i, e = 0, neg = 0;
p++;
if (*p == '-') p++, neg++;
if (*p == '+') p++;
while (is_digit(*p)) e = e * 10 + *p++ - '0';
if (p < end_p && *p == '-') p++, neg++;
if (p < end_p && *p == '+') p++;
while (p < end_p && is_digit(*p)) e = e * 10 + *p++ - '0';
i = e;
if (neg) e = -e;
#if 0

View File

@ -862,7 +862,7 @@ static void fcgi_strm_notify_recv(struct fcgi_strm *fstrm)
{
if (fstrm->subs && (fstrm->subs->events & SUB_RETRY_RECV)) {
TRACE_POINT(FCGI_EV_STRM_WAKE, fstrm->fconn->conn, fstrm);
tasklet_wakeup(fstrm->subs->tasklet);
tasklet_wakeup(fstrm->subs->tasklet, TASK_WOKEN_IO);
fstrm->subs->events &= ~SUB_RETRY_RECV;
if (!fstrm->subs->events)
fstrm->subs = NULL;
@ -875,7 +875,7 @@ static void fcgi_strm_notify_send(struct fcgi_strm *fstrm)
if (fstrm->subs && (fstrm->subs->events & SUB_RETRY_SEND)) {
TRACE_POINT(FCGI_EV_STRM_WAKE, fstrm->fconn->conn, fstrm);
fstrm->flags |= FCGI_SF_NOTIFIED;
tasklet_wakeup(fstrm->subs->tasklet);
tasklet_wakeup(fstrm->subs->tasklet, TASK_WOKEN_IO);
fstrm->subs->events &= ~SUB_RETRY_SEND;
if (!fstrm->subs->events)
fstrm->subs = NULL;
@ -886,26 +886,28 @@ static void fcgi_strm_notify_send(struct fcgi_strm *fstrm)
}
}
/* Alerts the data layer, trying to wake it up by all means, following
* this sequence :
* - if the fcgi stream' data layer is subscribed to recv, then it's woken up
* for recv
* - if its subscribed to send, then it's woken up for send
* - if it was subscribed to neither, its ->wake() callback is called
* It is safe to call this function with a closed stream which doesn't have a
* stream connector anymore.
/* Alerts the data layer by waking it up. TASK_WOKEN_MSG state is used by
* default and if the data layer is also subscribed to recv or send,
 * TASK_WOKEN_IO is added. But first, we check whether the shut tasklet
 * must be woken up instead.
*/
static void fcgi_strm_alert(struct fcgi_strm *fstrm)
{
TRACE_POINT(FCGI_EV_STRM_WAKE, fstrm->fconn->conn, fstrm);
if (fstrm->subs ||
(fstrm->flags & (FCGI_SF_WANT_SHUTR|FCGI_SF_WANT_SHUTW))) {
fcgi_strm_notify_recv(fstrm);
fcgi_strm_notify_send(fstrm);
}
else if (fcgi_strm_sc(fstrm) && fcgi_strm_sc(fstrm)->app_ops->wake != NULL) {
TRACE_POINT(FCGI_EV_STRM_WAKE, fstrm->fconn->conn, fstrm);
fcgi_strm_sc(fstrm)->app_ops->wake(fcgi_strm_sc(fstrm));
if (!fstrm->subs && (fstrm->flags & (FCGI_SF_WANT_SHUTR|FCGI_SF_WANT_SHUTW)))
tasklet_wakeup(fstrm->shut_tl);
else if (fcgi_strm_sc(fstrm)) {
unsigned int state = TASK_WOKEN_MSG;
if (fstrm->subs) {
if (fstrm->subs->events & SUB_RETRY_SEND)
fstrm->flags |= FCGI_SF_NOTIFIED;
fstrm->subs->events = 0;
fstrm->subs = NULL;
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(fcgi_strm_sc(fstrm)->wait_event.tasklet, state);
}
}
@ -3734,7 +3736,7 @@ static void fcgi_detach(struct sedesc *sd)
if (eb_is_empty(&fconn->streams_by_id)) {
if (!fconn->conn->owner) {
/* Session insertion above has failed and connection is idle, remove it. */
fconn->conn->mux->destroy(fconn);
CALL_MUX_NO_RET(fconn->conn->mux, destroy(fconn));
TRACE_DEVEL("outgoing connection killed", FCGI_EV_STRM_END|FCGI_EV_FCONN_ERR);
return;
}
@ -3747,7 +3749,7 @@ static void fcgi_detach(struct sedesc *sd)
/* Ensure session can keep a new idle connection. */
if (session_check_idle_conn(sess, fconn->conn) != 0) {
fconn->conn->mux->destroy(fconn);
CALL_MUX_NO_RET(fconn->conn->mux, destroy(fconn));
TRACE_DEVEL("outgoing connection killed", FCGI_EV_STRM_END|FCGI_EV_FCONN_ERR);
return;
}
@ -3778,7 +3780,7 @@ static void fcgi_detach(struct sedesc *sd)
if (!srv_add_to_idle_list(objt_server(fconn->conn->target), fconn->conn, 1)) {
/* The server doesn't want it, let's kill the connection right away */
fconn->conn->mux->destroy(fconn);
CALL_MUX_NO_RET(fconn->conn->mux, destroy(fconn));
TRACE_DEVEL("outgoing connection killed", FCGI_EV_STRM_END|FCGI_EV_FCONN_ERR);
return;
}

View File

@ -1197,7 +1197,7 @@ static int h1s_finish_detach(struct h1s *h1s)
if (!session_add_conn(sess, h1c->conn)) {
/* HTTP/1.1 conn is always idle after detach, can be removed if session insert failed. */
h1c->conn->owner = NULL;
h1c->conn->mux->destroy(h1c);
CALL_MUX_NO_RET(h1c->conn->mux, destroy(h1c));
goto released;
}
@ -1213,7 +1213,7 @@ static int h1s_finish_detach(struct h1s *h1s)
/* Ensure session can keep a new idle connection. */
if (session_check_idle_conn(sess, h1c->conn)) {
TRACE_DEVEL("outgoing connection rejected", H1_EV_STRM_END|H1_EV_H1C_END, h1c->conn);
h1c->conn->mux->destroy(h1c);
CALL_MUX_NO_RET(h1c->conn->mux, destroy(h1c));
goto released;
}
@ -1236,7 +1236,7 @@ static int h1s_finish_detach(struct h1s *h1s)
if (!srv_add_to_idle_list(objt_server(h1c->conn->target), h1c->conn, is_not_first)) {
/* The server doesn't want it, let's kill the connection right away */
h1c->conn->mux->destroy(h1c);
CALL_MUX_NO_RET(h1c->conn->mux, destroy(h1c));
TRACE_DEVEL("outgoing connection killed", H1_EV_STRM_END|H1_EV_H1C_END);
goto released;
}
@ -3716,7 +3716,7 @@ static void h1_wake_stream_for_recv(struct h1s *h1s)
{
if (h1s && h1s->subs && h1s->subs->events & SUB_RETRY_RECV) {
TRACE_POINT(H1_EV_STRM_WAKE, h1s->h1c->conn, h1s);
tasklet_wakeup(h1s->subs->tasklet);
tasklet_wakeup(h1s->subs->tasklet, TASK_WOKEN_IO);
h1s->subs->events &= ~SUB_RETRY_RECV;
if (!h1s->subs->events)
h1s->subs = NULL;
@ -3726,28 +3726,31 @@ static void h1_wake_stream_for_send(struct h1s *h1s)
{
if (h1s && h1s->subs && h1s->subs->events & SUB_RETRY_SEND) {
TRACE_POINT(H1_EV_STRM_WAKE, h1s->h1c->conn, h1s);
tasklet_wakeup(h1s->subs->tasklet);
tasklet_wakeup(h1s->subs->tasklet, TASK_WOKEN_IO);
h1s->subs->events &= ~SUB_RETRY_SEND;
if (!h1s->subs->events)
h1s->subs = NULL;
}
}
/* alerts the data layer following this sequence :
* - if the h1s' data layer is subscribed to recv, then it's woken up for recv
* - if its subscribed to send, then it's woken up for send
* - if it was subscribed to neither, its ->wake() callback is called
/* Alerts the data layer by waking it up. TASK_WOKEN_MSG state is used by
* default and if the data layer is also subscribed to recv or send,
* TASK_WOKEN_IO is added.
*/
static void h1_alert(struct h1s *h1s)
{
unsigned int state = TASK_WOKEN_MSG;
TRACE_POINT(H1_EV_STRM_WAKE, h1s->h1c->conn, h1s);
if (!h1s_sc(h1s))
return;
if (h1s->subs) {
h1_wake_stream_for_recv(h1s);
h1_wake_stream_for_send(h1s);
}
else if (h1s_sc(h1s) && h1s_sc(h1s)->app_ops->wake != NULL) {
TRACE_POINT(H1_EV_STRM_WAKE, h1s->h1c->conn, h1s);
h1s_sc(h1s)->app_ops->wake(h1s_sc(h1s));
h1s->subs->events = 0;
h1s->subs = NULL;
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(h1s_sc(h1s)->wait_event.tasklet, state);
}
/* Try to send an HTTP error with h1c->errcode status code. It returns 1 on success

View File

@ -101,6 +101,7 @@ struct h2c {
struct wait_event wait_event; /* To be used if we're waiting for I/Os */
struct list *next_tasklet; /* which applet to wake up next (NULL by default) */
uint32_t streams_hard_limit; /* maximum number of concurrent streams supported locally */
};
@ -488,9 +489,14 @@ static int h2_be_glitches_threshold = 0; /* backend's max glitches
static int h2_fe_glitches_threshold = 0; /* frontend's max glitches: unlimited */
static uint h2_be_rxbuf = 0; /* backend's default total rxbuf (bytes) */
static uint h2_fe_rxbuf = 0; /* frontend's default total rxbuf (bytes) */
static unsigned int h2_be_max_frames_at_once = 0; /* backend value: 0=no limit */
static unsigned int h2_fe_max_frames_at_once = 0; /* frontend value: 0=no limit */
static unsigned int h2_fe_max_rst_at_once = 0; /* frontend value: 0=no limit */
static unsigned int h2_settings_max_concurrent_streams = 100; /* default value */
static unsigned int h2_be_settings_max_concurrent_streams = 0; /* backend value */
static unsigned int h2_fe_settings_max_concurrent_streams = 0; /* frontend value */
static unsigned int h2_fe_min_concurrent_streams = 1; /* minimum concurrent streams when using rq-load */
static unsigned int h2_fe_max_rq_load = ~0; /* max rq load for FE dynamic MCS sizing: 0=default runqueue depth, ~0=ignore */
static int h2_settings_max_frame_size = 0; /* unset */
static int h2_settings_log_errors = H2_ERR_LOG_ERR_STRM;
@ -744,7 +750,8 @@ static inline int h2c_may_expire(const struct h2c *h2c)
/* returns the number of max concurrent streams permitted on a connection,
* depending on its side (frontend or backend), falling back to the default
* h2_settings_max_concurrent_streams. It may even be zero.
* h2_settings_max_concurrent_streams. It may even be zero. It only relies
* on configuration settings.
*/
static inline int h2c_max_concurrent_streams(const struct h2c *h2c)
{
@ -755,6 +762,22 @@ static inline int h2c_max_concurrent_streams(const struct h2c *h2c)
h2_fe_settings_max_concurrent_streams;
ret = ret ? ret : h2_settings_max_concurrent_streams;
/* if h2_fe_max_rq_load is set, adjust the max concurrent streams
* according to it and the current load.
*/
if (!(h2c->flags & H2_CF_IS_BACK) && h2_fe_max_rq_load != ~0) {
uint limit = h2_fe_max_rq_load ? h2_fe_max_rq_load : global.tune.runqueue_depth;
uint load = MAX(swrate_avg(th_ctx->rq_tot_peak, RQ_LOAD_SAMPLES), th_ctx->rq_total);
/* divide limits by the square of the ratio of current load to
* the limit so as to react fast.
*/
if (load > limit)
ret = (uint64_t)ret * limit / load * limit / load;
if (ret < h2_fe_min_concurrent_streams)
ret = h2_fe_min_concurrent_streams;
}
return ret;
}
@ -1114,7 +1137,7 @@ static inline void h2c_restart_reading(const struct h2c *h2c, int consider_buffe
/* returns true if the front connection has too many stream connectors attached */
static inline int h2_frt_has_too_many_sc(const struct h2c *h2c)
{
return h2c->nb_sc > h2c_max_concurrent_streams(h2c) ||
return h2c->nb_sc > h2c->streams_hard_limit ||
unlikely(conn_reverse_in_preconnect(h2c->conn));
}
@ -1425,7 +1448,7 @@ static int h2_init(struct connection *conn, struct proxy *prx, struct session *s
/* Initialise the context. */
h2c->st0 = H2_CS_PREFACE;
h2c->conn = conn;
h2c->streams_limit = h2c_max_concurrent_streams(h2c);
h2c->streams_limit = h2c->streams_hard_limit = h2c_max_concurrent_streams(h2c);
nb_rxbufs = (h2c->flags & H2_CF_IS_BACK) ? h2_be_rxbuf : h2_fe_rxbuf;
nb_rxbufs = (nb_rxbufs + global.tune.bufsize - 9 - 1) / (global.tune.bufsize - 9);
nb_rxbufs = MAX(nb_rxbufs, h2c->streams_limit);
@ -1668,9 +1691,10 @@ static void __maybe_unused h2s_notify_recv(struct h2s *h2s)
TRACE_POINT(H2_EV_STRM_WAKE, h2s->h2c->conn, h2s);
if (h2s->h2c->next_tasklet ||
(th_ctx->current && th_ctx->current->process == h2_io_cb))
h2s->h2c->next_tasklet = tasklet_wakeup_after(h2s->h2c->next_tasklet, h2s->subs->tasklet);
h2s->h2c->next_tasklet = tasklet_wakeup_after(h2s->h2c->next_tasklet, h2s->subs->tasklet,
TASK_WOKEN_IO);
else
tasklet_wakeup(h2s->subs->tasklet);
tasklet_wakeup(h2s->subs->tasklet, TASK_WOKEN_IO);
h2s->subs->events &= ~SUB_RETRY_RECV;
if (!h2s->subs->events)
h2s->subs = NULL;
@ -1683,7 +1707,7 @@ static void __maybe_unused h2s_notify_send(struct h2s *h2s)
if (h2s->subs && h2s->subs->events & SUB_RETRY_SEND) {
TRACE_POINT(H2_EV_STRM_WAKE, h2s->h2c->conn, h2s);
h2s->flags |= H2_SF_NOTIFIED;
tasklet_wakeup(h2s->subs->tasklet);
tasklet_wakeup(h2s->subs->tasklet, TASK_WOKEN_IO);
h2s->subs->events &= ~SUB_RETRY_SEND;
if (!h2s->subs->events)
h2s->subs = NULL;
@ -1694,26 +1718,27 @@ static void __maybe_unused h2s_notify_send(struct h2s *h2s)
}
}
/* alerts the data layer, trying to wake it up by all means, following
* this sequence :
* - if the h2s' data layer is subscribed to recv, then it's woken up for recv
* - if its subscribed to send, then it's woken up for send
* - if it was subscribed to neither, its ->wake() callback is called
* It is safe to call this function with a closed stream which doesn't have a
* stream connector anymore.
/* alerts the data layer by waking it up. TASK_WOKEN_MSG state is used by
* default and if the data layer is also subscribed to recv or send,
 * TASK_WOKEN_IO is added. But first, we check whether the shut tasklet
 * must be woken up instead.
*/
static void __maybe_unused h2s_alert(struct h2s *h2s)
{
TRACE_ENTER(H2_EV_H2S_WAKE, h2s->h2c->conn, h2s);
if (!h2s->subs && (h2s->flags & (H2_SF_WANT_SHUTR | H2_SF_WANT_SHUTW)))
tasklet_wakeup(h2s->shut_tl);
else if (h2s_sc(h2s)) {
unsigned int state = TASK_WOKEN_MSG;
if (h2s->subs ||
(h2s->flags & (H2_SF_WANT_SHUTR | H2_SF_WANT_SHUTW))) {
h2s_notify_recv(h2s);
h2s_notify_send(h2s);
}
else if (h2s_sc(h2s) && h2s_sc(h2s)->app_ops->wake != NULL) {
TRACE_POINT(H2_EV_STRM_WAKE, h2s->h2c->conn, h2s);
h2s_sc(h2s)->app_ops->wake(h2s_sc(h2s));
if (h2s->subs) {
if (h2s->subs->events & SUB_RETRY_SEND)
h2s->flags |= H2_SF_NOTIFIED;
h2s->subs->events = 0;
h2s->subs = NULL;
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(h2s_sc(h2s)->wait_event.tasklet, state);
}
TRACE_LEAVE(H2_EV_H2S_WAKE, h2s->h2c->conn, h2s);
@ -1741,9 +1766,9 @@ static inline int _h2c_report_glitch(struct h2c *h2c, int increment)
* to force clients to periodically reconnect.
*/
if (h2c->last_sid <= 0 ||
h2c->last_sid > h2c->max_id + 2 * h2c_max_concurrent_streams(h2c)) {
h2c->last_sid > h2c->max_id + 2 * h2c->streams_hard_limit) {
/* not set yet or was too high */
h2c->last_sid = h2c->max_id + 2 * h2c_max_concurrent_streams(h2c);
h2c->last_sid = h2c->max_id + 2 * h2c->streams_hard_limit;
h2c_send_goaway_error(h2c, NULL);
}
@ -2105,7 +2130,7 @@ static struct h2s *h2c_frt_stream_new(struct h2c *h2c, int id, struct buffer *in
/* Cannot handle stream if active reversed connection is not yet accepted. */
BUG_ON(conn_reverse_in_preconnect(h2c->conn));
if (h2c->nb_streams >= h2c_max_concurrent_streams(h2c)) {
if (h2c->nb_streams >= h2c->streams_hard_limit) {
h2c_report_glitch(h2c, 1, "HEADERS frame causing MAX_CONCURRENT_STREAMS to be exceeded");
TRACE_ERROR("HEADERS frame causing MAX_CONCURRENT_STREAMS to be exceeded", H2_EV_H2S_NEW|H2_EV_RX_FRAME|H2_EV_RX_HDR, h2c->conn);
session_inc_http_req_ctr(sess);
@ -2285,7 +2310,7 @@ static int h2c_send_settings(struct h2c *h2c)
chunk_memcat(&buf, str, 6);
}
mcs = h2c_max_concurrent_streams(h2c);
mcs = h2c->streams_hard_limit;
if (mcs != 0) {
char str[6] = "\x00\x03"; /* max_concurrent_streams */
@ -2906,8 +2931,8 @@ static int h2c_handle_settings(struct h2c *h2c)
case H2_SETTINGS_MAX_CONCURRENT_STREAMS:
if (h2c->flags & H2_CF_IS_BACK) {
/* the limit is only for the backend; for the frontend it is our limit */
if ((unsigned int)arg > h2c_max_concurrent_streams(h2c))
arg = h2c_max_concurrent_streams(h2c);
if ((unsigned int)arg > h2c->streams_hard_limit)
arg = h2c->streams_hard_limit;
h2c->streams_limit = arg;
}
break;
@ -3287,7 +3312,7 @@ static int h2c_handle_window_update(struct h2c *h2c, struct h2s *h2s)
goto out0;
}
inc = h2_get_n32(&h2c->dbuf, 0);
inc = h2_get_n32(&h2c->dbuf, 0) & 0x7FFFFFFF;
if (h2c->dsi != 0) {
/* stream window update */
@ -3382,7 +3407,7 @@ static int h2c_handle_goaway(struct h2c *h2c)
return 0;
}
last = h2_get_n32(&h2c->dbuf, 0);
last = h2_get_n32(&h2c->dbuf, 0) & 0x7FFFFFFF; // mask R bit
h2c->errcode = h2_get_n32(&h2c->dbuf, 4);
if (h2c->last_sid < 0)
h2c->last_sid = last;
@ -3559,7 +3584,7 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
goto out; // IDLE but too many sc still present
}
else if (h2_fe_max_total_streams &&
h2c->stream_cnt >= h2_fe_max_total_streams + h2c_max_concurrent_streams(h2c)) {
h2c->stream_cnt >= h2_fe_max_total_streams + h2c->streams_hard_limit) {
/* We've already told this client we were going to close a
* while ago and apparently it didn't care, so it's time to
* stop processing its requests for real.
@ -3686,9 +3711,9 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
* ID.
*/
if (h2c->last_sid <= 0 ||
h2c->last_sid > h2c->max_id + 2 * h2c_max_concurrent_streams(h2c)) {
h2c->last_sid > h2c->max_id + 2 * h2c->streams_hard_limit) {
/* not set yet or was too high */
h2c->last_sid = h2c->max_id + 2 * h2c_max_concurrent_streams(h2c);
h2c->last_sid = h2c->max_id + 2 * h2c->streams_hard_limit;
h2c_send_goaway_error(h2c, NULL);
}
}
@ -4217,6 +4242,8 @@ static void h2_process_demux(struct h2c *h2c)
struct h2_fh hdr;
unsigned int padlen = 0;
int32_t old_iw = h2c->miw;
uint frames_budget = 0;
uint rst_budget = 0;
TRACE_ENTER(H2_EV_H2C_WAKE, h2c->conn);
@ -4305,6 +4332,14 @@ static void h2_process_demux(struct h2c *h2c)
}
}
if (h2c->flags & H2_CF_IS_BACK) {
frames_budget = h2_be_max_frames_at_once;
}
else {
frames_budget = h2_fe_max_frames_at_once;
rst_budget = h2_fe_max_rst_at_once;
}
/* process as many incoming frames as possible below */
while (1) {
int ret = 0;
@ -4607,6 +4642,29 @@ static void h2_process_demux(struct h2c *h2c)
h2c->st0 = H2_CS_FRAME_H;
}
}
/* If more frames remain in the buffer, let's first check if we've
* depleted the frames processing budget. Consuming the RST budget
 * makes the tasklet go to TL_BULK so it gets a lower priority than
 * other processing, since RST floods are often used by attacks, while
 * other frame types just yield normally.
*/
if (b_data(&h2c->dbuf)) {
if (h2c->dft == H2_FT_RST_STREAM && (rst_budget && !--rst_budget)) {
/* we've consumed all RST frames permitted by
* the budget, we have to yield now.
*/
tasklet_wakeup(h2c->wait_event.tasklet, 0);
break;
}
else if ((frames_budget && !--frames_budget)) {
/* we've consumed all frames permitted by the
* budget, we have to yield now.
*/
tasklet_wakeup(h2c->wait_event.tasklet);
break;
}
}
}
if (h2c_update_strm_rx_win(h2c) &&
@ -4689,16 +4747,7 @@ static void h2_resume_each_sending_h2s(struct h2c *h2c, struct list *head)
continue;
}
if (h2s->subs && h2s->subs->events & SUB_RETRY_SEND) {
h2s->flags |= H2_SF_NOTIFIED;
tasklet_wakeup(h2s->subs->tasklet);
h2s->subs->events &= ~SUB_RETRY_SEND;
if (!h2s->subs->events)
h2s->subs = NULL;
}
else if (h2s->flags & (H2_SF_WANT_SHUTR|H2_SF_WANT_SHUTW)) {
tasklet_wakeup(h2s->shut_tl);
}
h2s_notify_send(h2s);
}
TRACE_LEAVE(H2_EV_H2C_SEND|H2_EV_H2S_WAKE, h2c->conn);
@ -5679,7 +5728,7 @@ static void h2_detach(struct sedesc *sd)
if (eb_is_empty(&h2c->streams_by_id)) {
if (!h2c->conn->owner) {
/* Session insertion above has failed and connection is idle, remove it. */
h2c->conn->mux->destroy(h2c);
CALL_MUX_NO_RET(h2c->conn->mux, destroy(h2c));
TRACE_DEVEL("leaving on error after killing outgoing connection", H2_EV_STRM_END|H2_EV_H2C_ERR);
return;
}
@ -5692,7 +5741,7 @@ static void h2_detach(struct sedesc *sd)
/* Ensure session can keep a new idle connection. */
if (session_check_idle_conn(sess, h2c->conn) != 0) {
h2c->conn->mux->destroy(h2c);
CALL_MUX_NO_RET(h2c->conn->mux, destroy(h2c));
TRACE_DEVEL("leaving without reusable idle connection", H2_EV_STRM_END);
return;
}
@ -5723,7 +5772,7 @@ static void h2_detach(struct sedesc *sd)
if (!srv_add_to_idle_list(objt_server(h2c->conn->target), h2c->conn, 1)) {
/* The server doesn't want it, let's kill the connection right away */
h2c->conn->mux->destroy(h2c);
CALL_MUX_NO_RET(h2c->conn->mux, destroy(h2c));
TRACE_DEVEL("leaving on error after killing outgoing connection", H2_EV_STRM_END|H2_EV_H2C_ERR);
return;
}
@ -7817,7 +7866,6 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
struct htx *h2s_htx = NULL;
struct htx *buf_htx = NULL;
struct buffer *rxbuf = NULL;
struct htx_ret htxret;
size_t ret = 0;
uint prev_h2c_flags = h2c->flags;
unsigned long long prev_body_len = h2s->body_len;
@ -7852,17 +7900,7 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
goto end;
}
htxret = htx_xfer_blks(buf_htx, h2s_htx, count, HTX_BLK_UNUSED);
count -= htxret.ret;
if (h2s_htx->flags & HTX_FL_PARSING_ERROR) {
buf_htx->flags |= HTX_FL_PARSING_ERROR;
if (htx_is_empty(buf_htx))
se_fl_set(h2s->sd, SE_FL_EOI);
}
else if (htx_is_empty(h2s_htx)) {
buf_htx->flags |= (h2s_htx->flags & HTX_FL_EOM);
}
count -= htx_xfer(buf_htx, h2s_htx, count, HTX_XFER_DEFAULT);
htx_to_buf(buf_htx, buf);
htx_to_buf(h2s_htx, rxbuf);
@ -7891,13 +7929,7 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
/* tell the stream layer whether there are data left or not */
if (h2s_rxbuf_cnt(h2s)) {
/* Note that parsing errors can also arrive here, we may need
* to propagate errors upstream otherwise no new activity will
* unblock them.
*/
se_fl_set(h2s->sd, SE_FL_RCV_MORE | SE_FL_WANT_ROOM);
if (h2s_htx && h2s_htx->flags & HTX_FL_PARSING_ERROR)
h2s_propagate_term_flags(h2c, h2s);
BUG_ON_HOT(!buf->data);
}
else {
@ -8713,20 +8745,56 @@ static int h2_parse_max_concurrent_streams(char **args, int section_type, struct
char **err)
{
uint *vptr;
if (too_many_args(1, args, err, NULL))
return -1;
int arg;
/* backend/frontend/default */
vptr = (args[0][8] == 'b') ? &h2_be_settings_max_concurrent_streams :
(args[0][8] == 'f') ? &h2_fe_settings_max_concurrent_streams :
&h2_settings_max_concurrent_streams;
if (args[0][8] != 'f' && too_many_args(1, args, err, NULL))
return -1;
*vptr = atoi(args[1]);
if ((int)*vptr < 0) {
memprintf(err, "'%s' expects a positive numeric value.", args[0]);
return -1;
}
if (args[0][8] != 'f')
goto leave;
/* tune.h2.fe. here */
for (arg = 2; *args[arg]; arg += 2) {
if (strcmp(args[arg], "rq-load") == 0) {
if (strcmp(args[arg + 1], "ignore") == 0)
h2_fe_max_rq_load = ~0;
else if (strcmp(args[arg + 1], "auto") == 0)
h2_fe_max_rq_load = 0;
else if (!*args[arg + 1] || (h2_fe_max_rq_load = atoi(args[arg + 1])) <= 0) {
memprintf(err, "'%s' expects a strictly positive run-queue length, or 'auto' or 'ignore'.", args[0]);
return -1;
}
}
else if (strcmp(args[arg], "min") == 0) {
if (!*args[arg + 1] || (h2_fe_min_concurrent_streams = atoi(args[arg + 1])) <= 0) {
memprintf(err, "'%s' expects a strictly positive minimum number of streams.", args[0]);
return -1;
}
if (h2_fe_min_concurrent_streams > h2_fe_settings_max_concurrent_streams) {
memprintf(err, "'%s' minimum number of streams (%u) cannot be higher than the maximum (%u).",
args[0], h2_fe_min_concurrent_streams, h2_fe_settings_max_concurrent_streams);
return -1;
}
}
else if (*args[arg]) {
memprintf(err, "'%s' only supports 'rq-load' or 'min' after the numeric value, but found '%s'.", args[0], args[arg]);
return -1;
}
}
leave:
return 0;
}
@ -8751,6 +8819,30 @@ static int h2_parse_max_total_streams(char **args, int section_type, struct prox
return 0;
}
/* config parser for global "tune.h2.{be.,fe.,}max-{frames,rst}-at-once" */
static int h2_parse_max_frames_at_once(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
{
uint *vptr;
/* backend/frontend/default */
if (strcmp(args[0], "tune.h2.be.max-frames-at-once") == 0)
vptr = &h2_be_max_frames_at_once;
else if (strcmp(args[0], "tune.h2.fe.max-frames-at-once") == 0)
vptr = &h2_fe_max_frames_at_once;
else if (strcmp(args[0], "tune.h2.fe.max-rst-at-once") == 0)
vptr = &h2_fe_max_rst_at_once;
else
BUG_ON(1, "unhandled keyword");
if (too_many_args(1, args, err, NULL))
return -1;
*vptr = atoi(args[1]);
return 0;
}
/* config parser for global "tune.h2.max-frame-size" */
static int h2_parse_max_frame_size(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
@ -8849,10 +8941,13 @@ static struct cfg_kw_list cfg_kws = {ILH, {
{ CFG_GLOBAL, "tune.h2.be.glitches-threshold", h2_parse_glitches_threshold },
{ CFG_GLOBAL, "tune.h2.be.initial-window-size", h2_parse_initial_window_size },
{ CFG_GLOBAL, "tune.h2.be.max-concurrent-streams", h2_parse_max_concurrent_streams },
{ CFG_GLOBAL, "tune.h2.be.max-frames-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.be.rxbuf", h2_parse_rxbuf },
{ CFG_GLOBAL, "tune.h2.fe.glitches-threshold", h2_parse_glitches_threshold },
{ CFG_GLOBAL, "tune.h2.fe.initial-window-size", h2_parse_initial_window_size },
{ CFG_GLOBAL, "tune.h2.fe.max-concurrent-streams", h2_parse_max_concurrent_streams },
{ CFG_GLOBAL, "tune.h2.fe.max-frames-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.fe.max-rst-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.fe.max-total-streams", h2_parse_max_total_streams },
{ CFG_GLOBAL, "tune.h2.fe.rxbuf", h2_parse_rxbuf },
{ CFG_GLOBAL, "tune.h2.header-table-size", h2_parse_header_table_size },

View File

@ -252,20 +252,21 @@ struct task *mux_pt_io_cb(struct task *t, void *tctx, unsigned int status)
TRACE_ENTER(PT_EV_CONN_WAKE, ctx->conn);
if (!se_fl_test(ctx->sd, SE_FL_ORPHAN)) {
unsigned int state = TASK_WOKEN_MSG;
/* There's a small race condition.
* mux_pt_io_cb() is only supposed to be called if we have no
* stream attached. However, maybe the tasklet got woken up,
* and this connection was then attached to a new stream.
* If this happened, just wake the tasklet up if anybody
* subscribed to receive events, and otherwise call the wake
* method, to make sure the event is noticed.
* If this happened, just wake the tasklet up.
*/
if (ctx->conn->subs) {
ctx->conn->subs->events = 0;
tasklet_wakeup(ctx->conn->subs->tasklet);
ctx->conn->subs = NULL;
} else if (pt_sc(ctx)->app_ops->wake)
pt_sc(ctx)->app_ops->wake(pt_sc(ctx));
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(pt_sc(ctx)->wait_event.tasklet, state);
TRACE_DEVEL("leaving waking up SC", PT_EV_CONN_WAKE, ctx->conn);
return t;
}
@ -360,14 +361,9 @@ static int mux_pt_wake(struct connection *conn)
int ret = 0;
TRACE_ENTER(PT_EV_CONN_WAKE, ctx->conn);
if (!se_fl_test(ctx->sd, SE_FL_ORPHAN)) {
ret = pt_sc(ctx)->app_ops->wake ? pt_sc(ctx)->app_ops->wake(pt_sc(ctx)) : 0;
if (ret < 0) {
TRACE_DEVEL("leaving waking up SC", PT_EV_CONN_WAKE, ctx->conn);
return ret;
}
} else {
if (!se_fl_test(ctx->sd, SE_FL_ORPHAN))
tasklet_wakeup(pt_sc(ctx)->wait_event.tasklet, TASK_WOKEN_MSG);
else {
conn_ctrl_drain(conn);
if (conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH)) {
TRACE_DEVEL("leaving destroying PT context", PT_EV_CONN_WAKE, ctx->conn);

View File

@ -506,19 +506,23 @@ static struct ncbuf *qcs_get_ncbuf(struct qcs *qcs, struct ncbuf *ncbuf)
return ncbuf;
}
/* Notify an eventual subscriber on <qcs> or else wakeup up the stconn layer if
* initialized.
/* Notify the stconn layer, if initialized, with TASK_WOKEN_MSG state and
 * possibly TASK_WOKEN_IO.
*/
static void qcs_alert(struct qcs *qcs)
{
unsigned int state = TASK_WOKEN_MSG;
TRACE_POINT(QMUX_EV_STRM_WAKE, qcs->qcc->conn, qcs);
if (!qcs_sc(qcs))
return;
if (qcs->subs) {
qcs_notify_recv(qcs);
qcs_notify_send(qcs);
}
else if (qcs_sc(qcs) && qcs->sd->sc->app_ops->wake) {
TRACE_POINT(QMUX_EV_STRM_WAKE, qcs->qcc->conn, qcs);
qcs->sd->sc->app_ops->wake(qcs->sd->sc);
qcs->subs->events = 0;
qcs->subs = NULL;
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(qcs_sc(qcs)->wait_event.tasklet, state);
}
int qcs_subscribe(struct qcs *qcs, int event_type, struct wait_event *es)
@ -548,7 +552,7 @@ void qcs_notify_recv(struct qcs *qcs)
{
if (qcs->subs && qcs->subs->events & SUB_RETRY_RECV) {
TRACE_POINT(QMUX_EV_STRM_WAKE, qcs->qcc->conn, qcs);
tasklet_wakeup(qcs->subs->tasklet);
tasklet_wakeup(qcs->subs->tasklet, TASK_WOKEN_IO);
qcs->subs->events &= ~SUB_RETRY_RECV;
if (!qcs->subs->events)
qcs->subs = NULL;
@ -559,7 +563,7 @@ void qcs_notify_send(struct qcs *qcs)
{
if (qcs->subs && qcs->subs->events & SUB_RETRY_SEND) {
TRACE_POINT(QMUX_EV_STRM_WAKE, qcs->qcc->conn, qcs);
tasklet_wakeup(qcs->subs->tasklet);
tasklet_wakeup(qcs->subs->tasklet, TASK_WOKEN_IO);
qcs->subs->events &= ~SUB_RETRY_SEND;
if (!qcs->subs->events)
qcs->subs = NULL;

View File

@ -952,7 +952,7 @@ static void spop_strm_notify_recv(struct spop_strm *spop_strm)
{
if (spop_strm->subs && (spop_strm->subs->events & SUB_RETRY_RECV)) {
TRACE_POINT(SPOP_EV_STRM_WAKE, spop_strm->spop_conn->conn, spop_strm);
tasklet_wakeup(spop_strm->subs->tasklet);
tasklet_wakeup(spop_strm->subs->tasklet, TASK_WOKEN_IO);
spop_strm->subs->events &= ~SUB_RETRY_RECV;
if (!spop_strm->subs->events)
spop_strm->subs = NULL;
@ -965,33 +965,33 @@ static void spop_strm_notify_send(struct spop_strm *spop_strm)
if (spop_strm->subs && (spop_strm->subs->events & SUB_RETRY_SEND)) {
TRACE_POINT(SPOP_EV_STRM_WAKE, spop_strm->spop_conn->conn, spop_strm);
spop_strm->flags |= SPOP_SF_NOTIFIED;
tasklet_wakeup(spop_strm->subs->tasklet);
tasklet_wakeup(spop_strm->subs->tasklet, TASK_WOKEN_IO);
spop_strm->subs->events &= ~SUB_RETRY_SEND;
if (!spop_strm->subs->events)
spop_strm->subs = NULL;
}
}
/* Alerts the data layer, trying to wake it up by all means, following
* this sequence :
* - if the spop stream' data layer is subscribed to recv, then it's woken up
* for recv
* - if its subscribed to send, then it's woken up for send
* - if it was subscribed to neither, its ->wake() callback is called
* It is safe to call this function with a closed stream which doesn't have a
* stream connector anymore.
/* Alerts the data layer by waking it up. TASK_WOKEN_MSG state is used by
* default and if the data layer is also subscribed to recv or send,
* TASK_WOKEN_IO is added.
*/
static void spop_strm_alert(struct spop_strm *spop_strm)
{
unsigned int state = TASK_WOKEN_MSG;
TRACE_POINT(SPOP_EV_STRM_WAKE, spop_strm->spop_conn->conn, spop_strm);
if (!spop_strm_sc(spop_strm))
return;
if (spop_strm->subs) {
spop_strm_notify_recv(spop_strm);
spop_strm_notify_send(spop_strm);
}
else if (spop_strm_sc(spop_strm) && spop_strm_sc(spop_strm)->app_ops->wake != NULL) {
TRACE_POINT(SPOP_EV_STRM_WAKE, spop_strm->spop_conn->conn, spop_strm);
spop_strm_sc(spop_strm)->app_ops->wake(spop_strm_sc(spop_strm));
if (spop_strm->subs->events & SUB_RETRY_SEND)
spop_strm->flags |= SPOP_SF_NOTIFIED;
spop_strm->subs->events = 0;
spop_strm->subs = NULL;
state |= TASK_WOKEN_IO;
}
tasklet_wakeup(spop_strm_sc(spop_strm)->wait_event.tasklet, state);
}
/* Writes the 32-bit frame size <len> at address <frame> */
@@ -2023,13 +2023,7 @@ static void spop_resume_each_sending_spop_strm(struct spop_conn *spop_conn, stru
continue;
}
if (spop_strm->subs && spop_strm->subs->events & SUB_RETRY_SEND) {
spop_strm->flags |= SPOP_SF_NOTIFIED;
tasklet_wakeup(spop_strm->subs->tasklet);
spop_strm->subs->events &= ~SUB_RETRY_SEND;
if (!spop_strm->subs->events)
spop_strm->subs = NULL;
}
spop_strm_notify_send(spop_strm);
}
TRACE_LEAVE(SPOP_EV_SPOP_CONN_SEND|SPOP_EV_STRM_WAKE, spop_conn->conn);
@@ -3019,7 +3013,7 @@ static void spop_detach(struct sedesc *sd)
if (eb_is_empty(&spop_conn->streams_by_id)) {
if (!spop_conn->conn->owner) {
/* Session insertion above has failed and connection is idle, remove it. */
spop_conn->conn->mux->destroy(spop_conn);
CALL_MUX_NO_RET(spop_conn->conn->mux, destroy(spop_conn));
TRACE_DEVEL("leaving on error after killing outgoing connection", SPOP_EV_STRM_END|SPOP_EV_SPOP_CONN_ERR);
return;
}
@@ -3032,7 +3026,7 @@ static void spop_detach(struct sedesc *sd)
/* Ensure session can keep a new idle connection. */
if (session_check_idle_conn(sess, spop_conn->conn) != 0) {
spop_conn->conn->mux->destroy(spop_conn);
CALL_MUX_NO_RET(spop_conn->conn->mux, destroy(spop_conn));
TRACE_DEVEL("leaving without reusable idle connection", SPOP_EV_STRM_END);
return;
}
@@ -3063,7 +3057,7 @@ static void spop_detach(struct sedesc *sd)
if (!srv_add_to_idle_list(objt_server(spop_conn->conn->target), spop_conn->conn, 1)) {
/* The server doesn't want it, let's kill the connection right away */
spop_conn->conn->mux->destroy(spop_conn);
CALL_MUX_NO_RET(spop_conn->conn->mux, destroy(spop_conn));
TRACE_DEVEL("leaving on error after killing outgoing connection", SPOP_EV_STRM_END|SPOP_EV_SPOP_CONN_ERR);
return;
}


@@ -121,11 +121,11 @@ void mworker_proc_list_to_env()
if (child->options & PROC_O_TYPE_MASTER)
type = 'm';
else if (child->options &= PROC_O_TYPE_WORKER)
else if (child->options & PROC_O_TYPE_WORKER)
type = 'w';
if (child->pid > -1)
memprintf(&msg, "%s|type=%c;fd=%d;cfd=%d;pid=%d;reloads=%d;failedreloads=%d;timestamp=%d;id=%s;version=%s", msg ? msg : "", type, child->ipc_fd[0], child->ipc_fd[1], child->pid, child->reloads, child->failedreloads, child->timestamp, child->id ? child->id : "", child->version);
memprintf(&msg, "%s|type=%c;fd=%d;cfd=%d;pid=%d;reloads=%d;failedreloads=%d;timestamp=%d;id=%s;version=%s", msg ? msg : "", type, child->ipc_fd[0], child->ipc_fd[1], child->pid, child->reloads, child->failedreloads, child->timestamp, child->id ? child->id : "", child->version ? child->version : "");
}
if (msg)
setenv("HAPROXY_PROCESSES", msg, 1);
@@ -223,7 +223,20 @@ int mworker_env_to_proc_list()
}
}
if (child->pid > 0) {
LIST_APPEND(&proc_list, &child->list);
struct list *insert_pt = &proc_list;
struct mworker_proc *pos;
/* insert at the right position in ASC reload order;
* search from the tail since items are sorted most of
* the time
*/
list_for_each_entry_rev(pos, &proc_list, list) {
if (pos->reloads <= child->reloads) {
insert_pt = &pos->list;
break;
}
}
LIST_INSERT(insert_pt, &child->list);
} else {
mworker_free_child(child);
}
@@ -232,28 +245,18 @@ int mworker_env_to_proc_list()
/* set the leaving processes once we know which number of reloads are the current processes */
list_for_each_entry(child, &proc_list, list) {
if (child->reloads > 0)
if (child->reloads > 0 && !(child->options & PROC_O_TYPE_MASTER))
child->options |= PROC_O_LEAVING;
}
unsetenv("HAPROXY_PROCESSES");
no_env:
/* couldn't find the master element, exiting */
if (!proc_self) {
proc_self = mworker_proc_new();
if (!proc_self) {
ha_alert("Cannot allocate process structures.\n");
err = -1;
goto out;
}
proc_self->options |= PROC_O_TYPE_MASTER;
proc_self->pid = pid;
proc_self->timestamp = 0; /* we don't know the startime anymore */
LIST_APPEND(&proc_list, &proc_self->list);
ha_warning("The master internals are corrupted or it was started with a too old version (< 1.9). Please restart the master process.\n");
err = -1;
ha_alert("Failed to deserialize data for the master process. Unrecoverable error, exiting.\n");
goto out;
}
out:
@@ -825,7 +828,7 @@ void mworker_cleanup_proc()
struct cli_showproc_ctx {
int debug;
int next_reload; /* reload number to resume from, 0 = from the beginning */
int resume_reload; /* reload count of the last flushed old worker row, 0 = none yet */
};
/* Append a single worker row to trash (shared between current/old sections) */
@@ -860,7 +863,7 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
chunk_reset(&trash);
if (ctx->next_reload == 0) {
if (ctx->resume_reload == 0) {
memprintf(&reloadtxt, "%d [failed: %d]", proc_self->reloads, proc_self->failedreloads);
chunk_printf(&trash, "#%-14s %-15s %-15s %-15s %-15s", "<PID>", "<type>", "<reloads>", "<uptime>", "<version>");
if (ctx->debug)
@@ -878,12 +881,12 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
ha_free(&uptime);
/* displays current processes */
if (ctx->next_reload == 0)
if (ctx->resume_reload == 0)
chunk_appendf(&trash, "# workers\n");
list_for_each_entry(child, &proc_list, list) {
/* don't display current worker if we only need the next ones */
if (ctx->next_reload != 0)
if (ctx->resume_reload != 0)
continue;
if (!(child->options & PROC_O_TYPE_WORKER))
@@ -900,34 +903,69 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
return 0;
/* displays old processes */
if (old || ctx->next_reload) { /* there's more */
if (ctx->next_reload == 0)
if (old || ctx->resume_reload) { /* there's more */
int skip = ctx->resume_reload; /* if resuming, skip until we pass this reload count */
int prev_reload = 0; /* previous LEAVING entry's reload count during skip phase */
if (!ctx->resume_reload)
chunk_appendf(&trash, "# old workers\n");
list_for_each_entry(child, &proc_list, list) {
/* If we're resuming, skip entries that were already printed (reload >= ctx->next_reload) */
if (ctx->next_reload && child->reloads >= ctx->next_reload)
continue;
if (!(child->options & PROC_O_TYPE_WORKER))
continue;
if (child->options & PROC_O_LEAVING) {
cli_append_worker_row(ctx, child, date.tv_sec);
if (!(child->options & PROC_O_LEAVING))
continue;
/* Try to flush so we can resume after this reload on next page if the buffer is full. */
if (applet_putchk(appctx, &trash) == -1) {
/* resume at this reload (exclude it on next pass) */
ctx->next_reload = child->reloads; /* resume after entries >= this reload */
return 0;
/* When resuming after a flush failure, skip entries
* up to and including the last successfully flushed
* row (identified by its reload count). This is
* direction-agnostic: works whether the list is in
* ascending or descending reload order.
*
* If the target entry was deleted from proc_list
* (e.g. process exited between handler calls), we
* detect that we've passed its former position when
* two consecutive LEAVING entries straddle the skip
* value, i.e. one has reloads > skip and the next
* has reloads < skip (or vice versa). In that case
* we stop skipping and emit the current entry.
*/
if (skip) {
if (child->reloads == skip) {
skip = 0; /* found it, resume from the next entry */
prev_reload = 0;
continue;
}
if (prev_reload &&
((prev_reload > skip) != (child->reloads > skip))) {
/* Crossed where skip would have been —
* the entry was deleted. Stop skipping
* and fall through to emit this entry.
*/
skip = 0;
} else {
prev_reload = child->reloads;
continue;
}
chunk_reset(&trash);
}
cli_append_worker_row(ctx, child, date.tv_sec);
if (applet_putchk(appctx, &trash) == -1) {
/* ctx->resume_reload already holds the last
* flushed row or 0; don't update it here so
* the failed row will be replayed.
*/
return 0;
}
/* This row was successfully flushed, remember it */
ctx->resume_reload = child->reloads;
chunk_reset(&trash);
}
}
/* dump complete: reset resume cursor so next 'show proc' starts from the top */
ctx->next_reload = 0;
ctx->resume_reload = 0;
return 1;
}


@@ -2351,6 +2351,11 @@ static inline int peer_treat_definemsg(struct appctx *appctx, struct peer *p,
goto malformed_exit;
}
if (table_type < 0 || table_type >= PEER_KT_TYPES) {
TRACE_PROTO("ignore table definition message: unknown table type", PEERS_EV_SESS_IO|PEERS_EV_RX_MSG|PEERS_EV_PROTO_DEF, appctx, p);
goto ignore_msg;
}
table_keylen = intdecode(msg_cur, msg_end);
if (!*msg_cur) {
TRACE_ERROR("malformed table definition message: no key length", PEERS_EV_SESS_IO|PEERS_EV_RX_MSG|PEERS_EV_PROTO_ERR, appctx, p);

Some files were not shown because too many files have changed in this diff.