Compare commits


63 Commits

Christopher Faulet
4fd5cafe27 BUG/MEDIUM: htx: Fix htx_xfer() to consume more data than expected
When an htx DATA block is partially transferred, we must take care to remove
exactly the copied size. To do so, we must save the size of the last block
copied and not rely on the last data block after the copy. Indeed, data can
be merged with an existing DATA block, so the last block size can be larger
than the last part copied.

Because of this issue, it is possible to remove more data than
expected. Worse, this could lead to a crash by performing an integer
overflow on the block size.

No backport needed.
2026-03-27 17:19:12 +01:00
William Lallemand
d26bd9f978 BUG/MINOR: acme: fix task allocation leaked upon error
Fix a leak of the task object in acme_start_task() when one of the
conditions in the function fails.

Fix issue #3308.

Must be backported to 3.2 and later.
2026-03-27 16:58:49 +01:00
Olivier Houchard
506cfcb5d4 MINOR: connections: Enhance tune.idle-pool.shared
There are two settings to control idle connection sharing across
threads:
tune.idle-pool.shared, which enables or disables it, and
tune.takeover-other-tg-connections, which controls whether idle
connections can be taken from other thread groups.
Add a new keyword for tune.idle-pool.shared, "full", that lets you get
connections from other thread groups (equivalent to the "full" keyword for
tune.takeover-other-tg-connections). The "on" keyword is now
equivalent to the "restrict" one, which allowed getting connections from
other thread groups only when not doing it would result in a connection
failure (when reverse-http or strict-maxconn are used).
tune.takeover-other-tg-connections will be deprecated.
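As a sketch of the resulting configuration (keyword placement assumed from the description above; check the shipped documentation for the final syntax):

```
global
    # share idle connections between all threads, including across
    # thread groups (supersedes "tune.takeover-other-tg-connections full")
    tune.idle-pool.shared full
```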
2026-03-27 16:14:53 +01:00
Mia Kanashi
418f0c0bbe BUG/MEDIUM: acme: skip doing challenge if it is already valid
If the server returns an auth with status "valid", the client needs to
always skip it: CAs can recycle authorizations, and without this change
haproxy fails to obtain certificates in that case. It is also something
that is explicitly allowed and stated in the dns-persist-01 draft RFC.

Note that it would be better to change how haproxy does status polling
and implements the state machine, but that will take some thought and
time; this patch is a quick fix of the problem.

See:
https://github.com/letsencrypt/boulder/issues/2125
https://github.com/letsencrypt/pebble/issues/133

This must be backported to 3.2 and later.
2026-03-27 14:41:11 +01:00
Christopher Faulet
27d7c69e87 BUG/MINOR: http-ana: Only consider client abort for abortonclose
When the abortonclose option is enabled (by default since 3.3), the HTTP
rules can no longer yield if the client aborts. However, stream aborts were
also considered. So it was possible to interrupt yielding rules, especially
during response processing, while the client was still waiting for the
response.

So now, when the abortonclose option is enabled, we take care to only
consider client aborts to prevent HTTP rules from yielding.

Many thanks to @DirkyJerky for his detailed analysis.

This patch should fix the issue #3306. It should be backported as far as
2.8.
2026-03-27 11:18:40 +01:00
Christopher Faulet
d1c7e56585 BUG/MINOR: config: Properly test warnif_misplaced_* return values
warnif_misplaced_* functions return 1 when a warning is reported and 0
otherwise, so the caller must properly handle the return value.

When parsing a proxy, the ERR_WARN code must be added to the error code
instead of the raw return value. When a warning was reported, ERR_RETRYABLE
(1) was added instead of ERR_WARN.

And when tcp rules were parsed, warnings were ignored: messages were emitted
but the return values were discarded.

This patch should be backported to all stable versions.
2026-03-27 07:35:25 +01:00
Christopher Faulet
4e99cddde4 BUG/MINOR: config: Warn only if warnif_cond_conflicts report a conflict
When warnif_cond_conflicts() is called, we must take care to emit a warning
only when a conflict is reported. We cannot rely on the err_code variable
because some warnings may have already been reported. We now rely on the
errmsg variable: if it contains something, a warning is emitted. It is good
enough because warnif_cond_conflicts() only reports warnings.

This patch should fix the issue #3305. It is a 3.4-dev specific issue. No
backport needed.
2026-03-27 07:35:25 +01:00
Olivier Houchard
0e36267aac MEDIUM: server: remove a useless memset() in srv_update_check_addr_port.
Remove a memset() that should not be there and that tries to zero a NULL pointer.
2026-03-26 16:43:48 +01:00
Olivier Houchard
1b0dfff552 MEDIUM: connections: Enforce mux protocol requirements
When picking a mux, pay attention to its MX_FL_FRAMED flag. If it is set,
it means we explicitly want QUIC, so don't use that mux for any
protocol that is not QUIC.
2026-03-26 15:09:13 +01:00
Olivier Houchard
d3ad730d5f MINOR: protocols: Add a new proto_is_quic() function
Add a new function, proto_is_quic(), that returns true if the protocol
is QUIC (using a datagram socket but providing a stream transport).
2026-03-26 15:09:13 +01:00
Olivier Houchard
cca9245416 MINOR: checks: Store the protocol to be used in struct check
When parsing the check address, store the associated protocol too.
That way we can use a notation like quic4@address, and the right
protocol will be used. It is possible for checks to use a different
protocol than the server, i.e. we can have a QUIC server but want to run
TCP checks, so we can't just reuse whatever the server uses.
WIP: store the protocol in checks
2026-03-26 15:09:13 +01:00
Olivier Houchard
07edaed191 BUG/MEDIUM: check: Don't reuse the server xprt if we should not
Don't assume the check will reuse the server's xprt. It may not be true
if some settings such as the ALPN have been set and differ from the
server's. If the server is QUIC, and we want to use TCP for checks,
we certainly don't want to reuse its XPRT.
2026-03-26 15:09:13 +01:00
William Lallemand
1c1d9d2500 BUG/MINOR: acme: permission checks on the CLI
Permission checks on the CLI for ACME are missing.

This patch adds a check on the ACME commands
so they can only be run in admin mode.

ACME is still a feature in experimental-mode.

Initial report by Cameron Brown.

Must be backported to 3.2 and later.
2026-03-25 18:37:47 +01:00
William Lallemand
47987ccbd9 BUG/MINOR: ech: permission checks on the CLI
Permission checks on the CLI for ECH are missing.

This patch adds a check for "(add|set|del|show) ssl ech" commands
so they can only be run in admin mode.

ECH is still a feature in experimental-mode and is not compiled by
default.

Initial report by Cameron Brown.

Must be backported to 3.3.
2026-03-25 18:37:06 +01:00
William Lallemand
33041fe91f BUILD: tools: potential null pointer dereference in dl_collect_libs_cb
This patch fixes a warning that can be reproduced with gcc-8.5 on RHEL8
(gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28)).

This should fix issue #3303.

Must be backported everywhere 917e82f283 ("MINOR: debug: copy debug
symbols from /usr/lib/debug when present") was backported, which is
to branch 3.2 for now.
2026-03-23 21:52:56 +01:00
William Lallemand
8e250bba8f BUG/MINOR: acme/cli: fix argument check and error in 'acme challenge_ready'
Fix the check of arguments of the 'acme challenge_ready' command, which
was checking if all arguments are NULL instead of one of the arguments.

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
c7564c19a2 BUG/MINOR: acme: replace atol with len-bounded __strl2uic() for retry-after
Replace atol() with __strl2uic() in cases where the inputs are ISTs when
parsing the retry-after header. There's no risk of an error since it will
stop at the first non-digit.

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
efbf0f8ed1 BUG/MINOR: acme: free() DER buffer on a2base64url error path
In acme_req_finalize() the data buffer was only freed when a2base64url
succeeds. This patch moves the allocation so that the DER buffer is freed
in every case.

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
William Lallemand
52d8ee85e7 BUG/MINOR: acme: NULL check on my_strndup()
Add a NULL check on my_strndup().

Must be backported to 3.2 and later.
2026-03-23 14:39:55 +01:00
Christopher Faulet
38a7d8599d DOC: config: Reorder params for 'tcp-check expect' directive
The order of parameters for the 'tcp-check expect' directive is changed to
be the same as for 'http-check expect'.
2026-03-23 14:02:43 +01:00
Christopher Faulet
82afd36b6c DOC: config: Add missing 'status-code' param for 'http-check expect' directive
In the documentation of the 'http-check expect' directive, the parameter
'status-code' was missing. Let's add it.

This patch could be backported to all stable versions.
2026-03-23 14:02:43 +01:00
Christopher Faulet
ada33006ef MINOR: proxy: Add use-small-buffers option to set where to use small buffers
Thanks to previous commits, it is possible to use small buffers in different
places: to store the request when a connection is queued or when L7 retries
are enabled, or for health-check requests. However, there was no
configuration parameter to fine-tune small buffer use.

It is now possible, thanks to the proxy option "use-small-buffers".
Documentation was updated accordingly.
2026-03-23 14:02:43 +01:00
Christopher Faulet
163eba5c8c DOC: config: Fix alphabetical ordering of external-check directives
external-check directives were not in the right place. Let's fix it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
61d68f14b2 DOC: config: Fix alphabetical ordering of proxy options
external-check and idle-close-on-response options were not in the right
place. Let's fix it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
125cbecfa9 MINOR: proxy: Review options flags used to configure healthchecks
When healthchecks were configured for a proxy, an enum-like value was used
to specify the check's type. The idea was to reserve some values for future
types of healthcheck. But it is overkill: I doubt we will ever have
anything other than tcp and external checks. So the corresponding PR_O2
flags were slightly reviewed and a hole was filled.

Thanks to this change, some bits were released in options2 bitfield.
2026-03-23 14:02:43 +01:00
Christopher Faulet
a61ea0f414 MEDIUM: tcpcheck: Use small buffer if possible for healthchecks
If support for small buffers is enabled, we now try to use them for
healthcheck requests. First, we check whether the tcpcheck ruleset may use
small buffers: send rules using LF strings or too-large data are excluded.
The ability to use small buffers or not is set on the ruleset; all send
rules of the ruleset must be compatible. This info is then transferred to
the server's healthchecks relying on this ruleset.

Then, when a healthcheck is running and a send rule is evaluated, we try to
use small buffers if possible. On error, the ability to use small buffers
is removed and we retry with a regular buffer. It means that on the first
error, the support is disabled for the healthcheck and all subsequent runs
will use regular buffers.
2026-03-23 14:02:43 +01:00
Christopher Faulet
cd363e0246 MEDIUM: mux-h2: Stop dealing with HTX flags transfer in h2_rcv_buf()
In h2_rcv_buf(), HTX flags are transferred with data when htx_xfer() is
called, so there is no reason to continue to deal with them in the H2 mux.
In addition, there is no reason to set the SE_FL_EOI flag when a parsing
error was reported. This part was added before the stconn era; nowadays,
when an HTX parsing error is reported, an error on the sedesc should also
be reported.
2026-03-23 14:02:43 +01:00
Christopher Faulet
d257dd4563 Revert "BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream"
This reverts commit 44932b6c417e472d25039ec3d7b8bf14e07629bc.

The patch above was only necessary to handle partial headers or trailers
parsing. There was nothing to prevent the H2 multiplexer from starting to
add headers or trailers to an HTX message and stopping the processing on
error, leaving the HTX message with no EOH/EOT block.

From the HTX API point of view, this is unexpected. And this was fixed by
commit ba7dc46a9 ("BUG/MINOR: h2/h3: Never insert partial
headers/trailers in an HTX message").

So this patch can be reverted. It is important to not report a parsing
error too early, when there are still data to transfer to the upper layer.

This patch must be backported where 44932b6c4 was backported, but only
after backporting ba7dc46a9 first.
2026-03-23 14:02:43 +01:00
Christopher Faulet
39121ceca6 MEDIUM: tree-wide: Rely on htx_xfer() instead of htx_xfer_blks()
The htx_xfer() function replaces htx_xfer_blks(), so let's use it.
2026-03-23 14:02:43 +01:00
Christopher Faulet
c9a9fa813b MEDIUM: stconn: Use a small buffer if possible for L7 retries
When L7 retries are enabled and the request is small enough, a small buffer
is used instead of a regular one.
2026-03-23 14:02:43 +01:00
Christopher Faulet
181cd8ba8a MEDIUM: stream: Try to use small buffer when TCP stream is queued
This was already performed when an HTX stream was queued: small requests
were moved into small buffers. Here we do the same but for TCP streams.
2026-03-23 14:02:42 +01:00
Christopher Faulet
5acdda4eed MEDIUM: stream: Try to use a small buffer for HTTP request on queuing
When an HTX stream is queued, if the request is small enough, it is moved
into a small buffer. This should save memory on instances intensively using
queues.

Applet and connection receive functions were updated to block receives
while a small buffer is in use.
2026-03-23 14:02:42 +01:00
Christopher Faulet
92a24a4e87 MEDIUM: chunk: Add support for small chunks
In the same way support for large chunks was added to properly work with
large buffers, we are now adding support for small chunks because it is
possible to process small buffers.

So a dedicated memory pool is added to allocate small chunks.
alloc_small_trash_chunk() must be used to allocate a small
chunk. alloc_trash_chunk_sz() and free_trash_chunk() were updated to
support small chunks.

In addition, small trash buffers are also created, using the same mechanism
as for regular trash buffers. So three thread-local trash buffers are
created. get_small_trash_chunk() must be used to get a small trash buffer.
And get_trash_chunk_sz() was updated to also deal with small buffers.
2026-03-23 14:02:42 +01:00
Christopher Faulet
467f911cea MINOR: http-ana: Use HTX API to move to a large buffer
Use htx_move_to_large_buffer() to move a regular HTX message to a large
buffer when we are waiting for a huge payload.
2026-03-23 14:02:42 +01:00
Christopher Faulet
0213dd70c9 MINOR: htx: Add helper functions to xfer a message to smaller or larger one
The htx_move_to_small_buffer()/htx_move_to_large_buffer() and
htx_copy_to_small_buffer()/htx_copy_to_large_buffer() functions can now be
used to move or copy blocks from a default buffer to a small or large
buffer. The destination buffer is allocated and then each block is
transferred into it.

These functions rely on the htx_xfer() function.
2026-03-23 14:02:42 +01:00
Christopher Faulet
5ead611cc2 MEDIUM: htx: Add htx_xfer function to replace htx_xfer_blks
The htx_xfer() function should replace htx_xfer_blks(). It will be a bit
easier to maintain and to use. The behavior of htx_xfer() can be changed by
calling it with specific flags:

  * HTX_XFER_KEEP_SRC_BLKS: blocks from the source message are just copied
  * HTX_XFER_PARTIAL_HDRS_COPY: it is allowed to partially xfer headers or trailers
  * HTX_XFER_HDRS_ONLY: only headers are xferred

By default (HTX_XFER_DEFAULT or 0), all blocks from the source message are
moved into the destination message, i.e. copied into the destination
message and removed from the source message.

The caller must still define the maximum amount of data (including meta-data)
that can be xferred.

It is no longer necessary to specify a block type to stop the copy. Most of
the time, with htx_xfer_blks(), this parameter was set to HTX_BLK_UNUSED,
and otherwise it was only specified to transfer headers.

It is important to note that the caller is responsible for verifying that
the original HTX message is well-formatted. Especially, it must be sure the
headers part and the trailers part are complete (finished by an EOH/EOT
block).

For now, htx_xfer_blks() is not removed, for compatibility reasons. But it
is deprecated.
2026-03-23 14:02:42 +01:00
Christopher Faulet
41c89e4fb6 MINOR: config: Report the warning when invalid large buffer size is set
When an invalid large buffer size was found in the configuration, a warning
was emitted but it was not reported via the error code. It is now fixed.
2026-03-23 14:02:42 +01:00
Christopher Faulet
b71f70d548 MINOR: config: Relax tests on the configured size of small buffers
When the small buffer size was greater than the default buffer size, an
error was triggered. We now do the same as for large buffers: a warning is
emitted and the small buffer size is set to 0 to disable small buffer
allocation.
2026-03-23 14:02:42 +01:00
Christopher Faulet
01b9b67d5c MINOR: quic: Use b_alloc_small() to allocate a small buffer
Rely on b_alloc_small() to allocate a small buffer.
2026-03-23 14:02:42 +01:00
Christopher Faulet
f8c96bf9cb MINOR: dynbuf: Add helper functions to alloc large and small buffers
b_alloc_small() and b_alloc_large() can now be used to allocate small or
large buffers. For now, unlike default buffers, buffer_wait lists are not
used.
2026-03-23 14:02:42 +01:00
Christopher Faulet
4d6cba03f2 MINOR: buffers: Move small buffers management from quic to dynbuf part
Because small buffers were only used by QUIC streams, the pool used to
allocate these buffers was located in the QUIC code. However, their usage
will be extended to other parts, so the small buffers pool was moved into
the dynbuf part.
2026-03-23 14:02:42 +01:00
Amaury Denoyelle
1c379cad88 BUG/MINOR: http_htx: fix null deref in http-errors config check
http-errors parsing has been refactored in a recent series of patches.
However, a null deref was introduced by the following patch in case a
non-existent http-errors section is referenced by an "errorfiles"
directive.

  commit 2ca7601c2d6781f455cf205e4f3b52f5beb16e41
  MINOR/OPTIM: http_htx: lookup once http_errors section on check/init

Fix this by delaying ha_free() so that it is called after ha_alert().

No need to backport.
2026-03-23 13:55:48 +01:00
William Lallemand
3d9865a12c BUG/MINOR: acme/cli: wrong argument check in 'acme renew'
Argument check should be args[2] instead of args[1] which is always
'renew'.

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
William Lallemand
d72be950bd BUG/MINOR: acme: wrong error when checking for duplicate section
The cfg_parse_acme() function checks if an 'acme' section already exists
in the configuration with cur_acme->linenum > 0. But the wrong
filename and line number were displayed in the error message.

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
William Lallemand
5a0fbbf1ca BUG/MINOR: acme: leak of ext_san upon insertion error
This patch fixes a leak of the ext_san structure when
sk_X509_EXTENSION_push() failed. sk_X509_EXTENSION_pop_free() is already
supposed to free it, so ext_san must be set to NULL upon success to avoid
a double-free.

Must be backported to 3.2 and later.
2026-03-23 11:58:53 +01:00
Amaury Denoyelle
c6fc53aa99 MEDIUM: proxy: remove http-errors limitation for dynamic backends
Use proxy_check_http_errors() on defaults proxy instances. This will
emit alert messages for errorfiles directives referencing a non-existing
http-errors section, or a warning if an explicitly listed status code
is not present in the target section.

This is a small behavior change, as previously this was only performed
for regular proxies. Thus, errorfile/errorfiles directives in an unused
defaults section were never checked.

This may prevent startup of haproxy with a configuration file previously
considered as valid. However, this change is considered necessary to
be able to use http-errors with dynamic backends. Any invalid defaults
will be detected on startup, rather than having to discover it at
runtime via "add backend" invocation.

Thus, any restriction on http-errors usage is now lifted for the
creation of dynamic backends.
2026-03-23 11:14:07 +01:00
Amaury Denoyelle
2ca7601c2d MINOR/OPTIM: http_htx: lookup once http_errors section on check/init
The previous patch has splitted the original proxy_check_errors()
function in two, so that check and init steps are performed separately.
However, this renders the code inefficient for "errorfiles" directive as
tree lookup on http-errors section is performed twice.

Optimize this by adding a reference to the section in conf_errors
structure. This is resolved during proxy_check_http_errors() and
proxy_finalize_http_errors() can reuse it.

No need to backport.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
d250b381dc MINOR: http_htx: split check/init of http_errors
Function proxy_check_errors() is used when configuration parsing is
over. This patch splits it in two newly named ones.

The first function is named proxy_check_http_errors(). It is responsible
for checking the validity of any "errorfiles" directive, which could
reference a non-existent http-errors section or a code not defined in such
a section. This function is now called via proxy_finalize().

The second function is named proxy_finalize_http_errors(). It converts
each conf_errors type used during parsing into a proper http_reply type
for runtime usage. This function is still called via post-proxy-check,
after proxy_finalize().

This patch does not bring any functional change. However, it will become
necessary to ensure http-errors can be used as expected with dynamic
backends.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
5b184e4178 MINOR: http_htx: rename fields in struct conf_errors
This patch is the second part of the refactoring for http-errors
parsing. It renames some fields in the <conf_errors> structure to clarify
their usage. In particular, the union variants are renamed "inl"/"section",
which better highlights the link with the newly defined enum
http_err_directive.
2026-03-23 10:51:33 +01:00
Amaury Denoyelle
fedaf054c4 MINOR: http_htx: use enum for arbitrary values in conf_errors
In the conf_errors struct, arbitrary integer values were used for both the
<type> field and the <status> array. This renders the code difficult to
follow.

Replace these values with proper enum types. Two new types are defined, one
for each of these fields. The first one represents the directive type,
derived from the keyword used (errorfile vs errorfiles). This directly
determines which part of the <info> union should be manipulated.

The second enum is used for the errorfiles directive with a reference to
an http-errors section. It indicates whether a status code should
be imported from this section, and whether this import is explicit or
implicit.
2026-03-23 10:51:33 +01:00
David Carlier
8e469ebf2e BUG/MEDIUM: acme: fix multiple resource leaks in acme_x509_req()
Several resources were leaked on both success and error paths:

- X509_NAME *nm was never freed. X509_REQ_set_subject_name() makes
  an internal copy, so nm must be freed separately by the caller.
- str_san allocated via my_strndup() was never freed on either path.
- On error paths after allocation, x (X509_REQ) and exts
  (STACK_OF(X509_EXTENSION)) were also leaked.

Fix this by adding proper cleanup of all allocated resources in both
the success and error paths. Also move sk_X509_EXTENSION_pop_free()
after X509_REQ_sign() so it is not skipped when sign fails, and
initialize nm to NULL to make early error paths safe.

Must be backported as far as 3.2.
2026-03-23 10:44:42 +01:00
Willy Tarreau
ff7b06badb BUILD: sched: fix leftover of debugging test in single-run changes
There was a leftover of "activity[tid].ctr1++" in commit 7d40b3134
("MEDIUM: sched: do not run a same task multiple times in series")
that unfortunately only builds in development mode :-(
2026-03-23 07:29:43 +01:00
Willy Tarreau
5d0f5f8168 MINOR: mux-h2: assign a limited frames processing budget
This introduces 3 new settings: tune.h2.be.max-frames-at-once and
tune.h2.fe.max-frames-at-once, which limit the number of frames that
will be processed at once for backend and frontend side respectively,
and tune.h2.fe.max-rst-at-once which limits the number of RST_STREAM
frames processed at once on the frontend.

We can now yield when reading too many frames at once, which allows to
limit the latency caused by processing too many frames in large buffers.
However if we stop due to the RST budget being depleted, it's most likely
the sign of a protocol abuse, so we make the tasklet go to BULK since
the goal is to punish it.

By limiting the number of RST per loop to 1, the SSL response time drops
from 95ms to 1.6ms during an H2 RST flood attack, and the maximum SSL
connection rate only drops from 35.5k to 28.0k instead of to 11.8k. A
moderate SSL load that showed 1ms response time at 23kcps now shows 2ms at
15kcps, versus 95ms and 800cps before. The average loop time goes down
from 270-280us to 160us, while still doubling the attack absorption
rate with the same CPU capacity.

This patch may usefully be backported to 3.3 and 3.2. Note that to be
effective, this relies on the following patches:

  MEDIUM: sched: do not run a same task multiple times in series
  MINOR: sched: do not requeue a tasklet into the current queue
  MINOR: sched: do not punish self-waking tasklets anymore
  MEDIUM: sched: do not punish self-waking tasklets if TASK_WOKEN_ANY
  MEDIUM: sched: change scheduler budgets to lower TL_BULK
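A hedged configuration sketch for these settings (the values are illustrative, not recommendations):

```
global
    # cap the number of H2 frames handled per wakeup on the frontend
    tune.h2.fe.max-frames-at-once 128
    # handle at most one RST_STREAM per wakeup to blunt RST floods
    tune.h2.fe.max-rst-at-once 1
```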
2026-03-23 07:14:22 +01:00
Willy Tarreau
ed6a4bc807 MEDIUM: sched: change scheduler budgets to lower TL_BULK
Having less yielding tasks in TL_BULK and more in TL_NORMAL, we need
to rebalance these queues' priorities. Tests have shown that raising
TL_NORMAL to 40% and lowering TL_BULK to 3% seems to give about the
best tradeoffs.
2026-03-23 06:58:37 +01:00
Willy Tarreau
282b9b7d16 MEDIUM: sched: do not punish self-waking tasklets if TASK_WOKEN_ANY
Self-waking tasklets are currently punished and go to the BULK list.
However it's a problem with muxes or the stick-table purge, which just
yield and wake themselves up to limit the latency they cause to the
rest of the process: by doing so to help others, they punish
themselves. Let's check if any TASK_WOKEN_ANY flag is present on
the tasklet and stop sending tasks presenting such a flag to TL_BULK.
Since tasklet_wakeup() by default passes TASK_WOKEN_OTHER, it means
that such tasklets will no longer be punished. However, tasks which
only want a best-effort wakeup can simply pass 0.

It's worth noting that a comparison was made between going into
TL_BULK at all and only setting the TASK_SELF_WAKING flag, and
it shows that the average latencies are ~10% better when entirely
avoiding TL_BULK in this case.
2026-03-23 06:57:12 +01:00
Willy Tarreau
6982c2539f MINOR: sched: do not punish self-waking tasklets anymore
Nowadays, due to yield etc., it's counter-productive to permanently
punish self-waking tasklets, so let's abandon this principle as it
prevents finer task priority handling.

We continue to check for the TASK_SELF_WAKING flag to place a task
into TL_BULK in case some code wants to make use of it in the future
(similarly to TASK_HEAVY), but no code sets it anymore. It could
possibly make sense in the future to replace this flag with a one-shot
variant requesting low priority.
2026-03-23 06:55:31 +01:00
Willy Tarreau
9852d5be26 MINOR: sched: do not requeue a tasklet into the current queue
As found by Christopher, the concept of waking a tasklet up into the
current queue is totally flawed, because if a task is in TL_BULK or
TL_HEAVY, all the tasklets it wakes up will end up in the same
queue. Not only will this clobber such queues, but it will also
reduce their quality of service, and this can contaminate other
tasklets due to the numerous wakeups there now are with the subscribe
mechanism between layers.
2026-03-23 06:54:42 +01:00
Willy Tarreau
7d40b3134a MEDIUM: sched: do not run a same task multiple times in series
There's always a risk that some tasks run multiple times if they wake
each other up. Now we include the loop counter in the task struct and
stop processing the queue a task is in when meeting one that has already
run. We only pick 16 bits since that's all that remains free in the
task common part, so from time to time (once every 65536 loops) it will be
possible to wrongly match a task as having already run and stop evaluating
its queue, but it's rare enough that we don't care, because this will
be OK on the next iteration.
2026-03-23 06:52:24 +01:00
Frederic Lecaille
8f6cb8f452 BUG/MINOR: qpack: fix 62-bit overflow and 1-byte OOB reads in decoding
This patch improves the robustness of the QPACK varint decoder and fixes
potential 1-byte out-of-bounds reads in qpack_decode_fs().

In qpack_decode_fs(), two 1-byte OOB reads were possible on truncated
streams between two varint decoding. These occurred when trying to read
the byte containing the Huffman bit <h> and the Value Length prefix
immediately following an Index or a Name Length.

Note that these OOB reads are limited to a single byte because
qpack_get_varint() already ensures that its input length is non-zero
before consuming any data.

The fixes in qpack_decode_fs() are:
- When decoding an index, we now verify that at least one byte remains
  to safely access the following <h> bit and value length.
- When decoding a literal, we now check len < name_len + 1 to ensure
  the byte starting the header value is reachable.

In qpack_get_varint(), the maximum value is now strictly capped at 2^62-1
as per RFC. This is enforced using a budget-based check:

   (v & 127) > (limit - ret) >> shift

This prevents values from overflowing into the 63rd or 64th bits, which
would otherwise break subsequent signed comparisons (e.g., if (len < name_len))
by interpreting the length as a negative value, leading to false positive
tests.

Thank you to @jming912 for having reported this issue in GH #3302.

Must be backported as far as 2.6
2026-03-20 19:40:11 +01:00
Egor Shestakov
60c9e2975b BUG/MINOR: sock: adjust accept() error messages for ENFILE and ENOMEM
In the ENFILE and ENOMEM cases, when accept() fails, an irrelevant
global.maxsock value was printed that doesn't reflect system limits.
Now actconn is printed, which gives a hint about the failure reason.

Should be backported in all stable branches.
2026-03-20 16:51:47 +01:00
Aurelien DARRAGON
5617e47f91 MINOR: log: support optional 'profile <log_profile_name>' argument to do-log action
We anticipated that the do-log action would be expanded with optional
arguments at some point. Now that we have heard of multiple use-cases
that could be achieved with the do-log action, but that are limited by the
fact that all do-log statements inherit the implicit log-profile
defined on the logger, we need to provide a way for the user to specify
a custom log-profile that can be used by each do-log action individually.
This is what we try to achieve in this commit, by leveraging the
prerequisite work performed by the last 2 commits.
2026-03-20 11:42:48 +01:00
Aurelien DARRAGON
042b7ab763 MINOR: log: provide a way to override logger->profile from process_send_log_ctx
In process_send_log(), now also consider the ctx if ctx->profile != NULL.

In that case, we do as if logger->prof was set, but we consider
ctx->profile in priority over the logger one. What this means is that
it becomes possible to set ctx.profile to a profile that will be
used no matter what to generate the log payload.

This is a prerequisite to implement the optional "profile" argument for
the do-log action.
2026-03-20 11:42:40 +01:00
Aurelien DARRAGON
7466f64c56 MINOR: log: split do_log() in do_log() + do_log_ctx()
do_log() is now just a wrapper using do_log_ctx() with a pre-filled ctx;
the low-level do_log_ctx() variant can be used to pass specific ctx
parameters instead.
2026-03-20 11:41:06 +01:00
44 changed files with 1389 additions and 504 deletions


@ -1886,10 +1886,13 @@ The following keywords are supported in the "global" section :
- tune.h2.be.glitches-threshold
- tune.h2.be.initial-window-size
- tune.h2.be.max-concurrent-streams
- tune.h2.be.max-frames-at-once
- tune.h2.be.rxbuf
- tune.h2.fe.glitches-threshold
- tune.h2.fe.initial-window-size
- tune.h2.fe.max-concurrent-streams
- tune.h2.fe.max-frames-at-once
- tune.h2.fe.max-rst-at-once
- tune.h2.fe.max-total-streams
- tune.h2.fe.rxbuf
- tune.h2.header-table-size
@ -4162,8 +4165,11 @@ tune.bufsize.small <size>
If however a small buffer is not sufficient, a reallocation is automatically
done to switch to a standard size buffer.
For the moment, it is used only by HTTP/3 protocol to emit the response
headers.
For the moment, it is automatically used only by HTTP/3 protocol to emit the
response headers. Otherwise, small buffers support can be enabled for
specific proxies via the "use-small-buffers" option.
See also: option use-small-buffers
tune.comp.maxlevel <number>
Sets the maximum compression level. The compression level affects CPU
@ -4368,6 +4374,13 @@ tune.h2.be.max-concurrent-streams <number>
case). It is highly recommended not to increase this value; some might find
it optimal to run at low values (1..5 typically).
tune.h2.be.max-frames-at-once <number>
Sets the maximum number of HTTP/2 incoming frames that will be processed at
once on a backend connection. It can be useful to set this to a low value
(a few tens to a few hundreds) when dealing with very large buffers in order
to maintain a low latency and a better fairness between multiple connections.
The default value is zero, which means that no limitation is enforced.
tune.h2.be.rxbuf <size>
Sets the HTTP/2 receive buffer size for outgoing connections, in bytes. This
size will be rounded up to the next multiple of tune.bufsize and will be
@ -4458,6 +4471,25 @@ tune.h2.fe.max-concurrent-streams <number> [args...]
tune.h2.fe.max-concurrent-streams 100 rq-load auto min 15
tune.h2.fe.max-frames-at-once <number>
Sets the maximum number of HTTP/2 incoming frames that will be processed at
once on a frontend connection. It can be useful to set this to a low value
(a few tens to a few hundreds) when dealing with very large buffers in order
to maintain a low latency and a better fairness between multiple connections.
The default value is zero, which means that no limitation is enforced.
tune.h2.fe.max-rst-at-once <number>
Sets the maximum number of HTTP/2 incoming RST_STREAM that will be processed
at once on a frontend connection. Once the specified number of RST_STREAM
frames are received, the connection handler will be placed in a low priority
queue and be processed after all other tasks. It can be useful to set this to
a very low value (1 or a few units) to significantly reduce the impact of
RST_STREAM floods. RST_STREAM frames do happen when a user clicks the Stop
button in their browser, but the few extra milliseconds caused by this
requeuing are generally unnoticeable, while the requeuing is quite effective
at lowering the load caused by such floods. The default value is zero, which
means that no limitation is enforced.
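As a rough illustration of the frame-processing limits described above (the
values below are purely illustrative, not recommendations):

```
global
    # process at most 128 incoming H2 frames per wakeup on frontend conns
    tune.h2.fe.max-frames-at-once 128
    # requeue a frontend connection after 2 RST_STREAM frames to absorb floods
    tune.h2.fe.max-rst-at-once 2
    # same frame cap on backend connections
    tune.h2.be.max-frames-at-once 128
```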
tune.h2.fe.max-total-streams <number>
Sets the HTTP/2 maximum number of total streams processed per incoming
connection. Once this limit is reached, HAProxy will send a graceful GOAWAY
@ -4604,22 +4636,22 @@ tune.http.maxhdr <number>
protocols. This limit is large enough but not documented on purpose. The same
limit is applied on the first steps of the decoding for the same reason.
tune.idle-pool.shared { on | off }
Enables ('on') or disables ('off') sharing of idle connection pools between
threads for a same server. The default is to share them between threads in
order to minimize the number of persistent connections to a server, and to
optimize the connection reuse rate. But to help with debugging or when
tune.idle-pool.shared { on | full | off }
Controls sharing idle connection pools between threads for a same server.
It can be enabled for all threads in a same thread group ('on'), enabled for
all threads ('full') or disabled ('off'). The default is to share them
between threads in the same thread group ('on'), in order to minimize the
number of persistent connections to a server, and to optimize the connection
reuse rate. Sharing with threads from other thread groups can have a
performance impact, and is not enabled by default, but can be useful if
maximizing connection reuse is a priority. To help with debugging or when
suspecting a bug in HAProxy around connection reuse, it can be convenient to
forcefully disable this idle pool sharing between multiple threads, and force
this option to "off". The default is on. It is strongly recommended against
disabling this option without setting a conservative value on "pool-low-conn"
for all servers relying on connection reuse to achieve a high performance
level, otherwise connections might be closed very often as the thread count
increases. Note that in any case, connections are only shared between threads
of the same thread group. This means that systems with many NUMA nodes may
show slightly more persistent connections while machines with unified caches
and many CPU cores per node may experience higher CPU usage. In the latter
case, the "max-thread-per-group" tunable may be used to improve the behavior.
forcefully disable this idle pool sharing between multiple threads,
and force this option to "off". It is strongly recommended against disabling
this option without setting a conservative value on "pool-low-conn" for all
servers relying on connection reuse to achieve a high performance level,
otherwise connections might be closed very often as the thread count
increases.
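For illustration, a configuration maximizing connection reuse across thread
groups might simply contain (a sketch, assuming the new "full" keyword from
this series):

```
global
    # share idle server connections across all thread groups;
    # 'on' would restrict sharing to the same thread group, 'off' disables it
    tune.idle-pool.shared full
```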
tune.idletimer <timeout>
Sets the duration after which HAProxy will consider that an empty buffer is
@ -5555,6 +5587,9 @@ tune.takeover-other-tg-connections <value>
connections.
Note that using connections from other thread groups can incur performance
penalties, so it should not be used unless really needed.
Note that this behavior is now controlled by tune.idle-pool.shared, and
this keyword is just there for compatibility with older configurations, and
will be deprecated.
tune.vars.global-max-size <size>
tune.vars.proc-max-size <size>
@ -5942,6 +5977,8 @@ errorloc302 X X X X
-- keyword -------------------------- defaults - frontend - listen -- backend -
errorloc303 X X X X
error-log-format X X X -
external-check command X - X X
external-check path X - X X
force-persist - - X X
force-be-switch - X X -
filter - X X X
@ -5987,6 +6024,7 @@ option disable-h2-upgrade (*) X X X -
option dontlog-normal (*) X X X -
option dontlognull (*) X X X -
-- keyword -------------------------- defaults - frontend - listen -- backend -
option external-check X - X X
option forwardfor X X X X
option forwarded (*) X - X X
option h1-case-adjust-bogus-client (*) X X X -
@ -6005,9 +6043,9 @@ option httpchk X - X X
option httpclose (*) X X X X
option httplog X X X -
option httpslog X X X -
option idle-close-on-response (*) X X X -
option independent-streams (*) X X X X
option ldap-check X - X X
option external-check X - X X
option log-health-checks (*) X - X X
option log-separate-errors (*) X X X -
option logasap (*) X X X -
@ -6034,9 +6072,7 @@ option tcp-smart-connect (*) X - X X
option tcpka X X X X
option tcplog X X X -
option transparent (deprecated) (*) X - X X
option idle-close-on-response (*) X X X -
external-check command X - X X
external-check path X - X X
option use-small-buffers (*) X - X X
persist rdp-cookie X - X X
quic-initial X (!) X X -
rate-limit sessions X X X -
@ -7699,6 +7735,96 @@ force-persist { if | unless } <condition>
and section 7 about ACL usage.
external-check command <command>
Executable to run when performing an external-check
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<command> is the external command to run
The arguments passed to the command are:
<proxy_address> <proxy_port> <server_address> <server_port>
The <proxy_address> and <proxy_port> are derived from the first listener
that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
listener, <proxy_address> will be the path of the socket and the
<proxy_port> will be the string "NOT_USED". In a backend section, it's not
possible to determine a listener, and both <proxy_address> and <proxy_port>
will have the string value "NOT_USED".
Some values are also provided through environment variables.
Environment variables :
HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
applicable, for example in a "backend" section).
HAPROXY_PROXY_ID The backend id.
HAPROXY_PROXY_NAME The backend name.
HAPROXY_PROXY_PORT The first bind port if available (or empty if not
applicable, for example in a "backend" section or
for a UNIX socket).
HAPROXY_SERVER_ADDR The server address.
HAPROXY_SERVER_CURCONN The current number of connections on the server.
HAPROXY_SERVER_ID The server id.
HAPROXY_SERVER_MAXCONN The server max connections.
HAPROXY_SERVER_NAME The server name.
HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
socket).
HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used
HAPROXY_SERVER_PROTO The protocol used by this server, which can be one
of "cli" (the haproxy CLI), "syslog" (syslog TCP
server), "peers" (peers TCP server), "h1" (HTTP/1.x
server), "h2" (HTTP/2 server), or "tcp" (any other
TCP server).
PATH The PATH environment variable used when executing
the command may be set using "external-check path".
If the command executes and exits with a zero status then the check is
considered to have passed, otherwise the check is considered to have
failed.
Example :
external-check command /bin/true
See also : "external-check", "option external-check", "external-check path"
external-check path <path>
The value of the PATH environment variable used when running an external-check
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<path> is the PATH value used when executing the external command
The default path is "".
Example :
external-check path "/usr/bin:/bin"
See also : "external-check", "option external-check",
"external-check command"
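Tying the three keywords together, a minimal sketch might look like this (the
script path, backend and server names are hypothetical):

```
global
    external-check                       # allow executing external commands

backend be_app
    option external-check
    external-check path "/usr/bin:/bin"
    external-check command /usr/local/bin/check_app
    server s1 192.0.2.10:80 check inter 5s
```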
force-be-switch { if | unless } <condition>
Allow content switching to select a backend instance even if it is disabled
or unpublished. This rule can be used by admins to test traffic to services
@ -8203,6 +8329,11 @@ http-check expect [min-recv <int>] [comment <msg>]
occurred during the expect rule evaluation. <fmt> is a
Custom log format string (see section 8.2.6).
status-code <expr> is optional and can be used to set the check status code
reported in logs, on success or on error. <expr> is a
standard HAProxy expression formed by a sample-fetch
followed by some converters.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "status", "rstatus", "hdr",
"fhdr", "string", or "rstring". The keyword may be preceded by an
@ -9920,6 +10051,24 @@ no option dontlognull
See also : "log", "http-ignore-probes", "monitor-uri", and
section 8 about logging.
option external-check
Use external processes for server health checks
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
It is possible to test the health of a server using an external command.
This is achieved by running the executable set using "external-check
command".
Requires the "external-check" global to be set.
See also : "external-check", "external-check command", "external-check path"
option forwarded [ proto ]
[ host | host-expr <host_expr> ]
[ by | by-expr <by_expr> ] [ by_port | by_port-expr <by_port_expr>]
@ -10710,6 +10859,39 @@ option httpslog
See also : section 8 about logging.
option idle-close-on-response
no option idle-close-on-response
Avoid closing idle frontend connections if a soft stop is in progress
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry on
write errors, this can result in errors while haproxy is reloading. Even
though a proper implementation should retry on connection/write errors, this
option was introduced to support backwards compatibility with haproxy prior
to version 2.4. Indeed before v2.4, haproxy used to wait for a last request
and response to add a "connection: close" header before closing, thus
notifying the client that the connection would not be reusable.
In a real life example, this behavior was seen in AWS using the ALB in front
of a haproxy. The end result was ALB sending 502 during haproxy reloads.
Users are warned that using this option may increase the number of old
processes if connections remain idle for too long. Adjusting the client
timeouts and/or the "hard-stop-after" parameter accordingly might be
needed in case of frequent reloads.
See also: "timeout client", "timeout client-fin", "timeout http-request",
"hard-stop-after"
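A hedged example combining this option with a bounded stop delay, as the
paragraph above suggests (timeout values are illustrative):

```
defaults
    mode http
    timeout client 30s
    option idle-close-on-response   # keep idle conns usable during soft stop
    hard-stop-after 30s             # bound old processes' lifetime on reload
```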
option independent-streams
no option independent-streams
Enable or disable independent timeout processing for both directions
@ -10774,56 +10956,6 @@ option ldap-check
See also : "option httpchk"
option external-check
Use external processes for server health checks
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
It is possible to test the health of a server using an external command.
This is achieved by running the executable set using "external-check
command".
Requires the "external-check" global to be set.
See also : "external-check", "external-check command", "external-check path"
option idle-close-on-response
no option idle-close-on-response
Avoid closing idle frontend connections if a soft stop is in progress
May be used in the following contexts: http
May be used in sections : defaults | frontend | listen | backend
yes | yes | yes | no
Arguments : none
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry on
write errors, this can result in errors while haproxy is reloading. Even
though a proper implementation should retry on connection/write errors, this
option was introduced to support backwards compatibility with haproxy prior
to version 2.4. Indeed before v2.4, haproxy used to wait for a last request
and response to add a "connection: close" header before closing, thus
notifying the client that the connection would not be reusable.
In a real life example, this behavior was seen in AWS using the ALB in front
of a haproxy. The end result was ALB sending 502 during haproxy reloads.
Users are warned that using this option may increase the number of old
processes if connections remain idle for too long. Adjusting the client
timeouts and/or the "hard-stop-after" parameter accordingly might be
needed in case of frequent reloads.
See also: "timeout client", "timeout client-fin", "timeout http-request",
"hard-stop-after"
option log-health-checks
no option log-health-checks
Enable or disable logging of health checks status updates
@ -11785,95 +11917,35 @@ no option transparent (deprecated)
"transparent" option of the "bind" keyword.
external-check command <command>
Executable to run when performing an external-check
option use-small-buffers [ queue | l7-retries | check ]*
May be used in the following contexts: tcp, http, log
Enable support for small buffers for the given categories.
May be used in the following contexts: tcp, http
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<command> is the external command to run
This option can be used to enable the small buffers support at different
places to save memory. By default, with no parameter, small buffers are used
as far as possible at all possible places. Otherwise, it is possible to limit
it to the following places:
The arguments passed to the command are:
- queue: When set, small buffers will be used to store the requests, if
small enough, when the connection is queued.
- l7-retries: When set, small buffers will be used to save the requests
when L7 retries are enabled.
- check: When set, small buffers will be used for the health-checks
requests.
<proxy_address> <proxy_port> <server_address> <server_port>
When enabled, small buffers are used, but only if it is possible. Otherwise,
when data are too large, a regular buffer is automatically used. The size of
small buffers is configurable via the "tune.bufsize.small" global setting.
The <proxy_address> and <proxy_port> are derived from the first listener
that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
listener, <proxy_address> will be the path of the socket and the
<proxy_port> will be the string "NOT_USED". In a backend section, it's not
possible to determine a listener, and both <proxy_address> and <proxy_port>
will have the string value "NOT_USED".
Some values are also provided through environment variables.
Environment variables :
HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
applicable, for example in a "backend" section).
HAPROXY_PROXY_ID The backend id.
HAPROXY_PROXY_NAME The backend name.
HAPROXY_PROXY_PORT The first bind port if available (or empty if not
applicable, for example in a "backend" section or
for a UNIX socket).
HAPROXY_SERVER_ADDR The server address.
HAPROXY_SERVER_CURCONN The current number of connections on the server.
HAPROXY_SERVER_ID The server id.
HAPROXY_SERVER_MAXCONN The server max connections.
HAPROXY_SERVER_NAME The server name.
HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
socket).
HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used
HAPROXY_SERVER_PROTO The protocol used by this server, which can be one
of "cli" (the haproxy CLI), "syslog" (syslog TCP
server), "peers" (peers TCP server), "h1" (HTTP/1.x
server), "h2" (HTTP/2 server), or "tcp" (any other
TCP server).
PATH The PATH environment variable used when executing
the command may be set using "external-check path".
If the command executes and exits with a zero status then the check is
considered to have passed, otherwise the check is considered to have
failed.
Example :
external-check command /bin/true
See also : "external-check", "option external-check", "external-check path"
external-check path <path>
The value of the PATH environment variable used when running an external-check
May be used in the following contexts: tcp, http, log
May be used in sections : defaults | frontend | listen | backend
yes | no | yes | yes
Arguments :
<path> is the PATH value used when executing the external command
The default path is "".
Example :
external-check path "/usr/bin:/bin"
See also : "external-check", "option external-check",
"external-check command"
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
See also: tune.bufsize.small
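A hedged configuration sketch for the option described above, restricting
small buffers to queued requests and L7 retries (the size and names are
illustrative):

```
global
    tune.bufsize.small 1024       # size of small buffers (illustrative)

backend be_app
    # use small buffers only when queuing requests and for L7 retries
    option use-small-buffers queue l7-retries
    retries 3
    retry-on all-retryable-errors
    server s1 192.0.2.10:80 check
```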
persist rdp-cookie
persist rdp-cookie(<name>)
@ -13632,13 +13704,6 @@ tcp-check expect [min-recv <int>] [comment <msg>]
does not match, the check will wait for more data. If set to 0,
the evaluation result is always conclusive.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "string", "rstring", "binary" or
"rbinary".
The keyword may be preceded by an exclamation mark ("!") to negate
the match. Spaces are allowed between the exclamation mark and the
keyword. See below for more details on the supported keywords.
ok-status <st> is optional and can be used to set the check status if
the expect rule is successfully evaluated and if it is
the last rule in the tcp-check ruleset. "L7OK", "L7OKC",
@ -13686,6 +13751,13 @@ tcp-check expect [min-recv <int>] [comment <msg>]
standard HAProxy expression formed by a sample-fetch
followed by some converters.
<match> is a keyword indicating how to look for a specific pattern in the
response. The keyword may be one of "string", "rstring", "binary" or
"rbinary".
The keyword may be preceded by an exclamation mark ("!") to negate
the match. Spaces are allowed between the exclamation mark and the
keyword. See below for more details on the supported keywords.
<pattern> is the pattern to look for. It may be a string or a regular
expression. If the pattern contains spaces, they must be escaped
with the usual backslash ('\').
@ -15382,15 +15454,14 @@ disable-l7-retry
reason than a connection failure. This can be useful for example to make
sure POST requests aren't retried on failure.
do-log
do-log [profile <log_profile>]
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
X | X | X | X | X | X | X | X
This action manually triggers a log emission on the proxy. This means
log options on the proxy will be considered (including formatting options
such as "log-format"), but it will not interfere with the logs automatically
generated by the proxy during transaction handling. It currently doesn't
support any argument, though extensions may appear in future versions.
generated by the proxy during transaction handling.
Using "log-profile", it is possible to precisely describe how the log should
be emitted for each of the available contexts where the action may be used.
@ -15400,15 +15471,28 @@ do-log
Also, they will be properly reported when using "%OG" logformat alias.
Optional "profile" argument may be used to specify the name of a log-profile
section that should be used for this do-log action specifically instead of
the one associated to the current logger that applies by default.
Example:
log-profile myprof
log-profile my-dft-prof
on tcp-req-conn format "Connect: %ci"
log-profile my-local-prof
on tcp-req-conn format "Local Connect: %ci"
frontend myfront
log stdout format rfc5424 profile myprof local0
log stdout format rfc5424 profile my-dft-prof local0
log-format "log generated using proxy logformat, from '%OG'"
tcp-request connection do-log #uses special log-profile format
tcp-request content do-log #uses proxy logformat
acl local src 127.0.0.1
# on connection use either log-profile from the logger (my-dft-prof) or
# explicit my-local-prof if source ip is localhost
tcp-request connection do-log if !local
tcp-request connection do-log profile my-local-prof if local
# on content use proxy logformat, since no override was specified
# in my-dft-prof
tcp-request content do-log
do-resolve(<var>,<resolvers>[,ipv4|ipv6]) <expr>
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft


@ -539,10 +539,22 @@ message. These functions are used by HTX analyzers or by multiplexers.
with the first block not removed, or NULL if everything was removed, and
the amount of data drained.
- htx_xfer_blks() transfers HTX blocks from an HTX message to another,
stopping after the first block of a specified type is transferred or when
a specific amount of bytes, including meta-data, was moved. If the tail
block is a DATA block, it may be partially moved. All other blocks are
- htx_xfer() transfers HTX blocks from an HTX message to another, stopping
when a specific amount of bytes, including meta-data, was copied. If the
tail block is a DATA block, it may be partially copied. All other blocks
are transferred at once. By default, copied blocks are removed from the
original HTX message and headers and trailers parts cannot be partially
copied. But flags can be set to change the default behavior:
- HTX_XFER_KEEP_SRC_BLKS: source blocks are not removed
- HTX_XFER_PARTIAL_HDRS_COPY: partial headers and trailers
part can be xferred
- HTX_XFER_HDRS_ONLY: Only the headers part is xferred
- htx_xfer_blks() [DEPRECATED] transfers HTX blocks from an HTX message to
another, stopping after the first block of a specified type is transferred
or when a specific amount of bytes, including meta-data, was moved. If the
tail block is a DATA block, it may be partially moved. All other blocks are
transferred at once or kept. This function returns a mixed value, with the
last block moved, or NULL if nothing was moved, and the amount of data
transferred. When HEADERS or TRAILERS blocks must be transferred, this


@ -1740,10 +1740,7 @@ add backend <name> from <defproxy> [mode <mode>] [guid <guid>] [ EXPERIMENTAL ]
All named default proxies can be used, given that they validate the same
inheritance rules applied during configuration parsing. There are some
exceptions though, for example when the mode is neither TCP nor HTTP. Another
exception is that it is not yet possible to use a default proxies which
reference custom HTTP errors, for example via the errorfiles or http-rules
keywords.
exceptions though, for example when the mode is neither TCP nor HTTP.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it


@ -58,6 +58,7 @@ struct acme_auth {
struct ist auth; /* auth URI */
struct ist chall; /* challenge URI */
struct ist token; /* token */
int validated; /* already validated */
int ready; /* is the challenge ready ? */
void *next;
};


@ -198,6 +198,11 @@ struct act_rule {
struct server *srv; /* target server to attach the connection */
struct sample_expr *name; /* used to differentiate idle connections */
} attach_srv; /* 'attach-srv' rule */
struct {
enum log_orig_id orig;
char *profile_name;
struct log_profile *profile;
} do_log; /* 'do-log' action */
struct {
int value;
struct sample_expr *expr;


@ -59,6 +59,7 @@ enum chk_result {
#define CHK_ST_FASTINTER 0x0400 /* force fastinter check */
#define CHK_ST_READY 0x0800 /* check ready to migrate or run, see below */
#define CHK_ST_SLEEPING 0x1000 /* check was sleeping, i.e. not currently bound to a thread, see below */
#define CHK_ST_USE_SMALL_BUFF 0x2000 /* Use small buffers if possible for the request */
/* 4 possible states for CHK_ST_SLEEPING and CHK_ST_READY:
* SLP RDY State Description
@ -188,6 +189,7 @@ struct check {
char **envp; /* the environment to use if running a process-based check */
struct pid_list *curpid; /* entry in pid_list used for current process-based test, or -1 if not in test */
struct sockaddr_storage addr; /* the address to check */
struct protocol *proto; /* protocol used for check, may be different from the server's one */
char *pool_conn_name; /* conn name used on reuse */
char *sni; /* Server name */
char *alpn_str; /* ALPN to use for checks */


@ -78,7 +78,7 @@ struct task *process_chk(struct task *t, void *context, unsigned int state);
struct task *srv_chk_io_cb(struct task *t, void *ctx, unsigned int state);
int check_buf_available(void *target);
struct buffer *check_get_buf(struct check *check, struct buffer *bptr);
struct buffer *check_get_buf(struct check *check, struct buffer *bptr, unsigned int small_buffer);
void check_release_buf(struct check *check, struct buffer *bptr);
const char *init_check(struct check *check, int type);
void free_check(struct check *check);


@ -33,6 +33,7 @@
extern struct pool_head *pool_head_trash;
extern struct pool_head *pool_head_large_trash;
extern struct pool_head *pool_head_small_trash;
/* function prototypes */
@ -48,6 +49,7 @@ int chunk_strcmp(const struct buffer *chk, const char *str);
int chunk_strcasecmp(const struct buffer *chk, const char *str);
struct buffer *get_trash_chunk(void);
struct buffer *get_large_trash_chunk(void);
struct buffer *get_small_trash_chunk(void);
struct buffer *get_trash_chunk_sz(size_t size);
struct buffer *get_larger_trash_chunk(struct buffer *chunk);
int init_trash_buffers(int first);
@ -133,6 +135,29 @@ static forceinline struct buffer *alloc_large_trash_chunk(void)
return chunk;
}
/*
* Allocate a small trash chunk from the reentrant pool. The buffer starts at
* the end of the chunk. This chunk must be freed using free_trash_chunk(). This
* call may fail and the caller is responsible for checking that the returned
* pointer is not NULL.
*/
static forceinline struct buffer *alloc_small_trash_chunk(void)
{
struct buffer *chunk;
if (!pool_head_small_trash)
return NULL;
chunk = pool_alloc(pool_head_small_trash);
if (chunk) {
char *buf = (char *)chunk + sizeof(struct buffer);
*buf = 0;
chunk_init(chunk, buf,
pool_head_small_trash->size - sizeof(struct buffer));
}
return chunk;
}
/*
* Allocate a trash chunk accordingly to the requested size. This chunk must be
* freed using free_trash_chunk(). This call may fail and the caller is
@ -140,7 +165,9 @@ static forceinline struct buffer *alloc_large_trash_chunk(void)
*/
static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
{
if (likely(size <= pool_head_trash->size))
if (pool_head_small_trash && size <= pool_head_small_trash->size)
return alloc_small_trash_chunk();
else if (size <= pool_head_trash->size)
return alloc_trash_chunk();
else if (pool_head_large_trash && size <= pool_head_large_trash->size)
return alloc_large_trash_chunk();
@ -153,10 +180,12 @@ static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
*/
static forceinline void free_trash_chunk(struct buffer *chunk)
{
if (likely(chunk && chunk->size == pool_head_trash->size - sizeof(struct buffer)))
pool_free(pool_head_trash, chunk);
else
if (pool_head_small_trash && chunk && chunk->size == pool_head_small_trash->size - sizeof(struct buffer))
pool_free(pool_head_small_trash, chunk);
else if (pool_head_large_trash && chunk && chunk->size == pool_head_large_trash->size - sizeof(struct buffer))
pool_free(pool_head_large_trash, chunk);
else
pool_free(pool_head_trash, chunk);
}
/* copies chunk <src> into <chk>. Returns 0 in case of failure. */


@ -34,6 +34,7 @@
#include <haproxy/listener-t.h>
#include <haproxy/obj_type.h>
#include <haproxy/pool-t.h>
#include <haproxy/protocol.h>
#include <haproxy/server.h>
#include <haproxy/session-t.h>
#include <haproxy/task-t.h>
@ -609,13 +610,13 @@ void list_mux_proto(FILE *out);
*/
static inline const struct mux_proto_list *conn_get_best_mux_entry(
const struct ist mux_proto,
int proto_side, int proto_mode)
int proto_side, int proto_is_quic, int proto_mode)
{
struct mux_proto_list *item;
struct mux_proto_list *fallback = NULL;
list_for_each_entry(item, &mux_proto_list.list, list) {
if (!(item->side & proto_side) || !(item->mode & proto_mode))
if (!(item->side & proto_side) || !(item->mode & proto_mode) || (proto_is_quic && !(item->mux->flags & MX_FL_FRAMED)))
continue;
if (istlen(mux_proto) && isteq(mux_proto, item->token))
return item;
@ -640,7 +641,7 @@ static inline const struct mux_ops *conn_get_best_mux(struct connection *conn,
{
const struct mux_proto_list *item;
item = conn_get_best_mux_entry(mux_proto, proto_side, proto_mode);
item = conn_get_best_mux_entry(mux_proto, proto_side, proto_is_quic(conn->ctrl), proto_mode);
return item ? item->mux : NULL;
}


@ -37,6 +37,7 @@
extern struct pool_head *pool_head_buffer;
extern struct pool_head *pool_head_large_buffer;
extern struct pool_head *pool_head_small_buffer;
int init_buffer(void);
void buffer_dump(FILE *o, struct buffer *b, int from, int to);
@ -66,6 +67,12 @@ static inline int b_is_large_sz(size_t sz)
return (pool_head_large_buffer && sz == pool_head_large_buffer->size);
}
/* Return 1 if <sz> is the size of a small buffer */
static inline int b_is_small_sz(size_t sz)
{
return (pool_head_small_buffer && sz == pool_head_small_buffer->size);
}
/* Return 1 if <buf> is a default buffer */
static inline int b_is_default(struct buffer *buf)
{
@ -78,6 +85,12 @@ static inline int b_is_large(struct buffer *buf)
return b_is_large_sz(b_size(buf));
}
/* Return 1 if <buf> is a small buffer */
static inline int b_is_small(struct buffer *buf)
{
return b_is_small_sz(b_size(buf));
}
/**************************************************/
/* Functions below are used for buffer allocation */
/**************************************************/
@ -172,6 +185,8 @@ static inline char *__b_get_emergency_buf(void)
* than the default buffers */ \
if (unlikely(b_is_large_sz(sz))) \
pool_free(pool_head_large_buffer, area); \
else if (unlikely(b_is_small_sz(sz))) \
pool_free(pool_head_small_buffer, area); \
else if (th_ctx->emergency_bufs_left < global.tune.reserved_bufs) \
th_ctx->emergency_bufs[th_ctx->emergency_bufs_left++] = area; \
else \
@ -185,6 +200,35 @@ static inline char *__b_get_emergency_buf(void)
__b_free((_buf)); \
} while (0)
static inline struct buffer *b_alloc_small(struct buffer *buf)
{
char *area = NULL;
if (!buf->size) {
area = pool_alloc(pool_head_small_buffer);
if (!area)
return NULL;
buf->area = area;
buf->size = global.tune.bufsize_small;
}
return buf;
}
static inline struct buffer *b_alloc_large(struct buffer *buf)
{
char *area = NULL;
if (!buf->size) {
area = pool_alloc(pool_head_large_buffer);
if (!area)
return NULL;
buf->area = area;
buf->size = global.tune.bufsize_large;
}
return buf;
}
/* Offer one or multiple buffer currently belonging to target <from> to whoever
* needs one. Any pointer is valid for <from>, including NULL. Its purpose is
* to avoid passing a buffer to oneself in case of failed allocations (e.g.

@ -93,4 +93,22 @@ struct http_errors {
struct list list; /* http-errors list */
};
/* Indicates the keyword origin of an http-error definition. This is used in
* <conf_errors> type to indicate which part of the internal union should be
* manipulated.
*/
enum http_err_directive {
HTTP_ERR_DIRECTIVE_SECTION = 0, /* "errorfiles" keyword referencing a http-errors section */
HTTP_ERR_DIRECTIVE_INLINE, /* "errorfile" keyword with inline error definition */
};
/* Used with "errorfiles" directives. It indicates for each known HTTP error
* status codes if they are defined in the target http-errors section.
*/
enum http_err_import {
HTTP_ERR_IMPORT_NO = 0,
HTTP_ERR_IMPORT_IMPLICIT, /* import every errcode defined in a section */
HTTP_ERR_IMPORT_EXPLICIT, /* import a specific errcode from a section */
};
#endif /* _HAPROXY_HTTP_HTX_T_H */

@ -78,6 +78,7 @@ struct buffer *http_load_errorfile(const char *file, char **errmsg);
struct buffer *http_load_errormsg(const char *key, const struct ist msg, char **errmsg);
struct buffer *http_parse_errorfile(int status, const char *file, char **errmsg);
struct buffer *http_parse_errorloc(int errloc, int status, const char *url, char **errmsg);
int proxy_check_http_errors(struct proxy *px);
int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx, char **errmsg);
void proxy_release_conf_errors(struct proxy *px);

@ -57,6 +57,16 @@ size_t htx_add_data(struct htx *htx, const struct ist data);
struct htx_blk *htx_add_last_data(struct htx *htx, struct ist data);
void htx_move_blk_before(struct htx *htx, struct htx_blk **blk, struct htx_blk **ref);
int htx_append_msg(struct htx *dst, const struct htx *src);
struct buffer *htx_move_to_small_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_move_to_large_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_copy_to_small_buffer(struct buffer *dst, struct buffer *src);
struct buffer *htx_copy_to_large_buffer(struct buffer *dst, struct buffer *src);
#define HTX_XFER_DEFAULT 0x00000000 /* Default XFER: no partial xfer / remove blocks from source */
#define HTX_XFER_KEEP_SRC_BLKS 0x00000001 /* Don't remove xfer blocks from source messages during xfer */
#define HTX_XFER_PARTIAL_HDRS_COPY 0x00000002 /* Allow partial copy of headers and trailers part */
#define HTX_XFER_HDRS_ONLY 0x00000003 /* Only transfer header blocks (start-line, headers and EOH) */
size_t htx_xfer(struct htx *dst, struct htx *src, size_t count, unsigned int flags);
/* Functions and macros to get parts of the start-line or length of these
* parts. Request and response start-lines are both composed of 3 parts.

@ -124,6 +124,12 @@ static inline int real_family(int ss_family)
return fam ? fam->real_family : AF_UNSPEC;
}
static inline int proto_is_quic(const struct protocol *proto)
{
return (proto->proto_type == PROTO_TYPE_DGRAM &&
proto->xprt_type == PROTO_TYPE_STREAM);
}
#endif /* _HAPROXY_PROTOCOL_H */
/*

@ -156,14 +156,17 @@ enum PR_SRV_STATE_FILE {
#define PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP 0x01000000 /* preserve request header names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_MASK 0x01c00000 /* mask for restrict-http-header-names option */
/* unused : 0x02000000 ... 0x08000000 */
/* server health checks */
#define PR_O2_CHK_NONE 0x00000000 /* no L7 health checks configured (TCP by default) */
#define PR_O2_TCPCHK_CHK 0x90000000 /* use TCPCHK check for server health */
#define PR_O2_EXT_CHK 0xA0000000 /* use external command for server health */
/* unused: 0xB0000000 to 0xF0000000, reserved for health checks */
#define PR_O2_CHK_ANY 0xF0000000 /* Mask to cover any check */
#define PR_O2_CHK_NONE 0x00000000 /* no L7 health checks configured (TCP by default) */
#define PR_O2_TCPCHK_CHK 0x02000000 /* use TCPCHK check for server health */
#define PR_O2_EXT_CHK 0x04000000 /* use external command for server health */
#define PR_O2_CHK_ANY 0x06000000 /* Mask to cover any check */
#define PR_O2_USE_SBUF_QUEUE 0x08000000 /* use small buffers for requests when streams are queued */
#define PR_O2_USE_SBUF_L7_RETRY 0x10000000 /* use small buffers for requests when L7 retries are enabled */
#define PR_O2_USE_SBUF_CHECK 0x20000000 /* use small buffers for request health checks */
#define PR_O2_USE_SBUF_ALL 0x38000000 /* all flags for the use-small-buffers option */
/* unused : 0x40000000 ... 0x80000000 */
/* end of proxy->options2 */
/* bits for proxy->options3 */

@ -130,20 +130,22 @@ struct notification {
* on return.
*/
#define TASK_COMMON \
struct { \
unsigned int state; /* task state : bitfield of TASK_ */ \
int tid; /* tid of task/tasklet. <0 = local for tasklet, unbound for task */ \
struct task *(*process)(struct task *t, void *ctx, unsigned int state); /* the function which processes the task */ \
void *context; /* the task's context */ \
const struct ha_caller *caller; /* call place of last wakeup(); 0 on init, -1 on free */ \
uint32_t wake_date; /* date of the last task wakeup */ \
unsigned int calls; /* number of times process was called */ \
TASK_DEBUG_STORAGE; \
}
unsigned int state; /* task state : bitfield of TASK_ */ \
int tid; /* tid of task/tasklet. <0 = local for tasklet, unbound for task */ \
struct task *(*process)(struct task *t, void *ctx, unsigned int state); /* the function which processes the task */ \
void *context; /* the task's context */ \
const struct ha_caller *caller; /* call place of last wakeup(); 0 on init, -1 on free */ \
uint32_t wake_date; /* date of the last task wakeup */ \
unsigned int calls; /* number of times process was called */ \
TASK_DEBUG_STORAGE; \
short last_run; /* 16-bit now_ms of last run */
/* a 16- or 48-bit hole remains here and is used by task */
/* The base for all tasks */
struct task {
TASK_COMMON; /* must be at the beginning! */
short nice; /* task prio from -1024 to +1024 */
int expire; /* next expiration date for this task, in ticks */
struct eb32_node rq; /* ebtree node used to hold the task in the run queue */
/* WARNING: the struct task is often aliased as a struct tasklet when
* it is NOT in the run queue. The tasklet has its struct list here
@ -151,14 +153,12 @@ struct task {
* ever reorder these fields without taking this into account!
*/
struct eb32_node wq; /* ebtree node used to hold the task in the wait queue */
int expire; /* next expiration date for this task, in ticks */
short nice; /* task prio from -1024 to +1024 */
/* 16-bit hole here */
};
/* lightweight tasks, without priority, mainly used for I/Os */
struct tasklet {
TASK_COMMON; /* must be at the beginning! */
/* 48-bit hole here */
struct list list;
/* WARNING: the struct task is often aliased as a struct tasklet when
* it is not in the run queue. The task has its struct rq here where

@ -121,6 +121,7 @@ enum tcpcheck_rule_type {
/* Unused 0x000000A0..0x00000FF0 (reserved for future proto) */
#define TCPCHK_RULES_TCP_CHK 0x00000FF0
#define TCPCHK_RULES_PROTO_CHK 0x00000FF0 /* Mask to cover protocol check */
#define TCPCHK_RULES_MAY_USE_SBUF 0x00001000 /* checks may try to use small buffers if possible for the request */
struct check;
struct tcpcheck_connect {

@ -15,6 +15,7 @@
#include <haproxy/acme-t.h>
#include <haproxy/base64.h>
#include <haproxy/intops.h>
#include <haproxy/cfgparse.h>
#include <haproxy/cli.h>
#include <haproxy/errors.h>
@ -266,7 +267,6 @@ static int cfg_parse_acme(const char *file, int linenum, char **args, int kwm)
mark_tainted(TAINTED_CONFIG_EXP_KW_DECLARED);
if (strcmp(args[0], "acme") == 0) {
struct acme_cfg *tmp_acme = acme_cfgs;
if (alertif_too_many_args(1, file, linenum, args, &err_code))
goto out;
@ -292,7 +292,7 @@ static int cfg_parse_acme(const char *file, int linenum, char **args, int kwm)
* name */
err_code |= ERR_ALERT | ERR_FATAL;
ha_alert("parsing [%s:%d]: acme section '%s' already exists (%s:%d).\n",
file, linenum, args[1], tmp_acme->filename, tmp_acme->linenum);
file, linenum, args[1], cur_acme->filename, cur_acme->linenum);
goto out;
}
@ -1188,7 +1188,7 @@ int acme_res_certificate(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1261,7 +1261,7 @@ int acme_res_chkorder(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1344,7 +1344,6 @@ int acme_req_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
csr->data = ret;
chunk_printf(req_in, "{ \"csr\": \"%.*s\" }", (int)csr->data, csr->area);
OPENSSL_free(data);
if (acme_jws_payload(req_in, ctx->nonce, ctx->finalize, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
@ -1358,6 +1357,7 @@ int acme_req_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
error:
memprintf(errmsg, "couldn't request the finalize URL");
out:
OPENSSL_free(data);
free_trash_chunk(req_in);
free_trash_chunk(req_out);
free_trash_chunk(csr);
@ -1391,7 +1391,7 @@ int acme_res_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1492,7 +1492,7 @@ enum acme_ret acme_res_challenge(struct task *task, struct acme_ctx *ctx, struct
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1618,7 +1618,7 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
}
@ -1654,6 +1654,19 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
auth->dns = istdup(ist2(t2->area, t2->data));
ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.status", trash.area, trash.size);
if (ret == -1) {
memprintf(errmsg, "couldn't get a \"status\" from Authorization URL \"%s\"", auth->auth.ptr);
goto error;
}
trash.data = ret;
/* if auth is already valid we need to skip solving challenges */
if (strncasecmp("valid", trash.area, trash.data) == 0) {
auth->validated = 1;
goto out;
}
/* get the multiple challenges and select the one from the configuration */
for (i = 0; ; i++) {
int ret;
@ -1761,6 +1774,7 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
break;
}
out:
ret = 0;
error:
@ -1849,7 +1863,7 @@ int acme_res_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
/* get the order URL */
if (isteqi(hdr->n, ist("Location"))) {
@ -2009,7 +2023,7 @@ int acme_res_account(struct task *task, struct acme_ctx *ctx, int newaccount, ch
}
/* get the next retry timing */
if (isteqi(hdr->n, ist("Retry-After"))) {
ctx->retryafter = atol(hdr->v.ptr);
ctx->retryafter = __strl2uic(hdr->v.ptr, hdr->v.len);
}
if (isteqi(hdr->n, ist("Replay-Nonce"))) {
istfree(&ctx->nonce);
@ -2263,6 +2277,14 @@ re:
break;
case ACME_CHALLENGE:
if (http_st == ACME_HTTP_REQ) {
/* if challenge is already validated we skip this stage */
if (ctx->next_auth->validated) {
if ((ctx->next_auth = ctx->next_auth->next) == NULL) {
st = ACME_CHKCHALLENGE;
ctx->next_auth = ctx->auths;
}
goto nextreq;
}
/* if the challenge is not ready, wait to be wakeup */
if (!ctx->next_auth->ready)
@ -2292,6 +2314,14 @@ re:
break;
case ACME_CHKCHALLENGE:
if (http_st == ACME_HTTP_REQ) {
/* if challenge is already validated we skip this stage */
if (ctx->next_auth->validated) {
if ((ctx->next_auth = ctx->next_auth->next) == NULL)
st = ACME_FINALIZE;
goto nextreq;
}
if (acme_post_as_get(task, ctx, ctx->next_auth->chall, &errmsg) != 0)
goto retry;
}
@ -2526,9 +2556,9 @@ X509_REQ *acme_x509_req(EVP_PKEY *pkey, char **san)
{
struct buffer *san_trash = NULL;
X509_REQ *x = NULL;
X509_NAME *nm;
X509_NAME *nm = NULL;
STACK_OF(X509_EXTENSION) *exts = NULL;
X509_EXTENSION *ext_san;
X509_EXTENSION *ext_san = NULL;
char *str_san = NULL;
int i = 0;
@ -2559,26 +2589,36 @@ X509_REQ *acme_x509_req(EVP_PKEY *pkey, char **san)
for (i = 0; san[i]; i++) {
chunk_appendf(san_trash, "%sDNS:%s", i ? "," : "", san[i]);
}
str_san = my_strndup(san_trash->area, san_trash->data);
if ((str_san = my_strndup(san_trash->area, san_trash->data)) == NULL)
goto error;
if ((ext_san = X509V3_EXT_conf_nid(NULL, NULL, NID_subject_alt_name, str_san)) == NULL)
goto error;
if (!sk_X509_EXTENSION_push(exts, ext_san))
goto error;
ext_san = NULL; /* handle double-free upon error */
if (!X509_REQ_add_extensions(x, exts))
goto error;
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
if (!X509_REQ_sign(x, pkey, EVP_sha256()))
goto error;
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
X509_NAME_free(nm);
free(str_san);
free_trash_chunk(san_trash);
return x;
error:
X509_EXTENSION_free(ext_san);
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
X509_REQ_free(x);
X509_NAME_free(nm);
free(str_san);
free_trash_chunk(san_trash);
return NULL;
@ -2627,7 +2667,7 @@ EVP_PKEY *acme_gen_tmp_pkey()
/* start an ACME task */
static int acme_start_task(struct ckch_store *store, char **errmsg)
{
struct task *task;
struct task *task = NULL;
struct acme_ctx *ctx = NULL;
struct acme_cfg *cfg;
struct ckch_store *newstore = NULL;
@ -2712,6 +2752,8 @@ err:
HA_RWLOCK_WRUNLOCK(OTHER_LOCK, &acme_lock);
acme_ctx_destroy(ctx);
}
if (task)
task_destroy(task);
memprintf(errmsg, "%sCan't start the ACME client.", *errmsg ? *errmsg : "");
return 1;
}
@ -2721,7 +2763,10 @@ static int cli_acme_renew_parse(char **args, char *payload, struct appctx *appct
struct ckch_store *store = NULL;
char *errmsg = NULL;
if (!*args[1]) {
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[2]) {
memprintf(&errmsg, ": not enough parameters\n");
goto err;
}
@ -2760,8 +2805,11 @@ static int cli_acme_chall_ready_parse(char **args, char *payload, struct appctx
int remain = 0;
struct ebmb_node *node = NULL;
if (!*args[2] && !*args[3] && !*args[4]) {
memprintf(&msg, ": not enough parameters\n");
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[2] || !*args[3] || !*args[4]) {
memprintf(&msg, "Not enough parameters: \"acme challenge_ready <certfile> domain <domain>\"\n");
goto err;
}
@ -2882,8 +2930,12 @@ end:
return 1;
}
static int cli_acme_ps(char **args, char *payload, struct appctx *appctx, void *private)
static int cli_acme_parse_status(char **args, char *payload, struct appctx *appctx, void *private)
{
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
return 0;
}
@ -2891,7 +2943,7 @@ static int cli_acme_ps(char **args, char *payload, struct appctx *appctx, void *
static struct cli_kw_list cli_kws = {{ },{
{ { "acme", "renew", NULL }, "acme renew <certfile> : renew a certificate using the ACME protocol", cli_acme_renew_parse, NULL, NULL, NULL, 0 },
{ { "acme", "status", NULL }, "acme status : show status of certificates configured with ACME", cli_acme_ps, cli_acme_status_io_handler, NULL, NULL, 0 },
{ { "acme", "status", NULL }, "acme status : show status of certificates configured with ACME", cli_acme_parse_status, cli_acme_status_io_handler, NULL, NULL, 0 },
{ { "acme", "challenge_ready", NULL }, "acme challenge_ready <certfile> domain <domain> : notify HAProxy that the ACME challenge is ready", cli_acme_chall_ready_parse, NULL, NULL, NULL, 0 },
{ { NULL }, NULL, NULL, NULL }
}};

@ -511,7 +511,7 @@ size_t appctx_htx_rcv_buf(struct appctx *appctx, struct buffer *buf, size_t coun
goto out;
}
htx_xfer_blks(buf_htx, appctx_htx, count, HTX_BLK_UNUSED);
htx_xfer(buf_htx, appctx_htx, count, HTX_XFER_DEFAULT);
buf_htx->flags |= (appctx_htx->flags & (HTX_FL_PARSING_ERROR|HTX_FL_PROCESSING_ERROR));
if (htx_is_empty(appctx_htx)) {
buf_htx->flags |= (appctx_htx->flags & HTX_FL_EOM);
@ -608,7 +608,7 @@ size_t appctx_htx_snd_buf(struct appctx *appctx, struct buffer *buf, size_t coun
goto end;
}
htx_xfer_blks(appctx_htx, buf_htx, count, HTX_BLK_UNUSED);
htx_xfer(appctx_htx, buf_htx, count, HTX_XFER_DEFAULT);
if (htx_is_empty(buf_htx)) {
appctx_htx->flags |= (buf_htx->flags & HTX_FL_EOM);
}

@ -1358,14 +1358,15 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
goto out;
}
err_code |= warnif_misplaced_http_req(curproxy, file, linenum, args[0], NULL);
if (warnif_misplaced_http_req(curproxy, file, linenum, args[0], NULL))
err_code |= ERR_WARN;
if (curproxy->cap & PR_CAP_FE)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_req_rules, &rule->list);
@ -1400,7 +1401,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_res_rules, &rule->list);
@ -1434,7 +1435,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_after_res_rules, &rule->list);
@ -1491,14 +1492,15 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
LIST_APPEND(&curproxy->redirect_rules, &rule->list);
err_code |= warnif_misplaced_redirect(curproxy, file, linenum, args[0], NULL);
if (warnif_misplaced_redirect(curproxy, file, linenum, args[0], NULL))
err_code |= ERR_WARN;
if (curproxy->cap & PR_CAP_FE)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
}
else if (strcmp(args[0], "use_backend") == 0) {
@ -1528,7 +1530,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
}
else if (*args[2]) {
@ -1591,7 +1593,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1646,7 +1648,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
* where force-persist is applied.
*/
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_REQ_CNT, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1814,7 +1816,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_STO_RUL, &errmsg);
else
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1872,7 +1874,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
@ -1952,7 +1954,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
if (errmsg && *errmsg)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->uri_auth->http_req_rules, &rule->list);
@ -2200,6 +2202,42 @@ stats_error_parsing:
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
else if (strcmp(args[1], "use-small-buffers") == 0) {
unsigned int flags = PR_O2_USE_SBUF_ALL;
if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL)) {
err_code |= ERR_WARN;
goto out;
}
if (*(args[2])) {
int cur_arg;
flags = 0;
for (cur_arg = 2; *(args[cur_arg]); cur_arg++) {
if (strcmp(args[cur_arg], "queue") == 0)
flags |= PR_O2_USE_SBUF_QUEUE;
else if (strcmp(args[cur_arg], "l7-retries") == 0)
flags |= PR_O2_USE_SBUF_L7_RETRY;
else if (strcmp(args[cur_arg], "check") == 0)
flags |= PR_O2_USE_SBUF_CHECK;
else {
ha_alert("parsing [%s:%d] : invalid parameter '%s'. option '%s' expects 'queue', 'l7-retries' or 'check' value.\n",
file, linenum, args[cur_arg], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
}
}
if (kwm == KWM_STD) {
curproxy->options2 &= ~PR_O2_USE_SBUF_ALL;
curproxy->options2 |= flags;
}
else if (kwm == KWM_NO) {
curproxy->options2 &= ~flags;
}
goto out;
}
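Assuming the keyword syntax accepted by the parser above, a hypothetical configuration using the new option might look like this (backend names are illustrative):

```
backend app
    # enable small request buffers only for queued streams and health checks
    option use-small-buffers queue check

backend legacy
    # explicitly disable all small-buffer usage inherited from defaults
    no option use-small-buffers
```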
if (kwm != KWM_STD) {
ha_alert("parsing [%s:%d]: negation/default is not supported for option '%s'.\n",
@ -2557,7 +2595,8 @@ stats_error_parsing:
goto out;
}
err_code |= warnif_misplaced_monitor(curproxy, file, linenum, args[0], args[1]);
if (warnif_misplaced_monitor(curproxy, file, linenum, args[0], args[1]))
err_code |= ERR_WARN;
if ((cond = build_acl_cond(file, linenum, &curproxy->acl, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
ha_alert("parsing [%s:%d] : error detected while parsing a '%s %s' condition : %s.\n",
file, linenum, args[0], args[1], errmsg);

@ -63,6 +63,7 @@
#include <haproxy/global.h>
#include <haproxy/http_ana.h>
#include <haproxy/http_rules.h>
#include <haproxy/http_htx.h>
#include <haproxy/lb_chash.h>
#include <haproxy/lb_fas.h>
#include <haproxy/lb_fwlc.h>
@ -2318,6 +2319,18 @@ int check_config_validity()
"Please fix either value to remove this warning.\n",
global.tune.bufsize_large, global.tune.bufsize);
global.tune.bufsize_large = 0;
err_code |= ERR_WARN;
}
}
if (global.tune.bufsize_small > 0) {
if (global.tune.bufsize_small == global.tune.bufsize)
global.tune.bufsize_small = 0;
else if (global.tune.bufsize_small > global.tune.bufsize) {
ha_warning("invalid small buffer size %d bytes which is greater than the default bufsize %d bytes.\n",
global.tune.bufsize_small, global.tune.bufsize);
global.tune.bufsize_small = 0;
err_code |= ERR_WARN;
}
}
@ -2392,6 +2405,8 @@ int check_config_validity()
else {
cfgerr += acl_find_targets(defpx);
}
err_code |= proxy_check_http_errors(defpx);
}
/* starting to initialize the main proxies list */

@ -1515,13 +1515,15 @@ int check_buf_available(void *target)
/*
* Allocate a buffer. If it fails, it adds the check in buffer wait queue.
*/
struct buffer *check_get_buf(struct check *check, struct buffer *bptr)
struct buffer *check_get_buf(struct check *check, struct buffer *bptr, unsigned int small_buffer)
{
struct buffer *buf = NULL;
if (likely(!LIST_INLIST(&check->buf_wait.list)) &&
unlikely((buf = b_alloc(bptr, DB_CHANNEL)) == NULL)) {
b_queue(DB_CHANNEL, &check->buf_wait, check, check_buf_available);
if (small_buffer == 0 || (buf = b_alloc_small(bptr)) == NULL) {
if (likely(!LIST_INLIST(&check->buf_wait.list)) &&
unlikely((buf = b_alloc(bptr, DB_CHANNEL)) == NULL)) {
b_queue(DB_CHANNEL, &check->buf_wait, check, check_buf_available);
}
}
return buf;
}
@ -1533,8 +1535,11 @@ struct buffer *check_get_buf(struct check *check, struct buffer *bptr)
void check_release_buf(struct check *check, struct buffer *bptr)
{
if (bptr->size) {
int defbuf = b_is_default(bptr);
b_free(bptr);
offer_buffers(check->buf_wait.target, 1);
if (defbuf)
offer_buffers(check->buf_wait.target, 1);
}
}
@ -1654,7 +1659,6 @@ int start_check_task(struct check *check, int mininter,
*/
static int start_checks()
{
struct proxy *px;
struct server *s;
char *errmsg = NULL;
@ -1681,6 +1685,10 @@ static int start_checks()
*/
for (px = proxies_list; px; px = px->next) {
for (s = px->srv; s; s = s->next) {
if ((px->options2 & PR_O2_USE_SBUF_CHECK) &&
(s->check.tcpcheck_rules->flags & TCPCHK_RULES_MAY_USE_SBUF))
s->check.state |= CHK_ST_USE_SMALL_BUFF;
if (s->check.state & CHK_ST_CONFIGURED) {
nbcheck++;
if ((srv_getinter(&s->check) >= SRV_CHK_INTER_THRES) &&
@ -1805,7 +1813,15 @@ int init_srv_check(struct server *srv)
* specified.
*/
if (!srv->check.port && !is_addr(&srv->check.addr)) {
if (!srv->check.use_ssl && srv->use_ssl != -1)
/*
* If any setting is set for the check, then we can't
* assume we'll use the same XPRT as the server: the
* server may be QUIC, while we want a TCP check.
*/
if (!srv->check.use_ssl && srv->use_ssl != -1 &&
!srv->check.via_socks4 && !srv->check.send_proxy &&
(!srv->check.alpn_len || (srv->check.alpn_len == srv->ssl_ctx.alpn_len && !strncmp(srv->check.alpn_str, srv->ssl_ctx.alpn_str, srv->check.alpn_len))) &&
(!srv->check.mux_proto || srv->check.mux_proto != srv->mux_proto))
srv->check.xprt = srv->xprt;
else if (srv->check.use_ssl == 1)
srv->check.xprt = xprt_get(XPRT_SSL);
@ -2056,6 +2072,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
char **errmsg)
{
struct sockaddr_storage *sk;
struct protocol *proto;
int port1, port2, err_code = 0;
@ -2064,7 +2081,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
goto error;
}
sk = str2sa_range(args[*cur_arg+1], NULL, &port1, &port2, NULL, NULL, NULL, errmsg, NULL, NULL, NULL,
sk = str2sa_range(args[*cur_arg+1], NULL, &port1, &port2, NULL, &proto, NULL, errmsg, NULL, NULL, NULL,
PA_O_RESOLVE | PA_O_PORT_OK | PA_O_STREAM | PA_O_CONNECT);
if (!sk) {
memprintf(errmsg, "'%s' : %s", args[*cur_arg], *errmsg);
@ -2072,6 +2089,7 @@ static int srv_parse_addr(char **args, int *cur_arg, struct proxy *curpx, struct
}
srv->check.addr = *sk;
srv->check.proto = proto;
/* if agentaddr was never set, we can use addr */
if (!(srv->flags & SRV_F_AGENTADDR))
srv->agent.addr = *sk;
@ -2101,7 +2119,11 @@ static int srv_parse_agent_addr(char **args, int *cur_arg, struct proxy *curpx,
goto error;
}
set_srv_agent_addr(srv, &sk);
/* Agent currently only uses TCP */
if (sk.ss_family == AF_INET)
srv->agent.proto = &proto_tcpv4;
else
srv->agent.proto = &proto_tcpv6;
out:
return err_code;

@ -53,6 +53,22 @@ struct pool_head *pool_head_large_trash __read_mostly = NULL;
/* this is used to drain data, and as a temporary large buffer */
THREAD_LOCAL struct buffer trash_large = { };
/* small trash chunks used for various conversions */
static THREAD_LOCAL struct buffer *small_trash_chunk;
static THREAD_LOCAL struct buffer small_trash_chunk1;
static THREAD_LOCAL struct buffer small_trash_chunk2;
/* small trash buffers used for various conversions */
static int small_trash_size __read_mostly = 0;
static THREAD_LOCAL char *small_trash_buf1 = NULL;
static THREAD_LOCAL char *small_trash_buf2 = NULL;
/* the trash pool for reentrant allocations */
struct pool_head *pool_head_small_trash __read_mostly = NULL;
/* this is used to drain data, and as a temporary small buffer */
THREAD_LOCAL struct buffer trash_small = { };
/*
* Returns a pre-allocated and initialized trash chunk that can be used for any
* type of conversion. Two chunks and their respective buffers are alternatively
@ -103,14 +119,40 @@ struct buffer *get_large_trash_chunk(void)
return large_trash_chunk;
}
/* Similar to get_trash_chunk() but return a pre-allocated small chunk
* instead. Because small buffers are not enabled by default, this function may
* return NULL.
*/
struct buffer *get_small_trash_chunk(void)
{
char *small_trash_buf;
if (!small_trash_size)
return NULL;
if (small_trash_chunk == &small_trash_chunk1) {
small_trash_chunk = &small_trash_chunk2;
small_trash_buf = small_trash_buf2;
}
else {
small_trash_chunk = &small_trash_chunk1;
small_trash_buf = small_trash_buf1;
}
*small_trash_buf = 0;
chunk_init(small_trash_chunk, small_trash_buf, small_trash_size);
return small_trash_chunk;
}
/* Returns a trash chunk accordingly to the requested size. This function may
* fail if the requested size is too big or if the large chunks are not
* configured.
*/
struct buffer *get_trash_chunk_sz(size_t size)
{
if (likely(size <= trash_size))
return get_trash_chunk();
if (likely(size > small_trash_size && size <= trash_size))
return get_trash_chunk();
else if (small_trash_size && size <= small_trash_size)
return get_small_trash_chunk();
else if (large_trash_size && size <= large_trash_size)
return get_large_trash_chunk();
else
@ -122,17 +164,20 @@ struct buffer *get_trash_chunk_sz(size_t size)
*/
struct buffer *get_larger_trash_chunk(struct buffer *chk)
{
struct buffer *chunk;
struct buffer *chunk = NULL;
if (!chk)
return get_trash_chunk();
if (!chk || chk->size == small_trash_size) {
/* no chunk or a small one, use a regular buffer */
chunk = get_trash_chunk();
}
else if (large_trash_size && chk->size <= large_trash_size) {
/* a regular buffer, use a large buffer if possible */
chunk = get_large_trash_chunk();
}
/* No large buffers or current chunk is already a large trash chunk */
if (!large_trash_size || chk->size == large_trash_size)
return NULL;
if (chk && chunk)
b_xfer(chunk, chk, b_data(chk));
chunk = get_large_trash_chunk();
b_xfer(chunk, chk, b_data(chk));
return chunk;
}
@ -166,9 +211,29 @@ static int alloc_large_trash_buffers(int bufsize)
return trash_large.area && large_trash_buf1 && large_trash_buf2;
}
/* allocates the trash small buffers if necessary. Returns 0 in case of
* failure. Unlike alloc_trash_buffers(), this function is not expected to be
* called multiple times. Small buffers are not used during configuration parsing.
*/
static int alloc_small_trash_buffers(int bufsize)
{
small_trash_size = bufsize;
if (!small_trash_size)
return 1;
BUG_ON(trash_small.area && small_trash_buf1 && small_trash_buf2);
chunk_init(&trash_small, my_realloc2(trash_small.area, bufsize), bufsize);
small_trash_buf1 = (char *)my_realloc2(small_trash_buf1, bufsize);
small_trash_buf2 = (char *)my_realloc2(small_trash_buf2, bufsize);
return trash_small.area && small_trash_buf1 && small_trash_buf2;
}
static int alloc_trash_buffers_per_thread()
{
return alloc_trash_buffers(global.tune.bufsize) && alloc_large_trash_buffers(global.tune.bufsize_large);
return (alloc_trash_buffers(global.tune.bufsize) &&
alloc_large_trash_buffers(global.tune.bufsize_large) &&
alloc_small_trash_buffers(global.tune.bufsize_small));
}
static void free_trash_buffers_per_thread()
@ -180,6 +245,10 @@ static void free_trash_buffers_per_thread()
chunk_destroy(&trash_large);
ha_free(&large_trash_buf2);
ha_free(&large_trash_buf1);
chunk_destroy(&trash_small);
ha_free(&small_trash_buf2);
ha_free(&small_trash_buf1);
}
/* Initialize the trash buffers. It returns 0 if an error occurred. */
@ -207,6 +276,14 @@ int init_trash_buffers(int first)
if (!pool_head_large_trash)
return 0;
}
if (!first && global.tune.bufsize_small) {
pool_head_small_trash = create_pool("small_trash",
sizeof(struct buffer) + global.tune.bufsize_small,
MEM_F_EXACT);
if (!pool_head_small_trash)
return 0;
}
return 1;
}


@ -24,6 +24,7 @@
struct pool_head *pool_head_buffer __read_mostly;
struct pool_head *pool_head_large_buffer __read_mostly = NULL;
struct pool_head *pool_head_small_buffer __read_mostly;
/* perform minimal initializations, report 0 in case of error, 1 if OK. */
int init_buffer()
@ -43,6 +44,12 @@ int init_buffer()
return 0;
}
if (global.tune.bufsize_small) {
pool_head_small_buffer = create_aligned_pool("small_buffer", global.tune.bufsize_small, 64, MEM_F_SHARED|MEM_F_EXACT);
if (!pool_head_small_buffer)
return 0;
}
/* make sure any change to the queues assignment isn't overlooked */
BUG_ON(DB_PERMANENT - DB_UNLIKELY - 1 != DYNBUF_NBQ);
BUG_ON(DB_MUX_RX_Q < DB_SE_RX_Q || DB_MUX_RX_Q >= DYNBUF_NBQ);


@ -136,6 +136,10 @@ static int cli_parse_show_ech(char **args, char *payload,
{
struct show_ech_ctx *ctx = applet_reserve_svcctx(appctx, sizeof(*ctx));
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
/* no parameter, shows only file list */
if (*args[3]) {
SSL_CTX *sctx = NULL;
@ -297,6 +301,9 @@ static int cli_parse_add_ech(char **args, char *payload, struct appctx *appctx,
OSSL_ECHSTORE *es = NULL;
BIO *es_in = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3] || !payload)
return cli_err(appctx, "syntax: add ssl ech <name> <PEM file content>");
if (cli_find_ech_specific_ctx(args[3], &sctx) != 1)
@ -324,6 +331,9 @@ static int cli_parse_set_ech(char **args, char *payload, struct appctx *appctx,
OSSL_ECHSTORE *es = NULL;
BIO *es_in = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3] || !payload)
return cli_err(appctx, "syntax: set ssl ech <name> <PEM file content>");
if (cli_find_ech_specific_ctx(args[3], &sctx) != 1)
@ -351,6 +361,9 @@ static int cli_parse_del_ech(char **args, char *payload, struct appctx *appctx,
char success_message[ECH_SUCCESS_MSG_MAX];
OSSL_ECHSTORE *es = NULL;
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
if (!*args[3])
return cli_err(appctx, "syntax: del ssl ech <name>");
if (*args[4])


@ -1114,7 +1114,7 @@ static int spoe_process_event(struct stream *s, struct spoe_context *ctx,
}
else if (ret == 0) {
if ((s->scf->flags & SC_FL_ERROR) ||
((s->scf->flags & (SC_FL_EOS|SC_FL_ABRT_DONE)) && proxy_abrt_close_def(s->be, 1))) {
((s->scf->flags & SC_FL_EOS) && proxy_abrt_close_def(s->be, 1))) {
ctx->status_code = SPOE_CTX_ERR_INTERRUPT;
spoe_stop_processing(agent, ctx);
spoe_handle_processing_error(s, agent, ctx, dir);


@ -2828,8 +2828,7 @@ static enum rule_result http_req_get_intercept_rule(struct proxy *px, struct lis
int act_opts = 0;
if ((s->scf->flags & SC_FL_ERROR) ||
((s->scf->flags & (SC_FL_EOS|SC_FL_ABRT_DONE)) &&
proxy_abrt_close_def(px, 1)))
((s->scf->flags & SC_FL_EOS) && proxy_abrt_close_def(px, 1)))
act_opts |= ACT_OPT_FINAL | ACT_OPT_FINAL_EARLY;
/* If "the current_rule_list" matches the executed rule list, we are in
@ -3020,8 +3019,7 @@ static enum rule_result http_res_get_intercept_rule(struct proxy *px, struct lis
if (final)
act_opts |= ACT_OPT_FINAL;
if ((s->scf->flags & SC_FL_ERROR) ||
((s->scf->flags & (SC_FL_EOS|SC_FL_ABRT_DONE)) &&
proxy_abrt_close_def(px, 1)))
((s->scf->flags & SC_FL_EOS) && proxy_abrt_close_def(px, 1)))
act_opts |= ACT_OPT_FINAL | ACT_OPT_FINAL_EARLY;
/* If "the current_rule_list" matches the executed rule list, we are in
@ -4337,20 +4335,10 @@ enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn,
}
if (channel_htx_full(chn, htx, global.tune.maxrewrite) || sc_waiting_room(chn_prod(chn))) {
struct buffer lbuf;
char *area;
struct buffer lbuf = BUF_NULL;
if (large_buffer == 0 || b_is_large(&chn->buf))
goto end; /* don't use large buffer or large buffer is full */
/* normal buffer is full, allocate a large one
*/
area = pool_alloc(pool_head_large_buffer);
if (!area)
goto end; /* Allocation failure: TODO must be improved to use buffer_wait */
lbuf = b_make(area, global.tune.bufsize_large, 0, 0);
htx_xfer_blks(htx_from_buf(&lbuf), htx, htx_used_space(htx), HTX_BLK_UNUSED);
htx_to_buf(htx, &chn->buf);
if (large_buffer == 0 || b_is_large(&chn->buf) || !htx_move_to_large_buffer(&lbuf, &chn->buf))
goto end; /* don't use large buffer or already a large buffer */
b_free(&chn->buf);
offer_buffers(s, 1);
chn->buf = lbuf;
@ -4366,8 +4354,7 @@ enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn,
/* we get here if we need to wait for more data */
if ((s->scf->flags & SC_FL_ERROR) ||
((s->scf->flags & (SC_FL_EOS|SC_FL_ABRT_DONE)) &&
proxy_abrt_close_def(s->be, 1)))
((s->scf->flags & SC_FL_EOS) && proxy_abrt_close_def(s->be, 1)))
ret = HTTP_RULE_RES_CONT;
else if (!(chn_prod(chn)->flags & (SC_FL_ERROR|SC_FL_EOS|SC_FL_ABRT_DONE))) {
if (!tick_isset(chn->analyse_exp))


@ -604,10 +604,7 @@ void httpclient_applet_io_handler(struct appctx *appctx)
htx_to_buf(htx, outbuf);
b_xfer(outbuf, &hc->req.buf, b_data(&hc->req.buf));
} else {
struct htx_ret ret;
ret = htx_xfer_blks(htx, hc_htx, htx_used_space(hc_htx), HTX_BLK_UNUSED);
if (!ret.ret) {
if (!htx_xfer(htx, hc_htx, htx_used_space(hc_htx), HTX_XFER_DEFAULT)) {
applet_have_more_data(appctx);
goto out;
}
@ -711,7 +708,6 @@ void httpclient_applet_io_handler(struct appctx *appctx)
if (hc->options & HTTPCLIENT_O_RES_HTX) {
/* HTX mode transfers the header to the hc buffer */
struct htx *hc_htx;
struct htx_ret ret;
if (!b_alloc(&hc->res.buf, DB_MUX_TX)) {
applet_wont_consume(appctx);
@ -720,8 +716,7 @@ void httpclient_applet_io_handler(struct appctx *appctx)
hc_htx = htxbuf(&hc->res.buf);
/* xfer the headers */
ret = htx_xfer_blks(hc_htx, htx, htx_used_space(htx), HTX_BLK_EOH);
if (!ret.ret) {
if (!htx_xfer(hc_htx, htx, htx_used_space(htx), HTX_XFER_HDRS_ONLY)) {
applet_need_more_data(appctx);
goto out;
}
@ -811,12 +806,10 @@ void httpclient_applet_io_handler(struct appctx *appctx)
if (hc->options & HTTPCLIENT_O_RES_HTX) {
/* HTX mode transfers the header to the hc buffer */
struct htx *hc_htx;
struct htx_ret ret;
hc_htx = htxbuf(&hc->res.buf);
ret = htx_xfer_blks(hc_htx, htx, htx_used_space(htx), HTX_BLK_UNUSED);
if (!ret.ret)
if (!htx_xfer(hc_htx, htx, htx_used_space(htx), HTX_XFER_DEFAULT))
applet_wont_consume(appctx);
else
applet_fl_clr(appctx, APPCTX_FL_INBLK_FULL);


@ -41,17 +41,18 @@ struct list http_replies_list = LIST_HEAD_INIT(http_replies_list);
/* The declaration of an errorfiles/errorfile directives. Used during config
* parsing only. */
struct conf_errors {
char type; /* directive type (0: errorfiles, 1: errorfile) */
enum http_err_directive directive; /* directive type: inline (errorfile <code> <file>) / section (errorfiles <section>) */
union {
struct {
int status; /* the status code associated to this error */
struct http_reply *reply; /* the http reply for the errorfile */
} errorfile; /* describe an "errorfile" directive */
} inl; /* for HTTP_ERR_DIRECTIVE_INLINE only */
struct {
char *name; /* the http-errors section name */
char status[HTTP_ERR_SIZE]; /* list of status to import (0: ignore, 1: implicit import, 2: explicit import) */
} errorfiles; /* describe an "errorfiles" directive */
} info;
struct http_errors *resolved; /* resolved section pointer set via proxy_check_http_errors() */
enum http_err_import status[HTTP_ERR_SIZE]; /* list of status to import */
} section; /* for HTTP_ERR_DIRECTIVE_SECTION only */
} type;
char *file; /* file where the directive appears */
int line; /* line where the directive appears */
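The refactor above replaces a magic `char type` discriminant (0/1/2) with a named enum plus a named union, so each directive kind only exposes its own fields. A minimal sketch of that tagged-union pattern, with illustrative names rather than HAProxy's exact definitions:

```c
#include <assert.h>
#include <string.h>

/* Illustrative discriminant and import states, modeled on the diff above. */
enum err_directive { DIR_INLINE, DIR_SECTION };
enum err_import { IMPORT_NO = 0, IMPORT_IMPLICIT, IMPORT_EXPLICIT };

#define ERR_SIZE 8

struct conf_err_sketch {
	enum err_directive directive;     /* selects the live union member */
	union {
		struct { int status; } inl;   /* "errorfile <code> <file>" */
		struct {
			char name[32];
			enum err_import status[ERR_SIZE];
		} section;                    /* "errorfiles <section>" */
	} type;
};

/* Only the member selected by the discriminant may be read. */
static int err_status_count(const struct conf_err_sketch *e)
{
	int n = 0, i;

	switch (e->directive) {
	case DIR_INLINE:
		return 1;
	case DIR_SECTION:
		for (i = 0; i < ERR_SIZE; i++)
			if (e->type.section.status[i] != IMPORT_NO)
				n++;
		return n;
	}
	return 0;
}
```

Switching on the enum (rather than comparing against bare integers) also lets the compiler warn when a new directive kind is added but a switch is left unhandled.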
@ -2034,9 +2035,9 @@ static int proxy_parse_errorloc(char **args, int section, struct proxy *curpx,
ret = -1;
goto out;
}
conf_err->type = 1;
conf_err->info.errorfile.status = status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = status;
conf_err->type.inl.reply = reply;
conf_err->file = strdup(file);
conf_err->line = line;
@ -2105,9 +2106,9 @@ static int proxy_parse_errorfile(char **args, int section, struct proxy *curpx,
ret = -1;
goto out;
}
conf_err->type = 1;
conf_err->info.errorfile.status = status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = status;
conf_err->type.inl.reply = reply;
conf_err->file = strdup(file);
conf_err->line = line;
LIST_APPEND(&curpx->conf.errors, &conf_err->list);
@ -2146,12 +2147,12 @@ static int proxy_parse_errorfiles(char **args, int section, struct proxy *curpx,
memprintf(err, "%s : out of memory.", args[0]);
goto error;
}
conf_err->type = 0;
conf_err->info.errorfiles.name = name;
conf_err->directive = HTTP_ERR_DIRECTIVE_SECTION;
conf_err->type.section.name = name;
if (!*(args[2])) {
for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
conf_err->info.errorfiles.status[rc] = 1;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_IMPLICIT;
}
else {
int cur_arg, status;
@ -2160,7 +2161,7 @@ static int proxy_parse_errorfiles(char **args, int section, struct proxy *curpx,
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (http_err_codes[rc] == status) {
conf_err->info.errorfiles.status[rc] = 2;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_EXPLICIT;
break;
}
}
@ -2231,16 +2232,16 @@ static int proxy_parse_http_error(char **args, int section, struct proxy *curpx,
if (reply->type == HTTP_REPLY_ERRFILES) {
int rc = http_get_status_idx(reply->status);
conf_err->type = 2;
conf_err->info.errorfiles.name = reply->body.http_errors;
conf_err->info.errorfiles.status[rc] = 2;
conf_err->directive = HTTP_ERR_DIRECTIVE_SECTION;
conf_err->type.section.name = reply->body.http_errors;
conf_err->type.section.status[rc] = HTTP_ERR_IMPORT_EXPLICIT;
reply->body.http_errors = NULL;
release_http_reply(reply);
}
else {
conf_err->type = 1;
conf_err->info.errorfile.status = reply->status;
conf_err->info.errorfile.reply = reply;
conf_err->directive = HTTP_ERR_DIRECTIVE_INLINE;
conf_err->type.inl.status = reply->status;
conf_err->type.inl.reply = reply;
LIST_APPEND(&http_replies_list, &reply->list);
}
conf_err->file = strdup(file);
@ -2260,60 +2261,46 @@ static int proxy_parse_http_error(char **args, int section, struct proxy *curpx,
}
/* Check "errorfiles" proxy keyword */
static int proxy_check_errors(struct proxy *px)
/* Converts the <conf_errors> initialized during config parsing for the <px>
* proxy. Each one of them is transformed into an http_reply, stored in the
* proxy's replies array. The original <conf_errors> become unneeded and are
* thus removed and freed.
*/
static int proxy_finalize_http_errors(struct proxy *px)
{
struct conf_errors *conf_err, *conf_err_back;
struct http_errors *http_errs;
int rc, err = ERR_NONE;
int rc;
list_for_each_entry_safe(conf_err, conf_err_back, &px->conf.errors, list) {
if (conf_err->type == 1) {
/* errorfile */
rc = http_get_status_idx(conf_err->info.errorfile.status);
px->replies[rc] = conf_err->info.errorfile.reply;
switch (conf_err->directive) {
case HTTP_ERR_DIRECTIVE_INLINE:
rc = http_get_status_idx(conf_err->type.inl.status);
px->replies[rc] = conf_err->type.inl.reply;
/* For proxy, to rely on default replies, just don't reference a reply */
if (px->replies[rc]->type == HTTP_REPLY_ERRMSG && !px->replies[rc]->body.errmsg)
px->replies[rc] = NULL;
}
else {
/* errorfiles */
list_for_each_entry(http_errs, &http_errors_list, list) {
if (strcmp(http_errs->id, conf_err->info.errorfiles.name) == 0)
break;
}
break;
/* unknown http-errors section */
if (&http_errs->list == &http_errors_list) {
ha_alert("proxy '%s': unknown http-errors section '%s' (at %s:%d).\n",
px->id, conf_err->info.errorfiles.name, conf_err->file, conf_err->line);
err |= ERR_ALERT | ERR_FATAL;
free(conf_err->info.errorfiles.name);
goto next;
}
free(conf_err->info.errorfiles.name);
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->info.errorfiles.status[rc] > 0) {
case HTTP_ERR_DIRECTIVE_SECTION:
http_errs = conf_err->type.section.resolved;
if (http_errs) {
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->type.section.status[rc] == HTTP_ERR_IMPORT_NO)
continue;
if (http_errs->replies[rc])
px->replies[rc] = http_errs->replies[rc];
else if (conf_err->info.errorfiles.status[rc] == 2)
ha_warning("config: proxy '%s' : status '%d' not declared in"
" http-errors section '%s' (at %s:%d).\n",
px->id, http_err_codes[rc], http_errs->id,
conf_err->file, conf_err->line);
}
}
}
next:
LIST_DELETE(&conf_err->list);
free(conf_err->file);
free(conf_err);
}
out:
return err;
return ERR_NONE;
}
static int post_check_errors()
@ -2343,6 +2330,55 @@ static int post_check_errors()
return err_code;
}
/* Checks the validity of conf_errors stored in <px> proxy after the
* configuration is completely parsed.
*
* Returns ERR_NONE on success and a combination of ERR_CODE on failure.
*/
int proxy_check_http_errors(struct proxy *px)
{
struct http_errors *http_errs;
struct conf_errors *conf_err;
int section_found;
int rc, err = ERR_NONE;
list_for_each_entry(conf_err, &px->conf.errors, list) {
if (conf_err->directive == HTTP_ERR_DIRECTIVE_SECTION) {
section_found = 0;
list_for_each_entry(http_errs, &http_errors_list, list) {
if (strcmp(http_errs->id, conf_err->type.section.name) == 0) {
section_found = 1;
break;
}
}
if (!section_found) {
ha_alert("proxy '%s': unknown http-errors section '%s' (at %s:%d).\n",
px->id, conf_err->type.section.name, conf_err->file, conf_err->line);
ha_free(&conf_err->type.section.name);
err |= ERR_ALERT | ERR_FATAL;
continue;
}
conf_err->type.section.resolved = http_errs;
ha_free(&conf_err->type.section.name);
for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
if (conf_err->type.section.status[rc] == HTTP_ERR_IMPORT_EXPLICIT &&
!http_errs->replies[rc]) {
ha_warning("config: proxy '%s' : status '%d' not declared in"
" http-errors section '%s' (at %s:%d).\n",
px->id, http_err_codes[rc], http_errs->id,
conf_err->file, conf_err->line);
err |= ERR_WARN;
}
}
}
}
return err;
}
int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx, char **errmsg)
{
struct conf_errors *conf_err, *new_conf_err = NULL;
@ -2354,19 +2390,22 @@ int proxy_dup_default_conf_errors(struct proxy *curpx, const struct proxy *defpx
memprintf(errmsg, "unable to duplicate default errors (out of memory).");
goto out;
}
new_conf_err->type = conf_err->type;
if (conf_err->type == 1) {
new_conf_err->info.errorfile.status = conf_err->info.errorfile.status;
new_conf_err->info.errorfile.reply = conf_err->info.errorfile.reply;
}
else {
new_conf_err->info.errorfiles.name = strdup(conf_err->info.errorfiles.name);
if (!new_conf_err->info.errorfiles.name) {
new_conf_err->directive = conf_err->directive;
switch (conf_err->directive) {
case HTTP_ERR_DIRECTIVE_INLINE:
new_conf_err->type.inl.status = conf_err->type.inl.status;
new_conf_err->type.inl.reply = conf_err->type.inl.reply;
break;
case HTTP_ERR_DIRECTIVE_SECTION:
new_conf_err->type.section.name = strdup(conf_err->type.section.name);
if (!new_conf_err->type.section.name) {
memprintf(errmsg, "unable to duplicate default errors (out of memory).");
goto out;
}
memcpy(&new_conf_err->info.errorfiles.status, &conf_err->info.errorfiles.status,
sizeof(conf_err->info.errorfiles.status));
memcpy(&new_conf_err->type.section.status, &conf_err->type.section.status,
sizeof(conf_err->type.section.status));
break;
}
new_conf_err->file = strdup(conf_err->file);
new_conf_err->line = conf_err->line;
@ -2385,8 +2424,8 @@ void proxy_release_conf_errors(struct proxy *px)
struct conf_errors *conf_err, *conf_err_back;
list_for_each_entry_safe(conf_err, conf_err_back, &px->conf.errors, list) {
if (conf_err->type == 0)
free(conf_err->info.errorfiles.name);
if (conf_err->directive == HTTP_ERR_DIRECTIVE_SECTION)
free(conf_err->type.section.name);
LIST_DELETE(&conf_err->list);
free(conf_err->file);
free(conf_err);
@ -2505,7 +2544,7 @@ static struct cfg_kw_list cfg_kws = {ILH, {
}};
INITCALL1(STG_REGISTER, cfg_register_keywords, &cfg_kws);
REGISTER_POST_PROXY_CHECK(proxy_check_errors);
REGISTER_POST_PROXY_CHECK(proxy_finalize_http_errors);
REGISTER_POST_CHECK(post_check_errors);
REGISTER_CONFIG_SECTION("http-errors", cfg_parse_http_errors, NULL);

src/htx.c

@ -11,6 +11,7 @@
*/
#include <haproxy/chunk.h>
#include <haproxy/dynbuf.h>
#include <haproxy/global.h>
#include <haproxy/htx.h>
#include <haproxy/net_helper.h>
@ -719,10 +720,163 @@ struct htx_blk *htx_replace_blk_value(struct htx *htx, struct htx_blk *blk,
return blk;
}
/* Transfer HTX blocks from <src> to <dst>, stopping if <count> bytes were
* transferred (including payload and meta-data). It returns the number of bytes
* copied. By default, copied blocks are removed from <src> and only a full
* headers or trailers part can be moved. <flags> can be set to change the
* default behavior:
* - HTX_XFER_KEEP_SRC_BLKS: source blocks are not removed
* - HTX_XFER_PARTIAL_HDRS_COPY: a partial headers or trailers part can be xferred
* - HTX_XFER_HDRS_ONLY: only the headers part is xferred
*/
size_t htx_xfer(struct htx *dst, struct htx *src, size_t count, unsigned int flags)
{
struct htx_blk *blk, *last_dstblk;
size_t ret = 0;
uint32_t max, last_dstblk_sz;
int dst_full = 0;
last_dstblk = NULL;
last_dstblk_sz = 0;
for (blk = htx_get_head_blk(src); blk && count; blk = htx_get_next_blk(src, blk)) {
struct ist v;
enum htx_blk_type type;
uint32_t sz;
/* Ignore unused block */
type = htx_get_blk_type(blk);
if (type == HTX_BLK_UNUSED)
continue;
if ((flags & HTX_XFER_HDRS_ONLY) &&
type != HTX_BLK_REQ_SL && type != HTX_BLK_RES_SL &&
type != HTX_BLK_HDR && type != HTX_BLK_EOH)
break;
max = htx_get_max_blksz(dst, count);
if (!max)
break;
sz = htx_get_blksz(blk);
switch (type) {
case HTX_BLK_DATA:
v = htx_get_blk_value(src, blk);
if (v.len > max)
v.len = max;
v.len = htx_add_data(dst, v);
if (!v.len) {
dst_full = 1;
goto stop;
}
last_dstblk = htx_get_tail_blk(dst);
last_dstblk_sz = v.len;
count -= sizeof(*blk) + v.len;
ret += sizeof(*blk) + v.len;
if (v.len != sz) {
dst_full = 1;
goto stop;
}
break;
default:
if (sz > max) {
dst_full = 1;
goto stop;
}
last_dstblk = htx_add_blk(dst, type, sz);
if (!last_dstblk) {
dst_full = 1;
goto stop;
}
last_dstblk->info = blk->info;
htx_memcpy(htx_get_blk_ptr(dst, last_dstblk), htx_get_blk_ptr(src, blk), sz);
last_dstblk_sz = sz;
count -= sizeof(*blk) + sz;
ret += sizeof(*blk) + sz;
break;
}
last_dstblk = NULL; /* Reset last_dstblk because it was fully copied */
last_dstblk_sz = 0;
}
stop:
/* Here, if not NULL, <blk> points to the first not fully copied block in
* <src>. And <last_dstblk>, if defined, is the last not fully copied
* block in <dst>. So we have:
* - <blk> == NULL: everything was copied. <last_dstblk> must be NULL
* - <blk> != NULL && <last_dstblk> == NULL: partial copy but the last block was fully copied
* - <blk> != NULL && <last_dstblk> != NULL: partial copy and the last block was partially copied (DATA block only)
*/
if (!(flags & HTX_XFER_PARTIAL_HDRS_COPY)) {
/* Partial headers/trailers copy is not supported */
struct htx_blk *dstblk;
enum htx_blk_type type = HTX_BLK_UNUSED;
dstblk = htx_get_tail_blk(dst);
if (dstblk)
type = htx_get_blk_type(dstblk);
/* the last copied block is a start-line, a header or a trailer */
if (type == HTX_BLK_REQ_SL || type == HTX_BLK_RES_SL || type == HTX_BLK_HDR || type == HTX_BLK_TLR) {
/* <src> cannot have a partial headers or trailers part */
BUG_ON(blk == NULL);
/* Remove partial headers/trailers from <dst> and roll back on <src> so as not to remove them later */
while (type == HTX_BLK_REQ_SL || type == HTX_BLK_RES_SL || type == HTX_BLK_HDR || type == HTX_BLK_TLR) {
BUG_ON(type != htx_get_blk_type(blk));
ret -= sizeof(*blk) + htx_get_blksz(blk);
htx_remove_blk(dst, dstblk);
dstblk = htx_get_tail_blk(dst);
blk = htx_get_prev_blk(src, blk);
if (!dstblk)
break;
type = htx_get_blk_type(dstblk);
}
/* Report if the xfer was interrupted because <dst> was
* full but it was originally empty
*/
if (dst_full && htx_is_empty(dst))
src->flags |= HTX_FL_PARSING_ERROR;
}
}
if (!(flags & HTX_XFER_KEEP_SRC_BLKS)) {
/* True xfer performed, remove copied block from <src> */
struct htx_blk *blk2;
/* Remove all fully copied blocks */
if (!blk)
htx_drain(src, src->data);
else {
for (blk2 = htx_get_head_blk(src); blk2 && blk2 != blk; blk2 = htx_remove_blk(src, blk2));
/* If copy was stopped on a DATA block and the last destination
* block is not NULL, it means a partial copy was performed. So
* cut the source block accordingly
*/
if (last_dstblk && blk2 && htx_get_blk_type(blk2) == HTX_BLK_DATA) {
htx_cut_data_blk(src, blk2, last_dstblk_sz);
}
}
}
/* Everything was copied, transfer the terminal HTX flags too */
if (!blk) {
dst->flags |= (src->flags & (HTX_FL_EOM|HTX_FL_PARSING_ERROR|HTX_FL_PROCESSING_ERROR));
src->flags = 0;
}
return ret;
}
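The partial-DATA bookkeeping above is exactly what the "htx_xfer() to consume more data than expected" fix in this series is about: when a copy stops mid-block, the source must be trimmed by the saved size of the last copied part, not by re-reading the destination's tail block (where the data may have been merged into an earlier DATA block). A simplified model of that invariant, with toy byte-run "blocks" instead of real HTX blocks:

```c
#include <assert.h>
#include <stddef.h>

/* Each "block" is a run of bytes; <off> counts bytes already consumed. */
struct blk { size_t len; size_t off; };

/* Copy up to <budget> bytes from <src>, trimming the source as we go.
 * <last_part> receives the number of bytes taken from the last block when
 * that block was only partially copied (0 when it was fully copied),
 * i.e. the value saved on the source side, never derived from the
 * destination.
 */
static size_t xfer(struct blk *src, int nsrc, size_t budget, size_t *last_part)
{
	size_t copied = 0;
	int i;

	*last_part = 0;
	for (i = 0; i < nsrc && budget; i++) {
		size_t take = src[i].len - src[i].off;

		if (take > budget)
			take = budget;
		src[i].off += take;   /* trim the source by the copied size */
		copied += take;
		budget -= take;
		*last_part = (src[i].off < src[i].len) ? take : 0;
	}
	return copied;
}
```

Trimming by anything larger than `last_part` would "consume more data than expected", which on an unsigned block size underflows into a huge value; hence the crash mentioned in the commit message.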
/* Transfer HTX blocks from <src> to <dst>, stopping once the first block of the
* type <mark> is transferred (typically EOH or EOT) or when <count> bytes were
* moved (including payload and meta-data). It returns the number of bytes moved
* and the last HTX block inserted in <dst>.
*
* DEPRECATED
*/
struct htx_ret htx_xfer_blks(struct htx *dst, struct htx *src, uint32_t count,
enum htx_blk_type mark)
@ -1181,3 +1335,68 @@ int htx_append_msg(struct htx *dst, const struct htx *src)
htx_truncate(dst, offset);
return 0;
}
/* If possible, transfer HTX blocks from <src> to a small buffer. This function
* allocates the small buffer and makes <dst> point to it. If <dst> is not empty
* or if <src> contains too much data, NULL is returned. If the allocation
* fails, NULL is returned. Otherwise <dst> is returned. <flags> instructs how
* the transfer must be performed.
*/
struct buffer *__htx_xfer_to_small_buffer(struct buffer *dst, struct buffer *src, unsigned int flags)
{
struct htx *dst_htx;
struct htx *src_htx = htxbuf(src);
size_t sz = (sizeof(struct htx) + htx_used_space(src_htx));
if (dst->size || sz > global.tune.bufsize_small || !b_alloc_small(dst))
return NULL;
dst_htx = htx_from_buf(dst);
htx_xfer(dst_htx, src_htx, src_htx->size, flags);
htx_to_buf(dst_htx, dst);
return dst;
}
/* If possible, transfer HTX blocks from <src> to a large buffer. This function
* allocates the large buffer and makes <dst> point to it. If <dst> is not empty
* or if <src> contains too much data, NULL is returned. If the allocation
* fails, NULL is returned. Otherwise <dst> is returned. <flags> instructs how
* the transfer must be performed.
*/
struct buffer *__htx_xfer_to_large_buffer(struct buffer *dst, struct buffer *src, unsigned int flags)
{
struct htx *dst_htx;
struct htx *src_htx = htxbuf(src);
size_t sz = (sizeof(struct htx) + htx_used_space(src_htx));
if (dst->size || sz > global.tune.bufsize_large || !b_alloc_large(dst))
return NULL;
dst_htx = htx_from_buf(dst);
htx_xfer(dst_htx, src_htx, src_htx->size, flags);
htx_to_buf(dst_htx, dst);
return dst;
}
/* Move HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_small_buffer() */
struct buffer *htx_move_to_small_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_small_buffer(dst, src, HTX_XFER_DEFAULT);
}
/* Move HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_large_buffer() */
struct buffer *htx_move_to_large_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_large_buffer(dst, src, HTX_XFER_DEFAULT);
}
/* Copy HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_small_buffer() */
struct buffer *htx_copy_to_small_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_small_buffer(dst, src, HTX_XFER_KEEP_SRC_BLKS);
}
/* Copy HTX blocks from <src> to <dst>. Relies on __htx_xfer_to_large_buffer() */
struct buffer *htx_copy_to_large_buffer(struct buffer *dst, struct buffer *src)
{
return __htx_xfer_to_large_buffer(dst, src, HTX_XFER_KEEP_SRC_BLKS);
}
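The four public helpers above are thin wrappers: move vs copy is selected purely by passing HTX_XFER_DEFAULT or HTX_XFER_KEEP_SRC_BLKS to a shared worker. The same pattern on toy byte buffers (names and flags below are illustrative, not the HTX API):

```c
#include <assert.h>
#include <string.h>

#define XFER_DEFAULT  0x0
#define XFER_KEEP_SRC 0x1 /* copy: leave the source intact */

/* Shared worker: the move/copy entry points only differ by the flag they
 * pass, mirroring __htx_xfer_to_small_buffer()/__htx_xfer_to_large_buffer().
 */
static size_t do_xfer(char *dst, char *src, size_t len, unsigned int flags)
{
	memcpy(dst, src, len);
	if (!(flags & XFER_KEEP_SRC))
		memset(src, 0, len); /* "move": drain the source */
	return len;
}

static size_t move_buf(char *dst, char *src, size_t len)
{
	return do_xfer(dst, src, len, XFER_DEFAULT);
}

static size_t copy_buf(char *dst, char *src, size_t len)
{
	return do_xfer(dst, src, len, XFER_KEEP_SRC);
}
```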

src/log.c

@ -2913,6 +2913,7 @@ static inline void __send_log_set_metadata_sd(struct ist *metadata, char *sd, si
struct process_send_log_ctx {
struct session *sess;
struct stream *stream;
struct log_profile *profile;
struct log_orig origin;
};
@ -2942,6 +2943,10 @@ static inline void _process_send_log_override(struct process_send_log_ctx *ctx,
enum log_orig_id orig = (ctx) ? ctx->origin.id : LOG_ORIG_UNSPEC;
uint16_t orig_fl = (ctx) ? ctx->origin.flags : LOG_ORIG_FL_NONE;
/* ctx->profile gets priority over logger profile */
if (ctx && ctx->profile)
prof = ctx->profile;
BUG_ON(!prof);
if (!b_is_null(&prof->log_tag))
@ -3095,8 +3100,8 @@ static void process_send_log(struct process_send_log_ctx *ctx,
nblogger += 1;
/* logger may use a profile to override a few things */
if (unlikely(logger->prof))
/* caller or default logger may use a profile to override a few things */
if (unlikely(logger->prof || (ctx && ctx->profile)))
_process_send_log_override(ctx, logger, hdr, message, size, nblogger);
else
_process_send_log_final(logger, hdr, message, size, nblogger);
@ -5200,17 +5205,11 @@ out:
}
/*
* opportunistic log when at least the session is known to exist
* <s> may be NULL
*
* Will not log if the frontend has no log defined. By default it will
* try to emit the log as INFO, unless the stream already exists and
* set-log-level was used.
*/
void do_log(struct session *sess, struct stream *s, struct log_orig origin)
static void do_log_ctx(struct process_send_log_ctx *ctx)
{
struct process_send_log_ctx ctx;
struct stream *s = ctx->stream;
struct session *sess = ctx->sess;
struct log_orig origin = ctx->origin;
int size;
int sd_size = 0;
int level = -1;
@ -5242,11 +5241,27 @@ void do_log(struct session *sess, struct stream *s, struct log_orig origin)
size = sess_build_logline_orig(sess, s, logline, global.max_syslog_len, &sess->fe->logformat, origin);
__send_log(ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
}
/*
* opportunistic log when at least the session is known to exist
* <s> may be NULL
*
* Will not log if the frontend has no log defined. By default it will
* try to emit the log as INFO, unless the stream already exists and
* set-log-level was used.
*/
void do_log(struct session *sess, struct stream *s, struct log_orig origin)
{
struct process_send_log_ctx ctx;
ctx.origin = origin;
ctx.sess = sess;
ctx.stream = s;
__send_log(&ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
ctx.profile = NULL;
do_log_ctx(&ctx);
}
/*
@ -5297,6 +5312,7 @@ void strm_log(struct stream *s, struct log_orig origin)
ctx.origin = origin;
ctx.sess = sess;
ctx.stream = s;
ctx.profile = NULL;
__send_log(&ctx, &sess->fe->loggers, &sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
s->logs.logwait = 0;
@ -5364,6 +5380,7 @@ void _sess_log(struct session *sess, int embryonic)
ctx.origin = orig;
ctx.sess = sess;
ctx.stream = NULL;
ctx.profile = NULL;
__send_log(&ctx, &sess->fe->loggers,
&sess->fe->log_tag, level,
logline, size, logline_rfc5424, sd_size);
@ -6910,24 +6927,87 @@ static int px_parse_log_steps(char **args, int section_type, struct proxy *curpx
static enum act_return do_log_action(struct act_rule *rule, struct proxy *px,
struct session *sess, struct stream *s, int flags)
{
struct process_send_log_ctx ctx;
/* do_log_ctx() expects a valid session pointer */
BUG_ON(sess == NULL);
do_log(sess, s, log_orig(rule->arg.expr_int.value, LOG_ORIG_FL_NONE));
ctx.origin = log_orig(rule->arg.do_log.orig, LOG_ORIG_FL_NONE);
ctx.sess = sess;
ctx.stream = s;
ctx.profile = rule->arg.do_log.profile;
do_log_ctx(&ctx);
return ACT_RET_CONT;
}
/* Parse a "do_log" action. It doesn't take any argument
static int do_log_action_check(struct act_rule *rule, struct proxy *px, char **err)
{
if (rule->arg.do_log.profile_name) {
struct log_profile *prof;
prof = log_profile_find_by_name(rule->arg.do_log.profile_name);
if (!prof) {
memprintf(err, "do-log action: profile '%s' is invalid", rule->arg.do_log.profile_name);
ha_free(&rule->arg.do_log.profile_name);
return 0;
}
ha_free(&rule->arg.do_log.profile_name);
if (!log_profile_postcheck(px, prof, err)) {
memprintf(err, "do-log action on %s %s uses incompatible log-profile '%s': %s", proxy_type_str(px), px->id, prof->id, *err);
return 0;
}
rule->arg.do_log.profile = prof;
}
return 1; // success
}
static void do_log_action_release(struct act_rule *rule)
{
ha_free(&rule->arg.do_log.profile_name);
}
/* Parse a "do_log" action. It takes optional "log-profile" argument to
* specifically use a given log-profile when generating the log message
*
* May be used from places where per-context actions are usually registered
*/
enum act_parse_ret do_log_parse_act(enum log_orig_id id,
const char **args, int *orig_arg, struct proxy *px,
struct act_rule *rule, char **err)
{
int cur_arg = *orig_arg;
rule->action_ptr = do_log_action;
rule->action = ACT_CUSTOM;
rule->release_ptr = NULL;
rule->arg.expr_int.value = id;
rule->check_ptr = do_log_action_check;
rule->release_ptr = do_log_action_release;
rule->arg.do_log.orig = id;
while (*args[*orig_arg]) {
if (!strcmp(args[*orig_arg], "profile")) {
if (!*args[*orig_arg + 1]) {
memprintf(err,
"action '%s': 'profile' expects argument.",
args[cur_arg-1]);
return ACT_RET_PRS_ERR;
}
rule->arg.do_log.profile_name = strdup(args[*orig_arg + 1]);
if (!rule->arg.do_log.profile_name) {
memprintf(err,
"action '%s': memory error when setting 'profile'",
args[cur_arg-1]);
return ACT_RET_PRS_ERR;
}
*orig_arg += 2;
}
else
break;
}
return ACT_RET_PRS_OK;
}


@ -489,6 +489,9 @@ static int h2_be_glitches_threshold = 0; /* backend's max glitches
static int h2_fe_glitches_threshold = 0; /* frontend's max glitches: unlimited */
static uint h2_be_rxbuf = 0; /* backend's default total rxbuf (bytes) */
static uint h2_fe_rxbuf = 0; /* frontend's default total rxbuf (bytes) */
static unsigned int h2_be_max_frames_at_once = 0; /* backend value: 0=no limit */
static unsigned int h2_fe_max_frames_at_once = 0; /* frontend value: 0=no limit */
static unsigned int h2_fe_max_rst_at_once = 0; /* frontend value: 0=no limit */
static unsigned int h2_settings_max_concurrent_streams = 100; /* default value */
static unsigned int h2_be_settings_max_concurrent_streams = 0; /* backend value */
static unsigned int h2_fe_settings_max_concurrent_streams = 0; /* frontend value */
@ -4239,6 +4242,8 @@ static void h2_process_demux(struct h2c *h2c)
struct h2_fh hdr;
unsigned int padlen = 0;
int32_t old_iw = h2c->miw;
uint frames_budget = 0;
uint rst_budget = 0;
TRACE_ENTER(H2_EV_H2C_WAKE, h2c->conn);
@ -4327,6 +4332,14 @@ static void h2_process_demux(struct h2c *h2c)
}
}
if (h2c->flags & H2_CF_IS_BACK) {
frames_budget = h2_be_max_frames_at_once;
}
else {
frames_budget = h2_fe_max_frames_at_once;
rst_budget = h2_fe_max_rst_at_once;
}
/* process as many incoming frames as possible below */
while (1) {
int ret = 0;
@ -4629,6 +4642,29 @@ static void h2_process_demux(struct h2c *h2c)
h2c->st0 = H2_CS_FRAME_H;
}
}
/* If more frames remain in the buffer, let's first check if we've
* depleted the frames processing budget. Consuming the RST budget
* makes the tasklet go to TL_BULK so that it runs at a lower priority than
* other processing, since RST floods are often used by attacks, while other
* frame types just yield normally.
*/
if (b_data(&h2c->dbuf)) {
if (h2c->dft == H2_FT_RST_STREAM && (rst_budget && !--rst_budget)) {
/* we've consumed all RST frames permitted by
* the budget, we have to yield now.
*/
tasklet_wakeup(h2c->wait_event.tasklet, 0);
break;
}
else if ((frames_budget && !--frames_budget)) {
/* we've consumed all frames permitted by the
* budget, we have to yield now.
*/
tasklet_wakeup(h2c->wait_event.tasklet);
break;
}
}
}
if (h2c_update_strm_rx_win(h2c) &&
@ -7830,7 +7866,6 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
struct htx *h2s_htx = NULL;
struct htx *buf_htx = NULL;
struct buffer *rxbuf = NULL;
struct htx_ret htxret;
size_t ret = 0;
uint prev_h2c_flags = h2c->flags;
unsigned long long prev_body_len = h2s->body_len;
@ -7865,17 +7900,7 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
goto end;
}
htxret = htx_xfer_blks(buf_htx, h2s_htx, count, HTX_BLK_UNUSED);
count -= htxret.ret;
if (h2s_htx->flags & HTX_FL_PARSING_ERROR) {
buf_htx->flags |= HTX_FL_PARSING_ERROR;
if (htx_is_empty(buf_htx))
se_fl_set(h2s->sd, SE_FL_EOI);
}
else if (htx_is_empty(h2s_htx)) {
buf_htx->flags |= (h2s_htx->flags & HTX_FL_EOM);
}
count -= htx_xfer(buf_htx, h2s_htx, count, HTX_XFER_DEFAULT);
htx_to_buf(buf_htx, buf);
htx_to_buf(h2s_htx, rxbuf);
@ -7904,13 +7929,7 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
/* tell the stream layer whether there are data left or not */
if (h2s_rxbuf_cnt(h2s)) {
/* Note that parsing errors can also arrive here, we may need
* to propagate errors upstream otherwise no new activity will
* unblock them.
*/
se_fl_set(h2s->sd, SE_FL_RCV_MORE | SE_FL_WANT_ROOM);
if (h2s_htx && h2s_htx->flags & HTX_FL_PARSING_ERROR)
h2s_propagate_term_flags(h2c, h2s);
BUG_ON_HOT(!buf->data);
}
else {
@ -8800,6 +8819,30 @@ static int h2_parse_max_total_streams(char **args, int section_type, struct prox
return 0;
}
/* config parser for global "tune.h2.{be.,fe.,}max-{frames,rst}-at-once" */
static int h2_parse_max_frames_at_once(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
{
uint *vptr;
/* backend/frontend/default */
if (strcmp(args[0], "tune.h2.be.max-frames-at-once") == 0)
vptr = &h2_be_max_frames_at_once;
else if (strcmp(args[0], "tune.h2.fe.max-frames-at-once") == 0)
vptr = &h2_fe_max_frames_at_once;
else if (strcmp(args[0], "tune.h2.fe.max-rst-at-once") == 0)
vptr = &h2_fe_max_rst_at_once;
else
BUG_ON(1, "unhandled keyword");
if (too_many_args(1, args, err, NULL))
return -1;
*vptr = atoi(args[1]);
return 0;
}
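The parser above registers the same callback for all three keywords, looks up which one was used, and stores the raw integer, with 0 meaning no limit. A hedged global-section sketch (the values below are purely illustrative):

```
global
    # illustrative values; 0 (the default) means no limit
    tune.h2.fe.max-frames-at-once 128
    tune.h2.fe.max-rst-at-once    16
    tune.h2.be.max-frames-at-once 0
```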
/* config parser for global "tune.h2.max-frame-size" */
static int h2_parse_max_frame_size(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
@ -8898,10 +8941,13 @@ static struct cfg_kw_list cfg_kws = {ILH, {
{ CFG_GLOBAL, "tune.h2.be.glitches-threshold", h2_parse_glitches_threshold },
{ CFG_GLOBAL, "tune.h2.be.initial-window-size", h2_parse_initial_window_size },
{ CFG_GLOBAL, "tune.h2.be.max-concurrent-streams", h2_parse_max_concurrent_streams },
{ CFG_GLOBAL, "tune.h2.be.max-frames-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.be.rxbuf", h2_parse_rxbuf },
{ CFG_GLOBAL, "tune.h2.fe.glitches-threshold", h2_parse_glitches_threshold },
{ CFG_GLOBAL, "tune.h2.fe.initial-window-size", h2_parse_initial_window_size },
{ CFG_GLOBAL, "tune.h2.fe.max-concurrent-streams", h2_parse_max_concurrent_streams },
{ CFG_GLOBAL, "tune.h2.fe.max-frames-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.fe.max-rst-at-once", h2_parse_max_frames_at_once },
{ CFG_GLOBAL, "tune.h2.fe.max-total-streams", h2_parse_max_total_streams },
{ CFG_GLOBAL, "tune.h2.fe.rxbuf", h2_parse_rxbuf },
{ CFG_GLOBAL, "tune.h2.header-table-size", h2_parse_header_table_size },


@ -1676,11 +1676,17 @@ int proxy_finalize(struct proxy *px, int *err_code)
}
if (bind_conf->mux_proto) {
int is_quic;
if ((bind_conf->options & (BC_O_USE_SOCK_DGRAM | BC_O_USE_XPRT_STREAM)) == (BC_O_USE_SOCK_DGRAM | BC_O_USE_XPRT_STREAM))
is_quic = 1;
else
is_quic = 0;
/* it is possible that an incorrect mux was referenced
* due to the proxy's mode not being taken into account
* on first pass. Let's adjust it now.
*/
mux_ent = conn_get_best_mux_entry(bind_conf->mux_proto->token, PROTO_SIDE_FE, mode);
mux_ent = conn_get_best_mux_entry(bind_conf->mux_proto->token, PROTO_SIDE_FE, is_quic, mode);
if (!mux_ent || !isteq(mux_ent->token, bind_conf->mux_proto->token)) {
ha_alert("%s '%s' : MUX protocol '%.*s' is not usable for 'bind %s' at [%s:%d].\n",
@ -2672,6 +2678,8 @@ int proxy_finalize(struct proxy *px, int *err_code)
*err_code |= ERR_WARN;
}
*err_code |= proxy_check_http_errors(px);
if (px->mode != PR_MODE_HTTP && !(px->options & PR_O_HTTP_UPG)) {
int optnum;
@ -2877,7 +2885,7 @@ int proxy_finalize(struct proxy *px, int *err_code)
* due to the proxy's mode not being taken into account
* on first pass. Let's adjust it now.
*/
mux_ent = conn_get_best_mux_entry(newsrv->mux_proto->token, PROTO_SIDE_BE, mode);
mux_ent = conn_get_best_mux_entry(newsrv->mux_proto->token, PROTO_SIDE_BE, srv_is_quic(newsrv), mode);
if (!mux_ent || !isteq(mux_ent->token, newsrv->mux_proto->token)) {
ha_alert("%s '%s' : MUX protocol '%.*s' is not usable for server '%s' at [%s:%d].\n",
@ -4916,10 +4924,6 @@ static int cli_parse_add_backend(char **args, char *payload, struct appctx *appc
def_name, proxy_mode_str(defpx->mode)));
return 1;
}
if (!LIST_ISEMPTY(&defpx->conf.errors)) {
cli_dynerr(appctx, memprintf(&msg, "Dynamic backends cannot inherit from default proxy '%s' because it references HTTP errors.\n", def_name));
return 1;
}
thread_isolate();


@ -44,7 +44,7 @@ size_t qcs_http_rcv_buf(struct qcs *qcs, struct buffer *buf, size_t count,
goto end;
}
htx_xfer_blks(cs_htx, qcs_htx, count, HTX_BLK_UNUSED);
htx_xfer(cs_htx, qcs_htx, count, HTX_XFER_DEFAULT);
BUG_ON(qcs_htx->flags & HTX_FL_PARSING_ERROR);
/* Copy EOM from src to dst buffer if all data copied. */


@ -61,10 +61,10 @@
static uint64_t qpack_get_varint(const unsigned char **buf, uint64_t *len_in, int b)
{
uint64_t ret = 0;
int len = *len_in;
uint64_t len = *len_in;
const uint8_t *raw = *buf;
uint64_t v, max = ~0;
uint8_t shift = 0;
uint64_t v, limit = (1ULL << 62) - 1;
int shift = 0;
if (len == 0)
goto too_short;
@ -77,24 +77,26 @@ static uint64_t qpack_get_varint(const unsigned char **buf, uint64_t *len_in, in
do {
if (!len)
goto too_short;
v = *raw++;
len--;
if (v & 127) { // make UBSan happy
if ((v & 127) > max)
goto too_large;
ret += (v & 127) << shift;
}
max >>= 7;
/* This check is sufficient to prevent any overflow
* and implicitly limits shift to 63.
*/
if ((v & 127) > (limit - ret) >> shift)
goto too_large;
ret += (v & 127) << shift;
shift += 7;
} while (v & 128);
end:
end:
*buf = raw;
*len_in = len;
return ret;
too_large:
too_short:
too_large:
too_short:
*len_in = (uint64_t)-1;
return 0;
}
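The rewritten loop above bounds the accumulated value against the QUIC varint limit of 2^62 - 1 with a single comparison, which also implicitly caps the shift. A standalone sketch of the same HPACK/QPACK prefix-integer scheme with that guard (simplified and not HAProxy's exact function; the explicit shift check is an extra safety added here):

```c
#include <stdint.h>
#include <stddef.h>

#define QPACK_INT_LIMIT ((1ULL << 62) - 1) /* QUIC's maximum varint value */

/* Decode an HPACK/QPACK-style prefix integer with an <n>-bit prefix.
 * A single comparison rejects any continuation byte that would push the
 * result past QPACK_INT_LIMIT. Returns UINT64_MAX on truncation or
 * overflow; advances *buf and *len only on success.
 */
static uint64_t prefix_int_decode(const uint8_t **buf, size_t *len, int n)
{
    const uint8_t *p = *buf;
    size_t left = *len;
    uint64_t ret;
    int shift = 0;
    uint8_t b;

    if (!left)
        return UINT64_MAX;
    ret = *p++ & ((1u << n) - 1);
    left--;
    if (ret != (uint64_t)((1u << n) - 1))
        goto done; /* value fit entirely in the n-bit prefix */
    do {
        if (!left)
            return UINT64_MAX; /* truncated input */
        if (shift > 62)
            return UINT64_MAX; /* would exceed the 62-bit limit */
        b = *p++;
        left--;
        /* one check prevents both shift and addition overflow */
        if ((uint64_t)(b & 127) > (QPACK_INT_LIMIT - ret) >> shift)
            return UINT64_MAX;
        ret += (uint64_t)(b & 127) << shift;
        shift += 7;
    } while (b & 128);
done:
    *buf = p;
    *len = left;
    return ret;
}
```

With a 5-bit prefix, `0x0a` decodes to 10 directly, while `0x1f 0x9a 0x0a` decodes to 1337 via two continuation bytes, matching the classic RFC 7541 example.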
@ -402,7 +404,10 @@ int qpack_decode_fs(const unsigned char *raw, uint64_t len, struct buffer *tmp,
n = efl_type & 0x20;
static_tbl = efl_type & 0x10;
index = qpack_get_varint(&raw, &len, 4);
if (len == (uint64_t)-1) {
/* There must be at least one byte available for <h> value after this
* decoding before the next call to qpack_get_varint().
*/
if ((int64_t)len <= 0) {
qpack_debug_printf(stderr, "##ERR@%d\n", __LINE__);
ret = -QPACK_RET_TRUNCATED;
goto out;
@ -474,7 +479,10 @@ int qpack_decode_fs(const unsigned char *raw, uint64_t len, struct buffer *tmp,
n = *raw & 0x10;
hname = *raw & 0x08;
name_len = qpack_get_varint(&raw, &len, 3);
if (len == (uint64_t)-1 || len < name_len) {
/* There must be at least one byte available for <hvalue> after this
* decoding before the next call to qpack_get_varint().
*/
if ((int64_t)len < (int64_t)name_len + 1) {
qpack_debug_printf(stderr, "##ERR@%d\n", __LINE__);
ret = -QPACK_RET_TRUNCATED;
goto out;


@ -16,8 +16,6 @@ DECLARE_STATIC_TYPED_POOL(pool_head_quic_stream_desc, "qc_stream_desc", struct q
DECLARE_STATIC_TYPED_POOL(pool_head_quic_stream_buf, "qc_stream_buf", struct qc_stream_buf);
DECLARE_STATIC_TYPED_POOL(pool_head_quic_stream_ack, "qc_stream_ack", struct qc_stream_ack);
static struct pool_head *pool_head_sbuf;
static void qc_stream_buf_free(struct qc_stream_desc *stream,
struct qc_stream_buf **stream_buf)
{
@ -39,13 +37,10 @@ static void qc_stream_buf_free(struct qc_stream_desc *stream,
room = b_data(buf);
}
if ((*stream_buf)->sbuf) {
pool_free(pool_head_sbuf, buf->area);
}
else {
b_free(buf);
if (!(*stream_buf)->sbuf) {
bdata_ctr_del(&stream->data, b_data(buf));
bdata_ctr_bdec(&stream->data);
b_free(buf);
offer_buffers(NULL, 1);
}
pool_free(pool_head_quic_stream_buf, *stream_buf);
@ -412,10 +407,7 @@ void qc_stream_desc_free(struct qc_stream_desc *stream, int closing)
pool_free(pool_head_quic_stream_ack, ack);
}
if (buf->sbuf)
pool_free(pool_head_sbuf, buf->buf.area);
else
b_free(&buf->buf);
b_free(&buf->buf);
eb64_delete(&buf->offset_node);
pool_free(pool_head_quic_stream_buf, buf);
@ -461,7 +453,7 @@ struct buffer *qc_stream_buf_alloc(struct qc_stream_desc *stream,
stream->buf->buf = BUF_NULL;
stream->buf->offset_node.key = offset;
if (!small) {
if (!small || !global.tune.bufsize_small) {
stream->buf->sbuf = 0;
if (!b_alloc(&stream->buf->buf, DB_MUX_TX)) {
pool_free(pool_head_quic_stream_buf, stream->buf);
@ -470,16 +462,12 @@ struct buffer *qc_stream_buf_alloc(struct qc_stream_desc *stream,
}
}
else {
char *area;
if (!(area = pool_alloc(pool_head_sbuf))) {
if (!b_alloc_small(&stream->buf->buf)) {
pool_free(pool_head_quic_stream_buf, stream->buf);
stream->buf = NULL;
return NULL;
}
stream->buf->sbuf = 1;
stream->buf->buf = b_make(area, global.tune.bufsize_small, 0, 0);
}
eb64_insert(&stream->buf_tree, &stream->buf->offset_node);
@ -502,7 +490,7 @@ struct buffer *qc_stream_buf_realloc(struct qc_stream_desc *stream)
BUG_ON(b_data(&stream->buf->buf));
/* Release buffer */
pool_free(pool_head_sbuf, stream->buf->buf.area);
b_free(&stream->buf->buf);
stream->buf->buf = BUF_NULL;
stream->buf->sbuf = 0;
@ -536,23 +524,3 @@ void qc_stream_buf_release(struct qc_stream_desc *stream)
if (stream->notify_room && room)
stream->notify_room(stream, room);
}
static int create_sbuf_pool(void)
{
if (global.tune.bufsize_small > global.tune.bufsize) {
ha_warning("invalid small buffer size %d bytes, which is greater than the default bufsize of %d bytes.\n",
global.tune.bufsize_small, global.tune.bufsize);
return ERR_FATAL|ERR_ABORT;
}
pool_head_sbuf = create_pool("sbuf", global.tune.bufsize_small,
MEM_F_SHARED|MEM_F_EXACT);
if (!pool_head_sbuf) {
ha_warning("error on small buffer pool allocation.\n");
return ERR_FATAL|ERR_ABORT;
}
return ERR_NONE;
}
REGISTER_POST_CHECK(create_sbuf_pool);


@ -37,6 +37,7 @@
#include <haproxy/namespace.h>
#include <haproxy/port_range.h>
#include <haproxy/protocol.h>
#include <haproxy/proto_tcp.h>
#include <haproxy/proxy.h>
#include <haproxy/queue.h>
#include <haproxy/quic_tp.h>
@ -2938,7 +2939,9 @@ void srv_settings_cpy(struct server *srv, const struct server *src, int srv_tmpl
}
srv->use_ssl = src->use_ssl;
srv->check.addr = src->check.addr;
srv->check.proto = src->check.proto;
srv->agent.addr = src->agent.addr;
srv->agent.proto = src->agent.proto;
srv->check.use_ssl = src->check.use_ssl;
srv->check.port = src->check.port;
if (src->check.sni != NULL)
@ -4635,6 +4638,11 @@ out:
set_srv_agent_addr(s, &sk);
if (port)
set_srv_agent_port(s, new_port);
/* Agent currently only uses TCP */
if (sk.ss_family == AF_INET)
s->agent.proto = &proto_tcpv4;
else
s->agent.proto = &proto_tcpv6;
}
return NULL;
}
@ -4646,7 +4654,8 @@ out:
*/
const char *srv_update_check_addr_port(struct server *s, const char *addr, const char *port)
{
struct sockaddr_storage sk;
struct sockaddr_storage *sk = NULL;
struct protocol *proto = NULL;
struct buffer *msg;
int new_port;
@ -4658,8 +4667,8 @@ const char *srv_update_check_addr_port(struct server *s, const char *addr, const
goto out;
}
if (addr) {
memset(&sk, 0, sizeof(struct sockaddr_storage));
if (str2ip2(addr, &sk, 0) == NULL) {
sk = str2sa_range(addr, NULL, NULL, NULL, NULL, &proto, NULL, NULL, NULL, NULL, NULL, 0);
if (sk == NULL) {
chunk_appendf(msg, "invalid addr '%s'", addr);
goto out;
}
@ -4683,8 +4692,10 @@ out:
if (msg->data)
return msg->area;
else {
if (addr)
s->check.addr = sk;
if (sk) {
s->check.addr = *sk;
s->check.proto = proto;
}
if (port)
s->check.port = new_port;
@ -6230,7 +6241,7 @@ static int cli_parse_add_server(char **args, char *payload, struct appctx *appct
int proto_mode = conn_pr_mode_to_proto_mode(be->mode);
const struct mux_proto_list *mux_ent;
mux_ent = conn_get_best_mux_entry(srv->mux_proto->token, PROTO_SIDE_BE, proto_mode);
mux_ent = conn_get_best_mux_entry(srv->mux_proto->token, PROTO_SIDE_BE, srv_is_quic(srv), proto_mode);
if (!mux_ent || !isteq(mux_ent->token, srv->mux_proto->token)) {
ha_alert("MUX protocol is not usable for server.\n");
@ -7612,7 +7623,7 @@ static void srv_close_idle_conns(struct server *srv)
REGISTER_SERVER_DEINIT(srv_close_idle_conns);
/* config parser for global "tune.idle-pool.shared", accepts "on" or "off" */
/* config parser for global "tune.idle-pool.shared", accepts "full", "on" or "off" */
static int cfg_parse_idle_pool_shared(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
@ -7620,12 +7631,17 @@ static int cfg_parse_idle_pool_shared(char **args, int section_type, struct prox
if (too_many_args(1, args, err, NULL))
return -1;
if (strcmp(args[1], "on") == 0)
if (strcmp(args[1], "full") == 0) {
global.tune.options |= GTUNE_IDLE_POOL_SHARED;
else if (strcmp(args[1], "off") == 0)
global.tune.tg_takeover = FULL_THREADGROUP_TAKEOVER;
} else if (strcmp(args[1], "on") == 0) {
global.tune.options |= GTUNE_IDLE_POOL_SHARED;
global.tune.tg_takeover = RESTRICTED_THREADGROUP_TAKEOVER;
} else if (strcmp(args[1], "off") == 0) {
global.tune.options &= ~GTUNE_IDLE_POOL_SHARED;
else {
memprintf(err, "'%s' expects either 'on' or 'off' but got '%s'.", args[0], args[1]);
global.tune.tg_takeover = NO_THREADGROUP_TAKEOVER;
} else {
memprintf(err, "'%s' expects 'full', 'on' or 'off' but got '%s'.", args[0], args[1]);
return -1;
}
return 0;
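Per the parser above, "full" enables idle-connection sharing and unconditional takeover from other thread groups, "on" keeps cross-group takeover restricted to cases where refusing would make the connection fail, and "off" disables sharing entirely. A hedged global-section sketch:

```
global
    # "full": share idle connections and always allow takeover from
    # other thread groups; "on": share, restricted cross-group takeover;
    # "off": no sharing at all
    tune.idle-pool.shared full
```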


@ -162,8 +162,8 @@ struct connection *sock_accept_conn(struct listener *l, int *status)
case ENFILE:
if (p)
send_log(p, LOG_EMERG,
"Proxy %s reached system FD limit (maxsock=%d). Please check system tunables.\n",
p->id, global.maxsock);
"Proxy %s reached system FD limit (actconn=%d). Please check system tunables.\n",
p->id, actconn);
ret = CO_AC_PAUSE;
break;
@ -179,8 +179,8 @@ struct connection *sock_accept_conn(struct listener *l, int *status)
case ENOMEM:
if (p)
send_log(p, LOG_EMERG,
"Proxy %s reached system memory limit (maxsock=%d). Please check system tunables.\n",
p->id, global.maxsock);
"Proxy %s reached system memory limit (actconn=%d). Please check system tunables.\n",
p->id, actconn);
ret = CO_AC_PAUSE;
break;


@ -1190,7 +1190,7 @@ int sc_conn_recv(struct stconn *sc)
* SE_FL_RCV_MORE on the SC if more space is needed.
*/
max = channel_recv_max(ic);
if ((ic->flags & CF_WROTE_DATA) && b_is_large(sc_ib(sc)))
if (b_is_small(sc_ib(sc)) || ((ic->flags & CF_WROTE_DATA) && b_is_large(sc_ib(sc))))
max = 0;
ret = CALL_MUX_WITH_RET(conn->mux, rcv_buf(sc, &ic->buf, max, cur_flags));
@ -1496,16 +1496,21 @@ int sc_conn_send(struct stconn *sc)
if (s->txn->req.msg_state != HTTP_MSG_DONE || b_is_large(&oc->buf))
s->txn->flags &= ~TX_L7_RETRY;
else {
if (b_alloc(&s->txn->l7_buffer, DB_UNLIKELY) == NULL)
s->txn->flags &= ~TX_L7_RETRY;
else {
memcpy(b_orig(&s->txn->l7_buffer),
b_orig(&oc->buf),
b_size(&oc->buf));
s->txn->l7_buffer.head = co_data(oc);
b_add(&s->txn->l7_buffer, co_data(oc));
if (!(s->be->options2 & PR_O2_USE_SBUF_L7_RETRY) ||
!htx_copy_to_small_buffer(&s->txn->l7_buffer, &oc->buf)) {
if (b_alloc(&s->txn->l7_buffer, DB_UNLIKELY) == NULL)
s->txn->flags &= ~TX_L7_RETRY;
else {
memcpy(b_orig(&s->txn->l7_buffer),
b_orig(&oc->buf),
b_size(&oc->buf));
}
}
if (s->txn->flags & TX_L7_RETRY) {
s->txn->l7_buffer.head = co_data(oc);
b_set_data(&s->txn->l7_buffer, co_data(oc));
}
}
}
@ -1861,7 +1866,7 @@ int sc_applet_recv(struct stconn *sc)
* SE_FL_RCV_MORE on the SC if more space is needed.
*/
max = channel_recv_max(ic);
if ((ic->flags & CF_WROTE_DATA) && b_is_large(sc_ib(sc)))
if (b_is_small(sc_ib(sc)) || ((ic->flags & CF_WROTE_DATA) && b_is_large(sc_ib(sc))))
max = 0;
ret = appctx_rcv_buf(sc, &ic->buf, max, flags);
if (sc_ep_test(sc, SE_FL_WANT_ROOM)) {


@ -2511,6 +2511,29 @@ struct task *process_stream(struct task *t, void *context, unsigned int state)
srv = objt_server(s->target);
if (scb->state == SC_ST_ASS && srv && srv->rdr_len && (s->flags & SF_REDIRECTABLE))
http_perform_server_redirect(s, scb);
if (unlikely((s->be->options2 & PR_O2_USE_SBUF_QUEUE) && scb->state == SC_ST_QUE)) {
struct buffer sbuf = BUF_NULL;
if (IS_HTX_STRM(s)) {
if (!htx_move_to_small_buffer(&sbuf, &req->buf))
break;
}
else {
if (b_size(&req->buf) == global.tune.bufsize_small ||
b_data(&req->buf) > global.tune.bufsize_small)
break;
if (!b_alloc_small(&sbuf))
break;
b_xfer(&sbuf, &req->buf, b_data(&req->buf));
}
b_free(&req->buf);
offer_buffers(s, 1);
req->buf = sbuf;
DBG_TRACE_DEVEL("request moved to a small buffer", STRM_EV_STRM_PROC, s);
}
} while (scb->state == SC_ST_ASS);
}
@ -3266,7 +3289,7 @@ static int check_tcp_switch_stream_mode(struct act_rule *rule, struct proxy *px,
px->options |= PR_O_HTTP_UPG;
if (mux_proto) {
mux_ent = conn_get_best_mux_entry(mux_proto->token, PROTO_SIDE_FE, mode);
mux_ent = conn_get_best_mux_entry(mux_proto->token, PROTO_SIDE_FE, 0, mode);
if (!mux_ent || !isteq(mux_ent->token, mux_proto->token)) {
memprintf(err, "MUX protocol '%.*s' is not compatible with the selected mode",
(int)mux_proto->token.len, mux_proto->token.ptr);
@ -3274,7 +3297,7 @@ static int check_tcp_switch_stream_mode(struct act_rule *rule, struct proxy *px,
}
}
else {
mux_ent = conn_get_best_mux_entry(IST_NULL, PROTO_SIDE_FE, mode);
mux_ent = conn_get_best_mux_entry(IST_NULL, PROTO_SIDE_FE, 0, mode);
if (!mux_ent) {
memprintf(err, "Unable to find compatible MUX protocol with the selected mode");
return 0;


@ -147,8 +147,7 @@ void __tasklet_wakeup_on(struct tasklet *tl, int thr)
LIST_APPEND(&th_ctx->tasklets[TL_BULK], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_BULK;
}
else if ((struct task *)tl == th_ctx->current) {
_HA_ATOMIC_OR(&tl->state, TASK_SELF_WAKING);
else if ((struct task *)tl == th_ctx->current && !(tl->state & TASK_WOKEN_ANY)) {
LIST_APPEND(&th_ctx->tasklets[TL_BULK], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_BULK;
}
@ -157,8 +156,8 @@ void __tasklet_wakeup_on(struct tasklet *tl, int thr)
th_ctx->tl_class_mask |= 1 << TL_URGENT;
}
else {
LIST_APPEND(&th_ctx->tasklets[th_ctx->current_queue], &tl->list);
th_ctx->tl_class_mask |= 1 << th_ctx->current_queue;
LIST_APPEND(&th_ctx->tasklets[TL_NORMAL], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_NORMAL;
}
_HA_ATOMIC_INC(&th_ctx->rq_total);
} else {
@ -186,8 +185,7 @@ struct list *__tasklet_wakeup_after(struct list *head, struct tasklet *tl)
LIST_INSERT(&th_ctx->tasklets[TL_BULK], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_BULK;
}
else if ((struct task *)tl == th_ctx->current) {
_HA_ATOMIC_OR(&tl->state, TASK_SELF_WAKING);
else if ((struct task *)tl == th_ctx->current && !(tl->state & TASK_WOKEN_ANY)) {
LIST_INSERT(&th_ctx->tasklets[TL_BULK], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_BULK;
}
@ -196,8 +194,8 @@ struct list *__tasklet_wakeup_after(struct list *head, struct tasklet *tl)
th_ctx->tl_class_mask |= 1 << TL_URGENT;
}
else {
LIST_INSERT(&th_ctx->tasklets[th_ctx->current_queue], &tl->list);
th_ctx->tl_class_mask |= 1 << th_ctx->current_queue;
LIST_INSERT(&th_ctx->tasklets[TL_NORMAL], &tl->list);
th_ctx->tl_class_mask |= 1 << TL_NORMAL;
}
}
else {
@ -563,14 +561,22 @@ unsigned int run_tasks_from_lists(unsigned int budgets[])
continue;
}
budgets[queue]--;
activity[tid].ctxsw++;
t = (struct task *)LIST_ELEM(tl_queues[queue].n, struct tasklet *, list);
/* check if this task has already run during this loop */
if ((uint16_t)t->last_run == (uint16_t)activity[tid].loops) {
budget_mask &= ~(1 << queue);
queue++;
continue;
}
t->last_run = activity[tid].loops;
ctx = t->context;
process = t->process;
t->calls++;
budgets[queue]--;
activity[tid].ctxsw++;
th_ctx->lock_wait_total = 0;
th_ctx->mem_wait_total = 0;
th_ctx->locked_total = 0;
@ -723,8 +729,8 @@ void process_runnable_tasks()
struct task *t;
const unsigned int default_weights[TL_CLASSES] = {
[TL_URGENT] = 64, // ~50% of CPU bandwidth for I/O
[TL_NORMAL] = 48, // ~37% of CPU bandwidth for tasks
[TL_BULK] = 16, // ~13% of CPU bandwidth for self-wakers
[TL_NORMAL] = 60, // ~47% of CPU bandwidth for tasks
[TL_BULK] = 4, // ~3% of CPU bandwidth for self-wakers
[TL_HEAVY] = 1, // never more than 1 heavy task at once
};
unsigned int max[TL_CLASSES]; // max to be run per class
@ -734,7 +740,7 @@ void process_runnable_tasks()
int max_processed;
int lpicked, gpicked;
int heavy_queued = 0;
int budget;
int budget, done;
_HA_ATOMIC_AND(&th_ctx->flags, ~TH_FL_STUCK); // this thread is still running
@ -904,10 +910,11 @@ void process_runnable_tasks()
}
/* execute tasklets in each queue */
max_processed -= run_tasks_from_lists(max);
done = run_tasks_from_lists(max);
max_processed -= done;
/* some tasks may have woken other ones up */
if (max_processed > 0 && thread_has_tasks())
if (done && max_processed > 0 && thread_has_tasks())
goto not_done_yet;
leave:


@ -1257,7 +1257,8 @@ static int tcp_parse_tcp_rep(char **args, int section_type, struct proxy *curpx,
}
/* the following function directly emits the warning */
warnif_misplaced_tcp_res_cont(curpx, file, line, args[0], args[1]);
if (warnif_misplaced_tcp_res_cont(curpx, file, line, args[0], args[1]))
warn++;
LIST_APPEND(&curpx->tcp_rep.inspect_rules, &rule->list);
}
else {
@ -1377,7 +1378,8 @@ static int tcp_parse_tcp_req(char **args, int section_type, struct proxy *curpx,
}
/* the following function directly emits the warning */
warnif_misplaced_tcp_req_cont(curpx, file, line, args[0], args[1]);
if (warnif_misplaced_tcp_req_cont(curpx, file, line, args[0], args[1]))
warn++;
LIST_APPEND(&curpx->tcp_req.inspect_rules, &rule->list);
}
else if (strcmp(args[1], "connection") == 0) {
@ -1422,7 +1424,8 @@ static int tcp_parse_tcp_req(char **args, int section_type, struct proxy *curpx,
}
/* the following function directly emits the warning */
warnif_misplaced_tcp_req_conn(curpx, file, line, args[0], args[1]);
if (warnif_misplaced_tcp_req_conn(curpx, file, line, args[0], args[1]))
warn++;
LIST_APPEND(&curpx->tcp_req.l4_rules, &rule->list);
}
else if (strcmp(args[1], "session") == 0) {
@ -1466,7 +1469,8 @@ static int tcp_parse_tcp_req(char **args, int section_type, struct proxy *curpx,
}
/* the following function directly emits the warning */
warnif_misplaced_tcp_req_sess(curpx, file, line, args[0], args[1]);
if (warnif_misplaced_tcp_req_sess(curpx, file, line, args[0], args[1]))
warn++;
LIST_APPEND(&curpx->tcp_req.l5_rules, &rule->list);
}
else {


@ -40,6 +40,7 @@
#include <haproxy/check.h>
#include <haproxy/chunk.h>
#include <haproxy/connection.h>
#include <haproxy/dynbuf.h>
#include <haproxy/errors.h>
#include <haproxy/global.h>
#include <haproxy/h1.h>
@ -1427,9 +1428,15 @@ enum tcpcheck_eval_ret tcpcheck_eval_connect(struct check *check, struct tcpchec
check->mux_proto = NULL;
}
else {
proto = s ?
protocol_lookup(conn->dst->ss_family, s->addr_type.proto_type, s->alt_proto) :
protocol_lookup(conn->dst->ss_family, PROTO_TYPE_STREAM, 0);
if (check->proto)
proto = check->proto;
else {
if (is_addr(&connect->addr))
proto = protocol_lookup(conn->dst->ss_family, PROTO_TYPE_STREAM, 0);
else
proto = protocol_lookup(conn->dst->ss_family, s->addr_type.proto_type, s->alt_proto);
}
}
port = 0;
@ -1659,7 +1666,8 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
goto out;
}
if (!check_get_buf(check, &check->bo)) {
retry:
if (!check_get_buf(check, &check->bo, (check->state & CHK_ST_USE_SMALL_BUFF))) {
check->state |= CHK_ST_OUT_ALLOC;
ret = TCPCHK_EVAL_WAIT;
TRACE_STATE("waiting for output buffer allocation", CHK_EV_TCPCHK_SND|CHK_EV_TX_DATA|CHK_EV_TX_BLK, check);
@ -1679,6 +1687,13 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
case TCPCHK_SEND_STRING:
case TCPCHK_SEND_BINARY:
if (istlen(send->data) >= b_size(&check->bo)) {
if (b_is_small(&check->bo)) {
check->state &= ~CHK_ST_USE_SMALL_BUFF;
check_release_buf(check, &check->bo);
TRACE_DEVEL("Send failed with small buffer, retrying with the default one", CHK_EV_TCPCHK_SND|CHK_EV_TX_DATA, check);
goto retry;
}
chunk_printf(&trash, "tcp-check send : string too large (%u) for buffer size (%u) at step %d",
(unsigned int)istlen(send->data), (unsigned int)b_size(&check->bo),
tcpcheck_get_step_id(check, rule));
@ -1689,6 +1704,7 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
b_putist(&check->bo, send->data);
break;
case TCPCHK_SEND_STRING_LF:
BUG_ON(check->state & CHK_ST_USE_SMALL_BUFF);
check->bo.data = sess_build_logline(check->sess, NULL, b_orig(&check->bo), b_size(&check->bo), &rule->send.fmt);
if (!b_data(&check->bo))
goto out;
@ -1696,7 +1712,8 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
case TCPCHK_SEND_BINARY_LF: {
int len = b_size(&check->bo);
tmp = alloc_trash_chunk();
BUG_ON(check->state & CHK_ST_USE_SMALL_BUFF);
tmp = alloc_trash_chunk_sz(len);
if (!tmp)
goto error_lf;
tmp->data = sess_build_logline(check->sess, NULL, b_orig(tmp), b_size(tmp), &rule->send.fmt);
@ -1713,7 +1730,7 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
struct ist meth, uri, vsn, clen, body;
unsigned int slflags = 0;
tmp = alloc_trash_chunk();
tmp = alloc_trash_chunk_sz(b_size(&check->bo));
if (!tmp)
goto error_htx;
@ -1838,6 +1855,12 @@ enum tcpcheck_eval_ret tcpcheck_eval_send(struct check *check, struct tcpcheck_r
htx_reset(htx);
htx_to_buf(htx, &check->bo);
}
if (b_is_small(&check->bo)) {
check->state &= ~CHK_ST_USE_SMALL_BUFF;
check_release_buf(check, &check->bo);
TRACE_DEVEL("Send failed with small buffer, retrying with the default one", CHK_EV_TCPCHK_SND|CHK_EV_TX_DATA, check);
goto retry;
}
chunk_printf(&trash, "tcp-check send : failed to build HTTP request at step %d",
tcpcheck_get_step_id(check, rule));
TRACE_ERROR("failed to build HTTP request", CHK_EV_TCPCHK_SND|CHK_EV_TX_DATA|CHK_EV_TCPCHK_ERR, check);
@ -1884,7 +1907,7 @@ enum tcpcheck_eval_ret tcpcheck_eval_recv(struct check *check, struct tcpcheck_r
goto wait_more_data;
}
if (!check_get_buf(check, &check->bi)) {
if (!check_get_buf(check, &check->bi, 0)) {
check->state |= CHK_ST_IN_ALLOC;
TRACE_STATE("waiting for input buffer allocation", CHK_EV_RX_DATA|CHK_EV_RX_BLK, check);
goto wait_more_data;
@ -4067,6 +4090,8 @@ static int check_proxy_tcpcheck(struct proxy *px)
}
}
/* Allow small buffer use by default. All send rules must be compatible */
px->tcpcheck_rules.flags |= (global.tune.bufsize_small ? TCPCHK_RULES_MAY_USE_SBUF : 0);
/* Remove all comment rules. To do so, when such a rule is found, the
 * comment is assigned to the following rule(s).
@ -4096,6 +4121,25 @@ static int check_proxy_tcpcheck(struct proxy *px)
ha_free(&comment);
break;
case TCPCHK_ACT_SEND:
/* Disable small buffer use for rules using LF strings or data that is too large */
switch (chk->send.type) {
case TCPCHK_SEND_STRING:
case TCPCHK_SEND_BINARY:
if (istlen(chk->send.data) >= global.tune.bufsize_small)
px->tcpcheck_rules.flags &= ~TCPCHK_RULES_MAY_USE_SBUF;
break;
case TCPCHK_SEND_STRING_LF:
case TCPCHK_SEND_BINARY_LF:
px->tcpcheck_rules.flags &= ~TCPCHK_RULES_MAY_USE_SBUF;
break;
case TCPCHK_SEND_HTTP:
if ((chk->send.http.flags & TCPCHK_SND_HTTP_FL_BODY_FMT) ||
(istlen(chk->send.http.body) >= global.tune.bufsize_small))
px->tcpcheck_rules.flags &= ~TCPCHK_RULES_MAY_USE_SBUF;
default:
break;
}
__fallthrough;
case TCPCHK_ACT_EXPECT:
if (!chk->comment && comment)
chk->comment = strdup(comment);


@ -6057,6 +6057,9 @@ static int dl_collect_libs_cb(struct dl_phdr_info *info, size_t size, void *data
/* else it's a VDSO or similar and we're not interested */
goto leave;
if (!fname)
goto leave;
load_file_into_tar(&ctx->storage, &ctx->size, ctx->prefix, fname, NULL, "haproxy-libs-dump");
/* try to load equivalent debug symbols for absolute paths */