Released version 3.3-dev10 with the following main changes :
- BUG/MEDIUM: connections: Only avoid creating a mux if we have one
- BUG/MINOR: sink: retry attempt for sft server may never occur
- CLEANUP: mjson: remove MJSON_ENABLE_RPC code
- CLEANUP: mjson: remove MJSON_ENABLE_PRINT code
- CLEANUP: mjson: remove MJSON_ENABLE_NEXT code
- CLEANUP: mjson: remove MJSON_ENABLE_BASE64 code
- CLEANUP: mjson: remove unused defines and math.h
- BUG/MINOR: http-ana: Reset analyse_exp date after 'wait-for-body' action
- CLEANUP: mjson: remove unused defines from mjson.h
- BUG/MINOR: acme: avoid overflow when diff > notAfter
- DEV: patchbot: use git reset+checkout instead of pull
- MINOR: proxy: explicitly permit abortonclose on frontends and clarify the doc
- REGTESTS: fix h2_desync_attacks to wait for the response
- REGTESTS: http-messaging: fix the websocket and upgrade tests not to close early
- MINOR: proxy: only check abortonclose through a dedicated function
- MAJOR: proxy: enable abortonclose by default on HTTP proxies
- MINOR: proxy: introduce proxy_abrt_close_def() to pass the desired default
- MAJOR: proxy: enable abortonclose by default on TLS listeners
- MINOR: h3/qmux: Set QC_SF_UNKNOWN_PL_LENGTH flag on QCS when headers are sent
- MINOR: stconn: Add two fields in sedesc to replace the HTX extra value
- MINOR: h1-htx: Increment body len when parsing a payload with no xfer length
- MINOR: mux-h1: Set known input payload length during demux
- MINOR: mux-fcgi: Set known input payload length during demux
- MINOR: mux-h2: Use <body_len> H2S field for payload without content-length
- MINOR: mux-h2: Set known input payload length of the sedesc
- MINOR: h3: Set known input payload length of the sedesc
- MINOR: stconn: Move data from kip to kop when data are sent to the consumer
- MINOR: filters: Reset known input payload length if a data filter is used
- MINOR: hlua/http-fetch: Use <kip> instead of HTX extra field to get body size
- MINOR: cache: Use the <kip> value to check too big objects
- MINOR: compression: Use the <kip> value to check body size
- MEDIUM: mux-h1: Stop to use HTX extra value when formatting message
- MEDIUM: htx: Remove the HTX extra field
- MEDIUM: acme: don't insert acme account key in ckchs_tree
- BUG/MINOR: acme: memory leak from the config parser
- CI: cirrus-ci: bump FreeBSD image to 14-3
- BUG/MEDIUM: ssl: take care of second client hello
- BUG/MINOR: ssl: always clear the remains of the first hello for the second one
- BUG/MEDIUM: stconn: Properly forward kip to the opposite SE descriptor
- MEDIUM: applet: Forward <kip> to applets
- DEBUG: mux-h1: Dump <kip> and <kop> values with sedesc info
- BUG/MINOR: ssl: leak in ssl-f-use
- BUG/MINOR: ssl: leak crtlist_name in ssl-f-use
- BUILD: makefile: disable tail calls optimizations with memory profiling
- BUG/MEDIUM: applet: Improve spinning loop detection with the new API
- BUG/MINOR: ssl: Free global_ssl structure contents during deinit
- BUG/MINOR: ssl: Free key_base from global_ssl structure during deinit
- MEDIUM: jwt: Remove certificate support in jwt_verify converter
- MINOR: jwt: Add new jwt_verify_cert converter
- MINOR: jwt: Do not look into ckch_store for jwt_verify converter
- MINOR: jwt: Add new "jwt" certificate option
- MINOR: jwt: Add specific error code for known but unavailable certificate
- DOC: jwt: Add doc about "jwt_verify_cert" converter
- MINOR: ssl: Dump options in "show ssl cert"
- MINOR: jwt: Add new "add/del/show ssl jwt" CLI commands
- REGTEST: jwt: Test new CLI commands
- BUG/MINOR: ssl: Potential NULL deref in trace macro
- MINOR: regex: use a thread-local match pointer for pcre2
- BUG/MEDIUM: pools: fix bad freeing of aligned pools in UAF mode
- MEDIUM: pools: detect() when munmap() fails in UAF mode
- TESTS: quic: useless param for b_quic_dec_int()
- BUG/MEDIUM: pools: fix crash on filtered "show pools" output
- BUG/MINOR: pools: don't report "limited to the first X entries" by default
- BUG/MAJOR: lb-chash: fix key calculation when using default hash-key id
- BUG/MEDIUM: stick-tables: Don't forget to dec count on failure.
- BUG/MINOR: quic: check applet_putchk() for 'show quic' first line
- TESTS: quic: fix uninit of quic_cc_path const member
- BUILD: ssl: can't build when using -DLISTEN_DEFAULT_CIPHERS
- BUG/MAJOR: quic: uninitialized quic_conn_closed struct members
- BUG/MAJOR: quic: do not reset QUIC backends fds in closing state
- BUG/MINOR: quic: SSL counters not handled
- DOC: clarify the experimental status for certain features
- MINOR: config: remove experimental status on tune.disable-fast-forward
- MINOR: tree-wide: add missing TAINTED flags for some experimental directives
- MEDIUM: config: warn when expose-experimental-directives is used for no reason
- BUG/MEDIUM: threads/config: drop absent threads from thread groups
- REGTESTS: remove experimental from quic/retry.vtc
Recent commit 8b7a82cd30 ("MEDIUM: config: warn when
expose-experimental-directives is used for no reason") triggered on
this test for exactly the reason it was created: the tests were simply
done without quic on it. Let's drop the unneeded option.
Thread groups can be assigned arbitrary thread ranges, but if the
mentioned threads do not exist, this causes crashes in listener_accept()
or some connections to be ignored. The reason is that the calculated
mask is derived from the thread group's enabled threads count. Examples:
    global
        nbthread 2
        thread-groups 2
        thread-group 1 1-64
        thread-group 2 65-128

    frontend f-crash
        bind :8001 thread 1/all

    frontend f-freeze
        bind :8002 thread 2/all
This commit drops the non-existing threads, emits a warning when the
thread group ends up with fewer threads than requested, and an error
when it is left with no threads at all.
This must be backported to 3.1 since the issue is present there already.
If users start to enable expose-experimental-directives for the purpose
of testing one specific feature, chances are that the option will remain
there forever and hide the experimental status of other options.
Let's emit a warning if the option appears and is not used. This will
remind users that they can now drop it, and help keep configs safe for
future upgrades.
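As an illustration, a configuration like the following minimal sketch
(purely hypothetical) would now trigger the warning, since the directive
is present but nothing experimental is used:

    global
        expose-experimental-directives

    listen l1
        bind :8000
        server s1 127.0.0.1:8080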
We normally taint the process when using experimental directives, but
a handful of places were missed so we don't always know that they are
in use. Let's fix these places (hint for future directives, just look
for places checking for "experimental_directives_allowed", and add
"mark_tainted(TAINTED_CONFIG_EXP_KW_DECLARED);").
The option was turned off by default in 2.8 with commit 2f7c82bfd
("BUG/MINOR: haproxy: Fix option to disable the fast-forward"), however
at the same time it should have dropped its experimental status since
the feature itself is enabled by default. The only goal of the option is
to debug something, like many other tune.xxx options. The option should
still normally not be used unless developers looking into something
specific ask for it.
This could be backported if desired to simplify debugging, though it has
never been needed so far.
Certain features require "expose-experimental-directives" to be set in
the global section. Let's clarify that experimental features are only
maintained on a best-effort basis, may break during the stable cycle, and
are generally not maintained beyond the release of the next LTS branch
since that is extremely challenging, and early adopters are expected to
upgrade to benefit from improvements anyway.
The SSL counters were not handled at all for QUIC connections. This patch
implements ssl_sock_update_counters() by extracting the code from
ssl_sock.c, and calls this function where applicable in both the TLS/TCP
and QUIC parts.
Must be backported as far as 2.8.
This bug impacts only the backends.
When entering the closing state, a quic_conn_closed struct is used to
replace the quic_conn. In this state, the ->fd value was reset to -1 by
calling qc_init_fd(). This value is used by qc_may_use_saddr(), which
assumes it cannot be -1 for a backend: qc_test_fd() then returns false,
leading ->li to be dereferenced, which is legal only for a listener,
hence a possible crash.
This patch prevents such fd resets for backends.
No need to backport because QUIC backend support arrived with 3.3.
A quic_conn_closed struct is initialized to replace the quic_conn when
the connection enters the closing state, to reduce the connection memory
footprint. Its ->max_udp_payload member was not initialized, leading to
possible BUG_ON()s in qc_rcv_buf() when comparing the RX buffer size to
this value. The ->cntrs counters were also not initialized, with the only
consequence of generating wrong values for these counters.
Must be backported as far as 2.9.
Emeric reported that he can't build haproxy anymore since 9bc6a034
("BUG/MINOR: ssl: Free global_ssl structure contents during deinit").
src/ssl_sock.c:7020:40: error: comparison with string literal results in unspecified behavior [-Werror=address]
7020 | if (global_ssl.listen_default_ciphers != LISTEN_DEFAULT_CIPHERS)
| ^~
src/ssl_sock.c:7023:41: error: comparison with string literal results in unspecified behavior [-Werror=address]
7023 | if (global_ssl.connect_default_ciphers != CONNECT_DEFAULT_CIPHERS)
| ^~
src/ssl_sock.c: At top level:
Indeed the mentioned patch checks the pointer in order to only free
something freeable, but that cannot work because these constants are
string literals (possibly even passed on the compiler's command line),
not pointers the field could ever equal. Besides, the test is not
needed: these strings are strdup()'ed in __ssl_sock_init(), so they can
be freed directly.
Must be backported to every stable branch that contains 9bc6a034.
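In other words, the deinit code only needs something like this rough
sketch (the exact call used in the patch may differ):

    /* before: undefined comparison of a pointer against a string literal */
    if (global_ssl.listen_default_ciphers != LISTEN_DEFAULT_CIPHERS)
            free(global_ssl.listen_default_ciphers);

    /* after: the field is always strdup()'ed in __ssl_sock_init(), so it
     * may simply be freed unconditionally.
     */
    free(global_ssl.listen_default_ciphers);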
Fix quic_tx unittest module by adding an explicit define for <mtu> const
member of quic_cc_path.
This should fix coverity report from github issue #3162.
This can be backported up to 3.2.
Ensure applet_putchk()'s return value is checked when outputting the
CLI 'show quic' header line.
This is only to align with other usages of the same function, as the
trash output buffer should always be large enough for it. As such, the
command is simply aborted if this is not the case.
This should fix coverity report from github issue #3139.
This could be backported up to 2.8.
In stksess_new(), if we failed to allocate memory for the new stksess,
don't forget to decrement the table entry count, as nobody else will
do it for us.
An artificially high count could at least lead to entries being purged
when there is no need to.
This should be backported up to 2.8.
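A minimal sketch of the idea (the field and helper names below follow
the description above and may not match the patch exactly):

    ts = pool_alloc(t->pool);
    if (!ts) {
            /* undo the increment performed before the allocation attempt */
            HA_ATOMIC_DEC(&t->current);
            return NULL;
    }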
A subtle regression was introduced in 3.0 by commit faa8c3e02 ("MEDIUM:
lb-chash: Deterministic node hashes based on server address"). When keys
are calculated from the server's ID (which is the default), due to the
reorganisation of the code, the key ended up being hashed twice instead
of being multiplied by the scaling range.
While most users will never notice it, it is blocking some large cache
users from upgrading from 2.8 to 3.0 or 3.2 because the keys are
redistributed.
After a check with users on the mailing list [1], it was estimated that
keeping the current situation is the worst choice, because those who have
not yet upgraded would face the problem, while by fixing it, those who
already have upgraded and for whom the redistribution happened smoothly
will handle it just as well again.
As such this fix must be backported to 3.0 without waiting (in order
to spare those who upgrade from two redistributions). Please note
that only configurations featuring "hash-type consistent" and not
having "hash-key" set to a value other than "id" are affected,
others are not (e.g. "hash-key addr" is unaffected).
[1] https://www.mail-archive.com/haproxy@formilux.org/msg46115.html
With the fix in commit 982805e6a3 ("BUG/MINOR: pools: Fix the dump of
pools info to deal with buffers limitations"), the max count is now
compared to the number of dumped pools instead of the configured
number, and keeping >= is no longer valid because maxcnt defaults to
that same value when not set. As a result, since that patch we were
always displaying "limited to the first X entries", where X is the
number of dumped entries, even in the absence of any limitation.
Let's just fix the comparison to only show this when the limit is lower.
This must be backported to 3.2 where the patch above already is.
The truncation of pools output that was addressed in commit 982805e6a3
("BUG/MINOR: pools: Fix the dump of pools info to deal with buffers
limitations") required splitting the pools filling from the dumping.
However there is a problem when the limit passed is lower than the
number of pools, when a pool name is specified, or when pool caches are
disabled: in these cases the number of filled slots is lower than the
initially allocated one, and empty entries will be visited, either by
the sort functions when filling the entries if "byxxx" is specified, or
by the dump function after the last entry, and none of these functions
was expecting to be passed a NULL entry.
Let's just re-adjust nbpools to match the number of filled entries at
the end. Anyway the totals are calculated on the number of dumped
entries.
This must be backported to 3.2 since the fix above was backported there
as well.
The third parameter passed to b_quic_dec_int() is uninitialized. This is
not a bug, but it disturbs coverity for an unknown reason, as revealed by
GH issue #3154. This patch takes the opportunity to pass NULL instead,
avoiding such an unneeded third parameter.
Should be backported to 3.2 where this unit test was introduced.
Better check that munmap() always works, otherwise it means we might
have miscalculated an address, and if it fails silently, it will eat
all the memory extremely quickly. Let's add a BUG_ON() on munmap's
return.
As reported by Christopher, in UAF mode memory release of aligned
objects as introduced in commit ef915e672a ("MEDIUM: pools: respect
pool alignment in allocations") does not work. The padding calculation
in the freeing code is no longer correct since it now depends on the
alignment, so munmap() fails on EINVAL. Fortunately we don't need that
calculation: the padding is simply the low bits of the passed address,
which is much simpler to compute since all mmaps are page-aligned.
There's no need to backport this, as this was introduced in 3.3.
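Purely to illustrate the "low bits" remark above (this is not the actual
code): since mmap() returns page-aligned addresses, the in-page offset of
a pointer, and hence the start of the page holding it, is trivial to
recover:

    size_t pgsz = sysconf(_SC_PAGESIZE);        /* e.g. 4096            */
    size_t pad  = (uintptr_t)ptr & (pgsz - 1);  /* in-page offset       */
    void  *page = (char *)ptr - pad;            /* page containing ptr  */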
The pcre2 matching requires an array of matches for grouping, that is
allocated when executing the rule by pre-processing it, and that is
immediately freed after use. This is quite inefficient and results in
annoying patterns in "show profiling" that attribute the allocations
to libpcre2 and the releases to haproxy.
A good suggestion from Dragan is to pre-allocate these per thread,
since the entry is not specific to a regex. In addition we're already
limited to MAX_MATCH matches so we don't even have the problem of
having to grow it while parsing nor processing.
The current patch adds a per-thread pair of init/deinit functions to
allocate a thread-local entry for that, and gets rid of the dynamic
allocations. It will result in cleaner memory management patterns and
slightly higher performance (+2.5%) when using pcre2.
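A hedged sketch of the approach (the function names are illustrative;
only MAX_MATCH and the per-thread init/deinit idea come from the
description above):

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>

    /* one match-data object per thread, created once at thread init and
     * reused for every pcre2_match() call instead of being allocated and
     * freed around each evaluation.
     */
    static THREAD_LOCAL pcre2_match_data *match_data;

    static int regex_alloc_match_data(void)
    {
            match_data = pcre2_match_data_create(MAX_MATCH, NULL);
            return match_data != NULL;
    }

    static void regex_free_match_data(void)
    {
            pcre2_match_data_free(match_data);
            match_data = NULL;
    }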
'ctx' might be NULL when we exit 'ssl_sock_handshake', so it cannot be
dereferenced without a check in the trace macro.
This was found by Coverity and raised in GitHub issue #3113.
This patch should be backported up to 3.2.
The new "add/del ssl jwt <file>" commands allow to change the "jwt" flag
of an already loaded certificate. It allows to delete certificates used
for JWT validation, which was not yet possible.
The "show ssl jwt" command iterates over all the ckch_stores and dumps
the ones that have the option set.
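Typical usage over the stats socket would look like this (the file name
and socket path are only examples):

    $ echo "add ssl jwt site1.pem" | socat stdio /var/run/haproxy.sock
    $ echo "show ssl jwt"          | socat stdio /var/run/haproxy.sock
    $ echo "del ssl jwt site1.pem" | socat stdio /var/run/haproxy.sock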
Add information about the new "jwt_verify_cert" converter and update the
existing "jwt_converter" doc to remove mentions of certificates from it.
Add information about the new "jwt" certificate option.
A certificate that does not have the 'jwt' flag enabled cannot be used
for JWT validation. We now raise a specific return value so that such a
case can be identified.
This option can be used to enable the use of a given certificate for JWT
verification. It defaults to 'off' so certificates that are declared in
a crt-store and will be used for JWT verification must have a
"jwt on" option in the configuration.
This converter will be in charge of performing the same operation as the
'jwt_verify' one except that it takes a full-on pem certificate path
instead of a public key path as parameter.
The certificate path can be either provided directly as a string or via
a variable. This makes it possible to use certificates that are not
known at init time to perform token validation.
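A hypothetical usage sketch, assuming the converter keeps jwt_verify's
<alg>,<key> argument order (see the updated documentation for the
authoritative syntax):

    http-request set-var(txn.bearer) http_auth_bearer
    http-request set-var(txn.jwt_alg) var(txn.bearer),jwt_header_query('$.alg')
    http-request deny unless { var(txn.bearer),jwt_verify_cert(txn.jwt_alg,"/etc/haproxy/jwt-issuer.pem") -m int 1 }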
The jwt_verify converter will not take full-on certificates anymore,
in favor of the soon-to-come jwt_verify_cert. We might end up with a
new jwt_verify_hmac in the future as well, which would allow deprecating
the jwt_verify converter and remove the need for a specific internal
tree for public keys.
The logic that always looked into the internal jwt tree by default and
resorted to locking the ckch tree as little as possible will also be
removed. This gets rid of the duplicated references to EVP_PKEYs, the
one in the jwt tree entry and the one in the ckch_store.
The key_base field of the global_ssl structure is a strdup'ed field
(when set) which was never freed during deinit.
This patch can be backported up to branch 3.0.
Some fields of the global_ssl structure are strings that are strdup'ed
but never freed. There is only one static global_ssl structure so not
much memory is used, but we might as well free them during deinit.
This patch can be backported to all stable branches.
The conditions to detect a spinning loop for applets based on the new API
are not accurate. We cannot keep checking only the channel buffers' state
to know whether an applet has made some progress; at the very least, we
must also check the applet's buffers.
After digging to find the right way to do it, it was clear that the best
approach is to use something similar to what is performed for streams,
namely checking read and write events. And in fact, it is quite easy to
do with the new API. So let's do so.
This patch must be backported as far as 3.0.
The purpose of memory profiling is precisely to figure out which function
allocates and which function frees specific objects. It turns out
that a non-negligible number of release callbacks basically do nothing
but a free() or pool_free() call and return, which the compiler happily
turns into a jump, making the caller of that callback appear as the
real one. That's how we can see libcrypto releasing to pools such as
ssl-capture for example, which also makes the per-DSO calls appear
wrong:
10000 0 10720000 0| 0x448c8d ssl_async_fd_free+0x3b9d p_alloc(1072) [pool=ssl-capture]
50000 0 6800000 0| 0x4456b9 ssl_async_fd_free+0x5c9 p_alloc(136) [pool=ssl-keylogf]
10072 0 644608 0| 0x447f14 ssl_async_fd_free+0x2e24 p_alloc(64) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445987 ssl_async_fd_free+0x897 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x4459b8 ssl_async_fd_free+0x8c8 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x4459e9 ssl_async_fd_free+0x8f9 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445a1a ssl_async_fd_free+0x92a p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445a4b ssl_async_fd_free+0x95b p_free(-136) [pool=ssl-keylogf]
0 20072 0 11364608| 0x7f5f1397db62 libcrypto:CRYPTO_free_ex_data+0xf2/0x261 p_free(-566) [pool=ssl-keylogf] [locked=72 (0.3 %)]
Worse, as can be seen on the last line above, only a single pool can be
accounted per call place (since we don't release to arbitrary pools), so
the stats are misleading by reporting only the first pool used when the
same function calls multiple release callbacks. This is why the free
call totals 10k ssl-capture and 10072 ssl-keylogfile.
Let's just disable tail call optimization when using memory profiling.
The gains it brings are only very marginal and it complicates debugging
so much that it's not worth it. Now the output is correct, and no longer
claims that libcrypto is the caller:
10000 0 10720000 0| 0x448c9f ssl_async_fd_free+0x3b9f p_alloc(1072) [pool=ssl-capture]
0 10000 0 10720000| 0x445af0 ssl_async_fd_free+0x9f0 p_free(-1072) [pool=ssl-capture]
50000 0 6800000 0| 0x4456c9 ssl_async_fd_free+0x5c9 p_alloc(136) [pool=ssl-keylogf]
10177 0 1221240 0| 0x45543d ssl_async_fd_handler+0xb51d p_alloc(120) [pool=ssl_sock_ct] [locked=165 (1.6 %)]
10061 0 643904 0| 0x447f1c ssl_async_fd_free+0x2e1c p_alloc(64) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445987 ssl_async_fd_free+0x887 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x4459b8 ssl_async_fd_free+0x8b8 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x4459e9 ssl_async_fd_free+0x8e9 p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445a1a ssl_async_fd_free+0x91a p_free(-136) [pool=ssl-keylogf]
0 10000 0 1360000| 0x445a4b ssl_async_fd_free+0x94b p_free(-136) [pool=ssl-keylogf]
0 10188 0 1222560| 0x44f518 ssl_async_fd_handler+0x55f8 p_free(-120) [pool=ssl_sock_ct] [locked=176 (1.7 %)]
0 10072 0 644608| 0x445aa6 ssl_async_fd_free+0x9a6 p_free(-64) [pool=ssl-keylogf] [locked=72 (0.7 %)]
An attempt was made to only instrument pool_free() to place a compiler
barrier, but that resulted in much larger code and wouldn't cover
functions ending with a simple "free()" call. "ha_free()" however is
already immune to tail call optimization since it has to write the
NULL after free() returns.
This should be backported to recent stable releases that are still
regularly being debugged.
For now, no applets are using the <kop> value when consuming data. At least,
as far as I know. But it remains a good idea to keep the applet API
compatible. So now, the <kip> of the opposite side is properly forwarded to
applets.
The HTX refactoring that removed the extra field introduced a bug in the
stream-connector part. The <kip> (known input payload) value of a sedesc
was moved to <kop> (known output payload) of the same sedesc. Of course,
this is totally wrong: the <kip> value of a sedesc must be forwarded to
the opposite side.
In addition, the operation is performed in sc_conn_send(), where we
manipulate the stream-connectors. So the se_fwd_kip() function was
changed to use the stream-connectors directly.
Now, the sc_ep_fwd_kip() function is called with both stream-connectors
to properly forward <kip> from one side to the opposite side.
The bug is 3.3-specific. No backport needed.
William rightfully pointed out that despite the ssl capture being a
structure, some of its entries are only set for certain contents, so we
need to always zero it before use in order to clear any remains of a
previous use. Otherwise we could possibly report some entries that were
only present in the first hello and not in the second one. No need to
clear the data though, since any remains will not be referenced by the
fields.
This must be backported wherever commit 336170007c ("BUG/MEDIUM: ssl:
take care of second client hello") is backported.
For a long time we've been observing sporadic leaks of ssl-capture
pool entries on haproxy.org without ever figuring out the exact root
cause. All that was seen was that fewer calls to the free callback were
made than calls to the hello parsing callback, and these were never
reproduced locally.
It recently turned out to be triggered by the presence of "curves" or
"ecdhe" on the "bind" line. Captures have shown the presence of a second
client hello, called "Change Cipher Client Hello" in Wireshark traces,
that calls the client hello callback again. That callback wasn't prepared
to be called twice per connection, so it would allocate a new ssl-capture
entry and assign it to the ex_data entry, possibly overwriting the
previous one.
In this case, the fix is super simple: just reuse the current ex_data
if it exists, otherwise allocate a new one. This completely solves the
problem.
Other callbacks have been audited for the same issue and are not
affected: ssl_ini_keylog() already performs this check and ignores
subsequent calls, and other ones do not allocate data.
This must be backported to all supported versions.
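A rough sketch of the resulting logic (the variable and pool names are
illustrative, not the exact haproxy symbols):

    capture = SSL_get_ex_data(ssl, ssl_capture_ptr_index);
    if (!capture) {
            capture = pool_zalloc(pool_head_ssl_capture);
            if (!capture)
                    return;
            SSL_set_ex_data(ssl, ssl_capture_ptr_index, capture);
    }
    /* (re)fill <capture> from the current client hello */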
This patch fixes some memory leaks in the configuration parser:
- deinit_acme() was never called
- ha_free() was missing before the strdup() calls when a section value
  is overwritten
- some free() calls were missing in deinit_acme()
Don't insert the acme account key in the ckchs_tree anymore. A ckch_store
is not meant to contain only a private key, and CLI operations are not
possible with it either. It doesn't make much sense to keep it that way
until we rework the ckch_store.
Thanks to the previous changes, it is now possible to remove the <extra>
field from the HTX structure. The HTX_FL_ALTERED_PAYLOAD flag is also
removed because it is now unused.