Compare commits

...

218 Commits

Author SHA1 Message Date
Remi Tricot-Le Breton
362ff2628f REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
Many tests use the A128KW algorithm which is not supported by AWS-LC but
instead of removing those tests we will just have a hardcoded value set
by default in this case.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
aba18bac71 MINOR: jwe: Some algorithms not supported by AWS-LC
AWS-LC does not have EVP_aes_128_wrap or EVP_aes_192_wrap so the A128KW
and A192KW algorithms will not be supported for JWE token decryption.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
39da1845fc DOC: jwe: Add doc for jwt_decrypt converters
Add doc for jwt_decrypt_secret and jwt_decrypt_cert converters.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
4b73a3ed29 REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
Test the new jwt_decrypt converters.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
e3a782adb5 MINOR: jwe: Add new jwt_decrypt_cert converter
This converter checks the validity and decrypts the content of a JWE
token that has an asymmetric "alg" algorithm (RSA). In such a case, we
must provide a path to an already loaded certificate and private key
that has the "jwt" option set to "on".
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
416b87d5db MINOR: jwe: Add new jwt_decrypt_secret converter
This converter checks the validity and decrypts the content of a JWE
token that has a symmetric "alg" algorithm. In such a case, we only
require a secret as a parameter in order to decrypt the token.
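For illustration only, a configuration sketch could look like the one
below. The converter argument form, the variable names and the use of
http_auth_bearer to extract the token are assumptions based on this
description, not the final documented syntax:

  frontend fe_api
      bind :8080
      # extract the Bearer token, then try to decrypt it with the shared secret
      http-request set-var(txn.jwe) http_auth_bearer
      http-request set-var(txn.claims) var(txn.jwe),jwt_decrypt_secret("my-shared-secret")
      http-request deny unless { var(txn.claims) -m found }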
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
2b45b7bf4f REGTESTS: ssl: Add tests for new aes cbc converters
This test mimics what was already done for the aes_gcm converters. Some
data is encrypted and directly decrypted and we ensure that the output
was not changed.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
c431034037 MINOR: ssl: Add new aes_cbc_enc/_dec converters
These converters allow encrypting or decrypting data with AES in Cipher
Block Chaining mode. They work the same way as the already existing
aes_gcm_enc/_dec ones apart from the AEAD tag notion which is not
supported in CBC mode.
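As a sketch only, and assuming the same argument convention as the
existing aes_gcm_enc/aes_gcm_dec converters minus the AEAD tag (the exact
signature may differ), a round trip could look like:

  # txn.iv and txn.key are assumed to hold the base64-encoded IV and key
  http-request set-var(txn.enc) req.hdr(x-plain),aes_cbc_enc(128,txn.iv,txn.key)
  http-request set-var(txn.dec) var(txn.enc),aes_cbc_dec(128,txn.iv,txn.key)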
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
f0e64de753 MINOR: ssl: Factorize AES GCM data processing
The parameter parsing and processing and the actual crypto part of the
aes_gcm converter are interleaved. This patch puts the crypto parts in a
dedicated function for better reuse in the upcoming JWE processing.
2026-01-15 10:56:27 +01:00
Amaury Denoyelle
6870551a57 MEDIUM: proxy: force traffic on unpublished/disabled backends
A recent patch has introduced a new state for proxies: unpublished
backends. Such backends won't be eligible for traffic, thus
use_backend/default_backend rules which target them won't match and
content switching rules processing will continue.

This patch defines a new frontend keyword, 'force-be-switch'. This
keyword allows ignoring the unpublished or disabled state. Thus,
use_backend/default_backend will match even if the target backend is
unpublished or disabled. This is useful to be able to test a backend
instance before exposing it outside.

This new keyword is converted into a persist rule of the new type
PERSIST_TYPE_BE_SWITCH, stored in the proxy's persist_rules list member.
This is the only persist rule applicable to the frontend side. Prior to
this commit, the persist_rules list of pure frontend proxies was always
empty.

This new feature requires an adjustment in process_switching_rules().
Now, when a use_backend/default_backend rule matches a non-eligible
backend, frontend persist_rules are inspected to detect if a
force-be-switch is present so that the backend may be selected.
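As an illustration, a sketch of the intended usage could be the
following; whether the keyword accepts an ACL condition like other
persist rules is an assumption, and the backend names are invented:

  frontend fe_main
      bind :80
      # let internal testers reach the backend even while it is unpublished
      force-be-switch if { src 10.0.0.0/8 }
      use_backend be_canary if { path_beg /canary }
      default_backend be_prod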
2026-01-15 09:08:19 +01:00
Amaury Denoyelle
16f035d555 MINOR: cfgparse: adapt warnif_cond_conflicts() error output
Utility function warnif_cond_conflicts() is used when parsing an ACL.
Previously, the function directly called ha_warning() to report an error.
Change the function so that it now takes the error message as an argument.
The caller can then output it as wanted.

This change is necessary to use the function when parsing a keyword
registered as cfg_kw_list. The next patch will reuse it.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
82907d5621 MINOR: stats: report BE unpublished status
A previous patch defined a new proxy status: unpublished backends. This
patch extends this by changing the proxy status reported in stats. If
unpublished is set, an extra "(UNPUB)" is added to the field.

The HTML stats page is also slightly updated. If a backend is up but
unpublished, its status will be reported in orange.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
797ec6ede5 MEDIUM: proxy: implement publish/unpublish backend CLI
Define a new set of CLI commands publish/unpublish backend <be>. The
objective is to be able to change the status of a backend to
unpublished. Such a backend is considered ineligible for traffic: this
allows skipping use_backend rules which target it.

Note that contrary to disabled/stopped proxies, an unpublished backend
still has server checks running on it.

Internally, a new proxy flag PR_FL_BE_UNPUBLISHED is defined. The CLI
command handlers for "publish backend" and "unpublish backend" are
executed under thread isolation. This guarantees that the flag can safely
be set or removed in the CLI handlers, and read during content-switching
processing.
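For example, toggling a backend from the runtime CLI could look like this
(socket path and backend name are illustrative):

  $ echo "unpublish backend be_canary" | socat stdio /var/run/haproxy.sock
  $ echo "publish backend be_canary" | socat stdio /var/run/haproxy.sock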
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
21fb0a3f58 MEDIUM: proxy: do not select a backend if disabled
A proxy can be marked as disabled using the keyword with the same name.
The doc mentions that it won't process any traffic. However, this is not
really the case for backends as they may still be selected via switching
rules during stream processing.

In fact, streams targeting disabled backends are currently processed up
to assign_server(). However, no eligible server is found at this stage,
resulting in a connection closure or an HTTP 503, which is expected. So
in the end, servers in disabled backends won't receive any traffic, but
only because post-parsing steps are not performed on such backends. Thus,
this can be considered functional only via side effects.

This patch clarifies the handling of disabled backends, so that they are
never selected via switching rules. Now, process_switching_rules() will
ignore disabled backends and continue rules evaluation.

As this is a behavior change, this patch is labelled as medium. The
documentation for use_backend is updated accordingly.
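To illustrate the new behavior, consider the sketch below (backend and
server names are invented): the use_backend rule pointing to the disabled
backend no longer matches, so evaluation falls through to the default
backend.

  frontend fe_main
      bind :80
      use_backend be_legacy if { path_beg /legacy }
      default_backend be_live

  backend be_legacy
      disabled
      server s1 192.0.2.10:8080

  backend be_live
      server s2 192.0.2.20:8080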
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2d26d353ce REGTESTS: add test on backend switching rules selection
Create a new test to ensure that switching rules selection works
correctly. Currently, this checks that dynamic backend switching works as
expected. If a matching rule resolves to a non-existing backend, the
default backend is used instead.

This regtest should be useful as switching-rules will be extended in a
future set of patches to add new abilities on backends, linked to
dynamic backend support.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
12975c5c37 MEDIUM: stream: refactor switching-rules processing
This commit rewrites the process_switching_rules() function. The
objective is to simplify backend selection so that a single unified
stream_set_backend() call is kept, for both the regular and default
backend cases.

This patch will be useful to add new capabilities on backends, in the
context of dynamic backend support implementation.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2f6aab9211 BUG/MINOR: proxy: free persist_rules
The force-persist proxy keyword is converted into a persist_rule, stored
in the proxy's persist_rules list member. Each new rule is dynamically allocated
during parsing.

This commit fixes the memory leak on deinit due to a missing free on
persist_rules list entries. This is done via deinit_proxy()
modification. Each rule in the list is freed, along with its associated
ACL condition type.

This can be backported to every stable version.
2026-01-15 09:08:18 +01:00
Olivier Houchard
a209c35f30 MEDIUM: thread: Turn the group mask in thread set into a group counter
If we want to be able to have more than 64 thread groups, we can no
longer store thread group masks in a long.
One remaining place where it is done is in struct thread_set. However,
it is not really used as a mask anywhere; all we want is a thread group
counter, so convert that mask to a counter.
2026-01-15 05:24:53 +01:00
Olivier Houchard
6249698840 BUG/MEDIUM: queues: Fix arithmetic when filling non_empty_tgids
Fix the arithmetic when pre-filling non_empty_tgids while we still have
more than 32/64 thread groups left: to get the right index, we of course
have to divide the number of thread groups by the number of bits in a
long.
This bug was introduced by commit
7e1fed4b7a8b862bf7722117f002ee91a836beb5, but hopefully was not hit
because it requires having at least as many thread groups as there are
bits in a long, which is impossible on 64-bit machines, as MAX_TGROUPS
is still 32.
2026-01-15 04:28:04 +01:00
Olivier Houchard
1397982599 MINOR: threads: Eliminate all_tgroups_mask.
Now that it is unused, eliminate all_tgroups_mask, as we can't use 64-bit
masks to represent thread groups if we want to be able to have more
than 64 thread groups.
2026-01-15 03:46:57 +01:00
Olivier Houchard
7e1fed4b7a MINOR: queues: Turn non_empty_tgids into a long array.
In order to be able to have more than 64 thread groups, turn
non_empty_tgids into a long array, so that we have enough bits to
represent every thread group, and manipulate it with the ha_bit_*
functions.
2026-01-15 03:46:57 +01:00
Aurelien DARRAGON
2ec387cdc2 BUG/MINOR: http_act: fix deinit performed on uninitialized lf_expr in release_http_map()
As reported by GH user @Lzq-001 on issue #3245, the config below would
cause haproxy to SEGFAULT after having reported an error:

  frontend 0000000
        http-request set-map %[hdr(0000)0_

The root cause is simple: in parse_http_set_map(), we define the release
function (which is responsible for clearing the lf_expr expressions used by the
action), prior to initializing the expressions, while the release
function assumes the expressions are always initialized.

For all similar actions, we already perform the init prior to setting
the related release function, but this was not the case for
parse_http_set_map(). We fix the bug by initializing the expressions
earlier.

Thanks to @Lzq-001 for having reported the issue and provided a simple
reproducer.

It should be backported to all stable versions. Note that for versions prior
to 3.0, lf_expr_init() should be replaced by LIST_INIT(), see
6810c41 ("MEDIUM: tree-wide: add logformat expressions wrapper")
2026-01-14 20:05:39 +01:00
Olivier Houchard
7f4b053b26 MEDIUM: counters: mostly revert da813ae4d7cb77137ed
Contrary to what was previously believed, there are corner cases where
the counters may not be allocated, and we may want to make them optional
at a later date, so we have to check if those counters are there.
However, just checking that shared.tg is non-NULL is enough; we can then
assume that shared.tg[tgid - 1] has properly been allocated too.
Also modify the various COUNTER_SHARED_* macros to make sure they check
for that too.
2026-01-14 12:39:14 +01:00
Amaury Denoyelle
7aa839296d BUG/MEDIUM: quic: fix ACK ECN frame parsing
ACK frames are either of type 0x02 or 0x03. The latter is an indication
that it contains extra ECN related fields. In haproxy QUIC stack, this
is considered as a different frame type, set to QUIC_FT_ACK_ECN, with
its own set of builder/parser functions.

This patch fixes ACK ECN parsing function. Indeed, the latter suffered
from two issues. First, 'first ACK range' and 'ACK ranges' were
inverted. Then, the three remaining ECN fields were simply ignored by
the parsing function.

This issue can cause desynchronization in the frames parsing code, which
may result in various outcomes. Most of the time, the connection will be
aborted by haproxy due to an invalid frame content read.

Note that this issue was not detected earlier as most clients do not
enable ECN support if the peer is not able to emit ACK ECN frames first,
which haproxy currently never sends. Nevertheless, this is not the case
for every client implementation, thus proper ACK ECN parsing is
mandatory for proper QUIC stack support.

Fix this by adjusting quic_parse_ack_ecn_frame() function. The remaining
ECN fields are parsed to ensure correct packet parsing. Currently, they
are not used by the congestion controller.

This must be backported up to 2.6.
2026-01-13 15:08:02 +01:00
Olivier Houchard
82196eb74e BUG/MEDIUM: threads: Fix binding thread on bind.
The code to parse the "thread" keyword on bind lines was changed to
check the thread numbers against the value provided with
max-threads-per-group, if any was provided. However, at the time those
thread keywords are parsed, that value may not yet have been set, which
breaks the feature. So revert to checking against MAX_THREADS_PER_GROUP
instead; it should have no major impact.
2026-01-13 11:45:46 +01:00
Olivier Houchard
da813ae4d7 MEDIUM: counters: Remove some extra tests
Before updating counters, a few tests are made to check if the counters
exist. But those counters should always exist at this point, so just
remove them.
This commit should have no impact, but can easily be reverted with no
functional impact if various crashes appear.
2026-01-13 11:12:34 +01:00
Olivier Houchard
5495c88441 MEDIUM: counters: Dynamically allocate per-thread group counters
Instead of statically allocating the per-thread group counters,
based on the max number of thread groups available, allocate
them dynamically, based on the number of thread groups actually
used. That way we can increase the maximum number of thread
groups without using an unreasonable amount of memory.
2026-01-13 11:12:34 +01:00
Willy Tarreau
37057feb80 BUG/MINOR: net_helper: fix IPv6 header length processing
The IPv6 header contains a payload length that excludes the 40 bytes of
IPv6 packet header, which differs from IPv4's total length which includes
it. As a result, the parser was wrong and would only see the IP part and
not the TCP one unless sufficient options were present to cover it.

This issue came in 3.4-dev2 with recent commit e88e03a6e4 ("MINOR:
net_helper: add ip.fp() to build a simplified fingerprint of a SYN"),
so no backport is needed.
2026-01-13 08:42:36 +01:00
Aurelien DARRAGON
fcd4d4a7aa BUG/MINOR: hlua_fcn: ensure Patref:add_bulk() is given a table object before using it
As reported by GH user @kanashimia in GH #3241, providing anything other
than a table to the Patref:add_bulk() method could cause a segfault because
we were calling lua_next() with the lua object without ensuring it
actually is a table.

Let's add the missing lua_istable() check on the stack object before
calling lua_next() function on it.

It should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()")
2026-01-12 17:30:54 +01:00
Aurelien DARRAGON
04545cb2b7 BUG/MINOR: hlua_fcn: fix broken yield for Patref:add_bulk()
In GH #3241, GH user @kanashimia reported that the Patref:add_bulk()
method would raise a Lua exception when called with more than 101
elements at once.

As identified by @kanashimia, there was an error in the way the
add_bulk() method was forced to yield after precisely 101 elements.
The yield is there to ensure Lua doesn't eat too many resources at
once and doesn't impact haproxy's core responsiveness, but the check
for the yield was misplaced, resulting in improper stack content upon
resume.

Thanks to user @kanashimia who even provided a reproducer which helped
a lot to troubleshoot the issue.

This fix should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()") where the bug was introduced.
2026-01-12 17:30:52 +01:00
Olivier Houchard
b1cfeeef21 BUG/MINOR: stats-file: Use a 16bits variable when loading tgid
Now that the tgid stored in the stats file has been increased to 16bits
by commit 022cb3ab7fdce74de2cf24bea865ecf7015e5754, don't forget to
increase the variable size when reading it from the file, too.
This should have no impact given the maximum thread group limit is still
32.
2026-01-12 09:48:54 +01:00
Olivier Houchard
022cb3ab7f MINOR: stats: Increase the tgid from 8bits to 16bits
Increase the size of the stored tgid in the stats file from 8 bits to
16 bits, so that we can have more than 256 thread groups. 65536 should be
enough for some time.

This bumps the stats file minor version, as the structure changes.
2026-01-12 09:39:52 +01:00
Olivier Houchard
c0f64fc36a MINOR: receiver: Dynamically alloc the "members" field of shard_info
Instead of always allocating MAX_TGROUPS members, allocate them
dynamically, using the number of thread groups we'll use, so that
increasing MAX_TGROUPS will not have a huge impact on the structure
size.
2026-01-12 09:32:27 +01:00
Tim Duesterhus
96faf71f87 CLEANUP: connection: Remove outdated note about CO_FL 0x00002000 being unused
This flag is used as of commit dcce9369129f6ca9b8eed6b451c0e20c226af2e3
("MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag"). This patch
should be backported to 3.3. Apparently dcce9369129 has been backported
to 3.2 and 3.1 already, with that change already applied, so no need for a
backport there.
2026-01-12 03:22:15 +01:00
Willy Tarreau
2560cce7c5 MINOR: tcp-sample: permit retrieving tcp_info from the connection/session stage
The fc_xxx info retrieved over tcp_info could currently not be accessed
before a stream is created, due to a test that verified the existence of
a stream. The rationale here was that the function works both for the
frontend and the backend. Let's always retrieve this info from the
session in the frontend case, so that it now becomes possible to
set variables at connection/session time. The doc did not mention this
limitation so this could almost be considered as a bug.
2026-01-11 15:48:20 +01:00
Willy Tarreau
880bbeeda4 MINOR: sample: also support retrieving fc.timer.handshake without a stream
Some timers, like the handshake timer, are stored in the session and are
only copied to the logs struct when a stream is created. But this means
we can't measure it without a stream, nor store it once and for all in a
variable at session creation time. Let's extend the sample fetch function
to retrieve it from the session when no stream is present. The doc did not
mention this limitation so this could almost be considered as a bug.
2026-01-11 15:48:19 +01:00
Amaury Denoyelle
875bbaa7fc MINOR: cfgparse: remove duplicate "force-persist" in common kw list
"force-persist" proxy keyword is listed twice in common_kw_list. This
patch removes the duplicate occurrence.

This could be backported up to 2.4.
2026-01-09 16:45:54 +01:00
Willy Tarreau
46088b7ad0 MEDIUM: config: warn if some userlist hashes are too slow
It was reported in GH #2956 and more recently in GH #3235 that some
hashes are way too slow. The former triggers watchdog warnings during
checks, the latter sees the config parsing take 20 seconds. This is
always due to the use of hash algorithms that are not suitable for use
in low-latency environments like the web. They might be fine for a local
auth though. The difficulty, as explained by Philipp Hossner, is that
developers are not aware of this cost and adopt this without suspecting
any side effect.

The proposal here is to measure the crypt() call time and emit a warning
if it takes more than 10ms (which is already extreme). This was tested
by Philipp and confirmed to catch his case.

This is marked medium as it might start to report warnings on config
suffering from this problem without ever detecting it till now.
2026-01-09 14:56:18 +01:00
akarl10
a203ce6854 BUG/MINOR: ech/quic: enable ech configuration also for quic listeners
Patch dba4fd24 ("MEDIUM: ssl/ech: config and load keys") introduced
ECH configuration for bind lines, but the QUIC configuration parser
still suffers from not using the same code as the TCP/TLS one, so the
init for QUIC was missed.

Must be backported in 3.3.
2026-01-08 17:34:28 +01:00
William Lallemand
6e1718ce4b CI: github: remove ERR=1 temporarily from the ECH job
The ECH job still fails to compile since the OpenSSL 4.0 deprecated
functions were not removed yet. Let's remove ERR=1 temporarily.

We do know that there's a regression in OpenSSL 4.0 with these
reg-tests though:

Error: #    top  TEST reg-tests/ssl/set_ssl_crlfile.vtc FAILED (0.219) exit=2
Error: #    top  TEST reg-tests/ssl/set_ssl_cafile.vtc FAILED (0.236) exit=2
Error: #    top  TEST reg-tests/quic/set_ssl_crlfile.vtc FAILED (0.196) exit=2
2026-01-08 17:32:27 +01:00
Christian Ruppert
dbe52cc23e REGTESTS: ssl: Fix reg-tests curve check
OpenSSL changed the output from "Server Temp Key" in prior versions to
"Peer Temp Key" in recent ones.
a39dc27c25
It looks like it affects OpenSSL >= 3.5.0.
This broke the reg-test for e.g. Debian 13 builds, which use OpenSSL 3.5.1.

Fixes bug #3238

Could be backported to all branches.

Signed-off-by: Christian Ruppert <idl0r@qasl.de>
2026-01-08 16:14:54 +01:00
William Lallemand
623aa725a2 BUG/MINOR: cli/stick-tables: argument to "show table" is optional
Discussed in issue #3187, the CLI help is confusing for the "show table"
command as it seems that the argument is mandatory.

This patch puts the argument between square brackets to remove the
confusion.
2026-01-08 11:54:01 +01:00
Willy Tarreau
dbba442740 BUILD: sockpair: fix build issue on macOS related to variable-length arrays
In GH issue #3226, Sergey Fedorov (@barracuda156) reported that since
commit 10c14a1ed0 ("MINOR: proto_sockpair: send_fd_uxst: init iobuf,
cmsghdr, cmsgbuf to zeros"), macOS 10.6.8 with gcc 14.3.0 doesn't build
anymore:

  src/proto_sockpair.c: In function 'send_fd_uxst':
  src/proto_sockpair.c:246:49: error: variable-sized object may not be initialized except with an empty initializer
    246 |         char cmsgbuf[CMSG_SPACE(sizeof(int))] = {0};
        |                                                 ^
  src/proto_sockpair.c:247:45: error: variable-sized object may not be initialized except with an empty initializer
    247 |         char buf[CMSG_SPACE(sizeof(int))] = {0};
        |                                             ^

Upon investigation, it appears that the CMSG_SPACE() macro on this OS
looks too complex for gcc to consider it as a constant, so it takes
these buffers for variable-length arrays and cannot initialize them.

Let's move to a simple memset() instead, which Sergey confirmed fixes
the problem.

This needs to be backported as far as 3.1. Thanks to Sergey for the
report, the bisect and testing the fix.
2026-01-08 09:26:22 +01:00
Hyeonggeun Oh
c17ed69bf3 MINOR: cfgparse: Refactor "userlist" parser to print it in -dKall operation
This patch covers issue https://github.com/haproxy/haproxy/issues/3221.

The parser for the "userlist" section did not use the standard keyword
registration mechanism. Instead, it relied on a series of strcmp()
comparisons to identify keywords such as "group" and "user".

This had two main drawbacks:
1. The keywords were not discoverable by the "-dKall" dump option,
   making it difficult for users to see all available keywords for the
   section.
2. The implementation was inconsistent with the parsers for other
   sections, which have been progressively refactored to use the
   standard cfg_kw_list infrastructure.

This patch refactors the userlist parser to align it with the project's
standard conventions.

The parsing logic for the "group" and "user" keywords has been extracted
from the if/else block in cfg_parse_users() into two new dedicated
functions:
- cfg_parse_users_group()
- cfg_parse_users_user()

These two keywords are now registered via a dedicated cfg_kw_list,
making them visible to the rest of the HAProxy ecosystem, including the
-dKall dump.
2026-01-07 18:25:09 +01:00
William Lallemand
91cff75908 BUG/MINOR: cfgparse: wrong section name upon error
When an unknown keyword was used in the "userlist" section, the error
mentioned the "users" section instead of "userlist".

Could be backported to all branches.
2026-01-07 18:13:12 +01:00
William Lallemand
4aff6d1c25 BUILD: tools: memchr definition changed in C23
New gcc and clang versions from fedora rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/tools.c: In function ‘fgets_from_mem’:
src/tools.c:7200:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 7200 |         new_pos = memchr(*position, '\n', size);
      |                 ^

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
memchr calls.

Should fix issue #3228.

This could be backported in previous versions.
2026-01-07 14:51:26 +01:00
William Lallemand
5322bd3785 BUILD: ssl: strchr definition changed in C23
New gcc and clang versions from fedora rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/ssl_sock.c: In function ‘SSL_CTX_keylog’:
src/ssl_sock.c:4475:17: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 4475 |         lastarg = strrchr(line, ' ');

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
strrchr calls.

Should fix issue #3228.

This could be backported in previous versions.
2026-01-07 14:51:26 +01:00
Willy Tarreau
71b00a945d [RELEASE] Released version 3.4-dev2
Released version 3.4-dev2 with the following main changes :
    - BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
    - BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
    - BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
    - BUG/MEDIUM: quic: Don't try to use hystart if not implemented
    - CLEANUP: backend: Remove useless test on server's xprt
    - CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
    - CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
    - REGTESTS: quic: fix a TLS stack usage
    - REGTESTS: list all skipped tests including 'feature cmd' ones
    - CI: github: remove openssl no-deprecated job
    - CI: github: add a job to test the master branch of OpenSSL
    - CI: github: openssl-master.yml misses actions/checkout
    - BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
    - CI: github: use git prefix for openssl-master.yml
    - BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
    - REGTESTS: fix error when no test are skipped
    - MINOR: cpu-topo: Turn the cpu policy configuration into a struct
    - MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
    - MEDIUM: cpu-topo: Add a "cpu-affinity" option
    - MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
    - MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
    - MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
    - BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
    - DOC: config: fix number of values for "cpu-affinity"
    - MINOR: tools: add a secure implementation of memset
    - MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
    - MINOR: mux-h2: perform a graceful close at 75% glitches threshold
    - MEDIUM: mux-h1: implement basic glitches support
    - MINOR: mux-h1: perform a graceful close at 75% glitches threshold
    - MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
    - MINOR: cfgparse: remove useless checks on no server in backend
    - OPTIM/MINOR: proxy: do not init proxy management task if unused
    - MINOR: patterns: preliminary changes for reorganization
    - MEDIUM: patterns: reorganize pattern reference elements
    - CLEANUP: patterns: remove dead code
    - OPTIM: patterns: cache the current generation
    - MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
    - MINOR: protocol: support a generic way to call getsockopt() on a connection
    - MINOR: tcp: implement the get_opt() function
    - MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
    - CLEANUP: assorted typo fixes in the code, commits and doc
    - BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
    - BUG/MAJOR: set the correct generation ID in pat_ref_append().
    - BUG/MINOR: backend: fix the conn_retries check for TFO
    - BUG/MINOR: backend: inspect request not response buffer to check for TFO
    - MINOR: net_helper: add sample converters to decode ethernet frames
    - MINOR: net_helper: add sample converters to decode IP packet headers
    - MINOR: net_helper: add sample converters to decode TCP headers
    - MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
    - MINOR: net_helper: prepare the ip.fp() converter to support more options
    - MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
    - MINOR: net_helper: add an option to ip.fp() to append the source address
    - DOC: config: fix the length attribute name for stick tables of type binary / string
    - MINOR: mworker/cli: only keep positive PIDs in proc_list
    - CLEANUP: mworker: remove duplicate list.h include
    - BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
    - MINOR: mworker/cli: extract worker "show proc" row printer
    - MINOR: cpu-topo: Factorize code
    - MINOR: cpu-topo: Rename variables to better fit their usage
    - BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
    - BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
    - MINOR: threads: Avoid using a thread group mask when stopping.
    - MINOR: hlua: Add support for lua 5.5
    - MEDIUM: cpu-topo: Add an optional directive for per-group affinity
    - BUG/MEDIUM: mworker: can't use signals after a failed reload
    - BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
    - DOC: config: fix a few typos and refine cpu-affinity
    - MINOR: receiver: Remove tgroup_mask from struct shard_info
    - BUG/MINOR: quic: fix deprecated warning for window size keyword
2026-01-07 11:02:12 +01:00
Amaury Denoyelle
e061547d9d BUG/MINOR: quic: fix deprecated warning for window size keyword
QUIC configuration was cleaned up in the previous release. Several
global keyword names were changed to unify the configuration. For each
of them the older keyword is marked as deprecated, with a warning to
mention the newer alternative.

This patch fixes the warning for 'tune.quic.frontend.default-max-size'
as the alternative proposed was not correct. The proper value now is
'tune.quic.fe.cc.max-win-size'.

This must be backported up to 3.3.
2026-01-07 09:54:31 +01:00
Olivier Houchard
41cd589645 MINOR: receiver: Remove tgroup_mask from struct shard_info
The only purpose of tgroup_mask seems to be to calculate how many
tgroups share the same shard, but this is information we can track
differently: we just have to increment the number when a new receiver
is added to the shard, and decrement it when one is detached from
it. Removing thread group masks will allow us to increase
the maximum number of thread groups past 64.
2026-01-07 09:27:12 +01:00
Willy Tarreau
c3fcdfaf5c DOC: config: fix a few typos and refine cpu-affinity
There were two typos in the recently updated parts about per-group.
Also, change the commas to ':' after the option values, as it could
sometimes be confusing. Last, place quotes around keyword names so that
they're explicitly referred to as language keywords. No backport is
needed.
2026-01-07 09:19:25 +01:00
Christopher Faulet
83457b9e38 BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
The <kip> of the producer was not forwarded to the <kop> of the consumer
when zero-copy data forwarding was tried. Because of this issue, the
chunking of emitted H1 messages could be invalid.

To fix the bug, sc_ep_fwd_kip() must be called at this stage.

This fix is related to the previous one (529a8dbfb "BUG/MEDIUM: mux-h1: Take
care to update <kop> value during zero-copy forwarding"). Both are required
to fully fix the issue #3230.

This patch must be backported to 3.3.
2026-01-06 15:41:50 +01:00
William Lallemand
97490a7789 BUG/MEDIUM: mworker: can't use signals after a failed reload
In issue #3229 it was reported that the master couldn't reload after a
failed reload following a wrong configuration.

It is still possible to do a reload using the "reload" command of the
master CLI, but all signals are blocked.

The problem was introduced in 709cde6d0 ("BUG/MEDIUM: mworker: signals
inconsistencies during startup and reload") which fixes the blocking of
signals during the reload.

However, the patch missed a case: run_master_in_recovery_mode() is not
called when the worker fails to parse the configuration, only when the
master itself fails.

To handle this case, the mworker_unblock_signals() function must be
called upon mworker_on_new_child_failure(). But since this is called in
a haproxy signal handler, it would mess with the signals.

Instead, the patch adds a task which is started by the signal handler,
and restores the signals outside of it.

This must be backported as far as 3.1.
2026-01-06 14:27:53 +01:00
Olivier Houchard
56fd0c1a5c MEDIUM: cpu-topo: Add an optional directive for per-group affinity
When using per-group affinity, add an optional new directive. It accepts
two values: "auto", the new default, where the available CPUs are split
equally across the groups when multiple thread groups are created, and
"loose", the old default, where all groups are bound to all available
CPUs.
2026-01-06 11:32:45 +01:00
Mike Lothian
1c0f781994 MINOR: hlua: Add support for lua 5.5
Lua 5.5 adds an extra argument to lua_newstate(). Since there are
already a few other ifdefs in hlua.c checking for the Lua version,
and there's a single call site, let's do the same here. This should
be safe for backporting if needed.

Signed-off-by: Mike Lothian <mike@fireburn.co.uk>
2026-01-06 11:05:02 +01:00
Olivier Houchard
853604f87a MINOR: threads: Avoid using a thread group mask when stopping.
Remove the "stopped_tgroup_mask" variable, that indicated which thread
groups were stopping, and instead just use "stopped_tgroups", a counter
indicating how many thread groups are stopping. We want to remove all
thread group masks, so that we can increase the maximum number of thread
groups past 64.
2026-01-06 08:30:55 +01:00
Christopher Faulet
529a8dbfba BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
Since the extra field was removed from the HTX structure, a regression was
introduced in the forwarding of chunked messages. The <kop> value was not
decreased as it should be when data were sent via the zero-copy
forwarding. Because of this bug, it was possible to announce a chunk size
larger than the chunk data sent.

To fix the bug, a helper function was added to properly update the <kop>
value when a chunk size is emitted. This function is now called when a new
chunk is announced, including during zero-copy forwarding.

As a workaround, "tune.disable-zero-copy-forwarding" or just
"tune.h1.zero-copy-fwd-send off" can be set in the global section.

This patch should fix the issue #3230. It must be backported to 3.3.
2026-01-06 07:39:05 +01:00
Christopher Faulet
0b29b76a52 BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
When a shutdown was reported to a peer applet, the event was not properly
handled if it failed to receive data. The function responsible for getting
data was exiting too early if the applet buffer was empty, without testing
the sedesc status. Because of this issue, it was possible to have frozen
peer applets. For instance, it happened on client timeout. With too many frozen
applets, it was possible to reach the maxconn.

This patch should fix the issue #3234. It must be backported to 3.3.
2026-01-05 13:46:57 +01:00
Olivier Houchard
196d16f2b1 MINOR: cpu-topo: Rename variables to better fit their usage
Rename "visited_tsid" and "visited_ccx" to "touse_tsid" and
"touse_ccx". They are not there to remember which tsid/ccx we
alreaday visited, contrarily to visited_ccx_set and
visited_cl_set, they are there to know which tsid/ccx we should
use, so make that clear.
2026-01-05 09:25:48 +01:00
Olivier Houchard
bbf5c30a87 MINOR: cpu-topo: Factorize code
Factorize the code common to cpu_policy_group_by_ccx() and
cpu_policy_group_by_cluster() into a new function,
cpu_policy_assign_threads().
2026-01-05 09:24:44 +01:00
Alexander Stephan
e241144e70 MINOR: mworker/cli: extract worker "show proc" row printer
Introduce cli_append_worker_row() to centralize formatting of a single
worker row. Also, replace the duplicated row-printing code in both the
current and old workers loops with the helper. Motivation: this reduces
LOC and improves readability by removing duplication.
2026-01-05 08:59:45 +01:00
Alexander Stephan
4c10d9c70c BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
After commit 594408cd612b5 ("BUG/MINOR: mworker/cli: 'show proc' is limited
by buffer size"), related to ticket #3204, the "show proc" logic
has been fixed to be able to print more than 202 processes. However, this
fix can lead to the omission of entries in case they have the same
timestamp.

To fix this, we use the unique reload counter instead of the timestamp.
On partial flush, set ctx->next_reload = child->reloads.
On resume, skip entries with child->reloads >= ctx->next_reload.
Finally, we clear ctx->next_reload at the end of a complete dump so
subsequent show proc starts from the top.

Could be backported in all stable branches.
2026-01-05 08:59:34 +01:00
Alexander Stephan
a5f274de92 CLEANUP: mworker: remove duplicate list.h include
Drop the second #include <haproxy/list.h> from mworker.c.
No functional change; reduces redundancy and keeps includes tidy.
2026-01-05 08:59:34 +01:00
Alexander Stephan
c30eeb2967 MINOR: mworker/cli: only keep positive PIDs in proc_list
Change mworker_env_to_proc_list() to check if (child->pid > 0) before
LIST_APPEND, avoiding invalid PIDs (0/-1) in the process list.
This has no functional impact beyond stricter validation, and it aligns
with existing kill safeguards.
2026-01-05 08:59:14 +01:00
Willy Tarreau
6970c8b8b6 DOC: config: fix the length attribute name for stick tables of type binary / string
The stick-table doc was reworked and moved in 3.2 with commit da67a89f3
("DOC: config: move stick-tables and peers to their own section"), however
the optional length attribute for binary/string types was mistakenly
spelled "length" while it's "len".

This must be backported to 3.2.
2026-01-01 10:52:50 +01:00
Willy Tarreau
a206f85f96 MINOR: net_helper: add an option to ip.fp() to append the source address
The new value 4 will permit appending the source address to the
fingerprint, making it easier to build rules checking a specific path.
2026-01-01 10:32:16 +01:00
Willy Tarreau
70ffae3614 MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
With mode value 1, the TTL will be appended immediately after the 7 bytes,
making it an 8-byte fingerprint.
2026-01-01 10:19:48 +01:00
Willy Tarreau
2c317cfed7 MINOR: net_helper: prepare the ip.fp() converter to support more options
It can make sense to support extra components in the fingerprint to ease
configuration, so let's change the 0/1 value to a bit field. We also turn
the current 1 (TCP options list) to 2 so that we'll reuse 1 for the TTL.
2026-01-01 10:19:20 +01:00
Willy Tarreau
e88e03a6e4 MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
Here we collect all the stuff that depends on the sender's settings,
such as TOS, IP version, TTL range, presence of DF bit or IP options,
presence of DATA in the SYN, CWR+ECE flags, TCP header length, wscale,
initial window, mss, as well as the list of TCP extension kinds. It's
obviously fairly limited but can help avoid blacklisting certain
valid clients sharing the same IP address as a misbehaving one.

It supports both a short and a long mode depending on the argument.
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
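A minimal usage sketch is shown below; it assumes the saved SYN starts at
the IP header and uses the mode bits described in the follow-up commits
above (1 = TTL, 2 = TCP options list, 4 = source address), so the exact
argument value, variable and header names are illustrative only:

  frontend fe_main
      bind :8443 tcp-ss 1
      tcp-request connection set-var(sess.syn) fc_saved_syn
      # expose a hex-encoded fingerprint built from the saved SYN
      http-request set-header X-SYN-FP %[var(sess.syn),ip.fp(3),hex]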
2025-12-31 17:17:38 +01:00
Willy Tarreau
6e46d1345b MINOR: net_helper: add sample converters to decode TCP headers
This adds the following converters, used to decode fields
in an incoming tcp header:

   tcp.dst, tcp.flags, tcp.seq, tcp.src, tcp.win,
   tcp.options.mss, tcp.options.tsopt, tcp.options.tsval,
   tcp.options.wscale, tcp.options_list,

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:17:23 +01:00
Willy Tarreau
e0a7a7ca43 MINOR: net_helper: add sample converters to decode IP packet headers
This adds a few converters that help decode parts of IP packets:
  - ip.data : returns the next header (typically TCP)
  - ip.df   : returns the dont-fragment flags
  - ip.dst  : returns the destination IPv4/v6 address
  - ip.hdr  : returns only the IP header
  - ip.proto: returns the upper level protocol (udp/tcp)
  - ip.src  : returns the source IPv4/v6 address
  - ip.tos  : returns the TOS / TC field
  - ip.ttl  : returns the TTL/HL value
  - ip.ver  : returns the IP version (4 or 6)

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:16:29 +01:00
Willy Tarreau
90d2f157f2 MINOR: net_helper: add sample converters to decode ethernet frames
This adds a few converters that help decode parts of ethernet frame
headers:
  - eth.data : returns the next header (typically IP)
  - eth.dst  : returns the destination MAC address
  - eth.hdr  : returns only the ethernet header
  - eth.proto: returns the ethernet proto
  - eth.src  : returns the source MAC address
  - eth.vlan : returns the VLAN ID when present

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:15:36 +01:00
Willy Tarreau
933cb76461 BUG/MINOR: backend: inspect request not response buffer to check for TFO
In 2.6, do_connect_server() was introduced by commit 0a4dcb65f ("MINOR:
stream-int/backend: Move si_connect() in the backend scope") and changed
the approach to work with a stream instead of a stream-interface. However
si_oc(si) was wrongly turned to &s->res instead of &s->req, which breaks
TFO by always inspecting the response channel to figure whether there are
data pending.

This fix can be backported to all versions till 2.6.
2025-12-31 13:03:53 +01:00
Willy Tarreau
799653d536 BUG/MINOR: backend: fix the conn_retries check for TFO
In 2.6, the retries counter on a stream was changed from retries left
to retries done via commit 731c8e6cf ("MINOR: stream: Simplify retries
counter calculation"). However, one comparison fell through the cracks
in order to detect whether or not we can use TFO (only first attempt),
resulting in TFO never working anymore.

This may be backported to all versions till 2.6.
2025-12-31 13:03:53 +01:00
Maxime Henrion
51592f7a09 BUG/MAJOR: set the correct generation ID in pat_ref_append().
This fixes crashes when creating more than one new revision of a map or
acl file and purging the previous version.
2025-12-31 00:29:47 +01:00
Olivier Houchard
54f59e4669 BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
We want to reset visited_ccx, as introduced by commit
8aef5bec1ef57eac449298823843d6cc08545745, each time we run the loop;
otherwise the chances of its content being correct are very low, and
threads will likely end up being bound to the wrong CPUs.
This was reported in github issue #3224.
2025-12-26 23:55:57 +01:00
Ilia Shipitsin
f8a77ecf62 CLEANUP: assorted typo fixes in the code, commits and doc 2025-12-25 19:45:29 +01:00
Willy Tarreau
6fb521d2f6 MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
This function retrieves the copy of a SYN packet that the system has
kept for us when bind option "tcp-ss" was set to 1 or above. It's
recommended to copy it to a local variable because it will be freed
after being read. It allows inspecting all parts of an incoming SYN
packet, provided that it was preserved (e.g. not possible with SYN
cookies). The doc provides examples of how to use it.
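A minimal sketch of the copy-to-variable pattern recommended here
(variable and header names are illustrative):

  tcp-request connection set-var(sess.syn) fc_saved_syn
  http-request set-header X-Saved-SYN %[var(sess.syn),hex]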
2025-12-24 18:39:37 +01:00
Willy Tarreau
52d60bf9ee MINOR: tcp: implement the get_opt() function
It relies on the generic sock_conn_get_opt() function and will permit
sample fetch functions to retrieve generic TCP-level info.
2025-12-24 18:38:51 +01:00
Willy Tarreau
6d995e59e9 MINOR: protocol: support a generic way to call getsockopt() on a connection
It's regularly needed to call getsockopt() on a connection, but each
time the calling code has to do all the work by itself. This commit adds
a "get_opt()" callback on the protocol struct, that directly calls
getsockopt() on the connection's FD. A generic implementation for
standard sockets is provided, though QUIC would likely require a
different approach, or maybe a mapping. Due to the overlap between
IP/TCP/socket option values, it is necessary for the caller to indicate
both the level and the option. An abstraction of the level could be
done, but the caller would nonetheless have to know the optname, which
is generally defined in the same include files. So for now we'll
consider that this callback is only for very specific use.

The levels and optnames are purposely passed as signed ints so that it
is possible to further extend the API by using negative levels for
internal namespaces.
2025-12-24 18:38:51 +01:00
Willy Tarreau
44c67a08dd MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
This option enables TCP_SAVE_SYN on the listening socket, which will
cause the kernel to try to save a copy of the SYN packet header (L2,
IP and TCP are supported). This permits checking the source MAC
address of a client, or finding certain TCP options such as a source
address encapsulated using RFC7974. It could also be used as an
alternate approach to retrieving the source and destination addresses
and ports. For now only setting the option is implemented; sample fetch
functions and converters will be needed to extract info.
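Enabling it is a simple bind-line option; per the fc_saved_syn commit
above, a value of 1 or above asks the kernel to keep the SYN:

  frontend fe_main
      bind :8443 tcp-ss 1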
2025-12-24 11:35:09 +01:00
Maxime Henrion
1fdccbe8da OPTIM: patterns: cache the current generation
This makes a significant difference when loading large files and during
commit and clear operations, thanks to improved cache locality. In the
measurements below, master refers to the code before any of the changes
to the patterns code, not the code before this one commit.

Timing the replacement of 10M entries from the CLI with this command
which also reports timestamps at start, end of upload and end of clear:

  $ (echo "prompt i"; echo "show activity"; echo "prepare acl #0";
     awk '{print "add acl @1 #0",$0}' < bad-ip.map; echo "show activity";
     echo "commit acl @1 #0"; echo "clear acl @0 #0";echo "show activity") |
    socat -t 10 - /tmp/sock1 | grep ^uptim

master, on a 3.7 GHz EPYC, 3 samples:

  uptime_now: 6.087030
  uptime_now: 25.981777  => 21.9 sec insertion time
  uptime_now: 29.286368  => 3.3 sec commit+clear

  uptime_now: 5.748087
  uptime_now: 25.740675  => 20.0s insertion time
  uptime_now: 29.039023  => 3.3 s commit+clear

  uptime_now: 7.065362
  uptime_now: 26.769596  => 19.7s insertion time
  uptime_now: 30.065044  => 3.3s commit+clear

And after this commit:

  uptime_now: 6.119215
  uptime_now: 25.023019  => 18.9 sec insertion time
  uptime_now: 27.155503  => 2.1 sec commit+clear

  uptime_now: 5.675931
  uptime_now: 24.551035  => 18.9s insertion
  uptime_now: 26.652352  => 2.1s commit+clear

  uptime_now: 6.722256
  uptime_now: 25.593952  => 18.9s insertion
  uptime_now: 27.724153  => 2.1s commit+clear

Now timing the startup time with a 10M entries file (on another machine)
on master, 20 samples:

Standard Deviation, s: 0.061652677408033
Mean:        4.217

And after this commit:

Standard Deviation, s: 0.081821371548669
Mean:        3.78
2025-12-23 21:17:39 +01:00
Maxime Henrion
99e625a41d CLEANUP: patterns: remove dead code
Situations where we are iterating over elements and find one with a
different generation ID cannot arise anymore since the elements are kept
per-generation.
2025-12-23 21:17:39 +01:00
Maxime Henrion
545cf59b6f MEDIUM: patterns: reorganize pattern reference elements
Instead of a global list (and tree) of pattern reference elements, we
now have an intermediate pat_ref_gen structure and store the elements in
those. This simplifies the logic of some operations such as commit and
clear, and improves performance in some cases - numbers to be provided
in a subsequent commit after one important optimization is added.

A lot of the changes are due to adding an extra level of indirection,
changing many cases where we iterate over all elements to an outer loop
iterating over the generation and an inner one iterating over the
elements of the current generation. It is therefore easier to read this
patch using 'git diff -w'.
2025-12-23 21:17:39 +01:00
Maxime Henrion
5547bedebb MINOR: patterns: preliminary changes for reorganization
Safe and non-functional changes that only add currently unused
structures, field, functions and macros, in preparation of larger
changes that alter the way pattern reference elements are stored.

This includes code to create and lookup generation objects, and
macros to iterate over the generations of a pattern reference.
2025-12-23 21:17:39 +01:00
Amaury Denoyelle
a4a17eb366 OPTIM/MINOR: proxy: do not init proxy management task if unused
Each proxy has its own task for internal purposes. Currently, it is
only used either by frontends or if a stick-table is present.

This commit makes the task allocation optional, limited to the cases
that require it. Thus, it is no longer allocated for backend-only proxies
without a stick-table.
2025-12-23 16:35:49 +01:00
Amaury Denoyelle
c397f6fc9a MINOR: cfgparse: remove useless checks on no server in backend
A legacy check could be activated at compile time to reject backends
without servers. In practice this is not used anymore and does not make
much sense with the introduction of dynamic servers.
2025-12-23 16:35:49 +01:00
Amaury Denoyelle
b562602044 MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
Each frontend/backend/listen proxy is assigned a unique ID. It can
either be set explicitly via the 'id' keyword, or automatically assigned
during post-parsing depending on the available values.

It was expected that the first automatically assigned value would start
at '1'. However, due to a legacy bug this is not the case as this value
is always skipped. Thus, automatically assigned proxies always start at
'2' or more.

To avoid breaking the current existing state, this situation is now
acknowledged with the current patch. The code is rewritten with an
explicit warning to ensure that this won't be fixed without knowing the
current status. A new regtest also ensures this.
2025-12-23 16:35:49 +01:00
Willy Tarreau
5904f8279b MINOR: mux-h1: perform a graceful close at 75% glitches threshold
This avoids hitting the hard wall for connections with non-compliant
peers that are accumulating errors. We recycle the connection early
enough to reset the counter. Example below with a threshold
set to 100:

Before, 1% errors:
  $ h1load -H "Host : blah" -c 1 -n 10000000 0:4445
  #     time conns tot_conn  tot_req      tot_bytes    err  cps  rps  bps   ttfb
           1     1     1039   103872        6763365   1038 1k03 103k 54M1 9.426u
           2     1     2128   212793       14086140   2127 1k08 108k 58M5 8.963u
           3     1     3215   321465       21392137   3214 1k08 108k 58M3 8.982u
           4     1     4307   430684       28735013   4306 1k09 109k 58M6 8.935u
           5     1     5390   538989       36016294   5389 1k08 108k 58M1 9.021u

After, no more errors:
  $ h1load -H "Host : blah" -c 1 -n 10000000 0:4445
  #     time conns tot_conn  tot_req      tot_bytes    err  cps  rps  bps   ttfb
           1     1     1509   113161        7487809      0 1k50 113k 59M9 8.482u
           2     1     3002   225101       15114659      0 1k49 111k 60M9 8.582u
           3     1     4508   338045       22809911      0 1k50 112k 61M5 8.523u
           4     1     5971   447785       30286861      0 1k46 109k 59M7 8.772u
           5     1     7472   560335       37955271      0 1k49 112k 61M2 8.537u
2025-12-20 19:29:37 +01:00
Willy Tarreau
05b457002b MEDIUM: mux-h1: implement basic glitches support
We now count glitches for each parsing error, including those that
have been accepted via accept-unsafe-violations-*. Both the frontend and
backend sides are considered, and the connection gets killed on error once
the threshold is reached or passed and the CPU usage is beyond the
configured limit (0 by default). This was tested with:

   curl -ivH "host : blah" 0:4445{,,,,,,,,,}

which sends 10 requests to a configuration having a threshold of 5.
The global keywords are named similarly to H2 and QUIC:

     tune.h1.be.glitches-threshold xxxx
     tune.h1.fe.glitches-threshold xxxx

The glitch count of each connection is also reported when non-null
in the connection dumps (e.g. "show fd").
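For example, limiting each frontend connection to 100 glitches:

  global
      tune.h1.fe.glitches-threshold 100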
2025-12-20 19:29:33 +01:00
Willy Tarreau
0901f60cef MINOR: mux-h2: perform a graceful close at 75% glitches threshold
This avoids hitting the hard wall for connections with non-compliant
peers that would be accumulating errors over long connections. We now
permit recycling the connection early enough to reset the counter.

This was tested artificially by adding this to h2c_frt_handle_headers():

  h2c_report_glitch(h2c, 1, "new stream");

or this to h2_detach():

  h2c_report_glitch(h2c, 1, "detaching");

and injecting using h2load -c 1 -n 1000 0:4445 on a config featuring
tune.h2.fe.glitches-threshold 1000:

  finished in 8.74ms, 85802.54 req/s, 686.62MB/s
  requests: 1000 total, 751 started, 751 done, 750 succeeded, 250 failed, 250 errored, 0 timeout
  status codes: 750 2xx, 0 3xx, 0 4xx, 0 5xx
  traffic: 6.00MB (6293303) total, 132.57KB (135750) headers (space savings 29.84%), 5.86MB (6144000) data
                       min         max         mean         sd        +/- sd
  time for request:        9us       178us        10us         6us    99.47%
  time for connect:      139us       139us       139us         0us   100.00%
  time to 1st byte:      339us       339us       339us         0us   100.00%
  req/s           :   87477.70    87477.70    87477.70        0.00   100.00%

The failures are due to h2load not supporting reconnection.
2025-12-20 19:26:29 +01:00
Willy Tarreau
52adeef7e1 MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
One rare error case, where failing to decode response headers produces a
protocol error on the stream, wasn't being accounted as a glitch, so
let's fix it.
2025-12-20 19:11:16 +01:00
Maxime Henrion
c8750e4e9d MINOR: tools: add a secure implementation of memset
This guarantees that the compiler will not optimize away the memset()
call if it detects a dead store.

Use this to clear SSL passphrases.

No backport needed.
2025-12-19 17:42:57 +01:00
Willy Tarreau
bd92f34f02 DOC: config: fix number of values for "cpu-affinity"
It said "accepts 2 values" then goes on enumerating 5 since more were
added one at a time. Let's fix it by removing the number. No backport
is needed.
2025-12-19 11:21:09 +01:00
William Lallemand
03340748de BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
src/cpu_topo.c:1325:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^                      ~
src/cpu_topo.c:1325:15: note: add parentheses after the '!' to evaluate the bitwise operator first
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                     (                                                     )
src/cpu_topo.c:1325:15: note: add parentheses around left hand side expression to silence this warning
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                    (                     )
src/cpu_topo.c:1533:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^                      ~
src/cpu_topo.c:1533:15: note: add parentheses after the '!' to evaluate the bitwise operator first
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                     (                                                     )
src/cpu_topo.c:1533:15: note: add parentheses around left hand side expression to silence this warning
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                    (                     )

No backport needed.
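
Presumably the intent of both tests is to check that the flag is not set, so
the fix most likely takes the first form suggested by clang:

  } else if (!(cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE))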
2025-12-19 10:15:17 +01:00
Olivier Houchard
8aef5bec1e MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
Add a new cpu-affinity keyword, "per-ccx".
If used, each thread will be bound to all the hardware threads available
in one CCX of the thread group.
2025-12-18 18:52:52 +01:00
Olivier Houchard
c524b181a2 MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
Add a new cpu-affinity keyword, "per-thread".
If used, each thread will be bound to only one hardware thread of the
thread group.
If used in conjunction with the "threads-per-core 1" cpu-policy, then
each thread will be bound to a different core.
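
A minimal global-section sketch combining the keywords from this series
(illustrative only; refer to the documentation for the exact semantics):

  global
      cpu-policy performance threads-per-core 1
      cpu-affinity per-thread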
2025-12-18 18:52:52 +01:00
Olivier Houchard
7e22d9c484 MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
Add a new global keyword, max-threads-per-group. It sets the maximum number of
threads a thread group can contain. Unless the number of thread groups
is fixed with "thread-groups", haproxy will just create more thread
groups as needed.
The default and maximum value is 64.
2025-12-18 18:52:52 +01:00
Olivier Houchard
3865f6c5c6 MEDIUM: cpu-topo: Add a "cpu-affinity" option
Add a new global option, "cpu-affinity", which controls how threads are
bound.
It currently accepts three values: "per-core", which will bind one thread
to each hardware thread of a given core; "per-group", which will use all
the available hardware threads of the thread group; and "auto", the
default, which will use "per-group" unless "threads-per-core 1" has been
specified in cpu-policy, in which case it will use "per-core".
2025-12-18 18:52:52 +01:00
Olivier Houchard
3671652bc9 MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
Add a new, optional keyword to "cpu-policy", "threads-per-core".
It takes one argument, "1" or "auto". If "1" is used, then only one
thread per core will be created, no matter how many hardware threads each
core has. If "auto" is used, then one thread will be created per
hardware thread, as is the case by default.

for example: cpu-policy performance threads-per-core 1
2025-12-18 18:52:52 +01:00
Olivier Houchard
58f04b4615 MINOR: cpu-topo: Turn the cpu policy configuration into a struct
Turn the cpu policy configuration into a struct. Right now it just
contains an int, that represents the policy used, but will get more
information soon.
2025-12-18 18:52:52 +01:00
William Lallemand
876b1e8477 REGTESTS: fix error when no test are skipped
Since commit 1ed2c9d ("REGTESTS: list all skipped tests including
'feature cmd' ones"), the script emits some error when trying to display
the list of skipped tests when there are none.

No backport needed.
2025-12-18 17:26:50 +01:00
Willy Tarreau
9a046fc3ad BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
In H2 the conditions to create a new stream differ for a client and a
server when a GOAWAY was exchanged. While on the server, any stream
whose ID is lower than or equal to the one advertised in GOAWAY is
valid, for a client it's forbidden to create any stream after receipt
of a GOAWAY, even if its ID is lower than or equal to the last one,
despite the server not being able to tell the difference from the
number of streams in flight.

Unfortunately, the logic in the code did not always reflect this
specificity of the client (the backend code in our case), and most
often considered that it was still permitted to create a new stream
until the max_id was greater than or equal to the advertised last_id.
This is for example what h2c_is_dead() and h2c_streams_left() do. In
other places, such as h2_avail_streams(), the rule is properly taken
into account. Very often the advertised last_id is the same, and this
is also what haproxy does (which explains why it's impossible to
reproduce the issue by chaining two haproxy layers), but a server may
wish to advertise any ID including 2^31-1 as mentioned in the spec,
and in this case the functions would behave differently.

This discrepancy results in a corner case where a GOAWAY received on
an idle connection will cause the next stream creation to be initially
accepted but then rejected via h2_avail_streams(), and the connection
left in a bad state, still attached to the session due to http-reuse
safe, but not reinserted into idle list, since the backend code
currently is not able to properly recover from this situation. Worse,
the idle flags are no longer on it but TASK_F_USR1 still is, and this
makes the recently added BUG_ON() rightfully trigger since this case
is not supposed to happen.

Admittedly more of the backend recovery code needs to be reworked,
however the mux must consistently decide whether or not a connection
may be reused or needs to be released.

This commit fixes the affected logic by introducing a new function
"h2c_reached_last_stream()" which says if a connection has reached its
last stream, regardless of the side, and using this one everywhere
max_id was compared to last_id. This is sufficient to address the
corner case that be_reuse_connection() currently cannot recover from.
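
A rough sketch of what such a helper can look like, based only on the
description above (the flag name and the sentinel test are assumptions, the
real code may differ):

  /* returns non-zero once no new stream may be created on this connection */
  static inline int h2c_reached_last_stream(const struct h2c *h2c)
  {
      if (h2c->last_id < 0)
          return 0;                        /* no GOAWAY processed yet (assumed sentinel) */
      if (h2c->flags & H2_CF_IS_BACK)      /* backend: haproxy acts as the client */
          return 1;                        /* no new stream after a GOAWAY */
      return h2c->max_id >= h2c->last_id;  /* frontend: last advertised ID reached */
  }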

This is in relation to GH issue #3215 and it should be sufficient to
fix the issue there. Thanks to Chris Staite for reporting the issue
and kudos to Amaury for spotting the events sequence that can lead
to this situation.

This patch must be backported to 3.3 first, then to older versions
later. It's worth noting that it's much more difficult to observe
the issue before 3.3 because the BUG_ON() is not there, and the
possibly non-released connection might end up being killed for other
reasons (timeouts etc). But one possible visible effect might be the
impossibility to delete a server (which Chris observed in 3.3).
2025-12-18 17:01:32 +01:00
William Lallemand
9c8925ba0d CI: github: use git prefix for openssl-master.yml
Uses the git- prefix in order to get the latest tarball for the master
branch on github.
2025-12-18 16:13:04 +01:00
Olivier Houchard
40d16af7a6 BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
Back in the mists of time, commit e91a526c8f decided that if we were trying
to stay on the same server as the previous request, and if there was
a connection available in the session, we'd remove its CO_FL_SESS_IDLE flag.
The reason for doing that has long been lost; it probably fixed a bug at some
point, but it was most probably not the right place to do it. And starting
with 3.3, this triggers a BUG_ON() because that flag is expected later on.
So just revert the commit; if the ancient bug shows up again, it will be
fixed another way.

This should be backported to 3.3. There is little reason to backport it
to previous versions, unless other patches depend on it.
2025-12-18 16:09:34 +01:00
William Lallemand
0c7a4469d2 CI: github: openssl-master.yml misses actions/checkout
The job can't run setup-vtest because the actions/checkout use line is
missing.
2025-12-18 16:03:20 +01:00
William Lallemand
38d3c24931 CI: github: add a job to test the master branch of OpenSSL
vtest.yml only builds the releases of OpenSSL for now, so there's no way
to check whether we still have issues with the API before a pre-release
version is released.

This job builds the master branch of OpenSSL.

It is run every day at 3 AM.
2025-12-18 15:43:06 +01:00
William Lallemand
a58f09b63c CI: github: remove openssl no-deprecated job
Remove the openssl no-deprecated job which was used for the 1.1.0 API.
It's not useful anymore since it uses the OpenSSL version of the
distributions.

Checking deprecations in the API is still useful when using the newest
version of the library. A job for the OpenSSL master branch would be
more useful than that.
2025-12-18 15:22:27 +01:00
William Lallemand
1ed2c9da2c REGTESTS: list all skipped tests including 'feature cmd' ones
The script for running regression tests is modified to improve the
visibility of skipped tests.

Previously, the reasons for skipping tests were only visible during the
test discovery phase when grepping the vtc (REQUIRE, EXCLUDE, etc).
But reg-tests skipped by vtest with the 'feature cmd' keywords were not
listed.

This change introduces the following:
  - vtest does not remove the logs itself anymore, because it is not
    able to keep the log available when a test is skipped. So the -L
    parameter is now always passed to vtest
  - All skipped tests during the discovery phase are now logged to a
    'skipped.log' file within the test directory
  - The script now parses vtest logs to find tests that were skipped
    due to missing features (via the 'feature cmd' in .vtc files)
    and adds them to the skipped list.
2025-12-17 15:54:15 +01:00
Frederic Lecaille
8523a5cde0 REGTESTS: quic: fix a TLS stack usage
This issue was reported in GH #3214 where the quic/tls13_ssl_crt-list_filters.vtc
QUIC reg test was run without haproxy QUIC support because the OPENSSL_AWSLC
feature was enabled.

This is due to the fact that when ssl/tls13_ssl_crt-list_filters.vtc was
ported to QUIC, feature(OPENSSL) was naively replaced by feature(QUIC), leading
the script to be run even without QUIC support if the OR'ed OPENSSL_AWSLC
feature is enabled.

A good method to port these feature() commands to QUIC would have been
to add a feature(QUIC) command separated from the one used for the supported
TLS stacks identified by the original underlying ssl reg tests (in reg-tests/ssl).
This is what is done by this patch.

Thank you to @idl0r for having reported this issue.
2025-12-15 09:44:42 +01:00
Christopher Faulet
a25394b6c8 CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
In ssl_sock_srv_try_reuse_sess(), the connection is always defined, for both
TCP and QUIC connections, so there is no reason to test it. Because it is not
so obvious for the QUIC part, a BUG_ON() could be added here. For now, just
remove the useless tests.

This patch should fix a Coverity report from #3213.
2025-12-15 08:16:59 +01:00
Christopher Faulet
d6b1d5f6e9 CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
The xprt used to perform a healthcheck is always defined and cannot be NULL,
so there is no reason to test it. Testing it could lead to wrong assumptions
later in the code.

This patch should fix a Coverity report from #3213.
2025-12-15 08:01:21 +01:00
Christopher Faulet
5c5914c32e CLEANUP: backend: Remove useless test on server's xprt
The server's xprt is always defined and cannot be NULL, so there is no
reason to test it. Testing it could lead to wrong assumptions later in the code.

This patch should fix a Coverity report from #3213.
2025-12-15 07:56:53 +01:00
Olivier Houchard
a08bc468d2 BUG/MEDIUM: quic: Don't try to use hystart if not implemented
Not every CC algo implements hystart, so only call the method if it is
actually there. Failure to do so will cause crashes if hystart is on
and the algo doesn't implement it.
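
The shape of the fix is essentially a NULL check on the optional callback
(callback and variable names below are assumptions for illustration):

  /* only call the hystart hook when the CC algorithm provides one */
  if (cc->algo->hystart_start_round)
      cc->algo->hystart_start_round(cc, pkt_num);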

This should fix github issue #3218

This should be backported up to 3.0.
2025-12-14 16:46:12 +01:00
Christopher Faulet
54e58103e5 BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
The SC_FL_ABRT_DONE flag should never be set when SC_FL_EOS was already
set. Both flags were introduced to replace the old CF_SHUTR, with one flag
for shutdowns driven by the stream and one for the read0 received by the
mux. So both flags must not be seen at the same time on an SC. This is
especially important because some processing is performed when these flags
are set, and wrong decisions may be made.
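
The guard can be illustrated as follows (a sketch only, the surrounding code
is assumed):

  /* don't report an abort from the stream if the mux already reported a
   * read0 (EOS) on this stream-connector */
  if (!(sc->flags & SC_FL_EOS))
      sc->flags |= SC_FL_ABRT_DONE;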

This patch must be backported as far as 2.8.
2025-12-12 08:41:08 +01:00
Christopher Faulet
a483450fa2 BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
The first attempt to fix this issue (c672b2a29 "BUG/MINOR: http-ana:
Properly detect client abort when forwarding the response") was not fully
correct and could be responsible for false reports of client aborts during
the response forwarding. It could probably even truncate the response.

Instead, we must also take care that the client really closed on its side, by
checking the SC_FL_EOS flag on the front SC. Indeed, if the client has aborted,
this flag should be set.

This patch should be backported as far as 2.8.
2025-12-12 08:41:08 +01:00
William Lallemand
5b19d95850 BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
The RX_F_INHERITED flag was ambiguous, as it was used to mark both
listeners inherited from the parent process and listeners duplicated
from another local receiver. This could lead to incorrect behavior
concerning socket unbinding and suspension.

This commit refactors the handling of inherited listeners by splitting
the RX_F_INHERITED flag into two more specific flags:

- RX_F_INHERITED_FD: Indicates a listener inherited from the parent
  process via its file descriptor. These listeners should not be unbound
  by the master.

- RX_F_INHERITED_SOCK: Indicates a listener that shares a socket with
  another one, either by being inherited from the parent or by being
  duplicated from another local listener. These listeners should not be
  suspended or resumed individually.

Previously, the sharding code was unconditionally using RX_F_INHERITED
when duplicating a file descriptor. In HAProxy versions prior to 3.1,
this led to a file descriptor leak for duplicated unix stats sockets in
the master process. This would eventually cause the master to crash with
a BUG_ON in fd_insert() once the file descriptor limit was reached.

This must be backported as far as 3.0. Branches earlier than 3.0 are
affected but would need a different patch as the logic is different.
2025-12-11 18:09:47 +01:00
Willy Tarreau
aed953088e [RELEASE] Released version 3.4-dev1
Released version 3.4-dev1 with the following main changes :
    - BUG/MINOR: jwt: Missing "case" in switch statement
    - DOC: configuration: ECH support details
    - Revert "MINOR: quic: use dynamic cc_algo on bind_conf"
    - MINOR: quic: define quic_cc_algo as const
    - MINOR: quic: extract cc-algo parsing in a dedicated function
    - MINOR: quic: implement cc-algo server keyword
    - BUG/MINOR: quic-be: Missing keywords array NULL termination
    - REGTESTS: ssl enable tls12_reuse.vtc for AWS-LC
    - REGTESTS: ssl: split tls*_reuse in stateless and stateful resume tests
    - BUG/MEDIUM: connection: fix "bc_settings_streams_limit" typo
    - BUG/MEDIUM: config: ignore empty args in skipped blocks
    - DOC: config: mention clearer that the cache's total-max-size is mandatory
    - DOC: config: reorder the cache section's keywords
    - BUG/MINOR: quic/ssl: crash in ClientHello callback ssl traces
    - BUG/MINOR: quic-be: handshake errors without connection stream closure
    - MINOR: quic: Add useful debugging traces in qc_idle_timer_do_rearm()
    - REGTESTS: ssl: Move all the SSL certificates, keys, crt-lists inside "certs" directory
    - REGTESTS: quic/ssl: ssl/del_ssl_crt-list.vtc supported by QUIC
    - REGTESTS: quic: dynamic_server_ssl.vtc supported by QUIC
    - REGTESTS: quic: issuers_chain_path.vtc supported by QUIC
    - REGTESTS: quic: new_del_ssl_cafile.vtc supported by QUIC
    - REGTESTS: quic: ocsp_auto_update.vtc supported by QUIC
    - REGTESTS: quic: set_ssl_bug_2265.vtc supported by QUIC
    - MINOR: quic: avoid code duplication in TLS alert callback
    - BUG/MINOR: quic-be: missing connection stream closure upon TLS alert to send
    - REGTESTS: quic: set_ssl_cafile.vtc supported by QUIC
    - REGTESTS: quic: set_ssl_cert_noext.vtc supported by QUIC
    - REGTESTS: quic: set_ssl_cert.vtc supported by QUIC
    - REGTESTS: quic: set_ssl_crlfile.vtc supported by QUIC
    - REGTESTS: quic: set_ssl_server_cert.vtc supported by QUIC
    - REGTESTS: quic: show_ssl_ocspresponse.vtc supported by QUIC
    - REGTESTS: quic: ssl_client_auth.vtc supported by QUIC
    - REGTESTS: quic: ssl_client_samples.vtc supported by QUIC
    - REGTESTS: quic: ssl_default_server.vtc supported by QUIC
    - REGTESTS: quic: new_del_ssl_crlfile.vtc supported by QUIC
    - REGTESTS: quic: ssl_frontend_samples.vtc supported by QUIC
    - REGTESTS: quic: ssl_server_samples.vtc supported by QUIC
    - REGTESTS: quic: ssl_simple_crt-list.vtc supported by QUIC
    - REGTESTS: quic: ssl_sni_auto.vtc code provision for QUIC
    - REGTESTS: quic: ssl_curve_name.vtc supported by QUIC
    - REGTESTS: quic: add_ssl_crt-list.vtc supported by QUIC
    - REGTESTS: add ssl_ciphersuites.vtc (TCP & QUIC)
    - BUG/MINOR: quic: do not set first the default QUIC curves
    - REGTESTS: quic/ssl: Add ssl_curves_selection.vtc
    - BUG/MINOR: ssl: Don't allow to set NULL sni
    - MEDIUM: quic: Add connection as argument when qc_new_conn() is called
    - MINOR: ssl: Add a function to hash SNIs
    - MINOR: ssl: Store hash of the SNI for cached TLS sessions
    - MINOR: ssl: Compare hashes instead of SNIs when a session is cached
    - MINOR: connection/ssl: Store the SNI hash value in the connection itself
    - MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
    - BUG/MEDIUM: ssl: Don't reuse TLS session if the connection's SNI differs
    - MEDIUM: ssl/server: No longer store the SNI of cached TLS sessions
    - BUG/MINOR: log: Dump good %B and %U values in logs
    - BUG/MEDIUM: http-ana: Don't close server connection on read0 in TUNNEL mode
    - DOC: config: Fix description of the spop mode
    - DOC: config: Improve spop mode documentation
    - MINOR: ssl: Split ssl_crt-list_filters.vtc in two files by TLS version
    - REGTESTS: quic: tls13_ssl_crt-list_filters.vtc supported by QUIC
    - BUG/MEDIUM: h3: do not access QCS <sd> if not allocated
    - CLEANUP: mworker/cli: remove useless variable
    - BUG/MINOR: mworker/cli: 'show proc' is limited by buffer size
    - BUG/MEDIUM: ssl: Always check the ALPN after handshake
    - MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag
    - BUG/MEDIUM: ssl: Don't store the ALPN for check connections
    - BUG/MEDIUM: ssl: Don't resume session for check connections
    - CLEANUP: improvements to the alignment macros
    - CLEANUP: use the automatic alignment feature
    - CLEANUP: more conversions and cleanups for alignment
    - BUG/MEDIUM: h3: fix access to QCS <sd> definitely
    - MINOR: h2/trace: emit a trace of the received RST_STREAM type
2025-12-10 16:52:30 +01:00
Willy Tarreau
3ec5818807 MINOR: h2/trace: emit a trace of the received RST_STREAM type
Right now we don't get any state trace when receiving an RST_STREAM, and
this is not convenient because RST_STREAM(0) is not visible at all, except
at developer level, where the function is seen being entered and left.

Let's extract the RST code first and always log it using TRACE_PRINTF()
(along with h2c/h2s) so that it's possible to detect certain codes being
used.
2025-12-10 15:58:56 +01:00
Amaury Denoyelle
5b8e6d6811 BUG/MEDIUM: h3: fix access to QCS <sd> definitely
The previous patch tried to fix access to QCS <sd> member, as the latter
is not always allocated anymore on the frontend side.

  a15f0461a016a664427f5aaad2227adcc622c882
  BUG/MEDIUM: h3: do not access QCS <sd> if not allocated

In particular, access was prevented after HEADERS parsing in case
h3_req_headers_to_htx() returned an error, which indicates that the
stream-endpoint allocation was not performed. However, this still is not
enough when the QCS instance is already closed at this step. Indeed, in this
case, h3_req_headers_to_htx() returns OK but the stream-endpoint allocation
is skipped as an optimization since no data exchange will be performed.

To definitely fix this kind of problem, add checks on the QCS <sd> member
before accessing it in the H3 layer. This method is the safest way to ensure
there is no NULL dereference.

This should fix github issue #3211.

This must be backported along with the above mentioned patch.
2025-12-10 12:04:37 +01:00
Maxime Henrion
6eedd0d485 CLEANUP: more conversions and cleanups for alignment
- Convert additional cases to use the automatic alignment feature for
  the THREAD_ALIGN(ED) macros. This includes some cases that are less
  obviously correct where it seems we wanted to align only in the
  USE_THREAD case but were not using the thread specific macros.
- Also move some alignment requirements to the structure definition
  instead of having them on the variable declarations.
2025-12-09 17:40:58 +01:00
Maxime Henrion
bc8e14ec23 CLEANUP: use the automatic alignment feature
- Use the automatic alignment feature instead of hardcoding 64 all over
  the code.
- This also converts a few bare __attribute__((aligned(X))) to using the
  ALIGNED macro.
2025-12-09 17:14:58 +01:00
Maxime Henrion
74719dc457 CLEANUP: improvements to the alignment macros
- It is now possible to use the THREAD_ALIGN and THREAD_ALIGNED macros
  without a parameter. In this case, we automatically align on the cache
  line size.
- The cache line size is set to 64 by default to match the current code,
  but it can be overridden on the command line.
- This required moving the DEFVAL/DEFNULL/DEFZERO macros to compiler.h
  instead of tools-t.h, to avoid namespace pollution if we included
  tools-t.h from compiler.h.
2025-12-09 17:05:52 +01:00
Olivier Houchard
420b42df1c BUG/MEDIUM: ssl: Don't resume session for check connections
Don't attempt to use stored sessions when creating new check
connections, as the check SSL parameters might be different from the
server's ones.
This has not been proven to be a problem yet, but it doesn't mean it
can't be, and this should be backported up to 2.8 along with
dcce9369129f6ca9b8eed6b451c0e20c226af2e3 if it is.
2025-12-09 16:45:54 +01:00
Olivier Houchard
be4e1220c2 BUG/MEDIUM: ssl: Don't store the ALPN for check connections
When establishing check connections, do not store the negotiated ALPN
into the server's path_param if the connection is a check connection, as
it may use different SSL parameters than the regular connections. To do
so, only store it if the CO_FL_SSL_NO_CACHED_INFO flag is not set.
Otherwise, the check ALPN may be stored, and the wrong mux can be used
for regular connections, which will end up generating 502s.

This should fix Github issue #3207

This should be backported to 3.3.
2025-12-09 16:43:31 +01:00
Olivier Houchard
dcce936912 MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag
Add a new flag to connections, CO_FL_SSL_NO_CACHED_INFO, and set it for
checks.
It lets the SSL layer know that it should not use cached information,
such as the ALPN stored on the server, or cached sessions.
This will be used for checks, as checks may target different servers, or
use a different SSL configuration, so we can't assume the stored
information is correct.
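
An illustrative use of the new flag (the surrounding code and helper are
assumed):

  /* checks may target a different server or use a different SSL setup, so
   * cached data (stored ALPN, cached session) is only reused when the new
   * flag is absent */
  if (!(conn->flags & CO_FL_SSL_NO_CACHED_INFO))
      reuse_cached_ssl_info(conn);   /* hypothetical helper for illustration */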

This should be backported to 3.3, and may be backported up to 2.8 if the
attempt to resume sessions from checks is proven to be a problem.
2025-12-09 16:43:31 +01:00
Olivier Houchard
260d64d787 BUG/MEDIUM: ssl: Always check the ALPN after handshake
Move the code that is responsible for checking the ALPN, and updating
the one stored in the server's path_param, from after the mux creation
to after the handshake. Once this has been done once, the mux will not
be created by the ssl code anymore: since we then know which mux to use
thanks to the stored ALPN, it will be created earlier in connect_server().
So in the unlikely event the ALPN changes, we would not detect it anymore,
and we'd keep on creating the wrong mux.
This can be reproduced by doing a first request, and then changing the
ALPN of the server without haproxy noticing (i.e. without haproxy noticing
that the server went down).

This should be backported to 3.3.
2025-12-09 16:43:31 +01:00
William Lallemand
594408cd61 BUG/MINOR: mworker/cli: 'show proc' is limited by buffer size
In ticket #3204, it was reported that "show proc" is not able to display
more than 202 processes. Indeed the bufsize is 16k by default in the
master, and can't be changed anymore since 3.1.

This patch allows 'show proc' to resume dumping when the buffer is full,
based on the timestamp of the last PID it attempted to dump.
Using pointers or counting the number of processes might not be a good idea
since the list can change between calls.

Could be backported to all stable branches.
2025-12-09 16:09:10 +01:00
William Lallemand
dabe8856ad CLEANUP: mworker/cli: remove useless variable
The msg variable is declared and freed but never used; this patch removes it.
2025-12-09 16:09:10 +01:00
Amaury Denoyelle
a15f0461a0 BUG/MEDIUM: h3: do not access QCS <sd> if not allocated
Since the following commit, allocation of QCS stream-endpoint on FE side
has been delayed. The objective is to allocate it only for QCS attached
to an upper stream object. Stream-endpoint allocation is now performed
on qcs_attach_sc() called during HEADERS parsing.

  commit e6064c561684d9b079e3b5725d38dc3b5c1b5cd5
  OPTIM: mux-quic: delay FE sedesc alloc to stream creation

Also, stream-endpoint is accessed through the QCS instance after HEADERS
or DATA frames parsing, to update the known input payload length. The
above patch triggered regressions as in some code paths, <sd> field is
dereferenced while still being NULL.

This patch fixes this by restricting access to <sd> field after newer
conditions.

First, after HEADERS parsing, the known input length is only updated if
h3_req_headers_to_htx() previously returned a success value, which
guarantees that qcs_attach_sc() has been executed.

After DATA parsing, <sd> is only accessed after the frame validity
check. This ensures that HEADERS were already parsed, thus guaranteeing
that the stream-endpoint is allocated.

This should fix github issue #3211.

This must be backported up to 3.3. This is sufficient, unless above
patch is backported to previous releases, in which case the current one
must be picked with it.
2025-12-09 15:00:23 +01:00
Frederic Lecaille
18625f7ff3 REGTESTS: quic: tls13_ssl_crt-list_filters.vtc supported by QUIC
ssl/tls13_ssl_crt-list_filters.vtc was renamed to ssl/tls13_ssl_crt-list_filters.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then tls13_ssl_crt-list_filters.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-09 07:42:45 +01:00
Frederic Lecaille
c005ed0df8 MINOR: ssl: Split ssl_crt-list_filters.vtc in two files by TLS version
Separate the TLS 1.2 and TLS 1.3 parts of ssl_crt-list_filters.vtc
to produce tls12_ssl_crt-list_filters.vtc and tls13_ssl_crt-list_filters.vtc.
2025-12-09 07:42:45 +01:00
Christopher Faulet
2fa3b4c3a3 DOC: config: Improve spop mode documentation
The spop mode description was a bit confusing. So let's improve it.

Thanks to @NickMRamirez.

This patch should fix issue #3206. It could be backported as far as 3.1.
2025-12-08 15:24:05 +01:00
Christopher Faulet
e16dcab92f DOC: config: Fix description of the spop mode
It was mentioned that the spop mode turned the backend into a "log"
backend. This is obviously wrong. It turns the backend into a spop backend.

This patch should be backported as far as 3.1.
2025-12-08 15:22:01 +01:00
Christopher Faulet
3cf4e7afb9 BUG/MEDIUM: http-ana: Don't close server connection on read0 in TUNNEL mode
It is a very old bug (2012), dating from the introduction of the keep-alive
support to HAProxy. When a request is fully received, the SC on backend side
is switched to NOHALF mode. It means that when the read0 is received from
the server, the server connection is immediately closed. It is expected to
do so at the end of a classical request. However, it must not be performed
if the session is switched to the TUNNEL mode (after an HTTP/1 upgrade or a
CONNECT). The client may still have data to send to the server. And
brutally closing the server connection this way will be handled as an
error on the client side.

This bug is especially visible when an H2 connection is used on the client
side, because a RST_STREAM is emitted and an "SD--" is reported in the logs.

Thanks to @chrisstaite

This patch should fix the issue #3205. It must be backported to all stable
versions.
2025-12-08 15:22:01 +01:00
Christopher Faulet
5d74980277 BUG/MINOR: log: Dump good %B and %U values in logs
When the per-stream "bytes_in" and "bytes_out" counters were replaced in 3.3,
the wrong counters were used for the %B and %U values in logs. In the
configuration manual and the commit message, it was specified that
"bytes_in" was replaced by "req_in" and "bytes_out" by "res_in", but in the
code, the wrong counters were used. It is now fixed.

This patch should fix the issue #3208. It must be backported to 3.3.
2025-12-08 15:22:01 +01:00
Christopher Faulet
be998b590e MEDIUM: ssl/server: No longer store the SNI of cached TLS sessions
Thanks to the previous patch, "BUG/MEDIUM: ssl: Don't reuse TLS session
if the connection's SNI differs", it is now useless to store the SNI of
cached TLS sessions. This SNI is no longer tested, and new connections
reusing a session must have the same SNI.

The main change here is for the ssl_sock_set_servername() function. It is no
longer possible to compare the SNI of the reused session with the one of the
new connection. So, the SNI is always set, with no other processing. Mainly,
the session is not destroyed when SNIs don't match. It means the commit
119a4084bf ("BUG/MEDIUM: ssl: for a handshake when server-side SNI changes")
is implicitly reverted.

It is good to note that it is unclear to me when and why the reused session
should be destroyed, because I'm unable to reproduce any issue fixed by the
commit above.

This patch could be backported as far as 3.0 with the commit above.
2025-12-08 15:22:01 +01:00
Christopher Faulet
5702009c8c BUG/MEDIUM: ssl: Don't reuse TLS session if the connection's SNI differs
When a new SSL server connection is created, if no SNI is set, it is
possible to inherit the one of the reused TLS session. The bug was
introduced by the commit 95ac5fe4a ("MEDIUM: ssl_sock: always use the SSL's
server name, not the one from the tid"). The mixup is possible between
regular connections but also with health-check connections.

But it is only the visible part of the bug. If the SNI of the cached TLS
session does not match the one of the new connection, no reuse must be
performed at all.

To fix the bug, the hash of the SNI of the reused session is compared with
the one of the new connection. The TLS session is reused only if the hashes
are the same.

This patch should fix the issue #3195. It must be slowly backported as far
as 3.0. It relies on the following series:

  * MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
  * MINOR: connection/ssl: Store the SNI hash value in the connection itself
  * MEDIUM: ssl: Store hash of the SNI for cached TLS sessions
  * MINOR: ssl: Add a function to hash SNIs
  * MEDIUM: quic: Add connection as argument when qc_new_conn() is called
  * BUG/MINOR: ssl: Don't allow to set NULL sni
2025-12-08 15:22:01 +01:00
Christopher Faulet
7e9d921141 MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
The SNI of a new connection is now retrieved earlier, before the
initialization of the SSL context. So, concretely, it is now performed
before calling conn_prepare(). The SNI is then set just after.
2025-12-08 15:22:01 +01:00
Christopher Faulet
28654f3c9b MINOR: connection/ssl: Store the SNI hash value in the connection itself
When an SNI is set on a new connection, its hash is now saved in the
connection itself. To do so, a dedicated field was added into the connection
structure, called sni_hash. For now, this value is only used when the TLS
session is cached.
2025-12-08 15:22:01 +01:00
Christopher Faulet
92f77cb3e6 MINOR: ssl: Compare hashes instead of SNIs when a session is cached
This patch relies on the commit "MINOR: ssl: Store hash of the SNI for
cached TLS sessions". We now use the hash of the SNIs instead of the SNIs
themselves to know if we must update the cached SNI or not.
2025-12-08 15:22:01 +01:00
Christopher Faulet
9794585204 MINOR: ssl: Store hash of the SNI for cached TLS sessions
For cached TLS sessions, in addition to the SNI itself, its hash is now also
saved. No changes are expected here because this hash is not used for now.

This commit relies on:

  * MINOR: ssl: Add a function to hash SNIs
2025-12-08 15:22:00 +01:00
Christopher Faulet
d993e1eeae MINOR: ssl: Add a function to hash SNIs
This patch only adds the function ssl_sock_sni_hash() that can be used to
get the hash value corresponding to an SNI. A global seed, sni_hash_seed, is
used.
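
A plausible shape for such a helper, assuming the xxhash functions already
bundled with haproxy (the real implementation may differ):

  /* hash an SNI with the bundled xxhash, keyed by the global seed */
  static uint64_t ssl_sock_sni_hash(const char *sni)
  {
      return XXH64(sni, strlen(sni), sni_hash_seed);
  }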
2025-12-08 15:22:00 +01:00
Christopher Faulet
a83ed86b78 MEDIUM: quic: Add connection as argument when qc_new_conn() is called
This patch reverts the commit efe60745b ("MINOR: quic: remove connection arg
from qc_new_conn()"). The connection will be mandatory when the QUIC
connection is created on backend side to fix an issue when we try to reuse a
TLS session.

So, the connection is again an argument of qc_new_conn(), the 4th
argument. It is NULL for frontend QUIC connections but there is no special
check on it.
2025-12-08 15:22:00 +01:00
Christopher Faulet
3534efe798 BUG/MINOR: ssl: Don't allow to set NULL sni
The ssl_sock_set_servername() function was documented to support a NULL sni
to unset it. However, the man page of SSL_get_servername() does not mention
whether it is supported or not. And it is in fact not supported by WolfSSL and
leads to a crash if we do so.

For now, this function is never called with a NULL sni, so it is better and
safer to forbid this case. Now, if the sni is NULL, the function does
nothing.

This patch could be backported to all stable versions.
2025-12-08 15:22:00 +01:00
Frederic Lecaille
7872260525 REGTESTS: quic/ssl: Add ssl_curves_selection.vtc
This reg test ensures the curves may be correctly set for frontends
and backends by "ssl-default-bind-curves" and "ssl-default-server-curves"
as global options or with "curves" options on "bind" and "server" lines.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
90064ac88b BUG/MINOR: quic: do not set first the default QUIC curves
This patch impacts both the QUIC frontends and backends.

Note that "ssl-default-bind-ciphersuites" and "ssl-default-bind-curves"
are not ignored by QUIC on the frontend. This is also the case for the
backends with "ssl-default-server-ciphersuites" and "ssl-default-server-curves".

These settings are set by ssl_sock_prepare_ctx() for the frontends and
by ssl_sock_prepare_srv_ssl_ctx() for the backends. But ssl_quic_initial_ctx()
first sets the QUIC frontend defaults (see <quic_ciphers> and <quic_groups>)
before these ssl_sock.c functions are called, leading some TLS stacks to
refuse them if they do not support them. This is the case for some OpenSSL 3.5
stacks with FIPS support, which do not support X25519.

To fix this, set the default QUIC ciphersuites and curves only if not already
set by the settings mentioned above.

Rename <quic_ciphers> global variable to <default_quic_ciphersuites>
and <quic_groups> to <default_quic_curves> to reflect the OpenSSL API naming.

These options are taken into account by ssl_quic_initial_ctx(),
which inspects these four variables before calling SSL_CTX_set_ciphersuites()
with <default_quic_ciphersuites> as parameter and SSL_CTX_set_curves() with
<default_quic_curves> as parameter if needed, that is to say, if no ciphersuites
and curves were set by "ssl-default-bind-ciphersuites", "ssl-default-bind-curves"
as global options or "ciphersuites", "curves" as "bind" line options.
Note that the bind_conf struct is not modified when no "ciphersuites" or
"curves" option are used on "bind" lines.

On backend side, rely on ssl_sock_init_srv() to set the server ciphersuites
and curves. This function is modified to use respectively <default_quic_ciphersuites>
and <default_quic_curves> if no ciphersuites and curves were set by
"ssl-default-server-ciphersuites", "ssl-default-server-curves" as global options
or "ciphersuites", "curves" as "server" line options.

Thanks to @rwagoner for having reported this issue in GH #3194 when using
an OpenSSL 3.5.4 stack with FIPS support.

Must be backported as far as 2.6
2025-12-08 10:40:59 +01:00
Frederic Lecaille
a2d2cda631 REGTESTS: add ssl_ciphersuites.vtc (TCP & QUIC)
This reg test ensures the ciphersuites may be correctly set for frontends
and backends by "ssl-default-bind-ciphersuites" and "ssl-default-server-ciphersuites"
as global options or with "ciphersuites" options on "bind" and "server" lines.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
062a0ed899 REGTESTS: quic: add_ssl_crt-list.vtc supported by QUIC
ssl/add_ssl_crt-list.vtc was renamed to ssl/add_ssl_crt-list.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then add_ssl_crt-list.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
4214c97dd4 REGTESTS: quic: ssl_curve_name.vtc supported by QUIC
ssl/ssl_curve_name.vtc was renamed to ssl/ssl_curve_name.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_curve_name.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);

Note that this script works by chance for QUIC because the curves
selection matches the default ones used by QUIC.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
c615b14fac REGTESTS: quic: ssl_sni_auto.vtc code provision for QUIC
ssl/ssl_sni_auto.vtc was renamed to ssl/ssl_sni_auto.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_sni_auto.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);

Mark the test as broken for QUIC
2025-12-08 10:40:59 +01:00
Frederic Lecaille
7bb7b26317 REGTESTS: quic: ssl_simple_crt-list.vtc supported by QUIC
ssl/ssl_simple_crt-list.vtc was renamed to ssl/ssl_simple_crt-list.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_simple_crt-list.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
b87bee8e04 REGTESTS: quic: ssl_server_samples.vtc supported by QUIC
ssl/ssl_server_samples.vtc was renamed to ssl/ssl_server_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_server_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
25529dddb6 REGTESTS: quic: ssl_frontend_samples.vtc supported by QUIC
ssl/ssl_frontend_samples.vtc was renamed to ssl/ssl_frontend_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_frontend_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
5cf5f76a90 REGTESTS: quic: new_del_ssl_crlfile.vtc supported by QUIC
ssl/new_del_ssl_crlfile.vtc was renamed to ssl/new_del_ssl_crlfile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then new_del_ssl_crlfile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
fc0c52f2af REGTESTS: quic: ssl_default_server.vtc supported by QUIC
ssl/ssl_default_server.vtc was renamed to ssl/ssl_default_server.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_default_server.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
4bff826204 REGTESTS: quic: ssl_client_samples.vtc supported by QUIC
ssl/ssl_client_samples.vtc was renamed to ssl/ssl_client_samples.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_client_samples.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
47889154d2 REGTESTS: quic: ssl_client_auth.vtc supported by QUIC
ssl/ssl_client_auth.vtc was renamed to ssl/ssl_client_auth.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ssl_client_auth.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
b285f11cd6 REGTESTS: quic: show_ssl_ocspresponse.vtc supported by QUIC
ssl/show_ssl_ocspresponse.vtc was renamed to ssl/show_ssl_ocspresponse.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then show_ssl_ocspresponse.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
c4d066e735 REGTESTS: quic: set_ssl_server_cert.vtc supported by QUIC
ssl/set_ssl_server_cert.vtc was renamed to ssl/set_ssl_server_cert.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_server_cert.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
c1a818c204 REGTESTS: quic: set_ssl_crlfile.vtc supported by QUIC
ssl/set_ssl_crlfile.vtc was renamed to ssl/set_ssl_crlfile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_crlfile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
83b3e2876e REGTESTS: quic: set_ssl_cert.vtc supported by QUIC
ssl/set_ssl_cert.vtc was renamed to ssl/set_ssl_cert.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cert.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
cb1e9e3cd8 REGTESTS: quic: set_ssl_cert_noext.vtc supported by QUIC
ssl/set_ssl_cert_noext.vtc was renamed to ssl/set_ssl_cert_noext.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cert_noext.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
9c3180160d REGTESTS: quic: set_ssl_cafile.vtc supported by QUIC
ssl/set_ssl_cafile.vtc was renamed to ssl/set_ssl_cafile.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_cafile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
3f5e73e83f BUG/MINOR: quic-be: missing connection stream closure upon TLS alert to send
This is the same issue as the one fixed by this commit:
   BUG/MINOR: quic-be: handshake errors without connection stream closure
But this time this is when the client has to send an alert to the server.
The fix consists in creating the mux after having set the handshake connection
error flag and error_code.

This bug was revealed by ssl/set_ssl_cafile.vtc reg test.

Depends on this commit:
     MINOR: quic: avoid code duplication in TLS alert callback

Must be backported to 3.3
2025-12-08 10:40:59 +01:00
Frederic Lecaille
e7b06f5e7a MINOR: quic: avoid code duplication in TLS alert callback
The OpenSSL QUIC API TLS alert callback ha_quic_ossl_alert() does exactly
the same thing as the quictls API one, even if the parameters have different
types.

Call the ha_quic_send_alert() quictls callback from the ha_quic_ossl_alert()
OpenSSL QUIC API callback to avoid such code duplication.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
ad101dc3d5 REGTESTS: quic: set_ssl_bug_2265.vtc supported by QUIC
ssl/set_ssl_bug_2265.vtc was renamed to ssl/set_ssl_bug_2265.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then set_ssl_bug_2265.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
2e7320d2ee REGTESTS: quic: ocsp_auto_update.vtc supported by QUIC
ssl/ocsp_auto_update.vtc was renamed to ssl/ocsp_auto_update.vtci
to produce a common part runnable both for QUIC and TCP listeners.
Then ocsp_auto_update.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC listeners and "stream" for TCP listeners);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
cdfd9b154a REGTESTS: quic: new_del_ssl_cafile.vtc supported by QUIC
ssl/new_del_ssl_cafile.vtc was renamed to ssl/new_del_ssl_cafile.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then new_del_ssl_cafile.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC connection and "stream" for TCP connections);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
8c48a7798a REGTESTS: quic: issuers_chain_path.vtc supported by QUIC
ssl/issuers_chain_path.vtc was renamed to ssl/issuers_chain_path.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then issuers_chain_path.vtc files were created both under ssl and quic directories
to call this .vtci file with correct VTC_SOCK_TYPE environment values
("quic" for QUIC connection and "stream" for TCP connections);
2025-12-08 10:40:59 +01:00
Frederic Lecaille
94a7e0127b REGTESTS: quic: dynamic_server_ssl.vtc supported by QUIC
ssl/dynamic_server_ssl.vtc was renamed to ssl/dynamic_server_ssl.vtci
to produce a common part runnable both for QUIC and TCP connections.
Then dynamic_server_ssl.vtc were created both under ssl and quic directories
to call the .vtci file with correct VTC_SOCK_TYPE environment value.

Note that VTC_SOCK_TYPE may be resolved in haproxy -cli { } sections.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
588d0edf99 REGTESTS: quic/ssl: ssl/del_ssl_crt-list.vtc supported by QUIC
Extract from ssl/del_ssl_crt-list.vtc the common part to produce
ssl/del_ssl_crt-list.vtci which may be reused by QUIC and TCP
from respectively quic/del_ssl_crt-list.vtc and ssl/del_ssl_crt-list.vtc
thanks to "include" VTC command and VTC_SOCK_TYPE special vtest environment
variable.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
6e94b69665 REGTESTS: ssl: Move all the SSL certificates, keys, crt-lists inside "certs" directory
Move all these files, and others for OCSP tests found in reg-tests/ssl,
to reg-tests/ssl/certs and adapt all the VTC files which use them.

This patch is needed by other tests which have to include the SSL tests.
Indeed, some VTC commands contain paths to these files which cannot
be customized with environment variables, depending on the location the VTC file
is run from, because VTC does not resolve environment variables. Only macros
such as ${testdir} can be resolved.

For instance, this command run from a VTC file in the reg-tests/ssl directory
cannot be reused from another directory, unless we add a symbolic link for each
cert, key, etc.

 haproxy h1 -cli {
   send "del ssl crt-list ${testdir}/localhost.crt-list ${testdir}/common.pem:1"
 }

This is not what we want. Instead, we add a symbolic link to reg-tests/ssl/certs
in the test directory and modify the command above as follows:

 haproxy h1 -cli {
   send "del ssl crt-list ${testdir}/certs/localhost.crt-list ${testdir}/certs/common.pem:1"
 }
2025-12-08 10:40:59 +01:00
Frederic Lecaille
21293dd6c3 MINOR: quic: Add useful debugging traces in qc_idle_timer_do_rearm()
Traces were missing in this function.
Also add information about the connection struct from qc->conn when
initialized for all the traces.

Should be easily backported as far as 2.6.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
c36e27d10e BUG/MINOR: quic-be: handshake errors without connection stream closure
This bug was revealed on the backend side by reg-tests/ssl/del_ssl_crt-list.vtc when
run with QUIC connections. As expected by the test, a TLS alert is generated on
the server side. The latter sends a CONNECTION_CLOSE frame with a CRYPTO error
(>= 0x100). In this case the client closes its QUIC connection. But
the stream connection was not informed. This leads the connection to
be closed only after the server timeout expiration, while it should be closed
asap. This is the reason why reg-tests/ssl/del_ssl_crt-list.vtc could succeed
or fail, but only after a 5 second delay.

To fix this, mimic what ssl_sock_io_cb() does for TCP/SSL connections. Call
the same code this patch implements with ssl_sock_handle_hs_error()
to correctly handle the handshake errors. Note that some SSL counters
were not incremented, both for the backends and the frontends. After such
errors, ssl_sock_io_cb() starts the mux after the connection has been
flagged in error. This has the side effect of closing the stream
in conn_create_mux().

Must be backported to 3.3, only for backends. It is not clear at this time
whether this bug may impact the frontends.
2025-12-08 10:40:59 +01:00
Frederic Lecaille
63273c795f BUG/MINOR: quic/ssl: crash in ClientHello callback ssl traces
Such crashes may occur for QUIC frontends only when the SSL traces are enabled.

The ssl_sock_switchctx_cbk() ClientHello callback may be called without any
initialized connection (<conn>) for QUIC connections, leading to crashes when
passing conn->err_code to TRACE_ERROR().

Modify the TRACE_ERROR() statement to pass this parameter only when <conn> is
initialized.

Must be backported as far as 3.2.
2025-12-08 10:40:59 +01:00
Willy Tarreau
d2a1665af0 DOC: config: reorder the cache section's keywords
Probably due to historical accumulation, keywords were in a random
order that doesn't help when looking them up. Let's just reorder them
in alphabetical order like other sections. This can be backported.
2025-12-04 15:44:38 +01:00
Willy Tarreau
4d0a88c746 DOC: config: mention clearer that the cache's total-max-size is mandatory
As reported in GH issue #3201, it's easy to overlook this, so let's make
it clearer by mentioning the keyword. This can be backported to all
versions.
2025-12-04 15:42:09 +01:00
Willy Tarreau
cd959f1321 BUG/MEDIUM: config: ignore empty args in skipped blocks
As reported by Christian Ruppert in GH issue #3203, we're having an
issue with checks for empty args in skipped blocks: the check is
performed after the line is tokenized, without considering the case
where it's disabled due to outer false .if/.else conditions. Because
of this, a test like this one:

    .if defined(SRV1_ADDR)
        server srv1 "$SRV1_ADDR"
    .endif

will fail when SRV1_ADDR is empty or not set, saying that this will
result in an empty arg on the line.

The solution consists in postponing this check after the conditions
evaluation so that disabled lines are already skipped. And for this
to be possible, we need to move "errptr" one level above so that it
remains accessible there.

This will need to be backported to 3.3 and wherever commit 1968731765
("BUG/MEDIUM: config: solve the empty argument problem again") is
backported. As such it is also related to GH issue #2367.
2025-12-04 15:33:43 +01:00
Willy Tarreau
b29560f610 BUG/MEDIUM: connection: fix "bc_settings_streams_limit" typo
The keyword was correct in the doc but in the code it was spelled
with a missing 's' after 'settings', making it unavailable. Since
there was no other way to find this but reading the code, it's safe
to simply fix it and assume nobody relied on the wrong spelling.

In the worst case for older backports it can also be duplicated.

This must be backported to 3.0.
2025-12-04 15:26:54 +01:00
William Lallemand
85689b072a REGTESTS: ssl: split tls*_reuse in stateless and stateful resume tests
Simplify ssl_reuse.vtci so it can be started with variables:

- SSL_CACHESIZE allows specifying the size of the session cache for
  the frontend
- NO_TLS_TICKETS allows specifying the "no-tls-tickets" option on bind

It introduces these files:

- ssl/tls12_resume_stateful.vtc
- ssl/tls12_resume_stateless.vtc
- ssl/tls13_resume_stateless.vtc
- ssl/tls13_resume_stateful.vtc
- quic/tls13_resume_stateless.vtc
- quic/tls13_resume_stateful.vtc
- quic/tls13_0rtt_stateful.vtc
- quic/tls13_0rtt_stateless.vtc

stateful files have "no-tls-tickets" + tune.tls.cachesize 20000
stateless files have "tls-tickets" + tune.tls.cachesize 0

This allows enabling AWS-LC on TCP TLS1.2 and TCP TLS1.3+tickets.

TLS1.2+stateless does not seem to work on WolfSSL.
2025-12-04 15:05:56 +01:00
William Lallemand
c7b5d2552a REGTESTS: ssl enable tls12_reuse.vtc for AWS-LC
The TLS resume test was never started with AWS-LC because the TLS1.3
part was not working. Since we split the reg-test into a TLS1.2 part
and a TLS1.3 part, we can enable the TLS1.2 part for AWS-LC.
2025-12-04 11:40:04 +01:00
Frederic Lecaille
cdca48b88c BUG/MINOR: quic-be: Missing keywords array NULL termination
This bug arrived with this commit:
     MINOR: quic: implement cc-algo server keyword
where a <srv> keywords list missing its terminating NULL array entry was
introduced to parse the QUIC backend CC algorithms.
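
The fix pattern can be illustrated as follows (struct layout, flags and the
parser name are simplified assumptions):

  /* server keyword lists are scanned until an entry with a NULL keyword,
   * so the terminating entry is mandatory */
  static struct srv_kw_list srv_kws = { "QUIC", { }, {
          { "quic-cc-algo", srv_parse_quic_cc_algo, 1, 0, 1 },
          { NULL },   /* the missing terminator that caused the overflow */
  }};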

Detected by ASAN during ssl/add_ssl_crt-list.vtc execution as follows:

***  h1    debug|==4066081==ERROR: AddressSanitizer: global-buffer-overflow on address 0x5562e31dedb8 at pc 0x5562e298951f bp 0x7ffe9f9f2b40 sp 0x7ffe9f9f2b38
***  h1    debug|READ of size 8 at 0x5562e31dedb8 thread T0
**** dT    0.173
***  h1    debug|    #0 0x5562e298951e in srv_find_kw src/server.c:789
***  h1    debug|    #1 0x5562e2989630 in _srv_parse_kw src/server.c:3847
***  h1    debug|    #2 0x5562e299db1f in parse_server src/server.c:4024
***  h1    debug|    #3 0x5562e2c86ea4 in cfg_parse_listen src/cfgparse-listen.c:593
***  h1    debug|    #4 0x5562e2b0ede9 in parse_cfg src/cfgparse.c:2708
***  h1    debug|    #5 0x5562e2c47d48 in read_cfg src/haproxy.c:1077
***  h1    debug|    #6 0x5562e2682055 in main src/haproxy.c:3366
***  h1    debug|    #7 0x7ff3ff867249 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
***  h1    debug|    #8 0x7ff3ff867304 in __libc_start_main_impl ../csu/libc-start.c:360
***  h1    debug|    #9 0x5562e26858d0 in _start (/home/flecaille/src/haproxy/haproxy+0x2638d0)
***  h1    debug|
***  h1    debug|0x5562e31dedb8 is located 40 bytes to the left of global variable 'bind_kws' defined in 'src/cfgparse-quic.c:255:28' (0x5562e31dede0) of size 120
***  h1    debug|0x5562e31dedb8 is located 0 bytes to the right of global variable 'srv_kws' defined in 'src/cfgparse-quic.c:264:27' (0x5562e31ded80) of size 56
***  h1    debug|SUMMARY: AddressSanitizer: global-buffer-overflow src/server.c:789 in srv_find_kw
***  h1    debug|Shadow bytes around the buggy address:
***  h1    debug|  0x0aacdc633d60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
***  h1    debug|  0x0aacdc633d70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
***  h1    debug|  0x0aacdc633d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
***  h1    debug|  0x0aacdc633d90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
***  h1    debug|  0x0aacdc633da0: 00 00 00 00 00 00 00 00 00 00 f9 f9 f9 f9 f9 f9
***  h1    debug|=>0x0aacdc633db0: 00 00 00 00 00 00 00[f9]f9 f9 f9 f9 00 00 00 00
***  h1    debug|  0x0aacdc633dc0: 00 00 00 00 00 00 00 00 00 00 00 f9 f9 f9 f9 f9
***  h1    debug|  0x0aacdc633dd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
***  h1    debug|  0x0aacdc633de0: 00 00 00 00 00 00 00 00 f9 f9 f9 f9 f9 f9 f9 f9
***  h1    debug|  0x0aacdc633df0: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
***  h1    debug|  0x0aacdc633e00: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
***  h1    debug|Shadow byte legend (one shadow byte represents 8 application bytes):

This should be backported wherever the commit above is backported.
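
For illustration, a minimal self-contained C sketch (simplified types, not
the real HAProxy srv_kw structures) of why the terminating entry matters
when a parser scans such an array:

  #include <stdio.h>
  #include <string.h>

  /* Simplified keyword entry; the real srv_kw layout differs. */
  struct kw {
      const char *name;
      int (*parse)(const char *arg);
  };

  static int parse_cc_algo(const char *arg)
  {
      printf("cc-algo=%s\n", arg);
      return 0;
  }

  /* The scanner stops on a NULL name: without the terminating entry it
   * walks past the end of the array, which is exactly the kind of
   * global-buffer-overflow reported by ASAN above. */
  static const struct kw srv_kws[] = {
      { "quic-cc-algo", parse_cc_algo },
      { NULL, NULL }                      /* mandatory sentinel */
  };

  static const struct kw *find_kw(const struct kw *list, const char *name)
  {
      for (; list->name; list++)
          if (strcmp(list->name, name) == 0)
              return list;
      return NULL;
  }

  int main(void)
  {
      const struct kw *kw = find_kw(srv_kws, "quic-cc-algo");
      return kw ? kw->parse("cubic") : 1;
  }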
2025-12-03 11:07:47 +01:00
Amaury Denoyelle
47dff5be52 MINOR: quic: implement cc-algo server keyword
Extend QUIC server configuration so that the congestion algorithm and
maximum window size can be set on the server line. This can be achieved
using the quic-cc-algo keyword with a syntax similar to a bind line.

This should be backported up to 3.3 as this feature is considered
necessary for full QUIC backend support. Note that this relies on the
series of previous commits which should be picked first.
2025-12-01 15:53:58 +01:00
Amaury Denoyelle
4f43abd731 MINOR: quic: extract cc-algo parsing in a dedicated function
Extract code from bind_parse_quic_cc_algo() related to pure parsing of
quic-cc-algo keyword. The objective is to be able to quickly duplicate
this option on the server line.

This may need to be backported to support QUIC congestion control
algorithm configuration on the server line in version 3.3.
2025-12-01 15:06:01 +01:00
Amaury Denoyelle
979588227f MINOR: quic: define quic_cc_algo as const
Each QUIC congestion algorithm is defined as a structure with callbacks
in it. Every quic_conn has a member pointing to the configured
algorithm, inherited from the bind-conf keyword or defaulting to the
CUBIC value.

Convert all these definitions to const. This ensures that a globally
shared structure can never be accidentally modified. This also requires
marking the quic_cc_algo field in bind_conf and quic_cc as const.
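
As a hedged, simplified sketch (the names below are invented and do not
match the real quic_cc_algo layout), the effect of the const qualifier
looks like this:

  #include <stddef.h>

  /* A table of callbacks shared by every connection using that algorithm. */
  struct cc_algo {
      const char *name;
      void (*on_ack)(void *cc);
      void (*on_loss)(void *cc);
  };

  static void cubic_on_ack(void *cc)  { (void)cc; }
  static void cubic_on_loss(void *cc) { (void)cc; }

  /* const: an accidental write through a connection's pointer becomes a
   * compile-time error instead of silently altering a shared table. */
  static const struct cc_algo cc_cubic = {
      .name    = "cubic",
      .on_ack  = cubic_on_ack,
      .on_loss = cubic_on_loss,
  };

  struct conn {
      const struct cc_algo *algo;   /* the pointer must be const-qualified too */
  };

  int main(void)
  {
      struct conn c = { .algo = &cc_cubic };
      c.algo->on_ack(NULL);
      /* c.algo->name = "bic";         would now fail to compile */
      return 0;
  }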
2025-12-01 15:05:41 +01:00
Amaury Denoyelle
acbb378136 Revert "MINOR: quic: use dynamic cc_algo on bind_conf"
This reverts commit a6504c9cfb6bb48ae93babb76a2ab10ddb014a79.

Each supported QUIC algo is associated with a set of callbacks defined
in a structure quic_cc_algo. Originally, bind_conf would use a constant
pointer to one of these definitions.

During pacing implementation, this field was transformed into a
dynamically allocated value copied from the original definition. The
idea was to be able to tweak settings at the listener level. However,
this was never used in practice. As such, revert to the original model.

This may need to be backported to support QUIC congestion control
algorithm configuration on the server line in version 3.3.
2025-12-01 14:18:58 +01:00
William Lallemand
c641ea4f9b DOC: configuration: ECH support details
Specify which OpenSSL branch is supported and that AWS-LC is not
supported.

Must be backported to 3.3.
2025-11-30 09:47:56 +01:00
Remi Tricot-Le Breton
2b3d13a740 BUG/MINOR: jwt: Missing "case" in switch statement
Because of missing "case" keyword in front of the values in a switch
case statement, the values were interpreted as goto tags and the switch
statement became useless.

This patch should fix GitHub issue #3200.
The fix should be backported up to 2.8.
2025-11-28 16:36:46 +01:00
Willy Tarreau
36133759d3 [RELEASE] Released version 3.4-dev0
Released version 3.4-dev0 with the following main changes :
    - MINOR: version: mention that it's development again
2025-11-26 16:12:45 +01:00
Willy Tarreau
e8d6ffb692 MINOR: version: mention that it's development again
This essentially reverts d8ba9a2a92.
2025-11-26 16:11:47 +01:00
Willy Tarreau
7832fb21fe [RELEASE] Released version 3.3.0
Released version 3.3.0 with the following main changes :
    - BUG/MINOR: acme: better challenge_ready processing
    - BUG/MINOR: acme: warning ‘ctx’ may be used uninitialized
    - MINOR: httpclient: complete the https log
    - BUG/MEDIUM: server: do not use default SNI if manually set
    - BUG/MINOR: freq_ctr: Prevent possible signed overflow in freq_ctr_overshoot_period
    - DOC: ssl: Document the restrictions on 0RTT.
    - DOC: ssl: Note that 0rtt works fork QUIC with QuicTLS too.
    - BUG/MEDIUM: quic: do not prevent sending if no BE token
    - BUG/MINOR: quic/server: free quic_retry_token on srv drop
    - MINOR: quic: split global CID tree between FE and BE sides
    - MINOR: quic: use separate global quic_conns FE/BE lists
    - MINOR: quic: add "clo" filter on show quic
    - MINOR: quic: dump backend connections on show quic
    - MINOR: quic: mark backend conns on show quic
    - BUG/MINOR: quic: fix uninit list on show quic handler
    - BUG/MINOR: quic: release BE quic_conn on connect failure
    - BUG/MINOR: server: fix srv_drop() crash on partially init srv
    - BUG/MINOR: h3: do no crash on forwarding multiple chained response
    - BUG/MINOR: h3: handle properly buf alloc failure on response forwarding
    - BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set
    - BUG/MINOR: acme: fix ha_alert() call
    - Revert "BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set"
    - BUG/MINOR: sock-inet: ignore conntrack for transparent sockets on Linux
    - DEV: patchbot: prepare for new version 3.4-dev
    - DOC: update INSTALL with the range of gcc compilers and openssl versions
    - MINOR: version: mention that 3.3 is stable now
2025-11-26 15:55:57 +01:00
Willy Tarreau
d8ba9a2a92 MINOR: version: mention that 3.3 is stable now
This version will be maintained up to around Q1 2027. The INSTALL file
also mentions it.
2025-11-26 15:54:30 +01:00
Willy Tarreau
09dd6bb4cb DOC: update INSTALL with the range of gcc compilers and openssl versions
Gcc 4.7 to 15 are tested. OpenSSL was tested up to 3.6. QUIC support
requires OpenSSL >= 3.5.2.
2025-11-26 15:50:43 +01:00
Willy Tarreau
22fd296a04 DEV: patchbot: prepare for new version 3.4-dev
The bot will now load the prompt for the upcoming 3.4 version so we have
to rename the files and update their contents to match the current version.
2025-11-26 15:35:22 +01:00
Willy Tarreau
e5658c52d0 BUG/MINOR: sock-inet: ignore conntrack for transparent sockets on Linux
As reported in github issue #3192, in certain situations with transparent
listeners, it is possible to get the incoming connection's destination
wrong via SO_ORIGINAL_DST. Two cases were identified thus far:
  - incorrect conntrack configuration where NOTRACK is used only on
    incoming packets, resulting in reverse connections being created
    from response packets. It's then mostly a matter of timing, i.e.
    whether or not the connection is confirmed before the source is
    retrieved, but in this case the connection's destination address
    as retrieved by SO_ORIGINAL_DST is the client's address.

  - late outgoing retransmit that recreates a just expired conntrack
    entry, in reverse direction as well. It's possible that combinations
    of RST or FIN might play a role here in speeding up conntrack eviction,
    as well as the rollover of source ports on the client whose new
    connection matches an older one and simply refreshes it due to
    nf_conntrack_tcp_loose being set by default.

TPROXY doesn't require conntrack, only REDIRECT, DNAT etc do. However
the system doesn't offer any option to know how a conntrack entry was
created (i.e. normally or via a response packet) to let us know that
it's pointless to check the original destination, nor does it permit
to access the local vs peer addresses in opposition to src/dst which
can be wrong in this case.

One alternate approach could consist in only checking SO_ORIGINAL_DST
for listening sockets not configured with the "transparent" option,
but the problem here is that our low-level API only works with FDs
without knowing their purpose, so it's unknown there that the fd
corresponds to a listener, let alone in transparent mode.

A (slightly more expensive) variant of this approach here consists in
checking on the socket itself that it was accepted in transparent mode
using IP_TRANSPARENT, and skipping SO_ORIGINAL_DST if this is the case.
This does the job well enough (no more client addresses appearing in
the dst field) and remains a good compromise. A future improvement of
the API could permit to pass the transparent flag down the stack to
that function.
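
A hedged sketch of the idea (simplified, not HAProxy's actual sock_inet
code, error handling reduced to a minimum, and the fallback constants are
only guessed here from public Linux values):

  #include <sys/socket.h>
  #include <netinet/in.h>

  #ifndef IP_TRANSPARENT
  #define IP_TRANSPARENT 19         /* Linux value */
  #endif
  #ifndef SO_ORIGINAL_DST
  #define SO_ORIGINAL_DST 80        /* from <linux/netfilter_ipv4.h> */
  #endif

  /* Return the destination of an accepted IPv4 socket: skip SO_ORIGINAL_DST
   * when the socket was accepted in transparent (TPROXY) mode, since
   * conntrack may then report a bogus address. */
  static int fetch_dst(int fd, struct sockaddr_in *dst)
  {
      int transparent = 0;
      socklen_t len = sizeof(transparent);

      if (getsockopt(fd, IPPROTO_IP, IP_TRANSPARENT, &transparent, &len) == 0 &&
          transparent)
          goto local;               /* TPROXY: the local address is the truth */

      len = sizeof(*dst);
      if (getsockopt(fd, IPPROTO_IP, SO_ORIGINAL_DST, dst, &len) == 0)
          return 0;                 /* REDIRECT/DNAT: conntrack has the answer */

   local:
      len = sizeof(*dst);
      return getsockname(fd, (struct sockaddr *)dst, &len);
  }

  int main(void)
  {
      (void)fetch_dst;              /* compile-only illustration */
      return 0;
  }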

This should be backported to stable versions after some observation
in latest -dev.

For reference, here are some links to older conversations on that topic
that Lukas found during this analysis:

  https://lists.openwall.net/netdev/2019/01/12/34
  https://discourse.haproxy.org/t/send-proxy-not-modifying-some-traffic-with-proxy-ip-port-details/3336/9
  https://www.mail-archive.com/haproxy@formilux.org/msg32199.html
  https://lists.openwall.net/netdev/2019/01/23/114
2025-11-26 13:43:58 +01:00
Christopher Faulet
7d9cc28f92 Revert "BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set"
This reverts commit de29000e602bda55d32c266252ef63824e838ac0.

The fix was in fact invalid. First, it is not supported by WolfSSL to call
SSL_set_tlsext_host_name with a NULL hostname. Then, it is not specified
as supported by other SSL libraries.

But, by reviewing the root cause of this bug, it appears there is an issue
with the reuse of TLS sessions. It must not be performed if the SNI does not
match. A TLS session created with a SNI must not be reused with another
SNI. The side effects are not clear but functionally speaking, it is
invalid.

So, for now, the commit above was reverted because it is invalid and it
crashes with WolfSSL. The init of the SSL connection must then be reworked
to get the SNI earlier, to be able to decide whether or not to reuse an
existing TLS session.
2025-11-26 12:05:43 +01:00
Maxime Henrion
d506c03aa0 BUG/MINOR: acme: fix ha_alert() call
A NULL pointer was passed as the format string, so this alert message
was never written.

Must be backported to 3.2.
2025-11-25 20:20:25 +01:00
Christopher Faulet
de29000e60 BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set
When a new SSL server connection is created, if no SNI is set, it is
possible to inherit the one of the reused TLS session. The bug was
introduced by the commit 95ac5fe4a ("MEDIUM: ssl_sock: always use the SSL's
server name, not the one from the tid"). The mixup is possible between
regular connections but also with health-check connections.

To fix the issue, when no SNI is set, for regular server connections and for
health-check connections, the SNI must explicitly be disabled by calling
ssl_sock_set_servername() with the hostname set to NULL.

Many thanks to Lukas for his detailed bug report.

This patch should fix the issue #3195. It must be backported as far as 3.0.
2025-11-25 16:32:46 +01:00
Amaury Denoyelle
a70816da82 BUG/MINOR: h3: handle properly buf alloc failure on response forwarding
Replace the BUG_ON() for buffer alloc failure in h3_resp_headers_to_htx()
with proper error handling. An error status is reported which should be
sufficient to initiate connection closure.

No need to backport.
2025-11-25 15:55:08 +01:00
Amaury Denoyelle
ae96defaca BUG/MINOR: h3: do no crash on forwarding multiple chained response
h3_resp_headers_to_htx() is the function used to convert an HTTP/3
response into an HTX message. It was introduced in this release for QUIC
backend support.

A BUG_ON() would occur if multiple responses are forwarded
simultaneously on a stream without rcv_buf in between. Fix this by
removing it. Instead, if the QCS HTX buffer is not empty when handling
a new response, prefer to pause the demux operation. This is restarted
when the buffer has been read and emptied by the upper stream layer.

No need to backport.
2025-11-25 15:52:37 +01:00
Amaury Denoyelle
a363b536a9 BUG/MINOR: server: fix srv_drop() crash on partially init srv
A recent patch has introduced a free operation for QUIC tokens stored in a
server. These values are located in the <per_thr> server array.

However, a server instance may be released prior to its full
initialization in case of a failure during the "add server" CLI command. The
mentioned patch would cause a srv_drop() crash due to an invalid usage
of a NULL <per_thr> member.

Fix this by adding a check on <per_thr> prior to dereferencing it in
srv_drop().

No need to backport.
2025-11-25 15:16:13 +01:00
Amaury Denoyelle
6c08eb7173 BUG/MINOR: quic: release BE quic_conn on connect failure
If quic_connect_server() fails, the quic_conn FD will remain unopened, set
to -1. Backend connections do not have a fallback socket for future
exchanges, contrary to frontend ones which can use the listener FD. As
such, it is better to release these connections early.

This patch adjusts such failure by extending quic_close(). This function
is called by the upper layer immediately after a connect issue. In this
case, release immediately a quic_conn backend instance if the FD is
unset, which means that connect has previously failed.

Also, quic_conn_release() is extended to ensure that such faulty
connections are immediately freed and not converted into a
quic_conn_closed instance.

Prior to this patch, a backend quic_conn without any FD would remain
allocated and possibly active. If its tasklet is executed, this resulted
in a crash due to access to an invalid FD.

No need to backport.
2025-11-25 14:50:23 +01:00
Amaury Denoyelle
346631700d BUG/MINOR: quic: fix uninit list on show quic handler
A recent patch has extended the "show quic" capability. It is now possible
to list a specific set of connections, either active frontend, closing
frontend or backend connections.

An issue was introduced as the list is local storage. As this command is
reentrant, the "show quic" context must be extended so that the currently
inspected list is also saved.

This issue was reported by GCC which mentions an uninitialized value
depending on branching conditions.
2025-11-25 14:50:19 +01:00
Amaury Denoyelle
a3f76875f4 MINOR: quic: mark backend conns on show quic
Add an extra "(B)" marker when displaying a backend connection during a
"show quic". This is useful to differentiate them with the frontend side
when displaying all connections.
2025-11-25 14:31:27 +01:00
Amaury Denoyelle
e56fdf6320 MINOR: quic: dump backend connections on show quic
Add a new "be" filter to "show quic". Its purpose is to be able to
display backend connections. These connections can also be listed using
"all" filter.
2025-11-25 14:30:18 +01:00
Amaury Denoyelle
3685681373 MINOR: quic: add "clo" filter on show quic
Add a new filter "clo" for "show quic" command. Its purpose is to filter
output to only list closing frontend connections.
2025-11-25 14:30:18 +01:00
Amaury Denoyelle
49e6fca51b MINOR: quic: use separate global quic_conns FE/BE lists
Each quic_conn instance is stored in a global list. Its purpose is to be
able to loop over all known connections during "show quic".

Split this into two separate lists for frontend and backend usage.
Another change is that closing backend connections do not move into the
quic_conns_clo list. They instead remain in their original list. The
objective of this patch is to reduce the contention between the two
sides.
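
The idea, as a hedged stand-alone sketch (generic lists and locks, not
HAProxy's actual list code):

  #include <pthread.h>

  /* Keeping frontend and backend connections on separate lists, each with
   * its own lock, so the two sides no longer contend on one global list. */
  struct conn_node { struct conn_node *next; };

  static struct conn_node *fe_conns, *be_conns;
  static pthread_mutex_t fe_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t be_lock = PTHREAD_MUTEX_INITIALIZER;

  static void track_conn(struct conn_node *c, int is_back)
  {
      struct conn_node **head = is_back ? &be_conns : &fe_conns;
      pthread_mutex_t *lock   = is_back ? &be_lock  : &fe_lock;

      pthread_mutex_lock(lock);
      c->next = *head;              /* push onto the side-specific list */
      *head = c;
      pthread_mutex_unlock(lock);
  }

  int main(void)
  {
      struct conn_node a, b;
      track_conn(&a, 0);            /* frontend connection */
      track_conn(&b, 1);            /* backend connection  */
      return 0;
  }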

Note that this prevents backend connections from being listed in "show
quic" for now. This will be adjusted in a future patch.
2025-11-25 14:30:18 +01:00
Amaury Denoyelle
a5801e542d MINOR: quic: split global CID tree between FE and BE sides
QUIC CIDs are stored in a global tree. Prior to this patch, CIDs used on
both frontend and backend sides were mixed together.

This patch implements CID storage separation between FE and BE sides. The
original tree quic_cid_trees is split into
quic_fe_cid_trees/quic_be_cid_trees.

This patch should reduce contention between frontend and backend usages.
Also, it should reduce the risk of random CID collision.
2025-11-25 14:30:18 +01:00
Amaury Denoyelle
4b596c1ea8 BUG/MINOR: quic/server: free quic_retry_token on srv drop
A recent patch has implemented caching of QUIC token received from a
NEW_TOKEN frame into the server cache. This value is stored per thread
into a <quic_retry_token> field.

This field is an ist, first set to an empty string. Via
qc_try_store_new_token(), it is reallocated to fit the size of the newly
stored token. Prior to this patch, the field was never freed so this
causes a memory leak.

Fix this by using istfree() on <quic_retry_token> field during
srv_drop().
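
As a hedged, generic sketch of the allocate-and-forget pattern being fixed
(plain malloc/free here, not HAProxy's actual ist helpers; names invented):

  #include <stdlib.h>
  #include <string.h>

  /* A per-thread cached token that must be released on the drop path. */
  struct tok { char *ptr; size_t len; };

  static void tok_store(struct tok *t, const char *data, size_t len)
  {
      free(t->ptr);                 /* drop any previously cached token */
      t->ptr = malloc(len);
      if (t->ptr) {
          memcpy(t->ptr, data, len);
          t->len = len;
      } else
          t->len = 0;
  }

  static void tok_free(struct tok *t) /* the step that was missing */
  {
      free(t->ptr);
      t->ptr = NULL;
      t->len = 0;
  }

  int main(void)
  {
      struct tok per_thr = { NULL, 0 };
      tok_store(&per_thr, "new-token", 9);
      tok_free(&per_thr);           /* called from the drop path: no leak */
      return 0;
  }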

No need to backport.
2025-11-25 14:30:18 +01:00
Amaury Denoyelle
cbfe574d8a BUG/MEDIUM: quic: do not prevent sending if no BE token
For QUIC client support, a token may be emitted along with INITIAL
packets during the handshake. The token is encoded during emission via
qc_enc_token() called by qc_build_pkt().

The token may be provided from different sources. First, it can be
retrieved via <retry_token> quic_conn member when a Retry packet was
received. If not present, a token may be reused from the server cache,
populated from a NEW_TOKEN received from a previous connection.

Prior to this patch, the last method could cause an issue. If the upper
connection instance is released prior to the handshake completion, this
prevents access to a possible server token. This is considered an error
by qc_enc_token(). The error is reported up to calling functions,
preventing any emission from being performed. In the end, this prevented
either the full quic_conn release or its conversion into quic_conn_closed
until the idle timeout completion (30s by default). With abortonclose
set now by default on HTTP frontends, early client shutdowns can easily
cause excessive memory consumption.

To fix this, change qc_enc_token() so that if the connection is closed, no
token is encoded but also no error is reported. This allows emission to
continue and permits early connection release.

No need to backport.
2025-11-25 14:30:18 +01:00
Olivier Houchard
e27216b799 DOC: ssl: Note that 0rtt works fork QUIC with QuicTLS too.
Document that one can use 0rtt with QUIC when using QuicTLS too.
2025-11-25 13:17:45 +01:00
Olivier Houchard
f867068dc7 DOC: ssl: Document the restrictions on 0RTT.
Document that with QUIC, 0RTT only works with OpenSSL >= 3.5.2 and
AWS-LC, while for TLS/TCP it only works with OpenSSL, and that frontends
require an ALPN to be sent by the client to use the early data before
the handshake.
2025-11-25 11:46:22 +01:00
Jacques Heunis
91eb9b082b BUG/MINOR: freq_ctr: Prevent possible signed overflow in freq_ctr_overshoot_period
All of the other bandwidth-limiting code stores limits and intermediate
(byte) counters as unsigned integers. The exception here is
freq_ctr_overshoot_period which takes in unsigned values but returns a
signed value. While this has the benefit of letting the caller know how
far away from overshooting they are, this is not currently leveraged
anywhere in the codebase, and it has the downside of halving the positive
range of the result.

More concretely though, returning a signed integer when all intermediate
values are unsigned (and boundaries are not checked) could result in an
overflow, producing values that are at best unexpected. In the case of
flt_bwlim (the only usage of freq_ctr_overshoot_period in the codebase at
the time of writing), an overflow could cause the filter to wait for a
large number of milliseconds when in fact it shouldn't wait at all.

This is a niche possibility, because it requires that a bandwidth limit is
defined in the range [2^31, 2^32). In this case, the raw limit value would
not fit into a signed integer, and close to the end of the period, the
`(elapsed * freq)/period` calculation could produce a value which also
doesn't fit into a signed integer.

If at the same time `curr` (the number of events counted so far in the
current period) is small, then we could get a very large negative value
which overflows. This is undefined behaviour and could produce surprising
results. The most obvious outcome is flt_bwlim sometimes waiting for a
large amount of time in a case where it shouldn't wait at all, thereby
incorrectly slowing down the flow of data.

Converting just the return type from signed to unsigned (and checking for
the overflow) prevents this undefined behaviour. It also makes the range
of valid values consistent between the input and output of
freq_ctr_overshoot_period and with the input and output of other freq_ctr
functions, thereby reducing the potential for surprise in intermediate
calculations: now everything supports the full 0 - 2^32 range.
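
The arithmetic can be illustrated with a hedged stand-alone sketch
(constants chosen only to show the effect, not the real freq_ctr code):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint32_t freq    = 3000000000u; /* limit in the [2^31, 2^32) range   */
      uint32_t period  = 1000;        /* period in ms                      */
      uint32_t elapsed = 999;         /* nearly at the end of the period   */
      uint32_t curr    = 10;          /* very few events counted so far    */

      /* budget allowed so far: ~2.997e9, which no longer fits in an int */
      uint32_t budget = (uint32_t)((uint64_t)elapsed * freq / period);

      /* squeezing the difference into a signed 32-bit value cannot work */
      long long diff = (long long)curr - (long long)budget;
      printf("signed difference: %lld (outside the int range)\n", diff);

      /* staying unsigned end-to-end and clamping at zero avoids the issue */
      uint32_t overshoot = curr > budget ? curr - budget : 0;
      printf("unsigned overshoot: %u\n", (unsigned)overshoot);
      return 0;
  }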
2025-11-24 14:10:13 +01:00
Amaury Denoyelle
2829165f61 BUG/MEDIUM: server: do not use default SNI if manually set
A new server feature "sni-auto" has been introduced recently. The
objective is to automatically set the SNI value to the host header if no
SNI is explicitly set.

  668916c1a2fc2180028ae051aa805bb71c7b690b
  MEDIUM: server/ssl: Base the SNI value to the HTTP host header by default

There is an issue with it: the server SNI is currently always overwritten,
even if explicitly set in the configuration file. Adjust
check_config_validity() to ensure the default value is only used if
<sni_expr> is NULL.

This issue was detected because a memory leak on <sni_expr> was reported
when the SNI is explicitly set on a server line.

This patch is related to github feature request #3081.

No need to backport, unless the above patch is.
2025-11-24 11:45:18 +01:00
William Lallemand
5dbf06e205 MINOR: httpclient: complete the https log
The httpsclient_log_format variable lacks a few values in the TLS fields
that are now available as fetches.

On the backend side we have:

"%[fc_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] %[ssl_fc_sni]/%sslv/%sslc"

We now have enough sample fetches to have this equivalent in the
httpclient:

"%[bc_err]/%[ssl_bc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_bc_is_resumed] %[ssl_bc_sni]/%[ssl_bc_protocol]/%[ssl_bc_cipher]"

Instead of the current:

"%[bc_err]/%[ssl_bc_err,hex]/-/-/%[ssl_bc_is_resumed] -/-/-"
2025-11-22 12:29:33 +01:00
William Lallemand
0cae2f0515 BUG/MINOR: acme: warning ‘ctx’ may be used uninitialized
Please the compiler, which complains with a maybe-uninitialized warning:

src/acme.c: In function ‘cli_acme_chall_ready_parse’:
include/haproxy/task.h:215:9: error: ‘ctx’ may be used uninitialized [-Werror=maybe-uninitialized]
  215 |         _task_wakeup(t, f, MK_CALLER(WAKEUP_TYPE_TASK_WAKEUP, 0, 0))
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/acme.c:2903:17: note: in expansion of macro ‘task_wakeup’
 2903 |                 task_wakeup(ctx->task, TASK_WOKEN_MSG);
      |                 ^~~~~~~~~~~
src/acme.c:2862:26: note: ‘ctx’ was declared here
 2862 |         struct acme_ctx *ctx;
      |                          ^~~
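
A hedged, self-contained sketch of the pattern behind the warning (all
names invented, not the real acme.c code):

  #include <stdio.h>

  struct task  { int id; };
  struct ctx_t { struct task task; };

  static struct ctx_t *lookup_ctx(int id)
  {
      static struct ctx_t one = { { 42 } };
      return id == 42 ? &one : NULL;
  }

  static void wake(struct task *t) { printf("wake task %d\n", t->id); }

  static void chall_ready(int id)
  {
      struct ctx_t *ctx = NULL;     /* the init silences -Wmaybe-uninitialized */

      if (id)
          ctx = lookup_ctx(id);

      if (!ctx)                     /* ...and guards the dereference below */
          return;

      wake(&ctx->task);
  }

  int main(void)
  {
      chall_ready(42);
      chall_ready(0);
      return 0;
  }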

Backport to 3.2.
2025-11-21 23:04:16 +01:00
William Lallemand
d77d3479ed BUG/MINOR: acme: better challenge_ready processing
Improve the challenge_ready processing:

- do a lookup directly instead of looping in the task tree
- only do a task_wakeup when every challenge is ready, to avoid starting
  the task and stopping it just after
- Compute the number of remaining challenges to set up
- Output a message giving the number of remaining challenges to set up
  and whether the task was started again.

Backport to 3.2.
2025-11-21 22:47:52 +01:00
347 changed files with 10539 additions and 4647 deletions

View File

@ -19,7 +19,7 @@ defaults
frontend h2
mode http
bind 127.0.0.1:8443 ssl crt reg-tests/ssl/common.pem alpn h2,http/1.1
bind 127.0.0.1:8443 ssl crt reg-tests/ssl/certs/common.pem alpn h2,http/1.1
default_backend h2b
backend h2b

View File

@ -28,7 +28,7 @@ jobs:
run: env SSL_LIB=${HOME}/opt/ scripts/build-curl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
make -j$(nproc) CC=gcc TARGET=linux-glibc \
USE_QUIC=1 USE_OPENSSL=1 USE_ECH=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \

77
.github/workflows/openssl-master.yml vendored Normal file
View File

@ -0,0 +1,77 @@
name: openssl master
on:
schedule:
- cron: "0 3 * * *"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v5
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb
sudo apt-get --no-install-recommends -y install libpsl-dev
- uses: ./.github/actions/setup-vtest
- name: Install OpenSSL master
run: env OPENSSL_VERSION="git-master" GIT_TYPE="branch" scripts/build-ssl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_QUIC=1 USE_OPENSSL=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/"
sudo make install
- name: Show HAProxy version
id: show-version
run: |
ldd $(which haproxy)
haproxy -vv
echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
- name: Install problem matcher for VTest
run: echo "::add-matcher::.github/vtest.json"
- name: Run VTest for HAProxy
id: vtest
run: |
# This is required for macOS which does not actually allow to increase
# the '-n' soft limit to the hard limit, thus failing to run.
ulimit -n 65536
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
for folder in ${TMPDIR:-/tmp}/haregtests-*/vtc.*; do
printf "::group::"
cat $folder/INFO
cat $folder/LOG
echo "::endgroup::"
done
exit 1
- name: Run Unit tests
id: unittests
run: |
make unit-tests
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
failed=false
shopt -s nullglob
for file in /tmp/core.*; do
failed=true
printf "::group::"
gdb -ex 'thread apply all bt full' ./haproxy $file
echo "::endgroup::"
done
if [ "$failed" = true ]; then
exit 1;
fi

View File

@ -1,32 +0,0 @@
#
# special purpose CI: test against OpenSSL built in "no-deprecated" mode
# let us run those builds weekly
#
# for example, OpenWRT uses such OpenSSL builds (those builds are smaller)
#
#
# some details might be found at NL: https://www.mail-archive.com/haproxy@formilux.org/msg35759.html
# GH: https://github.com/haproxy/haproxy/issues/367
name: openssl no-deprecated
on:
schedule:
- cron: "0 0 * * 4"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: ./.github/actions/setup-vtest
- name: Compile HAProxy
run: |
make DEFINE="-DOPENSSL_API_COMPAT=0x10100000L -DOPENSSL_NO_DEPRECATED" -j3 CC=gcc ERR=1 TARGET=linux-glibc USE_OPENSSL=1
- name: Run VTest
run: |
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel

175
CHANGELOG
View File

@ -1,6 +1,181 @@
ChangeLog :
===========
2026/01/07 : 3.4-dev2
- BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
- BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
- BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
- BUG/MEDIUM: quic: Don't try to use hystart if not implemented
- CLEANUP: backend: Remove useless test on server's xprt
- CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
- CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
- REGTESTS: quic: fix a TLS stack usage
- REGTESTS: list all skipped tests including 'feature cmd' ones
- CI: github: remove openssl no-deprecated job
- CI: github: add a job to test the master branch of OpenSSL
- CI: github: openssl-master.yml misses actions/checkout
- BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
- CI: github: use git prefix for openssl-master.yml
- BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
- REGTESTS: fix error when no test are skipped
- MINOR: cpu-topo: Turn the cpu policy configuration into a struct
- MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
- MEDIUM: cpu-topo: Add a "cpu-affinity" option
- MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
- MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
- MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
- BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
- DOC: config: fix number of values for "cpu-affinity"
- MINOR: tools: add a secure implementation of memset
- MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
- MINOR: mux-h2: perform a graceful close at 75% glitches threshold
- MEDIUM: mux-h1: implement basic glitches support
- MINOR: mux-h1: perform a graceful close at 75% glitches threshold
- MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
- MINOR: cfgparse: remove useless checks on no server in backend
- OPTIM/MINOR: proxy: do not init proxy management task if unused
- MINOR: patterns: preliminary changes for reorganization
- MEDIUM: patterns: reorganize pattern reference elements
- CLEANUP: patterns: remove dead code
- OPTIM: patterns: cache the current generation
- MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
- MINOR: protocol: support a generic way to call getsockopt() on a connection
- MINOR: tcp: implement the get_opt() function
- MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
- BUG/MAJOR: set the correct generation ID in pat_ref_append().
- BUG/MINOR: backend: fix the conn_retries check for TFO
- BUG/MINOR: backend: inspect request not response buffer to check for TFO
- MINOR: net_helper: add sample converters to decode ethernet frames
- MINOR: net_helper: add sample converters to decode IP packet headers
- MINOR: net_helper: add sample converters to decode TCP headers
- MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
- MINOR: net_helper: prepare the ip.fp() converter to support more options
- MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
- MINOR: net_helper: add an option to ip.fp() to append the source address
- DOC: config: fix the length attribute name for stick tables of type binary / string
- MINOR: mworker/cli: only keep positive PIDs in proc_list
- CLEANUP: mworker: remove duplicate list.h include
- BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
- MINOR: mworker/cli: extract worker "show proc" row printer
- MINOR: cpu-topo: Factorize code
- MINOR: cpu-topo: Rename variables to better fit their usage
- BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
- BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
- MINOR: threads: Avoid using a thread group mask when stopping.
- MINOR: hlua: Add support for lua 5.5
- MEDIUM: cpu-topo: Add an optional directive for per-group affinity
- BUG/MEDIUM: mworker: can't use signals after a failed reload
- BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
- DOC: config: fix a few typos and refine cpu-affinity
- MINOR: receiver: Remove tgroup_mask from struct shard_info
- BUG/MINOR: quic: fix deprecated warning for window size keyword
2025/12/10 : 3.4-dev1
- BUG/MINOR: jwt: Missing "case" in switch statement
- DOC: configuration: ECH support details
- Revert "MINOR: quic: use dynamic cc_algo on bind_conf"
- MINOR: quic: define quic_cc_algo as const
- MINOR: quic: extract cc-algo parsing in a dedicated function
- MINOR: quic: implement cc-algo server keyword
- BUG/MINOR: quic-be: Missing keywords array NULL termination
- REGTESTS: ssl enable tls12_reuse.vtc for AWS-LC
- REGTESTS: ssl: split tls*_reuse in stateless and stateful resume tests
- BUG/MEDIUM: connection: fix "bc_settings_streams_limit" typo
- BUG/MEDIUM: config: ignore empty args in skipped blocks
- DOC: config: mention clearer that the cache's total-max-size is mandatory
- DOC: config: reorder the cache section's keywords
- BUG/MINOR: quic/ssl: crash in ClientHello callback ssl traces
- BUG/MINOR: quic-be: handshake errors without connection stream closure
- MINOR: quic: Add useful debugging traces in qc_idle_timer_do_rearm()
- REGTESTS: ssl: Move all the SSL certificates, keys, crt-lists inside "certs" directory
- REGTESTS: quic/ssl: ssl/del_ssl_crt-list.vtc supported by QUIC
- REGTESTS: quic: dynamic_server_ssl.vtc supported by QUIC
- REGTESTS: quic: issuers_chain_path.vtc supported by QUIC
- REGTESTS: quic: new_del_ssl_cafile.vtc supported by QUIC
- REGTESTS: quic: ocsp_auto_update.vtc supported by QUIC
- REGTESTS: quic: set_ssl_bug_2265.vtc supported by QUIC
- MINOR: quic: avoid code duplication in TLS alert callback
- BUG/MINOR: quic-be: missing connection stream closure upon TLS alert to send
- REGTESTS: quic: set_ssl_cafile.vtc supported by QUIC
- REGTESTS: quic: set_ssl_cert_noext.vtc supported by QUIC
- REGTESTS: quic: set_ssl_cert.vtc supported by QUIC
- REGTESTS: quic: set_ssl_crlfile.vtc supported by QUIC
- REGTESTS: quic: set_ssl_server_cert.vtc supported by QUIC
- REGTESTS: quic: show_ssl_ocspresponse.vtc supported by QUIC
- REGTESTS: quic: ssl_client_auth.vtc supported by QUIC
- REGTESTS: quic: ssl_client_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_default_server.vtc supported by QUIC
- REGTESTS: quic: new_del_ssl_crlfile.vtc supported by QUIC
- REGTESTS: quic: ssl_frontend_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_server_samples.vtc supported by QUIC
- REGTESTS: quic: ssl_simple_crt-list.vtc supported by QUIC
- REGTESTS: quic: ssl_sni_auto.vtc code provision for QUIC
- REGTESTS: quic: ssl_curve_name.vtc supported by QUIC
- REGTESTS: quic: add_ssl_crt-list.vtc supported by QUIC
- REGTESTS: add ssl_ciphersuites.vtc (TCP & QUIC)
- BUG/MINOR: quic: do not set first the default QUIC curves
- REGTESTS: quic/ssl: Add ssl_curves_selection.vtc
- BUG/MINOR: ssl: Don't allow to set NULL sni
- MEDIUM: quic: Add connection as argument when qc_new_conn() is called
- MINOR: ssl: Add a function to hash SNIs
- MINOR: ssl: Store hash of the SNI for cached TLS sessions
- MINOR: ssl: Compare hashes instead of SNIs when a session is cached
- MINOR: connection/ssl: Store the SNI hash value in the connection itself
- MEDIUM: tcpcheck/backend: Get the connection SNI before initializing SSL ctx
- BUG/MEDIUM: ssl: Don't reuse TLS session if the connection's SNI differs
- MEDIUM: ssl/server: No longer store the SNI of cached TLS sessions
- BUG/MINOR: log: Dump good %B and %U values in logs
- BUG/MEDIUM: http-ana: Don't close server connection on read0 in TUNNEL mode
- DOC: config: Fix description of the spop mode
- DOC: config: Improve spop mode documentation
- MINOR: ssl: Split ssl_crt-list_filters.vtc in two files by TLS version
- REGTESTS: quic: tls13_ssl_crt-list_filters.vtc supported by QUIC
- BUG/MEDIUM: h3: do not access QCS <sd> if not allocated
- CLEANUP: mworker/cli: remove useless variable
- BUG/MINOR: mworker/cli: 'show proc' is limited by buffer size
- BUG/MEDIUM: ssl: Always check the ALPN after handshake
- MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag
- BUG/MEDIUM: ssl: Don't store the ALPN for check connections
- BUG/MEDIUM: ssl: Don't resume session for check connections
- CLEANUP: improvements to the alignment macros
- CLEANUP: use the automatic alignment feature
- CLEANUP: more conversions and cleanups for alignment
- BUG/MEDIUM: h3: fix access to QCS <sd> definitely
- MINOR: h2/trace: emit a trace of the received RST_STREAM type
2025/11/26 : 3.4-dev0
- MINOR: version: mention that it's development again
2025/11/26 : 3.3.0
- BUG/MINOR: acme: better challenge_ready processing
- BUG/MINOR: acme: warning ctx may be used uninitialized
- MINOR: httpclient: complete the https log
- BUG/MEDIUM: server: do not use default SNI if manually set
- BUG/MINOR: freq_ctr: Prevent possible signed overflow in freq_ctr_overshoot_period
- DOC: ssl: Document the restrictions on 0RTT.
- DOC: ssl: Note that 0rtt works fork QUIC with QuicTLS too.
- BUG/MEDIUM: quic: do not prevent sending if no BE token
- BUG/MINOR: quic/server: free quic_retry_token on srv drop
- MINOR: quic: split global CID tree between FE and BE sides
- MINOR: quic: use separate global quic_conns FE/BE lists
- MINOR: quic: add "clo" filter on show quic
- MINOR: quic: dump backend connections on show quic
- MINOR: quic: mark backend conns on show quic
- BUG/MINOR: quic: fix uninit list on show quic handler
- BUG/MINOR: quic: release BE quic_conn on connect failure
- BUG/MINOR: server: fix srv_drop() crash on partially init srv
- BUG/MINOR: h3: do no crash on forwarding multiple chained response
- BUG/MINOR: h3: handle properly buf alloc failure on response forwarding
- BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set
- BUG/MINOR: acme: fix ha_alert() call
- Revert "BUG/MEDIUM: server/ssl: Unset the SNI for new server connections if none is set"
- BUG/MINOR: sock-inet: ignore conntrack for transparent sockets on Linux
- DEV: patchbot: prepare for new version 3.4-dev
- DOC: update INSTALL with the range of gcc compilers and openssl versions
- MINOR: version: mention that 3.3 is stable now
2025/11/21 : 3.3-dev14
- MINOR: stick-tables: Rename stksess shards to use buckets
- MINOR: quic: do not use quic_newcid_from_hash64 on BE side

18
INSTALL
View File

@ -111,7 +111,7 @@ HAProxy requires a working GCC or Clang toolchain and GNU make :
may want to retry with "gmake" which is the name commonly used for GNU make
on BSD systems.
- GCC >= 4.7 (up to 14 tested). Older versions are no longer supported due to
- GCC >= 4.7 (up to 15 tested). Older versions are no longer supported due to
the latest mt_list update which only uses c11-like atomics. Newer versions
may sometimes break due to compiler regressions or behaviour changes. The
version shipped with your operating system is very likely to work with no
@ -237,7 +237,7 @@ to forcefully enable it using "USE_LIBCRYPT=1".
-----------------
For SSL/TLS, it is necessary to use a cryptography library. HAProxy currently
supports the OpenSSL library, and is known to build and work with branches
1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, and 3.0 to 3.5. It is recommended to use
1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, and 3.0 to 3.6. It is recommended to use
at least OpenSSL 1.1.1 to have support for all SSL keywords and configuration
in HAProxy. OpenSSL follows a long-term support cycle similar to HAProxy's,
and each of the branches above receives its own fixes, without forcing you to
@ -259,11 +259,15 @@ reported to work as well. While there are some efforts from the community to
ensure they work well, OpenSSL remains the primary target and this means that
in case of conflicting choices, OpenSSL support will be favored over other
options. Note that QUIC is not fully supported when haproxy is built with
OpenSSL < 3.5 version. In this case, QUICTLS is the preferred alternative.
As of writing this, the QuicTLS project follows OpenSSL very closely and provides
update simultaneously, but being a volunteer-driven project, its long-term future
does not look certain enough to convince operating systems to package it, so it
needs to be build locally. See the section about QUIC in this document.
OpenSSL < 3.5.2 version. In this case, QUICTLS or AWS-LC are the preferred
alternatives. As of writing this, the QuicTLS project follows OpenSSL very
closely and provides updates simultaneously, but being a volunteer-driven
project, its long-term future does not look certain enough to convince
operating systems to package it, so it needs to be built locally. Recent
versions of AWS-LC (>= 1.22 and the FIPS branches) are pretty complete and
generally more performant than other OpenSSL derivatives, but may behave
slightly differently, particularly when dealing with outdated setups. See
the section about QUIC in this document.
A fifth option is wolfSSL (https://github.com/wolfSSL/wolfssl). It is the only
supported alternative stack not based on OpenSSL, yet which implements almost

View File

@ -643,7 +643,7 @@ ifneq ($(USE_OPENSSL:0=),)
OPTIONS_OBJS += src/ssl_sock.o src/ssl_ckch.o src/ssl_ocsp.o src/ssl_crtlist.o \
src/ssl_sample.o src/cfgparse-ssl.o src/ssl_gencert.o \
src/ssl_utils.o src/jwt.o src/ssl_clienthello.o src/jws.o src/acme.o \
src/ssl_trace.o
src/ssl_trace.o src/jwe.o
endif
ifneq ($(USE_ENGINE:0=),)
@ -992,7 +992,7 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/cfgcond.o src/proto_udp.o src/lb_fwlc.o src/ebmbtree.o \
src/proto_uxdg.o src/cfgdiag.o src/sock_unix.o src/sha1.o \
src/lb_fas.o src/clock.o src/sock_inet.o src/ev_select.o \
src/lb_map.o src/shctx.o src/hpack-dec.o \
src/lb_map.o src/shctx.o src/hpack-dec.o src/net_helper.o \
src/arg.o src/signal.o src/fix.o src/dynbuf.o src/guid.o \
src/cfgparse-tcp.o src/lb_ss.o src/chunk.o src/counters.o \
src/cfgparse-unix.o src/regex.o src/fcgi.o src/uri_auth.o \

View File

@ -1,2 +1,2 @@
$Format:%ci$
2025/11/21
2026/01/07

View File

@ -1 +1 @@
3.3-dev14
3.4-dev2

View File

@ -55,7 +55,7 @@ usage() {
echo " -S, --master-socket <path> Use the master socket at <path> (default: ${MASTER_SOCKET})"
echo " -d, --debug Debug mode, set -x"
echo " -t, --timeout Timeout (socat -t) (default: ${TIMEOUT})"
echo " -s, --silent Slient mode (no output)"
echo " -s, --silent Silent mode (no output)"
echo " -v, --verbose Verbose output (output from haproxy on failure)"
echo " -vv Even more verbose output (output from haproxy on success and failure)"
echo " -h, --help This help"

View File

@ -59,9 +59,9 @@ struct ring_v2 {
struct ring_v2a {
size_t size; // storage size
size_t rsvd; // header length (used for file-backed maps)
size_t tail __attribute__((aligned(64))); // storage tail
size_t head __attribute__((aligned(64))); // storage head
char area[0] __attribute__((aligned(64))); // storage area begins immediately here
size_t tail ALIGNED(64); // storage tail
size_t head ALIGNED(64); // storage head
char area[0] ALIGNED(64); // storage area begins immediately here
};
/* display the message and exit with the code */

View File

@ -0,0 +1,70 @@
BEGININPUT
BEGINCONTEXT
HAProxy's development cycle consists in one development branch, and multiple
maintenance branches.
All the development is made into the development branch exclusively. This
includes mostly new features, doc updates, cleanups and of course, fixes.
The maintenance branches, also called stable branches, never see any
development, and only receive ultra-safe fixes for bugs that affect them,
that are picked from the development branch.
Branches are numbered in 0.1 increments. Every 6 months, upon a new major
release, the development branch enters maintenance and a new development branch
is created with a new, higher version. The current development branch is
3.4-dev, and maintenance branches are 3.3 and below.
Fixes created in the development branch for issues that were introduced in an
earlier branch are applied in descending order to each and every version till
that branch that introduced the issue: 3.3 first, then 3.2, then 3.1, then 3.0
and so on. This operation is called "backporting". A fix for an issue is never
backported beyond the branch that introduced the issue. An important point is
that the project maintainers really aim at zero regression in maintenance
branches, so they're never willing to take any risk backporting patches that
are not deemed strictly necessary.
Fixes consist of patches managed using the Git version control tool and are
identified by a Git commit ID and a commit message. For this reason we
indistinctly talk about backporting fixes, commits, or patches; all mean the
same thing. When mentioning commit IDs, developers always use a short form
made of the first 8 characters only, and expect the AI assistant to do the
same.
It seldom happens that some fixes depend on changes that were brought by other
patches that were not in some branches and that will need to be backported as
well for the fix to work. In this case, such information is explicitly provided
in the commit message by the patch's author in natural language.
Developers are serious and always indicate if a patch needs to be backported.
Sometimes they omit the exact target branch, or they will say that the patch is
"needed" in some older branch, but it means the same. If a commit message
doesn't mention any backport instructions, it means that the commit does not
have to be backported. And patches that are not strictly bug fixes nor doc
improvements are normally not backported. For example, fixes for design
limitations, architectural improvements and performance optimizations are
considered too risky for a backport. Finally, all bug fixes are tagged as
"BUG" at the beginning of their subject line. Patches that are not tagged as
such are not bugs, and must never be backported unless their commit message
explicitly requests so.
ENDCONTEXT
A developer is reviewing the development branch, trying to spot which commits
need to be backported to maintenance branches. This person is already expert
on HAProxy and everything related to Git, patch management, and the risks
associated with backports, so he doesn't want to be told how to proceed nor to
review the contents of the patch.
The goal for this developer is to get some help from the AI assistant to save
some precious time on this tedious review work. In order to do a better job, he
needs an accurate summary of the information and instructions found in each
commit message. Specifically he needs to figure if the patch fixes a problem
affecting an older branch or not, if it needs to be backported, if so to which
branches, and if other patches need to be backported along with it.
The indented text block below after an "id" line and starting with a Subject line
is a commit message from the HAProxy development branch that describes a patch
applied to that branch, starting with its subject line, please read it carefully.

View File

@ -0,0 +1,29 @@
ENDINPUT
BEGININSTRUCTION
You are an AI assistant that follows instruction extremely well. Help as much
as you can, responding to a single question using a single response.
The developer wants to know if he needs to backport the patch above to fix
maintenance branches, for which branches, and what possible dependencies might
be mentioned in the commit message. Carefully study the commit message and its
backporting instructions if any (otherwise it should probably not be backported),
then provide a very concise and short summary that will help the developer decide
to backport it, or simply to skip it.
Start by explaining in one or two sentences what you recommend for this one and why.
Finally, based on your analysis, give your general conclusion as "Conclusion: X"
where X is a single word among:
- "yes", if you recommend to backport the patch right now either because
it explicitly states this or because it's a fix for a bug that affects
a maintenance branch (3.3 or lower);
- "wait", if this patch explicitly mentions that it must be backported, but
only after waiting some time.
- "no", if nothing clearly indicates a necessity to backport this patch (e.g.
lack of explicit backport instructions, or it's just an improvement);
- "uncertain" otherwise for cases not covered above
ENDINSTRUCTION
Explanation:

View File

@ -2,8 +2,8 @@
HAProxy
Configuration Manual
----------------------
version 3.3
2025/11/21
version 3.4
2026/01/07
This document covers the configuration language as implemented in the version
@ -647,8 +647,8 @@ which must be placed before other sections, but it may be repeated if needed.
In addition, some automatic identifiers may automatically be assigned to some
of the created objects (e.g. proxies), and by reordering sections, their
identifiers will change. These ones appear in the statistics for example. As
such, the configuration below will assign "foo" ID number 1 and "bar" ID number
2, which will be swapped if the two sections are reversed:
such, the configuration below will assign "foo" an ID number smaller than its
"bar" counterpart. This will be swapped if the two sections are reversed:
listen foo
bind :80
@ -1747,6 +1747,7 @@ The following keywords are supported in the "global" section :
- ca-base
- chroot
- cluster-secret
- cpu-affinity
- cpu-map
- cpu-policy
- cpu-set
@ -1786,6 +1787,7 @@ The following keywords are supported in the "global" section :
- lua-load
- lua-load-per-thread
- lua-prepend-path
- max-threads-per-group
- mworker-max-reloads
- nbthread
- node
@ -1875,6 +1877,8 @@ The following keywords are supported in the "global" section :
- tune.events.max-events-at-once
- tune.fail-alloc
- tune.fd.edge-triggered
- tune.h1.be.glitches-threshold
- tune.h1.fe.glitches-threshold
- tune.h1.zero-copy-fwd-recv
- tune.h1.zero-copy-fwd-send
- tune.h2.be.glitches-threshold
@ -2223,7 +2227,30 @@ cpu-map [auto:]<thread-group>[/<thread-set>] <cpu-set>[,...] [...]
cpu-map 4/1-40 40-79,120-159
cpu-policy <policy>
cpu-affinity <affinity>
Defines how you want threads to be bound to cpus.
It currently accepts the following values :
- per-core: each thread will be bound to all the hardware threads of one core.
- per-group: each thread will be bound to all the hardware threads of the
group. This is the default unless "threads-per-core 1" is used in
"cpu-policy". "per-group" accepts an optional argument, to specify how CPUs
should be allocated. When a list of CPUs is larger than the maximum allowed
number of CPUs per group and has to be split between multiple groups, an
extra option allows to choose how the groups will be bound to those CPUs:
- auto: each thread group will only be assigned a fair share of contiguous
CPU cores that are dedicated to it and not shared with other groups. This
is the default as it generally is more optimal.
- loose: each group will still be allowed to use any CPU in the list. This
generally causes more contention, but may sometimes help deal better with
parasitic loads running on the same CPUs.
- auto: "per-group" will be used, unless "threads-per-core 1" is used in
"cpu-policy", in which case "per-core" will be used. This is the default.
- per-thread: each thread will be bound to one hardware thread only. If
"threads-per-core 1" is used in "cpu-policy", then each thread will be
bound to one hardware thread of a different core.
- per-ccx: each thread will be bound to all the hardware threads of a CCX.
cpu-policy <policy> [threads-per-core 1 | auto]
Selects the CPU allocation policy to be used.
On multi-CPU systems, there can be plenty of reasons for not using all
@ -2375,6 +2402,13 @@ cpu-policy <policy>
easily. Note that if a single cluster is present, it
will still be fully used.
An optional keyword can be added, "threads-per-core". It can accept two
values, "1" and "auto". If set to 1, then only one thread per core will be
created, irrespective of how many hardware threads the core has. If set
to auto, then one thread per hardware thread will be created.
If no affinity is specified, and threads-per-core 1 is used, then by
default the affinity will be per-core.
See also: "cpu-map", "cpu-set", "nbthread"
cpu-set <directive>...
@ -2845,7 +2879,7 @@ limited-quic
layer supports most of the necessary TLS operations, albeit without QUIC
0-RTT capability.
This feature is primarily targetted for OpenSSL prior to version 3.5.2, where
This feature is primarily targeted for OpenSSL prior to version 3.5.2, where
QUIC API was not implemented or only partially. The compatibility layer can
still be activated for version 3.5.2 and above, but this is probably
unnecessary.
@ -2980,6 +3014,14 @@ master-worker no-exit-on-failure
it is only meant for debugging and could put the master process in an
abnormal state.
max-threads-per-group <number>
Defines the maximum number of threads in a thread group. Unless the number
of thread groups is fixed with the thread-groups directive, haproxy will
create more thread groups if needed. The default and maximum value is 64.
Having a lower value means more groups will potentially be created, which
can help improve performance, as a number of data structures are per
thread group, and that will mean less contention.
mworker-max-reloads <number>
In master-worker mode, this option limits the number of time a worker can
survive to a reload. If the worker did not leave after a reload, once its
@ -4163,9 +4205,49 @@ tune.glitches.kill.cpu-usage <number>
will automatically get killed. A rule of thumb would be to set this value to
twice the usually observed CPU usage, or the commonly observed CPU usage plus
half the idle one (i.e. if CPU commonly reaches 60%, setting 80 here can make
sense). This parameter has no effect without tune.h2.fe.glitches-threshold or
tune.quic.fe.sec.glitches-threshold. See also the global parameters
"tune.h2.fe.glitches-threshold" and "tune.quic.fe.sec.glitches-threshold".
sense). This parameter has no effect without tune.h2.fe.glitches-threshold,
tune.quic.fe.sec.glitches-threshold or tune.h1.fe.glitches-threshold. See
also the global parameters "tune.h2.fe.glitches-threshold",
"tune.h1.fe.glitches-threshold" and "tune.quic.fe.sec.glitches-threshold".
tune.h1.be.glitches-threshold <number>
Sets the threshold for the number of glitches on a HTTP/1 backend connection,
after which that connection will automatically be killed. This allows to
automatically kill misbehaving connections without having to write explicit
rules for them. The default value is zero, indicating that no threshold is
set so that no event will cause a connection to be closed. Typical events
include improperly formatted headers that had been nevertheless accepted by
"accept-unsafe-violations-in-http-response". Any non-zero value here should
probably be in the hundreds or thousands to be effective without affecting
slightly bogus servers. It is also possible to only kill connections when the
CPU usage crosses a certain level, by using "tune.glitches.kill.cpu-usage".
Note that a graceful close is attempted at 75% of the configured threshold.
This ensures that a slightly faulty connection will stop being used after
some time without risking to interrupt ongoing transfers.
See also: tune.h1.fe.glitches-threshold, bc_glitches, and
tune.glitches.kill.cpu-usage
tune.h1.fe.glitches-threshold <number>
Sets the threshold for the number of glitches on a HTTP/1 frontend connection
after which that connection will automatically be killed. This allows to
automatically kill misbehaving connections without having to write explicit
rules for them. The default value is zero, indicating that no threshold is
set so that no event will cause a connection to be closed. Typical events
include improperly formatted headers that had been nevertheless accepted by
"accept-unsafe-violations-in-http-request". Any non-zero value here should
probably be in the hundreds or thousands to be effective without affecting
slightly bogus clients. It is also possible to only kill connections when the
CPU usage crosses a certain level, by using "tune.glitches.kill.cpu-usage".
Note that a graceful close is attempted at 75% of the configured threshold.
This ensures that a slightly non-compliant client will have the opportunity
to create a new connection and continue to work unaffected, without ever
triggering the hard close that would risk interrupting ongoing transfers.
See also: tune.h1.be.glitches-threshold, fc_glitches, and
tune.glitches.kill.cpu-usage
tune.h1.zero-copy-fwd-recv { on | off }
Enables ('on') or disables ('off') the zero-copy receives of data for the H1
@ -4189,7 +4271,10 @@ tune.h2.be.glitches-threshold <number>
zero value here should probably be in the hundreds or thousands to be
effective without affecting slightly bogus servers. It is also possible to
only kill connections when the CPU usage crosses a certain level, by using
"tune.glitches.kill.cpu-usage".
"tune.glitches.kill.cpu-usage". Note that a graceful close is attempted at
75% of the configured threshold by advertising a GOAWAY for a future stream.
This ensures that a slightly faulty connection will stop being used after
some time without risking to interrupt ongoing transfers.
See also: tune.h2.fe.glitches-threshold, bc_glitches, and
tune.glitches.kill.cpu-usage
@ -4246,7 +4331,11 @@ tune.h2.fe.glitches-threshold <number>
zero value here should probably be in the hundreds or thousands to be
effective without affecting slightly bogus clients. It is also possible to
only kill connections when the CPU usage crosses a certain level, by using
"tune.glitches.kill.cpu-usage".
"tune.glitches.kill.cpu-usage". Note that a graceful close is attempted at
75% of the configured threshold by advertising a GOAWAY for a future stream.
This ensures that a slightly non-compliant client will have the opportunity
to create a new connection and continue to work unaffected, without ever
triggering the hard close, which would risk interrupting ongoing transfers.
See also: tune.h2.be.glitches-threshold, fc_glitches, and
tune.glitches.kill.cpu-usage
@ -4834,7 +4923,7 @@ tune.quic.fe.cc.max-win-size <size>
The default value is 480k.
See also the "quic-cc-algo" bind option.
See also the "quic-cc-algo" bind and server options.
tune.quic.frontend.default-max-window-size <size> (deprecated)
This keyword has been deprecated in 3.3 and will be removed in 3.5. It is
@ -5022,7 +5111,7 @@ tune.quic.fe.tx.pacing { on | off }
deactivate it for networks with very high bandwidth/low latency
characteristics to prevent unwanted delay and reduce CPU consumption.
See also the "quic-cc-algo" bind option.
See also the "quic-cc-algo" bind and server options.
tune.quic.disable-tx-pacing (deprecated)
This keyword has been deprecated in 3.3 and will be removed in 3.5. It is
@ -5731,6 +5820,7 @@ errorloc302 X X X X
errorloc303 X X X X
error-log-format X X X -
force-persist - - X X
force-be-switch - X X -
filter - X X X
fullconn X - X X
guid - X X X
@ -7014,6 +7104,9 @@ default_backend <backend>
used when no rule has matched. It generally is the dynamic backend which
will catch all undetermined requests.
If a backend is disabled or unpublished, default_backend rules targeting it
will be ignored and stream processing will remain on the original proxy.
Example :
use_backend dynamic if url_dyn
@ -7057,7 +7150,11 @@ disabled
is possible to disable many instances at once by adding the "disabled"
keyword in a "defaults" section.
See also : "enabled"
By default, a disabled backend cannot be selected for content-switching.
However, a portion of the traffic can bypass this restriction when
"force-be-switch" is used.
See also : "enabled", "force-be-switch"
dispatch <address>:<port> (deprecated)
@ -7467,6 +7564,19 @@ force-persist { if | unless } <condition>
and section 7 about ACL usage.
force-be-switch { if | unless } <condition>
Allow content switching to select a backend instance even if it is disabled
or unpublished. This rule can be used by admins to test traffic to services
prior to exposing them to the outside world.
May be used in the following contexts: tcp, http
May be used in sections: defaults | frontend | listen | backend
no | yes | yes | no
See also : "disabled"
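Example (a rough sketch; the backend names and the testers' network are
hypothetical, and "app_v2" is assumed to be currently unpublished or
disabled):
    frontend www
        bind :80
        acl internal_tester src 10.0.0.0/8
        # let internal testers reach app_v2 even before it is published
        force-be-switch if internal_tester
        use_backend app_v2 if { path_beg /v2 }
        default_backend app_v1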
filter <name> [param*]
Add the filter <name> in the filter list attached to the proxy.
@ -8613,9 +8723,11 @@ id <value>
Arguments : none
Set a persistent ID for the proxy. This ID must be unique and positive.
An unused ID will automatically be assigned if unset. The first assigned
value will be 1. This ID is currently only returned in statistics.
Set a persistent ID for the proxy. This ID must be unique and positive. An
unused ID will automatically be assigned if unset. Due to a historical
behavior, value 1 is not used unless explicitly set. Thus, the lowest value
automatically assigned will be 2. This ID is currently only returned in
statistics.
ignore-persist { if | unless } <condition>
@ -9138,8 +9250,10 @@ mode { tcp|http|log|spop }
server features are supported, but not TCP or HTTP specific ones.
spop When used in a backend section, it will turn the backend into a
log backend. This mode is mandatory and automatically set, if
necessary, for backends referenced by SPOE engines.
spop backend. This mode is mandatory if the backend contains
SPOA servers, but when mode is tcp, it will automatically be
converted to mode spop if such servers are detected.
When doing content switching, it is mandatory that the frontend and the
backend are in the same mode (generally HTTP), otherwise the configuration
@ -14659,14 +14773,17 @@ use_backend <backend> [{if | unless} <condition>]
There may be as many "use_backend" rules as desired. All of these rules are
evaluated in their declaration order, and the first one which matches will
assign the backend.
assign the backend. This is the case even if the backend is considered down.
However, if a matching rule targets a disabled or unpublished backend, it is
ignored and rule evaluation continues.
In the first form, the backend will be used if the condition is met. In the
second form, the backend will be used if the condition is not met. If no
condition is valid, the backend defined with "default_backend" will be used.
If no default backend is defined, either the servers in the same section are
used (in case of a "listen" section) or, in case of a frontend, no server is
used and a 503 service unavailable response is returned.
condition is valid, the backend defined with "default_backend" will be used
unless it is disabled or unpublished. If no default backend is available,
either the servers in the same section are used (in case of a "listen"
section) or, in case of a frontend, no server is used and a 503 service
unavailable response is returned.
Note that it is possible to switch from a TCP frontend to an HTTP backend. In
this case, either the frontend has already checked that the protocol is HTTP,
@ -16513,6 +16630,10 @@ allow-0rtt
you should only allow it for requests that are safe to replay, i.e. requests
that are idempotent. You can use the "wait-for-handshake" action for any
request that wouldn't be safe with early data.
With QUIC, 0rtt is supported with QuicTLS, OpenSSL >= 3.5.2 and AWS-LC.
With TCP/TLS, 0rtt is only supported with OpenSSL, and requires that the
client sends an ALPN, otherwise the early data won't be considered before
the handshake happens.
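For instance, a minimal sketch (certificate path and port are placeholders)
that accepts early data but holds non-idempotent requests until the handshake
completes:
    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1 allow-0rtt
        http-request wait-for-handshake unless METH_GET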
alpn <protocols>
This enables the TLS ALPN extension and advertises the specified protocol
@ -16937,9 +17058,10 @@ ech <dir> [ EXPERIMENTAL ]
See https://datatracker.ietf.org/doc/draft-ietf-tls-esni/
This is an experimental feature, which requires the
"expose-experimental-directives" option in the global section. It also
necessitates an OpenSSL version that supports ECH, and HAProxy must be
compiled with USE_ECH=1.
"expose-experimental-directives" option in the global section.
It also necessitates an OpenSSL version that supports ECH
(https://github.com/openssl/openssl/tree/feature/ech), and HAProxy must be
compiled with USE_ECH=1. The ECH API of AWS-LC is not supported.
Example:
$ openssl ech -public_name foobar.com -out /etc/haproxy/echkeydir/foobar.com.ech
@ -17424,6 +17546,19 @@ tcp-md5sig <password>
introduction of spoofed TCP segments into the connection stream. But it can
be useful for any very long-lived TCP connections.
tcp-ss <mode>
Sets the TCP Save SYN option for all incoming connections instantiated from
this listening socket. This option is available on Linux since version 4.3.
It instructs the kernel to try to keep a copy of the incoming IP packet
containing the TCP SYN flag, for later inspection via the "fc_saved_syn"
sample fetch function. The option supports 3 modes:
- 0 SYN packet saving is disabled, this is the default
- 1 SYN packet saving is enabled, and contains IP and TCP headers
- 2 SYN packet saving is enabled, and contains ETH, IP and TCP headers
This only works for regular TCP connections, and is ignored for other
protocols (e.g. UNIX sockets). See also "fc_saved_syn".
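As an illustration (port, variable name and returned fields are arbitrary),
the saved SYN can be captured once per connection and inspected later:
    frontend test
        mode http
        bind :4445 tcp-ss 1
        tcp-request connection set-var(sess.syn) fc_saved_syn
        http-request return status 200 content-type text/plain lf-string \
            "ttl=%[var(sess.syn),ip.ttl] win=%[var(sess.syn),ip.data,tcp.win]\n"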
tcp-ut <delay>
Sets the TCP User Timeout for all incoming connections instantiated from this
listening socket. This option is available on Linux since version 2.6.37. It
@ -17741,6 +17876,8 @@ allow-0rtt
Allow sending early data to the server when using TLS 1.3.
Note that early data will be sent only if the client used early data, or
if the backend uses "retry-on" with the "0rtt-rejected" keyword.
With QUIC, 0rtt is supported with QuicTLS, OpenSSL >= 3.5.2 and AWS-LC.
With TCP/TLS, 0rtt is only supported with OpenSSL.
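A possible sketch (the server address is a placeholder, and certificate
verification is disabled only for brevity):
    backend be_tls
        retry-on 0rtt-rejected
        server app1 203.0.113.5:443 ssl verify none alpn h2 allow-0rtt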
alpn <protocols>
May be used in the following contexts: tcp, http
@ -18815,6 +18952,16 @@ proto <name>
See also "ws" to use an alternative protocol for websocket streams.
quic-cc-algo { cubic | newreno | bbr | nocc }[(<args,...>)]
This is a QUIC-specific setting to select the congestion control algorithm
for any connection targeting this server. The algorithms are similar to those
used by TCP. See the bind option with the same name for a complete
description of all customization options.
Default value: cubic
See also: "tune.quic.be.tx.pacing" and "tune.quic.be.cc.max-win-size"
redir <prefix>
May be used in the following contexts: http
@ -19654,16 +19801,7 @@ the corresponding http-request and http-response actions.
cache <name>
Declare a cache section, allocate a shared cache memory named <name>, the
size of cache is mandatory.
total-max-size <megabytes>
Define the size in RAM of the cache in megabytes. This size is split in
blocks of 1kB which are used by the cache entries. Its maximum value is 4095.
max-object-size <bytes>
Define the maximum size of the objects to be cached. Must not be greater than
an half of "total-max-size". If not set, it equals to a 256th of the cache size.
All objects with sizes larger than "max-object-size" will not be cached.
size of the cache is mandatory (see the "total-max-size" keyword below).
max-age <seconds>
Define the maximum expiration duration. The expiration is set as the lowest
@ -19672,6 +19810,16 @@ max-age <seconds>
seconds, which means that you can't cache an object more than 60 seconds by
default.
max-object-size <bytes>
Define the maximum size of the objects to be cached. Must not be greater than
half of "total-max-size". If not set, it defaults to a 256th of the cache size.
All objects with sizes larger than "max-object-size" will not be cached.
max-secondary-entries <number>
Define the maximum number of simultaneous secondary entries with the same
primary key in the cache. This requires the vary support to be enabled. Its
default value is 10 and it must be set to a strictly positive integer.
process-vary <on/off>
Enable or disable the processing of the Vary header. When disabled, a response
containing such a header will never be cached. When enabled, we need to calculate
@ -19681,10 +19829,9 @@ process-vary <on/off>
the contents of the 'accept-encoding', 'referer' and 'origin' headers for
now. The default value is off (disabled).
max-secondary-entries <number>
Define the maximum number of simultaneous secondary entries with the same primary
key in the cache. This needs the vary support to be enabled. Its default value is 10
and should be passed a strictly positive integer.
total-max-size <megabytes>
Define the size in RAM of the cache in megabytes. This size is split in
blocks of 1kB which are used by the cache entries. Its maximum value is 4095.
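As a rough sketch, a complete setup could tie a cache section to a backend
via the cache filter and the corresponding http-request/http-response actions
(names, sizes and the server address are placeholders):
    cache static_assets
        total-max-size 64
        max-object-size 131072
        max-age 240

    backend be_app
        filter cache static_assets
        http-request cache-use static_assets
        http-response cache-store static_assets
        server app1 192.0.2.10:8080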
6.2.2. Proxy section
@ -20333,6 +20480,8 @@ The following keywords are supported:
51d.single(prop[,prop*]) string string
add(value) integer integer
add_item(delim[,var[,suff]]) string string
aes_cbc_dec(bits,nonce,key[,<aad>]) binary binary
aes_cbc_enc(bits,nonce,key[,<aad>]) binary binary
aes_gcm_dec(bits,nonce,key,aead_tag[,aad]) binary binary
aes_gcm_enc(bits,nonce,key,aead_tag[,aad]) binary binary
and(value) integer integer
@ -20358,6 +20507,12 @@ debug([prefix][,destination]) any same
digest(algorithm) binary binary
div(value) integer integer
djb2([avalanche]) binary integer
eth.data binary binary
eth.dst binary binary
eth.hdr binary binary
eth.proto binary integer
eth.src binary binary
eth.vlan binary integer
even integer boolean
field(index,delimiters[,count]) string string
fix_is_valid binary boolean
@ -20370,9 +20525,21 @@ htonl integer integer
http_date([offset[,unit]]) integer string
iif(true,false) boolean string
in_table([table]) any boolean
ip.data binary binary
ip.df binary integer
ip.dst binary address
ip.fp binary binary
ip.hdr binary binary
ip.proto binary integer
ip.src binary address
ip.tos binary integer
ip.ttl binary integer
ip.ver binary integer
ipmask(mask4[,mask6]) address address
json([input-code]) string string
json_query(json_path[,output_type]) string _outtype_
jwt_decrypt_cert(<cert>) string binary
jwt_decrypt_secret(<secret>) string binary
jwt_header_query([json_path[,output_type]]) string string
jwt_payload_query([json_path[,output_type]]) string string
-- keyword -------------------------------------+- input type + output type -
@ -20455,6 +20622,18 @@ table_server_id([table]) any integer
table_sess_cnt([table]) any integer
table_sess_rate([table]) any integer
table_trackers([table]) any integer
tcp.dst binary integer
tcp.flags binary integer
tcp.options.mss binary integer
tcp.options.sack binary integer
tcp.options.tsopt binary integer
tcp.options.tsval binary integer
tcp.options.wscale binary integer
tcp.options.wsopt binary integer
tcp.options_list binary binary
tcp.seq binary integer
tcp.src binary integer
tcp.win binary integer
ub64dec string string
ub64enc string string
ungrpc(field_number[,field_type]) binary binary / int
@ -20529,6 +20708,31 @@ add_item(<delim>[,<var>[,<suff>]])
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)'
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1)
aes_cbc_dec(<bits>,<nonce>,<key>[,<aad>])
Decrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. All other parameters need to be
base64 encoded and the returned result is in raw byte format. The <aad>
parameter is optional. If the <aad> validation fails, the converter doesn't
return any data.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Decrypted-Text %[var(txn.enc),\
aes_cbc_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_cbc_enc(<bits>,<nonce>,<key>[,<aad>])
Encrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. <nonce>, <key> and <aad>
parameters must be base64 encoded.
The <aad> parameter is optional. The returned result is in raw byte format.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Encrypted-Text %[var(txn.plain),\
aes_cbc_enc(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_gcm_dec(<bits>,<nonce>,<key>,<aead_tag>[,<aad>])
Decrypts the raw byte input using the AES128-GCM, AES192-GCM or AES256-GCM
algorithm, depending on the <bits> parameter. All other parameters need to be
@ -20777,6 +20981,48 @@ djb2([<avalanche>])
32-bit hash is trivial to break. See also "crc32", "sdbm", "wt6", "crc32c",
and the "hash-type" directive.
eth.data
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It skips the whole Ethernet header including possible VLANs and returns a block
of binary data starting at the layer 3 protocol (usually IPv4 or IPv6). See
also "fc_saved_syn" and "tcp-ss".
eth.dst
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the 6 bytes of the Ethernet header corresponding to the
destination address of the frame, as a binary block. See also "fc_saved_syn"
and "tcp-ss".
eth.hdr
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It trims anything past the Ethernet header but keeps possible VLANs, and
returns this header as a block of binary data. See also "fc_saved_syn" and
"tcp-ss".
eth.proto
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the protocol number (also known as EtherType) found in an Ethernet
header after any optional VLAN as an integer value. It should normally be
either 0x800 for IPv4 or 0x86DD for IPv6. See also "fc_saved_syn" and
"tcp-ss".
eth.src
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the 6 bytes of the Ethernet header corresponding to the source
address of the frame, as a binary block. See also "fc_saved_syn" and
"tcp-ss".
eth.vlan
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the last VLAN ID found in an Ethernet header as an integer value.
See also "fc_saved_syn" and "tcp-ss".
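For illustration, a sketch reusing the capture pattern shown with
"fc_saved_syn" below ("tcp-ss" must be set to 2 so that the Ethernet header
is present; the header names are arbitrary):
    frontend test
        mode http
        bind :4445 tcp-ss 2
        tcp-request connection set-var(sess.syn) fc_saved_syn
        http-request set-header X-Client-MAC  %[var(sess.syn),eth.src,hex]
        http-request set-header X-Client-VLAN %[var(sess.syn),eth.vlan]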
even
Returns a boolean TRUE if the input value of type signed integer is even
otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool".
@ -20905,6 +21151,132 @@ in_table([<table>])
elements (e.g. whether or not a source IP address or an Authorization header
was already seen).
ip.data
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It skips the IP header and any optional
options or extensions, and returns a block of binary data starting at the
transport protocol (usually TCP or UDP). See also "fc_saved_syn", "tcp-ss",
and "eth.data".
ip.df
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the integer value 1 if the DF
(don't fragment) flag is set in the IP header, and 0 otherwise. IPv6 does not
have a DF flag and does not fragment by default, so the converter always
returns 1 for IPv6. See also
"fc_saved_syn", "tcp-ss", and "eth.data".
ip.dst
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the IPv4 or IPv6 destination
address from the IPv4/v6 header. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.fp([<mode>])
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It inspects various parts of the IP header
and the TCP header to construct a sort of fingerprint of invariant parts that
can be used to distinguish between multiple apparently identical hosts. The
real-world use case is to refine the identification of misbehaving hosts
behind a shared IP address, to avoid blocking legitimate users when only one
is misbehaving and needs to be blocked. The converter builds a 7-byte binary
block based on the input. The bytes of the fingerprint are arranged like
this:
- byte 0: IP TOS field (see ip.tos)
- byte 1:
- bit 7: IPv6 (1) / IPv4 (0)
- bit 6: ip.df
- bit 5..4: 0:ip.ttl<=32; 1:ip.ttl<=64; 2:ip.ttl<=128; 3:ip.ttl<=255
- bit 3: IP options present (1) / absent (0)
- bit 2: TCP data present (1) / absent (0)
- bit 1: TCP.flags has CWR set (1) / cleared (0)
- bit 0: TCP.flags has ECE set (1) / cleared (0)
- byte 2:
- bits 7..4: TCP header length in 4-byte words
- bits 3..0: TCP window scaling + 1 (1..15) / 0 (no WS advertised)
- byte 3..4: tcp.win
- byte 5..6: tcp.options.mss, or zero if absent
The <mode> argument permits appending more information to the fingerprint. By
default, when the <mode> argument is not set or is zero, the fingerprint is
solely made of the 7 bytes described above. If <mode> is specified as another
value, it then corresponds to the sum of the following values, and the
respective components will be concatenated to the fingerprint, in the order
below:
- 1: the received TTL value is appended to the fingerprint (1 byte)
- 2: the list of TCP option kinds, as returned by "tcp.options_list",
made of 0 to 40 extra bytes, is appended to the fingerprint
- 4: the source IP address is appended to the fingerprint, which adds
4 bytes for IPv4 and 16 for IPv6.
Example: make a 12..24 bytes fingerprint using the base FP, the TTL and the
source address (1+4=5):
frontend test
mode http
bind :4445 tcp-ss 1
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain lf-string \
"src=%[var(sess.syn),ip.src] fp=%[var(sess.syn),ip.fp(5),hex]\n"
See also "fc_saved_syn", "tcp-ss", "eth.data", "ip.df", "ip.ttl", "tcp.win",
"tcp.options.mss", and "tcp.options_list".
ip.hdr
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns a block of binary data starting
with the IP header and stopping after the last option or extension, and
before the transport protocol header. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.proto
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the transport protocol number,
usually 6 for TCP or 17 for UDP. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.src
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the IPv4 or IPv6 source address
from the IPv4/v6 header. See also "fc_saved_syn", "tcp-ss", and "eth.data".
ip.tos
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns an integer corresponding to the
value of the type-of-service (TOS) field in the IPv4 header or traffic class
(TC) field in the IPv6 header. Note that in the modern internet, this field
most often contains a DSCP (Differentiated Services Codepoint) value in the
6 upper bits and the two lower are either not used, or used by IP ECN. Please
refer to RFC2474 and RFC8436 for DSCP values, and RFC3168 for IP ECN fields.
See also "fc_saved_syn", "tcp-ss", and "eth.data".
ip.ttl
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". This returns an integer corresponding to
the TTL (Time To Live) or HL (Hop Limit) field in the IPv4/IPv6 header. This
value is usually preset to a fixed value and decremented by each router that
the packet crosses. It can help infer how far away a client is when the
initial value is known. Note that most modern operating systems start with an
initial value of 64. See also "fc_saved_syn", "tcp-ss", and "eth.data".
ip.ver
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". This returns the IP version from the IP
header, normally either 4 or 6. Note that this doesn't check whether the
protocol number of the enclosing Ethernet frame matches, but since this is
expected to be used with valid packets, it is assumed that the operating
system has already verified this. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ipmask(<mask4>[,<mask6>])
Apply a mask to an IP address, and use the result for lookups and storage.
This can be used to make all hosts within a certain mask to share the same
@ -20992,22 +21364,72 @@ json_query(<json_path>[,<output_type>])
# get the value of the key 'iss' from a JWT Bearer token
http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss')
jwt_decrypt_cert(<cert>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted using the provided certificate.
The <cert> parameter must be a path to an already loaded certificate (that
can be dumped via the "dump ssl cert" CLI command). The certificate must have
its "jwt" option explicitly set to "on" (see the "jwt" crt-list option). It
can be provided directly or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP or RSA-OAEP-256.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_cert("/foo/bar.pem")]
jwt_decrypt_secret(<secret>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted using the provided base64-encoded secret. The secret can be
given as a string or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: A128KW, A192KW, A256KW, A128GCMKW,
A192GCMKW, A256GCMKW, dir. Please note that the A128KW and A192KW algorithms
are not available on AWS-LC and decryption will not work.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_secret("GawgguFyGrWKav7AX4VKUg")]
jwt_header_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) in input, either returns the decoded header
part of the token (the first base64-url encoded part of the JWT) if no
parameter is given, or performs a json_query on the decoded header part of
the token. See "json_query" converter for details about the accepted
json_path and output_type parameters.
This converter can be used with tokens that are either JWS or JWE tokens as
long as they are in the Compact Serialization format.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
jwt_payload_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) in input, either returns the decoded
payload part of the token (the second base64-url encoded part of the JWT) if
no parameter is given, or performs a json_query on the decoded payload part
of the token. See "json_query" converter for details about the accepted
json_path and output_type parameters.
When given a JSON Web Token (JWT) of the JSON Web Signature (JWS) format in
input, either returns the decoded payload part of the token (the second
base64-url encoded part of the JWT) if no parameter is given, or performs a
json_query on the decoded payload part of the token. See "json_query"
converter for details about the accepted json_path and output_type
parameters.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
@ -22114,6 +22536,88 @@ table_trackers([<table>])
concurrent connections there are from a given address for example. See also
the sc_trackers sample fetch keyword.
tcp.dst
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the destination
port present in the TCP header. See also "fc_saved_syn", "tcp-ss", and
"ip.data".
tcp.flags
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the TCP flags
from this TCP header. All 8 flags from FIN to CWR are retrieved. Each flag
may be tested using the "and()" converter. Please refer to RFC9293 for the
value of each flag. See also "fc_saved_syn", "tcp-ss", and "ip.data".
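For example, a loose sketch (ACL and header names are arbitrary) testing the
ECE and CWR bits (0x40 and 0x80) to detect an ECN-setup SYN:
    frontend test
        mode http
        bind :4445 tcp-ss 1
        tcp-request connection set-var(sess.syn) fc_saved_syn
        # 0xC0 = ECE|CWR, both set on an ECN-setup SYN
        acl ecn_setup_syn var(sess.syn),ip.data,tcp.flags,and(192) eq 192
        http-request set-header X-ECN-Setup-SYN true if ecn_setup_syn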
tcp.options.mss
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "MSS", and if found,
it returns an integer value corresponding to the advertised value in that
option, otherwise zero. The MSS is the Maximum Segment Size and indicates the
largest segment the peer may receive, in bytes. See also "fc_saved_syn",
"tcp-ss", and "ip.data".
tcp.options.sack
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Sack-Permitted",
and if found, returns 1, otherwise zero. See also "fc_saved_syn", "tcp-ss",
and "ip.data".
tcp.options.tsopt
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Timestamp", and if
found, returns 1, otherwise zero. See also "fc_saved_syn", "tcp-ss", and
"ip.data".
tcp.options.tsval
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Timestamp", and if
found, returns the timestamp value emitted by the peer, otherwise does not
return anything. Note that timestamps are 32-bit unsigned values with no
particular unit that only the peer decides on, and timestamps are expected to
be independent between different connections. See also "fc_saved_syn",
"tcp-ss", and "ip.data".
tcp.options.wscale
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Window Scale", and
if found, returns the window scaling value emitted by the peer, otherwise
zero. Note that values are not expected to be beyond 14 though no technical
limitation prevents them from being sent. In order to detect if the window
scale option was used, please use "tcp.options.wsopt". See also "tcp-ss",
"fc_saved_syn", "ip.data", and "tcp.options.wsopt".
tcp.options.wsopt
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Window Scale", and
if found, returns 1, otherwise 0. See also "fc_saved_syn", "tcp-ss",
"ip.data", and "tcp.options.wscale".
tcp.options_list
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It builds a binary sequence of all TCP option kinds in
the same order as they appear in the TCP header. It can produce from 0 to 40
bytes (in the worst case). The End-of-options is not emitted. See also
"fc_saved_syn", "tcp-ss", and "ip.data".
tcp.seq
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the sequence number
used by the peer in the TCP header. Sequence numbers are 32-bit unsigned
values. See also "fc_saved_syn", "tcp-ss", and "ip.data".
tcp.src
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the source port
present in the TCP header. See also "fc_saved_syn", "tcp-ss", and "ip.data".
tcp.win
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the window size
advertised by the peer in the TCP header. The value is provided as-is, as a
16-bit unsigned quantity, without applying the window scaling factor. See
also "fc_saved_syn", "tcp-ss", and "ip.data".
ub64dec
This converter is the base64url variant of b64dec converter. base64url
encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
@ -23165,6 +23669,7 @@ fc_retrans integer
fc_rtt(<unit>) integer
fc_rttvar(<unit>) integer
fc_sacked integer
fc_saved_syn binary
fc_settings_streams_limit integer
fc_src ip
fc_src_is_local boolean
@ -23763,6 +24268,80 @@ fc_sacked : integer
if the operating system does not support TCP_INFO, for example Linux kernels
before 2.4, the sample fetch fails.
fc_saved_syn : binary
Returns a copy of the saved SYN packet that was preserved by the system
during the incoming connection setup. This requires that the "tcp-ss" option
was present on the "bind" line, and a Linux kernel 4.3 minimum. When "tcp-ss"
is set to 1, only the IP and TCP headers are present. When "tcp-ss" is set to
2, then the Ethernet header is also present before the IP header, and may be
used to control or log source MAC address or VLANs for example. Note that
there is no guarantee that a SYN will be saved. For example, if SYN cookies
are used, the SYN packet is not preserved and the connection is established
on the matching ACK packet. In addition, the system doesn't guarantee to
preserve the copy beyond the first read. As such it is strongly recommended
to copy it into a variable in scope "sess" from a "tcp-request connection"
rule and only use that variable for further manipulations. It is worth noting
that on the loopback interface a dummy 14-byte Ethernet header is constructed
by the system where both the source and destination addresses are zero, and
only the protocol is set. It is convenient to convert such samples to
hexadecimal using the "hex" converter during debugging. Example (fields
manually separated and commented below):
frontend test
mode http
bind :::4445 tcp-ss 2
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain \
lf-string "%[var(sess.syn),hex]\n"
$ curl '0:4445'
000000000000 000000000000 0800 \ # MAC_DST MAC_SRC PROTO=IPv4
4500003C0A65400040063255 \ # IPv4 header, proto=6 (TCP)
7F000001 7F000001 \ # IP_SRC=127.0.0.1 IP_DST=127.0.0.1
E1F2 115D 01AF4E3E 00000000 \ # TCP_SPORT=57842 TCP_DPORT=4445, SEQ
A0 02 FFD7 FE300000 \ # OPT_LEN=20 TCP_FLAGS=SYN WIN=65495
0204FFD70402080A01C2A71A0000000001030307 # MSS=65495, TS, SACK, WSCALE 7
$ curl '[::1]:4445'
000000000000 000000000000 86DD \ # MAC_DST MAC_SRC PROTO=IPv6
6008018F00280640 \ # IPv6 header, proto=6 (TCP)
00000000000000000000000000000001 \ # SRC=::1
00000000000000000000000000000001 \ # DST=::1
9758 115D B5511F5D 00000000 \ # TCP_SPORT=38744 TCP_DPORT=4445, SEQ
A0 02 FFC4 00300000 \ # OPT_LEN=20 TCP_FLAGS=SYN WIN=65476
0204FFC40402080A9C231D680000000001030307 # MSS=65476, TS, SACK, WSCALE 7
The "bytes()" converter helps extract specific fields from the packet. The
"be2dec()" converter also permits reading chunks and emitting them in integer
form. For more accurate extraction, please refer to the "eth.XXX" converters.
Example with IPv4 input:
frontend test
mode http
bind :4445 tcp-ss 2
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain lf-string \
"mac_dst=%[var(sess.syn),eth.dst,hex] \
mac_src=%[var(sess.syn),eth.src,hex] \
proto=%[var(sess.syn),eth.proto,bytes(6),be2hex(,2)] \
ipv4h=%[var(sess.syn),eth.data,bytes(0,12),hex] \
ipv4_src=%[var(sess.syn),eth.data,ip.src] \
ipv4_dst=%[var(sess.syn),eth.data,ip.dst] \
tcp_spt=%[var(sess.syn),eth.data,ip.data,tcp.src] \
tcp_dpt=%[var(sess.syn),eth.data,ip.data,tcp.dst] \
tcp_win=%[var(sess.syn),eth.data,ip.data,tcp.win] \
tcp_opt=%[var(sess.syn),eth.data,ip.data,bytes(20),hex]\n"
$ curl '0:4445'
mac_dst=000000000000 mac_src=000000000000 proto=0800 \
ipv4h=4500003CC9B7400040067302 ipv4_src=127.0.0.1 ipv4_dst=127.0.0.1 \
tcp_spt=43970 tcp_dpt=4445 tcp_win=65495 \
tcp_opt=0204FFD70402080A01DC0D410000000001030307
See also the "set-var" action, the "be2dec", "bytes", "hex", "eth.XXX",
"ip.XXX", and "tcp.XXX" converters.
fc_settings_streams_limit : integer
Returns the maximum number of streams allowed on the frontend connection. For
TCP and HTTP/1.1 connections, it is always 1. For other protocols, it depends
@ -29741,7 +30320,7 @@ Arguments: (mandatory ones first, then alphabetically sorted):
which can represent a client identifier found in a request for
instance.
* string [length <len>]
* string [len <len>]
A table declared with "type string" will store substrings of
up to <len> characters. If the string provided by the pattern
extractor is larger than <len>, it will be truncated before
@ -29751,7 +30330,7 @@ Arguments: (mandatory ones first, then alphabetically sorted):
limited to 32 characters. Increasing the length can have a
non-negligible memory usage impact.
* binary [length <len>]
* binary [len <len>]
A table declared with "type binary" will store binary blocks
of <len> bytes. If the block provided by the pattern
extractor is larger than <len>, it will be truncated before
@ -31025,8 +31604,9 @@ ocsp-update [ off | on ]
failure" or "Error during insertion" errors.
jwt [ off | on ]
Allow for this certificate to be used for JWT validation via the
"jwt_verify_cert" converter when set to 'on'. Its value default to 'off'.
Allow for this certificate to be used for JWT validation or decryption via
the "jwt_verify_cert" or "jwt_decrypt_cert" converters when set to 'on'. Its
value defaults to 'off'.
When set to 'on' for a given certificate, the CLI command "del ssl cert" will
not work. In order to be deleted, a certificate must not be used, either for

View File

@ -1,7 +1,7 @@
-----------------------
HAProxy Starter Guide
-----------------------
version 3.3
version 3.4
This document is an introduction to HAProxy for all those who don't know it, as

View File

@ -1,7 +1,7 @@
------------------------
HAProxy Management Guide
------------------------
version 3.3
version 3.4
This document describes how to start, stop, manage, and troubleshoot HAProxy,
@ -2474,6 +2474,11 @@ prompt [help | n | i | p | timed]*
advanced scripts, and the non-interactive mode (default) to basic scripts.
Note that the non-interactive mode is not available for the master socket.
publish backend <backend>
Activates content switching to a backend instance. This is the reverse
operation of the "unpublish backend" command. This command is restricted and can
only be issued on sockets configured for levels "operator" or "admin".
quit
Close the connection when in interactive mode.
@ -2842,6 +2847,13 @@ operator
increased. It also drops expert and experimental mode. See also "show cli
level".
unpublish backend <backend>
Marks the backend as unqualified for future traffic selection. In effect,
use_backend / default_backend rules which reference it are ignored and the
next content switching rules are evaluated. Contrary to disabled backends,
server health checks remain active. This command is restricted and can only
be issued on sockets configured for levels "operator" or "admin".
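For example, a typical workflow over the stats socket could look like this
(the backend name and socket path are placeholders):
    $ echo "unpublish backend app_v2" | socat stdio /var/run/haproxy.sock
    # test the backend, e.g. with "force-be-switch", then expose it again
    $ echo "publish backend app_v2" | socat stdio /var/run/haproxy.sock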
user
Decrease the CLI level of the current CLI session to user. It can't be
increased. It also drops expert and experimental mode. See also "show cli
@ -3342,9 +3354,10 @@ show quic [<format>] [<filter>]
in the format will instead show a more detailed help message.
The final argument is used to restrict or extend the connection list. By
default, connections on closing or draining state are not displayed. Use the
extra argument "all" to include them in the output. It's also possible to
restrict to a single connection by specifying its hexadecimal address.
default, only active frontend connections are displayed. Use the extra
argument "clo" to instead list closing frontend connections, "be" for backend
connections, or "all" for every category. It's also possible to restrict to
a single connection by specifying its hexadecimal address.
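For example (the socket path is a placeholder):
    $ echo "show quic be" | socat stdio /var/run/haproxy.sock
    $ echo "show quic all" | socat stdio /var/run/haproxy.sock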
show servers conn [<backend>]
Dump the current and idle connections state of the servers belonging to the

View File

@ -125,8 +125,8 @@ struct activity {
unsigned int ctr2; // general purpose debug counter
#endif
char __pad[0]; // unused except to check remaining room
char __end[0] __attribute__((aligned(64))); // align size to 64.
};
char __end[0] THREAD_ALIGNED();
} THREAD_ALIGNED();
/* 256 entries for callers * callees should be highly sufficient (~45 seen usually) */
#define SCHED_ACT_HASH_BITS 8
@ -146,7 +146,7 @@ struct sched_activity {
uint64_t lkw_time; /* lock waiting time */
uint64_t lkd_time; /* locked time */
uint64_t mem_time; /* memory ops wait time */
};
} THREAD_ALIGNED();
#endif /* _HAPROXY_ACTIVITY_T_H */

View File

@ -366,7 +366,7 @@ static inline size_t applet_output_data(const struct appctx *appctx)
* This is useful when data have been read directly from the buffer. It is
* illegal to call this function with <len> causing a wrapping at the end of the
* buffer. It's the caller's responsibility to ensure that <len> is never larger
* than available ouput data.
* than available output data.
*
* This function is not HTX aware.
*/
@ -392,7 +392,7 @@ static inline void applet_reset_input(struct appctx *appctx)
co_skip(sc_oc(appctx_sc(appctx)), co_data(sc_oc(appctx_sc(appctx))));
}
/* Returns the amout of space available at the HTX output buffer (see applet_get_outbuf).
/* Returns the amount of space available at the HTX output buffer (see applet_get_outbuf).
*/
static inline size_t applet_htx_output_room(const struct appctx *appctx)
{
@ -402,7 +402,7 @@ static inline size_t applet_htx_output_room(const struct appctx *appctx)
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Returns the amout of space available at the output buffer (see applet_get_outbuf).
/* Returns the amount of space available at the output buffer (see applet_get_outbuf).
*/
static inline size_t applet_output_room(const struct appctx *appctx)
{

View File

@ -85,10 +85,20 @@ static inline int be_usable_srv(struct proxy *be)
return be->srv_bck;
}
/* Returns true if <be> backend can be used as the target of a switching rule. */
static inline int be_is_eligible(const struct proxy *be)
{
/* A disabled or unpublished backend cannot be selected for traffic.
* Note that STOPPED state is ignored as there is a risk of breaking
* requests during soft-stop.
*/
return !(be->flags & (PR_FL_DISABLED|PR_FL_BE_UNPUBLISHED));
}
/* set the time of last session on the backend */
static inline void be_set_sess_last(struct proxy *be)
{
if (be->be_counters.shared.tg[tgid - 1])
if (be->be_counters.shared.tg)
HA_ATOMIC_STORE(&be->be_counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}

View File

@ -537,7 +537,7 @@ struct mem_stats {
size_t size;
struct ha_caller caller;
const void *extra; // extra info specific to this call (e.g. pool ptr)
} __attribute__((aligned(sizeof(void*))));
} ALIGNED(sizeof(void*));
#undef calloc
#define calloc(x,y) ({ \

View File

@ -140,7 +140,7 @@ int warnif_misplaced_tcp_req_sess(struct proxy *proxy, const char *file, int lin
int warnif_misplaced_tcp_req_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_tcp_res_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_quic_init(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, const char *file, int line);
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, char **err);
int warnif_tcp_http_cond(const struct proxy *px, const struct acl_cond *cond);
int too_many_args_idx(int maxarg, int index, char **args, char **msg, int *err_code);
int too_many_args(int maxarg, char **args, char **msg, int *err_code);

View File

@ -31,6 +31,23 @@
#include <stdlib.h>
#endif
/* DEFVAL() returns either the second argument as-is, or <def> if absent. This
* is for use in macro arguments.
*/
#define DEFVAL(_def,...) _FIRST_ARG(NULL, ##__VA_ARGS__, (_def))
/* DEFNULL() returns either the argument as-is, or NULL if absent. This is for
* use in macro arguments.
*/
#define DEFNULL(...) DEFVAL(NULL, ##__VA_ARGS__)
/* DEFZERO() returns either the argument as-is, or 0 if absent. This is for
* use in macro arguments.
*/
#define DEFZERO(...) DEFVAL(0, ##__VA_ARGS__)
#define _FIRST_ARG(a, b, ...) b
/*
* Gcc before 3.0 needs [0] to declare a variable-size array
*/
@ -415,6 +432,13 @@
* for multi_threading, see THREAD_PAD() below. *
\*****************************************************************************/
/* Cache line size for alignment purposes. This value is incorrect for some
* Apple CPUs which have 128-byte cache lines.
*/
#ifndef CACHELINE_SIZE
#define CACHELINE_SIZE 64
#endif
/* sets alignment for current field or variable */
#ifndef ALIGNED
#define ALIGNED(x) __attribute__((aligned(x)))
@ -438,12 +462,12 @@
#endif
#endif
/* sets alignment for current field or variable only when threads are enabled.
* Typically used to respect cache line alignment to avoid false sharing.
/* Sets alignment for current field or variable only when threads are enabled.
* When no parameters are provided, we align to the cache line size.
*/
#ifndef THREAD_ALIGNED
#ifdef USE_THREAD
#define THREAD_ALIGNED(x) __attribute__((aligned(x)))
#define THREAD_ALIGNED(...) ALIGNED(DEFVAL(CACHELINE_SIZE, ##__VA_ARGS__))
#else
#define THREAD_ALIGNED(x)
#endif
@ -476,13 +500,12 @@
#endif
#endif
/* add an optional alignment for next fields in a structure, only when threads
* are enabled. Typically used to respect cache line alignment to avoid false
* sharing.
/* Add an optional alignment for next fields in a structure, only when threads
* are enabled. When no parameters are provided, we align to the cache line size.
*/
#ifndef THREAD_ALIGN
#ifdef USE_THREAD
#define THREAD_ALIGN(x) union { } ALIGNED(x)
#define THREAD_ALIGN(...) union { } ALIGNED(DEFVAL(CACHELINE_SIZE, ##__VA_ARGS__))
#else
#define THREAD_ALIGN(x)
#endif
@ -507,7 +530,7 @@
/* add mandatory padding of the specified size between fields in a structure,
* This is used to avoid false sharing of cache lines for dynamically allocated
* structures which cannot guarantee alignment, or to ensure that the size of
* the struct remains consistent on architectures with different aligment
* the struct remains consistent on architectures with different alignment
* constraints
*/
#ifndef ALWAYS_PAD

View File

@ -145,7 +145,7 @@ enum {
CO_FL_WAIT_ROOM = 0x00000800, /* data sink is full */
CO_FL_WANT_SPLICING = 0x00001000, /* we wish to use splicing on the connection when possible */
/* unused: 0x00002000 */
CO_FL_SSL_NO_CACHED_INFO = 0x00002000, /* Don't use any cached information when creating a new SSL connection */
CO_FL_EARLY_SSL_HS = 0x00004000, /* We have early data pending, don't start SSL handshake yet */
CO_FL_EARLY_DATA = 0x00008000, /* At least some of the data are early data */
@ -212,13 +212,13 @@ static forceinline char *conn_show_flags(char *buf, size_t len, const char *deli
/* flags */
_(CO_FL_SAFE_LIST, _(CO_FL_IDLE_LIST, _(CO_FL_CTRL_READY,
_(CO_FL_REVERSED, _(CO_FL_ACT_REVERSING, _(CO_FL_OPT_MARK, _(CO_FL_OPT_TOS,
_(CO_FL_XPRT_READY, _(CO_FL_WANT_DRAIN, _(CO_FL_WAIT_ROOM, _(CO_FL_EARLY_SSL_HS,
_(CO_FL_XPRT_READY, _(CO_FL_WANT_DRAIN, _(CO_FL_WAIT_ROOM, _(CO_FL_SSL_NO_CACHED_INFO, _(CO_FL_EARLY_SSL_HS,
_(CO_FL_EARLY_DATA, _(CO_FL_SOCKS4_SEND, _(CO_FL_SOCKS4_RECV, _(CO_FL_SOCK_RD_SH,
_(CO_FL_SOCK_WR_SH, _(CO_FL_ERROR, _(CO_FL_FDLESS, _(CO_FL_WAIT_L4_CONN,
_(CO_FL_WAIT_L6_CONN, _(CO_FL_SEND_PROXY, _(CO_FL_ACCEPT_PROXY, _(CO_FL_ACCEPT_CIP,
_(CO_FL_SSL_WAIT_HS, _(CO_FL_PRIVATE, _(CO_FL_RCVD_PROXY, _(CO_FL_SESS_IDLE,
_(CO_FL_XPRT_TRACKED
))))))))))))))))))))))))))));
)))))))))))))))))))))))))))));
/* epilogue */
_(~0U);
return buf;
@ -476,7 +476,7 @@ struct xprt_ops {
void (*dump_info)(struct buffer *, const struct connection *);
/*
* Returns the value for various capabilities.
* Returns 0 if the capability is known, iwth the actual value in arg,
* Returns 0 if the capability is known, with the actual value in arg,
* or -1 otherwise
*/
int (*get_capability)(struct connection *connection, void *xprt_ctx, enum xprt_capabilities, void *arg);
@ -660,6 +660,7 @@ struct connection {
struct buffer name; /* Only used for passive reverse. Used as SNI when connection added to server idle pool. */
} reverse;
uint64_t sni_hash; /* Hash of the SNI. Used to cache the TLS session and try to reuse it. set to 0 if there is no SNI */
uint32_t term_evts_log; /* Termination events log: first 4 events reported from fd, handshake or xprt */
uint32_t mark; /* set network mark, if CO_FL_OPT_MARK is set */
uint8_t tos; /* set ip tos, if CO_FL_OPT_TOS is set */
@ -794,7 +795,7 @@ struct idle_conns {
struct mt_list toremove_conns;
struct task *cleanup_task;
__decl_thread(HA_SPINLOCK_T idle_conns_lock);
} THREAD_ALIGNED(64);
} THREAD_ALIGNED();
/* Termination events logs:

View File

@ -66,7 +66,7 @@ struct counters_shared {
COUNTERS_SHARED;
struct {
COUNTERS_SHARED_TG;
} *tg[MAX_TGROUPS];
} **tg;
};
/*
@ -101,7 +101,7 @@ struct fe_counters_shared_tg {
struct fe_counters_shared {
COUNTERS_SHARED;
struct fe_counters_shared_tg *tg[MAX_TGROUPS];
struct fe_counters_shared_tg **tg;
};
/* counters used by listeners and frontends */
@ -160,7 +160,7 @@ struct be_counters_shared_tg {
struct be_counters_shared {
COUNTERS_SHARED;
struct be_counters_shared_tg *tg[MAX_TGROUPS];
struct be_counters_shared_tg **tg;
};
/* counters used by servers and backends */

View File

@ -43,11 +43,13 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
*/
#define COUNTERS_SHARED_LAST_OFFSET(scounters, type, offset) \
({ \
unsigned long last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
unsigned long last = 0; \
unsigned long now_seconds = ns_to_sec(now_ns); \
int it; \
\
for (it = 1; (it < global.nbtgroups && scounters[it]); it++) { \
if (scounters) \
last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
for (it = 1; (it < global.nbtgroups && scounters); it++) { \
unsigned long cur = HA_ATOMIC_LOAD((type *)((char *)scounters[it] + offset));\
if ((now_seconds - cur) < (now_seconds - last)) \
last = cur; \
@ -74,7 +76,7 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters[it]); it++) \
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc((type *)((char *)scounters[it] + offset)); \
__ret; \
})
@ -94,7 +96,7 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters[it]); it++) \
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc(&scounters[it]->elem, arg1, arg2); \
__ret; \
})

View File

@ -202,7 +202,7 @@ struct fdtab {
#ifdef DEBUG_FD
unsigned int event_count; /* number of events reported */
#endif
} THREAD_ALIGNED(64);
} THREAD_ALIGNED();
/* polled mask, one bit per thread and per direction for each FD */
struct polled_mask {

View File

@ -31,7 +31,7 @@
ullong _freq_ctr_total_from_values(uint period, int pend, uint tick, ullong past, ullong curr);
ullong freq_ctr_total(const struct freq_ctr *ctr, uint period, int pend);
ullong freq_ctr_total_estimate(const struct freq_ctr *ctr, uint period, int pend);
int freq_ctr_overshoot_period(const struct freq_ctr *ctr, uint period, uint freq);
uint freq_ctr_overshoot_period(const struct freq_ctr *ctr, uint period, uint freq);
uint update_freq_ctr_period_slow(struct freq_ctr *ctr, uint period, uint inc);
/* Only usable during single threaded startup phase. */

View File

@ -261,6 +261,7 @@ struct global {
unsigned int req_count; /* request counter (HTTP or TCP session) for logs and unique_id */
int last_checks;
uint32_t anon_key;
int maxthrpertgroup; /* Maximum number of threads per thread group */
/* leave this at the end to make sure we don't share this cache line by accident */
ALWAYS_ALIGN(64);

View File

@ -255,6 +255,7 @@ struct hlua_patref_iterator_context {
struct hlua_patref *ref;
struct bref bref; /* back-reference from the pat_ref_elt being accessed
* during listing */
struct pat_ref_gen *gen; /* the generation we are iterating over */
};
#else /* USE_LUA */

View File

@ -184,6 +184,7 @@ enum {
PERSIST_TYPE_NONE = 0, /* no persistence */
PERSIST_TYPE_FORCE, /* force-persist */
PERSIST_TYPE_IGNORE, /* ignore-persist */
PERSIST_TYPE_BE_SWITCH, /* force-be-switch */
};
/* final results for http-request rules */

View File

@ -270,7 +270,7 @@ struct htx {
/* XXX 4 bytes unused */
/* Blocks representing the HTTP message itself */
char blocks[VAR_ARRAY] __attribute__((aligned(8)));
char blocks[VAR_ARRAY] ALIGNED(8);
};
#endif /* _HAPROXY_HTX_T_H */

View File

@ -186,7 +186,7 @@ struct bind_conf {
#endif
#ifdef USE_QUIC
struct quic_transport_params quic_params; /* QUIC transport parameters. */
struct quic_cc_algo *quic_cc_algo; /* QUIC control congestion algorithm */
const struct quic_cc_algo *quic_cc_algo; /* QUIC control congestion algorithm */
size_t max_cwnd; /* QUIC maximum congestion control window size (kB) */
enum quic_sock_mode quic_mode; /* QUIC socket allocation strategy */
#endif
@ -204,6 +204,7 @@ struct bind_conf {
unsigned int backlog; /* if set, listen backlog */
int maxconn; /* maximum connections allowed on this listener */
int (*accept)(struct connection *conn); /* upper layer's accept() */
int tcp_ss; /* for TCP, Save SYN */
int level; /* stats access level (ACCESS_LVL_*) */
int severity_output; /* default severity output format in cli feedback messages */
short int nice; /* nice value to assign to the instantiated tasks */
@ -309,7 +310,7 @@ struct bind_kw_list {
struct accept_queue_ring {
uint32_t idx; /* (head << 16) | tail */
struct tasklet *tasklet; /* tasklet of the thread owning this ring */
struct connection *entry[ACCEPT_QUEUE_SIZE] __attribute((aligned(64)));
struct connection *entry[ACCEPT_QUEUE_SIZE] THREAD_ALIGNED();
};

View File

@ -231,7 +231,7 @@ const char *listener_state_str(const struct listener *l);
struct task *accept_queue_process(struct task *t, void *context, unsigned int state);
struct task *manage_global_listener_queue(struct task *t, void *context, unsigned int state);
extern struct accept_queue_ring accept_queue_rings[MAX_THREADS] __attribute__((aligned(64)));
extern struct accept_queue_ring accept_queue_rings[MAX_THREADS] THREAD_ALIGNED();
extern const char* li_status_st[LI_STATE_COUNT];
enum li_status get_li_status(struct listener *l);

View File

@ -107,20 +107,34 @@ struct pat_ref {
struct list list; /* Used to chain refs. */
char *reference; /* The reference name. */
char *display; /* String displayed to identify the pattern origin. */
struct list head; /* The head of the list of struct pat_ref_elt. */
struct ceb_root *ceb_root; /* The tree where pattern reference elements are attached. */
struct ceb_root *gen_root; /* The tree mapping generation IDs to pattern reference elements */
struct list pat; /* The head of the list of struct pattern_expr. */
unsigned int flags; /* flags PAT_REF_*. */
unsigned int curr_gen; /* current generation number (anything below can be removed) */
unsigned int next_gen; /* next generation number (insertions use this one) */
/* We keep a cached pointer to the current generation for performance. */
struct {
struct pat_ref_gen *data;
unsigned int id;
} cached_gen;
int unique_id; /* Each pattern reference has a unique id. */
unsigned long long revision; /* updated for each update */
unsigned long long entry_cnt; /* the total number of entries */
THREAD_ALIGN(64);
THREAD_ALIGN();
__decl_thread(HA_RWLOCK_T lock); /* Lock used to protect pat ref elements */
event_hdl_sub_list e_subs; /* event_hdl: pat_ref's subscribers list (atomically updated) */
};
/* This struct represents all the elements in a pattern reference generation. The tree
* is used most of the time, but we also maintain a list for when order matters.
*/
struct pat_ref_gen {
struct list head; /* The head of the list of struct pat_ref_elt. */
struct ceb_root *elt_root; /* The tree where pattern reference elements are attached. */
struct ceb_node gen_node; /* Linkage for the gen_root cebtree in struct pat_ref */
unsigned int gen_id;
};
/* This is a part of struct pat_ref. Each entry contains one pattern and one
* associated value as original string. All derivative forms (via exprs) are
* accessed from list_head or tree_head. Be careful, it's variable-sized!
@ -133,7 +147,7 @@ struct pat_ref_elt {
char *sample;
unsigned int gen_id; /* generation of pat_ref this was made for */
int line;
struct ceb_node node; /* Node to attach this element to its <pat_ref> ebtree. */
struct ceb_node node; /* Node to attach this element to its <pat_ref_gen> cebtree. */
const char pattern[0]; // const only to make sure nobody tries to free it.
};

View File

@ -189,8 +189,10 @@ struct pat_ref *pat_ref_new(const char *reference, const char *display, unsigned
struct pat_ref *pat_ref_newid(int unique_id, const char *display, unsigned int flags);
struct pat_ref_elt *pat_ref_find_elt(struct pat_ref *ref, const char *key);
struct pat_ref_elt *pat_ref_gen_find_elt(struct pat_ref *ref, unsigned int gen_id, const char *key);
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, const char *pattern, const char *sample, int line);
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, unsigned int gen, const char *pattern, const char *sample, int line);
struct pat_ref_elt *pat_ref_load(struct pat_ref *ref, unsigned int gen, const char *pattern, const char *sample, int line, char **err);
struct pat_ref_gen *pat_ref_gen_new(struct pat_ref *ref, unsigned int gen_id);
struct pat_ref_gen *pat_ref_gen_get(struct pat_ref *ref, unsigned int gen_id);
int pat_ref_push(struct pat_ref_elt *elt, struct pattern_expr *expr, int patflags, char **err);
int pat_ref_add(struct pat_ref *ref, const char *pattern, const char *sample, char **err);
int pat_ref_set(struct pat_ref *ref, const char *pattern, const char *sample, char **err);

View File

@ -63,7 +63,7 @@ struct pool_cache_head {
unsigned int tid; /* thread id, for debugging only */
struct pool_head *pool; /* assigned pool, for debugging only */
ulong fill_pattern; /* pattern used to fill the area on free */
} THREAD_ALIGNED(64);
} THREAD_ALIGNED();
/* This describes a pool registration, which is what was passed to
* create_pool() and that might have been merged with an existing pool.
@ -139,7 +139,7 @@ struct pool_head {
struct list regs; /* registrations: alt names for this pool */
/* heavily read-write part */
THREAD_ALIGN(64);
THREAD_ALIGN();
/* these entries depend on the pointer value, they're used to reduce
* the contention on fast-changing values. The alignment here is
@ -148,7 +148,7 @@ struct pool_head {
* just meant to shard elements and there are no per-free_list stats.
*/
struct {
THREAD_ALIGN(64);
THREAD_ALIGN();
struct pool_item *free_list; /* list of free shared objects */
unsigned int allocated; /* how many chunks have been allocated */
unsigned int used; /* how many chunks are currently in use */
@ -156,8 +156,8 @@ struct pool_head {
unsigned int failed; /* failed allocations (indexed by hash of TID) */
} buckets[CONFIG_HAP_POOL_BUCKETS];
struct pool_cache_head cache[MAX_THREADS] THREAD_ALIGNED(64); /* pool caches */
} __attribute__((aligned(64)));
struct pool_cache_head cache[MAX_THREADS] THREAD_ALIGNED(); /* pool caches */
} THREAD_ALIGNED();
#endif /* _HAPROXY_POOL_T_H */
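The pool hunk keeps the bucketed free lists, where fast-changing values are sharded across CONFIG_HAP_POOL_BUCKETS cache-line-aligned slots and threads pick a bucket from a hash of their TID (as the "indexed by hash of TID" comment says). A hedged sketch of that sharding idea follows; the bucket count and the mixing constant are assumptions for illustration, not the actual pool code.

#include <stdint.h>

/* Each bucket sits on its own cache line; a thread hashes its ID to pick a
 * bucket so that concurrent updates rarely land on the same line.
 */
#define TOY_POOL_BUCKETS 8

struct toy_bucket {
	unsigned int allocated;   /* chunks allocated from this bucket */
	unsigned int used;        /* chunks currently in use */
} __attribute__((aligned(64)));

static struct toy_bucket toy_buckets[TOY_POOL_BUCKETS];

static inline struct toy_bucket *toy_pick_bucket(unsigned int tid)
{
	uint32_t h = tid * 2654435761u;        /* cheap integer mix */
	return &toy_buckets[h % TOY_POOL_BUCKETS];
}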

View File

@ -160,6 +160,7 @@ struct protocol {
/* default I/O handler */
void (*default_iocb)(int fd); /* generic I/O handler (typically accept callback) */
int (*get_info)(struct connection *conn, long long int *info, int info_num); /* Callback to get connection level statistical counters */
int (*get_opt)(const struct connection *conn, int level, int optname, void *buf, int size); /* getsockopt(level:optname) into buf:size */
uint flags; /* flags describing protocol support (PROTO_F_*) */
uint nb_receivers; /* number of receivers (under proto_lock) */
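The new get_opt callback exposes a socket option read at the connection level ("getsockopt(level:optname) into buf:size"). For plain TCP sockets this presumably wraps getsockopt() on the connection's fd; the sketch below illustrates that with a toy connection type, since the real sock_conn_get_opt() body is not part of this diff and its return convention is an assumption.

#include <sys/socket.h>

struct toy_conn {
	int fd;                               /* underlying socket */
};

/* read <optname> at <level> into <buf>:<size>; returns the number of bytes
 * written on success, -1 on error (return convention assumed here).
 */
static int toy_conn_get_opt(const struct toy_conn *conn, int level, int optname,
                            void *buf, int size)
{
	socklen_t len = size;

	if (getsockopt(conn->fd, level, optname, buf, &len) < 0)
		return -1;
	return (int)len;
}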

View File

@ -247,6 +247,7 @@ enum PR_SRV_STATE_FILE {
#define PR_FL_IMPLICIT_REF 0x10 /* The default proxy is implicitly referenced by another proxy */
#define PR_FL_PAUSED 0x20 /* The proxy was paused at run time (reversible) */
#define PR_FL_CHECKED 0x40 /* The proxy configuration was fully checked (including postparsing checks) */
#define PR_FL_BE_UNPUBLISHED 0x80 /* The proxy cannot be targeted by content switching rules */
struct stream;
@ -304,7 +305,7 @@ struct error_snapshot {
struct proxy_per_tgroup {
struct queue queue;
struct lbprm_per_tgrp lbprm;
} THREAD_ALIGNED(64);
} THREAD_ALIGNED();
struct proxy {
enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */
@ -505,7 +506,7 @@ struct proxy {
EXTRA_COUNTERS(extra_counters_fe);
EXTRA_COUNTERS(extra_counters_be);
THREAD_ALIGN(64);
THREAD_ALIGN();
unsigned int queueslength; /* Sum of the length of each queue */
int served; /* # of active sessions currently being served */
int totpend; /* total number of pending connections on this instance (for stats) */

View File

@ -166,12 +166,12 @@ static inline int proxy_abrt_close(const struct proxy *px)
/* increase the number of cumulated connections received on the designated frontend */
static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
{
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
}
@ -179,12 +179,12 @@ static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
/* increase the number of cumulated connections accepted by the designated frontend */
static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
{
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
}
@ -199,19 +199,19 @@ static inline void proxy_inc_fe_cum_sess_ver_ctr(struct listener *l, struct prox
http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver))
return;
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg)
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
if (l && l->counters && l->counters->shared.tg && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
}
/* increase the number of cumulated streams on the designated backend */
static inline void proxy_inc_be_ctr(struct proxy *be)
{
if (be->be_counters.shared.tg[tgid - 1])
if (be->be_counters.shared.tg) {
_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
if (be->be_counters.shared.tg[tgid - 1])
update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
update_freq_ctr(&be->be_counters._sess_per_sec, 1));
}
@ -226,12 +226,12 @@ static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req))
return;
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
}
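All of the counter helpers above move the NULL test from the per-group slot tg[tgid - 1] to the tg array pointer itself, which is sufficient as long as every slot is populated once the array exists (which the new code assumes). A stripped-down sketch of that guard pattern, using simplified stand-in types and C11 atomics instead of the HAProxy atomic macros:

#include <stdatomic.h>

struct toy_tg_counters {
	_Atomic unsigned long long cum_conn;
};

struct toy_shared {
	struct toy_tg_counters **tg;   /* NULL when shared counters are disabled */
};

static inline void toy_inc_conn(struct toy_shared *sh, unsigned int tgid)
{
	if (sh->tg)                    /* one test for the whole array */
		atomic_fetch_add(&sh->tg[tgid - 1]->cum_conn, 1);
}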

View File

@ -35,13 +35,13 @@
#define QUIC_CC_INFINITE_SSTHESH ((uint32_t)-1)
extern struct quic_cc_algo quic_cc_algo_nr;
extern struct quic_cc_algo quic_cc_algo_cubic;
extern struct quic_cc_algo quic_cc_algo_bbr;
extern struct quic_cc_algo *default_quic_cc_algo;
extern const struct quic_cc_algo quic_cc_algo_nr;
extern const struct quic_cc_algo quic_cc_algo_cubic;
extern const struct quic_cc_algo quic_cc_algo_bbr;
extern const struct quic_cc_algo *default_quic_cc_algo;
/* Fake algorithm with its fixed window */
extern struct quic_cc_algo quic_cc_algo_nocc;
extern const struct quic_cc_algo quic_cc_algo_nocc;
extern unsigned long long last_ts;
@ -90,7 +90,7 @@ enum quic_cc_algo_type {
struct quic_cc {
/* <conn> is there only for debugging purpose. */
struct quic_conn *qc;
struct quic_cc_algo *algo;
const struct quic_cc_algo *algo;
uint32_t priv[144];
};
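Const-qualifying the quic_cc_algo descriptors lets the compiler place them in read-only storage and makes it explicit that users only ever read them through a pointer-to-const. A generic sketch of that function-table pattern, with illustrative names rather than the real QUIC ones:

struct toy_cc_algo {
	const char *name;
	void (*on_ack)(void *state, unsigned int acked);
};

static void toy_cubic_on_ack(void *state, unsigned int acked)
{
	(void)state; (void)acked;      /* no-op for the sketch */
}

/* the descriptor is never written at run time, so it can live in .rodata */
static const struct toy_cc_algo toy_cc_cubic = {
	.name   = "cubic",
	.on_ack = toy_cubic_on_ack,
};

struct toy_cc {
	const struct toy_cc_algo *algo;   /* users keep a const pointer */
};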

View File

@ -35,7 +35,7 @@
#include <haproxy/quic_loss.h>
#include <haproxy/thread.h>
void quic_cc_init(struct quic_cc *cc, struct quic_cc_algo *algo, struct quic_conn *qc);
void quic_cc_init(struct quic_cc *cc, const struct quic_cc_algo *algo, struct quic_conn *qc);
void quic_cc_event(struct quic_cc *cc, struct quic_cc_event *ev);
void quic_cc_state_trace(struct buffer *buf, const struct quic_cc *cc);
@ -83,7 +83,7 @@ static inline void *quic_cc_priv(const struct quic_cc *cc)
* which is true for an IPv4 path, if not false for an IPv6 path.
*/
static inline void quic_cc_path_init(struct quic_cc_path *path, int ipv4, unsigned long max_cwnd,
struct quic_cc_algo *algo,
const struct quic_cc_algo *algo,
struct quic_conn *qc)
{
unsigned int max_dgram_sz;

View File

@ -24,6 +24,12 @@ struct quic_cid {
unsigned char len; /* size of QUIC CID */
};
/* Determines whether a CID is used for frontend or backend connections. */
enum quic_cid_side {
QUIC_CID_SIDE_FE,
QUIC_CID_SIDE_BE
};
/* QUIC connection id attached to a QUIC connection.
*
* This structure is used to match received packets DCIDs with the
@ -34,11 +40,12 @@ struct quic_connection_id {
uint64_t retire_prior_to;
unsigned char stateless_reset_token[QUIC_STATELESS_RESET_TOKEN_LEN];
struct ebmb_node node; /* node for receiver tree, cid.data as key */
struct quic_cid cid; /* CID data */
struct ebmb_node node; /* node for receiver tree, cid.data as key */
struct quic_cid cid; /* CID data */
struct quic_conn *qc; /* QUIC connection using this CID */
uint tid; /* Attached Thread ID for the connection. */
struct quic_conn *qc; /* QUIC connection using this CID */
uint tid; /* Attached Thread ID for the connection. */
enum quic_cid_side side; /* side where this CID is used */
};
#endif /* _HAPROXY_QUIC_CID_T_H */

View File

@ -15,9 +15,10 @@
#include <haproxy/quic_rx-t.h>
#include <haproxy/proto_quic.h>
extern struct quic_cid_tree *quic_cid_trees;
extern struct quic_cid_tree *quic_fe_cid_trees;
extern struct quic_cid_tree *quic_be_cid_trees;
struct quic_connection_id *quic_cid_alloc(void);
struct quic_connection_id *quic_cid_alloc(enum quic_cid_side side);
int quic_cid_generate_random(struct quic_connection_id *conn_id);
int quic_cid_generate_from_hash(struct quic_connection_id *conn_id, uint64_t hash64);
@ -81,11 +82,18 @@ static inline uchar quic_cid_tree_idx(const struct quic_cid *cid)
return _quic_cid_tree_idx(cid->data);
}
/* Returns the tree instance responsible for <conn_id> storage. */
static inline struct quic_cid_tree *quic_cid_get_tree(const struct quic_connection_id *conn_id)
{
const int tree_idx = quic_cid_tree_idx(&conn_id->cid);
return conn_id->side == QUIC_CID_SIDE_FE ?
&quic_fe_cid_trees[tree_idx] : &quic_be_cid_trees[tree_idx];
}
/* Remove <conn_id> from global CID tree as a thread-safe operation. */
static inline void quic_cid_delete(struct quic_connection_id *conn_id)
{
const uchar idx = quic_cid_tree_idx(&conn_id->cid);
struct quic_cid_tree __maybe_unused *tree = &quic_cid_trees[idx];
struct quic_cid_tree __maybe_unused *tree = quic_cid_get_tree(conn_id);
HA_RWLOCK_WRLOCK(QC_CID_LOCK, &tree->lock);
ebmb_delete(&conn_id->node);

View File

@ -434,7 +434,7 @@ struct quic_conn_closed {
#define QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS (1U << 2) /* HANDSHAKE_DONE must be sent */
#define QUIC_FL_CONN_IS_BACK (1U << 3) /* conn used on backend side */
#define QUIC_FL_CONN_ACCEPT_REGISTERED (1U << 4)
#define QUIC_FL_CONN_UDP_GSO_EIO (1U << 5) /* GSO disabled due to a EIO occured on same listener */
#define QUIC_FL_CONN_UDP_GSO_EIO (1U << 5) /* GSO disabled due to a EIO occurred on same listener */
#define QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ (1U << 6)
#define QUIC_FL_CONN_RETRANS_NEEDED (1U << 7)
#define QUIC_FL_CONN_RETRANS_OLD_DATA (1U << 8) /* retransmission in progress for probing with already sent data */

View File

@ -67,6 +67,7 @@ int qc_h3_request_reject(struct quic_conn *qc, uint64_t id);
struct quic_conn *qc_new_conn(void *target,
const struct quic_rx_packet *initial_pkt,
const struct quic_cid *token_odcid,
struct connection *connection,
struct quic_connection_id *conn_id,
struct sockaddr_storage *local_addr,
struct sockaddr_storage *peer_addr);
@ -91,6 +92,12 @@ static inline int qc_is_back(const struct quic_conn *qc)
return qc->flags & QUIC_FL_CONN_IS_BACK;
}
static inline enum quic_cid_side qc_cid_side(const struct quic_conn *qc)
{
return !(qc->flags & QUIC_FL_CONN_IS_BACK) ?
QUIC_CID_SIDE_FE : QUIC_CID_SIDE_BE;
}
/* Free the CIDs attached to <conn> QUIC connection. */
static inline void free_quic_conn_cids(struct quic_conn *conn)
{

View File

@ -17,5 +17,7 @@
#include <haproxy/pool-t.h>
extern struct pool_head *pool_head_quic_ssl_sock_ctx;
extern const char *default_quic_ciphersuites;
extern const char *default_quic_curves;
#endif /* _HAPROXY_QUIC_SSL_T_H */

View File

@ -5,7 +5,7 @@
#include <haproxy/api-t.h>
/* Counter which can be used to measure data amount accross several buffers. */
/* Counter which can be used to measure data amount across several buffers. */
struct bdata_ctr {
uint64_t tot; /* sum of data present in all underlying buffers */
uint8_t bcnt; /* current number of allocated underlying buffers */

View File

@ -33,11 +33,12 @@
/* Bit values for receiver->flags */
#define RX_F_BOUND 0x00000001 /* receiver already bound */
#define RX_F_INHERITED 0x00000002 /* inherited FD from the parent process (fd@) or duped from another local receiver */
#define RX_F_INHERITED_FD 0x00000002 /* inherited FD from the parent process (fd@) */
#define RX_F_MWORKER 0x00000004 /* keep the FD open in the master but close it in the children */
#define RX_F_MUST_DUP 0x00000008 /* this receiver's fd must be dup() from a reference; ignore socket-level ops here */
#define RX_F_NON_SUSPENDABLE 0x00000010 /* this socket cannot be suspended hence must always be unbound */
#define RX_F_PASS_PKTINFO 0x00000020 /* pass pktinfo in received messages */
#define RX_F_INHERITED_SOCK 0x00000040 /* inherited sock that could be duped from another local receiver */
/* Bit values for rx_settings->options */
#define RX_O_FOREIGN 0x00000001 /* receives on foreign addresses */
@ -63,9 +64,8 @@ struct rx_settings {
struct shard_info {
uint nbgroups; /* number of groups in this shard (=#rx); Zero = unused. */
uint nbthreads; /* number of threads in this shard (>=nbgroups) */
ulong tgroup_mask; /* bitmask of thread groups having a member here */
struct receiver *ref; /* first one, reference for FDs to duplicate */
struct receiver *members[MAX_TGROUPS]; /* all members of the shard (one per thread group) */
struct receiver **members; /* all members of the shard (one per thread group) */
};
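Replacing the fixed members[MAX_TGROUPS] array with a plain pointer means a shard only needs as many receiver slots as it actually has groups. A hedged sketch of what the corresponding allocation could look like; the init function and its error convention are assumptions, not the actual receiver code:

#include <stdlib.h>

struct toy_shard {
	unsigned int nbgroups;    /* number of groups in this shard */
	void **members;           /* one receiver pointer per group */
};

static int toy_shard_init(struct toy_shard *sh, unsigned int nbgroups)
{
	sh->members = calloc(nbgroups, sizeof(*sh->members));
	if (!sh->members)
		return -1;
	sh->nbgroups = nbgroups;
	return 0;
}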
/* This describes a receiver with all its characteristics (address, options, etc) */

View File

@ -130,11 +130,11 @@ struct ring_wait_cell {
struct ring_storage {
size_t size; // storage size
size_t rsvd; // header length (used for file-backed maps)
THREAD_ALIGN(64);
THREAD_ALIGN();
size_t tail; // storage tail
THREAD_ALIGN(64);
THREAD_ALIGN();
size_t head; // storage head
THREAD_ALIGN(64);
THREAD_ALIGN();
char area[0]; // storage area begins immediately here
};
@ -149,7 +149,7 @@ struct ring {
/* keep the queue in a separate cache line below */
struct {
THREAD_ALIGN(64);
THREAD_ALIGN();
struct ring_wait_cell *ptr;
} queue[RING_WAIT_QUEUES + 1]; // wait queue + 1 spacer
};

View File

@ -63,6 +63,7 @@ int smp_expr_output_type(struct sample_expr *expr);
int c_none(struct sample *smp);
int c_pseudo(struct sample *smp);
int smp_dup(struct sample *smp);
int sample_check_arg_base64(struct arg *arg, char **err);
/*
* This function just apply a cast on sample. It returns 0 if the cast is not

View File

@ -294,7 +294,7 @@ struct srv_per_tgroup {
struct eb_root *lb_tree; /* For LB algos with split between thread groups, the tree to be used, for each group */
unsigned npos, lpos; /* next and last positions in the LB tree, protected by LB lock */
unsigned rweight; /* remainder of weight in the current LB tree */
} THREAD_ALIGNED(64);
} THREAD_ALIGNED();
/* Configure the protocol selection for websocket */
enum __attribute__((__packed__)) srv_ws_mode {
@ -396,7 +396,7 @@ struct server {
/* The elements below may be changed on every single request by any
* thread, and generally at the same time.
*/
THREAD_ALIGN(64);
THREAD_ALIGN();
struct eb32_node idle_node; /* When to next do cleanup in the idle connections */
unsigned int curr_idle_conns; /* Current number of orphan idling connections, both the idle and the safe lists */
unsigned int curr_idle_nb; /* Current number of connections in the idle list */
@ -414,7 +414,7 @@ struct server {
/* Element below are usd by LB algorithms and must be doable in
* parallel to other threads reusing connections above.
*/
THREAD_ALIGN(64);
THREAD_ALIGN();
__decl_thread(HA_SPINLOCK_T lock); /* may enclose the proxy's lock, must not be taken under */
union {
struct eb32_node lb_node; /* node used for tree-based load balancing */
@ -428,7 +428,7 @@ struct server {
};
/* usually atomically updated by any thread during parsing or on end of request */
THREAD_ALIGN(64);
THREAD_ALIGN();
int cur_sess; /* number of currently active sessions (including syn_sent) */
int served; /* # of active sessions currently being served (ie not pending) */
int consecutive_errors; /* current number of consecutive errors */
@ -436,7 +436,7 @@ struct server {
struct be_counters counters; /* statistics counters */
/* Below are some relatively stable settings, only changed under the lock */
THREAD_ALIGN(64);
THREAD_ALIGN();
struct eb_root *lb_tree; /* we want to know in what tree the server is */
struct tree_occ *lb_nodes; /* lb_nodes_tot * struct tree_occ */
@ -485,7 +485,7 @@ struct server {
unsigned char *ptr;
int size;
int allocated_size;
char *sni; /* SNI used for the session */
uint64_t sni_hash; /* Hash of the SNI used for the session */
__decl_thread(HA_RWLOCK_T sess_lock);
} * reused_sess;
@ -514,6 +514,8 @@ struct server {
} ssl_ctx;
#ifdef USE_QUIC
struct quic_transport_params quic_params; /* QUIC transport parameters */
const struct quic_cc_algo *quic_cc_algo; /* QUIC control congestion algorithm */
size_t quic_max_cwnd; /* QUIC maximum congestion control window size (kB) */
#endif
struct path_parameters path_params; /* Connection parameters for that server */
struct resolv_srvrq *srvrq; /* Pointer representing the DNS SRV requeest, if any */

View File

@ -207,7 +207,7 @@ static inline void server_index_id(struct proxy *px, struct server *srv)
/* increase the number of cumulated streams on the designated server */
static inline void srv_inc_sess_ctr(struct server *s)
{
if (s->counters.shared.tg[tgid - 1]) {
if (s->counters.shared.tg) {
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
@ -218,7 +218,7 @@ static inline void srv_inc_sess_ctr(struct server *s)
/* set the time of last session on the designated server */
static inline void srv_set_sess_last(struct server *s)
{
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}

View File

@ -46,6 +46,7 @@ struct connection *sock_accept_conn(struct listener *l, int *status);
void sock_accept_iocb(int fd);
void sock_conn_ctrl_init(struct connection *conn);
void sock_conn_ctrl_close(struct connection *conn);
int sock_conn_get_opt(const struct connection *conn, int level, int optname, void *buf, int size);
void sock_conn_iocb(int fd);
int sock_conn_check(struct connection *conn);
int sock_drain(struct connection *conn);

View File

@ -254,7 +254,7 @@ struct ssl_keylog {
#define SSL_SOCK_F_KTLS_SEND (1 << 2) /* kTLS send is configured on that socket */
#define SSL_SOCK_F_KTLS_RECV (1 << 3) /* kTLS receive is configured on that socket */
#define SSL_SOCK_F_CTRL_SEND (1 << 4) /* We want to send a kTLS control message for that socket */
#define SSL_SOCK_F_HAS_ALPN (1 << 5) /* An ALPN has been negociated */
#define SSL_SOCK_F_HAS_ALPN (1 << 5) /* An ALPN has been negotiated */
struct ssl_sock_ctx {
struct connection *conn;

View File

@ -30,6 +30,7 @@
#include <haproxy/proxy-t.h>
#include <haproxy/quic_conn-t.h>
#include <haproxy/ssl_sock-t.h>
#include <haproxy/stats.h>
#include <haproxy/thread.h>
extern struct list tlskeys_reference;
@ -57,6 +58,7 @@ extern struct pool_head *pool_head_ssl_keylog_str;
extern struct list openssl_providers;
extern struct stats_module ssl_stats_module;
uint64_t ssl_sock_sni_hash(const struct ist sni);
int ssl_sock_prep_ctx_and_inst(struct bind_conf *bind_conf, struct ssl_bind_conf *ssl_conf,
SSL_CTX *ctx, struct ckch_inst *ckch_inst, char **err);
int ssl_sock_prep_srv_ctx_and_inst(const struct server *srv, SSL_CTX *ctx,
@ -89,6 +91,7 @@ unsigned int ssl_sock_get_verify_result(struct connection *conn);
void ssl_sock_update_counters(SSL *ssl,
struct ssl_counters *counters,
struct ssl_counters *counters_px, int backend);
void ssl_sock_handle_hs_error(struct connection *conn);
#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
int ssl_sock_update_tlskey_ref(struct tls_keys_ref *ref,
struct buffer *tlskey);
@ -241,6 +244,30 @@ static inline struct connection *ssl_sock_get_conn(const SSL *s, struct ssl_sock
return ret;
}
/* Set at <counters> and <counters_px> addresses the SSL statistical counters */
static inline void ssl_sock_get_stats_counters(struct connection *conn,
struct ssl_counters **counters,
struct ssl_counters **counters_px)
{
switch (obj_type(conn->target)) {
case OBJ_TYPE_LISTENER: {
struct listener *li = __objt_listener(conn->target);
*counters = EXTRA_COUNTERS_GET(li->extra_counters, &ssl_stats_module);
*counters_px = EXTRA_COUNTERS_GET(li->bind_conf->frontend->extra_counters_fe,
&ssl_stats_module);
break;
}
case OBJ_TYPE_SERVER: {
struct server *srv = __objt_server(conn->target);
*counters = EXTRA_COUNTERS_GET(srv->extra_counters, &ssl_stats_module);
*counters_px = EXTRA_COUNTERS_GET(srv->proxy->extra_counters_be,
&ssl_stats_module);
break;
}
default:
break;
}
}
#endif /* USE_OPENSSL */
#endif /* _HAPROXY_SSL_SOCK_H */

View File

@ -57,6 +57,9 @@ const char *nid2nist(int nid);
const char *sigalg2str(int sigalg);
const char *curveid2str(int curve_id);
int aes_process(struct buffer *data, struct buffer *nonce, struct buffer *key, int key_size,
struct buffer *aead_tag, struct buffer *aad, struct buffer *out, int decrypt, int gcm);
#endif /* _HAPROXY_SSL_UTILS_H */
#endif /* USE_OPENSSL */
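The factored aes_process() prototype carries both the GCM and CBC paths behind its <decrypt> and <gcm> switches. As a reference point, the CBC leg without any AEAD tag boils down to the standalone OpenSSL EVP sequence below; this is plain EVP usage for illustration, not the HAProxy helper itself, and the output buffer is assumed large enough for one extra padding block.

#include <openssl/evp.h>

static int toy_aes128_cbc_encrypt(const unsigned char *key, const unsigned char *iv,
                                  const unsigned char *in, int inlen,
                                  unsigned char *out, int *outlen)
{
	EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
	int len = 0, total = 0, ret = -1;

	if (!ctx)
		return -1;
	if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1)
		goto out;
	if (EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1)
		goto out;
	total = len;
	if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)   /* PKCS#7 padding */
		goto out;
	total += len;
	*outlen = total;
	ret = 0;
 out:
	EVP_CIPHER_CTX_free(ctx);
	return ret;
}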

View File

@ -15,7 +15,7 @@ enum stfile_domain {
};
#define SHM_STATS_FILE_VER_MAJOR 1
#define SHM_STATS_FILE_VER_MINOR 1
#define SHM_STATS_FILE_VER_MINOR 2
#define SHM_STATS_FILE_HEARTBEAT_TIMEOUT 60 /* passed this delay (seconds) process which has not
* sent heartbeat will be considered down
@ -64,9 +64,9 @@ struct shm_stats_file_hdr {
*/
struct shm_stats_file_object {
char guid[GUID_MAX_LEN + 1];
uint8_t tgid; // thread group ID from 1 to 64
uint16_t tgid; // thread group ID
uint8_t type; // SHM_STATS_FILE_OBJECT_TYPE_* to know how to handle object.data
ALWAYS_PAD(6); // 6 bytes hole, ensure it remains the same size 32 vs 64 bits arch
ALWAYS_PAD(5); // 5 bytes hole, ensure it remains the same size 32 vs 64 bits arch
uint64_t users; // bitfield that corresponds to users of the object (see shm_stats_file_hdr slots)
/* as the struct may hold any of the types described here, let's make it
* so it may store up to the heaviest one using an union
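Widening tgid to uint16_t while shrinking the explicit padding from 6 to 5 bytes keeps the object the same size on 32-bit and 64-bit builds, which is exactly what the ALWAYS_PAD() comment guarantees. A small sketch of how such a layout can be pinned down with a static assert; the field sizes below are illustrative, not the real shm_stats_file_object layout.

#include <stdint.h>

struct toy_shm_object {
	uint16_t tgid;            /* grew from uint8_t */
	uint8_t  type;
	uint8_t  pad[5];          /* shrank from 6 so the header stays 8 bytes */
	uint64_t users;
};

/* fails to build if a field change silently alters the shared layout */
_Static_assert(sizeof(struct toy_shm_object) == 16,
               "shared stats object layout must not drift");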

View File

@ -313,8 +313,8 @@ struct se_abort_info {
*
* <kip> is the known input payload length. It is set by the stream endpoint
* that produce data and decremented once consumed by the app
* loyer. Depending on the enpoint, this value may be unset. It may be set
* only once if the payload lenght is fully known from the begining (a
* layer. Depending on the endpoint, this value may be unset. It may be set
* only once if the payload length is fully known from the beginning (a
* HTTP message with a content-length for instance), or incremented
* periodically when more data are expected (a chunk-encoded HTTP message
* for instance). On the app side, this value is decremented when data are

View File

@ -206,7 +206,7 @@ struct stktable {
void *ptr; /* generic ptr to check if set or not */
} write_to; /* updates received on the source table will also update write_to */
THREAD_ALIGN(64);
THREAD_ALIGN();
struct {
struct eb_root keys; /* head of sticky session tree */
@ -221,7 +221,7 @@ struct stktable {
unsigned int refcnt; /* number of local peer over all peers sections
attached to this table */
unsigned int current; /* number of sticky sessions currently in table */
THREAD_ALIGN(64);
THREAD_ALIGN();
struct eb_root updates; /* head of sticky updates sequence tree, uses updt_lock */
struct mt_list *pend_updts; /* list of updates to be added to the update sequence tree, one per thread-group */
@ -229,7 +229,7 @@ struct stktable {
unsigned int localupdate; /* uses updt_lock */
struct tasklet *updt_task;/* tasklet responsible for pushing the pending updates into the tree */
THREAD_ALIGN(64);
THREAD_ALIGN();
/* this lock is heavily used and must be on its own cache line */
__decl_thread(HA_RWLOCK_T updt_lock); /* lock protecting the updates part */

View File

@ -91,7 +91,7 @@ extern struct pool_head *pool_head_task;
extern struct pool_head *pool_head_tasklet;
extern struct pool_head *pool_head_notification;
__decl_thread(extern HA_RWLOCK_T wq_lock THREAD_ALIGNED(64));
__decl_thread(extern HA_RWLOCK_T wq_lock THREAD_ALIGNED());
void __tasklet_wakeup_on(struct tasklet *tl, int thr);
struct list *__tasklet_wakeup_after(struct list *head, struct tasklet *tl);

View File

@ -51,7 +51,7 @@
/* declare a self-initializing spinlock, aligned on a cache line */
#define __decl_aligned_spinlock(lock) \
HA_SPINLOCK_T (lock) __attribute__((aligned(64))) = 0;
HA_SPINLOCK_T (lock) ALIGNED(64) = 0;
/* declare a self-initializing rwlock */
#define __decl_rwlock(lock) \
@ -59,7 +59,7 @@
/* declare a self-initializing rwlock, aligned on a cache line */
#define __decl_aligned_rwlock(lock) \
HA_RWLOCK_T (lock) __attribute__((aligned(64))) = 0;
HA_RWLOCK_T (lock) ALIGNED(64) = 0;
#else /* !USE_THREAD */
@ -72,7 +72,7 @@
/* declare a self-initializing spinlock, aligned on a cache line */
#define __decl_aligned_spinlock(lock) \
HA_SPINLOCK_T (lock) __attribute__((aligned(64))); \
HA_SPINLOCK_T (lock) THREAD_ALIGNED(); \
INITCALL1(STG_LOCK, ha_spin_init, &(lock))
/* declare a self-initializing rwlock */
@ -82,7 +82,7 @@
/* declare a self-initializing rwlock, aligned on a cache line */
#define __decl_aligned_rwlock(lock) \
HA_RWLOCK_T (lock) __attribute__((aligned(64))); \
HA_RWLOCK_T (lock) THREAD_ALIGNED(); \
INITCALL1(STG_LOCK, ha_rwlock_init, &(lock))
#endif /* USE_THREAD */

View File

@ -60,7 +60,6 @@ extern int thread_cpus_enabled_at_boot;
/* Only way found to replace variables with constants that are optimized away
* at build time.
*/
enum { all_tgroups_mask = 1UL };
enum { tid_bit = 1UL };
enum { tid = 0 };
enum { tgid = 1 };
@ -208,7 +207,6 @@ void wait_for_threads_completion();
void set_thread_cpu_affinity();
unsigned long long ha_get_pthread_id(unsigned int thr);
extern volatile unsigned long all_tgroups_mask;
extern volatile unsigned int rdv_requests;
extern volatile unsigned int isolated_thread;
extern THREAD_LOCAL unsigned int tid; /* The thread id */

View File

@ -42,7 +42,7 @@ struct thread_set {
ulong abs[(MAX_THREADS + LONGBITS - 1) / LONGBITS];
ulong rel[MAX_TGROUPS];
};
ulong grps; /* bit field of all non-empty groups, 0 for abs */
ulong nbgrps; /* Number of thread groups, 0 for abs */
};
/* tasklet classes */
@ -86,7 +86,7 @@ struct tgroup_info {
/* pad to cache line (64B) */
char __pad[0]; /* unused except to check remaining room */
char __end[0] __attribute__((aligned(64)));
char __end[0] THREAD_ALIGNED();
};
/* This structure describes the group-specific context (e.g. active threads
@ -103,7 +103,7 @@ struct tgroup_ctx {
/* pad to cache line (64B) */
char __pad[0]; /* unused except to check remaining room */
char __end[0] __attribute__((aligned(64)));
char __end[0] THREAD_ALIGNED();
};
/* This structure describes all the per-thread info we need. When threads are
@ -124,7 +124,7 @@ struct thread_info {
/* pad to cache line (64B) */
char __pad[0]; /* unused except to check remaining room */
char __end[0] __attribute__((aligned(64)));
char __end[0] THREAD_ALIGNED();
};
/* This structure describes all the per-thread context we need. This is
@ -150,7 +150,8 @@ struct thread_ctx {
struct list buffer_wq[DYNBUF_NBQ]; /* buffer waiters, 4 criticality-based queues */
struct list pool_lru_head; /* oldest objects in thread-local pool caches */
struct list streams; /* list of streams attached to this thread */
struct list quic_conns; /* list of active quic-conns attached to this thread */
struct list quic_conns_fe; /* list of active FE quic-conns attached to this thread */
struct list quic_conns_be; /* list of active BE quic-conns attached to this thread */
struct list quic_conns_clo; /* list of closing quic-conns attached to this thread */
struct list queued_checks; /* checks waiting for a connection slot */
struct list tasklets[TL_CLASSES]; /* tasklets (and/or tasks) to run, by class */

View File

@ -77,7 +77,7 @@ static inline int thread_set_nth_group(const struct thread_set *ts, int n)
{
int i;
if (ts->grps) {
if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--)
return i + 1;
@ -95,7 +95,7 @@ static inline ulong thread_set_nth_tmask(const struct thread_set *ts, int n)
{
int i;
if (ts->grps) {
if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--)
return ts->rel[i];
@ -111,7 +111,7 @@ static inline void thread_set_pin_grp1(struct thread_set *ts, ulong mask)
{
int i;
ts->grps = 1;
ts->nbgrps = 1;
ts->rel[0] = mask;
for (i = 1; i < MAX_TGROUPS; i++)
ts->rel[i] = 0;

View File

@ -47,23 +47,6 @@
/* return the largest possible integer of type <ret>, with all bits set */
#define MAX_RANGE(ret) (~(typeof(ret))0)
/* DEFVAL() returns either the second argument as-is, or <def> if absent. This
* is for use in macros arguments.
*/
#define DEFVAL(_def,...) _FIRST_ARG(NULL, ##__VA_ARGS__, (_def))
/* DEFNULL() returns either the argument as-is, or NULL if absent. This is for
* use in macros arguments.
*/
#define DEFNULL(...) DEFVAL(NULL, ##__VA_ARGS__)
/* DEFZERO() returns either the argument as-is, or 0 if absent. This is for
* use in macros arguments.
*/
#define DEFZERO(...) DEFVAL(0, ##__VA_ARGS__)
#define _FIRST_ARG(a, b, ...) b
/* options flags for parse_line() */
#define PARSE_OPT_SHARP 0x00000001 // '#' ends the line
#define PARSE_OPT_BKSLASH 0x00000002 // '\' escapes chars

View File

@ -1490,4 +1490,6 @@ int path_base(const char *path, const char *base, char *dst, char **err);
void ha_freearray(char ***array);
void ha_memset_s(void *s, int c, size_t n);
#endif /* _HAPROXY_TOOLS_H */

View File

@ -33,7 +33,7 @@
#ifdef CONFIG_PRODUCT_BRANCH
#define PRODUCT_BRANCH CONFIG_PRODUCT_BRANCH
#else
#define PRODUCT_BRANCH "3.3"
#define PRODUCT_BRANCH "3.4"
#endif
#ifdef CONFIG_PRODUCT_STATUS

View File

@ -63,7 +63,7 @@
* the same split bit as its parent node, it is necessary its associated leaf
*
* When descending along the tree, it is possible to know that a search key is
* not present, because its XOR with both of the branches is stricly higher
* not present, because its XOR with both of the branches is strictly higher
* than the inter-branch XOR. The reason is simple : the inter-branch XOR will
* have its highest bit set indicating the split bit. Since it's the bit that
* differs between the two branches, the key cannot have it both set and

reg-tests/checks/certs Symbolic link
View File

@ -0,0 +1 @@
../ssl/certs/

View File

@ -1 +0,0 @@
../ssl/common.pem

View File

@ -39,7 +39,7 @@ haproxy htst -conf {
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe1
bind "fd@${fe1}" ssl crt ${testdir}/common.pem
bind "fd@${fe1}" ssl crt ${testdir}/certs/common.pem
frontend fe2
bind "fd@${fe2}"

View File

@ -45,10 +45,10 @@ haproxy htst -conf {
server fe1 ${htst_fe1_addr}:${htst_fe1_port}
frontend fe1
bind "fd@${fe1}" ssl crt ${testdir}/common.pem curves P-256:P-384
bind "fd@${fe1}" ssl crt ${testdir}/certs/common.pem curves P-256:P-384
frontend fe3
bind "fd@${fe3}" ssl crt ${testdir}/common.pem
bind "fd@${fe3}" ssl crt ${testdir}/certs/common.pem
} -start
haproxy h1 -conf {

View File

@ -62,7 +62,7 @@ haproxy htst -conf {
server fe1 ${htst_fe1_addr}:${htst_fe1_port}
frontend fe1
bind "fd@${fe1}" ssl crt ${testdir}/common.pem
bind "fd@${fe1}" ssl crt ${testdir}/certs/common.pem
} -start

View File

@ -60,15 +60,15 @@ haproxy h1 -conf {
frontend fe1
option httplog
log ${S1_addr}:${S1_port} len 2048 local0 debug err
bind "fd@${fe1}" ssl crt ${testdir}/common.pem
bind "fd@${fe1}" ssl crt ${testdir}/certs/common.pem
use_backend be1
frontend fe2
bind "fd@${fe2}" ssl crt ${testdir}/common.pem
bind "fd@${fe2}" ssl crt ${testdir}/certs/common.pem
use_backend be2
frontend fe3
bind "fd@${fe3}" ssl crt ${testdir}/common.pem
bind "fd@${fe3}" ssl crt ${testdir}/certs/common.pem
use_backend be3
} -start
@ -108,19 +108,19 @@ haproxy h2 -conf {
option httpchk OPTIONS * HTTP/1.1
http-check send hdr Host www
log ${S2_addr}:${S2_port} daemon
server srv1 ${h1_fe1_addr}:${h1_fe1_port} ssl crt ${testdir}/common.pem verify none check
server srv1 ${h1_fe1_addr}:${h1_fe1_port} ssl crt ${testdir}/certs/common.pem verify none check
backend be4
option log-health-checks
log ${S4_addr}:${S4_port} daemon
server srv2 ${h1_fe2_addr}:${h1_fe2_port} ssl crt ${testdir}/common.pem verify none check-ssl check
server srv2 ${h1_fe2_addr}:${h1_fe2_port} ssl crt ${testdir}/certs/common.pem verify none check-ssl check
backend be6
option log-health-checks
option httpchk OPTIONS * HTTP/1.1
http-check send hdr Host www
log ${S6_addr}:${S6_port} daemon
server srv3 127.0.0.1:80 crt ${testdir}/common.pem verify none check check-ssl port ${h1_fe3_port} addr ${h1_fe3_addr}:80
server srv3 127.0.0.1:80 crt ${testdir}/certs/common.pem verify none check check-ssl port ${h1_fe3_port} addr ${h1_fe3_addr}:80
} -start
syslog S1 -wait

reg-tests/compression/certs Symbolic link
View File

@ -0,0 +1 @@
../ssl/certs/

View File

@ -1 +0,0 @@
../ssl/common.pem

View File

@ -22,7 +22,7 @@ defaults
mode http
frontend main-https
bind "fd@${fe1}" ssl crt ${testdir}/common.pem
bind "fd@${fe1}" ssl crt ${testdir}/certs/common.pem
compression algo gzip
compression type text/html text/plain application/json application/javascript
compression offload

View File

@ -1 +0,0 @@
../ssl/ca-auth.crt

reg-tests/connection/certs Symbolic link
View File

@ -0,0 +1 @@
../ssl/certs/

View File

@ -1 +0,0 @@
../ssl/client1.pem

View File

@ -1 +0,0 @@
../ssl/common.pem

View File

@ -47,7 +47,7 @@ haproxy h1 -conf {
listen receiver
bind "fd@${feR}"
bind "fd@${feR_ssl}" ssl crt ${testdir}/common.pem
bind "fd@${feR_ssl}" ssl crt ${testdir}/certs/common.pem
bind "fd@${feR_proxy}" accept-proxy
http-request return status 200
http-after-response set-header http_first_request %[http_first_req]

View File

@ -24,7 +24,7 @@ haproxy h1 -conf {
server example ${h1_feR_addr}:${h1_feR_port} send-proxy-v2 proxy-v2-options unique-id ssl alpn XXX verify none
listen receiver
bind "fd@${feR}" ssl crt ${testdir}/common.pem accept-proxy
bind "fd@${feR}" ssl crt ${testdir}/certs/common.pem accept-proxy
http-request set-var(txn.proxy_unique_id) fc_pp_unique_id
http-after-response set-header proxy_unique_id %[var(txn.proxy_unique_id)]

View File

@ -29,7 +29,7 @@ backend be-reverse
server dev rhttp@ ssl sni hdr(x-name) verify none
frontend priv
bind "fd@${priv}" ssl crt ${testdir}/common.pem verify required ca-verify-file ${testdir}/ca-auth.crt alpn h2
bind "fd@${priv}" ssl crt ${testdir}/certs/common.pem verify required ca-verify-file ${testdir}/certs/ca-auth.crt alpn h2
tcp-request session attach-srv be-reverse/dev name ssl_c_s_dn(CN)
} -start
@ -45,7 +45,7 @@ defaults
listen li
bind "fd@${li}"
server h_edge "${h_edge_priv_addr}:${h_edge_priv_port}" ssl crt ${testdir}/client1.pem verify none alpn h2
server h_edge "${h_edge_priv_addr}:${h_edge_priv_port}" ssl crt ${testdir}/certs/client1.pem verify none alpn h2
} -start
# Run a client through private endpoint

View File

@ -0,0 +1,85 @@
varnishtest "aes_cbc converter Test"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL)'"
feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
feature ignore_unknown_macro
server s1 {
rxreq
txresp -hdr "Connection: close"
} -repeat 2 -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.
tune.idle-pool.shared off
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${fe}"
http-request set-var(txn.plain) str("Hello from HAProxy AES-CBC")
http-request set-var(txn.short_nonce) str("MTIzNDU2Nzg5MDEy")
http-request set-var(txn.nonce) str("MTIzNDU2Nzg5MDEyMzQ1Ng==")
http-request set-var(txn.key) str("Zm9vb2Zvb29mb29vb29vbw==")
# AES-CBC enc with vars + dec with strings
http-request set-var(txn.encrypted1) var(txn.plain),aes_cbc_enc(128,txn.nonce,txn.key),base64
http-after-response set-header X-Encrypted1 %[var(txn.encrypted1)]
http-request set-var(txn.decrypted1) var(txn.encrypted1),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==")
http-after-response set-header X-Decrypted1 %[var(txn.decrypted1)]
# AES-CBC enc with strings + dec with vars
http-request set-var(txn.encrypted2) var(txn.plain),aes_cbc_enc(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw=="),base64
http-after-response set-header X-Encrypted2 %[var(txn.encrypted2)]
http-request set-var(txn.decrypted2) var(txn.encrypted2),b64dec,aes_cbc_dec(128,txn.nonce,txn.key)
http-after-response set-header X-Decrypted2 %[var(txn.decrypted2)]
# AES-CBC + AAD enc with vars + dec with strings
http-request set-var(txn.aad) str("dGVzdAo=")
http-request set-var(txn.encrypted3) var(txn.plain),aes_cbc_enc(128,txn.nonce,txn.key,txn.aad),base64
http-after-response set-header X-Encrypted3 %[var(txn.encrypted3)]
http-request set-var(txn.decrypted3) var(txn.encrypted3),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==","dGVzdAo=")
http-after-response set-header X-Decrypted3 %[var(txn.decrypted3)]
# AES-CBC + AAD enc with strings + enc with strings
http-request set-var(txn.encrypted4) var(txn.plain),aes_cbc_enc(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==","dGVzdAo="),base64
http-after-response set-header X-Encrypted4 %[var(txn.encrypted4)]
http-request set-var(txn.decrypted4) var(txn.encrypted4),b64dec,aes_cbc_dec(128,txn.nonce,txn.key,txn.aad)
http-after-response set-header X-Decrypted4 %[var(txn.decrypted4)]
# AES-CBC enc with short nonce (var) + dec with short nonce (string)
http-request set-var(txn.encrypted5) var(txn.plain),aes_cbc_enc(128,txn.short_nonce,txn.key),base64
http-after-response set-header X-Encrypted5 %[var(txn.encrypted5)]
http-request set-var(txn.decrypted5) var(txn.encrypted5),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEy","Zm9vb2Zvb29mb29vb29vbw==")
http-after-response set-header X-Decrypted5 %[var(txn.decrypted5)]
default_backend be
backend be
server s1 ${s1_addr}:${s1_port}
} -start
client c1 -connect ${h1_fe_sock} {
txreq
rxresp
expect resp.http.x-decrypted1 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted2 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted3 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted4 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted5 == "Hello from HAProxy AES-CBC"
} -run

View File

@ -0,0 +1 @@
../ssl/certs/

View File

@ -1 +0,0 @@
../ssl/common.pem

View File

@ -22,7 +22,7 @@ haproxy hapsrv -conf {
frontend fe
bind "fd@${fe}"
bind "fd@${fessl}" ssl crt ${testdir}/common.pem alpn h2,http/1.1
bind "fd@${fessl}" ssl crt ${testdir}/certs/common.pem alpn h2,http/1.1
capture request header sec-websocket-key len 128
http-request set-var(txn.ver) req.ver
use_backend be

View File

@ -0,0 +1,201 @@
#REGTEST_TYPE=devel
# This reg-test checks the behaviour of the jwt_decrypt_secret and
# jwt_decrypt_cert converters that decode a JSON Web Encryption (JWE) token,
# checks its signature and decrypt its content (RFC 7516).
# The tokens have two tiers of encryption, one that is used to encrypt a secret
# ("alg" field of the JOSE header) and this secret is then used to
# encrypt/decrypt the data contained in the token ("enc" field of the JOSE
# header).
# This reg-test tests a subset of alg/enc combination.
#
# AWS-LC does not support A128KW algorithm so for tests that use it, we will
# have a hardcoded "AWS-LC UNMANAGED" value put in the response header instead
# of the decrypted contents.
varnishtest "Test the 'jwt_decrypt' functionalities"
feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(1.1.1)'"
feature ignore_unknown_macro
server s1 -repeat 10 {
rxreq
txresp
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif
tune.ssl.capture-buffer-size 1
stats socket "${tmpdir}/h1/stats" level admin
crt-base "${testdir}"
key-base "${testdir}"
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
crt-store
# Private key built out of following JWK:
# { "kty": "RSA", "e": "AQAB", "n": "wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w", "kid": "ff3c5c96-392e-46ef-a839-6ff16027af78", "d": "b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ", "p": "8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0", "q": "zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M", "dp": "1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE", "dq": "kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM", "qi": "j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk" }
load crt rsa1_5.pem key rsa1_5.key jwt on
# Private key built out of following JWK:
# { "kty": "RSA", "e": "AQAB", "n": "wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w", "kid": "ff3c5c96-392e-46ef-a839-6ff16027af78", "d": "b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ", "p": "8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0", "q": "zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M", "dp": "1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE", "dq": "kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM", "qi": "j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk" }
load crt rsa_oeap.pem key rsa_oeap.key jwt on
listen main-fe
bind "fd@${mainfe}"
use_backend secret_based_alg if { path_beg /secret }
use_backend pem_based_alg if { path_beg /pem }
default_backend dflt
backend secret_based_alg
http-request set-var(txn.jwe) http_auth_bearer
http-request set-var(txn.secret) hdr(X-Secret),ub64dec,base64
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_secret(txn.secret)
.if ssllib_name_startswith(AWS-LC)
acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m str "A128KW"
http-request set-var(txn.decrypted) str("AWS-LC UNMANAGED") if aws_unmanaged
.endif
http-response set-header X-Decrypted %[var(txn.decrypted)]
server s1 ${s1_addr}:${s1_port}
backend pem_based_alg
http-request set-var(txn.jwe) http_auth_bearer
http-request set-var(txn.pem) hdr(X-PEM)
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_cert(txn.pem)
http-after-response set-header X-Decrypted %[var(txn.decrypted)]
server s1 ${s1_addr}:${s1_port}
backend dflt
server s1 ${s1_addr}:${s1_port}
} -start
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
client c1_1 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == "Setec Astronomy"
} -run
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
# Token is modified to have an invalid tag
client c1_2 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8v" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == ""
} -run
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
# Wrong secret
client c1_3 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: zMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == ""
} -run
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"3921VrO5TrLvPQ-NFLlghQ"}
client c2_1 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_MA" -hdr "X-Secret: 3921VrO5TrLvPQ-NFLlghQ"
rxresp
expect resp.http.x-decrypted ~ "(Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo\\. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt\\. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem\\. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur\\? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur\\?|AWS-LC UNMANAGED)"
} -run
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"3921VrO5TrLvPQ-NFLlghQ"}
# Token is modified to have an invalid tag
client c2_2 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_Ma" -hdr "X-Secret: 3921VrO5TrLvPQ-NFLlghQ"
rxresp
expect resp.http.x-decrypted ~ "(|AWS-LC UNMANAGED)"
} -run
#ALG: A256GCMKW
#ENC: A256CBC-HS512
#KEY: {"k":"vof8hNUaHiMw_0o3EGVPtBOPDDWJ62b8kQWE2ufSjIE","kty":"oct"}
client c3 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIiwiaXYiOiJRclluZUNxVmVldExzN1FKIiwidGFnIjoieFEyeFI2SHdBUngzeDJUdFg5UFVSZyJ9.wk4eJtdTKOPsic4IBtVcppO6Sp6LfXmxHzBvHZtU0Sk7JCVqhAghkeAw0qWJ5XsdwSneIlZ4rGygtnafFl4Thw.ylzjPBsgJ4qefDQZ_jUVpA.xX0XhdL4KTSZfRvHuZD1_Dh-XrfZogRsBHpgxkDZdYk.w8LPVak5maNeQpSWgCIGGsj26SLQZTx6nAmkvDQKFIA" -hdr "X-Secret: vof8hNUaHiMw_0o3EGVPtBOPDDWJ62b8kQWE2ufSjIE"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
} -run
# RFC7516 JWE
# https://datatracker.ietf.org/doc/html/rfc7516#appendix-A.3
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"GawgguFyGrWKav7AX4VKUg" }
client c4 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.6KB707dM9YTIgHtLvtgWQ8mKwboJW3of9locizkDTHzBC2IlrT1oOQ.AxY8DCtDaGlsbGljb3RoZQ.KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY.U0m_YmjN04DJvceFICbCVQ" -hdr "X-Secret: GawgguFyGrWKav7AX4VKUg"
rxresp
expect resp.http.x-decrypted ~ "(Live long and prosper\\.|AWS-LC UNMANAGED)"
} -run
#ALG: A256GCMKW
#ENC: A192CBC-HS384
#KEY: {"k":"vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw","kty":"oct"}
client c5 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-Secret: vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
} -run
#ALG: RSA1_5
#ENC: A256GCM
client c6 -connect ${h1_mainfe_sock} {
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBMV81IiwgImVuYyI6ICJBMjU2R0NNIn0.ew8AbprGcd_J73-CZPIsE1YonD9rtcL7VCuOOuVkrpS_9UzA9_kMh1yw20u-b5rKJAhmFMCQPXl44ro6IzOeHu8E2X_NlPEnQfyNVQ4R1HB_E9sSk5BLxOH3aHkVUh0I-e2eDDj-pdI3OrdjZtnZEBeQ7tpMcoBEbn1VGg7Pmw4qtdS-0qnDSs-PttU-cejjgPUNLRU8UdoRVC9uJKacJms110QugDuFuMYTTSU2nbIYh0deCMRAuKGWt0Ii6EMYW2JaJ7JfXag59Ar1uylQPyEVrocnOsDuB9xnp2jd796qCPdKxBK9yKUnwjal4SQpYbutr40QzG1S4MsKaUorLg.0el2ruY0mm2s7LUR.X5RI6dF06Y_dbAr8meb-6SG5enj5noto9nzgQU5HDrYdiUofPptIf6E-FikKUM9QR4pY9SyphqbPYeAN1ZYVxBrR8tUf4Do2kw1biuuRAmuIyytpmxwvY946T3ctu1Zw3Ymwe-jWXX08EngzssvzFOGT66gkdufrTkC45Fkr0RBOmWa5OVVg_VR6LwcivtQMmlArlrwbaDmmLqt_2p7afT0UksEz4loq0sskw-p7GbhB2lpzXoDnijdHrQkftRbVCiDbK4-qGr7IRFb0YOHvyVFr-kmDoJv2Zsg_rPKV1LkYmPJUbVDo9T3RAcLinlKPK4ZPC_2bWj3M9BvfOq1HeuyVWzX2Cb1mHFdxXFGqaLPfsE0VOfn0GqL7oHVbuczYYw2eKdmiw5LEMwuuJEdYDE9IIFEe8oRB4hNZ0XMYB6oqqZejD0Fh6nqlj5QUrTYpTSE-3LkgK2zRJ0oZFXZyHCB426bmViuE0mXF7twkQep09g0U35-jFBZcSYBDvZZL1t5d_YEQ0QtO0mEeEpGb0Pvk_EsSMFib7NxClz4_rdtwWCFuM4uFOS5vrQMiMqi_TadhLxrugRFhJpsibuScCiJ7eNDrUvwSWEwv1U593MUX3guDq_ONOo_49EOJSyRJtQCNC6FW6GLWSz9TCo6g5LCnXt-pqwu0Iymr7ZTQ3MTsdq2G55JM2e6SdG43iET8r235hynmXHKPUYHlSjsC2AEAY_pGDO0akIhf4wDVIM5rytn-rjQf-29ZJp05g6KPe-EaN1C-X7aBGhgAEgnX-iaXXbotpGeKRTNj2jAG1UrkYi6BGHxluiXJ8jH_LjHuxKyzIObqK8p28ePDKRL-jyNTrvGW2uorgb_u7HGmWYIWLTI7obnZ5vw3MbkjcwEd4bX5JXUj2rRsUWMlZSSFVO9Wgf7MBvcLsyF0Yqun3p0bi__edmcqNF_uuYZT-8jkUlMborqIDDCYYqIolgi5R1Bmut-gFYq6xyfEncxOi50xmYon50UulVnAH-up_RELGtCjmAivaJb8.upVY733IMAT8YbMab2PZnw" -hdr "X-PEM: ${testdir}/rsa1_5.pem"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
} -run
#ALG: RSA-OAEP
#ENC: A256GCM
client c7 -connect ${h1_mainfe_sock} {
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBLU9BRVAiLCAiZW5jIjogIkEyNTZHQ00ifQ.Os33U1HEY92lrpup2E-HNttBW26shGSCafqNbVfs1rwWB__B-0dRAiKg4OtIrIXVCN7oQMqLr9RFRO6Gb-OAPIr-59FETLSXP8K_3uNcy-jdKrpKLbv8wgisEYqBJj4BysZQjuWgUgJ7Dvx28_zIUg0FJGOwxtpX2SUWxEgw5CPRgRrENJDJ2EYA6wuX9SbfarhQR4uPN7pdRKZ0ZQN6_5H3H9pWJ4WNnsQ0wjChKTsdR3kHOvygiUmdYSEWGe6LBQLSBQCnQim1pr--GBOHvDf2g4Je9EDFrrO1icFDbBdJ8I4ol4ixglLEnBCTHdhYd_lVe0i5JcxxHF8hmemAYQ.IOphaFIcCosKyXcN.KEjWfV2yBKLuMLX20mtEvrQ-P_oKWkdgZabx0FgRLqjSorD7DS3aIXLMEmyrOYd4kGHKCMg2Fvg61xKvI2FsQviA5LgHtx0QKmFARacP8kBl8vFPMEg2WtW0rIImTc1tj4C0PM9A0TbyDohtcoN9UYosrw5GyPOlHwIFwWosLA9WHqp00MAfAu3JOa4CwuMXsORGzeIyb7X-jg_bbG_9xkVUsgZpaCUX447a3QmKLJVBfQpeEO_PuYbds-MvIU9m4uYzWplNeHnf3B1dh9p6o4Ml6OEp-0G_4Nd4UmMz_g9A-TatH-A__MAC9Mx1Wj1cDn5M3upcrAyu2JLQ48A-Qa2ocElhQ4ODzwbgbC5PS34Mlm_x18zqL-0Fw3ckhzgoAyDBoRO6SaNmsKb1wQ6QGbwBJx1jC51hpzBHRv3pUlegsHXgq7OWN1x1tDJvRc_DHMa23Mheg-aKJcliP846Dduq2_Hve3md30C0hbrP1OMF5ZJSVu4kUo7UFaZA_6hhcoGvvyEGDMnPH5SznrrsyHGIre-WOdXCObZNkDV6Qn0sqAP_vkj_6Dj965W8ksCKk6ye409cB4mnqfLv3dUtGLV8o8VtCLIEs2G62lwaDGrX4HB-pZ6jea2qH6UvgwK5WT-VzrypSQcVoWCKopln2gtO1JROKmbOiL9f8dfbLKqYSRB6ppMxh5Euddx_eNikZfLEcXfq2Grwyrj0NLP82AFSxSYf3BpYqpOhSxca0gx0psb8tCwq3sqmh5Bp_qmKIOthXb6k-9R_Ng6cRTp132OnDEXEDtvDv59WJWHuo4qACyrg7jUlrh4dAYwYke1yBgVcqK5JwVnmKDjnx9vRGFSD9esrL8MpGiP6uUeN3AXiv7OSb83hDdwTTQU5nvitHWKS72Mb1FRPdDXUxooiyShAkV5Spo3YNl4EHkm6lnlJ-kC3BFlxYqYd5a_vtqA-ywR7ozWo1GtMBjYycq2s9Kp8FnqI2cTWobOCjMxaej4CXaRA4IwhjC1u6OTCvxP70MWYT0pJPjUS.k9i0Lw9MfJs4Rp-_uwIEeA" -hdr "X-PEM: ${testdir}/rsa_oeap.pem"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
} -run
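
The #ALG/#ENC comments above name the values carried in the JOSE protected header, i.e. the first dot-separated segment of each Bearer token. As an aside (not part of this diff), a minimal Python sketch can base64url-decode that segment to confirm the advertised "alg" and "enc":

    import base64
    import json

    def jwe_protected_header(token: str) -> dict:
        # A JWE compact serialization has five base64url segments:
        # protected header, encrypted key, IV, ciphertext, authentication tag.
        protected = token.split(".")[0]
        # Restore the padding that base64url encoding strips before decoding.
        padded = protected + "=" * (-len(protected) % 4)
        return json.loads(base64.urlsafe_b64decode(padded))

    # The RSA-OAEP test token above starts with
    # "eyJhbGciOiAiUlNBLU9BRVAiLCAiZW5jIjogIkEyNTZHQ00ifQ",
    # which decodes to {"alg": "RSA-OAEP", "enc": "A256GCM"}.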

reg-tests/jwt/rsa1_5.key Normal file

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUG
rASj/OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459
UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxN
e1V/UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoN
obv3HEuqgZ3s6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKt
cqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9wIDAQABAoIBAG/YV30PJTrcPJl9
Xaaj3KBJRoW3M8//sats5wl0KWwT0mQVHXWb3IUUh+aUkijxB5YG9wkhiHaS6rAQ
r9Av15gjPVYjfLqrGIAzvbgiyAyuaZVrbW5KgPxLn71tN0fVICClpji91uIOLfgt
pgW/GgcmhhlTYy55W+otfOrgbDxpL2nix20HEUCgL4TlE3jYsoogm1BicApuGrzI
ma7M9a/NWNWeYs6NBEcXWcpsTNxUWfb40wfug7Yrb01152gJtSU7ukyKY9/Ltppz
S5BjG35TqYHpTDSgWVcUpn6GyGxTfAh1XNVKiythJbGfUQkADYJDB5TJDjI/l09M
B2t1QVECgYEA8mgriveKJAp1S7SHqirQAfZafxVuAK/A2QBYPsAUhikfBOvN0HtZ
jgurPXSJSdgR8KbWV7ZjdJM/eOivIb/XiuAaUdIOXbLRet7t9a/NJtmX9iybhoa9
VOJFMBq/rbnbbte2kq0+FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0CgYEAzbbT
v5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1Z
GUC2wyH8mstO5tV34Eug3fnNrbnxFUEE/ZB/njs/rtZnwz57AoUXOXVnd194seIZ
F9PjdzZcuwXwXbrZ2RSVW8if/ZH5OVYEM1EsA9MCgYEA1BaIYmIKn1X3InGlcSFc
NRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln/5dqLtZkx5VM/UE/GE7yzc6BZOwBxtO
ftdsr8HVh+14ksSR9rAGEsO2zVBiEuW4qZf/aQM+ScWfU++wcczZ0dT+Ou8P87Bk
9K9fjcn0PeaLoz3WTPepzNECgYEAkYw2u4/UmWvcXVOeV/VKJ5aQZkJ6/sxTpodR
BMPyQmkMHKcW4eKU1mcJju/deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4
kPNI6Aphn3GBjunJHNpPuU6w+wvomGsxd+NqQDGNYKHuFFMcyXO/zWXglQdP/1o1
tJ1M+BMCgYEAj94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA
4snTtAS/B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3
IqLocIFt5Vbsg/PWYpFSR7re6FRbF9EYOM7F2+HRv1idxKCWoyQfBqk=
-----END RSA PRIVATE KEY-----

reg-tests/jwt/rsa1_5.pem Normal file

@@ -0,0 +1,21 @@
-----BEGIN CERTIFICATE-----
MIIDizCCAnOgAwIBAgIUWKLX2P4KNDw9kBROSjFXWa/kjtowDQYJKoZIhvcNAQEL
BQAwVTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEOMAwGA1UEAwwFYWEuYmIwHhcNMjUx
MjA0MTYyMTE2WhcNMjYxMjA0MTYyMTE2WjBVMQswCQYDVQQGEwJBVTETMBEGA1UE
CAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRk
MQ4wDAYDVQQDDAVhYS5iYjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMLKiW6KcdfDUBWC2DquGXjEhPPchohGp5FKX9uclfEKq6ClBqwEo/znqk1lI8Dx
ikzdbxhRbh2MYiXgFEB9qkD5oPTqpOvgEwRCp/QjimJ4kOeOfVMQsJmQNkS+rn4a
zcqtCRdxn15IywwX4VPcySRDoZJ7ANC/3Pl+ACN3fzWxXTKsTXtVf1FgJS3GyncD
uogvXwL8FJg0MYMukdAwtQjKLkKXpzEXAC2kh2q9XJuCvrMqDaG79xxLqoGd7Ovo
TjkqmrqgDtBlKMz6WiAOznq7skZebE1k9K/os9/W3ISA9myCrXKi4hXEmDywXMZr
GkAqkZRoZr+rLY0wi1dck/cCAwEAAaNTMFEwHQYDVR0OBBYEFD+wduQlsKCoxfO5
U1W7Urqs+oTbMB8GA1UdIwQYMBaAFD+wduQlsKCoxfO5U1W7Urqs+oTbMA8GA1Ud
EwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAANfh6jY8+3XQ16SH7Pa07MK
ncnQuZqMemYUQzieBL15zftdpd0vYjOfaN5UAQ7ODVAb/iTF4nnADl0VwOocqEiR
vfaqwJTmKiNDjyIp1SJjhkRcYu3hmDXTZOzhuFxoZALe7OzWFgSjf3fX2IOOBfH+
HBqviTuMi53oURWv/ISPXk+Dr7LaCmm1rEjRq8PINJ2Ni6cN90UvHOrHdl+ty2o/
C3cQWIZrsNM6agUfiNiPCWz6x+Z4t+zP7+EorCM7CKKLGnycPUJE2I6H8bJmIHHS
ITNmUO5juLawQ5h2m5Wu/BCY3rlLU9SLrmWAAHm6lFJb0XzFgqhiCz7lxYofj8c=
-----END CERTIFICATE-----


@@ -0,0 +1,28 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUG
rASj/OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459
UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxN
e1V/UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoN
obv3HEuqgZ3s6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKt
cqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9wIDAQABAoIBAG/YV30PJTrcPJl9
Xaaj3KBJRoW3M8//sats5wl0KWwT0mQVHXWb3IUUh+aUkijxB5YG9wkhiHaS6rAQ
r9Av15gjPVYjfLqrGIAzvbgiyAyuaZVrbW5KgPxLn71tN0fVICClpji91uIOLfgt
pgW/GgcmhhlTYy55W+otfOrgbDxpL2nix20HEUCgL4TlE3jYsoogm1BicApuGrzI
ma7M9a/NWNWeYs6NBEcXWcpsTNxUWfb40wfug7Yrb01152gJtSU7ukyKY9/Ltppz
S5BjG35TqYHpTDSgWVcUpn6GyGxTfAh1XNVKiythJbGfUQkADYJDB5TJDjI/l09M
B2t1QVECgYEA8mgriveKJAp1S7SHqirQAfZafxVuAK/A2QBYPsAUhikfBOvN0HtZ
jgurPXSJSdgR8KbWV7ZjdJM/eOivIb/XiuAaUdIOXbLRet7t9a/NJtmX9iybhoa9
VOJFMBq/rbnbbte2kq0+FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0CgYEAzbbT
v5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1Z
GUC2wyH8mstO5tV34Eug3fnNrbnxFUEE/ZB/njs/rtZnwz57AoUXOXVnd194seIZ
F9PjdzZcuwXwXbrZ2RSVW8if/ZH5OVYEM1EsA9MCgYEA1BaIYmIKn1X3InGlcSFc
NRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln/5dqLtZkx5VM/UE/GE7yzc6BZOwBxtO
ftdsr8HVh+14ksSR9rAGEsO2zVBiEuW4qZf/aQM+ScWfU++wcczZ0dT+Ou8P87Bk
9K9fjcn0PeaLoz3WTPepzNECgYEAkYw2u4/UmWvcXVOeV/VKJ5aQZkJ6/sxTpodR
BMPyQmkMHKcW4eKU1mcJju/deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4
kPNI6Aphn3GBjunJHNpPuU6w+wvomGsxd+NqQDGNYKHuFFMcyXO/zWXglQdP/1o1
tJ1M+BMCgYEAj94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA
4snTtAS/B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3
IqLocIFt5Vbsg/PWYpFSR7re6FRbF9EYOM7F2+HRv1idxKCWoyQfBqk=
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,22 @@
-----BEGIN CERTIFICATE-----
MIIDjTCCAnWgAwIBAgIUHGhD07tC9adNLCkSBNrfrhFUX9IwDQYJKoZIhvcNAQEL
BQAwVTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEOMAwGA1UEAwwFYWEuYmIwIBcNMjUx
MjA1MTMxOTQ0WhgPMjA1MzA0MjIxMzE5NDRaMFUxCzAJBgNVBAYTAkFVMRMwEQYD
VQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBM
dGQxDjAMBgNVBAMMBWFhLmJiMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj/OeqTWUj
wPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459UxCwmZA2RL6u
fhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxNe1V/UWAlLcbK
dwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoNobv3HEuqgZ3s
6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKtcqLiFcSYPLBc
xmsaQCqRlGhmv6stjTCLV1yT9wIDAQABo1MwUTAdBgNVHQ4EFgQUP7B25CWwoKjF
87lTVbtSuqz6hNswHwYDVR0jBBgwFoAUP7B25CWwoKjF87lTVbtSuqz6hNswDwYD
VR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEArDl4gSwqpriAFjWcAtWE
sTLTxNgbnkARDeyhQ1dj6rj9xCccBU6WN07r639c9S0lsMb+jeQU9EJFoVtX91jM
fymumOWMDY/CYm41PkHqcF6hEup5dfAeDnN/OoDjXwgTU74Y3lF/sldeS06KorCp
O9ROyq3mM9n4EtFAAEEN2Esyy1d1CJiMYKHdYRKycMwgcu1pm9n1up4ivdgLY+BH
XhnJPuKmmU3FauYlXzfcijUPAAuJdm3PZ+i4SNGsTa49tXOkHMED31EOjaAEzuX0
rWij715QkL/RIp8lPxeAvHqxavQIDtfjojFD21Cx+jIGuNcfrGNkzNjfS7AF+1+W
jA==
-----END CERTIFICATE-----
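
Each new certificate in this diff is expected to pair with the private key added next to it. Purely as an illustration (the regtests do not run this), the third-party Python "cryptography" package can confirm that a PEM certificate and key match; the paths below are indicative only:

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Indicative paths pointing at the files added in this diff.
    with open("reg-tests/jwt/rsa1_5.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("reg-tests/jwt/rsa1_5.key", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # A matching pair shares the same RSA public numbers (modulus, exponent).
    match = cert.public_key().public_numbers() == key.public_key().public_numbers()
    print("certificate matches private key:", match)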

reg-tests/lua/certs Symbolic link

@@ -0,0 +1 @@
../ssl/certs/


@@ -1 +0,0 @@
../ssl/common.pem


@@ -32,7 +32,7 @@ haproxy h1 -conf {
frontend fe2
mode http
bind ":8443" ssl crt ${testdir}/common.pem
bind ":8443" ssl crt ${testdir}/certs/common.pem
stats enable
stats uri /


@@ -26,7 +26,7 @@ haproxy h1 -conf {
frontend fe2
mode http
bind ":8443" ssl crt ${testdir}/common.pem
bind ":8443" ssl crt ${testdir}/certs/common.pem
stats enable
stats uri /

reg-tests/peers/certs Symbolic link

@@ -0,0 +1 @@
../ssl/certs


@@ -1 +0,0 @@
../ssl/common.pem


@@ -19,8 +19,8 @@ haproxy h1 -arg "-L A" -conf {
stick-table type string size 10m store server_id,gpc0,conn_cur,conn_rate(50000) peers peers
peers peers
-default-server ssl crt ${testdir}/common.pem verify none
-bind "fd@${A}" ssl crt ${testdir}/common.pem
+default-server ssl crt ${testdir}/certs/common.pem verify none
+bind "fd@${A}" ssl crt ${testdir}/certs/common.pem
server A
server B ${h2_B_addr}:${h2_B_port}
server C ${h3_C_addr}:${h3_C_port}
@@ -49,8 +49,8 @@ haproxy h2 -arg "-L B" -conf {
stick-table type string size 10m store server_id,gpc0,conn_cur,conn_rate(50000) peers peers
peers peers
-default-server ssl crt ${testdir}/common.pem verify none
-bind "fd@${B}" ssl crt ${testdir}/common.pem
+default-server ssl crt ${testdir}/certs/common.pem verify none
+bind "fd@${B}" ssl crt ${testdir}/certs/common.pem
server A ${h1_A_addr}:${h1_A_port}
server B
server C ${h3_C_addr}:${h3_C_port}
@@ -78,8 +78,8 @@ haproxy h3 -arg "-L C" -conf {
stick-table type string size 10m store server_id,gpc0,conn_cur,conn_rate(50000) peers peers
peers peers
-default-server ssl crt ${testdir}/common.pem verify none
-bind "fd@${C}" ssl crt ${testdir}/common.pem
+default-server ssl crt ${testdir}/certs/common.pem verify none
+bind "fd@${C}" ssl crt ${testdir}/certs/common.pem
server A ${h1_A_addr}:${h1_A_port}
server B ${h2_B_addr}:${h2_B_port}
server C
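
The lua and peers changes follow one pattern: the per-directory common.pem symlink is dropped, a certs symlink pointing at ../ssl/certs is added, and every crt reference moves from ${testdir}/common.pem to ${testdir}/certs/common.pem. A hypothetical Python check (not part of the regtests) spells out the resulting path resolution:

    import os

    # Hypothetical check, run from the repository root: the new "certs"
    # symlink should make certs/common.pem resolve into reg-tests/ssl/certs/.
    link = "reg-tests/peers/certs"
    print(os.readlink(link))  # expected: "../ssl/certs"
    resolved = os.path.realpath(os.path.join(link, "common.pem"))
    print(resolved.endswith("reg-tests/ssl/certs/common.pem"), resolved)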

Some files were not shown because too many files have changed in this diff.