Compare commits

...

120 Commits

Author SHA1 Message Date
William Lallemand
90c5618ed5 MEDIUM: systemd: implement directory loading
Redhat-based systems already use a CFGDIR variable to load configuration
files from a directory; this patch implements the same feature.

It now requires that /etc/haproxy/conf.d exists or the service won't be
able to start.
2026-01-16 09:55:33 +01:00
Egor Shestakov
a3ee35cbfc REORG/MINOR: cfgparse: eliminate code duplication by lshift_args()
The handling of the "no" and "default" prefix keywords contained similar
pieces of code. This duplication already caused a bug once.

No backport needed.
2026-01-16 09:09:24 +01:00
Egor Shestakov
447d73dc99 BUG/MINOR: cfgparse: fix "default" prefix parsing
Fix the left shift of args when the "default" prefix matches. The cause of
the bug was that the rightmost element was not zeroed during the shift. The
same bug was fixed for the "no" prefix by commit 0f99e3497, but was missed
for "default".

The shift of ("default", "option", "dontlog-normal")
    produced ("option", "dontlog-normal", "dontlog-normal")
  instead of ("option", "dontlog-normal", "")

As an example, a valid config line:
    default option dontlog-normal

caused a parse error:
[ALERT]    (32914) : config : parsing [bug-default-prefix.cfg:22] : 'option dontlog-normal' cannot handle unexpected argument 'dontlog-normal'.

The patch should be backported to all stable versions, since the absence of
zeroing was introduced with the "default" keyword.
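
A minimal sketch of the fix (the array size and names are illustrative,
not the actual cfgparse code): when shifting the arguments left by one,
the freed rightmost slot must be reset, otherwise the last argument is
seen twice:

  /* illustrative: shift args[] left by one and zero the freed slot */
  for (i = 0; i < MAX_LINE_ARGS - 1; i++)
          args[i] = args[i + 1];
  args[MAX_LINE_ARGS - 1] = "";  /* the missing zeroing caused the bug */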
2026-01-16 09:09:19 +01:00
Remi Tricot-Le Breton
362ff2628f REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
Many tests use the A128KW algorithm, which is not supported by AWS-LC.
Instead of removing those tests, a hardcoded value is set by default in
this case.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
aba18bac71 MINOR: jwe: Some algorithms not supported by AWS-LC
AWS-LC does not have EVP_aes_128_wrap or EVP_aes_192_wrap, so the A128KW
and A192KW algorithms will not be supported for JWE token decryption.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
39da1845fc DOC: jwe: Add doc for jwt_decrypt converters
Add doc for jwt_decrypt_secret and jwt_decrypt_cert converters.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
4b73a3ed29 REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
Test the new jwt_decrypt converters.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
e3a782adb5 MINOR: jwe: Add new jwt_decrypt_cert converter
This converter checks the validity and decrypts the content of a JWE
token that has an asymmetric "alg" algorithm (RSA). In such a case, we
must provide a path to an already loaded certificate and private key
that has the "jwt" option set to "on".
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
416b87d5db MINOR: jwe: Add new jwt_decrypt_secret converter
This converter checks the validity and decrypts the content of a JWE
token that has a symmetric "alg" algorithm. In such a case, we only
require a secret as parameter in order to decrypt the token.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
2b45b7bf4f REGTESTS: ssl: Add tests for new aes cbc converters
This test mimics what was already done for the aes_gcm converters: some
data is encrypted and then directly decrypted, and we ensure that the
output is unchanged.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
c431034037 MINOR: ssl: Add new aes_cbc_enc/_dec converters
Those converters allow encrypting or decrypting data with AES in Cipher
Block Chaining mode. They work the same way as the already existing
aes_gcm_enc/_dec ones, apart from the AEAD tag notion which is not
supported in CBC mode.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
f0e64de753 MINOR: ssl: Factorize AES GCM data processing
The parameter parsing and processing, and the actual crypto part of the
aes_gcm converter, are interleaved. This patch puts the crypto parts in a
dedicated function for better reuse in the upcoming JWE processing.
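
For illustration, such a dedicated AES-GCM decrypt helper could look
roughly like the sketch below, built on the standard OpenSSL EVP API
(the name, signature and error handling of the real haproxy function
may differ):

  #include <openssl/evp.h>

  /* illustrative sketch of a factored-out AES-GCM decrypt routine */
  static int gcm_decrypt(const EVP_CIPHER *ciph, const unsigned char *key,
                         const unsigned char *iv, int ivlen,
                         unsigned char *tag, int taglen,
                         const unsigned char *in, int inlen,
                         unsigned char *out, int *outlen)
  {
          EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
          int len = 0, fin = 0, ret = 0;

          if (ctx &&
              EVP_DecryptInit_ex(ctx, ciph, NULL, NULL, NULL) &&
              EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL) &&
              EVP_DecryptInit_ex(ctx, NULL, NULL, key, iv) &&
              EVP_DecryptUpdate(ctx, out, &len, in, inlen) &&
              EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, taglen, tag) &&
              EVP_DecryptFinal_ex(ctx, out + len, &fin)) {
                  *outlen = len + fin; /* tag verified, plaintext complete */
                  ret = 1;
          }
          EVP_CIPHER_CTX_free(ctx);
          return ret;
  }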
2026-01-15 10:56:27 +01:00
Amaury Denoyelle
6870551a57 MEDIUM: proxy: force traffic on unpublished/disabled backends
A recent patch has introduced a new state for proxies: unpublished
backends. Such backends won't be eligible for traffic, thus
use_backend/default_backend rules which target them won't match and
content switching rules processing will continue.

This patch defines a new frontend keyword 'force-be-switch'. This
keyword allows ignoring the unpublished or disabled state. Thus,
use_backend/default_backend will match even if the target backend is
unpublished or disabled. This is useful to be able to test a backend
instance before exposing it outside.

This new keyword is converted into a persist rule of the new type
PERSIST_TYPE_BE_SWITCH, stored in the proxy's persist_rules list member.
This is the only persist rule applicable to the frontend side. Prior to
this commit, the persist_rules list of pure frontend proxies was always
empty.

This new feature requires an adjustment in process_switching_rules().
Now, when a use_backend/default_backend rule matches a non-eligible
backend, the frontend persist_rules are inspected to detect if a
force-be-switch rule is present, so that the backend may be selected.
2026-01-15 09:08:19 +01:00
Amaury Denoyelle
16f035d555 MINOR: cfgparse: adapt warnif_cond_conflicts() error output
Utility function warnif_cond_conflicts() is used when parsing an ACL.
Previously, the function directly called ha_warning() to report an error.
Change the function so that it now takes the error message as an argument.
The caller can then output it as desired.

This change is necessary to use the function when parsing a keyword
registered as cfg_kw_list. The next patch will reuse it.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
82907d5621 MINOR: stats: report BE unpublished status
A previous patch defines a new proxy status: unpublished backends. This
patch extends it by changing the proxy status reported in stats. If
unpublished is set, an extra "(UNPUB)" is added to the field.

The HTML stats page is also slightly updated: if a backend is up but
unpublished, its status will be reported in orange.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
797ec6ede5 MEDIUM: proxy: implement publish/unpublish backend CLI
Define a new set of CLI commands: publish/unpublish backend <be>. The
objective is to be able to change the status of a backend to
unpublished. Such a backend is considered ineligible for traffic: this
allows skipping use_backend rules which target it.

Note that contrary to disabled/stopped proxies, an unpublished backend
still has server checks running on it.

Internally, a new proxy flag PR_FL_BE_UNPUBLISHED is defined. The CLI
command handlers for "publish backend" and "unpublish backend" are
executed under thread isolation. This guarantees that the flag can
safely be set or removed in the CLI handlers, and read during
content-switching processing.
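
A rough sketch of the isolation pattern described above (the flag name
comes from this commit; the handler shape is assumed):

  /* sketch: toggle the flag while all other threads are stopped */
  thread_isolate();
  if (publish)
          px->flags &= ~PR_FL_BE_UNPUBLISHED;
  else
          px->flags |= PR_FL_BE_UNPUBLISHED;
  thread_release();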
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
21fb0a3f58 MEDIUM: proxy: do not select a backend if disabled
A proxy can be marked as disabled using the keyword with the same name.
The doc mentions that it won't process any traffic. However, this is not
really the case for backends as they may still be selected via switching
rules during stream processing.

In fact, currently, processing for disabled backends is conducted up to
assign_server(). However, no eligible server is found at this stage,
resulting in a connection closure or an HTTP 503, which is expected. So
in the end, servers in disabled backends won't receive any traffic. But
this is only because post-parsing steps are not performed on such
backends. Thus, this can be considered functional, but only via
side effects.

This patch clarifies the handling of disabled backends, so that they are
never selected via switching rules. Now, process_switching_rules() will
ignore disabled backends and continue rule evaluation.

As this is a behavior change, this patch is labelled as medium. The
documentation of use_backend is updated accordingly.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2d26d353ce REGTESTS: add test on backend switching rules selection
Create a new test to ensure that switching rules selection behaves
correctly. Currently, this checks that dynamic backend switching works
as expected: if a matching rule resolves to a nonexistent backend, the
default backend is used instead.

This regtest should be useful as switching-rules will be extended in a
future set of patches to add new abilities on backends, linked to
dynamic backend support.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
12975c5c37 MEDIUM: stream: refactor switching-rules processing
This commit rewrites the process_switching_rules() function. The
objective is to simplify backend selection so that a single unified
stream_set_backend() call is kept, both for the regular and the default
backend cases.

This patch will be useful to add new capabilities on backends, in the
context of dynamic backend support implementation.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2f6aab9211 BUG/MINOR: proxy: free persist_rules
The force-persist proxy keyword is converted into a persist_rule, stored
in the proxy's persist_rules list member. Each new rule is dynamically
allocated during parsing.

This commit fixes the memory leak on deinit due to a missing free of the
persist_rules list entries. This is done by modifying deinit_proxy():
each rule in the list is freed, along with its associated ACL condition.

This can be backported to every stable version.
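
A minimal sketch of such a deinit loop (the list and helper names follow
haproxy conventions, but the exact fields are assumed):

  /* sketch: release every persist rule attached to the proxy */
  struct persist_rule *rule, *back;

  list_for_each_entry_safe(rule, back, &px->persist_rules, list) {
          LIST_DELETE(&rule->list);
          free_acl_cond(rule->cond); /* the associated ACL condition */
          free(rule);
  }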
2026-01-15 09:08:18 +01:00
Olivier Houchard
a209c35f30 MEDIUM: thread: Turn the group mask in thread set into a group counter
If we want to be able to have more than 64 thread groups, we can no
longer use thread group masks stored as longs.
One remaining place where this is done is in struct thread_set. However,
it is not really used as a mask anywhere; all we want is a thread group
counter, so convert that mask to a counter.
2026-01-15 05:24:53 +01:00
Olivier Houchard
6249698840 BUG/MEDIUM: queues: Fix arithmetic when filling non_empty_tgids
Fix the arithmetic when pre-filling non_empty_tgids while we still have
more than 32/64 thread groups left: to get the right index, we of course
have to divide the number of thread groups by the number of bits in a
long.
This bug was introduced by commit
7e1fed4b7a8b862bf7722117f002ee91a836beb5, but hopefully was not hit,
because it requires at least as many thread groups as there are bits in
a long, which is impossible on 64-bit machines, as MAX_TGROUPS is still
32.
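
A sketch of the corrected pre-fill (global.nbtgroups and the array come
from haproxy; the loop itself is illustrative):

  /* sketch: fill whole longs first; the array index is the group count
   * divided by the number of bits in a long, not the raw group count
   */
  unsigned int grp;

  for (grp = 0; grp + sizeof(long) * 8 <= global.nbtgroups;
       grp += sizeof(long) * 8)
          non_empty_tgids[grp / (sizeof(long) * 8)] = ~0UL;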
2026-01-15 04:28:04 +01:00
Olivier Houchard
1397982599 MINOR: threads: Eliminate all_tgroups_mask.
Now that it is unused, eliminate all_tgroups_mask, as we can't use
64-bit masks to represent thread groups if we want to be able to have
more than 64 thread groups.
2026-01-15 03:46:57 +01:00
Olivier Houchard
7e1fed4b7a MINOR: queues: Turn non_empty_tgids into a long array.
In order to be able to have more than 64 thread groups, turn
non_empty_tgids into a long array, so that we have enough bits to
represent every thread group, and manipulate it with the ha_bit_*
functions.
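
The ha_bit_* helpers address an array of longs by absolute bit number,
so the usage pattern looks roughly like this (the tgid-to-bit mapping is
assumed):

  /* sketch: mark/unmark/test a thread group in the long array */
  ha_bit_set(tgid - 1, non_empty_tgids);
  ha_bit_clr(tgid - 1, non_empty_tgids);
  if (ha_bit_test(tgid - 1, non_empty_tgids)) {
          /* at least one queue entry exists for this group */
  }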
2026-01-15 03:46:57 +01:00
Aurelien DARRAGON
2ec387cdc2 BUG/MINOR: http_act: fix deinit performed on uninitialized lf_expr in release_http_map()
As reported by GH user @Lzq-001 on issue #3245, the config below would
cause haproxy to SEGFAULT after having reported an error:

  frontend 0000000
        http-request set-map %[hdr(0000)0_

The root cause is simple: in parse_http_set_map(), we set the release
function (which is responsible for clearing the lf_expr expressions used
by the action) prior to initializing the expressions, while the release
function assumes the expressions are always initialized.

For all similar actions, we already perform the init prior to setting
the related release function, but this was not the case for
parse_http_set_map(). We fix the bug by initializing the expressions
earlier.
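
In other words, the ordering has to be the following (the field names
are assumed for illustration):

  /* sketch: init the expressions first, so that release_http_map() can
   * always run safely even if parsing aborts right after this point
   */
  lf_expr_init(&rule->arg.map.key);
  lf_expr_init(&rule->arg.map.value);
  rule->release_ptr = release_http_map; /* only installed after init */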

Thanks to @Lzq-001 for having reported the issue and provided a simple
reproducer.

It should be backported to all stable versions. Note that for versions
prior to 3.0, lf_expr_init() should be replaced by LIST_INIT(); see
6810c41 ("MEDIUM: tree-wide: add logformat expressions wrapper").
2026-01-14 20:05:39 +01:00
Olivier Houchard
7f4b053b26 MEDIUM: counters: mostly revert da813ae4d7cb77137ed
Contrary to what was previously believed, there are corner cases where
the counters may not be allocated, and we may want to make them optional
at a later date, so we have to check if those counters are there.
However, just checking that shared.tg is non-NULL is enough; we can then
assume that shared.tg[tgid - 1] has properly been allocated too.
Also modify the various COUNTER_SHARED_* macros to make sure they check
for that too.
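
The resulting pattern is roughly the following (the counter and field
names are assumed; only shared.tg itself needs the NULL check):

  /* sketch: skip the update entirely when counters were never allocated */
  if (counters->shared.tg)
          HA_ATOMIC_INC(&counters->shared.tg[tgid - 1]->cum_conn);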
2026-01-14 12:39:14 +01:00
Amaury Denoyelle
7aa839296d BUG/MEDIUM: quic: fix ACK ECN frame parsing
ACK frames are either of type 0x02 or 0x03. The latter is an indication
that the frame contains extra ECN-related fields. In the haproxy QUIC
stack, this is considered a different frame type, set to QUIC_FT_ACK_ECN,
with its own set of builder/parser functions.

This patch fixes the ACK ECN parsing function, which suffered from two
issues. First, 'first ACK range' and 'ACK ranges' were inverted. Then,
the three remaining ECN fields were simply ignored by the parsing
function.

This issue can cause desynchronization in the frame parsing code, which
may have various consequences. Most of the time, the connection will be
aborted by haproxy due to invalid frame content being read.

Note that this issue was not detected earlier as most clients do not
enable ECN support if the peer is not able to emit ACK ECN frames first,
which haproxy currently never does. Nevertheless, this is not the case
for every client implementation, thus proper ACK ECN parsing is
mandatory for proper QUIC stack support.

Fix this by adjusting the quic_parse_ack_ecn_frame() function. The
remaining ECN fields are parsed to ensure correct packet parsing.
Currently, they are not used by the congestion controller.

This must be backported up to 2.6.
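
For reference, the expected field order of an ACK_ECN frame per RFC 9000
could be parsed along these lines (a sketch using haproxy's
quic_dec_int() varint decoder; the real function differs):

  /* sketch: parse an ACK_ECN frame body in the RFC 9000 field order */
  static int parse_ack_ecn(const unsigned char **pos, const unsigned char *end)
  {
          uint64_t largest, delay, range_num, first_range;
          uint64_t gap, range, ect0, ect1, ecn_ce;

          if (!quic_dec_int(&largest, pos, end) ||
              !quic_dec_int(&delay, pos, end) ||
              !quic_dec_int(&range_num, pos, end) ||
              !quic_dec_int(&first_range, pos, end)) /* before the ranges */
                  return 0;

          while (range_num--) {
                  if (!quic_dec_int(&gap, pos, end) ||
                      !quic_dec_int(&range, pos, end))
                          return 0;
          }

          /* the three ECN counts which were previously left unread */
          return quic_dec_int(&ect0, pos, end) &&
                 quic_dec_int(&ect1, pos, end) &&
                 quic_dec_int(&ecn_ce, pos, end);
  }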
2026-01-13 15:08:02 +01:00
Olivier Houchard
82196eb74e BUG/MEDIUM: threads: Fix binding thread on bind.
The code parsing the "thread" keyword on bind lines was changed to check
the thread numbers against the value provided with max-threads-per-group,
if any. However, at the time those thread keywords are parsed, that value
may not have been set yet, which breaks the feature. So revert to
checking against MAX_THREADS_PER_GROUP instead; it should have no major
impact.
2026-01-13 11:45:46 +01:00
Olivier Houchard
da813ae4d7 MEDIUM: counters: Remove some extra tests
Before updating counters, a few tests are made to check if the counters
exist. But those counters should always exist at this point, so just
remove them.
This commit should have no impact, but can easily be reverted with no
functional impact if various crashes appear.
2026-01-13 11:12:34 +01:00
Olivier Houchard
5495c88441 MEDIUM: counters: Dynamically allocate per-thread group counters
Instead of statically allocating the per-thread group counters,
based on the max number of thread groups available, allocate
them dynamically, based on the number of thread groups actually
used. That way we can increase the maximum number of thread
groups without using an unreasonable amount of memory.
2026-01-13 11:12:34 +01:00
Willy Tarreau
37057feb80 BUG/MINOR: net_helper: fix IPv6 header length processing
The IPv6 header contains a payload length that excludes the 40 bytes of
the IPv6 packet header, which differs from IPv4's total length, which
includes it. As a result, the parser was wrong and would only see the IP
part and not the TCP one, unless sufficient options were present to
cover it.

This issue came in 3.4-dev2 with recent commit e88e03a6e4 ("MINOR:
net_helper: add ip.fp() to build a simplified fingerprint of a SYN"),
so no backport is needed.
2026-01-13 08:42:36 +01:00
Aurelien DARRAGON
fcd4d4a7aa BUG/MINOR: hlua_fcn: ensure Patref:add_bulk() is given a table object before using it
As reported by GH user @kanashimia in GH #3241, providing anything other
than a table to the Patref:add_bulk() method could cause a segfault,
because we were calling lua_next() on the Lua object without ensuring it
actually is a table.

Let's add the missing lua_istable() check on the stack object before
calling lua_next() function on it.

It should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()")
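
A minimal sketch of the missing guard, using the standard Lua C API (the
surrounding stack layout is assumed):

  /* sketch: refuse anything that is not a table before iterating it */
  if (!lua_istable(L, 2))
          luaL_error(L, "add_bulk: expected a table argument");

  lua_pushnil(L); /* first key */
  while (lua_next(L, 2) != 0) {
          /* key at index -2, value at index -1 */
          lua_pop(L, 1); /* drop value, keep key for the next iteration */
  }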
2026-01-12 17:30:54 +01:00
Aurelien DARRAGON
04545cb2b7 BUG/MINOR: hlua_fcn: fix broken yield for Patref:add_bulk()
In GH #3241, GH user @kanashimia reported that the Patref:add_bulk()
method would raise a Lua exception when called with more than 101
elements at once.

As identified by @kanashimia, there was an error in the way the
add_bulk() method was forced to yield after precisely 101 elements.
The yield is there to ensure Lua doesn't eat too many resources at
once and doesn't impact haproxy's core responsiveness, but the check
for the yield was misplaced, resulting in improper stack content upon
resume.

Thanks to user @kanashimia who even provided a reproducer which helped
a lot to troubleshoot the issue.

This fix should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()") where the bug was introduced.
2026-01-12 17:30:52 +01:00
Olivier Houchard
b1cfeeef21 BUG/MINOR: stats-file: Use a 16bits variable when loading tgid
Now that the tgid stored in the stats file has been increased to 16bits
by commit 022cb3ab7fdce74de2cf24bea865ecf7015e5754, don't forget to
increase the variable size when reading it from the file, too.
This should have no impact given the maximum thread group limit is still
32.
2026-01-12 09:48:54 +01:00
Olivier Houchard
022cb3ab7f MINOR: stats: Increase the tgid from 8bits to 16bits
Increase the size of the tgid stored in the stats file from 8 bits to
16 bits, so that we can have more than 256 thread groups. 65536 should
be enough for some time.

This bumps the stats file minor version, as the structure changes.
2026-01-12 09:39:52 +01:00
Olivier Houchard
c0f64fc36a MINOR: receiver: Dynamically alloc the "members" field of shard_info
Instead of always allocating MAX_TGROUPS members, allocate them
dynamically, using the number of thread groups we'll use, so that
increasing MAX_TGROUPS will not have a huge impact on the structure
size.
2026-01-12 09:32:27 +01:00
Tim Duesterhus
96faf71f87 CLEANUP: connection: Remove outdated note about CO_FL 0x00002000 being unused
This flag is used as of commit dcce9369129f6ca9b8eed6b451c0e20c226af2e3
("MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag"). This patch
should be backported to 3.3. Apparently dcce9369129 has been backported
to 3.2 and 3.1 already, with that change already applied, so no need for a
backport there.
2026-01-12 03:22:15 +01:00
Willy Tarreau
2560cce7c5 MINOR: tcp-sample: permit retrieving tcp_info from the connection/session stage
The fc_xxx info retrieved over tcp_info could not be accessed before a
stream was created, due to a test that verified the existence of a
stream. The rationale here was that the function works both for the
frontend and the backend. Let's always retrieve this info from the
session in the frontend case, so that it now becomes possible to set
variables at connection/session time. The doc did not mention this
limitation, so this could almost be considered a bug.
2026-01-11 15:48:20 +01:00
Willy Tarreau
880bbeeda4 MINOR: sample: also support retrieving fc.timer.handshake without a stream
Some timers, like the handshake timer, are stored in the session and are
only copied to the logs struct when a stream is created. But this means
we can't measure it without a stream, nor store it once and for all in a
variable at session creation time. Let's extend the sample fetch function
to retrieve it from the session when no stream is present. The doc did
not mention this limitation, so this could almost be considered a bug.
2026-01-11 15:48:19 +01:00
Amaury Denoyelle
875bbaa7fc MINOR: cfgparse: remove duplicate "force-persist" in common kw list
"force-persist" proxy keyword is listed twice in common_kw_list. This
patch removes the duplicated occurence.

This could be backported up to 2.4.
2026-01-09 16:45:54 +01:00
Willy Tarreau
46088b7ad0 MEDIUM: config: warn if some userlist hashes are too slow
It was reported in GH #2956 and more recently in GH #3235 that some
hashes are way too slow. The former triggers watchdog warnings during
checks, the latter sees the config parsing take 20 seconds. This is
always due to the use of hash algorithms that are not suitable for use
in low-latency environments like the web. They might be fine for a local
auth though. The difficulty, as explained by Philipp Hossner, is that
developers are not aware of this cost and adopt these algorithms without
suspecting any side effect.

The proposal here is to measure the crypt() call time and emit a warning
if it takes more than 10ms (which is already extreme). This was tested
by Philipp and confirmed to catch his case.

This is marked medium as it might start to report warnings on configs
that have been suffering from this problem without ever detecting it
until now.
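
A sketch of the kind of measurement described (the 10ms threshold comes
from this commit; now_mono_time() is haproxy's nanosecond monotonic
clock, the rest is illustrative):

  /* sketch: time a single crypt() call and warn when it is too slow */
  uint64_t before = now_mono_time();

  crypt(password, stored_hash);
  if (now_mono_time() - before > 10000000ULL) /* 10ms in nanoseconds */
          ha_warning("userlist: password hashing is excessively slow\n");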
2026-01-09 14:56:18 +01:00
akarl10
a203ce6854 BUG/MINOR: ech/quic: enable ech configuration also for quic listeners
Patch dba4fd24 ("MEDIUM: ssl/ech: config and load keys") introduced
ECH configuration for bind lines, but the QUIC configuration parser
still suffers from not using the same code as the TCP/TLS one, so the
init for QUIC was missed.

Must be backported in 3.3.
2026-01-08 17:34:28 +01:00
William Lallemand
6e1718ce4b CI: github: remove ERR=1 temporarily from the ECH job
The ECH job still fails to compile since the OpenSSL 4.0 deprecated
functions were not removed yet. Let's remove ERR=1 temporarily.

We do know that there's a regression in OpenSSL 4.0 with these
reg-tests though:

Error: #    top  TEST reg-tests/ssl/set_ssl_crlfile.vtc FAILED (0.219) exit=2
Error: #    top  TEST reg-tests/ssl/set_ssl_cafile.vtc FAILED (0.236) exit=2
Error: #    top  TEST reg-tests/quic/set_ssl_crlfile.vtc FAILED (0.196) exit=2
2026-01-08 17:32:27 +01:00
Christian Ruppert
dbe52cc23e REGTESTS: ssl: Fix reg-tests curve check
OpenSSL changed the output from "Server Temp Key" in prior versions to
"Peer Temp Key" in recent ones.
a39dc27c25
It looks like it affects OpenSSL >= 3.5.0.
This broke the reg-test for e.g. Debian 13 builds using OpenSSL 3.5.1.

Fixes bug #3238

Could be backported to all branches.

Signed-off-by: Christian Ruppert <idl0r@qasl.de>
2026-01-08 16:14:54 +01:00
William Lallemand
623aa725a2 BUG/MINOR: cli/stick-tables: argument to "show table" is optional
Discussed in issue #3187: the CLI help for the "show table" command is
confusing, as it makes the argument look mandatory.

This patch puts the argument between square brackets to remove the
confusion.
2026-01-08 11:54:01 +01:00
Willy Tarreau
dbba442740 BUILD: sockpair: fix build issue on macOS related to variable-length arrays
In GH issue #3226, Sergey Fedorov (@barracuda156) reported that since
commit 10c14a1ed0 ("MINOR: proto_sockpair: send_fd_uxst: init iobuf,
cmsghdr, cmsgbuf to zeros"), macOS 10.6.8 with gcc 14.3.0 doesn't build
anymore:

  src/proto_sockpair.c: In function 'send_fd_uxst':
  src/proto_sockpair.c:246:49: error: variable-sized object may not be initialized except with an empty initializer
    246 |         char cmsgbuf[CMSG_SPACE(sizeof(int))] = {0};
        |                                                 ^
  src/proto_sockpair.c:247:45: error: variable-sized object may not be initialized except with an empty initializer
    247 |         char buf[CMSG_SPACE(sizeof(int))] = {0};
        |                                             ^

Upon investigation, it appears that the CMSG_SPACE() macro on this OS
looks too complex for gcc to consider it as a constant, so it takes
these buffers for variable-length arrays and cannot initialize them.

Let's move to a simple memset() instead, which Sergey confirmed fixes
the problem.

This needs to be backported as far as 3.1. Thanks to Sergey for the
report, the bisect and testing the fix.
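
The fix boils down to replacing the initializers with explicit memset()
calls, roughly as follows (a sketch of the pattern, not the exact
patch):

  /* sketch: CMSG_SPACE() may not be a constant expression everywhere,
   * so avoid '= {0}' on these buffers and clear them explicitly
   */
  char cmsgbuf[CMSG_SPACE(sizeof(int))];
  char buf[CMSG_SPACE(sizeof(int))];

  memset(cmsgbuf, 0, sizeof(cmsgbuf));
  memset(buf, 0, sizeof(buf));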
2026-01-08 09:26:22 +01:00
Hyeonggeun Oh
c17ed69bf3 MINOR: cfgparse: Refactor "userlist" parser to print it in -dKall operation
This patch covers issue https://github.com/haproxy/haproxy/issues/3221.

The parser for the "userlist" section did not use the standard keyword
registration mechanism. Instead, it relied on a series of strcmp()
comparisons to identify keywords such as "group" and "user".

This had two main drawbacks:
1. The keywords were not discoverable by the "-dKall" dump option,
   making it difficult for users to see all available keywords for the
   section.
2. The implementation was inconsistent with the parsers for other
   sections, which have been progressively refactored to use the
   standard cfg_kw_list infrastructure.

This patch refactors the userlist parser to align it with the project's
standard conventions.

The parsing logic for the "group" and "user" keywords has been extracted
from the if/else block in cfg_parse_users() into two new dedicated
functions:
- cfg_parse_users_group()
- cfg_parse_users_user()

These two keywords are now registered via a dedicated cfg_kw_list,
making them visible to the rest of the HAProxy ecosystem, including the
-dKall dump.
2026-01-07 18:25:09 +01:00
William Lallemand
91cff75908 BUG/MINOR: cfgparse: wrong section name upon error
When an unknown keyword was used in the "userlist" section, the error
was mentioning the "users" section instead of "userlist".

Could be backported to all branches.
2026-01-07 18:13:12 +01:00
William Lallemand
4aff6d1c25 BUILD: tools: memchr definition changed in C23
New gcc and clang versions from Fedora Rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/tools.c: In function ‘fgets_from_mem’:
src/tools.c:7200:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 7200 |         new_pos = memchr(*position, '\n', size);
      |                 ^

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
memchr calls.

Should fix issue #3228.

This could be backported to previous versions.
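
One way to silence this, presumably matching the patch's intent, is an
explicit cast back to the non-const type (a sketch; the actual fix may
instead adjust the destination pointer's type):

  /* sketch: under C23's generic string.h, memchr() on a const input
   * returns a const pointer, so cast explicitly when storing it
   */
  new_pos = (char *)memchr(*position, '\n', size);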
2026-01-07 14:51:26 +01:00
William Lallemand
5322bd3785 BUILD: ssl: strchr definition changed in C23
New gcc and clang versions from Fedora Rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/ssl_sock.c: In function ‘SSL_CTX_keylog’:
src/ssl_sock.c:4475:17: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 4475 |         lastarg = strrchr(line, ' ');

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
strrchr calls.

Should fix issue #3228.

This could be backported to previous versions.
2026-01-07 14:51:26 +01:00
Willy Tarreau
71b00a945d [RELEASE] Released version 3.4-dev2
Released version 3.4-dev2 with the following main changes :
    - BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
    - BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
    - BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
    - BUG/MEDIUM: quic: Don't try to use hystart if not implemented
    - CLEANUP: backend: Remove useless test on server's xprt
    - CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
    - CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
    - REGTESTS: quic: fix a TLS stack usage
    - REGTESTS: list all skipped tests including 'feature cmd' ones
    - CI: github: remove openssl no-deprecated job
    - CI: github: add a job to test the master branch of OpenSSL
    - CI: github: openssl-master.yml misses actions/checkout
    - BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
    - CI: github: use git prefix for openssl-master.yml
    - BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
    - REGTESTS: fix error when no test are skipped
    - MINOR: cpu-topo: Turn the cpu policy configuration into a struct
    - MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
    - MEDIUM: cpu-topo: Add a "cpu-affinity" option
    - MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
    - MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
    - MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
    - BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
    - DOC: config: fix number of values for "cpu-affinity"
    - MINOR: tools: add a secure implementation of memset
    - MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
    - MINOR: mux-h2: perform a graceful close at 75% glitches threshold
    - MEDIUM: mux-h1: implement basic glitches support
    - MINOR: mux-h1: perform a graceful close at 75% glitches threshold
    - MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
    - MINOR: cfgparse: remove useless checks on no server in backend
    - OPTIM/MINOR: proxy: do not init proxy management task if unused
    - MINOR: patterns: preliminary changes for reorganization
    - MEDIUM: patterns: reorganize pattern reference elements
    - CLEANUP: patterns: remove dead code
    - OPTIM: patterns: cache the current generation
    - MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
    - MINOR: protocol: support a generic way to call getsockopt() on a connection
    - MINOR: tcp: implement the get_opt() function
    - MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
    - CLEANUP: assorted typo fixes in the code, commits and doc
    - BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
    - BUG/MAJOR: set the correct generation ID in pat_ref_append().
    - BUG/MINOR: backend: fix the conn_retries check for TFO
    - BUG/MINOR: backend: inspect request not response buffer to check for TFO
    - MINOR: net_helper: add sample converters to decode ethernet frames
    - MINOR: net_helper: add sample converters to decode IP packet headers
    - MINOR: net_helper: add sample converters to decode TCP headers
    - MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
    - MINOR: net_helper: prepare the ip.fp() converter to support more options
    - MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
    - MINOR: net_helper: add an option to ip.fp() to append the source address
    - DOC: config: fix the length attribute name for stick tables of type binary / string
    - MINOR: mworker/cli: only keep positive PIDs in proc_list
    - CLEANUP: mworker: remove duplicate list.h include
    - BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
    - MINOR: mworker/cli: extract worker "show proc" row printer
    - MINOR: cpu-topo: Factorize code
    - MINOR: cpu-topo: Rename variables to better fit their usage
    - BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
    - BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
    - MINOR: threads: Avoid using a thread group mask when stopping.
    - MINOR: hlua: Add support for lua 5.5
    - MEDIUM: cpu-topo: Add an optional directive for per-group affinity
    - BUG/MEDIUM: mworker: can't use signals after a failed reload
    - BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
    - DOC: config: fix a few typos and refine cpu-affinity
    - MINOR: receiver: Remove tgroup_mask from struct shard_info
    - BUG/MINOR: quic: fix deprecated warning for window size keyword
2026-01-07 11:02:12 +01:00
Amaury Denoyelle
e061547d9d BUG/MINOR: quic: fix deprecated warning for window size keyword
QUIC configuration was cleaned up in the previous release. Several
global keyword names were changed to unify the configuration. For each
of them the older keyword is marked as deprecated, with a warning to
mention the newer alternative.

This patch fixes the warning for 'tune.quic.frontend.default-max-size'
as the alternative proposed was not correct. The proper value now is
'tune.quic.fe.cc.max-win-size'.

This must be backported up to 3.3.
2026-01-07 09:54:31 +01:00
Olivier Houchard
41cd589645 MINOR: receiver: Remove tgroup_mask from struct shard_info
The only purpose of tgroup_mask seems to be to calculate how many
tgroups share the same shard, but this is information we can obtain
differently: we just have to increment the number when a new receiver is
added to the shard, and decrement it when one is detached from the
shard. Removing thread group masks will allow us to increase the maximum
number of thread groups past 64.
2026-01-07 09:27:12 +01:00
Willy Tarreau
c3fcdfaf5c DOC: config: fix a few typos and refine cpu-affinity
There were two typos in the recently updated parts about per-group.
Also, change the commas to ':' after the option values, as they were
sometimes confusing. Lastly, place quotes around keyword names so that
they're explicitly referred to as language keywords. No backport is
needed.
2026-01-07 09:19:25 +01:00
Christopher Faulet
83457b9e38 BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
The <kip> of producer was not forwarded to <kop> of consumer when zero-copy
data forwarding was tried. Because of the issue, the chunking of emitted H1
messages could be invalid.

To fix the bug, sc_ep_fwd_kip() must be called at this stage.

This fix is related to the previous one (529a8dbfb "BUG/MEDIUM: mux-h1: Take
care to update <kop> value during zero-copy forwarding"). Both are required
to fully fix the issue #3230.

This patch must be backported to 3.3.
2026-01-06 15:41:50 +01:00
William Lallemand
97490a7789 BUG/MEDIUM: mworker: can't use signals after a failed reload
In issue #3229 it was reported that the master couldn't reload after a
failed reload following a wrong configuration.

It is still possible to do a reload using the "reload" command of the
master CLI, but all signals are blocked.

The problem was introduced in 709cde6d0 ("BUG/MEDIUM: mworker: signals
inconsistencies during startup and reload") which fixes the blocking of
signals during the reload.

However, the patch missed a case: indeed, run_master_in_recovery_mode()
is not called when the worker fails to parse the configuration; it is
only called when the master itself fails.

To handle this case, the mworker_unblock_signals() function must be
called upon mworker_on_new_child_failure(). But since the latter is
called from a haproxy signal handler, that would mess with the signals.

Instead, the patch adds a task which is started by the signal handler,
and restores the signals outside of it.

This must be backported as far as 3.1.
2026-01-06 14:27:53 +01:00
Olivier Houchard
56fd0c1a5c MEDIUM: cpu-topo: Add an optional directive for per-group affinity
Add an optional new directive for per-group affinity. It accepts two
values: "auto", the new default, where the available CPUs are split
equally across the groups when multiple thread groups are created; and
"loose", the old default, where all groups are bound to all available
CPUs.
2026-01-06 11:32:45 +01:00
Mike Lothian
1c0f781994 MINOR: hlua: Add support for lua 5.5
Lua 5.5 adds an extra argument to lua_newstate(). Since there are
already a few other ifdefs in hlua.c checking for the Lua version,
and there's a single call place, let's do the same here. This should
be safe for backporting if needed.

Signed-off-by: Mike Lothian <mike@fireburn.co.uk>
2026-01-06 11:05:02 +01:00
Olivier Houchard
853604f87a MINOR: threads: Avoid using a thread group mask when stopping.
Remove the "stopped_tgroup_mask" variable, which indicated which thread
groups were stopping, and instead just use "stopped_tgroups", a counter
indicating how many thread groups are stopping. We want to remove all
thread group masks, so that we can increase the maximum number of thread
groups past 64.
2026-01-06 08:30:55 +01:00
Christopher Faulet
529a8dbfba BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
Since the extra field was removed from the HTX structure, a regression
was introduced in the forwarding of chunked messages. The <kop> value
was not decreased as it should be when data were sent via zero-copy
forwarding. Because of this bug, it was possible to announce a chunk
size larger than the chunk data sent.

To fix the bug, a helper function was added to properly update the <kop>
value when a chunk size is emitted. This function is now called whenever
a new chunk is announced, including during zero-copy forwarding.

As a workaround, "tune.disable-zero-copy-forwarding" or just
"tune.h1.zero-copy-fwd-send off" can be set in the global section.

This patch should fix the issue #3230. It must be backported to 3.3.
2026-01-06 07:39:05 +01:00
Christopher Faulet
0b29b76a52 BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
When a shutdown was reported to a peer applet, the event was not
properly handled if it failed to receive data. The function responsible
for getting data was exiting too early if the applet buffer was empty,
without testing the sedesc status. Because of this issue, it was
possible to have frozen peer applets. For instance, it happened on
client timeout. With too many frozen applets, it was possible to reach
the maxconn.

This patch should fix the issue #3234. It must be backported to 3.3.
2026-01-05 13:46:57 +01:00
Olivier Houchard
196d16f2b1 MINOR: cpu-topo: Rename variables to better fit their usage
Rename "visited_tsid" and "visited_ccx" to "touse_tsid" and
"touse_ccx". They are not there to remember which tsid/ccx we
alreaday visited, contrarily to visited_ccx_set and
visited_cl_set, they are there to know which tsid/ccx we should
use, so make that clear.
2026-01-05 09:25:48 +01:00
Olivier Houchard
bbf5c30a87 MINOR: cpu-topo: Factorize code
Factorize the code common to cpu_policy_group_by_ccx() and
cpu_policy_group_by_cluster() into a new function,
cpu_policy_assign_threads().
2026-01-05 09:24:44 +01:00
Alexander Stephan
e241144e70 MINOR: mworker/cli: extract worker "show proc" row printer
Introduce cli_append_worker_row() to centralize the formatting of a
single worker row, and replace the duplicated row-printing code in both
the current and old workers loops with the helper. Motivation: reduces
LOC and improves readability by removing duplication.
2026-01-05 08:59:45 +01:00
Alexander Stephan
4c10d9c70c BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
After commit 594408cd612b5 ("BUG/MINOR: mworker/cli: 'show proc' is limited
by buffer size"), related to ticket #3204, the "show proc" logic
has been fixed to be able to print more than 202 processes. However, this
fix can lead to the omission of entries in case they have the same
timestamp.

To fix this, we use the unique reload counter instead of the timestamp.
On a partial flush, we set ctx->next_reload = child->reloads.
On resume, we skip entries with child->reloads >= ctx->next_reload.
Finally, we clear ctx->next_reload at the end of a complete dump so that
a subsequent "show proc" starts from the top.

Could be backported in all stable branches.
2026-01-05 08:59:34 +01:00
Alexander Stephan
a5f274de92 CLEANUP: mworker: remove duplicate list.h include
Drop the second #include <haproxy/list.h> from mworker.c.
No functional change; reduces redundancy and keeps includes tidy.
2026-01-05 08:59:34 +01:00
Alexander Stephan
c30eeb2967 MINOR: mworker/cli: only keep positive PIDs in proc_list
Change mworker_env_to_proc_list() to check that child->pid > 0 before
LIST_APPEND, avoiding invalid PIDs (0/-1) in the process list.
This has no functional impact beyond stricter validation, and it aligns
with the existing kill safeguards.
2026-01-05 08:59:14 +01:00
Willy Tarreau
6970c8b8b6 DOC: config: fix the length attribute name for stick tables of type binary / string
The stick-table doc was reworked and moved in 3.2 with commit da67a89f3
("DOC: config: move stick-tables and peers to their own section"), however
the optional length attribute for binary/string types was mistakenly
spelled "length" while it's "len".

This must be backported to 3.2.
2026-01-01 10:52:50 +01:00
Willy Tarreau
a206f85f96 MINOR: net_helper: add an option to ip.fp() to append the source address
The new value 4 permits appending the source address to the
fingerprint, making it easier to build rules checking a specific path.
2026-01-01 10:32:16 +01:00
Willy Tarreau
70ffae3614 MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
With mode value 1, the TTL will be appended immediately after the 7
bytes, making it an 8-byte fingerprint.
2026-01-01 10:19:48 +01:00
Willy Tarreau
2c317cfed7 MINOR: net_helper: prepare the ip.fp() converter to support more options
It can make sense to support extra components in the fingerprint to ease
configuration, so let's change the 0/1 value to a bit field. We also turn
the current 1 (TCP options list) to 2 so that we'll reuse 1 for the TTL.
2026-01-01 10:19:20 +01:00
Willy Tarreau
e88e03a6e4 MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
Here we collect all the stuff that depends on the sender's settings,
such as TOS, IP version, TTL range, presence of the DF bit or IP
options, presence of DATA in the SYN, CWR+ECE flags, TCP header length,
wscale, initial window, mss, as well as the list of TCP extension kinds.
It's obviously fairly limited, but it can help avoid blacklisting
certain valid clients sharing the same IP address as a misbehaving one.

It supports both a short and a long mode depending on the argument.
These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:17:38 +01:00
Willy Tarreau
6e46d1345b MINOR: net_helper: add sample converters to decode TCP headers
This adds the following converters, used to decode fields
in an incoming tcp header:

   tcp.dst, tcp.flags, tcp.seq, tcp.src, tcp.win,
   tcp.options.mss, tcp.options.tsopt, tcp.options.tsval,
   tcp.options.wscale, tcp.options_list

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:17:23 +01:00
Willy Tarreau
e0a7a7ca43 MINOR: net_helper: add sample converters to decode IP packet headers
This adds a few converters that help decode parts of IP packets:
  - ip.data : returns the next header (typically TCP)
  - ip.df   : returns the dont-fragment flags
  - ip.dst  : returns the destination IPv4/v6 address
  - ip.hdr  : returns only the IP header
  - ip.proto: returns the upper level protocol (udp/tcp)
  - ip.src  : returns the source IPv4/v6 address
  - ip.tos  : returns the TOS / TC field
  - ip.ttl  : returns the TTL/HL value
  - ip.ver  : returns the IP version (4 or 6)

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:16:29 +01:00
Willy Tarreau
90d2f157f2 MINOR: net_helper: add sample converters to decode ethernet frames
This adds a few converters that help decode parts of ethernet frame
headers:
  - eth.data : returns the next header (typically IP)
  - eth.dst  : returns the destination MAC address
  - eth.hdr  : returns only the ethernet header
  - eth.proto: returns the ethernet proto
  - eth.src  : returns the source MAC address
  - eth.vlan : returns the VLAN ID when present

These can be used with the tcp-ss bind option. The doc was updated
accordingly.
2025-12-31 17:15:36 +01:00
Willy Tarreau
933cb76461 BUG/MINOR: backend: inspect request not response buffer to check for TFO
In 2.6, do_connect_server() was introduced by commit 0a4dcb65f ("MINOR:
stream-int/backend: Move si_connect() in the backend scope") and changed
the approach to work with a stream instead of a stream-interface.
However si_oc(si) was wrongly turned into &s->res instead of &s->req,
which breaks TFO by always inspecting the response channel to figure
out whether there is data pending.

This fix can be backported to all versions till 2.6.
2025-12-31 13:03:53 +01:00
Willy Tarreau
799653d536 BUG/MINOR: backend: fix the conn_retries check for TFO
In 2.6, the retries counter on a stream was changed from retries left
to retries done by commit 731c8e6cf ("MINOR: stream: Simplify retries
counter calculation"). However, one comparison used to detect whether
we can still use TFO (first attempt only) fell through the cracks,
resulting in TFO never working anymore.

This may be backported to all versions till 2.6.
2025-12-31 13:03:53 +01:00
Maxime Henrion
51592f7a09 BUG/MAJOR: set the correct generation ID in pat_ref_append().
This fixes crashes when creating more than one new revision of a map or
acl file and purging the previous version.
2025-12-31 00:29:47 +01:00
Olivier Houchard
54f59e4669 BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
We want to reset visited_ccx, as introduced by commit
8aef5bec1ef57eac449298823843d6cc08545745, each time we run the loop,
otherwise the chances of its content being correct are very low, and
threads will likely end up being bound to the wrong CPUs.
This was reported in github issue #3224.
2025-12-26 23:55:57 +01:00
Ilia Shipitsin
f8a77ecf62 CLEANUP: assorted typo fixes in the code, commits and doc 2025-12-25 19:45:29 +01:00
Willy Tarreau
6fb521d2f6 MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
This function retrieves the copy of a SYN packet that the system has
kept for us when the bind option "tcp-ss" was set to 1 or above. It's
recommended to copy it to a local variable because it will be freed
after being read. It allows inspecting all parts of an incoming SYN
packet, provided that it was preserved (e.g. not possible with SYN
cookies). The doc provides examples of how to use it.
2025-12-24 18:39:37 +01:00
Willy Tarreau
52d60bf9ee MINOR: tcp: implement the get_opt() function
It relies on the generic sock_conn_get_opt() function and will permit
sample fetch functions to retrieve generic TCP-level info.
2025-12-24 18:38:51 +01:00
Willy Tarreau
6d995e59e9 MINOR: protocol: support a generic way to call getsockopt() on a connection
It's regularly needed to call getsockopt() on a connection, but each
time the calling code has to do all the work by itself. This commit adds
a "get_opt()" callback on the protocol struct, that directly calls
getsockopt() on the connection's FD. A generic implementation for
standard sockets is provided, though QUIC would likely require a
different approach, or maybe a mapping. Due to the overlap between
IP/TCP/socket option values, it is necessary for the caller to indicate
both the level and the option. An abstraction of the level could be
done, but the caller would nonetheless have to know the optname, which
is generally defined in the same include files. So for now we'll
consider that this callback is only for very specific use.

The levels and optnames are purposely passed as signed ints so that it
is possible to further extend the API by using negative levels for
internal namespaces.
2025-12-24 18:38:51 +01:00
Willy Tarreau
44c67a08dd MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
This option enables TCP_SAVE_SYN on the listening socket, which will
cause the kernel to try to save a copy of the SYN packet header (L2,
IP and TCP are supported). This permits checking the source MAC
address of a client, or finding certain TCP options such as a source
address encapsulated using RFC7974. It could also be used as an
alternate approach to retrieving the source and destination addresses
and ports. For now, only setting the option is implemented; sample
fetch functions and converters will be needed to extract the info.
2025-12-24 11:35:09 +01:00
Maxime Henrion
1fdccbe8da OPTIM: patterns: cache the current generation
This makes a significant difference when loading large files and during
commit and clear operations, thanks to improved cache locality. In the
measurements below, master refers to the code before any of the changes
to the patterns code, not the code before this one commit.

Timing the replacement of 10M entries from the CLI with this command
which also reports timestamps at start, end of upload and end of clear:

  $ (echo "prompt i"; echo "show activity"; echo "prepare acl #0";
     awk '{print "add acl @1 #0",$0}' < bad-ip.map; echo "show activity";
     echo "commit acl @1 #0"; echo "clear acl @0 #0";echo "show activity") |
    socat -t 10 - /tmp/sock1 | grep ^uptim

master, on a 3.7 GHz EPYC, 3 samples:

  uptime_now: 6.087030
  uptime_now: 25.981777  => 21.9 sec insertion time
  uptime_now: 29.286368  => 3.3 sec commit+clear

  uptime_now: 5.748087
  uptime_now: 25.740675  => 20.0s insertion time
  uptime_now: 29.039023  => 3.3 s commit+clear

  uptime_now: 7.065362
  uptime_now: 26.769596  => 19.7s insertion time
  uptime_now: 30.065044  => 3.3s commit+clear

And after this commit:

  uptime_now: 6.119215
  uptime_now: 25.023019  => 18.9 sec insertion time
  uptime_now: 27.155503  => 2.1 sec commit+clear

  uptime_now: 5.675931
  uptime_now: 24.551035  => 18.9s insertion
  uptime_now: 26.652352  => 2.1s commit+clear

  uptime_now: 6.722256
  uptime_now: 25.593952  => 18.9s insertion
  uptime_now: 27.724153  => 2.1s commit+clear

Now timing the startup time with a 10M entries file (on another machine)
on master, 20 samples:

Standard Deviation, s: 0.061652677408033
Mean:        4.217

And after this commit:

Standard Deviation, s: 0.081821371548669
Mean:        3.78
2025-12-23 21:17:39 +01:00
Maxime Henrion
99e625a41d CLEANUP: patterns: remove dead code
Situations where we are iterating over elements and find one with a
different generation ID cannot arise anymore since the elements are kept
per-generation.
2025-12-23 21:17:39 +01:00
Maxime Henrion
545cf59b6f MEDIUM: patterns: reorganize pattern reference elements
Instead of a global list (and tree) of pattern reference elements, we
now have an intermediate pat_ref_gen structure and store the elements
there. This simplifies the logic of some operations such as commit and
clear, and improves performance in some cases - numbers to be provided
in a subsequent commit after one important optimization is added.

A lot of the changes are due to adding an extra level of indirection,
changing many cases where we iterate over all elements to an outer loop
iterating over the generation and an inner one iterating over the
elements of the current generation. It is therefore easier to read this
patch using 'git diff -w'.
2025-12-23 21:17:39 +01:00
Maxime Henrion
5547bedebb MINOR: patterns: preliminary changes for reorganization
Safe and non-functional changes that only add currently unused
structures, fields, functions and macros, in preparation for larger
changes that alter the way pattern reference elements are stored.

This includes code to create and lookup generation objects, and
macros to iterate over the generations of a pattern reference.
2025-12-23 21:17:39 +01:00
Amaury Denoyelle
a4a17eb366 OPTIM/MINOR: proxy: do not init proxy management task if unused
Each proxy has its own task for internal purposes. Currently, it is
only used either by frontends or if a stick-table is present.

This commit makes the task allocation optional, restricted to the
required cases. Thus, it is not allocated anymore for backend-only
proxies without a stick-table.
2025-12-23 16:35:49 +01:00
Amaury Denoyelle
c397f6fc9a MINOR: cfgparse: remove useless checks on no server in backend
A legacy check could be activated at compile time to reject backends
without servers. In practice it is not used anymore and does not make
much sense with the introduction of dynamic servers.
2025-12-23 16:35:49 +01:00
Amaury Denoyelle
b562602044 MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
Each frontend/backend/listen proxy is assigned a unique ID. It can
either be set explicitly via the 'id' keyword, or automatically assigned
during post-parsing depending on the available values.

It was expected that the first automatically assigned value would start
at '1'. However, due to a legacy bug this is not the case, as this value
is always skipped. Thus, automatically assigned proxy IDs always start
at '2' or more.

To avoid breaking the current existing state, this situation is now
acknowledged by the current patch. The code is rewritten with an
explicit warning to ensure that this won't be fixed without knowing the
current status. A new regtest also ensures this.
2025-12-23 16:35:49 +01:00
Willy Tarreau
5904f8279b MINOR: mux-h1: perform a graceful close at 75% glitches threshold
This avoids hitting the hard wall for connections with non-compliant
peers that are accumulating errors. We recycle the connection early
enough to allow the counter to be reset. Example below with a threshold
set to 100:

Before, 1% errors:
  $ h1load -H "Host : blah" -c 1 -n 10000000 0:4445
  #     time conns tot_conn  tot_req      tot_bytes    err  cps  rps  bps   ttfb
           1     1     1039   103872        6763365   1038 1k03 103k 54M1 9.426u
           2     1     2128   212793       14086140   2127 1k08 108k 58M5 8.963u
           3     1     3215   321465       21392137   3214 1k08 108k 58M3 8.982u
           4     1     4307   430684       28735013   4306 1k09 109k 58M6 8.935u
           5     1     5390   538989       36016294   5389 1k08 108k 58M1 9.021u

After, no more errors:
  $ h1load -H "Host : blah" -c 1 -n 10000000 0:4445
  #     time conns tot_conn  tot_req      tot_bytes    err  cps  rps  bps   ttfb
           1     1     1509   113161        7487809      0 1k50 113k 59M9 8.482u
           2     1     3002   225101       15114659      0 1k49 111k 60M9 8.582u
           3     1     4508   338045       22809911      0 1k50 112k 61M5 8.523u
           4     1     5971   447785       30286861      0 1k46 109k 59M7 8.772u
           5     1     7472   560335       37955271      0 1k49 112k 61M2 8.537u
2025-12-20 19:29:37 +01:00
Willy Tarreau
05b457002b MEDIUM: mux-h1: implement basic glitches support
We now count glitches for each parsing error, including those that
have been accepted via accept-unsafe-violations-*. Front and back
are considered, and the connection gets killed on error once the
threshold is reached or passed and the CPU usage is beyond the
configured limit (0 by default). This was tested with:

   curl -ivH "host : blah" 0:4445{,,,,,,,,,}

which sends 10 requests to a configuration having a threshold of 5.
The global keywords are named similarly to H2 and quic:

     tune.h1.be.glitches-threshold xxxx
     tune.h1.fe.glitches-threshold xxxx

The glitches count of each connection is also reported when non-null
in the connection dumps (e.g. "show fd").
2025-12-20 19:29:33 +01:00
Willy Tarreau
0901f60cef MINOR: mux-h2: perform a graceful close at 75% glitches threshold
This avoids hitting the hard wall for connections with non-compliant
peers that would be accumulating errors over long connections. We now
permit recycling the connection early enough to reset the counter.

This was tested artificially by adding this to h2c_frt_handle_headers():

  h2c_report_glitch(h2c, 1, "new stream");

or this to h2_detach():

  h2c_report_glitch(h2c, 1, "detaching");

and injecting using h2load -c 1 -n 1000 0:4445 on a config featuring
tune.h2.fe.glitches-threshold 1000:

  finished in 8.74ms, 85802.54 req/s, 686.62MB/s
  requests: 1000 total, 751 started, 751 done, 750 succeeded, 250 failed, 250 errored, 0 timeout
  status codes: 750 2xx, 0 3xx, 0 4xx, 0 5xx
  traffic: 6.00MB (6293303) total, 132.57KB (135750) headers (space savings 29.84%), 5.86MB (6144000) data
                       min         max         mean         sd        +/- sd
  time for request:        9us       178us        10us         6us    99.47%
  time for connect:      139us       139us       139us         0us   100.00%
  time to 1st byte:      339us       339us       339us         0us   100.00%
  req/s           :   87477.70    87477.70    87477.70        0.00   100.00%

The failures are due to h2load not supporting reconnection.
2025-12-20 19:26:29 +01:00
Willy Tarreau
52adeef7e1 MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
One rare error case, a protocol error on the stream when response
headers could not be decoded, was not being accounted as a glitch, so
let's fix it.
2025-12-20 19:11:16 +01:00
Maxime Henrion
c8750e4e9d MINOR: tools: add a secure implementation of memset
This guarantees that the compiler will not optimize away the memset()
call if it detects a dead store.

Use this to clear SSL passphrases.

No backport needed.
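
One common way to implement such a guaranteed memset is to follow the
store with a compiler barrier, along these lines (a sketch; the actual
haproxy helper's name and technique may differ):

  #include <string.h>

  /* sketch: memset() followed by a barrier so the compiler cannot treat
   * the write as a dead store and optimize it away
   */
  void *memset_secure(void *dst, int c, size_t n)
  {
          memset(dst, c, n);
          __asm__ __volatile__("" : : "r"(dst) : "memory");
          return dst;
  }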
2025-12-19 17:42:57 +01:00
Willy Tarreau
bd92f34f02 DOC: config: fix number of values for "cpu-affinity"
It said "accepts 2 values" then goes on enumerating 5 since more were
added one at a time. Let's fix it by removing the number. No backport
is needed.
2025-12-19 11:21:09 +01:00
William Lallemand
03340748de BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
src/cpu_topo.c:1325:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^                      ~
src/cpu_topo.c:1325:15: note: add parentheses after the '!' to evaluate the bitwise operator first
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                     (                                                     )
src/cpu_topo.c:1325:15: note: add parentheses around left hand side expression to silence this warning
 1325 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                    (                     )
src/cpu_topo.c:1533:15: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^                      ~
src/cpu_topo.c:1533:15: note: add parentheses after the '!' to evaluate the bitwise operator first
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                     (                                                     )
src/cpu_topo.c:1533:15: note: add parentheses around left hand side expression to silence this warning
 1533 |                         } else if (!cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
      |                                    ^
      |                                    (                     )

No backport needed.
2025-12-19 10:15:17 +01:00
Olivier Houchard
8aef5bec1e MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
Add a new cpu-affinity keyword, "per-ccx".
If used, each thread will be bound to all the hardware threads available
in one CCX of the threads group.
2025-12-18 18:52:52 +01:00
Olivier Houchard
c524b181a2 MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
Add a new cpu-affinity keyword, "per-thread".
If used, each thread will be bound to only one hardware thread of the
thread group.
If used in conjunction with the "threads-per-core 1" cpu_policy, then
each thread will be bound on a different core.
2025-12-18 18:52:52 +01:00
Olivier Houchard
7e22d9c484 MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
Add a new global keyword, max-threads-per-group. It sets the maximum number of
threads a thread group can contain. Unless the number of thread groups
is fixed with "thread-groups", haproxy will just create more thread
groups as needed.
The default and maximum value is 64.
2025-12-18 18:52:52 +01:00
Olivier Houchard
3865f6c5c6 MEDIUM: cpu-topo: Add a "cpu-affinity" option
Add a new global option, "cpu-affinity", which controls how threads are
bound.
It currently accepts three values: "per-core", which will bind each
thread to all the hardware threads of a given core; "per-group", which
will use all the available hardware threads of the thread group; and
"auto", the default, which will use "per-group" unless "threads-per-core 1"
has been specified in cpu-policy, in which case it will use "per-core".
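
As a sketch (not from the commit), forcing per-core binding regardless
of the policy would look like:

     global
         cpu-affinity per-core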
2025-12-18 18:52:52 +01:00
Olivier Houchard
3671652bc9 MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
Add a new, optional keyword to "cpu-policy", "threads-per-core".
It takes one argument, "1" or "auto". If "1" is used, then only one
thread per core will be created, no matter how many hardware threads each
core has. If "auto" is used, then one thread will be created per
hardware thread, as is the case by default.

for example: cpu-policy performance threads-per-core 1
2025-12-18 18:52:52 +01:00
Olivier Houchard
58f04b4615 MINOR: cpu-topo: Turn the cpu policy configuration into a struct
Turn the cpu policy configuration into a struct. Right now it just
contains an int, that represents the policy used, but will get more
information soon.
2025-12-18 18:52:52 +01:00
William Lallemand
876b1e8477 REGTESTS: fix error when no test are skipped
Since commit 1ed2c9d ("REGTESTS: list all skipped tests including
'feature cmd' ones"), the script emits some error when trying to display
the list of skipped tests when there are none.

No backport needed.
2025-12-18 17:26:50 +01:00
Willy Tarreau
9a046fc3ad BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
In H2 the conditions to create a new stream differ for a client and a
server when a GOAWAY was exchanged. While on the server, any stream
whose ID is lower than or equal to the one advertised in GOAWAY is
valid, for a client it's forbidden to create any stream after receipt
of a GOAWAY, even if its ID is lower than or equal to the last one,
despite the server not being able to tell the difference from the
number of streams in flight.

Unfortunately, the logic in the code did not always reflect this
specificity of the client (the backend code in our case), and most
often considered that it was still permitted to create a new stream
until the max_id was greater than or equal to the advertised last_id.
This is for example what h2c_is_dead() and h2c_streams_left() do. In
other places, such as h2_avail_streams(), the rule is properly taken
into account. Very often the advertised last_id is the same, and this
is also what haproxy does (which explains why it's impossible to
reproduce the issue by chaining two haproxy layers), but a server may
wish to advertise any ID including 2^31-1 as mentioned in the spec,
and in this case the functions would behave differently.

This discrepancy results in a corner case where a GOAWAY received on
an idle connection will cause the next stream creation to be initially
accepted but then rejected via h2_avail_streams(), and the connection
left in a bad state, still attached to the session due to http-reuse
safe, but not reinserted into idle list, since the backend code
currently is not able to properly recover from this situation. Worse,
the idle flags are no longer on it but TASK_F_USR1 still is, and this
makes the recently added BUG_ON() rightfully trigger since this case
is not supposed to happen.

Admittedly more of the backend recovery code needs to be reworked,
however the mux must consistently decide whether or not a connection
may be reused or needs to be released.

This commit fixes the affected logic by introducing a new function
"h2c_reached_last_stream()" which says if a connection has reached its
last stream, regardless of the side, and using this one everywhere
max_id was compared to last_id. This is sufficient to address the
corner case that be_reuse_connection() currently cannot recover from.

This is in relation to GH issue #3215 and it should be sufficient to
fix the issue there. Thanks to Chris Staite for reporting the issue
and kudos to Amaury for spotting the events sequence that can lead
to this situation.

This patch must be backported to 3.3 first, then to older versions
later. It's worth noting that it's much more difficult to observe
the issue before 3.3 because the BUG_ON() is not there, and the
possibly non-released connection might end up being killed for other
reasons (timeouts etc). But one possible visible effect might be the
impossibility to delete a server (which Chris observed in 3.3).
2025-12-18 17:01:32 +01:00
William Lallemand
9c8925ba0d CI: github: use git prefix for openssl-master.yml
Uses the git- prefix in order to get the latest tarball for the master
branch on github.
2025-12-18 16:13:04 +01:00
Olivier Houchard
40d16af7a6 BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
Back in the mists of time, commit e91a526c8f decided that if we were trying
to stay on the same server as the previous request, and if there was
a connection available in the session, we'd remove its CO_FL_SESS_IDLE.
The reason for doing that has long been lost; it probably fixed a bug at some
point, but it was most probably not the right place to do that. And starting
with 3.3, this triggers a BUG_ON() because that flag is expected later on.
So just revert the commit; if the ancient bug shows up again, it will be
fixed another way.

This should be backported to 3.3. There is little reason to backport it
to previous versions, unless other patches depend on it.
2025-12-18 16:09:34 +01:00
William Lallemand
0c7a4469d2 CI: github: openssl-master.yml misses actions/checkout
The job can't run setup-vtest because the actions/checkout use line is
missing.
2025-12-18 16:03:20 +01:00
William Lallemand
38d3c24931 CI: github: add a job to test the master branch of OpenSSL
vtest.yml only builds the releases of OpenSSL for now, there's no way to
check if we still have issues with the API before a pre-release version
is released.

This job builds the master branch of OpenSSL.

It is run everyday at 3 AM.
2025-12-18 15:43:06 +01:00
William Lallemand
a58f09b63c CI: github: remove openssl no-deprecated job
Remove the openssl no-deprecated job which was used for 1.1.0 API.
It's not useful anymore since it uses the OpenSSL version of the
distributions.

Checking deprecations in the API is still useful when using the newest
version of the library. A job for the OpenSSL master branch would be
more useful than that.
2025-12-18 15:22:27 +01:00
William Lallemand
1ed2c9da2c REGTESTS: list all skipped tests including 'feature cmd' ones
The script for running regression tests is modified to improve the
visibility of skipped tests.

Previously, the reasons for skipping tests were only visible during the
test discovery phase when grepping the vtc (REQUIRE, EXCLUDE, etc).
But reg-tests skipped by vtest with the 'feature cmd' keywords were not
listed.

This change introduces the following:
- vtest does not remove the logs itself anymore, because it is not
able to keep the logs available when a test is skipped. So the -L
    parameter is now always passed to vtest
  - All skipped tests during the discovery phase are now logged to a
    'skipped.log' file within the test directory
  - The script now parses vtest logs to find tests that were skipped
    due to missing features (via the 'feature cmd' in .vtc files)
    and adds them to the skipped list.
2025-12-17 15:54:15 +01:00
Frederic Lecaille
8523a5cde0 REGTESTS: quic: fix a TLS stack usage
This issue was reported in GH #3214 where the quic/tls13_ssl_crt-list_filters.vtc
QUIC reg test was run without haproxy QUIC support because the OPENSSL_AWSLC
feature was enabled.

This is due to the fact that when ssl/tls13_ssl_crt-list_filters.vtc was
ported to QUIC, feature(OPENSSL) was simply replaced by feature(QUIC), leading
the script to be run even without QUIC support if the OR'ed OPENSSL_AWSLC
feature is enabled.

A better way to port these feature() commands to QUIC would have been
to add a feature(QUIC) command separated from the one used for the supported
TLS stacks identified by the original underlying ssl reg tests (in reg-tests/ssl).
This is what this patch does.

Thank you to @idl0r for having reported this issue.
2025-12-15 09:44:42 +01:00
Christopher Faulet
a25394b6c8 CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
In ssl_sock_srv_try_reuse_sess(), the connection is always defined, to TCP
and QUIC connections. No reason to test it. Because it is not so obvious for
the QUIC part, a BUG_ON() could be added here. For now, just remove useless
tests.

This patch should fix a Coverity report from #3213.
2025-12-15 08:16:59 +01:00
Christopher Faulet
d6b1d5f6e9 CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
The xprt used to perform a healthcheck is always defined and cannot be NULL.
So there is no reason to test it. It could lead to wrong assumptions later
in the code.

This patch should fix a Coverity report from #3213.
2025-12-15 08:01:21 +01:00
Christopher Faulet
5c5914c32e CLEANUP: backend: Remove useless test on server's xprt
The server's xprt is always defined and cannot be NULL. So there is no
reason to test it. It could lead to wrong assumptions later in the code.

This patch should fix a Coverity report from #3213.
2025-12-15 07:56:53 +01:00
Olivier Houchard
a08bc468d2 BUG/MEDIUM: quic: Don't try to use hystart if not implemented
Not every CC algo implements hystart, so only call the method if it is
actually there. Failure to do so will cause crashes if hystart is on,
and the algo doesn't implement it.

This should fix github issue #3218

This should be backported up to 3.0.
2025-12-14 16:46:12 +01:00
Christopher Faulet
54e58103e5 BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
The SC_FL_ABRT_DONE flag should never be set when SC_FL_EOS was already
set. Both flags were introduced to replace the old CF_SHUTR, to have one
flag for shuts driven by the stream and one for the read0 received by
the mux. So both flags must not be seen at the same time on a SC. It is
especially important because some processing is performed when these
flags are set, and wrong decisions may be made.

This patch must be backported as far as 2.8.
2025-12-12 08:41:08 +01:00
Christopher Faulet
a483450fa2 BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
The first attempt to fix this issue (c672b2a29 "BUG/MINOR: http-ana:
Properly detect client abort when forwarding the response") was not fully
correct and could be responsible for false reports of client abort during
the response forwarding. I guess it could even truncate the response.

Instead, we must also verify that the client closed on its side, by
checking the SC_FL_EOS flag on the front SC. Indeed, if the client has
aborted, this flag should be set.

This patch should be backported as far as 2.8.
2025-12-12 08:41:08 +01:00
William Lallemand
5b19d95850 BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
The RX_F_INHERITED flag was ambiguous, as it was used to mark both
listeners inherited from the parent process and listeners duplicated
from another local receiver. This could lead to incorrect behavior
concerning socket unbinding and suspension.

This commit refactors the handling of inherited listeners by splitting
the RX_F_INHERITED flag into two more specific flags:

- RX_F_INHERITED_FD: Indicates a listener inherited from the parent
  process via its file descriptor. These listeners should not be unbound
  by the master.

- RX_F_INHERITED_SOCK: Indicates a listener that shares a socket with
  another one, either by being inherited from the parent or by being
  duplicated from another local listener. These listeners should not be
  suspended or resumed individually.

Previously, the sharding code was unconditionally using RX_F_INHERITED
when duplicating a file descriptor. In HAProxy versions prior to 3.1,
this led to a file descriptor leak for duplicated unix stats sockets in
the master process. This would eventually cause the master to crash with
a BUG_ON in fd_insert() once the file descriptor limit was reached.

This must be backported as far as 3.0. Branches earlier than 3.0 are
affected but would need a different patch as the logic is different.
2025-12-11 18:09:47 +01:00
112 changed files with 5322 additions and 954 deletions

View File

@ -28,7 +28,7 @@ jobs:
run: env SSL_LIB=${HOME}/opt/ scripts/build-curl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
make -j$(nproc) CC=gcc TARGET=linux-glibc \
USE_QUIC=1 USE_OPENSSL=1 USE_ECH=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \

.github/workflows/openssl-master.yml vendored Normal file
View File

@ -0,0 +1,77 @@
name: openssl master
on:
schedule:
- cron: "0 3 * * *"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v5
- name: Install apt dependencies
run: |
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb
sudo apt-get --no-install-recommends -y install libpsl-dev
- uses: ./.github/actions/setup-vtest
- name: Install OpenSSL master
run: env OPENSSL_VERSION="git-master" GIT_TYPE="branch" scripts/build-ssl.sh
- name: Compile HAProxy
run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \
USE_QUIC=1 USE_OPENSSL=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \
ADDLIB="-Wl,-rpath,/usr/local/lib/ -Wl,-rpath,$HOME/opt/lib/"
sudo make install
- name: Show HAProxy version
id: show-version
run: |
ldd $(which haproxy)
haproxy -vv
echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
- name: Install problem matcher for VTest
run: echo "::add-matcher::.github/vtest.json"
- name: Run VTest for HAProxy
id: vtest
run: |
# This is required for macOS which does not actually allow to increase
# the '-n' soft limit to the hard limit, thus failing to run.
ulimit -n 65536
# allow to catch coredumps
ulimit -c unlimited
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
- name: Show VTest results
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
for folder in ${TMPDIR:-/tmp}/haregtests-*/vtc.*; do
printf "::group::"
cat $folder/INFO
cat $folder/LOG
echo "::endgroup::"
done
exit 1
- name: Run Unit tests
id: unittests
run: |
make unit-tests
- name: Show coredumps
if: ${{ failure() && steps.vtest.outcome == 'failure' }}
run: |
failed=false
shopt -s nullglob
for file in /tmp/core.*; do
failed=true
printf "::group::"
gdb -ex 'thread apply all bt full' ./haproxy $file
echo "::endgroup::"
done
if [ "$failed" = true ]; then
exit 1;
fi

View File

@ -1,32 +0,0 @@
#
# special purpose CI: test against OpenSSL built in "no-deprecated" mode
# let us run those builds weekly
#
# for example, OpenWRT uses such OpenSSL builds (those builds are smaller)
#
#
# some details might be found at NL: https://www.mail-archive.com/haproxy@formilux.org/msg35759.html
# GH: https://github.com/haproxy/haproxy/issues/367
name: openssl no-deprecated
on:
schedule:
- cron: "0 0 * * 4"
workflow_dispatch:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: ./.github/actions/setup-vtest
- name: Compile HAProxy
run: |
make DEFINE="-DOPENSSL_API_COMPAT=0x10100000L -DOPENSSL_NO_DEPRECATED" -j3 CC=gcc ERR=1 TARGET=linux-glibc USE_OPENSSL=1
- name: Run VTest
run: |
make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel

View File

@ -1,6 +1,77 @@
ChangeLog :
===========
2026/01/07 : 3.4-dev2
- BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
- BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)
- BUG/MEDIUM: stconn: Don't report abort from SC if read0 was already received
- BUG/MEDIUM: quic: Don't try to use hystart if not implemented
- CLEANUP: backend: Remove useless test on server's xprt
- CLEANUP: tcpcheck: Remove useless test on the xprt used for healthchecks
- CLEANUP: ssl-sock: Remove useless tests on connection when resuming TLS session
- REGTESTS: quic: fix a TLS stack usage
- REGTESTS: list all skipped tests including 'feature cmd' ones
- CI: github: remove openssl no-deprecated job
- CI: github: add a job to test the master branch of OpenSSL
- CI: github: openssl-master.yml misses actions/checkout
- BUG/MEDIUM: backend: Do not remove CO_FL_SESS_IDLE in assign_server()
- CI: github: use git prefix for openssl-master.yml
- BUG/MEDIUM: mux-h2: synchronize all conditions to create a new backend stream
- REGTESTS: fix error when no test are skipped
- MINOR: cpu-topo: Turn the cpu policy configuration into a struct
- MEDIUM: cpu-topo: Add a "threads-per-core" keyword to cpu-policy
- MEDIUM: cpu-topo: Add a "cpu-affinity" option
- MEDIUM: cpu-topo: Add a new "max-threads-per-group" global keyword
- MEDIUM: cpu-topo: Add the "per-thread" cpu_affinity
- MEDIUM: cpu-topo: Add the "per-ccx" cpu_affinity
- BUG/MINOR: cpu-topo: fix -Wlogical-not-parentheses build with clang
- DOC: config: fix number of values for "cpu-affinity"
- MINOR: tools: add a secure implementation of memset
- MINOR: mux-h2: add missing glitch count for non-decodable H2 headers
- MINOR: mux-h2: perform a graceful close at 75% glitches threshold
- MEDIUM: mux-h1: implement basic glitches support
- MINOR: mux-h1: perform a graceful close at 75% glitches threshold
- MEDIUM: cfgparse: acknowledge that proxy ID auto numbering starts at 2
- MINOR: cfgparse: remove useless checks on no server in backend
- OPTIM/MINOR: proxy: do not init proxy management task if unused
- MINOR: patterns: preliminary changes for reorganization
- MEDIUM: patterns: reorganize pattern reference elements
- CLEANUP: patterns: remove dead code
- OPTIM: patterns: cache the current generation
- MINOR: tcp: add new bind option "tcp-ss" to instruct the kernel to save the SYN
- MINOR: protocol: support a generic way to call getsockopt() on a connection
- MINOR: tcp: implement the get_opt() function
- MINOR: tcp_sample: implement the fc_saved_syn sample fetch function
- CLEANUP: assorted typo fixes in the code, commits and doc
- BUG/MEDIUM: cpu-topo: Don't forget to reset visited_ccx.
- BUG/MAJOR: set the correct generation ID in pat_ref_append().
- BUG/MINOR: backend: fix the conn_retries check for TFO
- BUG/MINOR: backend: inspect request not response buffer to check for TFO
- MINOR: net_helper: add sample converters to decode ethernet frames
- MINOR: net_helper: add sample converters to decode IP packet headers
- MINOR: net_helper: add sample converters to decode TCP headers
- MINOR: net_helper: add ip.fp() to build a simplified fingerprint of a SYN
- MINOR: net_helper: prepare the ip.fp() converter to support more options
- MINOR: net_helper: add an option to ip.fp() to append the TTL to the fingerprint
- MINOR: net_helper: add an option to ip.fp() to append the source address
- DOC: config: fix the length attribute name for stick tables of type binary / string
- MINOR: mworker/cli: only keep positive PIDs in proc_list
- CLEANUP: mworker: remove duplicate list.h include
- BUG/MINOR: mworker/cli: fix show proc pagination using reload counter
- MINOR: mworker/cli: extract worker "show proc" row printer
- MINOR: cpu-topo: Factorize code
- MINOR: cpu-topo: Rename variables to better fit their usage
- BUG/MEDIUM: peers: Properly handle shutdown when trying to get a line
- BUG/MEDIUM: mux-h1: Take care to update <kop> value during zero-copy forwarding
- MINOR: threads: Avoid using a thread group mask when stopping.
- MINOR: hlua: Add support for lua 5.5
- MEDIUM: cpu-topo: Add an optional directive for per-group affinity
- BUG/MEDIUM: mworker: can't use signals after a failed reload
- BUG/MEDIUM: stconn: Move data from <kip> to <kop> during zero-copy forwarding
- DOC: config: fix a few typos and refine cpu-affinity
- MINOR: receiver: Remove tgroup_mask from struct shard_info
- BUG/MINOR: quic: fix deprecated warning for window size keyword
2025/12/10 : 3.4-dev1
- BUG/MINOR: jwt: Missing "case" in switch statement
- DOC: configuration: ECH support details

View File

@ -643,7 +643,7 @@ ifneq ($(USE_OPENSSL:0=),)
OPTIONS_OBJS += src/ssl_sock.o src/ssl_ckch.o src/ssl_ocsp.o src/ssl_crtlist.o \
src/ssl_sample.o src/cfgparse-ssl.o src/ssl_gencert.o \
src/ssl_utils.o src/jwt.o src/ssl_clienthello.o src/jws.o src/acme.o \
src/ssl_trace.o
src/ssl_trace.o src/jwe.o
endif
ifneq ($(USE_ENGINE:0=),)
@ -992,7 +992,7 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/cfgcond.o src/proto_udp.o src/lb_fwlc.o src/ebmbtree.o \
src/proto_uxdg.o src/cfgdiag.o src/sock_unix.o src/sha1.o \
src/lb_fas.o src/clock.o src/sock_inet.o src/ev_select.o \
src/lb_map.o src/shctx.o src/hpack-dec.o \
src/lb_map.o src/shctx.o src/hpack-dec.o src/net_helper.o \
src/arg.o src/signal.o src/fix.o src/dynbuf.o src/guid.o \
src/cfgparse-tcp.o src/lb_ss.o src/chunk.o src/counters.o \
src/cfgparse-unix.o src/regex.o src/fcgi.o src/uri_auth.o \

View File

@ -1,2 +1,2 @@
$Format:%ci$
2025/12/10
2026/01/07

View File

@ -1 +1 @@
3.4-dev1
3.4-dev2

View File

@ -55,7 +55,7 @@ usage() {
echo " -S, --master-socket <path> Use the master socket at <path> (default: ${MASTER_SOCKET})"
echo " -d, --debug Debug mode, set -x"
echo " -t, --timeout Timeout (socat -t) (default: ${TIMEOUT})"
echo " -s, --silent Slient mode (no output)"
echo " -s, --silent Silent mode (no output)"
echo " -v, --verbose Verbose output (output from haproxy on failure)"
echo " -vv Even more verbose output (output from haproxy on success and failure)"
echo " -h, --help This help"

View File

@ -6,9 +6,9 @@ Wants=network-online.target
[Service]
EnvironmentFile=-/etc/default/haproxy
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "EXTRAOPTS=-S /run/haproxy-master.sock"
ExecStart=@SBINDIR@/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS
ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -c $EXTRAOPTS
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "CFGDIR=/etc/haproxy/conf.d" "EXTRAOPTS=-S /run/haproxy-master.sock"
ExecStart=@SBINDIR@/haproxy -Ws -f $CONFIG -f $CFGDIR -p $PIDFILE $EXTRAOPTS
ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -f $CFGDIR -c $EXTRAOPTS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

View File

@ -3,7 +3,7 @@
Configuration Manual
----------------------
version 3.4
2025/12/10
2026/01/07
This document covers the configuration language as implemented in the version
@ -647,8 +647,8 @@ which must be placed before other sections, but it may be repeated if needed.
In addition, some automatic identifiers may automatically be assigned to some
of the created objects (e.g. proxies), and by reordering sections, their
identifiers will change. These ones appear in the statistics for example. As
such, the configuration below will assign "foo" ID number 1 and "bar" ID number
2, which will be swapped if the two sections are reversed:
such, the configuration below will assign "foo" an ID number smaller than its
"bar" counterpart. This will be swapped if the two sections are reversed:
listen foo
bind :80
@ -1747,6 +1747,7 @@ The following keywords are supported in the "global" section :
- ca-base
- chroot
- cluster-secret
- cpu-affinity
- cpu-map
- cpu-policy
- cpu-set
@ -1786,6 +1787,7 @@ The following keywords are supported in the "global" section :
- lua-load
- lua-load-per-thread
- lua-prepend-path
- max-thread-per-group
- mworker-max-reloads
- nbthread
- node
@ -1875,6 +1877,8 @@ The following keywords are supported in the "global" section :
- tune.events.max-events-at-once
- tune.fail-alloc
- tune.fd.edge-triggered
- tune.h1.be.glitches-threshold
- tune.h1.fe.glitches-threshold
- tune.h1.zero-copy-fwd-recv
- tune.h1.zero-copy-fwd-send
- tune.h2.be.glitches-threshold
@ -2223,7 +2227,30 @@ cpu-map [auto:]<thread-group>[/<thread-set>] <cpu-set>[,...] [...]
cpu-map 4/1-40 40-79,120-159
cpu-policy <policy>
cpu-affinity <affinity>
Defines how you want threads to be bound to cpus.
It currently accepts the following values :
- per-core: each thread will be bound to all the hardware threads of one core.
- per-group: each thread will be bound to all the hardware threads of the
group. This is the default unless "threads-per-core 1" is used in
"cpu-policy". "per-group" accepts an optional argument, to specify how CPUs
should be allocated. When a list of CPUs is larger than the maximum allowed
number of CPUs per group and has to be split between multiple groups, an
extra option allows to choose how the groups will be bound to those CPUs:
- auto: each thread group will only be assigned a fair share of contiguous
CPU cores that are dedicated to it and not shared with other groups. This
is the default as it generally is more optimal.
- loose: each group will still be allowed to use any CPU in the list. This
generally causes more contention, but may sometimes help deal better with
parasitic loads running on the same CPUs.
- auto: "per-group" will be used, unless "threads-per-core 1" is used in
"cpu-policy", in which case "per-core" will be used. This is the default.
- per-thread: each thread will be bound to one hardware thread only. If
"threads-per-core 1" is used in "cpu-policy", then each thread will be
bound to one hardware thread of a different core.
- per-ccx: each thread will be bound to all the hardware threads of a CCX.
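
Example: a minimal sketch binding each thread to its local CCX, on a
hypothetical machine with several CCX:

    cpu-affinity per-ccx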
cpu-policy <policy> [threads-per-core 1 | auto]
Selects the CPU allocation policy to be used.
On multi-CPU systems, there can be plenty of reasons for not using all
@ -2375,6 +2402,13 @@ cpu-policy <policy>
easily. Note that if a single cluster is present, it
will still be fully used.
An optional keyword can be added, "threads-per-core". It can accept two
values, "1" and "auto". If set to 1, then only one thread per core will be
created, irrespective of how many hardware threads the core has. If set
to auto, then one thread per hardware thread will be created.
If no affinity is specified, and threads-per-core 1 is used, then by
default the affinity will be per-core.
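
Example:

    cpu-policy performance threads-per-core 1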
See also: "cpu-map", "cpu-set", "nbthread"
cpu-set <directive>...
@ -2845,7 +2879,7 @@ limited-quic
layer supports most of the necessary TLS operations, albeit without QUIC
0-RTT capability.
This feature is primarily targetted for OpenSSL prior to version 3.5.2, where
This feature is primarily targeted for OpenSSL prior to version 3.5.2, where
QUIC API was not implemented or only partially. The compatibility layer can
still be activated for version 3.5.2 and above, but this is probably
unnecessary.
@ -2980,6 +3014,14 @@ master-worker no-exit-on-failure
it is only meant for debugging and could put the master process in an
abnormal state.
max-threads-per-group <number>
Defines the maximum number of threads in a thread group. Unless the number
of thread groups is fixed with the thread-groups directive, haproxy will
create more thread groups if needed. The default and maximum value is 64.
Having a lower value means more groups will potentially be created, which
can help improve performance, as a number of data structures are per
thread group, and that means less contention.
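
Example: a sketch capping groups at 32 threads, so that a hypothetical
128-thread machine would get 4 thread groups:

    max-threads-per-group 32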
mworker-max-reloads <number>
In master-worker mode, this option limits the number of times a worker can
survive a reload. If the worker did not leave after a reload, once its
@ -4163,9 +4205,49 @@ tune.glitches.kill.cpu-usage <number>
will automatically get killed. A rule of thumb would be to set this value to
twice the usually observed CPU usage, or the commonly observed CPU usage plus
half the idle one (i.e. if CPU commonly reaches 60%, setting 80 here can make
sense). This parameter has no effect without tune.h2.fe.glitches-threshold or
tune.quic.fe.sec.glitches-threshold. See also the global parameters
"tune.h2.fe.glitches-threshold" and "tune.quic.fe.sec.glitches-threshold".
sense). This parameter has no effect without tune.h2.fe.glitches-threshold,
tune.quic.fe.sec.glitches-threshold or tune.h1.fe.glitches-threshold. See
also the global parameters "tune.h2.fe.glitches-threshold",
"tune.h1.fe.glitches-threshold" and "tune.quic.fe.sec.glitches-threshold".
tune.h1.be.glitches-threshold <number>
Sets the threshold for the number of glitches on a HTTP/1 backend connection,
after which that connection will automatically be killed. This allows to
automatically kill misbehaving connections without having to write explicit
rules for them. The default value is zero, indicating that no threshold is
set so that no event will cause a connection to be closed. Typical events
include improperly formatted headers that had been nevertheless accepted by
"accept-unsafe-violations-in-http-response". Any non-zero value here should
probably be in the hundreds or thousands to be effective without affecting
slightly bogus servers. It is also possible to only kill connections when the
CPU usage crosses a certain level, by using "tune.glitches.kill.cpu-usage".
Note that a graceful close is attempted at 75% of the configured threshold
by marking the connection to be closed after the current transaction. This
ensures that a slightly faulty connection will stop being used after some
time without risking to interrupt ongoing transfers.
See also: tune.h1.fe.glitches-threshold, bc_glitches, and
tune.glitches.kill.cpu-usage
tune.h1.fe.glitches-threshold <number>
Sets the threshold for the number of glitches on a HTTP/1 frontend connection
after which that connection will automatically be killed. This allows to
automatically kill misbehaving connections without having to write explicit
rules for them. The default value is zero, indicating that no threshold is
set so that no event will cause a connection to be closed. Typical events
include improperly formatted headers that had been nevertheless accepted by
"accept-unsafe-violations-in-http-request". Any non-zero value here should
probably be in the hundreds or thousands to be effective without affecting
slightly bogus clients. It is also possible to only kill connections when the
CPU usage crosses a certain level, by using "tune.glitches.kill.cpu-usage".
Note that a graceful close is attempted at 75% of the configured threshold
by marking the connection to be closed after the current transaction. This
ensures that a slightly non-compliant client will have the opportunity to
create a new connection and continue to work unaffected, without ever
triggering the hard close and thus risking to interrupt ongoing transfers.
See also: tune.h1.be.glitches-threshold, fc_glitches, and
tune.glitches.kill.cpu-usage
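
Example: a sketch that only kills misbehaving clients when the machine
is also busy (both values are arbitrary):

    tune.h1.fe.glitches-threshold 500
    tune.glitches.kill.cpu-usage 80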
tune.h1.zero-copy-fwd-recv { on | off }
Enables ('on') or disables ('off') the zero-copy receives of data for the H1
@ -4189,7 +4271,10 @@ tune.h2.be.glitches-threshold <number>
zero value here should probably be in the hundreds or thousands to be
effective without affecting slightly bogus servers. It is also possible to
only kill connections when the CPU usage crosses a certain level, by using
"tune.glitches.kill.cpu-usage".
"tune.glitches.kill.cpu-usage". Note that a graceful close is attempted at
75% of the configured threshold by advertising a GOAWAY for a future stream.
This ensures that a slightly faulty connection will stop being used after
some time without risking to interrupt ongoing transfers.
See also: tune.h2.fe.glitches-threshold, bc_glitches, and
tune.glitches.kill.cpu-usage
@ -4246,7 +4331,11 @@ tune.h2.fe.glitches-threshold <number>
zero value here should probably be in the hundreds or thousands to be
effective without affecting slightly bogus clients. It is also possible to
only kill connections when the CPU usage crosses a certain level, by using
"tune.glitches.kill.cpu-usage".
"tune.glitches.kill.cpu-usage". Note that a graceful close is attempted at
75% of the configured threshold by advertising a GOAWAY for a future stream.
This ensures that a slightly non-compliant client will have the opportunity
to create a new connection and continue to work unaffected without ever
triggering the hard close thus risking to interrupt ongoing transfers.
See also: tune.h2.be.glitches-threshold, fc_glitches, and
tune.glitches.kill.cpu-usage
@ -5731,6 +5820,7 @@ errorloc302 X X X X
errorloc303 X X X X
error-log-format X X X -
force-persist - - X X
force-be-switch - X X -
filter - X X X
fullconn X - X X
guid - X X X
@ -7014,6 +7104,9 @@ default_backend <backend>
used when no rule has matched. It generally is the dynamic backend which
will catch all undetermined requests.
If a backend is disabled or unpublished, default_backend rules targeting it
will be ignored and stream processing will remain on the original proxy.
Example :
use_backend dynamic if url_dyn
@ -7057,7 +7150,11 @@ disabled
is possible to disable many instances at once by adding the "disabled"
keyword in a "defaults" section.
See also : "enabled"
By default, a disabled backend cannot be selected for content-switching.
However, a portion of the traffic can ignore this when "force-be-switch" is
used.
See also : "enabled", "force-be-switch"
dispatch <address>:<port> (deprecated)
@ -7467,6 +7564,19 @@ force-persist { if | unless } <condition>
and section 7 about ACL usage.
force-be-switch { if | unless } <condition>
Allow content switching to select a backend instance even if it is disabled
or unpublished. This rule can be used by admins to send test traffic to
services prior to exposing them to the outside world.
May be used in the following contexts: tcp, http
May be used in sections: defaults | frontend | listen | backend
no | yes | yes | no
See also : "disabled"
filter <name> [param*]
Add the filter <name> in the filter list attached to the proxy.
@ -8613,9 +8723,11 @@ id <value>
Arguments : none
Set a persistent ID for the proxy. This ID must be unique and positive.
An unused ID will automatically be assigned if unset. The first assigned
value will be 1. This ID is currently only returned in statistics.
Set a persistent ID for the proxy. This ID must be unique and positive. An
unused ID will automatically be assigned if unset. Due to an historical
behavior, value 1 is not used unless explicitly set. Thus, the lowest value
automatically assigned will be 2. This ID is currently only returned in
statistics.
ignore-persist { if | unless } <condition>
@ -14661,14 +14773,17 @@ use_backend <backend> [{if | unless} <condition>]
There may be as many "use_backend" rules as desired. All of these rules are
evaluated in their declaration order, and the first one which matches will
assign the backend.
assign the backend. This is even the case if the backend is considered
down. However, if a matching rule targets a disabled or unpublished backend,
it is ignored instead and rules evaluation continues.
In the first form, the backend will be used if the condition is met. In the
second form, the backend will be used if the condition is not met. If no
condition is valid, the backend defined with "default_backend" will be used.
If no default backend is defined, either the servers in the same section are
used (in case of a "listen" section) or, in case of a frontend, no server is
used and a 503 service unavailable response is returned.
condition is valid, the backend defined with "default_backend" will be used
unless it is disabled or unpublished. If no default backend is available,
either the servers in the same section are used (in case of a "listen"
section) or, in case of a frontend, no server is used and a 503 service
unavailable response is returned.
Note that it is possible to switch from a TCP frontend to an HTTP backend. In
this case, either the frontend has already checked that the protocol is HTTP,
@ -17431,6 +17546,19 @@ tcp-md5sig <password>
introduction of spoofed TCP segments into the connection stream. But it can
be useful for any very long-lived TCP connections.
tcp-ss <mode>
Sets the TCP Save SYN option for all incoming connections instantiated from
this listening socket. This option is available on Linux since version 4.3.
It instructs the kernel to try to keep a copy of the incoming IP packet
containing the TCP SYN flag, for later inspection via the "fc_saved_syn"
sample fetch function. The option knows 3 modes:
- 0 SYN packet saving is disabled, this is the default
- 1 SYN packet saving is enabled, and contains IP and TCP headers
- 2 SYN packet saving is enabled, and contains ETH, IP and TCP headers
This only works for regular TCP connections, and is ignored for other
protocols (e.g. UNIX sockets). See also "fc_saved_syn".
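
Example: a sketch saving the IP and TCP headers of the SYN for later
inspection with the ip.* and tcp.* converters:

    frontend fe1
        bind :8080 tcp-ss 1
        tcp-request connection set-var(sess.syn) fc_saved_syn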
tcp-ut <delay>
Sets the TCP User Timeout for all incoming connections instantiated from this
listening socket. This option is available on Linux since version 2.6.37. It
@ -18826,7 +18954,7 @@ proto <name>
quic-cc-algo { cubic | newreno | bbr | nocc }[(<args,...>)]
This is a QUIC specific setting to select the congestion control algorithm
for any connection targetting this server. They are similar to those used by
for any connection targeting this server. They are similar to those used by
TCP. See the bind option with a similar name for a complete description of
all customization options.
@ -20352,6 +20480,8 @@ The following keywords are supported:
51d.single(prop[,prop*]) string string
add(value) integer integer
add_item(delim[,var[,suff]]) string string
aes_cbc_dec(bits,nonce,key[,<aad>]) binary binary
aes_cbc_enc(bits,nonce,key[,<aad>]) binary binary
aes_gcm_dec(bits,nonce,key,aead_tag[,aad]) binary binary
aes_gcm_enc(bits,nonce,key,aead_tag[,aad]) binary binary
and(value) integer integer
@ -20377,6 +20507,12 @@ debug([prefix][,destination]) any same
digest(algorithm) binary binary
div(value) integer integer
djb2([avalanche]) binary integer
eth.data binary binary
eth.dst binary binary
eth.hdr binary binary
eth.proto binary integer
eth.src binary binary
eth.vlan binary integer
even integer boolean
field(index,delimiters[,count]) string string
fix_is_valid binary boolean
@ -20389,9 +20525,21 @@ htonl integer integer
http_date([offset[,unit]]) integer string
iif(true,false) boolean string
in_table([table]) any boolean
ip.data binary binary
ip.df binary integer
ip.dst binary address
ip.fp binary binary
ip.hdr binary binary
ip.proto binary integer
ip.src binary address
ip.tos binary integer
ip.ttl binary integer
ip.ver binary integer
ipmask(mask4[,mask6]) address address
json([input-code]) string string
json_query(json_path[,output_type]) string _outtype_
jwt_decrypt_cert(<cert>) string binary
jwt_decrypt_secret(<secret>) string binary
jwt_header_query([json_path[,output_type]]) string string
jwt_payload_query([json_path[,output_type]]) string string
-- keyword -------------------------------------+- input type + output type -
@ -20474,6 +20622,18 @@ table_server_id([table]) any integer
table_sess_cnt([table]) any integer
table_sess_rate([table]) any integer
table_trackers([table]) any integer
tcp.dst binary integer
tcp.flags binary integer
tcp.options.mss binary integer
tcp.options.sack binary integer
tcp.options.tsopt binary integer
tcp.options.tsval binary integer
tcp.options.wscale binary integer
tcp.options.wsopt binary integer
tcp.options_list binary binary
tcp.seq binary integer
tcp.src binary integer
tcp.win binary integer
ub64dec string string
ub64enc string string
ungrpc(field_number[,field_type]) binary binary / int
@ -20548,6 +20708,31 @@ add_item(<delim>[,<var>[,<suff>]])
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)'
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1)
aes_cbc_dec(<bits>,<nonce>,<key>[,<aad>])
Decrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. All other parameters need to be
base64 encoded and the returned result is in raw byte format. The <aad>
parameter is optional. If the <aad> validation fails, the converter doesn't
return any data.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Decrypted-Text %[var(txn.enc),\
aes_cbc_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_cbc_enc(<bits>,<nonce>,<key>[,<aad>])
Encrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. <nonce>, <key> and <aad>
parameters must be base64 encoded.
The <aad> parameter is optional. The returned result is in raw byte format.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Encrypted-Text %[var(txn.plain),\
aes_cbc_enc(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_gcm_dec(<bits>,<nonce>,<key>,<aead_tag>[,<aad>])
Decrypts the raw byte input using the AES128-GCM, AES192-GCM or AES256-GCM
algorithm, depending on the <bits> parameter. All other parameters need to be
@ -20796,6 +20981,48 @@ djb2([<avalanche>])
32-bit hash is trivial to break. See also "crc32", "sdbm", "wt6", "crc32c",
and the "hash-type" directive.
eth.data
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It skips all the Ethernet header including possible VLANs and returns a block
of binary data starting at the layer 3 protocol (usually IPv4 or IPv6). See
also "fc_saved_syn" and "tcp-ss".
eth.dst
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the 6 bytes of the Ethernet header corresponding to the
destination address of the frame, as a binary block. See also "fc_saved_syn"
and "tcp-ss".
eth.hdr
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It trims anything past the Ethernet header but keeps possible VLANs, and
returns this header as a block of binary data. See also "fc_saved_syn" and
"tcp-ss".
eth.proto
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the protocol number (also known as EtherType) found in an Ethernet
header after any optional VLAN as an integer value. It should normally be
either 0x800 for IPv4 or 0x86DD for IPv6. See also "fc_saved_syn" and
"tcp-ss".
eth.src
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the 6 bytes of the Ethernet header corresponding to the source
address of the frame, as a binary block. See also "fc_saved_syn" and
"tcp-ss".
eth.vlan
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "2".
It returns the last VLAN ID found in an Ethernet header as an integer value.
See also "fc_saved_syn" and "tcp-ss".
even
Returns a boolean TRUE if the input value of type signed integer is even
otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool".
@ -20924,6 +21151,132 @@ in_table([<table>])
elements (e.g. whether or not a source IP address or an Authorization header
was already seen).
ip.data
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It skips the IP header and any optional
options or extensions, and returns a block of binary data starting at the
transport protocol (usually TCP or UDP). See also "fc_saved_syn", "tcp-ss",
and "eth.data".
ip.df
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns integer value 1 if the DF (don't
fragment) flag is set in the IP header, 0 otherwise. IPv6 does not have a DF
flag, and doesn't fragment by default so it always returns 1. See also
"fc_saved_syn", "tcp-ss", and "eth.data".
ip.dst
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the IPv4 or IPv6 destination
address from the IPv4/v6 header. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.fp([<mode>])
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It inspects various parts of the IP header
and the TCP header to construct sort of a fingerprint of invariant parts that
can be used to distinguish between multiple apparently identical hosts. The
real-world use case is to refine the identification of misbehaving hosts
behind a shared IP address, to avoid blocking legitimate users when only one
is misbehaving and needs to be blocked. The converter builds a 7-byte binary
block based on the input. The bytes of the fingerprint are arranged like
this:
- byte 0: IP TOS field (see ip.tos)
- byte 1:
- bit 7: IPv6 (1) / IPv4 (0)
- bit 6: ip.df
- bit 5..4: 0:ip.ttl<=32; 1:ip.ttl<=64; 2:ip.ttl<=128; 3:ip.ttl<=255
- bit 3: IP options present (1) / absent (0)
- bit 2: TCP data present (1) / absent (0)
- bit 1: TCP.flags has CWR set (1) / cleared (0)
- bit 0: TCP.flags has ECE set (1) / cleared (0)
- byte 2:
- bits 7..4: TCP header length in 4-byte words
- bits 3..0: TCP window scaling + 1 (1..15) / 0 (no WS advertised)
- byte 3..4: tcp.win
- byte 5..6: tcp.options.mss, or zero if absent
The <mode> argument permits to append more information to the fingerprint. By
default, when the <mode> argument is not set or is zero, the fingerprint is
solely made of the 7 bytes described above. If <mode> is specified as another
value, it then corresponds to the sum of the following values, and the
respective components will be concatenated to the fingerprint, in the order
below:
- 1: the received TTL value is appended to the fingerprint (1 byte)
- 2: the list of TCP option kinds, as returned by "tcp.options_list",
made of 0 to 40 extra bytes, is appended to the fingerprint
- 4: the source IP address is appended to the fingerprint, which adds
4 bytes for IPv4 and 16 for IPv6.
Example: make a 12..24 bytes fingerprint using the base FP, the TTL and the
source address (1+4=5):
frontend test
mode http
bind :4445 tcp-ss 1
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain lf-string \
"src=%[var(sess.syn),ip.src] fp=%[var(sess.syn),ip.fp(5),hex]\n"
See also "fc_saved_syn", "tcp-ss", "eth.data", "ip.df", "ip.ttl", "tcp.win",
"tcp.options.mss", and "tcp.options_list".
ip.hdr
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns a block of binary data starting
with the IP header and stopping after the last option or extension, and
before the transport protocol header. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.proto
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the transport protocol number,
usually 6 for TCP or 17 for UDP. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ip.src
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns the IPv4 or IPv6 source address
from the IPv4/v6 header. See also "fc_saved_syn", "tcp-ss", and "eth.data".
ip.tos
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". It returns an integer corresponding to the
value of the type-of-service (TOS) field in the IPv4 header or traffic class
(TC) field in the IPv6 header. Note that in the modern internet, this field
most often contains a DSCP (Differentiated Services Codepoint) value in the
6 upper bits and the two lower are either not used, or used by IP ECN. Please
refer to RFC2474 and RFC8436 for DSCP values, and RFC3168 for IP ECN fields.
See also "fc_saved_syn", "tcp-ss", and "eth.data".
ip.ttl
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". This returns an integer corresponding to
the TTL (Time To Live) or HL (Hop Limit) field in the IPv4/IPv6 header. This
value is usually preset to a fixed value and decremented by each router that
the packet crosses. It can help infer how far away a client is connecting
from when the initial value is known. Note that most modern operating
systems start with an
initial value of 64. See also "fc_saved_syn", "tcp-ss", and "eth.data".
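
Example: a sketch exposing the received TTL, assuming the SYN was saved
in sess.syn as in the "tcp-ss" example:

    http-request set-header X-Client-TTL %[var(sess.syn),ip.ttl]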
ip.ver
This is used with an input sample representing a binary Ethernet frame, as
returned by "fc_saved_syn" combined with the "tcp-ss" bind option set to "1",
or with the output of "eth.data". This returns the IP version from the IP
header, normally either 4 or 6. Note that this doesn't check whether the
protocol number in the upper layer Ethernet frame matches, but since this is
expected to be used with valid packets, it is expected that the operating
system has already verified this. See also "fc_saved_syn", "tcp-ss", and
"eth.data".
ipmask(<mask4>[,<mask6>])
Apply a mask to an IP address, and use the result for lookups and storage.
This can be used to make all hosts within a certain mask to share the same
@ -21011,22 +21364,72 @@ json_query(<json_path>[,<output_type>])
# get the value of the key 'iss' from a JWT Bearer token
http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss')
jwt_decrypt_cert(<cert>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the certificate provided.
The <cert> parameter must be a path to an already loaded certificate (that
can be dumped via the "dump ssl cert" CLI command). The certificate must have
its "jwt" option explicitely set to "on" (see "jwt" crt-list option). It can
be provided directly or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP or RSA-OAEP-256.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_cert("/foo/bar.pem")]
jwt_decrypt_secret(<secret>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the base64-encoded secret provided. The secret can be
given as a string or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: A128KW, A192KW, A256KW, A128GCMKW,
A192GCMKW, A256GCMKW, dir. Please note that the A128KW and A192KW algorithms
are not available on AWS-LC and decryption will not work.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_secret("GawgguFyGrWKav7AX4VKUg")]
jwt_header_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) in input, either returns the decoded header
part of the token (the first base64-url encoded part of the JWT) if no
parameter is given, or performs a json_query on the decoded header part of
the token. See "json_query" converter for details about the accepted
json_path and output_type parameters.
This converter can be used with tokens that are either JWS or JWE tokens as
long as they are in the Compact Serialization format.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
jwt_payload_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) in input, either returns the decoded
payload part of the token (the second base64-url encoded part of the JWT) if
no parameter is given, or performs a json_query on the decoded payload part
of the token. See "json_query" converter for details about the accepted
json_path and output_type parameters.
When given a JSON Web Token (JWT) of the JSON Web Signed (JWS) format in
input, either returns the decoded payload part of the token (the second
base64-url encoded part of the JWT) if no parameter is given, or performs a
json_query on the decoded payload part of the token. See "json_query"
converter for details about the accepted json_path and output_type
parameters.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
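Example:
# Purely illustrative: extract the "iss" claim of a JWS token into a
# variable (name chosen arbitrarily)
http-request set-var(txn.issuer) http_auth_bearer,jwt_payload_query('$.iss')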
@ -22133,6 +22536,88 @@ table_trackers([<table>])
concurrent connections there are from a given address for example. See also
the sc_trackers sample fetch keyword.
tcp.dst
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the destination
port present in the TCP header. See also "fc_saved_syn", "tcp-ss", and
"ip.data".
tcp.flags
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the TCP flags
from this TCP header. All 8 flags from FIN to CWR are retrieved. Each flag
may be tested using the "and()" converter. Please refer to RFC9293 for the
value of each flag. See also "fc_saved_syn", "tcp-ss", and "ip.data".
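For instance, assuming a SYN saved with "tcp-ss" set to 1 (so that the sample
starts at the IP header) and stored into a variable, the ECN-related flags
(ECE and CWR, mask 0xC0 i.e. 192) could be isolated as follows (variable
names are arbitrary):
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request set-var(txn.ecn) var(sess.syn),ip.data,tcp.flags,and(192)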
tcp.options.mss
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "MSS", and if found,
it returns an integer value corresponding to the advertised value in that
option, otherwise zero. The MSS is the Maximum Segment Size and indicates the
largest segment the peer may receive, in bytes. See also "fc_saved_syn",
"tcp-ss", and "ip.data".
tcp.options.sack
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Sack-Permitted",
and if found, returns 1, otherwise zero. See also "fc_saved_syn", "tcp-ss",
and "ip.data".
tcp.options.tsopt
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Timestamp", and if
found, returns 1, otherwise zero. See also "fc_saved_syn", "tcp-ss", and
"ip.data".
tcp.options.tsval
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Timestamp", and if
found, returns the timestamp value emitted by the peer, otherwise does not
return anything. Note that timestamps are 32-bit unsigned values with no
particular unit that only the peer decides on, and timestamps are expected to
be independent between different connections. See also "fc_saved_syn",
"tcp-ss", and "ip.data".
tcp.options.wscale
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Window Scale", and
if found, returns the window scaling value emitted by the peer, otherwise
zero. Note that values are not expected to be beyond 14 though no technical
limitation prevents them from being sent. In order to detect if the window
scale option was used, please use "tcp.options.wsopt". See also "tcp-ss",
"fc_saved_syn", "ip.data", and "tcp.options.wsopt".
tcp.options.wsopt
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It looks for a TCP option of kind "Window Scale", and
if found, returns 1, otherwise 0. See also "fc_saved_syn", "tcp-ss",
"ip.data", and "tcp.options.wscale".
tcp.options_list
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It builds a binary sequence of all TCP option kinds in
the same order as they appear in the TCP header. It can produce from 0 to 60
bytes of output in the worst case. The End-of-Options kind is not emitted. See
also "fc_saved_syn", "tcp-ss", and "ip.data".
tcp.seq
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the sequence number
used by the peer in the TCP header. Sequence numbers are 32-bit unsigned
values. See also "fc_saved_syn", "tcp-ss", and "ip.data".
tcp.src
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the source port
present in the TCP header. See also "fc_saved_syn", "tcp-ss", and "ip.data".
tcp.win
This is used with an input sample representing a binary TCP header, as
returned by "ip.data". It returns an integer representing the window size
advertised by the peer in the TCP header. The value is provided as-is, as a
16-bit unsigned quantity, without applying the window scaling factor. See
also "fc_saved_syn", "tcp-ss", and "ip.data".
ub64dec
This converter is the base64url variant of b64dec converter. base64url
encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
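Example:
# Decode the payload part (second dot-separated element) of a JWT found in
# the Authorization header
http-request set-var(txn.payload) req.hdr(Authorization),word(2,.),ub64dec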
@ -23184,6 +23669,7 @@ fc_retrans integer
fc_rtt(<unit>) integer
fc_rttvar(<unit>) integer
fc_sacked integer
fc_saved_syn binary
fc_settings_streams_limit integer
fc_src ip
fc_src_is_local boolean
@ -23782,6 +24268,80 @@ fc_sacked : integer
if the operating system does not support TCP_INFO, for example Linux kernels
before 2.4, the sample fetch fails.
fc_saved_syn : binary
Returns a copy of the saved SYN packet that was preserved by the system
during the incoming connection setup. This requires that the "tcp-ss" option
was present on the "bind" line, and a Linux kernel 4.3 minimum. When "tcp-ss"
is set to 1, only the IP and TCP headers are present. When "tcp-ss" is set to
2, then the Ethernet header is also present before the IP header, and may be
used to control or log source MAC address or VLANs for example. Note that
there is no guarantee that a SYN will be saved. For example, if SYN cookies
are used, the SYN packet is not preserved and the connection is established
on the matching ACK packet. In addition, the system doesn't guarantee to
preserve the copy beyond the first read. As such it is strongly recommended
to copy it into a variable in scope "sess" from a "tcp-request connection"
rule and only use that variable for further manipulations. It is worth noting
that on the loopback interface a dummy 14-byte Ethernet header is constructed
by the system where both the source and destination addresses are zero, and
only the protocol is set. It is convenient to convert such samples to
hexadecimal using the "hex" converter during debugging. Example (fields
manually separated and commented below):
frontend test
mode http
bind :::4445 tcp-ss 2
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain \
lf-string "%[var(sess.syn),hex]\n"
$ curl '0:4445'
000000000000 000000000000 0800 \ # MAC_DST MAC_SRC PROTO=IPv4
4500003C0A65400040063255 \ # IPv4 header, proto=6 (TCP)
7F000001 7F000001 \ # IP_SRC=127.0.0.1 IP_DST=127.0.0.1
E1F2 115D 01AF4E3E 00000000 \ # TCP_SPORT=57842 TCP_DPORT=4445, SEQ
A0 02 FFD7 FE300000 \ # OPT_LEN=20 TCP_FLAGS=SYN WIN=65495
0204FFD70402080A01C2A71A0000000001030307 # MSS=65495, TS, SACK, WSCALE 7
$ curl '[::1]:4445'
000000000000 000000000000 86DD \ # MAC_DST MAC_SRC PROTO=IPv6
6008018F00280640 \ # IPv6 header, proto=6 (TCP)
00000000000000000000000000000001 \ # SRC=::1
00000000000000000000000000000001 \ # DST=::1
9758 115D B5511F5D 00000000 \ # TCP_SPORT=38744 TCP_DPORT=4445, SEQ
A0 02 FFC4 00300000 \ # OPT_LEN=20 TCP_FLAGS=SYN WIN=65476
0204FFC40402080A9C231D680000000001030307 # MSS=65476, TS, SACK, WSCALE 7
The "bytes()" converter helps extract specific fields from the packet. The
be2dec() also permits to read chunks and emit them in integer form. For more
accurate extraction, please refer to the "eth.XXX" converters.
Example with IPv4 input:
frontend test
mode http
bind :4445 tcp-ss 2
tcp-request connection set-var(sess.syn) fc_saved_syn
http-request return status 200 content-type text/plain lf-string \
"mac_dst=%[var(sess.syn),eth.dst,hex] \
mac_src=%[var(sess.syn),eth.src,hex] \
proto=%[var(sess.syn),eth.proto,bytes(6),be2hex(,2)] \
ipv4h=%[var(sess.syn),eth.data,bytes(0,12),hex] \
ipv4_src=%[var(sess.syn),eth.data,ip.src] \
ipv4_dst=%[var(sess.syn),eth.data,ip.dst] \
tcp_spt=%[var(sess.syn),eth.data,ip.data,tcp.src] \
tcp_dpt=%[var(sess.syn),eth.data,ip.data,tcp.dst] \
tcp_win=%[var(sess.syn),eth.data,ip.data,tcp.win] \
tcp_opt=%[var(sess.syn),eth.data,ip.data,bytes(20),hex]\n"
$ curl '0:4445'
mac_dst=000000000000 mac_src=000000000000 proto=0800 \
ipv4h=4500003CC9B7400040067302 ipv4_src=127.0.0.1 ipv4_dst=127.0.0.1 \
tcp_spt=43970 tcp_dpt=4445 tcp_win=65495 \
tcp_opt=0204FFD70402080A01DC0D410000000001030307
See also the "set-var" action, the "be2dec", "bytes", "hex", "eth.XXX",
"ip.XXX", and "tcp.XXX" converters.
fc_settings_streams_limit : integer
Returns the maximum number of streams allowed on the frontend connection. For
TCP and HTTP/1.1 connections, it is always 1. For other protocols, it depends
@ -29760,7 +30320,7 @@ Arguments: (mandatory ones first, then alphabetically sorted):
which can represent a client identifier found in a request for
instance.
* string [length <len>]
* string [len <len>]
A table declared with "type string" will store substrings of
up to <len> characters. If the string provided by the pattern
extractor is larger than <len>, it will be truncated before
@ -29770,7 +30330,7 @@ Arguments: (mandatory ones first, then alphabetically sorted):
limited to 32 characters. Increasing the length can have a
non-negligible memory usage impact.
* binary [length <len>]
* binary [len <len>]
A table declared with "type binary" will store binary blocks
of <len> bytes. If the block provided by the pattern
extractor is larger than <len>, it will be truncated before
@ -31044,8 +31604,9 @@ ocsp-update [ off | on ]
failure" or "Error during insertion" errors.
jwt [ off | on ]
Allow for this certificate to be used for JWT validation via the
"jwt_verify_cert" converter when set to 'on'. Its value default to 'off'.
Allow for this certificate to be used for JWT validation or decryption via
the "jwt_verify_cert" or "jwt_decrypt_cert" converters when set to 'on'. Its
value defaults to 'off'.
When set to 'on' for a given certificate, the CLI command "del ssl cert" will
not work. In order to be deleted, a certificate must not be used, either for


@ -2474,6 +2474,11 @@ prompt [help | n | i | p | timed]*
advanced scripts, and the non-interactive mode (default) to basic scripts.
Note that the non-interactive mode is not available for the master socket.
publish backend <backend>
Activates content switching to a backend instance. This is the reverse
operation of the "unpublish backend" command. This command is restricted and
can only be issued on sockets configured for levels "operator" or "admin".
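For example, assuming a stats socket bound at /var/run/haproxy.sock and a
backend named "app_be" (both names arbitrary):
$ echo "publish backend app_be" | socat stdio /var/run/haproxy.sock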
quit
Close the connection when in interactive mode.
@ -2842,6 +2847,13 @@ operator
increased. It also drops expert and experimental mode. See also "show cli
level".
unpublish backend <backend>
Marks the backend as unqualified for future traffic selection. In effect,
use_backend / default_backend rules which reference it are ignored and the
next content switching rules are evaluated. Contrary to disabled backends,
server health checks remain active. This command is restricted and can only
be issued on sockets configured for levels "operator" or "admin".
user
Decrease the CLI level of the current CLI session to user. It can't be
increased. It also drops expert and experimental mode. See also "show cli


@ -366,7 +366,7 @@ static inline size_t applet_output_data(const struct appctx *appctx)
* This is useful when data have been read directly from the buffer. It is
* illegal to call this function with <len> causing a wrapping at the end of the
* buffer. It's the caller's responsibility to ensure that <len> is never larger
* than available ouput data.
* than available output data.
*
* This function is not HTX aware.
*/
@ -392,7 +392,7 @@ static inline void applet_reset_input(struct appctx *appctx)
co_skip(sc_oc(appctx_sc(appctx)), co_data(sc_oc(appctx_sc(appctx))));
}
/* Returns the amout of space available at the HTX output buffer (see applet_get_outbuf).
/* Returns the amount of space available at the HTX output buffer (see applet_get_outbuf).
*/
static inline size_t applet_htx_output_room(const struct appctx *appctx)
{
@ -402,7 +402,7 @@ static inline size_t applet_htx_output_room(const struct appctx *appctx)
return channel_recv_max(sc_ic(appctx_sc(appctx)));
}
/* Returns the amout of space available at the output buffer (see applet_get_outbuf).
/* Returns the amount of space available at the output buffer (see applet_get_outbuf).
*/
static inline size_t applet_output_room(const struct appctx *appctx)
{


@ -85,10 +85,20 @@ static inline int be_usable_srv(struct proxy *be)
return be->srv_bck;
}
/* Returns true if <be> backend can be used as a target of switching rules. */
static inline int be_is_eligible(const struct proxy *be)
{
/* A disabled or unpublished backend cannot be selected for traffic.
* Note that STOPPED state is ignored as there is a risk of breaking
* requests during soft-stop.
*/
return !(be->flags & (PR_FL_DISABLED|PR_FL_BE_UNPUBLISHED));
}
/* set the time of last session on the backend */
static inline void be_set_sess_last(struct proxy *be)
{
if (be->be_counters.shared.tg[tgid - 1])
if (be->be_counters.shared.tg)
HA_ATOMIC_STORE(&be->be_counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}


@ -140,7 +140,7 @@ int warnif_misplaced_tcp_req_sess(struct proxy *proxy, const char *file, int lin
int warnif_misplaced_tcp_req_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_tcp_res_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_quic_init(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, const char *file, int line);
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, char **err);
int warnif_tcp_http_cond(const struct proxy *px, const struct acl_cond *cond);
int too_many_args_idx(int maxarg, int index, char **args, char **msg, int *err_code);
int too_many_args(int maxarg, char **args, char **msg, int *err_code);


@ -530,7 +530,7 @@
/* add mandatory padding of the specified size between fields in a structure.
* This is used to avoid false sharing of cache lines for dynamically allocated
* structures which cannot guarantee alignment, or to ensure that the size of
* the struct remains consistent on architectures with different aligment
* the struct remains consistent on architectures with different alignment
* constraints
*/
#ifndef ALWAYS_PAD


@ -146,7 +146,6 @@ enum {
CO_FL_WANT_SPLICING = 0x00001000, /* we wish to use splicing on the connection when possible */
CO_FL_SSL_NO_CACHED_INFO = 0x00002000, /* Don't use any cached information when creating a new SSL connection */
/* unused: 0x00002000 */
CO_FL_EARLY_SSL_HS = 0x00004000, /* We have early data pending, don't start SSL handshake yet */
CO_FL_EARLY_DATA = 0x00008000, /* At least some of the data are early data */
@ -477,7 +476,7 @@ struct xprt_ops {
void (*dump_info)(struct buffer *, const struct connection *);
/*
* Returns the value for various capabilities.
* Returns 0 if the capability is known, iwth the actual value in arg,
* Returns 0 if the capability is known, with the actual value in arg,
* or -1 otherwise
*/
int (*get_capability)(struct connection *connection, void *xprt_ctx, enum xprt_capabilities, void *arg);


@ -66,7 +66,7 @@ struct counters_shared {
COUNTERS_SHARED;
struct {
COUNTERS_SHARED_TG;
} *tg[MAX_TGROUPS];
} **tg;
};
/*
@ -101,7 +101,7 @@ struct fe_counters_shared_tg {
struct fe_counters_shared {
COUNTERS_SHARED;
struct fe_counters_shared_tg *tg[MAX_TGROUPS];
struct fe_counters_shared_tg **tg;
};
/* counters used by listeners and frontends */
@ -160,7 +160,7 @@ struct be_counters_shared_tg {
struct be_counters_shared {
COUNTERS_SHARED;
struct be_counters_shared_tg *tg[MAX_TGROUPS];
struct be_counters_shared_tg **tg;
};
/* counters used by servers and backends */


@ -43,11 +43,13 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
*/
#define COUNTERS_SHARED_LAST_OFFSET(scounters, type, offset) \
({ \
unsigned long last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
unsigned long last = 0; \
unsigned long now_seconds = ns_to_sec(now_ns); \
int it; \
\
for (it = 1; (it < global.nbtgroups && scounters[it]); it++) { \
if (scounters) \
last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
for (it = 1; (it < global.nbtgroups && scounters); it++) { \
unsigned long cur = HA_ATOMIC_LOAD((type *)((char *)scounters[it] + offset));\
if ((now_seconds - cur) < (now_seconds - last)) \
last = cur; \
@ -74,7 +76,7 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters[it]); it++) \
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc((type *)((char *)scounters[it] + offset)); \
__ret; \
})
@ -94,7 +96,7 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters[it]); it++) \
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc(&scounters[it]->elem, arg1, arg2); \
__ret; \
})


@ -261,6 +261,7 @@ struct global {
unsigned int req_count; /* request counter (HTTP or TCP session) for logs and unique_id */
int last_checks;
uint32_t anon_key;
int maxthrpertgroup; /* Maximum number of threads per thread group */
/* leave this at the end to make sure we don't share this cache line by accident */
ALWAYS_ALIGN(64);


@ -255,6 +255,7 @@ struct hlua_patref_iterator_context {
struct hlua_patref *ref;
struct bref bref; /* back-reference from the pat_ref_elt being accessed
* during listing */
struct pat_ref_gen *gen; /* the generation we are iterating over */
};
#else /* USE_LUA */


@ -184,6 +184,7 @@ enum {
PERSIST_TYPE_NONE = 0, /* no persistence */
PERSIST_TYPE_FORCE, /* force-persist */
PERSIST_TYPE_IGNORE, /* ignore-persist */
PERSIST_TYPE_BE_SWITCH, /* force-be-switch */
};
/* final results for http-request rules */


@ -204,6 +204,7 @@ struct bind_conf {
unsigned int backlog; /* if set, listen backlog */
int maxconn; /* maximum connections allowed on this listener */
int (*accept)(struct connection *conn); /* upper layer's accept() */
int tcp_ss; /* for TCP, Save SYN */
int level; /* stats access level (ACCESS_LVL_*) */
int severity_output; /* default severity output format in cli feedback messages */
short int nice; /* nice value to assign to the instantiated tasks */


@ -107,12 +107,16 @@ struct pat_ref {
struct list list; /* Used to chain refs. */
char *reference; /* The reference name. */
char *display; /* String displayed to identify the pattern origin. */
struct list head; /* The head of the list of struct pat_ref_elt. */
struct ceb_root *ceb_root; /* The tree where pattern reference elements are attached. */
struct ceb_root *gen_root; /* The tree mapping generation IDs to pattern reference elements */
struct list pat; /* The head of the list of struct pattern_expr. */
unsigned int flags; /* flags PAT_REF_*. */
unsigned int curr_gen; /* current generation number (anything below can be removed) */
unsigned int next_gen; /* next generation number (insertions use this one) */
/* We keep a cached pointer to the current generation for performance. */
struct {
struct pat_ref_gen *data;
unsigned int id;
} cached_gen;
int unique_id; /* Each pattern reference have unique id. */
unsigned long long revision; /* updated for each update */
unsigned long long entry_cnt; /* the total number of entries */
@ -121,6 +125,16 @@ struct pat_ref {
event_hdl_sub_list e_subs; /* event_hdl: pat_ref's subscribers list (atomically updated) */
};
/* This struct represents all the elements in a pattern reference generation. The tree
* is used most of the time, but we also maintain a list for when order matters.
*/
struct pat_ref_gen {
struct list head; /* The head of the list of struct pat_ref_elt. */
struct ceb_root *elt_root; /* The tree where pattern reference elements are attached. */
struct ceb_node gen_node; /* Linkage for the gen_root cebtree in struct pat_ref */
unsigned int gen_id;
};
/* This is a part of struct pat_ref. Each entry contains one pattern and one
* associated value as original string. All derivative forms (via exprs) are
* accessed from list_head or tree_head. Be careful, it's variable-sized!
@ -133,7 +147,7 @@ struct pat_ref_elt {
char *sample;
unsigned int gen_id; /* generation of pat_ref this was made for */
int line;
struct ceb_node node; /* Node to attach this element to its <pat_ref> ebtree. */
struct ceb_node node; /* Node to attach this element to its <pat_ref_gen> cebtree. */
const char pattern[0]; // const only to make sure nobody tries to free it.
};


@ -189,8 +189,10 @@ struct pat_ref *pat_ref_new(const char *reference, const char *display, unsigned
struct pat_ref *pat_ref_newid(int unique_id, const char *display, unsigned int flags);
struct pat_ref_elt *pat_ref_find_elt(struct pat_ref *ref, const char *key);
struct pat_ref_elt *pat_ref_gen_find_elt(struct pat_ref *ref, unsigned int gen_id, const char *key);
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, const char *pattern, const char *sample, int line);
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, unsigned int gen, const char *pattern, const char *sample, int line);
struct pat_ref_elt *pat_ref_load(struct pat_ref *ref, unsigned int gen, const char *pattern, const char *sample, int line, char **err);
struct pat_ref_gen *pat_ref_gen_new(struct pat_ref *ref, unsigned int gen_id);
struct pat_ref_gen *pat_ref_gen_get(struct pat_ref *ref, unsigned int gen_id);
int pat_ref_push(struct pat_ref_elt *elt, struct pattern_expr *expr, int patflags, char **err);
int pat_ref_add(struct pat_ref *ref, const char *pattern, const char *sample, char **err);
int pat_ref_set(struct pat_ref *ref, const char *pattern, const char *sample, char **err);


@ -160,6 +160,7 @@ struct protocol {
/* default I/O handler */
void (*default_iocb)(int fd); /* generic I/O handler (typically accept callback) */
int (*get_info)(struct connection *conn, long long int *info, int info_num); /* Callback to get connection level statistical counters */
int (*get_opt)(const struct connection *conn, int level, int optname, void *buf, int size); /* getsockopt(level:optname) into buf:size */
uint flags; /* flags describing protocol support (PROTO_F_*) */
uint nb_receivers; /* number of receivers (under proto_lock) */


@ -247,6 +247,7 @@ enum PR_SRV_STATE_FILE {
#define PR_FL_IMPLICIT_REF 0x10 /* The default proxy is implicitly referenced by another proxy */
#define PR_FL_PAUSED 0x20 /* The proxy was paused at run time (reversible) */
#define PR_FL_CHECKED 0x40 /* The proxy configuration was fully checked (including postparsing checks) */
#define PR_FL_BE_UNPUBLISHED 0x80 /* The proxy cannot be targeted by content switching rules */
struct stream;


@ -166,12 +166,12 @@ static inline int proxy_abrt_close(const struct proxy *px)
/* increase the number of cumulated connections received on the designated frontend */
static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
{
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
}
@ -179,12 +179,12 @@ static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
/* increase the number of cumulated connections accepted by the designated frontend */
static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
{
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
}
@ -199,19 +199,19 @@ static inline void proxy_inc_fe_cum_sess_ver_ctr(struct listener *l, struct prox
http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver))
return;
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg)
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
if (l && l->counters && l->counters->shared.tg && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
}
/* increase the number of cumulated streams on the designated backend */
static inline void proxy_inc_be_ctr(struct proxy *be)
{
if (be->be_counters.shared.tg[tgid - 1])
if (be->be_counters.shared.tg) {
_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
if (be->be_counters.shared.tg[tgid - 1])
update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
update_freq_ctr(&be->be_counters._sess_per_sec, 1));
}
@ -226,12 +226,12 @@ static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req))
return;
if (fe->fe_counters.shared.tg[tgid - 1])
if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
}


@ -434,7 +434,7 @@ struct quic_conn_closed {
#define QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS (1U << 2) /* HANDSHAKE_DONE must be sent */
#define QUIC_FL_CONN_IS_BACK (1U << 3) /* conn used on backend side */
#define QUIC_FL_CONN_ACCEPT_REGISTERED (1U << 4)
#define QUIC_FL_CONN_UDP_GSO_EIO (1U << 5) /* GSO disabled due to a EIO occured on same listener */
#define QUIC_FL_CONN_UDP_GSO_EIO (1U << 5) /* GSO disabled due to a EIO occurred on same listener */
#define QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ (1U << 6)
#define QUIC_FL_CONN_RETRANS_NEEDED (1U << 7)
#define QUIC_FL_CONN_RETRANS_OLD_DATA (1U << 8) /* retransmission in progress for probing with already sent data */


@ -5,7 +5,7 @@
#include <haproxy/api-t.h>
/* Counter which can be used to measure data amount accross several buffers. */
/* Counter which can be used to measure data amount across several buffers. */
struct bdata_ctr {
uint64_t tot; /* sum of data present in all underlying buffers */
uint8_t bcnt; /* current number of allocated underlying buffers */


@ -33,11 +33,12 @@
/* Bit values for receiver->flags */
#define RX_F_BOUND 0x00000001 /* receiver already bound */
#define RX_F_INHERITED 0x00000002 /* inherited FD from the parent process (fd@) or duped from another local receiver */
#define RX_F_INHERITED_FD 0x00000002 /* inherited FD from the parent process (fd@) */
#define RX_F_MWORKER 0x00000004 /* keep the FD open in the master but close it in the children */
#define RX_F_MUST_DUP 0x00000008 /* this receiver's fd must be dup() from a reference; ignore socket-level ops here */
#define RX_F_NON_SUSPENDABLE 0x00000010 /* this socket cannot be suspended hence must always be unbound */
#define RX_F_PASS_PKTINFO 0x00000020 /* pass pktinfo in received messages */
#define RX_F_INHERITED_SOCK 0x00000040 /* inherited sock that could be duped from another local receiver */
/* Bit values for rx_settings->options */
#define RX_O_FOREIGN 0x00000001 /* receives on foreign addresses */
@ -63,9 +64,8 @@ struct rx_settings {
struct shard_info {
uint nbgroups; /* number of groups in this shard (=#rx); Zero = unused. */
uint nbthreads; /* number of threads in this shard (>=nbgroups) */
ulong tgroup_mask; /* bitmask of thread groups having a member here */
struct receiver *ref; /* first one, reference for FDs to duplicate */
struct receiver *members[MAX_TGROUPS]; /* all members of the shard (one per thread group) */
struct receiver **members; /* all members of the shard (one per thread group) */
};
/* This describes a receiver with all its characteristics (address, options, etc) */


@ -63,6 +63,7 @@ int smp_expr_output_type(struct sample_expr *expr);
int c_none(struct sample *smp);
int c_pseudo(struct sample *smp);
int smp_dup(struct sample *smp);
int sample_check_arg_base64(struct arg *arg, char **err);
/*
* This function just apply a cast on sample. It returns 0 if the cast is not


@ -207,7 +207,7 @@ static inline void server_index_id(struct proxy *px, struct server *srv)
/* increase the number of cumulated streams on the designated server */
static inline void srv_inc_sess_ctr(struct server *s)
{
if (s->counters.shared.tg[tgid - 1]) {
if (s->counters.shared.tg) {
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
@ -218,7 +218,7 @@ static inline void srv_inc_sess_ctr(struct server *s)
/* set the time of last session on the designated server */
static inline void srv_set_sess_last(struct server *s)
{
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}


@ -46,6 +46,7 @@ struct connection *sock_accept_conn(struct listener *l, int *status);
void sock_accept_iocb(int fd);
void sock_conn_ctrl_init(struct connection *conn);
void sock_conn_ctrl_close(struct connection *conn);
int sock_conn_get_opt(const struct connection *conn, int level, int optname, void *buf, int size);
void sock_conn_iocb(int fd);
int sock_conn_check(struct connection *conn);
int sock_drain(struct connection *conn);


@ -254,7 +254,7 @@ struct ssl_keylog {
#define SSL_SOCK_F_KTLS_SEND (1 << 2) /* kTLS send is configured on that socket */
#define SSL_SOCK_F_KTLS_RECV (1 << 3) /* kTLS receive is configured on that socket */
#define SSL_SOCK_F_CTRL_SEND (1 << 4) /* We want to send a kTLS control message for that socket */
#define SSL_SOCK_F_HAS_ALPN (1 << 5) /* An ALPN has been negociated */
#define SSL_SOCK_F_HAS_ALPN (1 << 5) /* An ALPN has been negotiated */
struct ssl_sock_ctx {
struct connection *conn;


@ -57,6 +57,9 @@ const char *nid2nist(int nid);
const char *sigalg2str(int sigalg);
const char *curveid2str(int curve_id);
int aes_process(struct buffer *data, struct buffer *nonce, struct buffer *key, int key_size,
struct buffer *aead_tag, struct buffer *aad, struct buffer *out, int decrypt, int gcm);
#endif /* _HAPROXY_SSL_UTILS_H */
#endif /* USE_OPENSSL */


@ -15,7 +15,7 @@ enum stfile_domain {
};
#define SHM_STATS_FILE_VER_MAJOR 1
#define SHM_STATS_FILE_VER_MINOR 1
#define SHM_STATS_FILE_VER_MINOR 2
#define SHM_STATS_FILE_HEARTBEAT_TIMEOUT 60 /* past this delay (seconds), a process which has not
* sent a heartbeat will be considered down
@ -64,9 +64,9 @@ struct shm_stats_file_hdr {
*/
struct shm_stats_file_object {
char guid[GUID_MAX_LEN + 1];
uint8_t tgid; // thread group ID from 1 to 64
uint16_t tgid; // thread group ID
uint8_t type; // SHM_STATS_FILE_OBJECT_TYPE_* to know how to handle object.data
ALWAYS_PAD(6); // 6 bytes hole, ensure it remains the same size 32 vs 64 bits arch
ALWAYS_PAD(5); // 5 bytes hole, ensure it remains the same size 32 vs 64 bits arch
uint64_t users; // bitfield that corresponds to users of the object (see shm_stats_file_hdr slots)
/* as the struct may hold any of the types described here, let's make it
* so it may store up to the heaviest one using an union


@ -313,8 +313,8 @@ struct se_abort_info {
*
* <kip> is the known input payload length. It is set by the stream endpoint
* that produce data and decremented once consumed by the app
* loyer. Depending on the enpoint, this value may be unset. It may be set
* only once if the payload lenght is fully known from the begining (a
* layer. Depending on the endpoint, this value may be unset. It may be set
* only once if the payload length is fully known from the beginning (a
* HTTP message with a content-length for instance), or incremented
* periodically when more data are expected (a chunk-encoded HTTP message
* for instance). On the app side, this value is decremented when data are


@ -60,7 +60,6 @@ extern int thread_cpus_enabled_at_boot;
/* Only way found to replace variables with constants that are optimized away
* at build time.
*/
enum { all_tgroups_mask = 1UL };
enum { tid_bit = 1UL };
enum { tid = 0 };
enum { tgid = 1 };
@ -208,7 +207,6 @@ void wait_for_threads_completion();
void set_thread_cpu_affinity();
unsigned long long ha_get_pthread_id(unsigned int thr);
extern volatile unsigned long all_tgroups_mask;
extern volatile unsigned int rdv_requests;
extern volatile unsigned int isolated_thread;
extern THREAD_LOCAL unsigned int tid; /* The thread id */


@ -42,7 +42,7 @@ struct thread_set {
ulong abs[(MAX_THREADS + LONGBITS - 1) / LONGBITS];
ulong rel[MAX_TGROUPS];
};
ulong grps; /* bit field of all non-empty groups, 0 for abs */
ulong nbgrps; /* Number of thread groups, 0 for abs */
};
/* tasklet classes */


@ -77,7 +77,7 @@ static inline int thread_set_nth_group(const struct thread_set *ts, int n)
{
int i;
if (ts->grps) {
if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--)
return i + 1;
@ -95,7 +95,7 @@ static inline ulong thread_set_nth_tmask(const struct thread_set *ts, int n)
{
int i;
if (ts->grps) {
if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--)
return ts->rel[i];
@ -111,7 +111,7 @@ static inline void thread_set_pin_grp1(struct thread_set *ts, ulong mask)
{
int i;
ts->grps = 1;
ts->nbgrps = 1;
ts->rel[0] = mask;
for (i = 1; i < MAX_TGROUPS; i++)
ts->rel[i] = 0;


@ -1490,4 +1490,6 @@ int path_base(const char *path, const char *base, char *dst, char **err);
void ha_freearray(char ***array);
void ha_memset_s(void *s, int c, size_t n);
#endif /* _HAPROXY_TOOLS_H */


@ -63,7 +63,7 @@
* the same split bit as its parent node, it is necessary its associated leaf
*
* When descending along the tree, it is possible to know that a search key is
* not present, because its XOR with both of the branches is stricly higher
* not present, because its XOR with both of the branches is strictly higher
* than the inter-branch XOR. The reason is simple : the inter-branch XOR will
* have its highest bit set indicating the split bit. Since it's the bit that
* differs between the two branches, the key cannot have it both set and


@ -0,0 +1,85 @@
varnishtest "aes_cbc converter Test"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL)'"
feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
feature ignore_unknown_macro
server s1 {
rxreq
txresp -hdr "Connection: close"
} -repeat 2 -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
# WT: limit false-positives causing "HTTP header incomplete" due to
# idle server connections being randomly used and randomly expiring
# under us.
tune.idle-pool.shared off
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${fe}"
http-request set-var(txn.plain) str("Hello from HAProxy AES-CBC")
http-request set-var(txn.short_nonce) str("MTIzNDU2Nzg5MDEy")
http-request set-var(txn.nonce) str("MTIzNDU2Nzg5MDEyMzQ1Ng==")
http-request set-var(txn.key) str("Zm9vb2Zvb29mb29vb29vbw==")
# AES-CBC enc with vars + dec with strings
http-request set-var(txn.encrypted1) var(txn.plain),aes_cbc_enc(128,txn.nonce,txn.key),base64
http-after-response set-header X-Encrypted1 %[var(txn.encrypted1)]
http-request set-var(txn.decrypted1) var(txn.encrypted1),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==")
http-after-response set-header X-Decrypted1 %[var(txn.decrypted1)]
# AES-CBC enc with strings + dec with vars
http-request set-var(txn.encrypted2) var(txn.plain),aes_cbc_enc(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw=="),base64
http-after-response set-header X-Encrypted2 %[var(txn.encrypted2)]
http-request set-var(txn.decrypted2) var(txn.encrypted2),b64dec,aes_cbc_dec(128,txn.nonce,txn.key)
http-after-response set-header X-Decrypted2 %[var(txn.decrypted2)]
# AES-CBC + AAD enc with vars + dec with strings
http-request set-var(txn.aad) str("dGVzdAo=")
http-request set-var(txn.encrypted3) var(txn.plain),aes_cbc_enc(128,txn.nonce,txn.key,txn.aad),base64
http-after-response set-header X-Encrypted3 %[var(txn.encrypted3)]
http-request set-var(txn.decrypted3) var(txn.encrypted3),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==","dGVzdAo=")
http-after-response set-header X-Decrypted3 %[var(txn.decrypted3)]
# AES-CBC + AAD enc with strings + dec with vars
http-request set-var(txn.encrypted4) var(txn.plain),aes_cbc_enc(128,"MTIzNDU2Nzg5MDEyMzQ1Ng==","Zm9vb2Zvb29mb29vb29vbw==","dGVzdAo="),base64
http-after-response set-header X-Encrypted4 %[var(txn.encrypted4)]
http-request set-var(txn.decrypted4) var(txn.encrypted4),b64dec,aes_cbc_dec(128,txn.nonce,txn.key,txn.aad)
http-after-response set-header X-Decrypted4 %[var(txn.decrypted4)]
# AES-CBC enc with short nonce (var) + dec with short nonce (string)
http-request set-var(txn.encrypted5) var(txn.plain),aes_cbc_enc(128,txn.short_nonce,txn.key),base64
http-after-response set-header X-Encrypted5 %[var(txn.encrypted5)]
http-request set-var(txn.decrypted5) var(txn.encrypted5),b64dec,aes_cbc_dec(128,"MTIzNDU2Nzg5MDEy","Zm9vb2Zvb29mb29vb29vbw==")
http-after-response set-header X-Decrypted5 %[var(txn.decrypted5)]
default_backend be
backend be
server s1 ${s1_addr}:${s1_port}
} -start
client c1 -connect ${h1_fe_sock} {
txreq
rxresp
expect resp.http.x-decrypted1 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted2 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted3 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted4 == "Hello from HAProxy AES-CBC"
expect resp.http.x-decrypted5 == "Hello from HAProxy AES-CBC"
} -run


@ -0,0 +1,201 @@
#REGTEST_TYPE=devel
# This reg-test checks the behaviour of the jwt_decrypt_secret and
# jwt_decrypt_cert converters that decode a JSON Web Encryption (JWE) token,
# check its signature and decrypt its content (RFC 7516).
# The tokens have two tiers of encryption, one that is used to encrypt a secret
# ("alg" field of the JOSE header) and this secret is then used to
# encrypt/decrypt the data contained in the token ("enc" field of the JOSE
# header).
# This reg-test tests a subset of alg/enc combination.
#
# AWS-LC does not support the A128KW algorithm, so for tests that use it, we
# have a hardcoded "AWS-LC UNMANAGED" value put in the response header instead
# of the decrypted contents.
varnishtest "Test the 'jwt_decrypt' functionalities"
feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(1.1.1)'"
feature ignore_unknown_macro
server s1 -repeat 10 {
rxreq
txresp
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
.if !ssllib_name_startswith(AWS-LC)
tune.ssl.default-dh-param 2048
.endif
tune.ssl.capture-buffer-size 1
stats socket "${tmpdir}/h1/stats" level admin
crt-base "${testdir}"
key-base "${testdir}"
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
crt-store
# Private key built out of following JWK:
# { "kty": "RSA", "e": "AQAB", "n": "wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w", "kid": "ff3c5c96-392e-46ef-a839-6ff16027af78", "d": "b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ", "p": "8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0", "q": "zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M", "dp": "1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE", "dq": "kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM", "qi": "j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk" }
load crt rsa1_5.pem key rsa1_5.key jwt on
# Private key built out of following JWK:
# { "kty": "RSA", "e": "AQAB", "n": "wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w", "kid": "ff3c5c96-392e-46ef-a839-6ff16027af78", "d": "b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ", "p": "8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0", "q": "zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M", "dp": "1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE", "dq": "kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM", "qi": "j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk" }
load crt rsa_oeap.pem key rsa_oeap.key jwt on
listen main-fe
bind "fd@${mainfe}"
use_backend secret_based_alg if { path_beg /secret }
use_backend pem_based_alg if { path_beg /pem }
default_backend dflt
backend secret_based_alg
http-request set-var(txn.jwe) http_auth_bearer
http-request set-var(txn.secret) hdr(X-Secret),ub64dec,base64
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_secret(txn.secret)
.if ssllib_name_startswith(AWS-LC)
acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m str "A128KW"
http-request set-var(txn.decrypted) str("AWS-LC UNMANAGED") if aws_unmanaged
.endif
http-response set-header X-Decrypted %[var(txn.decrypted)]
server s1 ${s1_addr}:${s1_port}
backend pem_based_alg
http-request set-var(txn.jwe) http_auth_bearer
http-request set-var(txn.pem) hdr(X-PEM)
http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_cert(txn.pem)
http-after-response set-header X-Decrypted %[var(txn.decrypted)]
server s1 ${s1_addr}:${s1_port}
backend dflt
server s1 ${s1_addr}:${s1_port}
} -start
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
client c1_1 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == "Setec Astronomy"
} -run
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
# Token is modified to have an invalid tag
client c1_2 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8v" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == ""
} -run
#ALG: dir
#ENC: A256GCM
#KEY: {"kty":"oct", "k":"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"}
# Wrong secret
client c1_3 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: zMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
rxresp
expect resp.http.x-decrypted == ""
} -run
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"3921VrO5TrLvPQ-NFLlghQ"}
client c2_1 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_MA" -hdr "X-Secret: 3921VrO5TrLvPQ-NFLlghQ"
rxresp
expect resp.http.x-decrypted ~ "(Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo\\. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt\\. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem\\. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur\\? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur\\?|AWS-LC UNMANAGED)"
} -run
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"3921VrO5TrLvPQ-NFLlghQ"}
# Token is modified to have an invalid tag
client c2_2 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_Ma" -hdr "X-Secret: 3921VrO5TrLvPQ-NFLlghQ"
rxresp
expect resp.http.x-decrypted ~ "(|AWS-LC UNMANAGED)"
} -run
#ALG: A256GCMKW
#ENC: A256CBC-HS512
#KEY: {"k":"vof8hNUaHiMw_0o3EGVPtBOPDDWJ62b8kQWE2ufSjIE","kty":"oct"}
client c3 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIiwiaXYiOiJRclluZUNxVmVldExzN1FKIiwidGFnIjoieFEyeFI2SHdBUngzeDJUdFg5UFVSZyJ9.wk4eJtdTKOPsic4IBtVcppO6Sp6LfXmxHzBvHZtU0Sk7JCVqhAghkeAw0qWJ5XsdwSneIlZ4rGygtnafFl4Thw.ylzjPBsgJ4qefDQZ_jUVpA.xX0XhdL4KTSZfRvHuZD1_Dh-XrfZogRsBHpgxkDZdYk.w8LPVak5maNeQpSWgCIGGsj26SLQZTx6nAmkvDQKFIA" -hdr "X-Secret: vof8hNUaHiMw_0o3EGVPtBOPDDWJ62b8kQWE2ufSjIE"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
} -run
# RFC7516 JWE
# https://datatracker.ietf.org/doc/html/rfc7516#appendix-A.3
#ALG: A128KW
#ENC: A128CBC-HS256
#KEY: {"kty":"oct", "k":"GawgguFyGrWKav7AX4VKUg" }
client c4 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.6KB707dM9YTIgHtLvtgWQ8mKwboJW3of9locizkDTHzBC2IlrT1oOQ.AxY8DCtDaGlsbGljb3RoZQ.KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY.U0m_YmjN04DJvceFICbCVQ" -hdr "X-Secret: GawgguFyGrWKav7AX4VKUg"
rxresp
expect resp.http.x-decrypted ~ "(Live long and prosper\\.|AWS-LC UNMANAGED)"
} -run
#ALG: A256GCMKW
#ENC: A192CBC-HS384
#KEY: {"k":"vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw","kty":"oct"}
client c5 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-Secret: vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
} -run
#ALG: RSA1_5
#ENC: A256GCM
client c6 -connect ${h1_mainfe_sock} {
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBMV81IiwgImVuYyI6ICJBMjU2R0NNIn0.ew8AbprGcd_J73-CZPIsE1YonD9rtcL7VCuOOuVkrpS_9UzA9_kMh1yw20u-b5rKJAhmFMCQPXl44ro6IzOeHu8E2X_NlPEnQfyNVQ4R1HB_E9sSk5BLxOH3aHkVUh0I-e2eDDj-pdI3OrdjZtnZEBeQ7tpMcoBEbn1VGg7Pmw4qtdS-0qnDSs-PttU-cejjgPUNLRU8UdoRVC9uJKacJms110QugDuFuMYTTSU2nbIYh0deCMRAuKGWt0Ii6EMYW2JaJ7JfXag59Ar1uylQPyEVrocnOsDuB9xnp2jd796qCPdKxBK9yKUnwjal4SQpYbutr40QzG1S4MsKaUorLg.0el2ruY0mm2s7LUR.X5RI6dF06Y_dbAr8meb-6SG5enj5noto9nzgQU5HDrYdiUofPptIf6E-FikKUM9QR4pY9SyphqbPYeAN1ZYVxBrR8tUf4Do2kw1biuuRAmuIyytpmxwvY946T3ctu1Zw3Ymwe-jWXX08EngzssvzFOGT66gkdufrTkC45Fkr0RBOmWa5OVVg_VR6LwcivtQMmlArlrwbaDmmLqt_2p7afT0UksEz4loq0sskw-p7GbhB2lpzXoDnijdHrQkftRbVCiDbK4-qGr7IRFb0YOHvyVFr-kmDoJv2Zsg_rPKV1LkYmPJUbVDo9T3RAcLinlKPK4ZPC_2bWj3M9BvfOq1HeuyVWzX2Cb1mHFdxXFGqaLPfsE0VOfn0GqL7oHVbuczYYw2eKdmiw5LEMwuuJEdYDE9IIFEe8oRB4hNZ0XMYB6oqqZejD0Fh6nqlj5QUrTYpTSE-3LkgK2zRJ0oZFXZyHCB426bmViuE0mXF7twkQep09g0U35-jFBZcSYBDvZZL1t5d_YEQ0QtO0mEeEpGb0Pvk_EsSMFib7NxClz4_rdtwWCFuM4uFOS5vrQMiMqi_TadhLxrugRFhJpsibuScCiJ7eNDrUvwSWEwv1U593MUX3guDq_ONOo_49EOJSyRJtQCNC6FW6GLWSz9TCo6g5LCnXt-pqwu0Iymr7ZTQ3MTsdq2G55JM2e6SdG43iET8r235hynmXHKPUYHlSjsC2AEAY_pGDO0akIhf4wDVIM5rytn-rjQf-29ZJp05g6KPe-EaN1C-X7aBGhgAEgnX-iaXXbotpGeKRTNj2jAG1UrkYi6BGHxluiXJ8jH_LjHuxKyzIObqK8p28ePDKRL-jyNTrvGW2uorgb_u7HGmWYIWLTI7obnZ5vw3MbkjcwEd4bX5JXUj2rRsUWMlZSSFVO9Wgf7MBvcLsyF0Yqun3p0bi__edmcqNF_uuYZT-8jkUlMborqIDDCYYqIolgi5R1Bmut-gFYq6xyfEncxOi50xmYon50UulVnAH-up_RELGtCjmAivaJb8.upVY733IMAT8YbMab2PZnw" -hdr "X-PEM: ${testdir}/rsa1_5.pem"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
} -run
#ALG: RSA-OAEP
#ENC: A256GCM
client c7 -connect ${h1_mainfe_sock} {
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBLU9BRVAiLCAiZW5jIjogIkEyNTZHQ00ifQ.Os33U1HEY92lrpup2E-HNttBW26shGSCafqNbVfs1rwWB__B-0dRAiKg4OtIrIXVCN7oQMqLr9RFRO6Gb-OAPIr-59FETLSXP8K_3uNcy-jdKrpKLbv8wgisEYqBJj4BysZQjuWgUgJ7Dvx28_zIUg0FJGOwxtpX2SUWxEgw5CPRgRrENJDJ2EYA6wuX9SbfarhQR4uPN7pdRKZ0ZQN6_5H3H9pWJ4WNnsQ0wjChKTsdR3kHOvygiUmdYSEWGe6LBQLSBQCnQim1pr--GBOHvDf2g4Je9EDFrrO1icFDbBdJ8I4ol4ixglLEnBCTHdhYd_lVe0i5JcxxHF8hmemAYQ.IOphaFIcCosKyXcN.KEjWfV2yBKLuMLX20mtEvrQ-P_oKWkdgZabx0FgRLqjSorD7DS3aIXLMEmyrOYd4kGHKCMg2Fvg61xKvI2FsQviA5LgHtx0QKmFARacP8kBl8vFPMEg2WtW0rIImTc1tj4C0PM9A0TbyDohtcoN9UYosrw5GyPOlHwIFwWosLA9WHqp00MAfAu3JOa4CwuMXsORGzeIyb7X-jg_bbG_9xkVUsgZpaCUX447a3QmKLJVBfQpeEO_PuYbds-MvIU9m4uYzWplNeHnf3B1dh9p6o4Ml6OEp-0G_4Nd4UmMz_g9A-TatH-A__MAC9Mx1Wj1cDn5M3upcrAyu2JLQ48A-Qa2ocElhQ4ODzwbgbC5PS34Mlm_x18zqL-0Fw3ckhzgoAyDBoRO6SaNmsKb1wQ6QGbwBJx1jC51hpzBHRv3pUlegsHXgq7OWN1x1tDJvRc_DHMa23Mheg-aKJcliP846Dduq2_Hve3md30C0hbrP1OMF5ZJSVu4kUo7UFaZA_6hhcoGvvyEGDMnPH5SznrrsyHGIre-WOdXCObZNkDV6Qn0sqAP_vkj_6Dj965W8ksCKk6ye409cB4mnqfLv3dUtGLV8o8VtCLIEs2G62lwaDGrX4HB-pZ6jea2qH6UvgwK5WT-VzrypSQcVoWCKopln2gtO1JROKmbOiL9f8dfbLKqYSRB6ppMxh5Euddx_eNikZfLEcXfq2Grwyrj0NLP82AFSxSYf3BpYqpOhSxca0gx0psb8tCwq3sqmh5Bp_qmKIOthXb6k-9R_Ng6cRTp132OnDEXEDtvDv59WJWHuo4qACyrg7jUlrh4dAYwYke1yBgVcqK5JwVnmKDjnx9vRGFSD9esrL8MpGiP6uUeN3AXiv7OSb83hDdwTTQU5nvitHWKS72Mb1FRPdDXUxooiyShAkV5Spo3YNl4EHkm6lnlJ-kC3BFlxYqYd5a_vtqA-ywR7ozWo1GtMBjYycq2s9Kp8FnqI2cTWobOCjMxaej4CXaRA4IwhjC1u6OTCvxP70MWYT0pJPjUS.k9i0Lw9MfJs4Rp-_uwIEeA" -hdr "X-PEM: ${testdir}/rsa_oeap.pem"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
} -run
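For reference, a minimal standalone sketch (not part of this patch) of the CEK-unwrap step that the RSA-OAEP token above exercises, using OpenSSL's EVP_PKEY API; key loading and the subsequent A256GCM payload decryption are omitted:

#include <stddef.h>
#include <openssl/evp.h>
#include <openssl/rsa.h>

/* Unwrap a JWE content-encryption key with RSA-OAEP. Returns a buffer
 * allocated with OPENSSL_malloc() (caller frees) or NULL on error. */
static unsigned char *jwe_unwrap_cek(EVP_PKEY *rsa_key,
                                     const unsigned char *enc_cek,
                                     size_t enc_len, size_t *cek_len)
{
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(rsa_key, NULL);
    unsigned char *cek = NULL;

    if (!ctx || EVP_PKEY_decrypt_init(ctx) <= 0 ||
        EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_OAEP_PADDING) <= 0)
        goto out;
    /* first call sizes the output, second call performs the decryption */
    if (EVP_PKEY_decrypt(ctx, NULL, cek_len, enc_cek, enc_len) <= 0)
        goto out;
    cek = OPENSSL_malloc(*cek_len);
    if (cek && EVP_PKEY_decrypt(ctx, cek, cek_len, enc_cek, enc_len) <= 0) {
        OPENSSL_free(cek);
        cek = NULL;
    }
out:
    EVP_PKEY_CTX_free(ctx);
    return cek;
}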

27
reg-tests/jwt/rsa1_5.key Normal file
View File

@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUG
rASj/OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459
UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxN
e1V/UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoN
obv3HEuqgZ3s6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKt
cqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9wIDAQABAoIBAG/YV30PJTrcPJl9
Xaaj3KBJRoW3M8//sats5wl0KWwT0mQVHXWb3IUUh+aUkijxB5YG9wkhiHaS6rAQ
r9Av15gjPVYjfLqrGIAzvbgiyAyuaZVrbW5KgPxLn71tN0fVICClpji91uIOLfgt
pgW/GgcmhhlTYy55W+otfOrgbDxpL2nix20HEUCgL4TlE3jYsoogm1BicApuGrzI
ma7M9a/NWNWeYs6NBEcXWcpsTNxUWfb40wfug7Yrb01152gJtSU7ukyKY9/Ltppz
S5BjG35TqYHpTDSgWVcUpn6GyGxTfAh1XNVKiythJbGfUQkADYJDB5TJDjI/l09M
B2t1QVECgYEA8mgriveKJAp1S7SHqirQAfZafxVuAK/A2QBYPsAUhikfBOvN0HtZ
jgurPXSJSdgR8KbWV7ZjdJM/eOivIb/XiuAaUdIOXbLRet7t9a/NJtmX9iybhoa9
VOJFMBq/rbnbbte2kq0+FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0CgYEAzbbT
v5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1Z
GUC2wyH8mstO5tV34Eug3fnNrbnxFUEE/ZB/njs/rtZnwz57AoUXOXVnd194seIZ
F9PjdzZcuwXwXbrZ2RSVW8if/ZH5OVYEM1EsA9MCgYEA1BaIYmIKn1X3InGlcSFc
NRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln/5dqLtZkx5VM/UE/GE7yzc6BZOwBxtO
ftdsr8HVh+14ksSR9rAGEsO2zVBiEuW4qZf/aQM+ScWfU++wcczZ0dT+Ou8P87Bk
9K9fjcn0PeaLoz3WTPepzNECgYEAkYw2u4/UmWvcXVOeV/VKJ5aQZkJ6/sxTpodR
BMPyQmkMHKcW4eKU1mcJju/deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4
kPNI6Aphn3GBjunJHNpPuU6w+wvomGsxd+NqQDGNYKHuFFMcyXO/zWXglQdP/1o1
tJ1M+BMCgYEAj94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA
4snTtAS/B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3
IqLocIFt5Vbsg/PWYpFSR7re6FRbF9EYOM7F2+HRv1idxKCWoyQfBqk=
-----END RSA PRIVATE KEY-----

21
reg-tests/jwt/rsa1_5.pem Normal file
View File

@ -0,0 +1,21 @@
-----BEGIN CERTIFICATE-----
MIIDizCCAnOgAwIBAgIUWKLX2P4KNDw9kBROSjFXWa/kjtowDQYJKoZIhvcNAQEL
BQAwVTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEOMAwGA1UEAwwFYWEuYmIwHhcNMjUx
MjA0MTYyMTE2WhcNMjYxMjA0MTYyMTE2WjBVMQswCQYDVQQGEwJBVTETMBEGA1UE
CAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRk
MQ4wDAYDVQQDDAVhYS5iYjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMLKiW6KcdfDUBWC2DquGXjEhPPchohGp5FKX9uclfEKq6ClBqwEo/znqk1lI8Dx
ikzdbxhRbh2MYiXgFEB9qkD5oPTqpOvgEwRCp/QjimJ4kOeOfVMQsJmQNkS+rn4a
zcqtCRdxn15IywwX4VPcySRDoZJ7ANC/3Pl+ACN3fzWxXTKsTXtVf1FgJS3GyncD
uogvXwL8FJg0MYMukdAwtQjKLkKXpzEXAC2kh2q9XJuCvrMqDaG79xxLqoGd7Ovo
TjkqmrqgDtBlKMz6WiAOznq7skZebE1k9K/os9/W3ISA9myCrXKi4hXEmDywXMZr
GkAqkZRoZr+rLY0wi1dck/cCAwEAAaNTMFEwHQYDVR0OBBYEFD+wduQlsKCoxfO5
U1W7Urqs+oTbMB8GA1UdIwQYMBaAFD+wduQlsKCoxfO5U1W7Urqs+oTbMA8GA1Ud
EwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAANfh6jY8+3XQ16SH7Pa07MK
ncnQuZqMemYUQzieBL15zftdpd0vYjOfaN5UAQ7ODVAb/iTF4nnADl0VwOocqEiR
vfaqwJTmKiNDjyIp1SJjhkRcYu3hmDXTZOzhuFxoZALe7OzWFgSjf3fX2IOOBfH+
HBqviTuMi53oURWv/ISPXk+Dr7LaCmm1rEjRq8PINJ2Ni6cN90UvHOrHdl+ty2o/
C3cQWIZrsNM6agUfiNiPCWz6x+Z4t+zP7+EorCM7CKKLGnycPUJE2I6H8bJmIHHS
ITNmUO5juLawQ5h2m5Wu/BCY3rlLU9SLrmWAAHm6lFJb0XzFgqhiCz7lxYofj8c=
-----END CERTIFICATE-----

View File

@ -0,0 +1,28 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUG
rASj/OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459
UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxN
e1V/UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoN
obv3HEuqgZ3s6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKt
cqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9wIDAQABAoIBAG/YV30PJTrcPJl9
Xaaj3KBJRoW3M8//sats5wl0KWwT0mQVHXWb3IUUh+aUkijxB5YG9wkhiHaS6rAQ
r9Av15gjPVYjfLqrGIAzvbgiyAyuaZVrbW5KgPxLn71tN0fVICClpji91uIOLfgt
pgW/GgcmhhlTYy55W+otfOrgbDxpL2nix20HEUCgL4TlE3jYsoogm1BicApuGrzI
ma7M9a/NWNWeYs6NBEcXWcpsTNxUWfb40wfug7Yrb01152gJtSU7ukyKY9/Ltppz
S5BjG35TqYHpTDSgWVcUpn6GyGxTfAh1XNVKiythJbGfUQkADYJDB5TJDjI/l09M
B2t1QVECgYEA8mgriveKJAp1S7SHqirQAfZafxVuAK/A2QBYPsAUhikfBOvN0HtZ
jgurPXSJSdgR8KbWV7ZjdJM/eOivIb/XiuAaUdIOXbLRet7t9a/NJtmX9iybhoa9
VOJFMBq/rbnbbte2kq0+FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0CgYEAzbbT
v5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1Z
GUC2wyH8mstO5tV34Eug3fnNrbnxFUEE/ZB/njs/rtZnwz57AoUXOXVnd194seIZ
F9PjdzZcuwXwXbrZ2RSVW8if/ZH5OVYEM1EsA9MCgYEA1BaIYmIKn1X3InGlcSFc
NRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln/5dqLtZkx5VM/UE/GE7yzc6BZOwBxtO
ftdsr8HVh+14ksSR9rAGEsO2zVBiEuW4qZf/aQM+ScWfU++wcczZ0dT+Ou8P87Bk
9K9fjcn0PeaLoz3WTPepzNECgYEAkYw2u4/UmWvcXVOeV/VKJ5aQZkJ6/sxTpodR
BMPyQmkMHKcW4eKU1mcJju/deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4
kPNI6Aphn3GBjunJHNpPuU6w+wvomGsxd+NqQDGNYKHuFFMcyXO/zWXglQdP/1o1
tJ1M+BMCgYEAj94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA
4snTtAS/B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3
IqLocIFt5Vbsg/PWYpFSR7re6FRbF9EYOM7F2+HRv1idxKCWoyQfBqk=
-----END RSA PRIVATE KEY-----

View File

@ -0,0 +1,22 @@
-----BEGIN CERTIFICATE-----
MIIDjTCCAnWgAwIBAgIUHGhD07tC9adNLCkSBNrfrhFUX9IwDQYJKoZIhvcNAQEL
BQAwVTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEOMAwGA1UEAwwFYWEuYmIwIBcNMjUx
MjA1MTMxOTQ0WhgPMjA1MzA0MjIxMzE5NDRaMFUxCzAJBgNVBAYTAkFVMRMwEQYD
VQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBM
dGQxDjAMBgNVBAMMBWFhLmJiMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAwsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj/OeqTWUj
wPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6+ATBEKn9COKYniQ5459UxCwmZA2RL6u
fhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L/c+X4AI3d/NbFdMqxNe1V/UWAlLcbK
dwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K+syoNobv3HEuqgZ3s
6+hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r+iz39bchID2bIKtcqLiFcSYPLBc
xmsaQCqRlGhmv6stjTCLV1yT9wIDAQABo1MwUTAdBgNVHQ4EFgQUP7B25CWwoKjF
87lTVbtSuqz6hNswHwYDVR0jBBgwFoAUP7B25CWwoKjF87lTVbtSuqz6hNswDwYD
VR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEArDl4gSwqpriAFjWcAtWE
sTLTxNgbnkARDeyhQ1dj6rj9xCccBU6WN07r639c9S0lsMb+jeQU9EJFoVtX91jM
fymumOWMDY/CYm41PkHqcF6hEup5dfAeDnN/OoDjXwgTU74Y3lF/sldeS06KorCp
O9ROyq3mM9n4EtFAAEEN2Esyy1d1CJiMYKHdYRKycMwgcu1pm9n1up4ivdgLY+BH
XhnJPuKmmU3FauYlXzfcijUPAAuJdm3PZ+i4SNGsTa49tXOkHMED31EOjaAEzuX0
rWij715QkL/RIp8lPxeAvHqxavQIDtfjojFD21Cx+jIGuNcfrGNkzNjfS7AF+1+W
jA==
-----END CERTIFICATE-----

View File

@ -0,0 +1,58 @@
varnishtest "Ensure that proxies automatic numbering remains consistent across versions"
feature ignore_unknown_macro
# No ID explicitly set. First automatically assigned value must be set to '2'.
# Value '1' is skipped due to an historical bug.
haproxy h1 -conf {
defaults
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
listen fe1
bind "fd@${fe1}"
listen fe2
bind "fd@${fe2}"
} -start
haproxy h1 -cli {
send "show stat 1 -1 -1"
expect !~ "fe[12],"
send "show stat 2 -1 -1"
expect ~ "fe1,"
send "show stat 3 -1 -1"
expect ~ "fe2,"
}
# Explicitly uses ID 1 and 2. First automatically assigned value must be
# set to '3'.
haproxy h2 -conf {
defaults
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
listen fe1
bind "fd@${fe1}"
listen fe2
id 1 # 1 set as automatic value
bind "fd@${fe1}"
listen fe3
id 2
bind "fd@${fe3}"
} -start
haproxy h2 -cli {
send "show stat 1 -1 -1"
expect ~ "fe2,"
send "show stat 2 -1 -1"
expect ~ "fe3,"
send "show stat 3 -1 -1"
expect ~ "fe1,"
}
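As a rough standalone illustration of the rule this test pins down (hypothetical helper, not the real proxy_get_next_id()):

/* Automatic proxy numbering: the first automatic ID is 2 ('1' is
 * skipped for historical compatibility) and any value already taken by
 * an explicit "id" statement is stepped over. */
static unsigned int next_auto_id(unsigned int from, int (*id_taken)(unsigned int))
{
    unsigned int id = from < 2 ? 2 : from;

    while (id_taken(id))
        id++;
    return id;
}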

View File

@ -1,7 +1,10 @@
#REGTEST_TYPE=bug
varnishtest "Test for ECDSA/RSA selection and crt-list filters"
feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(2.8)'"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(QUIC) && !feature(QUIC_OPENSSL_COMPAT) && ssllib_name_startswith(OpenSSL) && openssl_version_atleast(1.1.1) || feature(OPENSSL_AWSLC)'"
feature cmd "$HAPROXY_PROGRAM -cc 'feature(QUIC)'"
# Note that USE_OPENSSL is always set if USE_QUIC is set
# Same conditions as for ssl/tls13_ssl_crt-list_filters.vtc about TLS library versions
feature cmd "$HAPROXY_PROGRAM -cc 'ssllib_name_startswith(OpenSSL) && openssl_version_atleast(1.1.1) || feature(OPENSSL_AWSLC)'"
# This test checks if the multiple certificate types works correctly with the
# SNI, and that the negative filters are correctly excluded
#

View File

@ -118,7 +118,7 @@ client c2 -connect ${h1_clearlst_sock} -repeat 2 {
expect resp.status == 503
} -run
# successul connection to wrong-be1/s3
# successful connection to wrong-be1/s3
client c3 -connect ${h1_clearlst_sock} {
txreq -url "/wrong-be1"
rxresp
@ -133,7 +133,7 @@ client c2 -connect ${h1_clearlst_sock} -repeat 2 {
expect resp.status == 503
} -run
# successul connection to wrong-be1/s6
# successful connection to wrong-be1/s6
client c3 -connect ${h1_clearlst_sock} {
txreq -url "/wrong-be1"
rxresp

View File

@ -118,7 +118,7 @@ client c2 -connect ${h1_clearlst_sock} -repeat 2 {
expect resp.status == 503
} -run
# successul connection to wrong-be1/s3
# successful connection to wrong-be1/s3
client c3 -connect ${h1_clearlst_sock} {
txreq -url "/wrong-be1"
rxresp
@ -133,7 +133,7 @@ client c2 -connect ${h1_clearlst_sock} -repeat 2 {
expect resp.status == 503
} -run
# successul connection to wrong-be1/s6
# successful connection to wrong-be1/s6
client c3 -connect ${h1_clearlst_sock} {
txreq -url "/wrong-be1"
rxresp

View File

@ -165,7 +165,7 @@ client c6 -connect ${h1_clearlst_sock} {
# The curve with the highest priority is X25519 for OpenSSL 1.1.1 and later,
# and P-256 for OpenSSL 1.0.2.
shell {
echo "Q" | openssl s_client -unix "${tmpdir}/ssl.sock" -servername server.ecdsa.com -tls1_2 2>/dev/null | grep -E "Server Temp Key: (ECDH, P-256, 256 bits|ECDH, prime256v1, 256 bits|X25519, 253 bits)"
echo "Q" | openssl s_client -unix "${tmpdir}/ssl.sock" -servername server.ecdsa.com -tls1_2 2>/dev/null | grep -E "(Server|Peer) Temp Key: (ECDH, P-256, 256 bits|ECDH, prime256v1, 256 bits|X25519, 253 bits)"
}
shell {

View File

@ -0,0 +1,172 @@
varnishtest "Ensure switching-rules conformance with backend eligibility"
feature ignore_unknown_macro
haproxy hsrv -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${feS}"
http-request return status 200 hdr "x-be" "li"
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${fe1S}"
use_backend %[req.hdr("x-target")] if { req.hdr("x-dyn") "1" }
use_backend be if { req.hdr("x-target") "be" }
frontend fe_default
bind "fd@${fe2S}"
force-be-switch if { req.hdr("x-force") "1" }
use_backend %[req.hdr("x-target")] if { req.hdr("x-dyn") "1" }
use_backend be_disabled if { req.hdr("x-target") "be_disabled" }
use_backend be
use_backend be2
default_backend be_default
listen li
bind "fd@${liS}"
use_backend %[req.hdr("x-target")] if { req.hdr("x-dyn") "1" }
server srv ${hsrv_feS_sock}
backend be
http-request return status 200 hdr "x-be" %[be_name]
backend be2
http-request return status 200 hdr "x-be" %[be_name]
backend be_disabled
disabled
http-request return status 200 hdr "x-be" %[be_name]
backend be_default
http-request return status 200 hdr "x-be" %[be_name]
} -start
client c1 -connect ${h1_fe1S_sock} {
# Dynamic rule matching
txreq -hdr "x-dyn: 1" -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
# Dynamic rule no match -> 503 expected
txreq -hdr "x-dyn: 1" -hdr "x-target: be_unknown"
rxresp
expect resp.status == 503
} -run
# Connect to frontend with default backend set
client c2 -connect ${h1_fe2S_sock} {
# Dynamic rule matching
txreq -hdr "x-dyn: 1" -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
# Dynamic rule no match -> use default backend
txreq -hdr "x-dyn: 1" -hdr "x-target: be_unknown"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be_default"
# Static rule on disabled backend -> continue to next rule
txreq -hdr "x-target: be_disabled"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
} -run
# Connect to listen proxy type
client c3 -connect ${h1_liS_sock} {
# Dynamic rule matching
txreq -hdr "x-dyn: 1" -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
# Dynamic rule no match -> stay on current proxy instance
txreq -hdr "x-dyn: 1" -hdr "x-target: be_unknown"
rxresp
expect resp.status == 200
expect resp.http.x-be == "li"
} -run
haproxy h1 -cli {
send "unpublish backend be_unknown"
expect ~ "No such backend."
send "unpublish backend be_disabled"
expect ~ "No effect on a disabled backend."
send "unpublish backend be"
expect ~ "Backend unpublished."
}
client c4 -connect ${h1_fe2S_sock} {
# Static rule on unpublished backend -> continue to next rule
txreq
rxresp
expect resp.status == 200
expect resp.http.x-be == "be2"
# Dynamic rule on unpublished backend -> continue to next rule
txreq -hdr "x-dyn: 1" -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be2"
# Static rule matching on unpublished backend with force-be-switch
txreq -hdr "x-force: 1"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
# Dynamic rule matching on unpublished backend with force-be-switch
txreq -hdr "x-dyn: 1" -hdr "x-target: be" -hdr "x-force: 1"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
} -run
haproxy h1 -cli {
send "publish backend be"
expect ~ "Backend published."
}
client c5 -connect ${h1_fe2S_sock} {
# Static rule matching on republished backend
txreq -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
# Dynamic rule matching on republished backend
txreq -hdr "x-dyn: 1" -hdr "x-target: be"
rxresp
expect resp.status == 200
expect resp.http.x-be == "be"
} -run

View File

@ -144,7 +144,7 @@ _findtests() {
regtest_type=default
fi
if ! $(echo $REGTESTS_TYPES | grep -wq $regtest_type) ; then
echo " Skip $i because its type '"$regtest_type"' is excluded"
echo " Skipped $i because its type '"$regtest_type"' is excluded" >> "${TESTDIR}/skipped.log"
skiptest=1
fi
fi
@ -167,7 +167,7 @@ _findtests() {
for excludedtarget in $exclude_targets; do
if [ "$excludedtarget" = "$TARGET" ]; then
echo " Skip $i because haproxy is compiled for the excluded target $TARGET"
echo " Skipped $i because haproxy is compiled for the excluded target $TARGET" >> "${TESTDIR}/skipped.log"
skiptest=1
fi
done
@ -181,7 +181,7 @@ _findtests() {
fi
done
if [ -z $found ]; then
echo " Skip $i because haproxy is not compiled with the required option $requiredoption"
echo " Skipped $i because haproxy is not compiled with the required option $requiredoption" >> "${TESTDIR}/skipped.log"
skiptest=1
fi
done
@ -195,7 +195,7 @@ _findtests() {
fi
done
if [ -z $found ]; then
echo " Skip $i because haproxy is not compiled with the required service $requiredservice"
echo " Skipped $i because haproxy is not compiled with the required service $requiredservice" >> "${TESTDIR}/skipped.log"
skiptest=1
fi
done
@ -255,7 +255,7 @@ _process() {
debug="-v"
;;
--keep-logs)
keep_logs="-L"
keep_logs=1
;;
--type)
REGTESTS_TYPES="$2"
@ -302,7 +302,7 @@ LINEFEED="
jobcount=""
verbose="-q"
debug=""
keep_logs="-l"
keep_logs=0
testlist=""
_process "$@";
@ -376,33 +376,65 @@ if [ -n "$testlist" ]; then
if [ -n "$jobcount" ]; then
jobcount="-j $jobcount"
fi
cmd="$VTEST_PROGRAM -b $((2<<20)) -k -t ${VTEST_TIMEOUT} $keep_logs $verbose $debug $jobcount $vtestparams $testlist"
cmd="$VTEST_PROGRAM -b $((2<<20)) -k -t ${VTEST_TIMEOUT} -L $verbose $debug $jobcount $vtestparams $testlist"
eval $cmd
_vtresult=$?
else
echo "No tests found that meet the required criteria"
fi
if [ $_vtresult -eq 0 ]; then
# all tests were successful, removing tempdir (the last part)
# ignore errors if the directory is not empty or if it does not exist
rmdir "$TESTDIR" 2>/dev/null
fi
if [ -d "${TESTDIR}" ]; then
echo "########################## Gathering results ##########################"
export TESTDIR
find "$TESTDIR" -type d -name "vtc.*" -exec sh -c 'for i; do
if [ ! -e "$i/LOG" ] ; then continue; fi
# look for tests skipped by vtest
find "${TESTDIR}" -type f -name "LOG" | while read logfile; do
REASON=$(grep "SKIPPING test" "$logfile")
if [ -n "$REASON" ]; then
infofile="$(dirname "$logfile")/INFO"
if [ -e "$infofile" ]; then
vtc_path=$(sed 's/^Test case: //' "$infofile" )
if [ -n "$vtc_path" ]; then
echo " Skipped $vtc_path (feature cmd)" >> "${TESTDIR}/skipped.log"
fi
fi
fi
done
cat <<- EOF | tee -a "$TESTDIR/failedtests.log"
if [ $keep_logs -eq 0 ]; then
# remove logs for successful tests
find "$TESTDIR" -type d -name "vtc.*" | while read vtcdir; do
# errors are starting with ----
grep -q "^----" ${vtcdir}/LOG || rm -fr "${vtcdir}"
done
fi
if [ $_vtresult -eq 0 ]; then
# all tests were successful, removing tempdir (the last part)
# ignore errors if the directory is not empty or if it does not exist
rmdir "$TESTDIR" 2>/dev/null
fi
# show failed tests
if [ -d "${TESTDIR}" ]; then
echo "########################## Gathering results ##########################"
export TESTDIR
find "$TESTDIR" -type d -name "vtc.*" -exec sh -c 'for i; do
if [ ! -e "$i/LOG" ] ; then continue; fi
cat <<- EOF | tee -a "$TESTDIR/failedtests.log"
$(echo "###### $(cat "$i/INFO") ######")
$(echo "## test results in: \"$i\"")
$(echo "## test log file: $i/LOG")
$(grep -E -- "^(----|\* diag)" "$i/LOG")
EOF
done' sh {} +
fi
done' sh {} +
fi
echo "########################## Listing skipped tests ####################"
count=0
if [ -e "${TESTDIR}/skipped.log" ]; then
count=$(wc -l < "${TESTDIR}/skipped.log")
cat "${TESTDIR}/skipped.log" | sort -n
fi
echo "Total skipped tests: $count"
fi # if TESTDIR
exit $_vtresult

View File

@ -661,10 +661,6 @@ int assign_server(struct stream *s)
if (!(conn->flags & CO_FL_WAIT_XPRT)) {
srv = tmpsrv;
stream_set_srv_target(s, srv);
if (conn->flags & CO_FL_SESS_IDLE) {
conn->flags &= ~CO_FL_SESS_IDLE;
s->sess->idle_conns--;
}
goto out_ok;
}
}
@ -827,7 +823,7 @@ int assign_server(struct stream *s)
else if (srv != prev_srv) {
if (s->be_tgcounters)
_HA_ATOMIC_INC(&s->be_tgcounters->cum_lbconn);
if (srv->counters.shared.tg[tgid - 1])
if (srv->counters.shared.tg)
_HA_ATOMIC_INC(&srv->counters.shared.tg[tgid - 1]->cum_lbconn);
}
stream_set_srv_target(s, srv);
@ -1002,12 +998,12 @@ int assign_server_and_queue(struct stream *s)
s->txn->flags |= TX_CK_DOWN;
}
s->flags |= SF_REDISP;
if (prev_srv->counters.shared.tg[tgid - 1])
if (prev_srv->counters.shared.tg)
_HA_ATOMIC_INC(&prev_srv->counters.shared.tg[tgid - 1]->redispatches);
if (s->be_tgcounters)
_HA_ATOMIC_INC(&s->be_tgcounters->redispatches);
} else {
if (prev_srv->counters.shared.tg[tgid - 1])
if (prev_srv->counters.shared.tg)
_HA_ATOMIC_INC(&prev_srv->counters.shared.tg[tgid - 1]->retries);
if (s->be_tgcounters)
_HA_ATOMIC_INC(&s->be_tgcounters->retries);
@ -1449,9 +1445,9 @@ static int do_connect_server(struct stream *s, struct connection *conn)
if (unlikely(!conn || !conn->ctrl || !conn->ctrl->connect))
return SF_ERR_INTERNAL;
if (co_data(&s->res))
if (co_data(&s->req))
conn_flags |= CONNECT_HAS_DATA;
if (s->conn_retries == s->max_retries)
if (s->conn_retries == 0)
conn_flags |= CONNECT_CAN_USE_TFO;
if (!conn_ctrl_ready(conn) || !conn_xprt_ready(conn)) {
ret = conn->ctrl->connect(conn, conn_flags);
@ -2042,7 +2038,7 @@ int connect_server(struct stream *s)
struct ist sni = IST_NULL;
/* Set socket SNI */
if (srv->xprt && srv->xprt->get_ssl_sock_ctx && srv->ssl_ctx.sni) {
if (srv->xprt->get_ssl_sock_ctx && srv->ssl_ctx.sni) {
sni_smp = sample_fetch_as_type(s->be, s->sess, s,
SMP_OPT_DIR_REQ | SMP_OPT_FINAL,
srv->ssl_ctx.sni, SMP_T_STR);

View File

@ -2133,11 +2133,11 @@ enum act_return http_action_req_cache_use(struct act_rule *rule, struct proxy *p
return ACT_RET_CONT;
if (px == strm_fe(s)) {
if (px->fe_counters.shared.tg[tgid - 1])
if (px->fe_counters.shared.tg)
_HA_ATOMIC_INC(&px->fe_counters.shared.tg[tgid - 1]->p.http.cache_lookups);
}
else {
if (px->be_counters.shared.tg[tgid - 1])
if (px->be_counters.shared.tg)
_HA_ATOMIC_INC(&px->be_counters.shared.tg[tgid - 1]->p.http.cache_lookups);
}
@ -2226,11 +2226,11 @@ enum act_return http_action_req_cache_use(struct act_rule *rule, struct proxy *p
should_send_notmodified_response(cache, htxbuf(&s->req.buf), res);
if (px == strm_fe(s)) {
if (px->fe_counters.shared.tg[tgid - 1])
if (px->fe_counters.shared.tg)
_HA_ATOMIC_INC(&px->fe_counters.shared.tg[tgid - 1]->p.http.cache_hits);
}
else {
if (px->be_counters.shared.tg[tgid - 1])
if (px->be_counters.shared.tg)
_HA_ATOMIC_INC(&px->be_counters.shared.tg[tgid - 1]->p.http.cache_hits);
}
return ACT_RET_CONT;

View File

@ -48,7 +48,7 @@ static const char *common_kw_list[] = {
"server-state-file-name", "max-session-srv-conns", "capture",
"retries", "http-request", "http-response", "http-after-response",
"http-send-name-header", "block", "redirect", "use_backend",
"use-server", "force-persist", "ignore-persist", "force-persist",
"use-server", "force-persist", "ignore-persist",
"stick-table", "stick", "stats", "option", "default_backend",
"http-reuse", "monitor", "transparent", "maxconn", "backlog",
"fullconn", "dispatch", "balance", "hash-type",
@ -1395,7 +1395,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, file, linenum);
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_req_rules, &rule->list);
}
@ -1428,7 +1430,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRS_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, file, linenum);
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_res_rules, &rule->list);
}
@ -1460,7 +1464,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRS_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRS_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, file, linenum);
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->http_after_res_rules, &rule->list);
}
@ -1522,7 +1528,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, file, linenum);
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
}
else if (strcmp(args[0], "use_backend") == 0) {
struct switching_rule *rule;
@ -1550,7 +1558,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
goto out;
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, file, linenum);
err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
}
else if (*args[2]) {
ha_alert("parsing [%s:%d] : unexpected keyword '%s' after switching rule, only 'if' and 'unless' are allowed.\n",
@ -1611,7 +1621,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
goto out;
}
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, file, linenum);
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
if (!rule)
@ -1664,7 +1676,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
/* note: BE_REQ_CNT is the first one after FE_SET_BCK, which is
* where force-persist is applied.
*/
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_REQ_CNT, file, linenum);
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_REQ_CNT, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
if (!rule) {
@ -1828,9 +1842,11 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
goto out;
}
if (flags & STK_ON_RSP)
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_STO_RUL, file, linenum);
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_STO_RUL, &errmsg);
else
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, file, linenum);
err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
if (!rule) {
@ -1886,7 +1902,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(cond, where, file, linenum);
err_code |= warnif_cond_conflicts(cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
rule = calloc(1, sizeof(*rule));
if (!rule) {
@ -1964,7 +1982,9 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
where |= SMP_VAL_FE_HRQ_HDR;
if (curproxy->cap & PR_CAP_BE)
where |= SMP_VAL_BE_HRQ_HDR;
err_code |= warnif_cond_conflicts(rule->cond, where, file, linenum);
err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (err_code)
ha_warning("parsing [%s:%d] : '%s.\n'", file, linenum, errmsg);
LIST_APPEND(&curproxy->uri_auth->http_req_rules, &rule->list);
} else if (strcmp(args[1], "auth") == 0) {

View File

@ -498,7 +498,7 @@ static int cfg_parse_quic_tune_setting(char **args, int section_type,
char *end_opt;
memprintf(err, "'%s' is deprecated in 3.3 and will be removed in 3.5. "
"Please use the newer keyword syntax 'tune.quic.fe.stream.max-concurrent'.", args[0]);
"Please use the newer keyword syntax 'tune.quic.fe.cc.max-win-size'.", args[0]);
cwnd = parse_window_size(args[0], args[1], &end_opt, err);
if (!cwnd)

View File

@ -146,6 +146,25 @@ static int bind_parse_tcp_md5sig(char **args, int cur_arg, struct proxy *px, str
}
#endif
#ifdef TCP_SAVE_SYN
/* parse the "tcp-ss" bind keyword */
static int bind_parse_tcp_ss(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
{
int tcp_ss = -1;
if (isdigit((unsigned char)*args[cur_arg + 1]))
tcp_ss = atoi(args[cur_arg + 1]);
if (tcp_ss < 0 || tcp_ss > 2) {
memprintf(err, "'%s' : TCP Save SYN option expects an integer argument from 0 to 2", args[cur_arg]);
return ERR_ALERT | ERR_FATAL;
}
conf->tcp_ss = tcp_ss;
return 0;
}
#endif
#ifdef TCP_USER_TIMEOUT
/* parse the "tcp-ut" bind keyword */
static int bind_parse_tcp_ut(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
@ -332,6 +351,9 @@ static struct bind_kw_list bind_kws = { "TCP", { }, {
#if defined(__linux__) && defined(TCP_MD5SIG)
{ "tcp-md5sig", bind_parse_tcp_md5sig, 1 }, /* set TCP MD5 signature password */
#endif
#ifdef TCP_SAVE_SYN
{ "tcp-ss", bind_parse_tcp_ss, 1 }, /* set TCP Save SYN option (0=no, 1=yes, 2=with MAC hdr) */
#endif
#ifdef TCP_USER_TIMEOUT
{ "tcp-ut", bind_parse_tcp_ut, 1 }, /* set User Timeout on listening socket */
#endif
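As a minimal standalone sketch (assuming plain Linux sockets, not HAProxy internals) of what this option maps to: TCP_SAVE_SYN is set on the listening socket and the stored SYN is read back once per accepted connection with TCP_SAVED_SYN; mode 2 (including the MAC header) requires a recent kernel:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int enable_save_syn(int listen_fd, int mode /* 0, 1 or 2 */)
{
    return setsockopt(listen_fd, IPPROTO_TCP, TCP_SAVE_SYN,
                      &mode, sizeof(mode));
}

static int read_saved_syn(int conn_fd, unsigned char *buf, socklen_t *len)
{
    /* only valid right after accept(); the kernel frees the saved SYN
     * once it has been read */
    return getsockopt(conn_fd, IPPROTO_TCP, TCP_SAVED_SYN, buf, len);
}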

View File

@ -133,6 +133,26 @@ struct cfg_kw_list cfg_keywords = {
.list = LIST_HEAD_INIT(cfg_keywords.list)
};
/*
* Shifts <args> one position to the left.
This function carefully preserves the internal allocated structure of
<args>. Instead of deallocating the "shifted off" element, it is turned
into an empty string and moved into the gap that appears after the
shift.
*/
static void
lshift_args(char **args)
{
int i;
char *shifted;
shifted = args[0];
for (i = 0; *args[i + 1]; i++)
args[i] = args[i + 1];
*shifted = '\0';
args[i] = shifted;
}
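A tiny hypothetical harness (not part of the patch, and assuming lshift_args() is visible) showing the effect on the "default option dontlog-normal" case fixed above:

#include <stdio.h>

int main(void)
{
    char a0[] = "default", a1[] = "option", a2[] = "dontlog-normal", end[] = "";
    char *args[] = { a0, a1, a2, end };

    lshift_args(args);
    /* prints: 'option' 'dontlog-normal' '' -- the trailing slot is
     * properly emptied instead of keeping a stale duplicate */
    printf("'%s' '%s' '%s'\n", args[0], args[1], args[2]);
    return 0;
}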
/*
* converts <str> to a list of listeners which are dynamically allocated.
* The format is "{addr|'*'}:port[-end][,{addr|'*'}:port[-end]]*", where :
@ -390,12 +410,14 @@ int alertif_too_many_args(int maxarg, const char *file, int linenum, char **args
}
/* Report it if a request ACL condition uses some keywords that are incompatible
* with the place where the ACL is used. It returns either 0 or ERR_WARN so that
* its result can be or'ed with err_code. Note that <cond> may be NULL and then
* will be ignored.
/* Report it if a request ACL condition uses some keywords that are
* incompatible with the place where the ACL is used. It returns either 0 or
* ERR_WARN so that its result can be or'ed with err_code. Note that <cond> may
* be NULL and then will be ignored. In case of error, <err> is dynamically
* allocated to contains a description.
*/
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, const char *file, int line)
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where,
char **err)
{
const struct acl *acl;
const char *kw;
@ -405,23 +427,27 @@ int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, const
acl = acl_cond_conflicts(cond, where);
if (acl) {
if (acl->name && *acl->name)
ha_warning("parsing [%s:%d] : acl '%s' will never match because it only involves keywords that are incompatible with '%s'\n",
file, line, acl->name, sample_ckp_names(where));
else
ha_warning("parsing [%s:%d] : anonymous acl will never match because it uses keyword '%s' which is incompatible with '%s'\n",
file, line, LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw, sample_ckp_names(where));
if (acl->name && *acl->name) {
memprintf(err, "acl '%s' will never match because it only involves keywords that are incompatible with '%s'",
acl->name, sample_ckp_names(where));
}
else {
memprintf(err, "anonymous acl will never match because it uses keyword '%s' which is incompatible with '%s'",
LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw, sample_ckp_names(where));
}
return ERR_WARN;
}
if (!acl_cond_kw_conflicts(cond, where, &acl, &kw))
return 0;
if (acl->name && *acl->name)
ha_warning("parsing [%s:%d] : acl '%s' involves keywords '%s' which is incompatible with '%s'\n",
file, line, acl->name, kw, sample_ckp_names(where));
else
ha_warning("parsing [%s:%d] : anonymous acl involves keyword '%s' which is incompatible with '%s'\n",
file, line, kw, sample_ckp_names(where));
if (acl->name && *acl->name) {
memprintf(err, "acl '%s' involves keywords '%s' which is incompatible with '%s'",
acl->name, kw, sample_ckp_names(where));
}
else {
memprintf(err, "anonymous acl involves keyword '%s' which is incompatible with '%s'",
kw, sample_ckp_names(where));
}
return ERR_WARN;
}
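A minimal caller sketch of the new reporting flow (mirroring the call sites patched above): the function only builds the message, and the caller owns printing and freeing it:

char *errmsg = NULL;

err_code |= warnif_cond_conflicts(rule->cond, where, &errmsg);
if (errmsg) {
    ha_warning("parsing [%s:%d] : %s.\n", file, linenum, errmsg);
    ha_free(&errmsg);
}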
@ -1389,153 +1415,194 @@ cfg_parse_users(const char *file, int linenum, char **args, int kwm)
newul->next = userlist;
userlist = newul;
} else if (strcmp(args[0], "group") == 0) { /* new group */
int cur_arg;
const char *err;
struct auth_groups *ag;
} else {
const struct cfg_kw_list *kwl;
char *errmsg = NULL;
int index;
if (!*args[1]) {
ha_alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
err = invalid_char(args[1]);
if (err) {
ha_alert("parsing [%s:%d]: character '%c' is not permitted in '%s' name '%s'.\n",
file, linenum, *err, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
if (!userlist)
goto out;
for (ag = userlist->groups; ag; ag = ag->next)
if (strcmp(ag->name, args[1]) == 0) {
ha_warning("parsing [%s:%d]: ignoring duplicated group '%s' in userlist '%s'.\n",
file, linenum, args[1], userlist->name);
err_code |= ERR_ALERT;
goto out;
list_for_each_entry(kwl, &cfg_keywords.list, list) {
for (index = 0; kwl->kw[index].kw; index++) {
if ((kwl->kw[index].section & CFG_USERLIST) &&
(strcmp(kwl->kw[index].kw, args[0]) == 0)) {
err_code |= kwl->kw[index].parse(args, CFG_USERLIST, NULL, NULL, file, linenum, &errmsg);
if (errmsg) {
ha_alert("parsing [%s:%d] : %s\n", file, linenum, errmsg);
ha_free(&errmsg);
}
goto out;
}
}
}
ag = calloc(1, sizeof(*ag));
if (!ag) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
ha_alert("parsing [%s:%d]: unknown keyword '%s' in '%s' section\n", file, linenum, args[0], "userlist");
err_code |= ERR_ALERT | ERR_FATAL;
}
out:
return err_code;
}
int cfg_parse_users_group(char **args, int section_type, struct proxy *curproxy, const struct proxy *defproxy, const char *file, int linenum, char **err)
{
int cur_arg;
const char *err_str;
struct auth_groups *ag;
int err_code = 0;
if (!*args[1]) {
ha_alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
err_str = invalid_char(args[1]);
if (err_str) {
ha_alert("parsing [%s:%d]: character '%c' is not permitted in '%s' name '%s'.\n",
file, linenum, *err_str, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
if (!userlist)
goto out;
for (ag = userlist->groups; ag; ag = ag->next)
if (strcmp(ag->name, args[1]) == 0) {
ha_warning("parsing [%s:%d]: ignoring duplicated group '%s' in userlist '%s'.\n",
file, linenum, args[1], userlist->name);
err_code |= ERR_ALERT;
goto out;
}
ag->name = strdup(args[1]);
if (!ag->name) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
free(ag);
goto out;
}
ag = calloc(1, sizeof(*ag));
if (!ag) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
cur_arg = 2;
ag->name = strdup(args[1]);
if (!ag->name) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
free(ag);
goto out;
}
while (*args[cur_arg]) {
if (strcmp(args[cur_arg], "users") == 0) {
if (ag->groupusers) {
ha_alert("parsing [%s:%d]: 'users' option already defined in '%s' name '%s'.\n",
file, linenum, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
free(ag->groupusers);
free(ag->name);
free(ag);
goto out;
}
ag->groupusers = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else {
ha_alert("parsing [%s:%d]: '%s' only supports 'users' option.\n",
file, linenum, args[0]);
cur_arg = 2;
while (*args[cur_arg]) {
if (strcmp(args[cur_arg], "users") == 0) {
if (ag->groupusers) {
ha_alert("parsing [%s:%d]: 'users' option already defined in '%s' name '%s'.\n",
file, linenum, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
free(ag->groupusers);
free(ag->name);
free(ag);
goto out;
}
ag->groupusers = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else {
ha_alert("parsing [%s:%d]: '%s' only supports 'users' option.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
free(ag->groupusers);
free(ag->name);
free(ag);
goto out;
}
}
ag->next = userlist->groups;
userlist->groups = ag;
out:
return err_code;
}
int cfg_parse_users_user(char **args, int section_type, struct proxy *curproxy, const struct proxy *defproxy, const char *file, int linenum, char **err)
{
struct auth_users *newuser;
int cur_arg;
int err_code = 0;
if (!*args[1]) {
ha_alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
if (!userlist)
goto out;
for (newuser = userlist->users; newuser; newuser = newuser->next)
if (strcmp(newuser->user, args[1]) == 0) {
ha_warning("parsing [%s:%d]: ignoring duplicated user '%s' in userlist '%s'.\n",
file, linenum, args[1], userlist->name);
err_code |= ERR_ALERT;
goto out;
}
ag->next = userlist->groups;
userlist->groups = ag;
newuser = calloc(1, sizeof(*newuser));
if (!newuser) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
} else if (strcmp(args[0], "user") == 0) { /* new user */
struct auth_users *newuser;
int cur_arg;
newuser->user = strdup(args[1]);
if (!*args[1]) {
ha_alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
newuser->next = userlist->users;
userlist->users = newuser;
cur_arg = 2;
while (*args[cur_arg]) {
if (strcmp(args[cur_arg], "password") == 0) {
#ifdef USE_LIBCRYPT
struct timeval tv_before, tv_after;
ulong ms_elapsed;
gettimeofday(&tv_before, NULL);
if (!crypt("", args[cur_arg + 1])) {
ha_alert("parsing [%s:%d]: the encrypted password used for user '%s' is not supported by crypt(3).\n",
file, linenum, newuser->user);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
gettimeofday(&tv_after, NULL);
ms_elapsed = tv_ms_elapsed(&tv_before, &tv_after);
if (ms_elapsed >= 10) {
ha_warning("parsing [%s:%d]: the hash algorithm used for this password takes %lu milliseconds to verify, which can have devastating performance and stability impacts. Please hash this password using a lighter algorithm (one that is compatible with web usage).\n", file, linenum, ms_elapsed);
err_code |= ERR_WARN;
}
#else
ha_warning("parsing [%s:%d]: no crypt(3) support compiled, encrypted passwords will not work.\n",
file, linenum);
err_code |= ERR_ALERT;
#endif
newuser->pass = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else if (strcmp(args[cur_arg], "insecure-password") == 0) {
newuser->pass = strdup(args[cur_arg + 1]);
newuser->flags |= AU_O_INSECURE;
cur_arg += 2;
continue;
} else if (strcmp(args[cur_arg], "groups") == 0) {
newuser->u.groups_names = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else {
ha_alert("parsing [%s:%d]: '%s' only supports 'password', 'insecure-password' and 'groups' options.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
if (!userlist)
goto out;
for (newuser = userlist->users; newuser; newuser = newuser->next)
if (strcmp(newuser->user, args[1]) == 0) {
ha_warning("parsing [%s:%d]: ignoring duplicated user '%s' in userlist '%s'.\n",
file, linenum, args[1], userlist->name);
err_code |= ERR_ALERT;
goto out;
}
newuser = calloc(1, sizeof(*newuser));
if (!newuser) {
ha_alert("parsing [%s:%d]: out of memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
newuser->user = strdup(args[1]);
newuser->next = userlist->users;
userlist->users = newuser;
cur_arg = 2;
while (*args[cur_arg]) {
if (strcmp(args[cur_arg], "password") == 0) {
#ifdef USE_LIBCRYPT
if (!crypt("", args[cur_arg + 1])) {
ha_alert("parsing [%s:%d]: the encrypted password used for user '%s' is not supported by crypt(3).\n",
file, linenum, newuser->user);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
#else
ha_warning("parsing [%s:%d]: no crypt(3) support compiled, encrypted passwords will not work.\n",
file, linenum);
err_code |= ERR_ALERT;
#endif
newuser->pass = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else if (strcmp(args[cur_arg], "insecure-password") == 0) {
newuser->pass = strdup(args[cur_arg + 1]);
newuser->flags |= AU_O_INSECURE;
cur_arg += 2;
continue;
} else if (strcmp(args[cur_arg], "groups") == 0) {
newuser->u.groups_names = strdup(args[cur_arg + 1]);
cur_arg += 2;
continue;
} else {
ha_alert("parsing [%s:%d]: '%s' only supports 'password', 'insecure-password' and 'groups' options.\n",
file, linenum, args[0]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
}
} else {
ha_alert("parsing [%s:%d]: unknown keyword '%s' in '%s' section\n", file, linenum, args[0], "users");
err_code |= ERR_ALERT | ERR_FATAL;
}
out:
@ -2627,19 +2694,12 @@ next_line:
/* check for keyword modifiers "no" and "default" */
if (strcmp(args[0], "no") == 0) {
char *tmp;
kwm = KWM_NO;
tmp = args[0];
for (arg=0; *args[arg+1]; arg++)
args[arg] = args[arg+1]; // shift args after inversion
*tmp = '\0'; // fix the next arg to \0
args[arg] = tmp;
lshift_args(args);
}
else if (strcmp(args[0], "default") == 0) {
kwm = KWM_DEF;
for (arg=0; *args[arg+1]; arg++)
args[arg] = args[arg+1]; // shift args after inversion
lshift_args(args);
}
if (kwm != KWM_STD && strcmp(args[0], "option") != 0 &&
@ -2786,7 +2846,8 @@ int check_config_validity()
struct server *newsrv = NULL;
struct mt_list back;
int err_code = 0;
unsigned int next_pxid = 1;
/* Value forced to skip '1' due to an historical bug, see below for more details. */
unsigned int next_pxid = 2;
struct bind_conf *bind_conf;
char *err;
struct cfg_postparser *postparser;
@ -2874,16 +2935,19 @@ init_proxies_list_stage1:
unsigned int next_id;
proxy_init_per_thr(curproxy);
/* Assign automatic UUID if unset except for internal proxies.
*
* WARNING proxy UUID initialization is buggy as value '1' is
* skipped if not explicitly used. This is an historical bug
* and should not be corrected to prevent breakage on future
* versions.
*/
if (!(curproxy->cap & PR_CAP_INT) && curproxy->uuid < 0) {
/* proxy ID not set, use automatic numbering with first
* spare entry starting with next_pxid. We don't assign
* numbers for internal proxies as they may depend on
* build or config options and we don't want them to
* possibly reuse existing IDs.
*/
next_pxid = proxy_get_next_id(next_pxid);
curproxy->uuid = next_pxid;
proxy_index_id(curproxy);
next_pxid++;
}
if (curproxy->mode == PR_MODE_HTTP && global.tune.bufsize >= (256 << 20) && ONLY_ONCE()) {
@ -2892,17 +2956,6 @@ init_proxies_list_stage1:
cfgerr++;
}
/* next IDs are shifted even if the proxy is disabled, this
* guarantees that a proxy that is temporarily disabled in the
* configuration doesn't cause a renumbering. Internal proxies
* that are not assigned a static ID must never shift the IDs
* either since they may appear in any order (Lua, logs, etc).
* The GLOBAL proxy that carries the stats socket has its ID
* forced to zero.
*/
if (curproxy->uuid >= 0)
next_pxid++;
if (curproxy->flags & PR_FL_DISABLED) {
/* ensure we don't keep listeners uselessly bound. We
* can't disable their listeners yet (fdtab not
@ -3143,13 +3196,6 @@ init_proxies_list_stage1:
proxy_type_str(curproxy), curproxy->id);
cfgerr++;
}
#ifdef WE_DONT_SUPPORT_SERVERLESS_LISTENERS
else if (curproxy->srv == NULL) {
ha_alert("%s '%s' needs at least 1 server in balance mode.\n",
proxy_type_str(curproxy), curproxy->id);
cfgerr++;
}
#endif
else if (curproxy->options & PR_O_DISPATCH) {
ha_warning("dispatch address of %s '%s' will be ignored in balance mode.\n",
proxy_type_str(curproxy), curproxy->id);
@ -4371,16 +4417,21 @@ init_proxies_list_stage2:
bind_conf->xprt->destroy_bind_conf(bind_conf);
}
/* create the task associated with the proxy */
curproxy->task = task_new_anywhere();
if (curproxy->task) {
curproxy->task->context = curproxy;
curproxy->task->process = manage_proxy;
curproxy->flags |= PR_FL_READY;
} else {
ha_alert("Proxy '%s': no more memory when trying to allocate the management task\n",
curproxy->id);
cfgerr++;
/* Create the task associated with the proxy. Only necessary
* for frontend or if a stick-table is defined.
*/
if ((curproxy->cap & PR_CAP_FE) || (curproxy->table && curproxy->table->current)) {
curproxy->task = task_new_anywhere();
if (curproxy->task) {
curproxy->task->context = curproxy;
curproxy->task->process = manage_proxy;
curproxy->flags |= PR_FL_READY;
}
else {
ha_alert("Proxy '%s': no more memory when trying to allocate the management task\n",
curproxy->id);
cfgerr++;
}
}
}
@ -4956,6 +5007,8 @@ REGISTER_CONFIG_SECTION("traces", cfg_parse_traces, NULL);
static struct cfg_kw_list cfg_kws = {{ },{
{ CFG_GLOBAL, "default-path", cfg_parse_global_def_path },
{ CFG_USERLIST, "group", cfg_parse_users_group },
{ CFG_USERLIST, "user", cfg_parse_users_user },
{ /* END */ }
}};

View File

@ -513,7 +513,7 @@ void set_server_check_status(struct check *check, short status, const char *desc
if ((!(check->state & CHK_ST_AGENT) ||
(check->status >= HCHK_STATUS_L57DATA)) &&
(check->health > 0)) {
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->failed_checks);
report = 1;
check->health--;
@ -741,7 +741,7 @@ void __health_adjust(struct server *s, short status)
HA_SPIN_UNLOCK(SERVER_LOCK, &s->lock);
HA_ATOMIC_STORE(&s->consecutive_errors, 0);
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->failed_hana);
if (s->check.fastinter) {

View File

@ -3372,8 +3372,14 @@ read_again:
target_pid = s->pcli_next_pid;
/* we can connect now */
s->target = pcli_pid_to_server(target_pid);
if (objt_server(s->target))
s->sv_tgcounters = __objt_server(s->target)->counters.shared.tg[tgid - 1];
if (objt_server(s->target)) {
struct server *srv = __objt_server(s->target);
if (srv->counters.shared.tg)
s->sv_tgcounters = srv->counters.shared.tg[tgid - 1];
else
s->sv_tgcounters = NULL;
}
if (!s->target)
goto server_disconnect;
@ -3894,7 +3900,7 @@ int mworker_cli_global_proxy_new_listener(struct mworker_proc *proc)
list_for_each_entry(l, &bind_conf->listeners, by_bind) {
HA_ATOMIC_INC(&unstoppable_jobs);
/* it's a sockpair but we don't want to keep the fd in the master */
l->rx.flags &= ~RX_F_INHERITED;
l->rx.flags &= ~RX_F_INHERITED_FD;
global.maxsock++; /* for the listening socket */
}

View File

@ -37,7 +37,7 @@ static void _counters_shared_drop(void *counters)
if (!shared)
return;
while (it < global.nbtgroups && shared->tg[it]) {
while (it < global.nbtgroups && shared->tg && shared->tg[it]) {
if (shared->flags & COUNTERS_SHARED_F_LOCAL) {
/* memory was allocated using calloc(), simply free it */
free(shared->tg[it]);
@ -53,6 +53,7 @@ static void _counters_shared_drop(void *counters)
}
it += 1;
}
free(shared->tg);
}
/* release a shared fe counters struct */
@ -86,6 +87,14 @@ static int _counters_shared_prepare(struct counters_shared *shared,
if (!guid->key || !shm_stats_file_hdr)
shared->flags |= COUNTERS_SHARED_F_LOCAL;
if (!shared->tg) {
shared->tg = calloc(global.nbtgroups, sizeof(*shared->tg));
if (!shared->tg) {
memprintf(errmsg, "couldn't allocate memory for shared counters");
return 0;
}
}
while (it < global.nbtgroups) {
if (shared->flags & COUNTERS_SHARED_F_LOCAL) {
size_t tg_size;
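The resulting access pattern, as a minimal sketch with hypothetical names: <tg> is now itself lazily allocated, so both the array pointer and the per-group slot must be checked before dereferencing:

struct shared_counters {
    struct tg_counters **tg; /* nbtgroups entries, may be NULL */
};

static void inc_failed_checks(struct shared_counters *sh, int tgid)
{
    if (sh->tg && sh->tg[tgid - 1])
        _HA_ATOMIC_INC(&sh->tg[tgid - 1]->failed_checks);
}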

View File

@ -17,6 +17,20 @@
#define CPU_SET_FL_NONE 0x0000
#define CPU_SET_FL_DO_RESET 0x0001
/* cpu_policy_conf flags */
#define CPU_POLICY_ONE_THREAD_PER_CORE (1 << 0)
/* cpu_policy_conf affinities */
#define CPU_AFFINITY_PER_GROUP (1 << 0)
#define CPU_AFFINITY_PER_CORE (1 << 1)
#define CPU_AFFINITY_PER_THREAD (1 << 2)
#define CPU_AFFINITY_PER_CCX (1 << 3)
/*
* Specific to the per-group affinity
*/
#define CPU_AFFINITY_PER_GROUP_LOOSE (1 << 8)
/* CPU topology information, ha_cpuset_size() entries, allocated at boot */
int cpu_topo_maxcpus = -1; // max number of CPUs supported by OS/haproxy
int cpu_topo_lastcpu = -1; // last supposed online CPU (no need to look beyond)
@ -48,7 +62,39 @@ struct cpu_set_cfg {
} cpu_set_cfg;
/* CPU policy choice */
static int cpu_policy = 1; // "first-usable-node"
struct {
int cpu_policy;
int flags;
int affinity;
} cpu_policy_conf = {
1, /* "performance" policy */
0, /* Default flags */
0, /* Default affinity */
};
struct cpu_affinity_optional {
char *name;
int affinity_flag;
};
static struct cpu_affinity_optional per_group_optional[] = {
{"loose", CPU_AFFINITY_PER_GROUP_LOOSE},
{"auto", 0},
{NULL, 0}
};
static struct cpu_affinity {
char *name;
int affinity_flags;
struct cpu_affinity_optional *optional;
} ha_cpu_affinity[] = {
{"per-core", CPU_AFFINITY_PER_CORE, NULL},
{"per-group", CPU_AFFINITY_PER_GROUP, per_group_optional},
{"per-thread", CPU_AFFINITY_PER_THREAD, NULL},
{"per-ccx", CPU_AFFINITY_PER_CCX, NULL},
{"auto", 0, NULL},
{NULL, 0, NULL}
};
/* list of CPU policies for "cpu-policy". The default one is the first one. */
static int cpu_policy_first_usable_node(int policy, int tmin, int tmax, int gmin, int gmax, char **err);
@ -1016,6 +1062,28 @@ void cpu_refine_cpusets(void)
}
}
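/* Return the index of the first CPU at or after <start> whose thread
 * set id matches <tsid>, or -1 if none is found. */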
static int find_next_cpu_tsid(int start, int tsid)
{
int cpu;
for (cpu = start; cpu <= cpu_topo_lastcpu; cpu++)
if (ha_cpu_topo[cpu].ts_id == tsid)
return cpu;
return -1;
}
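/* Same lookup by CCX: return the first CPU at or after <start> whose L3
 * cache id matches <l3id>, or -1 if none is found. */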
static int find_next_cpu_ccx(int start, int l3id)
{
int cpu;
for (cpu = start; cpu <= cpu_topo_lastcpu; cpu++)
if (ha_cpu_topo[cpu].ca_id[3] == l3id)
return cpu;
return -1;
}
/* the "first-usable-node" cpu-policy: historical one
* - does nothing if numa_cpu_mapping is not set
* - does nothing if nbthread is set
@ -1026,11 +1094,14 @@ void cpu_refine_cpusets(void)
static int cpu_policy_first_usable_node(int policy, int tmin, int tmax, int gmin, int gmax, char **err)
{
struct hap_cpuset node_cpu_set;
struct hap_cpuset touse_tsid;
struct hap_cpuset touse_ccx;
int first_node_id = -1;
int second_node_id = -1;
int cpu;
int cpu_count;
int grp, thr;
int thr_count = 0;
if (!global.numa_cpu_mapping)
return 0;
@ -1065,12 +1136,125 @@ static int cpu_policy_first_usable_node(int policy, int tmin, int tmax, int gmin
* and make a CPU set of them.
*/
ha_cpuset_zero(&node_cpu_set);
ha_cpuset_zero(&touse_tsid);
ha_cpuset_zero(&touse_ccx);
for (cpu = cpu_count = 0; cpu <= cpu_topo_lastcpu; cpu++) {
if (ha_cpu_topo[cpu].no_id != first_node_id)
ha_cpu_topo[cpu].st |= HA_CPU_F_IGNORED;
else if (!(ha_cpu_topo[cpu].st & HA_CPU_F_EXCL_MASK)) {
ha_cpuset_set(&node_cpu_set, ha_cpu_topo[cpu].idx);
cpu_count++;
ha_cpuset_set(&touse_ccx, ha_cpu_topo[cpu].ca_id[3]);
if (!(cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE) || !ha_cpuset_isset(&touse_tsid, ha_cpu_topo[cpu].ts_id)) {
ha_cpuset_set(&touse_tsid, ha_cpu_topo[cpu].ts_id);
thr_count++;
}
}
if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_CORE) {
struct hap_cpuset thrset;
int tsid;
int same_core = 0;
for (thr = 0; thr < thr_count; thr++) {
if (same_core == 0) {
int corenb = 0;
ha_cpuset_zero(&thrset);
tsid = ha_cpuset_ffs(&touse_tsid) - 1;
if (tsid != -1) {
int next_try = 0;
int got_cpu;
tsid--;
while ((got_cpu = find_next_cpu_tsid(next_try, tsid)) != -1) {
next_try = got_cpu + 1;
if (!(ha_cpu_topo[got_cpu].st & HA_CPU_F_EXCL_MASK)) {
corenb++;
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
}
}
ha_cpuset_clr(&touse_tsid, tsid);
}
if (cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
same_core = 1;
else
same_core = corenb;
}
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[0].thread[thr], &thrset);
same_core--;
}
} else if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_THREAD) {
struct hap_cpuset thrset;
for (thr = 0; thr < thr_count; thr++) {
ha_cpuset_zero(&thrset);
/*
* if we're binding per-thread, and we have
* a one thread per core policy, then bind each
* thread on a different core, leaving the
* other hardware threads from the core unused.
*/
if (cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE) {
int got_cpu;
int next_cpu = 0;
int tsid;
tsid = ha_cpuset_ffs(&touse_tsid) - 1;
got_cpu = find_next_cpu_tsid(0, tsid);
while ((got_cpu = find_next_cpu_tsid(next_cpu, tsid)) != -1) {
if (!(ha_cpu_topo[got_cpu].st & HA_CPU_F_EXCL_MASK))
break;
next_cpu = got_cpu + 1;
}
if (got_cpu != -1) {
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
}
ha_cpuset_clr(&touse_tsid, tsid);
} else {
int tid = ha_cpuset_ffs(&node_cpu_set) - 1;
if (tid != -1) {
ha_cpuset_set(&thrset, tid + 1);
ha_cpuset_clr(&node_cpu_set, tid + 1);
}
}
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[0].thread[thr], &thrset);
}
} else if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_CCX) {
struct hap_cpuset thrset;
int same_ccx = 0;
for (thr = 0; thr < thr_count; thr++) {
int got_cpu;
int next_try = 0;
if (same_ccx == 0) {
int l3id = ha_cpuset_ffs(&touse_ccx) - 1;
ha_cpuset_zero(&thrset);
while ((got_cpu = find_next_cpu_ccx(next_try, l3id)) != -1) {
next_try = got_cpu + 1;
same_ccx++;
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
}
ha_cpuset_clr(&touse_ccx, l3id);
}
BUG_ON(same_ccx == 0);
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[0].thread[thr], &thrset);
same_ccx--;
}
} else {
/* assign all threads of all thread groups to this node */
for (grp = 0; grp < MAX_TGROUPS; grp++)
for (thr = 0; thr < MAX_THREADS_PER_GROUP; thr++)
ha_cpuset_assign(&cpu_map[grp].thread[thr], &node_cpu_set);
}
}
@ -1079,8 +1263,8 @@ static int cpu_policy_first_usable_node(int policy, int tmin, int tmax, int gmin
for (thr = 0; thr < MAX_THREADS_PER_GROUP; thr++)
ha_cpuset_assign(&cpu_map[grp].thread[thr], &node_cpu_set);
if (tmin <= cpu_count && cpu_count < tmax)
tmax = cpu_count;
if (tmin <= thr_count && thr_count < tmax)
tmax = thr_count;
ha_diag_warning("Multi-socket cpu detected, automatically binding on active CPUs of '%d' (%u active cpu(s))\n", first_node_id, cpu_count);
@ -1090,6 +1274,206 @@ static int cpu_policy_first_usable_node(int policy, int tmin, int tmax, int gmin
return 0;
}
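/* Distribute <thr_count> threads over one or more new thread groups
 * built from <node_cpu_set>, honoring the configured affinity
 * (per-group/per-core/per-thread/per-ccx); <touse_tsid> and <touse_ccx>
 * hold the thread-set and L3 ids still available for binding. */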
static void
cpu_policy_assign_threads(int cpu_count, int thr_count, struct hap_cpuset node_cpu_set, struct hap_cpuset touse_tsid, struct hap_cpuset touse_ccx)
{
struct hap_cpuset thrset;
struct hap_cpuset saved_touse_ccx;
int nb_grp;
int thr_per_grp;
int thr;
int same_core = 0;
int cpu_per_group;
ha_cpuset_zero(&thrset);
ha_cpuset_assign(&saved_touse_ccx, &touse_ccx);
/* check that we're still within limits. If there are too many
* CPUs but enough groups left, we'll try to make more smaller
* groups, of the closest size each.
*/
nb_grp = (thr_count + global.maxthrpertgroup - 1) / global.maxthrpertgroup;
if (nb_grp > MAX_TGROUPS - global.nbtgroups)
nb_grp = MAX_TGROUPS - global.nbtgroups;
cpu_per_group = (cpu_count + nb_grp - 1) / nb_grp;
thr_per_grp = (thr_count + nb_grp - 1) / nb_grp;
if (thr_per_grp > global.maxthrpertgroup)
thr_per_grp = global.maxthrpertgroup;
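/* Worked example (hypothetical numbers): thr_count = 40 with
 * maxthrpertgroup = 16 gives nb_grp = ceil(40/16) = 3 groups and
 * thr_per_grp = ceil(40/3) = 14, i.e. groups of 14+14+12 threads
 * rather than 16+16+8. */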
while (nb_grp && thr_count > 0) {
struct hap_cpuset group_cpuset;
struct hap_cpuset current_tsid;
struct hap_cpuset current_ccx;
ha_cpuset_zero(&group_cpuset);
ha_cpuset_zero(&current_tsid);
ha_cpuset_zero(&current_ccx);
/* create at most thr_per_grp threads */
if (thr_per_grp > thr_count)
thr_per_grp = thr_count;
if (thr_per_grp + global.nbthread > MAX_THREADS)
thr_per_grp = MAX_THREADS - global.nbthread;
if ((cpu_policy_conf.affinity & (CPU_AFFINITY_PER_GROUP | CPU_AFFINITY_PER_GROUP_LOOSE)) == CPU_AFFINITY_PER_GROUP) {
int i = 0;
int next_ccx;
/*
* Decide which CPUs to use for the group.
* Try to allocate them from the same CCX, and then
* the same TSID
*/
while (i < cpu_per_group) {
int next_cpu = 0;
int got_cpu;
next_ccx = ha_cpuset_ffs(&saved_touse_ccx) - 1;
if (next_ccx == -1)
break;
while (i < cpu_per_group && (got_cpu = find_next_cpu_ccx(next_cpu, next_ccx)) != -1) {
int tsid;
int got_cpu_tsid;
int next_cpu_tsid = 0;
next_cpu = got_cpu + 1;
if (!ha_cpuset_isset(&node_cpu_set, ha_cpu_topo[got_cpu].idx))
continue;
tsid = ha_cpu_topo[got_cpu].ts_id;
while (i < cpu_per_group && (got_cpu_tsid = find_next_cpu_tsid(next_cpu_tsid, tsid)) != -1) {
next_cpu_tsid = got_cpu_tsid + 1;
if (!ha_cpuset_isset(&node_cpu_set, ha_cpu_topo[got_cpu_tsid].idx))
continue;
ha_cpuset_set(&group_cpuset, ha_cpu_topo[got_cpu_tsid].idx);
ha_cpuset_clr(&node_cpu_set, ha_cpu_topo[got_cpu_tsid].idx);
ha_cpuset_set(&current_tsid, tsid);
ha_cpuset_set(&current_ccx, next_ccx);
i++;
}
}
/*
* At this point there is nothing left
* for us in that CCX, forget about it.
*/
if (i < cpu_per_group)
ha_cpuset_clr(&saved_touse_ccx, next_ccx);
}
ha_cpuset_assign(&touse_tsid, &current_tsid);
ha_cpuset_assign(&touse_ccx, &current_ccx);
} else {
ha_cpuset_assign(&group_cpuset, &node_cpu_set);
}
/* let's create the new thread group */
ha_tgroup_info[global.nbtgroups].base = global.nbthread;
ha_tgroup_info[global.nbtgroups].count = thr_per_grp;
/* assign to this group the required number of threads */
for (thr = 0; thr < thr_per_grp; thr++) {
ha_thread_info[thr + global.nbthread].tgid = global.nbtgroups + 1;
ha_thread_info[thr + global.nbthread].tg = &ha_tgroup_info[global.nbtgroups];
ha_thread_info[thr + global.nbthread].tg_ctx = &ha_tgroup_ctx[global.nbtgroups];
if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_CORE) {
if (same_core == 0) {
int tsid;
int corenb = 0;
ha_cpuset_zero(&thrset);
/*
* Find the next available core, and assign the thread to it
*/
tsid = ha_cpuset_ffs(&touse_tsid) - 1;
if (tsid != -1) {
int next_try = 0;
int got_cpu;
while ((got_cpu = find_next_cpu_tsid(next_try, tsid)) != -1) {
next_try = got_cpu + 1;
if (!(ha_cpu_topo[got_cpu].st & HA_CPU_F_EXCL_MASK) &&
ha_cpuset_isset(&group_cpuset, ha_cpu_topo[got_cpu].idx)) {
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
corenb++;
}
}
ha_cpuset_clr(&touse_tsid, tsid);
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &thrset);
}
if (cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE)
same_core = 1;
else
same_core = corenb;
}
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &thrset);
same_core--;
} else if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_THREAD) {
ha_cpuset_zero(&thrset);
if (cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE) {
int got_cpu;
int next_cpu = 0;
int tsid;
tsid = ha_cpuset_ffs(&touse_tsid) - 1;
while ((got_cpu = find_next_cpu_tsid(next_cpu, tsid)) != -1) {
if (!(ha_cpu_topo[got_cpu].st & HA_CPU_F_EXCL_MASK) &&
ha_cpuset_isset(&group_cpuset, ha_cpu_topo[got_cpu].idx))
break;
next_cpu = got_cpu + 1;
}
if (got_cpu != -1) {
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
ha_cpuset_clr(&touse_tsid, tsid);
}
} else {
int tid = ha_cpuset_ffs(&group_cpuset) - 1;
if (tid != -1) {
ha_cpuset_set(&thrset, tid);
ha_cpuset_clr(&node_cpu_set, tid);
}
}
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &thrset);
} else if (cpu_policy_conf.affinity & CPU_AFFINITY_PER_CCX) {
while (same_core == 0) {
int l3id = ha_cpuset_ffs(&touse_ccx) - 1;
int got_cpu;
int next_try = 0;
if (l3id == -1)
break;
ha_cpuset_zero(&thrset);
while ((got_cpu = find_next_cpu_ccx(next_try, l3id)) != -1) {
next_try = got_cpu + 1;
if (!(ha_cpu_topo[got_cpu].st & HA_CPU_F_EXCL_MASK) &&
ha_cpuset_isset(&group_cpuset, ha_cpu_topo[got_cpu].idx)) {
same_core++;
ha_cpuset_set(&thrset, ha_cpu_topo[got_cpu].idx);
}
}
ha_cpuset_clr(&touse_ccx, l3id);
}
if (ha_cpuset_ffs(&thrset) != 0)
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &thrset);
same_core--;
} else {
/* map these threads to all the CPUs */
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &group_cpuset);
}
}
thr_count -= thr_per_grp;
global.nbthread += thr_per_grp;
global.nbtgroups++;
if (global.nbtgroups >= MAX_TGROUPS || global.nbthread >= MAX_THREADS)
break;
}
}
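The loops above lean on small topology helpers defined earlier in cpu_topo.c. For readers following the diff out of context, a plausible shape for one of them is sketched below; this is an illustration only and the real implementation may differ (it would typically also skip offline CPUs):

	/* Hypothetical sketch: return the index of the first CPU at or after
	 * <start> whose thread-set ID is <tsid>, or -1 if none is found. It
	 * assumes the global ha_cpu_topo[] array and the cpu_topo_lastcpu
	 * bound used throughout this file.
	 */
	static int find_next_cpu_tsid(int start, int tsid)
	{
		int cpu;

		for (cpu = start; cpu <= cpu_topo_lastcpu; cpu++) {
			if (ha_cpu_topo[cpu].ts_id == tsid)
				return cpu;
		}
		return -1;
	}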
/* the "group-by-cluster" cpu-policy:
* - does nothing if nbthread or thread-groups are set
* - otherwise tries to create one thread-group per cluster, with as many
@ -1102,11 +1486,12 @@ static int cpu_policy_group_by_cluster(int policy, int tmin, int tmax, int gmin,
{
struct hap_cpuset visited_cl_set;
struct hap_cpuset node_cpu_set;
struct hap_cpuset touse_tsid;
struct hap_cpuset touse_ccx;
int cpu, cpu_start;
int cpu_count;
int thr_count;
int cid;
int thr_per_grp, nb_grp;
int thr;
int div;
if (global.nbthread)
@ -1126,7 +1511,9 @@ static int cpu_policy_group_by_cluster(int policy, int tmin, int tmax, int gmin,
while (global.nbtgroups < MAX_TGROUPS && global.nbthread < MAX_THREADS) {
ha_cpuset_zero(&node_cpu_set);
cid = -1; cpu_count = 0;
ha_cpuset_zero(&touse_tsid);
ha_cpuset_zero(&touse_ccx);
cid = -1; cpu_count = 0; thr_count = 0;
for (cpu = cpu_start; cpu <= cpu_topo_lastcpu; cpu++) {
/* skip disabled and already visited CPUs */
@ -1145,7 +1532,13 @@ static int cpu_policy_group_by_cluster(int policy, int tmin, int tmax, int gmin,
/* make a mask of all of this cluster's CPUs */
ha_cpuset_set(&node_cpu_set, ha_cpu_topo[cpu].idx);
ha_cpuset_set(&touse_ccx, ha_cpu_topo[cpu].ca_id[3]);
cpu_count++;
if (!ha_cpuset_isset(&touse_tsid, ha_cpu_topo[cpu].ts_id)) {
thr_count++;
ha_cpuset_set(&touse_tsid, ha_cpu_topo[cpu].ts_id);
} else if (!(cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE))
thr_count++;
}
/* now cid = next cluster_id or -1 if none; cpu_count is the
@ -1157,44 +1550,7 @@ static int cpu_policy_group_by_cluster(int policy, int tmin, int tmax, int gmin,
ha_cpuset_set(&visited_cl_set, cid);
/* check that we're still within limits. If there are too many
* CPUs but enough groups left, we'll try to make more smaller
* groups, of the closest size each.
*/
nb_grp = (cpu_count + MAX_THREADS_PER_GROUP - 1) / MAX_THREADS_PER_GROUP;
if (nb_grp > MAX_TGROUPS - global.nbtgroups)
nb_grp = MAX_TGROUPS - global.nbtgroups;
thr_per_grp = (cpu_count + nb_grp - 1) / nb_grp;
if (thr_per_grp > MAX_THREADS_PER_GROUP)
thr_per_grp = MAX_THREADS_PER_GROUP;
while (nb_grp && cpu_count > 0) {
/* create at most thr_per_grp threads */
if (thr_per_grp > cpu_count)
thr_per_grp = cpu_count;
if (thr_per_grp + global.nbthread > MAX_THREADS)
thr_per_grp = MAX_THREADS - global.nbthread;
/* let's create the new thread group */
ha_tgroup_info[global.nbtgroups].base = global.nbthread;
ha_tgroup_info[global.nbtgroups].count = thr_per_grp;
/* assign to this group the required number of threads */
for (thr = 0; thr < thr_per_grp; thr++) {
ha_thread_info[thr + global.nbthread].tgid = global.nbtgroups + 1;
ha_thread_info[thr + global.nbthread].tg = &ha_tgroup_info[global.nbtgroups];
ha_thread_info[thr + global.nbthread].tg_ctx = &ha_tgroup_ctx[global.nbtgroups];
/* map these threads to all the CPUs */
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &node_cpu_set);
}
cpu_count -= thr_per_grp;
global.nbthread += thr_per_grp;
global.nbtgroups++;
if (global.nbtgroups >= MAX_TGROUPS || global.nbthread >= MAX_THREADS)
break;
}
cpu_policy_assign_threads(cpu_count, thr_count, node_cpu_set, touse_tsid, touse_ccx);
}
if (global.nbthread)
@ -1218,11 +1574,12 @@ static int cpu_policy_group_by_ccx(int policy, int tmin, int tmax, int gmin, int
{
struct hap_cpuset visited_ccx_set;
struct hap_cpuset node_cpu_set;
struct hap_cpuset touse_tsid;
struct hap_cpuset touse_ccx; /* List of CCXs we'll currently use */
int cpu, cpu_start;
int cpu_count;
int thr_count;
int l3id;
int thr_per_grp, nb_grp;
int thr;
int div;
if (global.nbthread)
@ -1242,7 +1599,9 @@ static int cpu_policy_group_by_ccx(int policy, int tmin, int tmax, int gmin, int
while (global.nbtgroups < MAX_TGROUPS && global.nbthread < MAX_THREADS) {
ha_cpuset_zero(&node_cpu_set);
l3id = -1; cpu_count = 0;
ha_cpuset_zero(&touse_tsid);
ha_cpuset_zero(&touse_ccx);
l3id = -1; cpu_count = 0; thr_count = 0;
for (cpu = cpu_start; cpu <= cpu_topo_lastcpu; cpu++) {
/* skip disabled and already visited CPUs */
@ -1261,7 +1620,13 @@ static int cpu_policy_group_by_ccx(int policy, int tmin, int tmax, int gmin, int
/* make a mask of all of this cluster's CPUs */
ha_cpuset_set(&node_cpu_set, ha_cpu_topo[cpu].idx);
ha_cpuset_set(&touse_ccx, ha_cpu_topo[cpu].ca_id[3]);
cpu_count++;
if (!ha_cpuset_isset(&touse_tsid, ha_cpu_topo[cpu].ts_id)) {
thr_count++;
ha_cpuset_set(&touse_tsid, ha_cpu_topo[cpu].ts_id);
} else if (!(cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE))
thr_count++;
}
/* now l3id = next L3 ID or -1 if none; cpu_count is the
@ -1273,44 +1638,7 @@ static int cpu_policy_group_by_ccx(int policy, int tmin, int tmax, int gmin, int
ha_cpuset_set(&visited_ccx_set, l3id);
/* check that we're still within limits. If there are too many
* CPUs but enough groups left, we'll try to make more smaller
* groups, of the closest size each.
*/
nb_grp = (cpu_count + MAX_THREADS_PER_GROUP - 1) / MAX_THREADS_PER_GROUP;
if (nb_grp > MAX_TGROUPS - global.nbtgroups)
nb_grp = MAX_TGROUPS - global.nbtgroups;
thr_per_grp = (cpu_count + nb_grp - 1) / nb_grp;
if (thr_per_grp > MAX_THREADS_PER_GROUP)
thr_per_grp = MAX_THREADS_PER_GROUP;
while (nb_grp && cpu_count > 0) {
/* create at most thr_per_grp threads */
if (thr_per_grp > cpu_count)
thr_per_grp = cpu_count;
if (thr_per_grp + global.nbthread > MAX_THREADS)
thr_per_grp = MAX_THREADS - global.nbthread;
/* let's create the new thread group */
ha_tgroup_info[global.nbtgroups].base = global.nbthread;
ha_tgroup_info[global.nbtgroups].count = thr_per_grp;
/* assign to this group the required number of threads */
for (thr = 0; thr < thr_per_grp; thr++) {
ha_thread_info[thr + global.nbthread].tgid = global.nbtgroups + 1;
ha_thread_info[thr + global.nbthread].tg = &ha_tgroup_info[global.nbtgroups];
ha_thread_info[thr + global.nbthread].tg_ctx = &ha_tgroup_ctx[global.nbtgroups];
/* map these threads to all the CPUs */
ha_cpuset_assign(&cpu_map[global.nbtgroups].thread[thr], &node_cpu_set);
}
cpu_count -= thr_per_grp;
global.nbthread += thr_per_grp;
global.nbtgroups++;
if (global.nbtgroups >= MAX_TGROUPS || global.nbthread >= MAX_THREADS)
break;
}
cpu_policy_assign_threads(cpu_count, thr_count, node_cpu_set, touse_tsid, touse_ccx);
}
if (global.nbthread)
@ -1459,12 +1787,22 @@ int cpu_apply_policy(int tmin, int tmax, int gmin, int gmax, char **err)
return 0;
}
if (!ha_cpu_policy[cpu_policy].fct) {
if (!ha_cpu_policy[cpu_policy_conf.cpu_policy].fct) {
/* nothing to do */
return 0;
}
if (ha_cpu_policy[cpu_policy].fct(cpu_policy, tmin, tmax, gmin, gmax, err) < 0)
/*
* If the one thread per core policy has been used, and no affinity
* has been defined, then default to the per-core affinity
*/
if ((cpu_policy_conf.flags & CPU_POLICY_ONE_THREAD_PER_CORE) &&
cpu_policy_conf.affinity == 0)
cpu_policy_conf.affinity = CPU_AFFINITY_PER_CORE;
else if (cpu_policy_conf.affinity == 0)
cpu_policy_conf.affinity = CPU_AFFINITY_PER_GROUP;
if (ha_cpu_policy[cpu_policy_conf.cpu_policy].fct(cpu_policy_conf.cpu_policy, tmin, tmax, gmin, gmax, err) < 0)
return -1;
return 0;
@ -1802,6 +2140,50 @@ int cpu_detect_topology(void)
#endif // OS-specific cpu_detect_topology()
/*
* Parse the "cpu-affinity" global directive, which takes names
*/
static int cfg_parse_cpu_affinity(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
{
int i;
if (too_many_args(2, args, err, NULL))
return -1;
for (i = 0; ha_cpu_affinity[i].name != NULL; i++) {
if (!strcmp(args[1], ha_cpu_affinity[i].name)) {
cpu_policy_conf.affinity |= ha_cpu_affinity[i].affinity_flags;
if (*args[2] != 0) {
struct cpu_affinity_optional *optional = ha_cpu_affinity[i].optional;
if (optional) {
for (i = 0; optional[i].name; i++) {
if (!strcmp(args[2], optional[i].name)) {
cpu_policy_conf.affinity |= optional[i].affinity_flag;
return 0;
}
}
}
memprintf(err, "'%s' provided with unknown optional argument '%s'. ", args[1], args[2]);
if (optional) {
memprintf(err, "%s Known values are :", *err);
for (i = 0; optional[i].name != NULL; i++)
memprintf(err, "%s %s", *err, optional[i].name);
}
return -1;
}
return 0;
}
}
memprintf(err, "'%s' parsed an unknown directive '%s'. Known values are :", args[0], args[1]);
for (i = 0; ha_cpu_affinity[i].name != NULL; i++)
memprintf(err, "%s %s", *err, ha_cpu_affinity[i].name);
return -1;
}
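As an illustration, the parser receives the directive's words as an argument vector; assuming a "per-group" entry whose optional table contains "loose" (keyword names are assumptions derived from the CPU_AFFINITY_* flags above, the real name tables live elsewhere in this file), an invocation would look roughly like this:

	#include <stdio.h>

	/* Hypothetical invocation of the parser above for a configuration
	 * line "cpu-affinity per-group loose".
	 */
	static void example_parse_cpu_affinity(void)
	{
		char *args[] = { "cpu-affinity", "per-group", "loose", "" };
		char *err = NULL;

		if (cfg_parse_cpu_affinity(args, 0, NULL, NULL, "haproxy.cfg", 10, &err) < 0)
			fprintf(stderr, "parse error: %s\n", err);
	}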
/* Parse the "cpu-set" global directive, which takes action names and
* optional values, and fills the cpu_set structure above.
*/
@ -1937,12 +2319,26 @@ static int cfg_parse_cpu_policy(char **args, int section_type, struct proxy *cur
{
int i;
if (too_many_args(1, args, err, NULL))
if (too_many_args(3, args, err, NULL))
return -1;
if (*args[2] != 0) {
if (!strcmp(args[2], "threads-per-core")) {
if (!strcmp(args[3], "1"))
cpu_policy_conf.flags |= CPU_POLICY_ONE_THREAD_PER_CORE;
else if (strcmp(args[3], "auto")) {
memprintf(err, "'%s' passed an unknown value '%s' to keyword '%s', known values are 1 or auto", args[0], args[3], args[2]);
return -1;
}
} else {
memprintf(err, "'%s' passed an unknown keyword '%s', the only known values are threads-per-core", args[0], args[2]);
return -1;
}
}
for (i = 0; ha_cpu_policy[i].name; i++) {
if (strcmp(args[1], ha_cpu_policy[i].name) == 0) {
cpu_policy = i;
cpu_policy_conf.cpu_policy = i;
return 0;
}
}
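In configuration terms, the directive now accepts an optional trailing pair: a line such as "cpu-policy group-by-ccx threads-per-core 1" (policy name chosen for illustration) sets the CPU_POLICY_ONE_THREAD_PER_CORE flag, with args[2] carrying the keyword, args[3] its value, and "auto" keeping the default behaviour.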
@ -2033,6 +2429,7 @@ REGISTER_POST_DEINIT(cpu_topo_deinit);
static struct cfg_kw_list cfg_kws = {ILH, {
{ CFG_GLOBAL, "cpu-policy", cfg_parse_cpu_policy, 0 },
{ CFG_GLOBAL, "cpu-set", cfg_parse_cpu_set, 0 },
{ CFG_GLOBAL, "cpu-affinity", cfg_parse_cpu_affinity, 0 },
{ 0, NULL, NULL }
}};

View File

@ -384,7 +384,7 @@ static void cli_release_ech(struct appctx *appctx)
static struct cli_kw_list cli_kws = {{ },{
{ { "show", "ssl", "ech", NULL}, "show ssl ech [<name>] : display a named ECH configuation or all", cli_parse_show_ech, cli_io_handler_ech_details, cli_release_ech, NULL, ACCESS_EXPERIMENTAL },
{ { "show", "ssl", "ech", NULL}, "show ssl ech [<name>] : display a named ECH configuration or all", cli_parse_show_ech, cli_io_handler_ech_details, cli_release_ech, NULL, ACCESS_EXPERIMENTAL },
{ { "add", "ssl", "ech", NULL }, "add ssl ech <name> <payload> : add a new PEM-formatted ECH config and key ", cli_parse_add_ech, NULL, NULL, NULL, ACCESS_EXPERIMENTAL },
{ { "set", "ssl", "ech", NULL }, "set ssl ech <name> <payload> : replace all ECH configs with that provided", cli_parse_set_ech, NULL, NULL, NULL, ACCESS_EXPERIMENTAL },
{ { "del", "ssl", "ech", NULL }, "del ssl ech <name> [<age-in-secs>] : delete ECH configs", cli_parse_del_ech, NULL, NULL, NULL, ACCESS_EXPERIMENTAL },

View File

@ -1106,7 +1106,7 @@ int h1_format_htx_data(const struct ist data, struct buffer *chk, int chunked)
}
/* Format the htx message into its H1 representation. It returns 1 on success or
* 0 if <outbuf> is full or not emtpy. No check are preformed on the message, it must be
* 0 if <outbuf> is full or not empty. No check is performed on the message, it must be
* valid. Trailers are silently ignored if the message is not chunked.
*/
int h1_format_htx_msg(const struct htx *htx, struct buffer *outbuf)

View File

@ -1,6 +1,6 @@
/*
* HAProxy : High Availability-enabled HTTP/TCP proxy
* Copyright 2000-2025 Willy Tarreau <willy@haproxy.org>.
* Copyright 2000-2026 Willy Tarreau <willy@haproxy.org>.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@ -151,7 +151,8 @@ int pidfd = -1; /* FD to keep PID */
int daemon_fd[2] = {-1, -1}; /* pipe to communicate with parent process */
int devnullfd = -1;
static unsigned long stopping_tgroup_mask; /* Thread groups acknowledging stopping */
static int stopped_tgroups;
static int stop_detected;
/* global options */
struct global global = {
@ -2922,14 +2923,24 @@ void run_poll_loop()
int i;
if (stopping) {
int old_detected;
/* stop muxes/quic-conns before acknowledging stopping */
if (!(tg_ctx->stopping_threads & ti->ltid_bit)) {
task_wakeup(mux_stopping_data[tid].task, TASK_WOKEN_OTHER);
wake = 1;
}
if (_HA_ATOMIC_OR_FETCH(&tg_ctx->stopping_threads, ti->ltid_bit) == ti->ltid_bit &&
_HA_ATOMIC_OR_FETCH(&stopping_tgroup_mask, tg->tgid_bit) == tg->tgid_bit) {
old_detected = stop_detected;
/* check if we're the first to detect the stop */
while (old_detected == 0 &&
!_HA_ATOMIC_CAS(&stop_detected, &old_detected, 1));
if (old_detected == 0) {
/* first one to detect it, notify all threads that stopping was just set */
for (i = 0; i < global.nbthread; i++) {
if (_HA_ATOMIC_LOAD(&ha_thread_info[i].tg->threads_enabled) &
@ -2938,28 +2949,26 @@ void run_poll_loop()
wake_thread(i);
}
}
if (!(tg_ctx->stopping_threads & ti->ltid_bit) &&
_HA_ATOMIC_OR_FETCH(&tg_ctx->stopping_threads,
ti->ltid_bit) == tg->threads_enabled) {
/*
 * All the threads from this thread group
 * are stopped, let it be known.
 */
_HA_ATOMIC_INC(&stopped_tgroups);
}
}
/* stop when there's nothing left to do */
if ((jobs - unstoppable_jobs) == 0 &&
(_HA_ATOMIC_LOAD(&stopping_tgroup_mask) & all_tgroups_mask) == all_tgroups_mask) {
/* check that all threads are aware of the stopping status */
for (i = 0; i < global.nbtgroups; i++)
if ((_HA_ATOMIC_LOAD(&ha_tgroup_ctx[i].stopping_threads) &
_HA_ATOMIC_LOAD(&ha_tgroup_info[i].threads_enabled)) !=
_HA_ATOMIC_LOAD(&ha_tgroup_info[i].threads_enabled))
break;
(_HA_ATOMIC_LOAD(&stopped_tgroups) == global.nbtgroups)) {
#ifdef USE_THREAD
if (i == global.nbtgroups) {
/* all are OK, let's wake them all and stop */
for (i = 0; i < global.nbthread; i++)
if (i != tid && _HA_ATOMIC_LOAD(&ha_thread_info[i].tg->threads_enabled) & ha_thread_info[i].ltid_bit)
wake_thread(i);
break;
}
#else
break;
for (i = 0; i < global.nbthread; i++)
if (i != tid && _HA_ATOMIC_LOAD(&ha_thread_info[i].tg->threads_enabled) & ha_thread_info[i].ltid_bit)
wake_thread(i);
#endif
break;
}
}
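The logic above is a once-only latch: every thread entering the stopping state races on a compare-and-swap of stop_detected from 0 to 1, and only the winner walks the thread list to wake everybody up. Stripped of the HAProxy wrappers, the pattern reduces to the following sketch (C11 atomics used here instead of the _HA_ATOMIC_* macros):

	#include <stdatomic.h>

	static atomic_int stop_detected;

	/* returns 1 for exactly one caller, 0 for all the others */
	static int first_to_detect_stop(void)
	{
		int expected = 0;

		return atomic_compare_exchange_strong(&stop_detected, &expected, 1);
	}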
@ -3118,10 +3127,8 @@ void *run_thread_poll_loop(void *data)
ptff->fct();
#ifdef USE_THREAD
if (!_HA_ATOMIC_AND_FETCH(&ha_tgroup_info[ti->tgid-1].threads_enabled, ~ti->ltid_bit))
_HA_ATOMIC_AND(&all_tgroups_mask, ~tg->tgid_bit);
if (!_HA_ATOMIC_AND_FETCH(&tg_ctx->stopping_threads, ~ti->ltid_bit))
_HA_ATOMIC_AND(&stopping_tgroup_mask, ~tg->tgid_bit);
_HA_ATOMIC_AND(&ha_tgroup_info[ti->tgid-1].threads_enabled, ~ti->ltid_bit);
_HA_ATOMIC_AND_FETCH(&tg_ctx->stopping_threads, ~ti->ltid_bit);
if (tid > 0)
pthread_exit(NULL);
#endif

View File

@ -14027,7 +14027,11 @@ lua_State *hlua_init_state(int thread_num)
struct prepend_path *pp;
/* Init main lua stack. */
#if defined(LUA_VERSION_NUM) && LUA_VERSION_NUM >= 505
L = lua_newstate(hlua_alloc, &hlua_global_allocator, luaL_makeseed(0));
#else
L = lua_newstate(hlua_alloc, &hlua_global_allocator);
#endif
if (!L) {
fprintf(stderr,

View File

@ -2796,6 +2796,11 @@ static int _hlua_patref_add_bulk(lua_State *L, int status, lua_KContext ctx)
int count = 0;
int ret;
if (!lua_istable(L, 2)) {
luaL_argerror(L, 2, "argument is expected to be a table");
return 0; // not reached
}
if ((ref->flags & HLUA_PATREF_FL_GEN) &&
pat_ref_may_commit(ref->ptr, ref->curr_gen))
curr_gen = ref->curr_gen;
@ -2808,17 +2813,6 @@ static int _hlua_patref_add_bulk(lua_State *L, int status, lua_KContext ctx)
const char *key;
const char *value = NULL;
/* check if we may do something to try to prevent thread contention,
* unless we run from body/init state where hlua_yieldk is no-op
*/
if (count > 100 && hlua_gethlua(L)) {
/* let's yield and wait for being called again to continue where we left off */
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &ref->ptr->lock);
hlua_yieldk(L, 0, 0, _hlua_patref_add_bulk, TICK_ETERNITY, HLUA_CTRLYIELD); // continue
return 0; // not reached
}
if (ref->ptr->flags & PAT_REF_SMP) {
/* key:val table */
luaL_checktype(L, -2, LUA_TSTRING);
@ -2843,6 +2837,17 @@ static int _hlua_patref_add_bulk(lua_State *L, int status, lua_KContext ctx)
/* removes 'value'; keeps 'key' for next iteration */
lua_pop(L, 1);
count += 1;
/* check if we may do something to try to prevent thread contention,
* unless we run from body/init state where hlua_yieldk is no-op
*/
if (count > 100 && hlua_gethlua(L)) {
/* let's yield and wait for being called again to continue where we left off */
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &ref->ptr->lock);
hlua_yieldk(L, 0, 0, _hlua_patref_add_bulk, TICK_ETERNITY, HLUA_CTRLYIELD); // continue
return 0; // not reached
}
}
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &ref->ptr->lock);
lua_pushboolean(L, 1);
@ -3047,7 +3052,6 @@ static int _hlua_listable_patref_pairs_iterator(lua_State *L, int status, lua_KC
int context_index;
struct hlua_patref_iterator_context *hctx;
struct pat_ref_elt *elt;
int cnt = 0;
unsigned int curr_gen;
context_index = lua_upvalueindex(1);
@ -3063,38 +3067,21 @@ static int _hlua_listable_patref_pairs_iterator(lua_State *L, int status, lua_KC
if (LIST_ISEMPTY(&hctx->bref.users)) {
/* first iteration */
hctx->bref.ref = hctx->ref->ptr->head.n;
hctx->gen = pat_ref_gen_get(hctx->ref->ptr, curr_gen);
if (!hctx->gen)
goto done;
hctx->bref.ref = hctx->gen->head.n;
}
else
LIST_DEL_INIT(&hctx->bref.users); // drop back ref from previous iteration
next:
/* reached end of list? */
if (hctx->bref.ref == &hctx->ref->ptr->head) {
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &hctx->ref->ptr->lock);
lua_pushnil(L);
return 1;
}
if (hctx->bref.ref == &hctx->gen->head)
goto done;
elt = LIST_ELEM(hctx->bref.ref, struct pat_ref_elt *, list);
if (elt->gen_id != curr_gen) {
/* check if we may do something to try to prevent thread contention,
* unless we run from body/init state where hlua_yieldk is no-op
*/
if (cnt > 10000 && hlua_gethlua(L)) {
/* let's yield and wait for being called again to continue where we left off */
LIST_APPEND(&elt->back_refs, &hctx->bref.users);
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &hctx->ref->ptr->lock);
hlua_yieldk(L, 0, 0, _hlua_listable_patref_pairs_iterator, TICK_ETERNITY, HLUA_CTRLYIELD); // continue
return 0; // not reached
}
hctx->bref.ref = elt->list.n;
cnt++;
goto next;
}
LIST_APPEND(&elt->back_refs, &hctx->bref.users);
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &hctx->ref->ptr->lock);
@ -3107,6 +3094,11 @@ static int _hlua_listable_patref_pairs_iterator(lua_State *L, int status, lua_KC
return 1;
return 2;
done:
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &hctx->ref->ptr->lock);
lua_pushnil(L);
return 1;
}
/* iterator must return key as string and value as patref
* element value (as string), if we reach end of list, it

View File

@ -2005,6 +2005,8 @@ static enum act_parse_ret parse_http_set_map(const char **args, int *orig_arg, s
}
rule->action_ptr = http_action_set_map;
rule->release_ptr = release_http_map;
lf_expr_init(&rule->arg.map.key);
lf_expr_init(&rule->arg.map.value);
cur_arg = *orig_arg;
if (rule->action == 1 && (!*args[cur_arg] || !*args[cur_arg+1])) {
@ -2040,7 +2042,6 @@ static enum act_parse_ret parse_http_set_map(const char **args, int *orig_arg, s
}
/* key pattern */
lf_expr_init(&rule->arg.map.key);
if (!parse_logformat_string(args[cur_arg], px, &rule->arg.map.key, LOG_OPT_NONE, cap, err)) {
free(rule->arg.map.ref);
return ACT_RET_PRS_ERR;
@ -2049,7 +2050,6 @@ static enum act_parse_ret parse_http_set_map(const char **args, int *orig_arg, s
if (rule->action == 1) {
/* value pattern for set-map only */
cur_arg++;
lf_expr_init(&rule->arg.map.value);
if (!parse_logformat_string(args[cur_arg], px, &rule->arg.map.value, LOG_OPT_NONE, cap, err)) {
free(rule->arg.map.ref);
return ACT_RET_PRS_ERR;

View File

@ -2258,7 +2258,7 @@ int http_response_forward_body(struct stream *s, struct channel *res, int an_bit
* server abort.
*/
if (msg->msg_state < HTTP_MSG_ENDING && (s->scb->flags & (SC_FL_EOS|SC_FL_ABRT_DONE))) {
if ((s->scb->flags & (SC_FL_ABRT_DONE|SC_FL_SHUT_DONE)) == (SC_FL_ABRT_DONE|SC_FL_SHUT_DONE))
if ((s->scf->flags & SC_FL_EOS) && (s->scb->flags & (SC_FL_ABRT_DONE|SC_FL_SHUT_DONE)) == (SC_FL_ABRT_DONE|SC_FL_SHUT_DONE))
goto return_cli_abort;
/* If we have some pending data, we continue the processing */
if (htx_is_empty(htx))

View File

@ -658,7 +658,7 @@ void httpclient_applet_io_handler(struct appctx *appctx)
blk = DISGUISE(htx_get_head_blk(htx));
sl = htx_get_blk_ptr(htx, blk);
/* Skipp any 1XX interim responses */
/* Skip any 1XX interim responses */
if (sl->info.res.status < 200) {
/* Upgrade are not supported. Report an error */
if (sl->info.res.status == 101)
@ -951,8 +951,14 @@ int httpclient_applet_init(struct appctx *appctx)
s = appctx_strm(appctx);
s->target = target;
if (objt_server(s->target))
s->sv_tgcounters = __objt_server(s->target)->counters.shared.tg[tgid - 1];
if (objt_server(s->target)) {
struct server *srv = __objt_server(s->target);
if (srv->counters.shared.tg)
s->sv_tgcounters = __objt_server(s->target)->counters.shared.tg[tgid - 1];
else
s->sv_tgcounters = NULL;
}
/* set the "timeout server" */
s->scb->ioto = hc->timeout_server;

911
src/jwe.c Normal file
View File

@ -0,0 +1,911 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include <stdio.h>
#include <haproxy/jwt.h>
#include <haproxy/tools.h>
#include <haproxy/base64.h>
#include <haproxy/chunk.h>
#include <haproxy/init.h>
#include <haproxy/openssl-compat.h>
#include <haproxy/ssl_utils.h>
#include <haproxy/buf.h>
#include <haproxy/sample.h>
#include <haproxy/thread.h>
#include <haproxy/arg.h>
#include <haproxy/vars.h>
#include <haproxy/ssl_sock.h>
#include <haproxy/ssl_ckch.h>
#include <import/mjson.h>
#if defined(HAVE_JWS)
#ifdef USE_OPENSSL
struct alg_enc {
const char *name;
int value;
};
/* https://datatracker.ietf.org/doc/html/rfc7518#section-4.1 */
typedef enum {
JWE_ALG_UNMANAGED = -1,
JWE_ALG_RSA1_5,
JWE_ALG_RSA_OAEP,
JWE_ALG_RSA_OAEP_256,
JWE_ALG_A128KW,
JWE_ALG_A192KW,
JWE_ALG_A256KW,
JWE_ALG_DIR,
// JWE_ALG_ECDH_ES,
// JWE_ALG_ECDH_ES_A128KW,
// JWE_ALG_ECDH_ES_A192KW,
// JWE_ALG_ECDH_ES_A256KW,
JWE_ALG_A128GCMKW,
JWE_ALG_A192GCMKW,
JWE_ALG_A256GCMKW,
// JWE_ALG_PBES2_HS256_A128KW,
// JWE_ALG_PBES2_HS384_A192KW,
// JWE_ALG_PBES2_HS512_A256KW,
} jwe_alg;
struct alg_enc jwe_algs[] = {
{ "RSA1_5", JWE_ALG_RSA1_5 },
{ "RSA-OAEP", JWE_ALG_RSA_OAEP },
{ "RSA-OAEP-256", JWE_ALG_RSA_OAEP_256 },
{ "A128KW", JWE_ALG_A128KW },
{ "A192KW", JWE_ALG_A192KW },
{ "A256KW", JWE_ALG_A256KW },
{ "dir", JWE_ALG_DIR },
{ "ECDH-ES", JWE_ALG_UNMANAGED },
{ "ECDH-ES+A128KW", JWE_ALG_UNMANAGED },
{ "ECDH-ES+A192KW", JWE_ALG_UNMANAGED },
{ "ECDH-ES+A256KW", JWE_ALG_UNMANAGED },
{ "A128GCMKW", JWE_ALG_A128GCMKW },
{ "A192GCMKW", JWE_ALG_A192GCMKW },
{ "A256GCMKW", JWE_ALG_A256GCMKW },
{ "PBES2-HS256+A128KW", JWE_ALG_UNMANAGED },
{ "PBES2-HS384+A192KW", JWE_ALG_UNMANAGED },
{ "PBES2-HS512+A256KW", JWE_ALG_UNMANAGED },
{ NULL, JWE_ALG_UNMANAGED },
};
/* https://datatracker.ietf.org/doc/html/rfc7518#section-5.1 */
typedef enum {
JWE_ENC_UNMANAGED = -1,
JWE_ENC_A128CBC_HS256,
JWE_ENC_A192CBC_HS384,
JWE_ENC_A256CBC_HS512,
JWE_ENC_A128GCM,
JWE_ENC_A192GCM,
JWE_ENC_A256GCM,
} jwe_enc;
struct alg_enc jwe_encodings[] = {
{ "A128CBC-HS256", JWE_ENC_A128CBC_HS256 },
{ "A192CBC-HS384", JWE_ENC_A192CBC_HS384 },
{ "A256CBC-HS512", JWE_ENC_A256CBC_HS512 },
{ "A128GCM", JWE_ENC_A128GCM },
{ "A192GCM", JWE_ENC_A192GCM },
{ "A256GCM", JWE_ENC_A256GCM },
{ NULL, JWE_ENC_UNMANAGED },
};
/*
* In the JWE Compact Serialization, a JWE is represented as the concatenation:
* BASE64URL(UTF8(JWE Protected Header)) || '.' ||
* BASE64URL(JWE Encrypted Key) || '.' ||
* BASE64URL(JWE Initialization Vector) || '.' ||
* BASE64URL(JWE Ciphertext) || '.' ||
* BASE64URL(JWE Authentication Tag)
*/
enum jwe_elt {
JWE_ELT_JOSE = 0,
JWE_ELT_CEK,
JWE_ELT_IV,
JWE_ELT_CIPHERTEXT,
JWE_ELT_TAG,
JWE_ELT_MAX
};
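As an illustration of the enum above, a compact JWE is simply five base64url sections separated by dots. The real code relies on jwt_tokenize() from jwt.c; a minimal stand-in could look like the sketch below (illustrative only, using HAProxy's ist helpers):

	#include <string.h>
	#include <import/ist.h>

	/* Split the compact JWE <tok> of <len> bytes into the five
	 * dot-separated parts of enum jwe_elt. Returns 1 on success, 0 if
	 * fewer than five sections are present.
	 */
	static int jwe_split(const char *tok, size_t len, struct ist parts[JWE_ELT_MAX])
	{
		const char *end = tok + len;
		int i;

		for (i = 0; i < JWE_ELT_MAX; i++) {
			const char *dot = memchr(tok, '.', end - tok);

			if (!dot)
				dot = end;
			parts[i] = ist2(tok, dot - tok);
			if (dot == end && i < JWE_ELT_MAX - 1)
				return 0;
			tok = dot + 1;
		}
		return 1;
	}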
struct jose_fields {
struct buffer *tag;
struct buffer *iv;
};
/*
* Parse contents of "alg" or "enc" field of the JOSE header.
*/
static inline int parse_alg_enc(struct buffer *buf, struct alg_enc *array)
{
struct alg_enc *item = array;
int val = -1;
while (item->name) {
/* require an exact match, not a mere prefix of a table entry */
if (strlen(item->name) == b_data(buf) &&
    strncmp(item->name, b_orig(buf), (int)b_data(buf)) == 0) {
val = item->value;
break;
}
++item;
}
return val;
}
/*
* Look for field <field_name> in JSON <decoded_jose> and base64url decode its
* content into buffer <out>.
* The field might be absent; this is not treated as an error.
*/
static inline int decode_jose_field(struct buffer *decoded_jose, const char *field_name, struct buffer *out)
{
struct buffer *trash = get_trash_chunk();
int size = 0;
if (!out)
return 0;
size = mjson_get_string(b_orig(decoded_jose), b_data(decoded_jose), field_name,
b_orig(trash), b_size(trash));
if (size != -1) {
trash->data = size;
size = base64urldec(b_orig(trash), b_data(trash),
b_orig(out), b_size(out));
if (size < 0)
return 1;
out->data = size;
}
return 0;
}
/*
* Extract the "alg" and "enc" of the JOSE header as well as some algo-specific
* base64url encoded fields.
*/
static int parse_jose(struct buffer *decoded_jose, int *alg, int *enc, struct jose_fields *jose_fields)
{
struct buffer *trash = NULL;
int retval = 0;
int size = 0;
/* Look for "alg" field */
trash = get_trash_chunk();
size = mjson_get_string(b_orig(decoded_jose), b_data(decoded_jose), "$.alg",
b_orig(trash), b_size(trash));
if (size == -1)
goto end;
trash->data = size;
*alg = parse_alg_enc(trash, jwe_algs);
if (*alg == JWE_ALG_UNMANAGED)
goto end;
/* Look for "enc" field */
chunk_reset(trash);
size = mjson_get_string(b_orig(decoded_jose), b_data(decoded_jose), "$.enc",
b_orig(trash), b_size(trash));
if (size == -1)
goto end;
trash->data = size;
*enc = parse_alg_enc(trash, jwe_encodings);
if (*enc == JWE_ENC_UNMANAGED)
goto end;
/* Look for "tag" field (used by aes gcm encryption) */
if (decode_jose_field(decoded_jose, "$.tag", jose_fields->tag))
goto end;
/* Look for "iv" field (used by aes gcm encryption) */
if (decode_jose_field(decoded_jose, "$.iv", jose_fields->iv))
goto end;
retval = 1;
end:
return retval;
}
/*
* Decrypt the Encrypted Key <cek>, encrypted with an AES GCM Key Wrap
* algorithm, and dump the decrypted key into the <decrypted_cek> buffer. The
* decryption uses the <iv> Initialization Vector and the <secret> key, and the
* authentication check is performed with <aead_tag>. All those buffers must be
* in raw format, already base64url decoded.
* Return 0 in case of error, 1 otherwise.
*/
static int decrypt_cek_aesgcmkw(struct buffer *cek, struct buffer *aead_tag, struct buffer *iv,
struct buffer *decrypted_cek, struct buffer *secret, jwe_alg crypt_alg)
{
int retval = 0;
int key_size = 0;
int size = 0;
switch(crypt_alg) {
case JWE_ALG_A128GCMKW: key_size = 128; break;
case JWE_ALG_A192GCMKW: key_size = 192; break;
case JWE_ALG_A256GCMKW: key_size = 256; break;
default:
goto end;
}
size = aes_process(cek, iv, secret, key_size, aead_tag, NULL, decrypted_cek, 1, 1);
if (size < 0)
goto end;
decrypted_cek->data = size;
retval = 1;
end:
return retval;
}
/*
* Decrypt the Encrypted Key <cek>, encrypted with an AES Key Wrap algorithm
* (RFC 3394), and dump the decrypted key into the <decrypted_cek> buffer. The
* decryption uses the <secret> key and the default IV defined by RFC 3394.
* Both buffers must be in raw format, already base64url decoded.
* Return 0 in case of error, 1 otherwise.
*/
static int decrypt_cek_aeskw(struct buffer *cek, struct buffer *decrypted_cek, struct buffer *secret, jwe_alg crypt_alg)
{
EVP_CIPHER_CTX *ctx = NULL;
const EVP_CIPHER *cipher = NULL;
struct buffer *iv = NULL;
int iv_size = 0;
int retval = 0;
int length = 0;
ctx = EVP_CIPHER_CTX_new();
if (!ctx)
goto end;
switch(crypt_alg) {
#ifndef OPENSSL_IS_AWSLC
/* AWS-LC does not support EVP_aes_128_wrap or EVP_aes_192_wrap */
case JWE_ALG_A128KW: cipher = EVP_aes_128_wrap(); break;
case JWE_ALG_A192KW: cipher = EVP_aes_192_wrap(); break;
#endif
case JWE_ALG_A256KW: cipher = EVP_aes_256_wrap(); break;
default:
goto end;
}
#ifndef OPENSSL_IS_AWSLC
/* Comment from AWS-LC (in include/openssl/cipher.h):
* EVP_aes_256_wrap implements AES-256 in Key Wrap mode. OpenSSL 1.1.1
* required |EVP_CIPHER_CTX_FLAG_WRAP_ALLOW| to be set with
* |EVP_CIPHER_CTX_set_flags|, in order for |EVP_aes_256_wrap| to work.
* This is not required in AWS-LC and they are no-op flags maintained
* for compatibility.
*/
EVP_CIPHER_CTX_set_flags(ctx, EVP_CIPHER_CTX_FLAG_WRAP_ALLOW);
#endif
iv_size = EVP_CIPHER_iv_length(cipher);
iv = alloc_trash_chunk();
if (!iv)
goto end;
/* Default IV for AES KW (see RFC3394 section-2.2.3.1) */
memset(iv->area, 0xA6, iv_size);
iv->data = iv_size;
/* Initialise IV and key */
if (EVP_DecryptInit_ex(ctx, cipher, NULL, (unsigned char*)b_orig(secret), (unsigned char*)b_orig(iv)) <= 0)
goto end;
if (EVP_DecryptUpdate(ctx, (unsigned char*)b_orig(decrypted_cek), &length,
(unsigned char*)b_orig(cek), b_data(cek)) <= 0)
goto end;
if (EVP_DecryptFinal_ex(ctx, (unsigned char*)decrypted_cek->area + length, (int*)&decrypted_cek->data) <= 0)
goto end;
decrypted_cek->data += length;
retval = 1;
end:
EVP_CIPHER_CTX_free(ctx);
free_trash_chunk(iv);
return retval;
}
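For reference, the IV installed by the memset() above is the fixed 8-byte default initial value from RFC 3394 section 2.2.3.1, i.e. the constant below:

	/* AES Key Wrap default IV (RFC 3394 section 2.2.3.1) */
	static const unsigned char aeskw_default_iv[8] = {
		0xA6, 0xA6, 0xA6, 0xA6, 0xA6, 0xA6, 0xA6, 0xA6
	};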
/*
* Build an authentication tag when AES-CBC encoding is used and check that it
* matches the one found in the JWE token.
* The tag is built out of an HMAC of some concatenated data taken from the JWE
* token (see https://datatracker.ietf.org/doc/html/rfc7518#section-5.2). The
* first half of the previously decrypted cek is used as HMAC key.
* Returns 0 on success, non-zero otherwise.
*/
static int build_and_check_tag(jwe_enc enc, struct jwt_item items[JWE_ELT_MAX],
struct buffer *decoded_items[JWE_ELT_MAX],
struct buffer *decrypted_cek)
{
int retval = 1;
const EVP_MD *hash = NULL;
int mac_key_len = 0;
uint64_t aad_len = my_htonll(items[JWE_ELT_JOSE].length << 3);
struct buffer *tag_data = alloc_trash_chunk();
struct buffer *hmac = alloc_trash_chunk();
if (!tag_data || !hmac)
goto end;
/*
* Concatenate the AAD (base64url encoded JOSE header),
* the Initialization Vector, the ciphertext,
* and the AL value (number of bits in the AAD in 64bits big endian)
*/
if (!chunk_memcpy(tag_data, items[JWE_ELT_JOSE].start, items[JWE_ELT_JOSE].length) ||
!chunk_memcat(tag_data, b_orig(decoded_items[JWE_ELT_IV]), b_data(decoded_items[JWE_ELT_IV])) ||
!chunk_memcat(tag_data, b_orig(decoded_items[JWE_ELT_CIPHERTEXT]), b_data(decoded_items[JWE_ELT_CIPHERTEXT])) ||
!chunk_memcat(tag_data, (char*)&aad_len, sizeof(aad_len)))
goto end;
switch(enc) {
case JWE_ENC_A128CBC_HS256: mac_key_len = 16; hash = EVP_sha256(); break;
case JWE_ENC_A192CBC_HS384: mac_key_len = 24; hash = EVP_sha384(); break;
case JWE_ENC_A256CBC_HS512: mac_key_len = 32; hash = EVP_sha512(); break;
default: goto end;
}
if (b_data(decrypted_cek) < mac_key_len)
goto end;
/* Compute the HMAC SHA-XXX of the concatenated value above */
if (!HMAC(hash, b_orig(decrypted_cek), mac_key_len,
(unsigned char*)b_orig(tag_data), b_data(tag_data),
(unsigned char*)b_orig(hmac), (unsigned int*)&hmac->data))
goto end;
/* Use the first half of the HMAC output M as the Authentication Tag output T */
retval = memcmp(b_orig(decoded_items[JWE_ELT_TAG]), b_orig(hmac), b_data(hmac) >> 1);
end:
free_trash_chunk(tag_data);
free_trash_chunk(hmac);
return retval;
}
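To make the construction concrete for A128CBC-HS256: the 32-byte decrypted CEK is split in two, the first 16 bytes being the HMAC-SHA-256 key and the last 16 bytes the AES-128-CBC key, while the AL block appended to the HMAC input is the AAD length in bits as a 64-bit big-endian integer (what my_htonll() computes above). A self-contained sketch of the AL encoding:

	#include <stddef.h>
	#include <stdint.h>

	/* Encode the RFC 7518 section 5.2 "AL" block: the AAD length in
	 * bits as a 64-bit big-endian value.
	 */
	static void jwe_al_block(size_t aad_len, unsigned char out[8])
	{
		uint64_t bits = (uint64_t)aad_len * 8;
		int i;

		for (i = 7; i >= 0; i--) {
			out[i] = bits & 0xff;
			bits >>= 8;
		}
	}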
/*
* Decrypt the ciphertext.
* Returns 0 in case of success, 1 otherwise.
*/
static int decrypt_ciphertext(jwe_enc enc, struct jwt_item items[JWE_ELT_MAX],
struct buffer *decoded_items[JWE_ELT_MAX],
struct buffer *decrypted_cek, struct buffer **out)
{
struct buffer **ciphertext = NULL, **iv = NULL, **aead_tag = NULL, *aad = NULL;
int size = 0;
int gcm = 0;
int key_size = 0;
struct buffer *aes_key = NULL;
int retval = 1;
switch (enc) {
case JWE_ENC_A128CBC_HS256: gcm = 0; key_size = 16; break;
case JWE_ENC_A192CBC_HS384: gcm = 0; key_size = 24; break;
case JWE_ENC_A256CBC_HS512: gcm = 0; key_size = 32; break;
case JWE_ENC_A128GCM: gcm = 1; key_size = 16; break;
case JWE_ENC_A192GCM: gcm = 1; key_size = 24; break;
case JWE_ENC_A256GCM: gcm = 1; key_size = 32; break;
default: goto end;
}
/* Base64 decode cipher text */
ciphertext = &decoded_items[JWE_ELT_CIPHERTEXT];
*ciphertext = alloc_trash_chunk();
if (!*ciphertext)
goto end;
size = base64urldec(items[JWE_ELT_CIPHERTEXT].start, items[JWE_ELT_CIPHERTEXT].length,
(*ciphertext)->area, (*ciphertext)->size);
if (size < 0)
goto end;
(*ciphertext)->data = size;
/* Base64 decode Initialization Vector */
iv = &decoded_items[JWE_ELT_IV];
*iv = alloc_trash_chunk();
if (!*iv)
goto end;
size = base64urldec(items[JWE_ELT_IV].start, items[JWE_ELT_IV].length,
(*iv)->area, (*iv)->size);
if (size < 0)
goto end;
(*iv)->data = size;
/* Base64 decode the Authentication Tag */
aead_tag = &decoded_items[JWE_ELT_TAG];
*aead_tag = alloc_trash_chunk();
if (!*aead_tag)
goto end;
size = base64urldec(items[JWE_ELT_TAG].start, items[JWE_ELT_TAG].length,
(*aead_tag)->area, (*aead_tag)->size);
if (size < 0)
goto end;
(*aead_tag)->data = size;
if (gcm) {
aad = alloc_trash_chunk();
if (!aad)
goto end;
chunk_memcpy(aad, items[JWE_ELT_JOSE].start, items[JWE_ELT_JOSE].length);
aes_key = decrypted_cek;
} else {
/* https://datatracker.ietf.org/doc/html/rfc7518#section-5.2.2.1
* Build the authentication tag out of the first part of the
* cipher key and a combination of information extracted from
* the JWE token.
*/
if (build_and_check_tag(enc, items, decoded_items, decrypted_cek))
goto end;
aes_key = alloc_trash_chunk();
if (!aes_key)
goto end;
/* Only use the second part of the decrypted key for actual
* content decryption. */
if (b_data(decrypted_cek) != key_size * 2)
goto end;
chunk_memcpy(aes_key, decrypted_cek->area + key_size, key_size);
}
*out = alloc_trash_chunk();
if (!*out)
goto end;
size = aes_process(*ciphertext, *iv, aes_key, key_size*8, *aead_tag, aad, *out, 1, gcm);
if (size < 0)
goto end;
retval = 0;
end:
free_trash_chunk(aad);
if (!gcm)
free_trash_chunk(aes_key);
return retval;
}
static inline void clear_decoded_items(struct buffer *decoded_items[JWE_ELT_MAX])
{
struct buffer *buf = NULL;
int idx = JWE_ELT_JOSE;
while(idx != JWE_ELT_MAX) {
buf = decoded_items[idx];
free_trash_chunk(buf);
++idx;
}
}
/*
* Decrypt the contents of a JWE token using the user-provided base64
* encoded secret. This converter can only be used for tokens that have a
* symmetric algorithm (AESKW, AESGCMKW or the "dir" special case).
* Returns the decrypted contents, or nothing if any error happened.
*/
static int sample_conv_jwt_decrypt_secret(const struct arg *args, struct sample *smp, void *private)
{
struct buffer *input = NULL;
unsigned int item_num = JWE_ELT_MAX;
int retval = 0;
struct jwt_item items[JWE_ELT_MAX] = {};
struct buffer *decoded_items[JWE_ELT_MAX] = {};
struct sample secret_smp;
struct buffer *secret = NULL;
struct buffer **cek = NULL;
struct buffer *decrypted_cek = NULL;
struct buffer *out = NULL;
struct buffer *alg_tag = NULL;
struct buffer *alg_iv = NULL;
int size = 0;
jwe_alg alg = JWE_ALG_UNMANAGED;
jwe_enc enc = JWE_ENC_UNMANAGED;
int gcm = 0;
struct jose_fields fields = {};
input = alloc_trash_chunk();
if (!input)
return 0;
if (!chunk_cpy(input, &smp->data.u.str))
goto end;
if (jwt_tokenize(input, items, &item_num) || item_num != JWE_ELT_MAX)
goto end;
alg_tag = alloc_trash_chunk();
if (!alg_tag)
goto end;
alg_iv = alloc_trash_chunk();
if (!alg_iv)
goto end;
fields.tag = alg_tag;
fields.iv = alg_iv;
/* Base64Url decode the JOSE header */
decoded_items[JWE_ELT_JOSE] = alloc_trash_chunk();
if (!decoded_items[JWE_ELT_JOSE])
goto end;
size = base64urldec(items[JWE_ELT_JOSE].start, items[JWE_ELT_JOSE].length,
b_orig(decoded_items[JWE_ELT_JOSE]), b_size(decoded_items[JWE_ELT_JOSE]));
if (size < 0)
goto end;
decoded_items[JWE_ELT_JOSE]->data = size;
if (!parse_jose(decoded_items[JWE_ELT_JOSE], &alg, &enc, &fields))
goto end;
/* Check if "alg" fits secret-based JWEs */
switch (alg) {
case JWE_ALG_A128KW:
case JWE_ALG_A192KW:
case JWE_ALG_A256KW:
gcm = 0;
break;
case JWE_ALG_A128GCMKW:
case JWE_ALG_A192GCMKW:
case JWE_ALG_A256GCMKW:
gcm = 1;
break;
case JWE_ALG_DIR:
break;
default:
/* Cannot use a secret for this type of "alg" */
goto end;
}
/* Parse secret argument and base64dec it if it comes from a variable. */
smp_set_owner(&secret_smp, smp->px, smp->sess, smp->strm, smp->opt);
if (!sample_conv_var2smp_str(&args[0], &secret_smp))
goto end;
if (args[0].type == ARGT_VAR) {
secret = alloc_trash_chunk();
if (!secret)
goto end;
size = base64dec(secret_smp.data.u.str.area, secret_smp.data.u.str.data, secret->area, secret->size);
if (size < 0)
goto end;
secret->data = size;
secret_smp.data.u.str = *secret;
}
if (items[JWE_ELT_CEK].length) {
int cek_size = 0;
cek = &decoded_items[JWE_ELT_CEK];
*cek = alloc_trash_chunk();
if (!*cek)
goto end;
decrypted_cek = alloc_trash_chunk();
if (!decrypted_cek) {
goto end;
}
cek_size = base64urldec(items[JWE_ELT_CEK].start, items[JWE_ELT_CEK].length,
(*cek)->area, (*cek)->size);
if (cek_size < 0) {
goto end;
}
(*cek)->data = cek_size;
if (gcm) {
if (!decrypt_cek_aesgcmkw(*cek, alg_tag, alg_iv, decrypted_cek, &secret_smp.data.u.str, alg))
goto end;
} else {
if (!decrypt_cek_aeskw(*cek, decrypted_cek, &secret_smp.data.u.str, alg))
goto end;
}
} else if (alg == JWE_ALG_DIR) {
/* The secret given as parameter is used directly to
 * decrypt the encrypted content. */
decrypted_cek = alloc_trash_chunk();
if (!decrypted_cek)
goto end;
chunk_memcpy(decrypted_cek, secret_smp.data.u.str.area, secret_smp.data.u.str.data);
}
/* Decrypt the encrypted content with the decrypted_cek secret */
if (decrypt_ciphertext(enc, items, decoded_items, decrypted_cek, &out))
goto end;
smp->data.u.str.data = b_data(out);
smp->data.u.str.area = b_orig(out);
smp->data.type = SMP_T_BIN;
smp_dup(smp);
retval = 1;
end:
free_trash_chunk(input);
free_trash_chunk(decrypted_cek);
free_trash_chunk(out);
free_trash_chunk(alg_tag);
free_trash_chunk(alg_iv);
clear_decoded_items(decoded_items);
return retval;
}
static int decrypt_cek_rsa(struct buffer *cek, struct buffer *decrypted_cek,
struct buffer *cert, jwe_alg crypt_alg)
{
EVP_PKEY_CTX *ctx = NULL;
const EVP_MD *md = NULL;
EVP_PKEY *pkey = NULL;
int retval = 0;
int pad = 0;
size_t outl = b_size(decrypted_cek);
struct ckch_store *store = NULL;
if (HA_SPIN_TRYLOCK(CKCH_LOCK, &ckch_lock))
goto end;
store = ckchs_lookup(b_orig(cert));
if (!store || !store->data->key || !store->conf.jwt) {
HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);
goto end;
}
pkey = store->data->key;
EVP_PKEY_up_ref(pkey);
HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);
switch(crypt_alg) {
case JWE_ALG_RSA1_5:
pad = RSA_PKCS1_PADDING;
md = EVP_sha1();
break;
case JWE_ALG_RSA_OAEP:
pad = RSA_PKCS1_OAEP_PADDING;
md = EVP_sha1();
break;
case JWE_ALG_RSA_OAEP_256:
pad = RSA_PKCS1_OAEP_PADDING;
md = EVP_sha256();
break;
default:
goto end;
}
ctx = EVP_PKEY_CTX_new(pkey, NULL);
if (!ctx)
goto end;
if (EVP_PKEY_decrypt_init(ctx) <= 0)
goto end;
if (EVP_PKEY_CTX_set_rsa_padding(ctx, pad) <= 0)
goto end;
if (pad == RSA_PKCS1_OAEP_PADDING) {
if (EVP_PKEY_CTX_set_rsa_oaep_md(ctx, md) <= 0)
goto end;
if (EVP_PKEY_CTX_set_rsa_mgf1_md(ctx, md) <= 0)
goto end;
}
if (EVP_PKEY_decrypt(ctx, (unsigned char*)b_orig(decrypted_cek), &outl,
(unsigned char*)b_orig(cek), b_data(cek)) <= 0)
goto end;
decrypted_cek->data = outl;
retval = 1;
end:
EVP_PKEY_CTX_free(ctx);
EVP_PKEY_free(pkey);
return retval;
}
/*
* Decrypt the contents of a JWE token using the user-provided certificate
* and private key. This converter can only be used for tokens that have an
* asymmetric algorithm (RSA only for now).
* Returns the decrypted contents, or nothing if any error happened.
*/
static int sample_conv_jwt_decrypt_cert(const struct arg *args, struct sample *smp, void *private)
{
struct sample cert_smp;
struct buffer *input = NULL;
unsigned int item_num = JWE_ELT_MAX;
int retval = 0;
struct jwt_item items[JWE_ELT_MAX] = {};
struct buffer *decoded_items[JWE_ELT_MAX] = {};
jwe_alg alg = JWE_ALG_UNMANAGED;
jwe_enc enc = JWE_ENC_UNMANAGED;
int rsa = 0;
int size = 0;
struct buffer *cert = NULL;
struct buffer **cek = NULL;
struct buffer *decrypted_cek = NULL;
struct buffer *out = NULL;
struct jose_fields fields = {};
input = alloc_trash_chunk();
if (!input)
return 0;
if (!chunk_cpy(input, &smp->data.u.str))
goto end;
if (jwt_tokenize(input, items, &item_num) || item_num != JWE_ELT_MAX)
goto end;
/* Base64Url decode the JOSE header */
decoded_items[JWE_ELT_JOSE] = alloc_trash_chunk();
if (!decoded_items[JWE_ELT_JOSE])
goto end;
size = base64urldec(items[JWE_ELT_JOSE].start, items[JWE_ELT_JOSE].length,
b_orig(decoded_items[JWE_ELT_JOSE]), b_size(decoded_items[JWE_ELT_JOSE]));
if (size < 0)
goto end;
decoded_items[JWE_ELT_JOSE]->data = size;
if (!parse_jose(decoded_items[JWE_ELT_JOSE], &alg, &enc, &fields))
goto end;
/* Check if "alg" fits certificate-based JWEs */
switch (alg) {
case JWE_ALG_RSA1_5:
case JWE_ALG_RSA_OAEP:
case JWE_ALG_RSA_OAEP_256:
rsa = 1;
break;
default:
/* Not managed yet */
goto end;
}
cert = alloc_trash_chunk();
if (!cert)
goto end;
smp_set_owner(&cert_smp, smp->px, smp->sess, smp->strm, smp->opt);
if (!sample_conv_var2smp_str(&args[0], &cert_smp))
goto end;
if (chunk_printf(cert, "%.*s", (int)b_data(&cert_smp.data.u.str), b_orig(&cert_smp.data.u.str)) <= 0)
goto end;
/* With asymmetric crypto algorithms we should always have a CEK */
if (!items[JWE_ELT_CEK].length)
goto end;
cek = &decoded_items[JWE_ELT_CEK];
*cek = alloc_trash_chunk();
if (!*cek)
goto end;
decrypted_cek = alloc_trash_chunk();
if (!decrypted_cek) {
goto end;
}
size = base64urldec(items[JWE_ELT_CEK].start, items[JWE_ELT_CEK].length,
(*cek)->area, (*cek)->size);
if (size < 0) {
goto end;
}
(*cek)->data = size;
if (rsa && !decrypt_cek_rsa(*cek, decrypted_cek, cert, alg))
goto end;
if (decrypt_ciphertext(enc, items, decoded_items, decrypted_cek, &out))
goto end;
smp->data.u.str.data = b_data(out);
smp->data.u.str.area = b_orig(out);
smp->data.type = SMP_T_BIN;
smp_dup(smp);
retval = 1;
end:
free_trash_chunk(input);
free_trash_chunk(cert);
free_trash_chunk(decrypted_cek);
free_trash_chunk(out);
clear_decoded_items(decoded_items);
return retval;
}
/* "jwt_decrypt_cert" converter check function.
* The first and only parameter should be a path to a PEM certificate, or a
* variable holding a path to a PEM certificate. The certificate must already
* exist in the certificate store.
* This converter will be used for JWEs with an RSA type "alg" field in their
* JOSE header.
*/
static int sample_conv_jwt_decrypt_cert_check(struct arg *args, struct sample_conv *conv,
const char *file, int line, char **err)
{
vars_check_arg(&args[0], NULL);
if (args[0].type == ARGT_STR) {
struct ckch_store *store = NULL;
if (HA_SPIN_TRYLOCK(CKCH_LOCK, &ckch_lock))
return 0;
store = ckchs_lookup(args[0].data.str.area);
if (!store) {
memprintf(err, "unknown certificate %s", args[0].data.str.area);
HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);
return 0;
} else if (!store->conf.jwt) {
memprintf(err, "unusable certificate %s (\"jwt\" option not set to \"on\")", args[0].data.str.area);
HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);
return 0;
}
HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);
}
return 1;
}
/* "jwt_decrypt_secret" converter check function.
* The first and only parameter should be a base64 encoded secret or a variable
* holding a base64 encoded secret. This converter will be used mainly for JWEs
* with an AES type "alg" field in their JOSE header.
*/
static int sample_conv_jwt_decrypt_secret_check(struct arg *args, struct sample_conv *conv,
const char *file, int line, char **err)
{
/* Try to decode variables. */
if (!sample_check_arg_base64(&args[0], err)) {
memprintf(err, "failed to parse secret: %s", *err);
return 0;
}
return 1;
}
static struct sample_conv_kw_list sample_conv_kws = {ILH, {
/* JSON Web Token converters */
{ "jwt_decrypt_secret", sample_conv_jwt_decrypt_secret, ARG1(1,STR), sample_conv_jwt_decrypt_secret_check, SMP_T_BIN, SMP_T_BIN },
{ "jwt_decrypt_cert", sample_conv_jwt_decrypt_cert, ARG1(1,STR), sample_conv_jwt_decrypt_cert_check, SMP_T_BIN, SMP_T_BIN },
{ NULL, NULL, 0, 0, 0 },
}};
INITCALL1(STG_REGISTER, sample_register_convs, &sample_conv_kws);
#endif /* USE_OPENSSL */
#endif /* HAVE_JWS */
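Once registered, the converters are available like any other sample converter; a configuration line along the lines of "http-request set-var(txn.claims) req.hdr(x-token),jwt_decrypt_secret(<base64 secret>)" would decrypt the token into a variable (illustrative syntax only; the exact usage is covered by the companion documentation commit).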

View File

@ -229,7 +229,7 @@ REGISTER_POST_DEINIT(accept_queue_deinit);
*/
int li_init_per_thr(struct listener *li)
{
int nbthr = MIN(global.nbthread, MAX_THREADS_PER_GROUP);
int nbthr = MIN(global.nbthread, global.maxthrpertgroup);
int i;
/* allocate per-thread elements for listener */
@ -846,7 +846,7 @@ int create_listeners(struct bind_conf *bc, const struct sockaddr_storage *ss,
proto->add(proto, l);
if (fd != -1)
l->rx.flags |= RX_F_INHERITED;
l->rx.flags |= RX_F_INHERITED_FD|RX_F_INHERITED_SOCK;
guid_init(&l->guid);
@ -879,12 +879,15 @@ struct shard_info *shard_info_attach(struct receiver *rx, struct shard_info *si)
return NULL;
si->ref = rx;
si->members = calloc(global.nbtgroups, sizeof(*si->members));
if (si->members == NULL) {
free(si);
return NULL;
}
}
rx->shard_info = si;
BUG_ON (si->tgroup_mask & 1UL << (rx->bind_tgroup - 1));
si->tgroup_mask |= 1UL << (rx->bind_tgroup - 1);
si->nbgroups = my_popcountl(si->tgroup_mask);
si->nbgroups++;
si->nbthreads += my_popcountl(rx->bind_thread);
si->members[si->nbgroups - 1] = rx;
return si;
@ -913,8 +916,7 @@ void shard_info_detach(struct receiver *rx)
BUG_ON(gr == MAX_TGROUPS);
si->nbthreads -= my_popcountl(rx->bind_thread);
si->tgroup_mask &= ~(1UL << (rx->bind_tgroup - 1));
si->nbgroups = my_popcountl(si->tgroup_mask);
si->nbgroups--;
/* replace the member by the last one. If we removed the reference, we
* have to switch to another one. It's always the first entry so we can
@ -924,8 +926,10 @@ void shard_info_detach(struct receiver *rx)
si->members[si->nbgroups] = NULL;
si->ref = si->members[0];
if (!si->nbgroups)
if (!si->nbgroups) {
free(si->members);
free(si);
}
}
/* clones listener <src> and returns the new one. All dynamically allocated
@ -1113,7 +1117,7 @@ void listener_accept(struct listener *l)
int max = 0;
int it;
for (it = 0; (it < global.nbtgroups && p->fe_counters.shared.tg[it]); it++)
for (it = 0; (it < global.nbtgroups && p->fe_counters.shared.tg && p->fe_counters.shared.tg[it]); it++)
max += freq_ctr_remain(&p->fe_counters.shared.tg[it]->sess_per_sec, p->fe_sps_lim, 0);
if (unlikely(!max)) {
@ -1394,7 +1398,7 @@ void listener_accept(struct listener *l)
/* no more threads here, switch to
* last thread of previous group.
*/
t2 = MAX_THREADS_PER_GROUP - 1;
t2 = global.maxthrpertgroup - 1;
if (l->rx.shard_info)
r2--;
/* loop again */
@ -1456,10 +1460,10 @@ void listener_accept(struct listener *l)
new_li = l->rx.shard_info->members[r1]->owner;
t2--;
if (t2 >= MAX_THREADS_PER_GROUP) {
if (t2 >= global.maxthrpertgroup) {
if (l->rx.shard_info)
r2--;
t2 = MAX_THREADS_PER_GROUP - 1;
t2 = global.maxthrpertgroup - 1;
}
}
else if (q1 - q2 > 0) {
@ -1480,7 +1484,7 @@ void listener_accept(struct listener *l)
new_li = l->rx.shard_info->members[r1]->owner;
updt_t1:
t1++;
if (t1 >= MAX_THREADS_PER_GROUP) {
if (t1 >= global.maxthrpertgroup) {
if (l->rx.shard_info)
r1++;
t1 = 0;
@ -1752,7 +1756,8 @@ int bind_complete_thread_setup(struct bind_conf *bind_conf, int *err_code)
struct listener *li, *new_li, *ref;
struct thread_set new_ts;
int shard, shards, todo, done, grp, dups;
ulong mask, gmask, bit;
ulong mask, bit;
int nbgrps;
int cfgerr = 0;
char *err;
@ -1784,7 +1789,7 @@ int bind_complete_thread_setup(struct bind_conf *bind_conf, int *err_code)
}
}
else if (shards == -2)
shards = protocol_supports_flag(li->rx.proto, PROTO_F_REUSEPORT_SUPPORTED) ? my_popcountl(bind_conf->thread_set.grps) : 1;
shards = protocol_supports_flag(li->rx.proto, PROTO_F_REUSEPORT_SUPPORTED) ? bind_conf->thread_set.nbgrps : 1;
/* no more shards than total threads */
if (shards > todo)
@ -1817,25 +1822,25 @@ int bind_complete_thread_setup(struct bind_conf *bind_conf, int *err_code)
/* take next unassigned bit */
bit = (bind_conf->thread_set.rel[grp] & ~mask) & -(bind_conf->thread_set.rel[grp] & ~mask);
if (!new_ts.rel[grp])
new_ts.nbgrps++;
new_ts.rel[grp] |= bit;
mask |= bit;
new_ts.grps |= 1UL << grp;
done += shards;
};
BUG_ON(!new_ts.grps); // no more bits left unassigned
BUG_ON(!new_ts.nbgrps); // no more group ?
/* Create all required listeners for all bound groups. If more than one group is
* needed, the first receiver serves as a reference, and subsequent ones point to
* it. We already have a listener available in new_li() so we only allocate a new
* one if we're not on the last one. We count the remaining groups by copying their
* mask into <gmask> and dropping the lowest bit at the end of the loop until there
* is no more. Ah yes, it's not pretty :-/
* one if we're not on the last one.
*
*/
ref = new_li;
gmask = new_ts.grps;
for (dups = 0; gmask; dups++) {
nbgrps = new_ts.nbgrps;
for (dups = 0; nbgrps; dups++) {
/* assign the first (and only) thread and group */
new_li->rx.bind_thread = thread_set_nth_tmask(&new_ts, dups);
new_li->rx.bind_tgroup = thread_set_nth_group(&new_ts, dups);
@ -1844,10 +1849,16 @@ int bind_complete_thread_setup(struct bind_conf *bind_conf, int *err_code)
/* it has been allocated already in the previous round */
shard_info_attach(&new_li->rx, ref->rx.shard_info);
new_li->rx.flags |= RX_F_MUST_DUP;
/* taking the other one's FD will result in it being marked
* extern and being dup()ed. Let's mark the receiver as
* inherited so that it properly bypasses all second-stage
* setup/unbind and avoids being passed to new processes.
*/
new_li->rx.flags |= ref->rx.flags & RX_F_INHERITED_SOCK;
}
gmask &= gmask - 1; // drop lowest bit
if (gmask) {
nbgrps--;
if (nbgrps) {
/* yet another listener expected in this shard, let's
* chain it.
*/
@ -2662,7 +2673,7 @@ static int bind_parse_thread(char **args, int cur_arg, struct proxy *px, struct
l = LIST_NEXT(&conf->listeners, struct listener *, by_bind);
if (l->rx.addr.ss_family == AF_CUST_RHTTP_SRV &&
atleast2(conf->thread_set.grps)) {
conf->thread_set.nbgrps >= 2) {
memprintf(err, "'%s' : reverse HTTP bind cannot span multiple thread groups.", args[cur_arg]);
return ERR_ALERT | ERR_FATAL;
}

View File

@ -362,6 +362,7 @@ struct show_map_ctx {
unsigned int display_flags;
unsigned int curr_gen; /* current/latest generation, for show/clear */
unsigned int prev_gen; /* prev generation, for clear */
struct pat_ref_gen *gen; /* link to the generation being displayed, for show */
enum {
STATE_INIT = 0, /* initialize list and backrefs */
STATE_LIST, /* list entries */
@ -387,17 +388,20 @@ static int cli_io_handler_pat_list(struct appctx *appctx)
LIST_DELETE(&ctx->bref.users);
LIST_INIT(&ctx->bref.users);
} else {
ctx->bref.ref = ctx->ref->head.n;
ctx->gen = pat_ref_gen_get(ctx->ref, ctx->curr_gen);
if (!ctx->gen) {
HA_RWLOCK_WRUNLOCK(PATREF_LOCK, &ctx->ref->lock);
ctx->state = STATE_DONE;
return 1;
}
ctx->bref.ref = ctx->gen->head.n;
}
while (ctx->bref.ref != &ctx->ref->head) {
while (ctx->bref.ref != &ctx->gen->head) {
chunk_reset(&trash);
elt = LIST_ELEM(ctx->bref.ref, struct pat_ref_elt *, list);
if (elt->gen_id != ctx->curr_gen)
goto skip;
/* build messages */
if (elt->sample)
chunk_appendf(&trash, "%p %s %s\n",

View File

@ -58,6 +58,7 @@ struct h1c {
struct h1_counters *px_counters; /* h1 counters attached to proxy */
struct buffer_wait buf_wait; /* Wait list for buffer allocation */
struct wait_event wait_event; /* To be used if we're waiting for I/Os */
int glitches; /* Number of glitches on this connection */
};
/* H1 stream descriptor */
@ -95,6 +96,9 @@ struct h1_hdr_entry {
static struct h1_hdrs_map hdrs_map = { .name = NULL, .map = EB_ROOT };
static int accept_payload_with_any_method = 0;
static int h1_be_glitches_threshold = 0; /* backend's max glitches: unlimited */
static int h1_fe_glitches_threshold = 0; /* frontend's max glitches: unlimited */
/* trace source and events */
static void h1_trace(enum trace_level level, uint64_t mask,
const struct trace_source *src,
@ -504,6 +508,41 @@ static void h1_trace_fill_ctx(struct trace_ctx *ctx, const struct trace_source *
}
}
/* report one or more glitches on the connection. That is, any unexpected event
 * that may occasionally happen but, if repeated a bit too much, might indicate
* a misbehaving or completely bogus peer. It normally returns zero, unless the
* glitch limit was reached, in which case an error is also reported on the
* connection.
*/
#define h1_report_glitch(h1c, inc, ...) ({ \
COUNT_GLITCH(__VA_ARGS__); \
_h1_report_glitch(h1c, inc); \
})
static inline int _h1_report_glitch(struct h1c *h1c, int increment)
{
int thres = (h1c->flags & H1C_F_IS_BACK) ?
h1_be_glitches_threshold : h1_fe_glitches_threshold;
h1c->glitches += increment;
if (unlikely(thres && h1c->glitches >= (thres * 3 + 1) / 4)) {
/* at 75% of the threshold, we switch to close mode
* to force clients to periodically reconnect.
*/
h1c->h1s->flags = (h1c->h1s->flags & ~H1S_F_WANT_MSK) | H1S_F_WANT_CLO;
/* at 100% of the threshold, if the thread is busy enough (idle
 * time at or below the configured limit), we also actively kill
 * the connection.
 */
if (h1c->glitches >= thres &&
(th_ctx->idle_pct <= global.tune.glitch_kill_maxidle)) {
h1c->flags |= H1C_F_ERROR;
return 1;
}
}
return 0;
}
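The close-mode trigger rounds to 75% using integer arithmetic: for a threshold of 100 glitches, (100 * 3 + 1) / 4 = 75, so the connection switches to close mode at 75 glitches and may be killed at 100 when the thread is busy enough. A trivial standalone check of the expression:

	#include <stdio.h>

	int main(void)
	{
		int thres = 100;

		/* same expression as in _h1_report_glitch() above */
		printf("close mode from %d glitches out of %d\n",
		       (thres * 3 + 1) / 4, thres); /* prints: 75 out of 100 */
		return 0;
	}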
/*****************************************************/
/* functions below are for dynamic buffer management */
@ -832,6 +871,14 @@ static inline size_t h1s_data_pending(const struct h1s *h1s)
return ((h1m->state == H1_MSG_DONE) ? 0 : b_data(&h1s->h1c->ibuf));
}
static inline void h1s_consume_kop(struct h1s *h1s, size_t count)
{
if (h1s->sd->kop > count)
h1s->sd->kop -= count;
else
h1s->sd->kop = 0;
}
/* Creates a new stream connector and the associate stream. <input> is used as input
* buffer for the stream. On success, it is transferred to the stream and the
* mux is no longer responsible of it. On error, <input> is unchanged, thus the
@ -1265,6 +1312,7 @@ static int h1_init(struct connection *conn, struct proxy *proxy, struct session
h1c->task = NULL;
h1c->req_count = 0;
h1c->term_evts_log = 0;
h1c->glitches = 0;
LIST_INIT(&h1c->buf_wait.list);
h1c->wait_event.tasklet = tasklet_new();
@ -1962,6 +2010,7 @@ static size_t h1_handle_headers(struct h1s *h1s, struct h1m *h1m, struct htx *ht
h1s->h1c->errcode = h1m->err_code;
TRACE_ERROR("parsing error, reject H1 message", H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "parsing error");
}
else if (ret == -2) {
TRACE_STATE("RX path congested, waiting for more space", H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_BLK, h1s->h1c->conn, h1s);
@ -1987,6 +2036,7 @@ static size_t h1_handle_headers(struct h1s *h1s, struct h1m *h1m, struct htx *ht
h1s->h1c->errcode = 413;
TRACE_ERROR("HTTP/1.0 GET/HEAD/DELETE request with a payload forbidden", H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "HTTP/1.0 GET/HEAD/DELETE with payload");
ret = 0;
goto end;
}
@ -2003,6 +2053,7 @@ static size_t h1_handle_headers(struct h1s *h1s, struct h1m *h1m, struct htx *ht
h1s->h1c->errcode = 422;
TRACE_ERROR("Unknown transfer-encoding", H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "unknown transfer-encoding");
ret = 0;
goto end;
}
@ -2022,12 +2073,14 @@ static size_t h1_handle_headers(struct h1s *h1s, struct h1m *h1m, struct htx *ht
TRACE_ERROR("missing/invalid websocket key, reject H1 message",
H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_report_glitch(h1s->h1c, 1, "rejecting missing/invalid websocket key");
ret = 0;
goto end;
} else {
TRACE_ERROR("missing/invalid websocket key, but accepting this "
"violation according to configuration",
H1_EV_RX_DATA|H1_EV_RX_HDRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_report_glitch(h1s->h1c, 1, "accepting missing/invalid websocket key");
}
}
}
@ -2039,6 +2092,7 @@ static size_t h1_handle_headers(struct h1s *h1s, struct h1m *h1m, struct htx *ht
*/
TRACE_STATE("Ignored parsing error", H1_EV_RX_DATA|H1_EV_RX_HDRS, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "ignored parsing error");
}
if (!(h1m->flags & H1_MF_RESP)) {
@ -2078,6 +2132,7 @@ static size_t h1_handle_data(struct h1s *h1s, struct h1m *h1m, struct htx **htx,
h1s->flags |= H1S_F_PARSING_ERROR;
TRACE_ERROR("parsing error, reject H1 message", H1_EV_RX_DATA|H1_EV_RX_BODY|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "parsing error");
}
goto end;
}
@ -2114,6 +2169,7 @@ static size_t h1_handle_trailers(struct h1s *h1s, struct h1m *h1m, struct htx *h
h1s->flags |= H1S_F_PARSING_ERROR;
TRACE_ERROR("parsing error, reject H1 message", H1_EV_RX_DATA|H1_EV_RX_TLRS|H1_EV_H1S_ERR, h1s->h1c->conn, h1s);
h1_capture_bad_message(h1s->h1c, h1s, h1m, buf);
h1_report_glitch(h1s->h1c, 1, "parsing error");
}
else if (ret == -2) {
TRACE_STATE("RX path congested, waiting for more space", H1_EV_RX_DATA|H1_EV_RX_TLRS|H1_EV_H1S_BLK, h1s->h1c->conn, h1s);
@ -3045,7 +3101,7 @@ static size_t h1_make_data(struct h1s *h1s, struct h1m *h1m, struct buffer *buf,
goto error;
}
h1m->curr_len = (h1s->sd->kop ? h1s->sd->kop : count);
h1s->sd->kop = 0;
h1s_consume_kop(h1s, h1m->curr_len);
/* Because chunk meta-data are prepended, the chunk size of the current chunk
* must be handled before the end of the previous chunk.
@ -3137,7 +3193,7 @@ static size_t h1_make_data(struct h1s *h1s, struct h1m *h1m, struct buffer *buf,
h1m->curr_len = 0;
goto full;
}
h1s->sd->kop = 0;
h1s_consume_kop(h1s, h1m->curr_len);
h1m->state = H1_MSG_DATA;
}
@ -4093,6 +4149,7 @@ static int h1_process(struct h1c * h1c)
if (b_data(&h1c->ibuf) && /* Input data to be processed */
((h1c->state < H1_CS_RUNNING) || (h1c->state == H1_CS_DRAINING)) && /* IDLE, EMBRYONIC, UPGRADING or DRAINING */
!(h1c->flags & (H1C_F_IN_SALLOC|H1C_F_ABRT_PENDING))) { /* No allocation failure on the stream rxbuf and no ERROR on the H1C */
int prev_glitches = h1c->glitches;
struct h1s *h1s = h1c->h1s;
struct buffer *buf;
size_t count;
@ -4161,6 +4218,8 @@ static int h1_process(struct h1c * h1c)
h1c->conn->xprt->subscribe(h1c->conn, h1c->conn->xprt_ctx, SUB_RETRY_RECV, &h1c->wait_event);
}
}
if (h1c->glitches != prev_glitches && !(h1c->flags & H1C_F_IS_BACK))
session_add_glitch_ctr(h1c->conn->owner, h1c->glitches - prev_glitches);
}
no_parsing:
@ -4908,6 +4967,7 @@ static size_t h1_nego_ff(struct stconn *sc, struct buffer *input, size_t count,
goto out;
}
h1m->curr_len = count;
h1s_consume_kop(h1s, h1m->curr_len);
}
else {
/* The producer does not know the chunk size, thus this will be emitted at the
@ -5043,11 +5103,13 @@ static size_t h1_done_ff(struct stconn *sc)
struct buffer buf = b_make(b_orig(&h1c->obuf), b_size(&h1c->obuf),
b_peek_ofs(&h1c->obuf, b_data(&h1c->obuf) - sd->iobuf.data + sd->iobuf.offset),
sd->iobuf.data);
h1_prepend_chunk_size(&buf, sd->iobuf.data, sd->iobuf.offset - ((h1m->state == H1_MSG_CHUNK_CRLF) ? 2 : 0));
if (h1m->state == H1_MSG_CHUNK_CRLF)
h1_prepend_chunk_crlf(&buf);
b_add(&h1c->obuf, sd->iobuf.offset);
h1m->state = H1_MSG_CHUNK_CRLF;
h1s_consume_kop(h1s, sd->iobuf.data);
}
total = sd->iobuf.data;
@ -5416,6 +5478,8 @@ static int h1_ctl(struct connection *conn, enum mux_ctl_type mux_ctl, void *outp
if (!(h1c->wait_event.events & SUB_RETRY_RECV))
h1c->conn->xprt->subscribe(h1c->conn, h1c->conn->xprt_ctx, SUB_RETRY_RECV, &h1c->wait_event);
return 0;
case MUX_CTL_GET_GLITCHES:
return h1c->glitches;
case MUX_CTL_GET_NBSTRM:
return h1_used_streams(conn);
case MUX_CTL_GET_MAXSTRM:
@ -5484,6 +5548,7 @@ static int h1_dump_h1c_info(struct buffer *msg, struct h1c *h1c, const char *pfx
(unsigned int)b_head_ofs(&h1c->obuf), (unsigned int)b_size(&h1c->obuf),
tevt_evts2str(h1c->term_evts_log));
chunk_appendf(msg, " .glitches=%d", h1c->glitches);
chunk_appendf(msg, " .task=%p", h1c->task);
if (h1c->task) {
chunk_appendf(msg, " .exp=%s",
@ -5871,6 +5936,27 @@ static int cfg_parse_h1_headers_case_adjust_file(char **args, int section_type,
return 0;
}
/* config parser for global "tune.h1.{fe,be}.glitches-threshold" */
static int cfg_parse_h1_glitches_threshold(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
{
int *vptr;
if (too_many_args(1, args, err, NULL))
return -1;
/* args[0] is "tune.h1.be.glitches-threshold" or "tune.h1.fe.glitches-threshold";
* the 9th character selects the backend or frontend setting.
*/
vptr = (args[0][8] == 'b') ? &h1_be_glitches_threshold : &h1_fe_glitches_threshold;
*vptr = atoi(args[1]);
if (*vptr < 0) {
memprintf(err, "'%s' expects a positive numeric value.", args[0]);
return -1;
}
return 0;
}
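/* Usage sketch (assumed syntax, mirroring the existing h2 glitches
* thresholds):
*
*   global
*       tune.h1.fe.glitches-threshold 1000
*       tune.h1.be.glitches-threshold 1000
*
* A threshold of 0 presumably keeps counting glitches but never degrades the
* connection.
*/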
/* config parser for global "tune.h1.zero-copy-fwd-recv" */
static int cfg_parse_h1_zero_copy_fwd_rcv(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
@ -5914,6 +6000,8 @@ static struct cfg_kw_list cfg_kws = {{ }, {
{ CFG_GLOBAL, "h1-accept-payload-with-any-method", cfg_parse_h1_accept_payload_with_any_method },
{ CFG_GLOBAL, "h1-case-adjust", cfg_parse_h1_header_case_adjust },
{ CFG_GLOBAL, "h1-case-adjust-file", cfg_parse_h1_headers_case_adjust_file },
{ CFG_GLOBAL, "tune.h1.be.glitches-threshold", cfg_parse_h1_glitches_threshold },
{ CFG_GLOBAL, "tune.h1.fe.glitches-threshold", cfg_parse_h1_glitches_threshold },
{ CFG_GLOBAL, "tune.h1.zero-copy-fwd-recv", cfg_parse_h1_zero_copy_fwd_rcv },
{ CFG_GLOBAL, "tune.h1.zero-copy-fwd-send", cfg_parse_h1_zero_copy_fwd_snd },
{ 0, NULL, NULL },

View File

@ -533,6 +533,7 @@ struct task *h2_timeout_task(struct task *t, void *context, unsigned int state);
static int h2_send(struct h2c *h2c);
static int h2_recv(struct h2c *h2c);
static int h2_process(struct h2c *h2c);
static int h2c_send_goaway_error(struct h2c *h2c, struct h2s *h2s);
/* h2_io_cb is exported to see it resolved in "show fd" */
struct task *h2_io_cb(struct task *t, void *ctx, unsigned int state);
static inline struct h2s *h2c_st_by_id(struct h2c *h2c, int id);
@ -979,6 +980,33 @@ static void h2c_update_timeout(struct h2c *h2c)
TRACE_LEAVE(H2_EV_H2C_WAKE);
}
/* returns non-zero if the connection has reached its last possible stream,
* i.e. on a frontend, last_sid is set and max_id has reached it; on a
* backend, last_sid is set at all (the spec forbids initiating new streams
* on a connection that received a GOAWAY, regardless of the advertised last
* stream id).
*/
static inline int h2c_reached_last_stream(const struct h2c *h2c)
{
/* highest stream ID already reached ? */
if (h2c->max_id >= 0x7fffffff)
return 1;
/* GOAWAY sent or received ? */
if (h2c->last_sid < 0)
return 0;
if (h2c->flags & H2_CF_IS_BACK)
return 1;
/* front: reached advertised limit ? */
if (h2c->max_id >= h2c->last_sid)
return 1;
/* ok not yet */
return 0;
}
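/* Illustrative truth table for the helper above (not from the patch):
*   max_id == 0x7fffffff               -> 1 (stream ids exhausted)
*   last_sid < 0 (no GOAWAY seen)      -> 0
*   backend with last_sid set          -> 1 (no new streams after GOAWAY)
*   frontend, max_id >= last_sid       -> 1 (advertised limit reached)
*   frontend, max_id <  last_sid       -> 0
*/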
static __inline int
h2c_is_dead(const struct h2c *h2c)
{
@ -989,7 +1017,7 @@ h2c_is_dead(const struct h2c *h2c)
(!(h2c->conn->owner) && !conn_is_reverse(h2c->conn)) || /* Nobody's left to take care of the connection, drop it now */
(!br_data(h2c->mbuf) && /* mux buffer empty, also process clean events below */
((h2c->flags & H2_CF_RCVD_SHUT) ||
(h2c->last_sid >= 0 && h2c->max_id >= h2c->last_sid)))))
h2c_reached_last_stream(h2c)))))
return 1;
return 0;
@ -1146,12 +1174,14 @@ static inline int h2_streams_left(const struct h2c *h2c)
{
int ret;
if (h2c_reached_last_stream(h2c))
return 0;
/* consider the number of outgoing streams we're allowed to create before
* reaching the last GOAWAY frame seen. max_id is the last assigned id,
* reaching the highest stream number. max_id is the last assigned id,
* nb_reserved is the number of streams which don't yet have an ID.
*/
ret = (h2c->last_sid >= 0) ? h2c->last_sid : 0x7FFFFFFF;
ret = (unsigned int)(ret - h2c->max_id) / 2 - h2c->nb_reserved - 1;
ret = (unsigned int)(0x7FFFFFFF - h2c->max_id) / 2 - h2c->nb_reserved - 1;
if (ret < 0)
ret = 0;
return ret;
@ -1680,10 +1710,25 @@ static inline int _h2c_report_glitch(struct h2c *h2c, int increment)
h2_be_glitches_threshold : h2_fe_glitches_threshold;
h2c->glitches += increment;
if (thres && h2c->glitches >= thres &&
(th_ctx->idle_pct <= global.tune.glitch_kill_maxidle)) {
h2c_error(h2c, H2_ERR_ENHANCE_YOUR_CALM);
return 1;
if (unlikely(thres && h2c->glitches >= (thres * 3 + 1) / 4)) {
/* at 75% of the threshold, we switch to close mode
* to force clients to periodically reconnect.
*/
if (h2c->last_sid <= 0 ||
h2c->last_sid > h2c->max_id + 2 * h2c_max_concurrent_streams(h2c)) {
/* not set yet or was too high */
h2c->last_sid = h2c->max_id + 2 * h2c_max_concurrent_streams(h2c);
h2c_send_goaway_error(h2c, NULL);
}
/* at 100% of the threshold and excess of CPU usage we also
* actively kill the connection.
*/
if (h2c->glitches >= thres &&
(th_ctx->idle_pct <= global.tune.glitch_kill_maxidle)) {
h2c_error(h2c, H2_ERR_ENHANCE_YOUR_CALM);
return 1;
}
}
return 0;
}
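/* Minimal sketch (not from the patch) of the integer math used above: the +1
* bias before the division guarantees a non-zero soft limit for any non-zero
* threshold, e.g. thres=1 gives (1*3+1)/4 = 1 instead of 0.
*/
static inline int glitch_soft_limit_sketch(int thres)
{
/* ~75% of <thres>, never 0 when thres > 0 */
return (thres * 3 + 1) / 4;
}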
@ -3692,6 +3737,7 @@ static struct h2s *h2c_bck_handle_headers(struct h2c *h2c, struct h2s *h2s)
}
/* stream error : send RST_STREAM */
h2c_report_glitch(h2c, 1, "couldn't decode response HEADERS");
TRACE_ERROR("couldn't decode response HEADERS", H2_EV_RX_FRAME|H2_EV_RX_HDR, h2c->conn, h2s);
h2s_error(h2s, H2_ERR_PROTOCOL_ERROR);
h2c->st0 = H2_CS_FRAME_E;
@ -5161,8 +5207,7 @@ static int h2_process(struct h2c *h2c)
if ((h2c->flags & H2_CF_ERROR) || h2c_read0_pending(h2c) ||
h2c->st0 == H2_CS_ERROR2 || h2c->flags & H2_CF_GOAWAY_FAILED ||
(eb_is_empty(&h2c->streams_by_id) && h2c->last_sid >= 0 &&
h2c->max_id >= h2c->last_sid)) {
(eb_is_empty(&h2c->streams_by_id) && h2c_reached_last_stream(h2c))) {
h2_wake_some_streams(h2c, 0);
if (eb_is_empty(&h2c->streams_by_id)) {
@ -7795,7 +7840,7 @@ static size_t h2_rcv_buf(struct stconn *sc, struct buffer *buf, size_t count, in
ret -= h2s_htx->data;
end:
/* If ther is no content-length, take care to update <kip> field */
/* If there is no content-length, take care to update <kip> field */
if (!(h2s->flags & H2_SF_DATA_CLEN))
h2s->sd->kip += prev_body_len - h2s->body_len;

View File

@ -3832,7 +3832,7 @@ static int qmux_strm_attach(struct connection *conn, struct sedesc *sd, struct s
*/
BUG_ON(!qcc_fctl_avail_streams(qcc, 1));
/* Connnection should not be reused if already on error/closed. */
/* Connection should not be reused if already on error/closed. */
BUG_ON(qcc->flags & QC_CF_ERRL || qcc->app_st >= QCC_APP_ST_SHUT);
qcs = qcc_init_stream_local(qcc, 1);

View File

@ -26,10 +26,9 @@
#include <haproxy/errors.h>
#include <haproxy/fd.h>
#include <haproxy/global.h>
#include <haproxy/list.h>
#include <haproxy/log.h>
#include <haproxy/listener.h>
#include <haproxy/list.h>
#include <haproxy/listener.h>
#include <haproxy/mworker.h>
#include <haproxy/peers.h>
#include <haproxy/proto_sockpair.h>
@ -223,7 +222,7 @@ int mworker_env_to_proc_list()
child->version = strdup(subtoken+8);
}
}
if (child->pid) {
if (child->pid > 0) {
LIST_APPEND(&proc_list, &child->list);
} else {
mworker_free_child(child);
@ -509,6 +508,14 @@ void mworker_catch_sigterm(struct sig_handler *sh)
mworker_kill(sig);
}
/* handle operations that can't be done in the signal handler */
static struct task *mworker_task_child_failure(struct task *task, void *context, unsigned int state)
{
mworker_unblock_signals();
task_destroy(task);
return NULL;
}
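/* Illustrative note (not from the patch): unblocking signals straight from
* the SIGCHLD handler would not be safe, so the handler path only schedules
* this one-shot task; it runs from the regular event loop, restores the
* signals and destroys itself.
*/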
/*
* Performs some routines for the worker process which has failed the reload,
* and updates the global load_status.
@ -516,6 +523,7 @@ void mworker_catch_sigterm(struct sig_handler *sh)
static void mworker_on_new_child_failure(int exitpid, int status)
{
struct mworker_proc *child;
struct task *t;
/* increment the number of failed reloads */
list_for_each_entry(child, &proc_list, list) {
@ -532,6 +540,15 @@ static void mworker_on_new_child_failure(int exitpid, int status)
* the READY=1 signal still needs to be sent */
if (global.tune.options & GTUNE_USE_SYSTEMD)
sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");
/* call a task to unblock the signals from outside the sig handler */
if ((t = task_new_here()) == NULL) {
ha_warning("Can't restore HAProxy signals!\n");
return;
}
t->process = mworker_task_child_failure;
task_wakeup(t, TASK_WOKEN_MSG);
}
/*
@ -808,9 +825,26 @@ void mworker_cleanup_proc()
struct cli_showproc_ctx {
int debug;
int next_uptime; /* uptime must be greater than this value */
int next_reload; /* reload number to resume from, 0 = from the beginning */
};
/* Append a single worker row to trash (shared between current/old sections) */
static void cli_append_worker_row(struct cli_showproc_ctx *ctx, struct mworker_proc *child, time_t tv_sec)
{
char *uptime = NULL;
int up = tv_sec - child->timestamp;
if (up < 0) /* must never be negative because of clock drift */
up = 0;
memprintf(&uptime, "%dd%02dh%02dm%02ds", up / 86400, (up % 86400) / 3600, (up % 3600) / 60, (up % 60));
chunk_appendf(&trash, "%-15u %-15s %-15d %-15s %-15s", child->pid, "worker", child->reloads, uptime, child->version);
if (ctx->debug)
chunk_appendf(&trash, "\t\t %-15d %-15d", child->ipc_fd[0], child->ipc_fd[1]);
chunk_appendf(&trash, "\n");
ha_free(&uptime);
}
/* Displays workers and processes */
static int cli_io_handler_show_proc(struct appctx *appctx)
{
@ -826,7 +860,7 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
chunk_reset(&trash);
if (ctx->next_uptime == 0) {
if (ctx->next_reload == 0) {
memprintf(&reloadtxt, "%d [failed: %d]", proc_self->reloads, proc_self->failedreloads);
chunk_printf(&trash, "#%-14s %-15s %-15s %-15s %-15s", "<PID>", "<type>", "<reloads>", "<uptime>", "<version>");
if (ctx->debug)
@ -844,18 +878,14 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
ha_free(&uptime);
/* displays current processes */
if (ctx->next_uptime == 0)
if (ctx->next_reload == 0)
chunk_appendf(&trash, "# workers\n");
list_for_each_entry(child, &proc_list, list) {
/* don't display current worker if we only need the next ones */
if (ctx->next_uptime != 0)
if (ctx->next_reload != 0)
continue;
up = date.tv_sec - child->timestamp;
if (up < 0) /* must never be negative because of clock drift */
up = 0;
if (!(child->options & PROC_O_TYPE_WORKER))
continue;
@ -863,52 +893,44 @@ static int cli_io_handler_show_proc(struct appctx *appctx)
old++;
continue;
}
memprintf(&uptime, "%dd%02dh%02dm%02ds", up / 86400, (up % 86400) / 3600, (up % 3600) / 60, (up % 60));
chunk_appendf(&trash, "%-15u %-15s %-15d %-15s %-15s", child->pid, "worker", child->reloads, uptime, child->version);
if (ctx->debug)
chunk_appendf(&trash, "\t\t %-15d %-15d", child->ipc_fd[0], child->ipc_fd[1]);
chunk_appendf(&trash, "\n");
ha_free(&uptime);
cli_append_worker_row(ctx, child, date.tv_sec);
}
if (applet_putchk(appctx, &trash) == -1)
return 0;
/* displays old processes */
if (old || ctx->next_uptime) { /* there's more */
if (ctx->next_uptime == 0)
if (old || ctx->next_reload) { /* there's more */
if (ctx->next_reload == 0)
chunk_appendf(&trash, "# old workers\n");
list_for_each_entry(child, &proc_list, list) {
up = date.tv_sec - child->timestamp;
if (up <= 0) /* must never be negative because of clock drift */
up = 0;
if (child->timestamp < ctx->next_uptime)
/* If we're resuming, skip entries that were already printed (reload >= ctx->next_reload) */
if (ctx->next_reload && child->reloads >= ctx->next_reload)
continue;
if (!(child->options & PROC_O_TYPE_WORKER))
continue;
if (child->options & PROC_O_LEAVING) {
memprintf(&uptime, "%dd%02dh%02dm%02ds", up / 86400, (up % 86400) / 3600, (up % 3600) / 60, (up % 60));
chunk_appendf(&trash, "%-15u %-15s %-15d %-15s %-15s", child->pid, "worker", child->reloads, uptime, child->version);
if (ctx->debug)
chunk_appendf(&trash, "\t\t %-15d %-15d", child->ipc_fd[0], child->ipc_fd[1]);
chunk_appendf(&trash, "\n");
ha_free(&uptime);
cli_append_worker_row(ctx, child, date.tv_sec);
/* Try to flush so we can resume after this reload on next page if the buffer is full. */
if (applet_putchk(appctx, &trash) == -1) {
/* resume at this reload (exclude it on next pass) */
ctx->next_reload = child->reloads; /* resume after entries >= this reload */
return 0;
}
chunk_reset(&trash);
}
/* start from there if there's not enough place */
ctx->next_uptime = child->timestamp;
if (applet_putchk(appctx, &trash) == -1)
return 0;
}
}
/* dump complete */
/* dump complete: reset resume cursor so next 'show proc' starts from the top */
ctx->next_reload = 0;
return 1;
}
/* reload the master process */
static int cli_parse_show_proc(char **args, char *payload, struct appctx *appctx, void *private)
{

855 src/net_helper.c (new file)
View File

@ -0,0 +1,855 @@
#include <string.h>
#include <stdio.h>
#include <haproxy/api.h>
#include <haproxy/arg.h>
#include <haproxy/buf.h>
#include <haproxy/cfgparse.h>
#include <haproxy/chunk.h>
#include <haproxy/errors.h>
#include <haproxy/global.h>
#include <haproxy/net_helper.h>
#include <haproxy/sample.h>
/*****************************************************/
/* Converters used to process Ethernet frame headers */
/*****************************************************/
/* returns only the data part of an input ethernet frame header, skipping any
* possible VLAN header. This is typically used to return the beginning of the
* IP packet.
*/
static int sample_conv_eth_data(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t idx;
for (idx = 12; idx + 2 < smp->data.u.str.data; idx += 4) {
if (read_n16(smp->data.u.str.area + idx) != 0x8100) {
smp->data.u.str.area += idx + 2;
smp->data.u.str.data -= idx + 2;
return 1;
}
}
/* incomplete header */
return 0;
}
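/* Assumed frame layout for the eth.* converters (not from the patch):
*   dst MAC (6) | src MAC (6) | 0..N x [TPID 0x8100 (2) | TCI (2)] |
*   EtherType (2) | payload
* which is why the parsing loops in this file start at offset 12 and step by
* 4 bytes per 802.1Q tag.
*/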
/* returns the 6 bytes of MAC DST address of an input ethernet frame header */
static int sample_conv_eth_dst(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 6)
return 0;
smp->data.u.str.data = 6; // output length is 6
return 1;
}
/* returns only the ethernet header for an input ethernet frame header,
* including any possible VLAN headers, but stopping before data.
*/
static int sample_conv_eth_hdr(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t idx;
for (idx = 12; idx + 2 < smp->data.u.str.data; idx += 4) {
if (read_n16(smp->data.u.str.area + idx) != 0x8100) {
smp->data.u.str.data = idx + 2;
return 1;
}
}
/* incomplete header */
return 0;
}
/* returns the ethernet protocol of an input ethernet frame header, skipping
* any VLAN tag.
*/
static int sample_conv_eth_proto(const struct arg *arg_p, struct sample *smp, void *private)
{
ushort proto;
size_t idx;
for (idx = 12; idx + 2 < smp->data.u.str.data; idx += 4) {
proto = read_n16(smp->data.u.str.area + idx);
if (proto != 0x8100) {
smp->data.u.sint = proto;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
}
/* incomplete header */
return 0;
}
/* returns the 6 bytes of MAC SRC address of an input ethernet frame header */
static int sample_conv_eth_src(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 12)
return 0;
smp->data.u.str.area += 6; // src is at address 6
smp->data.u.str.data = 6; // output length is 6
return 1;
}
/* returns the last VLAN ID seen in an input ethernet frame header, if any.
* Note that VLAN ID 0 is treated as the absence of a VLAN.
*/
static int sample_conv_eth_vlan(const struct arg *arg_p, struct sample *smp, void *private)
{
ushort vlan = 0;
size_t idx;
for (idx = 12; idx + 2 < smp->data.u.str.data; idx += 4) {
if (read_n16(smp->data.u.str.area + idx) != 0x8100) {
smp->data.u.sint = vlan;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return !!vlan;
}
if (idx + 4 > smp->data.u.str.data)
break;
vlan = read_n16(smp->data.u.str.area + idx + 2) & 0xfff;
}
/* incomplete header */
return 0;
}
/*******************************************************/
/* Converters used to process IPv4/IPv6 packet headers */
/*******************************************************/
/* returns the total header length for the input IP packet header (v4 or v6),
* including all extensions if any. It corresponds to the length to skip to
* find the TCP or UDP header. If data are missing or unparsable, it returns
* 0.
*/
static size_t ip_header_length(const struct sample *smp)
{
size_t len;
uchar next;
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
len = (smp->data.u.str.area[0] & 0xF) * 4;
if (len < 20 || smp->data.u.str.data < len)
return 0;
}
else if (ver == 6) {
if (smp->data.u.str.data < 40)
return 0;
len = 40;
next = smp->data.u.str.area[6];
while (next != 6 && next != 17) {
if (smp->data.u.str.data < len + 2)
return 0;
next = smp->data.u.str.area[len];
len += (uchar)smp->data.u.str.area[len + 1] * 8 + 8;
}
if (smp->data.u.str.data < len)
return 0;
}
else {
return 0;
}
return len;
}
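/* Worked example (not from the patch): an IPv4 header with IHL=6 yields
* 6*4 = 24 bytes; an IPv6 packet with one 8-byte extension header yields
* 40 + 8 = 48, i.e. the offset of the TCP/UDP header. Each IPv6 extension
* advances by hdr-ext-len*8+8 bytes as per RFC 8200.
*/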
/* returns the payload following the input IP packet header (v4 or v6),
* skipping all extensions if any. For IPv6, the returned data thus starts at
* the TCP or UDP header.
*/
static int sample_conv_ip_data(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len;
len = ip_header_length(smp);
if (!len)
return 0;
/* advance buffer by <len> */
smp->data.u.str.area += len;
smp->data.u.str.data -= len;
return 1;
}
/* returns the DF (don't fragment) flag from an IPv4 header, as 0 or 1. The
* value is always one for IPv6 since DF is implicit.
*/
static int sample_conv_ip_df(const struct arg *arg_p, struct sample *smp, void *private)
{
uchar ver;
uchar df;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
if (smp->data.u.str.data < 7)
return 0;
df = !!(smp->data.u.str.area[6] & 0x40);
}
else if (ver == 6) {
df = 1;
}
else {
return 0;
}
smp->data.u.sint = df;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP DST address found in an input IP packet header (v4 or v6). */
static int sample_conv_ip_dst(const struct arg *arg_p, struct sample *smp, void *private)
{
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.ipv4.s_addr = read_u32(smp->data.u.str.area + 16);
smp->data.type = SMP_T_IPV4;
}
else if (ver == 6) {
if (smp->data.u.str.data < 40)
return 0;
memcpy(&smp->data.u.ipv6, smp->data.u.str.area + 24, 16);
smp->data.type = SMP_T_IPV6;
}
else {
return 0;
}
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP header only for an input IP packet header (v4 or v6), including
* all extensions if any. For IPv6, it includes every extension before TCP/UDP.
*/
static int sample_conv_ip_hdr(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len;
len = ip_header_length(smp);
if (!len)
return 0;
/* truncate buffer to <len> */
smp->data.u.str.data = len;
return 1;
}
/* returns the upper layer protocol number (TCP/UDP) for an input IP packet
* header (v4 or v6).
*/
static int sample_conv_ip_proto(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len;
uchar next;
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
if (smp->data.u.str.data < 10)
return 0;
next = smp->data.u.str.area[9];
}
else if (ver == 6) {
/* skip all extensions */
if (smp->data.u.str.data < 40)
return 0;
len = 40;
next = smp->data.u.str.area[6];
while (next != 6 && next != 17) {
if (smp->data.u.str.data < len + 2)
return 0;
next = smp->data.u.str.area[len];
len += (uchar)smp->data.u.str.area[len + 1] * 8 + 8;
}
if (smp->data.u.str.data < len)
return 0;
}
else {
return 0;
}
/* protocol number is in <next> */
smp->data.u.sint = next;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP SRC address found in an input IP packet header (v4 or v6). */
static int sample_conv_ip_src(const struct arg *arg_p, struct sample *smp, void *private)
{
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.ipv4.s_addr = read_u32(smp->data.u.str.area + 12);
smp->data.type = SMP_T_IPV4;
}
else if (ver == 6) {
if (smp->data.u.str.data < 40)
return 0;
memcpy(&smp->data.u.ipv6, smp->data.u.str.area + 8, 16);
smp->data.type = SMP_T_IPV6;
}
else {
return 0;
}
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP TOS/TC field found in an input IP packet header (v4 or v6). */
static int sample_conv_ip_tos(const struct arg *arg_p, struct sample *smp, void *private)
{
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
/* TOS field is at offset 1 */
if (smp->data.u.str.data < 2)
return 0;
smp->data.u.sint = (uchar)smp->data.u.str.area[1];
}
else if (ver == 6) {
/* TOS field is between offset 0 and 1 */
if (smp->data.u.str.data < 2)
return 0;
smp->data.u.sint = (uchar)(read_n16(smp->data.u.str.area) >> 4);
}
else {
return 0;
}
/* OK we have the value in data.u.sint */
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP TTL/HL field found in an input IP packet header (v4 or v6). */
static int sample_conv_ip_ttl(const struct arg *arg_p, struct sample *smp, void *private)
{
uchar ver;
if (smp->data.u.str.data < 1)
return 0;
ver = (uchar)smp->data.u.str.area[0] >> 4;
if (ver == 4) {
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = (uchar)smp->data.u.str.area[8];
}
else if (ver == 6) {
if (smp->data.u.str.data < 40)
return 0;
smp->data.u.sint = (uchar)smp->data.u.str.area[7];
}
else {
return 0;
}
/* OK we have the value in data.u.sint */
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the IP version found in an input IP packet header (v4 or v6). */
static int sample_conv_ip_ver(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 1)
return 0;
smp->data.u.sint = (uchar)smp->data.u.str.area[0] >> 4;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/******************************************/
/* Converters used to process TCP headers */
/******************************************/
/* returns the TCP header length in bytes if complete, otherwise zero */
static int tcp_fullhdr_length(const struct sample *smp)
{
size_t ofs;
if (smp->data.u.str.data < 20)
return 0;
/* check that header is complete */
ofs = ((uchar)smp->data.u.str.area[12] >> 4) * 4;
if (ofs < 20 || smp->data.u.str.data < ofs)
return 0;
return ofs;
}
/* returns the offset in the input TCP header where option kind <opt> is first
* seen, otherwise 0 if not found. NOP and END cannot be searched.
*/
static size_t tcp_fullhdr_find_opt(const struct sample *smp, uint8_t opt)
{
size_t len = tcp_fullhdr_length(smp);
size_t next = 20, curr;
while ((curr = next) < len) {
if (smp->data.u.str.area[curr] == 0) // kind0=end of options
break;
/* kind1 = NOP and is a single byte, others have a length field */
if (smp->data.u.str.area[curr] == 1)
next = curr + 1;
else if (curr + 1 < len)
next = curr + smp->data.u.str.area[curr + 1];
else
break;
if (next <= curr) // malformed option length, stop parsing
break;
if (smp->data.u.str.area[curr] == opt && next <= len)
return curr;
}
return 0;
}
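/* Reminder (not from the patch): TCP options are encoded as
* kind(1) + len(1) + data(len-2), except kind 0 (END) and kind 1 (NOP) which
* are single bytes. E.g. MSS is kind 2, len 4, so its 16-bit value is read at
* <ofs>+2 by the MSS converter below.
*/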
/* returns the destination port field found in an input TCP header */
static int sample_conv_tcp_dst(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = read_n16(smp->data.u.str.area + 2);
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the flags field found in an input TCP header */
static int sample_conv_tcp_flags(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = (uchar)smp->data.u.str.area[13];
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the MSS value of an input TCP header, or 0 if absent. Returns
* nothing if the header is incomplete.
*/
static int sample_conv_tcp_options_mss(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 2 /* MSS */);
if (!len)
return 0;
smp->data.u.sint = ofs ? read_n16(smp->data.u.str.area + ofs + 2) : 0;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns 1 if the SackPerm option is present in an input TCP header,
* otherwise 0. Returns nothing if the header is incomplete.
*/
static int sample_conv_tcp_options_sack(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 4 /* sackperm */);
if (!len)
return 0;
smp->data.u.sint = !!ofs;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns 1 if the TimeStamp option is present in an input TCP header,
* otherwise 0. Returns nothing if the header is incomplete.
*/
static int sample_conv_tcp_options_tsopt(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 8 /* TS */);
if (!len)
return 0;
smp->data.u.sint = !!ofs;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the TSval value in the TimeStamp option found in an input TCP
* header, if found, otherwise 0. Returns nothing if the header is incomplete
* (see also tsopt).
*/
static int sample_conv_tcp_options_tsval(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 8 /* TS */);
if (!len)
return 0;
smp->data.u.sint = ofs ? read_n32(smp->data.u.str.area + ofs + 2) : 0;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the window scaling shift count from an input TCP header, otherwise 0
* if option not found (see also wsopt). Returns nothing if the header is
* incomplete.
*/
static int sample_conv_tcp_options_wscale(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 3 /* wscale */);
if (!len)
return 0;
smp->data.u.sint = ofs ? (uchar)smp->data.u.str.area[ofs + 2] : 0;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns 1 if the WScale option is present in an input TCP header,
* otherwise 0. Returns nothing if the header is incomplete.
*/
static int sample_conv_tcp_options_wsopt(const struct arg *arg_p, struct sample *smp, void *private)
{
size_t len = tcp_fullhdr_length(smp);
size_t ofs = tcp_fullhdr_find_opt(smp, 3 /* wscale */);
if (!len)
return 0;
smp->data.u.sint = !!ofs;
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns only the TCP options kinds of an input TCP header, as a binary
* block of one byte per option.
*/
static int sample_conv_tcp_options_list(const struct arg *arg_p, struct sample *smp, void *private)
{
struct buffer *trash = get_trash_chunk();
size_t len = tcp_fullhdr_length(smp);
size_t ofs = 20;
if (!len)
return 0;
while (ofs < len) {
if (smp->data.u.str.area[ofs] == 0) // kind0=end of options
break;
trash->area[trash->data++] = smp->data.u.str.area[ofs];
/* kind1 = NOP and is a single byte, others have a length field */
if (smp->data.u.str.area[ofs] == 1)
ofs++;
else if (ofs + 1 < len && smp->data.u.str.area[ofs + 1] >= 2)
ofs += smp->data.u.str.area[ofs + 1];
else
break;
}
/* returns a binary block of 1 byte per option */
smp->data.u.str = *trash;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the sequence number field found in an input TCP header */
static int sample_conv_tcp_seq(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = read_n32(smp->data.u.str.area + 4);
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the source port field found in an input TCP header */
static int sample_conv_tcp_src(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = read_n16(smp->data.u.str.area);
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* returns the window field found in an input TCP header */
static int sample_conv_tcp_win(const struct arg *arg_p, struct sample *smp, void *private)
{
if (smp->data.u.str.data < 20)
return 0;
smp->data.u.sint = read_n16(smp->data.u.str.area + 14);
smp->data.type = SMP_T_SINT;
smp->flags &= ~SMP_F_CONST;
return 1;
}
/* Builds a binary fingerprint of the IP+TCP input contents that are supposed
* to rely essentially on the client stack's settings. This can be used for
* example to selectively block bad behaviors at one IP address without
* blocking others. The resulting fingerprint is a binary block of 56 to 376
* bits, i.e. 7 to 47 bytes (the 7-byte fixed part plus up to 40 TCP option
* kind bytes depending on the provided TCP extensions).
*/
static int sample_conv_ip_fp(const struct arg *arg_p, struct sample *smp, void *private)
{
struct buffer *trash = get_trash_chunk();
char *ipsrc;
uchar ipver;
uchar iptos;
uchar ipttl;
uchar ipdf;
uchar ipext;
uchar tcpflags;
uchar tcplen;
uchar tcpws;
ushort pktlen;
ushort tcpwin;
ushort tcpmss;
size_t iplen;
size_t ofs;
int mode;
/* optional <mode> argument, defaults to 0 when absent */
if (arg_p[0].type == ARGT_SINT)
mode = arg_p[0].data.sint;
else
mode = 0;
/* retrieve IP version */
if (smp->data.u.str.data < 1)
return 0;
ipver = (uchar)smp->data.u.str.area[0] >> 4;
if (ipver == 4) {
/* check fields for IPv4 */
// extension present if header length != 5 words.
ipext = (smp->data.u.str.area[0] & 0xF) != 5;
iplen = (smp->data.u.str.area[0] & 0xF) * 4;
if (iplen < 20 || smp->data.u.str.data < iplen)
return 0;
iptos = smp->data.u.str.area[1];
pktlen = read_n16(smp->data.u.str.area + 2);
ipdf = !!(smp->data.u.str.area[6] & 0x40);
ipttl = smp->data.u.str.area[8];
ipsrc = smp->data.u.str.area + 12;
}
else if (ipver == 6) {
/* check fields for IPv6 */
if (smp->data.u.str.data < 40)
return 0;
pktlen = 40 + read_n16(smp->data.u.str.area + 4);
// extension/next proto => ext present if !tcp && !udp
ipext = smp->data.u.str.area[6];
ipext = ipext != 6 && ipext != 17;
iptos = read_n16(smp->data.u.str.area) >> 4;
ipdf = 1; // no fragments by default in IPv6
ipttl = smp->data.u.str.area[7];
ipsrc = smp->data.u.str.area + 8;
}
else
return 0;
/* prepare trash to contain at least 7 bytes */
trash->data = 7;
/* store the TOS in the FP's first byte */
trash->area[0] = iptos;
if (mode & 1) // append TTL
trash->area[trash->data++] = ipttl;
/* keep only two bits for TTL: <=32, <=64, <=128, <=255 */
ipttl = (ipttl > 64) ? ((ipttl > 128) ? 3 : 2) : ((ipttl > 32) ? 1 : 0);
/* OK we've collected required IP fields, let's advance to TCP now */
iplen = ip_header_length(smp);
if (!iplen || iplen > pktlen)
return 0;
/* advance buffer by <len> */
smp->data.u.str.area += iplen;
smp->data.u.str.data -= iplen;
pktlen -= iplen;
/* now SMP points to the TCP header. It must be complete */
tcplen = tcp_fullhdr_length(smp);
if (!tcplen || tcplen > pktlen)
return 0;
pktlen -= tcplen; // remaining data length (e.g. TFO)
tcpflags = smp->data.u.str.area[13];
tcpwin = read_n16(smp->data.u.str.area + 14);
/* second byte of FP contains:
* - bit 7..4: IP.v6(1), IP.DF(1), IP.TTL(2),
* - bit 3..0: IP.ext(1), TCP.have_data(1), TCP.CWR(1), TCP.ECE(1)
*/
trash->area[1] =
((ipver == 6) << 7) |
(ipdf << 6) |
(ipttl << 4) |
(ipext << 3) |
((pktlen > 0) << 2) | // data present (TFO)
(tcpflags >> 6 << 0); // CWR, ECE
tcpmss = tcpws = 0;
ofs = 20;
while (ofs < tcplen) {
size_t next;
if (smp->data.u.str.area[ofs] == 0) // kind0=end of options
break;
/* kind1 = NOP and is a single byte, others have a length field */
if (smp->data.u.str.area[ofs] == 1)
next = ofs + 1;
else if (ofs + 1 < tcplen)
next = ofs + smp->data.u.str.area[ofs + 1];
else
break;
if (next <= ofs || next > tcplen) // malformed length or truncated option
break;
/* option is complete, take a copy of it */
if (mode & 2) // mode & 2: append tcp.options_list
trash->area[trash->data++] = smp->data.u.str.area[ofs];
if (smp->data.u.str.area[ofs] == 2 /* MSS */) {
tcpmss = read_n16(smp->data.u.str.area + ofs + 2);
}
else if (smp->data.u.str.area[ofs] == 3 /* WS */) {
tcpws = (uchar)smp->data.u.str.area[ofs + 2];
/* output from 1 to 15, thus 0=not found */
tcpws = tcpws > 14 ? 15 : tcpws + 1;
}
ofs = next;
}
/* third byte contains hdrlen(4) and wscale(4) */
trash->area[2] = (tcplen << 2) | tcpws;
/* then tcpwin(16) then tcpmss(16) */
write_n16(trash->area + 3, tcpwin);
write_n16(trash->area + 5, tcpmss);
/* mode 4: append source IP address */
if (mode & 4) {
iplen = (ipver == 4) ? 4 : 16;
memcpy(trash->area + trash->data, ipsrc, iplen);
trash->data += iplen;
}
/* option kinds if any are stored starting at offset 7 */
smp->data.u.str = *trash;
smp->flags &= ~SMP_F_CONST;
return 1;
}
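/* Illustrative recap (not from the patch) of the fingerprint built above:
*   [0]    IP TOS/TC
*   [1]    packed flags: v6 | DF | TTL class (2 bits) | ext | data | CWR | ECE
*   [2]    TCP header length (4 bits) | wscale+1 (4 bits)
*   [3-4]  TCP window (big endian)
*   [5-6]  TCP MSS (big endian)
*   [7...] optional: TTL (mode & 1), option kinds (mode & 2), src IP (mode & 4)
*/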
/* Note: must not be declared <const> as its list will be overwritten */
static struct sample_conv_kw_list sample_conv_kws = {ILH, {
{ "eth.data", sample_conv_eth_data, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "eth.dst", sample_conv_eth_dst, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "eth.hdr", sample_conv_eth_hdr, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "eth.proto", sample_conv_eth_proto, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "eth.src", sample_conv_eth_src, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "eth.vlan", sample_conv_eth_vlan, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "ip.data", sample_conv_ip_data, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "ip.df", sample_conv_ip_df, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "ip.dst", sample_conv_ip_dst, 0, NULL, SMP_T_BIN, SMP_T_ADDR },
{ "ip.fp", sample_conv_ip_fp, ARG1(0,SINT), NULL, SMP_T_BIN, SMP_T_BIN },
{ "ip.hdr", sample_conv_ip_hdr, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "ip.proto", sample_conv_ip_proto, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "ip.src", sample_conv_ip_src, 0, NULL, SMP_T_BIN, SMP_T_ADDR },
{ "ip.tos", sample_conv_ip_tos, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "ip.ttl", sample_conv_ip_ttl, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "ip.ver", sample_conv_ip_ver, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.dst", sample_conv_tcp_dst, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.flags", sample_conv_tcp_flags, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.mss", sample_conv_tcp_options_mss, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.sack", sample_conv_tcp_options_sack, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.tsopt", sample_conv_tcp_options_tsopt, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.tsval", sample_conv_tcp_options_tsval, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.wscale", sample_conv_tcp_options_wscale, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options.wsopt", sample_conv_tcp_options_wsopt, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.options_list", sample_conv_tcp_options_list, 0, NULL, SMP_T_BIN, SMP_T_BIN },
{ "tcp.seq", sample_conv_tcp_seq, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.src", sample_conv_tcp_src, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ "tcp.win", sample_conv_tcp_win, 0, NULL, SMP_T_BIN, SMP_T_SINT },
{ NULL, NULL, 0, 0, 0 },
}};
INITCALL1(STG_REGISTER, sample_register_convs, &sample_conv_kws);

View File

@ -15,6 +15,7 @@
#include <errno.h>
#include <import/cebs_tree.h>
#include <import/ceb32_tree.h>
#include <import/ebistree.h>
#include <import/ebpttree.h>
#include <import/ebsttree.h>
@ -31,6 +32,18 @@
#include <haproxy/xxhash.h>
/* Convenience macros for iterating over generations. */
#define pat_ref_gen_foreach(gen, ref) \
for (gen = cebu32_item_first(&ref->gen_root, gen_node, gen_id, struct pat_ref_gen); \
gen; \
gen = cebu32_item_next(&ref->gen_root, gen_node, gen_id, gen))
/* Safe variant that allows deleting an entry in the body of the loop. */
#define pat_ref_gen_foreach_safe(gen, next, ref) \
for (gen = cebu32_item_first(&ref->gen_root, gen_node, gen_id, struct pat_ref_gen); \
gen && (next = cebu32_item_next(&ref->gen_root, gen_node, gen_id, gen), 1); \
gen = next)
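/* Usage sketch (not from the patch): walking every element of every
* generation, as several call sites below now do:
*
*   struct pat_ref_gen *gen;
*   struct pat_ref_elt *elt;
*
*   pat_ref_gen_foreach(gen, ref)
*       list_for_each_entry(elt, &gen->head, list)
*           do_something(elt); // do_something() is hypothetical
*/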
const char *const pat_match_names[PAT_MATCH_NUM] = {
[PAT_MATCH_FOUND] = "found",
[PAT_MATCH_BOOL] = "bool",
@ -1568,9 +1581,13 @@ struct pat_ref *pat_ref_lookupid(int unique_id)
*/
void pat_ref_delete_by_ptr(struct pat_ref *ref, struct pat_ref_elt *elt)
{
struct pat_ref_gen *gen;
struct pattern_expr *expr;
struct bref *bref, *back;
gen = pat_ref_gen_get(ref, elt->gen_id);
BUG_ON(!gen);
/*
* we have to unlink all watchers from this reference pattern. We must
* not relink them if this elt was the last one in the list.
@ -1578,7 +1595,7 @@ void pat_ref_delete_by_ptr(struct pat_ref *ref, struct pat_ref_elt *elt)
list_for_each_entry_safe(bref, back, &elt->back_refs, users) {
LIST_DELETE(&bref->users);
LIST_INIT(&bref->users);
if (elt->list.n != &ref->head)
if (elt->list.n != &gen->head)
LIST_APPEND(&LIST_ELEM(elt->list.n, typeof(elt), list)->back_refs, &bref->users);
bref->ref = elt->list.n;
}
@ -1593,7 +1610,7 @@ void pat_ref_delete_by_ptr(struct pat_ref *ref, struct pat_ref_elt *elt)
HA_RWLOCK_WRUNLOCK(PATEXP_LOCK, &expr->lock);
LIST_DELETE(&elt->list);
cebs_item_delete(&ref->ceb_root, node, pattern, elt);
cebs_item_delete(&gen->elt_root, node, pattern, elt);
free(elt->sample);
free(elt);
HA_ATOMIC_INC(&patterns_freed);
@ -1608,19 +1625,67 @@ void pat_ref_delete_by_ptr(struct pat_ref *ref, struct pat_ref_elt *elt)
*/
int pat_ref_delete_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt)
{
struct pat_ref_gen *gen;
struct pat_ref_elt *elt, *safe;
/* delete pattern from reference */
list_for_each_entry_safe(elt, safe, &ref->head, list) {
if (elt == refelt) {
event_hdl_publish(&ref->e_subs, EVENT_HDL_SUB_PAT_REF_DEL, NULL);
pat_ref_delete_by_ptr(ref, elt);
return 1;
pat_ref_gen_foreach(gen, ref) {
list_for_each_entry_safe(elt, safe, &gen->head, list) {
if (elt == refelt) {
event_hdl_publish(&ref->e_subs, EVENT_HDL_SUB_PAT_REF_DEL, NULL);
pat_ref_delete_by_ptr(ref, elt);
return 1;
}
}
}
return 0;
}
/* Create a new generation object.
*
* Returns NULL in case of memory allocation failure.
*/
struct pat_ref_gen *pat_ref_gen_new(struct pat_ref *ref, unsigned int gen_id)
{
struct pat_ref_gen *gen, *old;
gen = malloc(sizeof(struct pat_ref_gen));
if (!gen)
return NULL;
LIST_INIT(&gen->head);
ceb_init_root(&gen->elt_root);
gen->gen_id = gen_id;
old = cebu32_item_insert(&ref->gen_root, gen_node, gen_id, gen);
BUG_ON(old != gen, "Generation ID already exists");
return gen;
}
/* Find the generation <gen_id> in the pattern reference <ref>.
*
* Returns NULL if the generation cannot be found.
*/
struct pat_ref_gen *pat_ref_gen_get(struct pat_ref *ref, unsigned int gen_id)
{
struct pat_ref_gen *gen;
/* We optimistically try to use the cached generation if it's the current one. */
if (likely(gen_id == ref->curr_gen && gen_id == ref->cached_gen.id && ref->cached_gen.data))
return ref->cached_gen.data;
gen = cebu32_item_lookup(&ref->gen_root, gen_node, gen_id, gen_id, struct pat_ref_gen);
if (unlikely(!gen))
return NULL;
if (gen_id == ref->curr_gen) {
ref->cached_gen.id = gen_id;
ref->cached_gen.data = gen;
}
return gen;
}
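/* Note (not from the patch): the single-entry cache above only ever holds
* the current generation; lookups for other generations always hit the tree,
* and purging a generation clears the cache entry (see
* pat_ref_purge_range()).
*/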
/* This function removes all elements belonging to <gen_id> and matching <key>
* from the reference <ref>.
* This function returns 1 if the deletion is done and returns 0 if
@ -1628,24 +1693,21 @@ int pat_ref_delete_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt)
*/
int pat_ref_gen_delete(struct pat_ref *ref, unsigned int gen_id, const char *key)
{
struct pat_ref_elt *elt, *elt2;
int found = 0;
struct pat_ref_gen *gen;
struct pat_ref_elt *elt;
gen = pat_ref_gen_get(ref, gen_id);
if (!gen)
return 0;
/* delete pattern from reference */
elt = cebs_item_lookup(&ref->ceb_root, node, pattern, key, struct pat_ref_elt);
while (elt) {
elt2 = cebs_item_next_dup(&ref->ceb_root, node, pattern, elt);
if (elt->gen_id == gen_id) {
pat_ref_delete_by_ptr(ref, elt);
found = 1;
}
elt = elt2;
}
elt = cebs_item_lookup(&gen->elt_root, node, pattern, key, struct pat_ref_elt);
if (!elt)
return 0;
if (found)
event_hdl_publish(&ref->e_subs, EVENT_HDL_SUB_PAT_REF_DEL, NULL);
return found;
pat_ref_delete_by_ptr(ref, elt);
event_hdl_publish(&ref->e_subs, EVENT_HDL_SUB_PAT_REF_DEL, NULL);
return 1;
}
/* This function removes all patterns matching <key> from the reference
@ -1663,15 +1725,12 @@ int pat_ref_delete(struct pat_ref *ref, const char *key)
*/
struct pat_ref_elt *pat_ref_gen_find_elt(struct pat_ref *ref, unsigned int gen_id, const char *key)
{
struct pat_ref_elt *elt;
struct pat_ref_gen *gen;
elt = cebs_item_lookup(&ref->ceb_root, node, pattern, key, struct pat_ref_elt);
while (elt) {
if (elt->gen_id == gen_id)
break;
elt = cebs_item_next_dup(&ref->ceb_root, node, pattern, elt);
}
return elt;
gen = pat_ref_gen_get(ref, gen_id);
if (!gen)
return NULL;
return cebs_item_lookup(&gen->elt_root, node, pattern, key, struct pat_ref_elt);
}
/*
@ -1771,14 +1830,17 @@ static inline int pat_ref_set_elt(struct pat_ref *ref, struct pat_ref_elt *elt,
*/
int pat_ref_set_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt, const char *value, char **err)
{
struct pat_ref_gen *gen;
struct pat_ref_elt *elt;
/* Look for pattern in the reference. */
list_for_each_entry(elt, &ref->head, list) {
if (elt == refelt) {
if (!pat_ref_set_elt(ref, elt, value, err))
return 0;
return 1;
pat_ref_gen_foreach(gen, ref) {
list_for_each_entry(elt, &gen->head, list) {
if (elt == refelt) {
if (!pat_ref_set_elt(ref, elt, value, err))
return 0;
return 1;
}
}
}
@ -1788,31 +1850,30 @@ int pat_ref_set_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt, const cha
static int pat_ref_set_from_elt(struct pat_ref *ref, struct pat_ref_elt *elt, const char *value, char **err)
{
unsigned int gen;
struct pat_ref_gen *gen;
struct pat_ref_elt *elt2;
int first = 1;
int found = 0;
int found = 0, publish = 0;
for (; elt; elt = elt2) {
char *tmp_err = NULL;
if (elt) {
if (elt->gen_id == ref->curr_gen)
publish = 1;
gen = pat_ref_gen_get(ref, elt->gen_id);
BUG_ON(!gen);
elt2 = cebs_item_next_dup(&ref->ceb_root, node, pattern, elt);
if (first)
gen = elt->gen_id;
else if (elt->gen_id != gen) {
/* only consider duplicate elements from the same gen! */
continue;
for (; elt; elt = elt2) {
char *tmp_err = NULL;
elt2 = cebs_item_next_dup(&gen->elt_root, node, pattern, elt);
if (!pat_ref_set_elt(ref, elt, value, &tmp_err)) {
if (err)
*err = tmp_err;
else
ha_free(&tmp_err);
return 0;
}
found = 1;
}
if (!pat_ref_set_elt(ref, elt, value, &tmp_err)) {
if (err)
*err = tmp_err;
else
ha_free(&tmp_err);
return 0;
}
found = 1;
first = 0;
}
if (!found) {
@ -1820,7 +1881,7 @@ static int pat_ref_set_from_elt(struct pat_ref *ref, struct pat_ref_elt *elt, co
return 0;
}
if (gen == ref->curr_gen) // gen cannot be uninitialized here
if (publish)
event_hdl_publish(&ref->e_subs, EVENT_HDL_SUB_PAT_REF_SET, NULL);
return 1;
@ -1839,15 +1900,15 @@ int pat_ref_set_elt_duplicate(struct pat_ref *ref, struct pat_ref_elt *elt, cons
int pat_ref_gen_set(struct pat_ref *ref, unsigned int gen_id,
const char *key, const char *value, char **err)
{
struct pat_ref_gen *gen;
struct pat_ref_elt *elt;
/* Look for pattern in the reference. */
elt = cebs_item_lookup(&ref->ceb_root, node, pattern, key, struct pat_ref_elt);
while (elt) {
if (elt->gen_id == gen_id)
break;
elt = cebs_item_next_dup(&ref->ceb_root, node, pattern, elt);
}
gen = pat_ref_gen_get(ref, gen_id);
if (gen)
elt = cebs_item_lookup(&gen->elt_root, node, pattern, key, struct pat_ref_elt);
else
elt = NULL;
return pat_ref_set_from_elt(ref, elt, value, err);
}
@ -1889,8 +1950,9 @@ static struct pat_ref *_pat_ref_new(const char *display, unsigned int flags)
ref->unique_id = -1;
ref->revision = 0;
ref->entry_cnt = 0;
LIST_INIT(&ref->head);
ref->ceb_root = NULL;
ceb_init_root(&ref->gen_root);
ref->cached_gen.id = ref->curr_gen;
ref->cached_gen.data = NULL;
LIST_INIT(&ref->pat);
HA_RWLOCK_INIT(&ref->lock);
event_hdl_sub_list_init(&ref->e_subs);
@ -1972,8 +2034,10 @@ struct pat_ref *pat_ref_newid(int unique_id, const char *display, unsigned int f
* <ref> must be held. It sets the newly created pattern's generation number
* to <gen_id>, creating this generation on the fly if it does not exist yet.
*/
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, const char *pattern, const char *sample, int line)
struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, unsigned int gen_id,
const char *pattern, const char *sample, int line)
{
struct pat_ref_gen *gen;
struct pat_ref_elt *elt;
int len = strlen(pattern);
@ -1981,7 +2045,14 @@ struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, const char *pattern, con
if (!elt)
goto fail;
elt->gen_id = ref->curr_gen;
gen = pat_ref_gen_get(ref, gen_id);
if (!gen) {
gen = pat_ref_gen_new(ref, gen_id);
if (!gen)
goto fail;
}
elt->gen_id = gen_id;
elt->line = line;
memcpy((char*)elt->pattern, pattern, len + 1);
@ -1995,8 +2066,8 @@ struct pat_ref_elt *pat_ref_append(struct pat_ref *ref, const char *pattern, con
LIST_INIT(&elt->back_refs);
elt->list_head = NULL;
elt->tree_head = NULL;
LIST_APPEND(&ref->head, &elt->list);
cebs_item_insert(&ref->ceb_root, node, pattern, elt);
LIST_APPEND(&gen->head, &elt->list);
cebs_item_insert(&gen->elt_root, node, pattern, elt);
HA_ATOMIC_INC(&patterns_added);
return elt;
fail:
@ -2094,9 +2165,8 @@ struct pat_ref_elt *pat_ref_load(struct pat_ref *ref, unsigned int gen,
{
struct pat_ref_elt *elt;
elt = pat_ref_append(ref, pattern, sample, line);
elt = pat_ref_append(ref, gen, pattern, sample, line);
if (elt) {
elt->gen_id = gen;
if (!pat_ref_commit_elt(ref, elt, err))
elt = NULL;
} else
@ -2133,6 +2203,7 @@ int pat_ref_add(struct pat_ref *ref,
*/
int pat_ref_purge_range(struct pat_ref *ref, uint from, uint to, int budget)
{
struct pat_ref_gen *gen, *gen2;
struct pat_ref_elt *elt, *elt_bck;
struct bref *bref, *bref_bck;
struct pattern_expr *expr;
@ -2145,35 +2216,53 @@ int pat_ref_purge_range(struct pat_ref *ref, uint from, uint to, int budget)
/* assume completion for e.g. empty lists */
done = 1;
list_for_each_entry_safe(elt, elt_bck, &ref->head, list) {
if (elt->gen_id - from > to - from)
pat_ref_gen_foreach_safe(gen, gen2, ref) {
if (gen->gen_id - from > to - from) {
if (from <= to) {
break;
}
continue;
}
if (budget >= 0 && !budget--) {
done = 0;
list_for_each_entry_safe(elt, elt_bck, &gen->head, list) {
if (budget >= 0 && !budget--) {
done = 0;
break;
}
BUG_ON(elt->gen_id != gen->gen_id);
/*
* we have to unlink all watchers from this reference pattern. We must
* not relink them if this elt was the last one in the list.
*/
list_for_each_entry_safe(bref, bref_bck, &elt->back_refs, users) {
LIST_DELETE(&bref->users);
LIST_INIT(&bref->users);
if (elt->list.n != &gen->head)
LIST_APPEND(&LIST_ELEM(elt->list.n, typeof(elt), list)->back_refs, &bref->users);
bref->ref = elt->list.n;
}
/* delete the storage for all representations of this pattern. */
pat_delete_gen(ref, elt);
LIST_DELETE(&elt->list);
cebs_item_delete(&gen->elt_root, node, pattern, elt);
free(elt->sample);
free(elt);
HA_ATOMIC_INC(&patterns_freed);
}
if (!done)
break;
}
/*
* we have to unlink all watchers from this reference pattern. We must
* not relink them if this elt was the last one in the list.
*/
list_for_each_entry_safe(bref, bref_bck, &elt->back_refs, users) {
LIST_DELETE(&bref->users);
LIST_INIT(&bref->users);
if (elt->list.n != &ref->head)
LIST_APPEND(&LIST_ELEM(elt->list.n, typeof(elt), list)->back_refs, &bref->users);
bref->ref = elt->list.n;
}
/* delete the storage for all representations of this pattern. */
pat_delete_gen(ref, elt);
LIST_DELETE(&elt->list);
cebs_item_delete(&ref->ceb_root, node, pattern, elt);
free(elt->sample);
free(elt);
HA_ATOMIC_INC(&patterns_freed);
BUG_ON(!LIST_ISEMPTY(&gen->head));
BUG_ON(!ceb_isempty(&gen->elt_root));
cebu32_item_delete(&ref->gen_root, gen_node, gen_id, gen);
if (gen->gen_id == ref->cached_gen.id)
ref->cached_gen.data = NULL;
free(gen);
}
list_for_each_entry(expr, &ref->pat, list)
@ -2388,7 +2477,7 @@ int pat_ref_read_from_file_smp(struct pat_ref *ref, char **err)
*value_end = '\0';
/* insert values */
if (!pat_ref_append(ref, key_beg, value_beg, line)) {
if (!pat_ref_append(ref, ref->curr_gen, key_beg, value_beg, line)) {
memprintf(err, "out of memory");
goto out_close;
}
@ -2455,7 +2544,7 @@ int pat_ref_read_from_file(struct pat_ref *ref, char **err)
if (c == arg)
continue;
if (!pat_ref_append(ref, arg, NULL, line)) {
if (!pat_ref_append(ref, ref->curr_gen, arg, NULL, line)) {
memprintf(err, "out of memory when loading patterns from file <%s>", ref->reference);
goto out_close;
}
@ -2479,6 +2568,7 @@ int pattern_read_from_file(struct pattern_head *head, unsigned int refflags,
{
struct pat_ref *ref;
struct pattern_expr *expr;
struct pat_ref_gen *gen;
struct pat_ref_elt *elt;
int reuse = 0;
@ -2579,12 +2669,14 @@ int pattern_read_from_file(struct pattern_head *head, unsigned int refflags,
* content-based in case of duplicated keys we only want the first key
* in the file to be considered.
*/
list_for_each_entry(elt, &ref->head, list) {
if (!pat_ref_push(elt, expr, patflags, err)) {
if (elt->line > 0)
memprintf(err, "%s at line %d of file '%s'",
*err, elt->line, filename);
return 0;
pat_ref_gen_foreach(gen, ref) {
list_for_each_entry(elt, &gen->head, list) {
if (!pat_ref_push(elt, expr, patflags, err)) {
if (elt->line > 0)
memprintf(err, "%s at line %d of file '%s'",
*err, elt->line, filename);
return 0;
}
}
}

View File

@ -313,7 +313,7 @@ static const struct trace_event peers_trace_events[] = {
#define PEERS_EV_PROTO_ERR (1ULL << 13)
{ .mask = PEERS_EV_PROTO_ERR, .name = "proto_error", .desc = "protocol error" },
#define PEERS_EV_PROTO_HELLO (1ULL << 14)
{ .mask = PEERS_EV_PROTO_HELLO, .name = "proto_hello", .desc = "protocol hello mesage" },
{ .mask = PEERS_EV_PROTO_HELLO, .name = "proto_hello", .desc = "protocol hello message" },
#define PEERS_EV_PROTO_SUCCESS (1ULL << 15)
{ .mask = PEERS_EV_PROTO_SUCCESS, .name = "proto_success", .desc = "protocol success message" },
#define PEERS_EV_PROTO_UPDATE (1ULL << 16)
@ -1212,7 +1212,7 @@ static inline int peer_getline(struct appctx *appctx)
int n = 0;
TRACE_ENTER(PEERS_EV_SESS_IO|PEERS_EV_RX_MSG, appctx);
if (applet_get_inbuf(appctx) == NULL || !applet_input_data(appctx)) {
if (applet_get_inbuf(appctx) == NULL) {
applet_need_more_data(appctx);
goto out;
}
@ -1301,7 +1301,7 @@ static inline int peer_send_hellomsg(struct appctx *appctx, struct peer *peer)
*/
static inline int peer_send_status_successmsg(struct appctx *appctx)
{
TRACE_PROTO("send status sucess message", PEERS_EV_SESS_IO|PEERS_EV_TX_MSG|PEERS_EV_PROTO_SUCCESS, appctx);
TRACE_PROTO("send status success message", PEERS_EV_SESS_IO|PEERS_EV_TX_MSG|PEERS_EV_PROTO_SUCCESS, appctx);
return peer_send_msg(appctx, peer_prepare_status_successmsg, NULL);
}

View File

@ -151,12 +151,6 @@ int sockpair_bind_receiver(struct receiver *rx, char **errmsg)
err |= ERR_RETRYABLE;
goto bind_ret_err;
}
/* taking the other one's FD will result in it being marked
* extern and being dup()ed. Let's mark the receiver as
* inherited so that it properly bypasses all second-stage
* setup and avoids being passed to new processes.
*/
rx->flags |= RX_F_INHERITED;
rx->fd = rx->shard_info->ref->fd;
}
@ -243,12 +237,15 @@ int send_fd_uxst(int fd, int send_fd)
struct iovec iov;
struct msghdr msghdr;
char cmsgbuf[CMSG_SPACE(sizeof(int))] = {0};
char buf[CMSG_SPACE(sizeof(int))] = {0};
char cmsgbuf[CMSG_SPACE(sizeof(int))];
char buf[CMSG_SPACE(sizeof(int))];
struct cmsghdr *cmsg = (void *)buf;
int *fdptr;
memset(cmsgbuf, 0, sizeof(cmsgbuf));
memset(buf, 0, sizeof(buf));
iov.iov_base = iobuf;
iov.iov_len = sizeof(iobuf);

View File

@ -76,6 +76,7 @@ struct protocol proto_tcpv4 = {
.check_events = sock_check_events,
.ignore_events = sock_ignore_events,
.get_info = tcp_get_info,
.get_opt = sock_conn_get_opt,
/* binding layer */
.rx_suspend = tcp_suspend_receiver,
@ -121,6 +122,7 @@ struct protocol proto_tcpv6 = {
.check_events = sock_check_events,
.ignore_events = sock_ignore_events,
.get_info = tcp_get_info,
.get_opt = sock_conn_get_opt,
/* binding layer */
.rx_suspend = tcp_suspend_receiver,
@ -167,6 +169,7 @@ struct protocol proto_mptcpv4 = {
.check_events = sock_check_events,
.ignore_events = sock_ignore_events,
.get_info = tcp_get_info,
.get_opt = sock_conn_get_opt,
/* binding layer */
.rx_suspend = tcp_suspend_receiver,
@ -212,6 +215,7 @@ struct protocol proto_mptcpv6 = {
.check_events = sock_check_events,
.ignore_events = sock_ignore_events,
.get_info = tcp_get_info,
.get_opt = sock_conn_get_opt,
/* binding layer */
.rx_suspend = tcp_suspend_receiver,
@ -778,6 +782,17 @@ int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
}
}
#endif
#if defined(TCP_SAVE_SYN)
if (listener->bind_conf->tcp_ss) {
if (setsockopt(fd, IPPROTO_TCP, TCP_SAVE_SYN,
&listener->bind_conf->tcp_ss, sizeof(listener->bind_conf->tcp_ss)) == -1) {
chunk_appendf(msg, "%scannot set TCP Save SYN, (%s)", msg->data ? ", " : "",
strerror(errno));
err |= ERR_WARN;
}
} else
setsockopt(fd, IPPROTO_TCP, TCP_SAVE_SYN, &zero, sizeof(zero));
#endif
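/* Background (not from the patch): on Linux, TCP_SAVE_SYN makes the kernel
* keep the client's SYN so that a later getsockopt(TCP_SAVED_SYN) can return
* its raw IP+TCP headers -- presumably the input consumed by the ip.* and
* tcp.* converters added in src/net_helper.c.
*/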
#if defined(TCP_USER_TIMEOUT)
if (listener->bind_conf->tcp_ut) {
if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
@ -909,7 +924,7 @@ static int tcp_suspend_receiver(struct receiver *rx)
* parent process and any possible subsequent worker inheriting it.
* Thus we just stop receiving from it.
*/
if (rx->flags & RX_F_INHERITED)
if (rx->flags & RX_F_INHERITED_SOCK)
goto done;
if (connect(rx->fd, &sa, sizeof(sa)) < 0)
@ -945,7 +960,7 @@ static int tcp_resume_receiver(struct receiver *rx)
if (rx->fd < 0)
return 0;
if ((rx->flags & RX_F_INHERITED) || listen(rx->fd, listener_backlog(l)) == 0) {
if ((rx->flags & RX_F_INHERITED_SOCK) || listen(rx->fd, listener_backlog(l)) == 0) {
fd_want_recv(l->rx.fd);
return 1;
}

View File

@ -220,7 +220,7 @@ int udp_suspend_receiver(struct receiver *rx)
/* we never do that with a shared FD otherwise we'd break it in the
* parent process and any possible subsequent worker inheriting it.
*/
if (rx->flags & RX_F_INHERITED)
if (rx->flags & RX_F_INHERITED_SOCK)
goto done;
if (getsockname(rx->fd, (struct sockaddr *)&ss, &len) < 0)
@ -248,7 +248,7 @@ int udp_resume_receiver(struct receiver *rx)
if (rx->fd < 0)
return 0;
if (!(rx->flags & RX_F_INHERITED) && connect(rx->fd, &sa, sizeof(sa)) < 0)
if (!(rx->flags & RX_F_INHERITED_SOCK) && connect(rx->fd, &sa, sizeof(sa)) < 0)
return -1;
fd_want_recv(rx->fd);

View File

@ -320,6 +320,12 @@ void deinit_proxy(struct proxy *p)
EXTRA_COUNTERS_FREE(p->extra_counters_fe);
EXTRA_COUNTERS_FREE(p->extra_counters_be);
list_for_each_entry_safe(rule, ruleb, &p->persist_rules, list) {
LIST_DELETE(&rule->list);
free_acl_cond(rule->cond);
free(rule);
}
free_server_rules(&p->server_rules);
list_for_each_entry_safe(rule, ruleb, &p->switching_rules, list) {
@ -1154,6 +1160,56 @@ static int proxy_parse_tcpka_intvl(char **args, int section, struct proxy *proxy
}
#endif
static int proxy_parse_force_be_switch(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
{
struct acl_cond *cond = NULL;
struct persist_rule *rule;
if (curpx->cap & PR_CAP_DEF) {
memprintf(err, "'%s' not allowed in 'defaults' section.", args[0]);
goto err;
}
if (!(curpx->cap & PR_CAP_FE)) {
memprintf(err, "'%s' only available in frontend or listen section.", args[0]);
goto err;
}
if (strcmp(args[1], "if") != 0 && strcmp(args[1], "unless") != 0) {
memprintf(err, "'%s' requires either 'if' or 'unless' followed by a condition.", args[0]);
goto err;
}
if (!(cond = build_acl_cond(file, line, &curpx->acl, curpx, (const char **)args + 1, err))) {
memprintf(err, "'%s' : %s.", args[0], *err);
goto err;
}
if (warnif_cond_conflicts(cond, SMP_VAL_FE_REQ_CNT, err)) {
memprintf(err, "'%s' : %s.", args[0], *err);
goto err;
}
rule = calloc(1, sizeof(*rule));
if (!rule) {
memprintf(err, "'%s' : out of memory.", args[0]);
goto err;
}
rule->cond = cond;
rule->type = PERSIST_TYPE_BE_SWITCH;
LIST_INIT(&rule->list);
LIST_APPEND(&curpx->persist_rules, &rule->list);
return 0;
err:
free_acl_cond(cond);
return -1;
}
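
The parser above accepts the grammar "force-be-switch if|unless <condition>", restricted to frontend and listen sections, and stores the compiled condition as a persist rule of type PERSIST_TYPE_BE_SWITCH. A hypothetical configuration fragment matching what the parser accepts (ACL name and path invented for illustration):

/* hypothetical frontend snippet exercising the new keyword:
 *
 *     acl is_beta path_beg /beta
 *     force-be-switch if is_beta
 *
 * args[1] must be "if" or "unless", and the remainder is compiled
 * by build_acl_cond() against the proxy's named ACLs.
 */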
static int proxy_parse_guid(char **args, int section_type, struct proxy *curpx,
const struct proxy *defpx, const char *file, int line,
char **err)
@ -2578,7 +2634,11 @@ int stream_set_backend(struct stream *s, struct proxy *be)
return 0;
s->be = be;
s->be_tgcounters = be->be_counters.shared.tg[tgid - 1];
if (be->be_counters.shared.tg)
s->be_tgcounters = be->be_counters.shared.tg[tgid - 1];
else
s->be_tgcounters = NULL;
HA_ATOMIC_UPDATE_MAX(&be->be_counters.conn_max,
HA_ATOMIC_ADD_FETCH(&be->beconn, 1));
proxy_inc_be_ctr(be);
@ -2859,6 +2919,7 @@ static struct cfg_kw_list cfg_kws = {ILH, {
{ CFG_LISTEN, "clitcpka-intvl", proxy_parse_tcpka_intvl },
{ CFG_LISTEN, "srvtcpka-intvl", proxy_parse_tcpka_intvl },
#endif
{ CFG_LISTEN, "force-be-switch", proxy_parse_force_be_switch },
{ CFG_LISTEN, "guid", proxy_parse_guid },
{ 0, NULL, NULL },
}};
@ -3350,6 +3411,54 @@ static int cli_parse_enable_frontend(char **args, char *payload, struct appctx *
return 1;
}
static int cli_parse_publish_backend(char **args, char *payload, struct appctx *appctx, void *private)
{
struct proxy *px;
usermsgs_clr("CLI");
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
px = cli_find_backend(appctx, args[2]);
if (!px)
return cli_err(appctx, "No such backend.\n");
if (px->flags & PR_FL_DISABLED)
return cli_err(appctx, "No effect on a disabled backend.\n");
thread_isolate();
px->flags &= ~PR_FL_BE_UNPUBLISHED;
thread_release();
ha_notice("Backend published.\n");
return cli_umsg(appctx, LOG_INFO);
}
static int cli_parse_unpublish_backend(char **args, char *payload, struct appctx *appctx, void *private)
{
struct proxy *px;
usermsgs_clr("CLI");
if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
return 1;
px = cli_find_backend(appctx, args[2]);
if (!px)
return cli_err(appctx, "No such backend.\n");
if (px->flags & PR_FL_DISABLED)
return cli_err(appctx, "No effect on a disabled backend.\n");
thread_isolate();
px->flags |= PR_FL_BE_UNPUBLISHED;
thread_release();
ha_notice("Backend unpublished.\n");
return cli_umsg(appctx, LOG_INFO);
}
/* appctx context used during "show errors" */
struct show_errors_ctx {
struct proxy *px; /* current proxy being dumped, NULL = not started yet. */
@ -3564,12 +3673,14 @@ static int cli_io_handler_show_errors(struct appctx *appctx)
static struct cli_kw_list cli_kws = {{ },{
{ { "disable", "frontend", NULL }, "disable frontend <frontend> : temporarily disable specific frontend", cli_parse_disable_frontend, NULL, NULL },
{ { "enable", "frontend", NULL }, "enable frontend <frontend> : re-enable specific frontend", cli_parse_enable_frontend, NULL, NULL },
{ { "publish", "backend", NULL }, "publish backend <backend> : mark backend as ready for traffic", cli_parse_publish_backend, NULL, NULL },
{ { "set", "maxconn", "frontend", NULL }, "set maxconn frontend <frontend> <value> : change a frontend's maxconn setting", cli_parse_set_maxconn_frontend, NULL },
{ { "show","servers", "conn", NULL }, "show servers conn [<backend>] : dump server connections status (all or for a single backend)", cli_parse_show_servers, cli_io_handler_servers_state },
{ { "show","servers", "state", NULL }, "show servers state [<backend>] : dump volatile server information (all or for a single backend)", cli_parse_show_servers, cli_io_handler_servers_state },
{ { "show", "backend", NULL }, "show backend : list backends in the current running config", NULL, cli_io_handler_show_backend },
{ { "shutdown", "frontend", NULL }, "shutdown frontend <frontend> : stop a specific frontend", cli_parse_shutdown_frontend, NULL, NULL },
{ { "set", "dynamic-cookie-key", "backend", NULL }, "set dynamic-cookie-key backend <bk> <k> : change a backend secret key for dynamic cookies", cli_parse_set_dyncookie_key_backend, NULL },
{ { "unpublish", "backend", NULL }, "unpublish backend <backend> : remove backend from traffic processing", cli_parse_unpublish_backend, NULL, NULL },
{ { "enable", "dynamic-cookie", "backend", NULL }, "enable dynamic-cookie backend <bk> : enable dynamic cookies on a specific backend", cli_parse_enable_dyncookie_backend, NULL },
{ { "disable", "dynamic-cookie", "backend", NULL }, "disable dynamic-cookie backend <bk> : disable dynamic cookies on a specific backend", cli_parse_disable_dyncookie_backend, NULL },
{ { "show", "errors", NULL }, "show errors [<px>] [request|response] : report last request and/or response errors for each proxy", cli_parse_show_errors, cli_io_handler_show_errors, NULL },

View File

@ -386,11 +386,23 @@ int process_srv_queue(struct server *s)
{
struct server *ref = s->track ? s->track : s;
struct proxy *p = s->proxy;
uint64_t non_empty_tgids = all_tgroups_mask;
long non_empty_tgids[(global.nbtgroups / LONGBITS) + 1];
int maxconn;
int done = 0;
int px_ok;
int cur_tgrp;
int i = global.nbtgroups;
int curgrpnb = i;
while (i >= LONGBITS) {
non_empty_tgids[(global.nbtgroups - i) / LONGBITS] = ULONG_MAX;
i -= LONGBITS;
}
while (i > 0) {
ha_bit_set(global.nbtgroups - i, non_empty_tgids);
i--;
}
/* if a server is not usable or backup and must not be used
* to dequeue backend requests.
@ -420,7 +432,7 @@ int process_srv_queue(struct server *s)
* to our thread group, then we'll get one from a different one, to
* be sure those actually get processed too.
*/
while (non_empty_tgids != 0
while (curgrpnb != 0
&& (done < global.tune.maxpollevents || !s->served) &&
s->served < (maxconn = srv_dynamic_maxconn(s))) {
int self_served;
@ -431,8 +443,8 @@ int process_srv_queue(struct server *s)
* from our own thread-group queue.
*/
self_served = _HA_ATOMIC_LOAD(&s->per_tgrp[tgid - 1].self_served) % (MAX_SELF_USE_QUEUE + 1);
if ((self_served == MAX_SELF_USE_QUEUE && non_empty_tgids != (1UL << (tgid - 1))) ||
!(non_empty_tgids & (1UL << (tgid - 1)))) {
if ((self_served == MAX_SELF_USE_QUEUE && (curgrpnb > 1 || !ha_bit_test(tgid - 1, non_empty_tgids))) ||
!ha_bit_test(tgid - 1, non_empty_tgids)) {
unsigned int old_served, new_served;
/*
@ -452,7 +464,7 @@ int process_srv_queue(struct server *s)
*/
while (new_served == tgid ||
new_served == global.nbtgroups + 1 ||
!(non_empty_tgids & (1UL << (new_served - 1)))) {
!ha_bit_test(new_served - 1, non_empty_tgids)) {
if (new_served == global.nbtgroups + 1)
new_served = 1;
else
@ -468,7 +480,8 @@ int process_srv_queue(struct server *s)
to_dequeue = MAX_SELF_USE_QUEUE - self_served;
}
if (HA_ATOMIC_XCHG(&s->per_tgrp[cur_tgrp - 1].dequeuing, 1)) {
non_empty_tgids &= ~(1UL << (cur_tgrp - 1));
ha_bit_clr(cur_tgrp - 1, non_empty_tgids);
curgrpnb--;
continue;
}
@ -479,7 +492,8 @@ int process_srv_queue(struct server *s)
* the served field, only if it is < maxconn.
*/
if (!pendconn_process_next_strm(s, p, px_ok, cur_tgrp)) {
non_empty_tgids &= ~(1UL << (cur_tgrp - 1));
ha_bit_clr(cur_tgrp - 1, non_empty_tgids);
curgrpnb--;
break;
}
to_dequeue--;
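
The rework above replaces the single uint64_t mask, which capped usable thread groups at 64, with a long-array bitmap driven through ha_bit_set()/ha_bit_clr()/ha_bit_test(), plus a curgrpnb counter so the loop condition stays a simple integer test. A self-contained sketch of the same bitmap pattern (plain C, local helpers standing in for HAProxy's ha_bit_* functions):

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

static void bit_set(unsigned b, unsigned long *map)
{
        map[b / BITS_PER_LONG] |= 1UL << (b % BITS_PER_LONG);
}

static void bit_clr(unsigned b, unsigned long *map)
{
        map[b / BITS_PER_LONG] &= ~(1UL << (b % BITS_PER_LONG));
}

static int bit_test(unsigned b, const unsigned long *map)
{
        return !!(map[b / BITS_PER_LONG] & (1UL << (b % BITS_PER_LONG)));
}

int main(void)
{
        unsigned long map[(100 / BITS_PER_LONG) + 1] = { 0 };
        int nbtgroups = 100;            /* more than 64 groups now works */
        int non_empty = nbtgroups;
        int i;

        for (i = 0; i < nbtgroups; i++)
                bit_set(i, map);        /* all groups start "non-empty" */

        bit_clr(64, map);               /* group 65 drained */
        non_empty--;

        printf("group 65 non-empty: %d, groups left: %d\n",
               bit_test(64, map), non_empty);
        return 0;
}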

View File

@ -21,7 +21,7 @@
* trees are used on frontend and backend sides.
*
* . CID global tree splitting
* To reduce the thread contention, a global CID tree is in reality splitted
* To reduce the thread contention, a global CID tree is in reality split
* into 256 distinct instances. Each CID is assigned to a single tree instance
* based on its content. Use quic_cid_tree_idx() to retrieve the expected tree
* location for a CID.

View File

@ -363,11 +363,16 @@ static int quic_parse_ack_ecn_frame(struct quic_frame *frm, struct quic_conn *qc
const unsigned char **pos, const unsigned char *end)
{
struct qf_ack *ack_frm = &frm->ack;
/* TODO implement ECN advertising */
uint64_t ect0, ect1, ecn_ce;
return quic_dec_int(&ack_frm->largest_ack, pos, end) &&
quic_dec_int(&ack_frm->ack_delay, pos, end) &&
quic_dec_int(&ack_frm->first_ack_range, pos, end) &&
quic_dec_int(&ack_frm->ack_range_num, pos, end);
quic_dec_int(&ack_frm->ack_delay, pos, end) &&
quic_dec_int(&ack_frm->ack_range_num, pos, end) &&
quic_dec_int(&ack_frm->first_ack_range, pos, end) &&
quic_dec_int(&ect0, pos, end) &&
quic_dec_int(&ect1, pos, end) &&
quic_dec_int(&ecn_ce, pos, end);
}
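
The corrected order matches the ACK frame layout of RFC 9000, section 19.3: the old code decoded First ACK Range before ACK Range Count and stopped there, so the three trailing ECN counts of an ACK_ECN frame were never consumed. For reference, the wire layout (every field a variable-length integer):

/* ACK_ECN frame (type 0x03), RFC 9000 section 19.3:
 *
 *   Largest Acknowledged
 *   ACK Delay
 *   ACK Range Count
 *   First ACK Range
 *   ACK Range            (repeated ACK Range Count times)
 *   ECT0 Count           \
 *   ECT1 Count            > present only for type 0x03
 *   ECN-CE Count         /
 */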
/* Encode a RESET_STREAM frame at <pos> buffer position.

View File

@ -353,7 +353,7 @@ int quic_retry_packet_check(struct quic_conn *qc, struct quic_rx_packet *pkt,
if (!quic_tls_generate_retry_integrity_tag(qc->odcid.data, qc->odcid.len,
beg, end - beg - QUIC_TLS_TAG_LEN,
tag, pkt->version)) {
TRACE_PROTO("retry integrity tag faild", QUIC_EV_CONN_SPKT, qc);
TRACE_PROTO("retry integrity tag failed", QUIC_EV_CONN_SPKT, qc);
goto err;
}

View File

@ -10,6 +10,9 @@
#include <haproxy/ssl_sock.h>
#include <haproxy/stats.h>
#include <haproxy/trace.h>
#ifdef USE_ECH
#include <haproxy/ech.h>
#endif
DECLARE_TYPED_POOL(pool_head_quic_ssl_sock_ctx, "quic_ssl_sock_ctx", struct ssl_sock_ctx);
const char *default_quic_ciphersuites = "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384"
@ -810,6 +813,20 @@ int ssl_quic_initial_ctx(struct bind_conf *bind_conf)
cfgerr++;
#endif
#ifdef USE_ECH
if (bind_conf->ssl_conf.ech_filedir) {
int loaded = 0;
if (load_echkeys(ctx, bind_conf->ssl_conf.ech_filedir, &loaded) != 1) {
cfgerr += 1;
ha_alert("Proxy '%s': failed to load ECH keys from %s for '%s' at [%s:%d].\n",
bind_conf->frontend->id, bind_conf->ssl_conf.ech_filedir,
bind_conf->arg, bind_conf->file, bind_conf->line);
}
}
#endif
return cfgerr;
}
@ -1014,7 +1031,7 @@ int qc_ssl_do_hanshake(struct quic_conn *qc, struct ssl_sock_ctx *ctx)
qc->conn->mux->wake(qc->conn);
}
else {
/* Wake up upper layer if the MUX is alreay initialized.
/* Wake up upper layer if the MUX is already initialized.
* This is the case when the MUX was started for a 0-RTT session
* but without early-data secrets to send them (when the server
* does not support 0-RTT).

View File

@ -439,7 +439,8 @@ static int qc_send_ppkts(struct buffer *buf, struct quic_conn *qc)
}
qc->path->in_flight += pkt->in_flight_len;
pkt->pktns->tx.in_flight += pkt->in_flight_len;
if (quic_tune_test(QUIC_TUNE_FB_CC_HYSTART, qc) && pkt->pktns == qc->apktns)
if (quic_tune_test(QUIC_TUNE_FB_CC_HYSTART, qc) && pkt->pktns == qc->apktns &&
cc->algo->hystart_start_round != NULL)
cc->algo->hystart_start_round(cc, pkt->pn_node.key);
if (pkt->in_flight_len)
qc_set_timer(qc);
@ -726,7 +727,7 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
/* TODO currently it's not possible to emit an ACK and probing data simultaneously (see qc_do_build_pkt()).
* As a side-effect, this could cause coalescing of two packets of the same type which should be avoided.
* To implement this, a new datagram is forced by invokation of qc_txb_store(). This must then be checked
* To implement this, a new datagram is forced by invocation of qc_txb_store(). This must then be checked
* if padding is required as in this case this will be the last packet of the current datagram.
*/
if (probe && (must_ack || (qel->pktns->flags & QUIC_FL_PKTNS_ACK_REQUIRED)))
@ -2060,7 +2061,7 @@ static int qc_do_build_pkt(unsigned char *pos, const unsigned char *end,
* must be at least QUIC_PACKET_PN_MAXLEN(4) bytes long, so that the sample
* will be extracted as the AEAD tag.
*
* Note that from here, <len> includes <*pn_len>, the total frame lenghts,
* Note that from here, <len> includes <*pn_len>, the total frame lengths,
* and QUIC_TLS_TAG_LEN(16).
*/
if (len < QUIC_PACKET_PN_MAXLEN + QUIC_HP_SAMPLE_LEN) {

View File

@ -5381,12 +5381,20 @@ static int smp_fetch_conn_timers(const struct arg *args, struct sample *smp, con
{
struct strm_logs *logs;
if (!smp->strm)
return 0;
smp->data.type = SMP_T_SINT;
smp->flags = 0;
if (!smp->strm) {
/* no stream: only fc.timer.handshake is available via
* the session.
*/
if (kw[0] == 'f' && kw[9] == 'h') {
smp->data.u.sint = smp->sess->t_handshake;
return 1;
}
return 0;
}
logs = &smp->strm->logs;
if (kw[0] == 'b') {
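
The kw[0] == 'f' && kw[9] == 'h' probe above identifies the fetch name without a full string compare: among the timer fetches served by this handler, "fc.timer.handshake" is the one with 'f' at offset 0 and 'h' at offset 9, i.e. right after the "fc.timer." prefix. A throwaway check of those offsets ("fc.timer.total" assumed as a sibling fetch of the same family):

#include <assert.h>

int main(void)
{
        /* offset 9 is the first character after "fc.timer." */
        assert("fc.timer.handshake"[0] == 'f');
        assert("fc.timer.handshake"[9] == 'h');
        assert("fc.timer.total"[9] == 't');     /* assumed sibling name */
        return 0;
}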

View File

@ -7159,7 +7159,7 @@ static void srv_update_status(struct server *s, int type, int cause)
}
else if (s->cur_state == SRV_ST_STOPPED) {
/* server was up and is currently down */
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->down_trans);
_srv_event_hdl_publish(EVENT_HDL_SUB_SERVER_DOWN, cb_data.common, s);
}
@ -7174,7 +7174,7 @@ static void srv_update_status(struct server *s, int type, int cause)
}
s->last_change = ns_to_sec(now_ns);
if (s->counters.shared.tg[tgid - 1])
if (s->counters.shared.tg)
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_state_change, s->last_change);
/* publish the state change */
@ -7195,7 +7195,7 @@ static void srv_update_status(struct server *s, int type, int cause)
if (last_change < ns_to_sec(now_ns)) // ignore negative times
s->proxy->down_time += ns_to_sec(now_ns) - last_change;
s->proxy->last_change = ns_to_sec(now_ns);
if (s->proxy->be_counters.shared.tg[tgid - 1])
if (s->proxy->be_counters.shared.tg)
HA_ATOMIC_STORE(&s->proxy->be_counters.shared.tg[tgid - 1]->last_state_change, s->proxy->last_change);
}
}
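
All three hunks above apply the same correction: shared.tg itself may be NULL when shared counters were never allocated, so it is the array pointer, not the per-group slot, that must be tested before indexing. If the pattern keeps spreading, it could be factored; a hypothetical sketch (macro invented here, not part of the patch):

/* Hypothetical helper, not HAProxy API: yield the current thread
 * group's shared counter slot, or NULL when the tg array was never
 * allocated.
 */
#define COUNTERS_TG_SLOT(ctrs) \
        ((ctrs).shared.tg ? (ctrs).shared.tg[tgid - 1] : NULL)

/* usage sketch:
 *     if (COUNTERS_TG_SLOT(s->counters))
 *             HA_ATOMIC_INC(&COUNTERS_TG_SLOT(s->counters)->down_trans);
 */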

View File

@ -322,7 +322,7 @@ static void srv_state_srv_update(struct server *srv, int version, char **params)
}
srv->last_change = ns_to_sec(now_ns) - srv_last_time_change;
if (srv->counters.shared.tg[0])
if (srv->counters.shared.tg && srv->counters.shared.tg[0])
HA_ATOMIC_STORE(&srv->counters.shared.tg[0]->last_state_change, srv->last_change);
srv->check.status = srv_check_status;
srv->check.result = srv_check_result;

View File

@ -99,8 +99,12 @@ struct session *session_new(struct proxy *fe, struct listener *li, enum obj_type
sess->flags = SESS_FL_NONE;
sess->src = NULL;
sess->dst = NULL;
sess->fe_tgcounters = sess->fe->fe_counters.shared.tg[tgid - 1];
if (sess->listener && sess->listener->counters)
if (sess->fe->fe_counters.shared.tg)
sess->fe_tgcounters = sess->fe->fe_counters.shared.tg[tgid - 1];
else
sess->fe_tgcounters = NULL;
if (sess->listener && sess->listener->counters && sess->listener->counters->shared.tg)
sess->li_tgcounters = sess->listener->counters->shared.tg[tgid - 1];
else
sess->li_tgcounters = NULL;
@ -737,7 +741,7 @@ int session_reinsert_idle_conn(struct session *sess, struct connection *conn)
* target server will be incremented.
*
* Returns 0 if the connection is kept, else non-zero if the connection was
* explicitely removed from session.
* explicitly removed from session.
*/
int session_check_idle_conn(struct session *sess, struct connection *conn)
{
@ -852,7 +856,7 @@ void session_unown_conn(struct session *sess, struct connection *conn)
* session_unown_conn(), this function is not protected by a lock, so the
* caller is responsible to properly use idle_conns_lock prior to calling it.
*
* Another notable difference is that <owner> member of <conn> is not resetted.
* Another notable difference is that <owner> member of <conn> is not reset.
* This is a convenience as this function usage is generally coupled with a
* following session_reinsert_idle_conn().
*

View File

@ -381,7 +381,7 @@ void sock_unbind(struct receiver *rx)
return;
if (!stopping && master &&
rx->flags & RX_F_INHERITED)
rx->flags & RX_F_INHERITED_FD)
return;
rx->flags &= ~RX_F_BOUND;
@ -905,6 +905,23 @@ void sock_conn_ctrl_close(struct connection *conn)
conn->handle.fd = DEAD_FD_MAGIC;
}
/* call getsockopt() for <level> and <optname> on connection <conn>'s socket,
* store the result in <buf> for at most <size> bytes, and return the number
* of bytes read on success (which may be zero). Returns < 0 on error.
* Note that the recommended way to use the level is to pass IPPROTO_TCP for
* TCP_*, IPPROTO_UDP for UDP_*, IPPROTO_IP for IP_*, IPPROTO_IPV6 for IPV6_*,
* and SOL_SOCKET for socket-level SO_* options.
*/
int sock_conn_get_opt(const struct connection *conn, int level, int optname, void *buf, int size)
{
socklen_t opt_len = size;
if (getsockopt(conn->handle.fd, level, optname, buf, &opt_len) == -1)
return -1;
return opt_len;
}
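
A caller-side sketch for the new helper above (TCP_INFO chosen purely for illustration; any level/option pair following the comment's recommendation works the same way):

#include <netinet/in.h>
#include <netinet/tcp.h>

/* Illustration only: ask the kernel for the connection's TCP state
 * through sock_conn_get_opt() as defined above. Returns the number of
 * bytes the kernel wrote, or < 0 on error.
 */
static int fetch_tcp_info(const struct connection *conn, struct tcp_info *ti)
{
        return sock_conn_get_opt(conn, IPPROTO_TCP, TCP_INFO, ti, sizeof(*ti));
}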
/* This is the callback which is set when a connection establishment is pending
* and we have nothing to send. It may update the FD polling status to indicate
* !READY. It returns 0 if it fails in a fatal way or needs to poll to go

View File

@ -327,12 +327,6 @@ int sock_inet_bind_receiver(struct receiver *rx, char **errmsg)
err |= ERR_RETRYABLE;
goto bind_ret_err;
}
/* taking the other one's FD will result in it being marked
* extern and being dup()ed. Let's mark the receiver as
* inherited so that it properly bypasses all second-stage
* setup and avoids being passed to new processes.
*/
rx->flags |= RX_F_INHERITED;
rx->fd = rx->shard_info->ref->fd;
}
@ -467,7 +461,7 @@ int sock_inet_bind_receiver(struct receiver *rx, char **errmsg)
fd_insert(fd, rx->owner, rx->iocb, rx->bind_tgroup, rx->bind_thread);
/* for now, all regularly bound TCP listeners are exportable */
if (!(rx->flags & RX_F_INHERITED))
if (!(rx->flags & (RX_F_INHERITED_FD|RX_F_INHERITED_SOCK)))
HA_ATOMIC_OR(&fdtab[fd].state, FD_EXPORTED);
bind_return:

View File

@ -237,12 +237,6 @@ int sock_unix_bind_receiver(struct receiver *rx, char **errmsg)
err |= ERR_RETRYABLE;
goto bind_ret_err;
}
/* taking the other one's FD will result in it being marked
* extern and being dup()ed. Let's mark the receiver as
* inherited so that it properly bypasses all second-stage
* setup and avoids being passed to new processes.
*/
rx->flags |= RX_F_INHERITED;
rx->fd = rx->shard_info->ref->fd;
}
@ -418,7 +412,7 @@ int sock_unix_bind_receiver(struct receiver *rx, char **errmsg)
fd_insert(fd, rx->owner, rx->iocb, rx->bind_tgroup, rx->bind_thread);
/* for now, all regularly bound TCP listeners are exportable */
if (!(rx->flags & RX_F_INHERITED))
if (!(rx->flags & (RX_F_INHERITED_FD|RX_F_INHERITED_SOCK)))
HA_ATOMIC_OR(&fdtab[fd].state, FD_EXPORTED);
return err;

Some files were not shown because too many files have changed in this diff.