Commit Graph

24215 Commits

Author SHA1 Message Date
Amaury Denoyelle
2f36162ee1 REGTESTS: extend conn reuse test with transparent proxy
Recently, work on connection reuse revealed an issue when mixed with
transparent proxies and set-dst. This patch rewrites the related regtests
to be able to catch this now-fixed bug.

Note that this is the first regtest which relies on the recently
introduced bc_reused sample fetch. This fetch could be reused in other
connection reuse regtests to simplify them.
2025-04-02 14:57:40 +02:00
Amaury Denoyelle
ec76d52cea MINOR: sample: define bc_reused fetch
Define a new layer4 sample fetch "bc_reused". It is used as a boolean,
set to true if the backend connection was reused for the request.
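
Example (illustrative, not part of the patch):

  backend be_app
      http-response set-header X-Conn-Reused %[bc_reused]
      server srv1 192.0.2.10:80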
2025-04-02 14:57:40 +02:00
Amaury Denoyelle
5fda64e87e BUG/MEDIUM: backend: fix reuse with set-dst/set-dst-port
On backend connection reuse, a hash is calculated from various
parameters, to ensure the selected connection matches the requested
parameters. Notably, the destination address is one of these parameters.
However, it is only taken into account if using a transparent server
(server address 0.0.0.0).

This may cause an issue where an incorrect connection is reused, one
which does not target the correct destination address. This may be the
case if set-dst/set-dst-port is used with a transparent proxy (proxy option
transparent).
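
An illustrative configuration exercising this path (hypothetical addresses
and header names):

  backend be_transparent
      option transparent
      http-request set-dst hdr(x-dst-ip)
      http-request set-dst-port hdr(x-dst-port)
      server clear 0.0.0.0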

The fix is simple enough. The destination address is now always used as
input to the connection reuse hash.

This must be backported up to 2.6. Note that for reverse HTTP to work,
it relies on the following patch, which ensures the destination address
remains NULL in this case.

  commit e94baf6ca71cb2319610baa74dbf17b9bc602b18
  BUG/MINOR: rhttp: fix incorrect dst/dst_port values
2025-04-02 14:57:40 +02:00
Amaury Denoyelle
d7fa8e88c4 BUG/MINOR: backend: do not overwrite srv dst address on reuse
Previously, the destination address of a backend connection was
systematically reassigned. However, this step is unnecessary on connection
reuse. Indeed, reuse should only be conducted with connections whose
destination address matches the stream requirements.

This patch removes this unnecessary assignment. It is now only performed
when reuse cannot be conducted and a new connection is instantiated.

Functionally speaking, this patch should not change anything in theory,
as reuse is performed in conformance with the destination address.
However, it appears that this was not always properly enforced. The
systematic assignment of the destination address hid these issues, so
it is now removed. The identified bogus cases will then be fixed in the
following patches.

This should be backported to all stable versions.
2025-04-02 14:57:40 +02:00
Amaury Denoyelle
c05bb8c967 BUG/MINOR: rhttp: fix incorrect dst/dst_port values
With an @rhttp server, connect is not possible; transfer is only possible
via idle connection reuse. The server does not have any network address.

Thus, it is unnecessary to allocate the stream destination address prior
to connection reuse. This patch adjusts this by fixing
alloc_dst_address() to take this into account.

Prior to this patch, alloc_dst_address() would incorrectly treat an
@rhttp server like a transparent proxy. Thus the stream destination
address would be copied from the incoming connection's destination
address, and the connection address would then be rewritten with this
incorrect value. This did not impact connect or reuse, as the destination
address is only used in the idle connection hash calculation for
transparent servers. However, it caused incorrect values for the
dst/dst_port samples.

This should be backported up to 2.9.
2025-04-02 14:57:40 +02:00
Olivier Houchard
f59297e492 BUG/MEDIUM: leastconn: Don't try to reposition if the server is down
It may happen that the server is going down, and fwlc_srv_reposition()
is still called, because streams still attached to the server are
being terminated.
So in fwlc_srv_reposition(), just do nothing if we've been removed from
the tree.

This should fix GitHub issue #2919.

This should not be backported, unless commit
9fe72bba3c is also backported.

2025-04-02 12:24:04 +02:00
Willy Tarreau
4ec5509541 BUILD: compiler: undefine the CONCAT() macro if already defined
As Ilya reported in issue #2911, the CONCAT() macro breaks on NetBSD
which defines its own as __CONCAT() (which is exactly the same). Let's
just undefine it before defining ours to fix the issue instead of renaming
ours, and keep ours so that we don't have doubts about what we're running
with.

Note that the patch introducing this breaking change was backported
to 3.0.
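
A minimal sketch of the resulting pattern (hypothetical definition; the
real one lives in haproxy's compiler headers and may differ):

  /* the system headers may already define an equivalent CONCAT(); drop it
   * and keep our own so we know exactly what we're running with.
   */
  #ifdef CONCAT
  # undef CONCAT
  #endif
  #define CONCAT(a, b) a ## b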
2025-04-02 11:36:43 +02:00
David Carlier
a703eeaef7 MINOR: cpu-topo: cpu_dump_topology() SMT info check little optimisation
Once we stumble across the first CPU matching the criteria, we exit
the loop early.
2025-04-02 11:31:37 +02:00
Willy Tarreau
3de99a0919 DOC: config: fix two missing "content" in "tcp-request" examples
As reported by Uku Srmus in GitHub issue #2917, two "tcp-request" rules
in an example were mistakenly missing the "content" hook, rendering them
invalid.
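
For reference, rules inspecting request contents must use the "content"
hook, e.g. (illustrative):

  frontend fe_main
      tcp-request inspect-delay 5s
      tcp-request content accept if { req.ssl_hello_type 1 }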

This can be backported.
2025-04-02 11:17:05 +02:00
Ilia Shipitsin
78b849b839 CLEANUP: assorted typo fixes in the code and comments
code, comments and doc actually.
2025-04-02 11:12:20 +02:00
Aurelien DARRAGON
423cca64b6 MINOR: log: support "raw" logformat node typecast
"raw" logformat node typecast is a special value (unlike str,bool,int..)
which tells haproxy to completely ignore logformat options (including
encoding ones) and force binary output for the current node only. It is
mainly intended for use with JSON or CBOR encoders in order to generate
nested CBOR or nested JSON by storing intermediate log-formats within
variables and assembling the final object in the parent log-format.

Example:

  http-request set-var-fmt(txn.intermediate) "%{+json}o %(lower)[str(value)]"

  log-format "%{+json}o %(upper)[str(value)] %(intermediate:raw)[var(txn.intermediate)]"

Would produce:

   {"upper": "value", "intermediate": {"lower": "value"}}
2025-04-02 11:04:43 +02:00
William Lallemand
31bd3627cd BUILD: ssl/ckch: potential null pointer dereference
src/ssl_ckch.c: In function ‘ckch_conf_parse’:
src/ssl_ckch.c:4852:40: error: potential null pointer dereference [-Werror=null-dereference]
 4852 |                                 while (*r) {
      |                                        ^~

Add a test on r before using *r.

No backport needed
2025-04-02 10:02:07 +02:00
William Lallemand
2e8acf54d4 BUG/MINOR: ssl/ckch: leak in error path
fdcb97614c ("MINOR: ssl/ckch: add substring parser for ckch_conf")
introduced a leak in the error path when strndup() fails.

This patch fixes issue #2920. No backport needed.
2025-04-02 09:53:48 +02:00
Olivier Houchard
9fe72bba3c MAJOR: leastconn: Revamp the way servers are ordered.
For leastconn, servers used to just be stored in an ebtree.
Each server would be one node.
Change that so that nodes contain multiple mt_lists. Each list
will contain servers that share the same key (typically meaning
they have the same number of connections). Using mt_lists means
that as long as tree elements already exist, moving a server from
one tree element to another no longer requires the lbprm write
lock.
We use multiple mt_lists to reduce the contention when moving
a server from one tree element to another. A list in the new
element will be chosen randomly.
We no longer remove a tree element as soon as it no longer
contains any server. Instead, we keep a list of all elements,
and when we need a new element, we pick one from that list if it
already contains enough of them, otherwise we allocate a new one.
Keeping nodes in the tree ensures that we very rarely have to take
the lbprm write lock (as it only happens when we're moving a server
to a position for which no element is currently in the tree).

The number of mt_lists used is defined as FWLC_NB_LISTS.
The number of tree elements we want to keep is defined as
FWLC_MIN_FREE_ENTRIES, both in defaults.h.
The values used were picked after experimentation and seem
to offer the best compromise between performance and memory
usage.

Doing that gives a good performance boost when a lot of
servers are used.
With a configuration using 500 servers, about 830000 requests per
second could be processed before this patch; with it, about 1550000
requests per second are processed, on a 64-core AMD machine, using
1200 concurrent connections.
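
A rough sketch of the resulting layout, using hypothetical names (the real
structures in the leastconn code differ in their details):

  struct lc_tree_elt {
      struct eb32_node node;                /* keyed by the servers' connection count */
      struct mt_list lists[FWLC_NB_LISTS];  /* servers sharing this key, one list picked randomly */
  };

A server sits in one of the mt_lists of the element matching its key, so
moving it to another existing element only involves list operations and
not the lbprm write lock.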
2025-04-01 18:05:30 +02:00
Olivier Houchard
ba521a1d88 MINOR: threads: Add HA_RWLOCK_TRYRDTOWR()
Add HA_RWLOCK_TRYRDTOWR(), that tries to upgrade a lock
from reader to writer, and fails if any seeker or writer already
holds it.
2025-04-01 18:05:30 +02:00
Olivier Houchard
2a9436f96b MINOR: lbprm: Add method to deinit server and proxy
Add two new methods to lbprm, server_deinit() and proxy_deinit(),
in case something should be done at the lbprm level when
removing servers and proxies.
2025-04-01 18:05:30 +02:00
Olivier Houchard
17059098e7 MINOR: mt_list: Implement mt_list_try_lock_prev().
Implement mt_list_try_lock_prev(), which does the same thing
as mt_list_lock_prev(), except that if the list is locked, it
returns { NULL, NULL } instead of waiting.
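
A hedged usage sketch, assuming the same calling convention and return
type as mt_list_lock_prev() (the surrounding code and field names are
hypothetical):

  struct mt_list back = mt_list_try_lock_prev(&elt->list);
  if (!back.next && !back.prev) {
      /* the list is currently locked by someone else: back off
       * instead of spinning.
       */
      return 0;
  }
  /* ... operate on the element, then restore/unlock using <back> ... */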
2025-04-01 18:05:30 +02:00
William Lallemand
fdcb97614c MINOR: ssl/ckch: add substring parser for ckch_conf
Add a substring parser for the ckch_conf keyword parser. It splits
a string into multiple substrings and strdup()s them into an array.
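
A generic sketch of the idea (illustrative only, not the actual ckch_conf
code; error handling is simplified):

  #include <stdlib.h>
  #include <string.h>

  /* split <str> on <sep> and return a NULL-terminated array of
   * newly allocated substrings (caller frees the entries and the array)
   */
  static char **split_substrings(const char *str, char sep)
  {
      size_t n = 1, i = 0;
      const char *p, *start = str;
      char **out;

      for (p = str; *p; p++)
          if (*p == sep)
              n++;

      out = calloc(n + 1, sizeof(*out));
      if (!out)
          return NULL;

      for (p = str; ; p++) {
          if (*p == sep || !*p) {
              out[i++] = strndup(start, p - start);
              start = p + 1;
              if (!*p)
                  break;
          }
      }
      return out;
  }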
2025-04-01 15:38:32 +02:00
William Lallemand
fa01c9d92b TESTS: jws: change the jwk format
The format of the jwk output changed a little bit because of the
previous commit.
2025-04-01 14:37:22 +02:00
William Lallemand
f8fe84caca MINOR: jws: emit the JWK thumbprint
jwk_thumbprint() is a function which implements RFC 7638 and emits a
JWK thumbprint from an EVP_PKEY.

EVP_PKEY_EC_to_pub_jwk() and EVP_PKEY_RSA_to_pub_jwk() were changed in
order to match what is required to emit a thumbprint (i.e., no spaces or
newlines, and the fields in lexicographic order).
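
For reference, RFC 7638 hashes the canonical JSON object containing only
the required public members, in lexicographic order, e.g. for an EC P-256
key (illustrative values):

  {"crv":"P-256","kty":"EC","x":"<base64url x>","y":"<base64url y>"}

The thumbprint is then the base64url encoding of the digest (typically
SHA-256) of that exact string.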
2025-04-01 11:57:55 +02:00
Willy Tarreau
ed1d4807da EXAMPLES: add "games.cfg" and an example game in Lua
The purpose is mainly to exhibit certain limitations that come with such
less common programming models, to show users how to program interactive
tools in Lua, and how to connect interactively.

Other use cases that could be envisioned are "top" and various monitoring
utilities, with sliding graphs etc. Lua is particularly attractive for
this usage, easy to program, well known from most AI tools (including its
integration into haproxy), making such programs very quick to obtain in
their basic form, and to improve later.

A very limited example game is provided, following the principle of a
very popular one, where the player must compose lines from falling
pieces. It quickly revealed the need for the ability to enforce a timeout
on applet:receive(). Other identified limitations include the difficulty,
from the Lua side, of monitoring multiple events at once, but it seems
that callbacks and/or event dispatchers would be useful here.

At the moment the CLI is not workable (its interactivity was broken in 2.9
when line buffering was adopted), though it was verified that it works
with older releases.

The command needed to connect to the game is displayed as a notice message
during boot.
2025-04-01 09:10:00 +02:00
Willy Tarreau
2c779f3938 BUG/MINOR: config: silence .notice/.warning/.alert in discovery mode
When first pre-parsing the config to detect the presence or absence of
the master mode, we must not emit messages because they are not supposed
to be visible at this point, otherwise they appear twice each. The
pre-parsing, also called discovery mode, is only for internal use,
thus it should remain silent.

This should be backported to 3.1 where this mode was introduced.
2025-04-01 09:06:25 +02:00
Willy Tarreau
9f00702dc6 MINOR: cpu-topo: add new cpu-policies "group-by-2-clusters" and above
This adds "group-by-{2,3,4}-clusters", which, as its name implies,
create one thread group per X clusters. This can be useful when CPUs
are split into too small clusters, as well as when the total number
of assigned cores is not even between the clusters, to try to spread
the load between less different ones.
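
Example (illustrative; assuming the global "cpu-policy" directive
introduced earlier):

  global
      cpu-policy group-by-2-clusters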
2025-03-31 16:21:37 +02:00
Willy Tarreau
1e9a2529aa MINOR: cpu-topo: pass an extra argument to ha_cpu_policy
This extra argument will allow common functions to distinguish between
multiple policies. For now it's not used.
2025-03-31 16:21:37 +02:00
Willy Tarreau
e4053b0d09 MINOR: cpu-topo: add a dump of thread-to-CPU mapping to -dc
When emitting the CPU topology info with -dc, also emit a list of
thread-to-CPU mapping. The group/thread and thread ID are emitted
with the list of their CPUs on each line. The count of CPUs is shown
to ease comparisons, and as much as possible, we try to pack identical
lines within a group by showing thread ranges.
2025-03-31 16:21:37 +02:00
Willy Tarreau
571573874a MINOR: cpu-set: add a new function to print cpu-sets in human-friendly mode
The new function "print_cpu_set()" will print cpu sets in a human-friendly
way, with commas and dashes for intervals. The goal is to keep them compact
enough.
2025-03-31 16:21:37 +02:00
Willy Tarreau
3955f151b1 MINOR: cpu-set: compare two cpu sets with ha_cpuset_isequal()
This function returns true if two CPU sets are equal.
2025-03-31 16:21:37 +02:00
Willy Tarreau
e17512c3b2 MINOR: thread: dump the CPU topology in thread_map_to_groups()
It was previously done in thread_detect_count() but that's not quite
handy because we still don't know about the groups setting. Better do
it slightly later and have all the relevant info instead.
2025-03-31 15:42:13 +02:00
Valentine Krasnobaeva
b303861469 MINOR: compiler: add __nonstring macro
GCC 15 throws the following warning on fixed-size char arrays if they do not
contain terminated NUL:

src/tools.c:2041:25: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (17 chars into 16 available) [-Werror=unterminated-string-initialization]
 2041 | const char hextab[16] = "0123456789ABCDEF";

We are using a couple of such definitions for some constants. Converting them
to arrays without an explicit size, like hextab[] = "0123456789ABCDEF", may
have consequences, as the enlarged arrays may no longer fit where they were
previously located due to memory alignment constraints.

GCC provides a 'nonstring' variable attribute for such char arrays, but
clang and other compilers don't have it. Let's wrap 'nonstring' in our
__nonstring macro, which tests whether the compiler supports this attribute.
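
A minimal sketch of such a wrapper (hypothetical; the real definition
lives in haproxy's compiler header and may differ):

  #if defined(__has_attribute)
  # if __has_attribute(nonstring)
  #  define __nonstring __attribute__((nonstring))
  # endif
  #endif
  #ifndef __nonstring
  # define __nonstring
  #endif

  /* keeps the fixed size without a NUL terminator, and silences GCC 15 */
  const char hextab[16] __nonstring = "0123456789ABCDEF";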

This fixes issue #2910.
2025-03-31 13:50:28 +02:00
Ilia Shipitsin
415d446065 CI: QUIC Interop on LibreSSL: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
8d591c387a CI: QUIC Interop on AWS-LC: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
7de45e3874 CI: NetBSD: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
8231f58fdc CI: Illumos: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
7495dbed22 CI: cross compile: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
7eb54656ae CI: coverity scan: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
424ca19831 CI: spellcheck: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
d9cb95c2a5 CI: fedora rawhide: install "awk" as a dependency
For some reason it is no longer installed by default on rawhide.
2025-03-28 09:51:35 +01:00
Ilia Shipitsin
21894300c1 CI: fedora rawhide: allow "on: workflow_dispatch" in forks
Previously that build was limited to the "haproxy" GitHub organization
only. Let's allow manual builds from forks.
2025-03-28 09:51:35 +01:00
Valentine Krasnobaeva
44f98f1747 BUG/MINOR: log: fix gcc warn about truncating NUL terminator while init char arrays
gcc 15 throws this kind of warning about the initialization of some char arrays:

src/log.c:181:33: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (17 chars into 16 available) [-Werror=unterminated-string-initialization]
  181 | const char sess_term_cond[16] = "-LcCsSPRIDKUIIII"; /* normal, Local, CliTo, CliErr, SrvTo, SrvErr, PxErr, Resource, Internal, Down, Killed, Up, -- */
      |                                 ^~~~~~~~~~~~~~~~~~
src/log.c:182:33: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (9 chars into 8 available) [-Werror=unterminated-string-initialization]
  182 | const char sess_fin_state[8]  = "-RCHDLQT";     /* cliRequest, srvConnect, srvHeader, Data, Last, Queue, Tarpit */

So, let's make it happy by not giving the sizes of these char arrays
explicitly, so that it can accommodate their NUL terminators.
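
In other words, the declarations simply become:

  const char sess_term_cond[] = "-LcCsSPRIDKUIIII";
  const char sess_fin_state[] = "-RCHDLQT";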

Reported in GitHub issue #2910.

This should be backported up to 2.6.
2025-03-27 11:52:33 +01:00
Willy Tarreau
9b53a4a7fb REGTESTS: disable the test balance/balance-hash-maxqueue
This test brought by commit 8ed1e91efd ("MEDIUM: lb-chash: add directive
hash-preserve-affinity") seems to have hit a limitation of what can be
expressed in vtc, as it would be desirable to have one server response
release two clients at once but the various attempts using barriers
have failed so far. The test seems to work fine locally but still fails
almost 100% of the time on the CI, so it remains timing-dependent in
some ways. Tests have been done with nbthread 1, pool-idle-shared off,
http-reuse never (since it always fails locally), etc., but to no avail.
Let's just mark it broken in case we later figure out another way to fix
it. It's
still usable locally most of the time, though.
2025-03-25 18:24:49 +01:00
Willy Tarreau
6b17310757 MEDIUM: pools: be a bit smarter when merging comparable size pools
By default, pools of comparable sizes are merged together. However, the
current algorithm is dumb: it rounds the requested size to the next
multiple of 16 and compares the sizes like this. This results in many
entries which are already multiples of 16 not being merged, for example
1024 and 1032 are separate, 65536 and 65540 are separate, 48 and 56 are
separate (though 56 merges with 64).

This commit changes this to consider not just the entry size but also the
average entry size, that is, it compares the average size of all objects
sharing the pool with the size of the object looking for a pool. If the
object is no more than 1% bigger or smaller than the current average
size, or if it differs from it by no more than 16 bytes, it can be merged.
Also, it always respects exact matches in order to avoid merging objects
into larger pools or worse, extending existing ones for no reason, and
when there's a tie, it always avoids extending an existing pool.
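
A hedged sketch of the size check, with hypothetical names (the real code
also handles exact matches and the tie-breaking described above):

  /* may an object of <size> bytes share a pool whose current average
   * object size is <avg>? (within 1% or within 16 bytes)
   */
  static int may_share_pool(size_t size, size_t avg)
  {
      size_t diff = (size > avg) ? size - avg : avg - size;

      return diff <= 16 || diff * 100 <= avg;
  }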

Also, we now visit all existing pools in order to spot the best one; we
no longer stop at the smallest one that is large enough. Theoretically
this could cost a bit of CPU, but in practice it's O(N^2) with N quite
small (typically on the order of 100) and the cost of each step is very
low (comparing a few integer values). As a side effect, pools are no
longer sorted by size; "show pools bysize" is needed for that.

This causes the objects to be much better grouped together, accepting to
use a little bit more memory sometimes to avoid fragmentation, without
causing everything to be merged into the same pool. Thanks to this we're now
seeing 36 pools instead of 48 by default, with some very nice examples
of compact grouping:

  - Pool qc_stream_r (80 bytes) : 13 users
      >  qc_stream_r : size=72 flags=0x1 align=0
      >  quic_cstrea : size=80 flags=0x1 align=0
      >  qc_stream_a : size=64 flags=0x1 align=0
      >  hlua_esub   : size=64 flags=0x1 align=0
      >  stconn      : size=80 flags=0x1 align=0
      >  dns_query   : size=64 flags=0x1 align=0
      >  vars        : size=80 flags=0x1 align=0
      >  filter      : size=64 flags=0x1 align=0
      >  session pri : size=64 flags=0x1 align=0
      >  fcgi_hdr_ru : size=72 flags=0x1 align=0
      >  fcgi_param_ : size=72 flags=0x1 align=0
      >  pendconn    : size=80 flags=0x1 align=0
      >  capture     : size=64 flags=0x1 align=0

  - Pool h3s (56 bytes) : 17 users
      >  h3s         : size=56 flags=0x1 align=0
      >  qf_crypto   : size=48 flags=0x1 align=0
      >  quic_tls_se : size=48 flags=0x1 align=0
      >  quic_arng   : size=56 flags=0x1 align=0
      >  hlua_flt_ct : size=56 flags=0x1 align=0
      >  promex_metr : size=48 flags=0x1 align=0
      >  conn_hash_n : size=56 flags=0x1 align=0
      >  resolv_requ : size=48 flags=0x1 align=0
      >  mux_pt      : size=40 flags=0x1 align=0
      >  comp_state  : size=40 flags=0x1 align=0
      >  notificatio : size=48 flags=0x1 align=0
      >  tasklet     : size=56 flags=0x1 align=0
      >  bwlim_state : size=48 flags=0x1 align=0
      >  xprt_handsh : size=48 flags=0x1 align=0
      >  email_alert : size=56 flags=0x1 align=0
      >  caphdr      : size=41 flags=0x1 align=0
      >  caphdr      : size=41 flags=0x1 align=0

  - Pool quic_cids (32 bytes) : 13 users
      >  quic_cids   : size=16 flags=0x1 align=0
      >  quic_tls_ke : size=32 flags=0x1 align=0
      >  quic_tls_iv : size=12 flags=0x1 align=0
      >  cbuf        : size=32 flags=0x1 align=0
      >  hlua_queuew : size=24 flags=0x1 align=0
      >  hlua_queue  : size=24 flags=0x1 align=0
      >  promex_modu : size=24 flags=0x1 align=0
      >  cache_st    : size=24 flags=0x1 align=0
      >  spoe_appctx : size=32 flags=0x1 align=0
      >  ehdl_sub_tc : size=32 flags=0x1 align=0
      >  fcgi_flt_ct : size=16 flags=0x1 align=0
      >  sig_handler : size=32 flags=0x1 align=0
      >  pipe        : size=24 flags=0x1 align=0

  - Pool quic_crypto (1032 bytes) : 2 users
      >  quic_crypto : size=1032 flags=0x1 align=0
      >  requri      : size=1024 flags=0x1 align=0

  - Pool quic_conn_r (65544 bytes) : 2 users
      >  quic_conn_r : size=65536 flags=0x1 align=0
      >  dns_msg_buf : size=65540 flags=0x1 align=0

On a very unscientific test consisting of sending 1 million H1 requests
and 1 million H2 requests to the stats page, we're seeing ~6% lower
memory usage with the patch:

  before the patch:
    Total: 48 pools, 4120832 bytes allocated, 4120832 used (~3555680 by thread caches).

  after the patch:
    Total: 36 pools, 3880648 bytes allocated, 3880648 used (~3299064 by thread caches).

This should be taken with care however since pools allocate and release
in batches.
2025-03-25 18:01:01 +01:00
Pierre-Andre Savalle
8ed1e91efd MEDIUM: lb-chash: add directive hash-preserve-affinity
When using hash-based load balancing, requests are always assigned to
the server corresponding to the hash bucket for the balancing key,
without taking maxconn or maxqueue into account, unlike in other load
balancing methods like 'first'. This adds a new backend directive that
can be used to take maxconn and possibly maxqueue into account in that
context. It can be used when hashing is desired to achieve cache locality, but
sending requests to a different server is preferable to queuing for a
long time or failing requests when the initial server is saturated.

By default, affinity is preserved as was the case previously. When
'hash-preserve-affinity' is set to 'maxqueue', servers are considered
successively in the order of the hash ring until a server that does not
have a full queue is found.
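
Example of the 'maxqueue' behaviour (illustrative configuration):

  backend be_cache
      balance uri
      hash-type consistent
      hash-preserve-affinity maxqueue
      server s1 192.0.2.10:80 maxconn 100 maxqueue 5
      server s2 192.0.2.11:80 maxconn 100 maxqueue 5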

When 'maxconn' is set on a server, queueing cannot be disabled, as
'maxqueue=0' means unlimited.  To support picking a different server
when a server is at 'maxconn' irrespective of the queue,
'hash-preserve-affinity' can be set to 'maxconn'.
2025-03-25 18:01:01 +01:00
Amaury Denoyelle
cf9e40bd8a MINOR: quic: define max-stream-data configuration as a ratio
2025-03-25 16:30:35 +01:00
Amaury Denoyelle
68c10d444d MINOR: mux-quic: define config for max-data
Define a new global configuration tune.quic.frontend.max-data. This
allows users to explicitly set the value for the corresponding QUIC TP
initial-max-data, with direct impact on haproxy memory consumption.
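
Example (illustrative value):

  global
      tune.quic.frontend.max-data 1048576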
2025-03-25 16:30:09 +01:00
Amaury Denoyelle
1f1a18e318 MINOR: quic: ignore uni-stream for initial max data TP
The initial TP value for max-data is automatically calculated so that it
is adjusted to the maximum number of opened streams over a QUIC connection.
This took into account both max-streams-bidi-remote and uni streams. By
default, this is equivalent to 100 + 3 = 103 max opened streams.

This patch simplifies the calculation by only using bidirectional
streams. Uni streams are ignored because they are only used for HTTP/3
control exchanges, which should only represent a few bytes. For now,
users can only configure the max number of remote bidi streams, so the
simplified calculation should make more sense to them.

Note that this relies on the assumption that HTTP/3 is used as
application protocol. To support other protocols, it may be necessary to
review this and take into account both local bidi and uni streams.
2025-03-25 16:29:38 +01:00
Amaury Denoyelle
3db5320289 CLEANUP: quic: reorganize TP flow-control initialization
Adjust initialization of flow-control transport parameters via
quic_transport_params_init().

This is purely cosmetic, with some comments added. It is also a
preparatory step for future patches adding new configuration keywords
related to flow-control TP values.
2025-03-25 16:29:35 +01:00
Amaury Denoyelle
a71007c088 MINOR: quic: move global tune options into quic_tune
A new structure quic_tune has recently been defined. Its purpose is to
store global options related to QUIC. Previously, only the tunable to
toggle pacing was stored in it.

This commit moves several QUIC-related tunables from the global structure
to the quic_tune structure. This better centralizes QUIC configuration
options and gives room for future generic options.
2025-03-24 10:01:46 +01:00
Willy Tarreau
119a79f479 [RELEASE] Released version 3.2-dev8
Released version 3.2-dev8 with the following main changes :
    - MINOR: jws: implement JWS signing
    - TESTS: jws: implement a test for JWS signing
    - CI: github: add "jose" to apt dependencies
    - CLEANUP: log-forward: remove useless options2 init
    - CLEANUP: log: add syslog_process_message() helper
    - MINOR: proxy: add proxy->options3
    - MINOR: log: migrate log-forward options from proxy->options2 to options3
    - MINOR: log: provide source address information in syslog_process_message()
    - MINOR: tools: only print address in sa2str() when port == -1
    - MINOR: log: add "option host" log-forward option
    - MINOR: log: handle log-forward "option host"
    - MEDIUM: log: change default "host" strategy for log-forward section
    - BUG/MEDIUM: thread: use pthread_self() not ha_pthread[tid] in set_affinity
    - MINOR: compiler: add a simple macro to concatenate resolved strings
    - MINOR: compiler: add a new __decl_thread_var() macro to declare local variables
    - BUILD: tools: silence a build warning when USE_THREAD=0
    - BUILD: backend: silence a build warning when threads are disabled
    - DOC: management: rename some last occurences from domain "dns" to "resolvers"
    - BUG/MINOR: stats: fix capabilities and hide settings for some generic metrics
    - MINOR: cli: export cli_io_handler() to ease symbol resolution
    - MINOR: tools: improve symbol resolution without dl_addr
    - MINOR: tools: ease the declaration of known symbols in resolve_sym_name()
    - MINOR: tools: teach resolve_sym_name() a few more common symbols
    - BUILD: tools: avoid a build warning on gcc-4.8 in resolve_sym_name()
    - DEV: ncpu: also emulate sysconf() for _SC_NPROCESSORS_*
    - DOC: design-thoughts: commit numa-auto.txt
    - MINOR: cpuset: make the API support negative CPU IDs
    - MINOR: thread: rely on the cpuset functions to count bound CPUs
    - MINOR: cpu-topo: add ha_cpu_topo definition
    - MINOR: cpu-topo: allocate and initialize the ha_cpu_topo array.
    - MINOR: cpu-topo: rely on _SC_NPROCESSORS_CONF to trim maxcpus
    - MINOR: cpu-topo: add a function to dump CPU topology
    - MINOR: cpu-topo: update CPU topology from excluded CPUs at boot
    - REORG: cpu-topo: move bound cpu detection from cpuset to cpu-topo
    - MINOR: cpu-topo: add detection of online CPUs on Linux
    - MINOR: cpu-topo: add detection of online CPUs on FreeBSD
    - MINOR: cpu-topo: try to detect offline cpus at boot
    - MINOR: cpu-topo: add CPU topology detection for linux
    - MINOR: cpu-topo: also store the sibling ID with SMT
    - MINOR: cpu-topo: add NUMA node identification to CPUs on Linux
    - MINOR: cpu-topo: add NUMA node identification to CPUs on FreeBSD
    - MINOR: thread: turn thread_cpu_mask_forced() into an init-time variable
    - MINOR: cfgparse: move the binding detection into numa_detect_topology()
    - MINOR: cfgparse: use already known offline CPU information
    - MINOR: global: add a command-line option to enable CPU binding debugging
    - MINOR: cpu-topo: add a new "cpu-set" global directive to choose cpus
    - MINOR: cpu-topo: add "drop-cpu" and "only-cpu" to cpu-set
    - MEDIUM: thread: start to detect thread groups and threads min/max
    - MEDIUM: cpu-topo: make sure to properly assign CPUs to threads as a fallback
    - MEDIUM: thread: reimplement first numa node detection
    - MEDIUM: cfgparse: remove now unused numa & thread-count detection
    - MINOR: cpu-topo: refine cpu dump output to better show kept/dropped CPUs
    - MINOR: cpu-topo: fall back to nominal_perf and scaling_max_freq for the capacity
    - MINOR: cpu-topo: use cpufreq before acpi cppc
    - MINOR: cpu-topo: boost the capacity of performance cores with cpufreq
    - MINOR: cpu-topo: skip CPU detection when /sys/.../cpu does not exist
    - MINOR: cpu-topo: skip identification of non-existing CPUs
    - MINOR: cpu-topo: skip CPU properties that we've verified do not exist
    - MINOR: cpu-topo: implement a sorting mechanism for CPU index
    - MINOR: cpu-topo: implement a sorting mechanism by CPU locality
    - MINOR: cpu-topo: implement a CPU sorting mechanism by cluster ID
    - MINOR: cpu-topo: ignore single-core clusters
    - MINOR: cpu-topo: assign clusters to cores without and renumber them
    - MINOR: cpu-topo: make sure we don't leave unassigned IDs in the cpu_topo
    - MINOR: cpu-topo: assign an L3 cache if more than 2 L2 instances
    - MINOR: cpu-topo: renumber cores to avoid holes and make them contiguous
    - MINOR: cpu-topo: add a function to sort by cluster+capacity
    - MINOR: cpu-topo: consider capacity when forming clusters
    - MINOR: cpu-topo: create an array of the clusters
    - MINOR: cpu-topo: ignore excess of too small clusters
    - MINOR: cpu-topo: add "only-node" and "drop-node" to cpu-set
    - MINOR: cpu-topo: add "only-thread" and "drop-thread" to cpu-set
    - MINOR: cpu-topo: add "only-core" and "drop-core" to cpu-set
    - MINOR: cpu-topo: add "only-cluster" and "drop-cluster" to cpu-set
    - MINOR: cpu-topo: add a CPU policy setting to the global section
    - MINOR: cpu-topo: add a 'first-usable-node' cpu policy
    - MEDIUM: cpu-topo: use the "first-usable-node" cpu-policy by default
    - CLEANUP: thread: now remove the temporary CPU node binding code
    - MINOR: cpu-topo: add cpu-policy "group-by-cluster"
    - MEDIUM: cpu-topo: let the "group-by-cluster" split groups
    - MINOR: cpu-topo: add a new "performance" cpu-policy
    - MINOR: cpu-topo: add a new "efficiency" cpu-policy
    - MINOR: cpu-topo: add a new "resource" cpu-policy
    - MINOR: jws: add new functions in jws.h
    - MINOR: cpu-topo: fix unused stack var 'cpu2' reported by coverity
    - MINOR: hlua: add an optional timeout to AppletTCP:receive()
    - MINOR: jws: use jwt_alg type instead of a char
    - BUG/MINOR: log: prevent saddr NULL deref in syslog_io_handler()
    - MINOR: stream: decrement srv->served after detaching from the list
    - BUG/MINOR: hlua: fix optional timeout argument index for AppletTCP:receive()
    - MINOR: server: simplify srv_has_streams()
    - CLEANUP: server: make it clear that srv_check_for_deletion() is thread-safe
    - MINOR: cli/server: don't take thread isolation to check for srv-removable
    - BUG/MINOR: limits: compute_ideal_maxconn: don't cap remain if fd_hard_limit=0
    - MINOR: limits: fix check_if_maxsock_permitted description
    - BUG/MEDIUM: hlua/cli: fix cli applet UAF in hlua_applet_wakeup()
    - MINOR: tools: path_base() concatenates a path with a base path
    - MEDIUM: ssl/ckch: make the ckch_conf more generic
    - BUG/MINOR: mux-h2: Reset streams with NO_ERROR code if full response was already sent
    - MINOR: stats: add .generic explicit field in stat_col struct
    - MINOR: stats: STATS_PX_CAP___B_ macro
    - MINOR: stats: add .cap for some static metrics
    - MINOR: stats: use stat_col storage stat_cols_info
    - MEDIUM: promex: switch to using stat_cols_info for global metrics
    - MINOR: promex: expose ST_I_INF_WARNINGS (AKA total_warnings) metric
    - MEDIUM: promex: switch to using stat_cols_px for front/back/server metrics
    - MINOR: stats: explicitly add frontend cap for ST_I_PX_REQ_TOT
    - CLEANUP: promex: remove unused PROMEX_FL_{INFO,FRONT,BACK,LI,SRV} flags
    - BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
    - BUG/MINOR: mux-quic: remove extra BUG_ON() in _qcc_send_stream()
    - MEDIUM: mt_list: Reduce the max number of loops with exponential backoff
    - MINOR: stats: add alt_name field to stat_col struct
    - MINOR: stats: add alt name info to stat_cols_info where relevant
    - MINOR: promex: get rid of promex_global_metric array
    - MINOR: stats-proxy: add alt_name field for ME_NEW_{FE,BE,PX} helpers
    - MINOR: stats-proxy: add alt name info to stat_cols_px where relevant
    - MINOR: promex: get rid of promex_st_metrics array
    - MINOR: pools: rename the "by_what" field of the show pools context to "how"
    - MINOR: cli/pools: record the list of pool registrations even when merging them
2025-03-21 17:33:36 +01:00
Willy Tarreau
9091c5317f MINOR: cli/pools: record the list of pool registrations even when merging them
By default, create_pool() tries to merge similar pools into one. But when
dealing with certain bugs, it's hard to say which ones were merged together.
We do have the information at registration time, so let's just create a
list of registrations ("pool_registration") attached to each pool, that
will store that information. It can then be consulted on the CLI using
"show pools detailed", where the names, sizes, alignment and flags are
reported.
2025-03-21 17:09:30 +01:00
Willy Tarreau
baf8b742b4 MINOR: pools: rename the "by_what" field of the show pools context to "how"
The goal will be to support other dump options. We don't need 32 bits to
express sorting criteria, so let's reserve only 4 bits for them and leave
the remaining ones unused.
2025-03-21 17:09:30 +01:00