18755 Commits

Christopher Faulet
36d151dc10 MEDIUM: stream: No longer use TASK_F_UEVT* to shut a stream down
Thanks to the previous patch, it is now possible to explicitly rely on
stream's events to shut it down. The right event is set in
stream_shutdown(), before waking up the stream, via an atomic operation. In
process_stream(), this event will be handled as expected.

Thus, the TASK_F_UEVT* flags are no longer used by streams. They are not
removed though, since they remain usable for other tasks.

This patch depends on "MEDIUM: stream: Map task wake up reasons to dedicated
stream events".
2025-01-28 14:53:37 +01:00
Christopher Faulet
6048460102 MEDIUM: stream: Map task wake up reasons to dedicated stream events
To fix thread-safety issues when a stream must be shut, three new task
states were added. These states are generic (UEVT1, UEVT2 and UEVT3), and
the task callback function is responsible for deciding what to do with
them. However, this does not really scale.

A better approach is to use an atomic field in the stream structure itself
to deal with these dedicated events. There is already the "pending_events"
field that saves wake-up reasons (TASK_WOKEN_*) so they are not lost if
process_stream() is interrupted before it had a chance to handle them.

So the idea is to introduce a new field to handle stream-dedicated events
and merge them with the task's wake-up reasons used by the stream. This
means a mapping must be performed between some task wake-up reasons and
stream events. Note that not all task wake-up reasons will be mapped.

In this patch, the "new_events" field is introduced. It is an atomic
bit-field. Stream events (STRM_EVT_*) are also introduced to map the task
wake-up reasons used by process_stream(). Only TASK_WOKEN_TIMER and
TASK_WOKEN_MSG are mapped, in addition to the TASK_F_UEVT* flags. In
process_stream(), the "pending_events" field is now filled with the new
stream events and the mapping of the wake-up reasons.
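
For illustration only, the mapping could roughly look like the sketch
below; the stand-in types and flag values are simplified assumptions, not
the actual haproxy structures:

  #include <stdatomic.h>

  /* simplified stand-ins for the real task/stream flags */
  #define TASK_WOKEN_TIMER  0x01
  #define TASK_WOKEN_MSG    0x02

  #define STRM_EVT_TIMER    0x01
  #define STRM_EVT_MSG      0x02

  struct stream_sketch {
      _Atomic unsigned int new_events;   /* atomic bit-field of STRM_EVT_* */
      unsigned int pending_events;       /* consumed by process_stream()   */
  };

  /* map the subset of task wake-up reasons used by process_stream() onto
   * stream events and merge them atomically into the stream
   */
  static void strm_map_task_events(struct stream_sketch *s, unsigned int wake)
  {
      unsigned int evts = 0;

      if (wake & TASK_WOKEN_TIMER)
          evts |= STRM_EVT_TIMER;
      if (wake & TASK_WOKEN_MSG)
          evts |= STRM_EVT_MSG;

      if (evts)
          atomic_fetch_or_explicit(&s->new_events, evts, memory_order_relaxed);
  }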
2025-01-28 14:53:37 +01:00
Christopher Faulet
0a52a75ef7 BUG/MINOR: stream: Properly handle "on-marked-up shutdown-backup-sessions"
The shutdown-backup-sessions action of the on-marked-up directive no longer
works since the stream_shutdown() function was modified to be async-safe.

When stream_shutdown() was modified to be async-safe, dedicated task events
were added to map the reasons to shut a stream down. SF_ERR_DOWN was mapped
to TASK_F_UEVT1 and SF_ERR_KILLED was mapped to TASK_F_UEVT2. The reverse
mapping was performed by process_stream() to shut the stream with the
appropriate reason.

However, SF_ERR_UP reason, used by shutdown-backup-sessions action to shut a
stream down because a preferred server became available, was not mapped in the
same way. So since commit b8e3b0a18d ("BUG/MEDIUM: stream: make
stream_shutdown() async-safe"), this action is ignored and does not work
anymore.

To fix the issue, and to be able to backport the fix, a third task event
was added. TASK_F_UEVT3 is now mapped to SF_ERR_UP.

This patch should fix the issue #2848. It must be backported as far as 2.6.
2025-01-28 14:53:37 +01:00
Olivier Houchard
26b3e5236f MEDIUM: servers/proxies: Switch to using per-tgroup queues.
For both servers and proxies, use one connection queue per thread group
instead of only one. Having only one can lead to severe performance issues
on NUMA machines; it is actually trivial to get the watchdog to trigger on
an AMD machine with a server using a maxconn of 96 and an injector that
uses 160 concurrent connections.
We now have one queue per thread group; however, when dequeuing, we dequeue
up to MAX_SELF_USE_QUEUE (currently 9) pendconns from our own queue before
dequeuing one from another thread group, if available, to make sure
everybody is still running.
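
A minimal sketch of such a self-first dequeuing policy, with hypothetical
names and none of the actual pendconn machinery:

  /* illustrative policy only; the real haproxy code differs */
  #define MAX_SELF_USE_QUEUE 9

  struct tgrp_queue_sketch {
      int self_uses;  /* dequeues served from our own queue in a row */
  };

  /* returns 1 if the caller should dequeue from its own thread-group
   * queue, 0 if it should pick one entry from another group so that no
   * group is left starving
   */
  static int dequeue_from_self(struct tgrp_queue_sketch *q, int others_nonempty)
  {
      if (!others_nonempty)
          return 1;
      if (q->self_uses < MAX_SELF_USE_QUEUE) {
          q->self_uses++;
          return 1;
      }
      q->self_uses = 0;
      return 0;
  }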
2025-01-28 12:49:41 +01:00
Olivier Houchard
583303c48b MINOR: proxies/servers: Calculate queueslength and use it.
For both proxies and servers, properly calculate queueslength, which is
the total number of elements in all the queues (as they currently use only
one queue, it is equivalent to the number of elements in that queue), and
use it instead of the queue's length.
2025-01-28 12:49:41 +01:00
Olivier Houchard
59eddabe16 MINOR: Add fields to the per-thread group field in struct server.
Add a per-thread-group queue and associated fields to the per-thread-group
field in struct server, as well as a new field, queueslength.
This is currently unused, so it should change nothing.
2025-01-28 12:49:41 +01:00
Olivier Houchard
f879b9a18a MINOR: proxies: Add a per-thread group field to struct proxy.
Add a per-thread-group field to struct proxy, which will contain a struct
queue, as well as a new field, "queueslength".
This is currently unused, so it should change nothing.
Please note that proxy_init_per_thr() must now be called for each proxy
once the number of thread groups is known.
2025-01-28 12:49:41 +01:00
Willy Tarreau
7fa70da06d MINOR: epoll: permit to mask certain specific events
A few times in the past we've seen cases where epoll was caught reporting
a wrong event that caused trouble (e.g. spuriously reporting HUP or RDHUP
after a successful connect()). The new tune.epoll.mask-events directive
permits masking events such as ERR, HUP and RDHUP and converting them to
IN events that are processed by the regular receive path. This should help
better diagnose and troubleshoot issues such as the ones below, as well as
rule out such a cause when similar issues are reported:

   https://github.com/haproxy/haproxy/issues/2368
   https://www.spinics.net/lists/netdev/msg876470.html

It should be harmless to backport this if necessary.
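
As an illustration of the idea only (not the actual implementation), the
conversion applied to the events returned by epoll_wait() could look like
the following, with the mask assumed to come from the new directive:

  #include <sys/epoll.h>

  /* example mask assumed to be built from tune.epoll.mask-events */
  static unsigned int masked_events = EPOLLERR | EPOLLHUP | EPOLLRDHUP;

  /* turn masked error/hangup reports into plain readiness so that the
   * regular receive path observes the error itself
   */
  static unsigned int epoll_filter_events(unsigned int ev)
  {
      if (ev & masked_events) {
          ev &= ~masked_events;
          ev |= EPOLLIN;
      }
      return ev;
  }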
2025-01-27 15:47:46 +01:00
Aurelien DARRAGON
e768a531b7 CLEANUP: tree-wide: define and use acl_match_cond() helper
acl_match_cond() combines acl_exec_cond() + acl_pass() and a check on
condition->pol (to check whether the cond is inverted) in order to return
either 0 if the cond doesn't match or 1 if it matches (or if it is NULL).

Thanks to this we can actually simplify some redundant constructs that
iterate over rules and evaluate if the condition matches or not.

Conditions for tcp-request inspect-content and tcp-response
inspect-content couldn't be simplified because they perform an extra
check for missing data, and thus still need to leverage acl_exec_cond().

It's best to display the patch using "-w", like "git show xxxx -w",
because some blocks had to be re-indented after the cleanup, which
makes the patch hard to review by default.
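
For illustration, the combined-helper pattern looks roughly like the
sketch below, using simplified stand-in types instead of the real
struct acl_cond and sample machinery:

  /* simplified stand-ins; the real haproxy types and prototypes differ */
  enum { POL_IF = 0, POL_UNLESS = 1 };
  enum { COND_NO_MATCH = 0, COND_MATCH = 1 };

  struct cond_sketch {
      int pol;                   /* POL_IF or POL_UNLESS      */
      int (*exec)(void *ctx);    /* returns a raw ACL verdict */
  };

  /* returns 1 when the (possibly inverted) condition matches or when no
   * condition is attached at all, 0 otherwise
   */
  static int match_cond_sketch(struct cond_sketch *cond, void *ctx)
  {
      int ret;

      if (!cond)
          return 1;

      ret = (cond->exec(ctx) == COND_MATCH);
      if (cond->pol == POL_UNLESS)
          ret = !ret;
      return ret;
  }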
2025-01-27 11:11:43 +01:00
Valentine Krasnobaeva
846819b316 CLEANUP: ssl: rename ssl_sock_load_ca to ssl_sock_gencert_load_ca
ssl_sock_load_ca is defined in ssl_gencert.c and compiled only if
SSL_NO_GENERATE_CERTIFICATES is not defined. Its name is a bit confusing,
as one may think at first glance that it's a generic function which is
also used to load the CA file provided via the 'ca-file' keyword.
ssl_set_verify_locations_file is used in that case.

So let's rename ssl_sock_load_ca into ssl_sock_gencert_load_ca. Same is
applied to ssl_sock_free_ca.
2025-01-24 12:31:07 +01:00
Valentine Krasnobaeva
c987f30245 BUG/MINOR: ssl: put ssl_sock_load_ca under SSL_NO_GENERATE_CERTIFICATES
ssl_sock_load_ca and ssl_sock_free_ca definitions are compiled only if
SSL_NO_GENERATE_CERTIFICATES is not set. When this define is set and
haproxy is built, the linker throws an error. So, let's fix this.

This should be backported in all stable versions.
2025-01-24 12:31:07 +01:00
Valentine Krasnobaeva
8620ae7962 MINOR: debug: show boot and runtime process settings in table
Let's reformat the output of "show dev" in order to show some boot and
runtime process settings in a table. This makes the output less crowded.
2025-01-24 09:54:57 +01:00
Valentine Krasnobaeva
df7f16d960 MINOR: debug: debug_parse_cli_show_dev: use errname
Let's use errname, introduced in the previous commit, in the output of
"show dev". This output is intended for engineers, so there is no need to
provide the long descriptions of errnos given by strerror.
2025-01-24 09:54:57 +01:00
Amaury Denoyelle
42bac9339c MINOR: quic: adapt credit based pacing to BBR
Credit-based pacing has been further refined to dynamically calculate the
burst size based on congestion parameters. However, the BBR algorithm
already provides a pacing rate and a burst size (labelled send_quantum)
for 1ms of emission.

Adapt quic_pacing_reload() to use the BBR values to compute the pacing
credit. This is done via the pacing_burst callback, which is now only
defined for BBR. For other algorithms, the burst size over 1ms is
determined from the congestion window size and the RTT.

This should be backported up to 3.1.
2025-01-23 17:41:07 +01:00
Amaury Denoyelle
7896edccdc MINOR: quic: remove unused pacing burst in bind_conf/quic_cc_path
The pacing burst size is now dynamic. As such, the configuration value has
been removed and the related fields in the bind_conf and quic_cc_path
structures can be safely removed.

This should be backported up to 3.1.
2025-01-23 17:40:48 +01:00
Amaury Denoyelle
cb91ccd8a8 MEDIUM: quic: use dynamic credit for pacing
Major improvements have been introduced in pacing recently. Most notably,
QMUX schedules emission on a millisecond resolution, which allows using
passive wait and is much more CPU friendly.

However, an issue remains with the pacing max credit. Unless BBR is used,
it is fixed to the value configured on the quic-cc-algo bind statement.
This is not practical: if too low, it may drastically reduce performance
due to the 1ms sleep resolution; if too high, some clients will suffer
from too much packet loss.

This commit fixes the issue by implementing a dynamic maximum credit value
based on the network conditions specific to each client. The calculation
picks a maximum value which should allow the current QMUX tasklet context
to emit enough data to cover the delay until the next tasklet invocation.
As such, avg_loop_us is used to detect the process load. If it is too
small, 1.5ms is used as the minimal value, to cover the extra delay
incurred by the system for a default 1ms sleep.

This should be backported up to 3.1.
2025-01-23 17:40:48 +01:00
Amaury Denoyelle
8098be1fdc MEDIUM: mux-quic: reduce pacing CPU usage with passive wait
The pacing algorithm has been revamped in the previous commit to implement
a credit-based solution. This is a far more adaptive solution, in
particular because it allows catching up when the pause between pacing
emissions was longer than expected.

This allows QMUX to remove the active loop based on tasklet wake-ups.
Instead, a new task is used when emission should be paced. The main
advantage is that CPU usage is drastically reduced.

The new pacing task timer is reset each time qcc_io_send() is invoked. The
timer will be set only if the pacing engine reports that emission must be
interrupted. In this case the timer is set via qcc_wakeup_pacing() to the
delay reported by the congestion algorithm, or 1ms if the delay is too
short. At the end of qcc_io_cb(), the pacing task is queued if the timer
has been set.

Pacing task execution is simple enough: it immediately wakes up the QCC
I/O handler.

Note that to have decent performance, it requires a large enough burst
defined in the quic-cc-algo configuration. However, this value is common
to every client of the listener, which may cause too much loss under some
network conditions. This will be addressed in a future patch.

This should be backported up to 3.1.
2025-01-23 17:40:22 +01:00
Amaury Denoyelle
4489a61585 MEDIUM: quic: implement credit based pacing
Implement a new method for QUIC pacing emission based on credit. This
represents the number of packets which can be emitted in a single burst.
After emission, the number of emitted packets is decremented from the
credit. Several emissions can be conducted in the same sequence until the
credit is completely consumed.

When a new emission sequence is initiated (i.e. under a new QMUX tasklet
invocation), the credit is refilled according to the delay which occurred
between the last and the current emission context.

The main advantage of this new mechanism is that it allows conducting
several emissions in the same task context without having to wait between
each invocation. A wait is only forced if pacing is expired, which is now
equivalent to having a null credit.

Furthermore, if the delay between two emission sequences is smaller than
expected, the credit is only partially refilled. This allows restarting
emission without having to wait for the whole credit to be available.

On the implementation side, a new field <credit> is available in the
quic_pacer structure. It is automatically decremented on
quic_pacing_sent_done() invocation. Also, a new function
quic_pacing_reload() must be used by QUIC MUX when a new emission sequence
is initiated to refill the credit. The <next> field from quic_pacer has
been removed.

For the moment, the credit is based on the burst configured via the
quic-cc-algo keyword, or directly reported by BBR.

This should be backported up to 3.1.
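
A minimal, self-contained sketch of the credit accounting described above,
with hypothetical names rather than the actual quic_pacer API:

  #include <stdint.h>

  struct pacer_sketch {
      uint64_t last_ns;     /* time of the previous emission sequence */
      int      credit;      /* packets allowed in the current burst   */
      int      max_credit;  /* burst ceiling                          */
      uint64_t inter_ns;    /* smoothed delay between two packets     */
  };

  /* refill the credit according to the time elapsed since the last
   * emission sequence, without exceeding the burst ceiling
   */
  static void pacer_reload_sketch(struct pacer_sketch *p, uint64_t now_ns)
  {
      uint64_t elapsed = now_ns - p->last_ns;
      int gained = p->inter_ns ? (int)(elapsed / p->inter_ns) : p->max_credit;

      p->credit += gained;
      if (p->credit > p->max_credit)
          p->credit = p->max_credit;
      p->last_ns = now_ns;
  }

  /* after an emission, account for the packets actually sent; emission
   * must pause once the credit reaches zero
   */
  static void pacer_sent_done_sketch(struct pacer_sketch *p, int sent)
  {
      p->credit -= sent;
      if (p->credit < 0)
          p->credit = 0;
  }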
2025-01-23 17:40:20 +01:00
Amaury Denoyelle
9d8589f0de MINOR: mux-quic: increment pacing retry counter on expired
The <paced_sent_ctr> field of the quic_pacer structure is used to report
the number of occurrences where emission has been interrupted due to
pacing. However, it was not incremented when QUIC MUX had to immediately
pause emission because pacing had not yet expired.

Fix this by incrementing <paced_sent_ctr> in qcc_io_send() prior to
emission if pacing is expired. Note that the incrementation is only done
once even if the tasklet is then repeatedly woken up until the timer
expires.

This should be backported up to 3.1.
2025-01-23 17:29:14 +01:00
Amaury Denoyelle
bbaa7aef7b BUG/MINOR: quic: do not increase congestion window if app limited
Previously, the congestion window was increased each time a new
acknowledgment was received. However, this did not take into account the
window filling level. Under network conditions with negligible loss, this
causes the window to be incremented until the maximum value (by default
480k), even though the application does not have enough data to fill it.

In most cases, this issue is not noticeable. However, it may lead to
excessive memory consumption when a QUIC connection is suddenly
interrupted, as in this case haproxy will fill the window with
retransmissions. It has even caused OOM crashes when thousands of clients
were interrupted at once on a local network benchmark.

Fix this by first checking the window filling level prior to every
incrementation, via a new helper function quic_cwnd_may_increase(). It was
arbitrarily decided that the window must be at least 50% full when the ACK
is handled to allow the increment. This value is a good compromise to keep
the window in check while still allowing fast increments when needed.

Note that this patch only concerns the cubic and newreno algorithms. BBR
already has its own notion of application-limited traffic which ensures
the window is only incremented when necessary.

This should be backported up to 2.6.
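
A sketch of the 50% fill check, with simplified parameters rather than the
actual quic_cwnd_may_increase() prototype:

  /* only grow the window when at least half of it is actually in use,
   * otherwise an application-limited connection inflates it for nothing
   */
  static inline int cwnd_may_increase_sketch(unsigned long long cwnd,
                                             unsigned long long in_flight)
  {
      return in_flight >= cwnd / 2;
  }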
2025-01-23 14:49:35 +01:00
Amaury Denoyelle
7c0820892f MINOR: quic: rename pacing_rate cb to pacing_inter
Rename one of the congestion algorithm pacing callbacks from pacing_rate
to pacing_inter. This better reflects that this function returns a delay
(in nanoseconds) which should be applied between each packet emission to
fill the congestion window with a perfectly smoothed emission.

This should be backported up to 3.1.
2025-01-23 14:49:35 +01:00
Christopher Faulet
b18e988e0d BUG/MEDIUM: mux-h1: Properly close H1C if an error is reported before sending data
It is possible to have front H1 connections waiting for the client timeout
while they should be closed, because a connection error was reported before
sending an error message to the client. It is not a leak because the
connections are closed when the timeout expires, but it is a waste of
resources, especially if the client timeout is high.

When an early error message must be sent to the client, if an error was
already detected, no data are sent and the output buffer is released. At
this stage, the H1 connection is in CLOSING state and it must be
released. But because of a bug, this is not performed. The client timeout
is rearmed and the H1 connection is only closed when it expires.

To fix the issue, the condition to close an H1C must also be evaluated
when an error is detected before sending data.

It is only an issue with idle client connections, because there is no H1
stream in that case and the error message is generated by the mux itself.

This patch must be backported as far as 2.8.
2025-01-23 11:05:48 +01:00
Frederic Lecaille
1f099db7e2 BUILD: quic: Move an ASSUME_NONNULL() for variable which is not null
Some new compilers warn that the <oldest_lost> variable can be null even
though this cannot be the case, as mentioned by the comment about an
already present ASSUME_NONNULL() call:

src/quic_loss.c: In function ‘qc_release_lost_pkts’:
src/quic_loss.c:307:86: error: potential null pointer dereference [-Werror=null-dereference]
  307 |   unsigned int period = newest_lost->time_sent_ms - oldest_lost->time_sent_ms;
      |                                                     ~~~~~~~~~~~^~~~~~~~~~~~~~

Move this ASSUME_NONNULL() statement up to please these compilers.

Must be backported as far as 2.6 to ease any further backports around this code part.
2025-01-21 22:01:34 +01:00
Willy Tarreau
b066c0affb REORG: version: move the remaining BUILD_* stuff from haproxy.c to version.c
version.c tries to centralize all variables conveying version information,
but there's still an issue with the BUILD_* variables which are only
passed to haproxy.o and are only updated when that one is rebuilt. This
is not very logical given that we can end up with values there which
contradict info from version.c.

Better move all of these to version.c which is systematically rebuilt.
Most of these variables only end up as string concatenation at the
moment. Some of them are even duplicated. In version.c we now have one
variable (or constant) for each of them and haproxy.c references them
in messages. This is much more logical and easier to maintain in a
consistent state.

The patch looks a bit large but it really only moves the ifdefed string
assignment from one file to another, placing them into variables.
2025-01-20 17:53:55 +01:00
Willy Tarreau
9e61cf6790 BUG/MINOR: init: set HAPROXY_STARTUP_VERSION from the variable, not the macro
This environment variable was added by commit d4c0be6b20 ("MINOR: startup:
HAPROXY_STARTUP_VERSION contains the version used to start"). However, it's
set from the macro that is passed during the build process instead of being
set from the variable that's kept up to date in version.c. The difference
is visible only during debugging/bisecting because only changed files and
version.o are rebuilt, but not necessarily haproxy.o, which is where the
environment variable is set. This means that the version exposed in the
environment is not necessarily the same as the one presented in
"haproxy -v" during such debugging sessions.

This should be backported to 2.8. It has no impact at all on regularly
built binaries.
2025-01-20 17:53:55 +01:00
Aurelien DARRAGON
bfa493d4be BUG/MAJOR: log/sink: possible sink collision in sink_new_from_srv()
sink_new_from_srv() leverages sink_new_buf() with the server id as the
name; sink_new_buf() then calls __sink_new() with the provided name.

Unfortunately sink_new() is designed in such a way that it will first look
up in the list of existing sinks to check if a sink already exists with
given name, in which case the existing sink is returned. While this
behavior may be error-prone, it is actually up to the caller to ensure
that the provided name is unique if it really expects a unique sink
pointer.

Due to this bug in sink_new_from_srv(), multiple tcp servers with the same
name defined in distinct log backends would end up sharing the same sink,
which means messages sent to one of the servers would also be forwarded to
all servers with the same name across all log backend sections defined in
the config, which is obviously an issue and could even raise security
concerns.

Example:

  defaults
    log backend@log-1 local0

  backend log-1
    mode log
    server s1 127.0.0.1:514
  backend log-2
    mode log
    server s1 127.0.0.1:5114

With the above config, logs sent to log-1/s1 would also end up being sent
to log-2/s1 due to server id "s1" being used for tcp servers in distinct
log backends.

To fix the issue, we now prefix the sink name with the backend name: the
back_name/srv_id combination is known to be unique (the backend name
serves as a namespace).
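
For illustration, building the namespaced name amounts to something like
the following hypothetical helper:

  #include <stdio.h>

  /* derive a unique sink name from the backend namespace and server id */
  static void make_sink_name(char *dst, size_t len,
                             const char *be_name, const char *srv_id)
  {
      snprintf(dst, len, "%s/%s", be_name, srv_id);
  }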

This bug was reported by GH user @landon-lengyel under #2846.

UDP servers (with udp@ prefix before the address) are not affected as they
don't make use of the sink facility.

As a workaround, one should manually ensure that all tcp servers across
different log backends (backends with "mode log" enabled) use unique names.

This bug was introduced in e58a9b4 ("MINOR: sink: add sink_new_from_srv()
function") thus it exists since the introduction of log backends in 2.9,
which means this patch should be backported up to 2.9.
2025-01-20 12:33:20 +01:00
Amaury Denoyelle
c3a4a4d166 BUG/MAJOR: quic: reject too large CRYPTO frames
Received CRYPTO frames are inserted in a ncbuf to handle out-of-order
reception via ncb_add(). They are stored on the position relative to the
frame offset, minus a base offset which corresponds to the in-order data
length already handled.

Previously, no check was implemented on the frame offset value prior to
ncb_add(), which could easily trigger a crash if the relative offset was
too large. Fix this by first ensuring that the frame can be stored in the
buffer before the ncb_add() invocation. If this is not the case, the
connection is closed with error CRYPTO_BUFFER_EXCEEDED, as required by the
QUIC specification.

This should fix github issue #2842.

This must be backported up to 2.6.
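
The added check conceptually boils down to the following sketch, with
simplified names rather than the actual ncbuf API:

  #include <stddef.h>
  #include <stdint.h>

  /* data may only be stored at offsets within [base_off, base_off + buf_size);
   * a frame ending past that capacity must be rejected and the connection
   * closed with CRYPTO_BUFFER_EXCEEDED
   */
  static int crypto_frm_fits(uint64_t frm_off, uint64_t frm_len,
                             uint64_t base_off, size_t buf_size)
  {
      return frm_off + frm_len <= base_off + buf_size;
  }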
2025-01-20 11:43:23 +01:00
Aurelien DARRAGON
0486b9e491 MINOR: stktable: add table_{inc,clr}_gpc* converters
As discussed in GH #2423, there are some cases where src_{inc,clr}_gpc*
is not sufficient because we need to perform the lookup on a specific
key. Indeed, just like we did in e642916 ("MEDIUM: stktable: leverage
smp_fetch_* helpers from sample conv"), we can easily implement new
table converters based on existing fetches. This is what we do in
this patch.

Also the doc was updated so that src_{inc,clr}_gpc* fetches now point to
their generic equivalent table_{inc,clr}_gpc*. Indeed, src_{inc,clr}_gpc*
are simply aliases.

This should fix GH #2423.
2025-01-16 11:50:33 +01:00
Aurelien DARRAGON
9f68049cc1 CLEANUP: stktable: move sample_conv_table_bytes_out_rate()
sample_conv_table_bytes_out_rate() was defined in the middle of other
stick-table sample convs without any ordering logic. Let's put it
where it belongs, right after sample_conv_table_bytes_in_rate().
2025-01-16 11:50:27 +01:00
Aurelien DARRAGON
e6429166b9 MEDIUM: stktable: leverage smp_fetch_* helpers from sample conv
In this patch we try to prevent code duplication: some fetches and sample
converters do exactly the same thing, except that the converter takes the
argument as input data. Until now, both the converter and the fetch had
their own (copy-pasted) implementation, with the fetch-specific or
converter-specific lookup part.

Thanks to previous commits, we now have generic sample fetch helpers that
take the stkctr as argument, so let's leverage them directly from the
converter functions when available. This allows removing a lot of code
duplication and should make code maintenance easier in the future.
2025-01-15 14:04:55 +01:00
Aurelien DARRAGON
6c9b315187 MEDIUM: stktable: split sc_ and src_ fetch lookup logics
While this patch actually adds more insertions than deletions, it tries to
simplify the lookup logic for the sc_ and src_ stick-table fetches.

Indeed, the smp_create_src_stkctr() and smp_fetch_sc_stkctr() combination
was used everywhere a fetch supports the sc_ and src_ forms, and
smp_fetch_sc_stkctr() even integrated some of the src-oriented fetch logic.

Not only was this confusing, but it made the task of adding new generic
fetches even more complex.

Thus in this patch we completely dedicate smp_fetch_sc_stkctr() to sc_
oriented fetches, while smp_create_src_stkctr() is renamed to
smp_fetch_src_stkctr() and can now work on its own for src_ oriented
fetches. It takes an additional parameter, "create", to tell the function
whether the entry should be created if it doesn't exist yet.

Now it's up to the calling function to know whether it should use the sc_
oriented fetch or the src_ oriented one based on the input keyword.
2025-01-15 14:04:50 +01:00
Aurelien DARRAGON
22229a41a2 MEDIUM: stktable: split src-based key smp_fetch_sc functions
In this patch we split several sample fetch functions that are leveraged
by the "src-" fetches, such as smp_fetch_sc_inc_gpc().

Indeed, for each of them, we add an intermediate helper function that
takes a stkctr pointer as parameter and performs the logic, leaving the
lookup part in the calling function. Before this patch, the existing
functions were doing the lookup plus the fetch logic. Thanks to this
patch, it will become easier to add generic converters taking a lookup key
as input.

List of targeted functions:
 - smp_fetch_sc_inc_gpc()
 - smp_fetch_sc_inc_gpc0()
 - smp_fetch_sc_inc_gpc1()
 - smp_fetch_sc_clr_gpc()
 - smp_fetch_sc_clr_gpc0()
 - smp_fetch_sc_clr_gpc1()
 - smp_fetch_sc_conn_cnt()
 - smp_fetch_sc_conn_rate()
 - smp_fetch_sc_updt_conn_cnt()
 - smp_fetch_sc_conn_curr()
 - smp_fetch_sc_glitch_cnt()
 - smp_fetch_sc_glitch_rate()
 - smp_fetch_sc_sess_cnt()
 - smp_fetch_sc_sess_rate()
 - smp_fetch_sc_http_req_cnt()
 - smp_fetch_sc_http_req_rate()
 - smp_fetch_sc_http_err_cnt()
 - smp_fetch_sc_http_err_rate()
 - smp_fetch_sc_http_fail_cnt()
 - smp_fetch_sc_http_fail_rate()
 - smp_fetch_sc_kbytes_in()
 - smp_fetch_sc_bytes_in_rate()
 - smp_fetch_kbytes_out()
 - smp_fetch_sc_gpc1_rate()
 - smp_fetch_sc_gpc0_rate()
 - smp_fetch_sc_gpc_rate()
 - smp_fetch_sc_get_gpc1()
 - smp_fetch_sc_get_gpc0()
 - smp_fetch_sc_get_gpc()
 - smp_fetch_sc_get_gpt0()
 - smp_fetch_sc_get_gpt()
 - smp_fetch_sc_bytes_out_rate()

Please note that this patch doesn't render well using "git show" or
"git diff". For each of the functions listed above, a new helper function
was defined right above it, with the same name without "_sc". These new
functions perform the fetch part, while the original ones (with "_sc")
now simply perform the lookup and then leverage the corresponding fetch
helper.
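
The resulting split follows this general shape, shown here with
illustrative stand-in types instead of the real struct stkctr and
struct sample code:

  struct entry_sketch  { unsigned int gpc; };
  struct sample_sketch { unsigned int uint; };

  /* fetch part: works on an already looked-up entry */
  static int fetch_get_gpc(struct entry_sketch *e, struct sample_sketch *smp)
  {
      if (!e)
          return 0;
      smp->uint = e->gpc;
      return 1;
  }

  /* "sc_" flavour: performs the lookup, then defers to the helper; a
   * converter taking an explicit key can reuse fetch_get_gpc() as-is
   */
  static int fetch_sc_get_gpc(struct entry_sketch *(*lookup)(void),
                              struct sample_sketch *smp)
  {
      return fetch_get_gpc(lookup(), smp);
  }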
2025-01-15 14:04:45 +01:00
Aurelien DARRAGON
f71bad4694 MINOR: stktable: add smp_fetch_stksess() helper function
smp_fetch_stksess(table, smp, create) performs a lookup in <table> by
using <smp> as a key. It returns the matching entry on success and NULL on
failure. <create> can be set to 1 to force the entry creation.

We then use this helper everywhere relevant to prevent code duplication.
2025-01-15 14:04:40 +01:00
Aurelien DARRAGON
0fb8807820 MINOR: stktable: fix potential build issue in smp_to_stkey (2nd try)
As discussed in GH #2838, the previous fix f399dbf
("MINOR: stktable: fix potential build issue in smp_to_stkey") which
attempted to remove conversion ambiguity and prevent build warning proved
to be insufficient.

This time, we implement Willy's suggestion, which is to use a union to
perform the conversion.

Hopefully this should fix GH #2838. If that's the case (and only in that
case), then this patch may be backported with f399dbf (else the patch
won't apply) anywhere b59d1fd ("BUG/MINOR: stktable: fix big-endian
compatiblity in smp_to_stkey()") was backported.
2025-01-15 14:04:31 +01:00
Aurelien DARRAGON
8919a80da9 BUG/MEDIUM: stktable: fix missing lock on some table converters
In 819fc6f563
("MEDIUM: threads/stick-tables: handle multithreads on stick tables"),
sample fetch and action functions were properly guarded with stksess
read/write locks for read and write operations respectively, but the
sample_conv_table functions leveraged by "table_*" converters were
overlooked.

This bug was not known to cause issues in existing deployments yet (at
least it was not reported), but due to its nature it can theoretically
lead to inconsistent values being reported by "table_*" converters if the
value is being updated by another thread in parallel.

It should be backported to all stable versions.

[ada: for versions < 3.0, glitch_cnt and glitch_rate samples should be
 ignored as they first appeared in 3.0]
2025-01-14 11:36:04 +01:00
Aurelien DARRAGON
f399dbf70c MINOR: stktable: fix potential build issue in smp_to_stkey
smp_to_stkey() uses an ambiguous cast from a 64-bit integer to a 32-bit
unsigned integer. While it is intended, let's make the cast less ambiguous
by explicitly casting the right-hand side of the assignment to the proper
type.

This should fix GH #2838
2025-01-13 09:45:40 +01:00
Amaury Denoyelle
4a5d82a97d BUG/MINOR: quic: reject NEW_TOKEN frames from clients
As specified by RFC 9000, reject NEW_TOKEN frames emitted by clients.
Close the connection with error code PROTOCOL_VIOLATION.

This must be backported up to 2.6.
2025-01-10 14:50:59 +01:00
Amaury Denoyelle
a2c0c459a4 MINOR: trace: support all source alias on -dt
The -dt command line argument can be used to activate traces during
startup. Via its optional argument, it is possible to change the settings
of a particular trace source. It is also possible to update every
registered source by specifying an empty name.

Support the trace source alias "all". This is an alternative to the empty
name to update every source.
2025-01-10 14:50:59 +01:00
Amaury Denoyelle
a50dd07c16 MINOR: trace: ensure -dt priority over traces config section
Traces can be activated on startup either via the -dt command line
argument or via the traces configuration section. This could cause
confusion, as it may not be clear whether a trace source can be completed
or overridden by one or the other.

Fix the precedence to give priority to the command line argument. Now,
each trace source configured via -dt is first reset to a default state
before the new settings are applied. Thus, it is impossible to change a
trace source via the configuration file if it was already targeted via the
-dt argument.
2025-01-10 14:50:59 +01:00
Amaury Denoyelle
da9a7e0bd9 MINOR: trace: add help message for -dt argument
Traces can be activated on startup via -dt command line argument. To
facilitate its usage, display a usage description and examples when
"help" is specified.
2025-01-10 14:50:59 +01:00
Olivier Houchard
659d5f6579 BUG/MEDIUM: queues: Adjust the proxy counters when appropriate
In process_srv_queue(), if we manage to successfully run an extra task,
don't forget to adjust the proxy's totpend and served counters accordingly.
Having an inaccurate served counter could lead to various subtle bugs, as
it is used when making load balancing decisions.

This should not be backported, unless cda7275ef5d5e49fb2ea2373ea3b1ba63fc927c3
is backported too.
2025-01-09 17:46:46 +01:00
Aurelien DARRAGON
24042df94e MINOR: stktable: add sc[0-2]_key fetches
As discussed in GH #1750, we were lacking a sample fetch to be able to
retrieve the key from the currently tracked counter entry. To do so,
sc_key fetch can now be used. It returns a sample with the correct type
(table key type) corresponding to the tracked counter entry (from previous
track-sc rules).

If no entry is currently tracked, it returns nothing.

It can be used with the standard form "sc_key(<sc_number>)" or the legacy
forms: "sc0_key", "sc1_key", "sc2_key".

Documentation was updated.
2025-01-09 10:57:01 +01:00
Aurelien DARRAGON
7423310d5d MINOR: stktable: add stksess_getkey() helper
stksess_getkey(t, ts) returns a stktable_key struct pointer filled with
data from the input <ts> entry in the <t> table. The returned pointer uses
the static_table_key variable. Indeed, a stktable_key struct is more
convenient to manipulate than having to deal with the key extraction from
the stksess struct directly.
2025-01-09 10:56:56 +01:00
Aurelien DARRAGON
df9c2ef2c3 MINOR: stktable: add stkey_to_smp() helper
Reverse operation of smp_to_stkey(): fills the input <smp> from a
stktable_key struct.

Returns 1 on success and 0 on failure.
2025-01-09 10:56:50 +01:00
Aurelien DARRAGON
b59d1fd911 BUG/MINOR: stktable: fix big-endian compatiblity in smp_to_stkey()
When smp_to_stkey() deals with SINT samples, since stick-tables deal with
32-bit integers while the SINT sample is a 64-bit integer, an in-place
conversion was done in smp_to_stkey(). For that, the 64-bit integer was
truncated before the key would point to it. Unfortunately, this only works
on little-endian architectures because on big-endian ones, the key would
point to the wrong 32-bit range.

To fix the issue and make the conversion endian-proof, let's re-assign the
sample as a 32-bit integer before the key points to it.

Thanks to Willy for having spotted the bug and suggested the above fix.

It should be backported to all stable versions.
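
Conceptually, the fix amounts to something like the sketch below, with
hypothetical names (the real smp_to_stkey() works on struct sample and
struct stktable_key):

  #include <stdint.h>

  struct stkey_sketch {
      const void *ptr;   /* where the key data lives */
      uint32_t    u32;   /* 32-bit storage for SINT  */
  };

  /* store the truncated value into a real 32-bit object first, then make
   * the key point to that object; pointing the key into a 64-bit integer
   * only happens to work on little-endian machines
   */
  static void stkey_from_sint_sketch(struct stkey_sketch *k, int64_t sint)
  {
      k->u32 = (uint32_t)sint;
      k->ptr = &k->u32;
  }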
2025-01-09 10:56:43 +01:00
Willy Tarreau
7be596b35c [RELEASE] Released version 3.2-dev3
Released version 3.2-dev3 with the following main changes :
    - DOC: config: add missing "track-sc0" in action keywords matrix
    - BUG/MINOR: stktable: invalid use of stkctr_set_entry() with mixed table types
    - BUG/MAJOR: mux-quic: fix BUG_ON on empty STREAM emission
    - BUG/MEDIUM: mux-h2: Count copied data when looping on RX bufs in h2_rcv_buf()
    - Revert "BUG/MAJOR: mux-quic: fix BUG_ON on empty STREAM emission"
    - BUG/MAJOR: mux-quic: properly fix BUG_ON on empty STREAM emission
    - MINOR: mux-quic: add traces on sd attach
    - BUG/MEDIUM: mux-quic: do not attach on already closed stream
    - BUG/MINOR: compression: handle a possible strdup() failure
    - BUG/MINOR: pool: handle a possible strdup() failure
    - BUG/MINOR: cfgparse-tcp: handle a possible strdup() failure
    - BUG/MINOR: log: Allow to use if/unless conditionnals for do-log action
    - MINOR: config: Alert about extra arguments for errorfile and errorloc
    - BUG/MINOR: mux-quic: fix wakeup on qcc_set_error()
    - MINOR: mux-quic: change return value of qcs_attach_sc()
    - BUG/MINOR: mux-quic: handle closure of uni-stream
    - BUG/MEDIUM: promex/resolvers: Don't dump metrics if no nameserver is defined
    - BUG/MAJOR: ssl/ocsp: fix NULL conn object dereferencing to access QUIC TLS counters
    - MEDIUM: errors: get rid of shm_open()
    - BUILD: makefile: do not clean standalone binaries on a simple "make clean"
    - BUILD: makefile: add a qinfo macro to pass info in quiet mode
    - DEV: ncpu: add a simple utility to help with NUMA development
    - DEV: ncpu: implement a wrapper mode
    - DEV: ncpu: make the wrapper work both as a lib and executable
    - BUG/MEDIUM: h1-htx: Properly handle bodyless messages
    - MINOR: tools: add a few functions to simply check for a file's existence
2025-01-09 09:21:04 +01:00
Willy Tarreau
b25850f25b MINOR: tools: add a few functions to simply check for a file's existence
At many places we'd like to be able to simply construct a path from a
format string and check whether that path corresponds to an existing file,
directory, etc. Here we add 3 functions: a generic one to test that a path
corresponds to a given file mode (e.g. S_IFDIR, S_IFREG, etc.), and two
other ones specifically checking for a file or a dir, for easier use.
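
Such helpers typically boil down to a stat() on the formatted path; a
minimal sketch, with hypothetical names rather than the committed
prototypes:

  #include <stdarg.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* check that the path built from the format string exists and matches
   * the requested file type (S_IFREG, S_IFDIR, ...)
   */
  static int path_has_mode(unsigned int mode, const char *fmt, ...)
  {
      char path[4096];
      struct stat st;
      va_list ap;

      va_start(ap, fmt);
      vsnprintf(path, sizeof(path), fmt, ap);
      va_end(ap);

      if (stat(path, &st) != 0)
          return 0;
      return (st.st_mode & S_IFMT) == mode;
  }

A file check then simply reads e.g. path_has_mode(S_IFREG, "%s/%s", dir, name).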
2025-01-09 09:18:49 +01:00
Christopher Faulet
b9cc361b35 BUG/MEDIUM: h1-htx: Properly handle bodyless messages
During H1 parsing, there are some post-parsing checks to detect bodyless
messages and switch the parsing to the DONE state. However, one case was
not properly handled: responses to HEAD requests with a "transfer-encoding"
header. The response parser remained blocked waiting for the response
body.

To fix the issue, the post-parsing was slightly modified. Instead of
trying to handle bodyless messages in a common way between the request and
the response, it is now performed in the dedicated post-parsing functions.
It is easier to enumerate all cases, especially because there is already a
test for responses to HEAD requests.

This patch should fix the issue #2836. It must be backported as far as 2.9.
2025-01-08 18:20:26 +01:00
William Lallemand
143be1b59f MEDIUM: errors: get rid of shm_open()
Since 5ee266b7 ("MINOR: error: simplify startup_logs_init_shm"), the FD
of the startup logs is always closed and the HAPROXY_STARTUPLOGS_FD
variable is not used anymore. Which means we only need a mmap.

Indeed the shm_open() function was only needed to keep the shm between
the exec() of the master so we can get the logs stored there after doing
the final exec() in wait mode. Since the wait mode doesn't exist
anymore and the parsing is done in a worker, we only need to share a
memory zone between the master and the worker.

This patch removes shm_open() and replace it with a simple mmap(), this
way the shared startup-logs become more portable and USE_SHM_OPEN is not
required anymore.
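
A shared anonymous mapping inherited across fork() is enough for this kind
of master/worker sharing; a minimal sketch of the idea (sizes and names
are arbitrary here):

  #include <string.h>
  #include <sys/mman.h>

  #define STARTUP_LOGS_SIZE 65536

  /* map an anonymous shared zone before fork(): both the master and the
   * worker then keep a view on the very same pages, no shm_open() needed
   */
  static char *alloc_startup_logs(void)
  {
      char *area = mmap(NULL, STARTUP_LOGS_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);

      if (area == MAP_FAILED)
          return NULL;
      memset(area, 0, STARTUP_LOGS_SIZE);
      return area;
  }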
2025-01-07 16:42:38 +01:00
Frederic Lecaille
d7fc90afe9 BUG/MAJOR: ssl/ocsp: fix NULL conn object dereferencing to access QUIC TLS counters
This bug arrived with this commit in the current dev branch:

	056ec51c26 MEDIUM: ssl/ocsp: counters for OCSP stapling

and could occur for QUIC connections during the handshake when the
underlying <conn> connection object is not yet initialized. So in this
case the TLS counters attached to TLS listeners cannot be accessed through
this object but must be accessed through the QUIC connection object.

Modify the code to initialize the listener (<li> variable) for both QUIC
and TCP connections, then initialize the variables for the TLS counters
if the listener is also initialized.

Thank you to @Tristan971 for having reported this issue in GH #2833.

Must be backported with the commit mentioned above if it is planned to be
backported.
2025-01-07 15:19:42 +01:00