On the frontend side, when a stream was shut while the response had already
been fully sent, it was cancelled by sending a RST_STREAM(CANCEL) frame.
However, this is not accurate. The CANCEL error code must only be used if
the response headers were sent but not the full response. As stated in
RFC 9113, when the response was fully sent, a RST_STREAM with an error code
of NO_ERROR must be sent to stop the request from being sent.
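For illustration only, the intended decision could be sketched like this
(the constants are the RFC 9113 error code values; the function name is
hypothetical):

  #define H2_ERR_NO_ERROR  0x0  /* RFC 9113 NO_ERROR */
  #define H2_ERR_CANCEL    0x8  /* RFC 9113 CANCEL */

  /* pick the RST_STREAM error code when the frontend shuts a stream,
   * depending on how far the response got */
  static inline int h2_shut_err_code(int response_fully_sent)
  {
      if (response_fully_sent)
          return H2_ERR_NO_ERROR; /* stop the request without reporting an error */
      return H2_ERR_CANCEL;       /* response started but truncated */
  }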
This patch should solve the issue #1219. It must be backported to all stable
versions.
The ckch_store_load_files() function performs specific processing for
PARSE_TYPE_STR as if it were a type only used for paths.
This patch slightly changes the way it's done:
PARSE_TYPE_STR is now only meant to strdup() a string and store the
resulting pointer in the ckch_conf structure.
Any processing regarding the path is now done in the callback.
Since the callbacks were basically doing the same thing, they were
transformed into the DECLARE_CKCH_CONF_LOAD() macro, which allows
templating of these functions.
The resulting ckch_conf_load_* functions will do the same as before,
except they will also do the path processing instead of letting
ckch_store_load_files() do it, which means we don't need the "base"
member anymore in the struct ckch_conf_kws.
With the SSL configuration, crt-base and key-base are often used; these
keywords concatenate the base path with the path when the path does not
start with '/'.
This is done in several places in the code, so a common function would
help standardize it.
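A minimal sketch of what such a helper could look like (the name and exact
semantics below are illustrative, not the final haproxy API):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* prepend <base> to <path> unless <path> is already absolute; returns an
   * allocated string to be freed by the caller, or NULL on error */
  static char *build_full_path(const char *path, const char *base)
  {
      size_t len;
      char *ret;

      if (!path)
          return NULL;

      /* absolute path or no base configured: keep the path as-is */
      if (*path == '/' || !base || !*base)
          return strdup(path);

      len = strlen(base) + 1 + strlen(path) + 1;
      ret = malloc(len);
      if (ret)
          snprintf(ret, len, "%s/%s", base, path);
      return ret;
  }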
Recent commit e5e36ce09 ("BUG/MEDIUM: hlua/cli: Fix lua CLI commands
to work with applet's buffers") revealed a bug in hlua cli applet handling.
Indeed, while playing with Willy's lua tetris script on the cli, a segfault
would be encountered when forcefully closing the session by sending
CTRL+C on the terminal.
In fact the crash was caused by a UAF: while the cli applet was already
freed, the lua task responsible for waking it up would still point to it.
Thus hlua_applet_wakeup() could be called even if the applet didn't exist
anymore.
To fix the issue, in hlua_cli_io_release_fct() we must also free the hlua
task linked to the applet, like we already do in
hlua_applet_tcp_release() and hlua_applet_http_release().
While this bug exists on stable versions (where it should also be backported
as a precaution), it only seems to be triggered starting with 3.0.
'global.fd_hard_limit' stays uninitialized if haproxy is started with -m
(global.rlimit_memmax). 'remain' is the MAX between the soft and hard process
fd limits; it will always be bigger than 'global.fd_hard_limit' (0) in this
case.
So, if we unconditionally reassign 'remain' to 'global.fd_hard_limit', the
calculated 'maxconn' may even become negative and DEFAULT_MAXCONN (100)
will be set as the 'ideal_maxconn'.
During the 'global.maxconn' calculations in set_global_maxconn(), if the
provided 'global.rlimit_memmax' is quite big, the system will refuse the
'global.maxconn' calculated from it and we will fall back to the
'ideal_maxconn', which is 100.
The same problem exists for configurations with SSL frontends and backends.
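A minimal sketch of the guard, using the names from this message (not the
exact haproxy code):

  /* only clamp the usable fd count by fd_hard_limit when that limit was
   * really configured; an unset (0) limit must not zero out the budget
   * and wreck the maxconn computation */
  static unsigned int clamp_fd_budget(unsigned int remain, unsigned int fd_hard_limit)
  {
      if (fd_hard_limit && remain > fd_hard_limit)
          remain = fd_hard_limit;
      return remain;
  }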
This fixes the issue #2899.
This should be backported to v3.1.0.
Thanks to the previous commits, we now know that "wait srv-removable"
does not require thread isolation, as long as 3372a2ea00 ("BUG/MEDIUM:
queues: Stricly respect maxconn for outgoing connections") and c880c32b16
("MINOR: stream: decrement srv->served after detaching from the list")
are present. Let's just get rid of thread_isolate() here, which can
consume a lot of CPU on highly threaded machines when removing many
servers at once.
This function was marked as requiring thread isolation because its code
was extracted from cli_parse_delete_server() and was running under
isolation. But upon closer inspection, and using atomic loads to check
a few counters, it is actually safe to run without isolation, so let's
reflect that in its description.
However, it remains true that cli_parse_delete_server() continues to call
it under isolation.
Now that, thanks to commit c880c32b16 ("MINOR: stream: decrement
srv->served after detaching from the list"), we can trust srv->served,
let's use it and no longer loop on threads when checking if a server
still has streams attached to it. This will be much cheaper and will
result in keeping isolation for a shorter time in the "wait" command.
Baptiste reported that, when using the new optional timeout argument
introduced in 19e48f2 ("MINOR: hlua: add an optional timeout to
AppletTCP:receive()"), the following error would occur at some point:
runtime error: file.lua:lineno: bad argument #-2 to 'receive' (number
expected, got light userdata) from [C]: in method 'receive...
In fact this is caused by exp_date being retrieved using relative index -1
instead of absolute index 3. Indeed, while using a relative index is fine
most of the time when we trust the stack, when combined with yielding, the
top of the stack upon resuming is not necessarily the same as when the
function was first called (i.e. if some data was pushed onto the stack in
the yieldable function itself). As such, it is safer to use an explicit
index to access the exp_date variable at position 3 on the stack.
It was confirmed that doing so addresses the issue.
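A short sketch of the difference using the standard Lua C API (the function
and its argument layout are illustrative, not the actual haproxy code):

  #include <lua.h>
  #include <lauxlib.h>

  static int my_receive(lua_State *L)
  {
      /* argument #3 (the expiration date) read by absolute index: it stays
       * valid after a yield/resume even if the yieldable function pushed
       * values on the stack in the meantime */
      lua_Integer exp_date = luaL_optinteger(L, 3, 0);

      /* fragile: after resuming from a yield, -1 may point to whatever
       * was pushed last, not to the original argument */
      /* lua_Integer exp_date = lua_tointeger(L, -1); */

      (void)exp_date;
      return 0;
  }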
No backport needed unless 19e48f2 is.
In commit 3372a2ea00 ("BUG/MEDIUM: queues: Stricly respect maxconn for
outgoing connections"), it has been ensured that srv->served is held
as long as possible around the periods where a stream is attached to a
server. However, it's decremented early when entering sess_change_server,
and actually just before detaching from that server's list. While there
is theoretically nothing wrong with this, it prevents us from looking at
this counter to know if streams are still using a server or not.
We could imagine decrementing it much later but that wouldn't work with
leastconn, since that algo needs ->served to be final before calling
lbprm.server_drop_conn(). Thus what we're doing here is to detach from
the server, then decrement ->served, and only then call the LB callback
to update the server's position in the tree. At this moment the stream
doesn't know the server anymore anyway (except via this function's
local variable), so it's safe to consider that no stream knows the server
once ->served reaches zero.
In ad0133cc ("MINOR: log: handle log-forward "option host""), we
de-reference saddr without first checking whether saddr is NULL. In practice
saddr shouldn't be NULL, but it may be if a memory error happens in the
tcp syslog handler, so we must assume that it can be NULL at some
point.
To fix the bug, we simply check for NULL before de-referencing it
under syslog_io_handler(), as the function comment suggests.
No backport needed unless ad0133cc is.
This patch implements the function EVP_PKEY_to_jws_algo() which returns
a jwt_alg compatible with the private key.
This value can then be passed to jws_b64_protected() and
jws_b64_signature(), which were modified to take a jwt_alg instead of a
char.
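For illustration, such a mapping can be sketched with the OpenSSL EVP API
as below; the enum values, the helper name and the exact return conventions
are assumptions, not the actual haproxy code:

  #include <openssl/evp.h>

  enum jwt_alg { JWS_ALG_NONE, JWS_ALG_RS256, JWS_ALG_ES256, JWS_ALG_ES384, JWS_ALG_ES512 };

  static enum jwt_alg pkey_to_jws_algo(EVP_PKEY *pkey)
  {
      switch (EVP_PKEY_base_id(pkey)) {
      case EVP_PKEY_RSA:
          return JWS_ALG_RS256;
      case EVP_PKEY_EC:
          /* the curve size decides the ES variant */
          switch (EVP_PKEY_bits(pkey)) {
          case 256: return JWS_ALG_ES256;
          case 384: return JWS_ALG_ES384;
          case 521: return JWS_ALG_ES512;
          }
          return JWS_ALG_NONE;
      default:
          return JWS_ALG_NONE;
      }
  }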
TCP services might want to be interactive, and without a timeout on
receive(), the possibilities are a bit limited. Let's add an optional
timeout as the 3rd argument to limit the wait time. In this case, if the
timeout strikes before the requested size has been received, a possibly
incomplete block will be returned.
Coverity has reported that cpu2 seems sometimes unused in
cpu_fixup_topology():
*** CID 1593776: Code maintainability issues (UNUSED_VALUE)
/src/cpu_topo.c: 690 in cpu_fixup_topology()
684 continue;
685
686 if (ha_cpu_topo[cpu].cl_gid != curr_id) {
687 if (curr_id >= 0 && cl_cpu <= 2)
688 small_cl++;
689 cl_cpu = 0;
>>> CID 1593776: Code maintainability issues (UNUSED_VALUE)
>>> Assigning value from "cpu" to "cpu2" here, but that stored value is overwritten before it can be used.
690 cpu2 = cpu;
691 curr_id = ha_cpu_topo[cpu].cl_gid;
692 }
693 cl_cpu++;
694 }
695
That's it. The 'cpu2' automatic/stack variable is only used within for()
loop scopes to save the IDs of the CPUs we are interested in. In the loop
pointed out by Coverity, this variable is not used for further processing
within the loop's scope. It is then always reinitialized to 0 in the other
loops that follow.
This fixes GitHub issue #2895.
This cpu policy keeps the smallest CPU cluster. This can
be used to limit the resource usage to the strict minimum
that still delivers decent performance, for example to
try to further reduce power consumption or minimize the
number of cores needed on some rented systems for a
sidecar setup, in order to scale the system down more
easily. Note that if a single cluster is present, it
will still be fully used.
When started on a 64-core EPYC gen3, it uses only one CCX
with 8 cores and 16 threads, all in the same group.
This cpu policy tries to evict performant core clusters and only
focuses on efficiency-oriented ones. On an intel i9-14900k, we can
get 525k rps using the 8 performance cores, versus 405k when using the
16 efficiency cores. In some cases the power savings might be more
desirable (e.g. scalability tests on a developer's laptop), or the
performance cores might be better suited for another component
(application or security component).
This cpu policy tries to evict efficient core clusters and only
focuses on performance-oriented ones. On an intel i9-14900k, we can
get 525k rps using only 8 cores this way, versus 594k when using all
24 cores. The gains from using all these cores are not significant
enough to waste them on this. Also, these cores can be much slower
at doing SSL handshakes, so it can make sense to evict them; it's
better to keep the efficiency cores for network interrupts, for example.
Also, on a developer's machine it can be convenient to keep all these
cores for the local tasks and extra tools (load generators etc).
When a cluster is too large to fit into a single group, let's split it
into two equal groups, which will still be allowed to use all the CPUs
of the cluster. This allows haproxy to start all the threads with a
minimum number of groups (e.g. 2x40 for 80 cores).
This policy forms thread groups from the CPU clusters, and binds all the
threads in them to all the CPUs of the cluster. This is recommended on
systems with bad inter-CCX latencies. It was shown to simply triple the
performance with queuing on a 64-core EPYC without having to manually
assign the cores with cpu-map.
This is now superseded by the default "safe" cpu-policy, and every time
it's used, that code was bypassed anyway since global.nbthread was set.
We can now safely remove it. Note that for other policies which do not
set a thread count nor further restrict CPUs (such as "none", or even
"safe" when finding a single node), we continue to go through the fallback
code that automatically assigns CPUs to threads and counts them.
This now turns the cpu-policy to "first-usable-node" by default, so that
we preserve the current default behavior consisting in binding to the
first node if nothing was forced. If a second node is found,
global.nbthread is set and the previous code will be skipped.
This is a reimplementation of the current default policy. It binds to
the first node having usable CPUs if found, and drops CPUs from the
second and next nodes.
We'll need to let the user decide what's best for their workload, and in
order to do this we'll have to provide tunable options. For that, we're
introducing struct ha_cpu_policy which contains a name, a description
and a function pointer. The purpose is to use that function pointer,
called during the thread setup phase, to choose the best CPUs to use and
to set the number of threads and thread-groups. The only supported policy
for now is "none", which doesn't set/touch anything
(i.e. all available CPUs are used).
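A minimal sketch of what such a descriptor could look like (field names and
the callback signature are assumptions):

  struct ha_cpu_policy {
      const char *name;   /* e.g. "none" */
      const char *desc;   /* short description for the documentation */
      /* called during thread setup to pick CPUs and set thread/group counts */
      int (*fct)(int policy_idx, char **err);
  };

  static const struct ha_cpu_policy cpu_policies[] = {
      { .name = "none", .desc = "use all available CPUs", .fct = NULL },
  };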
These are processed after the topology is detected, and they make it
possible to restrict binding to, or to evict, CPUs matching the indicated
hardware
cluster number(s). It can be used to bind to only some clusters, such
as CCX or different energy efficiency cores. For this reason, here we
use the cluster's local ID (local to the node).
These are processed after the topology is detected, and they make it
possible to restrict binding to, or to evict, CPUs matching the indicated
hardware
core number(s). It can be used to bind to only some clusters as well
as to evict efficient cores whose number is known.
These are processed after the topology is detected, and they make it
possible to restrict binding to, or to evict, CPUs matching the indicated
hardware
thread number(s). It can be used to reserve even threads for HW IRQs
and odd threads for haproxy for example, or to evict efficient cores
that only have thread #0.
On some Arm systems (typically A76/N1) where CPUs can be associated in
pairs, clusters are reported even though they have no incidence on I/O etc.
Yet it's possible to have tens of clusters of 2 CPUs each, which is
counterproductive since it does not even allow starting enough threads.
Let's detect this situation as soon as there are at least 4 clusters
each having 2 CPUs or less, which is already very suspicious. In this
case, all these clusters will be reset as meaningless. In the worst
case if needed they'll be re-assigned based on L2/L3.
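A minimal sketch of the heuristic (structure and field names are
assumptions):

  struct cpu_topo_min { int cl_gid; };  /* cluster global ID, -1 = unassigned */

  /* if at least 4 clusters hold 2 CPUs or less, the reported clusters are
   * considered meaningless and dropped; they may later be rebuilt from L2/L3 */
  static void drop_tiny_clusters(struct cpu_topo_min *topo, int nb_cpus,
                                 const int *cpus_per_cluster, int nb_clusters)
  {
      int cl, cpu, small = 0;

      for (cl = 0; cl < nb_clusters; cl++)
          if (cpus_per_cluster[cl] > 0 && cpus_per_cluster[cl] <= 2)
              small++;

      if (small >= 4)
          for (cpu = 0; cpu < nb_cpus; cpu++)
              topo[cpu].cl_gid = -1;
  }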
The goal here is to keep an array of the known CPU clusters, because
we'll use that often to decide on the performance of a cluster and
its relevance compared to other ones. We'll store the number of CPUs
in it, the total capacity etc. For the capacity, we count one unit
per core, and 1/3 of it per extra SMT thread, since this is roughly
what has been measured on modern CPUs.
In order to ease debugging, they're also dumped with -dc.
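An illustrative sketch of such a per-cluster record and of the capacity
estimate (field names are assumptions):

  struct ha_cpu_cluster_info {
      unsigned int idx;      /* global cluster number */
      unsigned int nb_cores; /* distinct physical cores */
      unsigned int nb_cpus;  /* hardware threads (CPUs) in the cluster */
      unsigned int capa;     /* estimated capacity of the cluster */
  };

  /* one unit per core plus 1/3 of a unit per extra SMT thread */
  static unsigned int cluster_capacity(unsigned int nb_cores, unsigned int nb_cpus)
  {
      return nb_cores + (nb_cpus - nb_cores) / 3;
  }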
The purpose here is to detect heterogeneous clusters which are not
properly reported, based on the exposed information about the cores'
capacity. The algorithm consists of sorting CPUs by capacity within
a cluster and considering as equal all those which have 5% or less
difference in capacity with the previous one. This allows large
clusters with more than 5% of total spread between extremities, while
keeping apart those where the gap is more pronounced. This is
quite common in embedded environments with big.little systems, as
well as on some laptops.
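A minimal sketch of that grouping, assuming the CPUs of a cluster were
already sorted by descending capacity (names are illustrative):

  /* split the cluster whenever a CPU is more than 5% weaker than the
   * previous one; <capa> holds per-CPU capacities, <cl_id> receives the
   * resulting cluster IDs starting from <first_id> */
  static void split_on_capacity_gap(const unsigned int *capa, int *cl_id,
                                    int nb, int first_id)
  {
      int cpu, id = first_id;

      for (cpu = 0; cpu < nb; cpu++) {
          if (cpu > 0 && capa[cpu] < capa[cpu - 1] - capa[cpu - 1] / 20)
              id++; /* more than 5% below the previous CPU: new cluster */
          cl_id[cpu] = id;
      }
  }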
Due to the way core numbers are assigned and the presence of SMT on
some of them, some holes may remain in the array. Let's renumber them
to plug holes once they're known, following pkg/node/die/llc etc, so
that they're local to a (pkg,node) set. Now an i7-14700 shows cores
0 to 19, not 0 to 27.
On some machines, L3 is not always reported (e.g. on some lx2 or some
armada8040). But some also don't have L3 (core 2 quad). However, no L3
when there are more than 2 L2 is quite unheard of, and while we don't
really care about firing 2 thread groups for 2 L2, we'd rather avoid
doing this if there are 8! In this case we'll declare an L3 instance
to fix the situation. This allows small machines to continue to start
with two groups while keeping large ones from drifting into too many groups.
It's important that we don't leave unassigned IDs in the topology,
because the selection mechanism is based on index-based masks, so an
unassigned ID will never be kept. This is particularly visible on
systems where we cannot access the CPU topology: the package id, node id
and even thread id are set to -1, and all CPUs are evicted because -1 is
never set in the "only-cpu" sets.
Here, in the new function "cpu_fixup_topology()", we assign them the
smallest unassigned value. This function will be used to assign IDs
where missing in general.
Due to the previous commit we can end up with cores not assigned
any cluster ID. For this, at the end we sort the CPUs by topology
and assign cluster IDs to remaining CPUs based on pkg/node/llc.
For example, a 14900 now shows 5 clusters: one for the 8 P-cores,
and four clusters of 4 E-cores each.
The local cluster numbers are per (node,pkg) ID so that any rule could
easily be applied on them, but we also keep the global numbers that
will help with thread group assignment.
We still need to force distinct cluster IDs to be assigned to cores
running on different L3s. For example the EPYC 74F3 is reported
as having 8 different L3s (which is true) and only one cluster.
Here we introduce a new function "cpu_compose_clusters()" that is called
from the main init code just after cpu_detect_topology() so that it's
not OS-dependent. It deals with this renumbering of all clusters in
topology order, taking care of considering any distinct LLC as being
on a distinct cluster.
Some platforms (several armv7, intel 14900 etc) report one distinct
cluster per core. This is problematic as it prevents clusters from being
used to distinguish real groups of cores or to build thread groups.
Let's just compare the cluster's cpus to the siblings, and ignore the
cluster if they exactly match. We must also take care of not falling back
to core_cpus_list, which can enumerate cores that already have their
cluster assigned (e.g. the intel 14900 has four 4-E-core clusters in
addition to the 8 P-cores).
This will be used to detect and fix incorrect setups which report
the same cluster ID for multiple L3 instances.
The arrangement of functions in this file is becoming a real problem.
Maybe we should move all this to cpu_topo for example, and better
distinguish OS-specific and generic code.
Once we've kept only the CPUs we want, the next step will be to form
groups, and these are based on locality. Thus we'll have to sort by
locality. For now the locality is only inferred from the index. No grouping
is made at this point. For this we add the "cpu_reorder_by_locality"
function with a locality-based comparison function.
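A minimal sketch of such a comparison function for qsort() (the struct and
its fields are assumptions):

  struct cpu_topo_loc {
      int pkg_id, node_id, die_id, llc_id, idx;
  };

  /* sort by package, then node, then die, then LLC, and finally by CPU
   * index so that topologically close CPUs end up adjacent */
  static int cmp_cpu_locality(const void *a, const void *b)
  {
      const struct cpu_topo_loc *l = a, *r = b;

      if (l->pkg_id  != r->pkg_id)  return l->pkg_id  - r->pkg_id;
      if (l->node_id != r->node_id) return l->node_id - r->node_id;
      if (l->die_id  != r->die_id)  return l->die_id  - r->die_id;
      if (l->llc_id  != r->llc_id)  return l->llc_id  - r->llc_id;
      return l->idx - r->idx;
  }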
CPU selection will be performed by sorting CPUs according to
various criteria. For dumps however, that's really not convenient
and we'll need to reorder the CPUs according to their index only.
This is what the new function cpu_reorder_by_index() does. It's
called in thread_detect_count() before dumping the CPU topology.
A number of entries under /cpu/cpu%d only exist on certain kernel
versions, certain archs and/or with certain modules loaded. It's
pointless to insist on trying to read them all for all CPUs when
we've already verified they do not exist. Thus let's use stat()
the first time prior to checking some of them, and only try to
access them when they really exist. This almost completely
eliminates the large number of ENOENT errors that were visible in strace
during startup.
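A minimal sketch of the idea (the sysfs entry shown is just an example of
an optional one):

  #include <stdio.h>
  #include <sys/stat.h>

  static int sysfs_entry_exists(const char *path)
  {
      struct stat st;
      return stat(path, &st) == 0;
  }

  void read_optional_entry(int nb_cpus)
  {
      char path[256];
      int cpu;

      /* probe once on cpu0: if absent, skip it for all CPUs and avoid a
       * storm of ENOENT in strace */
      if (!sysfs_entry_exists("/sys/devices/system/cpu/cpu0/acpi_cppc/nominal_perf"))
          return;

      for (cpu = 0; cpu < nb_cpus; cpu++) {
          snprintf(path, sizeof(path),
                   "/sys/devices/system/cpu/cpu%d/acpi_cppc/nominal_perf", cpu);
          /* ... open and parse <path> ... */
      }
  }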
Cpufreq alone isn't a good metric on heterogeneous CPUs because efficient
cores can reach almost as high frequencies as performant ones. Tests have
shown that weighting performance cores 50% higher gives a pretty accurate
estimate of the performance to expect on modern CPUs, and that counting
+33% per extra SMT thread is reasonable as well. We don't have the info
about the core's quality, but using the presence of SMT is a reasonable
approach in this case, given that efficiency cores will not use it.
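A minimal sketch of this weighting; how exactly the two corrections compose
is an assumption here, chosen to stay consistent with the figures quoted
below:

  /* <freq_capa>: per-core capacity derived from cpufreq.
   * <has_smt>: the core exposes several hardware threads (assumed to be a
   *            performance core).
   * <used_threads>: number of this core's threads actually retained. */
  static unsigned int corrected_capacity(unsigned int freq_capa, int has_smt,
                                         int used_threads)
  {
      unsigned int capa = freq_capa;

      if (has_smt)
          capa += capa / 2;                      /* performance core: +50% */
      if (used_threads > 1)
          capa += capa * (used_threads - 1) / 3; /* +33% per extra SMT thread */
      return capa;
  }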
As an example, using one thread of each of the 8 P-cores of an intel
i9-14900k gives 395k rps for a corrected total capacity of 69.3k, using
the 16 E-cores gives 405k for a total capacity of 70.4k, and using both
threads of 6 P-cores gives 411k for a total capacity of 69.6k. Thus three
nearly identical capacity scores deliver the same performance in various
combinations.
The acpi_cppc method was found to take about 5ms per CPU on a 64-core
EPYC system, which is plain unacceptable as it delays the boot by half
a second. Let's use the less accurate cpufreq first, which should be
sufficient anyway since many systems do not have acpi_cppc. We'll only
fall back to acpi_cppc for systems without cpufreq. If it were to be
an issue over time, we could also automatically consider that all
threads of the same core or even of the same cluster run at the same
speed (when a cluster is known to be accurate).
When cpu_capacity is not present, let's try to check acpi_cppc's
nominal_perf which is similar and commonly found on servers, then
scaling_max_freq (though that last one may vary a bit between CPUs
depending on die quality). That variation is not a problem since
we can absorb a ~5% variation without issue.
It was verified on an i9-14900 featuring 5.7-P, 6.0-P and 4.4-E GHz
that P-cores were not reordered and that E-cores were placed last.
It was also OK on a W3-2345 with 4.3 to 4.5GHz.
It's becoming difficult to see which CPUs are going to be kept/dropped.
Let's just skip all offline CPUs, and indicate "keep" in front of those
that are going to be used, and "----" in front of the excluded ones. It
is way more readable this way.
Also let's just drop the array entry number, since it's always the same
as the CPU number and is only an internal representation anyway.
Let's reimplement automatic binding to the first NUMA node when thread
count is not forced. It's the same thing as is already done in
check_config_validity() except that this time it's based on the
collected CPU information. The threads are automatically counted
and CPUs from non-first node(s) are evicted.