Before this patch, REGISTER_CONFIG_SECTION() allowed registering one and
only one callback (<post>), called after the parsing of a section.
This was limiting because you couldn't register a post callback from
anywhere else in the code.
This patch introduces the new REGISTER_CONFIG_SECTION_POST() macro, which
allows registering a new post callback for a section keyword from anywhere.
It implements the feature by allowing `struct cfg_section` entries that
do not have a `section_parser`, and then iterating over all cfg_section
entries that have a post_section_parser for a given keyword.
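As an illustration, a hedged usage sketch (the callback name and body
below are hypothetical; the macro and the ERR_* return convention follow
the existing post-section parsers):

    /* Hypothetical example: register an extra post callback for the
     * "global" section from any file, without a section_parser. */
    static int my_post_global(void)
    {
        /* finalization work once the section has been parsed */
        return 0; /* or an ERR_* code on failure */
    }
    REGISTER_CONFIG_SECTION_POST("global", my_post_global);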
Previous patch 2c270a05f ("BUG/MINOR: mworker: section ignored in
discovery after a post_section_parser") needs an adjustment for the last
section of the file.
Indeed the post_section_parser of the last section must not be called in
discovery mode.
Must be backported to 3.1.
When a new section is discovered, the post_section_parser of the
previous section is called. However, in the new master-worker mode, the
discovery mode skips the post_section_parser. But instead of then
trying to parse the current section keyword, it would completely skip
the current line.
This is a minor bug since there aren't many sections with a
post_section_parser, and not many sections to parse in discovery
mode.
But this could be reproduced like this:

    global
        expose-deprecated-directives

    resolvers res
        parse-resolv-conf

    program foo
        command sleep 10

    program bar
        command sleep 10
The 'resolvers' section has a post_section_parser which will be ignored
in discovery mode, with the consequence of ignoring the first program
section.
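A hedged sketch of the corrected control flow in the parsing loop
(variable names are illustrative, not the exact cfgparse code):

    /* when the first word of the line opens a new section: */
    if (pcs && pcs->post_section_parser && !in_discovery_mode)
        err_code |= pcs->post_section_parser();
    /* The old code jumped to the next line here when the call was
     * skipped in discovery mode, dropping the line that holds the new
     * section keyword. The fix falls through so that the current
     * keyword is parsed normally. */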
This must be backported to 3.1.
STREAM and CRYPTO frames have a similar encoding format. In particular,
both of them have a variable-length integer Length field just before the
frame payload.
It is complex to determine the optimal Length value before copying the
payload data into the remaining buffer space. As such, helper functions
were implemented to calculate this. However, the CRYPTO and STREAM frame
encoding implementations were not completely aligned, which made the
code harder to follow.
The purpose of this commit is to simplify CRYPTO and STREAM frames
encoding. First, a new helper quic_int_cap_length() is defined, which
determines the optimal buffer room available when it is prefixed by a
variable-length integer used as a Length field. Then, processing of both
CRYPTO and STREAM frames is now nearly identical, based on this new
helper function. Functions max_available_room() and
max_stream_data_size() are now unused and are removed.
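A hedged sketch of the idea behind such a helper, using the varint sizes
from RFC 9000 section 16 (illustrative, not the exact
quic_int_cap_length() implementation):

    /* Size in bytes of the QUIC varint encoding of <val>: 1, 2, 4 or 8. */
    static inline size_t quic_varint_size(uint64_t val)
    {
        if (val < (1ULL << 6))  return 1;
        if (val < (1ULL << 14)) return 2;
        if (val < (1ULL << 30)) return 4;
        return 8;
    }

    /* Largest payload length <len> such that the varint encoding of
     * <len> followed by <len> payload bytes fits into <room> bytes. */
    static inline size_t cap_length(size_t room)
    {
        size_t len = room;
        while (len && quic_varint_size(len) + len > room)
            len--; /* only loops near varint size boundaries */
        return len;
    }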
Function max_stream_data_size() is used to determine the payload length
of a CRYPTO frame. It takes into account that the CRYPTO Length field is
a variable-length integer.
The implemented calculation was incorrect, as it reserved too much space
for the frame header. This error is mostly due to the fact that
max_stream_data_size() reuses max_available_room(), which also reserves
space for a variable-length integer. This resulted in CRYPTO frames 1 to
2 bytes shorter than the maximum achievable value, which in the end
produced datagrams shorter than the MTU.
Fix max_stream_data_size() implementation. It is now merely a wrapper
around max_available_room(). This ensures that CRYPTO frame encoding is
now properly optimized to use the available MTU.
This should be backported up to 2.6.
Long header packets have a mandatory Length field, which contains the
size of the Packet Number and payload, encoded as a variable-length
integer.
Its value can thus only be determined after the payload size is known,
which depends on the remaining buffer space after this variable-length
field.
Packet payloads are encoded in two steps. First, a list of input frames
is processed until the packet buffer is full. CRYPTO and STREAM frame
payloads can be split if needed to fill the buffer. Real encoding is
then performed as a second-stage operation, first with the Length field,
then with the selected frames themselves.
Before this patch, no space was reserved in the buffer for the Length
field when attaching the frames to the packet. This could result in an
error, as the packet payload would be too large for the remaining space.
In practice, this issue was rarely encountered, mostly as a side effect
of another issue linked to CRYPTO frame encoding. Indeed, a wrong
calculation is performed on CRYPTO splitting, which results in frame
payloads a few bytes shorter than expected. This however ensured there
would always be enough room for the Length field and payload during
encoding. As CRYPTO frames are the only content big enough to be emitted
in a Long header packet, this renders the current issue mostly
non-reproducible.
Fix the original issue by reserving some space for the Length field
prior to frame payload calculation, using a maximum value based on the
remaining room. The packet length is then reduced if needed when
encoding is performed, which ensures there is always enough room for the
selected frames.
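A hedged sketch of the reservation step (illustrative, reusing the
quic_varint_size() helper sketched earlier):

    /* The Length can never exceed the remaining room, so the varint
     * encoding of <room> bounds the size of the Length prefix. */
    size_t reserve = quic_varint_size(room);
    size_t payload_room = room - reserve;
    /* ... select frames fitting into <payload_room>, splitting
     * CRYPTO/STREAM payloads if needed ... */
    /* At encoding time the real Length may need fewer bytes than
     * <reserve>: the packet just ends up shorter, never too large. */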
Note that the other issue impacting CRYPTO frame encoding is not yet
fixed. This could result in datagrams with Long header packets not
completely extended to the full MTU. The issue will be addressed in
another patch.
This should be backported up to 2.6.
When leastconn is used under many threads, there can be a lot of
contention on leastconn, because the same node has to be moved around
all the time (when picking it and when releasing it). In GH issue #2861
it was noticed that 46 threads out of 64 were waiting on the same lock
in fwlc_srv_reposition().
In such a case, the accuracy of the server's key becomes quite irrelevant
because nobody cares if the same server is picked twice in a row and the
next one twice again.
While other approaches in the past considered using a floating key to
avoid moving the server each time (which was not compatible with the
round-robin rule for equal keys), here a more drastic solution is needed.
What we're doing instead is that we turn this lock into a trylock. If we
can grab it, we do the job. If we can't, then we just wake up a server's
tasklet dedicated to this. That tasklet will then try again slightly
later, knowing that during this short time frame, the server's position
in the queue is slightly inaccurate. Note that any thread touching the
same server will also reposition it and save that work for next time.
Also if multiple threads wake the tasklet up, then that's fine, their
calls will be merged and a single lock will be taken in the end.
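A hedged sketch of the trylock pattern (the lock macro, field and
function names are illustrative, not necessarily the exact identifiers):

    /* fwlc_srv_reposition()-like logic: don't fight over the lock */
    if (HA_RWLOCK_TRYWRLOCK(LBPRM_LOCK, &p->lbprm.lock) != 0) {
        /* contended: delegate the repositioning to the server's
         * dedicated tasklet; concurrent wakeups are merged. */
        tasklet_wakeup(srv->requeue_tasklet);
        return;
    }
    /* lock held: move the server to its correct place in the tree */
    fwlc_srv_requeue(srv);
    HA_RWLOCK_WRUNLOCK(LBPRM_LOCK, &p->lbprm.lock);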
Testing this on a 24-core EPYC 74F3 showed a significant performance
boost, from 382krps to 610krps. The overhead of fwlc_srv_reposition()
reported by perf top dropped from 43% to 2.5%:
Before:
Overhead Shared Object Symbol
43.46% haproxy-master-inlineebo [.] fwlc_srv_reposition
21.20% haproxy-master-inlineebo [.] fwlc_get_next_server
0.91% haproxy-master-inlineebo [.] process_stream
0.75% [kernel] [k] ice_napi_poll
0.51% [kernel] [k] tcp_recvmsg
0.50% [kernel] [k] ice_start_xmit
0.50% [kernel] [k] tcp_ack
After:
Overhead Shared Object Symbol
30.37% haproxy [.] fwlc_get_next_server
2.51% haproxy [.] fwlc_srv_reposition
1.91% haproxy [.] process_stream
1.46% [kernel] [k] ice_napi_poll
1.36% [kernel] [k] tcp_recvmsg
1.04% [kernel] [k] tcp_ack
1.00% [kernel] [k] skb_release_data
0.96% [kernel] [k] ice_start_xmit
0.91% haproxy [.] conn_backend_get
0.82% haproxy [.] connect_server
0.82% haproxy [.] run_tasks_from_lists
Tested on an Ampere Altra with 64 aarch64 cores dedicated to haproxy,
the gain is even more visible (3.6x):
Before: 311-323k rps, 3.16-3.25ms, 6400% CPU
Overhead Shared Object Symbol
55.69% haproxy-master [.] fwlc_srv_reposition
33.30% haproxy-master [.] fwlc_get_next_server
0.89% haproxy-master [.] process_stream
0.45% haproxy-master [.] h1_snd_buf
0.34% haproxy-master [.] run_tasks_from_lists
0.32% haproxy-master [.] connect_server
0.31% haproxy-master [.] conn_backend_get
0.31% haproxy-master [.] h1_headers_to_hdr_list
0.24% haproxy-master [.] srv_add_to_idle_list
0.23% haproxy-master [.] http_request_forward_body
0.22% haproxy-master [.] __pool_alloc
0.21% haproxy-master [.] http_wait_for_response
0.21% haproxy-master [.] h1_send
After: 1.21M rps, 0.842ms, 6400% CPU
Overhead Shared Object Symbol
17.44% haproxy [.] fwlc_get_next_server
6.33% haproxy [.] process_stream
4.40% haproxy [.] fwlc_srv_reposition
3.64% haproxy [.] conn_backend_get
2.75% haproxy [.] connect_server
2.71% haproxy [.] h1_snd_buf
2.66% haproxy [.] srv_add_to_idle_list
2.33% haproxy [.] run_tasks_from_lists
2.14% haproxy [.] h1_headers_to_hdr_list
1.56% haproxy [.] stream_set_backend
1.37% haproxy [.] http_request_forward_body
1.35% haproxy [.] http_wait_for_response
1.34% haproxy [.] h1_send
And at similar loads, the CPU usage considerably drops (3.55x), as
well as the response time (10x):
After: 320k rps, 0.322ms, 1800% CPU
Overhead Shared Object Symbol
7.62% haproxy [.] process_stream
4.64% haproxy [.] h1_headers_to_hdr_list
3.09% haproxy [.] h1_snd_buf
3.08% haproxy [.] h1_process_demux
2.22% haproxy [.] __pool_alloc
2.14% haproxy [.] connect_server
1.87% haproxy [.] h1_send
> 1.84% haproxy [.] fwlc_srv_reposition
1.84% haproxy [.] run_tasks_from_lists
1.77% haproxy [.] sock_conn_iocb
1.75% haproxy [.] srv_add_to_idle_list
1.66% haproxy [.] http_request_forward_body
1.65% haproxy [.] wake_expired_tasks
1.59% haproxy [.] h1_parse_msg_hdrs
1.51% haproxy [.] http_wait_for_response
> 1.50% haproxy [.] fwlc_get_next_server
The cost of fwlc_get_next_server() naturally increases as the server
count increases, but now has no visible effect on updates. The load
distribution remains unchanged compared to the previous approach,
the weight still being respected.
For further improvements to the fwlc algo, please consult github
issue #881 which centralizes everything related to this algorithm.
This creates a tasklet that only expects to be called when the LB
algorithm is under contention when trying to reposition the server
in its tree. Indeed, that's one of the operations that usually
requires taking a write lock on a highly contended area, often
for very little benefit under contention; indeed, under load, if
a server keeps its previous position for a few extra microseconds,
there's usually no harm. Thus this new tasklet can be woken up by
the LB algo to ask the server to later call lbprm.server_requeue().
It does nothing else.
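A hedged sketch of what such a per-server tasklet could look like (field
and function names are illustrative):

    /* Tasklet handler: just requeue the server, nothing else. */
    static struct task *srv_requeue_process(struct task *t, void *ctx,
                                            unsigned int state)
    {
        struct server *srv = ctx;

        if (srv->proxy->lbprm.server_requeue)
            srv->proxy->lbprm.server_requeue(srv);
        return t;
    }

    /* at server initialization: */
    srv->requeue_tasklet = tasklet_new();
    srv->requeue_tasklet->process = srv_requeue_process;
    srv->requeue_tasklet->context = srv;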
This callback will be used to reposition a server to its expected
position regardless of the fact that it was taken or dropped. It
will only be used by supporting LB algos. For now, only fwlc defines
it and assigns it to fwlc_srv_reposition(). At the moment it's not
used yet.
Storing only 30 buckets means we only keep 256 bytes per label. This
further simplifies address calculation and reduces the memory used
without complicating the locking code. It means we won't measure wait
times larger than a second, but we're not supposed to face this as it
would trigger the watchdog anyway. It may become a bit tight if
measuring using rdtsc() instead of now_mono_time() though (typically
the limit would be around 350ms for a 3 GHz CPU).
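A hedged sketch of the bucket mapping (illustrative; the real code would
rather use the internal flsnz()-style helpers):

    /* Map a wait time in ns to one of 30 power-of-two buckets: bucket
     * N counts waits in [2^N, 2^(N+1)) ns. Bucket 29 tops out around
     * 1.07s, hence times beyond ~1s are no longer distinguished. */
    static inline unsigned int wait_time_bucket(uint64_t ns)
    {
        unsigned int b = ns ? 63 - __builtin_clzll(ns) : 0;
        return b < 30 ? b : 29;
    }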
It's more convenient (and more readable) to have the lock stats arranged
by operation type (read, seek, write). It will also make it possible to
later simplify the structure format and the bucket address calculation.
lock_stat[] is now split into lock_stats_rd[], lock_stats_sk[], and
lock_stats_wr[].
Now that we have our sums by bucket, the _locked counter is redundant
since it's always equal to the sum of all entries. Let's just get rid
of it and replace its consumption with a loop over all buckets, this
will reduce the overhead of taking each lock at the expense of a tiny
extra effort when dumping all locks, which we don't care about.
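For instance, the dump side can recompute the counter on the fly; a
hedged sketch (the array, field and NB_BUCKETS names are illustrative):

    /* When dumping, derive the "locked" count from the buckets instead
     * of maintaining a separate counter on the lock fast path. */
    uint64_t locked = 0;
    for (unsigned int i = 0; i < NB_BUCKETS; i++)
        locked += lock_stats_wr[label].buckets[i];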
In addition to the total/average wait time, we now also store the
wait time in 2^N buckets. There are 32 buckets for each type (read,
seek, write), allowing wait times from 1-2ns to 2.1-4.3s to be stored,
which is quite sufficient, even if we'd want to switch from ns to
CPU cycles in the future. The counters are only reported for non-
zero buckets so as not to visually pollute the output.
This significantly inflates the lock_stat struct, which is now
aligned to 256 bytes and rounded up to 1kB. But that's not really
a problem, given that there's only one per lock label.
The lock time computation was a bit inconsistent between functions,
particularly those using a try_lock. Some of them would count the lock
as taken without counting the time, others would simply not count it.
This is essentially due to the way the time is retrieved, as it was
done inside the atomic increment.
Let's instead always use start_time to carry the elapsed time, by
presetting it to the negative time before the event and adding the
positive time after, so that it finally contains the duration. Then,
depending on the try lock's success, we add the result or not. This
was generalized to all lock functions for consistency, and because
this will be handy for future changes.
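A hedged sketch of the pattern (now_mono_time() and HA_ATOMIC_ADD()
exist in the tree; the try-lock call and the stats field are
illustrative):

    /* preset to the negated start date; the later addition yields the
     * elapsed time without reading the clock inside the atomic op */
    int64_t start_time = -now_mono_time();
    int taken = do_try_lock(lock);   /* hypothetical try-lock */
    start_time += now_mono_time();   /* now holds the duration */

    if (taken) /* only account the wait time for successful locks */
        HA_ATOMIC_ADD(&lock_stats_wr[label].nsec_wait, start_time);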
Technically speaking, spin locks use a seek lock, not a write lock,
so better count them appropriately for consistency (lock time and
function call counts).
The rework of the thread dumping mechanism in 2.8 with commit 9a6ecbd590
("MEDIUM: debug: simplify the thread dump mechanism") opened a small
race, which is that a thread in the process of dumping other ones may
block another one from panicking while it's looping at the end of
ha_thread_dump_fill(), or any other sequence involving the currently
dumped one.
This was emphasized in 3.1 with commit 148eb5875f ("DEBUG: wdt: better
detect apparently locked up threads and warn about them"), which allowed
warnings to be emitted about long-stuck threads. In this case, what
sometimes happens is that a thread starts to emit a warning (or a set
of warnings), and while the warning is being awaited, a panic finally
happens and interrupts either the dumping thread, which never finishes
and waits for the target's pointer to become NULL, which will never
happen since it was supposed to do it itself, or the currently dumped
thread, which could wait for the dumping thread to become ready while
this one has not released the former.
In order to address this, first we now make sure never to dump a thread
that is already in the process of dumping another one. We're adding a
new thread flag to know this situation, that is set in ha_thread_dump_fill()
and cleared in ha_thread_dump_done(). And similarly, we don't trigger
the watchdog on a thread waiting for another one to finish its dump,
as it's likely a case of warning (and maybe even a panic) that makes
them wait for each other and we don't want such cases to be reentrant.
Finally, we check in the main polling loop that the flag never
accidentally leaks (e.g. from a wrong flag manipulation), as this would
be difficult to spot and would have bad consequences.
This should be backported at least to 2.8, and should resolve github
issue #2860. Thanks to Chris Staite for the very informative backtrace
that exhibited the problem.
Released version 3.2-dev5 with the following main changes :
- BUG/MINOR: ssl: put ssl_sock_load_ca under SSL_NO_GENERATE_CERTIFICATES
- CLEANUP: ssl: rename ssl_sock_load_ca to ssl_sock_gencert_load_ca
- CLEANUP: ssl: move ssl_sock_gencert_load_ca declaration in ssl_gencert.h
- CLEANUP: tree-wide: define and use acl_match_cond() helper
- MINOR: epoll: permit to mask certain specific events
- MINOR: proxies: Add a per-thread group field to struct proxy.
- MINOR: Add fields to the per-thread group field in struct server.
- MINOR: proxies/servers: Calculate queueslength and use it.
- MEDIUM: servers/proxies: Switch to using per-tgroup queues.
- BUG/MINOR: stream: Properly handle "on-marked-up shutdown-backup-sessions"
- MEDIUM: stream: Map task wake up reasons to dedicated stream events
- MEDIUM: stream: No longer use TASK_F_UEVT* to shut a stream down
- BUILD: tools: fix build on BSD by dropping the ETIME check
- MINOR: queues: use __ha_cpu_relax() on failed CAS.
- BUILD: queues: Use unsigned int when needed
- BUILD: ssl: allow to build without the renegotiation API of WolfSSL
- BUILD: ssl: more cleaner approach to WolfSSL without renegotiation
- BUG/MEDIUM: chunk: make sure to flush the trash pool before resizing
- MINOR: quic: remove references to burst in quic-cc-algo parsing
- MINOR: quic: allow BBR testing without pacing
- MINOR: quic: transform pacing settings into a global option
- MAJOR: quic: mark pacing as stable and enable it by default
- MINOR: quic: mark BBR as stable
- MINOR: quic: define quic_tune
- BUILD: quic: fix overflow in global tune
- DEBUG: fd: add a counter of takeovers of an FD since it was last opened
- MINOR: fd: add a generation number to file descriptors
- DEBUG: epoll: store and compare the FD's generation count with reported event
- MEDIUM: epoll: skip reports of stale file descriptors
- MINOR: mux-h1: Add masks to group H1S DEMUX and MUX errors
- BUG/MINOR: mux-h1: Only report a SE error on demux error
- MINOR: tevt: Add the termination events log's fundations
- MINOR: tevt/stconn: Add a termination events log in the SE descriptor
- MINOR: tevt/mux-h1: Report termination events for the H1C and H1S
- MINOR: tevt/mux-h2: Report termination events for the H2C
- MINOR: tevt/stream/stconn: Report termination events for stream and sc
- MINOR: tevt/conn: Report intercepted event for L4 rules
- MINOR: tevt/mux-h1/mux-h2: Add termination events log when dumping mux info
- MINOR: tevt/muxes: Add CTL and SCTL command to get the termination event logs
- MINOR: tevt/mux-pt: Add support for termination event logs
- MINOR: tevt/connection: Add dedicated termination events for lower locations
- MEDIUM: tevt/muxes: Add dedicated termination events for muxc/se locations
- MINOR: tevt/stconn: Be more accurate to report shutw events
- MEDIUM: tevt/stconn/stream: Add dedicated termination events for stream location
- MINOR: tevt: Don't duplicate termination event during reporting
- MINOR: tevt/applet: Add limited support for termination event logs for applets
- MINOR: tevt: Add a sample to get termination events for all locations
- MINOR: tevt: Improve function to convert a termination events log to string
- REORG: tevt/connection: Move enums at the end of the header file
- MINOR: tevt/dev: Add term_events tool
- MINOR: tevt/connection: Add support for POLL_HUP/POLL_ERR events
- MINOR: tevt/dev: Parse tuple of termination events
- BUG/MEDIUM: htx: wrong count computation in htx_xfer_blks()
- DOC: htx: clarify <mark> parameter for htx_xfer_blks()
- BUILD: quic: remove GCC undefined error in qc_release_lost_pkts()
- MEDIUM: htx: prevent <mark> to copy incomplete headers in htx_xfer_blks()
- BUG/MEDIUM: mux-fcgi: Properly handle read0 on partial records
- BUG/MINOR: tevt/http-ana: Remove badly placed event reports
- DEBUG: http-ana: Remove debug counters from HTTP analyzers
- DEBUG: mux-h1: Remove some debug counters
- BUG/MINOR: tcp-rules: Don't forward close during tcp-response content rules eval
- MEDIUM: stream: interrupt costly rulesets after too many evaluations
- BUG/MINOR: http-check: Don't pretend a C-L heeader is set before adding it
- BUILD: ssl: remove a boringssl definition defined by recent boringssl libs
- BUG/MINOR: tevt/mux-h2: Set truncated receive/eos events at SE level on error
- BUG/MEDIUM: flt-spoe: Set/test applet flags instead of SE flags from I/O handler
- BUG/MEDIUM: applet: Don't pretend to have more data to handle EOI/EOS/ERROR
- BUG/MEDIUM: flt-spoe: Properly handle end of stream from the SPOE applet
- MINOR: flt-spoe: Report end of input immediately after applet init
- MINOR: mux-spop: Report EOI on the SE when a ACK is received for a stream
- MINOR: mux-spop: Set SPOP_CF_ERROR flag on connection error only
- MINOR: tevt/mux-spop: Report termination events for the SPOP connect/stream
- CLEANUP: mux-spop: Remove useless comments
- MINOR: mux-spop: Dump info about connections and streams in dedicated functions
- MINOR: mux-spop: Implement .show_sd callback function
- MEDIUM: mux-fcgi: Add a function to propagate termination flags from fstrm to SE
- BUG/MEDIUM: mux-fcgi: Propagate flags to SE in fcgi_strm_wake_one_stream
- MINOR: tevt/mux-fcgi: Report termination events for the FCGI connect/stream
- MINOR: mux-fcgi: Dump info about connections and streams in dedicated functions
- MINOR: mux-spop/mux-fcgi: Add support of the debug string for logs
- BUG/MINOR: cli: Don't set SE flags from the cli applet
- BUG/MINOR: cli: Fix memory leak on error for _getsocks command
- BUG/MINOR: cli: Fix a possible infinite loop in _getsocks()
- BUG/MINOR: config/userlist: Support one 'users' option for 'group' directive
- BUG/MINOR: auth: Fix a leak on error path when parsing user's groups
- BUG/MINOR: flt-trace: Support only one name option
- MINOR: filters: Improve errors formating during filters parsing
- BUG/MINOR: stats-json: Define JSON_INT_MAX as a signed integer
- DOC: option redispatch should mention persist options
- BUG/MINOR: debug: make "debug dev sched" accept a negative TID
- BUG/MINOR: debug: make sure the "debug dev sched" tasks don't block stopping
- IMPORT: plock: export the uninlined version of the lock wait function
- IMPORT: plock: give higher precedence to W than S
- IMPORT: plock: lower the slope of the exponential back-off
- IMPORT: plock: use cpu_relax() for a shorter time in EBO
- Revert "IMPORT: plock: export the uninlined version of the lock wait function"
- BUG/MEDIUM: ssl: chosing correct certificate using RSA-PSS with TLSv1.3
The clienthello callback was written when TLSv1.3 was not yet out, and
signature algorithms have changed since then.
With TLSv1.2, the least significant byte was used to determine the
SignatureAlgorithm, which could be rsa(1), dsa(2), ecdsa(3).
https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.1.4.1
This was used to choose which type of certificate to push to the client.
But TLSv1.3 changed that, and introduced new RSA-PSS algorithms that
do not have the least significant byte set to 1.
https://datatracker.ietf.org/doc/html/rfc8446#section-4.2.3
This would result in choosing the wrong certificate when an RSA and an
ECDSA one are in the configuration for the same SNI or default entry.
This patch fixes the issue by parsing both the hash and signature fields
to check for the RSA-PSS signature schemes.
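A hedged sketch of the classification this implies (scheme values from
RFC 8446; this is illustrative, not the exact clienthello code):

    /* Return non-zero if a TLS SignatureScheme can sign with an RSA
     * key. TLSv1.2-style pairs have rsa(1) in the low byte; the
     * TLSv1.3 RSA-PSS schemes (0x0804..0x0806 and 0x0809..0x080b)
     * don't, so both bytes must be examined. */
    static int sigscheme_is_rsa(uint16_t scheme)
    {
        if ((scheme & 0xff) == 1)                 /* rsa_pkcs1_* */
            return 1;
        if (scheme >= 0x0804 && scheme <= 0x0806) /* rsa_pss_rsae_* */
            return 1;
        if (scheme >= 0x0809 && scheme <= 0x080b) /* rsa_pss_pss_* */
            return 1;
        return 0;
    }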
This must fix issue #2852.
This must be backported to all stable versions. The code was moved
from ssl_sock.c to ssl_clienthello.c in recent versions.
This reverts commit 5496d06b2b.
It breaks the build on Windows, which apparently doesn't support the weak
attribute well on functions. It's no big deal anyway; playing with build
options while debugging still works, though it's less easy to use.
Tests have shown that on modern CPUs it's interesting to wait a bit less
in cpu_relax(). Till now we were looping down to 60 iterations and then
switching to just barriers. Increasing the threshold to 90 iterations
left before getting out of the loop improved the average and max time
to grab a write lock by a few percent (e.g. 10% at 1us, 20% at 256ns
or lower). Higher values tend to progressively lose that gain so let's
stick to this one. This was measured on an EPYC 74F3 like previous
measurements that initially led to this value, and the value might
possibly depend on the mask applied to the loop counter.
This is plock commit 74ca0a7307fa6aec3139f27d3b7e534e1bdb748e.
Along many tests involving both haproxy's scheduler and forwarded
traffic, various exponents and algorithms were attempted for the EBO
and their effects were measured. It was found that a growth in 1.25^N
limited to 128k cycles consistently gives a better latency than 1.5^N
limited to 256k cycles, without degrading general performance. The
measures of the time to grab a write lock on a 48-thread EPYC show
that the number of occurrences of low times was roughly multiplied by
2-3 while the number of occurrences of times above 64us was reduced
by similar factors, to even reach 300 at 64us and limiting the maximum
time by a factor of 4.
The other variants that were experimented with are:
    m = ((m + (m >> 1)) + 2) & 0x3ffff;   // original
    m = ((m + (m >> 1) + (m >> 3)) + 2) & 0x3ffff;
    m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x3ffff;
    m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x1ffff;
    m = ((m + (m >> 1) + (m >> 4)) + 1) & 0x1ffff;
    m = ((m + (m >> 2) + (m >> 4)) + 1) & 0x1ffff; // lowest CPU on pl_wr test + good perf
    m = ((m + (m >> 2)) + 1) & 0x1ffff;            // even lower cpu usage, lowest max
    m = ((m + (m >> 1) + (m >> 2)) + 1) & 0x1ffff; // correct but slightly higher maxes
    m = ((m + (m >> 1) + (m >> 3)) + 1) & 0x1ffff; // less good than m+m>>2
    m = ((m + (m >> 2) + (m >> 3)) + 1) & 0x1ffff; // better but not as good as m+m>>2
    m = ((m + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good, lower rates on small counts
    m = ((m + (m >> 2) + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good as well
    m = ((m & 0x7fff) + (m >> 1) + (m >> 4)) + 2;
    m = ((m & 0xffff) + (m >> 1) + (m >> 4)) + 2;
This is plock commit dddd9ee01c522da33c353e2e4d4fd743d8336ec3.
It was noticed in haproxy that in certain extreme cases, a write lock
subject to EBO may fail for a very long time in front of a large set
of readers constantly trying to upgrade to the S state. The reason is
that among many readers, one will succeed in its upgrade, and this
situation can last for a very long time with many readers upgrading
in turn, while the writer waits longer and longer before trying again.
Here we're taking a reasonable approach which is that the write lock
should have a higher precedence in its attempt to grab the lock. What
is done is that instead of fully rolling back in case of conflict with
a pure S lock, the writer will only release its read part in order to
let the S upgrade to W if needed, and finish its operations. This
guarantees no other seek/read/write can enter. Once the conflict is
resolved, the writer grabs the read part again and waits for readers
to be gone (in practice it could even return without waiting since we
know that any possible wanderers would leave or even not be there at
all, but it avoids a complicated loop code that wouldn't improve the
practical situation but inflate the code).
Thanks to this change, the maximum write lock latency on a 48-thread
AMD with a heavily loaded scheduler went down from 256 to 64 ms, and the
number of occurrences of 32ms or more was divided by 300, while all
occurrences of 1ms or less were multiplied by up to 3 (3 for the 4-16ns
cases).
This is plock commit b6a28366d156812f59c91346edc2eab6374a5ebd.
The inlining of the lock waiting function was made more easily
configurable with commit 7505c2e ("plock: always expose the inline
version of the lock wait function"). However, the standard one remained
static, but in order to resolve the symbols in "perf top", it's much
better to export it, so let's move "static" with "inline" and leave it
exported when PLOCK_INLINE_EBO is not set.
This is plock commit 3bea7812ec705b9339bbb0ed482a2cd8aa6c185c.
When "debug dev sched" is used to pop up background tasks, these tasks
are never stopped, so we must be careful to stop them when the stopping
flag is set, otherwise they can prevent the process from stopping when
sufficiently numerous (tests went as far as 100 million tasks, leading
to the run queue never being completely purged in one poll round).
No backport is needed since this is only used when debugging and tuning
the scheduler.
The TID passed to "debug dev sched" is used to pin the task to a given
thread. A negative value normally means the task is unpinned and goes
to the shared wait queue and run queue. However, due to the type of the
variable, negative values were mapped to highly positive values and the
task was pinned to the current thread. Let's add the proper cast to fix
this.
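A hedged illustration of the type issue (names are illustrative):

    /* With an unsigned variable, a user-supplied "-1" becomes a huge
     * positive value, so "tid < 0" can never be true and the task was
     * wrongly pinned to the current thread. The fix casts it back: */
    unsigned long tid = (unsigned long)-1;
    if ((long)tid < 0) {
        /* unpinned: use the shared wait queue and run queue */
    }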
No backport is needed since this is only used to experiment with the
scheduler and measure its performance.
"option redispatch" remains vague in which cases a session would persist;
let's mention "option persist" and "force-persist" as an example so folks
don't draw the conclusion that this may be default.
Should be backported to stable branches.
A JSON integer is defined in the range [-(2**53)+1, (2**53)-1]. Macros
are used to define the minimum and the maximum values, the minimum one
being defined from the maximum one. So JSON_INT_MAX must be defined as a
signed integer value to avoid a wrong cast of JSON_INT_MIN.
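A hedged sketch of the kind of definition involved (not necessarily the
exact macros):

    /* If JSON_INT_MAX were unsigned (e.g. (1ULL << 53) - 1), negating
     * it for JSON_INT_MIN would wrap instead of giving -(2**53)+1. */
    #define JSON_INT_MAX ((long long)((1ULL << 53) - 1))
    #define JSON_INT_MIN (-JSON_INT_MAX)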
It was reported by Coverity in #2841: CID 1587769.
This patch could be backported to all stable versions.
When a trace filter is defined, only one 'name' option is expected. But
this was not tested. Thus it was possible to set several names, leading
to a memory leak.
It is now tested, and it is not allowed to redefine the trace filter name.
It was reported by Coverity in #2841: CID 1587768.
This patch could be backported to all stable versions.
In a userlist section, when a user is parsed, if a specified group is not
found, an error is reported. In this case we must take care to release
the already-built groups list.
It was reported by Coverity in #2841: CID 1587770.
This patch could be backported to all stable versions.
When a group is defined in a userlist section, only one 'users' option is
expected. But this was not tested. Thus it was possible to set several
options, leading to a memory leak.
It is now tested, and it is not allowed to redefine the 'users' option.
It was reported by Coverity in #2841: CID 1587771.
This patch could be backported to all stable versions.
In the _getsocks() function, when we failed to set the unix socket in
non-blocking mode, a goto to the "out" label led to an infinite loop. To
fix the issue, we must simply let the function exit.
This patch should be backported to all stable versions.
Some errors in the parse function of the _getsocks command were not
properly handled and returned immediately, leading to a memory leak of
the cmsgbuf and tmpbuf buffers.
To fix the issue, instead of immediately returning -1, we jump to the
"out" label. Returning 1 instead of -1 in that case is valid.
This was reported by Coverity in #2841: CIDs 1587773 and 1587772.
This patch should be backported as far as 2.4.
Since the CLI was updated to use the new applet API, it should no longer
set the SE flags directly. Instead, the corresponding applet flags must
be set using the applet API (applet_set_*). This is true for the CLI I/O
handler but also for the commands' parse functions and I/O callback
function.
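As an illustration, a hedged sketch of the substitution (the surrounding
context is simplified):

    /* before: poking the SE flags directly from the applet */
    se_fl_set(appctx->sedesc, SE_FL_EOI | SE_FL_EOS);

    /* after: set the applet-level flags through the applet API */
    applet_set_eoi(appctx);
    applet_set_eos(appctx);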
This patch should be backported as far as 3.0.
It is now possible to have debug info about the FCGI and SPOP
multiplexers. To do so, support for the MUX_SCTL_DBG_STR command was
implemented for these muxes.
To have this log message, the log-format must be set to:

    log-format "$HAPROXY_HTTP_LOG_FMT bs=<%[bs.debug_str]>"
The fcgi_show_fd() function was split to dump the info about the FCGI
connections and the FCGI streams in dedicated functions, duplicating
what is performed in other muxes.
In addition, the FCGI multiplexer now implements the .show_sd callback
function called by "show sess" CLI command.
Termination events are now reported for the FCGI connections and the
FCGI streams. In addition, all available termination event logs are
reported in the "show-fd" callback function. The .ctl and .sctl callback
functions were also updated to support, respectively, the MUX_CTL_TEVTS
and MUX_SCTL_TEVTS commands.
The commit is flagged as a bug because the same fix on the H2 multiplexer
was reported as a bug. But no issue was reported.
When a stream is explicitly woken up by the FCGI connection, if an error
condition is detected, the corresponding error flag is set on the SE:
SE_FL_ERROR or SE_FL_ERR_PENDING, depending on whether the end of stream
was reported or not.
However, there is no attempt to propagate other termination flags. We
must be sure to properly set SE_FL_EOI and SE_FL_EOS when appropriate to
be able to switch a pending error to a fatal error.
Because of this bug, the SE could remain with a pending error and no end
of stream, preventing the applicative stream from truly aborting it. It
means that in some abort scenarios, it seems to be possible to block a
stream infinitely.
This patch depends on:
  * MEDIUM: mux-fcgi: Add a function to propagate termination flags from fstrm to SE
  * BUG/MEDIUM: mux-fcgi: Properly handle read0 on partial records
This patch could be backported at least as far as 2.8 after a period of
observation. However, no bug was reported, so there is no rush.
The function fcgi_strm_propagate_term_flags() was added to check the
FSTRM state and evaluate when the EOI/EOS/ERR_PENDING/ERROR flags must
be set on the SE. It is not the only place where those flags are set,
but it centralizes the synchronization between the FCGI stream and the
SC.
For now, this function is only used at the end of fcgi_rcv_buf(). But it
will be used to fix a potential bug.
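A hedged sketch of such centralization (flag and state names are
illustrative, not the exact mux-fcgi ones):

    static void fcgi_strm_propagate_term_flags(struct fcgi_strm *fstrm)
    {
        if (fstrm->flags & FCGI_SF_ES_RCVD) {  /* end of stream seen */
            se_fl_set(fstrm->sd, SE_FL_EOI);
            if (fstrm->state == FCGI_SS_CLOSED)
                se_fl_set(fstrm->sd, SE_FL_EOS);
        }
        if (fstrm->flags & FCGI_SF_ERROR) {
            if (se_fl_test(fstrm->sd, SE_FL_EOS))
                se_fl_set(fstrm->sd, SE_FL_ERROR);       /* fatal */
            else
                se_fl_set(fstrm->sd, SE_FL_ERR_PENDING); /* data may remain */
        }
    }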
The spop_show_fd() function was split to dump the info about the SPOP
connections and the SPOP streams in dedicated functions, duplicating
what is performed in other muxes.
Termination events are now reported for the SPOP connections and the
SPOP streams. In addition, all available termination event logs are
reported in the "show-fd" callback function. The .ctl and .sctl callback
functions were also updated to support, respectively, the MUX_CTL_TEVTS
and MUX_SCTL_TEVTS commands.
The SPOP_CF_ERROR flag is now set on connection error only. It was also
set on some demux failures, but this is not mandatory because the
connection is closed anyway. And it is handy to have a flag dedicated to
TCP connection errors. That was the original purpose of this flag.
This patch could be backported to 3.1 to ease future backports.
The spop stream now reports the end of input when the ACK is transferred to
the SPOE applet. To do so, the flag SPOP_SF_ACK_RCVD was added. It is set on
the SPOP stream when its ACK is received by the SPOP connection.
In addition when SPOP stream flags are propagated to the SE, the error is
now reported if end of input was not reached instead of testing the
connection error code. It is more accurate.
This patch should be backported to 3.1.
The SPOE applet forwards the message that must be sent to the agent
during its init stage, so just after it is created. Once this is done,
the end of input must be reported because no more data will be
forwarded. However, this was performed after receiving the ACK response.
It is harmless, but there is no reason to delay the EOI. It is now
fixed.
This patch must be backported to 3.1.
The previous fix ("BUG/MEDIUM: applet: Don't pretend to have more data to
handle EOI/EOS/ERROR") revealed an issue with the way the SPOE applet was
reporting the end of stream, leading to never shut the applet down.
In fact, there are two bugs in one. The first one is about the applet
shutdown. Since the fix above, the applet is no longer closed. Before,
it was closed because it was reported in error. But now, it is just
delayed because the applet and the SPOP stream are declared to support
half-closed connections. So the applet is only closed when the SPOP
connection is closed. To fix this bug, both sides now state that
half-closed connections are not supported.
The second bug is about the way the end of stream is reported. It is
reported when the ACK response is received. But this is too early,
because the parent stream must process the response first. So now, we
take care to have processed the ACK from the parent applet before
reporting an end of stream.
This patch must be backported with the commit above to 3.1.
The way appctx EOI/EOS/ERROR flags were reported for applets using the
new API was to state that the applet had more data to deliver. But this
was not correct, and for APPCTX_FL_EOS, it led to reporting an error on
the SE because it is not expected: more data to deliver combined with an
end of stream is an impossible situation.
This was added as a fix by commit b8ca114031 ("BUG/MEDIUM: applet: State
appctx have more data if its EOI/EOS/ERROR flag is set"), mainly to make
the SPOE applet work.
When an applet sets one of these flags, it really means it has no more
data to deliver. So we must not try to trigger a new receive to handle
these flags. Instead, we must handle them directly in the
task_process_applet() function, and only if the corresponding SE flags
were not already set.
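A hedged sketch of the handling described (the APPCTX_FL_* and SE_FL_*
flags exist; the exact task_process_applet() logic may differ):

    /* translate applet-level termination flags to the SE, without
     * faking "more data" and without triggering a new receive */
    if ((appctx->flags & APPCTX_FL_EOI) &&
        !se_fl_test(appctx->sedesc, SE_FL_EOI))
        se_fl_set(appctx->sedesc, SE_FL_EOI);
    if ((appctx->flags & APPCTX_FL_EOS) &&
        !se_fl_test(appctx->sedesc, SE_FL_EOS))
        se_fl_set(appctx->sedesc, SE_FL_EOS);
    if ((appctx->flags & APPCTX_FL_ERROR) &&
        !se_fl_test(appctx->sedesc, SE_FL_ERROR))
        se_fl_set(appctx->sedesc, SE_FL_ERROR);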
This patch must be backported to 3.1.