This function is related to the Initial packet number space, which is more
closely tied to the QUIC TLS specifications. Let's move it to quic_tls.h
to be inlined.
Move the quic_path struct from quic_conn-t.h to quic_cc-t.h and rename it to
quic_cc_path. Update the code accordingly.
Also move some inlined functions related to the QUIC path to quic_cc.h.
Move quic_pkt_type(), quic_saddr_cpy(), quic_write_uint32(), max_available_room(),
max_stream_data_size(), quic_packet_number_length(), quic_packet_number_encode()
and quic_compute_ack_delay_us() to quic_tx.c because they are only used in this
file.
Also move quic_ack_delay_ms() and quic_read_uint32() to quic_tx.c for the same
reason.
Move quic_rx_packet_refinc() and quic_rx_packet_refdec() to quic_rx.h header.
Move qc_el_rx_pkts(), qc_el_rx_pkts_del() and qc_list_qel_rx_pkts() to quic_tls.h
header.
Move quic_cstream struct definition from quic_conn-t.h to quic_tls-t.h.
Its pool is also moved from the quic_conn module to quic_tls. The same goes
for quic_cstream_new() and quic_cstream_free().
Move quic_cid and quic_connection_id from quic_conn-t.h to a new quic_cid-t.h header.
Move the definitions of quic_stateless_reset_token_init(), quic_derive_cid(),
new_quic_cid(), quic_get_cid_tid() and retrieve_qc_conn_from_cid() to quic_cid.c,
a new C file.
When an H2 stream is blocked during data fast-forwarding, we must take care
to remove the H2_SF_NOTIFIED flag. This was only performed when data
fast-forwarding was attempted. However, if the H2 stream was blocked for any
other reason, this flag was not removed. During our tests, we found it was
possible to infinitely block a connection because one of its streams was in
the send_list with the flag set. In this case, the stream was no longer
woken up to resume sends, blocking all other streams.
No backport needed.
CONNECTION_CLOSE_APP encoding is broken, which prevents any packet carrying
such a frame from being sent. This bug has always been present in haproxy's
QUIC implementation. However, it was slightly hidden by the previous code,
which always initialized all frame members to zero, and that was sufficient
to ensure CONNECTION_CLOSE_APP encoding was correct. The patch below changes
this behavior by removing this costly initialization step:
4cf784f38e
MINOR: quic: Avoid zeroing frame structures
Now, frame members must always be initialized individually depending on the
type of frame used. However, for CONNECTION_CLOSE_APP this was not done, as
qc_cc_build_frm() accessed the wrong union member, referring to a
CONNECTION_CLOSE frame instead.
This bug was detected when trying to generate an HTTP/3 error. The
CONNECTION_CLOSE_APP frame encoding failed due to an uninitialized
<reason_phrase_len> which was too big. This was reported by the
following trace:
"frame building error : qc@0x5555561b86c0 idle_timer_task@0x5555561e5050 flags=0x86038058 CONNECTION_CLOSE_APP"
This must be backported up to 2.6. This is necessary even if the above
commit is not backported, as the previous code is also buggy, albeit with a
different behavior.
It's the exact same as commit 0a7ab7067 ("OPTIM: mux-h2: don't allocate
more buffers per connections than streams"), but for the zero-copy case
this time. Previously it was only done on the regular snd_buf() path, but
this one is needed as well. A transfer on 16 parallel streams now consumes
half of the memory, and a single stream consumes much less.
An alternate approach would be worth investigating in the future, based
on the same principle as the CF_STREAMER_FAST at the higher level: in
short, by monitoring how many mux buffers we write at once before refilling
them, we would get an idea of how much is worth keeping in buffers max,
given that anything beyond would just waste memory. Some tests show that
a single buffer already seems almost as good, except for single-stream
transfers, which is why it's worth spending more time on this.
Add an optional argument for "-dt". This argument is interpreted as a
list of several trace statements separated by commas. For each statement,
a specific trace source name can be specified, or none to act on all sources.
Using a double-colon separator, it is possible to add specifications on
the desired level and verbosity.
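For example, assuming "h2" and "quic" are valid source names and a
<name>:<level>:<verbosity> statement format (an assumption, not the
authoritative syntax), an invocation might look like:

  -dt h2:data:complete,quic:error

to trace h2 at the "data" level with "complete" verbosity, and quic at the
"error" level.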
Extract the conversion of the level string argument to an integer value into
a dedicated internal function, trace_parse_level(). This function is used
for CLI trace parsing and will also be useful for the "-dt" process
argument.
Add '-dt' haproxy process argument. This will automatically activate all
trace sources on stderr with the error level. This could be useful to
troubleshoot issues such as protocol violations.
The <pattern> field pointer of the pat_ref_elt structure has been replaced
by a zero-length array. As such, it's no longer needed to check for a NULL
address before printing it.
This type conversion was done in the following commit:
3ac9912837
OPTIM: pattern: save memory and time using ebst instead of ebis
The current patch is mandatory to fix the following GCC warning:
CC src/map.o
src/map.c: In function ‘cli_io_handler_map_lookup’:
src/map.c:549:54: error: the comparison will always evaluate as ‘true’ for the address of ‘pattern’ will never be NULL [-Werror=address]
549 | if (pat->ref && pat->ref->pattern)
|
No need to backport it unless the above commit is.
In the pat_ref_elt struct, the pattern string is stored outside of the
node element, using a pointer to an strdup(). Not only does this needlessly
waste at least 16-24 bytes per entry (8 for the pointer, 8-16 for the
allocator), it also makes the tree descent less efficient since both
the node and the string have to be visited for each layer (hence at least
two cache lines). Let's use an ebmb storage and place the pattern right
at the end of the pat_ref_elt, making it a variable-sized element instead.
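A minimal sketch of the resulting layout (other fields omitted; VAR_ARRAY is
haproxy's flexible-array macro):

  struct pat_ref_elt {
          /* ... bookkeeping fields ... */
          struct ebmb_node node;   /* indexed node, key stored inline below */
          char pattern[VAR_ARRAY]; /* pattern bytes at the end of the element */
  };

  /* the element is then allocated as a single variable-sized block: */
  elt = malloc(sizeof(*elt) + strlen(pattern) + 1);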
The set-map test below jumps from 173 to 182 kreq/s/core, and the memory
usage drops from 356 MB to 324 MB:
http-request set-map(/dev/null) %[rand(1000000)] 1
This is even more visible with large maps: after loading 16M IP addresses
into a map, the process uses this amount of memory:
- 3.15 GB with haproxy-2.8
- 4.21 GB with haproxy-2.9-dev11
- 3.68 GB with this patch
So that's a net saving of 32 bytes per entry here, which cuts in half the
extra cost of the tree, and loading a large map takes about 20% less time.
It is not possible in H1, but in H2 (and probably H3) it is possible to have
trailers at the end of a message while a Content-Length was announced.
However, depending on whether the trailers are received with the last HTX
DATA block and whether zero-copy forwarding is used, a processing error may
be triggered, leading to a 500-internal-error.
To fix the issue, when a content-length is announced and all the payload was
processed, we switch the message to the H1_MSG_DONE state only if the
end-of-message was also reported (HTX_FL_EOM flag set). Otherwise, it is
switched to the H1_MSG_TRAILERS state to be able to properly ignore the
trailers, if any.
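A minimal sketch of the resulting transition, assuming the usual h1m/htx
variables:

  if (htx->flags & HTX_FL_EOM)
          h1m->state = H1_MSG_DONE;     /* whole message already seen */
  else
          h1m->state = H1_MSG_TRAILERS; /* be ready to skip incoming trailers */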
The patch must be backported as far as 2.4. Be careful, this part was highly
refactored. The patch will have to be adapted to be backported.
The mworker mode never had a proper 'hard-stop' (-st) for the reload. This
was commonly used with the daemon mode, but it was never implemented in
mworker mode.
This patch fixes the problem by implementing a "hard-reload" command
over the master CLI. It does the same as the "reload" command, but
instead of waiting for the connections to stop in the previous process,
it immediately quits the previous process after binding.
This patch removes the code which selects the SSL certificate in the
OpenSSL Client Hello callback, to use the ssl_sock_chose_sni_ctx()
function which does the same.
The bigger part of the function which remains is the extraction of the
servername, ciphers and sigalgs, because it's done manually by parsing
the TLS extensions.
This is not supposed to change anything functionally.
The certificate selection used in the WolfSSL cert_cb and in the OpenSSL
clienthello callback is the same; the function was duplicated to achieve
the same thing.
This patch moves the selection code to a common function called
ssl_sock_chose_sni_ctx().
The servername string is still lowercased in the callback; however, the
search for the first dot in the string (wildp) is done in
ssl_sock_chose_sni_ctx().
The function uses the same certificate selection algorithm as before; it
needs to know whether rsa or ecdsa is required, the bind_conf to perform the
lookup, and the servername string.
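A hypothetical prototype matching this description (the real signature may
differ):

  struct sni_ctx *ssl_sock_chose_sni_ctx(struct bind_conf *s, const char *servername,
                                         int have_rsa_sig, int have_ecdsa_sig);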
This patch moves the code for WolfSSL only.
PR https://github.com/wolfSSL/wolfssl/pull/6963 implements primitives to
extract ciphers and signature algorithms.
It allows choosing a certificate depending on the sigalgs and
ciphers presented by the client (RSA or ECDSA).
Since WolfSSL does not implement the clienthello callback, the patch
uses the certificate callback (SSL_CTX_set_cert_cb()).
The callback is inspired by our clienthello callback; however, the
extraction of the client ciphers and sigalgs is simpler:
wolfSSL_get_sigalg_info() and wolfSSL_get_ciphersuite_info() are used.
This is not enabled by default yet, as the PR was not merged.
Maintain a proper px->lbprm.tot_weight for log backends. The server's weight
is considered to be 1 as long as the server is usable.
This will allow the stats page to correctly display the proxy status, since
the check currently relies on the proxy's lbprm.tot_weight variable.
Server rules declared using the "use-server" keyword within a proxy are not
supported inside a log backend (with "mode log" set), so we report a
warning to the user and reset the setting.
Take the px->server_rules freeing part out of free_proxy() and make it
a dedicated helper function so that it becomes possible to use it from
anywhere.
There are multiple places inside free_proxy() where we need to perform
the exact same operation: freeing a logformat list which includes freeing
every member.
To prevent code duplication, we add the free_logformat_list() function
that takes such a list as its parameter and does all the freeing job on
its own.
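A minimal sketch of such a helper, assuming haproxy's list API and the
logformat_node layout:

  void free_logformat_list(struct list *fmt)
  {
          struct logformat_node *lf, *lfb;

          list_for_each_entry_safe(lf, lfb, fmt, list) {
                  LIST_DELETE(&lf->list);       /* unlink the node */
                  release_sample_expr(lf->expr); /* free its sample expression */
                  free(lf->arg);
                  free(lf);
          }
  }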
This reverts commit 5884e46ec8c8231e73c68e1bdd345c75c9af97a0 since we
cannot perform the test during parsing as the effective proxy mode is
not yet known.
This is a leftover from 1e0093a317 ("MINOR: backend/balance: "balance"
requires TCP or HTTP mode").
Indeed, we cannot perform the test during parsing as the effective proxy
type is not yet known. Moreover, thanks to b61147fd ("MEDIUM: log/balance:
merge tcp/http algo with log ones") we could potentially benefit from
this setting even in log mode, but for now it is ignored by all log
compatible load-balancing algorithms.
In this patch we fix the prototypes of the ipcmp() and ipcpy() functions so
that input pointers that are used exclusively for reads are declared as
const pointers. This way, the compiler can safely assume that those
variables won't be altered by the function.
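The resulting prototypes then look like this (sketch):

  int ipcmp(const struct sockaddr_storage *ss1, const struct sockaddr_storage *ss2);
  struct sockaddr_storage *ipcpy(const struct sockaddr_storage *source,
                                 struct sockaddr_storage *dest);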
In this patch we add support for a new SERVER event in the
event_hdl API.
SERVER_INETADDR is implemented as an advanced server event.
It is published each time the server's IP address or port is
about to change (i.e., from the CLI, DNS, Lua...).
SERVER_INETADDR data is an event_hdl_cb_data_server_inetaddr struct
that provides additional info related to the server inet addr change,
but it can be cast to a regular event_hdl_cb_data_server struct if the
additional info is not needed.
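A sketch of the layout implied by this description (the members of the
nested struct are assumptions for illustration only):

  struct event_hdl_cb_data_server_inetaddr {
          /* generic part first, so the struct can be cast to it */
          struct event_hdl_cb_data_server server;
          struct {
                  struct sockaddr_storage prev_addr; /* assumed members: */
                  unsigned int prev_port;            /* addr/port before */
                  struct sockaddr_storage next_addr; /* and after the    */
                  unsigned int next_port;            /* change           */
          } safe;
  };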
When possible, we try to send DATA frames without copying data. To do so, we
swap the input buffer with the QCS tx buffer. This is only possible if:
* there is only one HTX block of data at the beginning of the message
* the amount of data to send is equal to the size of the HTX data block
* the QCS tx buffer is empty
In this case, both buffers are swapped. The frame metadata are written at
the beginning of the buffer, before the data and where the HTX structure is
stored.
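A minimal sketch of the eligibility test and swap (variable names assumed:
<in> is the HTX side, <out> the QCS tx buffer, <count> the data to send):

  struct htx_blk *blk = htx_get_head_blk(htx);

  if (htx_nbblks(htx) == 1 && blk &&
      htx_get_blk_type(blk) == HTX_BLK_DATA &&
      htx_get_blksz(blk) == count && !b_data(out)) {
          struct buffer tmp = *out;
          *out = *in;   /* zero-copy: swap the buffers */
          *in  = tmp;   /* instead of memcpy'ing data  */
  }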
After giving it some thought, it could pretty well happen that other
protocols benefit from the sticky algorithm that some used to emulate
using a "stick-on int(0)" or things like this previously. So better
rename it to "sticky" right now instead of having to keep that "log-"
prefix forever. It's still limited to logs, of course, only the algo
is renamed in the config.
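For example, a log backend could then be written like this (assumed
snippet):

  backend mylog
      mode log
      balance sticky
      server s1 udp@192.168.0.1:514
      server s2 udp@192.168.0.2:514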
Thanks to the previous commit, a reverse HTTP listener is able to distribute
actively opened connections across its threads. To be able to exploit
this, allow the "thread" keyword for such a listener.
An extra check is added to explicitly forbid a reverse bind from spanning
multiple thread groups. Without this, multiple listener instances would be
created, each with its own "nbconn" value. This may surprise users, so for
now it's better to deactivate this possibility.
Implement support for active HTTP reverse task migration across listener
threads. This operation is done each time a new reversable connection is to
be instantiated. Instead of directly allocating the connection, a
lookup is done among all the listener threads.
A comparison is done to select the thread with the smallest number of
current reverse connections. If the thread found is different from the
current one, the connection allocation is delayed and the task is
rescheduled on the chosen thread. The connection will then be created
and pinned on the new thread. This mechanism balances reverse
HTTP connections across different threads.
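A sketch of that selection, using the per-thread counter introduced for
this purpose (names assumed, bounds simplified to all threads):

  int new_tid = tid;
  int i;

  /* pick the listener thread currently handling the fewest reverse conns */
  for (i = 0; i < global.nbthread; i++) {
          if (HA_ATOMIC_LOAD(&ha_thread_ctx[i].nb_rhttp_conns) <
              HA_ATOMIC_LOAD(&ha_thread_ctx[new_tid].nb_rhttp_conns))
                  new_tid = i;
  }
  if (new_tid != tid) {
          /* delay the allocation and reschedule the task on new_tid */
  }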
Note that rhttp_set_affinity is still defined to disable thread
migration on accept. This is necessary as it's unsafe to move an
existing connection to another thread. However, active reverse task
migration should be sufficient to distribute connections across several
threads. Better still, this design allows differentiating standard
frontend and reversable connections. The latter are designed to be
long-lived, so it's useful to base their distribution solely on the
other reversed connections.
Add a new member <nb_rhttp_conns> in the thread_ctx structure. Its purpose
is to count the current number of opened reverse HTTP connections with
regard to their listener membership.
This patch will be useful to support multi-threading for active reverse
HTTP, in order to select the least loaded thread.
Note that although <nb_rhttp_conns> is only accessed by the current thread,
atomic operations are used. This is because once multi-thread support is
added, external threads will also retrieve values from others.
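A sketch of the expected updates around the connection lifecycle (call
sites assumed):

  /* on reverse HTTP connection creation on this thread */
  HA_ATOMIC_INC(&th_ctx->nb_rhttp_conns);

  /* on release */
  HA_ATOMIC_DEC(&th_ctx->nb_rhttp_conns);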
The previous commit renames the 'proto_reverse_connect' module to
'proto_rhttp'. This commit follows up by replacing various custom prefixes
with 'rhttp_' to make the code uniform.
Note that the 'reverse_' prefix was kept in the connection module. This is
because if a new reversable protocol not based on HTTP is implemented,
it may be necessary to reuse the same connection functions, which are
protocol agnostic.
In the mux-to-mux fast-forwarding, when end-of-input is reached on the producer
side, the consumer side must not set the CO_SFL_MSG_MORE flag on send. It means
the H1C_F_CO_MSG_MORE flag must be removed from the H1 connection.
No backport needed.
In Github issue #2128, @jvincze84 explained the complexity of using
external checks in some advanced setups due to the systematic purge of
environment variables, and expressed the desire to preserve the
existing environment. During the discussion an agreement was found
around having an option to "external-check" to do that and that
solution was tested and confirmed to work by user @nyxi.
This patch just cleans this up, implements the option as
"preserve-env" and documents it. The default behavior does not change,
the environment is still purged, unless "preserve-env" is passed. The
choice of not using "import-env" instead was made so that we could
later use it to name specific variables that have to be imported
instead of keeping the whole environment.
The patch is simple enough that it could be backported if needed (and
was in fact tested on 2.6 first).
Here the idea is to collect components' versions and build options. The
main component is haproxy, but the API is made so that any sub-system
can easily add a component there (for example the detailed version of a
device detection lib, or some info about a lib loaded from Lua).
The elements are stored as a pointer to an array of structs and its count
so that it's sufficient to issue this in gdb to list them all at once:
print *post_mortem.components@post_mortem.nb_components
For now we collect name, version, toolchain, toolchain options, build
options and path. Maybe more could be useful in the future.
Having the libs and their addresses listed in the post_mortem struct
is also helpful. Sometimes it helps notice that one version is not the
expected one, e.g. due to some LD_LIBRARY_PATH. We don't emit it on
"show dev" however since that's already available via "show libs".
The last starting thread now copies the pthread ID and stack top of
each thread into post_mortem. That way it's as easy as issuing
"p post_mortem" in gdb to see all thread IDs and stack frames and more
easily map them to the threads met in a core.
Here we collect the original uid/gid/rlimits for FD and RAM since these
ones do affect behavior and are sometimes different from expected in
containers or when starting as a service.
When the x86 CPU flags show the "hypervisor" flag, we know we're running
inside QEMU, VMware or possibly other flavors of hypervisors. In this
case we'll report either "qemu", "vmware" or "yes" for other ones in
the "virt_techno" field, based on the DMI hardware vendor name,
otherwise "no" when the flag is not found.
The CPU model and type has significant impact on certain bugs, such
as contention issues caused by CPUs having split L3 caches, or stricter
memory models that exhibit some barrier issues. It's complicated though
because the info about the model depends on the arch. For example, x86
reports an SKU name while ARM rather reports the CPU core types, families
and versions for each CPU core. There, the SoC will sometimes be reported
in the device tree or DMI info instead. But we don't really care, it's
essentially useful to know if the code is running on an armv8.0 such as
A53, an 8.2 such as A55/A76/Neoverse etc. For MIPS the model appears to
generally be there, and in addition the SoC is often present in the
"system type" field before the first CPU, and the type of machine in the
"machine" field, to replace the missing DMI and DT, so they are also
collected. Note that only the first CPU is checked and reported, that's
expected to be vastly sufficient, since we're just trying to spot known
incompatibilities or issues.
If we detect we're running inside a container on Linux, let's check if
it seems to be docker. Docker usually creates a /.dockerenv file, which
is easy to check. It's uncertain whether it's always the case, but on the
few tested instances that was true, and we don't really care, what matters
is to place helpful debugging info for developers. When this file is
detected, we report "docker" instead of "yes" in the container techno.
Containers often cause significant trouble depending on how they're
set up, and they're not always trivial for their users to extract info
from. Here we're trying to detect if we're running inside a container
on Linux. There are plenty of approaches and none is perfectly clean
nor reliable, which makes sense since the goal is to remain transparent
enough.
One interesting approach is to rely on the observation that containers
generally do not expose most kernel threads, and that the first few of
them are extremely stable across all kernel versions: pid 2 was
called "keventd" in kernel 2.4, became "kthreadd" in kernel 2.6, and
has since not changed. This is true on all architectures tested, even
with highly stripped down kernels such as those found on 15 year-old
OpenWRT images. And this one doesn't appear inside containers. Thus
here we check if we find such a thread via /proc and whether it's
called keventd or kthreadd, to detect a container, and we set the
"cont_techno" variable to "yes" or "no" depending on what is found.
Let's extract some info about the system (board model, vendor etc),
this will indicate some hypervisors, some cloud instances or some
uncommon embedded boards etc. Typically, vmware, qemu and raspberry-pi
are visible here and can help during the troubleshooting session.
The goal here is to accumulate precious debugging information in a
struct that is easy to find in memory. It's aligned to 256 bytes, as
that also helps. We'll progressively add a lot of info about the
startup conditions, the operating system, the hardware and the hypervisor
so as to limit the number of round trips between developers and users
during debugging sessions. Also, opening a core file with a hex editor
should often be sufficient to extract most of the info.
In addition, a new "show dev" command will show these information so
that they can be checked at runtime without having to wait for a crash
(e.g. if a limit is bad in a container, better know it early).
For now the struct only contains utsname that's fed at boot time.
When debugging a core, it's difficult to match a given gdb thread number
against an internal thread. Let's just store the pthread ID and the stack
pointer in each tinfo. This could help in the future by allowing one to
just glance over them and pick the right one depending on what info is
found first.
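A sketch of what each thread would store at boot (field names assumed; the
builtin gives a good-enough approximation of the stack top):

  ti->pth_id    = pthread_self();
  ti->stack_top = __builtin_frame_address(0);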
When a default-server directive is used in a defaults section, it's never
freed and the "defaults" proxy gets reset without freeing the fields from
that default-server. Normally there are no allocations there, except for
the config file location stored in srv->conf.file from an strdup() since
commit 9394a9444 ("REORG: server: move alert traces in parse_server")
that appeared in 2.4. In addition, if a "default-server" directive
appears multiple times in a defaults section, one more entry will be
leaked per call.
This commit addresses this by checking that we don't overwrite the file
upon multiple calls, and by clearing it when resetting the default proxy.
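One plausible shape of both parts of the fix (simplified sketch):

  /* in the default-server parser: keep the first stored location only */
  if (!srv->conf.file)
          srv->conf.file = strdup(file);

  /* when resetting the default proxy: */
  ha_free(&srv->conf.file);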
This should be backported to 2.4.
This bug could be reproduced with -dMfail and h2load generating plenty of connections.
A "show pools" CLI command showed that some memory related to the RX packet pool
was never released. Furthermore, adding an RX packet counter to each connection
and a BUG_ON() in quic_conn_release() proved that this unreleased memory
was related to RX packets which were not linked to a connection.
The culprit is quic_dgram_parse(), which does not release some RX packet
memory before exiting after the connection's thread affinity has changed.
Must be backported as far as 2.7.
This bug could be reproduced with -dMfail and detected by adding a TX packet
counter to the QUIC connection. When released by quic_conn_release(), the
connection should have a null TX packet counter. This was not always the case.
This could occur during the handshake step: a first packet was built, then another
one should have followed in the same datagram, but failed due to a memory allocation
issue. As the datagram length and the first TX packet were not written in the TX
buffer, the latter could not really be purged by qc_purge_tx_buf(), even when
called. This bug occurred only when building coalesced packets in the same datagram.
To fix this, write the packet information (datagram length and first packet
address) in the TX buffer before purging it.
Must be backported as far as 2.6.
This bug could be reproduced with -dMfail and detected by libasan as follows:
$ ASAN_OPTIONS=disable_coredump=0:unmap_shadow_on_exit=1:abort_on_error=f quic-freeze.cfg -dMfail -dMno-cache -dM0x55
=================================================================
==82989==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7ffc 0x560790cc4749 bp 0x7fff8e0e8e30 sp 0x7fff8e0e8e28
WRITE of size 8 at 0x7fff8e0ea338 thread T0
#0 0x560790cc4748 in qc_frm_free src/quic_frame.c:1222
#1 0x560790cc5260 in qc_release_frm src/quic_frame.c:1261
#2 0x560790d1de99 in qc_treat_acked_tx_frm src/quic_rx.c:312
#3 0x560790d1e708 in qc_ackrng_pkts src/quic_rx.c:370
#4 0x560790d22a1d in qc_parse_ack_frm src/quic_rx.c:694
#5 0x560790d25daa in qc_parse_pkt_frms src/quic_rx.c:988
#6 0x560790d2a509 in qc_treat_rx_pkts src/quic_rx.c:1373
#7 0x560790c72d45 in quic_conn_io_cb src/quic_conn.c:906
#8 0x560791207847 in run_tasks_from_lists src/task.c:596
#9 0x5607912095f0 in process_runnable_tasks src/task.c:876
#10 0x560791135564 in run_poll_loop src/haproxy.c:2966
#11 0x5607911363af in run_thread_poll_loop src/haproxy.c:3165
#12 0x56079113938c in main src/haproxy.c:3862
#13 0x7f92606edd09 in __libc_start_main ../csu/libc-start.c:308
#14 0x560790bcd529 in _start (/home/flecaille/src/haproxy/haproxy+0x
Address 0x7fff8e0ea338 is located in stack of thread T0 at offset 1032 i
#0 0x560790d29b52 in qc_treat_rx_pkts src/quic_rx.c:1341
This frame has 2 object(s):
[32, 48) 'ar' (line 1380)
[64, 1088) '_msg' (line 1368) <== Memory access at offset 1032 is inable
HINT: this may be a false positive if your program uses some custom stacnism, swapcontext or vfork
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-use-after-scope src/quic_frame.c:1222 i
Shadow bytes around the buggy address:
0x100071c15410: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
0x100071c15420: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
0x100071c15430: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
0x100071c15440: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
0x100071c15450: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
=>0x100071c15460: f8 f8 f8 f8 f8 f8 f8[f8]f8 f8 f8 f8 f8 f8 f3 f3
0x100071c15470: f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 00 00
0x100071c15480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100071c15490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100071c154a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100071c154b0: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 04 f3 f3 f3
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==82989==ABORTING
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
Aborted (core dumped)
Note that a coredump could not always be produced with all compilers. This was
always the case with clang 11.
When allocating frames to be retransmitted from qc_dgrams_retransmit(), if they
could not be sent for any reason, they could remain attached to a list local to
qc_dgrams_retransmit() and trigger a crash with libasan when releasing the
original frames they were duplicated from.
To fix this, always release the frames which could not be sent during
retransmissions, calling qc_free_frm_list() where needed.
Must be backported as far as 2.6.
It is really annoying not to know why some retransmissions could not be
performed by qc_prep_hpkts(), which allocates frames, prepares packets and
sends them. Especially not to know whether frames remain allocated and
attached to a list on the stack. This patch already helped in diagnosing
such an issue during "-dMfail" tests.
Building without threads emits two warnings because the proxy pointer
is no longer used (only serves for the lock) since 2.9 commit 9a74a6cb1
("MAJOR: log: introduce log backends"). No backport is needed.
On CONNECTION_CLOSE reception/emission, QUIC connections enter the CLOSING
state. At this stage, only CONNECTION_CLOSE can be re-emitted and all
other exchanges are stopped.
Previously, on haproxy process stopping, if all QUIC connections were in
the CLOSING state, they were released before their closing timer expiration
so as not to block the process shutdown. However, since a recent commit, the
closing timer has been shortened to a more reasonable delay. It is now
considered viable to respect the connections' closing state even on process
shutdown. As such, the stopping-specific code in the QUIC connection idle
timer task was removed.
A specific function, quic_handle_stopping(), was implemented to notify
QUIC connections of shutdown from the main() function. It should have been
deleted along with the removal from the QUIC idle timer task. This patch
just does this.
Connections are flagged as "to be killed" asap when the peer has left
(detected by a sendto() "Connection refused" errno) by qc_kill_conn(). This
function has to wake up the idle timer task to release the connection (and the
idle timer and the idle timer task itself). Then, if in the meantime the
connection was flagged as having to process some retransmissions, some packets
could lead to sendto() errors again, with a call to qc_kill_conn(), this time
with a released idle timer task.
This bug could be detected by libasan as follows:
.AddressSanitizer:DEADLYSIGNAL
=================================================================
==21018==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x 560b5d898717 bp 0x7f9aaac30000 sp 0x7f9aaac2ff80 T3)
==21018==The signal is caused by a READ memory access.
==21018==Hint: address points to the zero page.
. #0 0x560b5d898717 in _task_wakeup include/haproxy/task.h:209
#1 0x560b5d8a563c in qc_kill_conn src/quic_conn.c:171
#2 0x560b5d97f832 in qc_send_ppkts src/quic_tx.c:636
#3 0x560b5d981b53 in qc_send_app_pkts src/quic_tx.c:876
#4 0x560b5d987122 in qc_send_app_probing src/quic_tx.c:910
#5 0x560b5d987122 in qc_dgrams_retransmit src/quic_tx.c:1397
#6 0x560b5d8ab250 in quic_conn_app_io_cb src/quic_conn.c:712
#7 0x560b5de41593 in run_tasks_from_lists src/task.c:596
#8 0x560b5de4333c in process_runnable_tasks src/task.c:876
#9 0x560b5dd6f2b0 in run_poll_loop src/haproxy.c:2966
#10 0x560b5dd700fb in run_thread_poll_loop src/haproxy.c:3165
#11 0x7f9ab9188ea6 in start_thread nptl/pthread_create.c:477
#12 0x7f9ab90a8a2e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xfba2e)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV include/haproxy/task.h:209 in _task_wakeup
Thread T3 created by T0 here:
#0 0x7f9ab97ac2a2 in __interceptor_pthread_create ../../../../src/libsaniti zer/asan/asan_interceptors.cpp:214
#1 0x560b5df4f3ef in setup_extra_threads src/thread.c:252 o
#2 0x560b5dd730c7 in main src/haproxy.c:3856
#3 0x7f9ab8fd0d09 in __libc_start_main ../csu/libc-start.c:308 i
==21018==ABORTING
AddressSanitizer:DEADLYSIGNAL
Aborted (core dumped)
To fix this, simply reset the QUIC_FL_CONN_RETRANS_NEEDED connection flag to
cancel the retransmission when qc_kill_conn() is called.
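A sketch of the resulting qc_kill_conn() (surrounding logic assumed):

  static void qc_kill_conn(struct quic_conn *qc)
  {
          /* cancel any pending retransmission: the idle timer task which
           * would process it may already have been released */
          qc->flags &= ~QUIC_FL_CONN_RETRANS_NEEDED;
          qc->flags |= QUIC_FL_CONN_TO_KILL;
          task_wakeup(qc->idle_timer_task, TASK_WOKEN_OTHER);
  }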
Note that this new bug arrived with the following fix, which is correct and
flagged to be backported as far as 2.6:
BUG/MINOR: quic: idle timer task requeued in the past
Must be backported as far as 2.6.
A quic_conn is instantiated and tied to the first thread which has
received the first INITIAL packet. After handshake completion,
listener_accept() is called. For each quic_conn, a new thread is
selected among the least loaded ones. Note that this occurs earlier if
handling 0-RTT data.
This thread connection migration is done in two steps :
* inside listener_accept(), on the origin thread, quic_conn
tasks/tasklet are killed. After this, no quic_conn related processing
will occur on this thread. The connection is flagged with
QUIC_FL_CONN_AFFINITY_CHANGED.
* as soon as the first quic_conn related processing occurs on the new
thread, the migration is finalized. This allows allocating the new
tasks/tasklet directly on the destination thread.
This last step on the new thread must be done prior to any other quic_conn
access. There are two events which may trigger it:
* a packet is received on the new thread. In this case,
  qc_finalize_affinity_rebind() is called from quic_dgram_parse().
* the recently accepted connection is popped from accept_queue_ring via
  accept_queue_process(). This will call session_accept_fd() as the
  listener.bind_conf.accept callback. This instantiates a new session
  and starts the connection stack via conn_xprt_start(), which itself calls
  qc_xprt_start(), where qc_finalize_affinity_rebind() is used.
A condition was recently found which could cause a closing connection to be
used with qc_finalize_affinity_rebind(), which is forbidden with a BUG_ON().
This last step was not compatible with a layer 4 rule such as "tcp-request
connection reject", which closes the connection early. In this case, most
of the body of session_accept_fd() is skipped, including
qc_xprt_start(), so thread migration is not finalized. At the end of the
function, conn_xprt_close() is then called, which flags the connection as
CLOSING.
If a datagram is received for this connection before it is released,
this will call qc_finalize_affinity_rebind() which triggers its BUG_ON()
to prevent thread migration for CLOSING quic_conn.
FATAL: bug condition "qc->flags & ((1U << 29)|(1U << 30))" matched at src/quic_conn.c:2036
Thread 3 "haproxy" received signal SIGILL, Illegal instruction.
[Switching to Thread 0x7ffff794f700 (LWP 2973030)]
0x00005555556221f3 in qc_finalize_affinity_rebind (qc=0x7ffff002d060) at src/quic_conn.c:2036
2036 BUG_ON(qc->flags & (QUIC_FL_CONN_CLOSING|QUIC_FL_CONN_DRAINING));
(gdb) bt
#0 0x00005555556221f3 in qc_finalize_affinity_rebind (qc=0x7ffff002d060) at src/quic_conn.c:2036
#1 0x0000555555682463 in quic_dgram_parse (dgram=0x7fff5003ef10, from_qc=0x0, li=0x555555f38670) at src/quic_rx.c:2602
#2 0x0000555555651aae in quic_lstnr_dghdlr (t=0x555555fc4440, ctx=0x555555fc3f78, state=32832) at src/quic_sock.c:189
#3 0x00005555558c9393 in run_tasks_from_lists (budgets=0x7ffff7944c90) at src/task.c:596
#4 0x00005555558c9e8e in process_runnable_tasks () at src/task.c:876
#5 0x000055555586b7b2 in run_poll_loop () at src/haproxy.c:2966
#6 0x000055555586be87 in run_thread_poll_loop (data=0x555555d3d340 <ha_thread_info+64>) at src/haproxy.c:3165
#7 0x00007ffff7b59609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#8 0x00007ffff7a7e133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
To fix this issue, ensure quic_conn migration is completed earlier
inside session_accept_fd(), before any tcp rules processing. This is
done by moving qc_finalize_affinity_rebind() invocation from
qc_xprt_start() to qc_conn_init().
This must be backported up to 2.7.
Pre-C99 compilers will fail to build the cache since commit 48f81ec09
("MAJOR: cache: Delay cache entry delete in reserve_hot function") due
to an int declaration in the for loop. No backport is needed.
In 2.3, we started to get a cleaner socket unbinding mechanism with
commit f58b8db47 ("MEDIUM: receivers: add an rx_unbind() method in
the protocols"). This mechanism rightfully refrains from unbinding
when sockets are expected to be transferrable to another worker via
"expose-fd listeners", but this is not compatible with ABNS sockets,
which do not support reuseport, unbinding nor being renamed: in short
they will always prevent a new process from binding.
It turns out that this is not very visible, because by pure accident,
GTUNE_SOCKET_TRANSFER is only set in the code dealing with master mode
and daemons, so it's never set in foreground mode nor in tests even if
present on the stats socket. However with master mode, it is now always
set even when not present on the stats socket, and will always conflict.
The only reasonable approach seems to consist in marking these ABNS
sockets as non-suspendable so that the generic sock_unbind() code can
decide to just unbind them regardless of GTUNE_SOCKET_TRANSFER.
This should carefully be backported as far as 2.4.
This bug was forbidding the GTUNE_SOCKET_TRANSFER option to be set
when haproxy is neither in daemon mode nor in mworker mode. So it
basically only impacts the foreground mode.
The fix moves the code outside the 'if (global.mode & (MODE_DAEMON |
MODE_MWORKER | MODE_MWORKER_WAIT))' condition.
Bug was introduced with 7f80eb23 ("MEDIUM: proxy: zombify proxies only
when the expose-fd socket is bound").
Must be backported in every stable version.
We start implementing some post-parsing compatibility checks for log
backends.
Here we report a warning if the user tries to use tcp-{request,response}
rules with a log backend, and we properly ignore such rules when inherited
from the defaults section.
Add a proxy_cfg_ensure_no_log() function (similar to
proxy_cfg_ensure_no_http()) to ensure at the end of proxy parsing that
no log-exclusive options are found if the proxy is not in log mode.
"log-balance" directive was recently introduced to configure the
balancing algorithm to use when in a log backend. However, it is
confusing and it causes issues when used in default section.
In this patch, we take another approach: first we remove the
"log-balance" directive, and instead we rely on existing "balance"
directive to configure log load balancing in log backend.
Some algorithms such as roundrobin can be used as-is in a log backend,
and for log-only algorithms, they are implemented as "log-$name" inside
the "backend" directive.
The documentation was updated accordingly.
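For example (assumed snippet; "log-hash" stands for a log-only algorithm
following the "log-$name" convention described above):

  backend mylog
      mode log
      balance roundrobin   # usable as-is in a log backend
      server s1 udp@192.168.0.1:514
      server s2 udp@192.168.0.2:514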
In 1b8e68e ("MEDIUM: stick-table: Stop handling stick-tables as proxies.")
we forgot to free the table pointer which is now dynamically allocated.
Let's take this opportunity to also fix a missing free in the table itself
(the table expire task wasn't properly destroyed)
This patch depends on:
- "MINOR: stktable: add sktable_deinit function"
It should be backported in every stable versions.
This one reports streams considered as "suspicious", i.e. those with
no expiration dates or dates in the past, or those without a front
endpoint. More criteria could be added in the future.
It's often needed to be able to refine "show sess" when debugging, and
very often a first glance at old streams is performed, but that's a
difficult task in large dumps, and it takes lots of resources to dump
everything.
This commit adds "older <age>" to "show sess" in order to specify the
minimum age of streams that will be dumped. This should simplify the
identification of blocked ones.
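For example, to only dump streams older than 30 seconds (assuming the
usual time-unit syntax on the CLI):

  show sess older 30s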
Since 2.4-dev2 with commit 15e525f49 ("MINOR: stream: Don't retrieve
anymore timing info from the mux csinfo"), we don't replace the
tv_accept (now accept_ts) anymore with the current request's, so that
it properly reflects the session's accept date and not the request's
date. However, since then we failed to update "show sess" to make use
of the request's timestamp instead of the session's timestamp, resulting
in nonsensical values in the "age" field of "show sess" for the task.
Indeed, the session's age is displayed instead of the stream's, which
leads to great confusion when debugging, particularly when it comes to
multiplexed inter-proxy connections which are kept up forever.
Let's fix this now. This must be backported as far as 2.4. However,
for 2.7 and older, the field was named tv_request and was a timeval.
If fewer connections than threads are established on a reverse-http gateway
and these servers have a non-null pool-min-conn, then conn_backend_get()
will refrain from picking available connections from other threads. But
this makes no sense for protocols for which there is no ->connect(),
since there's no way the current thread will manage to establish its own
connection. For such situations we should always accept to use another
thread's connection. That's precisely what this patch does.
A dummy connect() function previously had to be installed for the log
server so that a reverse-http address could be referenced on a "server"
line, but after the recent rework of the server line parsing, this is
no longer needed, and this is actually annoying as it makes one believe
there is a way to connect outside, which is not true. Let's now get rid
of this function.
This is the equivalent of the previous "BUG/MEDIUM: mux-h1: fail earlier
on malloc in takeover()".
Connection takeover was implemented for fcgi in 2.2 by commit a41bb0b6c
("MEDIUM: mux_fcgi: Implement the takeover() method."). It does have one
corner case related to memory allocation failure: in case the task or
tasklet allocation fails, the connection gets released synchronously.
Unfortunately the situation is bad there, because the lower layers are
already switched to the new thread while the tasklet is either NULL or
still the old one, and calling fcgi_release() will also result in
touching the thread-local list of buffer waiters, calling unsubscribe(),
etc. There are even code paths where the thread will try to grab the lock of
its own idle conns list, believing the connection is there while it has
no useful effect. However, if the owner thread was doing the same at the
same moment, and ended up trying to pick from the current thread (which
could happen if picking a connection for a different name), the two
could even deadlock.
No tests were made to try to reproduce the problem, but the description
above is sufficient to see that nothing can guarantee against it.
This patch takes a simple but radically different approach. Instead of
starting to migrate the connection before risking allocation
failures, it first pre-allocates a new task and tasklet, then assigns
them to the connection if the migration succeeds, otherwise it just
frees them. This way it's no longer needed to manipulate the connection
until it's fully migrated, and as a bonus this means the connection will
continue to exist and the use-after-free condition is solved at the same
time.
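A sketch of the pattern (simplified; haproxy's task API used for
illustration):

  /* pre-allocate before touching the connection at all */
  struct task *task = task_new_here();
  struct tasklet *tl = tasklet_new();

  if (!task || !tl) {
          if (task)
                  task_destroy(task);
          if (tl)
                  tasklet_free(tl);
          return -1; /* refuse the takeover; the connection is left intact */
  }
  /* ... migrate the connection, then install task/tl on success,
   * or free them if the migration fails ... */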
This should be backported to 2.2. Thanks to Fred for the initial analysis
of the problem!