Factorize the shutdown operation in a dedicated function qc_shutdown(). This
will allow it to be called from multiple places. A new flag QC_CF_APP_SHUT is
also defined to ensure it is only executed once, even if called multiple
times per connection.
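A minimal sketch of the once-only guard, assuming the MUX connection is a
struct qcc with a <flags> field (the surrounding logic is omitted):

  static void qc_shutdown(struct qcc *qcc)
  {
          if (qcc->flags & QC_CF_APP_SHUT)
                  return;
          /* perform the application-level shutdown (e.g. emit GOAWAY) */
          qcc->flags |= QC_CF_APP_SHUT;
  }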
This commit will be useful to properly support haproxy soft stop.
This should be backported up to 2.7.
When a GOAWAY has been emitted, an ID is announced to represent the handled
streams. The H3 RFC suggests that streams with a higher ID should be reset
with the error code H3_REQUEST_CANCELLED. This allows the peer to replay the
requests on another connection.
For the moment, the impact of this change is limited as GOAWAY is only
used on connection shutdown just before the MUX is freed. However, for
soft-stop support, a GOAWAY can be emitted in anticipation while keeping
the MUX alive to finish the active streams. In this case, new streams opened
by the client are reset.
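As an illustration, rejecting a stream opened above the announced GOAWAY ID
could look like this (field and helper names are hypothetical):

  /* on stream creation, once a GOAWAY advertising <goaway_id> was sent */
  if (qcs->id > goaway_id)
          qcs_reset(qcs, H3_REQUEST_CANCELLED);  /* emits RESET_STREAM */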
As a consequence of this change, the app_ops.attach() operation has been
delayed to the very end of qcs_new(). This ensures that all qcs members
are initialized to support RESET_STREAM sending.
This should be backported up to 2.7.
h3s stores the current demux frame type and length as state information.
These fields should be big enough to store a QUIC variable-length integer,
which is the maximum H3 frame type and size.
Without this patch, there is a risk of integer overflow if the H3 frame size
is bigger than INT_MAX. This can typically cause a demux state mismatch
and demux frame errors. However, no occurrence of this bug has been found
yet with the current implementation.
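A sketch of the corresponding storage, since a QUIC varint encodes values up
to 2^62 - 1 and thus needs a 64-bit field:

  struct h3s {
          uint64_t demux_frame_len;   /* current demuxed frame length */
          uint64_t demux_frame_type;  /* current demuxed frame type */
          /* ... */
  };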
This should be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
stream acknowledgments are processed. In this interval, if a new stream
is opened by the client, the quic-conn is now responsible for handling
it. This is done by the emission of a STOP_SENDING + RESET_STREAM.
Prior to this patch, the received packet was not acknowledged. This is
undesirable if the quic-conn is able to properly reject the request, as
this can lead to unneeded retransmissions from the client.
This must be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
stream acknowledgments are processed. In this interval, if a new stream
is opened by the client, the quic-conn is now responsible for handling
it. This is done by the emission of a STOP_SENDING.
This process has been completed to also emit a RESET_STREAM with the
same error code H3_REQUEST_REJECTED. This is done to conform with the H3
specification, which invites the client to retry its request on a new
connection.
This should be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
stream acknowledgments are processed. In this interval, if a new stream
is opened by the client, the quic-conn is now responsible for handling
it. This is done by the emission of a STOP_SENDING.
This process is closely related to the HTTP/3 protocol despite being
handled by the quic-conn layer. This highlights a flaw in our QUIC
architecture which should be adjusted. To reflect this situation, the
function qc_stop_sending_frm_enqueue() is renamed qc_h3_request_reject().
Also, internal H3 treatment, such as the uni-directional stream bypass,
has been moved inside the function.
This commit is only a refactor. However, bug fixes in the next patches
will rely on it, so it should be backported up to 2.6.
This was revealed by Amaury when setting tune.quic.frontend.max-streams-bidi
to 8 and asking a client to open 12 streams. haproxy then has to send short
packets containing small MAX_STREAMS frames encoded on 2 bytes, together
with a packet number encoded on a single byte. Here, <len_frms> is the
length of the encoded frames to be added to the packet plus the length of
the packet number.
Ensure the length of the packet is at least QUIC_PACKET_PN_MAXLEN by adding
a PADDING frame with (QUIC_PACKET_PN_MAXLEN - <len_frms>) as size when
needed. For instance, with a two-byte MAX_STREAMS frame and a one-byte
packet number, this adds one byte of padding.
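A minimal sketch of that padding computation, using the names from the
description above:

  /* keep the packet large enough for header protection sampling */
  if (len_frms < QUIC_PACKET_PN_MAXLEN)
          padding_len = QUIC_PACKET_PN_MAXLEN - len_frms;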
See https://datatracker.ietf.org/doc/html/rfc9001#name-header-protection-sample.
Must be backported to 2.7 and 2.6.
When receiving an Initial packet, a peer must drop it if the datagram is
smaller than 1200 bytes. Before this patch, it was the entire datagram which
was dropped. In such a case, drop only the packet after having parsed its
length.
Must be backported to 2.6 and 2.7.
This bug arrives with this commit:
982896961 MINOR: quic: split and rename qc_lstnr_pkt_rcv()
The first block of code consists in possibly setting this variable to true,
but it was already initialized to true before entering this code section. It
should be initialized to false.
Also take the opportunity to remove an unused "err" label.
Must be backported to 2.6 and 2.7.
Before probing the Initial packet number space, verify that we can send at
least 1200 bytes per datagram. This may not be the case due to the
amplification limit.
Must be backported to 2.6 and 2.7.
This should help in diagnosing issues revealed by the interop runner which counts
the number of handshakes from the number of Initial packets sent by the server.
Must be backported to 2.7.
The aim of this function is to rearm the idle timer. The ->expire
field of the timer task was updated without the task being requeued.
Some connections could be unexpectedly terminated.
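The fix boils down to requeuing the task after updating its expiration date,
along these lines (a sketch; the timeout computation is omitted):

  t->expire = tick_add(now_ms, timeout_ms);  /* update the deadline */
  task_queue(t);                             /* actually requeue the task */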
Must be backported to 2.6 and 2.7.
This is very helpful during retransmission when receiving ICMP port
unreachable errors after the peer has left. At present this is the only case
where qc_send_hdshk_pkts() or qc_send_app_probing() may fail (when they call
qc_send_ppkts() which fails with ECONNREFUSED as errno).
Also make the callers of qc_dgrams_retransmit() stop their packet
processing; this is the case of quic_conn_app_io_cb() and quic_conn_io_cb().
These modifications definitively stop any packet processing when receiving
ICMP port unreachable errors.
Must be backported to 2.7.
The send*() syscalls which are responsible for the reception of such ICMP
packets fail with ECONNREFUSED as errno.
  man(7) udp
    ECONNREFUSED
      No receiver was associated with the destination address.
      This might be caused by a previous packet sent over the socket.
We must kill the underlying connection asap.
Must be backported to 2.7.
This code was there because the timer task was not running on the same
thread as the one which parses the QUIC packets. Now that this is no longer
the case, we can wake up this task directly.
Must be backported to 2.7.
Move quic_rx_pkts_del() out of quic_conn.h to make it benefit from the
TRACE API.
Add traces which have already helped in diagnosing an issue encountered with
ngtcp2 which sent too many 1-RTT packets before the handshake completion.
This has been fixed here after having discussed with Tatsuhiro on the QUIC
dev slack:
https://github.com/ngtcp2/ngtcp2/pull/663
Must be backported to 2.7.
Some counters could potentially be incremented even though the send*()
syscall returned no error, when ret >= 0 and ret != sz. This could be the
case, for instance, if a first call to send*() returned -1 with errno set
to EINTR (or if any previous syscall set errno to a non-zero value) and the
next call to send*() returned something positive and smaller than <sz>.
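The safe pattern is to only inspect errno when the syscall actually failed,
for instance (the counter name is hypothetical):

  ret = sendmsg(fd, &msg, 0);
  if (ret < 0) {
          /* errno is only meaningful when the syscall failed: a stale
           * EINTR left by a previous call must not be accounted for
           * when ret >= 0, even on a partial send (ret != sz) */
          if (errno == EINTR)
                  sendto_err_cnt++;
  }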
Must be backported to 2.7 and 2.6.
Add traces inside h3_decode_qcs(). Every error path now has its
dedicated trace, which should simplify debugging. Each early return has
been converted to a goto invocation.
To complete the demux tracing, the demux frame type and length are now
printed using the h3s instance whenever possible on trace invocation. A
new internal value H3_FT_UNINIT is used as a frame type to mark demuxing
as inactive.
This should be backported up to 2.7.
Since the recent changes on the clocks, now.tv_sec is not to be used
between processes because it's a clock which is local to the process and
does not contain a real unix timestamp. This patch fixes the issue by
using "date.tv_sec", which is the wall clock, instead of "now.tv_sec".
It prevents having incoherent timestamps.
It also introduces some checks on negative values in order to never
display a negative value if it was computed from a wrong value set by a
previous haproxy version.
It must be backported as far as 2.0.
Implement support for clients that emit the stream FIN with an empty
STREAM frame. For that, the qcc_recv() offset comparison has been adjusted.
If the offset has already been received but the FIN bit is now transmitted,
do not skip the rest of the function, and call the application layer
decode_qcs() callback.
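A rough sketch of the adjusted comparison (not the exact code):

  /* previously, data at an already fully received offset was skipped;
   * now a standalone FIN at exactly that offset still goes through */
  if (offset + len <= qcs->rx.offset &&
      !(fin && offset + len == qcs->rx.offset))
          goto out;  /* pure duplicate, nothing new to decode */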
Without this, streams will be kept open forever as the HTX EOM is never
transferred to the upper stream layer.
This behavior was observed with mvfst client prior to its patch
38c955a024aba753be8bf50fdeb45fba3ac23cfd
Fix hq-interop (HTTP 0.9 over QUIC)
This notably caused the interop multiplexing test to fail, as unclosed
streams on the haproxy side prevented the emission of new MAX_STREAMS
frames to the client.
This should be backported up to 2.6. It also relies on the previous commit:
381d8137e3
MINOR: h3/hq-interop: handle no data in decode_qcs() with FIN set
Properly handle a STREAM frame with no data but the FIN bit set at the
application layer. The H3 and hq-interop decode_qcs() callbacks have been
adjusted to not return early in this case.
If the FIN bit is accepted, an HTX EOM must be inserted for the upper
stream layer. If the FIN is rejected because the stream cannot be
closed, a proper CONNECTION_CLOSE error will be triggered.
A new utility function qcs_http_handle_standalone_fin() has been
implemented in the qmux_http module. This allows to simply add the HTX
EOM on the qcs HTX buffer. If the HTX buffer is empty, an EOT is first
added to ensure it will be transmitted above.
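A minimal sketch of what qcs_http_handle_standalone_fin() does, assuming
the standard HTX API:

  /* add an EOT block first if the buffer is empty so that something
   * is effectively transmitted upward, then mark the end of message */
  if (htx_is_empty(htx))
          htx_add_endof(htx, HTX_BLK_EOT);
  htx->flags |= HTX_FL_EOM;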
This commit will allow to properly handle a FIN notification through an
empty STREAM frame. However, it is not sufficient, as currently qcc_recv()
skips the decode_qcs() invocation when the offset has already been
received. This will be fixed in the next commit.
This should be backported up to 2.6 along with the next patch.
Several times during debugging it has been difficult to find a way to
reliably indicate if a thread had been started and if it was still
running. It's really not easy because the elements we look at are not
necessarily reliable (e.g. harmless bit or idle bit might not reflect
what we think during a signal). And such notions can be subjective
anyway.
Here we define two thread flags, TH_FL_STARTED which is set as soon as
a thread enters run_thread_poll_loop() and drops the idle bit, and
another one, TH_FL_IN_LOOP, which is set when entering run_poll_loop()
and cleared when leaving it. This should help init/deinit code know
whether it's called from a non-initialized thread (i.e. tid must not
be trusted), or shared functions know if they're being called from a
running thread or from init/deinit code outside of the polling loop.
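As an illustration, maintaining TH_FL_IN_LOOP around the polling loop could
look like this (a sketch using haproxy's atomic helpers; the exact
placement is as described above):

  _HA_ATOMIC_OR(&th_ctx->flags, TH_FL_IN_LOOP);
  /* ... run_poll_loop() body: process tasks and poll for I/O ... */
  _HA_ATOMIC_AND(&th_ctx->flags, ~TH_FL_IN_LOOP);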
As reported in github issue #1881, there are situations where an excess
of TLS handshakes can cause a livelock. What's happening is that normally
we process at most one TLS handshake per loop iteration to maintain the
latency low. This is done by tagging them with TASK_HEAVY, queuing these
tasklets in the TL_HEAVY queue. But if something slows down the loop, such
as a connect() call when no more ports are available, we could end up
processing no more than a few hundred or thousands handshakes per second.
If the limit becomes lower than the rate of incoming handshakes, we will
accumulate them and at some point users will get impatient and give up or
retry. Then a new problem happens: the queue fills up with even more
handshake attempts, only one of which will be handled per iteration, so
we can end up processing only outdated handshakes at a low rate, with
basically nothing else in the queue. This can for example happen in
parallel with health checks that don't require incoming handshakes to
succeed, and that continue to cause enough activity to maintain the
high-latency situation.
Here we're taking a slightly different approach. First, instead of always
allowing only one handshake per loop (and usually it's critical for
latency), we take the current situation into account:
- if configured with tune.sched.low-latency, the limit remains 1
- if there are other non-heavy tasks, we set the limit to 1 + one
per 1024 tasks, so that a heavily loaded queue of 4k handshakes
per thread will be able to drain them at ~4 per loop with a
limited impact on latency
- if there are no other tasks, the limit grows to 1 + one per 128
tasks, so that a heavily loaded queue of 4k handshakes per thread
will be able to drain them at ~32 per loop with still a very
limited impact on latency since only I/O will get delayed.
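A rough sketch of this budget computation (variable names are hypothetical):

  /* per-loop budget of heavy tasklets (TLS handshakes) */
  budget = 1;
  if (!low_latency) {
          if (other_tasks_present)
                  budget += heavy_queued / 1024;  /* 4k queued -> ~4/loop */
          else
                  budget += heavy_queued / 128;   /* 4k queued -> ~32/loop */
  }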
It was verified on a 56-core Xeon-8480 that this did not degrade the
latency; all requests remained below 1ms end-to-end in full close+
handshake, and even 500us under low-lat + busy-polling.
This must be backported to 2.4.
There's a per-thread "long_rq" counter that is used to indicate how
often we leave the scheduler with tasks still present in the run queue.
The purpose is to know when tune.runqueue-depth served to limit latency,
due to a large number of tasks being runnable at once.
However there's a bug there, it's not always set: if after the first
run, one heavy task was processed and later only heavy tasks remain,
we'll loop back to not_done_yet where we try to pick more tasks, but
none are eligible (since heavy ones have already run) so we directly
return without incrementing the counter. This is what causes ultra-low
values on long_rq during massive SSL handshakes, that are confusing
because they make one believe that tl_class_mask doesn't have the HEAVY
flag anymore. Let's just fix that by not returning from the middle of
the function.
This can be backported as far as 2.4.
In 2.7, the method used to check for a sleeping thread changed with
commit e7475c8e7 ("MEDIUM: tasks/fd: replace sleeping_thread_mask with
a TH_FL_SLEEPING flag"). Previously there was a global sleeping mask
and now there is a flag per thread. The commit above partially broke
the watchdog by looking at the current thread's flags via th_ctx
instead of the reported thread's flags, and using an AND condition
instead of an OR to update and leave. This can cause a wrong thread
to be killed when the load is uneven. For example, when enabling
busy polling and sending traffic over a single connection, all
threads have their run time grow, and if the one receiving the
signal is also processing some traffic, it will not match the
sleeping/harmless condition and will set the stuck flag, then die
upon next invocation. While it's reproducible in tests, it's unlikely
to be met in field.
This fix should be backported to 2.7.
A feature command was added to detect if infinite forwarding is disabled,
to be able to skip the script. Unfortunately, it is not supported to
evaluate such an expression. Thus remove it. For now, reg-tests must not be
executed with the "-dF" option.
The -dF option can now be used to disable data fast-forwarding. It does the
same as the global option "tune.fast-forward off". Some reg-tests may rely
on this optimization. To detect the feature and skip such a script, the
following vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(global.tune & GTUNE_NO_FAST_FWD)'"
The new global option "tune.fast-forward" can be set to "off" to disable
the data fast-forwarding. It is a debug option, thus it is internally
marked as experimental. The directive "expose-experimental-directives" must
be set first to use this one. By default, the data fast-forwarding is
enabled.
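For example, a minimal configuration sketch enabling it:

  global
      expose-experimental-directives
      tune.fast-forward off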
It could be useful to force the stream to be woken up when data are
received, to make sure everything works fine in this case. The data
fast-forwarding is an optimization and everything must work without it. But
some code may rely on the fact that the stream will not be woken up. With
this option, it is possible to spot some hidden bugs.
At the stream level, the read expiration date is unset if a shutr was
received, but not if the end of input was reached. If we know no more data
are expected, there is no reason to leave the read expiration date armed,
except to respect the clientfin/serverfin timeouts in some circumstances.
This patch could slowly be backported as far as 2.2.
During the payload forwarding, since commit f2b02cfd9 ("MAJOR: http-ana:
Review error handling during HTTP payload forwarding"), when an error
occurs on one side, we no longer rely on a specific HTTP message state
to detect it on the other side. However, nothing was added to detect the
error. Thus, when this happens, a spinning loop may be experienced,
followed by an abort because of the watchdog.
To fix the bug, we must detect that the opposite side is closed by checking
the opposite SC state. Concretely, in http_end_request() and
http_end_response(), we wait for the other side iff the HTTP message state
is lower than HTTP_MSG_DONE (the message is not finished) and the SC state
is not SC_ST_CLO (the opposite side is not closed). In these functions, we
don't care if there was an error on the opposite side. We only take care to
detect when we must stop waiting for the other side.
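The waiting condition thus becomes something like this (a sketch;
<other_msg> and <other_sc> are hypothetical names for the opposite side's
message and stream connector):

  if (other_msg->msg_state < HTTP_MSG_DONE &&
      other_sc->state != SC_ST_CLO) {
          /* the other side is neither finished nor closed: keep waiting */
          return;
  }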
This patch should fix the issue #2042. No backport needed.
The ssl_bind_kw structure is exclusively used for crt-list keywords; it
must be renamed to remove the confusion. The structure was renamed
ssl_crtlist_kws.
Released version 2.8-dev4 with the following main changes :
- BUG/MINOR: stats: fix source buffer size for http dump
- BUG/MEDIUM: stats: fix resolvers dump
- BUG/MINOR: stats: fix ctx->field update in stats_dump_proxy_to_buffer()
- BUG/MINOR: stats: fix show stats field ctx for servers
- BUG/MINOR: stats: fix STAT_STARTED behavior with full htx
- MINOR: quic: Update version_information transport parameter to draft-14
- BUG/MINOR: stats: Prevent HTTP "other sessions" counter underflows
- BUG/MEDIUM: thread: fix extraneous shift in the thread_set parser
- BUG/MEDIUM: listener/thread: bypass shards setting on failed thread resolution
- BUG/MINOR: ssl/crt-list: warn when a line is malformated
- BUG/MEDIUM: stick-table: do not leave entries in end of window during purge
- BUG/MINOR: clock: do not mix wall-clock and monotonic time in uptime calculation
- BUG/MEDIUM: cache: use the correct time reference when comparing dates
- MEDIUM: clock: force internal time to wrap early after boot
- BUILD: ssl/ocsp: ssl_ocsp-t.h depends on ssl_sock-t.h
- MINOR: ssl/ocsp: add a function to check the OCSP update configuration
- MINOR: cfgparse/server: move (min/max)conn postparsing logic into dedicated function
- BUG/MINOR: server/add: ensure minconn/maxconn consistency when adding server
- BUG/MEDIUM: stconn: Schedule a shutw on shutr if data must be sent first
- BUG/MEDIUM: quic: fix crash when "option nolinger" is set in the frontend
- MINOR: quic: implement a basic "show quic" CLI handler
- MINOR: quic: display CIDs and state in "show quic"
- MINOR: quic: display socket info on "show quic"
- MINOR: quic: display infos about various encryption level on "show quic"
- MINOR: quic: display Tx stream info on "show quic"
- MINOR: quic: filter closing conn on "show quic"
- BUG/MINOR: quic: fix filtering of closing connections on "show quic"
- BUG/MEDIUM: stconn: Don't needlessly wake the stream on send during fast-forward
- BUG/MINOR: quic: fix type bug on "show quic" for 32-bits arch
- BUG/MINOR: mworker: fix uptime for master process
- BUG/MINOR: clock/stats: also use start_time not start_date in HTML info
- BUG/MEDIUM: stconn: stop to enable/disable reads from streams via si_update_rx
- BUG/MEDIUM: quic: Buffer overflow when looking through QUIC CLI keyword list
- DOC: proxy-protocol: fix wrong byte in provided example
- MINOR: ssl-ckch: Stop to test CF_WRITE_ERROR to commit CA/CRL file
- MINOR: bwlim: Remove useless test on CF_READ_ERROR to detect the last packet
- BUG/MINOR: http-ana: Fix condition to set LAST termination flag
- BUG/MINOR: mux-h1: Don't report an H1C error on client timeout
- BUG/MEDIUM: spoe: Don't set the default traget for the SPOE agent frontend
- BUG/MINOR: quic: Wrong datagram dispatch because of qc_check_dcid()
- BUG/CRITICAL: http: properly reject empty http header field names
The HTTP header parsers surprisingly accept empty header field names,
and this is a leftover from the original code that was agnostic to this.
When muxes were introduced, for H2 first, the HPACK decompressor needed
to feed headers lists, and since empty header names were strictly
forbidden by the protocol, the lists of headers were purposely designed
to be terminated by an empty header field name (a principle that is
similar to H1's empty line termination). This principle was preserved
and generalized to other protocols migrated to muxes (H1/FCGI/H3 etc)
without anyone ever noticing that the H1 parser was still able to deliver
empty header field names to this list. In addition to this it turns out
that the HPACK decompressor, despite a comment in the code, may
successfully decompress an empty header field name, and this mistake
was propagated to the QPACK decompressor as well.
The impact is that an empty header field name may be used to truncate
the list of headers and thus make some headers disappear. While for
H2/H3 the impact is limited as haproxy sees a request with missing
headers, and headers are not used to delimit messages, in the case of
HTTP/1, the impact is significant because the presence (and sometimes
contents) of certain sensitive headers is detected during the parsing.
Thus, some of these headers may be seen, marked as present, their value
extracted, but never delivered to upper layers and obviously not
forwarded to the other side either. This can have for consequence that
certain important header fields such as Connection, Upgrade, Host,
Content-length, Transfer-Encoding etc are possibly seen as different
between what haproxy uses to parse/forward/route and what is observed
in http-request rules and of course, forwarded. One direct consequence
is that it is possible to exploit this property in HTTP/1 to make
affected versions of haproxy forward more data than is advertised on
the other side, and bypass some access controls or routing rules by
crafting extraneous requests. Note, however, that responses to such
requests will normally not be passed back to the client, but this can
still cause some harm.
This specific risk can be mostly worked around in configuration using
the following rule that will rely on the bug's impact to precisely
detect the inconsistency between the known body size and the one
expected to be advertised to the server (the rule works from 2.0 to
2.8-dev):
http-request deny if { fc_http_major 1 } !{ req.body_size 0 } !{ req.hdr(content-length) -m found } !{ req.hdr(transfer-encoding) -m found } !{ method CONNECT }
This will exclusively block such carefully crafted requests delivered
over HTTP/1. HTTP/2 and HTTP/3 do not need content-length, and a body
that arrives without being announced with a content-length will be
forwarded using transfer-encoding, hence will not cause discrepancies.
In HAProxy 2.0 in legacy mode ("no option http-use-htx"), this rule will
simply have no effect but will not cause trouble either.
A clean solution would consist in modifying the loops iterating over
these headers lists to check the header name's pointer instead of its
length (since both are zero at the end of the list), but this requires
to touch tens of places and it's very easy to miss one. Functions such
as htx_add_header(), htx_add_trailer(), htx_add_all_headers() would be
good starting points for such a possible future change.
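Such a change would amount to switching the loop termination test from the
name's length to its pointer, along these lines (illustrative only;
process() is a placeholder):

  /* current pattern: stops at the first empty name, hence the truncation */
  for (i = 0; list[i].n.len; i++)
          process(&list[i]);

  /* safer pattern: only the terminating entry has a NULL name pointer */
  for (i = 0; list[i].n.ptr; i++)
          process(&list[i]);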
Instead the current fix focuses on blocking empty headers where they
are first inserted, hence in the H1/HPACK/QPACK decoders. One benefit
of the current solution (for H1) is that it allows "show errors" to
report a precise diagnostic when facing such invalid HTTP/1 requests,
with the exact location of the problem and the originating address:
$ printf "GET / HTTP/1.1\r\nHost: localhost\r\n:empty header\r\n\r\n" | nc 0 8001
HTTP/1.1 400 Bad request
Content-length: 90
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
$ socat /var/run/haproxy.stat <<< "show errors"
Total events captured on [10/Feb/2023:16:29:37.530] : 1
[10/Feb/2023:16:29:34.155] frontend decrypt (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #0, src 127.0.0.1:31092
buffer starts at 0 (including 0 out), 16334 free,
len 50, wraps at 16336, error at position 33
H1 connection flags 0x00000000, H1 stream flags 0x00000810
H1 msg state MSG_HDR_NAME(17), H1 msg flags 0x00001410
H1 chunk len 0 bytes, H1 body len 0 bytes :
00000 GET / HTTP/1.1\r\n
00016 Host: localhost\r\n
00033 :empty header\r\n
00048 \r\n
I want to address sincere and warm thanks for their great work to the
team composed of the following security researchers who found the issue
together and reported it: Bahruz Jabiyev, Anthony Gavazzi, and Engin
Kirda from Northeastern University, Kaan Onarlioglu from Akamai
Technologies, Adi Peleg and Harvey Tuch from Google. And kudos to Amaury
Denoyelle from HAProxy Technologies for spotting that the HPACK and
QPACK decoders would let this pass despite the comment explicitly
saying otherwise.
This fix must be backported as far as 2.0. The QPACK changes can be
dropped before 2.6. In 2.0 there is also the equivalent code for legacy
mode, which doesn't suffer from the list truncation, but it would better
be fixed regardless.
CVE-2023-25725 was assigned to this issue.
There was a parenthesis placed at the wrong position in a memcmp() call.
As a consequence, clients could not reuse a UDP address for a new
connection.
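A classic form of this bug, shown for illustration only (the actual haproxy
code differs):

  if (memcmp(addr1, addr2, n != 0))   /* buggy: length is (n != 0), i.e. 0 or 1 */
  if (memcmp(addr1, addr2, n) != 0)   /* intended: compare the full n bytes */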
Must be backported to 2.7.
The commit d5983cef8 ("MINOR: listener: remove the useless ->default_target
field") revealed a bug in the SPOE. No default-target must be defined for
the SPOE agent frontend. SPOE applets are used on the frontend side and a
TCP connection is established on the backend side.
Because of this bug, since the commit above, the stream target is set to the
SPOE applet instead of the backend connection, leading to a spinning loop on
the applet when it is released, because we are unable to close the backend
side.
This patch should fix the issue #2040. It only affects the 2.8-DEV but to
avoid any future bug, it should be backported to all stable versions.
When a client timeout is reported by the H1 mux, it is not an error but an
abort. Thus, the H1C_F_ERROR flag must not be set. It is especially
important to not inhibit the send. Because of this bug, a
408-Request-time-out is reported in the logs but the error message is not
sent to the client.
This patch must be backported to 2.7.
We should not report LAST data in the logs if the response is in TUNNEL
mode on client close/timeout, because there is no way to be sure it is the
last data. This means it can only be reported in the DONE, CLOSING or CLOSE
states.
No backport needed.
There is already a test on CF_EOI and CF_SHUTR. The latter is always set
when a read error is reported. Thus there is no reason to check
CF_READ_ERROR.
This change was performed on all applet I/O handlers but one was missed. In
the CLI I/O handler used to commit a CA/CRL file, we can remove the test on
CF_WRITE_ERROR because there is already a test on CF_SHUTW.
There was a mistake in the provided example of a proxy-proto frame: it
cannot end with 0x02 but only 0x20 or 0x21, since the version is in the
upper 4 bits and the lower ones are 0 for LOCAL or 1 for PROXY. Hence the
example should be:
\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A\x20
Thanks to Bram Grit for reporting this mistake.
This has been detected by libasan as follows:
=================================================================
==3170559==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55cf77faad08 at pc 0x55cf77a87370 bp 0x7ffc01bdba70 sp 0x7ffc01bdba68
READ of size 8 at 0x55cf77faad08 thread T0
#0 0x55cf77a8736f in cli_find_kw src/cli.c:335
#1 0x55cf77a8a9bb in cli_parse_request src/cli.c:792
#2 0x55cf77a8c385 in cli_io_handler src/cli.c:1024
#3 0x55cf77d19ca1 in task_run_applet src/applet.c:245
#4 0x55cf77c0b6ba in run_tasks_from_lists src/task.c:634
#5 0x55cf77c0cf16 in process_runnable_tasks src/task.c:861
#6 0x55cf77b48425 in run_poll_loop src/haproxy.c:2934
#7 0x55cf77b491cf in run_thread_poll_loop src/haproxy.c:3127
#8 0x55cf77b4bef2 in main src/haproxy.c:3783
#9 0x7fb8b0693d09 in __libc_start_main ../csu/libc-start.c:308
#10 0x55cf7764f4c9 in _start (/home/flecaille/src/haproxy-untouched/haproxy+0x1914c9)
0x55cf77faad08 is located 0 bytes to the right of global variable 'cli_kws' defined in 'src/quic_conn.c:7834:27' (0x55cf77faaca0) of size 104
SUMMARY: AddressSanitizer: global-buffer-overflow src/cli.c:335 in cli_find_kw
Shadow bytes around the buggy address:
According to the cli_find_kw() code and the cli_kw_list struct definition,
the second member of this structure, ->kw[], must be a null-terminated
array.
Add a last element with default initializers to the <cli_kws> global
variable which is impacted by this bug.
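With the terminating entry, the registration looks like this (a sketch of
the usual cli_kw_list pattern; handler names are illustrative):

  static struct cli_kw_list cli_kws = {{ }, {
          { { "show", "quic", NULL }, "show quic : display quic connections",
            cli_parse_show_quic, cli_io_handler_show_quic },
          {{},}  /* null-terminated entry expected by cli_find_kw() */
  }};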
This bug arrived with this commit:
15c74702d MINOR: quic: implement a basic "show quic" CLI handler
Must be backported to 2.7 where this previous commit has already been
backported.
It is not really a bug fix because it does not fix any known issue. And it
is flagged as MEDIUM because it is sensitive. But if there are some extra
calls to process_stream(), it can be an issue because, in si_update_rx(),
we may disable reading for the SC when outgoing data are blocked in the
input channel. But it is not really process_stream()'s job to take care of
that. This may block data receipt.
This is old code, mainly here to avoid wakeups in loop on the stats
applet. Today, it seems useless and can lead to bugs. An endpoint is
responsible for blocking the SC if it waits for some room, and the opposite
endpoint is responsible for unblocking it when some data are sent. The
stream should not interfere with this part.
This patch could be backported to 2.7 after a period of observation. And it
should only be backported to lower versions if an issue is reported.
For an unknown reason, the change of uptime calculation for the HTML
page didn't make it into commit 6093ba47c ("BUG/MINOR: clock: do not mix
wall-clock and monotonic time in uptime calculation"). Let's address it
as well, otherwise the stats page will display an incorrect uptime.
No backport needed unless the patch above is backported.
Uptime calculation for the master process was incorrect as it used
<start_date> as its timestamp base time. Fix this by using the scheduler
time <start_time> instead.
The impact of this bug is minor as the timestamp base time is only used for
"show proc" CLI output. It was highlighted by the following commit, which
caused a negative value to be displayed for the master process uptime on
"show proc" output:
28360dc53f
MEDIUM: clock: force internal time to wrap early after boot
This should be backported up to 2.0.
An incorrect printf format specifier "%lu" was used in the "show quic"
handler for a uint64_t. This breaks the build on 32-bit architectures. To
fix this portability issue, force an explicit cast to unsigned long long
with the "%llu" specifier.
This must be backported up to 2.7.