When calling ssl_ocsp_response_print(), which is used to display an OCSP
response's details when the "show ssl ocsp-response" command is issued on
the CLI, we use the BIO_read function to copy an OpenSSL BIO into a trash
buffer. The return value was not checked though, which could lead to
crashes since BIO_read can return a negative value in case of error.
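As a minimal illustration of the pattern (a sketch, not the actual patch),
the copy helper must refuse to go further whenever BIO_read() reports an
error or an empty read:

    #include <openssl/bio.h>

    /* Illustrative sketch: copy a BIO's contents into a bounded buffer and
     * validate BIO_read()'s return value before using it as a length. */
    static int bio_to_buf(BIO *bio, char *buf, int buf_len)
    {
        int ret = BIO_read(bio, buf, buf_len);

        if (ret <= 0)      /* negative on error, 0 when nothing was read */
            return -1;     /* the caller must not use <buf> in this case */
        return ret;        /* number of bytes actually copied */
    }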
This patch should be backported to 2.5.
When calling the "show ssl ocsp-response" CLI command some OpenSSL
objects need to be created in order to get some information related to
the OCSP response and some of them were not freed.
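As an illustrative sketch of the expected pattern (simplified, not the
patch itself), every intermediate object built to inspect the response has
to be released on all exit paths:

    #include <openssl/ocsp.h>

    /* Illustrative sketch: free intermediate OpenSSL objects on every path. */
    static int inspect_ocsp(const unsigned char *der, long len)
    {
        OCSP_RESPONSE *resp = d2i_OCSP_RESPONSE(NULL, &der, len);
        OCSP_BASICRESP *bs = NULL;
        int ret = -1;

        if (!resp)
            goto out;
        bs = OCSP_response_get1_basic(resp);
        if (!bs)
            goto out;
        /* ... extract and print the interesting fields here ... */
        ret = 0;
    out:
        OCSP_BASICRESP_free(bs);    /* both free functions accept NULL */
        OCSP_RESPONSE_free(resp);
        return ret;
    }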
It should be backported to 2.5.
The return value of the b_istput function, called to append the last data
block to the end of an OCSP response's detailed output, was not checked in
ssl_ocsp_response_print. Some missing checks of ssl_ocsp_response_print's
own return value were added as well.
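A minimal sketch of the kind of check added, assuming 'out' is the output
buffer and the usual convention that a non-positive return means nothing
could be appended:

    /* Illustrative sketch: abort the dump if the final block cannot be
     * appended to the output buffer. */
    if (b_istput(out, ist("\n")) <= 0)
        return -1;    /* buffer full or block too large for the buffer */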
This error was raised by Coverity (CID 1469513).
This patch fixes GitHub issue #1541.
It can be backported to 2.5.
The 'dst' optional field on an httpclient request can be used to set an
alternative server address in the haproxy address format, which means it
can be used with unix@, ipv6@, etc.
Should fix issue #1471.
When starting a request for the 2nd time from the same httpclient *hc
context, the flags are not reinitialized, so the httpclient stops after
the first call to the IO handler because the END flag is still present.
This patch also adds a check before httpclient_start() to ensure we don't
start a client that is already started.
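A rough sketch of the idea; the HC_F_* flag names below are purely
illustrative, not the real ones:

    /* Illustrative sketch with made-up flag names: refuse to start an
     * already-running client, and clear completion flags before reuse. */
    if (hc->flags & HC_F_STARTED)
        return -1;                            /* already running */
    hc->flags &= ~(HC_F_RES_END | HC_F_END);  /* reset end-of-exchange state */
    /* ... httpclient_start(hc) can now be called for the new request ... */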
Must be backported in 2.5.
The idle connection delay calculation before a request is a bit tricky,
especially for multiplexed protocols. It changed between 2.3 and 2.4 by
the integration of the idle delay inside the session itself with these
commits:
dd78921c6 ("MINOR: logs: Use session idle duration when no stream is provided")
7a6c51324 ("MINOR: stream: Always get idle duration from the session")
and by then it was only set by the H1 mux. But over multiple changes, what
used to be a zero idle delay + a request delay for H2 became a bit odd, with
the idle time slipping into the request time measurement. The effect is that,
as reported in GH issue #1395, some H2 request times look huge.
This patch introduces the calculation of the session's idle time on the
H2 mux before creating the stream. This is made possible because the
stream_new() code immediately copies this value into the stream for use
at log time. Thus we don't care about changing something that will be
touched by every single request. The idle time is calculated as documented,
i.e. the delay from the previous request to the current one. This also
means that when a single stream is present on a connection, a part of
the server's response time may appear in the %Ti measurement, but this
reflects the reality since nothing would prevent the client from using
the connection to fetch more objects. In addition this shows how long
it takes a client to find references to objects in an HTML page and
start to fetch them.
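In very rough terms, and with simplified field handling (this is a sketch,
not the actual code), the idea is:

    /* Illustrative sketch: refresh the session's idle duration right
     * before the H2 mux creates the stream, so that stream_new() copies
     * an up-to-date value later reported as %Ti in the logs.
     * 'prev_req_end_ms' is an illustrative name for the date at which
     * the previous request on this connection ended. */
    sess->t_idle = now_ms - prev_req_end_ms;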
A different approach could have consisted in counting from the last time
the connection was left without any request (i.e. really idle), but this
would at least require a documentation change and it's not certain this
would provide more useful information.
Thanks to Bart Butler and Luke Seelenbinder for reporting enough elements
to diagnose this issue.
This should be backported to 2.4.
Sadly, despite particular care, commit 39a0a1e12 ("MEDIUM: h2/hpack: emit
a Dynamic Table Size Update after settings change") broke H2 when sending
DTSU. A missing negation on the flag caused the DTSU_EMITTED flag to be
lost and the DTSU to be sent again on the next stream, and possibly to
break flow control or a few other internal states.
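To illustrate the class of bug (flag name approximate, code simplified),
the test must be on the absence of the flag:

    /* Illustrative sketch: the DTSU must be emitted only once, so the
     * condition needs the negation that was missing. */
    if (!(h2c->flags & H2_CF_DTSU_EMITTED)) {
        /* ... emit the Dynamic Table Size Update ... */
        h2c->flags |= H2_CF_DTSU_EMITTED;
    }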
This will have to be backported wherever the patch above was backported.
Thanks to Yves Lafon for notifying us with elements to reproduce the
issue!
tls_basic_sync_wo_stkt_backend fails once every 200 runs for me. This
seems to be because the startup delay doesn't always allow peers to
perform a simultaneous connect, close and new attempt. With 3s I can't
see it fail anymore. In addition, the long "delay 0.2" pauses are still way
too much since we do not really care about the startup order in practice.
Sometimes when sending commands to shut down a server, haproxy complains
that some connections remain; this is because the server-side connection
might not always be completely released at the moment the client leaves
and the operation is emitted. While shutting down server sessions works,
it seems cleaner to just use "option httpclose", which releases the server
earlier and avoids the race.
This can be backported to 2.5.
There's a bug in spoe_release_appctx(): it checks the presence of items
in the wrong list (rt[tid].agents) before running over rt[tid].waiting_queue
and zeroing their spoe_appctx. The effect is that these contexts are not zeroed
and if spoe_stop_processing() is called, "sa->cur_fpa--" will be applied
to one of these recently freed contexts and will corrupt random memory
locations, as found at least in bugs #1494 and #1525.
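Schematically, the emptiness test has to match the list being walked (a
simplified sketch, not the literal patch):

    /* Illustrative sketch: check and walk the same list, so that every
     * waiting context really gets its appctx pointer cleared. */
    if (!LIST_ISEMPTY(&agent->rt[tid].waiting_queue)) {
        list_for_each_entry(ctx, &agent->rt[tid].waiting_queue, list)
            ctx->spoe_appctx = NULL;
    }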
This must be backported to all stable versions.
Many thanks to Christian Ruppert from Babiel for exchanging so many
useful traces over the last two months, testing debugging code and
helping set up a similar environment to reproduce it!
Ensure calls to http_find_header() terminate. If a "Set-Cookie2"
header is found then the while(1) loop in
http_manage_server_side_cookies() will never terminate, resulting in
the watchdog firing and the process terminating via SIGABRT.
The while(1) loop becomes unbounded because an unmatched call to
http_find_header("Set-Cookie") will leave ctx->blk=NULL. Subsequent
calls to check for "Set-Cookie2" will now enumerate from the beginning
of all the blocks and will once again match on subsequent
passes (assuming a match first time around), hence the loop becoming
unbounded.
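Schematically, the problematic pattern looks like this (simplified sketch):

    /* Illustrative sketch: when the first lookup fails, ctx.blk is left
     * NULL, so the second lookup restarts from the first block and can
     * re-match a "Set-Cookie2" header already processed on a previous
     * pass, hence the endless loop. */
    struct http_hdr_ctx ctx = { .blk = NULL };

    while (1) {
        if (!http_find_header(htx, ist("Set-Cookie"), &ctx, 1)) {
            /* ctx.blk == NULL here: the next search starts over */
            if (!http_find_header(htx, ist("Set-Cookie2"), &ctx, 1))
                break;
        }
        /* ... cookie processing ... */
    }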
This issue was introduced with HTX and this fix should be backported
to all versions supporting HTX.
Many thanks to Grant Spence (gspence@redhat.com) for working through
this issue with me.
ist strings are not terminated by '\0', leading to junk characters being
displayed when using %s to print the HTTP start line.
Fix the issue by replacing %s with %.*s and passing istlen.
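For example, with a generic struct ist named 'line' (illustrative):

    /* Before: reads past the end, since ist strings carry no '\0' */
    printf("%s\n", line.ptr);

    /* After: bound the printed length explicitly */
    printf("%.*s\n", (int)istlen(line), istptr(line));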
Must be backported in 2.5.
If the same filename was specified in multiple calls of the jwt_verify
converter, we would have parsed the contents of the file every time it
was used instead of checking if the entry already existed in the tree.
This led to memory leaks because we would not insert the duplicated
entry and we would not free it (as well as the EVP_PKEY it referenced).
We now check the return value of ebst_insert and free the current entry
if it is a duplicate of an existing entry.
The order in which the tree insertion and the pkey parsing happen was also
switched in order to avoid parsing key files in case of duplicates.
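Schematically (entry member names are illustrative):

    /* Illustrative sketch: a unique tree returns the existing node when
     * the key is already present, so the duplicate must be released. */
    if (ebst_insert(&jwt_cert_tree, &entry->node) != &entry->node) {
        free(entry);     /* same filename already in the tree */
        entry = NULL;
    }
    /* only parse the key file once the entry is known to be new */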
Should be backported to 2.5.
When emptying the jwt_cert_tree during deinit, the entries are freed but
not the EVP_PKEY reference they kept, leading to a memory leak.
Should be backported in 2.5.
The node pointer was not moving properly along the jwt_cert_tree during
the deinit, which resulted in a double free during cleanup (or when checking
a configuration that used the jwt_verify converter with an explicit
certificate specified).
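A sketch of the intended destruction walk, covering this fix and the
previous one (entry type and member names as assumed here, simplified):

    /* Illustrative sketch: grab the next node before deleting/freeing the
     * current one, and release the EVP_PKEY held by each entry. */
    node = ebmb_first(&jwt_cert_tree);
    while (node) {
        struct ebmb_node *next = ebmb_next(node);
        struct jwt_cert_tree_entry *entry =
            ebmb_entry(node, struct jwt_cert_tree_entry, node);

        ebmb_delete(node);
        EVP_PKEY_free(entry->pkey);
        free(entry);
        node = next;
    }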
This patch fixes GitHub issue #1533.
It should be backported to 2.5.
Inspect the return code of the HEADERS/DATA parsing functions and use a BUG_ON
to signal an error. The stream should be closed to handle the error
in a cleaner fashion.
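Schematically (the parser's signature is assumed here for illustration):

    /* Illustrative sketch: any parsing failure is fatal for now. */
    ssize_t ret = h3_headers_to_htx(qcs, buf, flen, fin);  /* signature assumed */
    BUG_ON(ret < 0);   /* temporary: closing the stream would be cleaner */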
This is the same fix as in this commit:
"BUILD: tree-wide: avoid warnings caused by redundant checks of obj_types"
Should fix CID 1469649 for GH #1546
Remove this server-specific code section. It is useless and not tested.
Furthermore, this is really not the right place to retrieve the peer
transport parameters.
Add a new function h3_data_to_htx. This function is used to parse an H3
DATA frame and copy it into the mux stream's HTX buffer. This is required
to support HTTP POST data.
Note that partial transfers, when the HTX buffer is full, are not properly
handled. This causes large DATA transfers to fail at the moment.
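Schematically, the copy relies on the standard HTX helpers and on the
app_buf buffer introduced alongside the rcv_buf patch (a simplified sketch;
the real function also has to track frame boundaries):

    /* Illustrative sketch: append a DATA payload to the stream's HTX. */
    struct htx *htx = htx_from_buf(&qcs->app_buf);
    size_t sent = htx_add_data(htx, ist2(payload, flen));

    if (sent < flen) {
        /* partial copy because the HTX buffer is full: this is the
         * case that is not handled yet, as noted above */
    }
    htx_to_buf(htx, &qcs->app_buf);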
Move the HEADERS parsing code outside of the generic h3_decode_qcs into a
new dedicated function h3_headers_to_htx. The benefit will be visible when
the parsing of other H3 frames, such as DATA, is implemented.
The EOI/EOS flags must be set on the conn-stream when transferring the last
data of a stream in rcv_buf. This is activated if the qcs HTX buffer has the
EOM flag and has been fully transferred.
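Roughly (flag names from the 2.5-era conn-stream/HTX API, simplified):

    /* Illustrative sketch: once the HTX carried the end of message and
     * has been fully drained, report end of input/stream upward. */
    if ((htx->flags & HTX_FL_EOM) && htx_is_empty(htx))
        cs->flags |= CS_FL_EOI | CS_FL_EOS;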
Implement the stream rcv_buf operation on the QUIC mux.
A new buffer named app_buf is stored in the qcs structure. This new buffer
contains HTX and will be filled, for example, during H3 DATA frame
parsing.
The rcv_buf operation transfers as much data as possible from the app_buf
HTX to the conn-stream buffer. This is mainly identical to mux-h2. This is
required to support HTTP POST data.
Adjust the method to detect that an H3 HEADERS frame is the last one of
the stream. If this is true, the flags EOM and BODYLESS must be set on
the HTX message.
Pass the H3 frame length to QPACK decoding instead of the length of the
whole buffer.
Without this fix, if there are multiple H3 frames starting with a
HEADERS frame, QPACK decoding will be erroneously applied to all of them,
most probably leading to a decoding error.
If the last frame is not entirely copied and must be buffered, FIN
must not be signaled to the upper layer.
This might fix a rare bug which could cause the request channel to be
closed too early, leading to an incomplete request.
If a CONNECTION_CLOSE is received during the handshake or after the mux
release, a segfault happens due to an invalid dereference of qc->qcc. Check
mux_state first to prevent this.
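Schematically (the state-constant and helper names below are approximate,
purely for illustration):

    /* Illustrative sketch: only dereference qc->qcc once the mux is known
     * to be allocated and not yet released. */
    if (qc->mux_state == QC_MUX_READY)
        qcc_handle_connection_close(qc->qcc);  /* illustrative helper name */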
Move the QUIC datagram handlers outside of the receivers. Use a global
handler per thread which is allocated on post-config. Implement a free
function on process deinit to avoid a memory leak.
Since the relaxation of the run-queue locks in 2.0 there has been a
very small but existing race between expired tasks and running tasks:
a task might be expiring and being woken up at the same time, on
different threads. This is protected against via the TASK_QUEUED and
TASK_RUNNING flags, but just after the task finishes executing, it
releases its TASK_RUNNING bit and only then may it go to task_queue().
This one will do nothing if the task's ->expire field is zero, but
if the field turns to zero between this test and the call to
__task_queue() then several things may happen:
- the task may remain in the WQ for up to the next 24 days if its
  ->expire value is in the future;
- the task may prevent any other task after it from expiring during
  the next 24 days once it's queued;
- if DEBUG_STRICT is set on 2.4 and above, an abort may happen;
- since 2.2, if the task got killed in between, then we may
  even requeue a freed task, causing random behaviour next time
  it's found there, or possibly corrupting the tree if it gets
  reinserted later.
The peers code is one call path that easily reproduces the case with
the ->expire field being reset, because it starts by setting it to
TICK_ETERNITY as the first thing when entering the task handler. But
other code parts also use multi-threaded tasks and rightfully expect
to be able to touch their expire field without causing trouble. No
trivial code path was found that would destroy such a shared task at
runtime, which already limits the risks.
This must be backported to 2.0.
Along recent evolutions of the pools, we've lost the ability to reliably
detect double-frees: while in the past the same pointer was used both to
chain the objects in the cache and to store the pool's address, since 2.0
they're different, so the pool's address is never overwritten on free()
and a double-free will rarely be detected.
This patch sets the caller's return address there. It can never be equal
to a pool's address and will help guess what was the previous call path.
It will not work on exotic architectures nor with very old compilers but
these are not the environments where we're trying to get detailed bug
reports, and this is not done by default anyway so we don't care about
this limitation. Note that depending on the inlining status of the
function, the result may differ but that's no big deal either.
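The mechanism can be pictured like this (a simplified sketch; where exactly
the word is stored inside the freed object is an assumption here):

    /* Illustrative sketch: stamp the freed area with the caller's return
     * address; it can never collide with a pool pointer, and it points
     * back to whoever performed the previous free(). */
    static inline void pool_mark_caller(void *freed_obj)
    {
        *(const void **)freed_obj = __builtin_return_address(0);
    }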
A test by placing a double free of an appctx inside the release handler
itself successfully reported the trouble during appctx_free() and showed
that the return address was in stream_int_shutw_applet() (this one calls
the release handler).
During global eviction we're visiting nodes from the LRU tail and we
determine their pool cache head and their pool. In order to make sure
we never mess up, let's add backward pointers to the thread number
and pool from the pool_cache_head. It's 64-byte aligned anyway so we're
not wasting space, and it helps with debugging and will help catch memory
corruption as early as possible.
When refilling caches from the shared cache, it's pointless to set the
pointer to the local pool since it may be overwritten immediately after
by the LIST_INSERT(). This is a leftover from the pre-2.4 code in fact.
It didn't hurt, though.