qc_frm_free() is a helper used to clean up a QUIC frame object. It is
used by the MUX layer for both the QUIC and QMux protocols.
This function takes a pointer to the underlying quic_conn, used only for
tracing purposes. This patch fixes its usage for QMux to ensure that a
NULL value is used in this case.
No need to backport.
When dealing with the EOH block, we must be sure to force the close mode
for messages with no payload but announcing a non-null content-length.
It is mainly an issue on the server side but it could be encountered on
the client side too. Without this fix, a request can be switched to the
DONE state while the server is still expecting the payload. In an ideal
world, this case should not happen. But in conjunction with other bugs, it
may lead to a desynchronization between haproxy and the server.
Now, when a non-null content-length is announced but we know we have
reached the end of the message, we force the close mode. The only
exception is for bodyless responses (204s, 304s and responses to HEAD
requests).
Thanks to Martino Spagnuolo (r3verii) for his detailed report on this issue.
This patch must be backported to all stable versions.
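As a rough sketch of the rule above (the structure and names below are
hypothetical illustrations, not HAProxy's actual fields), the decision
could be expressed as:

```c
#include <stdbool.h>

/* Hypothetical message descriptor; field names are illustrative only. */
struct msg {
    bool has_payload;         /* body bytes are expected after headers */
    unsigned long long clen;  /* announced content-length */
    bool bodyless_rsp;        /* 204, 304, or response to a HEAD request */
};

/* Force the close mode when a non-null content-length is announced but
 * the end of the message was already reached, except for bodyless
 * responses whose content-length legitimately has no matching payload.
 */
static bool must_force_close(const struct msg *m)
{
    return !m->has_payload && m->clen != 0 && !m->bodyless_rsp;
}
```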
Checks are already made on H2 to detect inconsistencies between
advertised content-length and transferred data (excess of data or
premature END_STREAM flag on DATA frame). However, as found by
Martino Spagnuolo (r3verii), a subtle case remains: if the END_STREAM
appears on the HEADERS frame (i.e. a regular request for example),
then the check is not made. In this case it is possible to advertise
more contents than will really be transferred. If the other side uses
HTTP/1.1, and the server responds before the end of the transfer, then
the advertised bytes that will never be transferred, and that the server
will drain, will be taken from the next request, effectively hiding part
of its headers.
In practice this can be used to force subsequent requests to fail, or
when running with "http-reuse never" or when running with a totally
idle server, to perform a request smuggling by constructing specially
crafted request pairs where the first one is used to trigger an early
response and hide parts of or all headers of the second one, to
instead use a second embedded one that was not subject to analysis.
The risk remains moderate given the low prevalence of "http-reuse never"
in production environments, and of idle servers.
The fix consists in detecting if advertised content-length remains when
processing an END_STREAM flag on a HEADERS frame. It also does it for
trailers, which turn out to be another way to abuse the bug. However it
takes great care not to break bodyless responses (204, 304 and responses
to HEAD requests) that may present a content-length that doesn't reflect
the presence of a body in the response.
A temporary alternative to the fix is to disable HTTP/2 by specifying
"alpn http/1.1" on "bind" lines, and adding "option disable-h2-upgrade"
in HTTP frontends.
This must be backported to all stable versions.
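The check described above can be sketched as follows; the structure and
names are illustrative assumptions, not HAProxy's actual h2s fields:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stream state; not HAProxy's real h2s structure. */
struct h2_stream {
    uint64_t body_len;  /* DATA bytes received so far */
    uint64_t clen;      /* advertised content-length, if any */
    bool have_clen;     /* a content-length header was present */
    bool bodyless_rsp;  /* 204, 304, or response to HEAD: clen may not match */
};

/* On END_STREAM (whether on a HEADERS frame or on trailers), report a
 * protocol error if some advertised content-length remains that was
 * never transferred, taking care not to break bodyless responses.
 */
static bool h2_end_stream_clen_error(const struct h2_stream *s)
{
    return s->have_clen && !s->bodyless_rsp && s->body_len != s->clen;
}
```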
In 3.4-dev6, commit de5fc2f515 ("BUG/MINOR: server: set auto SNI for
dynamic servers") made it possible to properly set the SNI and to return
an error message. However the error message is leaked after being printed
on the CLI.
This should be backported to 3.3.
In 3.4-dev7, commit e1738b665d ("MINOR: debug: read all libs in memory
when set-dumpable=libs") reads dependencies into memory to store them as
a tar archive for later debugging. There was an attempt to mark the whole
archive read-only, except that the size passed in argument to mprotect()
is wrong: lib_size is only assigned after the operation and is still zero
at the moment this is done. new_size ought to be used instead.
This needs to be backported wherever the commit above is backported, at
least 3.2.
3 new enum values and a mask were added in latest -dev with commit
24e05fe33a ("MINOR: stream: Use a pcli transaction to replace pcli_*
members"), unfortunately the entries needed by the "flags" command were
forgotten.
No backport is needed.
Since commit 0af603f46f ("MEDIUM: threads: change the default
max-threads-per-group value to 16"), it was written "Tha minimum" instead
of "The minimum". No backport needed, this is only in latest -dev.
In 3.4-dev8, commit e264523112 ("MINOR: servers: Don't update last_sess
if it did not change") adjusted the last_sess date to avoid writing to
the same cache line all the time, however a typo makes it pick the wrong
second because it uses now_ms instead of now_ns (so the date would roughly
change every 12 days).
No backport needed.
In 2.8, commit ead43fe4f2 ("MEDIUM: compression: Make it so we can
compress requests as well.") added the ability to independently enable
compression on requests and/or responses. However there's a bug in the
"compression direction response" case, which preserves only the request
flag and adds the response direction instead of clearing the request
flag, so this directive would clear offload and make it impossible to
disable request compression if it had previously been enabled.
This can be backported to stable releases as far as 2.8.
When an HTTP request is sent during an HTTP health check, if an error is
triggered while the output buffer is a small buffer, another attempt is
made with a larger one. When this happens, the temporary chunk used to
format the headers must be released.
No backport needed.
Rulesets are stored in a global tree, released during the deinit stage.
All errors are fatal and abort the configuration parsing, so the current
ruleset must not be released here.
The test to remove trailers from chunked messages was inverted and is
thus ineffective: the flag for requests was tested on the client side and
the flag for responses on the server side. It should be the opposite.
This patch must be backported as far as 3.2.
When the EOH block is processed, before sending the message headers,
there is a test to know whether there is no payload. In the case of a
chunked message, a null-chunk is emitted, except for bodyless responses.
For instance, a response to a HEAD request has no payload at all and no
null-chunk.
However, the test for bodyless responses is not correct: only the
H1S_F_BODYLESS_RESP flag is tested. But this flag can be set on the
server side when we are processing the request. To fix the issue, the
test was adapted: the null-chunk is added if a message with no payload is
chunked and it is a request or a non-bodyless response.
This patch must be backported to all stable versions.
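The adapted test can be sketched like this (an illustrative reduction,
with hypothetical parameter names, not HAProxy's actual H1 mux code):

```c
#include <stdbool.h>

/* Decide whether a null-chunk ("0\r\n\r\n") must be emitted at EOH for
 * a chunked message carrying no payload. Bodyless responses (204, 304,
 * and responses to HEAD requests) must not get one, but requests and
 * regular responses must.
 */
static bool must_emit_null_chunk(bool chunked, bool has_payload,
                                 bool is_response, bool bodyless_rsp)
{
    if (!chunked || has_payload)
        return false;
    return !is_response || !bodyless_rsp;
}
```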
A typo in commit e51be30f78 ("BUG/MINOR: log: consider format expression
dependencies to decide when to log") made HRSHP appear twice (persistent
response) while the second one ought to be HRSHV (volatile response, e.g.
header values). This is harmless in practice since logs always wait for
at least headers.
This should be backported wherever the patch above was backported.
In h2_init(), if we have a failure while creating the h2c, and we
allocated shared_tx_bufs, don't forget to free it, otherwise we'll have
a memory leak.
This was introduced in 3.1 by commit a891534bfd ("MINOR: mux-h2: allocate
the array of shared rx bufs in the h2c"), so the fix should be backported
as far as 3.2.
When receiving a PRIORITY frame, when checking whether the stream ID
provided is ours, ignore bit 31: it is the exclusive bit, not part of
the stream ID. This way, whoever sends a PRIORITY frame with its own ID
and the exclusive bit set will now be considered in error, as it should
be per the RFC.
The impact is basically non-existent since we don't use PRIORITY frames,
it's only that we would ignore such an invalid frame instead of breaking
the connection.
The bug was introduced in 1.9 with commit 92153fccd3 ("BUG/MINOR: h2:
properly check PRIORITY frames") so the fix must be backported to all
versions.
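The masking can be sketched as below (illustrative names, not HAProxy's
actual h2 code); the HTTP/2 RFC makes a stream depending on itself a
protocol error, and the exclusive bit occupies bit 31 of the dependency
field:

```c
#include <stdbool.h>
#include <stdint.h>

#define H2_PRIO_EXCLUSIVE 0x80000000U  /* exclusive bit of the dependency field */

/* A PRIORITY frame declaring a dependency on the receiving stream itself
 * is a protocol error. The exclusive bit must be masked out before
 * comparing stream IDs, otherwise a self-dependency with the exclusive
 * bit set would go undetected.
 */
static bool h2_prio_depends_on_self(uint32_t own_id, uint32_t dep_field)
{
    return (dep_field & ~H2_PRIO_EXCLUSIVE) == own_id;
}
```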
Commit e67e36c9eb35eb1477ae0e425a660ee0c631cecd introduced
tune.h2.log-errors, that would let you pick if you wanted to know about
stream errors, connection errors, or no error.
However, a logic error made it so that no error would be picked for any
value except "none", in which case connection errors would be picked. Fix
that by checking the strcmp() return value correctly.
This should be backported wherever e67e36c9eb35eb1477ae0e425a660ee0c631cecd
has been backported.
A malformed TCP option with an option length set to 0 can cause
an infinite loop in the ip.fp converter.
The patch also forces the computation to use an unsigned char to
avoid a shift back during the parsing.
This fix should be backported to all versions that include the ip.fp
converter.
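The infinite-loop case can be illustrated with a minimal TCP option
walker (a sketch, not the ip.fp converter's actual code): an option with
length 0 would never advance the cursor unless explicitly rejected.

```c
#include <stddef.h>

/* Walk a TCP option list. A malformed option with a length of 0 (or 1)
 * would otherwise leave the cursor in place and loop forever. Returns
 * the number of options seen, or -1 on a malformed list.
 */
static int tcp_opts_count(const unsigned char *opt, size_t len)
{
    size_t i = 0;
    int n = 0;

    while (i < len) {
        unsigned char kind = opt[i];

        if (kind == 0)                        /* EOL: end of option list */
            break;
        if (kind == 1) {                      /* NOP: single-byte option */
            i++;
            n++;
            continue;
        }
        if (i + 1 >= len || opt[i + 1] < 2)   /* truncated, or length 0/1 */
            return -1;
        i += opt[i + 1];                      /* advance by option length */
        n++;
    }
    return n;
}
```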
In task_schedule(), before attempting to set the new task expiration
date, make sure it is not running by trying to set the TASK_RUNNING
flag, and waiting if it is already there. Having the flag set will
ensure that the task won't be running while we're modifying it.
There is a very rare race condition, where the expire would be set by
task_schedule(), then the running task might set it to something else,
and if it sets it to TICK_ETERNITY before task_schedule() calls
__task_queue(), then we will hit a BUG_ON() there.
This is very hard to reproduce, but has been reported a few times,
including in GitHub issue #3327, which should now be fixed.
This should be backported as far back as 2.8.
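A minimal sketch of the claim step, using C11 atomics and a simplified
task structure (illustrative only, not HAProxy's actual implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define TASK_RUNNING 0x0001

/* Simplified task; HAProxy's real struct task differs. */
struct task {
    _Atomic unsigned int state;
    int expire;
};

/* Try to claim the task by setting TASK_RUNNING. Returns true if we now
 * own it and may safely modify ->expire; false if it was already
 * running, in which case the real code would wait and retry before
 * touching the expiration date.
 */
static bool task_try_claim(struct task *t)
{
    unsigned int prev = atomic_fetch_or(&t->state, TASK_RUNNING);
    return !(prev & TASK_RUNNING);
}
```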
The proxy error counter was not updated in h2c_frt_handle_headers() in
case of failure to decode a HEADERS frame. Make sure to keep it updated.
This can be backported to all stable versions.
Commit aab1a60977 ("BUG/MEDIUM: h2/htx: always fail on too large trailers")
explicitly returned an RST_STREAM on failure to decode some trailers, and
used the code H2_ERR_INTERNAL_ERROR. However there are multiple possible
causes for this failure to happen, and it turns out that it's much more
likely to be related to a protocol error than a decompression error. So
let's change this to PROTOCOL_ERROR, and count a protocol error on the
proxy and in the session.
This can be backported to all stable versions (with adjustments related
to these versions, maybe focusing on 3.2 max is reasonable).
This pointer was used during the appctx refactoring performed in 2.6. The
ctx union was still there and this pointer was used as the "shadow" of the
svcctx pointer used by most commands. In 2.7, the union was removed, making
the shadow pointer useless. Let's remove it now.
A new type of transaction was introduced for master-cli streams. So the
SF_TXN_PCLI flag and functions to allocate and destroy PCLI transactions
were added.
In the stream structure, all pcli_* members were moved into the pcli
transaction and the txn union was updated accordingly.
When it was ambiguous, a test on the transaction type was performed, for
instance to destroy the transaction.
To be able to deal with different types of transactions for a stream, new
stream flags were added to know the transaction type when it is
allocated. For now only HTTP transactions can be allocated, so only
SF_TXN_HTTP was introduced. The mask SF_TXN_MASK must be used to get the
transaction type.
The transaction type is set when it is allocated and removed when it is
destroyed.
The HTTP transaction is moved into a union. For now, it is the only
possible transaction that can be allocated, but that will change. Thanks
to this commit and the next one, it will be possible to deal with
different kinds of transactions for a stream.
This patch looks quite huge, but it is more or less a renaming of all
accesses to the "txn" field to "txn.http".
The maximum size allowed for the payload pattern was increased to 64
bytes (65 bytes including the trailing \0), to be able to use a sha256 of
random data for instance. It could be useful to prevent any data
smuggling in the payload.
Note that on the CLI, it could be possible to have only the buffer size as a
limit, because the command line is only consumed once all commands are
executed. The payload pattern is only a pointer in the buffer where the
command line was copied. However, for the master CLI, the data are
streamed to the worker, so we must keep a copy of the payload pattern.
This is why we must limit its size.
It is now possible to deal with a payload too big to fit in a buffer,
without changing the buffer size. By default, a payload up to 128 KB can
be
dynamically allocated. "tune.cli.max-payload-size" global parameter can be
used to change this value, with some caution for huge values.
For CLI command handler functions, there is no change at all. A pointer
to the payload is still passed as a parameter. Internally, an area is
allocated for the payload only if it is too big.
The payload pattern used to detect the end of the payload is part of the
allocated area.
The payload is now saved as a buffer in the CLI context instead of a
simple pointer. This is mandatory to be able to reallocate the payload if
it is too big.
Instead of copying the payload pattern into the CLI context, we now only
save a pointer to this pattern. It is possible because the command line
is copied
in the CLI context. Arguments are already handled this way when the command
is processed.
Detect shell parser errors in test LOG files right after vtest execution
and mark the run as failed when such errors are found.
This turns malformed feature cmd expressions from warning-like diagnostics
into hard failures, so broken test conditions are caught reliably.
The single-threaded build is currently broken in development since commit
0af603f46f ("MEDIUM: threads: change the default max-threads-per-group
value to 16") because it doesn't set the default for the non-threaded
build. Let's set it to 1.
No backport is needed.