The pool_alloc() function was already a wrapper around __pool_alloc(), which
was also inlined but took a set of flags. The latter was uninlined and
moved to pool.c, and pool_alloc()/pool_zalloc() were turned into macros so
that they can more easily evolve to support debugging options.
The number of call places made this code grow over time, and this change
alone saved ~1% of the whole executable's size.
The pool_free() function has become a bit big over time due to the
extra consistency checks. It used to remain inline only to deal
cleanly with freeing NULL pointers, which is quite common on some
structures (e.g. in stream_free()).
Here we're splitting the function in two:
- __pool_free() does the inner block without the pointer test and
becomes a function ;
- pool_free() is now a macro that only checks the pointer and calls
__pool_free() if needed.
The use of a macro versus an inline function is only motivated by an
easier instrumentation of the code later.
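The resulting shape may be sketched like this (a simplified illustration
with stand-in types, not the actual pool code):

  #include <stdlib.h>

  /* Hypothetical stand-in type, only to illustrate the shape of the
   * split described above; this is not the real pool code. */
  struct pool_head { const char *name; };

  /* inner block: no NULL test, now a plain (non-inline) function */
  void __pool_free(struct pool_head *pool, void *ptr)
  {
      (void)pool;
      free(ptr);     /* stand-in for returning the object to the cache */
  }

  /* public entry point: a macro that only checks the pointer, which makes
   * it easy to add instrumentation around every call place later */
  #define pool_free(pool, ptr)              \
      do {                                  \
          if ((ptr) != NULL)                \
              __pool_free((pool), (ptr));   \
      } while (0)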
With this change, the code size reduces by ~1%, which means that at
this point all pool_free() call places used to represent more than
1% of the total code size.
Fix a potential null pointer dereference. In fact, this case is not
possible; only a mistake in SSL ex-data initialization may cause it:
either the connection or the quic_conn is set, which allows retrieving
the bind_conf.
A BUG_ON was already present but this does not cover release builds.
Extract the allocation of ssl_sock_ctx from qc_conn_init to a dedicated
function qc_conn_alloc_ssl_ctx. This function is called just after
allocating a new quic_conn, without waiting for the initialization of
the connection. It allocates the ssl_sock_ctx and the quic_conn tasklet.
This change is now possible because the SSL callbacks are dealing with a
quic_conn instance.
This change is required to be able to delay the connection allocation
and handle handshake packets without it.
Allow registering the quic_conn as ex-data in SSL callbacks. A new index,
ssl_qc_app_data_index, is used to identify it.
Replace the connection by the quic_conn as SSL ex-data when initializing
the QUIC SSL session. When using SSL callbacks in a QUIC context, the
connection is now NULL; use the quic_conn instead to retrieve the required
parameters.
The same cleanup is conducted inside the QUIC SSL methods of xprt-quic:
connection instance usage is replaced by quic_conn.
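As a rough sketch of how such an ex-data slot can be wired with the
standard OpenSSL ex-data API (the index name comes from this patch; the
helper names and call sites are assumptions, not the actual haproxy code):

  #include <openssl/ssl.h>

  struct quic_conn;                 /* real layout lives in the QUIC code */

  int ssl_qc_app_data_index = -1;

  /* one-time registration of the dedicated ex-data index */
  static int init_ssl_qc_index(void)
  {
      ssl_qc_app_data_index = SSL_get_ex_new_index(0, NULL, NULL, NULL, NULL);
      return ssl_qc_app_data_index >= 0 ? 0 : -1;
  }

  /* attach the quic_conn to the SSL object when the session is created */
  static void attach_qc(SSL *ssl, struct quic_conn *qc)
  {
      SSL_set_ex_data(ssl, ssl_qc_app_data_index, qc);
  }

  /* retrieve it from any SSL callback, without needing a connection */
  static struct quic_conn *qc_from_ssl(const SSL *ssl)
  {
      return SSL_get_ex_data(ssl, ssl_qc_app_data_index);
  }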
Define a special accept cb for QUIC listeners to quic_session_accept().
This operation is conducted during the proto.add callback when creating
listeners.
Special care is now taken when setting the standard callback
session_accept_fd() not to overwrite it if it is already defined by the
proto layer.
Some functions of xprt-quic were still using connection instead of
quic_conn. This must be removed as the two are decorrelated : a
quic_conn can exist without a connection.
When enabled, objects picked from the cache are checked for corruption
by comparing their contents against a pattern that was placed when they
were inserted into the cache. Objects are also allocated in the reverse
order, from the oldest one to the most recent, so as to maximize the
ability to detect such a corruption. The goal is to detect writes after
free (or possibly hardware memory corruptions). Contrary to DEBUG_UAF
this cannot detect reads after free, but may possibly detect later
corruptions and will not consume extra memory. The CPU usage will
increase a bit due to the cost of filling/checking the area and for the
preference for cold cache instead of hot cache, though not as much as
with DEBUG_UAF. This option is meant to be usable in production.
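The principle can be illustrated by the following self-contained sketch;
the names and the pattern formula are made up, only the fill-on-release /
check-on-allocation idea reflects the description above:

  #include <stdint.h>
  #include <string.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* When an object goes back to the cache, fill it with a pattern derived
   * from a seed; when it is picked again, verify the pattern is intact.
   * Any mismatch means something wrote to the area after it was freed. */
  static void fill_pattern(void *area, size_t size, uint64_t seed)
  {
      uint64_t *p = area;
      for (size_t i = 0; i < size / sizeof(*p); i++)
          p[i] = seed ^ (i * 0x9e3779b97f4a7c15ULL);
  }

  static int check_pattern(const void *area, size_t size, uint64_t seed)
  {
      const uint64_t *p = area;
      for (size_t i = 0; i < size / sizeof(*p); i++)
          if (p[i] != (seed ^ (i * 0x9e3779b97f4a7c15ULL)))
              return 0;   /* corrupted after free */
      return 1;
  }

  int main(void)
  {
      size_t size = 64;
      void *obj = malloc(size);

      fill_pattern(obj, size, 42);        /* on release to the cache */
      ((char *)obj)[8] = 'X';             /* simulated write-after-free */

      if (!check_pattern(obj, size, 42))  /* on allocation from the cache */
          fprintf(stderr, "pool integrity violation detected\n");
      free(obj);
      return 0;
  }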
It is possible that the listener is in the INITIAL state but has to probe
with Handshake packets. In this case, when entering qc_prep_pkts() there is
nothing to do. We must select the next packet number space (or encryption
level) to be able to probe with such a packet type.
Remove the unsafe call to tasklet_free in quic_close. At this stage the
tasklet may already have been scheduled by another thread, even if the
quic_conn refcount is now null. This would probably cause a crash on the
next tasklet processing.
Use tasklet_kill instead to ensure that the tasklet is freed in a
thread-safe way. Note that quic_conn_io_cb is not protected by the
refcount so only the quic_conn pinned thread must kill the tasklet.
Slightly adjust the refcount decrement on quic_conn close. A new function
named quic_conn_release is implemented. This function is responsible for
removing the quic_conn from the CID trees and decrementing the refcount so
that the quic_conn is freed once all threads have finished working with it.
For now, quic_close is responsible for calling it so the quic_conn is
scheduled to be freed by upper layers. In the future, it may be useful to
delay it, for example to be able to send remaining data or to wait for
missing ACKs.
This simplifies quic_conn_drop, which does not require the lock anymore.
Also, this can help free the connection more quickly in some cases.
quic_conn_drop decrements the refcount and may free the quic_conn when it
reaches 0. The quic_conn should not be dereferenced again after this call
in any case, even for traces.
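The two-step scheme roughly looks like this; the structure, field names
and tree handling are hypothetical simplifications of the real quic_conn
code:

  #include <stdatomic.h>
  #include <stdlib.h>

  struct my_qc {
      atomic_int refcount;
      int in_cid_tree;          /* stands in for the CID trees membership */
  };

  /* final destruction once nobody holds a reference anymore */
  static void my_qc_drop(struct my_qc *qc)
  {
      if (atomic_fetch_sub(&qc->refcount, 1) == 1)
          free(qc);     /* do not touch qc after this, even for traces */
  }

  /* called on close: detach from the lookup trees so that no new
   * reference can be taken, then drop the closing path's own reference */
  static void my_qc_release(struct my_qc *qc)
  {
      qc->in_cid_tree = 0;      /* stand-in for deleting the CID tree nodes */
      my_qc_drop(qc);
  }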
We have an anti-looping protection in process_stream() that detects bugs
that used to affect a few filters like compression in the past which
sometimes forgot to handle a read0 or a particular error, leaving a
thread looping at 100% CPU forever. When such a condition is detected,
an alert is emitted and the process is killed so that it can be replaced
by a sane one:
[ALERT] (19061) : A bogus STREAM [0x274abe0] is spinning at 2057156
calls per second and refuses to die, aborting now! Please
report this error to developers [strm=0x274abe0,3 src=unix
fe=MASTER be=MASTER dst=<MCLI> txn=(nil),0 txn.req=-,0
txn.rsp=-,0 rqf=c02000 rqa=10000 rpf=88000021 rpa=8000000
sif=EST,40008 sib=DIS,84018 af=(nil),0 csf=0x274ab90,8600
ab=0x272fd40,1 csb=(nil),0
cof=0x25d5d80,1300:PASS(0x274aaf0)/RAW((nil))/unix_stream(9)
cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={}]
call trace(11):
| 0x4dbaab [c7 04 25 01 00 00 00 00]: stream_dump_and_crash+0x17b/0x1b4
| 0x4df31f [e9 bd c8 ff ff 49 83 7c]: process_stream+0x382f/0x53a3
(...)
One problem with this detection is that it used to only count the call
rate because we weren't sure how to make it more accurate, but the
threshold was high enough to prevent accidental false positives.
There is actually one case that manages to trigger it, which is when
sending huge amounts of requests pipelined on the master CLI. Some
short requests such as "show version" are sufficient to be handled
extremely fast and to cause a wake up of an analyser to parse the
next request, then an applet to handle it, back and forth. But this
condition is not an error, since some data are being forwarded by
the stream, and it's easy to detect it.
This patch modifies the detection so that update_freq_ctr() only
applies to calls made with neither CF_READ_PARTIAL nor CF_WRITE_PARTIAL
set on any of the channels, which really indicates that nothing is
happening at all.
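The shape of the new condition is roughly as follows; the structures, flag
values and counter body are simplified stand-ins, only the flag names and
update_freq_ctr() come from the patch:

  #include <stdint.h>

  #define CF_READ_PARTIAL   0x00000004u   /* placeholder values */
  #define CF_WRITE_PARTIAL  0x08000000u

  struct channel { uint32_t flags; };
  struct freq_ctr { uint32_t curr; };

  static uint32_t update_freq_ctr(struct freq_ctr *ctr, uint32_t inc)
  {
      return ctr->curr += inc;            /* stand-in for the real counter */
  }

  /* only count the call when neither channel made any recent progress */
  static void account_call(struct freq_ctr *call_rate,
                           const struct channel *req,
                           const struct channel *res)
  {
      if (!((req->flags | res->flags) &
            (CF_READ_PARTIAL | CF_WRITE_PARTIAL)))
          update_freq_ctr(call_rate, 1);  /* really nothing is happening */
  }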
This is amply sufficient and extremely effective, as the call above
is still caught (shutr being ignored by an analyser) while a loop on
the master CLI now has no effect. The "call_rate" field in the detailed
"show sess" output will now be much lower, except for bogus streams,
which may help spot them. This field is only there for developers
anyway so it's pretty fine to slightly adjust its meaning.
This patch could be backported to stable versions in case of reports
of such an issue, but as that's unlikely, it's not really needed.
Pipelined commands easily cause request buffers to wrap, and the
master-cli parser only deals with linear buffers since it needs contiguous
keywords to look up in a list. As soon as a buffer wraps, some commands
are ignored and the parser is called in a loop because the wrapped data
do not leave the buffer.
Let's take the easiest path that's already used at the HTTP layer, we
simply realign the buffer if its input wraps. This rarely happens anyway
(typically once per buffer), remains reasonably cheap and guarantees this
cannot happen anymore.
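The realignment itself boils down to making the pending input linear
again, as in this generic sketch (a made-up ring-buffer type, not the
actual haproxy buffer API):

  #include <string.h>
  #include <stddef.h>

  struct ring {
      char  *area;   /* storage */
      size_t size;   /* total size */
      size_t head;   /* offset of the first pending byte */
      size_t data;   /* number of pending bytes */
  };

  /* if the pending input wraps around the end of the storage area, copy
   * it back so it becomes linear and can be parsed with simple lookups */
  static void ring_realign(struct ring *b, char *swap /* >= b->size bytes */)
  {
      if (b->head + b->data <= b->size)
          return;                                   /* already linear */

      size_t tail = b->size - b->head;              /* first fragment */
      memcpy(swap, b->area + b->head, tail);        /* part up to the end */
      memcpy(swap + tail, b->area, b->data - tail); /* wrapped part */
      memcpy(b->area, swap, b->data);
      b->head = 0;
  }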
This needs to be backported as far as 2.0.
When pcli_parse_request() is called with an empty buffer, it still tries
to parse it and can go on believing it finds an empty request if the last
char before the beginning of the buffer is a '\n'. In this case it overwrites
it with a zero and processes it as an empty command, doing nothing but not
making the buffer progress. This results in an infinite loop that is stopped
by the watchdog. For a reason related to another issue (yet to be fixed),
this can easily be reproduced by pipelining lots of commands such as
"show version".
Let's add a length check after the search for a '\n'.
This needs to be backported as far as 2.0.
When a shutdown is detected on the cli, we try to execute all pending
commands first before closing the connection. It is required because
commands execution is serialized. However, when the last part is a partial
command, the cli connection is not closed, waiting for more data. Because
there is no timeout for now on the cli socket, the connection remains
infinitely in this state. And because the maxconn is set to 10, if it
happens several times, the cli socket quickly becomes unresponsive because
all its slots are waiting for more data on closed connections.
This patch should fix the issue #1512. It must be backported as far as 2.0.
Again, we fix a leftover of the way we probed before switching to probing
by packet. When we were probing by datagram, we inspected <prv_pkt> to know
if we were coalescing several packets. There is no need to do that at all
when probing by packet. Furthermore, this could lead to blocking situations
where we want to probe but are limited by the congestion control (<cwnd>
path variable). This must not be the case: when probing, we must do so
regardless of the congestion control.
If a client resends Initial CRYPTO data, this is because it did not receive
all the server's Initial CRYPTO data. With this patch we prepare a fast
retransmission without waiting for the PTO timer expiration, resending the
old Initial CRYPTO data and coalescing it with Handshake CRYPTO data if
present in the same datagram. Furthermore, we also send a datagram made of
previously sent Handshake CRYPTO data, if any.
When probing, we must not take the congestion control window into account.
This was not completely correctly implemented: qc_build_frms() could fail
because of this limit when comparing the head of the packet against the
congestion control window. With this patch we make it fail only when
we are not probing.
This is to avoid too many PTO timer expirations for the 01RTT and Handshake
packet number spaces. Furthermore, we are not limited by the
anti-amplification limit for the 01RTT packet number space. According to
the RFC we can send up to two packets.
This modification should have come with this commit:
"MINOR: quic: Remove nb_pto_dgrams quic_conn struct member"
where the nb_pto_dgrams quic_conn struct member was removed.
When building packets to send, we build frames computing their sizes so
that they have a better chance of being added to new packets. There are
rare cases where such a packet could not be built because of the congestion
control, which may for instance prevent us from building a packet with
padding (retransmitted Initial packets). In such a case, the pre-built
frames were lost because they were added to the packet frame list but not
moved back to the packet number space they came from.
With this patch we add the frames to the packet only if it could be built,
and move them back to the packet number space if not.
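The move-back logic amounts to something like this simplified sketch; the
list types and function names are made up, not the actual QUIC TX code:

  #include <stddef.h>

  struct frm   { struct frm *next; };
  struct flist { struct frm *head; };

  /* move every frame from <src> back to <dst> */
  static void flist_append_all(struct flist *dst, struct flist *src)
  {
      while (src->head) {
          struct frm *f = src->head;
          src->head = f->next;
          f->next = dst->head;
          dst->head = f;
      }
  }

  /* pre-built frames are queued on the candidate packet; if the packet
   * cannot be built (e.g. because of the congestion window), give them
   * back to the packet number space instead of losing them */
  static int send_packet(struct flist *pktns_frames, struct flist *pkt_frames,
                         int can_build)
  {
      if (!can_build) {
          flist_append_all(pktns_frames, pkt_frames);  /* move frames back */
          return 0;
      }
      /* ... build and emit the packet with pkt_frames here ... */
      return 1;
  }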
There is no need to use an MT_LIST to store the frames to send from a
packet number space. This is a leftover from the multi-threading support
for the TX part.
As reported by @jinsubsim in github issue #1498, there is an
interoperability issue between nghttp2 as a client and a few servers
among which haproxy (in fact likely all those which do not make use
of the dynamic headers table in responses or which do not intend to
use a larger table), when reducing the header table size below 4096.
These are easily testable this way:
nghttp -v -H":method: HEAD" --header-table-size=0 https://$SITE
It will result in a compression error for those which do not start
with an HPACK dynamic table size update opcode.
There is a possible interpretation of the H2 and HPACK specs that
says that an HPACK encoder must send an HPACK headers table update
confirming the new size it will be using after having acknowledged
it, because since it's possible for a decoder to advertise a late
SETTINGS and change it after transfers have begun, the initially
advertised value might very well be seen as a first change from the
initial setting, and the HPACK spec doesn't specify the side which
causes the change that triggers a DTSU update, which was essentially
summed up in this question from nghttp2's author when this issue
was already raised 6 years ago, but which didn't really find a solid
response back then:
https://lists.w3.org/Archives/Public/ietf-http-wg/2015OctDec/0107.html
The ongoing consensus based on what some servers are doing and that aims
at limiting interoperability issues seems to be that a DTSU is expected
for each reduction from the current size, which should be reflected in
the next revision of the H2 spec:
https://github.com/httpwg/http2-spec/pull/1005
Given that we do not make use of this table we can emit a DTSU of zero
before encoding any HPACK frame. However, some clients do not support
receiving DTSU with such values (e.g. VTest) so we cannot do it
unconditionally!
The current patch aims at sticking as close to the spec as possible by
proceeding this way:
- when a SETTINGS_HEADER_TABLE_SIZE is received, a flag is set
indicating that the value changed
- before sending any HPACK frame, this flag is checked to see if
an update is wanted and if none was sent
- in this case a DTSU of size zero is emitted and a flag is set
to mention it was emitted so that it never has to be sent again
This addresses the problem with nghttp2 without affecting VTest.
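The flag logic can be sketched as follows; the flag and structure names
are hypothetical, but the single 0x20 byte is indeed the RFC 7541 encoding
of a dynamic table size update of value zero:

  #include <stdint.h>
  #include <stddef.h>

  #define H2C_FL_DTSU_WANTED   0x01  /* peer changed SETTINGS_HEADER_TABLE_SIZE */
  #define H2C_FL_DTSU_EMITTED  0x02  /* a DTSU was already sent */

  struct h2c_sketch { uint8_t flags; };

  /* Per RFC 7541 6.3, a dynamic table size update uses the '001' prefix,
   * so a single 0x20 byte encodes "new maximum size = 0". */
  static size_t maybe_emit_dtsu(struct h2c_sketch *h2c, uint8_t *out)
  {
      if ((h2c->flags & H2C_FL_DTSU_WANTED) &&
          !(h2c->flags & H2C_FL_DTSU_EMITTED)) {
          h2c->flags |= H2C_FL_DTSU_EMITTED;
          out[0] = 0x20;          /* DTSU, size zero */
          return 1;               /* one byte prepended to the HEADERS block */
      }
      return 0;
  }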
More context is available here:
https://github.com/nghttp2/nghttp2/issues/1660
https://lists.w3.org/Archives/Public/ietf-http-wg/2021OctDec/0235.html
Many thanks to @jinsubsim for this report and for participating in the
issue that led to an improvement of the H2 spec.
This should be backported to stable releases in a timely manner, ideally
as far as 2.4 once the h2spec update is merged, then to other versions
after a few months of observation or in case an issue around this is
reported.
Sending pipelined commands on the CLI using a semi-colon as a delimiter
has a cost that grows linearly with the buffer size, because co_getline()
is called for each word and looks up a '\n' in the whole buffer while
copying its contents into a temporary buffer.
This causes huge parsing delays, for example 3s for 100k "show version"
versus 110ms if parsed only once for a default 16k buffer.
This patch makes use of the new co_getdelim() function to support both
an LF and a semi-colon as delimiters so that it is no longer needed to
parse the whole buffer, and commands are instantly retrieved. We still
need to rely on co_getline() in payload mode as escapes and semi-colons
are not used there.
It should likely be backported where CLI processing speed matters, but
will require to also backport previous patch "MINOR: channel: add new
function co_getdelim() to support multiple delimiters". It's worth noting
that backporting it without "MEDIUM: cli: yield between each pipelined
command" would significantly increase the ratio of disconnections caused
by empty request buffers, for the sole reason that the currently slow
parsing grants more time to request data to come in. As such it would
be better to backport the patch above before taking this one.
For now we have co_getline() which reads a buffer and stops on LF, and
co_getword() which reads a buffer and stops on one arbitrary delimiter.
But sometimes we'd need to stop on a set of delimiters (CR and LF, etc).
This patch adds a new function co_getdelim() which takes a set of delimiters
as a string, and constructs a small map (32 bytes) that's looked up during
parsing to stop after the first delimiter found within the set. It also
supports an optional escape character that skips a delimiter (typically a
backslash). For the rest it works exactly like the two other variants.
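The delimiter-map idea can be illustrated by this self-contained sketch;
it is not the actual co_getdelim() implementation and the names are made
up:

  #include <stdint.h>
  #include <string.h>
  #include <stddef.h>

  /* one bit per possible byte value: 32 bytes total, so a membership test
   * costs a shift and a mask per input character */
  static void make_map(uint8_t map[32], const char *delims)
  {
      memset(map, 0, 32);
      for (; *delims; delims++)
          map[(uint8_t)*delims >> 3] |= 1 << ((uint8_t)*delims & 7);
  }

  static int is_delim(const uint8_t map[32], char c)
  {
      return map[(uint8_t)c >> 3] & (1 << ((uint8_t)c & 7));
  }

  /* return the length up to and including the first unescaped delimiter,
   * or 0 if none was found within <len> bytes */
  static size_t get_delim(const char *buf, size_t len,
                          const char *delims, char escape)
  {
      uint8_t map[32];
      make_map(map, delims);

      for (size_t i = 0; i < len; i++) {
          if (escape && buf[i] == escape) {  /* skip the escaped character */
              i++;
              continue;
          }
          if (is_delim(map, buf[i]))
              return i + 1;
      }
      return 0;
  }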
Pipelining commands on the CLI is sometimes needed for batched operations
such as map deletion etc, but it causes two problems:
- some possibly long-running commands will be run in series without
yielding, possibly causing extremely long latencies that will affect
quality of service and even trigger the watchdog, as seen in github
issue #1515.
- short commands that end on a buffer size boundary, when not run in
interactive mode, will often cause the socket to be closed when
the last command is parsed, because the buffer is empty.
This patch proposes a small change to this: by yielding in the CLI applet
after processing a command when there are data left, we significantly
reduce the latency, since only one command is executed per call, and
we leave an opportunity for the I/O layers to refill the request buffer
with more commands, hence to execute all of them much more often.
With this change there's no more watchdog triggered on long series of
"del map" on large map files, and the operations are much less disturbed.
It would be desirable to backport this patch to stable versions after some
period of observation in recent versions.
This command was integrated in 2.4, when it was not possible to handle
SSL with dynamic servers; this is now possible, so we should prefer this
way.
Must be backported in 2.5.
While giving a fresh try to `set server ssl` (which I wrote), I realised
the behavior is a bit inconsistent. Indeed when using this command over
a server with ssl enabled for the data path but also for the health
check path we have:
- data and health check done using tls
- emit `set server be_foo/srv0 ssl off`
- data path and health check path becomes plain text
- emit `set server be_foo/srv0 ssl on`
- data path becomes tls and health check path remains plain text
while I thought the end result would be:
- data path and health check path comes back in tls
In the current code we indeed erase all connections while deactivating,
but restore only the data path while activating. I made this mistake in
the past because I was testing with a case where the health check was
plain text by default.
There are several ways to solve this issue. The cleanest one would
probably be to avoid changing the health check connection when we use
`set server ssl` command, and create a new command `set server
ssl-check` to change this. For now I assumed this would be ok to simply
avoid changing the health check path and be more consistent.
This patch tries to address that and also updates the documentation. It
should not break the existing usage with health checks on plain text, as
in this case they should have `no-check-ssl` in defaults. Without this
patch, the command is unusable in an environment where you have a list of
servers to add along the way with an initial `server-template`, all using
tls for the data and health check paths.
For 2.6 we should probably reconsider and add `set server ssl-check`
command for better granularity of cases.
If this solution is accepted, this patch should be backported up to >=
2.4.
The alternative solution was to restore the previous state, but I
believe this will create even more confusion in the future.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
hlua_httpclient_table_to_hdrs() does a lua_pop(L, 1) at the end of the
function; this is supposed to be done in the caller and is already done
in hlua_httpclient_send().
This call has the consequence of popping the next parameter of the
httpclient, ignoring it.
This patch fixes the issue by removing the lua_pop(L, 1).
Must be backported in 2.5.
Forbid the httpclient from sending an empty chunked message when there is
no data to send. This happens even when doing a simple GET.
Must be backported in 2.5.
htx_add_data() is able to partially consume data. However there is a bug
when the HTX buffer is empty. The data length is not properly
adjusted. Thus, if it exceeds the HTX buffer size, no block is added. To fix
the issue, the length is now adjusted first.
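The fix boils down to reordering the clamp and the emptiness check,
roughly as in this simplified sketch (a made-up helper, not the real
htx_add_data() code):

  #include <stddef.h>

  /* clamp the data length to the free space *before* deciding whether a
   * block can be added, so an empty buffer still accepts a partial copy
   * instead of nothing at all */
  static size_t add_data_sketch(size_t free_space, size_t len)
  {
      if (len > free_space)
          len = free_space;      /* adjust first ... */
      if (!len)
          return 0;              /* ... then give up only if nothing fits */
      /* copy <len> bytes into the buffer here */
      return len;                /* number of bytes actually consumed */
  }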
This patch must be backported as far as 2.0.
If we wake up the I/O handler before the mux is started, it may have
enough time to parse the ClientHello TLS message and update the mux
transport parameters, leading to a crash.
So, we initialize the ->qcc quic_conn struct member at the very last
moment, when the mux is fully initialized. The condition to wake up the
I/O handler from lstnr_rcv_pkt() is: xprt context and mux both initialized.
Note that if the xprt context is initialized, it implies its tasklet is
initialized. So, we no longer check this latter condition.
During 2.4-dev, fault injection was enabled for cached pools with commit
207c09509 ("MINOR: pools: move the fault injector to __pool_alloc()"),
except that the condition for CONFIG_HAP_POOLS still depended on
DEBUG_FAIL_ALLOC not being set, which limits the usability to cases
where the define is set by hand. Let's remove it from the equation as
this is not a constraint anymore. While a bit old, there's no need to
backport this as it's only used during development.
The stopping-list management introduced by commit d3a88c1c3 ("MEDIUM:
connection: close front idling connection on soft-stop") missed two
error paths in the H1 and H2 muxes. The effect is that if a stream
or HPACK table couldn't be allocated for these incoming connections,
we would leave with the connection freed still attached to the
stopping_list and it would never leave it, resulting in use-after-free
hence either a crash or a data corruption.
This is marked as medium as it only happens under extreme memory pressure
or when playing with tune.fail-alloc. Other stability issues remain in
such a case so that abnormal behaviors cannot be explained by this bug
alone.
This must be backported to 2.4.
Free the ssl_sock_ctx tasklet in quic_close() instead of
quic_conn_drop(). This ensures that the tasklet is destroyed safely by
the same thread.
This has no impact as the free operation was previously conducted with
care and should not be responsible for any crash.
Implement the emission of Retry packets. These packets are emitted in
response to an Initial packet from a client without a token. The token in
the Retry packet contains the ODCID from the Initial packet.
By default, Retry packet emission is disabled and the handshake can
continue without address validation. To enable Retry, a new bind option
named "quic-force-retry" has been defined. If set, the handshake is
conducted only after receiving a token in the Initial packet.