From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
This changes the signification of each bit in the polled_mask so that
now each bit represents a local thread ID for the current group instead
of a global thread ID. As such, all tests now apply to ltid_bit instead
of tid_bit.
No particular check was made to verify that the FD's tgid matches the
current one because there should be no case where this is not true. A
check was added in epoll's __fd_clo() to confirm it never differs unless
expected (soft stop under thread isolation, or master in starting mode
going to exec mode), but that doesn't prevent the job from being done: it
only consists in checking which of the group's threads are still
polling this FD and removing them.
Some atomic loads were added at the various locations, and most repetitive
references to polled_mask[fd].xx were turned into a local copy instead,
making the code much clearer.
We now grab a reference to the FD's tgid before manipulating the
running_mask so that we're certain it corresponds to our own group
(hence bits), and we drop it once we've set the bit. For now there's
no measurable performance impact in doing this, which is great. The
lock can be observed by perf top as taking a small share of the time
spent in fd_update_events(), itself taking no more than 0.28% of CPU
under 8 threads.
However due to the fact that the thread groups are not yet properly
spread across the pollers and the thread masks are still wrong, this
will trigger some BUG_ON() in fd_insert() after a few tens of thousands
of connections when threads other than those of group 1 are reached,
and this is expected.
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of tgid on the lower 16
bits and a refcount on the higher 16 bits. This allows to know when it's
really possible to trust the tgid and the running mask. If the refcount
is higher than 1, it indeed indicates that another thread might be in
the process of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
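As a rough illustration of the packing described above, here is a standalone
sketch; the macro and helper names are made up for the example and are not the
actual fdtab code:

  #include <stdint.h>
  #include <stdio.h>

  /* hypothetical packed field: tgid in the lower 16 bits, refcount above */
  #define TGID_MASK   0xFFFFu
  #define REFC_SHIFT  16

  static inline uint32_t fd_pack_tgid(uint16_t tgid, uint16_t refc)
  {
      return ((uint32_t)refc << REFC_SHIFT) | tgid;
  }

  static inline uint16_t fd_tgid_of(uint32_t packed)
  {
      return packed & TGID_MASK;
  }

  static inline uint16_t fd_refc_of(uint32_t packed)
  {
      return packed >> REFC_SHIFT;
  }

  int main(void)
  {
      /* taking a reference would atomically add 1 << REFC_SHIFT in real code */
      uint32_t v = fd_pack_tgid(1, 1);

      printf("tgid=%u refcount=%u\n", (unsigned)fd_tgid_of(v), (unsigned)fd_refc_of(v));
      return 0;
  }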
It's a bit ugly to see that half of the callers of fd_insert() have to
apply all_threads_mask themselves to the bit field they're passing,
because usually it comes from a listener that may have other bits set.
Let's make the function apply the mask itself.
After a poller's ->clo() was called to completely terminate operations
on an FD, there's no reason for keeping updates on this FD, so if any
updates were already programmed it would be nice if we could delete them.
Tests show that __fd_clo() is called roughly half of the time with the
last FD from the local update list, which possibly makes sense if a close
has to appear after a polling change resulting from an incomplete read or
the end of a send().
We can detect this and remove the last entry, which gives less work to
do during the update() call, and eliminates most of the poll_drop_fd
event reports.
Note that while tempting, this must not be backported because it's only
safe to be done now that fd_delete_orphan() clears the update mask as
we need to be certain not to miss it:
- if the update mask is kept up with no entry, we can miss future
updates;
- if the update mask is cleared too fast, it may result in failure
to add a shared event.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
Dealing with long-lasting updates that outlive a close() is always
going to be quite a problem, not because of the thread that will
discover such updates late, but mostly due to the shared update_list
that will have an entry on hold making it difficult to reuse it, and
requiring that the fd's tgid is changed and the update_mask reset
from a safe location.
After careful inspection, it turns out that all our pollers that support
automatic event removal upon close() do not need any extra bookkeeping,
and that poll and select that use an internal representation already
provide a poller->clo() callback that is already used to update the
local event. As such, it is already safe to reset the update mask and
to remove the event from the shared list just before the final close,
because nothing remains to be done with this FD by the poller.
Doing so considerably simplifies the handling of updates, which will
only have to be inspected by the pollers, while the writers can continue
to consider that the entries are always valid. Another benefit is that
it will be possible to reduce contention on the update_list by just
having one update_list per group (left to be done later if needed).
We don't want to pick idle connections from another thread group,
this would be very slow by forcing to share undesirable data.
This patch makes sure that we start seeking from the current thread
group's threads only and loops over that range exclusively.
It's worth noting that the next_takeover pointer remains per-server
and will bounce when multiple groups use it at the same time. But we
preserve the perturbation by applying a modulo when retrieving it,
so that when groups are of the same size (most common case), the
index will not even change. At this time it doesn't seem worth
storing one index per group in servers, but that might be an option
if any contention is detected later.
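A minimal sketch of the modulo trick described above, with hypothetical
variable names (this is not the actual server code):

  #include <stdio.h>

  /* Hypothetical: pick the next thread to try a takeover from, restricted
   * to the current group's range [base, base + count). The shared counter
   * may have been advanced by another group; the modulo keeps it usable.
   */
  static unsigned next_takeover_thread(unsigned shared_counter,
                                       unsigned group_base, unsigned group_count)
  {
      return group_base + (shared_counter % group_count);
  }

  int main(void)
  {
      /* two groups of 4 threads: the same counter maps into each group's range */
      for (unsigned c = 0; c < 6; c++)
          printf("counter=%u -> grp1 thr=%u, grp2 thr=%u\n",
                 c, next_takeover_thread(c, 0, 4), next_takeover_thread(c, 4, 4));
      return 0;
  }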
This one is only used as a hint to improve scheduling latency, so there
is no point anymore in keeping it global since each thread group handles
its own run queue.
Their migration was postponed for convenience only but now's time for
having the shared wait queues per thread group and not just per process,
otherwise the WQ lock uses a huge amount of CPU alone.
Since commit d2494e048 ("BUG/MEDIUM: peers/config: properly set the
thread mask") there must not remain any single case of a receiver that
is bound nowhere, so there's no need anymore for thread_mask().
We're adding a test in fd_insert() to make sure this doesn't happen by
accident though. The function was removed and its rare uses were
replaced with the original value of the bind_thread mask.
When using multiple groups, the stats socket starts to emit errors and
it's not natural to have to touch the global section just to specify
"thread 1/all". Let's pre-attach these sockets to thread group 1. This
will cause errors when trying to change the group but this really is not
a problem for now as thread groups are not enabled by default. This will
make sure configs remain portable and may possibly be relaxed later.
As a side effect of commit 34aae2fd1 ("MEDIUM: mworker: set the iocb of
the socketpair without using fd_insert()"), a config may now refuse to
start if there are multiple groups configured because the default bind
mask may span over multiple groups, and it is not possible to force it
to work differently.
Let's just assign thread group 1 to the master<->worker sockets so that
the thread bindings automatically resolve to a single group. The same was
done for the master side of the socket even if it's not used. It will
avoid being forgotten in the future.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
These old legacy pollers are not designed for this. They're still
using a shared list of events for all threads, this will not scale at
all, so there's no point in enabling thread-groups there. Modern
systems have epoll, kqueue or event ports and do not need these ones.
We arrange for failing at boot time, only when thread-groups > 1 so
that existing setups will remain unaffected.
If there's a compelling reason for supporting thread groups with these
pollers in the future, the rework should not be too hard, it would just
consume a lot of memory to have an fd_evts[] array per thread, but that
is doable.
When an FD is migrated, all pollers program an update. That's useless
code duplication, and when thread groups will be supported, this will
require an extra round of locking just to verify the update_mask on
return. Let's just program the update direction from fd_update_events()
as it already does for closed FDs, this becomes more logical.
protocol_stop_now() is called from do_soft_stop_now() running on any
thread that received the signal. The problem is that it will call some
listener handlers to close the FD, resulting in an fd_delete() being
called from the wrong group. That's not clean and we cannot even rely
on the thread mask to show up.
One interesting long-term approach could be to have kill queues for
FDs, and maybe we'll need them in the long run. However that doesn't
work well for listeners in this situation.
Let's simply isolate ourselves during this instant. We know we'll be
alone dealing with the close and that the FD will be instantly deleted
since not in use by any other thread. It's not the cleanest solution
but it should last long enough without causing trouble.
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.
Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
It was deprecated, marked for removal in 2.7 and was already emitting a
warning; let's get rid of it. Note that we've kept the keyword detection
to suggest to use "thread" instead.
This was already causing a deprecation warning and was marked for removal
in 2.7, now it happens. An error message indicates this doesn't exist
anymore.
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
The output of "show activity" can be so large that the output is visually
unreadable on a screen. Let's add an option to filter on the desired
column (actually the thread number), use "0" to report only the first
column (aggregated/sum/avg), and use "-1", the default, for the normal
detailed dump.
This command will create the requested number of tasks competing on a
lock, resulting in triggering the watchdog and crashing the process.
This will help stress the watchdog and inspect the lock debugging parts.
The previous attempt to fix thread dumps in commit 672972604 ("BUG/MEDIUM:
debug: fix possible hang when multiple threads dump at once") still had
some shortcomings. Sometimes parallel dumps are jerky essentially due to
the way that threads synchronize on startup and end. In addition the risk
of waiting forever for a stopped thread exists, and panics happening in
parallel to thread dumps are not more reliable either.
This commit revisits the state transitions so that all threads may request
a dump in parallel, that all of them wait for each other in the handler,
and that one thread is responsible for counting every other and checking
that the total matches the number of active threads.
Then for stopping there's a finishing phase that all threads wait for so
that none quits this area too early. Given that we now know the number of
participants to the dump, we can let them each decrement the counter when
leaving so that another dump may only start after the last participant
has completely left.
Now many thread dumps in parallel are running fine, so do panics. No
backport is needed as this was the result of the changes for thread
groups.
Some panic dumps are mangled or truncated due to the watchdog firing at
the same time on multiple threads and calling ha_panic() simultaneously.
What may happen in this case is that the second one waits for the first
one to finish but as soon as it's done the second one resets the buffer
and dumps again, sometimes resetting the first one's dump. Also the first
one's abort() may trigger while the second one is currently dumping,
resulting in a full dump followed by a truncated one, leading to
confusion. Sometimes some lines appear in the middle of a dump as well.
It doesn't happen often and is easier to trigger by causing massive
deadlocks.
There's no reason for the process to resist a panic, so we can safely
add a counter and do nothing on subsequent calls. Ideally we'd wait there
forever but as this may happen inside a signal handler (e.g. watchdog),
it doesn't always work, so the easiest thing to do is to return so that
the thread is interrupted as soon as possible and brought to the debug
handler to be dumped.
This should be backported, at least to 2.6 and possibly to older versions
as well.
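A tiny sketch of the guard described above (illustrative only; the real
ha_panic() deals with the dump buffer and the signal context):

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int panic_started;

  /* Returns 1 for the first caller, which should perform the dump and
   * abort(); later callers get 0 and simply return so that the thread is
   * interrupted and brought to the debug handler to be dumped.
   */
  static int panic_enter_once(void)
  {
      return atomic_fetch_add(&panic_started, 1) == 0;
  }

  int main(void)
  {
      printf("first call: %d\n", panic_enter_once());  /* 1 */
      printf("second call: %d\n", panic_enter_once()); /* 0 */
      return 0;
  }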
In ha_tkillall(), the current thread's group was used to check for the
thread being running instead of using the target thread's group mask.
Most of the time it would not have any effect unless some groups are
uneven where it can lead to incomplete thread dumps for example.
No backport is needed, this is purely 2.7.
Running several concurrent "show threads" in loops might occasionally
cause a segfault when trying to retrieve the stream from appctx_sc()
which may be null while the applet is finishing. It's not easy to
reproduce, it requires 3-5 sessions in parallel for about a minute
or so. The appctx_sc must be checked before passing it to sc_strm().
This must be backported to 2.6 which also has the bug.
In thread_resolve_group_mask(), if a global thread number is passed
and it belongs to a group greater than 1, an incorrect shift resulted
in shifting that ID again, which made it appear nowhere or possibly in
a wrong group. The bug was introduced in 2.5 with commit 627def9e5
("MINOR: threads: add a new function to resolve config groups and
masks") though groups only start to be usable in 2.7, so there
is no impact for this bug, hence no backport is needed.
Implement graceful shutdown as specified in RFC 9114. A GOAWAY frame is
generated with a stream ID indicating the range of processed requests.
This process is done via the release app protocol operation. The MUX
is responsible for emitting the generated GOAWAY frame after app release.
A CONNECTION_CLOSE will be emitted once there are no unacknowledged
STREAM frames.
Store a reference to the HTTP/3 control stream in h3c context.
This will be useful to implement GOAWAY emission without having to store
the control stream ID on opening.
Call qc_send() on qc_release(). This is mostly useful for application
protocols with a connection closing procedure. Most notably, this will be
useful to implement HTTP/3 GOAWAY emission.
This change is purely cosmetic. qc_release() function is moved just
before qc_io_cb(). It's cleaner as it brings it closer where it is used.
More importantly, this will be required to be able to use it in
qc_send() function.
Send a CONNECTION_CLOSE if the MUX has been released and all STREAM data
are acknowledged. This is useful to prevent a client from trying to use
a connection whose upper layer has been closed.
To implement this a new function qc_check_close_on_released_mux() has
been added. It is called on QUIC MUX release notification and each time
a qc_stream_desc has been released.
This commit is associated with the previous one :
MINOR: mux-quic/h3: schedule CONNECTION_CLOSE on app release
Both patches are required to prevent the risk of browsers stuck on
webpage loading if MUX has been released. On CONNECTION_CLOSE reception,
the client will reopen a new QUIC connection.
When MUX is released, a CONNECTION_CLOSE frame should be emitted. This
will ensure that the client does not keep using a half-dead connection.
The app protocol layer is responsible for providing the error code via the
release callback. For HTTP/3, NO_ERROR is used as specified in RFC 9114. If
no release callback is provided, the generic QUIC NO_ERROR code is used. Note
that a graceful shutdown is used: quic_conn must emit a CONNECTION_CLOSE
frame when possible. This will be provided in another patch.
This change should limit the risk of browsers stuck on webpage loading
if MUX has been released. On CONNECTION_CLOSE reception, the client will
reopen a new QUIC connection.
Adjust qcc_emit_cc_app() to allow delaying the emission of a
CONNECTION_CLOSE. This will only set the error code but the quic-conn
layer is not flagged for immediate close. The quic-conn will be
responsible to shut the connection when deemed suitable.
This change will allow to implement application graceful shutdown, such
as HTTP/3 with GOAWAY emission. This will allow to emit closing frames
on MUX release. Once all work is done at the lower layer, the quic-conn
should emit a CONNECTION_CLOSE with the registered error code.
Define a new structure quic_err to abstract a QUIC error type. This
allows to easily differentiate a transport and an application error
code. This simplifies error transmission from QUIC MUX and H3 layers.
This new type is defined in quic_frame module. It is used to replace
<err_code> field in <quic_conn>. QUIC_FL_CONN_APP_ALERT flag is removed
as it is now useless.
Utility functions are defined to be able to quickly instantiate
transport, tls and application errors.
Reception is disabled as soon as a CONNECTION_CLOSE emission is
required. An early return is done on qc_lstnr_pkt_rcv() to implement
this.
This condition is not functional if the error code sent is NO_ERROR
(0x00). To fix this, check the quic-conn flags instead of the
error code.
Currently this bug has no impact as NO_ERROR emission is not used.
This can be backported up to 2.6.
A bug in the thread dumper was introduced by commit 00c27b50c ("MEDIUM:
debug: make the thread dumper not rely on a thread mask anymore"). If
two or more threads try to trigger a thread dump exactly at the same
time, the second one may loop indefinitely trying to set the value to 1
while the other ones will wait for it to finish dumping before leaving.
This is a consequence of a logic change using thread numbers instead of
a thread mask, as threads do not need to see all other ones there anymore.
No backport is needed, this is only for 2.7.
Implement support for STOP_SENDING frame parsing. The stream is reset
as specified by RFC 9000. This will automatically interrupt all future
send operations in qc_send(). A RESET_STREAM will be sent with the code
extracted from the original STOP_SENDING frame.
Implement functions to be able to reset a stream via RESET_STREAM.
If needed, a qcs instance is flagged with QC_SF_TO_RESET to schedule a
stream reset. This will interrupt all future send operations.
On stream emission, if a stream is flagged with QC_SF_TO_RESET, a
RESET_STREAM frame is generated and emitted to the transport layer. If
this operation succeeds, the stream is locally closed. If upper layer is
instantiated, error flag is set on it.
Adjust the condition to detach a qcs instance: if the stream is not locally
closed, it is not directly freed. This should improve stream closing by
ensuring that either FIN or a RESET_STREAM is sent before destroying it.
Implement a basic state machine to represent stream lifecycle. By
default a stream is idle. It is marked as open when sending or receiving
the first data on a stream.
Bidirectional streams have two states to represent the closing on both
receive and send channels. This distinction does not exist for
unidirectional streams, which pass automatically from the open to the
close state.
This patch is mostly internal and has a limited visible impact. Some
behaviors are slightly updated:
* closed streams are garbage collected at the start of the io handler
* send operations are interrupted if a stream is closed locally
Outside of this, there is no functional change. However, some additional
BUG_ON guards are implemented to ensure that we do not conduct invalid
operation on a stream. This should strengthen the code safety. Also,
stream states are displayed on trace which should help debugging.
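A reduced sketch of the lifecycle described above; the state names and
helpers are invented for the example and do not match the actual QCS code:

  #include <stdio.h>

  enum qcs_state {
      QCS_ST_IDLE,    /* stream not used yet */
      QCS_ST_OPEN,    /* first data sent or received */
      QCS_ST_HLOC,    /* locally closed (send side done) */
      QCS_ST_HREM,    /* remotely closed (receive side done) */
      QCS_ST_CLOSED,  /* both sides closed */
  };

  /* closing one direction of a bidirectional stream; closing the second
   * direction brings the stream to the fully closed state
   */
  static enum qcs_state qcs_close_local(enum qcs_state st)
  {
      return (st == QCS_ST_HREM) ? QCS_ST_CLOSED : QCS_ST_HLOC;
  }

  static enum qcs_state qcs_close_remote(enum qcs_state st)
  {
      return (st == QCS_ST_HLOC) ? QCS_ST_CLOSED : QCS_ST_HREM;
  }

  int main(void)
  {
      enum qcs_state st = QCS_ST_OPEN;

      st = qcs_close_remote(st); /* FIN received */
      st = qcs_close_local(st);  /* FIN or RESET_STREAM sent */
      printf("final state: %d (closed=%d)\n", st, st == QCS_ST_CLOSED);
      return 0;
  }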
MAX_STREAM_DATA can be used as the first frame of a stream. In this
case, the stream should be opened, if it respects flow-control limit. To
implement this, simply replace plain lookup in stream tree by
qcc_get_qcs() at the start of the parsing function. This automatically
takes care of opening the stream if not already done.
As specified by RFC 9000, if MAX_STREAM_DATA is received for a
receive-only stream, a STREAM_STATE_ERROR connection error is emitted.
Improve return path for qcc_recv() on STREAM parsing. It returns 0 on
success. On error, a non-zero value is returned which indicates to the
caller that the packet containing the frame should not be acknowledged.
When qcc_recv() generates a CONNECTION_CLOSE or RESET_STREAM, either
directly or via qcc_get_qcs(), an error is returned which ensures that no
acknowledgement is generated. This required an adjustment on
qcc_get_qcs() API which now returns a success/error code. The stream
instance is returned via a new out argument.
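A hedged sketch of the API shape only (stub type, invented body): the return
value carries the success/error status while the stream instance comes back
through the out argument:

  #include <stdio.h>
  #include <stddef.h>

  struct qcs { int id; }; /* stub for illustration */

  /* returns 0 on success, non-zero on error (meaning the packet containing
   * the frame must not be acknowledged); hands back the stream via <out>
   */
  static int qcc_get_qcs_sketch(int stream_id, struct qcs **out)
  {
      static struct qcs known = { .id = 4 };

      *out = NULL;
      if (stream_id != known.id)
          return 1; /* e.g. a STREAM_STATE_ERROR was emitted */
      *out = &known;
      return 0;
  }

  int main(void)
  {
      struct qcs *qcs;

      if (qcc_get_qcs_sketch(4, &qcs) == 0)
          printf("got stream %d\n", qcs->id);
      if (qcc_get_qcs_sketch(7, &qcs) != 0)
          printf("error: do not acknowledge the packet\n");
      return 0;
  }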
Extend the function qcc_get_qcs() to be able to filter send/receive-only
unidirectional streams. A connection error STREAM_STATE_ERROR is emitted
if this new filter does not match.
This will be useful when various frames handlers are converted with
qcc_get_qcs(). Depending on the frame type, it will be easy to filter on
the forbidden stream types as specified in RFC 9000.
Implement a simple function to notify a possible subscriber or wake up
the upper layer if a special condition happens on a stream. For the
moment, this is only used to replace identical code in
qc_wake_some_streams().
Rename qc_release_detached_streams() to qc_purge_streams(). The aim is
to have a more generic name. It's expected to complete this function to
add other criteria to purge dead streams.
Also the function documentation has been corrected. It does not return a
number of streams. Instead it is a boolean value, set to true if at least
one stream was released.
Rename both qcc_open_stream_local/remote() functions to
qcc_init_stream_local/remote(). This change is purely cosmetic. It will
reduce the ambiguity with the soon to be implemented OPEN states for
QCS instances.
QUIC MUX was not able to correctly deal with server response using
chunked transfer-encoding. All data will be transferred correctly to the
client but the FIN bit is missing. The transfer will never stop as the
client will wait indefinitely for the FIN bit.
This bug happened because the HTX message representing a chunked encoded
payload contains a final empty block with the EOM flag. However,
emission is skipped by QUIC MUX if there is no data to transfer. To fix
this, the condition was completed to ensure that there is no need to
send the FIN signal. If this is false, data emission will proceed even
if there is no data: this will generate an empty QUIC STREAM frame with
FIN set which will mark the end of the transfer.
To ensure that a FIN STREAM frame is sent only one time,
QC_SF_FIN_STREAM is reset on send confirmation from the transport in
qcc_streams_sent_done().
This bug was reproduced when dealing with chunked transfer-encoding
response for the HTTP server.
This must be backported up to 2.6.
When building with gcc-5, one can see this warning:
src/http_fetch.c: In function 'smp_fetch_meth':
src/http_fetch.c:356:6: warning: 'htx' may be used uninitialized in this function [-Wmaybe-uninitialized]
sl = http_get_stline(htx);
^
It's wrong since the only way to reach this code is to have met the
same condition a few lines before and initialized the htx variable.
The reason in fact is that the same test happens on different variables
of distinct types, so the compiler possibly doesn't know that the
condition is the same. Newer gcc versions do not have this problem.
Let's just move the assignment earlier and have the exact same test,
as it's sufficient to shut this up. This may have to be backported
to 2.6 since the code is the same there.
Between 1.8 and 1.9 commit d9e7e36c6 ("BUG/MEDIUM: epoll/threads: use one
epoll_fd per thread") split the epoll poller to use one poller per thread
(and this was backported to 1.8). This patch added a call to epoll_ctl(DEL)
on return from the I/O handler as a safe way to deal with a detected thread
migration when that code was still quite fragile. One aspect of this choice
was that by then we wanted to maintain support for the rare old bogus epoll
implementations that failed to remove events on close(), so risking to lose
the event was not an option.
Later in 2.5, commit 200bd50b7 ("MEDIUM: fd: rely more on fd_update_events()
to detect changes") changed the code to perform most of the operations
inside fd_update_events(), but it maintained that oddity, to the point that
strictly all pollers except epoll now just add an update to be dealt with at
the next round.
This approach is much more efficient, because under load and server-side
connection reuse, it's perfectly possible for a thread to see the same FD
several times in a poll loop, the first time to relinquish it after a
migration, then the other thread makes a request, gets its response, and
still during the same loop for the first one, grabbing an idle connection
to send a request and wait for a response will program a new update on
this FD. By using a synchronous epoll_ctl(DEL), we effectively lose the
opportunity to aggregate certain changes in the same update.
Some tests performed locally with 8 threads and one server show that on
average, by using an update instead of a synchronous call, we reduce the
number of epoll_ctl() calls by 25-30% (under low loads it will probably
not change anything).
So this patch implements the same method for all pollers and replaces the
synchronous epoll_ctl() with an update.
Since commit d1480cc8 ("BUG/MEDIUM: stream-int: do not rely on the
connection error once established"), connection errors are not handled
anymore by the stream-connector once established. But it is a problem for
the H1 mux when an error occurred during a synchronous send in
h1_snd_buf(), because in this case the connection error is just missed. It
leads to a session leak until a timeout is reached (client or server).
To fix the bug, the connection flags are now checked in h1_snd_buf(). If
there is an error, it is reported to the stconn level by setting SF_FL_ERROR
flags. But only if there is no pending data in the input buffer.
This patch should solve the issue #1774. It must be backported as far as
2.2.
When we want to establish a tunnel on a side, we wait until all data have
been flushed from the buffer. At this stage the other side is at least in
the DONE state. But
there is no real reason to wait. We are already in DONE state on its
side. So all the HTTP message was already forwarded or planned to be
forwarded. Depending on the scheduling, if the mux already started to
transfer tunneled data, these data may block the switch to the TUNNEL
state and thus be blocked indefinitely.
This bug exists since the early days of HTX. Maybe it was mandatory back
then, but today it seems useless. I honestly don't remember why this
prerequisite was
added. So be careful during the backports.
This patch should be backported with caution. At least as far as 2.4. For
2.2 and 2.0, it seems to be mandatory too. But a review must be performed.
When a buffer area is cast to an htx message, depending on the method
used, the underlying buffer may be updated or not. The htxbuf() function
does not change the buffer state. We assume the buffer was already prepared
to store an htx message. htx_from_buf() on its side, updates the
buffer. With the first function, we only need to commit changes to the
underlying buffer if the htx message is changed. With the last one, we
must always commit the changes. The idea is to be sure to keep non-empty
HTX messages while an empty message must lead to an empty buffer after
commit.
All that said, in h1_process_demux() the changes are not always
committed as expected. When the demux is blocked, we just leave the
function. So it is possible to have an empty htx message stored in a buffer
that appears non-empty. It is not critical, but the buffer cannot be
released in this state. And we should always release an empty buffer.
This patch must be backported as far as 2.4.
The sd_notify API is not able to change the "Active:" line in "systemctl
status". However a message can still be displayed on a "Status:" line,
even if the service is still green and "active (running)".
When startup succeeds the Status will be set to "Ready.", upon a reload
it will be set to "Reloading Configuration." If the configuration load
succeeds, it is set to "Ready." again. However if the reload failed, it will be set to
"Reload failed!".
Keep in mind that the "Active:" line won't change upon a reload failure,
and will still be green.
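For illustration, assuming a build linked against libsystemd, the status
strings could be published like this (a sketch of the sd_notify API usage,
not HAProxy's actual code):

  #include <systemd/sd-daemon.h>

  int main(void)
  {
      /* startup completed */
      sd_notify(0, "READY=1\nSTATUS=Ready.\n");

      /* on reload start and on successful completion, respectively */
      sd_notify(0, "RELOADING=1\nSTATUS=Reloading Configuration.\n");
      sd_notify(0, "READY=1\nSTATUS=Ready.\n");

      /* if the reload failed, the unit stays active but the status changes */
      sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");
      return 0;
  }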
The "method" sample fetch does not perform any check on the stream existence
before using it. However, for errors triggered at the mux level, there is no
stream. When the log message is formatted, this sample fetch must fail. It
must also fail when it is called from a health-check.
This patch must be backported as far as 2.4.
From time to time, users complain to get 400-Bad-request responses for
totally valid CONNECT requests. After analysis, it is due to the H1 parser
performing an exact match between the authority and the host header value. For
non-CONNECT requests, it is valid. But for CONNECT requests the authority
must contain a port while it is often omitted from the host header value
(for default ports).
So, to be sure to not reject valid CONNECT requests, a basic authority
validation is now performed during the message parsing. In addition, the
host header value is normalized. It means the default port is removed if
possible.
This patch should solve the issue #1761. It must be backported to 2.6 and
probably as far as 2.4.
http_is_default_port() can be used to test if a port is a default HTTP/HTTPS
port. A scheme may be specified. In this case, it is used to detect default
ports, 80 for "http://" and 443 for "https://". Otherwise, with no scheme, both
are considered as default ports.
http_get_host_port() function can be used to get the port part of a host. It
will be used to get the port of an uri authority or a host header
value. This function only looks for a port starting from the end of the
host. It is the caller's responsibility to call it with a valid host value. An
indirect string is returned.
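A toy re-implementation of the idea only (plain C strings instead of ist
blocks, invented function name), scanning the host from the end to find the
port:

  #include <stdio.h>
  #include <string.h>
  #include <ctype.h>

  /* returns a pointer to the port part if any ("host:8080" -> "8080"),
   * or NULL when the host carries no port
   */
  static const char *toy_get_host_port(const char *host)
  {
      size_t len = strlen(host);

      while (len && isdigit((unsigned char)host[len - 1]))
          len--;
      if (len && host[len - 1] == ':' && host[len])
          return host + len;
      return NULL;
  }

  int main(void)
  {
      printf("%s\n", toy_get_host_port("example.org:8443"));     /* 8443 */
      printf("%p\n", (void *)toy_get_host_port("example.org"));  /* (nil) */
      return 0;
  }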
The scheme-based normalization does not properly handle the URI's userinfo,
if any. First, the authority parser is not called with the "no_userinfo"
parameter set. Then it is skipped from the URI normalization.
This patch must be backported as far as 2.4.
Patch 49f6f4b ("BUG/MEDIUM: peers: fix segfault using multiple bind on
peers sections") introduced possible NULL dereferences when parsing the
peers configuration.
Fix the issue by checking the return value of bind_conf_uniq_alloc().
This patch should be backported as far as 2.0.
As much as possible we should take care of not leaving bits from stopped
threads in shared thread masks. It can avoid issues like the previous
fix and will also make debugging less confusing.
When soft-stopping, there's a comparison between stopping_threads and
threads_enabled to make sure all threads are stopped, but this is not
correct and is racy since the threads_enabled bit is removed when a
thread is stopped but not its stopping_threads bit. The consequence is
that depending on timing, when stopping, if the first stopping thread
is fast enough to remove its bit from threads_enabled, the other threads
will see that stopping_threads doesn't match threads_enabled anymore and
will wait forever. As such the mask must be applied to stopping_threads
during the test. This issue was introduced in recent commit ef422ced9
("MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups"),
no backport is needed.
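A small sketch of the corrected test described above (standalone, with plain
arguments instead of the real global masks):

  #include <stdio.h>

  /* a group is fully stopped only when every still-enabled thread of the
   * group has its stopping bit set, so the stopping mask must be restricted
   * to the enabled threads before comparing
   */
  static int group_fully_stopped(unsigned long stopping_threads,
                                 unsigned long threads_enabled)
  {
      return (stopping_threads & threads_enabled) == threads_enabled;
  }

  int main(void)
  {
      /* thread 0 already dropped its enabled bit after stopping: without the
       * mask, comparing 0x1 against 0xE would make the others wait forever
       */
      printf("%d\n", group_fully_stopped(0x1UL, 0xEUL)); /* 0: others not done */
      printf("%d\n", group_fully_stopped(0xFUL, 0xEUL)); /* 1: all enabled ones stopped */
      return 0;
  }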
When several "early-hint" rules are used, we try, as far as possible, to
merge links into the same 103-early-hints response. However, it only works
if there are no ACLs. If an "early-hint" rule is not executed, an invalid
response is generated. The EOH block or the start-line may be missing,
depending on the rule order.
To fix the bug, we use the transaction status code. It is unused at this
stage. Thus, it is set to 103 when a 103-early-hints response is in
progress. And it is reset when the response is forwarded. In addition, the
response is forwarded if the next rule is an "early-hint" rule with an
ACL. This way, the response is always valid.
This patch must be backported as far as 2.2.
When an explicit "http-check send" rule is used, if it is the first one, it
is merged with the implicit rule created by the "option httpchk" statement. The
opposite is also true. The idea is to have only one send rule with the merged
info. It means info defined in the second rule overrides those defined in the
first one. However, if an element is not defined in the second rule, it must
be ignored, keeping this way info from the first rule. It works as expected
for the method, the uri and the request version. But it is not true for the
header list.
For instance, with the following statements, a x-forwarded-proto header is
added to healthcheck requests:
option httpchk
http-check send meth GET hdr x-forwarded-proto https
while by inverting the statements, no extra headers are added:
http-check send meth GET hdr x-forwarded-proto https
option httpchk
Now the old header list is overridden if the new one is not empty.
This patch should fix the issue #1772. It must be backported as far as 2.2.
Calls to free() are replaced by ha_free(). And otherwise, the pointers are
explicitly set to NULL after a release. There is no issue here but it could
help debugging sessions.
The peers didn't have their bind_conf thread mask nor group set, because
they're still not part of the global proxy list. Till 2.6 it seems it does
not have any visible impact, since most listener-oriented operations pass
through thread_mask() which detects null masks and turns them to
all_threads_mask. But starting with 2.7 it becomes a problem as it won't
permit these null masks anymore.
This patch duplicates (yes, sorry) the loop that applies to the frontend's
bind_conf, though it is simplified (no sharding, etc).
As the code is right now, it simply seems impossible to trigger the second
(and largest) part of the check when leaving thread_resolve_group_mask()
on success, so it looks like it might be removed.
No backport is needed, unless a report in 2.6 or earlier mentions an issue
with a null thread_mask.
Some generic frontend errors mention the bind_conf by its name as
"bind '%s'", but if this is used on peers "bind" lines it shows
"(null)" because the argument is set to NULL in the call to
bind_conf_uniq_alloc() instead of passing the argument. Fortunately
that's trivial to fix.
This may be backported to older versions.
Add a check on stream size when the stream is in state Size Known. In
this case, a STREAM frame cannot change the stream size. If this is not
respected, a CONNECTION_CLOSE with FINAL_SIZE_ERROR will be emitted as
specified in the RFC 9000.
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
better aligns with the QUIC RFC 9000 which uses the "Size Known" state
definition. This change is purely cosmetic.
Review the whole API used to access/instantiate qcs.
A public function qcc_open_stream_local() is available to the
application protocol layer. It allows easily opening a local stream.
The ID is automatically attributed to the next one available.
For remote streams, qcc_open_stream_remote() has been implemented. It
will automatically take care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs() which can
be used for each qcc_recv*() operation. For the moment, it is only used
for STREAM frames via qcc_recv(), but soon it will be implemented for
other frame types which can also be used to open a new stream.
qcs_new() and qcs_free() have been restricted to the QUIC MUX only as
they are now reserved for internal usage.
This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer intent and helps to ensure that a
stream is not automatically opened when not desired.
Implement a function <qcs_sc> to easily access the stconn associated
with a QCS. This takes care of qcs.sd which may be NULL, for example for
unidirectional streams.
It is expected that in the future when implementing
STOP_SENDING/RESET_STREAM, the stconn must be notified about the event. This
accessor will allow to easily test if the stconn is instantiated or not.
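A hedged sketch of the accessor with stub types (the real qcs/sedesc/stconn
structures are much richer); the point is simply the NULL-safe double
dereference:

  #include <stdio.h>
  #include <stddef.h>

  struct stconn { int dummy; };
  struct sedesc { struct stconn *sc; };
  struct qcs    { struct sedesc *sd; };

  /* return the stconn attached to a QCS, or NULL when none exists
   * (e.g. unidirectional streams have no sd)
   */
  static struct stconn *qcs_sc_sketch(const struct qcs *qcs)
  {
      return qcs->sd ? qcs->sd->sc : NULL;
  }

  int main(void)
  {
      struct stconn sc;
      struct sedesc sd = { .sc = &sc };
      struct qcs bidi = { .sd = &sd };
      struct qcs uni  = { .sd = NULL };

      printf("bidi has sc: %d\n", qcs_sc_sketch(&bidi) != NULL); /* 1 */
      printf("uni has sc:  %d\n", qcs_sc_sketch(&uni) != NULL);  /* 0 */
      return 0;
  }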
The function mworker_pipe_register_per_thread() is called this way
because the master first used pipes instead of socketpairs.
Rename mworker_pipe_register_per_thread() to
mworker_sockpair_register_per_thread() in order to be more consistent.
Also update a comment inside the function.
The worker was previously changing the iocb of the socketpair in the
worker by mworker_accept_wrapper(). However, it was done using
fd_insert() instead of changing directly the callback in the
fdtab[].iocb pointer.
This patch cleans up this part by removing fd_insert().
It also stops setting tid_bit on the thread mask; the socketpair will be
handled by any thread from now on.
Let's rely on tg->threads_enabled there to detect running threads. We
should probably have a dedicated function for this in order to simplify
the code and avoid the risk of using the wrong group ID.
Commit ef422ced9 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") moved the stopping_threads mask to per-group, but one
test in the loop preserved its global value instead, resulting in stopping
threads never sleeping on stop and eating 100% CPU until all were stopped.
No backport is needed.
Commit 377e37a80 ("MINOR: tinfo: add the mask of enabled threads in each
group") forgot -1 on the tgid, thus the groups was not always correctly
tested, which is visible only when running with more than one group. No
backport is needed.
Building with threads and without thread dump (e.g. macos, freebsd)
warns that thread_dump_state is unused. This happened in fact with
recent commit 1229ef312 ("MINOR: wdt: do not rely on threads_to_dump
anymore"). The solution would be to mark it unused, but after a
second thought, it can be convenient to keep it exported to help
debug crashes, so let's export it again. It's just not referenced in
include files since it's not needed outside.
The thread mask is too short to dump more than 64 bits. Thus here we're
using a different approach with two counters, one for the next thread ID
to dump (which always exists, as it's looked up), and the second one for
the number of threads done dumping. This allows to dump threads in ascending
order then to let them wait for all others to be done, then to leave without
the risk of an overlapping dump until the done count is zero again.
This allows to remove threads_to_dump which was the last non-FD variable
using a global thread mask.
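A very reduced, single-threaded model of the two counters described above
(invented names, no real synchronization):

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_uint next_to_dump; /* ID of the next thread expected to dump */
  static atomic_uint done_dumping; /* number of threads done dumping */

  static void dump_one_thread(unsigned my_id, unsigned nb_threads)
  {
      while (atomic_load(&next_to_dump) != my_id)
          ; /* wait for our turn so dumps come out in ascending order */

      printf("thread %u dumped\n", my_id);
      atomic_fetch_add(&next_to_dump, 1);

      /* count ourselves as done; the last participant resets both counters
       * so that a new dump may only start once everyone has left
       */
      if (atomic_fetch_add(&done_dumping, 1) + 1 == nb_threads) {
          atomic_store(&done_dumping, 0);
          atomic_store(&next_to_dump, 0);
      }
  }

  int main(void)
  {
      for (unsigned t = 0; t < 3; t++)
          dump_one_thread(t, 3);
      return 0;
  }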
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
The debug_handler() function waits for other threads to join, but does
not mark itself as harmless, so if at the same time another thread tries
to isolate, this may deadlock. In practice this does not happen as the
signal is received during epoll_wait() hence under harmless mode, but
it can possibly arrive under other conditions.
In order to improve this, while waiting for other threads to join, we're
now marking the current thread as harmless, as it's doing nothing but
waiting for the other ones. This way another harmless waiter will be able
to proceed. It's valid to do this since we're not doing anything else in
this loop.
One improvement could be to also check for the thread being idle and
marking it idle in addition to harmless, so that it can even release a
full isolation requester. But that really doesn't look worth it.
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.
What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.
Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.
With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.
This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck nor starved thread was found.
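A reduced model of the scheme, as a standalone sketch with invented names
(the real thread_isolate() also scans the per-group harmless/idle masks,
which is omitted here):

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_uint rdv_requests;          /* threads requesting isolation */
  static atomic_uint isolated_thread = ~0u; /* ~0u means "nobody isolated" */

  static int try_isolate(unsigned my_id)
  {
      unsigned expected = ~0u;

      atomic_fetch_add(&rdv_requests, 1);
      /* the real code waits here until all other threads are harmless */
      if (atomic_compare_exchange_strong(&isolated_thread, &expected, my_id))
          return 1;                       /* we won: we are alone, do the work */
      atomic_fetch_sub(&rdv_requests, 1); /* lost: the caller would retry */
      return 0;
  }

  static void release_isolation(void)
  {
      atomic_store(&isolated_thread, ~0u);
      atomic_fetch_sub(&rdv_requests, 1);
  }

  int main(void)
  {
      printf("thread 3 isolates: %d\n", try_isolate(3)); /* 1 */
      printf("thread 5 isolates: %d\n", try_isolate(5)); /* 0 */
      release_isolation();
      printf("thread 5 retries:  %d\n", try_isolate(5)); /* 1 */
      return 0;
  }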
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.
Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).
Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.
There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.
Given that it's not used, let's just drop it entirely before it gets
used by accident.
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_tkillall() needs
this.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). clock_report_idle()
needs this.
This also implies not using all_threads_mask anymore but taking the mask
from the tgroup since it becomes relative now.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). wdt_handler() needs
this.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_thread_dump() needs
this.
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to make clear which ones are counted there (in case in the future we allow
to stop some).
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However we'll soon
frequently need the mask corresponding to the group ID and the risk of
getting it wrong with the +1 or shifting 1 instead of 1UL is important,
so let's store the tgid_bit there.
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
frontend test
bind :8888 maxconn 10 thread 1-2
rate-limit sessions 5
Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
Right now when an inter-thread wakeup happens, we first check if the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean since the sleeping mask cannot be
entirely trusted since a thread that's about to wake up will already have
its sleeping bit removed.
This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
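A reduced sketch of the principle (the flag values and structure are
illustrative, not HAProxy's): a single atomic fetch-and-or marks the thread
as notified, and the returned previous value tells whether the target was
sleeping and not yet notified:

  #include <stdatomic.h>
  #include <stdio.h>

  #define TH_FL_SLEEPING 0x01u
  #define TH_FL_NOTIFIED 0x02u

  struct toy_thread_ctx {
      atomic_uint flags;
  };

  static struct toy_thread_ctx threads[4];

  static void wake_thread_sketch(unsigned thr)
  {
      unsigned prev = atomic_fetch_or(&threads[thr].flags, TH_FL_NOTIFIED);

      if ((prev & (TH_FL_SLEEPING | TH_FL_NOTIFIED)) == TH_FL_SLEEPING)
          printf("thread %u was sleeping and not notified: poke its poller\n", thr);
      else
          printf("thread %u already awake or already notified: nothing to do\n", thr);
  }

  int main(void)
  {
      atomic_fetch_or(&threads[1].flags, TH_FL_SLEEPING);
      wake_thread_sketch(1); /* first notification wakes the poller */
      wake_thread_sketch(1); /* second one sees TH_FL_NOTIFIED and skips */
      wake_thread_sketch(2); /* never slept: nothing to do */
      return 0;
  }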
When enabling an FD that's only bound to another thread, instead of
always picking the first one, let's pick a random one. This is rarely
used (enabling a frontend, or session rate-limiting period ending),
and has greater chances of avoiding that some obscure corner cases
could degenerate into a poorly distributed load.
Till now, update_fd_polling() used to check if all the target threads were
sleeping, and only then would wake an owning thread up. This causes several
problems among which the need for the sleeping_thread_mask and the fact that
by the time we wake one thread up, it has changed.
This commit changes this by leaving it to wake_thread() to perform this
check on the selected thread, since wake_thread() is already capable of
doing this now. Concretely speaking, for updt_fd_polling() it will mean
performing one computation of an ffsl() before knowing the sleeping status
on a global FD state change (which is very rare and not important here,
as it basically happens after relaxing a rate-limit (i.e. once a second
at best) or after enabling a frontend from the CLI); thus we don't care.
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
that. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
Now that the inter-task wakeups are cheap, there's no point in using
task_instant_wakeup() anymore when dequeueing tasks. The use of the
regular task_wakeup() is sufficient there and will preserve a better
fairness: the test that went from 40k to 570k RPS now gives 580k RPS
(down from 585k RPS with previous commit). This essentially reverts
commit 27fab1dcb ("MEDIUM: queue: use tasklet_instant_wakeup() to
wake tasks").
Since we don't mix tasks from different threads in the run queues
anymore, we don't need to use the eb32sc_ trees and we can switch
to the regular eb32 ones. This uses cheaper lookup and insert code,
and a 16-thread test on the queues shows a performance increase
from 570k RPS to 585k RPS.
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared cache.
Since we now know that each thread has its own shared cache, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
grq_total was only used to know how many tasks were being queued in the
global runqueue for stats purposes, and that was transferred to the per
thread rq_total counter once assigned. We don't need this anymore since
we know where they are, so let's just directly update rq_total and drop
that one.
Since we only use the shared runqueue to put tasks only assigned to
known threads, let's move that runqueue to each of these threads. The
goal will be to arrange an N*(N-1) mesh instead of a central contention
point.
The global_rqueue_ticks had to be dropped (for good) since we'll now
use the per-thread rqueue_ticks counter for both trees.
A few points to note:
- the rq_lock still remains the global one for now so there should not
be any gain in doing this, but should this trigger any regression, it
is important to detect whether it's related to the lock or to the tree.
- there's no more reason for using the scope-based version of the ebtree
now, we could switch back to the regular eb32_tree.
- it's worth checking if we still need TASK_GLOBAL (probably only to
delete a task in one's own shared queue maybe).
The runqueue ticks counter is per-thread and wasn't initially meant to
be shared. We'll soon have to share it so let's make it atomic. It's
only updated when waking up a task, and no performance difference was
observed. It was moved in the thread_ctx struct so that it doesn't
pollute the local cache line when it's later updated by other threads.
TASK_SHARED_WQ was set upon task creation and never changed afterwards.
Thus if a task was created to run anywhere (e.g. a check or a Lua task),
all its timers would always pass through the shared timers queue with a
lock. Now we know that tid<0 indicates a shared task, so we can use that
to decide whether or not to use the shared queue. The task might be
migrated using task_set_affinity() but it's always dequeued first so
the check will still be valid.
Not only this removes a flag that's difficult to keep synchronized with
the thread ID, but it should significantly lower the load on systems with
many checks. A quick test with 5000 servers and fast checks that were
saturating the CPU shows that the check rate increased by 20% (hence the
CPU usage dropped by 17%). It's worth noting that run_task_lists() almost
no longer appears in perf top now.
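The decision described above, as a toy illustration (stub task type; the
real code obviously manipulates the actual wait queues):

  #include <stdio.h>

  /* a negative tid means the task may run anywhere, so its timer goes to
   * the shared, locked wait queue; otherwise it uses the owner thread's
   * local queue
   */
  struct toy_task { int tid; };

  static const char *wq_for_task(const struct toy_task *t)
  {
      return (t->tid < 0) ? "shared wait queue (locked)"
                          : "owner thread's local wait queue";
  }

  int main(void)
  {
      struct toy_task check = { .tid = -1 }; /* e.g. a health check or Lua task */
      struct toy_task strm  = { .tid = 2 };

      printf("check -> %s\n", wq_for_task(&check));
      printf("strm  -> %s\n", wq_for_task(&strm));
      return 0;
  }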
This removes the mask-based variant so that from now on the low-level
function becomes appctx_new_on() and it takes either a thread number or
a negative value for "any thread". This way we can use task_new_on() and
task_new_anywhere() instead of task_new() which will soon disappear.
A few places still use the task's thread mask. Now we know that it's always
either one bit or all bits of all_threads_mask, so we can replace it with
either 1<<tid or all_threads_mask depending on what's expected.
It's worth noting that the global_tasks_mask is still set this way and
that it's reaching its limits. Similarly, the task_new() API would deserve
an update to stop using a thread mask and use a thread number instead.
Similarly, task_set_affinity() should be updated to directly take a
thread number.
At this point the task's thread mask is not used anymore.
At several places we need to figure the ID of the first thread allowed
to run a task. Till now this was performed using my_ffsl(t->thread_mask)
but since we now have the thread ID stored into the task, let's use it
instead. This is tagged major because it starts to assume that tid<0 is
strictly equivalent to atleast2(thread_mask), and that, as such, the
current thread is among the allowed ones.
The tasks currently rely on a mask but do not have an assigned thread ID,
contrary to tasklets. However, in practice they're either running on a
single thread or on any thread, so that it will be worth simplifying all
this in order to ease the transition to the thread groups.
This patch introduces a "tid" field in the task struct, that's either
the number of the thread the task is attached to, or a negative value
if the task is not bound to a thread (i.e. its mask is all_threads_mask).
The new ID is only set and updated but not used yet.
The thread mask will not be used anymore; instead only the thread ID
is used. Interestingly it was already implemented in the parsing but
not used. The single/multi thread argument is not needed anymore since
it's sufficient to pass tid<0 to get a multi-threaded task/tasklet.
This is in preparation for the removal of the thread_mask in tasks as
only this debug code was using it!
Before 2.3, after an async crypto processing or on session close, the engine's
async file descriptors were removed from the fdtab but not closed, because
it is the engine which has created the file descriptor and it is responsible
for closing it. In 2.3 the fd_remove() call was replaced by fd_stop_both()
which stops the polling but does not remove the fd from the fdtab, so the
fd remains indefinitely in the fdtab.
A simple replacement by fd_delete() is not a valid fix because fd_delete()
removes the fd from the fdtab but also closes the fd. And the fd will be
closed twice: by the haproxy's core and by the engine itself.
Instead, let's set FD_DISOWN on the FD before calling fd_delete() which will
take care of not closing it.
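A minimal sketch of the fix, assuming the flag lives in the FD's state field
like the other FD_* flags (the exact SSL async call site is not shown):
    /* the engine owns this fd: unregister it without closing it */
    HA_ATOMIC_OR(&fdtab[fd].state, FD_DISOWN);
    fd_delete(fd);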
This patch must be backported on branches >= 2.3, and it relies on this
previous patch:
MINOR: fd: add a new FD_DISOWN flag to prevent from closing a deleted FD
As mentioned in the patch above, a different flag will be needed in 2.3.
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.
This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fd_tab. This way it's totally unregistered from the poller but still
open.
This patch must be backported on branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted on 2.3
because state flags were stored in a different way (via bits in the
structure).
Adjust FIN signal on Rx path for the application layer : ensure that the
receive buffer has no gap.
Without this extra condition, FIN was signalled as soon as the STREAM
frame with FIN was received, even if we were still waiting to receive
missing offsets.
This bug could have led to incomplete requests read from the
application protocol. However, in practice this bug has very little
chance to happen as the application layer ensures that the demuxed frame
length is equivalent to the buffer data size. The only way for this to
happen is to receive the STREAM frame with FIN while the H3 demuxer is
still processing a frame which is not the last one of the stream.
This must be backported up to 2.6. The previous patch on ncbuf is
required for the newly defined function ncb_is_fragmented().
MINOR: ncbuf: implement ncb_is_fragmented()
Implement a new status function for ncbuf. It allows to quickly report
if a buffer contains data in a fragmented way, i.e. with gaps in between
or at start of the buffer.
To summarize, a buffer is considered as non-fragmented in the following
cases :
- a null or empty buffer
- a full buffer
- a buffer containing exactly one data block at the beginning, followed
by a gap until the end.
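A hedged sketch of the intended use on the Rx path (the field names below are
assumptions):
    /* only signal FIN to the application layer once the buffer has no gap */
    if (fin && !ncb_is_fragmented(&qcs->rx.ncbuf)) {
        /* all prior offsets were received: FIN can be reported */
    }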
Since a previous refactoring, the application protocol layer is not required
anymore to call qcs_consume(). This function is now automatically used
by the MUX itself.
First we add a loop around recvfrom() into the most low level I/O handler
quic_sock_fd_iocb() to collect as many datagrams as possible during
its tasklet wakeup, with a limit: we recvfrom() at most "maxpollevents"
datagrams. Furthermore we add a local task list into the datagram handler
quic_lstnr_dghdlr() which is passed to the first datagram parser qc_lstnr_pkt_rcv().
This latter parser only identifies the connection associated to the datagrams then
wakes up the highest level packet parser I/O handlers (quic_conn.*io_cb()) after
it is done, thanks to the call to tasklet_wakeup_after() which replaces from now on
the call to tasklet_wakeup(). This should drastically reduce the latency and the
chances of filling up the RX buffers at the QUIC connection level as reported in
GH #1737 by Tristan.
These modifications depend on this commit:
"MINOR: task: Add tasklet_wakeup_after()"
Must be backported to 2.6 with the previous commit.
Remove the call to qc_list_all_rx_pkts() which prints messages on stderr
during RX buffer overruns and add a new counter for the number of dropped packets
because of such events.
Must be backported to 2.6
When the connection RX buffer is full, the received packets are dropped.
Some of them were not taken into account by the ->dropped_pkt counter.
This very simple patch has no impact at all on the packet handling workflow.
Must be backported to 2.6.
We want to be able to schedule a tasklet onto a thread after the current tasklet
is done. What we have to do is to insert this tasklet at the head of the thread
task list. Furthermore, we would like to serialize the tasklets. They must be
run in the same order as the order in which they have been scheduled. This is
implemented by passing a list of tasklets as parameter (see the <head> parameter)
which must be reused for subsequent calls.
_tasklet_wakeup_after_on() is implemented to accomplish this job.
tasklet_wakeup_after_on() and tasklet_wakeup_after() are only wrapper macros around
_tasklet_wakeup_after_on(). tasklet_wakeup_after_on() does exactly the same thing
as _tasklet_wakeup_after_on() without having to pass the filename and line in the
file as parameters (useful when DEBUG_TASK is enabled).
tasklet_wakeup_after() also hides the usage of the thread parameter which is
the <tl> tasklet's thread ID.
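An illustrative usage sketch based on the description above; the convention of
reusing the returned <head> is an assumption here, not a statement of the exact
prototype:
    struct list *head = NULL;
    /* both tasklets will run right after the current one, in this order */
    head = tasklet_wakeup_after(head, tl1);
    head = tasklet_wakeup_after(head, tl2);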
Return QPACK_DECOMPRESSION_FAILED error code when dealing with dynamic
table references. This is justified as for now haproxy does not
implement dynamic table support and advertises a zero-sized table.
The H3 calling function will thus reuse this code in a CONNECTION_CLOSE
frame, in conformance with the QPACK RFC.
This proper error management allows to remove obsolete ABORT_NOW guards.
This is a complement to partial fix from commit
debaa04f9e
BUG/MINOR: qpack: abort on dynamic index field line decoding
The main objective is to fix coverity report about usage of
uninitialized variable when receiving dynamic table references. These
references are invalid as for the moment haproxy advertises a 0-sized
dynamic table. An ABORT_NOW clause is present to catch this. A following
patch will clean this up in order to properly handle QPACK errors with
CONNECTION_CLOSE.
This should fix github issue #1753.
No need to backport as this was introduced in the current dev branch.
Emit a CONNECTION_CLOSE if HEADERS parsing function returns an error.
This is useful to remove previous ABORT_NOW guards.
For the moment, the whole connection is closed. In the future, it may be
justified to only reset the faulting stream in some cases. This requires
the implementation of RESET_STREAM emission.
The local variable 't' was renamed 'static_tbl'. Fix its name in the
qpack_debug_printf() statement which is activated only with QPACK_DEBUG
mode.
No need to backport as this was introduced in current dev branch.
This patch adds a filter to limit bandwidth at the stream level. Several
filters can be defined. A filter may limit incoming data (upload) or
outgoing data (download). The limit can be defined per-stream or shared via
a stick-table. For a given stream, the bandwidth limitation filters can be
enabled using the "set-bandwidth-limit" action.
A bandwidth limitation filter can be used indifferently for HTTP or TCP
streams. For HTTP streams, only the payload transfer is limited. The filter is
pretty simple for now. But it was designed to be extensible. The current
design tries, as far as possible, to never exceed the limit. There is no
burst.
In GH #1760 (which is marked as being a feature), there were compilation
errors on MacOS which could be reproduced in Linux when building 32-bit code
(-m32 gcc option). Most of them were due to mixing variable types in the QUIC_MIN macro
or using size_t type in place of uint64_t type.
Must be backported to 2.6.
This previous commit:
"BUG/MAJOR: Big RX dgrams leak when fulfilling a buffer"
partially fixed an RX dgram memleak. There is a missing break in the loop which
looks for the first datagram attached to an RX buffer dgrams list which may be
reused (because consumed by the connection thread). So when several dgrams were
consumed by the connection thread and are present in the RX buffer list, some are
leaked because they are never reused. They are removed from their list.
Furthermore, as commented in this patch, there is always at least one dgram
object attached to an RX dgrams list, excepted the first time we enter this
I/O handler function for this RX buffer. So, there is no need to use a loop
to lookup and reuse the first datagram in an RX buffer dgrams list.
This issue was reproduced with the quiche client with plenty of POST requests
(100000 streams):
cargo run --bin quiche-client -- https://127.0.0.1:8080/helloworld.html
--no-verify -n 100000 --method POST --body /var/www/html/helloworld.html
and could be reproduced with GET requests.
This bug was reported by Tristan in GH #1749.
Must be backported to 2.6.
When entering the quic_sock_fd_iocb() I/O handler, which is responsible
for receiving datagrams via recvfrom(), the first thing which is done is to try
to reuse a dgram object containing metadata about the received
datagrams which has been consumed by the connection thread.
If this object could not be used for any reason, when we
"goto out" of this function, we must release the memory allocated
for this object, otherwise it will leak. Most of the time, this happened
when we filled up a buffer, as reported in GH #1749 by Tristan. This is why we
added a pool_free() call just before the out label. We mark <new_dgram> as NULL
when it could successfully be used.
Thanks to Tristan and Willy for their participation on this issue.
Must be backported to 2.6.
After having filled up a buffer, then marked it as full, we must
consume the remaining space. But to do that, and not to erase the
already existing data, we must check that there is no remaining data
after the tail of the buffer (between the tail and the head).
This is done by adding a condition to test that adding the number of
bytes from the remaining contiguous space to the tail does not
pass the wrapping position in the buffer.
Must be backported to 2.6.
Sometimes using "debug dev memstats" can be frustrating because all
pool allocations are reported through pool-os.h and that's all.
But in practice there's nothing wrong with also intercepting pool_alloc,
pool_free and pool_zalloc and report their call counts and locations,
so that's what this patch does. It only uses an alternate set of macros
for these 3 calls when DEBUG_MEM_STATS is defined. The outputs are
reported as P_ALLOC (for both pool_alloc() and pool_zalloc()) and
P_FREE (for pool_free()).
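The interception pattern can be sketched as below; the counter is deliberately
simplified (the real code records the call site in a dedicated structure and
uses atomic operations):
    #ifdef DEBUG_MEM_STATS
    /* purely illustrative: count calls per call site, then perform the real
     * call; the preprocessor does not re-expand the inner pool_free(), so
     * there is no recursion.
     */
    #define pool_free(pool, ptr) ({          \
            static size_t _call_count;       \
            _call_count++;                   \
            pool_free((pool), (ptr));        \
    })
    #endif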
A curious practice seems to have started long ago and contaminated various
code areas, consisting in appending "_pool" at the end of the name of a
given pool. That makes no sense as the name is only used to name the pool
in diags such as "show pools", and since names are truncated there, this
adds some confusion when analysing the dump outputs. Let's just clean all
of them at once. They were essentially in SSL and QUIC.
There's a subtle bug in stream_free() when releasing captures. The
pools may be NULL when no capture is defined, and the calls to
pool_free() are unconditional. The only reason why this doesn't
cause trouble is because the pointer to be freed is always NULL in
this case and we don't go further down the chain. That's particularly
ugly and it complicates debugging, so let's only call these ones when
the pointers are set.
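A minimal sketch of the guarded calls, using the usual capture fields as an
assumption (this is not a verbatim excerpt of stream_free()):
    /* only free the captures when their pool actually exists */
    if (fe->rsp_cap_pool)
        pool_free(fe->rsp_cap_pool, s->res_cap);
    if (fe->req_cap_pool)
        pool_free(fe->req_cap_pool, s->req_cap);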
There's no impact on running code, it only fools those trying to debug
pools manually. There's no need to backport it though, unless it helps
for debugging sessions.
freq_ctr_overshoot_period() function may be used to retrieve the excess of
events over the current period for a given frequency counter, ignoring the
history. It is a way to compare the "current rate" (the number of events over
the current period) to a given rate and estimate the excess of events.
It may be used to safely add new events, especially at the beginning of the
current period for a frequency counter with a large period. This way, it is
possible to smoothly add events during the whole period without quickly
consuming all the quota at the beginning of the period and waiting for the
next one to be able to add new events.
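An illustrative usage sketch; the exact prototype is an assumption (a counter,
its period and the target rate):
    /* how many events exceed <limit> per <period_ms> over the current period? */
    int over = freq_ctr_overshoot_period(&ctr, period_ms, limit);
    if (over > 0) {
        /* too many events already: defer or drop the extra ones */
    }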
Because of the previous fix, if the HTTP parsing is performed when the
"method" sample fetch is called, we always rely on the string representation
of the request method.
Indeed, if no parsing was performed when the "method" sample fetch is
called, the transaction method is HTTP_METH_OTHER because it was just
initialized. However, without this patch, in this case, we always retrieve
the method by reading the request start-line.
Now, when the method is HTTP_METH_OTHER, we systematically try to parse the
request but the method is tested once again after the parsing to be able to
use the integer representation when possible.
This patch must be backported as far as 2.0.
This patch is required to fix the "method" sample fetch. But it makes sense to
initialize the method of an HTTP transaction to HTTP_METH_OTHER. This way,
before the request parsing, the method is considered as unknown except if we
are able to retrieve the request start-line. It is especially important for
TCP streams.
About the "method" sample fetch, this patch is a way to be sure no random
method is returned when the sample fetch is used on a TCP stream before any
HTTP parsing.
This patch must be backported as far as 2.0.
This bug was revealed by key_update QUIC tracker test. During this test,
after the handshake has succeeded, the client sends a 1-RTT packet containing
only a PING frame. On our side, we do not acknowledge it, because we have
no ack-eliciting packet to send. This is not correct. We must acknowledge all
the ack-eliciting packets unconditionally. But we must not send too many
ACK frames: one every two packets since we last sent an ACK frame. This is the test
(nb_aepkts_since_last_ack >= QUIC_MAX_RX_AEPKTS_SINCE_LAST_ACK) which ensures
this is the case. But this condition must be checked at the very last time,
when we are building a packet and when an acknowledgment is required. It
is not up to qc_may_build_pkt() to do that. This boolean function decides if
we have packets to send. It must return "true" if there are no more ack-eliciting
packets to send and if an acknowledgement is required.
We must also add a "force_ack" parameter to qc_build_pkt() to force the
acknowledgments during the handshakes (for each packet). If not, they will
be sent every two packets. This parameter must be passed to qc_do_build_pkt()
and be checked alongside the other conditions when building a packet to
decide to add an ACK frame or not to this packet.
Must be backported to 2.6.
A bug was introduced by commit 9bf3a1f67e
"BUG/MINOR: ssl: Fix crash when no private key is found in pem".
If a private key is already contained in a pem file, we will still look
for a .key file and load its private key if it exists when we should
not.
This patch should be backported to all branches where the original fix
was backported (all the way to 2.2).
As reported in issue #1755, gcc-9.3 and 9.4 emit a "maybe-uninitialized"
warning in cli_io_handler_commit_cafile_crlfile() because it sees that
when the "path" variable is not set, we're jumping to the error label
inside the loop but cannot see that the new state will avoid the places
where the value is used. Thus it's a false positive but a difficult one.
Let's just preset the value to NULL to make it happy.
This was introduced in 2.7-dev by commit ddc8e1cf8 ("MINOR: ssl_ckch:
Simplify I/O handler to commit changes on CA/CRL entry"), thus no
backport is needed for now.
Sometimes we need to be able to signal one thread among a mask, without
caring much about which bit will be picked. At the moment we use ffsl()
for this but this sometimes results in imbalance at certain specific
places where the same first thread in a set is always the same one that
is selected.
Another approach would consist in using the rank finding function but it
requires a popcount and a setup phase, and possibly a modulo operation
depending on the popcount, which starts to be very expensive.
Here we take a different approach. The idea is an input bit value is
passed, from 0 to LONGBITS-1, and that as much as possible we try to
pick the bit matching it if it is set. Otherwise we look at a mirror
position based on a decreasing power of two, and jump to the side
that still has bits left. In 6 iterations it ends up spotting one bit
among 64 and the operations are very cheap and optimizable. This method
has the benefit that we don't care where the holes are located in the
mask, thus it shows a good distribution of output bits based on the
input ones. A long-time test shows an average of 16 cycles, or ~4ns
per lookup at 3.8 GHz, which is about twice as fast as using the rank
finding function.
Just like for that one, the code was stored into tools.c since we don't
have a C file for intops.
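The idea can be sketched as follows; the function name and exact bit
manipulations are illustrative only, not the actual tools.c code:
    #define LONGBITS ((unsigned int)(sizeof(long) * 8))
    /* pick one set bit of <mask>, trying to honor the input position <bit>
     * (0..LONGBITS-1); when the half-window holding <bit> has no set bit,
     * mirror <bit> into the other half and continue with a smaller window.
     * The result is unspecified if <mask> is zero.
     */
    static unsigned int pick_bit_among(unsigned long mask, unsigned int bit)
    {
        unsigned int size;
        for (size = LONGBITS; size > 1; size /= 2) {
            unsigned long half = (~0UL >> (LONGBITS - size / 2))
                                 << (bit & ~(size / 2 - 1));
            if (!(mask & half))
                bit ^= size / 2; /* jump to the side that still has bits */
        }
        return bit;
    }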
In bug #1751, it was reported that haproxy is consuming too much memory
since the 2.4 version. This is because of a change in the master, which
completely loses its configuration in wait mode, and loses its maxconn.
Without the maxconn, haproxy will try to compute one itself, and will
allocate RAM consequently, too much in our case. Which means the master
will have too high a maxconn and too much RAM allocated.
The patch fixes the issue by setting the maxconn to the default value
when re-executing the master in wait mode.
Must be backported as far as 2.5.
Implement quic_tp_version_info_dump() to dump such a transport parameter (only remote).
Call it from quic_transport_params_dump() which dumps all the transport parameters.
Can be backported to 2.6 as it's useful for debugging.
All packets received during handshakes must be acknowledged asap. This was
not the case for received Handshake packets. At this time, this had
no impact because the client often has only one Handshake packet to send
and the last Handshake packet to be sent on our side always embeds a HANDSHAKE_DONE
frame which leads the client to consider it has no more handshake packet
to send.
Add <force_ack> to qc_may_build_pkt() to force an ACK frame to be sent.
Set this parameter to 1 when sending packets from Initial or Handshake
packet number spaces, 0 when sending only Application level packets.
Must be backported to 2.6.
The crash occurs when the same certificate which is used on both a
server line and a bind line is inserted in a crt-list over the CLI.
This is quite uncommon as using the same file for a client and a server
certificate does not make sense in a lot of environments.
This patch fixes the issue by skipping the insertion of the SNI when no
bind_conf is available in the ckch_inst.
Change the reg-test to reproduce this corner case.
Should fix issue #1748.
Must be backported as far as 2.2. (it was previously in ssl_sock.c)
Add an ABORT_NOW() clause if indexed field line referred to the dynamic
table. This is required as current haproxy QPACK implementation does not
support the dynamic table.
Note that this should not happen as haproxy explicitly advertises a
null-sized dynamic table to the other peer.
This should fix github issue #1753.
No need to backport as this was introduced by commit
b666c6b26e
MINOR: qpack: improve decoding function
Free Rx packet in the datagram handler if the packet was not taken in
charge by a quic_conn instance. This is reflected by the packet refcount
which is null.
A packet can be rejected for a variety of reasons. For example, failed
decryption, no Initial token and Retry emission or for datagram null
padding.
This patch should resolve the Rx packets memory leak observed via "show
pools" with the previous commit
2c31e12936
BUG/MINOR: quic: purge conn Rx packet list on release
This specific memory leak instance was reproduced using quiche client
which uses null datagram padding.
This should partially resolve github issue #1751.
It must be backported up to 2.6.
When releasing a quic_conn instance, free all remaining Rx packets in
quic_conn.rx.pkt_list. This partially fixes a memory leak on Rx packets
which can be observed after several QUIC connections establishment.
This should partially resolve github issue #1751.
It must be backported up to 2.6.
As reported by broxio in GH #1757, there was a duplicated field name
for "quic_streams_data_blocked_bidi", due to a copy and paste without
renaming, I guess.
Must be backported to 2.6.
This counter must be incremented only once per connection and decremented
as soon as the handshake has failed or succeeded. This is a gauge. Under certain
conditions this counter could be decremented twice. For instance
after having received a TLS alert, then upon SSL_do_handshake() failure.
To stop having to deal with all the current combinations which can lead to such a
situation (and the next to come), add a connection flag to denote if this counter
has been already decremented for a connection. So, this counter must be decremented
only if this flag has not been already set.
Must be backported up to 2.6.
Non constant expressions were used to initialize constant variables leading to
such compilation errors:
src/xprt_quic.c:66:3: error: initializer element is not a constant expression
.key_label_len = strlen(QUIC_HKDF_KEY_LABEL_V1),
Reproduced with CC=gcc-4.9 compilation option.
Fix using macros for each HKDF label.
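A hedged sketch of the kind of change involved (macro names and the label
value below are illustrative):
    /* before: .key_label_len = strlen(QUIC_HKDF_KEY_LABEL_V1), which is not
     * a constant expression for old compilers.
     */
    #define QUIC_HKDF_KEY_LABEL_V1      "illustrative label"
    #define QUIC_HKDF_KEY_LABEL_V1_LEN  (sizeof(QUIC_HKDF_KEY_LABEL_V1) - 1)
    /* after: .key_label_len = QUIC_HKDF_KEY_LABEL_V1_LEN, a compile-time
     * constant derived from the string literal.
     */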
In order to better detect the danger caused by extra shared libraries
which replace some symbols, upon dlopen() we now compare a few critical
symbols such as malloc(), free(), and some OpenSSL symbols, to see if
the loaded library comes with its own version. If this happens, a
warning is emitted and TAINTED_REDEFINITION is set. This is important
because some external libs might be linked against different libraries
than the ones haproxy was linked with, and most often this will end
very badly (e.g. an OpenSSL object is allocated by haproxy and freed
by such libs).
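A minimal sketch of the detection idea, using only standard dlfcn calls (the
symbol list and handling below are illustrative, not the actual code):
    #include <dlfcn.h>
    void *handle = dlopen(path, RTLD_NOW);
    if (handle) {
        /* compare a critical symbol between our process and the new lib */
        void *ours   = dlsym(RTLD_DEFAULT, "malloc");
        void *theirs = dlsym(handle, "malloc");
        if (ours && theirs && theirs != ours) {
            /* the library ships its own malloc(): warn and mark the
             * process as TAINTED_REDEFINITION.
             */
        }
    }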
Since the main source of dlopen() calls is the Lua lib, via a "require"
statement, it's worth trying to show a Lua call trace when detecting a
symbol redefinition during dlopen(). As such we emit a Lua backtrace if
Lua is detected as being in use.
Several bug reports were caused by shared libraries being loaded by other
libraries or some Lua code. Such libraries could define alternate symbols
or include dependencies to alternate versions of a library used by haproxy,
making it very hard to understand backtraces and analyze the issue.
Let's intercept dlopen() and set a new TAINTED_SHARED_LIBS flag when it
succeeds, so that "show info" makes it visible that some external libs
were added.
The redefinition is based on the definition of RTLD_DEFAULT or RTLD_NEXT
that were previously used to detect that dlsym() is usable, since we need
it as well. This should be sufficient to catch most situations.
This function may be used to try to show where some Lua code is currently
being executed. It tries hard to detect the initialization phase, both for
the global and the per-thread states, and for runtime states. This intends
to be used by error handlers to provide the users with indications about
what Lua code was being executed when the error triggered.
Calling hlua_traceback() sometimes reports empty entries looking like:
[C]: ?
These ones correspond to certain internal C functions of the Lua library,
but they do not provide any information and even complicate the
interpretation of the dump. Better just skip them.
The commit 731c8e6cf ("MINOR: stream: Simplify retries counter calculation")
introduced a regression. It broke the dontlog-normal option because the test
on the connection retries counter was not updated accordingly.
This patch should fix the issue #1754. It must be backported to 2.6.
On destructive connection upgrade, instead of using the new mux name to
abort the old stream, we can rely on the stream connector flags. If it is
detached after the upgrade, it means the stream will not be reused by the
new mux and it must be aborted.
This patch may be backported to 2.6.
When the protocol is changed for a client connection at the stream level
(from TCP to H1/H2), there are two cases. The stream may be reused or
not. The first case, when the stream is reused, is working. The second one is
buggy since the conn-stream refactoring and leads to a crash.
In this case, the new mux doesn't reuse the stream. It must be silently
aborted. However, its front stream connector is still referencing the
connection. So it must be detached. But it must be performed in two stages,
to be sure not to lose the context for the upgrade and to be able to
rollback on error. So now, before the upgrade, we prepare to detach the
stconn and it is finally detached if the upgrade succeeds. There is a trick
here: we pretend the stconn is detached but its state is preserved.
This patch must be backported to 2.6.
tasklet_kill() was introduced in 2.5-dev4 with commit 7b368339a
("MEDIUM: task: implement tasklet kill"), but a comparison error
there makes tasklets killed on thread 1 assigned to the killing
thread. Fortunately, the function was finally not used so there's
no harm right now, hence the minor tag, but this must be fixed and
backported in case a later fix relies on it.
This should be backported to 2.5.
At this time haproxy supported only the incompatible version negotiation feature which
consists in sending a Version Negotiation packet after having received a long packet
without a compatible value in its version field. This version value is the version
used to build the current packet. This patch does not modify this behavior.
This patch adds the support for the compatible version negotiation feature which
allows endpoints to negotiate during the first flight of packets sent by the
client the QUIC version to use for the connection (or after the first flight).
This is done thanks to the "version_information" parameter sent by both endpoints.
To be short, the client offers a list of supported versions in order of preference.
The server (or haproxy listener) chooses the first version it also supports as
the negotiated version.
This implementation has an impact on the transport parameters handling (in both
directions). Indeed, the server must send its version information, but only
after having received and parsed the client transport parameters. So we cannot
encode these parameters at the same time we instantiate a new connection.
Add QUIC_TP_DRAFT_VERSION_INFORMATION(0xff73db) new transport parameter.
Add tp_version_information new C struct to handle this new parameter.
Implement quic_transport_param_enc_version_info() (resp.
quic_transport_param_dec_version_info()) to encode (resp. decode) this
parameter.
Add qc_conn_finalize() which encodes the transport parameters and configure
the TLS stack to send them.
Add ->negotiated_ictx quic_conn C struct new member to store the Initial
QUIC TLS context for the negotiated version. The Initial secrets derivation
is version dependent.
Rename ->version to ->original_version and add ->negotiated_version to
this C struct to reflect the QUIC-VN RFC denomination.
Modify most of the QUIC TLS API functions to pass a version as parameter.
Export the QUIC version definitions to be reused at least from quic_tp.c
(transport parameters).
Move the token check after the QUIC connection lookup. As this is the original
version which is sent into a Retry packet, and because this original version is
stored into the connection, we must check the token after having retrieved this
connection.
Add packet version to traces.
See https://datatracker.ietf.org/doc/html/draft-ietf-quic-version-negotiation-08
for more information about this new feature.
It is not clear at all how to distinguish a QUIC draft version number from a
released one, and among these QUIC draft versions, which one must use the draft
QUIC TLS extension.
According to the QUIC implementations which support the v2 draft, the TLS extension
(transport parameters) to be used is the released one
(TLS_EXTENSION_QUIC_TRANSPORT_PARAMETERS).
As the unique QUIC draft version we support is 0xff00001d and as at this time the
unique version with 0xff as most significant byte is this latter which must use
the draft TLS extension, we select the draft TLS extension
(TLS_EXTENSION_QUIC_TRANSPORT_PARAMETERS_DRAFT) only for such versions with 0xff
as most significant byte.
It is becoming difficult to handle the QUIC TLS related definitions
which arrive with a QUIC version (draft or not). So, here we add a
quic_version C struct which does not only define the QUIC version number,
but also the QUIC TLS definitions which depend on a QUIC version.
Modify consequently all the QUIC TLS API to reuse these definitions
through the new quic_version C struct.
Implement quic_pkt_type() function which returns a packet type (0 up to 3)
depending on the QUIC version number.
Stop hard-coding the Retry packet first byte in send_retry(): this is no longer
possible because the packet type field depends on the QUIC version.
Also modify quic_build_packet_long_header() for the same reason: the packet
type depends on the QUIC version.
Add a quic_version C struct member to quic_conn C struct.
Modify qc_lstnr_pkt_rcv() to set this member asap.
Remove the version member from quic_rx_packet C struct: a packet is attached
asap to a connection (or dropped) which is the unique object which should
store the QUIC version.
Modify qc_pkt_is_supported_version() to return a supported quic_version C
struct from a version number.
Add Initial salt for QUIC v2 draft (initial_salt_v2_draft).
This is to prepare the support for QUIC v2 version. The packet type
depends on the version. So, we must parse early enough the version
before defining the type of each packet.
The nonce and keys used to cipher the Retry tag depend on the QUIC version.
Add these definitions for 0xff00001d (draft-29) and v2 QUIC version. At least
draft-29 is useful for QUIC tracker tests with "quic-force-retry" enabled
on haproxy side.
Validated with -v 0xff00001d ngtcp2 option.
Could not validate the v2 nonce and key at this time because not supported.
Use the same version as the one received. This is safe because the
version is checked before anything else, a Version Negotiation packet
being sent if it is not supported.
Must be backported to 2.6.
A pretty ugly mistake was introduced recently with an invalid goto statement
which prevented QUIC compilation on haproxy.
This must be backported on 2.6 as a complement to
60ef19f137
BUG/MINOR: h3/qpack: deal with too many headers
Adjust decoding loop by using temporary ist for header name and value.
The header is inserted at the end of an iteration, which guarantees that
we do not insert the name only in the list in case of an error on value
decoding. This also helps the function readability by centralizing the
LIST insert operation.
The return value of the decoding function is also changed. Now on
success the number of headers inserted in the input list is returned.
This change has no impact as the success value is not used by the caller.
This is mainly done to have a behavior similar to hpack decoding
function.
This ensures that we never insert too many entries in a headers input list.
On the decoding side, a new error QPACK_ERR_TOO_LARGE is reported in
this case.
This prevents crash if headers number on a H3 request or response is
superior to tune.http.maxhdr config value. Previously, a crash would
occur in QPACK decoding function.
Note that the process still crashes later with ABORT_NOW() because error
reporting on frame parsing is not implemented for now. It should be
treated with a RESET_STREAM frame in most cases.
This can be backported up to 2.6.
Post-base indices are not supported at the moment for decoding. This
should never be encountered as it is only used with a dynamic table.
However, haproxy deactivates support for the dynamic table via its
SETTINGS.
Use ABORT_NOW() if this situation happens anyway. This should help
debugging instead of silently failing without error reporting.
Complete QPACK decoding with full implementation of the literal field name
with literal value representation. This change is mandatory to support
decoding of headers name not present in the QPACK static table.
Previously, these headers were silently ignored and not transferred on
the backend request.
QPACK decoding should now be sufficient to deal with all real
situations. Only the post-base indices representation is not handled but
this should not cause a problem as it is only used for the dynamic
table whose size is null as announced by the haproxy implementation.
The direct impact of this change is that it should now be possible to
use complex webapp through a QUIC frontend.
This must be backported up to 2.6.
Clean up QPACK decoder API by removing dependencies on ncbuf and
MUX-QUIC. This reduces includes statements. It will also help to
implement a standalone QPACK decoder.
Add comments on the decoding function to facilitate code analysis.
Also remove the qpack_debug_hexdump() which prints the whole left buffer
on each header parsing. With large HEADERS frame payload, QPACK traces
are complicated to debug with this statement.
If we've stopped consulting the local wait queue due to too many tasks
(max_processed <= 0), there's no point starting to lock the shared WQ,
check the first task's expiration date, upgrading the lock just to
refrain from doing the work because of the limit. All this does is
increase contention on an already contended system.
Note that there is still a fairness issue in this WQ dequeuing code. If
each thread is busy with expired tasks, no thread will dequeue the global
ones. In practice it doesn't make much sense and should quickly resorb,
but it could be nice to have an alternating flag indicating where to
start from on next call to improve this.
This macro was used both for binding and for lookups. When binding tasks
or FDs, using all_threads_mask instead is better as it will later be per
group. For lookups, ~0UL always does the job. Thus in practice the macro
was already almost not used anymore since the rest of the code could run
fine with a constant of all ones there.
In 1.9-dev1, commit 5bc9972ed ("BUG/MINOR: lua/threads: Make lua's tasks
sticky to the current thread") started detecting unconfigured Lua tasks that could
run on any thread, by comparing their thread mask with MAX_THREADS_MASK.
The proper way to do it is to check for at least 2 threads in their mask
in fact. This is more reliable and allows to get rid of MAX_THREADS_MASK
there.
Each thread has its own local thread id and its own global thread id,
in addition to the masks corresponding to each. Once the global thread
ID can go beyond 64 it will not be possible to have a global thread ID
bit anymore, so better start to remove it and use only the local one
from the struct thread_info.
This simply replaces a call to task_new(1<<thr) with task_new_on(thr)
so that we can later isolate the changes required to add more thread
group stuff.
Instead of having a global mask of all the profiled threads, let's have
one flag per thread in each thread's flags. They are never accessed more
than one at a time and are better located inside the threads' contexts for
both performance and scalability.
LIST_ELEM macro was incorrectly used in the loop when purging
flow-control frames from qcc.lfctl.frms on MUX release. This caused a
segfault in qc_release() due to an invalid quic_frame pointer instance.
The occurrence of this bug seems fairly rare. To happen, some
flow-control frames must have been allocated but not yet sent just as
the MUX release is triggered.
I did not find a reproducer scenario. Instead, I artificially triggered
it by inserting a quic_frame in qcc.lfctl.frms just before purging it in
qc_release() using the following snippet.
struct quic_frame *frm;
frm = pool_zalloc(pool_head_quic_frame);
LIST_INIT(&frm->reflist);
frm->type = QUIC_FT_MAX_DATA;
frm->max_data.max_data = 0;
LIST_APPEND(&qcc->lfctl.frms, &frm->list);
This should fix github issue #1747.
This must be backported up to 2.6.
The CLI applet processes one request after another. Thus, when several
requests are pipelined, it is important to notify that it won't consume remaining
outgoing data while it is processing a request. Otherwise, the applet may be
woken up in loop. For instance, it may happen with the HTTP client while we
are waiting for the server response if a shutr is received.
This patch must be backported in all supported versions after an observation
period. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted. Note also that, AFAIK, the bug can
only be triggered by the HTTP client for now.
In the .chk_snd applet callback function, we must not wake up an applet if the
SE_FL_WONT_CONSUME flag is set. Indeed, if an applet explicitly specifies it
will not consume any outgoing data, it is useless to wake it up when more
data are sent. Note the applet may still be woken up for another reason. In
this case SE_FL_WONT_CONSUME flag will be removed. It is the applet
responsibility to set it again, if necessary.
This patch must be backported to 2.6 after an observation period. On earlier
versions, the bug probably exists too. However, because a massive
refactoring was performed in 2.6, the patch will have to be adapted and
carefully reviewed/tested if it is backported.
Since the conn-stream refactoring, from the time the health-check is in
progress, its stream-connector is always defined. So, some tests on it are
useless and can be removed.
This patch should fix the issue #1739.
When a TCP content ruleset is evaluated, we stop waiting for more data if
the inspect-delay is reached, if there is a read error or if we know no more
data will be received. This last point is only valid for ACLs. An action may
decide to yield for another reason. For instance, in the SPOE, the
"send-spoe-group" action yields while the agent response is not
received. Thus, now, an action call is final only when the inspect-delay is
reached or if there is a read error. But it is possible for an action to
yield if the buffer is full or if CF_EOI flag is set.
This patch could be backported to all supported versions.
When the MUX transfers a big amount of data to the client, the transport
layer may reject some of them because of the congestion controller
limit. Frames built by the MUX are thus dropped, even if streams transferred
data are kept in buffers for future new frames.
Thus, the MUX is required to free rejected frames. This fixes a memory
leak which may grow with important data transfers.
It should be backported to 2.6 after it has been tested and validated.
TX flow-control enforcing is not straightforward : it requires the usage
of several counters at stream and connection level, in part due to the
difficult sending API between MUX and quic-conn layers.
To strengthen this part and ensure it behaves as expected, some
existing BUG_ON statements were adjusted and new ones were added. This
should help to catch errors as early as possible, as in the case with
github issue #1738.
The flow control enforced at connection level is incorrectly calculated.
There is a risk of exceeding the limit. In most cases, this results in a
segfault produced by a BUG_ON which is here to catch this kind of error.
If not compiled with DEBUG_STRICT, this should generate a connection
closed by the client due to the flow control overflow.
The problem is encountered when the transferred payload is big enough to fill
the transport congestion window. In this case, some data are rejected by
the transport layer and kept by the MUX to be reemitted later. However,
these preserved data are not counted on the connection flow control when
resubmitted, which gradually amplifies the gap between expected and real
consumed flow control.
To fix this, handle the flow-control at the connection level in the same
way as the stream level. A new field qcc.tx.offsets is incremented as
soon as data are transferred between stream TX buffers. The field
qcc.tx.sent_offsets is preserved to count bytes handled by the transport
layer and stop the MUX transfer if limit is reached.
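A hedged sketch of this accounting; qcc.tx.offsets and qcc.tx.sent_offsets come
from the description above, while the limit variable is an assumption:
    /* when moving data to a stream TX buffer: count them once, never again
     * on re-emission, and clamp to the peer's MAX_DATA limit.
     */
    if (qcc->tx.offsets + to_xfer > max_data)
        to_xfer = max_data - qcc->tx.offsets;
    qcc->tx.offsets += to_xfer;
    /* when the transport layer actually accepts bytes */
    qcc->tx.sent_offsets += sent;
    if (qcc->tx.sent_offsets >= max_data) {
        /* connection-level flow control reached: stop the MUX transfer */
    }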
As already stated, this bug can occur during transfers with enough
emitted data, over multiple streams. When using a single stream, the
flow control at the stream level hides it.
The BUG_ON crash is reproduced systematically with quiche client :
$ quiche-client --no-verify --http-version HTTP/3 -n 10000 https://127.0.0.1:20443/10K
This must be backported up to 2.6 when confirmed to work as expected.
This should fix github issue #1738.
This is the continuation of commit 5a0e7ca5d ("BUG/MINOR: cli/stats: add
missing trailing LF after JSON outputs"). There's also a "show info json"
command which was also missing the trailing LF. It's constructed exactly
like the "show stat json", in that it dumps a series of fields without any
LF. The difference however is that for the stats output, everything was
enclosed in an array which required an LF *after* the closing bracket,
while here there's no such array so we have to emit the LF after the loop.
That makes the two functions a bit inconsistent, it's quite annoying, but
making them better would require sending one LF per line in the stats
output, which is not particularly interesting.
Given that it took 5+ years to spot that this code wasn't working as
expected it doesn't seem worth investing much time trying to refactor
it to make it look cleaner at the risk of breaking other obscure parts.
Leonhard Wimmer reported an interesting bug in github issue #1742.
Servers in disabled proxies that are configured for resolution are still
subscribed to DNS resolutions, but the LB algos are not initialized at
all since the proxy is disabled, so when the server state changes,
attempts to update its status cause a crash when the server's weight
is recalculated via a divide by the proxy's total weight which is zero.
This should be backported to all versions. Beware that before 2.5 or
so, there's no PR_FL_DISABLED flag, instead px->disabled should be
used (2.3-2.4) or PR_STSTOPPED for older versions.
Thanks to Leonhard for his report and quick test!
Patrick Hemmer reported that we have a bug in the CLI commands
"show stat json" and "show schema json" in that they forget the trailing
LF that's required to mark the end of the response. This has been the
case since the introduction of the feature in 1.8-dev1 by commit 6f6bb380e
("MEDIUM: stats: Add show json schema"), so this fix may be backported to
all versions.
The function used to parse the SETTINGS frame is incorrect as it does not stop
at the frame length but continues to parse beyond it. In most cases, it
will result in a connection closed with error H3_FRAME_ERROR.
This bug can be reproduced with clients that send more than just a
SETTINGS frame on the H3 control stream. This is notably the case with
aioquic which emits a MAX_PUSH_ID after SETTINGS.
This bug has been introduced in the current dev release, by the
following patch
62eef85961
MINOR: mux-quic: simplify decode_qcs API
thus, it does not need to be backported.
This changes the default from RFC 7540's default 65535 (64k-1) to avoid
some degenerative WINDOW_UPDATE behaviors in the wild observed with
clients using 65536 as their buffer size, and have to complete each block
with a 1-byte frame, which with some servers tends to degenerate into 1-byte
WU causing more 1-byte frames to be sent until the transfer almost only
uses 1-byte frames.
More details here: https://github.com/nghttp2/nghttp2/issues/1722
As mentioned in previous commit (MEDIUM: mux-h2: try to coalesce outgoing
WINDOW_UPDATE frames) the issue could not be reproduced with haproxy but
individual WU frames are sent so theoretically nothing prevents this from
happening. As such it should be backported as a workaround for already
deployed clients after watching for any possible side effect with rare
clients. As an added benefit, uploads from curl now use less DATA frames
(all are 16384 now). Note that the previous patch alone is sufficient to
stop the issue with curl in case this one would need to be reverted.
[wt: edited commit message, updated doc]
Glenn Strauss from Lighttpd reported a corner case affecting curl+lighttpd
that causes some uploads to degenerate to extremely suboptimal conditions
under certain circumstances, and noted that many other implementations
were possibly not safe against this degradation.
Glenn's detailed analysis is available here:
https://github.com/nghttp2/nghttp2/issues/1722
In short, curl uses a 65536 bytes buffer and the default stream window
is 65535, with 16384 bytes per frame. Curl will then send 3 frames of
16384 bytes followed by one of 16383, will wait for a window update to
send the last byte before recycling the buffer to read the next 64kB.
On each round like this, one extra single-byte frame will be sent, and
if ACKs for these single-byte frames are not aggregated, this will only
allow the client to send one extra byte at a time. At some point it is
possible (at least Glenn observed it) to have mostly 1-byte frames in
the transfer, resulting in huge CPU usage and a long transfer.
It was not possible to reproduce this with haproxy, even when playing
with frame sizes, buffer sizes or window sizes. One reason seems to
be that we're using the same buffer size for the connection and the
stream and that the frame headers prevent the filling of the window
from happening on the same boundaries as on the sender. However it
does occasionally happen to see up to two 1-byte data frames in a row,
indicating that there's definitely room for improvement.
The WINDOW_UPDATE frames for the connection are sent at the end of the
demuxing, but the ones for the streams are currently sent immediately
after a DATA frame is processed, mostly for convenience. But we don't
need to proceed like this, we already have the counter of unacked bytes
in rcvd_s, so we can simply use that to decide when to send an ACK. It
must just be done before processing a new frame. The benefit is that
contiguous frames for the same stream will now only produce a single
WU, like for the connection. On complicated tests involving a client
that was limited to 100 Mbps transfers and a dummy Lua-based payload
consumer, it was possible to see the number of stream WU frames being
halved for a 100 MB transfer, which is already a nice saving anyway.
Glenn proposed a better workaround consisting in increasing the
default window size to 65536. This will be done in a separate patch
so that both can be studied independently in field and backported as
needed.
This patch is not much complicated and should be backportable. It just
needs to be tested in development first.
BUG_ON() assertion to check for incomplete SETTINGS frame is incorrect.
It should check if frame length is greater, not smaller, than current
buffer data. Anyway, this BUG_ON() is useless as h3_decode_qcs()
prevents parsing of an incomplete frame, except for H3 DATA. Remove it
to fix this bug.
This bug was introduced in the current dev tree by commit
commit 62eef85961
MINOR: mux-quic: simplify decode_qcs API
Thus it does not need to be backported.
This fixes crashes which happen with DEBUG_STRICT=2. Most notably, this
is reproducible with clients that emit more than just a SETTINGS frame
on the H3 control stream. It can be reproduced with aioquic for example.
The health-check attached to an email alert has no type. It is unexpected,
and since the 2.6, it is important because we rely on it to know the
application type in front of a connection at the stream-connector
level. Because the object type is not set, the SE descriptor is not properly
initialized, leading to a segfault when a connection to the SMTP server is
established.
This patch must be backported to 2.6 and may be backported as far as
2.0. However, it is only an issue for the 2.6 and upper.
There is no server for email alerts. So the trace messages must be adapted
to handle this case. Information related to the server is now skipped for
email alerts and an "[EMAIL]" prefix is used.
This patch must be backported as far as 2.4.
Email alerts are based on health-checks but with no server. Thus, in
__trace() function, responsible to write a trace message, we must be
prepared to have no server and thus no proxy.
This patch must be backported as far as 2.4.
Building QUIC with gcc-4.4 on el6 shows this error:
src/xprt_quic.c: In function 'qc_release_lost_pkts':
src/xprt_quic.c:1905: error: unknown field 'loss' specified in initializer
compilation terminated due to -Wfatal-errors.
make: *** [src/xprt_quic.o] Error 1
make: *** Waiting for unfinished jobs....
Initializing an anonymous union member like:
struct quic_cc_event ev = {
(...)
.loss.time_sent = newest_lost->time_sent,
(...)
};
generates an error with gcc-4.4 but not when initializing the
fields outside of the declaration.
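A hedged sketch of the workaround (the event type constant is an assumption):
    /* initialize the anonymous union member after the declaration instead
     * of inside the initializer, which gcc-4.4 rejects.
     */
    struct quic_cc_event ev = { .type = QUIC_CC_EVT_LOSS, };
    ev.loss.time_sent = newest_lost->time_sent;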
Convert return code to -1 when an error has been detected. This is
required since the previous API change on return value from the patch :
1f21ebdd76
MINOR: mux-quic/h3: adjust demuxing function return values
Without this, QUIC MUX won't consider the call as an error and will try
to remove one byte from the buffer. This may cause a BUG_ON failure if
the buffer is empty at this stage.
This bug was introduced in the current dev tree. Does not need to be
backported.
Clean the API used by decode_qcs() and transcoder internal functions.
Parsing functions now return a ssize_t which represents the number of
consumed bytes or a negative error code. The total consumed bytes is
returned via decode_qcs().
The API is now unified and cleaner. The MUX can thus simply use the
return value of decode_qcs() instead of subtracting the data bytes in
the buffer before and after the call. Transcoder functions are no longer
obliged to remove consumed bytes from the buffer, which was not
obvious.
Slightly modify the decode_qcs() function used by transcoders. The MUX now
gives a buffer instance on which each transcoder is free to work.
At the return of the function, the MUX removes consumed data from its own
buffer.
This reduces the number of invocations of qcs_consume() at the end of a
full demuxing process. The API is also cleaner with the transcoders no longer
responsible for calling it, with the risk of having the input buffer
freed if empty.
As a mirror to qcc/qcs types, add a h3c pointer into h3s struct. This
should help to clean up H3 code and avoid to use qcs.qcc.ctx to retrieve
the h3c instance.
smp_fc_http_major may be used to return the http version as an integer
used on the frontend or backend side. Previously, the handler only
checked for version 2 or 1 as a fallback. Extend it to support version 3
with the QUIC mux.
Commit d6c66f06a ("MINOR: ssl_ckch: Remove service context for "set ssl
crl-file" command") introduced a regression leading to a build error because
of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
A build error is reported about the path variable in the switch statement on
the commit type, in cli_io_handler_commit_cafile_crlfile() function. The
enum contains only 2 values, but a default clause has been added to return an
error to make GCC happy.
This patch must be backported as far as 2.5.
Commit 9a99e5478 ("BUG/MINOR: ssl_ckch: Dump CRL transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 5a2154bf7 ("BUG/MINOR: ssl_ckch: Dump CA transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 3e94f5d4b ("BUG/MINOR: ssl_ckch: Dump cert transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.2.
The same type is used for CA and CRL entries. So, in commit_cert_ctx
structure, there is no reason to have different fields for the CA and CRL
entries.
.next_ckchi_link field must be initialized to NULL instead of .next_ckchi in
cli_parse_commit_crlfile() function. Only '.next_ckchi_link' is used in the
I/O handler.
This patch must be backported as far as 2.5 with some adaptations for the 2.5.
When loaded SSL certificates are displayed via "show ssl cert" command, the
in-progress transaction, if any, is also displayed. However, if the command
yield, the transaction is re-displayed again and again.
To fix the issue, old_ckchs field is used to remember the transaction was
already displayed.
This patch must be backported as far as 2.2.
When loaded CA files are displayed via "show ssl ca-file" command, the
in-progress transaction, if any, is also displayed. However, if the command
yield, the transaction is re-displayed again and again.
To fix the issue, old_cafile_entry field is used to remember the transaction
was already displayed.
This patch must be backported as far as 2.5.
When loaded CRL files are displayed via "show ssl crl-file" command, the
in-progress transaction, if any, is also displayed. However, if the command
yield, the transaction is re-displayed again and again.
To fix the issue, old_crlfile_entry field is used to remember the transaction
was already displayed.
This patch must be backported as far as 2.5.
Because of a typo (I guess), an unknown type is used for the old entry in
show_crlfile_ctx structure. Because this field is unused, there is no
compilation error. But it must be a cafile_entry and not a crlfile_entry.
Note this field is not used for now, but it will be used.
This patch must be backported to 2.6.
Simplify cli_io_handler_commit_cafile_crlfile() handler function by
retrieving old and new entries at the beginning. In addition the path is
also retrieved at this stage. This removes several switch statements.
Note that the ctx was already validated by the corresponding parsing
function. Thus there is no reason to test the pointers.
While it is not a bug, this patch may help to fix issue #1731.
There is an enum to determine the entry type when changes are
committed on a CA/CRL entry. So use it in the service context instead of an
integer.
This patch may help to fix issue #1731.
Rewrite failures in http rules are reported as proxy errors (PRXCOND) in
logs. However, other rewrite errors are reported as internal errors. For
instance, it happens when we fail to add X-Forwarded-For header. It is not
consistent and it is confusing. So now, all rewrite failures are reported as
proxy errors.
This patch may be backported if necessary.
'httpclient' command does not properly handle full buffer cases. When the
response buffer is full, we exit to retry later. However, the context flags
are updated. It means when this happens, we may lose a part of the
response.
So now, flags are preserved when we fail to push data into the response
buffer. In addition, instead of dumping one part per call, we now try to
dump as much data as possible.
Finally, when there is no more data, because everything was dumped or
because we are waiting for more data from the HTTP client, the applet is
updated accordingly by calling applet_have_no_more_data(). Otherwise, when
some data are blocked, applet_putchk() already takes care to update the SE
flags. So, it is useless to call sc_need_room().
This patch should fix the issue #1723. It must be backported as far as
2.5. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted.
Commit 534645d6 ("BUG/MEDIUM: httpclient: Fix loop consuming HTX blocks from
the response channel") introduced a regression. When the response is
consumed, the HTX header blocks are removed before duplicating them. Thus,
the first header block is always lost.
This patch must be backported as far as 2.5.
'add ssl crt-list' command is also concerned. This patch is similar to the
previous ones. Full buffer cases when we try to push the reply are not
properly handled. To fix the issue, the functions responsible to add a
crt-list entry were reworked.
First, the error message is now part of the service context. This way, if we
cannot push the error message in the response buffer, we may retry later. To
do so, a dedicated state was created (ADDCRT_ST_ERROR). Then, the success
message is also handled in a dedicated state (ADDCRT_ST_SUCCESS). This way
we are able to retry to push it if necessary. Finally, the dot displayed for
each new instance is now immediately pushed in the response buffer, and
before the update. This way, we are able to retry too if necessary.
This patch should fix the issue #1724. It must be backported as far as
2.2. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted.
The 'commit ssl crl-file' command is also concerned. This patch is similar to
the previous one. Full buffer cases when we try to push the reply are not
properly handled. To fix the issue, the functions responsible for committing CA
or CRL entry changes were reworked.
First, the error message is now part of the service context. This way, if we
cannot push the error message into the response buffer, we may retry later. To
do so, a dedicated state was created (CACRL_ST_ERROR). Then, the success
message is also handled in a dedicated state (CACRL_ST_SUCCESS). This way we
are able to retry pushing it if necessary. Finally, the dot displayed for
each updated CKCH instance is now immediately pushed into the response buffer,
and before the update. This way, we are able to retry too if necessary.
This patch should fix the issue #1722. It must be backported as far as
2.5. But a massive refactoring was performed in 2.6. So, for the 2.5, the
patch will have to be adapted.
When changes on a certificate are committed, a trash buffer is used to create
the response. Once done, the message is copied into the response buffer.
However, if the buffer is full, there is no way to retry and the message is
lost. The same issue may happen with the error message. It is a design issue
of the cli_io_handler_commit_cert() function.
To fix it, the function was reworked. First, the error message is now part
of the service context. This way, if we cannot push the error message into the
response buffer, we may retry later. To do so, a dedicated state was created
(CERT_ST_ERROR). Then, the success message is also handled in a dedicated
state (CERT_ST_SUCCESS). This way we are able to retry pushing it if
necessary. Finally, the dot displayed for each updated CKCH instance is now
immediately pushed into the response buffer, and before the update. This way,
we are able to retry too if necessary.
This patch should fix the issue #1725. It must be backported as far as
2.2. But massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch must be adapted.
When a CA or CRL entry is replaced (via 'set ssl ca-file' or 'set ssl
crl-file' commands), the path is duplicated and used to identify the ongoing
transaction. However, if the same command is repeated, the path is still
duplicated but the transaction is not changed and the duplicated path is not
released. Thus there is a memory leak.
By reviewing the code, it appears there is no reason to duplicate the
path. It is always the filename path of the old entry. So, a reference on it
is now used. This simplifies the code and this fixes the memory leak.
This patch must be backported as far as 2.5.
When a certificate entry is replaced (via 'set ssl cert' command), the path
is duplicated and used to identify the ongoing transaction. However, if the
same command is repeated, the path is still duplicated but the transaction
is not changed and the duplicated path is not released. Thus there is a
memory leak.
By reviewing the code, it appears there is no reason to duplicate the
path. It is always the path of the old entry. So, a reference on it is now
used. This simplifies the code and this fixes the memory leak.
This patch must be backported as far as 2.2.
When a CA or a CRL entry is being modified, we must take care not to delete
it because the corresponding ongoing transaction still references it. If we
do so, it leads to a null-deref and a crash may be experienced when changes
are committed.
This patch must be backported as far as 2.5.
When a certificate entry is being modified, we must take care not to delete
it because the corresponding ongoing transaction still references it. If we
do so, it leads to a null-deref and a crash may be experienced when changes
are committed.
This patch must be backported as far as 2.2.
On the CLI, if we fail to commit changes on a CA or a CRL entry, an error
message is returned. This error must be released.
This patch must be backported as far as 2.4.
On the CLI, if we fail to commit changes on a certificate entry, an error
message is returned. This error must be released.
This patch must be backported as far as 2.2.
When parsing QPACK encoder/decoder streams, h3_decode_qcs() displays an
error trace if they are empty. Change the return code used in QPACK code
to avoid this trace.
To uniformize with MUX/H3 code, 0 is now used to indicate success.
Beyond this spurious error trace, this bug has no impact.
The MUX now provides a single API for both uni and bidirectional
streams. It is responsible to reject reception on a local unidirectional
stream with the error STREAM_STATE_ERROR. This is already implemented in
qcc_recv(). As such, remove this duplicated check from xprt_quic.c.
The H3 frame demuxing code is incorrect when receiving a STREAM frame
which contains only a new H3 frame header without its payload.
In this case, the check on frames bigger than the buffer size is
incorrect. This is because the buffer has been freed via
qcs_consume()/qc_free_ncbuf() as it was emptied after H3 frame header
parsing. This causes the connection to be incorrectly closed with
H3_EXCESSIVE_LOAD error.
This bug was reproduced with xquic client on the interop and with the
command-line invocation :
$ ./interop_client -l d -k $SSLKEYLOGFILE -a <addr> -p <port> -D /tmp \
-A h3 -U https://<addr>:<port>/hello_world.txt
Note also that the h3_is_frame_valid() invocation has been moved before the
new buffer size check. This ensures that we first check the frame
validity before returning from the function. It's also better
positioned as this is only needed when a new H3 frame header has been
parsed.
The H3 demuxing code was not fully correct. After parsing the H3 frame
header, the check between frame length and buffer data is wrong as we
compare a copy of the buffer made before the H3 header removal.
Fix this by improving the H3 demuxing code API. h3_decode_frm_header()
now uses a ncbuf instance, this prevents an unnecessary cast
ncbuf/buffer in h3_decode_qcs() which resolves this error.
This bug was not triggered at this moment. Its impact should be really
limited.
Replace ncb_blk_is_null() by ncb_is_null() as a prelude to ncb_data().
The result is the same : the function will return 0 if the buffer is
uninitialized. However, it is clearer to directly call ncb_is_null() to
reflect this.
There is no functional change with this commit.
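A small illustration of the intended pattern, assuming the ncbuf API
(ncb_is_null(), ncb_data()) from <haproxy/ncbuf.h>; the qcs->rx.ncbuf field
name is an assumption taken from this series:

    struct ncbuf *ncb = &qcs->rx.ncbuf;
    size_t data = 0;

    /* ncb_data() already returns 0 on an uninitialized buffer, but testing
     * ncb_is_null() first makes the intent explicit.
     */
    if (!ncb_is_null(ncb))
            data = ncb_data(ncb, 0);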
Some server keywords are currently silently ignored in the peers
section, which is not good because it wastes time on the user side, trying
to make something work while it cannot by design.
With this patch we at least report a few of them (the most common ones),
which are init_addr, resolvers, check, agent-check. Others might follow.
This may be backported to 2.5 to encourage some cleaning of bogus configs.
When parsing a peers section, it's particularly difficult to make the
difference between the local peer which doesn't have any address, and
other peers which need one, and the error messages do not help because
with just:
peers foo
bind :8001
server foo 127.0.0.1:8001
server bar 127.0.0.2:8001
One can get such a confusing message when the local peer is "bar":
[peers.cfg:15] : 'server foo/bar' : unknown keyword '127.0.0.1:8001'.
It's not clear there why the other peer doesn't trigger an error.
With this commit we add a hint in the error message when no address
was expected. The error remains quite generic (since it comes from deep
inside the server code) but at least the user gets a hint about why the
keyword wasn't understood:
[peers.cfg:15] : 'server foo/bar' : unknown keyword '127.0.0.1:8001'.
Hint: no address was expected for this server.
For some poor historical reasons, the name of a peers proxy used to be
set to the name of the local peer itself. That causes some confusion when
multiple sections are present because the same proxy name appears at
multiple places in "show peers", but since 2.5 where parsing errors include
the proxy name, a config like this one :
peers foo
server foobar blah
Would report this when the local peer name isn't "foobar":
'server (null)/foobar' : invalid address: 'blah' in 'blah'
And this when it is foobar:
'server foobar/foobar' : invalid address: 'blah' in 'blah'
This is wrong, confusing and not very practical. This commit addresses
all this by using the peers section's name when it's created. This now
allows to report messages such as:
'server foo/foobar' : invalid address: 'blah' in 'blah'
Which makes it clear that the section is called "foo" and the server
"foobar".
This may be backported to 2.5, though the patch may be simplified if
needed, by just adding the change at the output of init_peers_frontend().
By having the appctx in argument this function wouldn't have experienced
the previous bug. Better do that now to avoid proliferation of awkward
functions.
In the context of a CLI command, it's particularly not welcome to use
an "appctx" variable that is not the current one. In addition it was
created for use at exactly 6 places in 2 lines. Let's just remove it
and stick to peer->appctx which is used elsewhere in the function and
is unambiguous.
Commit d0a06d52f ("CLEANUP: applet: use applet_put*() everywhere possible")
replaced most accesses to the conn_stream with simpler accesses to the
appctx. Unfortunately, in all the CLI functions using an appctx, one
makes an exception where the appctx is not the caller's but the one being
inspected! When no peers connection is active, the early exit immediately
crashes.
No backport is needed.
stream.c and mux_fcgi.c may cause a warning for a possible NULL deref
at -Os, while that is not possible thanks to the previous test. Let's
just switch to __htx_get_head_blk() instead.
Remove an unneeded BUG_ON statement when find_http_meth() returns
HTTP_METH_OTHER.
This fix is necessary to support requests with unusual methods with
DEBUG_STRICT activated. This was detected when browsing with HTTP/3 over
a nextcloud instance which uses PROPFIND method for Webdav.
The prefix-integer encoding function was incomplete. It was not able to deal
correctly with values encoded on more than 2 bytes. The exact maximum depends
on the size of the prefix, but values greater than 254 were all impacted.
Most notably, this change is required to support header names/values with
sizeable lengths. Previously, the length was incorrectly encoded. The client
thus closed the connection with QPACK_DECOMPRESSION_ERROR.
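For reference, a minimal generic sketch of prefix-integer encoding as defined
by HPACK (RFC 7541, section 5.1) and reused by QPACK; this is an illustration
only, not the actual qpack_encode_prefix_integer() implementation:

    #include <stddef.h>
    #include <stdint.h>

    /* Encode <val> with an <n>-bit prefix into <out> (prefix bits outside
     * the integer are assumed to be zero here). Returns the number of
     * bytes written, or 0 if <room> is too small.
     */
    static size_t encode_prefix_int(unsigned char *out, size_t room,
                                    uint64_t val, unsigned int n)
    {
            const unsigned char max = (1U << n) - 1;
            size_t pos = 0;

            if (!room)
                    return 0;

            if (val < max) {
                    out[pos++] = val;
                    return pos;
            }

            out[pos++] = max;
            val -= max;
            while (val >= 128) {
                    if (pos >= room)
                            return 0;
                    out[pos++] = (val & 0x7f) | 0x80;
                    val >>= 7;
            }
            if (pos >= room)
                    return 0;
            out[pos++] = val;
            return pos;
    }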
Replace a bogus call to b_data() with b_room() to check if there is enough
space left in the buffer before encoding a prefix integer.
At this moment, no real scenario was found to trigger a bug related to
this change. This is probably because the buffer always contains data
(field section line and status code) before calling
qpack_encode_prefix_integer() which prevents an occurrence of this bug.
If the client connection timeout has expired, the mux is released.
If the client decides to initiate a new request, we send a STOP_SENDING
frame. Then, the client endlessly sends a RESET_STREAM frame.
At this time, we simulate the fact that we support the RESET_STREAM frame
thanks to this ridiculously minimalistic patch.
If the client connection timeout has expired, the mux is released.
If the client decides to initiate a new request, we do not ack its
request. This leads the client to endlessly resend its request.
This patch makes a QUIC listener send a STOP_SENDING frame in such
a situation.
Add a new ->inc_err_cnt callback to the qcc_app_ops struct which can
be called from the xprt to increment the application level error code counters.
It takes the application context as first parameter to be generic and support
new QUIC applications to come.
Add h3_stats.c module with counters for all the frame types and error codes.
Rename "tune.quic.conn-buf-limit" to "tune.quic.frontend.conn-tx-buffers.limit"
to reflect the stream direction (TX) and the objects (frontends) which are
concerned.
Add new counters to count the number of dropped packets upon parsing error,
lost sent packets and the number of stateless reset packets sent.
Take the opportunity of this patch to rename CONN_OPENINGS to QUIC_ST_HALF_OPEN_CONN
(total number of half open connections) and QUIC_ST_HDSHK_FAILS to QUIC_ST_HDSHK_FAIL.
When we select the next encryption level in qc_treat_rx_pkts() we
must reset the local largest_pn variable if we do not want to reuse its
previous value for this encryption level. This bug could only happen during
the handshake step and had no visible impact because this variable
is only used during the header protection removal step which hopefully
supports packet reordering.
It is becoming difficult to distinguish the default values for
transport parameters which come with the RFC from our implementation
default values when not set by configuration (tunable parameters).
Add a comment to distinguish them.
Prefix these default values with QUIC_TP_DFLT_ to distinguish them from
QUIC_DFLT_* values even if they are not numerous.
Furthermore ->max_udp_payload_size must first be initialized to
QUIC_TP_DFLT_MAX_UDP_PAYLOAD_SIZE, especially for the received value.
Add tunable "tune.quic.frontend.max_streams_bidi" setting for QUIC frontends
to set the "initial_max_streams_bidi" transport parameter.
Add some documentation for this new setting.
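As an illustration, such a setting would sit in the global section (the
keyword is spelled here as in this commit message; the final documented
spelling may differ):

    global
        tune.quic.frontend.max_streams_bidi 256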
Add two tunable settings, for backends and frontends, for the "max_idle_timeout"
QUIC transport parameter: "tune.quic.frontend.max-idle-timeout" and
"tune.quic.backend.max-idle-timeout" respectively.
cfg_parse_quic_time() has been implemented to parse a time value thanks
to parse_time_err(). It should be reused for any tunable time value to be
parsed.
Add the documentation for this tunable setting only for the frontend.
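A minimal configuration illustration, assuming the usual time units accepted
by parse_time_err():

    global
        tune.quic.frontend.max-idle-timeout 30s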
Add a quic_transport_params_dump() static inline function to do so for
a quic_transport_parameters struct passed as parameter.
We use the trace API to dump these transport parameters both
after they have been initialized (RX/local) and when received (TX/remote).
Make the transport parameters code as standalone as possible, as
it consists only in encoding/decoding data into/from buffers.
Reduce the size of xprt_quic.h. Unfortunately, I think we will
have to continue to include <xprt_quic-t.h> to use the trace API
in this module.
We do not want to count the out-of-packet padding as belonging
to an invalid packet, the first byte of a QUIC packet being never null.
Some browsers like Firefox proceed this way to add PADDING frames
after an Initial packet and increase the size of their Initial packets.
In muxes, the stream-endpoint descriptor of a stream is always defined. Thus,
in .show_fd callback functions, there is no reason to test it.
This patch should address the issue #1727.
Thanks to the recent refactoring, when the tcpcheck_main() function is called,
the stream-connector of the healthcheck is always defined. There is no reason
to still test it.
This patch should fix the issue #1721.
The only two places where it was used were to carefully preserve the
SE_FL_WILL_CONSUME flag (since others are irrelevant there and the
previous RXBLK* flags moved to the stconn). Now that the flag is
cleared by default there's no need to re-create a fresh new one
when replacing the descriptor, so we can eliminate that remaining
trick.
Now that the data consumption from the endpoint is the default setting,
we can generalize the pre-clearing of the wont_consume flag, which is
no longer specific to applets. In practice it's not needed anymore to do
it, but since streams might be initiated from asynchronous applets,
these might have blocked their consumption side before creating the
stream, thus it's safer to preserve the clearing of the flag.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the fcgi_strm struct and in the few functions using this. The
name was also updated in the "show fd" output.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the h2s struct and in the few functions using this. The name
was also updated in the "show fd" output.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the h1s struct and in the few functions using this. The name
was also updated in the "show fd" output.
In ssl_action_wait_for_hs() the local variable called "cs" is just a
copy of s->scf that's only used once, so it can be removed. In addition
the check was removed as well since it's not possible to have a NULL SC
on a stream.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There was also one place in traces
where "cs" used to display the stconn, which were turned to "sc".
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 2 places in traces
where "cs" used to display the stconn, which were turned to "sc".
The "nb_cs" struct field and "h2_has_too_many_cs()" functions were
also renamed.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 2 places in traces
where "cs" used to display the stconn, which were turned to "sc".
h1s_upgrade_cs() and h1s_new_cs() were both renamed to _cs.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 3 places in debugging
traces where "cs" used to display the stconn, which were turned to "sc"
for similar reasons. The number of streams "nb_cs" was turned to "nb_sc".
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. Both the core functions and the ones in the
resolvers files were updated.
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The HTTP analyser and the backend functions
were all updated after being reviewed. Function stream_update_both_cs()
was renamed to stream_update_both_sc()
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The "nb_cs" stream-connector counter was
renamed to "nb_sc" and qc_attach_cs() was renamed to qc_attach_sc().
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The change is huge (~580 lines), so extreme
care was given not to change anything else.
The check struct had a "cs" field renamed to "sc", which also required
a tiny update to a few functions using it to distinguish a check from
a stream (log.c, payload.c, ssl_sample.c, tcp_sample.c, tcpcheck.c,
connection.c).
Function arguments and local variables called "cs" were renamed to "sc".
The presence of one "cs=" in the debugging traces was also turned to
"sc=" for consistency.
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
QUIC was the last user of entities with "conn_stream" in their names,
though there's no more reason for this given that the pool names were
already pretty straightforward. The renaming does this:
qc_stream_desc: pool_head_quic_conn_stream -> pool_head_quic_stream_desc
qc_stream_buf: pool_head_quic_conn_stream_buf -> pool_head_quic_stream_buf
An equivalent applet_need_more_data() was added as well since that function
is mostly used from applet code. It makes it much clearer that the applet
is waiting for data from the stream layer.
These ones are essentially for the stream endpoint, let's give them a
name that matches the intent. Equivalent versions were provided in the
applet namespace to ease code legibility.
The following flags are not at all related to the endpoint but to the
connector itself:
- SE_FL_RXBLK_ROOM
- SE_FL_RXBLK_BUFF
- SE_FL_RXBLK_CHAN
As such they have no business staying in the endpoint descriptor and
they must move to the stream connector. They've also been renamed
accordingly to better match what they correspond to (the same name
as the function that sets them).
The rare occurrences of cs_rx_blocked() were replaced by an explicit
test on the list of flags. The reason is that cs_rx_blocked() used to
preserve some tests that are not needed at certain places since already
known. For the same reason SE_FL_RXBLK_ANY wasn't converted. As such it
will later be possible to carefully review these few locations and
eliminate the unneeded flags from the tests. No particular function
was made to test them since they're explicit enough.
It now looks like ci_putchk() and friends could very well place the flag
themselves on the connector when they detect a buffer full condition, as
this would significantly simplify the high-level API. But all usages must
first be reviewed before this simplification can be done. For now it
remains done by applet_put*() instead.
It's more explicit this way. The cs_rx_endp_ready() function could be
removed so that the flag is directly tested. In the future it should
be inverted and the few places where it's set (or preserved via
SE_FL_APP_MASK) could be dropped.
At plenty of places we combine multiple flags checks to determine if we
can receive (endp_ready, rx_blocked, cf_shutr etc). Let's group them
under a single function that is meant to replace existing tests.
Some tests were only checking the rxblk flags at the connection level,
so for now they were not converted, this requires a bit of auditing
first, and probably a test to determine whether or not to check for
cf_shutr (e.g. there is none if no stream is present).
The analysis of cs_rx_endp_more() showed that the purpose is for a stream
endpoint to inform the connector that it's ready to deliver more data to
that one, and conversely cs_rx_endp_done() that it's done delivering data
so it should not be bothered again for this.
This was modified two ways:
- the operation is no longer performed on the connector but on the
endpoint so that there is no more doubt when reading applet code
about what this rx refers to; it's the endpoint that has more or
no more data.
- an applet implementation is also provided and mostly used from
applet code since it saves the caller from having to access the
endpoint descriptor.
It's visible that the flag ought to be inverted because some places
have to set it by default for no reason.
These functions are used by the application layer to disable or enable
reading at the stream connector's level when the input buffer failed to
be allocated (or was finally allocated). The new names make things
clearer.
These functions were used by the channel to inform the lower layer
whether reading was acceptable or not. Usually this directly mimics
the CF_DONT_READ flag from the channel, which may be set when it's
desired not to buffer incoming data that will not be processed, or
that the buffer wants to be flushed before starting to read again,
or that bandwidth limiting might be enforced, etc. It's always a
policy reason, not a purely resource-based one.
The new name more clearly indicates that a stream connector cannot make
any more progress because it needs room in the channel buffer, or that
it may be unblocked because the buffer now has more room available. The
testing function is sc_waiting_room(). This is mostly used by applets.
Note that the flags will change soon.
This makes SE_FL_APPLET_NEED_CONN autonomous, in that we check for it
everywhere we have a relevant cs_rx_blocked(), so that the flag doesn't
need anymore to be covered by cs_rx_blocked(). Indeed, this flag doesn't
really translate a receive blocking condition but rather a refusal to
wake up an applet that is waiting for a connection to finish to setup.
This also ensures we will not risk to set it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
It's possible that cs_chk_rcv() could later be further simplified if
we can demonstrate that the two tests in it can be merged.
This flag is exclusively used when a front applet needs to wait for the
other side to connect (or fail to). Let's give it a more explicit name
and remove the ambiguous function that was used only once.
This also ensures we will not risk to set it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
This flag is no more needed, it was only set on shut read to be tested
by cs_rx_blocked() which is now properly tested for shutr as well. The
cs_rx_blk_shut() calls were removed. Interestingly it allowed to remove
a special case in the L7 retry code.
This also ensures we will not risk to set it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK.
One flag (RXBLK_SHUT) is always set with CF_SHUTR, so in order to remove
it, we first need to make sure we always check for CF_SHUTR where
cs_rx_blocked() is being used.
sc_is_send_allowed() is now used everywhere instead of the combination
of cs_tx_endp_ready() && !cs_tx_blocked(). There's no place where we
need them individually thus it's simpler. The test was placed in cs_util
as we'll complete it later.
First it applies to the stream endpoint and not the conn_stream, and
second it only tests and touches the flags so it makes sense to call
it se_fl_ like other functions which only manipulate the flags, as
it's just a special case of flags.
It returns an stconn from a connection and not the opposite, so the name
change was more appropriate. In addition it was moved to connection.h
which manipulates the connection stuff, and it happens that only
connection.c uses it.
The following functions which act on a connection-based stream connector
were renamed to sc_conn_* (~60 places):
cs_conn_drain_and_shut
cs_conn_process
cs_conn_read0
cs_conn_ready
cs_conn_recv
cs_conn_send
cs_conn_shut
cs_conn_shutr
cs_conn_shutw
The function doesn't return a pointer to the mux but to the mux stream
(h1s, h2s etc). Let's adjust its name to reflect this. It's rarely used,
the name can be enlarged a bit. And of course s/cs/sc to accommodate for
the updated name.
These functions return the app-layer associated with an stconn, which
is a check, a stream or a stream's task. They're used a lot to access
channels, flags and for waking up tasks. Let's just name them
appropriately for the stream connector.
We're starting to propagate the stream connector's new name through the
API. Most call places of these functions that retrieve the channel or its
buffer are in applets. The local variable names are not changed in order
to keep the changes small and reviewable. There were ~92 uses of cs_ic(),
~96 of cs_oc() (due to co_get*() being less factorizable than ci_put*),
and ~5 accesses to the buffer itself.
This applies the change so that the applet code stops using ci_putchk()
and friends everywhere possible, in favor of the much safer applet_put*().
The change is mechanical but large. Two or three functions used to have no
appctx and a cs derived from the appctx instead, which was a reminiscence
of old times' stream_interface. These were simply changed to directly take
the appctx. No sensitive change was performed, and the old (more complex)
API is still usable when needed (e.g. the channel is already known).
The change touched roughly a hundred locations, with no less than 124
lines removed.
It's worth noting that the stats applet, the oldest of the series, could
get a serious lifting, as it's still very channel-centric instead of
propagating the appctx along the chain. Given that this code doesn't
change often, there's no emergency to clean it up but it would look
better.
For historical reasons (stream-interface and connections), we used to
require two independent fields for the application level callbacks and
the transport-level functions. Over time the distinction faded away so
much that the low-level functions became specific to the application
and conversely. For example, applets may only work with streams on top
since they rely on the channels, and the stream-level functions differ
between applets and connections. Right now the application level only
contains a wake() callback and the low-level ones contain the functions
that act at the lower level to perform the shutr/shutw and at the upper
level to notify about readability and writability. Let's just merge them
together into a single set and get rid of this confusing distinction.
Note that the check ops do not define any app-level function since these
are only called by streams.
This also follows the natural naming. There are roughly 238 changes, all
totally trivial. conn_stream-t.h has become completely void of any
"conn_stream" related stuff now (except its name).
This renames the "struct conn_stream" to "struct stconn" and updates
the descriptions in all comments (and the rare help descriptions) to
"stream connector" or "connector". This touches a lot of files but
the change is minimal. The local variables were not even renamed, so
there's still a lot of "cs" everywhere.
Let's start to introduce the stream connector at the app_ops level.
This is entirely self-contained into conn_stream.c. The functions
were also updated to reflect the new name, and the comments were
updated.
Just like for the appctx, this is a pointer to a stream endpoint descriptor,
so let's make this explicit and not confuse it with the full endpoint. There
are very few changes thanks to the preliminary refactoring of the flags
manipulation.
Now at least it makes it obvious that it's the stream endpoint descriptor
and not an endpoint. There were few changes thanks to the previous refactor
of the flags.
After some discussion we found that the cs_endpoint was precisely the
descriptor for a stream endpoint, hence the naturally coming name:
stream endpoint descriptor.
This patch renames only the type everywhere and the new/init/free functions
to remain consistent with it. Future patches will address field names and
argument names in various code areas.
That's the "stream endpoint" pointer. Let's change it now while it's
not much spread. The function __cs_endp_target() wasn't yet renamed
because that will change more globally soon.
This changes all main uses of endp->flags to the se_fl_*() equivalent
by applying coccinelle script endp_flags.cocci. The se_fl_*() functions
themselves were manually excluded from the change, of course.
Note: 144 locations were touched, manually reviewed and found to be OK.
The script was applied with all includes:
spatch --in-place --recursive-includes -I include --sp-file $script $files
This changes all main uses of cs->endp->flags to the sc_ep_*() equivalent
by applying coccinelle script cs_endp_flags.cocci.
Note: 143 locations were touched, manually reviewed and found to be OK,
except a single one that was adjusted in cs_reset_endp() where the flags
are read and filtered to be used as-is and not as a boolean, hence was
replaced with sc_ep_get() & $FLAGS.
The script was applied with all includes:
spatch --in-place --recursive-includes -I include --sp-file $script $files
This one is exclusively used by the connection, regardless its generic
name "ctx" is rather confusing. Let's make it a struct connection* and
call it "conn". This way there's no doubt about what it is and there's
no way it will be used by accident by being taken for something else.
This test in cs_update_rx() was introduced in 1.9 by commit b26a6f970
("MEDIUM: stream-int: make use of si_rx_chan_{rdy,blk} to control the
stream-int from the channel"), but by then already it was not needed
because the RX_WAIT_EP flag has never been part of RXBLK_ANY so there's
no point doing "flags & RXBLK_ANY & ~RX_WAIT_EP", that part is already
complicated enough like this.
Adjust the size of the sample buffer before we change the "area"
pointer. Otherwise, we end up not changing the size, because the area
pointer is already the same as "start" before we compute the difference
between the two.
This is similar to the change in b28430591d
but for the word converter instead of field.
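A minimal sketch of the ordering issue, assuming the usual sample layout
(smp->data.u.str.area / .data) and a local <start> pointer into the area:

    /* wrong: once "area" is overwritten, the difference is always 0 and
     * the size is never shrunk:
     *     smp->data.u.str.area = start;
     *     smp->data.u.str.data -= start - smp->data.u.str.area;
     */

    /* right: adjust the size while "area" still points to the original
     * beginning of the sample, then move the pointer.
     */
    smp->data.u.str.data -= start - smp->data.u.str.area;
    smp->data.u.str.area = start;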
Fix a typo that led to using the wrong pointer when loading a
certificate, which led to always using the pem loader for every
parameter.
Use the cert_ext->load() ptr instead of cert_exts->load(), which was the
first element of the cert_exts[] array.
Enhance the error message with the field name.
Should fix issue #1716
Commit 2cb3be76b ("CLEANUP: init: address a coverity warning about
possible multiply overflow") was incomplete, two other locations were
present. This should address issue #1585.
Bring some improvements to the h3_parse_settings_frm() function. The first one
is the parsing which now manipulates a buffer instead of a plain char*.
This is more to unify with other parsing functions than to deal
with data wrapping : it's unlikely to happen as SETTINGS is only
received as the first frame on the control STREAM.
Various errors are now properly reported as connection errors :
* on incomplete frame payload
* on a duplicated setting in the same frame
* on receipt of a reserved setting
As specified by the HTTP/3 draft, an unknown unidirectional stream can be
aborted. To do this, use a new flag QC_SF_READ_ABORTED. When the MUX
detects this flag, the QCS instance is automatically freed.
Previously, such streams were instead automatically drained. By aborting
them, we save some useless memcpy instructions. On future data
reception, the QCS instance is not found in the tree and considered as
already closed. The frame payload is thus deleted without copying it.
Remove all unnecessary bits of code for H3 unidirectional streams. Most
notably, an individual tasklet is not required anymore for each stream.
This is useless since the merge of RX/TX uni stream handling with the
bidirectional streams code.
The whole QUIC stack is impacted by this change :
* at quic-conn level, a single function is now used to handle uni and
bidirectional streams. It uses the qcc_recv() function from the MUX.
* at MUX level, the qc_recv() io-handler function does not skip uni streams
* most changes are conducted at the app layer. Most notably, all received
data is handled by the decode_qcs operation.
Now that decode_qcs is the single app read function, the H3 layer can be
simplified. Uni streams parsing was extracted from h3_attach_ruqs() to
h3_decode_qcs().
h3_decode_qcs() is able to deal with all HTTP/3 frame types. It first
checks if the frame is valid for the H3 stream type. Most notably,
SETTINGS parsing was moved from h3_control_recv() into h3_decode_qcs().
This commit has some major benefits besides removing duplicated code.
Mainly, QUIC flow control is now enforced for uni streams as with bidi
streams. Also, an unknown frame received on control stream does not set
an error : it is now silently ignored as required by the specification.
Some cleaning in H3 code is already done with this patch :
h3_control_recv() and h3_attach_ruqs() are removed as they are now
unused. A final patch should clean up the unneeded remaining bit.
Define a new function h3_parse_uni_stream_no_h3(). It can be used to
handle the payload of streams which do not convey H3 frames. This is
mainly useful for QPACK encoder/decoder streams. It can also be used for
a stream of unknown type which should be drained without parsing it.
This patch is useful to extract code into a dedicated function. It will be
simple to reuse it in h3_decode_qcs() when uni-stream reception is
unified with bidirectional streams, without using dedicated stream tasklets.
Define a new function h3_is_frame_valid(). It returns whether a frame is
valid or not depending on the stream which received it.
For the moment, it is used in h3_decode_qcs() which only deals with
bidirectional streams. Soon, uni streams will use the same function,
rendering the frame type check useful.
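A rough sketch of what such a check amounts to per RFC 9114; the H3S_T_*
stream-type values below are assumptions based on this series, and the real
h3_is_frame_valid() may differ:

    /* illustration only: tell whether frame type <ftype> may appear on a
     * stream of type <stype>.
     */
    static int h3_frame_allowed(uint64_t ftype, enum h3s_t stype)
    {
            switch (ftype) {
            case H3_FT_DATA:
            case H3_FT_HEADERS:
                    return stype == H3S_T_REQ;  /* request streams only */
            case H3_FT_SETTINGS:
            case H3_FT_GOAWAY:
            case H3_FT_MAX_PUSH_ID:
                    return stype == H3S_T_CTRL; /* control stream only */
            default:
                    return 1; /* unknown types are ignored, not rejected */
            }
    }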
Define a new function h3_init_uni_stream(). This can be used to read the
stream type of an unidirectional stream. There is no functional change
with previous code.
This patch will be useful to unify reception for uni streams with
bidirectional ones.
Define a new enum h3s_t. This is used to differentiate between the
different stream types used in an HTTP/3 connection, including the QPACK
encoder/decoder streams.
For the moment, only bidirectional streams are positioned. This patch
will be useful to unify the reception of uni streams with bidirectional
ones.
Replace h3_uqs type by qcs in stream callbacks. This change is done in
the context of unification between bidi and uni-streams. h3_uqs type
will be unneeded when this is achieved.
Remove the unneeded skip over unidirectional streams in qc_send(). This
unifies sending for both uni and bidi streams.
In fact, the only local unidirectional stream in use for the moment is
the H3 control stream responsible for SETTINGS emission. The frame was
already properly generated in qcs.tx.buf, but not sent due to the stream
skip in qc_send(). Now, there is no need to ignore uni streams so remove
this condition.
This fixes the emission of H3 SETTINGS, which are now properly emitted.
Uni and bidi streams use the same set of functions for sending. One of
the most notable gains is that flow-control is now enforced for uni
streams.
Emit a STREAM_STATE_ERROR connection error in two cases :
* if receiving data for a send-only stream
* if receiving data on a locally initiated stream not opened yet
For the moment the first case cannot be encountered as uni stream
reception does not use qcc_recv(). However, this will soon be
implemented with the unification between bidi and uni streams.
The whole frame payload must have been received to demux an H3 frame,
except for H3 DATA which can be fragmented into multiple HTX blocks.
If the frame is bigger than the buffer and is not a DATA frame, a
connection error is reported with error H3_EXCESSIVE_LOAD.
This should be completed in the future with the H3 settings to limit the
size of the uncompressed header section.
This code is more generic : it can handle every H3 frame. This is done
in order to be able to use h3_decode_qcs() to demux both uni and bidir
streams.
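A minimal sketch of the size check described above, with assumed local names
(<ftype>/<flen> for the parsed frame header, <rxbuf> for the stream RX buffer)
and the qcc_emit_cc_app() helper mentioned elsewhere in this series (its exact
prototype is an assumption here):

    /* a non-DATA frame must fit entirely in the RX buffer, otherwise the
     * peer is forcing us to buffer an unreasonably large frame.
     */
    if (ftype != H3_FT_DATA && flen > b_size(&rxbuf)) {
            qcc_emit_cc_app(qcs->qcc, H3_EXCESSIVE_LOAD);
            return -1;
    }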
Similar to sending, read operations are disabled when a CONNECTION_CLOSE
frame has been emitted.
Most notably, this prevents unneeded demuxing loops when the H3 layer has
issued an error and cannot process the buffer payload anymore.
Note that read is not prevented for unidirectional streams for the
moment. This will be supported soon with the unification of bidir and uni
streams treatment.
Complete quic-conn API for error reporting. A new parameter <app> is
defined in the function quic_set_connection_close(). This will transform
the frame into a CONNECTION_CLOSE_APP type.
This type of frame will be generated by the applicative layer, h3 or
hq-interop for the moment. A new function qcc_emit_cc_app() is exported
by the MUX layer for them.
The only change is that the H3_CF_SETTINGS_SENT flag if-condition is
replaced by a BUG_ON statement. This may help to catch multiple calls to
h3_control_send() instead of silently ignoring them.
h3_parse_settings_frm() reads one byte after the frame payload. Fix the
parsing code. In most cases, this has no impact as we are inside an
allocated buffer but it could cause a segfault depending on the buffer
alignment.
struct h3 represents the whole HTTP/3 connection. A new type h3s was
recently introduced to represent a single HTTP/3 stream. To facilitate
the analogy with other haproxy code, most notably in the MUX, rename the
h3 type to h3c.
Do not allocate a cs_endpoint for every QCS instance in qcs_new().
Instead, this is delayed to the qc_attach_cs() function.
In effect, with H3 as the app protocol, the cs_endpoint will be allocated on
HEADERS parsing. Thus, no cs_endpoint is allocated for H3 unidirectional
streams which do not convey any HTTP data.
h3_b_dup() is used to obtain a struct buffer representation of a ncbuf.
The ncbuf can thus be marked as a const parameter. This will allow
functions which already manipulate a const ncbuf to use it.
The previous fix:
BUG/MEDIUM: peers: fix segfault using multiple bind on peers
prevents declaring multiple listeners on a peers section, but if the
peers protocol is extended to support this we could raise the bug
again.
Indeed, after allocating a new listener and adding it to a list, the
code mistakenly re-configures the first element of the list instead
of the newly added one, and the last one finally remains uninitialized.
The previous fix assures there is no more than one listener in this
list but this could be changed in the future.
This patch assures we configure and initialize the newly added
listener instead of the first one in the list.
This patch could be backported until version 2.0 to complete
BUG/MEDIUM: peers: fix segfault using multiple bind on peers
If multiple "bind" lines were present on the "peers" section, multiple
listeners were added to a list but the code mistakenly initialize
the first member and this first listener was re-configured instead of
the newly created one. The last one remains uninitialized causing a null
dereference a soon a connection is received.
In addition, the 'peers' sections and protocol are not currently designed to
handle multiple listeners.
This patch check if there is already a listener configured on the 'peers'
section when we want to create a new one. This is rising an error if
a listener is already present showing the file and line in the error
message.
To keep the file and line number of the previous listener available
for the error message, the 'bind_conf_uniq_alloc' function was modified
to keep the file/line data the struct 'bind_conf' was firstly
allocated (previously it was updated each time the 'bind_conf' was
reused).
This patch should be backported until version 2.0
The resolvers_deinit() function is called on error, during the post-parsing
stage, or on deinit, when HAProxy is stopped. It releases all entities:
resolvers, resolutions and SRV requests. There is no reason to defer the
resolutions release by moving them to the death_row list because this function
is terminal. And it is in fact a bug. Resolutions must not be released at the
end of the function because the resolvers were already freed. However some
resolutions may still be attached to a resolver. Thus, when we try to remove
it from the resolver's tree, in resolv_reset_resolution(), this resolver was
already released.
So now, resolutions are immediately released. It means there is no more
reason to track this function. Calls to
enter_resolver_code()/leave_resolver_code() have been removed.
This patch should fix the issue #1680 and may be related to #1485. It must
be backported as far as 2.2.
We used to support both RTSP and HTTP protocol version names with and
without accept-invalid-http-request, but since this is based on the
characters themselves, any protocol made of chars {0-9/.HPRST} was
possible and not others. Now that such non-standard protocols are
restricted to accept-invalid-http-request, there's no reason for not
allowing other letters. With this patch, characters {0-9./A-Z} are
permitted when the option is set.
This patch hardens the verification of the HTTP/1.x version line
(i.e. the first line within an HTTP/1.x request) to verify that
the protocol name within the version actually reads "HTTP".
Previously protocols that superficially resembled the wire-format
of HTTP/1.x and having a 4-letter acronym as the protocol name, such
as RTSP would pass this check.
This patch fixes GitHub issue #540, it must be backported to all
supported versions. The legacy, non-HTX parser is affected as well,
a fix must be created for it as well.
Note that such protocols can still be used when option
accept-invalid-http-request is set.
In issue #1585 Coverity suspects a risk of multiply overflow when
calculating the SSL cache size, though in practice the cache is
limited to 2^32 anyway thus it cannot really happen. Nevertheless,
casting the operation should be sufficient to avoid marking it as a
false positive.
This reverts commit 118b2cbf84.
This patch was useful mainly for the docker image of QUIC interop to
have traces on stdout.
A better solution has been found by integrating this patch directly in
the qns repository which is used to build the docker image. Thus, this
hack is not required anymore in the main repository.
The wake handler detects if the frontend is closed. This can happen if
the proxy has been disabled individually or even on process soft-stop.
Before this patch, in this condition QCS instances were freed before
being detached from the cs_endpoint. This clearly violates the haproxy
connection architecture and causes a BUG_ON statement crash in cs_free().
To handle this properly, cs_endpoint is notified by setting RD_SH|WR_SH
on connection flags. The cs_endpoint will thus use the detach operation
which allows the QCS instance to be freed.
This code allows the soft-stop process to complete as soon as possible.
However, the client is not notified about the connection closing. It
should be done by emitting a H3 GOAWAY + CONNECTION_CLOSE. Sadly, this
is impossible at this stage because the listener sockets are closed so
the quic-conn cannot use it to emit new frames. At this stage the client
will most probably detect connection closing on its idle timeout
expiration.
Thus, to completely support proxy closing/soft-stop, important
architecture changes are required in QUIC socket management. This is
also linked with the reload feature.
The given size must be the size of the destination buffer, not the size of the
(binary) address representation.
This fixes GitHub issue #1599.
The bug was introduced in 92149f9a82 which is in
2.4+. The fix must be backported there.
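For illustration, the classic shape of this mistake with the standard
inet_ntop() call (the actual call site fixed here may differ):

    #include <arpa/inet.h>

    struct in_addr addr = { .s_addr = htonl(INADDR_LOOPBACK) };
    char dst[INET_ADDRSTRLEN];

    /* wrong: sizeof(addr) is the size of the binary address (4 bytes),
     * so the textual output gets rejected or truncated:
     *     inet_ntop(AF_INET, &addr, dst, sizeof(addr));
     */

    /* right: the last argument is the size of the destination buffer */
    inet_ntop(AF_INET, &addr, dst, sizeof(dst));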
If QUIC support is enabled both branches of the ternary conditional are
identical, upsetting Coverity. Move the full conditional into the non-QUIC
preprocessor branch to make the code more clear.
This resolves GitHub issue #1710.
When we receive a CONNECTION_CLOSE frame, we should decrement this counter
if the handshake state was not successful and if we have not received
a TLS alert from the TLS stack.
When a bind line is configured without the "ssl" keyword, a warning is
emitted and a crash happens at runtime:
bind quic4@:4449 crt rsa+dh2048.pem alpn h3 allow-0rtt
[WARNING] (17867) : config : Proxy 'decrypt': A certificate was specified but SSL was not enabled on bind 'quic4@:4449' at [quic-mini.cfg:24] (use 'ssl').
Let's automatically turn SSL on when QUIC is detected, as it doesn't
exist without SSL anyway. It solves the runtime issue, and also makes
sure it is not possible to accidentally configure a quic listener with
no certificate since the error is detected via the SSL checks.
A warning is emitted in this case, to encourage the user to fix the
configuration so that it remains reviewable.
When no mux protocol is configured on a bind line with "proto", and the
transport layer is QUIC, right now mux_h1 is being used, leading to a
crash.
Now when the transport layer of the bind line is already known as being
QUIC, let's automatically try to configure the QUIC mux, so that users
do not have to enter "proto quic" all the time while it's the only
supported option. This means that the following line now works:
bind quic4@:4449 ssl crt rsa+dh2048.pem alpn h3 allow-0rtt
Till now, placing "proto h1" or "proto h2" on a "quic" bind or placing
"proto quic" on a TCP line would parse fine but would crash when traffic
arrived. The reason is that there's a strong binding between the QUIC
mux and QUIC transport and that they're not expected to be called with
other types at all.
Now that we have the mux's type and we know the type of the protocol used
on the bind conf, we can perform such checks. This now returns:
[ALERT] (16978) : config : frontend 'decrypt' : stream-based MUX protocol 'h2' is incompatible with framed transport of 'bind quic4@:4448' at [quic-mini.cfg:27].
[ALERT] (16978) : config : frontend 'decrypt' : frame-based MUX protocol 'quic' is incompatible with stream transport of 'bind :4448' at [quic-mini.cfg:29].
This config tightening is only tagged MINOR since such a config,
despite not reporting any error, cannot work at all, so even if it breaks
experimental configs, they were just waiting for a single connection
to crash.
In order to be able to check compatibility between muxes and transport
layers, we'll need a new flag to tag muxes that work on framed transport
layers like QUIC. Only QUIC has this flag now.
We used to preset XPRT_SSL on bind_conf->xprt when parsing the "ssl"
keyword, which required to be careful about what QUIC could have set
before, and which makes it impossible to consider the whole line to
set all options.
Now that we have the BC_O_USE_SSL option on the bind_conf, it becomes
easier to set XPRT_SSL only once the bind_conf's args are parsed.
It used to be set when parsing the listeners' addresses but this comes
with some difficulties in that other places have to be careful not to
replace it (e.g. the "ssl" keyword parser).
Now we know what protocols a bind_conf line relies on, we can set it
after having parsed the whole line.
Now that we have a function to parse all bind keywords, and that we
know what types of sock-level and xprt-level protocols a bind_conf
is using, it's easier to centralize the check for stream vs dgram
conflict by putting it directly at the end of the args parser. This
way it also works for peers, provides better precision in the report,
and will also allow to validate transport layers. The check was even
extended to detect inconsistencies between xprt layer (which were not
covered before). It can even detect that there are two incompatible
"bind" lines in a single peers section.
Let's collect the set of xprt-level and sock-level dgram/stream protocols
seen on a bind line and store that in the bind_conf itself while they're
being parsed. This will make it much easier to detect incompatibilities
later than the current approach which consists in scanning all listeners
in post-parsing.
This now makes sure that both the peers' "bind" line and the regular one
will use the exact same parser with the exact same behavior. Note that
the parser applies after the address and that it could be factored
further, since the peers one still does quite a bit of duplicated work.
The "bind" parsing code was duplicated for the peers section and as a
result it wasn't kept updated, resulting in slightly different error
behavior (e.g. errors were not freed, warnings were emitted as alerts).
Let's first unify it into a new dedicated function that properly reports
and frees the error.
There's been some great confusion between proto_type, ctrl_type and
sock_type. It turns out that ctrl_type was improperly chosen because
it's not the control layer that is of this or that type, but the
transport layer, and it turns out that the transport layer doesn't
(normally) denaturate the underlying control layer, except for QUIC
which turns dgrams to streams. The fact that the SOCK_{DGRAM|STREAM}
set of values was used added to the confusion.
Let's replace it with xprt_type which reuses the later introduced
PROTO_TYPE_* values, and update the comments to explain which one
works at what level.
Just trying "quic4@:4433" with USE_QUIC not set rsults in such a cryptic
error:
[ALERT] (14610) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported protocol family 2 for address 'quic4@:4433'
Let's at least add the stream and datagram statuses to indicate what was
being looked for:
[ALERT] (15252) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported stream protocol for datagram family 2 address 'quic4@:4433'
Still not very pretty but gives a little bit more info.
In case the str2listener() parser reports a generic error with no message
when parsing the argument of a "bind" statement in a "peers" section, the
reported error indicates an invalid address on the empty arg. This has
existed since 2.0 with commit 355b2033e ("MINOR: cfgparse: SSL/TLS binding
in "peers" sections."), so this must be backported till 2.0.
As specified by the RFC, reception of different STREAM data for the same
offset should be treated with a CONNECTION_CLOSE with error
PROTOCOL_VIOLATION.
Use the ncbuf API to detect this case : the add operation fails with
NCB_RET_DATA_REJ when used with add mode NCB_ADD_COMPARE.
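A small sketch of the detection path, assuming the ncb_add() prototype from
the ncbuf API and simplified names for the received STREAM frame fields
(offset, data, len) and the qcs->rx.ncbuf storage:

    enum ncb_ret ret;

    /* NCB_ADD_COMPARE rejects data which overlaps already buffered bytes
     * with a different content.
     */
    ret = ncb_add(&qcs->rx.ncbuf, offset, data, len, NCB_ADD_COMPARE);
    if (ret == NCB_RET_DATA_REJ) {
            /* same offset, different data: the peer violates the protocol,
             * so schedule a CONNECTION_CLOSE with error PROTOCOL_VIOLATION
             * (helper call elided here).
             */
            return 1;
    }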
Send a CONNECTION_CLOSE on reception of a STREAM frame for a STREAM id
exceeding the maximum value enforced. Only implemented for bidirectional
streams for the moment.
Send a CONNECTION_CLOSE if the peer emits more data than authorized by
our flow-control. This is implemented for both stream and connection
level.
Fields have been added in qcc/qcs structures to differentiate received
offsets for limit enforcing with consumed offsets for sending of
MAX_DATA/MAX_STREAM_DATA frames.
Define an API to easily set a CONNECTION_CLOSE. This will mainly be
useful for the MUX when an error is detected which requires closing the
whole connection.
On the MUX side, a new flag is added when a CONNECTION_CLOSE has been
prepared. This will disable future send operations.
We rely on the <conn_opening> stats counter and the tune.quic.retry_threshold
setting to dynamically start sending Retry packets. We continue to send such
packets when the "quic-force-retry" setting is set. The difference is when we
receive tokens. We check them regardless of this setting because the Retry
could have been dynamically started. We must also send Retry packets when we
receive Initial packets without a token if the dynamic Retry threshold was
reached, but only for connections which are not currently opening, or in other
words for Initial packets without a connection already instantiated. Indeed,
we must not send Retry packets for all Initial packets without a token. For
instance a client may have already sent an Initial packet without receiving a
Retry packet because the Retry feature was not started, then the Retry starts
on exceeding the threshold value due to other connections, then finally our
client decides to send another Initial packet (to ACK Initial CRYPTO data for
instance). It does this without a token. So, for this already existing
connection we must not send a Retry packet.
This QUIC specific keyword may be used to set the threshold, in number of
connection openings, beyond which the QUIC Retry feature will be automatically
enabled. Its default value is 100.
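As an illustration (the keyword is spelled here after the setting name
mentioned above; the documented form may differ):

    global
        tune.quic.retry_threshold 100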
First commit to handle the QUIC stats counters. There is nothing special to say
except perhaps for ->conn_openings which is a gauge to count the number of
connection openings. It is incremented after having instantiated a quic_conn
struct, then decremented when the handshake was successful (handshake completed
state) or failed or when the connection timed out without reaching the handshake
completed state.
Move the code which finalizes the QUIC connection initialization after
qc_new_conn() has been called into this function, to benefit from its
error handling and release the memory allocated for QUIC connections
whose initialization could not be finalized.
Here is the format of a token:
- format (1 byte)
- ODCID (from 9 up to 21 bytes)
- creation timestamp (4 bytes)
- salt (16 bytes)
A format byte is required to distinguish the Retry token from others sent in
NEW_TOKEN frames.
The Retry token is ciphered after having derived a strong secret from the
cluster secret and generated the AEAD AAD, as well as a 16-byte salt. This
salt is added to the token. Obviously it is not ciphered. The format byte is
not ciphered either.
The AAD is built by quic_generate_retry_token_aad() which concatenates the
version, the client SCID and the IP address and port. We had to implement
quic_saddr_cpy() to copy the IP address and port to the AAD buffer. Only the
Retry SCID is generated on our side to build a Retry packet, the other fields
come from the first packet received from the client. The client must reuse
this Retry SCID in response to our Retry packet, so we do not have to store it
on our side. Everything is offloaded to the client (stateless).
quic_generate_retry_token() must be used to generate a Retry token. It calls
quic_pkt_encrypt() to cipher the token.
quic_generate_retry_check() must be used to check the validity of a Retry token.
It is able to decipher a token which arrives in an Initial packet sent in
response to a Retry packet. It calls parse_retry_token() after having
deciphered the token to store the ODCID into a local quic_cid struct variable.
Finally this ODCID may be stored into the transport parameters thanks to
qc_lstnr_params_init().
The Retry token lifetime is 10 seconds. This lifetime is also checked by
quic_generate_retry_check(). If quic_generate_retry_check() fails, the received
packet is dropped without any further packet processing at this time.
This function does exactly the same thing as quic_tls_decrypt(), except that
it reuses its input buffer as output buffer. This is needed
to decrypt the Retry token without modifying the packet buffer which
contains this token. Indeed, modifying it would prevent us from decrypting
the packet itself as the token belongs to the AEAD AAD for the packet.
This function must be used to derive strong secrets from a non pseudo-random
secret (the cluster-secret setting in our case) and an IV. First it calls
quic_hkdf_extract_and_expand() to do that for a temporary strong secret
(tmpkey), then makes two calls to quic_hkdf_expand() reusing this strong
temporary secret to derive the final strong secret and IV.
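A rough sketch of the derivation flow described above, using the function
names from this series; the exact argument lists and labels are assumptions
and only illustrate the extract-then-expand chaining:

    unsigned char tmpkey[32], key[32], iv[12];

    /* 1) extract-and-expand a temporary strong secret from the
     *    (non pseudo-random) cluster secret and a salt/IV.
     */
    quic_hkdf_extract_and_expand(md, tmpkey, sizeof(tmpkey),
                                 cluster_secret, cluster_secret_len,
                                 salt, salt_len);

    /* 2) expand it twice to obtain the final strong secret and IV */
    quic_hkdf_expand(md, key, sizeof(key), tmpkey, sizeof(tmpkey),
                     (const unsigned char *)"strong secret", 13);
    quic_hkdf_expand(md, iv, sizeof(iv), tmpkey, sizeof(tmpkey),
                     (const unsigned char *)"strong iv", 9);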