In 1.9-dev1, commit 5bc9972ed ("BUG/MINOR: lua/threads: Make lua's tasks
sticky to the current thread") started to detect unconfigured Lua tasks
that could run on any thread, by comparing their thread mask with
MAX_THREADS_MASK. In fact the proper way to do it is to check for at
least 2 threads in their mask. This is more reliable and allows to get
rid of MAX_THREADS_MASK there.
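The "at least 2 bits set" test is the usual mask trick; a minimal
sketch (the atleast2() helper name is an assumption here):

    /* non-zero if at least two bits are set in <m>: clearing the
     * lowest set bit leaves something behind only if another bit
     * remains */
    static inline unsigned long atleast2(unsigned long m)
    {
        return m & (m - 1);
    }

    if (atleast2(t->thread_mask)) {
        /* more than one thread allowed: task was not configured */
    }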
Each thread has its own local thread ID and its own global thread ID,
in addition to the masks corresponding to each. Once the global thread
ID can go beyond 64 it will not be possible to have a global thread ID
bit anymore, so better start to remove it and use only the local one
from the struct thread_info.
This simply replaces a call to task_new(1<<thr) with task_new_on(thr)
so that we can later isolate the changes required to add more thread
group stuff.
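As a sketch, the new call is essentially a thin wrapper hiding the mask
computation (assuming task_new() keeps taking a thread mask for now):

    /* hypothetical shape of the wrapper */
    static inline struct task *task_new_on(unsigned int thr)
    {
        return task_new(1UL << thr);
    }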
Instead of having a global mask of all the profiled threads, let's have
one flag per thread in each thread's flags. They are never accessed more
than one at a time and are better located inside the threads' contexts
for both performance and scalability.
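The resulting pattern might look like this (a sketch; the
TH_FL_TASK_PROFILING flag name and the th_ctx/ha_thread_ctx accessors
are assumptions based on this description):

    /* enable profiling for the current thread only */
    th_ctx->flags |= TH_FL_TASK_PROFILING;

    /* consult another thread without touching any shared mask */
    if (ha_thread_ctx[thr].flags & TH_FL_TASK_PROFILING) {
        /* thread <thr> is currently profiled */
    }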
LIST_ELEM macro was incorrectly used in the loop when purging
flow-control frames from qcc.lfctl.frms on MUX release. This caused a
segfault in qc_release() due to an invalid quic_frame pointer instance.
The occurrence of this bug seems fairly rare. To happen, some
flow-control frames must have been allocated but not yet sent just as
the MUX release is triggered.
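For reference, a correct purge would look roughly like this (a sketch
based on HAProxy's intrusive list helpers, not the literal patch):

    struct quic_frame *frm, *back;

    /* safe iteration: detach each frame before freeing it */
    list_for_each_entry_safe(frm, back, &qcc->lfctl.frms, list) {
        LIST_DELETE(&frm->list);
        pool_free(pool_head_quic_frame, frm);
    }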
I did not find a reproducer scenario. Instead, I artificially triggered
it by inserting a quic_frame in qcc.lfctl.frms just before purging it in
qc_release() using the following snippet.
    struct quic_frame *frm;

    /* craft a dummy MAX_DATA frame and queue it so that the purge
     * loop in qc_release() has something to free */
    frm = pool_zalloc(pool_head_quic_frame);
    LIST_INIT(&frm->reflist);
    frm->type = QUIC_FT_MAX_DATA;
    frm->max_data.max_data = 0;
    LIST_APPEND(&qcc->lfctl.frms, &frm->list);
This should fix github issue #1747.
This must be backported up to 2.6.
The CLI applet processes one request after another. Thus, when several
requests are pipelined, it is important to notify that it won't consume
the remaining outgoing data while it is processing a request. Otherwise,
the applet may be woken up in a loop. For instance, it may happen with
the HTTP client while we are waiting for the server response if a shutr
is received.
This patch must be backported to all supported versions after an
observation period. But a massive refactoring was performed in 2.6. So,
for 2.5 and below, the patch will have to be adapted. Note also that,
AFAIK, the bug can only be triggered by the HTTP client for now.
In the .chk_snd applet callback function, we must not wake up an applet
if the SE_FL_WONT_CONSUME flag is set. Indeed, if an applet explicitly
specifies it will not consume any outgoing data, it is useless to wake
it up when more data are sent. Note the applet may still be woken up for
another reason. In this case the SE_FL_WONT_CONSUME flag will be
removed. It is the applet's responsibility to set it again, if
necessary.
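The guard presumably reduces to an early return in the applet-specific
chk_snd handler, along these lines (a sketch against the 2.6 stconn
API; the surrounding function shape is an assumption):

    static void sc_app_chk_snd_applet(struct stconn *sc)
    {
        /* the applet said it won't consume outgoing data: waking
         * it up would only make it spin */
        if (sc_ep_test(sc, SE_FL_WONT_CONSUME))
            return;

        appctx_wakeup(__sc_appctx(sc));
    }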
This patch must be backported to 2.6 after an observation period. On
earlier versions, the bug probably exists too. However, because a
massive refactoring was performed in 2.6, the patch will have to be
adapted and carefully reviewed/tested if it is backported.
Since the conn-stream refactoring, while a health-check is in progress,
its stream-connector is always defined. So, some tests on it are useless
and can be removed.
This patch should fix the issue #1739.
When a TCP content ruleset is evaluated, we stop waiting for more data
if the inspect-delay is reached, if there is a read error or if we know
no more data will be received. This last point is only valid for ACLs.
An action may decide to yield for another reason. For instance, in the
SPOE, the "send-spoe-group" action yields as long as the agent response
has not been received. Thus, now, an action call is final only when the
inspect-delay is reached or if there is a read error. But it is still
possible for an action to yield if the buffer is full or if the CF_EOI
flag is set.
This patch could be backported to all supported versions.
When the MUX transfers a large amount of data to the client, the
transport layer may reject some of it because of the congestion
controller limit. Frames built by the MUX are thus dropped, even though
the transferred stream data are kept in buffers for future new frames.
Thus, the MUX is required to free rejected frames. This fixes a memory
leak which may grow with large data transfers.
It should be backported to 2.6 after it has been tested and validated.
TX flow-control enforcing is not straightforward: it requires the usage
of several counters at stream and connection level, in part due to the
difficult sending API between the MUX and quic-conn layers.
To strengthen this part and ensure it behaves as expected, some
existing BUG_ON statements were adjusted and new ones were added. This
should help to catch errors as early as possible, as was the case with
github issue #1738.
The flow control enforced at connection level is incorrectly calculated.
There is a risk of exceeding the limit. In most cases, this results in a
segfault produced by a BUG_ON which is here to catch this kind of error.
If not compiled with DEBUG_STRICT, this should generate a connection
closed by the client due to the flow control overflow.
The problem is encountered when the transferred payload is big enough to
fill the transport congestion window. In this case, some data are
rejected by the transport layer and kept by the MUX to be reemitted
later. However, these preserved data are not counted against the
connection flow control when resubmitted, which gradually amplifies the
gap between the expected and the real consumed flow control.
To fix this, handle the flow-control at the connection level in the same
way as at the stream level. A new field qcc.tx.offsets is incremented as
soon as data are transferred between stream TX buffers. The field
qcc.tx.sent_offsets is preserved to count bytes handled by the transport
layer and stop the MUX transfer if the limit is reached.
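Schematically, the two counters cooperate like this (an illustrative
sketch; the qcc.tx field names come from this message, and
<max_data_limit> is a stand-in for the peer's advertised MAX_DATA):

    /* data copied into a stream TX buffer: account it immediately
     * at the connection level */
    qcc->tx.offsets += transferred;

    /* data effectively accepted by the transport layer */
    qcc->tx.sent_offsets += sent;

    /* stop feeding the transport once the limit is reached */
    if (qcc->tx.offsets >= max_data_limit)
        return 0;  /* blocked on connection flow control */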
As already stated, this bug can occur during transfers with enough
emitted data, over multiple streams. When using a single stream, the
flow control at the stream level hides it.
The BUG_ON crash is reproduced systematically with the quiche client:

    $ quiche-client --no-verify --http-version HTTP/3 -n 10000 https://127.0.0.1:20443/10K
This must be backported up to 2.6 when confirmed to work as expected.
This should fix github issue #1738.
This is the continuation of commit 5a0e7ca5d ("BUG/MINOR: cli/stats: add
missing trailing LF after JSON outputs"). There's also a "show info json"
command which was also missing the trailing LF. It's constructed exactly
like the "show stat json", in that it dumps a series of fields without any
LF. The difference however is that for the stats output, everything was
enclosed in an array which required an LF *after* the closing bracket,
while here there's no such array so we have to emit the LF after the loop.
That makes the two functions a bit inconsistent; it's quite annoying,
but making them better would require sending one LF per line in the
stats output, which is not particularly interesting.
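In other words, the fix presumably amounts to appending one LF once the
field loop is done, something like (a sketch using common HAProxy
chunk/applet idioms, not the literal patch):

    /* all fields dumped: terminate the JSON output with the LF
     * marking the end of the response */
    chunk_appendf(out, "\n");
    if (applet_putchk(appctx, out) == -1)
        return 0;  /* retry once some room is available again */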
Given that it took 5+ years to spot that this code wasn't working as
expected it doesn't seem worth investing much time trying to refactor
it to make it look cleaner at the risk of breaking other obscure parts.
Leonhard Wimmer reported an interesting bug in github issue #1742.
Servers in disabled proxies that are configured for resolution are still
subscribed to DNS resolutions, but the LB algos are not initialized at
all since the proxy is disabled, so when the server state changes,
attempts to update its status cause a crash when the server's weight
is recalculated via a division by the proxy's total weight, which is
zero.
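The natural guard is to ignore such state updates for disabled proxies,
along these lines (a sketch; PR_FL_DISABLED is the 2.5+ flag mentioned
below):

    /* a disabled proxy has no initialized LB data: updating the
     * server weight would divide by a zero total weight */
    if (srv->proxy->flags & PR_FL_DISABLED)
        return;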
This should be backported to all versions. Beware that before 2.5 or
so, there's no PR_FL_DISABLED flag, instead px->disabled should be
used (2.3-2.4) or PR_STSTOPPED for older versions.
Thanks to Leonhard for his report and quick test!
Patrick Hemmer reported that we have a bug in the CLI commands
"show stat json" and "show schema json" in that they forget the trailing
LF that's required to mark the end of the response. This has been the
case since the introduction of the feature in 1.8-dev1 by commit 6f6bb380e
("MEDIUM: stats: Add show json schema"), so this fix may be backported to
all versions.
The function used to parse the SETTINGS frame is incorrect as it does
not stop at the frame length but continues to parse beyond it. In most
cases, it will result in a connection closed with error H3_FRAME_ERROR.
This bug can be reproduced with clients that send more than just a
SETTINGS frame on the H3 control stream. This is notably the case with
aioquic which emits a MAX_PUSH_ID frame after SETTINGS.
This bug has been introduced in the current dev release by the
following patch:

    62eef85961f4a2a241e0b24ef540cc91f156b842
    MINOR: mux-quic: simplify decode_qcs API

Thus, it does not need to be backported.
This changes the default from RFC 7540's 65535 (64k-1) to avoid some
degenerative WINDOW_UPDATE behaviors observed in the wild with clients
using 65536 as their buffer size, which then have to complete each block
with a 1-byte frame. With some servers this tends to degenerate into
1-byte WUs causing more 1-byte frames to be sent, until the transfer
almost only uses 1-byte frames.
More details here: https://github.com/nghttp2/nghttp2/issues/1722
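For reference, the new default corresponds to what could previously be
set explicitly through the documented tunable (configuration sketch):

    # haproxy.cfg excerpt
    global
        tune.h2.initial-window-size 65536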
As mentioned in previous commit (MEDIUM: mux-h2: try to coalesce outgoing
WINDOW_UPDATE frames) the issue could not be reproduced with haproxy but
individual WU frames are sent so theoretically nothing prevents this from
happening. As such it should be backported as a workaround for already
deployed clients after watching for any possible side effect with rare
clients. As an added benefit, uploads from curl now use less DATA frames
(all are 16384 now). Note that the previous patch alone is sufficient to
stop the issue with curl in case this one would need to be reverted.
[wt: edited commit message, updated doc]
Glenn Strauss from Lighttpd reported a corner case affecting curl+lighttpd
that causes some uploads to degenerate to extremely suboptimal conditions
under certain circumstances, and noted that many other implementations
were possibly not safe against this degradation.
Glenn's detailed analysis is available here:
https://github.com/nghttp2/nghttp2/issues/1722
In short, curl uses a 65536 bytes buffer and the default stream window
is 65535, with 16384 bytes per frame. Curl will then send 3 frames of
16384 bytes followed by one of 16383, will wait for a window update to
send the last byte before recycling the buffer to read the next 64kB.
On each round like this, one extra single-byte frame will be sent, and
if ACKs for these single-byte frames are not aggregated, this will only
allow the client to send one extra byte at a time. At some point it is
possible (at least Glenn observed it) to have mostly 1-byte frames in
the transfer, resulting in huge CPU usage and a long transfer.
It was not possible to reproduce this with haproxy, even when playing
with frame sizes, buffer sizes or window sizes. One reason seems to
be that we're using the same buffer size for the connection and the
stream and that the frame headers prevent the filling of the window
from happening on the same boundaries as on the sender. However it
does occasionally happen to see up to two 1-byte data frames in a row,
indicating that there's definitely room for improvement.
The WINDOW_UPDATE frames for the connection are sent at the end of the
demuxing, but the ones for the streams are currently sent immediately
after a DATA frame is processed, mostly for convenience. But we don't
need to proceed like this, we already have the counter of unacked bytes
in rcvd_s, so we can simply use that to decide when to send an ACK. It
must just be done before processing a new frame. The benefit is that
contiguous frames for the same stream will now only produce a single
WU, like for the connection. On complicated tests involving a client
that was limited to 100 Mbps transfers and a dummy Lua-based payload
consumer, it was possible to see the number of stream WU frames being
halved for a 100 MB transfer, which is already a nice saving anyway.
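A sketch of the idea (rcvd_s and dsi are the usual mux-h2 demux
counters; the helper name is an assumption):

    /* before processing a frame for another stream, flush the
     * WINDOW_UPDATE covering all bytes counted so far in rcvd_s:
     * contiguous DATA frames on one stream produce a single WU */
    if (h2c->rcvd_s && h2c->dsi != new_sid)
        h2c_send_strm_wu(h2c);  /* hypothetical helper */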
Glenn proposed a better workaround consisting in increasing the
default window size to 65536. This will be done in a separate patch
so that both can be studied independently in field and backported as
needed.
This patch is not much complicated and should be backportable. It just
needs to be tested in development first.
BUG_ON() assertion to check for incomplete SETTINGS frame is incorrect.
It should check if frame length is greater, not smaller, than current
buffer data. Anyway, this BUG_ON() is useless as h3_decode_qcs()
prevents parsing of an incomplete frame, except for H3 DATA. Remove it
to fix this bug.
This bug was introduced in the current dev tree by commit

    62eef85961f4a2a241e0b24ef540cc91f156b842
    MINOR: mux-quic: simplify decode_qcs API

Thus it does not need to be backported.
This fixes crashes which happen with DEBUG_STRICT=2. Most notably, this
is reproducible with clients that emit more than just a SETTINGS frame
on the H3 control stream. It can be reproduced with aioquic for example.
The health-check attached to an email alert has no type. It is
unexpected, and since 2.6 it is important because we rely on it to know
the application type in front of a connection at the stream-connector
level. Because the object type is not set, the SE descriptor is not
properly initialized, leading to a segfault when a connection to the
SMTP server is established.
This patch must be backported to 2.6 and may be backported as far as
2.0. However, it is only an issue for 2.6 and above.
There is no server for email alerts. So the trace messages must be
adapted to handle this case. Information related to the server is now
skipped for email alerts and an "[EMAIL]" prefix is used.
This patch must be backported as far as 2.4.
Email alerts are based on health-checks but with no server. Thus, in the
__trace() function, responsible for writing a trace message, we must be
prepared to have no server and thus no proxy.
This patch must be backported as far as 2.4.
Building QUIC with gcc-4.4 on el6 shows this error:

    src/xprt_quic.c: In function 'qc_release_lost_pkts':
    src/xprt_quic.c:1905: error: unknown field 'loss' specified in initializer
    compilation terminated due to -Wfatal-errors.
    make: *** [src/xprt_quic.o] Error 1
    make: *** Waiting for unfinished jobs....
Initializing an anonymous form of union like:

    struct quic_cc_event ev = {
        (...)
        .loss.time_sent = newest_lost->time_sent,
        (...)
    };
generates an error with gcc-4.4 but not when initializing the
fields outside of the declaration.
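The portable workaround is thus to assign the union members after the
declaration, along these lines (a sketch; the event type constant is an
assumption):

    struct quic_cc_event ev;

    /* gcc-4.4 cannot name members of an anonymous union in a
     * designated initializer, so fill them in afterwards */
    memset(&ev, 0, sizeof(ev));
    ev.type = QUIC_CC_EVT_LOSS;
    ev.loss.time_sent = newest_lost->time_sent;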
Convert the return code to -1 when an error has been detected. This is
required since the previous API change on the return value, from this
patch:

    1f21ebdd7686bf435682cacd31e635db0c65b061
    MINOR: mux-quic/h3: adjust demuxing function return values
Without this, QUIC MUX won't consider the call as an error and will try
to remove one byte from the buffer. This may cause a BUG_ON failure if
the buffer is empty at this stage.
This bug was introduced in the current dev tree. Does not need to be
backported.
Clean the API used by decode_qcs() and transcoder internal functions.
Parsing functions now return a ssize_t which represents the number of
consumed bytes or a negative error code. The total of consumed bytes is
returned via decode_qcs().
The API is now unified and cleaner. The MUX can thus simply use the
return value of decode_qcs() instead of subtracting the data bytes in
the buffer before and after the call. Transcoder functions are no longer
obliged to remove consumed bytes from the buffer themselves, which was
not obvious.
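The convention boils down to this calling pattern (an illustrative
sketch with hypothetical parser names):

    /* each parser returns the number of consumed bytes, or < 0 */
    ssize_t ret = h3_parse_frame(h3s, &b);  /* hypothetical */
    if (ret < 0)
        return -1;   /* error: the MUX aborts demuxing */

    total += ret;    /* the caller removes <total> bytes at the end */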
Slightly modify the decode_qcs() function used by transcoders. The MUX
now gives a buffer instance on which each transcoder is free to work. On
return from the function, the MUX removes consumed data from its own
buffer.
This reduces the number of invocations of qcs_consume() at the end of a
full demuxing process. The API is also cleaner, with the transcoders no
longer responsible for calling it, removing the risk of having the input
buffer freed if empty.
As a mirror to the qcc/qcs types, add a h3c pointer into the h3s struct.
This should help to clean up the H3 code and avoid using qcs.qcc.ctx to
retrieve the h3c instance.
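Schematically (a sketch; only the back-pointer comes from this message):

    struct h3c;              /* connection-level H3 context */

    struct h3s {
        struct h3c *h3c;     /* back-pointer, mirrors qcs->qcc */
        /* per-stream H3 state follows */
    };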
smp_fc_http_major may be used to return the http version as an integer
used on the frontend or backend side. Previously, the handler only
checked for version 2 or 1 as a fallback. Extend it to support version 3
with the QUIC mux.
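A sketch of the extended logic (identifying the mux by its name is an
assumption about the implementation):

    /* map the multiplexer in use to an HTTP major version */
    if (mux && strcmp(mux->name, "QUIC") == 0)
        smp->data.u.sint = 3;
    else if (mux && strcmp(mux->name, "H2") == 0)
        smp->data.u.sint = 2;
    else
        smp->data.u.sint = 1;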
Commit d6c66f06a ("MINOR: ssl_ckch: Remove service context for "set ssl
crl-file" command") introduced a regression leading to a build error because
of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
A build error is reported about the path variable in the switch statement on
the commit type, in cli_io_handler_commit_cafile_crlfile() function. The
enum contains only 2 values, but a default clause has been added to return an
error to make GCC happy.
This patch must be backported as far as 2.5.
Commit 9a99e5478 ("BUG/MINOR: ssl_ckch: Dump CRL transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 5a2154bf7 ("BUG/MINOR: ssl_ckch: Dump CA transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 3e94f5d4b ("BUG/MINOR: ssl_ckch: Dump cert transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.2.
The same type is used for CA and CRL entries. So, in commit_cert_ctx
structure, there is no reason to have different fields for the CA and CRL
entries.
The .next_ckchi_link field must be initialized to NULL instead of
.next_ckchi in the cli_parse_commit_crlfile() function. Only
'.next_ckchi_link' is used in the I/O handler.
This patch must be backported as far as 2.5 with some adaptations for
2.5.
When loaded SSL certificates are displayed via the "show ssl cert"
command, the in-progress transaction, if any, is also displayed.
However, if the command yields, the transaction is re-displayed again
and again.
To fix the issue, old_ckchs field is used to remember the transaction was
already displayed.
This patch must be backported as far as 2.2.
When loaded CA files are displayed via the "show ssl ca-file" command,
the in-progress transaction, if any, is also displayed. However, if the
command yields, the transaction is re-displayed again and again.
To fix the issue, old_cafile_entry field is used to remember the transaction
was already displayed.
This patch must be backported as far as 2.5.
When loaded CRL files are displayed via the "show ssl crl-file" command,
the in-progress transaction, if any, is also displayed. However, if the
command yields, the transaction is re-displayed again and again.
To fix the issue, old_crlfile_entry field is used to remember the transaction
was already displayed.
This patch must be backported as far as 2.5.
Because of a typo (I guess), an unknown type is used for the old entry
in the show_crlfile_ctx structure. Because this field is unused, there
is no compilation error. But it must be a cafile_entry and not a
crlfile_entry.
Note this field is not used for now, but it will be.
This patch must be backported to 2.6.
Simplify cli_io_handler_commit_cafile_crlfile() handler function by
retrieving old and new entries at the beginning. In addition the path is
also retrieved at this stage. This removes several switch statements.
Note that the ctx was already validated by the corresponding parsing
function. Thus there is no reason to test the pointers.
While it is not a bug, this patch may help to fix issue #1731.
There is an enum to determine the entry type when changes are committed
on a CA/CRL entry. So use it in the service context instead of an
integer.
This patch may help to fix issue #1731.
This reapplies the xalloc_size.cocci patch across the whole `src/` tree.
see 16cc16dd8235e7eb6c38b7abd210bd1e1d96b1d9
see 63ee0e4c01b94aee5fc6c6dd98cfc4480ae5ea46
Rewrite failures in http rules are reported as proxy errors (PRXCOND) in
logs. However, other rewrite errors are reported as internal errors. For
instance, it happens when we fail to add the X-Forwarded-For header. It
is not consistent and it is confusing. So now, all rewrite failures are
reported as proxy errors.
This patch may be backported if necessary.
The 'httpclient' command does not properly handle full buffer cases.
When the response buffer is full, we exit to retry later. However, the
context flags are updated. It means when this happens, we may lose a
part of the response.
So now, flags are preserved when we fail to push data into the response
buffer. In addition, instead of dumping one part per call, we now try to
dump as much data as possible.
Finally, when there is no more data, because everything was dumped or
because we are waiting for more data from the HTTP client, the applet is
updated accordingly by calling applet_have_no_more_data(). Otherwise,
when some data are blocked, applet_putchk() already takes care of
updating the SE flags. So, it is useless to call sc_need_room().
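The resulting control flow looks roughly like this (a sketch of the 2.6
applet idioms described above; using htx_is_empty() as the emptiness
test is an assumption):

    /* push as much as possible; on a full buffer applet_putchk()
     * already updated the SE flags, so just wait */
    if (applet_putchk(appctx, &trash) == -1)
        return;

    /* everything was dumped, or we now wait for more data from
     * the HTTP client */
    if (htx_is_empty(htx))
        applet_have_no_more_data(appctx);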
This patch should fix the issue #1723. It must be backported as far as
2.5. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted.
Commit 534645d6 ("BUG/MEDIUM: httpclient: Fix loop consuming HTX blocks
from the response channel") introduced a regression. When the response
is consumed, the HTX header blocks are removed before duplicating them.
Thus, the first header block is always lost.
This patch must be backported as far as 2.5.