The function fcgi_strm_propagate_term_flags() was added to check the FSTRM
state and evaluate when EOI/EOS/ERR_PENDING/ERROR flags must be set on the
SE. It is not the only place where those flags are set, but it centralizes
the synchronization between the FCGI stream and the SC.
For now, this function is only used at the end of fcgi_rcv_buf(). But it
will be used to fix a potential bug.
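Here is a rough sketch of the intent (the conditions are simplified and do
not reflect the exact patch; only flag and helper names already used in the
mux are assumed):

    static void fcgi_strm_propagate_term_flags(struct fcgi_conn *fconn,
                                               struct fcgi_strm *fstrm)
    {
            /* end of the response received: report EOI, then EOS once closed */
            if (fstrm->flags & FCGI_SF_ES_RCVD) {
                    se_fl_set(fstrm->sd, SE_FL_EOI);
                    if (fstrm->state == FCGI_SS_CLOSED)
                            se_fl_set(fstrm->sd, SE_FL_EOS);
            }
            /* on error, ERR_PENDING is used until the end of input is seen */
            if (fstrm->state == FCGI_SS_ERROR)
                    se_fl_set(fstrm->sd, se_fl_test(fstrm->sd, SE_FL_EOI) ?
                                         SE_FL_ERROR : SE_FL_ERR_PENDING);
    }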
The spop_show_fd() function was split to dump the info about the SPOP
connections and the SPOP streams in dedicated functions, mirroring what is
done in the other muxes.
Termination events are now reported for the SPOP connections and the SPOP
streams. In addition, all available termination events logs are reported in
the "show-fd" callback function. The .ctl and .sctl callback functions were
also update to support, respectively, MUX_CTL_TEVTS and MUX_SCTL_TEVTS
commands.
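Here is a hedged sketch of how these commands may be answered in the .ctl
callback (the term_evts_log field name and the return convention are
assumptions, not the actual patch):

    static int spop_ctl(struct connection *conn, enum mux_ctl_type mux_ctl, void *output)
    {
            struct spop_conn *spop_conn = conn->ctx;

            switch (mux_ctl) {
            case MUX_CTL_TEVTS:
                    /* hand back the 32-bit termination events log of the connection */
                    return spop_conn->term_evts_log;
            default:
                    return 0;
            }
    }

The .sctl callback does the same for MUX_SCTL_TEVTS, but on the SPOP stream.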
The SPOP_CF_ERROR flag is now set on connection errors only. It was also set
on some demux failures, but that is not necessary because the connection is
closed anyway, and it is handy to have a flag dedicated to TCP connection
errors, which was the original purpose of this flag.
This patch could be backported to 3.1 to ease future backports.
The spop stream now reports the end of input when the ACK is transferred to
the SPOE applet. To do so, the flag SPOP_SF_ACK_RCVD was added. It is set on
the SPOP stream when its ACK is received by the SPOP connection.
In addition, when the SPOP stream flags are propagated to the SE, the error
is now reported if the end of input was not reached, instead of testing the
connection error code. It is more accurate.
This patch should be backported to 3.1.
The SPOE applet forwards the message that must be sent to the agent during
its init stage, i.e. just after it is created. Once this is done, the end of
input must be reported because no more data will be forwarded. However, it
was only reported after receiving the ACK response. This is harmless, but
there is no reason to delay the EOI. It is now fixed.
This patch must be backported to 3.1.
The previous fix ("BUG/MEDIUM: applet: Don't pretend to have more data to
handle EOI/EOS/ERROR") revealed an issue with the way the SPOE applet was
reporting the end of stream, leading to never shut the applet down.
In fact, there are two bugs in one. The first one is about the applet
shutdown. Since the fix above, the applet is no longer closed. Before, it
was closed because it was reported in error. But now, the close is just
delayed because the applet and the SPOP stream are declared to support
half-closed connections. So the applet is only closed when the SPOP
connection is closed. To fix this bug, both sides now state that half-closed
connections are not supported.
The second bug is about the way the end of stream is reported. It was
reported when the ACK response was received. But this is too early, because
the parent stream must process the response first. So now, we take care that
the ACK has been processed by the parent stream before reporting an end of
stream.
This patch must be backported with the commit above to 3.1.
The way appctx EOI/EOS/ERROR flags were reported for applets using the new
API was to state that the applet had more data to deliver. But this was not
correct and, for APPCTX_FL_EOS, it led to reporting an error on the SE
because this combination is not expected: having more data to deliver
together with an end of stream is an impossible situation.
This was added as a fix by commit b8ca114031 ("BUG/MEDIUM: applet: State
appctx have more data if its EOI/EOS/ERROR flag is set"), mainly to make the
SPOE applet work.
When an applet sets one of these flags, it really means it has no more data
to deliver. So we must not try to trigger a new receive to handle these
flags. Instead, we must handle them directly in the task_process_applet()
function, and only if the corresponding SE flags were not already set.
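In practice, the handling boils down to something like the following
simplified sketch (no extra receive is triggered anymore):

    /* in task_process_applet(): translate the applet flags into SE flags,
     * but only if the corresponding SE flag is not already set
     */
    if ((appctx->flags & APPCTX_FL_EOI) && !se_fl_test(appctx->sedesc, SE_FL_EOI))
            se_fl_set(appctx->sedesc, SE_FL_EOI);
    if ((appctx->flags & APPCTX_FL_EOS) && !se_fl_test(appctx->sedesc, SE_FL_EOS))
            se_fl_set(appctx->sedesc, SE_FL_EOS);
    if ((appctx->flags & APPCTX_FL_ERROR) && !se_fl_test(appctx->sedesc, SE_FL_ERROR))
            se_fl_set(appctx->sedesc, SE_FL_ERROR);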
This patch must be backported to 3.1.
The SPOE applet is using the new applet API. Thus end of input, end of
stream and errors must be reported using the applet flags, not the SE
flags. This was not the case, so let's fix it.
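Concretely, a new-API applet is expected to report completion through the
applet helpers instead of touching the SE flags directly (hedged
illustration):

    applet_set_eoi(appctx);    /* and not se_fl_set(appctx->sedesc, SE_FL_EOI)   */
    applet_set_eos(appctx);    /* and not se_fl_set(appctx->sedesc, SE_FL_EOS)   */
    applet_set_error(appctx);  /* and not se_fl_set(appctx->sedesc, SE_FL_ERROR) */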
It seems this bug is harmless for now.
This patch must be backported to 3.1.
When receive or EOS termination events were reported at the SE level, a
truncation was erroneously reported when no error was detected. Of course,
it must be the opposite.
No backport needed.
This is the case for AWS-LC, which derives from boringssl, where
X509_OBJECT_get0_X509_CRL() is already defined. There is definitely no
longer any need to define this function to build haproxy against TLS libs
derived from boringssl.
When a GET/HEAD/OPTIONS/DELETE healthcheck request was formatted, we claimed
there was a "content-length" header set even when there was no payload,
leading us to actually send a "content-length: 0" header to the server. It
was unexpected and could be rejected by some servers.
When a healthcheck request is sent, we must take care to only state that a
"content-length" header is present when one is explicitly added.
This patch should fix the issue #2851. It must be backported as far as 2.9.
It is not rare to see configurations with a large number of "tcp-request
content" or "http-request" rules, for instance. A large number of rules
combined with CPU-demanding actions (e.g. actions that work on content)
may create thread contention, as all the rules from a given ruleset are
evaluated under the same polling loop if the evaluation is not interrupted.
Thus, in this patch we add extra logic around the "tcp-request content",
"tcp-response content", "http-request" and "http-response" rulesets, so
that when a certain number of rules have been evaluated within a single
polling loop, we force the evaluating function to yield. As such, the rule
which was about to be evaluated is saved, and the function resumes
evaluating rules from the saved pointer when it returns (in the next polling
loop).
We use task_wakeup(task, TASK_WOKEN_MSG) to explicitly wake the task so
that no time is wasted and the processing is resumed ASAP. TASK_WOKEN_MSG
is mandatory here because process_stream() expects TASK_WOKEN_MSG for
explicit analyzers re-evaluation.
A rules_bcount stream attribute was added to count how many rules were
evaluated since the last interruption (yield). Also, the SF_RULE_FYIELD flag
was added to indicate that s->current_rule was assigned due to a forced
yield and not a regular one.
By default haproxy will enforce a yield every 50 rules; this behavior
can be configured using the "tune.max-rules-at-once" global keyword.
There is a limitation though: for now, if the ACT_OPT_FINAL flag is set
on act_opts, we consider it is not safe to yield (as is already the case
for the automatic yield). In this case, instead of yielding and taking
the risk of not being called back, we skip the yield and hope it will
not create contention. This is something we should ideally try to
improve in order to yield under all conditions.
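For example, to relax the limit using the keyword introduced by this patch:

    global
        # force a yield after evaluating 100 consecutive rules
        # instead of the default 50
        tune.max-rules-at-once 100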
When the tcp-response content ruleset evaluation is delayed because of an
ACL condition, the close forwarding on the client side is not explicitly
blocked. So it is possible to close the client side before the end of the
response evaluation.
To fix the issue, this is now done in all cases where some data are
missing. Concretely, channel_dont_close() is called in the "missing_data"
goto label.
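A minimal sketch of the change (the channel variable name is illustrative):

     missing_data:
            /* prevent the close from being forwarded while the ruleset
             * evaluation is waiting for more data
             */
            channel_dont_close(chn);
            return 0;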
Note it is only a theoretical bug (or a pending bug). It is not possible to
trigger it for now because an ACL cannot wait for more data once a close was
received. But the code remains a bit weak, and it is safer this way. It is
especially required for the "force yield" option that should be added soon.
This patch could be backported to all stable versions.
Several debug counters were added to debug a strange issue about early
aborts. Most of them are now useless, especially because it is now possible
to rely on the termination events logs. So, it is better to remove them.
Note that these counters are still there in 3.1.
Several debug counters were added in HTTP analyzers to help debug a
strange issue about early aborts. But these counters are a bit overkill
now, especially because it is now possible to rely on the termination
events log. So just remove them.
Note that these counters are still there in 3.1.
When specific events for the stream location were added, some reports about
message interception were not removed. These reports are now removed.
No need to backport.
A read0 event could be ignored by the FCGI multiplexer if it was blocked on
a partial record. Instead of handling the event, it remained blocked,
waiting for the end of the record.
To fix the issue, the same solution as in the H2 multiplexer is used. Two
flags are introduced. The first one, FCGI_CF_END_REACHED, is used to
acknowledge a read0. This flag is set when a read0 was received AND the FCGI
multiplexer must handle it. The second one, FCGI_CF_DEM_SHORT_READ, is set
when the demux is interrupted on a partial record. A short read combined
with a read0 leads to setting the FCGI_CF_END_REACHED flag.
With these changes, the FCGI mux should be able to properly handle read0 on
partial records.
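The intended logic can be sketched as follows (the read0 helper name is an
assumption):

    /* a read0 received while the demuxer is stuck on a partial record must
     * still be acknowledged, otherwise it would be ignored
     */
    if (fcgi_conn_read0_pending(fconn) && (fconn->flags & FCGI_CF_DEM_SHORT_READ))
            fconn->flags |= FCGI_CF_END_REACHED;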
This patch should be backported to all stable versions after a period of
observation.
Prevent a partial copy of trailers or headers when using the <mark>
parameter.
When using htx_xfer_blks(), transferring partial headers or trailers is
prevented when restricted by the <count> parameter. However, using the
<mark> parameter still allows it to happen.
This patch changes the behavior by checking the <mark> type only after
checking the headers/trailers type, so we can still roll back on a partial
transfer.
No impact on the current code, which does not try to do that yet.
Every once in a while, GCC reports issues with the qc_release_lost_pkts()
function. It seems that its static analysis is foiled by the code
structure. The latest warning reports the following issue:
CC src/quic_loss.o
src/quic_loss.c: In function ‘qc_release_lost_pkts’:
src/quic_loss.c:313:58: error: potential null pointer dereference [-Werror=null-dereference]
313 | unsigned int period = newest_lost->time_sent_ms - oldest_lost->time_sent_ms;
| ~~~~~~~~~~~^~~~~~~~~~~~~~
To definitely fix this, slightly change the code: <oldest_lost> and
<newest_lost> are now initialized on the first list entry, outside of the
loop. This is enough to guarantee to GCC that they cannot be NULL for
the remainder of the function.
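The pattern is roughly the following (simplified sketch, not the actual
function body):

    struct quic_tx_packet *pkt, *oldest_lost, *newest_lost;

    /* initialize both pointers on the first element, outside of the loop,
     * so GCC can prove they are never NULL afterwards
     */
    oldest_lost = newest_lost = LIST_ELEM(pkts->n, struct quic_tx_packet *, list);
    list_for_each_entry(pkt, pkts, list)
            newest_lost = pkt;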
When transferring blocks from a src to a dst htx representation,
htx_xfer_blks() decreases the size of each block removed from the <count>
value passed as parameter, so it can't transfer more than <count>. The
size must also contain the metadata, represented by a simple
sizeof(struct htx_blk).
However, the code was doing a sizeof(dstblk) instead of a
sizeof(*dstblk), which has the consequence of removing only a size_t from
count. Fortunately the htx_blk size is 64 bits, so this does not cause any
problem on 64-bit architectures. But on 32-bit architectures, the count
value is not decreased correctly and the function could try to transfer
more blocks than allowed by the count parameter.
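A tiny standalone example of the difference (the struct below is a
simplified stand-in for the real htx_blk):

    #include <stdio.h>

    struct htx_blk {
            unsigned int addr;
            unsigned int info;
    };

    int main(void)
    {
            struct htx_blk blk, *dstblk = &blk;

            /* sizeof(dstblk) is the size of a pointer (4 bytes on 32-bit,
             * 8 on 64-bit) while sizeof(*dstblk) is the size of the struct
             * (8 bytes). Both happen to match on 64-bit, which hid the bug;
             * on 32-bit too little is subtracted from <count>.
             */
            printf("sizeof(dstblk)=%zu sizeof(*dstblk)=%zu\n",
                   sizeof(dstblk), sizeof(*dstblk));
            return 0;
    }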
Must be backported in every stable release.
Connection errors can be detected via the connect/recv/send syscalls, but
also because they were reported by the poller. So dedicated events, at the
FD level, are introduced to make the distinction.
The term_events tool was updated accordingly.
This development tool can be used to convert a string representing a
termination events log to its human readable representation. Several strings
may be converted at a time. To do so, several arguments can be specified on
the command line, or they can be provided on STDIN using the "-" argument.
Here is an example:
> term_events f2x2f4x4 m2m4m1 e2e1 s2s1S1 E1 M1 F1
### f2x2f4x4 : fd:shutr > xprt:shutr > fd:snd_err > xprt:snd_err
### m2m4m1 : muxc:shutr > muxc:snd_err > muxc:shutw
### e2e1 : se:eos > se:shutw
### s2s1S1 : strm:eos > strm:shutw > STRM:shutw
### E1 : SE:shutw
### M1 : MUXC:shutw
### F1 : FD:shutw
The make target "dev/term_events/term_events" must be used to compile it.
Enums used to report events were placed in the connection header for
convenience. But they are not specifically related to connections. So, they
are moved to the end of the file for better isolation.
The function is now responsible for handling an empty log when no event was
reported; in that case, an empty string is returned. It is also responsible
for handling the case where the termination events log is not supported for
a given entity (for instance the QUIC mux for now); in that case, a dash
("-") is returned.
"term_events" is a sample fetche function that can be used to get
termination events for all locations in one call. The format equivalent to:
{fc_term_events,fc_mux_term_events,fs.term_events,txn.term_events,bs.term_events,bc_mux_term_events,bc_term_events}
If no event was reported for a location, the field is empty. If the feature
is not supported yet, a dash ('-') is printed.
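For instance, it may be used in a log-format string (hedged example):

    # log the aggregated termination events of all locations
    log-format "%ci:%cp [%tr] %ft %b/%s %[term_events]"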
There is no termination events log for applets, but events for the SE
location are filled when the endpoint is an applet. Most of them rely on the
new applet API. Only a few events are reported for legacy applets.
It is hard to never detect the same event several times without painful
tests. In other words, the same termination event can be reported several
times and this must be handled. To do so, the "tevt_report_event" macro is
updated to ignore an event if the last reported one is of the same type, for
the same location. Of course, if the same event is reported several times at
different moments, the duplication will not be detected.
This is the last patch introducing dedicated termination events for each
location. In this one, events for the stream location are introduced. The
old enum is also removed because it is now unused.
Here, more accurate events are added and the "intercepted" event was split.
Termination events dedicated to the mux connection and stream-endpoint
descriptors are added in this patch. Events specific to these locations are
thus added. Changes for the H1 and H2 multiplexers are reviewed to be more
accurate.
To be able to add more accurate termination events for each location, the
enum will be split by location. Indeed, there are at most 16 possible
events, and it would be pretty confusing to use the same termination events
for the different locations. So the best is to split them.
In this patch, the termination events for the fd, hs and xprt locations are
introduced. For now some holes are added to keep similar events aligned
across enums. But this may change in the future.
The MUX_CTL_TEVTS command is added to get the termination events log of a
mux connection and the MUX_SCTL_TEVTS command to get the termination events
log of a mux stream.
The termination events logs of the multiplexer connection and stream are now
dumped when the corresponding mux info is dumped. The termination events log
of the underlying connection is also dumped in the debug string.
In this patch, events for the stream location are reported. These events are
first reported on the corresponding stream-connector: front events on the
scf and back events on the scb. Then the events from both sides are merged
in the stream, but only 4 events are saved on the stream.
Several internal events are for now grouped under the type
"tevt_type_intercepted". More events will be added to get a better
resolution, but at least the places to report these events are identified.
For now, when an event is reported on an SC, it is also reported on the
stream, and vice versa.
Shutdowns for reads (read0), receive errors, shutdowns for writes and
timeouts are reported, but only for the H2 connection for now.
As for the H1 multiplexer, more events must be added to report protocol
errors, goaways and rst-streams. And of course, all events for the H2
streams must be reported too.
Shutdowns for reads (read0), receive errors, shutdowns for writes and
timeouts are reported. It is not too hard to know where to report events
generated by HAProxy (timeouts and shutw). For detected events (shutr and
receive errors), it is not so simple. These events must not be reported when
they are detected but when the mux can handle them. For instance, some
unprocessed input data may block a read0. So experience will tell us whether
these events are reported at the right time and under the right conditions.
For now, no internal errors (parsing errors, protocol errors, internal
errors...) are reported because these event types have not yet been added.
This termination events log will be used to report events from the mux
streams. The location will be "tevt_loc_se" and the muxes will be
responsible for reporting the corresponding events.
Termination events logs will be used to report the events that led to
closing a connection. Unlike flags, which reflect a state, the idea here is
to store a log to preserve the order of the events. Most of the time, when
debugging an issue, the order of the events is crucial to understand the
root cause. The traces are truly helpful to do so, but it is not always
possible to activate them because they are pretty verbose; on heavily loaded
platforms, it is not acceptable. We hope that the termination events logs
will help us in those situations.
One termination events log will be stored at each layer (connection, mux
connection, mux stream...) as a 32-bit integer. Each event will be stored on
8 bits, 4 bits for the location and 4 bits for the type. So only the first
four events will be stored for each layer. It should be enough to understand
why a connection is closed.
In this patch, the enums defining the termination event locations and types
are added. The macro to report a new event is also added, along with a
function to convert a termination events log to a string that could be
displayed in log messages for instance.
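The storage scheme can be illustrated with this standalone sketch (helper
name and event values are illustrative, not haproxy's actual
implementation):

    #include <stdint.h>
    #include <stdio.h>

    /* one 32-bit log per layer; each event uses 8 bits: 4 bits for the
     * location and 4 bits for the type. Only the first four events are kept.
     */
    static uint32_t tevt_append(uint32_t log, uint8_t loc, uint8_t type)
    {
            uint8_t evt = (uint8_t)(((loc & 0xf) << 4) | (type & 0xf));
            int slot;

            for (slot = 0; slot < 4; slot++) {
                    if (!((log >> (slot * 8)) & 0xff))
                            return log | ((uint32_t)evt << (slot * 8));
            }
            return log; /* four events already stored: drop the new one */
    }

    int main(void)
    {
            uint32_t log = 0;

            log = tevt_append(log, 0x1, 0x2); /* e.g. a shutr somewhere */
            log = tevt_append(log, 0x4, 0x1); /* e.g. a later shutw     */
            printf("termination events log: 0x%08x\n", log);
            return 0;
    }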
When a demux error is reported by the H1S, an error must be reported on the
SE and not an end-of-input or an end-of-stream. So the SE_FL_ERROR flag must
be set, and not SE_FL_EOI/SE_FL_EOS.
It seems this bug has no impact. So there is no reason to backport it.
This is just a small patch to clean up the mux/demux functions. Instead of
listing the H1S errors that must be handled during demux or mux operations,
masks of flags are used. It is more readable.
Now that we can see that some events are reported for older instances
of a file descriptor, let's skip these ones instead of reporting
dangerous events on them. It might possibly qualify as a bug if it
helps fix strange issues in certain environments, in which case it
can make sense to backport it along with the following recent patches:
DEBUG: fd: add a counter of takeovers of an FD since it was last opened
MINOR: fd: add a generation number to file descriptors
DEBUG: epoll: store and compare the FD's generation count with reported event
There have been some reported cases where races between threads in epoll
were causing wrong reports of close or error events. Since the epoll_event
data is 64 bits, we can store the FD's generation counter in the upper
bits to verify if we're speaking about the same instance of the FD as the
current one or a stale one. If the generation number does not match, then
we classify these into 3 conditions and increment the relevant COUNT_IF()
counters (stale report for closed FD, stale report of harmless event on
reopened FD, stale report of HUP/ERR on reopened FD). Tests have shown that
with heavy concurrency, a very small maxconn (typically 1 per thread),
http-reuse always and a server closing connections first but randomly
(httpterm with /C=2r), such events can happen at a pace of a few per second
for the closed FDs, and a few per minute for the other ones, so there's value
in leaving this accessible for troubleshooting. E.g. after a few minutes:
Count Type Location function(): "condition" [comment]
5541 CNT ev_epoll.c:296 _do_poll(): "1" [epoll report of event on a just closed fd (harmless)]
10 CNT ev_epoll.c:294 _do_poll(): "1" [epoll report of event on a closed recycled fd (rare)]
42 CNT ev_epoll.c:289 _do_poll(): "1" [epoll report of HUP on a stale fd reopened on the same thread (suspicious)]
212 CNT ev_epoll.c:279 _do_poll(): "1" [epoll report of HUP/ERR on a stale fd reopened on another thread (harmless)]
1 CNT mux_h1.c:3911 h1_send(): "b_data(&h1c->obuf)" [connection error (send) with pending output data]
This was obtained with the following setup, which abuses thread contention
by starting 64 threads on two cores:
- config:
global
nbthread 64
stats socket /tmp/sock1 level admin
stats timeout 1h
defaults
timeout client 5s
timeout server 5s
timeout connect 5s
mode http
listen p2
bind :8002
http-reuse always
server s1 127.0.0.1:8000 maxconn 4
- haproxy forcefully started on 2C4T:
$ taskset -c 0,1,4,5 ./haproxy -db -f epoll-dbg.cfg
- httpterm on port 8000, cpus 2,3,6,7 (2C4T)
- h1load with responses larger than a single buffer, and randomly
closing/keeping alive:
$ taskset -c 2,3,6,7 h1load -e -t 4 -c 256 -r 1 0:8002/?s=19k/C=2r
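The packing idea described above (generation counter in the upper bits of
the 64-bit epoll data) can be sketched like this (standalone illustration,
not haproxy's exact code):

    #include <stdint.h>

    /* the 64-bit epoll data carries both the fd (low 32 bits) and its
     * generation counter (high 32 bits), so a stale event targeting a
     * previous instance of the fd can be detected when it is reported.
     */
    static inline uint64_t fd_to_epoll_data(int fd, uint32_t gen)
    {
            return ((uint64_t)gen << 32) | (uint32_t)fd;
    }

    static inline int epoll_report_is_stale(uint64_t data, uint32_t cur_gen)
    {
            return (uint32_t)(data >> 32) != cur_gen;
    }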
This patch adds a counter of close() on file descriptors in the fdtab.
The goal is to better detect if reported events concern the current or
a previous file descriptor. For now the counter is only added, and is
shown in "show fd" as "gen". We're reusing unused space at the end of
the struct. If it's needed for something more important later, this
patch can be reverted.
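The new field can then be checked from the CLI, e.g. (the socket path is
just an example):

    $ echo "show fd" | socat stdio /tmp/haproxy.sock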