This change is only significant for the multiplexer part. For the applets,
the context and the endpoint are the same, so there is not much change. For
the multiplexer part, the connection was used as the conn-stream endpoint
and the mux's stream was the context. But this is a bit strange because
once a mux is installed, it takes over the connection. In an ideal world,
the connection should be totally hidden behind the mux. The
stream-interface and, to a lesser extent, the stream, still access the
connection because that was inherited from the pre-multiplexer era.
Now, the conn-stream endpoint is the mux's stream (an opaque entity for the
conn-stream) and the connection is the context. Dedicated functions have
been added to attach an applet or a mux to a conn-stream.
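For illustration only, here is a standalone sketch of what such dedicated
attach helpers could look like; the structure layout and the
cs_attach_mux_sketch()/cs_attach_applet_sketch() names are assumptions
modeled on the description above, not the real API:
    /* stand-in types for the sketch */
    struct connection;
    struct appctx;

    struct conn_stream_sketch {
        void *end; /* endpoint: the mux's stream (opaque) or an appctx */
        void *ctx; /* context: the connection (mux case) or the appctx */
    };

    /* hypothetical helper: attach a mux's stream to the conn-stream */
    static inline void cs_attach_mux_sketch(struct conn_stream_sketch *cs,
                                            void *mux_stream,
                                            struct connection *conn)
    {
        cs->end = mux_stream; /* opaque entity owned by the mux */
        cs->ctx = conn;       /* the connection becomes the context */
    }

    /* hypothetical helper: attach an applet to the conn-stream */
    static inline void cs_attach_applet_sketch(struct conn_stream_sketch *cs,
                                               struct appctx *appctx)
    {
        cs->end = appctx; /* for applets, endpoint and context are the same */
        cs->ctx = appctx;
    }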
It was reported in issue #13 that a GOAWAY frame was sent on timeout even
if no SETTINGS frame was sent. The approach imagined back then was to track
the fact that a SETTINGS frame was already sent to avoid this, but that's
already what is done through the state, though it doesn't hold because we
switch to the error state. Thus what we're doing here is to set the
GOAWAY_FAILED flag in h2c_error() before switching to the ERROR state when
the state indicates we've not yet sent settings, and to refrain from
sending anything from the h2c_send_goaway_error() function for such states.
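As a rough standalone sketch of the idea (state and flag names and values
are assumptions loosely modeled on mux_h2.c, not the actual code):
    #include <stdint.h>

    /* stand-in states and flags; names/values are assumptions */
    enum h2_cs_sketch { H2_CS_PREFACE, H2_CS_SETTINGS1, H2_CS_ERROR };
    #define H2_CF_GOAWAY_FAILED  0x00000001u

    struct h2c_sketch { enum h2_cs_sketch st0; uint32_t flags; int errcode; };

    /* hypothetical h2c_error(): if our SETTINGS were not sent yet, mark the
     * GOAWAY as failed so that it is never emitted for this connection.
     */
    static void h2c_error_sketch(struct h2c_sketch *h2c, int err)
    {
        if (h2c->st0 < H2_CS_SETTINGS1)
            h2c->flags |= H2_CF_GOAWAY_FAILED;
        h2c->errcode = err;
        h2c->st0 = H2_CS_ERROR;
    }

    /* hypothetical h2c_send_goaway_error(): refrain from sending anything
     * when the flag above was raised.
     */
    static int h2c_send_goaway_error_sketch(struct h2c_sketch *h2c)
    {
        if (h2c->flags & H2_CF_GOAWAY_FAILED)
            return 1; /* nothing to send, report success */
        /* ... otherwise serialize and send the GOAWAY frame ... */
        return 1;
    }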
This could be backported to all versions where it applies well.
The new 'close-spread-time' global option can be used to spread idle and
active HTTP connection closing after a SIGUSR1 signal is received. This
makes it possible to limit bursts of reconnections when too many idle
connections are closed at once. Indeed, without this new mechanism, in case
of soft-stop, all the idle connections would be closed at once (after the
grace period is over), and all active HTTP connections would be closed by
appending a "Connection: close" header to the next response that goes over
them (or via a GOAWAY frame in case of HTTP2).
This patch adds the support of this new option for HTTP as well as HTTP2
connections. It works differently on active and idle connections.
On active connections, instead of systematically sending the GOAWAY
frame or adding the 'Connection: close' header like before once the
soft-stop has started, a random value based on the remainder of the close
window is calculated, and depending on its result we may decide to
keep the connection alive. The random value will be recalculated for any
subsequent request/response on this connection so the GOAWAY will still
end up being sent, but we might wait a few more round trips. This will
ensure that GOAWAYs are distributed along a longer time window than
before.
On idle connections, a random factor is used when determining the expire
field of the connection's task, which should naturally spread connection
closings on the time window (see h2c_update_timeout).
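A possible way to picture the per-request decision on active connections,
as a standalone sketch; the function name, parameters and the use of
rand() are illustrative, not haproxy's actual implementation:
    #include <stdlib.h>

    /* close with a probability proportional to the elapsed part of the
     * close-spread window; re-evaluated on each request/response.
     */
    static int must_close_now_sketch(unsigned int remaining_window_ms,
                                     unsigned int close_spread_time_ms)
    {
        if (!close_spread_time_ms || !remaining_window_ms)
            return 1;  /* no spreading configured, or window over: close */
        /* the further into the window we are, the more likely we close */
        return (unsigned int)rand() % close_spread_time_ms >= remaining_window_ms;
    }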
This feature request was described in GitHub issue #1614.
This patch should be backported to 2.5. It depends on "BUG/MEDIUM:
mux-h2: make use of http-request and keep-alive timeouts" which
refactored the timeout management of HTTP2 connections.
Christian Ruppert reported an issue explaining that it's not possible to
forcefully close H2 connections which do not receive requests anymore if
they continue to send control traffic (window updates, ping etc). This
will indeed refresh the timeout. In H1 we don't have this problem because
any single byte is part of the stream, so the control frames in H2 would
be equivalent to TCP acks in H1, that would not contribute to the timeout
being refreshed.
What is missing from H2 is the use of http-request and keep-alive timeouts.
These were not implemented because initially it was hard to see how they
could map to H2. But if we consider the real use of the keep-alive timeout,
that is, how long we keep a connection alive with no request, then it's
pretty obvious that it does apply to H2 as well. Similarly, http-request
may definitely be honored as soon as a HEADERS frame starts to appear
while there is no stream. This will also make it possible to deal with too
long CONTINUATION frames.
This patch moves the timeout update to a new function, h2c_update_timeout(),
which is in charge of this. It also adds an "idle_start" timestamp in the
connection, which is set when nb_cs reaches zero or when a HEADERS frame
starts to arrive, so that it cannot be delayed too long.
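A much simplified standalone sketch of what such a timeout update could
look like; all field and timeout names are assumptions based on the
description above, not the real h2c_update_timeout():
    #include <stdint.h>

    #define TICK_ETERNITY 0u  /* stand-in for "no timeout" */

    struct h2c_sketch {
        int      nb_cs;       /* number of attached streams */
        uint32_t idle_start;  /* set when nb_cs reaches 0 or HEADERS start */
        uint32_t task_expire; /* connection task expiration date */
    };
    struct tmo_sketch { uint32_t httpreq, httpka, client; };

    /* while streams are attached they own the timeouts; otherwise apply
     * http-request (HEADERS started, no stream yet) or keep-alive (fully
     * idle) from idle_start, falling back to the client timeout.
     */
    static void h2c_update_timeout_sketch(struct h2c_sketch *h2c,
                                          const struct tmo_sketch *tmo,
                                          int headers_started)
    {
        if (h2c->nb_cs) {
            h2c->task_expire = TICK_ETERNITY;
            return;
        }
        if (headers_started && tmo->httpreq)
            h2c->task_expire = h2c->idle_start + tmo->httpreq;
        else if (tmo->httpka)
            h2c->task_expire = h2c->idle_start + tmo->httpka;
        else
            h2c->task_expire = h2c->idle_start + tmo->client;
    }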
This patch should be backported to recent stable releases after some
observation time. It depends on previous patch "MEDIUM: mux-h2: slightly
relax timeout management rules".
The H2 timeout rules were arranged in 2.1 to cover complex situations,
with commit c2ea47fb1 ("BUG/MEDIUM: mux-h2: do not enforce timeout on
long connections").
It turns out that such rules, while complex, do not perfectly cover all
use cases. The real intent is to say that as long as there are attached
streams, the connection must not timeout. Then once all these streams
have quit (possibly for timeout reasons) then the mux should take over
the management of timeouts.
We do have this nb_cs field which indicates the number of attached
streams, and it's updated even when leaving orphaned streams. So
checking it alone is sufficient to know whether it's the mux or the
streams that are in charge of the timeouts.
In its current state, this doesn't cause visible effects except that
it makes it impossible to implement more subtle parsing timeouts.
This would need to be backported as far as 2.0 along with the next
commit that will depend on it.
The server_id_hdr_name is already processed as an ist in various locations,
so let's also just store it as such.
See 0643b0e7e ("MINOR: proxy: Make `header_unique_id` a `struct ist`") for a
very similar past commit.
Thanks to all previous changes, it is now possible to move the
stream-interface into the conn-stream. To do so, some SI functions are
removed and their conn-stream counterparts are added. In addition, the
conn-stream is now responsible for creating and releasing the
stream-interface. While the stream-interfaces were inlined in the stream
structure, there is now a pointer in the conn-stream. stream-interfaces are
now dynamically allocated. Thus a dedicated pool is added. It is a temporary
change because, at the end, the stream-interface structure will most
probably disappear.
In the same way the conn-stream has a pointer to the stream endpoint, this
patch adds a pointer to the application entity in the conn-stream
structure. For now, it is a stream or a health-check. This is mandatory to
be able to merge the stream-interface with the conn-stream.
Thanks to previous changes, it is now possible to set an appctx as endpoint
for a conn-stream. This means the appctx is no longer linked to the
stream-interface but to the conn-stream. Thus, a pointer to the conn-stream
is explicitly stored in the stream-interface. The endpoint (connection or
appctx) can be retrieved via the conn-stream.
To be able to handle applets as a conn-stream endpoint, we must be prepared
to handle different types of endpoints. First of all, the conn-stream's
connection must no longer be used directly.
The backend conn-stream is no longer released on connection retry. This
means the conn-stream is detached from the underlying connection but not
released. Thus, during connection retries, the stream always has an
allocated conn-stream with no connection. All previous changes were made to
make this possible.
Note that .attach() mux callback function was changed to get the conn-stream
as argument. The muxes are no longer responsible for creating the conn-stream
when a server connection is attached to a stream.
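For illustration, a standalone sketch of the callback shape change; the
exact signatures are assumptions, only the direction of the change (the
conn-stream is now passed in rather than created by the mux) comes from
the description above:
    /* stand-in types */
    struct connection;
    struct conn_stream;
    struct session;

    /* illustrative before/after shape of the attach callback; signatures
     * are assumptions, not copied from the real struct mux_ops.
     */
    struct mux_ops_sketch {
        /* old shape (assumed): the mux allocated and returned a conn-stream */
        struct conn_stream *(*attach_old)(struct connection *conn,
                                          struct session *sess);
        /* new shape (assumed): the caller passes its existing conn-stream */
        int (*attach)(struct connection *conn, struct conn_stream *cs,
                      struct session *sess);
    };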
If a parsing error is detected and the corresponding HTX flag is set
(HTX_FL_PARSING_ERROR), we must be sure to always report it to the app
layer. It is especially important when the error occurs during the response
parsing, on the server side. In this case, the RX buffer contains an empty
HTX message to carry the flag. And it remains in this state till the info is
reported to the app layer. This must be done, otherwise, on the conn-stream,
the CS_FL_ERR_PENDING flag cannot be switched to CS_FL_ERROR and the
CS_FL_WANT_ROOM flag is always set when h2_rcv_buf() is called. The result
is a ping-pong loop between the mux and the stream.
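A standalone sketch of the intended propagation; flag values and the helper
shape are assumptions, only the flag names come from the description above:
    #include <stdint.h>

    /* assumed flag values; the names come from the description above */
    #define HTX_FL_PARSING_ERROR  0x00000004u
    #define CS_FL_ERR_PENDING     0x00000100u
    #define CS_FL_ERROR           0x00000200u

    struct htx_sketch         { uint32_t flags; };
    struct conn_stream_sketch { uint32_t flags; };

    /* in the mux's rcv_buf path, always report a pending parsing error to
     * the app layer, even when the HTX message is empty, so that
     * CS_FL_ERR_PENDING can be promoted to CS_FL_ERROR instead of looping
     * on CS_FL_WANT_ROOM between the mux and the stream.
     */
    static void report_parsing_error_sketch(struct conn_stream_sketch *cs,
                                            const struct htx_sketch *htx,
                                            int end_of_input_seen)
    {
        if (!(htx->flags & HTX_FL_PARSING_ERROR))
            return;
        cs->flags |= CS_FL_ERR_PENDING;
        if (end_of_input_seen)
            cs->flags |= CS_FL_ERROR;  /* promote to a definitive error */
    }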
Note that this patch fixes a bug. But it also reveals a design issue. The
error must not be reported at the HTX level. The error is already carried by
the conn-stream. There is no reason to duplicate it. In addition, it is
error-prone to have an empty HTX message only to report the error to the app
layer.
This patch should fix the issue #1561. It must be backported as far as 2.0
but the bug only affects HAProxy >= 2.4.
The idle connection delay calculation before a request is a bit tricky,
especially for multiplexed protocols. It changed between 2.3 and 2.4 by
the integration of the idle delay inside the session itself with these
commits:
dd78921c6 ("MINOR: logs: Use session idle duration when no stream is provided")
7a6c51324 ("MINOR: stream: Always get idle duration from the session")
and by then it was only set by the H1 mux. But over multiple changes, what
used to be a zero idle delay + a request delay for H2 became a bit odd, with
the idle time slipping into the request time measurement. The effect is that,
as reported in GH issue #1395, some H2 request times look huge.
This patch introduces the calculation of the session's idle time on the
H2 mux before creating the stream. This is made possible because the
stream_new() code immediately copies this value into the stream for use
at log time. Thus we don't care about changing something that will be
touched by every single request. The idle time is calculated as documented,
i.e. the delay from the previous request to the current one. This also
means that when a single stream is present on a connection, a part of
the server's response time may appear in the %Ti measurement, but this
reflects the reality since nothing would prevent the client from using
the connection to fetch more objects. In addition this shows how long
it takes a client to find references to objects in an HTML page and
start to fetch them.
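A standalone sketch of the kind of computation involved; the field names
and the reference points are assumptions modeled on the description, not
the actual code:
    #include <stdint.h>

    /* stand-in session fields; names are assumptions */
    struct session_sketch {
        uint64_t accept_ts_ms; /* date the connection was accepted */
        int32_t  t_handshake;  /* handshake duration (ms) */
        int32_t  t_idle;       /* idle time before the request (ms, %Ti) */
    };

    /* before creating the H2 stream, compute the delay since the previous
     * request (or since the end of the handshake for the first one) so that
     * stream_new() copies a meaningful value for logging.
     */
    static void h2_set_idle_time_sketch(struct session_sketch *sess,
                                        uint64_t now_ms, uint64_t prev_req_ms)
    {
        uint64_t ref = prev_req_ms ? prev_req_ms
                                   : sess->accept_ts_ms + (uint64_t)sess->t_handshake;
        sess->t_idle = (int32_t)(now_ms - ref);
    }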
A different approach could have consisted in counting from the last time
the connection was left without any request (i.e. really idle), but this
would at least require a documentation change and it's not certain this
would provide more useful information.
Thanks to Bart Butler and Luke Seelenbinder for reporting enough elements
to diagnose this issue.
This should be backported to 2.4.
Sadly, despite particular care, commit 39a0a1e12 ("MEDIUM: h2/hpack: emit
a Dynamic Table Size Update after settings change") broke H2 when sending
DTSU. A missing negation on the flag caused the DTSU_EMITTED flag to be
lost and the DTSU to be sent again on the next stream, and possibly to
break flow control or a few other internal states.
This will have to be backported wherever the patch above was backported.
Thanks to Yves Lafon for notifying us with elements to reproduce the
issue!
As reported by @jinsubsim in github issue #1498, there is an
interoperability issue between nghttp2 as a client and a few servers
among which haproxy (in fact likely all those which do not make use
of the dynamic headers table in responses or which do not intend to
use a larger table), when reducing the header table size below 4096.
These are easily testable this way:
nghttp -v -H":method: HEAD" --header-table-size=0 https://$SITE
It will result in a compression error for those which do not start
with an HPACK dynamic table size update opcode.
There is a possible interpretation of the H2 and HPACK specs that
says that an HPACK encoder must send an HPACK headers table update
confirming the new size it will be using after having acknowledged
it, because since it's possible for a decoder to advertise a late
SETTINGS and change it after transfers have begun, the initially
advertised value might very well be seen as a first change from the
initial setting, and the HPACK spec doesn't specify the side which
causes the change that triggers a DTSU update, which was essentially
summed up in this question from nghttp2's author when this issue
was already raised 6 years ago, but which didn't really find a solid
response back then:
https://lists.w3.org/Archives/Public/ietf-http-wg/2015OctDec/0107.html
The ongoing consensus, based on what some servers are doing and aiming at
limiting interoperability issues, seems to be that a DTSU is expected
for each reduction from the current size, which should be reflected in
the next revision of the H2 spec:
https://github.com/httpwg/http2-spec/pull/1005
Given that we do not make use of this table we can emit a DTSU of zero
before encoding any HPACK frame. However, some clients do not support
receiving DTSU with such values (e.g. VTest) so we cannot do it
unconditionally!
The current patch aims at sticking as close to the spec as possible by
proceeding this way:
- when a SETTINGS_HEADER_TABLE_SIZE is received, a flag is set
indicating that the value changed
- before sending any HPACK frame, this flag is checked to see if
an update is wanted and if none was sent
- in this case a DTSU of size zero is emitted and a flag is set
to mention it was emitted so that it never has to be sent again
This addresses the problem with nghttp2 without affecting VTest.
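A standalone sketch of this two-flag logic; the flag values are
assumptions, H2_CF_DTSU_EMITTED being the flag mentioned above and
H2_CF_SHTS_UPDATED an assumed name for the "size changed" flag:
    #include <stdint.h>

    #define H2_CF_SHTS_UPDATED  0x00000001u
    #define H2_CF_DTSU_EMITTED  0x00000002u

    struct h2c_sketch { uint32_t flags; };

    /* called when a SETTINGS_HEADER_TABLE_SIZE is received */
    static void on_header_table_size_sketch(struct h2c_sketch *h2c)
    {
        h2c->flags |= H2_CF_SHTS_UPDATED;
    }

    /* called before encoding any HPACK frame: emit a single DTSU of size
     * zero if the peer changed the table size and none was sent yet.
     */
    static int need_dtsu_sketch(struct h2c_sketch *h2c)
    {
        if ((h2c->flags & (H2_CF_SHTS_UPDATED | H2_CF_DTSU_EMITTED))
            == H2_CF_SHTS_UPDATED) {
            h2c->flags |= H2_CF_DTSU_EMITTED;
            return 1; /* caller prepends the table size update opcode */
        }
        return 0;
    }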
More context is available here:
https://github.com/nghttp2/nghttp2/issues/1660
https://lists.w3.org/Archives/Public/ietf-http-wg/2021OctDec/0235.html
Many thanks to @jinsubsim for this report and for participating in the issue
that led to an improvement of the H2 spec.
This should be backported to stable releases in a timely manner, ideally
as far as 2.4 once the h2spec update is merged, then to other versions
after a few months of observation or in case an issue around this is
reported.
The stopping-list management introduced by commit d3a88c1c3 ("MEDIUM:
connection: close front idling connection on soft-stop") missed two
error paths in the H1 and H2 muxes. The effect is that if a stream
or HPACK table couldn't be allocated for these incoming connections,
we would leave with the connection freed still attached to the
stopping_list and it would never leave it, resulting in a use-after-free,
hence either a crash or data corruption.
This is marked as medium as it only happens under extreme memory pressure
or when playing with tune.fail-alloc. Other stability issues remain in
such a case so that abnormal behaviors cannot be explained by this bug
alone.
This must be backported to 2.4.
During 2.4-dev, an issue with partial frames was fixed with commit
3d4631fec ("BUG/MEDIUM: mux-h2: fix read0 handling on partial frames").
However this patch is not completely correct. It makes h2_recv() return
0 if the connection was shut for reads, but this does not make h2_io_cb()
call h2_process(), so if there are any pending data left in the demux
buffer, they will never be processed, and the I/O callback will be
called in a loop forever from the poller.
The correct return value there is 1, as is done at the end of the
function to report a pending read0.
This should definitely fix issue #1328. However even after a lot of
tests I couldn't manage to reproduce it, the conditions to enter that
situation are quite racy.
This must be backported to 2.0 since the fix above was merged into
2.0.21 and 2.2.9.
The value of the H2_CF_DEM_SHORT_READ flag is wrong: 2 bits are erroneously
set, 0x200 and 0x80000. It is not a real issue because neither bit is used
anywhere else.
The typo was introduced in the commit b5f7b5296 ("BUG/MEDIUM: mux-h2: Handle
remaining read0 cases on partial frames"). Thus this patch must also be
backported as far as 2.0.
Define a new stream flag SF_WEBSOCKET and a new cs flag CS_FL_WEBSOCKET.
The conn-stream flag is first set by h1/h2 muxes if the request is a
valid websocket upgrade. The flag is then converted to SF_WEBSOCKET on
the stream creation.
This will be useful to properly manage websocket streams in
connect_server().
RFC8441 was not respected by haproxy with regard to server support for
Extended CONNECT. The Extended CONNECT method was used to convert an
Upgrade header stream even if no SETTINGS_ENABLE_CONNECT_PROTOCOL was
received, which is forbidden by RFC8441. In this case, the behavior of
the http/2 server is unspecified.
Fix this by flagging the connection on receipt of the RFC8441
SETTINGS_ENABLE_CONNECT_PROTOCOL setting. Extended CONNECT is thus only
used if the flag is present. In the other case, the stream is
immediately closed as there is no way to handle it in http/2. This
results in an http/1.1 502 or an http/2 RST_STREAM on the client side.
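A standalone sketch of the gating described above; the connection flag
name and value are assumptions, while the 0x08 settings identifier comes
from RFC8441:
    #include <stdint.h>

    #define H2_SETTINGS_ENABLE_CONNECT_PROTOCOL  0x08  /* RFC8441 identifier */
    #define H2_CF_RCVD_RFC8441                   0x01u /* assumed flag */

    struct h2c_sketch { uint32_t flags; };

    /* on receipt of the server's SETTINGS frame */
    static void h2c_handle_setting_sketch(struct h2c_sketch *h2c,
                                          uint16_t id, uint32_t value)
    {
        if (id == H2_SETTINGS_ENABLE_CONNECT_PROTOCOL && value)
            h2c->flags |= H2_CF_RCVD_RFC8441;
    }

    /* before converting an upgrade request into an Extended CONNECT;
     * without the announcement, the stream must be rejected instead.
     */
    static int h2c_can_use_extended_connect_sketch(const struct h2c_sketch *h2c)
    {
        return (h2c->flags & H2_CF_RCVD_RFC8441) != 0;
    }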
The protocol-upgrade regtest has been extended to test that haproxy does
not emit Extended CONNECT on servers without RFC8441 support.
It must be backported up to 2.4.
Add a state trace to report that a protocol upgrade is converted using
the RFC8441 Extended CONNECT method. This is useful in regard to the
recent changes to improve http/2 websockets.
While in H1 we can usually close quickly, in H2 a client might be sending
window updates or anything while we're sending a GOAWAY and the pending
data in the socket buffers at the moment the close() is performed on the
socket results in the output data being lost and an RST being emitted.
One example where this happens easily is with h2spec, which randomly
reports connection resets when waiting for a GOAWAY while haproxy sends
it, as seen in issue #1422. With h2spec it's not window updates that are
causing this but the fact that h2spec has to upload the payload that
comes with invalid frames to accommodate various implementations, and
does that in two different segments. When haproxy aborts on the invalid
frame header, the payload was not yet received and causes an RST to
be sent.
Here we're dealing with this as follows:
- we perform a shutdown(WR) on the connection to forcefully push pending
data on a front connection after the xprt is shut and closed ;
- we drain pending data
- then we close
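A standalone POSIX-level sketch of this sequence (illustrative only, the
real code goes through the conn/xprt layers rather than raw fds):
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* push pending output with shutdown(WR), drain what the client already
     * sent so the kernel does not turn it into an RST, then close.
     */
    static void close_front_connection_sketch(int fd)
    {
        char buf[4096];
        ssize_t ret;

        shutdown(fd, SHUT_WR);     /* flush output and announce the end */
        do {                       /* drain pending input */
            ret = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
        } while (ret > 0);
        close(fd);
    }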
This totally solves the issue with h2spec, and the extra cost is very
low, especially if we consider that H2 connections are not set up and
torn down often. This issue was never observed with regular clients,
most likely because this pattern does not happen in regular traffic.
After more testing it could make sense to backport this, at least to
avoid reporting errors on h2spec tests.
Some checks were added by commit 9a3d3fcb5 ("BUG/MAJOR: mux-h2: Don't try
to send data if we know it is no longer possible") to make sure we don't
loop forever trying to send data that cannot leave. But one of the
conditions there is not correct, the one relying on H2_CS_ERROR2. Indeed,
this state indicates that the error code was serialized into the mux
buffer, and since the test is placed before trying to send the data to
the socket, if the output buffer only contains a GOAWAY frame, it
may refrain from sending it and may close without sending anything. It's
not dramatic, as GOAWAY reports connection errors in situations where
delivery is not even certain, but it's cleaner to make sure the error
is properly sent, and it avoids upsetting h2spec, as seen in github
issue #1422.
Given that the patch above was backported as far as 1.8, this patch will
also have to be backported that far.
Thanks to Ilya for reporting this one.
This change is required to support TCP/HTTP rules in defaults sections. The
'disabled' bitfield in the proxy structure, used to know if a proxy is
disabled or stopped, is replaced by a generic bitfield named 'flags'.
PR_DISABLED and PR_STOPPED flags are renamed to PR_FL_DISABLED and
PR_FL_STOPPED respectively. In addition, everywhere there is a test to know
if a proxy is disabled or stopped, there is now a bitwise AND operation on
PR_FL_DISABLED and/or PR_FL_STOPPED flags.
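For illustration, a standalone sketch of the resulting test; the flag
values are assumptions:
    #include <stdint.h>

    /* assumed flag values */
    #define PR_FL_DISABLED  0x01u  /* proxy disabled in the configuration */
    #define PR_FL_STOPPED   0x02u  /* proxy stopped at runtime */

    struct proxy_sketch { uint32_t flags; };

    /* former "is this proxy disabled or stopped?" tests now become a
     * bitwise AND on the relevant flag(s), e.g.:
     */
    static int proxy_is_inactive_sketch(const struct proxy_sketch *px)
    {
        return (px->flags & (PR_FL_DISABLED | PR_FL_STOPPED)) != 0;
    }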
The last 3 fields were 3 per-thread list heads, which are:
- the pool's LRU head
- the buffer_wq
- the streams list head
Moving them into thread_ctx completes the removal of dynamic elements
from the struct thread_info. Now all these dynamic elements are packed
together at a single place for a thread.
We've found other places where the read0 is ignored because of an
incomplete frame parsing. This time, it happens during parsing of
CONTINUATION frames.
When frames are parsed, incomplete frames are properly handled and
H2_CF_DEM_SHORT_READ flag is set. It is also true for HEADERS
frames. However, for CONTINUATION frames, there is an exception. Besides
parsing the current frame, we try to peek at the header of the next one to
merge the payload of both frames, the current one and the next one. The idea
is to create a single HEADERS frame before parsing the payload. However, in
this case, it
is possible to have an incomplete frame too, not the current one but the
next one. From the demux point of view, the current frame is complete. We
must go to the internal function h2c_decode_headers() to detect an
incomplete frame. And this case was not identified and fixed when the
H2_CF_DEM_SHORT_READ flag was introduced in commit b5f7b5296
("BUG/MEDIUM: mux-h2: Handle remaining read0 cases on partial frames").
This bug was reported in a comment of the issue #1362. The patch must be
backported as far as 2.0.
backend.c and all the muxes started manipulating ebmb_nodes with the
introduction of idle conns, but the types were inherited through other
includes. Let's add ebmbtree.h there.
We'll need to improve the API to pass other arguments in the future, so
let's start to adapt better to the current use cases. task_new() is used:
- 18 times as task_new(tid_bit)
- 18 times as task_new(MAX_THREADS_MASK)
- 2 times with a single bit (in a loop)
- 1 in the debug code that uses a mask
This patch provides 3 new functions to achieve this:
- task_new_here() to create a task on the calling thread
- task_new_anywhere() to create a task to be run anywhere
- task_new_on() to create a task to run on a specific thread
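A standalone sketch of how these wrappers could map onto the existing call
patterns; the declarations of task_new(), tid_bit and MAX_THREADS_MASK are
approximations of the real API:
    /* approximations of the existing task API for the sketch */
    struct task;
    extern struct task *task_new(unsigned long thread_mask);
    extern unsigned long tid_bit;                 /* calling thread's bit */
    #define MAX_THREADS_MASK  (~0UL)              /* "any thread" mask */

    /* the three assumed wrappers, mapping the call patterns listed above */
    static inline struct task *task_new_here(void)
    {
        return task_new(tid_bit);                 /* run on the calling thread */
    }

    static inline struct task *task_new_anywhere(void)
    {
        return task_new(MAX_THREADS_MASK);        /* run on any thread */
    }

    static inline struct task *task_new_on(unsigned int thr)
    {
        return task_new(1UL << thr);              /* run on a specific thread */
    }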
The change is trivial and will allow us to later concentrate the
required adaptations to these 3 functions only. It's still possible
to call task_new() if needed but a comment was added to encourage the
use of the new ones instead. The debug code was not changed and still
uses it.
The transient flag CO_RFL_BUF_NOT_STUCK should now be set when the mux's
rcv_buf() function is called, in si_cs_recv(), to be sure the mux is able to
perform some optimisation during data copy. This flag is set when we are
sure the channel buffer is not stuck. Concretely, it happens when there are
data scheduled to be sent.
It is not a fix and this flag is not used for now. But it makes sense to
have this info available so that some optimisations can be performed when
necessary.
This patch is related to the issue #1362. It may be backported to 2.4 to
ease future backports.
Instead of returning a 501-Not-implemented error when the "Upgrade:" header
is found for a request with a payload, the header is removed. This way, the
upgrade is disabled and the request is still sent to the server. It is
required because some frameworks seem to try to perform an H2 upgrade on
every request, including POST ones.
The h2 mux was slightly fixed to convert Upgrade requests to Extended
CONNECT ones only if the right HTX flag is set.
This patch should fix the issue #1381. It must be backported to 2.4.
This part was fixed several times since commit aade4edc1 ("BUG/MEDIUM:
mux-h2: Don't handle pending read0 too early on streams") and there are
still some cases where a read0 event may be ignored because a partial frame
inhibits the event.
Here, we must take care to set H2_CF_END_REACHED flag if a read0 was
received while a partial frame header is received or if the padding length
is missing.
To ease partial frame detection, H2_CF_DEM_SHORT_READ flag is introduced. It
is systematically removed when some data are received and is set when a
partial frame is found or when the dbuf buffer is empty. At the end of the
demux, if the connection must be closed ASAP or if data are missing to move
forward, we may acknowledge the pending read0 event, if any. For now,
H2_CF_DEM_SHORT_READ is not part of H2_CF_DEM_BLOCK_ANY mask.
This patch should fix the issue #1328. It must be backported as far as 2.0.
If a connection is closed during the preface while no data was received and
the dontlognull option is set, no log message must be emitted. However, this
will still be handled as a protocol error. Only the log is omitted.
This patch should fix the issue #1336 for H2 sessions. It must be backported
to 2.4 and 2.3 at least, and probably as far as 2.0.
Define a new global config statement named
"h2-workaround-bogus-websocket-clients".
This statement will disable the automatic announce of h2 websocket
support as specified in RFC8441. This can be used to overcome clients
which fail to implement the relatively fresh RFC8441. Clients will in
this case automatically downgrade to http/1.1 for the websocket tunnel
if the haproxy configuration allows it.
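A standalone sketch of where such a global flag could be consulted; the
GTUNE_DISABLE_H2_WEBSOCKET name and the structure are assumptions modeled
on the description above:
    #include <stdint.h>

    /* assumed global option flag set by the new statement */
    #define GTUNE_DISABLE_H2_WEBSOCKET  0x01u

    struct tune_sketch { uint32_t options; };

    /* when building our SETTINGS frame, only announce RFC8441 support
     * (SETTINGS_ENABLE_CONNECT_PROTOCOL) if the workaround is not enabled,
     * so that affected clients fall back to http/1.1 websocket tunnels.
     */
    static int announce_rfc8441_sketch(const struct tune_sketch *tune)
    {
        return !(tune->options & GTUNE_DISABLE_H2_WEBSOCKET);
    }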
This feature is relatively simple and can be backported up to 2.4, which
saw the introduction of h2 websocket support.
In 2.4, commit d1ac2b90c ("MAJOR: htx: Remove the EOM block type and
use HTX_FL_EOM instead") changed the HTX processing to destroy the
blocks as they are processed. So the traces that were emitted at the
end of the send headers functions didn't have anything to show.
Let's move these traces earlier in the function, right before the HTX
processing, so that everything is still in place.
This should be backported to 2.4.
Since commit 7d013e796 ("BUG/MEDIUM: mux-h2: Xfer rxbuf to the upper
layer when creating a front stream"), the rxbuf is lost during the
call to h2c_frt_stream_new(), so the trace that happens later cannot
find a request there and we've lost the useful part indicating what
the request looked like. Let's move the trace before this call.
This should be backported to 2.4.
We're seeing some browsers setting up multiple connections and closing
some to just keep one. It looks like they do this in case they'd
negotiate H1. This results in aborted prefaces and log pollution about
bad requests and "PR--" in the status flags.
We already have an option to ignore connections with no data, it's called
http-ignore-probes. But it was not used by the H2 mux. However it totally
makes sense to use it during the preface.
This patch changes this so that connections aborted before sending the
preface can avoid being logged.
This should be backported to 2.4 and 2.3 at least, and probably even
as far as 2.0.
"sent H2 request" was already misaligned with the 3 other ones
(sent/rcvd, request/response), and now with "new H2 connection" that's
yet another alignment making the traces even less legible. Let's just
realign all 5 messages, this even eases quick pointer comparisons. This
should probably be backported to 2.4 as it's where it's the most likely
to be used in the mid-term.
It is currently very difficult to match some H2 trace outputs against
some log extracts because there's no exactly equivalent info.
This patch tries to address this by adding a TRACE_USER() call in h2_init()
that is matched in h2_trace() to report:
- connection pointer and direction
- frontend's name or server's name
- transport layer and control layer (e.g. "SSL/tcpv4")
- source and/or destination depending on what is set
This now makes it possible to get something like this at verbosity level
complete:
<0>2021-06-16T18:30:19.810897+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x19fee50(F,PRF) : conn=0x7f373c026850(IN) fe=h2gw RAW/tcpv4 src=127.0.0.1:19540
<0>2021-06-16T18:30:19.810919+02:00 [00|h2|1|mux_h2.c:2731] rcvd H2 request : h2c=0x19fee50(F,FRH)
<0>2021-06-16T18:30:19.810998+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x1a04ee0(B,PRF) : conn=0x1a04ce0(OUT) sv=h2gw/s1 RAW/tcpv4 dst=127.0.0.1:4446
Implement a safe mechanism to close front idling connections which
prevent the soft-stop from completing. Every h1/h2 front connection is
added to a new per-thread list instance. On shutdown, a new task is
woken up which calls the wake mux operation on every connection still
present in this list.
A new stopping_list attach point has been added in the connection
structure. As this member is only used for frontend connections, it
shares the same union as the session_list reserved for backend
connections.
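For illustration, a standalone sketch of this layout; struct names are
simplified stand-ins:
    /* minimal stand-in for haproxy's struct list */
    struct list_sketch { struct list_sketch *n, *p; };

    /* frontend connections reuse the storage of the backend-only
     * session_list, as described above (simplified stand-in layout).
     */
    struct connection_sketch {
        /* ... other members ... */
        union {
            struct list_sketch session_list;  /* backend: session's conn list */
            struct list_sketch stopping_list; /* frontend: per-thread stopping list */
        };
    };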
When a DATA frame is sent, we must take care to properly detect the EOM flag
on the HTX message to set ES flag on the frame when necessary, to finish the
stream. But it is only done when data are copied from the HTX message to the
mux buffer and not when the frame is sent via zero-copy. This patch fixes
this bug.
It is a 2.4-specific bug. No backport is needed.
Since the input buffer is transferred to the stream when it is created,
there is no longer control on the request size to be sure the buffer's
reserve is still respected. It was automatically performed in h2_rcv_buf()
because the caller took care to provide the correct available space in the
buffer. The control is still there but it is no longer applied on the
request headers. Now, we should take care of the reserve when the headers
are decoded, before the stream creation.
The test is performed for the request and the response.
It is a 2.4-specific bug. No backport is needed.
The H2_CF_RCVD_SHUT flag is used to report a read0 was encountered. It is
used by the H2 mux to properly handle shutdowns. However, this flag is only
set when no data are received. If it is detected at the socket level when
some data are received, it is not handled. And because the event was
reported on the connection, any other read attempts are blocked. In this
case, we are unable to close the connection and release the mux
immediately. We must wait for the mux timeout to expire.
This patch should fix the issue #1231. It must be backported as far as 2.0.
When headers are split over several frames, the payloads of the HEADERS and
CONTINUATION frames are merged to form a unique HEADERS frame before
decoding the payload. To do so, info about the current frame is updated
(dff, dfl..) with info from the next one. Here there is a bug when the frame
length (dfl) is updated. We must add the next frame length (hdr.dfl) and not
only the amount of data found in the buffer (clen). Because HEADERS frames
are decoded in one pass, the dfl value is either the whole frame length or
0, nothing in between.
This patch must be backported as far as 2.0.
In the function decoding the payload of HEADERS frames, an internal error
is returned if the frame length is too large. It cannot exceed the buffer
size. The same is true when headers are split over several frames. The
payloads of HEADERS and CONTINUATION frames are merged and the overall size
must not exceed the buffer size.
However, there is a bug when the current frame is big enough to only have
the space for a part of the header of the next frame. Because, in this case,
we wait for more data, to have the whole frame header. We don't properly
detect that the headers are too large to be stored in one buffer. In fact
the test to trigger this error is not accurate. When the buffer is full, the
error is reported if the frame length exceeds the amount of data in the
buffer. But in reality, an error must be reported when we are unable to
decode the current frame while the buffer is full. Because, in this case, we
know there is no way to change this state.
When the bug happens, the H2 connection is woken up in a loop, consuming all
the CPU. But the traffic is not blocked for all that.
This patch must be backported as far as 2.0.
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").
Let's use more explicit (but slightly longer) names now:
LIST_ADD -> LIST_INSERT
LIST_ADDQ -> LIST_APPEND
LIST_ADDED -> LIST_INLIST
LIST_DEL -> LIST_DELETE
The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage to use it instead of the
lazier LIST_DELETE which is often less safe.
The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.
The list doc was updated.
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
MX_FL_NO_UPG flag may now be set on a multiplexer to explicitly disable
upgrades from this mux. For now, it is set on the FCGI multiplexer because
it is not supported and there is no upgrade on backend-only multiplexers. It
is also set on the H2 multiplexer because it is clearly not supported.