Commit c3680ec ("MINOR: add severity information to cli feedback messages")
introduced a severity level to CLI messages, but one of them was missed
on "set server addr". No backport is needed.
Given that all spinning loops we've had since 1.8-rc1 were caused by
unbalanced lock/unlock, let's get rid of all return statements in the
locked check functions and only exit via a single unlock place.
The "set server <srv> check-port" CLI handler forgot to return after
detecting an error on the port number, and still proceeds with the action.
This needs to be backported to 1.7.
Some unlocks were missing, resulting in deadlocks even with a single thread.
We really need to make these functions safer by getting rid of all those
remaining "return" calls and only leave using a goto!
When using peers or a stick table, we could update a freq_ctr using a
tick value with the first bit set, but this bit has been reserved for
the lock since multithreading support was added.
Fix bugs caused by a missing unlock and a recursive lock when
performing an HTTP health check.
The server's lock scope was enlarged to protect all callers
of 'set_server_check_status' and 'chk_report_conn_err'.
This fix also protects tcpcheck against concurrency.
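As a rough sketch, a caller now holds the server lock around the status
update (the two function names come from the message above; the lock macro
and status constant used here are assumptions):

    HA_SPIN_LOCK(SERVER_LOCK, &check->server->lock);
    set_server_check_status(check, HCHK_STATUS_L7OKD, "OK");
    HA_SPIN_UNLOCK(SERVER_LOCK, &check->server->lock);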
Before introducing the mux layer, tcp_connect() would poll for sending
to detect the connection establishment. It happens that the health
checks have apparently never explicitly enabled this polling and have
been relying on this implicit one.
Now that there's the mux layer, the conn_stream needs to be enabled
for polling as well and since it's not done in the checks, it's never
done and the check's request doesn't leave the machine, as can be
noticed with http checks.
The solution simply consists in going back to the well-known case
where we enable polling after connecting using cs_want_send() if we
have anything but just a plain connect(). The regular data path is
not affected because the stream interface code automatically computes
the polling needs based on buffer contents.
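A minimal sketch of the fix, assuming a check whose output buffer already
holds the request (only cs_want_send() is named above; the surrounding
variables are illustrative):

    /* after the connect(): enable send polling on the conn_stream
     * whenever there is anything to emit beyond the plain connect()
     */
    if (check->bo->o)
        cs_want_send(cs);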
This situation, which must not happen, does in fact happen when feeding
artificial responses using errorfiles, Lua or an applet. For now it
causes the H1 response parser to loop forever trying to get a more
complete response. Since it cannot progress, let's return an error.
Don't bother testing if len is nonzero, we know it is, as we're in the
"else" part of a if (!len), and testing it confuses clang into thinking
ret may be left uninitialized.
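For illustration, the construct roughly looks like this once fixed (names
are illustrative):

    if (!len) {
        ret = 0;
    } else {
        /* len is necessarily non-zero here; re-testing it only lets
         * clang believe that ret could stay uninitialized on one path
         */
        ret = consume(buf, len);
    }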
Store objects in the cache. The cache uses a shctx for storage.
It uses an http-response action to store the headers and a filter to
store the body. The http-response action is used in order to allow
modifications by other actions before caching.
Forbid shctx from reading more than expected; this allows passing a
larger value as the len to shctx_row_data_get(), for example the size
of the destination buffer.
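A small usage sketch (the exact prototype of shctx_row_data_get() is
assumed here):

    unsigned char dst[16384];

    /* asking for more than the row holds is now safe: the call stops
     * at the end of the stored data instead of reading past it
     */
    ret = shctx_row_data_get(shctx, first, dst, 0, sizeof(dst));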
Previous commit ea3928 (MEDIUM: h2: apply a timeout to h2 connections)
was wrong for two reasons. The first one is that if the client timeout
is not set, it's used as zero, preventing connections from establishing.
The second reason is that if the timeout triggers with active streams
(normally it should not since the task is supposed to be disabled), the
task is removed (h2c->task=NULL), and the last quitting stream might
try to dereference it.
Instead of doing this, we simply don't register the task if there's no
timeout (it's useless) and we always check for its presence in the streams.
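A sketch of the idea, with assumed field and function names:

    /* only register the task when a timeout actually exists */
    if (tick_isset(h2c->timeout)) {
        h2c->task = task_new(tid_bit);
        h2c->task->process = h2_timeout_task;
        h2c->task->context = h2c;
    }

    /* ... and every user of the task first checks that it is there */
    if (h2c->task)
        h2c->task->expire = tick_add(now_ms, h2c->timeout);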
Till now there was no way to deal with a dead H2 connection. Now each
connection creates a task that wakes up to kill the connection. Its
timeout is constantly refreshed when there's some activity. In case
the timeout triggers, the best effort attempts are made at sending a
clean GOAWAY message before closing and signaling the streams.
The timeout is automatically disabled when there's an active stream on
the connection, and restarted when the last stream finishes. This way
it should not affect long sessions.
Given that we're processing data produced by haproxy, we know that the
situations where haproxy doesn't return anything are :
- request timeout with option http-ignore-probes : there's no reason to
hit this since we're creating the stream with the request into it ;
- tcp-request content reject : this definitely means we want to kill the
connection and abort keep-alive and any further processing ;
- using /dev/null as the error file to hide an error
In practice it appears that using the abort on empty response as a hint to
trigger a connection close is very appropriate to continue to give the
control over the connection management. This patch thus tries to send a
GOAWAY frame with the max_id presented as the last stream ID, then sends
an RST_STREAM for the current stream. For the client, this means that the
connection must be shut down immediately after processing the last pending
streams and that the current stream is aborted. This way it's still possible
to force connections to be closed using tcp-request rules.
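The sequence can be sketched as follows (the two helpers are hypothetical;
only the frame types and the use of max_id come from the description above):

    /* tell the client to finish the streams already started, then leave */
    h2c_send_goaway(h2c, h2c->max_id, H2_ERR_NO_ERROR);

    /* and abort the stream whose response was empty */
    h2s_send_rst_stream(h2s, H2_ERR_CANCEL);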
After some long brainstorming sessions, it appears that "Connection: close"
seems to be the best signal from the L7 layer to indicate the need to close
the connection. Indeed, in H1 it is only present in very rare cases (eg:
certain unrecoverable errors, some of which could remove it now by the way).
It will also be added when the L7 layer wants to force the connection to
terminate. By default when running in keep-alive mode it is not present.
It's worth mentioning that in H1 with persistent connections, we have sort
of a concurrency-1 mux and this header field is used the same way.
Thus here this patch detects "Connection: close" in response headers and
if seen, sends a GOAWAY frame with the highest possible ID so that the
client knows that it can quit whenever it wants to. If more aggressive
closures are needed in the future, we may decide to advertise the max_id
to abort after the current requests and better honor "http-request deny".
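A sketch of the detection, using a hypothetical helper and flag name:

    /* 2^31-1 is the highest valid client stream ID: the client may
     * finish everything in flight and close whenever it wants to
     */
    if (h1m->flags & H1_MF_CONN_CLO)
        h2c_send_goaway(h2c, 0x7fffffff, H2_ERR_NO_ERROR);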
For a graceful shutdown, the spec requires discarding frames with a
stream ID higher than the advertised last_id (RFC7540#6.8). Well,
finally for now the code is disabled (see last page of #6.8). Some
frames need to be processed anyway to maintain the compression state
and the flow control window state, but we don't have any trivial way
to do this and ignore them at the same time. For the headers it's
the worst case where we can't parse headers frames without coming
from the streams, and we don't want to create such streams as we'd
have to abort them, and aborting would cause errors to flow back.
Possibly that a longterm solution might involve using some dummy
streams and dummy buffers for this and calling the parsers directly.
RFC7540#5.1 is pretty clear : "any frame other than WINDOW_UPDATE,
PRIORITY, or RST_STREAM in this state MUST be treated as a connection
error of type STREAM_CLOSED". Instead of dealing with this for each
and every frame type, let's do it once for all in the main demux loop.
RFC7540#5.1 is pretty clear : "any frame other than HEADERS or PRIORITY
in this state MUST be treated as a connection error". Instead of dealing
with this for each and every frame type, let's do it once for all in the
main demux loop.
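Both checks can be sketched as a single block at the top of the demux loop
(field and helper names are assumptions):

    if (!h2s && h2c->dsi <= h2c->max_id) {
        /* closed stream: only WINDOW_UPDATE, PRIORITY and RST_STREAM
         * are acceptable
         */
        if (h2c->dft != H2_FT_WINDOW_UPDATE &&
            h2c->dft != H2_FT_PRIORITY &&
            h2c->dft != H2_FT_RST_STREAM) {
            h2c_error(h2c, H2_ERR_STREAM_CLOSED);
            goto fail;
        }
    }
    else if (!h2s) {
        /* idle stream: only HEADERS and PRIORITY are acceptable */
        if (h2c->dft != H2_FT_HEADERS && h2c->dft != H2_FT_PRIORITY) {
            h2c_error(h2c, H2_ERR_PROTOCOL_ERROR);
            goto fail;
        }
    }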
The ID is respected, and only IDs greater than the advertised last_id
are woken up, with a CS_FL_ERROR flag to signal that the stream is
aborted. This is necessary for a browser to abort a download or to
reject a bad response that affects the connection's state.
Let's replace h2_wake_all_streams() with h2_wake_some_streams(), to
support signaling only streams by their ID (for GOAWAY frames) and
to pass the flags to add on the conn_stream.
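The new helper's intent, sketched (the prototype is an assumption):

    static void h2_wake_some_streams(struct h2c *h2c, int last, uint32_t flags);

    /* e.g. on GOAWAY: only error the streams above the advertised last_id */
    h2_wake_some_streams(h2c, h2c->last_sid, CS_FL_ERROR);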
When a stream sends a shutw, we send an empty DATA frame with the ES
flag set, except if no HEADERS were sent, in which case we rather send
RST_STREAM. On shutr(1) to abort a request, if the stream is OPEN, an
RST_STREAM frame is sent and the stream is closed. Care is taken to switch
the stream's state accordingly and to avoid sending an ES bit again or
another RST once already done.
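A sketch of the shutw decision (flag and helper names are assumptions
derived from the description):

    if (h2s->flags & H2_SF_HEADERS_SENT) {
        if (!(h2s->flags & H2_SF_ES_SENT))
            h2_send_empty_data_es(h2s);     /* empty DATA frame, ES set */
    }
    else
        h2s_send_rst_stream(h2s, H2_ERR_CANCEL);  /* nothing sent: reset */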
Data frames are received and transmitted. The per-connection and
per-stream amount of data to ACK is automatically updated. Each
DATA frame is ACKed because usually the downstream link is large
and the upstream one is small, so it seems better to waste a few
bytes every few kilobytes to maintain a low ACK latency and help
the sender keep the link busy. The connection's ACK however is
sent at the end of the demux loop and at the beginning of the mux
loop so that a single aggregated one is emitted (connection
windows tend to be much larger than stream windows).
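A rough sketch of the acknowledgement strategy (field and helper names are
assumptions):

    /* on each DATA frame of <flen> bytes received for stream <sid> */
    h2c->rcvd_s += flen;               /* per-stream bytes to ACK      */
    h2c->rcvd_c += flen;               /* per-connection bytes to ACK  */
    h2c_ack_stream_window(h2c, sid);   /* hypothetical: emit the stream
                                        * WINDOW_UPDATE immediately    */
    /* the connection WINDOW_UPDATE is emitted once per demux/mux loop */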
A future improvement would consist in sending a single ACK for
multiple subsequent DATA frames of the same stream (possibly
interleaved with window updates frames), but this is much trickier
as it also requires to remember the ID of the stream for which
DATA frames have to be sent.
Ideally in the near future we should chunk-encode the body sent
to HTTP/1 when there's no content length and when the request is
not a CONNECT. It's just uncertain whether it's the best option
or not for now.
When it is detected that the number of received bytes is > 0 on the
connection at the end of the demux call or before starting to process
pending output data, an attempt is made at sending a WINDOW_UPDATE on
the connection. In case of failure, it's attempted later.
For now we don't build a HEADERS frame with the trailers, but at least we remove
them from the response so that the L7 chunk parser inside isn't blocked
on these (often two) remaining bytes that don't want to leave the buffer.
It also ensures that trailers delivered progressively will correctly be
skipped.
The H1 response data are processed (either following content-length or
chunks) and emitted as H2 DATA frames. In the case of content-length,
the maximum size permitted by the mux buffer, the max frame size, the
connection's window and the stream's window is used to determine the
frame size. For chunked encoding, the same limitation applies, but in
addition, each chunk leads to a distinct frame. This could be improved
in the future to aggregate chunks into larger frames.
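The size computation can be sketched as taking the minimum of all limits
at once (variable names are illustrative, not the actual fields):

    size_t max = bytes_left;              /* remaining H1 body bytes   */

    max = MIN(max, h2c->mfs);             /* peer's max frame size     */
    max = MIN(max, h2s->mws);             /* stream's send window      */
    max = MIN(max, h2c->mws);             /* connection's send window  */
    max = MIN(max, mux_buffer_room);      /* room left in the mux buf  */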
Streams blocked on the connection's flow control subscribe to the
connection's fctl_list to be woken up when the window opens again.
Streams blocked on their own flow control don't subscribe to anything,
they just sit waiting for window update frames to reopen the window.
The connection-close mode (without content-length) partially works thanks
to the fact that the SHUTW event leads to a close of the stream. In
practice an empty DATA frame should be sent in this case though.
This calls the h1 response parser and feeds the output through the hpack
encoder to produce stateless HPACK bytecode into an output chunk. For now
it's a bit naive but reasonably efficient.
The HPACK encoder relies on hpack_encode_header() so that the most common
response header fields are encoded based on the static header table. The
forbidden header field names (connection, proxy-connection, upgrade,
transfer-encoding, keep-alive) are dropped before calling the hpack
encoder.
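A sketch of the per-header step (hpack_encode_header() is named above; the
exact prototype and the surrounding loop are assumptions):

    /* drop the H1 connection-specific fields forbidden in H2 */
    if (isteqi(n, ist("connection"))  || isteqi(n, ist("proxy-connection")) ||
        isteqi(n, ist("upgrade"))     || isteqi(n, ist("transfer-encoding")) ||
        isteqi(n, ist("keep-alive")))
        continue;

    /* emit stateless HPACK, using the static table whenever possible */
    if (!hpack_encode_header(&outbuf, n, v))
        goto full;    /* output chunk is full */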
A new flag (H2_CF_HEADERS_SENT) is set once such a frame is emitted. It
will be used to know if we can send an empty DATA+ES frame to use as a
shutdown() signal or if we have to use RST_STREAM.
The trash is already used by the hpack layer and for Huffman decoding, so
it's unsafe to use here as a buffer and results in corrupted data. Use
a safely allocated trash instead.
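A sketch of the fix, using the allocated-trash helpers from this code base:

    struct chunk *tmp = alloc_trash_chunk();

    if (!tmp)
        goto fail;
    /* ... decode into tmp->str instead of the shared trash ... */
    free_trash_chunk(tmp);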
This takes care of creating a new h2s and a new conn_stream when a
HEADERS frame arrives. The recv() callback from the data layer is then
called to extract the frame into the stream's buffer. It is verified
that the stream ID is strictly greater than the known max stream ID.
And the last_id is updated if the current request is properly converted.
The streams are created in open or half-closed(remote) states.
For now there are some limitations :
- frames without END_HEADERS are rejected (CONTINUATION not supported
yet, will require some more changes so that the stream processor
checks the H2 frame header by itself and steals the frames from the
connection)
- padding/stream_dep/priority are currently ignored
- limited error handling, could be improved
But at least the request is properly decoded, transcoded and processed.
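The ID check mentioned above can be sketched as follows (field names are
assumptions):

    /* a new HEADERS frame must carry an ID strictly above all known ones */
    if (h2c->dsi <= h2c->max_id) {
        h2c_error(h2c, H2_ERR_PROTOCOL_ERROR);   /* connection error */
        goto fail;
    }

    /* ... create the h2s and conn_stream, decode the request ... */

    h2c->max_id = h2c->dsi;    /* only once the request was properly converted */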
If a stream is killed for whatever reason and it happens to be the one
currently blocking the connection, we must unblock the connection and
enable polling again so that it can attempt to make progress. This may
happen for example on upload timeout, where the demux is blocked due to
a full stream buffer, and the stream dies on server timeout and quits.
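A sketch of what the fix amounts to (flag and helper names are assumptions):

    /* the dying stream is the one the demux is blocked on */
    if (h2c->dsi == h2s->id && (h2c->flags & H2_CF_DEM_SFULL)) {
        h2c->flags &= ~H2_CF_DEM_SFULL;      /* unblock the demux      */
        conn_xprt_want_recv(h2c->conn);      /* re-enable recv polling */
    }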
This does the very minimum required to release a stream and/or a connection
upon the stream's request. The only thing is that it doesn't kill the
connection unless it's already closed or in error or the stream ID reached
the one specified in the GOAWAY frame. We're supposed to arm a timer to close
after some idle timeout but it's not done.