Replace the MT_LIST used for the ->rx.pqpkts quic_enc_level struct member
by a plain LIST.
Same thing for the ->list quic_rx_packet struct member MT_LIST.
Update the code consequently. This was a remnant of the multithreading
support (several threads per connection).
Must be backported to 2.6.
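For illustration only (the member names come from the message above, the
surrounding code is assumed), the change boils down to:

    -    struct mt_list pqpkts;  /* accessed with MT_LIST_* macros */
    +    struct list pqpkts;     /* accessed with plain LIST_* macros */

    -    MT_LIST_APPEND(&qel->rx.pqpkts, &pqpkt->list);
    +    LIST_APPEND(&qel->rx.pqpkts, &pqpkt->list);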
Do not rely on the fact that the callers of qc_build_frm() handle the
buffer passed to the function the correct way (without leaving garbage).
Make qc_build_frm() update the buffer passed as argument only if
the frame it builds is well formed.
As far as I see, there are no such callers which do not handle
such buffers carefully.
Must be backported to 2.6.
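A minimal, self-contained sketch of the principle (names are illustrative,
not the real qc_build_frm() code): write through a local copy of the
position and commit it to the caller's buffer only on success:

    #include <stddef.h>

    static int build_frm(unsigned char **pos, const unsigned char *end,
                         const unsigned char *data, size_t len)
    {
            unsigned char *p = *pos;  /* local copy: no garbage on failure */
            size_t i;

            if ((size_t)(end - p) < len)
                    return 0;         /* not enough room, buffer untouched */
            for (i = 0; i < len; i++)
                    p[i] = data[i];
            *pos = p + len;           /* commit only on success */
            return 1;
    }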
It is list_for_each_entry_safe() which must be used if we want to delete
elements inside its code block. This could explain why some frames which
were not built were added to packets with a NULL ->pkt member.
Thank you to Tristan for having reported this issue through backtraces in GH #1808.
Must be backported to 2.6.
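A short sketch assuming haproxy's <haproxy/list.h> API (the frame struct is
illustrative): the _safe variant saves the next element, so the current one
may be deleted inside the loop body:

    #include <haproxy/list.h>

    struct my_frame {
            struct list list;
            int built;
    };

    static void drop_unbuilt(struct list *frms)
    {
            struct my_frame *frm, *frmbak;

            list_for_each_entry_safe(frm, frmbak, frms, list) {
                    if (!frm->built)
                            LIST_DELETE(&frm->list); /* safe while iterating */
            }
    }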
First, these packets must not be inserted in the tree of TX packets.
They are never explicitly acknowledged (for instance an ACK-only
packet will never be acknowledged). Furthermore, if taken into account,
these packets may uselessly disturb the congestion control. We do not care
whether they are lost or not. Finally, as their ->in_flight_len member value
is null, they were not released by qc_release_lost_pkts(), which relies on
this value to decide to release the memory allocated for such packets.
Must be backported to 2.6.
The master is re-executed with an empty proxies list if no master CLI is
configured.
This resulted in an infinite loop since the last patch:
3b68b602 ("BUG/MAJOR: log-forward: Fix log-forward proxies not fully initialized")
This patch avoids looping again on the log-forward proxies list if it is empty.
This patch should be backported as far as 2.3.
We used to emit a diag warning in case ranges were used both with the
process and thread parts of a thread spec. Now with groups it's no
longer a problem, so let's just kill this warning.
Since 2.7-dev2 with commit 5b09341c02 ("MEDIUM: cpu-map: replace the
process number with the thread group number"), the thread group has
replaced the process number in the "cpu-map" directive. In part due to
a design limit in 2.4 and 2.5, a special case was made of thread 1 in
commit bda7c1decd ("MEDIUM: config: simplify cpu-map handling"), because
there was no other location to store a single-threaded setup's mask by
then. The combination of the two resulted in a problem with thread
groups, by which as soon as one line exhibiting thread number 1 alone
was found in a config, the mask would be applied to all threads in the
group.
The loop was reworked to avoid this obsolete special case, and was
factored for better legibility. One obsolete comment about nbproc
was also removed. No backport is needed.
Some clients send a CONNECTION_CLOSE frame without acknowledging the STREAM
data haproxy has sent. In this case, when closing the connection, if
there was remaining data in the QUIC stream buffers, it was not released.
Add a <closing> boolean option to qc_stream_desc_free() to force the
release of the stream buffer memory upon closing the connection.
Thank you to Tristan for having reported such a memory leak issue in GH #1801.
Must be backported to 2.6.
In ticket #1805 a user is impacted by the limitation of the size of the CLI
buffer when updating a ca-file.
This patch allows a user to append new certificates to a ca-file instead
of trying to put them all with "set ssl ca-file".
The implementation uses a new function ssl_store_dup_cafile_entry() which
duplicates a cafile_entry and its X509_STORE.
ssl_store_load_ca_from_buf() was modified to take an append parameter so
we could share the function for "set" and "add".
In order to be able to append new CAs in a cafile_entry,
ssl_store_load_ca_from_buf() was reworked and an "append" parameter was
added.
The function is able to keep the previous X509_STORE which was already
present in the cafile_entry.
"set ssl ca-file" does not return any error when a ca-file is empty or
only contains comments. This could be a problem if the file was
malformed and did not contain any PEM header.
It must be backported as far as 2.5.
Implement quic_tls_rx_hp_ctx_init() and quic_tls_tx_hp_ctx_init() to initialize
such header protection cipher contexts for the RX and TX parts and for each
packet number space, only once per connection.
Make qc_new_isecs() call these two functions to initialize the cipher contexts
of the Initial secrets. Same thing for ha_quic_set_encryption_secrets() to
initialize the cipher contexts of the subsequent derived secrets (0-RTT, 1-RTT,
Handshake).
Modify qc_do_rm_hp() and quic_apply_header_protection() to reuse these
cipher contexts.
Note that there is no need to modify the key update for the header protection.
The header protection secrets are never updated.
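A hedged sketch of such a one-time setup, using the OpenSSL EVP API (the
helper name and error handling are illustrative; the real code supports
several ciphers):

    #include <openssl/evp.h>

    static int hp_ctx_init(EVP_CIPHER_CTX **out, const EVP_CIPHER *ciph,
                           const unsigned char *hp_key)
    {
            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

            if (!ctx)
                    return 0;
            /* the HP key never changes: initialize once per connection
             * and reuse this context for every packet afterwards */
            if (!EVP_EncryptInit_ex(ctx, ciph, NULL, hp_key, NULL)) {
                    EVP_CIPHER_CTX_free(ctx);
                    return 0;
            }
            *out = ctx;
            return 1;
    }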
Since commit 2071a99df ("MINOR: listener/ssl: set the SSL xprt layer only
once the whole config is known") the xprt is initialized for ssl directly
from a generic function used to parse bind args.
But the 'bind' lines from 'log-forward' sections were forgotten in commit
55f0f7bb5 ("MINOR: config: use the new bind_parse_args_list() to parse a
"bind" line").
This patch re-works 'log-forward' section parsing to use the generic
function to parse bind args and fix the issue.
Since the generic way to parse was introduced in 2.6, this patch
should be backported as far as this version.
Some initialization for log-forward proxies was missing, such
as the ssl configuration on 'log-forward' 'bind' lines.
After the loop on the proxy initialization code for proxies present
in the main proxies list, this patch forces a second pass of this code
for proxies present in the log-forward proxies list.
Those two lists should be merged. This will be part of a global
re-work of proxy initialization including peers proxies and resolver
proxies.
This patch was made as a first attempt to fix the bug and to facilitate
the backport on older branches, waiting for a cleaner re-work of proxy
initialization on the dev branch.
This patch should be backported as far as 2.3.
This wrong trace came with this commit:
"BUG/MINOR: quic: Possible crashes when dereferencing ->pkt quic_frame struct member"
In qc_release_frm() we mark frames as acked. This has nothing to do with
references to frames.
Thank you to Willy for having caught this one.
Must be backported to 2.6 as these traces arrived with a bug fix to be backported
to 2.6.
When duplicated frames are split, we must propagate this information
to the newly allocated frame and add a reference to this new frame
to the reference list of the original frame.
Must be backported to 2.6.
This was done in several places. First in qc_requeue_nacked_pkt_tx_frms().
The aim of this function is, if needed, to requeue all the TX frames of a lost
<pkt> packet passed as argument and detach them from this packet they have been
sent from. There are possible cases where the frm->pkt quic_frame struct member
could be NULL, as a result of a duplication of an original frame by
qc_dup_pkt_frms(). This function adds the duplicated frame to the original
frame's reference list:
LIST_APPEND(&origin->reflist, &dup_frm->ref);
But, in this function, the packet which contains the frame is the one which is
passed as argument (for debug purposes). So let us prefer using this variable.
Also do not dereference this ->pkt quic_frame member in qc_release_frm() and
qc_frm_unref() and add a trace to catch frames with a null ->pkt member.
These are logically frames which have not been sent yet.
Thank you to Tristan for having reported such crashes in GH #1808.
Must be backported to 2.6
If a POST upload is cancelled after having advertised a content-length,
or a response body is truncated after a content-length, we're not allowed
to send ES because in this case the total body length must exactly match
the advertised value. Till now that's what we were doing, and that was
causing the other side (possibly haproxy) to respond with an RST_STREAM
PROTOCOL_ERROR due to "ES on DATA frame before content-length".
We can behave a bit cleaner here. Let's detect that we haven't sent
everything, and send an RST_STREAM(CANCEL) instead, which is designed
exactly for this purpose.
This patch could be backported to older versions, but only after a little bit
of exposure to make sure it doesn't wake up a bad behavior somewhere.
It relies on the following previous commit:
"MINOR: mux-h2: make streams know if they need to send more data"
H2 streams do not even know if they are expected to send more data or
not, which is problematic when closing because we don't know if we're
closing too early or not. Let's start by adding a new stream flag
"H2_SF_MORE_HTX_DATA" to indicate this on the tx path.
Traces indicating "switching to XXX" generally apply before the transition
so that the current connection state is visible in the trace. SETTINGS1
was incorrect in this regard, with the trace being emitted after. Let's
fix this.
No need to backport this, as this is purely cosmetic.
When switching to H2_CS_FRAME_H, we do not want to present the previous
frame's state, flags, length etc in traces, or we risk confusing the
analysis, making the reader think that the header information presented
is related to the new frame header being analysed. A naive approach could
have consisted in simply relying on the current parser state (FRAME_H
being that state), but traces are emitted before switching the state,
so traces cannot rely on this.
This was initially addressed by commit 73db434f7 ("MINOR: h2/trace: report
the frame type when known") which used to set dsi to -1 when the connection
becomes idle again, but was accidentally broken by commit 5112a603d
("BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped
frames") which moved dsi after calling the trace function.
But in both cases there's a problem with this approach. If an RST or WU frame
cannot be uploaded due to a busy mux, and at the same time we complete
processing on a perfect end of frame with no single new frame header, we
can leave the demux loop with dsi=-1 and with RST or WU to be sent, and
these ones will be sent for stream ID -1. This is what was reported in
github issue #1830. This can be reproduced with a config chaining an h1->h2
proxy to an empty h2 frontend, and uploading a large body such as below:
$ (printf "POST / HTTP/1.1\r\nContent-length: 1000000000\r\n\r\n";
cat /dev/zero) | nc 0 4445 > /dev/null
This shows that we must never affect ->dsi which must always remain valid,
and instead we should set "something else". That something else could be
served by the demux frame type, but that one also needs to be preserved
for the RST_STREAM case. Instead, let's just add a connection flag to say
that the demuxing is in progress. This will be set once a new demux header
is set and reset after the end of a frame. This way the trace subsystem
can know that dft/dfl must not be displayed, without affecting the logic
relying on such values.
Given that the commits above are old and were backported to 1.8, this
new one also needs to be backported as far as 1.8.
Many thanks to David le Blanc (@systemmonkey42) for spotting, reporting,
capturing and analyzing this bug; his work made it possible to quickly spot
the problem.
Erwan Le Goas reported that chaining certain commands on the CLI would
systematically crash the process; for example, "show version; show sess".
This happened since the conversion of cli context to appctx->svcctx,
because if applet_reserve_svcctx() is called a first time for a tiny
context, it's allocated in-situ, and later a keyword that wants a
larger one will see that it's not null, will reuse it, and will
overwrite the end of the first one's context.
What is missing is a reset of the svcctx when looping back to
CLI_ST_GETREQ.
This needs to be backported to 2.6, and relies on previous commit
"MINOR: applet: add a function to reset the svcctx of an applet".
The CLI needs to reset the svcctx between commands, and there was nothing
done to handle this. Let's add appctx_reset_svcctx() to do that, it's the
closing equivalent of appctx_reserve_svcctx().
This will have to be backported to 2.6 as it will be used by a subsequent
patch to fix a bug.
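A minimal sketch of what such a helper may look like (the body is assumed;
the point is only that the pointer must be cleared between two commands):

    #include <haproxy/applet-t.h>  /* assumed home of struct appctx */

    /* release the service context of the applet so that the next keyword
     * reserves a fresh one instead of reusing a smaller stale one */
    static inline void appctx_reset_svcctx(struct appctx *appctx)
    {
            appctx->svcctx = NULL;
    }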
As specified by RFC 9114, multiple cookie headers must be concatenated
into a single entry before being passed to an HTTP/1.1 connection. To
implement this, reuse the same function as the one already used by the
HTTP/2 module.
This should answer the feature requested in github issue #1818.
As specified by RFC 7540, multiple cookie headers are merged into a single
entry before being passed to an HTTP/1.1 connection. This step is
implemented during headers parsing in the h2 module.
Extract this code into the generic http_htx module. This will allow it to
be quickly reused for the HTTP/3 implementation which has the same
requirement for cookie headers.
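A self-contained sketch of the merging rule itself (the helper name is
illustrative): the values of all cookie headers are joined with the "; "
delimiter into a single entry:

    #include <stdio.h>

    /* join {"a=1","b=2"} into "a=1; b=2"; returns the length or 0 if full */
    static size_t join_cookies(char *dst, size_t dstsz,
                               const char *const *vals, size_t nvals)
    {
            size_t len = 0, i;

            for (i = 0; i < nvals; i++) {
                    int n = snprintf(dst + len, dstsz - len, "%s%s",
                                     i ? "; " : "", vals[i]);
                    if (n < 0 || (size_t)n >= dstsz - len)
                            return 0;
                    len += (size_t)n;
            }
            return len;
    }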
MUX notification on TX has been adjusted recently: it will be notified
only when sending its own data, and not for example on retransmission by
the quic-conn layer. This is the subject of the patch:
b29a1dc2f4
BUG/MINOR: quic: do not notify MUX on frame retransmit
A new flag QUIC_FL_CONN_RETRANS_LOST_DATA has been introduced to
differentiate qc_send_app_pkts() invocations by the MUX from those made
directly by the quic-conn layer in quic_conn_app_io_cb(). However, this is
a first problem, as internal quic-conn layer usage is not limited to
retransmission, for example for NEW_CONNECTION_ID emission.
Another, more important problem is that send functions are also called
through quic_conn_io_cb() which has not been protected from MUX
notification. This could probably result in a crash when trying to notify
the MUX.
To fix both problems, quic-conn flagging has been inverted: when used
by the MUX, quic-conn is flagged with QUIC_FL_CONN_TX_MUX_CONTEXT. To
improve the API, the MUX must now use qc_send_mux() which ensures the flag
is set. qc_send_app_pkts() is now static and can only be used by the
quic-conn layer.
This must be backported wherever the previously mentioned patch is.
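For illustration, a self-contained analog of this inverted flagging (the
flag and function names come from the message above; the bodies are made up):

    #define FL_TX_MUX_CONTEXT 0x1u

    struct conn { unsigned int flags; };

    /* stands for the now-static qc_send_app_pkts() */
    static int send_app_pkts(struct conn *c)
    {
            /* only notify the MUX when it initiated the send */
            return (c->flags & FL_TX_MUX_CONTEXT) ? 1 : 0;
    }

    /* stands for qc_send_mux(): the only entry point left to the MUX */
    int send_mux(struct conn *c)
    {
            int ret;

            c->flags |= FL_TX_MUX_CONTEXT;
            ret = send_app_pkts(c);
            c->flags &= ~FL_TX_MUX_CONTEXT;
            return ret;
    }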
When duplicating frames in qc_dup_pkt_frms(), the ->pkt member was not
correctly initialized (it was copied from the original frame). This could not
have any impact because this member is initialized when the frame is added to
a packet. This was also the case for ->flags.
Also replace the pool_zalloc() call by a call to pool_alloc().
Must be backported to 2.6.
When using `option http-restrict-req-hdr-names delete`, HAProxy may
crash or delete the wrong header after receiving a request containing
multiple forbidden characters in a single header name; the exact behavior
depends on the
number of request headers, number of forbidden characters and position
of header containing them.
This patch fixes GitHub issue #1822.
Must be backported as far as 2.2 (buggy feature got included in 2.2.25,
2.4.18 and 2.5.8).
On STREAM emission, quic-conn notifies the MUX through a callback named
qcc_streams_sent_done(). This also happens on retransmission: in this
case offsets are examined and the notification is ignored if already seen.
However, this behavior has slightly changed since
e53b489826
BUG/MEDIUM: mux-quic: fix server chunked encoding response
Indeed, if the offset diff is null, the frame is now not ignored. This is to
support FIN notification with a final empty STREAM frame. A side-effect
of this is that if the last stream frame is retransmitted, it won't be
ignored in qcc_streams_sent_done().
In most cases, this side-effect is harmless as the qcs instance will soon be
freed after being closed. But if the qcs is still alive, this will cause a
BUG_ON crash as it is considered as locally closed.
This bug depends on delay conditions and seems to be extremely rare. But
it might be the reason for a crash seen on interop with the s2n client on
the http3 test case:
FATAL: bug condition "qcs->st == QC_SS_CLO" matched at src/mux_quic.c:372
call trace(16):
| 0x558228912b0d [b8 01 00 00 00 c6 00 00]: main-0x1c7878
| 0x558228917a70 [48 8b 55 d8 48 8b 45 e0]: qcc_streams_sent_done+0xcf/0x355
| 0x558228906ff1 [e9 29 05 00 00 48 8b 05]: main-0x1d3394
| 0x558228907cd9 [48 83 c4 10 85 c0 0f 85]: main-0x1d26ac
| 0x5582289089c1 [48 83 c4 50 85 c0 75 12]: main-0x1d19c4
| 0x5582288f8d2a [48 83 c4 40 48 89 45 a0]: main-0x1e165b
| 0x5582288fc4cc [89 45 b4 83 7d b4 ff 74]: qc_send_app_pkts+0xc6/0x1f0
| 0x5582288fd311 [85 c0 74 12 eb 01 90 48]: main-0x1dd074
| 0x558228b2e4c1 [48 c7 c0 d0 60 ff ff 64]: run_tasks_from_lists+0x4e6/0x98e
| 0x558228b2f13f [8b 55 80 29 c2 89 d0 89]: process_runnable_tasks+0x7d6/0x84c
| 0x558228ad9aa9 [8b 05 75 16 4b 00 83 f8]: run_poll_loop+0x80/0x48c
| 0x558228ada12f [48 8b 05 aa c5 20 00 48]: main-0x256
| 0x7ff01ed2e609 [64 48 89 04 25 30 06 00]: libpthread:+0x8609
| 0x7ff01e8ca163 [48 89 c7 b8 3c 00 00 00]: libc:clone+0x43/0x5e
To reproduce it locally, the code was artificially patched to produce
retransmissions and avoid qcs release.
In order to fix this and avoid future classes of similar problems, the best
way is to not call qcc_streams_sent_done() to notify the MUX for
retransmission. To implement this, we test if any of
QUIC_FL_CONN_RETRANS_OLD_DATA or the new flag
QUIC_FL_CONN_RETRANS_LOST_DATA is set. A new wrapper
qc_send_app_retransmit() has been added to set the new flag as a
complement to the already existing qc_send_app_probing().
This must be backported up to 2.6.
Adjust the qc_send_app_pkts() function: remove the <old_data> arg and provide
a new wrapper function qc_send_app_probing() which should be used instead
when probing with old data.
This simplifies the interface of the default function, most notably for
the MUX which does not interfere with retransmission.
The QUIC_FL_CONN_RETRANS_OLD_DATA flag is set/unset directly in the wrapper
qc_send_app_probing().
At the same time, the function documentation has been updated to clarify
arguments and return values.
This commit will be useful for the next patch to differentiate MUX and
retransmission send context. As a consequence, the current patch should
be backported wherever the next one will be.
Complete some MUX traces by adding qcc or qcs instance as arguments when
this is possible. This will be useful when several connections are
interleaved.
Emit STREAM_LIMIT_ERROR if a client tries to open an unidirectional
stream with an ID greater than the value specified by our flow-control
limit. The code is similar to the bidirectional stream opening.
MAX_STREAMS_UNI emission is not implemented for the moment and is left as
a TODO. This should not be too urgent: in HTTP/3, a
client has only a limited use for unidirectional streams (H3 control
stream + 2 QPACK streams). This is covered by the value provided by
haproxy in transport parameters.
This patch has been tagged as a BUG because it should have prevented the
last crash reported in github issue #1808 when opening a new unidirectional
stream with an invalid ID. However, it is probably not the main cause
of the bug, contrary to the patch
commit 11a6f4007b
BUG/MINOR: quic: Wrong status returned by qc_pkt_decrypt()
This must be backported up to 2.6.
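For reference, a self-contained sketch of such a check (names are
illustrative): the two low bits of a QUIC stream ID encode its type, and
client-initiated unidirectional streams use type 0x2, so the n-th one
carries ID 4*(n-1) + 2:

    #include <stdint.h>

    #define QCS_CLT_UNI 0x2

    /* return 0 if this client uni stream ID exceeds our advertised limit,
     * in which case a STREAM_LIMIT_ERROR must be emitted */
    static int uni_stream_id_allowed(uint64_t id, uint64_t max_streams_uni)
    {
            if ((id & 0x3) != QCS_CLT_UNI)
                    return 1; /* not a client uni stream: out of scope */
            return (id >> 2) < max_streams_uni;
    }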
As specified by RFC 9204, encoder and decoder streams must not be
closed. If the peer behaves incorrectly and closes one of them, emit a
H3_CLOSED_CRITICAL_STREAM connection error.
To implement this, the QPACK stream decoding API has been slightly adjusted.
Firstly, a fin parameter is passed to notify about the FIN STREAM bit.
Secondly, the qcs instance is passed via the unused void* context. This
allows the use of the qcc_emit_cc_app() function to report a
CONNECTION_CLOSE error.
As specified by RFC 9114 the control stream must not be closed. If the
peer behaves incorrectly and closes it, emit a H3_CLOSED_CRITICAL_STREAM
connection error.
Replace a plain '=' operator by '|=' when setting quic_frame
QUIC_FL_TX_FRAME_LOST flag.
For the moment, this change has no impact as only two exclusive flags
are defined for quic_frame. On the edited code path we are certain that
QUIC_FL_TX_FRAME_ACKED is not set due to a previous if statement, so a
plain assignment or a binary OR is strictly identical.
This change will be useful if new flags are defined for quic_frame in
the future. These new flags won't be reset automatically by the
binary OR without this being explicitly intended, which otherwise could
easily lead to new bugs.
table_expire() returns the expiration delay for a stick-table entry associated
to an input sample. Its counterpart table_idle() returns the time the entry
remained idle since the last time it was updated.
Both converters may take a default value as second argument which is returned
when the entry is not present.
This function is responsible for all calls to pool_alloc(trash), whose
total size can be huge. As such it's quite a pain that it doesn't provide
more hints about its users. However, since the function is tiny, it fully
makes sense to inline it, the code is less than 0.1% larger with this.
This way we can now detect where the callers are via "show profiling",
e.g.:
0 1953671 0 32071463136| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
0 1 0 16416| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
1953672 0 32071479552 0| 0x599561 main+0x1066c1 p_alloc(16416) [pool=trash]
0 976835 0 16035723360| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
0 1 0 16416| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
976835 0 16035723360 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
1 0 16416 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
Storing the pointer to the pool along with the stats is quite useful as
it allows the name to be reported. That's what we're doing here. We could
store it in place of another field but that's not convenient as it would
require to change all functions that manipulate counters. Thus here we
store one extra field, as well as some padding because the struct turns
56 bytes long, thus better go to 64 directly. Example of output from
"show profiling memory":
2 0 48 0| 0x4bfb2c ha_quic_set_encryption_secrets+0xcc/0xb5e p_alloc(24) [pool=quic_tls_iv]
0 55252 0 10608384| 0x4bed32 main+0x2beb2 free(-192)
15 0 2760 0| 0x4be855 main+0x2b9d5 p_alloc(184) [pool=quic_frame]
1 0 1048 0| 0x4be266 ha_quic_add_handshake_data+0x2b6/0x66d p_alloc(1048) [pool=quic_crypto]
3 0 552 0| 0x4be142 ha_quic_add_handshake_data+0x192/0x66d p_alloc(184) [pool=quic_frame]
31276 0 6755616 0| 0x4bb8f9 quic_sock_fd_iocb+0x689/0x69b p_alloc(216) [pool=quic_dgram]
0 31424 0 6787584| 0x4bb7f3 quic_sock_fd_iocb+0x583/0x69b p_free(-216) [pool=quic_dgram]
152 0 32832 0| 0x4bb4d9 quic_sock_fd_iocb+0x269/0x69b p_alloc(216) [pool=quic_dgram]
Pools are being used so well that it becomes difficult to profile their
usage via the regular memory profiling. Let's add new entries for pools
there, named "p_alloc" and "p_free" that correspond to pool_alloc() and
pool_free(). Ideally it would be nice to only report those that fail
cache lookups but that's complicated, particularly on the free() path
since free lists are released in clusters to the shared pools.
It's worth noting that the alloc_tot/free_tot fields can easily be
determined by multiplying alloc_calls/free_calls by the pool's size, and
could be better used to store a pointer to the pool itself. However it
would require significant changes down the code that sorts output.
If this were to cause a measurable slowdown, an alternate approach could
consist in using a different value of USE_MEMORY_PROFILING to enable pools
profiling. Also, this profiler doesn't depend on intercepting regular malloc
functions, so we could also imagine enabling it alone or the other one alone
or both.
Tests show that the CPU overhead on QUIC (which is already an extremely
intensive user of pools) jumps from ~7% to ~10%. This is quite acceptable
in most deployments.
Right now it's not possible to feed memory profiling info from outside
activity.c, so let's export the function and move the enum and struct
to the include file.
This bug came with this big commit:
"MEDIUM: quic: xprt traces rework"
It is the <ret> variable value which must be returned by most of the xprt
functions. This bug led packets which could not be decrypted to be parsed
anyway, with weird frames encountered, as found by Tristan in GH #1808.
To be backported where the commit above was backported.
When building an ack-eliciting-frame-only packet, if we did not manage to add
at least one such frame to the packet, we did not notify the caller
that the packet was empty. This could lead the caller to believe
everything was OK and make it endlessly try to build packets again and again.
This issue was amplified by the recent changes where a while(1) loop has been
added to qc_send_app_pkts() which calls qc_do_build_pkt() through
qc_prep_app_pkts() until we cannot prepare packets anymore. Before this
recent change, I guess only one empty packet was sent.
This patch checks that non-empty packets could be built by qc_do_build_pkt()
and makes this function return an error when this was not the case. Also note
that such an issue could happen only when the packet building was limited by
the congestion control.
Thank you to Tristan for having reported this issue in GH #1808.
Must be backported to 2.6.
qc_detach() is used to free a qcs as notified by sedesc. If there is no
more active stream and the connection is considered dead, it will
then be freed. After this, qcc must not be dereferenced in TRACE macros,
else this will cause a crash.
Use a different code-path on release for qc_detach() to fix this bug.
This should fix the last occurrence of the crash in github issue #1808.
This has been introduced by the recent QUIC MUX traces rework. Thus, it does
not need to be backported.
In order to ensure that an instant restart of the process will not wipe
precious debugging information, and to leave time for an admin to archive
a copy of a ring, now upon startup, any previously existing file will be
renamed with the extra suffix ".bak", and any previously existing file
with suffix ".bak" will be removed.
The build broke on freebsd with S_IRUSR undefined after commit 0b8e9ceb1
("MINOR: ring: add support for a backing-file"). Maybe another include
is needed there, but the point is that we really don't care about these
symbolic names, file modes are more readable as 0600 than via these
cryptic names anyway, so let's go back to 0600. This will teach me not
to try to make things too clean.
No backport is needed.
This mmaps a file which will serve as the backing-store for the ring's
contents. The idea is to provide a way to retrieve sensitive information
(last logs, debugging traces) even after the process stops and even after
a possible crash. Right now this was possible by connecting to the CLI
and dumping the contents of the ring live, but this is not handy and
consumes quite a bit of resources before it is needed.
With a backing file, the ring is effectively a RAM-mapped file, so that
contents stored there are the same as those found in the file (the OS
doesn't guarantee immediate sync but if the process dies it will be OK).
Note that doing that on a filesystem backed by a physical device is a
bad idea, as it will induce slowdowns at high loads. It's really
important that the device is RAM-based.
Also, this may have security implications: if the file is corrupted by
another process, the storage area could be corrupted, causing haproxy
to crash or to overwrite its own memory. As such this should only be
used for debugging.
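A simplified, self-contained sketch of such a backing-store setup (error
handling trimmed, names illustrative):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* map <path> so that ring writes transparently land in the file */
    static void *map_ring_file(const char *path, size_t size)
    {
            void *area;
            int fd = open(path, O_RDWR | O_CREAT, 0600);

            if (fd < 0)
                    return NULL;
            if (ftruncate(fd, (off_t)size) < 0) {
                    close(fd);
                    return NULL;
            }
            area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
            close(fd); /* the mapping keeps the file referenced */
            return area == MAP_FAILED ? NULL : area;
    }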
Instead of allocating two parts, one for the ring struct itself and
one for the storage area, ring_make_from_area() will arrange the two
inside the same memory area, with the storage starting immediately
after the struct. This will allow a complete ring state to be stored in
shared memory areas, for example.
This commit was not complete:
"BUG/MEDIUM: quic: Possible use of uninitialized <odcid>
variable in qc_lstnr_params_init()"
<token_odcid> should have been directly passed to qc_lstnr_params_init()
without being dereferenced, to prevent haproxy from having new chances to crash!
Must be backported to 2.6.
It took me a while to figure why a ring declared with "size 1M" was causing
strange effects in a ring, it's just because it's parsed as "1", which is
smaller than the default 16384 size and errors are silently ignored.
This commit tries to address this the best possible way without breaking
existing configs that would work by accident, by warning that the size is
ignored if it's smaller than the current one, and by printing the parsed
size instead of the input string in warnings and errors. This way if some
users have "size 10000" or "size 100k" it will continue to work as 16kB
like today but they will now be aware of it.
In addition the error messages were a bit poor in context in that they
only provided line numbers. The ring name was added to ease locating the
problem.
As the issue was present since day one and was introduced in 2.2 with
commit 99c453df9d ("MEDIUM: ring: new section ring to declare custom ring
buffers."), it could make sense to backport this as far as 2.2, but with
2.2 being quite old now it doesn't seem very reasonable to start emitting
new config warnings in config that apparently worked well.
Thus it looks more reasonable to backport this as far as 2.4.
When receiving a token into a client Initial packet without a cluster secret defined
by configuration, the <odcid> variable used to parse the ODCID from the token
could be used without having been initialized. Such a packet must be dropped. So
the sufficient part of this patch is this check:
+ }
+ else if (!global.cluster_secret && token_len) {
+ /* Impossible case: a token was received without configured
+ * cluster secret.
+ */
+ TRACE_PROTO("Packet dropped", QUIC_EV_CONN_LPKT,
+ NULL, NULL, NULL, qv);
+ goto drop;
}
Take the opportunity of this patch to rework this part of the code where such
a packet must be dropped, making it more readable and removing the
<check_token> variable.
When an ODCID is parsed from a token, a new <token_odcid> pointer variable
is set to the address of the parsed ODCID. This way, if it is used without
having been set, it will make haproxy crash. This was not always the case
with an uninitialized local variable.
Adapt the API to use such a pointer variable: the <token> boolean variable
is removed from the qc_lstnr_params_init() prototype.
This must be backported to 2.6.
Trace arguments were incorrectly used in qcs_free(). A qcs was specified
as the first argument instead of a connection. This would lead to a crash if
developer qmux traces were activated. This is now fixed.
This bug has been introduced with the QUIC MUX traces rework. No need to
backport.
Add new traces to help debugging on QUIC MUX. Most notable, the
following functions are now traced :
* qcc_emit_cc
* qcs_free
* qcs_consume
* qcc_decode_qcs
* qcc_emit_cc_app
* qcc_install_app_ops
* qcc_release_remote_stream
* qcc_streams_sent_done
* qc_init
Change default devel level for some traces in QUIC MUX:
* proto : used to notify about reception/emission of frames
* state : modification of internal state of connection or streams
* data : detailed information about transfers and flow-control
Replace devel traces with the error level in all error situations. Also a
new event QMUX_EV_PROTO_ERR is used. This should help to detect invalid
situations quickly.
This loop is due to the fact that we do not select the next node before
the conditional "continue" statement. Furthermore the condition and the
"continue" statement may be removed after replacing the eb64_first() call
by eb64_lookup_ge(): we are sure this condition cannot be satisfied.
Add some comments: this function initializes connection IDs with sequence
numbers from 1 up to <max> excluded.
Take the opportunity of this patch to remove a "return" which broke this
traces rule: for any function, do not call TRACE_ENTER() without TRACE_LEAVE()!
Also add TRACE_ERROR() for any encountered errors.
Must be backported to 2.6
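A short sketch assuming the ebtree API bundled with haproxy: starting the
walk at the first key >= 1 removes the need to skip sequence number 0 with
a conditional continue:

    #include <import/eb64tree.h>

    static void visit_from_seq_1(struct eb64_root *root)
    {
            struct eb64_node *node;

            /* was: node = eb64_first(root); plus a continue on key == 0 */
            for (node = eb64_lookup_ge(root, 1); node;
                 node = eb64_next(node)) {
                    /* handle the connection ID attached to this node */
            }
    }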
This lock was there to be able to handle the RX packets for a connection
from several threads. This is no longer needed since a QUIC connection
is always handled by the same thread.
May be backported to 2.6
gcc-6.x and 7.x emit build warnings about sc possibly being null upon
return from sc_detach_endp(). This actually is not the case and the
compiler is a little bit overzealous there, but there exists code
paths that can make this analysis non-trivial, so let's at least add
a similar BUG_ON() to let both the compiler and the developer know
this doesn't happen.
This should be backported to 2.6.
Add as many TRACE_ENTER() and TRACE_LEAVE() calls as possible
to every function. Note that some functions do not have any access to
a quic_conn argument when receiving or parsing a datagram at very low level.
The poller pipes needed to communicate between multiple threads are
allocated in init_pollers_per_thread() and released in
deinit_pollers_per_thread(). The former adds them via fd_insert()
so that they are known, but the latter only closes them using a
regular close().
This asymmetry represents a problem, because we have in the fdtab[]
an entry for something that may disappear when one thread leaves, and
since these FD numbers are very low, there is a very high likelihood
that they are immediately reassigned to another thread trying to
connect() to a server or just sending a health check. In this case,
the other thread is going to fd_insert() the fd and the recently
added consistency checks will notice that ->owner is not NULL and
will crash. We just need to use fd_delete() here to match fd_insert().
Note that this test was added in 2.7-dev2 by commit 36d9097cf
("MINOR: fd: Add BUG_ON checks on fd_insert()") which was backported
to 2.4 as a safety measure (since it allowed to catch particularly
serious issues). The patch in itself isn't wrong, it just revealed
a long-dormant bug (been there since 1.9-dev1, 4 years ago). As such
the current patch needs to be backported wherever the commit above
is backported.
Many thanks to Christian Ruppert for providing detailed traces in
github issue #1807 and Cedric Paillet for bringing his complementary
analysis that helped to understand the required conditions for this
issue to happen (fast health checks @100ms + randomly long connections
~7s + fast reloads every second + hard-stop-after 5s were necessary
on the dev's machine to trigger it from time to time).
Fred managed to reproduce a crash showing a corrupted accept_list when
firing thousands of concurrent picoquicdemo clients to a same instance.
It may happen if the connection was placed into the accept_list and
immediately closed before being processed (e.g. on error or t/o ?).
In any case the quic_conn_release() function should always detach a
connection to be deleted from any list, like it does for other lists,
so let's add an MT_LIST_DELETE() here.
This should be backported to 2.6.
When arriving at handshake completion, the next encryption level will be
null in quic_conn_io_cb(). Thus it must be checked before
dereferencing it via qc_need_sending() to prevent a crash.
This was reproduced quickly when browsing over a local nextcloud
instance through QUIC with firefox.
This has been introduced in the current dev with quic-conn Tx
refactoring. No need to backport it.
Consider a stream as opened when receiving a STOP_SENDING frame as the
first frame on the stream.
This patch is tagged as BUG because a BUG_ON may occur if only a
STOP_SENDING frame has been received for a stream. This will reset the
stream in accordance with RFC 9000 but internally it is considered an
invalid transition to reset an idle stream.
To fix this, simply use qcs_idle_open() in the STOP_SENDING parsing
function. This will mark the stream as OPEN before resetting it.
This was detected on haproxy.org with the following backtrace:
FATAL: bug condition "qcs->st == QC_SS_IDLE" matched at
src/mux_quic.c:383
call trace(12):
| 0x490dd3 [b8 01 00 00 00 c6 00 00]: main-0x1d0633
| 0x4975b8 [48 8b 85 58 ff ff ff 8b]: main-0x1c9e4e
| 0x497df4 [48 8b 45 c8 48 89 c7 e8]: main-0x1c9612
| 0x49934c [48 8b 45 c8 48 89 c7 e8]: main-0x1c80ba
| 0x6b3475 [48 8b 05 54 1b 3a 00 64]: run_tasks_from_lists+0x45d/0x8b2
| 0x6b4093 [29 c3 89 d8 89 45 d0 83]: process_runnable_tasks+0x7c9/0x824
| 0x660bde [8b 05 fc b3 4f 00 83 f8]: run_poll_loop+0x74/0x430
| 0x6611de [48 8b 05 7b a6 40 00 48]: main-0x228
| 0x7f66e4fb2ea5 [64 48 89 04 25 30 06 00]: libpthread:+0x7ea5
| 0x7f66e455ab0d [48 89 c7 e8 5b 72 fc ff]: libc:clone+0x6d/0x86
Stream states have been implemented in the current dev tree. Thus, this
patch does not need to be backported.
Check in quic_conn_io_cb() if sending is required. This allows the Tx
buffer allocation to be skipped when not needed.
To implement this, we check if the frame lists on the current and next
encryption levels are empty. We also check that there is no need to
send an ACK, PROBE or CONNECTION_CLOSE. This has been isolated in a new
function qc_need_sending() which may be reused in some other functions in
the future.
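A self-contained analog of what such a check may look like (all field names
are assumed, not the real quic_conn layout):

    struct enc_lvl { int ack_required; int has_frames; };
    struct qconn   { int probe; int closing; };

    /* emission needed: something to ack, probe, close, or frames queued
     * on the current or next encryption level */
    static int need_sending(const struct qconn *qc,
                            const struct enc_lvl *cur,
                            const struct enc_lvl *next)
    {
            if (qc->probe || qc->closing)
                    return 1;
            if (cur->ack_required || cur->has_frames)
                    return 1;
            /* next level may be NULL at handshake completion (see above) */
            return next && (next->ack_required || next->has_frames);
    }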
This is the final patch on quic-conn Tx refactor. Extend the function
which is used to write a datagram header to save at the same time
written buffer data. This makes sense as the two operations are used at
the same occasion when a pre-written datagram is committed.
Complete refactor of quic-conn Tx buffer. The buffer is now released
on every send operation completion. This should help to reduce memory
footprint as now Tx buffers are allocated and released on demand.
To simplify allocation/free of quic-conn Tx buffer, two static functions
are created named qc_txb_alloc() and qc_txb_release().
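A sketch assuming haproxy's dynamic buffer API (<haproxy/dynbuf.h>); the two
helpers reduce the Tx buffer lifetime to the duration of a send operation:

    #include <haproxy/dynbuf.h>  /* assumed home of b_alloc()/b_free() */

    static struct buffer *txb_alloc_sketch(struct buffer *txb)
    {
            /* takes an area from the buffer pool only when about to send */
            return b_alloc(txb);
    }

    static void txb_release_sketch(struct buffer *txb)
    {
            /* returns the area to the pool; the struct itself remains */
            b_free(txb);
    }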
On first prototype version of QUIC, emission was multithreaded. To
support this, a custom thread-safe ring-buffer has been implemented with
qring/cbuf.
Now the thread model has been adjusted: a quic-conn is always used on
the same thread and emission is not multi-threaded. Thus, qring/cbuf
usage can be replaced by a standard struct buffer.
The code has been simplified even more as for now the buffer is always
drained after a prepare/send invocation. This is the case since a
datagram is always considered as sent even on sendto() error. BUG_ON
statement guards are here to ensure that this model always holds.
Thus, the code to handle data wrapping and consume a too small contiguous
space with a 0-length datagram is removed.
qc_send_app_pkts() now has a while loop implemented which allows all
possible frames to be sent even if the send buffer is full between packet
preparation and send. This is present since commit:
dc07751ed7
MINOR: quic: Send packets as much as possible from qc_send_app_pkts()
This means we can remove code from the MUX which implement this at the
upper layer. This is useful to simplify qc_send_frames() function.
As the mentioned commit is subject to backport, this commit should be
backported as well to 2.6.
The first column's width may vary a lot depending on outputs, and it's
annoying to have large empty columns on small names and mangled large
columns that are not yet large enough. In order to overcome this, this
patch adds a width field to the memstats applet's context, and this
width is calculated the first time the function is entered, by estimating
the width of all lines that will be dumped. This is simple enough and
does the job well. If in the future some filtering criteria are added,
it will still be possible to perform a single pass on everything
depending on the desired output format.
The calling function name is now stored in the structure, and it's
reported when the "all" argument is passed. The first column is
significantly enlarged because some names are really wide :-(
These fake datagrams are only used by the low-level I/O handler. They
are not provided to the "by connection" datagram handlers. This
is why they are not MT_LIST_APPEND()ed to the listener RX buffer list
(see &quic_dghdlrs[cid_tid].dgrams in quic_lstnr_dgram_dispatch()).
Replace the call to pool_zalloc() by the lighter call to pool_malloc()
and initialize only the ->buf and ->length members. This is safe because
only these fields are inspected by the low-level I/O handler.
After removing the packet header protection, we can check that the packet
is long enough to contain a 16-byte AEAD TAG (at the end of the packet).
This test was missing.
Must be backported to 2.6.
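The check itself is straightforward; a hedged sketch (the constant name is
assumed):

    #include <stddef.h>

    #define QUIC_TLS_TAG_LEN 16

    /* the ciphertext must at least carry the AEAD tag after the header */
    static int pkt_long_enough(size_t pkt_len, size_t header_len)
    {
            return pkt_len >= header_len + QUIC_TLS_TAG_LEN;
    }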
These traces about the available room in the packet currently built and
its payload length could be displayed for each STREAM frame, even for
those which have no chance to be embedded into a packet, leading to
a very large number of traces being displayed for a connection with a lot
of streams.
This was revealed by traces provided by Tristan in GH #1808.
May be backported to 2.6.
When entering this function, we first check that the packet length is not
too short. But this was done against the datagram length in place of the
packet length. This could lead the header protection to be removed using
data past the end of the packet (without buffer overflow).
Use the packet length in place of the datagram length which is at the <end>
address passed as parameter to this function. As the packet length
is already stored in the ->len packet struct member, this <end> parameter
is no longer useful.
Must be backported to 2.6.
_GNU_SOURCE used to be defined only when USE_LIBCRYPT was set. It's also
needed for sched_setaffinity() to be exported. As a side effect, when
USE_LIBCRYPT is not set, a warning is emitted, as Ilya found and reported
in issue #1815. Let's just define _GNU_SOURCE regardless of USE_LIBCRYPT,
and also explicitly add sched.h, as right now it appears to be inherited
from one of the other includes.
This should be backported to 2.4.
DH parameters of length 1024 were chosen for the EVP_PKEY_EC key type.
Let us pick "default_dh_param" instead.
The issue was found on Ubuntu 22.04 which is shipped with OpenSSL configured
with SECLEVEL=2 by default. Such a SECLEVEL value prohibits DH shorter than
2048 bits:
OpenSSL error[0xa00018a] SSL_CTX_set0_tmp_dh_pkey: dh key too small
A better strategy for choosing DH parameters may still be considered, though.
The function processes packets sent by other threads in the current
thread's queue. But if, for any reason, other threads write faster
than the current one processes, this can lead to a situation where
the function never returns.
It seems that it might be what's happening in issue #1808, though
unfortunately, this function is one of the rare ones without traces. But
the amount of calls to functions like qc_lstnr_pkt_rcv() on a single
thread seems to indicate this possibility.
Thanks to Tristan for his efforts in collecting extremely precious
traces!
This likely needs to be backported to 2.6.
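One classic way out of such a livelock, shown here as a self-contained
sketch and not as the actual fix, is to bound the work done per wakeup and
reschedule for the remainder:

    struct queue { int pending; };

    static int pop_one(struct queue *q)
    {
            return q->pending ? (q->pending--, 1) : 0;
    }

    /* drain at most <budget> packets; non-zero means "wake me up again" */
    static int drain_budgeted(struct queue *q, int budget)
    {
            while (budget-- > 0 && pop_one(q))
                    ; /* process one packet here */
            return q->pending;
    }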
qc_snd_buf() returned a size_t, which means that it was never negative
despite its documentation. Thus the caller who checked for this was
never informed of a sendto error.
Clean this up by changing the return value of qc_snd_buf() to an integer.
0 is returned on success. Any other value is considered an error.
This commit should be backported up to 2.6. Note that to not cause
malfunctions, it must be backported after the previous patch :
906b058954
MINOR: quic: explicitely ignore sendto error
This is to ensure that a sendto error does not cause send to be
interrupted which may cause a stalled transfer without a proper retry
mechanism.
The impact of this bug seems null as the caller explicitly ignores sendto
errors. However this part of the code seems to be subject to strange issues
and this may fix them in part. It may be of interest for github issue #1808.
qc_snd_buf() returns an error if sendto has failed. On standard
conditions, we should check for EAGAIN/EWOULDBLOCK errno and if so,
register the file-descriptor in the poller to retry the operation later.
However, quic_conn directly uses the listener fd which is shared by all
QUIC connections of this listener on several threads. Thus, it's
complicated to implement fd supervision via the poller: there is no
mechanism to easily wake up a quic_conn or MUX after a sendto failure.
A quick and simple solution for the moment is to consider a datagram
as properly emitted even on sendto error. In the end, this will trigger
the quic_conn retransmission timer as data will be considered lost on
the network and the send operation will be retried. This solution will
be replaced when fd management for quic_conn is reworked.
In fact, this quick hack was already in use in the current code, albeit
not voluntarily. This is due to a bug caused by an API mismatch on the
return type of qc_snd_buf() which never emits a negative error code
despite its documentation. Thus, all its invocations were considered a
success. If this bug were fixed, the sending would have been
interrupted by a break which could cause the transfer to freeze.
The qc_snd_buf() invocation is cleaned up: the break statement is removed.
The send operation is now always explicitly conducted entirely even on
error and buffer data is purged.
A simple optimization has been added to skip over sendto when looping
over several datagrams at the first sendto error. However, to properly
function, it requires a fix on the return type of qc_snd_buf() which is
provided in another patch.
As the behavior before and after this patch seems identical, it is not
labelled as a BUG. However, it should be backported for cleaning
purposes. It may also have an impact on github issue #1808.
The dgram length check in quic_get_dgram_dcid() rejects datagrams
matching exactly the minimum allowed length, which doesn't seem
correct. I doubt any useful packet would be that small but better
fix this to avoid confusing debugging sessions in the future.
This might be backported to 2.6.
This is the same issue as just fixed in b8e0fb97f ("BUG/MINOR: ring/cli:
fix a race condition between the writer and the reader") but this time
for sinks. They're also sucking the ring and present the same race at
high write loads.
This must be backported to 2.2 as well. See comments in the aforementioned
commit for backport hints if needed.
A reference to the sink was added in every forwarder by the commit 2ae25ea24
("MINOR: sink: Add a ref to sink in the sink_forward_target structure"). But
this commit is incomplete. It is not performed for the forwarders created
during a ring parsing.
This patch must be backported to 2.6.
The ring's CLI reader unlocks the read side of a ring and relocks it for
writing only if it needs to re-subscribe. But during this time, the writer
might have pushed data, see nobody subscribed hence woken nobody, while
the reader would have left marking that the applet had no more data. This
results in a dump that will not make any forward progress: the ring is
clogged by this reader which believes there's no data and the writer
will never wake it up.
The right approach consists in verifying after re-attaching if the writer
had made any progress in between, and to report that another call is
needed. Note that a jump back to the beginning would also work but here
we provide better fairness between readers this way.
This needs to be backported to 2.2. The applet API needed to signal the
availability of new data changed a few times since then.
There is a remaining loop in this ugly qc_snd_buf() function which could
lead haproxy to send truncated UDP datagrams. From now on, we send
a complete UDP datagram or nothing!
Must be backported to 2.6.
Implement http-request timeout for QUIC MUX. It is used when the
connection is opened and is triggered if no HTTP request is received in
time. By HTTP request we mean at least a QUIC stream with a full header
section. The qcs instance is then attached to a sedesc and the upper layer
is responsible for waiting for the rest of the request.
This timeout is also used when new QUIC streams are opened during the
connection lifetime to wait for a full HTTP request on them. As it's
possible to demux multiple streams in parallel with QUIC, each waiting
stream is registered in a list <opening_list> stored in qcc with <start>
as timestamp in qcs for the stream opening. Once a qcs is attached to a
sedesc, it is removed from <opening_list>. When refreshing MUX timeout,
if <opening_list> is not empty, the first waiting stream is used to set
MUX timeout.
This is efficient as streams are stored in the list in their creation
order so CPU usage is minimal. Also, the size of the list is
automatically restricted by flow control limitation so it should not
grow too much.
Streams are inserted in <opening_list> by the application protocol layer.
This is because only the application protocol can differentiate streams
used for HTTP messaging from those for internal usage. A function
qcs_wait_http_req() has been added to register a request stream from the
app layer. QUIC MUX can then
remove it from the list in qc_attach_sc().
As a side-note, it was necessary to implement the attach qcc_app_ops
callback on the hq-interop module to be able to insert a stream in the
waiting list. Without this, a BUG_ON statement would be triggered when
trying to remove the stream on sedesc attach. This is to ensure that every
request stream is registered for the http-request timeout.
The MUX timeout is explicitly refreshed on MAX_STREAM_DATA and STOP_SENDING
frame parsing to schedule the http-request timeout if a new stream has been
instantiated. It was already done on STREAM parsing due to a previous
patch.
Try to reorganize qcc_refresh_timeout() to improve its readability. The
main objective is to reduce the indentation level and the if sequences by
using goto statements to the end of the function. Also, the backend and
frontend code paths should be more explicit with this new version.
Refresh the MUX connection timeout in frame parsing functions. This is
necessary as these Rx operations are completed directly from the
quic-conn layer outside of the MUX I/O callback. Thus, the timeout should be
refreshed on this occasion.
Note however that on STREAM parsing the refresh is only conducted when
receiving the current consecutive data offset.
Timeout-related functions have been moved up in the source file to be
able to use them in qcc_decode_qcs().
This commit will be useful for http-request timeout. Indeed, a new
stream may be opened during qcc_decode_qcs() which should trigger this
timeout until a full header section is received and qcs instance is
attached to sedesc.
Store the current step of the HTTP message in the h3s stream. This reports
whether we are parsing the headers, content or trailers section. A new enum
h3s_st_req is defined for this.
This field is stored in the h3s struct but only used for request streams.
It is left undefined for other streams (control or QPACK streams).
h3_is_frame_valid() has been extended to take this state information into
account. A connection error H3_FRAME_UNEXPECTED is reported if a frame
invalid according to the current state is received; for example a
DATA frame at the beginning of a stream.
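An illustrative sketch of such a state (the enum name comes from the
message above, the values are assumed):

    enum h3s_st_req {
            H3S_ST_REQ_BEFORE = 0, /* nothing parsed yet on this stream */
            H3S_ST_REQ_HEADERS,    /* header section received */
            H3S_ST_REQ_DATA,       /* content being received */
            H3S_ST_REQ_TRAILERS,   /* trailer section being received */
    };

    /* e.g. a DATA frame received while still in H3S_ST_REQ_BEFORE yields
     * the H3_FRAME_UNEXPECTED connection error */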
It is illegal to call my_flsl() with 0 as parameter value. It is undefined
behavior. This led cubic_root() to divide values by 0 at this line:
x = 2 * x + (uint32_t)(val / ((uint64_t)x * (uint64_t)(x - 1)));
Thank you to Tristan971 for having reported this issue in GH #1808
and Willy for having spotted the root cause of this bug.
Must follow any cubic for QUIC backport (2.6).
The decrement was missing in quic_pktns_tx_pkts_release(), called each time
a packet number space is discarded. It is not sure whether this bug could
have an impact during handshakes. This counter is used to cancel the timer
used both for packet loss detection and PTO, setting its value to null. So
there could be retransmissions or probing which could be triggered for
nothing.
Must be backported to 2.6.
When a proxy is initialized with the settings of the default proxy, instead
of doing a raw copy of the default server settings, a custom copy is now
performed by calling srv_settings_cpy(). This way, all settings will be
really duplicated. Without this deep copy, some pointers are shared between
several servers, leading to UAF, double-free or such bugs.
This patch relies on following commits:
* b32cb9b51 REORG: server: Export srv_settings_cpy() function
* 0b365e3cb MINOR: server: Constify source server to copy its settings
This patch should fix the issue #1804. It must be backported as far as 2.0.
The connection retry counter is incremented too early when a connection
fails. In SC_ST_CER state, errors handling must be performed before
incrementing the counter. Otherwise, we may consider the max connection
attempt is reached while a last one is in fact possible.
This patch must be backported to 2.6.
When a new DNS session is created, all its fields are not properly
initialized. For instance, "tx_msg_offset" can have any value after the
allocation. So, to fix the bug, pool_zalloc() is now used to allocate new
DNS session.
This patch should fix the issue #1781. It must be backported as far as 2.4.
When a peer opens a new connection to another peer, it is considered as
connected when the hello message is sent. To do so, the peer applet was
relying on the CF_WRITE_PARTIAL channel flag. However it is not the right
flag to use, as it is a transient flag. Depending on the scheduling, this
flag may be removed by the stream before the peer has a chance to see
it. Instead, the CF_WROTE_DATA flag must be checked.
This patch is related to the issue #1799. It must be backported as far as
2.0.
When peers are configured and HAProxy is reloaded or restarted, a
synchronization is performed between the old process and the new one. To do
so, the old process connects on the new one. If the synchronization fails,
it retries. However, there is no delay and reconnect attempts are not
bounded. Thus, it may loop for a while, consuming all the CPU. Of course, it
is unexpected, but it is possible. For instance, if the local peer is
misconfigured, an infinite loop can be observed if the connection succeeds
but not the synchronization. This prevents the old process to exit, except
if "hard-stop-after" option is set.
To fix the bug, the reconnect is delayed. The local peer already has a
expiration date to delay the reconnects. But it was not used on stopping
mode. So we use it not. Thanks to the previous fix, the reconnect timeout is
shorter in this case (500ms against 5s on running mode). In addition, we
also use the peers resync expiration date to not infinitely retries. It is
accurate because the new process, on its side, use this timeout to switch
from a local resync to a remote resync.
This patch depends on "MINOR: peers: Use a dedicated reconnect timeout when
stopping the local peer". It fixes the issue #1799. It should be backported
as far as 2.0.
When a process is stopped or reloaded, a dedicated reconnect timeout is now
used. For now, this timeout is not used because the current code retries
immediately to reconnect to perform the local synchronization with the new
local peer, if any.
This patch is required to fix the issue #1799. It should be backported as
far as 2.0 with next fixes.
In peers section, it is possible to enable SSL for the local peer. In this
case, the bind line and the server line should both be configured. A
"default-server" directive may also be used to configure the SSL on the
server side. However there is no test to be sure the SSL is enabled on both
sides. It is a problem because the local resync performed during a reload
will be impossible and it is probably not the expected behavior.
So, it is now checked during the configuration validation. A warning message
is displayed if the SSL is not properly configured for the local peer.
This patch is related to issue #1799. It should probably be backported to 2.6.
Complete the QUIC MUX timeout refresh function by using the http-keep-alive
timeout. It is used when the connection is idle after having handled at
least one request.
To implement this, a new member <idle_start> has been defined in the qcc
structure. This is used as the timestamp for when the connection became
idle and is used as the base time for the http-keep-alive timeout.
Add a new qcc member named <nb_hreq>. Its purpose is close to <nb_sc>
which represents the number of attached stream connectors. Both are
incremented inside qc_attach_sc().
The difference is in the decrement operation. While <nb_sc> is
decremented on the sedesc detach callback, <nb_hreq> is decremented when
the qcs is locally closed.
In most cases, <nb_hreq> will be decremented before <nb_sc>. However, it
will be the reverse if a stream must be kept alive after the detach
callback.
The main purpose of this field is to implement the http-keep-alive timeout.
Both <nb_sc> and <nb_hreq> must be null to activate the http-keep-alive
timeout.
Implement a new internal function qcc_refresh_timeout(). Its role will be
to reset the QUIC MUX timeout depending on whether there are requests in
progress or not.
qcc_refresh_timeout() does not set a timeout if there are still attached
streams, as in this case the upper layer is responsible for managing it.
Else it will activate the timeout depending on the connection's current
status.
The timeout is refreshed in several locations: on stream detach and in
the I/O handler and wake callback.
For the moment, only the default timeout is used (client or server). The
function may be expanded in the future to support more specific ones, as
hinted in the sketch after this list:
* http-keep-alive if connection is idle
* http-request when waiting for incomplete HTTP requests
* client/server-fin for graceful shutdown
Store a reference to proxy in the qcc structure. This will be useful to
access proxy members outside of qcc_init().
Most notably, this change is required to implement timeout refreshing by
using the various timeouts configured at the proxy level.
Ensure via qcc_is_dead() that a connection instance is not released
until all of its qcs streams are detached by the upper layer, even if an
error has been reported or the timeout has fired.
On the other side, as qc_detach() always checks the connection status,
this should ensure that we do not keep a connection if not necessary.
Without this patch, a qcc instance may be freed with some of its qcs
streams not detached. This is an incorrect behavior and will lead to a
BUG_ON fault. Note however that no occurrence of this bug has been
produced currently. This patch is mainly a safety against future
occurrences.
This should be backported up to 2.6.
Timeout in QUIC MUX has evolved from the simple first implementation. At
the beginning, a connection was considered dead unless bidirectional
streams were opened. This was abstracted through an app callback
is_active().
Now this paradigm has been reversed and a connection is considered alive
by default, unless an error has been reported or a timeout has already
been fired. The callback is_active() is thus not used anymore and can be
safely removed to simplify qcc_is_dead().
This commit should be backported to 2.6.
A qcc instance may be freed in the middle of qc_io_cb() if all streams
were purged. This will lead to a crash as qcc instance is reused after
this step. Jump directly to the end of the function to avoid this.
Note that this bug has not been triggered for the moment. This is a
safety fix to prevent it.
This must be backported up to 2.6.
Miroslav reported in issue #1802 a problem that affects atomic map/acl
updates. During an update, incorrect versions are properly skipped, but
in order to do so, we rely on ebmb_next() instead of ebmb_next_dup().
This means that if a new matching entry is in the process of being
added and is the first one to succeed in the lookup, we'll skip it due
to its version and use the next entry regardless of its value provided
that it has the correct version. For IP addresses and string prefixes
it's particularly visible because a lookup may match a new longer prefix
that's not yet committed (e.g. 11.0.0.1 would match 11/8 when 10/7 was
the only committed one), and skipping it could end up on 12/8 for
example. As soon as a commit for the last version happens, the issue
disappears.
This problem only affects tree-based matches: the "str", "ip", and "beg"
matches.
Here we replace the ebmb_next() values with ebmb_next_dup() for exact
string matches, and with ebmb_lookup_shorter() for longest matches,
which will first visit duplicates, then look for shorter prefixes. This
relies on previous commit:
MINOR: ebtree: add ebmb_lookup_shorter() to pursue lookups
Both need to be backported to 2.4, where the generation ID was added.
Note that nowadays a simpler and more efficient approach might be employed,
by having a single version in the current tree, and a list of trees per
version. Manipulations would look up the tree version and work (and lock)
only in the relevant trees, while normal operations would be performed on
the current tree only. Committing would just be a matter of swapping tree
roots and deleting old trees contents.
It's convenient for debugging IP trees. However we're not dumping the
full keys; for the sake of simplicity, only the first 4 bytes are dumped
as a u32 hex value. In practice this is sufficient for debugging. As a
reminder since it seems difficult to recover the command each time it's
needed, the output is converted to an image using dot from Graphviz:
dot -o a.png -Tpng dump.txt
Since version 1.1.0, OpenSSL's libcrypto ignores the provided locking
mechanism and uses pthread's rwlocks instead. The problem is that for
some code paths (e.g. async engines) this results in a huge amount of
syscalls on systems facing a bit of contention, to the point where more
than 80% of the CPU can be spent in the system dealing with spinlocks
just for futex_wake().
This patch provides an alternative by redefining the relevant pthread
rwlocks with the low-overhead version of the progressive rw locks. This
way there will be no more syscalls in case of contention, and CPU will
be burnt in userland. Doing this saves massive amounts of CPU, where
the locks only take 12-15% vs 80% before, which allows SSL to work much
faster on large thread counts (e.g. 24 or more).
The tryrdlock and trywrlock variants have been implemented using a CAS
since their goal is only to succeed on no contention and never to wait.
The pthread_rwlock API is complete except that the timed versions of
the rdlock and wrlock do not wait and simply fall back to trylock
versions.
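For illustration, here is a minimal sketch of such a CAS-based trywrlock; the lock word layout and the writer bit are hypothetical, not the actual plock implementation:

  #include <errno.h>
  #include <stdint.h>

  #define WR_HELD 0x80000000u /* hypothetical writer bit */

  /* only succeeds when the lock word is entirely free; never waits */
  static int emu_pthread_rwlock_trywrlock(uint32_t *lock)
  {
          uint32_t expected = 0; /* no readers, no writer */

          if (__atomic_compare_exchange_n(lock, &expected, WR_HELD, 0,
                                          __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                  return 0;
          return EBUSY; /* contended: the caller must not wait */
  }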
Since the gains have only been observed with async engines for now,
this option remains disabled by default. It can be enabled at build
time using USE_PTHREAD_EMULATION=1.
When testing strong queue contention on a 48-thread machine, some crashes
would frequently happen due to process_srv_queue() never leaving and
processing pending requests forever. A dump showed more than 500000
loops at once. The problem is that other threads find it working so
they don't do anything and are free to process their pending requests.
Because of this, the dequeuing thread can be kept busy forever and does
not process its own requests anymore (fortunately the watchdog stops it).
This patch adds a limit to the number of rounds, it limits it to
maxpollevents, which is reasonable because it's also an indicator of
latency and batch size. However there's a catch. If all requests
are finished when the thread ends the loop, there might not be enough
anymore to restart processing the queue. Thus we tolerate to re-enter
the loop to process one request at a time when it doesn't have any
anymore. This way we're leaving more room for another thread to take
on this task, and we're sure to eventually end this loop.
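A hedged sketch of the bounded loop (the helper is an illustrative stand-in, and the real code additionally tolerates re-entering one request at a time as described above):

  extern int dequeue_one_request(void *srv); /* illustrative: 1 if one was processed */

  static void process_srv_queue_sketch(void *srv, int maxpollevents)
  {
          int rounds = maxpollevents; /* also a latency/batch-size hint */

          while (dequeue_one_request(srv)) {
                  if (--rounds <= 0)
                          break; /* leave the rest to the next thread
                                  * entering this function */
          }
  }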
Doing this has also improved the overall dequeuing performance by ~20%
in highly contended situations with 48 threads.
It should be backported at least to 2.4, maybe even 2.2 since issues
were faced in the past on machines having many cores.
Add a loop to this function, which is called by the mux, so that it can
send more packets at once. The loop is broken when we cannot prepare a
packet with qc_prep_app_pkts() due to missing available room in the
buffer used to send packets. This improves the throughput.
Must be backported to 2.6.
As the TX packets are ordered by their packet number and always sent
in the same order, their TX timestamps are inspected from the older to
the newer values when we look for packet loss. So we can stop
this search as soon as we find the first packet which has not been lost.
Must be backported to 2.6
<loss_time_limit> is the loss time limit computed from the <time_sent> packet
transmission timestamps in qc_packet_loss_lookup() to identify the packets which
have been lost. The <time_sent> variable was mistakenly used in place of
<loss_time_limit> to distinguish such packets from the others (still in-flight packets).
Must be backported to 2.6
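A hedged, self-contained sketch of this lookup (types and names are illustrative):

  #include <stdint.h>
  #include <stddef.h>

  struct tx_pkt_sketch {
          uint64_t time_sent;           /* transmission timestamp */
          struct tx_pkt_sketch *next;   /* next (newer) packet */
  };

  /* packets are ordered by send time, so the scan may stop at the first
   * packet sent after <loss_time_limit>: it and all newer ones are not lost
   */
  static struct tx_pkt_sketch *mark_lost_until(struct tx_pkt_sketch *oldest,
                                               uint64_t loss_time_limit)
  {
          struct tx_pkt_sketch *pkt = oldest;

          while (pkt && pkt->time_sent <= loss_time_limit) {
                  /* ... declare <pkt> lost ... */
                  pkt = pkt->next;
          }
          return pkt; /* first non-lost packet, or NULL */
  }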
As it could be interesting to be able to choose the QUIC congestion control
algorithm to be used per listener, add a new "quic-cc-algo" keyword to do so.
Update the documentation consequently.
Must be backported to 2.6.
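For illustration, a hedged configuration example (address, certificate path and algorithm choice are placeholders):

  frontend quic_in
      bind quic4@0.0.0.0:443 ssl crt /path/to/cert.pem alpn h3 quic-cc-algo cubic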
Cubic is the congestion control algorithm used by default by the Linux kernel
since version 2.6.15. This algorithm is supposed to achieve good scalability and
fairness between flows using the same network path; it should also be used by QUIC
by default. This patch implements this algorithm and selects it as the default
algorithm for the congestion control.
Must be backported to 2.6.
Ease the integration of new congestion control algorithms to come.
Move the congestion controller state to a private array of uint32_t
to stop using a union. We do not want to continue using such long
paths cc->algo_state.<algo>.<var> to modify the internal state variable
for each algorithm.
Must be backported to 2.6
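A hedged sketch of the layout change (sizes and field names are illustrative):

  #include <stdint.h>

  #define QUIC_CC_PRIV_SZ 16 /* illustrative size, in uint32_t words */

  struct quic_cc_sketch {
          uint32_t priv[QUIC_CC_PRIV_SZ]; /* opaque: replaces the union */
  };

  struct cubic_state_sketch {
          uint32_t ssthresh;
          uint32_t epoch_start;
          /* ... */
  };

  /* each algorithm overlays its own state on the private array, instead
   * of reaching through cc->algo_state.<algo>.<var>
   */
  static inline struct cubic_state_sketch *cubic_state(struct quic_cc_sketch *cc)
  {
          return (struct cubic_state_sketch *)cc->priv;
  }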
On H3 DATA frame transfer from the client, some streams are not properly
closed by the upper layer, despite all transfer operation completed.
Data integrity is not impacted but this will prevent the stream timeout
from firing and thus keep the owner session open.
In most cases, sessions are closed on QUIC idle timeout, but it may stay
forever if a client emits PING frames at a regular interval to maintain
it.
This bug is caused by a missing EOI stream desc flag on certain
condition in qc_rcv_buf(). To be triggered, we have to use the
optimization when conn-stream buffer is empty and can be swapped with
qcs buffer. The problem is that it will skip the function body for
default copy but also a condition to check if EOI must be set. Thus this
bug does not happen for every H3 POST request : it requires that the
conn-stream buffer is empty on the last qc_rcv_buf() invocation.
This was reproduced more frequently when using ngtcp2 client with one or
multiple streams :
$ ngtcp2-client -m POST -d ~/infra/html/10K 127.0.0.1 20443 \
http://127.0.0.1:20443/post
This may fix at least partially github issue #1801.
This must be backported up to 2.6.
The previous attempt was reverted because it would emit a warning when
the sockets were still in the process after a failed reload, so this was
an expected 2nd try.
This warning however, will be displayed if a new process successfully
got the previous sockets AND the sendable number of sockets is 0.
This way the user will be warned if he tried to get the sockets from
the wrong process.
Since commit 2be557f ("MEDIUM: mworker: seamless reload use the internal
sockpair"), we are using the PROC_O_LEAVING flag to determine which
sockpair worker will be used with -x during the next reload.
However in mworker_reexec(), the PROC_O_LEAVING flag is not updated, it
is only updated at startup in mworker_env_to_proc_list().
This could be a problem when a remaining process is still in the list,
it could be selected as the current worker, and its socket will be used
even if _getsocks doesn't work anymore on it. (bug #1803)
This patch fixes the issue by updating the PROC_O_LEAVING flag in
mworker_proc_list_to_env() just before using it in mworker_reexec().
Must be backported to 2.6.
The _getsocks CLI command can be used only once, after that the sockets
are not available anymore.
Emit a warning when the command was already used once.
Christopher found that since commit 8e2c0fa8e ("MINOR: fd: delete unused
updates on close()") we may crash in a late stop due to an fd_delete()
in the main thread performed after all threads have deleted the fd_updt[]
array. Prior to that commit that didn't happen because we didn't touch
the updates on this path, but now it may happen. We don't care about these
ones anyway since the poller is stopped, so let's just wipe them by
resetting their counter before freeing the array.
No backport is needed as this is only 2.7.
When haproxy starts with a resolver section, and there is a default one
since 2.6 which uses /etc/resolv.conf, it tries to do a connect() with the UDP
socket in order to check if the routes of the system allow reaching the
server.
This check is too restrictive as it won't prevent any runtime
failure.
Relax the check by making it a warning instead of a fatal alert.
This must be backported in 2.6.
This reverts commit 356866acce.
It seems that an undocumented expectation of peers is based on the peers
proxy name to determine if the local peer is fully configured or not. Thus
because of the commit above, we are no longer able to detect incomplete
peers sections.
One side effect of this bug is a segfault when HAProxy is stopped/reloaded if
we try to perform a local resync on a mis-configured local peer. So waiting
for a better solution, the patch is reverted.
This patch must be backported as far as 2.5.
The fd_send_uxst() function which is used to send a socket over the
socketpair returns 1 upon error instead of -1, which means the error
case of the sendmsg() is never caught correctly.
Must be backported as far as 1.9.
A bug was introduced in 2.7-dev2 by commit 1f947cb39 ("MAJOR: poller:
only touch/inspect the update_mask under tgid protection"): once the
FD's tgid is held, we would forget to drop it in case the update mask
doesn't match, resulting in random watchdog panics of older processes
on successive reloads.
This should fix issue #1798. Thanks to Christian for the report and
to Christopher for the reproducer.
No backport is needed.
Christopher bisected that recent commit d0b73bca71 ("MEDIUM: listener:
switch bind_thread from global to group-local") broke the master socket
in that only the first out of the N initial connections would work,
where N is the number of threads, after which they all work.
The cause is that the master socket was bound to multiple threads,
despite global.nbthread being 1 there, so the incoming connection load
balancing would try to send incoming connections to non-existing threads,
however the bind_thread mask would nonetheless include multiple threads.
What happened is that in 1.9 we forced "nbthread" to 1 in the master's poll
loop with commit b3f2be338b ("MEDIUM: mworker: use the haproxy poll loop").
In 2.0, nbthread detection was enabled by default in commit 149ab779cc
("MAJOR: threads: enable one thread per CPU by default"). From this point
on, the operation above is unsafe because everything during startup is
performed with nbthread corresponding to the default value, then it
changes to one when starting the polling loop. But by then we weren't
using the wait mode except for reload errors, so even if it would have
happened nobody would have noticed.
In 2.5 with commit fab0fdce9 ("MEDIUM: mworker: reexec in waitpid mode
after successful loading") we started to re-execute all the time, not just
for errors, so as to release precious resources and to possibly spot bugs
that were rarely exposed in this mode. By then the incoming connection LB
was enforcing all_threads_mask on the listener's thread mask so that the
incorrect value was being corrected while using it.
Finally in 2.7 commit d0b73bca71 ("MEDIUM: listener: switch bind_thread
from global to group-local") replaces the all_threads_mask there with
the listener's bind_thread, but that one was never adjusted by the
starting master, whose thread group was filled to N threads by the
automatic detection during early setup.
The best approach here is to set nbthread to 1 very early in init()
when we're in the master in wait mode, so that we don't try to guess
the best value and don't end up with incorrect bindings anymore. This
patch does this and also sets nbtgroups to 1 in preparation for a
possible future where this will also be automatically calculated.
There is no need to backport this patch since no other versions were
affected, but if it were to be discovered that the incorrect bind mask
on some of the master's FDs could be responsible for any trouble in
older versions, then the backport should be safe (provided that
nbtgroups is dropped of course).
If the load balancing is performed on the source IP address, an internal
error was returned on error. So for an applet on the client side (for
instance an SPOE applet) or for a client connected to a unix socket, an
internal error is returned.
However, when other LB algos fail, a fallback on round-robin is
performed. There is no reason not to do the same here.
This patch should fix the issue #1797. It must be backported to all
supported versions.
Since commit ae024ced0 ("MEDIUM: stream-int/stream: Use connect expiration
instead of SI expiration"), the connect expiration date is per-stream. So
there is only one expiration date instead of one per side, front and
back. So when a stream-connector is processed, we must test if it is a
frontend or a backend stconn before updating the connect expiration
date. Indeed, the frontend stconn must not reset the connect expiration
date.
This bug may have several side effects. One known bug is about peer sessions
blocked because the frontend peer applet is in ST_CLO state and its backend
connection is in ST_TAR state but without connect expiration date.
This patch should fix issues #1791 and #1792. It must be backported to
2.6.
As reported in github issue #1765, some people get trapped into building
haproxy and companion libraries on Windows using a compiler following the
LLP64 model. This has no chance to work, and definitely causes nasty bugs
everywhere when pointers are passed as longs. Let's save them time and
detect this at boot time.
The message and detection was factored with the existing one for -fwrapv
since we need the same info and actions.
This should be backported to all recent supported versions (the ones
that are likely to be tried on such platforms when people don't know).
When updating from 2.4 to 2.6, the child->reloads++ instruction changed
place, resulting in a former worker from the 2.4 process, still
identified as a current worker once in 2.6, because its reload counter
is still 0.
Unfortunately this counter is used to choose the mworker_proc structure
that will be used for the new worker.
What happens next, is that the mworker_proc structure of the previous
process is selected, and this one has ipc_fd[1] set to -1, because this
structure was supposed to be in the master.
The process then forks, and mworker_sockpair_register_per_thread() tries
to register ipc_fd[1] which is set to -1, instead of the fd of the new
socketpair.
This patch fixes the issue by checking if child->pid is equal to -1 when
selecting proc_self. This way we can be sure it wasn't a previous
process.
Should fix issue #1785.
This must be backported as far as 2.4 to fix the issue related to the
reload computation difference. However backporting it to every stable
branch will make the reload process more robust.
Stream data reception is incorrect when dealing with a partially new
offset with some data already consumed out of the RX buffer. In this
case, data length is adjusted but not the data buffer. In most cases,
ncb_add() operation will be rejected as already stored data does not
correspond with the new inserted offset. This will result in an invalid
CONNECTION_CLOSE with PROTOCOL_VIOLATION.
To fix this, buffer pointer is advanced while the length is reduced.
This can be reproduced with a POST request and patching haproxy to call
qcc_recv() multiple times by copying a quic_stream frame with different
offsets.
Must be backported to 2.6.
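A hedged sketch of the fix (names are illustrative, not the exact qcc_recv() code):

  #include <stdint.h>

  /* when the frame partially overlaps data already consumed out of the
   * RX buffer, the data pointer must advance together with the length
   * reduction before insertion with ncb_add()
   */
  static void adjust_partial_frame(const unsigned char **data, uint64_t *len,
                                   uint64_t *offset, uint64_t rx_offset)
  {
          if (*offset < rx_offset) {
                  uint64_t diff = rx_offset - *offset;

                  *data   += diff; /* this advance was the missing part */
                  *len    -= diff;
                  *offset += diff;
          }
  }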
Since e8422bf ("MEDIUM: global: remove the relative_pid from global and
mworker"), the relative pid prefix is not tested anymore on the master
CLI. Which means any value will fall into the "1" process.
Since we removed the nbproc, only the "1" and the "0" (master) values are
correct; any other value should return an error.
Fix issue #1793.
This must be backported as far as 2.5.
Enhance the errors and warnings when trying to load a ca-file with
ssl_store_load_locations_file().
Add errors from ERR_get_error() and strerror to give more information to
the user.
In commit cfdd20a0b ("MEDIUM: fd: support broadcasting updates for foreign
groups in updt_fd_polling") we decided to pick a random thread number among
a set of candidates for a wakeup in case we need an instant change. But the
thread count range was wrong (MAX_THREADS) instead of tg->count, resulting
in random crashes when thread groups are > 1 and MAX_THREADS > 64.
No backport is needed, this was introduced in 2.7-dev2.
In src/ev_epoll.c, a CHECK_IF() is guarded by an if statement. So, when the
macro is empty, GCC (at least 11.3.1) is not happy because there is an if
statement with an empty body without braces... It is handled by
"-Wempty-body" option.
So, braces are added and GCC is now happy.
No backport needed.
As specified by RFC 9000, it is forbidden to send a CONNECTION_CLOSE of
type 0x1d (CONNECTION_CLOSE_APP) in an Initial or Handshake packet. It
must be converted to type 0x1c (CONNECTION_CLOSE) with APPLICATION_ERROR
code.
CONNECTION_CLOSE_APP are generated by QUIC MUX interaction. Thus,
special care must be taken when dealing with a 0-RTT packet, as this is
the only case where the MUX can be instantiated while the quic-conn is
still on the Initial or Handshake encryption level.
To enforce RFC 9000, xprt build packet function is now responsible to
translate a CONNECTION_CLOSE_APP if still on Initial/Handshake
encryption. This process is done in a dedicated function named
qc_build_cc_frm().
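A hedged sketch of the translation (frame type values and APPLICATION_ERROR come from RFC 9000; everything else is illustrative):

  #include <stdint.h>

  #define FT_CONNECTION_CLOSE     0x1c /* transport-level close */
  #define FT_CONNECTION_CLOSE_APP 0x1d /* application-level close */
  #define ERR_APPLICATION_ERROR   0x0c /* RFC 9000 APPLICATION_ERROR */

  struct cc_frm_sketch {
          uint64_t type;
          uint64_t error_code;
  };

  /* while still on Initial/Handshake encryption, an application close
   * must be rewritten as a transport close with APPLICATION_ERROR
   */
  static void build_cc_frm_sketch(struct cc_frm_sketch *cc, int handshake)
  {
          if (handshake && cc->type == FT_CONNECTION_CLOSE_APP) {
                  cc->type = FT_CONNECTION_CLOSE;
                  cc->error_code = ERR_APPLICATION_ERROR;
          }
  }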
Without this patch, BUG_ON() statement in qc_build_frm() will be
triggered when building a CONNECTION_CLOSE_APP frame on Initial or
Handshake level. This is because QUIC_FT_CONNECTION_CLOSE_APP frame
builder mask does not allow these encryption levels, as opposed to
QUIC_FT_CONNECTION_CLOSE builder. This crash was reproduced by modifying
the H3 layer to force emission of a CONNECTION_CLOSE_APP on first frame
of a 0-RTT session.
Note however that CONNECTION_CLOSE emission during Handshake is a
complicated process for the server. For the moment, this is still
incomplete on haproxy side. RFC 9000 requires to emit it multiple times
in several packets under different encryption levels, depending on what
we know about the client encryption context.
This patch should be backported up to 2.6.
It looks like OpenSSL 1.0.2 returns an error when trying to insert a
certificate which is already present in an X509_STORE.
This patch simply ignores the X509_R_CERT_ALREADY_IN_HASH_TABLE error if
emitted.
Should fix part of issue #1780.
Must be backported in 2.6.
When the resolv.conf file is empty or there is no resolv.conf file, an
empty resolvers will be created, which emits a warning during the
postparsing step.
This patch fixes the problem by freeing the resolvers section if the
parsing failed or if the nameserver list is empty.
Must be backported in 2.6, the previous patch which introduces
resolvers_destroy() is also required.
The first approach in commit 288dc1d8e ("BUG/MEDIUM: tools: avoid calling
dlsym() in static builds") relied on dlopen() but on certain configs (at
least gcc-4.8+ld-2.27+glibc-2.17) it used to wrongly trigger in situations
where it ought not to fail.
Let's have a second try on this using dladdr() instead. The variable was
renamed "build_is_static" as it's exactly what's being detected there.
We could even take it for reporting in -vv though that doesn't seem very
useful. At least the variable was made global to ease inspection via the
debugger, or in case it's useful later.
Now it properly detects a static build even with gcc-4.4+glibc-2.11.1 and
doesn't crash anymore.
Since 2.4 with commit 64192392c ("MINOR: tools: add functions to retrieve
the address of a symbol"), we can resolve symbols. However some old glibc
crash in dlsym() when the program is statically built.
Fortunately even on these old libs we can detect lack of support by
calling dlopen(NULL). Normally it returns a handle to the current
program, but on a static build it returns NULL. This is sufficient to
refrain from calling dlsym() (which will be of very limited use anyway),
so we check this once at boot and use the result when needed.
This may be backported to 2.4. On stable versions, be careful to place
the init code inside an if/endif guard that checks for DL support.
Since these are not used anymore, let's now remove them. Given the
number of places where we're using ti->ltid_bit, maybe an equivalent
might be useful though.
We're still facing the situation where it's impossible to update an FD
for a foreign group. That's of particular concern when disabling/enabling
listeners (e.g. pause/resume on signals) since we don't decide which thread
gets the signal and it needs to process all listeners at once.
Fortunately, not that much is unprotected in FDs. This patch adds a test for
tgid's equality in updt_fd_polling() so that if a change is applied for a
foreign group, then it's detected and taken care of separately. The method
consists in forcing the update on all bound threads in this group, adding it
to the group's update_list, and sending a wake-up as would be done for a
remote thread in the local group, except that this is done by grabbing a
reference to the FD's tgid.
Thanks to this, SIGTTOU/SIGTTIN now work for nbtgroups > 1 (after that was
temporarily broken by "MEDIUM: fd/poller: make the update-list per-group").
With thread groups and group-local masks, the update_mask cannot be
touched nor even checked if it may change below us. In order to avoid
this, we have to grab a reference to the FD's tgid before checking the
update mask. The operations are cheap enough so that we don't notice it
in performance tests. This is expected because the risk of meeting a
reassigned FD during an update remains very low.
It's worth noting that the tgid cannot be trusted during startup nor
during soft-stop since that may come from anywhere at the moment. Since
soft-stop runs under thread isolation we use that hint to decide whether
or not to check that the FD's tgid matches the current one.
The modification is applied to the 3 thread-aware pollers, i.e. epoll,
kqueue, and evports. Also one poll_drop counter was missing for shared
updates, though it might be hard to trigger it.
With this change applied, thread groups are usable in benchmarks.
The poller-specific thread init code now uses that new function to
safely register boot events. This ensures that we don't register an
event for another group and that we properly deal with parallel
thread startup.
It's only done for thread-aware pollers, there's no point in using
that in poll/select though that should work as well.
There's a nasty case during boot, which is the master process. It stops
all listeners from the main thread, and as such we're seeing calls to
fd_delete() from a thread that doesn't match the FD's mask, but more
importantly from a group that doesn't match either. Fortunately this
happens in a process that doesn't see the threads creation, so the FDs
are left intact in the table and we can overwrite the tgid there.
The approach is ugly, it probably shows that we should use a dummy
value for the tgid during boot, that would be replaced once the FDs
migrate to their target, but we also need a way to make sure not to
miss them. Also that doesn't solve the possibility of closing a
listener at run time from the wrong thread group.
At boot the pollers are allocated for each thread and they need to
reprogram updates for all FDs they will manage. This code is not
trivial, especially when trying to respect thread groups, so we'd
rather avoid duplicating it.
Let's centralize this into fd.c with this function. It avoids closed
FDs, those whose thread mask doesn't match the requested one or whose
thread group doesn't match the requested one, and performs the update
if required under thread-group protection.
It requires to both adapt the parser and change the algorithm to
redispatch incoming traffic so that local thread IDs may always
be used.
The internal structures now only reference thread group IDs and
group-local masks which are compatible with those now used by the
FD layer and the rest of the code.
It used to turn group+local to global but now we're doing the exact
opposite as we want to stick to group-local masks. This means that
"thread 3-4" might very well emit what "thread 2/1-2" used to emit
till now for 2 groups and 4 threads. This is needed because we'll
have to support group-local thread masks in receivers.
However the rest of the code (receivers) is not ready yet for this,
so using this code with more than one thread group will definitely
break some bindings.
The IOCB might have closed the FD itself, so it's not an error to
have fd.tgid==0 or anything else, nor to have a null running_mask.
In fact there are different conditions under which we can leave the
IOCB, all of them have been enumerated in the code's comments (namely
FD still valid and used, hence has running bit, FD closed but not yet
reassigned thus running==0, FD closed and reassigned, hence different
tgid and running becomes irrelevant, just like all other masks). For
this reason we have no other solution but to try to grab the tgid on
return before checking the other bits. In practice it doesn't represent
a big cost, because if the FD was closed and reassigned, it's instantly
detected and the bit is immediately released without blocking other
threads, and if the FD wasn't closed this doesn't prevent it from
being migrated to another thread. In the worst case a close by another
thread after a migration will be postponed till the moment the running
bit is cleared, which is the same as before.
These functions need to set/reset the FD's tgid but when they're called
there may still be wakeups on other threads that discover late updates
and have to touch the tgid at the same time. As such, it is not possible
to just read/write the tgid there. It must only be done using operations
that are compatible with what other threads may be doing.
As we're using inc/dec on the refcount, it's safe to AND the area to zero
the lower part when resetting the value. However, in order to set the
value, there's no other choice but fd_claim_tgid() which will assign it
only if possible (via a CAS). This is convenient in the end because it
protects the FD's masks from being modified by late threads, so while
we hold this refcount we can safely reset the thread_mask and a few other
elements. A debug test for non-null masks was added to fd_insert() as it
must not be possible to face this situation thanks to the protection
offered by the tgid.
fd_insert() was already given a thread group ID and a global thread mask.
Now we're changing the few callers to take the group-local thread mask
instead. It's passed directly into the FD's thread mask. Just like for
previous commit, it must not change anything when a single group is
configured.
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
It's an AND so it destroys information, and due to this there's a
place where we have to perform two reads to know the previous value
then to change it. With a fetch-and-and instead, in a single operation
we can know if the bit was previously present, which is more efficient.
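For illustration, a minimal sketch of the difference (standard C11 atomics, not haproxy's own macros):

  #include <stdatomic.h>

  /* a single atomic operation both clears the bit and reports whether it
   * was previously set, instead of a separate read followed by an AND
   */
  static inline int clear_bit_was_set(_Atomic unsigned long *mask,
                                      unsigned long bit)
  {
          unsigned long prev = atomic_fetch_and(mask, ~bit);

          return !!(prev & bit);
  }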
From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation, in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation, in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
This changes the signification of each bit in the polled_mask so that
now each bit represents a local thread ID for the current group instead
of a global thread ID. As such, all tests now apply to ltid_bit instead
of tid_bit.
No particular check was made to verify that the FD's tgid matches the
current one because there should be no case where this is not true. A
check was added in epoll's __fd_clo() to confirm it never differs unless
expected (soft stop under thread isolation, or master in starting mode
going to exec mode), but that doesn't prevent from doing the job: it
only consists in checking in the group's threads those that are still
polling this FD and to remove them.
Some atomic loads were added at the various locations, and most repetitive
references to polled_mask[fd].xx were turned to a local copy instead making
the code much more clear.
We now grab a reference to the FD's tgid before manipulating the
running_mask so that we're certain it corresponds to our own group
(hence bits), and we drop it once we've set the bit. For now there's
no measurable performance impact in doing this, which is great. The
lock can be observed by perf top as taking a small share of the time
spent in fd_update_events(), itself taking no more than 0.28% of CPU
under 8 threads.
However due to the fact that the thread groups are not yet properly
spread across the pollers and the thread masks are still wrong, this
will trigger some BUG_ON() in fd_insert() after a few tens of thousands
of connections when threads other than those of group 1 are reached,
and this is expected.
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of tgid on the lower 16
bits and a refcount on the higher 16 bits. This allows to know when it's
really possible to trust the tgid and the running mask. If a refcount is
higher than 1 it indicates that another thread might be in the
process of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
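A hedged sketch of the packing described above (helper names are illustrative):

  #include <stdint.h>

  /* low 16 bits: thread group ID */
  static inline uint16_t fd_tgid_of(uint32_t refc_tgid)
  {
          return refc_tgid & 0xffff;
  }

  /* high 16 bits: reference count; a value greater than 1 means another
   * thread may be updating the tgid and masks, so they cannot be trusted
   */
  static inline uint16_t fd_refc_of(uint32_t refc_tgid)
  {
          return refc_tgid >> 16;
  }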
It's a bit ugly to see that half of the callers of fd_insert() have to
apply all_threads_mask themselves to the bit field they're passing,
because usually it comes from a listener that may have other bits set.
Let's make the function apply the mask itself.
After a poller's ->clo() was called to completely terminate operations
on an FD, there's no reason for keeping updates on this FD, so if any
updates were already programmed it would be nice if we could delete them.
Tests show that __fd_clo() is called roughly half of the time with the
last FD from the local update list, which possibly makes sense if a close
has to appear after a polling change resulting from an incomplete read or
the end of a send().
We can detect this and remove the last entry, which gives less work to
do during the update() call, and eliminates most of the poll_drop_fd
event reports.
Note that while tempting, this must not be backported because it's only
safe to be done now that fd_delete_orphan() clears the update mask as
we need to be certain not to miss it:
- if the update mask is left set with no entry, we can miss future
updates ;
- if the update mask is cleared too fast, it may result in failure
to add a shared event.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
Dealing with long-lasting updates that outlive a close() is always
going to be quite a problem, not because of the thread that will
discover such updates late, but mostly due to the shared update_list
that will have an entry on hold making it difficult to reuse it, and
requiring that the fd's tgid is changed and the update_mask reset
from a safe location.
After careful inspection, it turns out that all our pollers that support
automatic event removal upon close() do not need any extra bookkeeping,
and that poll and select that use an internal representation already
provide a poller->clo() callback that is already used to update the
local event. As such, it is already safe to reset the update mask and
to remove the event from the shared list just before the final close,
because nothing remains to be done with this FD by the poller.
Doing so considerably simplifies the handling of updates, which will
only have to be inspected by the pollers, while the writers can continue
to consider that the entries are always valid. Another benefit is that
it will be possible to reduce contention on the update_list by just
having one update_list per group (left to be done later if needed).
We don't want to pick idle connections from another thread group,
this would be very slow by forcing to share undesirable data.
This patch makes sure that we start seeking from the current thread
group's threads only and loops over that range exclusively.
It's worth noting that the next_takeover pointer remains per-server
and will bounce when multiple groups use it at the same time. But we
preserve the perturbation by applying a modulo when retrieving it,
so that when groups are of the same size (most common case), the
index will not even change. At this time it doesn't seem worth
storing one index per group in servers, but that might be an option
if any contention is detected later.
This one is only used as a hint to improve scheduling latency, so there
is no more point in keeping it global since each thread group handles
its own run queue.
Their migration was postponed for convenience only but now's time for
having the shared wait queues per thread group and not just per process,
otherwise the WQ lock uses a huge amount of CPU alone.
Since commit d2494e048 ("BUG/MEDIUM: peers/config: properly set the
thread mask") there must not remain any single case of a receiver that
is bound nowhere, so there's no need anymore for thread_mask().
We're adding a test in fd_insert() to make sure this doesn't happen by
accident though, but the function was removed and its rare uses were
replaced with the original value of the bind_thread mask.
When using multiple groups, the stats socket starts to emit errors and
it's not natural to have to touch the global section just to specify
"thread 1/all". Let's pre-attach these sockets to thread group 1. This
will cause errors when trying to change the group but this really is not
a problem for now as thread groups are not enabled by default. This will
make sure configs remain portable and may possibly be relaxed later.
As a side effect of commit 34aae2fd1 ("MEDIUM: mworker: set the iocb of
the socketpair without using fd_insert()"), a config may now refuse to
start if there are multiple groups configured because the default bind
mask may span over multiple groups, and it is not possible to force it
to work differently.
Let's just assign thread group 1 to the master<->worker sockets so that
the thread bindings automatically resolve to a single group. The same was
done for the master side of the socket even if it's not used. It will
avoid being forgotten in the future.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
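As a hedged illustration of the per-group syntax (thread and CPU numbers are arbitrary):

  # threads 1-32 of group 1 on CPUs 0-31, same for group 2 on CPUs 32-63
  cpu-map 1/1-32 0-31
  cpu-map 2/1-32 32-63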
These old legacy pollers are not designed for this. They're still
using a shared list of events for all threads, this will not scale at
all, so there's no point in enabling thread-groups there. Modern
systems have epoll, kqueue or event ports and do not need these ones.
We arrange for failing at boot time, only when thread-groups > 1 so
that existing setups will remain unaffected.
If there's a compelling reason for supporting thread groups with these
pollers in the future, the rework should not be too hard, it would just
consume a lot of memory to have an fd_evts[] array per thread, but that
is doable.
When an FD is migrated, all pollers program an update. That's useless
code duplication, and when thread groups will be supported, this will
require an extra round of locking just to verify the update_mask on
return. Let's just program the update direction from fd_update_events()
as it already does for closed FDs, this becomes more logical.
protocol_stop_now() is called from do_soft_stop_now() running on any
thread that received the signal. The problem is that it will call some
listener handlers to close the FD, resulting in an fd_delete() being
called from the wrong group. That's not clean and we cannot even rely
on the thread mask to show up.
One interesting long-term approach could be to have kill queues for
FDs, and maybe we'll need them in the long run. However that doesn't
work well for listeners in this situation.
Let's simply isolate ourselves during this instant. We know we'll be
alone dealing with the close and that the FD will be instantly deleted
since not in use by any other thread. It's not the cleanest solution
but it should last long enough without causing trouble.
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.
Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
It was deprecated, marked for removal in 2.7 and was already emitting a
warning, let's get rid of it. Note that we've kept the keyword detection
to suggest to use "thread" instead.
This was already causing a deprecation warning and was marked for removal
in 2.7, now it happens. An error message indicates this doesn't exist
anymore.
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
The output of "show activity" can be so large that the output is visually
unreadable on a screen. Let's add an option to filter on the desired
column (actually the thread number), use "0" to report only the first
column (aggregated/sum/avg), and use "-1", the default, for the normal
detailed dump.
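For example, assuming a stats socket at /var/run/haproxy.sock (path and thread number are illustrative):

  $ echo "show activity 3" | socat stdio /var/run/haproxy.sock   # thread 3 only
  $ echo "show activity 0" | socat stdio /var/run/haproxy.sock   # aggregated column only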
This command will create the requested number of tasks competing on a
lock, resulting in triggering the watchdog and crashing the process.
This will help stress the watchdog and inspect the lock debugging parts.
The previous attempt to fix thread dumps in commit 672972604 ("BUG/MEDIUM:
debug: fix possible hang when multiple threads dump at once") still had
some shortcomings. Sometimes parallel dumps are jerky essentially due to
the way that threads synchronize on startup and end. In addition the risk
of waiting forever for a stopped thread exists, and panics happening in
parallel to thread dumps are not more reliable either.
This commit revisits the state transitions so that all threads may request
a dump in parallel, that all of them wait for each other in the handler,
and that one thread is responsible for counting every other and checking
that the total matches the number of active threads.
Then for stopping there's a finishing phase that all threads wait for so
that none quits this area too early. Given that we now know the number of
participants to the dump, we can let them each decrement the counter when
leaving so that another dump may only start after the last participant
has completely left.
Now many thread dumps in parallel are running fine, so do panics. No
backport is needed as this was the result of the changes for thread
groups.
Some panic dumps are mangled or truncated due to the watchdog firing at
the same time on multiple threads and calling ha_panic() simultaneously.
What may happen in this case is that the second one waits for the first
one to finish but as soon as it's done the second one resets the buffer
and dumps again, sometimes resetting the first one's dump. Also the first
one's abort() may trigger while the second one is currently dumping,
resulting in a full dump followed by a truncated one, leading to
confusion. Sometimes some lines appear in the middle of a dump as well.
It doesn't happen often and is easier to trigger by causing massive
deadlocks.
There's no reason for the process to resist a panic, so we can safely
add a counter and do nothing on subsequent calls. Ideally we'd wait there
forever but as this may happen inside a signal handler (e.g. watchdog),
it doesn't always work, so the easiest thing to do is to return so that
the thread is interrupted as soon as possible and brought to the debug
handler to be dumped.
This should be backported, at least to 2.6 and possibly to older versions
as well.
In ha_tkillall(), the current thread's group was used to check for the
thread being running instead of using the target thread's group mask.
Most of the time it would not have any effect unless some groups are
uneven where it can lead to incomplete thread dumps for example.
No backport is needed, this is purely 2.7.
Running several concurrent "show threads" in loops might occasionally
cause a segfault when trying to retrieve the stream from appctx_sc()
which may be null while the applet is finishing. It's not easy to
reproduce, it requires 3-5 sessions in parallel for about a minute
or so. The appctx_sc must be checked before passing it to sc_strm().
This must be backported to 2.6 which also has the bug.
In thread_resolve_group_mask(), if a global thread number is passed
and it belongs to a group greater than 1, an incorrect shift resulted
in shifting that ID again which made it appear nowhere or in a wrong
group possibly. The bug was introduced in 2.5 with commit 627def9e5
("MINOR: threads: add a new function to resolve config groups and
masks") though the groups only starts to be usable in 2.7, so there
is no impact for this bug, hence no backport is needed.
Implement graceful shutdown as specified in RFC 9114. A GOAWAY frame is
generated with a stream ID indicating the range of processed requests.
This process is done via the release app protocol operation. The MUX
is responsible for emitting the generated GOAWAY frame after app release. A
CONNECTION_CLOSE will be emitted once there are no unacknowledged STREAM
frames.
Store a reference to the HTTP/3 control stream in h3c context.
This will be useful to implement GOAWAY emission without having to store
the control stream ID on opening.
Call qc_send() on qc_release(). This is mostly useful for application
protocol with a connection closing procedure. Most notably, this will be
useful to implement HTTP/3 GOAWAY emission.
This change is purely cosmetic. The qc_release() function is moved just
before qc_io_cb(). It's cleaner as it brings it closer to where it is used.
More importantly, this will be required to be able to use it in
qc_send() function.
Send a CONNECTION_CLOSE if the MUX has been released and all STREAM data
are acknowledged. This is useful to prevent a client from trying to use
a connection which has its upper layer closed.
To implement this a new function qc_check_close_on_released_mux() has
been added. It is called on QUIC MUX release notification and each time
a qc_stream_desc has been released.
This commit is associated with the previous one :
MINOR: mux-quic/h3: schedule CONNECTION_CLOSE on app release
Both patches are required to prevent the risk of browsers stuck on
webpage loading if MUX has been released. On CONNECTION_CLOSE reception,
the client will reopen a new QUIC connection.
When the MUX is released, a CONNECTION_CLOSE frame should be emitted. This
will ensure that the client does not keep using a half-dead connection.
The app protocol layer is responsible for providing the error code via the
release callback. For HTTP/3 NO_ERROR is used as specified in RFC 9114. If no
release callback is provided, the generic QUIC NO_ERROR code is used. Note
that a graceful shutdown is used : quic_conn must emit the CONNECTION_CLOSE
frame when possible. This will be provided in another patch.
This change should limit the risk of browsers stuck on webpage loading
if MUX has been released. On CONNECTION_CLOSE reception, the client will
reopen a new QUIC connection.
Adjust qcc_emit_cc_app() to allow delaying the emission of a
CONNECTION_CLOSE. This will only set the error code but the quic-conn
layer is not flagged for immediate close. The quic-conn will be
responsible to shut the connection when deemed suitable.
This change will allow to implement application graceful shutdown, such
as HTTP/3 with GOAWAY emission. This will allow to emit closing frames
on MUX release. Once all work is done at the lower layer, the quic-conn
should emit a CONNECTION_CLOSE with the registered error code.
Define a new structure quic_err to abstract a QUIC error type. This
allows to easily differentiate a transport and an application error
code. This simplifies error transmission from QUIC MUX and H3 layers.
This new type is defined in quic_frame module. It is used to replace
<err_code> field in <quic_conn>. QUIC_FL_CONN_APP_ALERT flag is removed
as it is now useless.
Utility functions are defined to be able to quickly instantiate
transport, tls and application errors.
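A hedged sketch of such a type and its helpers (field names are illustrative):

  #include <stdint.h>

  struct quic_err_sketch {
          uint64_t code; /* error code value */
          int app;       /* non-zero for application errors */
  };

  static inline struct quic_err_sketch quic_err_transport(uint64_t code)
  {
          return (struct quic_err_sketch){ .code = code, .app = 0 };
  }

  static inline struct quic_err_sketch quic_err_app(uint64_t code)
  {
          return (struct quic_err_sketch){ .code = code, .app = 1 };
  }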
Reception is disabled as soon as a CONNECTION_CLOSE emission is
required. An early return is done on qc_lstnr_pkt_rcv() to implement
this.
This condition is not functional if the error code sent is NO_ERROR
(0x00). To fix this, check the quic-conn flags instead of the
error code.
Currently this bug has no impact as NO_ERROR emission is not used.
This can be backported up to 2.6.
A bug in the thread dumper was introduced by commit 00c27b50c ("MEDIUM:
debug: make the thread dumper not rely on a thread mask anymore"). If
two or more threads try to trigger a thread dump exactly at the same
time, the second one may loop indefinitely trying to set the value to 1
while the other ones will wait for it to finish dumping before leaving.
This is a consequence of a logic change using thread numbers instead of
a thread mask, as threads do not need to see all other ones there anymore.
No backport is needed, this is only for 2.7.
Implement support for STOP_SENDING frame parsing. The stream is reset
as specified by RFC 9000. This will automatically interrupt all future
send operation in qc_send(). A RESET_STREAM will be sent with the code
extracted from the original STOP_SENDING frame.
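A hedged sketch of this handling (flag value and types are illustrative):

  #include <stdint.h>

  #define SF_TO_RESET 0x01 /* illustrative stand-in for QC_SF_TO_RESET */

  struct qcs_sketch {
          uint32_t flags;
          uint64_t err;
  };

  /* on STOP_SENDING, schedule a RESET_STREAM echoing the frame's error
   * code; further send operations on this stream will be interrupted
   */
  static void on_stop_sending(struct qcs_sketch *qcs, uint64_t app_err)
  {
          qcs->flags |= SF_TO_RESET;
          qcs->err = app_err; /* carried in the outgoing RESET_STREAM */
  }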
Implement functions to be able to reset a stream via RESET_STREAM.
If needed, a qcs instance is flagged with QC_SF_TO_RESET to schedule a
stream reset. This will interrupt all future send operations.
On stream emission, if a stream is flagged with QC_SF_TO_RESET, a
RESET_STREAM frame is generated and emitted to the transport layer. If
this operation succeeds, the stream is locally closed. If upper layer is
instantiated, error flag is set on it.
Adjust the condition to detach a qcs instance : if the stream is not locally
closed it is not directly freed. This should improve stream closing by
ensuring that either FIN or a RESET_STREAM is sent before destroying it.
Implement a basic state machine to represent stream lifecycle. By
default a stream is idle. It is marked as open when sending or receiving
the first data on a stream.
Bidirectional streams have two states to represent the closing on both
the receive and send channels. This distinction does not exist for
unidirectional streams which pass automatically from the open to the close
state.
This patch is mostly internal and has a limited visible impact. Some
behaviors are slightly updated :
* closed streams are garbage collected at the start of io handler
* send operation is interrupted if a stream is closed locally
Outside of this, there is no functional change. However, some additional
BUG_ON guards are implemented to ensure that we do not conduct invalid
operation on a stream. This should strengthen the code safety. Also,
stream states are displayed on trace which should help debugging.
MAX_STREAM_DATA can be used as the first frame of a stream. In this
case, the stream should be opened, if it respects the flow-control limit. To
implement this, simply replace plain lookup in stream tree by
qcc_get_qcs() at the start of the parsing function. This automatically
takes care of opening the stream if not already done.
As specified by RFC 9000, if MAX_STREAM_DATA is received for a
receive-only stream, a STREAM_STATE_ERROR connection error is emitted.
Improve return path for qcc_recv() on STREAM parsing. It returns 0 on
success. On error, a non-zero value is returned which indicates to the
caller that the packet containing the frame should not be acknowledged.
When qcc_recv() generates a CONNECTION_CLOSE or RESET_STREAM, either
directly or via qcc_get_qcs(), an error is returned which ensures that no
acknowledgement is generated. This required an adjustment on
qcc_get_qcs() API which now returns a success/error code. The stream
instance is returned via a new out argument.
Extend the function qcc_get_qcs() to be able to filter send/receive-only
unidirectional streams. A connection error STREAM_STATE_ERROR is emitted
if this new filter does not match.
This will be useful when various frames handlers are converted with
qcc_get_qcs(). Depending on the frame type, it will be easy to filter on
the forbidden stream types as specified in RFC 9000.
Implement a simple function to notify a possible subscriber or wake up
the upper layer if a special condition happens on a stream. For the
moment, this is only used to replace identical code in
qc_wake_some_streams().
Rename qc_release_detached_streams() to qc_purge_streams(). The aim is
to have a more generic name. It's expected to complete this function to
add other criteria to purge dead streams.
Also the function documentation has been corrected. It does not return a
number of streams. Instead it returns a boolean value, set to true if at
least one stream was released.
Rename both qcc_open_stream_local/remote() functions to
qcc_init_stream_local/remote(). This change is purely cosmetic. It will
reduce the ambiguity with the soon to be implemented OPEN states for
QCS instances.
QUIC MUX was not able to correctly deal with server response using
chunked transfer-encoding. All data will be transferred correctly to the
client but the FIN bit is missing. The transfer will never stop as the
client will wait indefinitely for the FIN bit.
This bug happened because the HTX message representing a chunked encoded
payload contains a final empty block with the EOM flag. However,
emission is skipped by QUIC MUX if there is no data to transfer. To fix
this, the condition was completed to ensure that there is no need to
send the FIN signal. If this is false, data emission will proceed even
if there is no data : this will generate an empty QUIC STREAM frame with
FIN set which will mark the end of the transfer.
To ensure that a FIN STREAM frame is sent only one time,
QC_SF_FIN_STREAM is reset on send confirmation from the transport in
qcc_streams_sent_done().
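As a hedged illustration, the completed emission condition boils down to (names are illustrative):

  #include <stddef.h>

  /* emission now also proceeds when no payload remains but the FIN bit
   * still has to be sent, yielding an empty STREAM frame with FIN set
   */
  static inline int must_emit_stream_frame(size_t to_send, int fin_pending)
  {
          return to_send > 0 || fin_pending;
  }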
This bug was reproduced when dealing with a chunked transfer-encoding
response from the HTTP server.
This must be backported up to 2.6.
When building with gcc-5, one can see this warning:
src/http_fetch.c: In function 'smp_fetch_meth':
src/http_fetch.c:356:6: warning: 'htx' may be used uninitialized in this function [-Wmaybe-uninitialized]
sl = http_get_stline(htx);
^
It's wrong since the only way to reach this code is to have met the
same condition a few lines before and initialized the htx variable.
The reason in fact is that the same test happens on different variables
of distinct types, so the compiler possibly doesn't know that the
condition is the same. Newer gcc versions do not have this problem.
Let's just move the assignment earlier and have the exact same test,
as it's sufficient to shut this up. This may have to be backported
to 2.6 since the code is the same there.
Between 1.8 and 1.9 commit d9e7e36c6 ("BUG/MEDIUM: epoll/threads: use one
epoll_fd per thread") split the epoll poller to use one poller per thread
(and this was backported to 1.8). This patch added a call to epoll_ctl(DEL)
on return from the I/O handler as a safe way to deal with a detected thread
migration when that code was still quite fragile. One aspect of this choice
was that by then we wanted to maintain support for the rare old bogus epoll
implementations that failed to remove events on close(), so risking to lose
the event was not an option.
Later in 2.5, commit 200bd50b7 ("MEDIUM: fd: rely more on fd_update_events()
to detect changes") changed the code to perform most of the operations
inside fd_update_events(), but it maintained that oddity, to the point that
strictly all pollers except epoll now just add an update to be dealt with at
the next round.
This approach is much more efficient, because under load and server-side
connection reuse, a thread may perfectly well see the same FD several
times in one poll loop: a first time to relinquish it after a migration,
then again when, still during the same loop, the other thread makes a
request, gets its response, and grabs that idle connection to send a new
request and wait for a response, which programs a new update on this FD.
By using a synchronous epoll_ctl(DEL), we effectively lose the
opportunity to aggregate certain changes in the same update.
Some tests performed locally with 8 threads and one server show that on
average, by using an update instead of a synchronous call, we reduce the
number of epoll_ctl() calls by 25-30% (under low loads it will probably
not change anything).
So this patch implements the same method for all pollers and replaces the
synchronous epoll_ctl() with an update.
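The change may be sketched as follows (simplified; the exact call sites
differ):

    /* before: synchronous removal, losing any chance of aggregation */
    epoll_ctl(epoll_fd[tid], EPOLL_CTL_DEL, fd, &ev);

    /* after: queue an update like all other pollers, so that several
     * changes to the same FD in one loop are merged into one syscall */
    updt_fd_polling(fd);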
Since commit d1480cc8 ("BUG/MEDIUM: stream-int: do not rely on the
connection error once established"), connection errors are not handled
anymore by the stream-connector once established. But it is a problem for
the H1 mux when an error occurs during a synchronous send in h1_snd_buf(),
because in this case the connection error is just missed. It leads to a
session leak until a timeout is reached (client or server).
To fix the bug, the connection flags are now checked in h1_snd_buf(). If
there is an error, it is reported to the stconn level by setting SF_FL_ERROR
flags. But only if there is no pending data in the input buffer.
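A sketch of the added check (simplified; the exact flag names may differ):

    /* in h1_snd_buf(), after a synchronous send: report a connection
     * error to the stconn level, but only without pending input data */
    if ((h1c->conn->flags & CO_FL_ERROR) && !b_data(&h1c->ibuf))
        se_fl_set(h1s->sd, SE_FL_ERROR);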
This patch should solve the issue #1774. It must be backported as far as
2.2.
When we want to establish a tunnel on a side, we wait until all data from
the buffer have been flushed. At this stage the other side is at least in
DONE state, so the whole HTTP message was already forwarded or planned to
be forwarded. Thus there is no real reason to wait. Worse, depending on
the scheduling, if the mux already started to transfer tunneled data,
these data may block the switch to TUNNEL state and thus be blocked
indefinitely.
This bug exists since the early days of HTX. Maybe it was mandatory then,
but today it seems useless. However I honestly don't remember why this
prerequisite was added, so be careful during the backports.
This patch should be backported with caution, at least as far as 2.4. For
2.2 and 2.0, it seems to be mandatory too, but a review must be performed.
When a buffer area is cast to an htx message, depending on the method
used, the underlying buffer may be updated or not. The htxbuf() function
does not change the buffer state; we assume the buffer was already prepared
to store an htx message. htx_from_buf(), on its side, updates the
buffer. With the first function, we only need to commit changes to the
underlying buffer if the htx message is changed. With the latter, we must
always commit the changes. The idea is to be sure to keep non-empty HTX
messages while an empty message must lead to an empty buffer after
commit.
All that said because in h1_process_demux(), the changes are not always
committed as expected. When the demux is blocked, we just leave the
function. So it is possible to have an empty htx message stored in a buffer
that appears non-empty. It is not critical, but the buffer cannot be
released in this state, and we should always release an empty buffer.
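The rule of thumb can be summarized this way (API usage sketch):

    struct htx *htx;

    /* htxbuf(): the buffer is left untouched; commit only if the
     * message is modified */
    htx = htxbuf(&buf);

    /* htx_from_buf(): the buffer is updated; always commit back so
     * that an empty message leads to an empty buffer */
    htx = htx_from_buf(&buf);
    /* ... modify the message ... */
    htx_to_buf(htx, &buf);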
This patch must be backported as far as 2.4.
The sd_notify API is not able to change the "Active:" line in "systemctl
status". However a message can still be displayed on a "Status:" line,
even if the service is still green and "active (running)".
When startup succeeds, the Status will be set to "Ready."; upon a reload
it will be set to "Reloading Configuration.". If the new configuration
succeeds, it is set to "Ready." again; however if the reload failed, it
will be set to "Reload failed!".
Keep in mind that the "Active:" line won't change upon a reload failure,
and will still be green.
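For reference, the kind of calls involved (sketch; the actual code lives
in the master-worker logic):

    #include <systemd/sd-daemon.h>

    sd_notify(0, "READY=1\nSTATUS=Ready.\n");
    /* upon reload */
    sd_notify(0, "RELOADING=1\nSTATUS=Reloading Configuration.\n");
    /* if the reload failed: the unit stays active, only Status changes */
    sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");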
The "method" sample fetch does not perform any check on the stream existence
before using it. However, for errors triggered at the mux level, there is no
stream. When the log message is formatted, this sample fetch must fail. It
must also fail when it is called from a health-check.
This patch must be backported as far as 2.4.
From time to time, users complain to get 400-Bad-request responses for
totally valid CONNECT requests. After analysis, it is because the H1 parser
performs an exact match between the authority and the host header value. For
non-CONNECT requests, it is valid. But for CONNECT requests the authority
must contain a port while it is often omitted from the host header value
(for default ports).
So, to be sure to not reject valid CONNECT requests, a basic authority
validation is now performed during the message parsing. In addition, the
host header value is normalized. It means the default port is removed if
possible.
This patch should solve the issue #1761. It must be backported to 2.6 and
probably as far as 2.4.
http_is_default_port() can be used to test if a port is a default HTTP/HTTPS
port. A scheme may be specified. In this case, it is used to detect defaults
ports, 80 for "http://" and 443 for "https://". Otherwise, with no scheme, both
are considered as default ports.
http_get_host_port() function can be used to get the port part of a host. It
will be used to get the port of an uri authority or a host header
value. This function only looks for a port starting from the end of the
host. It is the caller's responsibility to call it with a valid host value. An
indirect string is returned.
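A usage sketch of these two helpers (assuming the ist-based prototypes):

    struct ist host = ist("www.example.com:443");
    struct ist port = http_get_host_port(host);   /* -> "443" */

    if (isttest(port) && http_is_default_port(ist("https://"), port)) {
        /* default port: may be dropped during normalization */
    }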
The scheme-based normalization did not properly handle the URI's userinfo,
if any. First, the authority parser is now called with the "no_userinfo"
parameter set. Then the userinfo is skipped during the URI normalization.
This patch must be backported as far as 2.4.
Patch 49f6f4b ("BUG/MEDIUM: peers: fix segfault using multiple bind on
peers sections") introduced possible NULL dereferences when parsing the
peers configuration.
Fix the issue by checking the return value of bind_conf_uniq_alloc().
This patch should be backported as far as 2.0.
As much as possible we should take care of not leaving bits from stopped
threads in shared thread masks. It can avoid issues like the previous
fix and will also make debugging less confusing.
When soft-stopping, there's a comparison between stopping_threads and
threads_enabled to make sure all threads are stopped, but this is not
correct and is racy since the threads_enabled bit is removed when a
thread is stopped but not its stopping_threads bit. The consequence is
that depending on timing, when stopping, if the first stopping thread
is fast enough to remove its bit from threads_enabled, the other threads
will see that stopping_threads doesn't match threads_enabled anymore and
will wait forever. As such the mask must be applied to stopping_threads
during the test. This issue was introduced in recent commit ef422ced9
("MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups"),
no backport is needed.
When several "early-hint" rules are used, we try, as far as possible, to
merge links into the same 103-early-hints response. However, it only works
if there are no ACLs. If an "early-hint" rule is not executed, an invalid
response is generated: the EOH block or the start-line may be missing,
depending on the rule order.
To fix the bug, we use the transaction status code. It is unused at this
stage. Thus, it is set to 103 when a 103-early-hints response is in
progress. And it is reset when the response is forwarded. In addition, the
response is forwarded if the next rule is an "early-hint" rule with an
ACL. This way, the response is always valid.
This patch must be backported as far as 2.2.
When an explicit "http-check send" rule is used, if it is the first one, it
is merge with the implicit rule created by "option httpchk" statement. The
opposite is also true. Idea is to have only one send rule with the merged
info. It means info defined in the second rule override those defined in the
first one. However, if an element is not defined in the second rule, it must
be ignored, keeping this way info from the first rule. It works as expected
for the method, the uri and the request version. But it is not true for the
header list.
For instance, with the following statements, a x-forwarded-proto header is
added to healthcheck requests:
option httpchk
http-check send meth GET hdr x-forwarded-proto https
while by inverting the statements, no extra headers are added:
http-check send meth GET hdr x-forwarded-proto https
option httpchk
Now the old header list is overridden if the new one is not empty.
This patch should fix the issue #1772. It must be backported as far as 2.2.
Calls to free() are replaced by ha_free(). Otherwise, the pointers are
explicitly set to NULL after a release. There is no issue here but it could
help debugging sessions.
The peers didn't have their bind_conf thread mask nor group set, because
they're still not part of the global proxy list. Till 2.6 it seems it does
not have any visible impact, since most listener-oriented operations pass
through thread_mask() which detects null masks and turns them to
all_threads_mask. But starting with 2.7 it becomes a problem as it won't
permit these null masks anymore.
This patch duplicates (yes, sorry) the loop that applies to the frontend's
bind_conf, though it is simplified (no sharding, etc).
As the code is right now, it simply seems impossible to trigger the second
(and largest) part of the check when leaving thread_resolve_group_mask()
on success, so it looks like it might be removed.
No backport is needed, unless a report in 2.6 or earlier mentions an issue
with a null thread_mask.
Some generic frontend errors mention the bind_conf by its name as
"bind '%s'", but if this is used on peers "bind" lines it shows
"(null)" because the argument is set to NULL in the call to
bind_conf_uniq_alloc() instead of passing the argument. Fortunately
that's trivial to fix.
This may be backported to older versions.
Add a check on stream size when the stream is in state Size Known. In
this case, a STREAM frame cannot change the stream size. If this is not
respected, a CONNECTION_CLOSE with FINAL_SIZE_ERROR will be emitted as
specified in RFC 9000.
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
better aligns with the QUIC RFC 9000 which uses the "Size Known" state
definition. This change is purely cosmetic.
Review the whole API used to access/instantiate qcs.
A public function qcc_open_stream_local() is available to the
application protocol layer. It allows it to easily open a local stream.
The ID is automatically attributed to the next available one.
For remote streams, qcc_open_stream_remote() has been implemented. It
will automatically take care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs() which can
be used for each qcc_recv*() operation. For the moment, it is only used
for STREAM frames via qcc_recv(), but soon it will be implemented for
other frames types which can also be used to open a new stream.
qcs_new() and qcs_free() have been restricted to the MUX QUIC only as
they are now reserved for internal usage.
This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer intent and helps to ensure that a
stream is not automatically opened when not desired.
Implement a function <qcs_sc> to easily access the stconn associated
with a QCS. This takes care of qcs.sd which may be NULL, for example for
unidirectional streams.
It is expected that in the future, when implementing
STOP_SENDING/RESET_STREAM, the stconn must be notified about the event.
This accessor will allow to easily test if the stconn is instantiated or not.
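A plausible shape for this accessor (sketch, not necessarily the exact
code):

    static inline struct stconn *qcs_sc(const struct qcs *qcs)
    {
        /* qcs->sd may be NULL, e.g. for unidirectional streams */
        return qcs->sd ? qcs->sd->sc : NULL;
    }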
The function mworker_pipe_register_per_thread() is called this way
because the master first used pipes instead of socketpairs.
Rename mworker_pipe_register_per_thread() to
mworker_sockpair_register_per_thread() in order to be more consistent.
Also update a comment inside the function.
The worker was previously changing the iocb of the socketpair to
mworker_accept_wrapper(). However, it was done using fd_insert() instead
of directly changing the callback in the fdtab[].iocb pointer.
This patch cleans up this part by removing fd_insert().
It also stops setting tid_bit on the thread mask; the socketpair will be
handled by any thread from now on.
Let's rely on tg->threads_enabled there to detect running threads. We
should probably have a dedicated function for this in order to simplify
the code and avoid the risk of using the wrong group ID.
Commit ef422ced9 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") moved the stopping_threads mask to per-group, but one
test in the loop preserved its global value instead, resulting in stopping
threads never sleeping on stop and eating 100% CPU until all were stopped.
No backport is needed.
Commit 377e37a80 ("MINOR: tinfo: add the mask of enabled threads in each
group") forgot -1 on the tgid, thus the groups was not always correctly
tested, which is visible only when running with more than one group. No
backport is needed.
Building with threads and without thread dump (e.g. macos, freebsd)
warns that thread_dump_state is unused. This happened in fact with
recent commit 1229ef312 ("MINOR: wdt: do not rely on threads_to_dump
anymore"). The solution would be to mark it unused, but after a
second thought, it can be convenient to keep it exported to help
debug crashes, so let's export it again. It's just not referenced in
include files since it's not needed outside.
The thread mask is too short to dump more than 64 bits. Thus here we're
using a different approach with two counters, one for the next thread ID
to dump (which always exists, as it's looked up), and the second one for
the number of threads done dumping. This allows to dump threads in ascending
order then to let them wait for all others to be done, then to leave without
the risk of an overlapping dump until the done count is null again.
This allows to remove threads_to_dump which was the last non-FD variable
using a global thread mask.
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
The debug_handler() function waits for other threads to join, but does
not mark itself as harmless, so if at the same time another thread tries
to isolate, this may deadlock. In practice this does not happen as the
signal is received during epoll_wait() hence under harmless mode, but
it can possibly arrive under other conditions.
In order to improve this, while waiting for other threads to join, we're
now marking the current thread as harmless, as it's doing nothing but
waiting for the other ones. This way another harmless waiter will be able
to proceed. It's valid to do this since we're not doing anything else in
this loop.
One improvement could be to also check for the thread being idle and
marking it idle in addition to harmless, so that it can even release a
full isolation requester. But that really doesn't look worth it.
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.
What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.
Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.
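In pseudo-C, the scheme roughly looks like this (illustrative names, not
the exact code):

    void thread_isolate(void)
    {
        uint prev;

        HA_ATOMIC_INC(&rdv_requests);
        set_harmless(tid);                /* we only wait from now on */

        while (1) {
            wait_all_threads_harmless();  /* scan each group's masks */
            prev = TID_NONE;
            if (HA_ATOMIC_CAS(&isolated_thread, &prev, tid))
                break;                    /* we won: we are alone */
        }
        clr_harmless(tid);                /* losers go back to waiting */
    }

    void thread_release(void)
    {
        HA_ATOMIC_STORE(&isolated_thread, TID_NONE);
        HA_ATOMIC_DEC(&rdv_requests);     /* may release all waiters */
    }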
With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.
This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck nor starved thread was found.
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.
Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).
Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.
There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.
Given that it's not used, let's just drop it entirely before it gets
used by accident.
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_tkillall() needs
this.
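The pattern of the fix, for illustration:

    /* before (wrong): bit position relative to the whole process */
    mask |= 1UL << thr;

    /* after (right): the thread's local bit within its group */
    mask |= ha_thread_info[thr].ltid_bit;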
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). clock_report_idle()
needs this.
This also implies not using all_threads_mask anymore but taking the mask
from the tgroup since it becomes relative now.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). wdt_handler() needs
this.
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_thread_dump() needs
this.
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to make clear which ones are counted there (in case in the future we
allow to stop some).
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However we'll soon
frequently need the mask corresponding to the group ID, and the risk of
getting it wrong with the +1 or of shifting 1 instead of 1UL is significant,
so let's store the tgid_bit there.
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
frontend test
bind :8888 maxconn 10 thread 1-2
rate-limit sessions 5
Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
Right now when an inter-thread wakeup happens, we preliminary check if the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean because the sleeping mask cannot
be entirely trusted: a thread that's about to wake up will already have
its sleeping bit removed.
This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
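A hedged sketch of the resulting logic (simplified):

    void wake_thread(int thr)
    {
        struct thread_ctx *ctx = &ha_thread_ctx[thr];
        uint prev = _HA_ATOMIC_FETCH_OR(&ctx->flags, TH_FL_NOTIFIED);

        /* only disturb the poller if the thread is sleeping and was
         * not already notified; the target thread clears the flag
         * before checking its task lists */
        if ((prev & (TH_FL_SLEEPING | TH_FL_NOTIFIED)) == TH_FL_SLEEPING)
            wake_poller(thr);   /* illustrative helper */
    }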
When enabling an FD that's only bound to another thread, instead of
always picking the first one, let's pick a random one. This is rarely
used (enabling a frontend, or session rate-limiting period ending),
and has greater chances of avoiding that some obscure corner cases
could degenerate into a poorly distributed load.
Till now, updt_fd_polling() used to check if all the target threads were
sleeping, and only then would wake an owning thread up. This causes several
problems among which the need for the sleeping_thread_mask and the fact that
by the time we wake one thread up, it has changed.
This commit changes this by leaving it to wake_thread() to perform this
check on the selected thread, since wake_thread() is already capable of
doing this now. Concretely speaking, for updt_fd_polling() it will mean
performing one computation of an ffsl() before knowing the sleeping status
on a global FD state change (which is very rare and not important here,
as it basically happens after relaxing a rate-limit (i.e. once a second
at best) or after enabling a frontend from the CLI); thus we don't care.
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
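A sketch of what this centralization looks like (helper names indicative):

    static inline void fd_leaving_poll(int wait_time, int status)
    {
        clock_leaving_poll(wait_time, status); /* wall clock + idle time */
        thread_harmless_end();                 /* no longer harmless */
        _HA_ATOMIC_AND(&th_ctx->flags, ~TH_FL_SLEEPING); /* awake again */
    }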
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
this. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
Now that the inter-task wakeups are cheap, there's no point in using
task_instant_wakeup() anymore when dequeueing tasks. The use of the
regular task_wakeup() is sufficient there and will preserve a better
fairness: the test that went from 40k to 570k RPS now gives 580k RPS
(down from 585k RPS with previous commit). This essentially reverts
commit 27fab1dcb ("MEDIUM: queue: use tasklet_instant_wakeup() to
wake tasks").
Since we don't mix tasks from different threads in the run queues
anymore, we don't need to use the eb32sc_ trees and we can switch
to the regular eb32 ones. This uses cheaper lookup and insert code,
and a 16-thread test on the queues shows a performance increase
from 570k RPS to 585k RPS.
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared cache.
Since we now know that each thread has its own shared cache, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
grq_total was only used to know how many tasks were being queued in the
global runqueue for stats purposes, and that was transferred to the per
thread rq_total counter once assigned. We don't need this anymore since
we know where they are, so let's just directly update rq_total and drop
that one.
Since we only use the shared runqueue to put tasks only assigned to
known threads, let's move that runqueue to each of these threads. The
goal will be to arrange an N*(N-1) mesh instead of a central contention
point.
The global_rqueue_ticks had to be dropped (for good) since we'll now
use the per-thread rqueue_ticks counter for both trees.
A few points to note:
- the rq_lock still remains the global one for now so there should not
be any gain in doing this, but should this trigger any regression, it
is important to detect whether it's related to the lock or to the tree.
- there's no more reason for using the scope-based version of the ebtree
now, we could switch back to the regular eb32_tree.
- it's worth checking if we still need TASK_GLOBAL (probably only to
delete a task in one's own shared queue maybe).
The runqueue ticks counter is per-thread and wasn't initially meant to
be shared. We'll soon have to share it so let's make it atomic. It's
only updated when waking up a task, and no performance difference was
observed. It was moved in the thread_ctx struct so that it doesn't
pollute the local cache line when it's later updated by other threads.
TASK_SHARED_WQ was set upon task creation and never changed afterwards.
Thus if a task was created to run anywhere (e.g. a check or a Lua task),
all its timers would always pass through the shared timers queue with a
lock. Now we know that tid<0 indicates a shared task, so we can use that
to decide whether or not to use the shared queue. The task might be
migrated using task_set_affinity() but it's always dequeued first so
the check will still be valid.
Not only this removes a flag that's difficult to keep synchronized with
the thread ID, but it should significantly lower the load on systems with
many checks. A quick test with 5000 servers and fast checks that were
saturating the CPU shows that the check rate increased by 20% (hence the
CPU usage dropped by 17%). It's worth noting that run_task_lists() almost
no longer appears in perf top now.
This removes the mask-based variant so that from now on the low-level
function becomes appctx_new_on() and it takes either a thread number or
a negative value for "any thread". This way we can use task_new_on() and
task_new_anywhere() instead of task_new() which will soon disappear.
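Usage then becomes (sketch):

    t = task_new_on(tid);     /* bound to one specific thread */
    t = task_new_anywhere();  /* may run on any thread */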
At a few places the task's thread mask was still used. Now we know that
it's always either one bit or all bits of all_threads_mask, so we can
replace it with either 1<<tid or all_threads_mask depending on what's
expected.
It's worth noting that the global_tasks_mask is still set this way and
that it's reaching its limits. Similarly, the task_new() API would deserve
an update to stop using a thread mask and use a thread number instead.
Similarly, task_set_affinity() should be updated to directly take a
thread number.
At this point the task's thread mask is not used anymore.
At several places we need to figure the ID of the first thread allowed
to run a task. Till now this was performed using my_ffsl(t->thread_mask)
but since we now have the thread ID stored into the task, let's use it
instead. This is tagged major because it starts to assume that tid<0 is
strictly equivalent to atleast2(thread_mask), and that as such, the
current thread is among the allowed ones.
The tasks currently rely on a mask but do not have an assigned thread ID,
contrary to tasklets. However, in practice they're either running on a
single thread or on any thread, so that it will be worth simplifying all
this in order to ease the transition to the thread groups.
This patch introduces a "tid" field in the task struct, that's either
the number of the thread the task is attached to, or a negative value
if the task is not bound to a thread (i.e. its mask is all_threads_mask).
The new ID is only set and updated but not used yet.
The thread mask will not be used anymore, instead the thread id only
is used. Interestingly it was already implemented in the parsing but
not used. The single/multi thread argument is not needed anymore since
it's sufficient to pass tid<0 to get a multi-threaded task/tasklet.
This is in preparation for the removal of the thread_mask in tasks as
only this debug code was using it!
Before 2.3, after an async crypto processing or on session close, the engine
async file descriptors were removed from the fdtab but not closed, because
it is the engine which created the file descriptor and it is responsible
for closing it. In 2.3 the fd_remove() call was replaced by fd_stop_both()
which stops the polling but does not remove the fd from the fdtab, so the
fd remains indefinitely in the fdtab.
A simple replacement by fd_delete() is not a valid fix because fd_delete()
removes the fd from the fdtab but also closes the fd. And the fd will be
closed twice: by the haproxy's core and by the engine itself.
Instead, let's set FD_DISOWN on the FD before calling fd_delete() which will
take care of not closing it.
This patch must be backported on branches >= 2.3, and it relies on this
previous patch:
MINOR: fd: add a new FD_DISOWN flag to prevent from closing a deleted FD
As mentioned in the patch above, a different flag will be needed in 2.3.
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.
This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fd_tab. This way it's totally unregistered from the poller but still
open.
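The expected usage pattern is along these lines (sketch):

    /* the fd belongs to external code (e.g. an SSL engine) which will
     * close it itself: unregister it everywhere but do not close it */
    HA_ATOMIC_OR(&fdtab[fd].state, FD_DISOWN);
    fd_delete(fd);   /* removed from the fdtab and poller, still open */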
This patch must be backported on branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted on 2.3
because state flags were stored in a different way (via bits in the
structure).
Adjust FIN signal on Rx path for the application layer: ensure that the
receive buffer has no gap.
Without this extra condition, FIN was signalled as soon as the STREAM
frame with FIN was received, even if we were still waiting to receive
missing offsets.
This bug could have led to incomplete requests read from the
application protocol. However, in practice this bug has very little
chance to happen as the application layer ensures that the demuxed frame
length is equivalent to the buffer data size. The only way for this to
happen is to receive the FIN STREAM frame while the H3 demuxer is still
processing a frame which is not the last one of the stream.
This must be backported up to 2.6. The previous patch on ncbuf is
required for the newly defined function ncb_is_fragmented().
MINOR: ncbuf: implement ncb_is_fragmented()
Implement a new status function for ncbuf. It allows to quickly report
if a buffer contains data in a fragmented way, i.e. with gaps in between
or at start of the buffer.
To summarize, a buffer is considered as non-fragmented in the following
cases :
- a null or empty buffer
- a full buffer
- a buffer containing exactly one data block at the beginning, followed
by a gap until the end.
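This is what allows the Rx FIN fix above to be expressed as (sketch, field
names assumed):

    /* only report FIN once the final size is known and no gap remains
     * in the receive buffer */
    if ((qcs->flags & QC_SF_SIZE_KNOWN) && !ncb_is_fragmented(&qcs->rx.ncbuf))
        fin = 1;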
Since a previous refactoring, the application protocol layer is not
required anymore to call qcs_consume(). This function is now automatically
called by the MUX itself.
First we add a loop around recvfrom() in the lowest level I/O handler
quic_sock_fd_iocb() to collect as many datagrams as possible during its
tasklet wakeup, with a limit: we recvfrom() at most "maxpollevents"
datagrams. Furthermore we add a local task list to the datagram handler
quic_lstnr_dghdlr() which is passed to the first datagram parser qc_lstnr_pkt_rcv().
This latter parser only identifies the connection associated to the datagrams, then
wakes up the highest level packet parser I/O handlers (quic_conn.*io_cb()) after
it is done, thanks to the call to tasklet_wakeup_after() which replaces from now on
the call to tasklet_wakeup(). This should drastically reduce the latency and the
chances to fill the RX buffers at the QUIC connection level as reported in
GH #1737 by Tristan.
These modifications depend on this commit:
"MINOR: task: Add tasklet_wakeup_after()"
Must be backported to 2.6 with the previous commit.
Remove the call to qc_list_all_rx_pkts() which prints messages on stderr
during RX buffer overruns, and add a new counter for the number of dropped packets
because of such events.
Must be backported to 2.6
When the connection RX buffer is full, the received packets are dropped.
Some of them were not taken into account by the ->dropped_pkt counter.
This very simple patch has no impact at all on the packet handling workflow.
Must be backported to 2.6.
We want to be able to schedule a tasklet onto a thread after the current tasklet
is done. What we have to do is to insert this tasklet at the head of the thread
task list. Furthermore, we would like to serialize the tasklets: they must
be run in the same order as the one in which they have been scheduled. This
is implemented by passing a list of tasklets as parameter (see the <head>
parameter) which must be reused for subsequent calls.
_tasklet_wakeup_after_on() is implemented to accomplish this job.
tasklet_wakeup_after_on() and tasklet_wakeup_after() are only wrapper macros
around _tasklet_wakeup_after_on(). tasklet_wakeup_after_on() does exactly the
same thing as _tasklet_wakeup_after_on() without having to pass the filename
and line number as parameters (useful when DEBUG_TASK is enabled).
tasklet_wakeup_after() also hides the usage of the thread parameter, using
the <tl> tasklet's thread ID.
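Usage sketch (the list head must be carried across calls to preserve
ordering):

    struct list *head = NULL;

    /* tl1 then tl2 will run, in this order, right after the current
     * tasklet completes */
    head = tasklet_wakeup_after(head, tl1);
    head = tasklet_wakeup_after(head, tl2);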
Return QPACK_DECOMPRESSION_FAILED error code when dealing with dynamic
table references. This is justified as for now haproxy does not
implement dynamic table support and advertises a zero-sized table.
The H3 calling function will thus reuse this code in a CONNECTION_CLOSE
frame, in conformance with the QPACK RFC.
This proper error management allows to remove obsolete ABORT_NOW guards.
This is a complement to partial fix from commit
debaa04f9e
BUG/MINOR: qpack: abort on dynamic index field line decoding
The main objective is to fix coverity report about usage of
uninitialized variable when receiving dynamic table references. These
references are invalid as for the moment haproxy advertises a 0-sized
dynamic table. An ABORT_NOW clause is present to catch this. A following
patch will clean up this in order to properly handle QPACK errors with
CONNECTION_CLOSE.
This should fix github issue #1753.
No need to backport as this was introduced in the current dev branch.
Emit a CONNECTION_CLOSE if HEADERS parsing function returns an error.
This is useful to remove previous ABORT_NOW guards.
For the moment, the whole connection is closed. In the future, it may be
justified to only reset the faulting stream in some cases. This requires
the implementation of RESET_STREAM emission.
The local variable 't' was renamed 'static_tbl'. Fix its name in the
qpack_debug_printf() statement which is activated only with QPACK_DEBUG
mode.
No need to backport as this was introduced in current dev branch.
This patch adds a filter to limit bandwidth at the stream level. Several
filters can be defined. A filter may limit incoming data (upload) or
outgoing data (download). The limit can be defined per-stream or shared via
a stick-table. For a given stream, the bandwidth limitation filters can be
enabled using the "set-bandwidth-limit" action.
A bandwidth limitation filter can be used indifferently for HTTP or TCP
streams. For HTTP streams, only the payload transfer is limited. The filter is
pretty simple for now. But it was designed to be extensible. The current
design tries, as far as possible, to never exceed the limit. There is no
burst.
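A minimal configuration sketch (directive names as introduced by this
feature; exact syntax may differ):

    filter bwlim-out mylimit default-limit 1m default-period 1s
    http-response set-bandwidth-limit mylimit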
In GH #1760 (which is marked as being a feature), there were compilation
errors on MacOS which could be reproduced in Linux when building 32-bit code
(-m32 gcc option). Most of them were due to mixed variable types in the QUIC_MIN
macro or to using the size_t type in place of the uint64_t type.
Must be backported to 2.6.
This previous commit:
"BUG/MAJOR: Big RX dgrams leak when fulfilling a buffer"
partially fixed an RX dgram memleak. There is a missing break in the loop which
looks for the first datagram attached to an RX buffer dgrams list which may be
reused (because consumed by the connection thread). So when several dgrams were
consumed by the connection thread and are present in the RX buffer list, some are
leaked because they are never reused: they are simply removed from their list.
Furthermore, as commented in this patch, there is always at least one dgram
object attached to an RX dgrams list, except the first time we enter this
I/O handler function for this RX buffer. So, there is no need to use a loop
to lookup and reuse the first datagram in an RX buffer dgrams list.
This issue was reproduced with the quiche client with plenty of POST requests
(100000 streams):
cargo run --bin quiche-client -- https://127.0.0.1:8080/helloworld.html
--no-verify -n 100000 --method POST --body /var/www/html/helloworld.html
and could be reproduced with GET requests.
This bug was reported by Tristan in GH #1749.
Must be backported to 2.6.
When entering the quic_sock_fd_iocb() I/O handler, which is responsible
for recvfrom()ing datagrams, the first thing done is to try
to reuse a dgram object containing metadata about the received
datagrams which has been consumed by the connection thread.
If this object could not be used for any reason, then when we
"goto out" of this function, we must release the memory allocated
for this object, otherwise it will leak. Most of the time, this happened
when we filled a buffer, as reported in GH #1749 by Tristan. This is why we
added a pool_free() call just before the out label. We mark <new_dgram> as NULL
when it could successfully be used.
Thank you to Tristan and Willy for their participation on this issue.
Must be backported to 2.6.
After having filled a buffer, then marked it as full, we must
consume the remaining space. But to do that, and not to erase the
already existing data, we must check that there is no remaining data
after the tail of the buffer (between the tail and the head).
This is done by adding a condition to test that adding the number of
bytes from the remaining contiguous space to the tail does not
pass the wrapping position in the buffer.
Must be backported to 2.6.
Sometimes using "debug dev memstats" can be frustrating because all
pool allocations are reported through pool-os.h and that's all.
But in practice there's nothing wrong with also intercepting pool_alloc,
pool_free and pool_zalloc and reporting their call counts and locations,
so that's what this patch does. It only uses an alternate set of macros
for these 3 calls when DEBUG_MEM_STATS is defined. The outputs are
reported as P_ALLOC (for both pool_malloc() and pool_zalloc()) and
P_FREE (for pool_free()).
A curious practise seems to have started long ago and contaminated various
code areas, consisting in appending "_pool" at the end of the name of a
given pool. That makes no sense as the name is only used to name the pool
in diags such as "show pools", and since names are truncated there, this
adds some confusion when analysing the dump outputs. Let's just clean all
of them at once; they were essentially in SSL and QUIC.
There's a subtle bug in stream_free() when releasing captures. The
pools may be NULL when no capture is defined, and the calls to
pool_free() are unconditional. The only reason why this doesn't
cause trouble is because the pointer to be freed is always NULL in
this case and we don't go further down the chain. That's particularly
ugly and it complicates debugging, so let's only call these ones when
the pointers are set.
There's no impact on running code, it only fools those trying to debug
pools manually. There's no need to backport it though, unless it helps
for debugging sessions.
freq_ctr_overshoot_period() function may be used to retrieve the excess of
events over the current period for a given frequency counter, ignoring the
history. It is a way to compare the "current rate" (the number of events over
the current period) to a given rate and estimate the excess of events.
It may be used to safely add new events, especially at the beginning of the
current period for a frequency counter with large period. This way, it is
possible to smoothly add events during the whole period without quickly
consuming all the quota at the beginning of the period and waiting for the
next one to be able to add new events.
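Usage sketch (assuming a counter/period/frequency argument order):

    /* how many events exceed <target_rate> over the current period? */
    excess = freq_ctr_overshoot_period(&ctr, period, target_rate);
    if (excess > 0) {
        /* postpone or reject <excess> events to smooth the rate */
    }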
Because of the previous fix, if the HTTP parsing is performed when the
"method" sample fetch is called, we always rely on the string representation
of the request method.
Indeed, if no parsing was performed when the "method" sample fetch is
called, the transaction method is HTTP_METH_OTHER because it was just
initialized. However, without this patch, in this case, we always retrieve
the method by reading the request start-line.
Now, when the method is HTTP_METH_OTHER, we systematically try to parse the
request but the method is tested once again after the parsing to be able to
use the integer representation when possible.
This patch must be backported as far as 2.0.
This patch is required to fix the "method" sample fetch. But it makes sense to
initialize the method of an HTTP transaction to HTTP_METH_OTHER. This way,
before the request parsing, the method is considered as unknown except if we
are able to retrieve the request start-line. It is especially important for
TCP streams.
About the "method" sample fetch, this patch is a way to be sure no random
method is returned when the sample fetch is used on a TCP stream before any
HTTP parsing.
This patch must be backported as far as 2.0.
This bug was revealed by key_update QUIC tracker test. During this test,
after the handshake has succeeded, the client sends a 1-RTT packet containing
only a PING frame. On our side, we do not acknowledge it, because we have
no ack-eliciting packet to send. This is not correct. We must acknowledge all
the ack-eliciting packets unconditionally. But we must not send too much
ACK frames: every two packets since we have sent an ACK frame. This is the test
(nb_aepkts_since_last_ack >= QUIC_MAX_RX_AEPKTS_SINCE_LAST_ACK) which ensure
this is the case. But this condition must be checked at the very last time,
when we are building a packet and when an acknowledgment is required. This
is not to qc_may_build_pkt() to do that. This boolean function decides if
we have packets to send. It must return "true" if there is no more ack-eliciting
packets to send and if an acknowledgement is required.
We must also add a "force_ack" parameter to qc_build_pkt() to force the
acknowledments during the handshakes (for each packet). If not, they will
be sent every two packets. This parameter must be passed to qc_do_build_pkt()
and be checked alongside the others conditions when building a packet to
decide to add an ACK frame or not to this packet.
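The resulting decision at packet building time may be sketched as (field
paths approximate):

    /* an ACK frame is required either when forced (handshake packet
     * number spaces) or every two ack-eliciting packets received */
    int ack_required = force_ack ||
        (qel->pktns->rx.nb_aepkts_since_last_ack >=
         QUIC_MAX_RX_AEPKTS_SINCE_LAST_ACK);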
Must be backported to 2.6.
A bug was introduced by commit 9bf3a1f67e
"BUG/MINOR: ssl: Fix crash when no private key is found in pem".
If a private key is already contained in a pem file, we will still look
for a .key file and load its private key if it exists when we should
not.
This patch should be backported to all branches where the original fix
was backported (all the way to 2.2).
As reported in issue #1755, gcc-9.3 and 9.4 emit a "maybe-uninitialized"
warning in cli_io_handler_commit_cafile_crlfile() because it sees that
when the "path" variable is not set, we're jumping to the error label
inside the loop but cannot see that the new state will avoid the places
where the value is used. Thus it's a false positive but a difficult one.
Let's just preset the value to NULL to make it happy.
This was introduced in 2.7-dev by commit ddc8e1cf8 ("MINOR: ssl_ckch:
Simplify I/O handler to commit changes on CA/CRL entry"), thus no
backport is needed for now.
Sometimes we need to be able to signal one thread among a mask, without
caring much about which bit will be picked. At the moment we use ffsl()
for this but this sometimes results in imbalance at certain specific
places where the same first thread in a set is always the same one that
is selected.
Another approach would consist in using the rank finding function but it
requires a popcount and a setup phase, and possibly a modulo operation
depending on the popcount, which starts to be very expensive.
Here we take a different approach. The idea is that an input bit value is
passed, from 0 to LONGBITS-1, and that as much as possible we try to
pick the bit matching it if it is set. Otherwise we look at a mirror
position based on a decreasing power of two, and jump to the side
that still has bits left. In 6 iterations it ends up spotting one bit
among 64 and the operations are very cheap and optimizable. This method
has the benefit that we don't care where the holes are located in the
mask, thus it shows a good distribution of output bits based on the
input ones. A long-time test shows an average of 16 cycles, or ~4ns
per lookup at 3.8 GHz, which is about twice as fast as using the rank
finding function.
Just like for that one, the code was stored into tools.c since we don't
have a C file for intops.
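A self-contained sketch of the idea (not the exact haproxy code; assumes
64-bit longs and a non-zero mask):

    /* pick one set bit of <mask>, preferring position <bit> (0..63);
     * each iteration halves the window, jumping to the mirror half
     * when the preferred one has no bit left */
    static unsigned int pick_set_bit(unsigned long mask, unsigned int bit)
    {
        unsigned int base = 0, half;

        for (half = 32; half; half >>= 1) {
            unsigned long lo_mask = ((1UL << half) - 1) << base;
            unsigned long hi_mask = lo_mask << half;

            if ((bit & half) ? (mask & hi_mask) != 0
                             : (mask & lo_mask) == 0)
                base += half;   /* descend into the upper half */
        }
        return base;
    }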
In bug #1751, it was reported that haproxy is consuming too much memory
since the 2.4 version. This is because of a change in the master, which
completely loses its configuration in wait mode, including its maxconn.
Without the maxconn, haproxy will try to compute one itself, and will
allocate RAM consequently, too much in our case. Which means the master
will have a too high maxconn and too much RAM allocated.
The patch fixes the issue by setting the maxconn to the default value
when re-executing the master in wait mode.
Must be backported as far as 2.5.
Implement quic_tp_version_info_dump() to dump such a transport parameter (only remote).
Call it from quic_transport_params_dump() which dumps all the transport parameters.
Can be backported to 2.6 as it's useful for debugging.
All packets received during handshakes must be acknowledged asap. This was
not the case for Handshake packets received. At this time, this had
no impact because the client often has only one Handshake packet to send
and the last handshake packet to be sent on our side always embeds a
HANDSHAKE_DONE frame which leads the client to consider it has no more
handshake packets to send.
Add <force_ack> to qc_may_build_pkt() to force an ACK frame to be sent.
Set this parameter to 1 when sending packets from Initial or Handshake
packet number spaces, 0 when sending only Application level packets.
Must be backported to 2.6.
The crash occurs when the same certificate which is used on both a
server line and a bind line is inserted in a crt-list over the CLI.
This is quite uncommon as using the same file for a client and a server
certificate does not make sense in a lot of environments.
This patch fixes the issue by skipping the insertion of the SNI when no
bind_conf is available in the ckch_inst.
Change the reg-test to reproduce this corner case.
Should fix issue #1748.
Must be backported as far as 2.2. (it was previously in ssl_sock.c)
Add an ABORT_NOW() clause if indexed field line referred to the dynamic
table. This is required as current haproxy QPACK implementation does not
support the dynamic table.
Note that this should not happen as haproxy explicitly advertises a
null-sized dynamic table to the other peer.
This should fix github issue #1753.
No need to backport as this was introduced by commit
b666c6b26e
MINOR: qpack: improve decoding function
Free Rx packet in the datagram handler if the packet was not taken in
charge by a quic_conn instance. This is reflected by the packet refcount
which is null.
A packet can be rejected for a variety of reasons: for example, failed
decryption, no Initial token leading to a Retry emission, or null datagram
padding.
This patch should resolve the Rx packets memory leak observed via "show
pools" with the previous commit
2c31e12936
BUG/MINOR: quic: purge conn Rx packet list on release
This specific memory leak instance was reproduced using quiche client
which uses null datagram padding.
This should partially resolve github issue #1751.
It must be backported up to 2.6.
When releasing a quic_conn instance, free all remaining Rx packets in
quic_conn.rx.pkt_list. This partially fixes a memory leak on Rx packets
which can be observed after several QUIC connections establishment.
This should partially resolve github issue #1751.
It must be backported up to 2.6.
As reported by broxio in GH #1757, there was a duplicated field name
for "quic_streams_data_blocked_bidi", due to a copy and paste without
renaming, I guess.
Must be backported to 2.6.
This counter must be incremented only once per connection and decremented
as soon as the handshake has failed or succeeded. This is a gauge. Under certain
conditions this counter could be decremented twice. For instance
after having received a TLS alert, then upon SSL_do_handshake() failure.
To stop having to deal with all the current combinations which can lead to such a
situation (and the next to come), add a connection flag to denote if this counter
has already been decremented for a connection. So, this counter must be decremented
only if this flag has not already been set.
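The fix pattern (flag and counter names illustrative):

    /* decrement the gauge only once per connection, whatever the
     * combination of events leading here */
    if (!(qc->flags & QUIC_FL_CONN_CNT_DECREMENTED)) {
        qc->flags |= QUIC_FL_CONN_CNT_DECREMENTED;
        HA_ATOMIC_DEC(&counters->half_open_conn);
    }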
Must be backported up to 2.6.
Non-constant expressions were used to initialize constant variables, leading
to such compilation errors:
src/xprt_quic.c:66:3: error: initializer element is not a constant expression
.key_label_len = strlen(QUIC_HKDF_KEY_LABEL_V1),
Reproduced with the CC=gcc-4.9 compilation option.
Fix this using macros for each HKDF label.
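The shape of the fix (macro name illustrative):

    /* before: strlen() is not a constant expression for old compilers */
    .key_label_len = strlen(QUIC_HKDF_KEY_LABEL_V1),

    /* after: the length is derived from the literal at compile time */
    #define QUIC_HKDF_KEY_LABEL_V1_LEN (sizeof(QUIC_HKDF_KEY_LABEL_V1) - 1)
    .key_label_len = QUIC_HKDF_KEY_LABEL_V1_LEN,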
In order to better detect the danger caused by extra shared libraries
which replace some symbols, upon dlopen() we now compare a few critical
symbols such as malloc(), free(), and some OpenSSL symbols, to see if
the loaded library comes with its own version. If this happens, a
warning is emitted and TAINTED_REDEFINITION is set. This is important
because some external libs might be linked against different libraries
than the ones haproxy was linked with, and most often this will end
very badly (e.g. an OpenSSL object is allocated by haproxy and freed
by such libs).
Since the main source of dlopen() calls is the Lua lib, via a "require"
statement, it's worth trying to show a Lua call trace when detecting a
symbol redefinition during dlopen(). As such we emit a Lua backtrace if
Lua is detected as being in use.
Several bug reports were caused by shared libraries being loaded by other
libraries or some Lua code. Such libraries could define alternate symbols
or include dependencies to alternate versions of a library used by haproxy,
making it very hard to understand backtraces and analyze the issue.
Let's intercept dlopen() and set a new TAINTED_SHARED_LIBS flag when it
succeeds, so that "show info" makes it visible that some external libs
were added.
The redefinition is based on the definition of RTLD_DEFAULT or RTLD_NEXT
that were previously used to detect that dlsym() is usable, since we need
it as well. This should be sufficient to catch most situations.
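A hedged sketch of the interception mechanism (simplified; the real code
also compares critical symbols):

    #define _GNU_SOURCE
    #include <dlfcn.h>

    void *dlopen(const char *filename, int flags)
    {
        static void *(*_dlopen)(const char *, int);
        void *ret;

        if (!_dlopen)
            _dlopen = dlsym(RTLD_NEXT, "dlopen");

        ret = _dlopen(filename, flags);
        if (ret)
            mark_tainted(TAINTED_SHARED_LIBS); /* assumed helper */
        return ret;
    }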
This function may be used to try to show where some Lua code is currently
being executed. It tries hard to detect the initialization phase, both for
the global and the per-thread states, and for runtime states. This intends
to be used by error handlers to provide the users with indications about
what Lua code was being executed when the error triggered.
Calling hlua_traceback() sometimes reports empty entries looking like:
[C]: ?
These ones correspond to certain internal C functions of the Lua library,
but they do not provide any information and even complicate the
interpretation of the dump. Better just skip them.
The commit 731c8e6cf ("MINOR: stream: Simplify retries counter calculation")
introduced a regression. It broke the dontlog-normal option because the test
on the connection retries counter was not updated accordingly.
This patch should fix the issue #1754. It must be backported to 2.6.
On destructive connection upgrade, instead of using the new mux name to
abort the old stream, we can rely on the stream connector flags. If it is
detached after the upgrade, it means the stream will not be reused by the
new mux and it must be aborted.
This patch may be backported to 2.6.
When the protocol is changed for a client connection at the stream level
(from TCP to H1/H2), there are two cases. The stream may be reused or
not. The first case, when the stream is reused is working. The second one is
buggy since the conn-stream refactoring and leads to a crash.
In this case, the new mux doesn't reuse the stream. It must be silently
aborted. However, its front stream connector is still referencing the
connection. So it must be detached. But it must be performed in two stages,
to be sure to not lose the context for the upgrade and to be able to
roll back on error. So now, before the upgrade, we prepare to detach the
stconn and it is finally detached if the upgrade succeeds. There is a trick
here: we pretend the stconn is detached but its state is preserved.
This patch must be backported to 2.6.
tasklet_kill() was introduced in 2.5-dev4 with commit 7b368339a
("MEDIUM: task: implement tasklet kill"), but a comparison error
there makes tasklets killed on thread 1 assigned to the killing
thread. Fortunately, the function was finally not used so there's
no harm right now, hence the minor tag, but this must be fixed and
backported in case a later fix relies on it.
This should be backported to 2.5.
Until now haproxy supported only the incompatible version negotiation feature, which
consists in sending a Version Negotiation packet after having received a long packet
without a compatible value in its version field. This version value is the version
used to build the current packet. This patch does not modify this behavior.
This patch adds the support for the compatible version negotiation feature, which
allows endpoints to negotiate, during the first flight of packets sent by the
client (or after it), the QUIC version to use for the connection.
This is done thanks to the "version_information" parameter sent by both endpoints.
To be short, the client offers a list of supported versions by preference order.
The server (or haproxy listener) chooses the first version it also supports as
the negotiated version.
This implementation has an impact on the transport parameters handling (in both
directions). Indeed, the server must send its version information, but only
after having received and parsed the client transport parameters. So we cannot
encode these parameters at the same time we instantiate a new connection.
Add QUIC_TP_DRAFT_VERSION_INFORMATION(0xff73db) new transport parameter.
Add tp_version_information new C struct to handle this new parameter.
Implement quic_transport_param_enc_version_info() (resp.
quic_transport_param_dec_version_info()) to encode (resp. decode) this
parameter.
Add qc_conn_finalize() which encodes the transport parameters and configures
the TLS stack to send them.
Add ->negotiated_ictx quic_conn C struct new member to store the Initial
QUIC TLS context for the negotiated version. The Initial secrets derivation
is version dependent.
Rename ->version to ->original_version and add ->negotiated_version to
this C struct to reflect the QUIC-VN RFC denomination.
Modify most of the QUIC TLS API functions to pass a version as parameter.
Export the QUIC version definitions to be reused at least from quic_tp.c
(transport parameters).
Move the token check after the QUIC connection lookup. As this is the original
version which is sent into a Retry packet, and because this original version is
stored into the connection, we must check the token after having retrieved this
connection.
Add packet version to traces.
See https://datatracker.ietf.org/doc/html/draft-ietf-quic-version-negotiation-08
for more information about this new feature.
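For reference, the version_information parameter carries the 4-byte version
in use followed by the list of 4-byte supported versions. A struct along
these lines can represent it once decoded (field names are illustrative):

    struct tp_version_information {
        uint32_t chosen;            /* version chosen by the peer */
        const uint32_t *others;     /* other versions it supports */
        size_t nb_others;           /* number of entries in <others> */
    };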
It is not clear at all how to distinguish a QUIC draft version number from a
released one, nor, among these QUIC draft versions, which one must use the
draft QUIC TLS extension.
According to the QUIC implementations which support the v2 draft, the TLS
extension (transport parameters) to be used is the released one
(TLS_EXTENSION_QUIC_TRANSPORT_PARAMETERS).
As the unique QUIC draft version we support is 0xff00001d, and as at this time
it is the only version with 0xff as most significant byte and must use the
draft TLS extension, we select the draft TLS extension
(TLS_EXTENSION_QUIC_TRANSPORT_PARAMETERS_DRAFT) only for versions with 0xff
as most significant byte.
It is becoming difficult to handle the QUIC TLS related definitions
which arrive with a QUIC version (draft or not). So, here we add a
quic_version C struct which does not only define the QUIC version number,
but also the QUIC TLS definitions which depend on a QUIC version.
Modify consequently all the QUIC TLS API to reuse these definitions
through the new quic_version C struct.
Implement the quic_pkt_type() function which returns a packet type (0 up to 3)
depending on the QUIC version number.
Stop hard-coding the Retry packet first byte in send_retry(): this is no
longer possible because the packet type field depends on the QUIC version.
Also modify quic_build_packet_long_header() for the same reason: the packet
type depends on the QUIC version.
Add a quic_version C struct member to quic_conn C struct.
Modify qc_lstnr_pkt_rcv() to set this member asap.
Remove the version member from quic_rx_packet C struct: a packet is attached
asap to a connection (or dropped) which is the unique object which should
store the QUIC version.
Modify qc_pkt_is_supported_version() to return a supported quic_version C
struct from a version number.
Add Initial salt for QUIC v2 draft (initial_salt_v2_draft).
This is to prepare the support for the QUIC v2 version. The packet type
depends on the version. So, we must parse the version early enough,
before determining the type of each packet.
The nonce and keys used to cipher the Retry tag depend on the QUIC version.
Add these definitions for 0xff00001d (draft-29) and v2 QUIC version. At least
draft-29 is useful for QUIC tracker tests with "quic-force-retry" enabled
on haproxy side.
Validated with -v 0xff00001d ngtcp2 option.
Could not validate the v2 nonce and key at this time because they are not yet
supported.
Use the same version as the one received. This is safe because the
version is checked before anything else, a Version Negotiation packet
being sent if it is not supported.
Must be backported to 2.6.
A pretty ugly mistake was introduced recently with an invalid goto statement
which prevents QUIC compilation of haproxy.
This must be backported on 2.6 as a complement to
60ef19f137
BUG/MINOR: h3/qpack: deal with too many headers
Adjust the decoding loop by using temporary ists for the header name and
value. The header is inserted at the end of an iteration, which guarantees
that we do not insert the name alone in the list in case of an error on
value decoding. This also helps the function readability by centralizing the
LIST insert operation.
The return value of the decoding function is also changed. Now on
success the number of headers inserted in the input list is returned.
This change has no impact as the success value is not used by the caller.
This is mainly done to have a behavior similar to the hpack decoding
function.
Ensure that we never insert too many entries in a headers input list.
On the decoding side, a new error QPACK_ERR_TOO_LARGE is reported in
this case.
This prevents a crash if the number of headers on an H3 request or response
is greater than the tune.http.maxhdr config value. Previously, a crash would
occur in the QPACK decoding function.
Note that the process still crashes later with ABORT_NOW() because error
reporting on frame parsing is not implemented for now. It should be
treated with a RESET_STREAM frame in most cases.
This can be backported up to 2.6.
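A sketch of the kind of guard involved, assuming the decoder fills a <list>
array of <list_size> entries sized from tune.http.maxhdr, with one slot
reserved for the terminating null header as in the hpack decoder (the error
sign convention is assumed):

    if (hdr_idx >= list_size - 1)
        return -QPACK_ERR_TOO_LARGE; /* too many headers for the list */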
Post-base indices are not supported at the moment for decoding. This
should never be encountered as it is only used with a dynamic table.
However, haproxy deactivates support for the dynamic table via its
SETTINGS.
Use ABORT_NOW() if this situation happens anyway. This should help
debugging instead of failing silently without error reporting.
Complete QPACK decoding with a full implementation of the literal field name
with literal value representation. This change is mandatory to support
decoding of header names not present in the QPACK static table.
Previously, these headers were silently ignored and not transferred in
the backend request.
QPACK decoding should now be sufficient to deal with all real
situations. Only the post-base indices representation is not handled, but
this should not cause a problem as it is only used for the dynamic
table, whose size is null as announced by the haproxy implementation.
The direct impact of this change is that it should now be possible to
use complex webapps through a QUIC frontend.
This must be backported up to 2.6.
Clean up QPACK decoder API by removing dependencies on ncbuf and
MUX-QUIC. This reduces include statements. It will also help to
implement a standalone QPACK decoder.
Add comments on the decoding function to facilitate code analysis.
Also remove the qpack_debug_hexdump() which prints the whole remaining buffer
on each header parsing. With a large HEADERS frame payload, QPACK traces
are complicated to debug with this statement.
If we've stopped consulting the local wait queue due to too many tasks
(max_processed <= 0), there's no point starting to lock the shared WQ,
checking the first task's expiration date, and upgrading the lock just to
refrain from doing the work because of the limit. All this does is
increase contention on an already contended system.
Note that there is still a fairness issue in this WQ dequeuing code. If
each thread is busy with expired tasks, no thread will dequeue the global
ones. In practice it doesn't make much sense and should quickly resorb,
but it could be nice to have an alternating flag indicating where to
start from on next call to improve this.
This macro was used both for binding and for lookups. When binding tasks
or FDs, using all_threads_mask instead is better as it will later be per
group. For lookups, ~0UL always does the job. Thus in practice the macro
was already almost not used anymore since the rest of the code could run
fine with a constant of all ones there.
In 1.9-dev1, commit 5bc9972ed ("BUG/MINOR: lua/threads: Make lua's tasks
sticky to the current thread") was added to detect unconfigured Lua tasks that
could run on any thread, by comparing their thread mask with MAX_THREADS_MASK.
The proper way to do it is in fact to check for at least 2 threads in their
mask, as shown in the sketch below. This is more reliable and allows to get
rid of MAX_THREADS_MASK there.
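A mask contains at least two set bits exactly when clearing its lowest set
bit leaves a non-zero value, so the test needs no MAX_THREADS_MASK reference
(helper name is illustrative):

    /* true if <mask> allows the task to run on more than one thread */
    static inline int thread_mask_is_shared(unsigned long mask)
    {
        return (mask & (mask - 1)) != 0;
    }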
Each thread has its own local thread id and its own global thread id,
in addition to the masks corresponding to each. Once the global thread
ID can go beyond 64 it will not be possible to have a global thread Id
bit anymore, so better start to remove it and use only the local one
from the struct thread_info.
This simply replaces a call to task_new(1<<thr) with task_new_on(thr)
so that we can later isolate the changes required to add more thread
group stuff.
Instead of having a global mask of all the profiled threads, let's have
one flag per thread in each thread's flags. They are never accessed more
than one at a time and are better located inside the threads' contexts for
both performance and scalability.
LIST_ELEM macro was incorrectly used in the loop when purging
flow-control frames from qcc.lfctl.frms on MUX release. This caused a
segfault in qc_release() due to an invalid quic_frame pointer instance.
The occurrence of this bug seems fairly rare. To happen, some
flow-control frames must have been allocated but not yet sent just as
the MUX release is triggered.
I did not find a reproducer scenario. Instead, I artificially triggered
it by inserting a quic_frame in qcc.lfctl.frms just before purging it in
qc_release() using the following snippet.
    struct quic_frame *frm;

    frm = pool_zalloc(pool_head_quic_frame);
    LIST_INIT(&frm->reflist);
    frm->type = QUIC_FT_MAX_DATA;
    frm->max_data.max_data = 0;
    LIST_APPEND(&qcc->lfctl.frms, &frm->list);
This should fix github issue #1747.
This must be backported up to 2.6.
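For reference, a correct purge of such a list uses the safe iterator so that
entries may be detached and freed while walking (the usual idiom, not
necessarily the exact fix):

    struct quic_frame *frm, *frm_back;

    list_for_each_entry_safe(frm, frm_back, &qcc->lfctl.frms, list) {
        LIST_DELETE(&frm->list);
        pool_free(pool_head_quic_frame, frm);
    }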
The CLI applet processes one request after another. Thus, when several
requests are pipelined, it is important to notify that it won't consume
remaining outgoing data while it is processing a request. Otherwise, the
applet may be woken up in a loop. For instance, it may happen with the HTTP
client while we are waiting for the server response if a shutr is received.
This patch must be backported in all supported versions after an observation
period. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted. Note also that, AFAIK, the bug can
only be triggered by the HTTP client for now.
In the .chk_snd applet callback function, we must not wake up an applet if
the SE_FL_WONT_CONSUME flag is set. Indeed, if an applet explicitly specifies
that it will not consume any outgoing data, it is useless to wake it up when
more data are sent. Note the applet may still be woken up for another reason.
In this case the SE_FL_WONT_CONSUME flag will be removed. It is the applet's
responsibility to set it again, if necessary.
This patch must be backported to 2.6 after an observation period. On earlier
versions, the bug probably exists too. However, because a massive
refactoring was performed in 2.6, the patch will have to be adapted and
carefully reviewed/tested if it is backported.
Since the conn-stream refactoring, from the time the health-check is in
progress, its stream-connector is always defined. So, some tests on it are
useless and can be removed.
This patch should fix the issue #1739.
When a TCP content ruleset is evaluated, we stop waiting for more data if
the inspect-delay is reached, if there is a read error or if we know no more
data will be received. This last point is only valid for ACLs. An action may
decide to yield for another reason. For instance, in the SPOE, the
"send-spoe-group" action yields while the agent response is not
received. Thus, now, an action call is final only when the inspect-delay is
reached or if there is a read error. But it is possible for an action to
yield if the buffer is full or if CF_EOI flag is set.
This patch could be backported to all supported versions.
When the MUX transfers a big amount of data to the client, the transport
layer may reject some of them because of the congestion controller
limit. Frames built by the MUX are thus dropped, even though the streams'
transferred data are kept in buffers for future new frames.
Thus, the MUX is required to free rejected frames. This fixes a memory
leak which may grow with important data transfers.
It should be backported to 2.6 after it has been tested and validated.
TX flow-control enforcing is not straightforward : it requires the usage
of several counters at stream and connection level, in part due to the
difficult sending API between MUX and quic-conn layers.
To strengthen this part and ensure it behaves as expected, some
existing BUG_ON statements were adjusted and new ones were added. This
should help to catch errors as early as possible, as in the case with
github issue #1738.
The flow control enforced at connection level is incorrectly calculated.
There is a risk of exceeding the limit. In most cases, this results in a
segfault produced by a BUG_ON which is here to catch this kind of error.
If not compiled with DEBUG_STRICT, this should generate a connection
closed by the client due to the flow control overflow.
The problem is encountered when the transferred payload is big enough to fill
the transport congestion window. In this case, some data are rejected by
the transport layer and kept by the MUX to be reemitted later. However,
these preserved data are not counted against the connection flow control when
resubmitted, which gradually amplifies the gap between expected and real
consumed flow control.
To fix this, handle the flow-control at the connection level in the same
way as the stream level. A new field qcc.tx.offsets is incremented as
soon as data are transferred between stream TX buffers. The field
qcc.tx.sent_offsets is preserved to count bytes handled by the transport
layer and stop the MUX transfer if limit is reached.
As already stated, this bug can occur during transfers with enough
emitted data, over multiple streams. When using a single stream, the
flow control at the stream level hides it.
The BUG_ON crash is reproduced systematically with the quiche client:
$ quiche-client --no-verify --http-version HTTP/3 -n 10000 https://127.0.0.1:20443/10K
This must be backported up to 2.6 when confirmed to work as expected.
This should fix github issue #1738.
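A sketch of the two-counter scheme described above (field names follow the
commit message; the remote MAX_DATA limit name is assumed):

    /* counted once, when data enter a stream's TX buffer */
    qcc->tx.offsets += transferred;
    BUG_ON(qcc->tx.offsets > qcc->rfctl.md); /* never exceed peer's MAX_DATA */

    /* counted only when the transport layer accepts the bytes */
    qcc->tx.sent_offsets += sent;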
This is the continuation of commit 5a0e7ca5d ("BUG/MINOR: cli/stats: add
missing trailing LF after JSON outputs"). There's also a "show info json"
command which was also missing the trailing LF. It's constructed exactly
like the "show stat json", in that it dumps a series of fields without any
LF. The difference however is that for the stats output, everything was
enclosed in an array which required an LF *after* the closing bracket,
while here there's no such array so we have to emit the LF after the loop.
That makes the two functions a bit inconsistent, it's quite annoying, but
making them better would require sending one LF per line in the stats
output, which is not particularly interesting.
Given that it took 5+ years to spot that this code wasn't working as
expected it doesn't seem worth investing much time trying to refactor
it to make it look cleaner at the risk of breaking other obscure parts.
Leonhard Wimmer reported an interesting bug in github issue #1742.
Servers in disabled proxies that are configured for resolution are still
subscribed to DNS resolutions, but the LB algos are not initialized at
all since the proxy is disabled, so when the server state changes,
attempts to update its status cause a crash when the server's weight
is recalculated via a division by the proxy's total weight, which is zero.
This should be backported to all versions. Beware that before 2.5 or
so, there's no PR_FL_DISABLED flag, instead px->disabled should be
used (2.3-2.4) or PR_STSTOPPED for older versions.
Thanks to Leonhard for his report and quick test!
Patrick Hemmer reported that we have a bug in the CLI commands
"show stat json" and "show schema json" in that they forget the trailing
LF that's required to mark the end of the response. This has been the
case since the introduction of the feature in 1.8-dev1 by commit 6f6bb380e
("MEDIUM: stats: Add show json schema"), so this fix may be backported to
all versions.
The function used to parse the SETTINGS frame is incorrect as it does not
stop at the frame length but continues to parse beyond it. In most cases, it
will result in a connection closed with the error H3_FRAME_ERROR.
This bug can be reproduced with clients that send more than just a
SETTINGS frame on the H3 control stream. This is notably the case with
aioquic, which emits a MAX_PUSH_ID frame after SETTINGS.
This bug has been introduced in the current dev release, by the
following patch
62eef85961
MINOR: mux-quic: simplify decode_qcs API
thus, it does not need to be backported.
This changes the default from RFC 7540's 65535 (64k-1) to avoid
some degenerative WINDOW_UPDATE behaviors observed in the wild with
clients using 65536 as their buffer size, which then have to complete each
block with a 1-byte frame; with some servers this tends to degenerate into
1-byte WUs causing more 1-byte frames to be sent, until the transfer almost
only uses 1-byte frames.
More details here: https://github.com/nghttp2/nghttp2/issues/1722
As mentioned in previous commit (MEDIUM: mux-h2: try to coalesce outgoing
WINDOW_UPDATE frames) the issue could not be reproduced with haproxy but
individual WU frames are sent so theoretically nothing prevents this from
happening. As such it should be backported as a workaround for already
deployed clients after watching for any possible side effect with rare
clients. As an added benefit, uploads from curl now use less DATA frames
(all are 16384 now). Note that the previous patch alone is sufficient to
stop the issue with curl in case this one would need to be reverted.
[wt: edited commit message, updated doc]
Glenn Strauss from Lighttpd reported a corner case affecting curl+lighttpd
that causes some uploads to degenerate to extremely suboptimal conditions
under certain circumstances, and noted that many other implementations
were possibly not safe against this degradation.
Glenn's detailed analysis is available here:
https://github.com/nghttp2/nghttp2/issues/1722
In short, curl uses a 65536 bytes buffer and the default stream window
is 65535, with 16384 bytes per frame. Curl will then send 3 frames of
16384 bytes followed by one of 16383, will wait for a window update to
send the last byte before recycling the buffer to read the next 64kB.
On each round like this, one extra single-byte frame will be sent, and
if ACKs for these single-byte frames are not aggregated, this will only
allow the client to send one extra byte at a time. At some point it is
possible (at least Glenn observed it) to have mostly 1-byte frames in
the transfer, resulting in huge CPU usage and a long transfer.
It was not possible to reproduce this with haproxy, even when playing
with frame sizes, buffer sizes or window sizes. One reason seems to
be that we're using the same buffer size for the connection and the
stream and that the frame headers prevent the filling of the window
from happening on the same boundaries as on the sender. However it
does occasionally happen to see up to two 1-byte data frames in a row,
indicating that there's definitely room for improvement.
The WINDOW_UPDATE frames for the connection are sent at the end of the
demuxing, but the ones for the streams are currently sent immediately
after a DATA frame is processed, mostly for convenience. But we don't
need to proceed like this, we already have the counter of unacked bytes
in rcvd_s, so we can simply use that to decide when to send an ACK. It
must just be done before processing a new frame. The benefit is that
contiguous frames for the same stream will now only produce a single
WU, like for the connection. On complicated tests involving a client
that was limited to 100 Mbps transfers and a dummy Lua-based payload
consumer, it was possible to see the number of stream WU frames being
halved for a 100 MB transfer, which is already a nice saving anyway.
Glenn proposed a better workaround consisting in increasing the
default window size to 65536. This will be done in a separate patch
so that both can be studied independently in field and backported as
needed.
This patch is not very complicated and should be backportable. It just
needs to be tested in development first.
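A sketch of the resulting logic, assuming rcvd_s counts unacked stream bytes,
dsi is the demux stream ID and the helper name is illustrative:

    /* before demuxing a frame for another stream, flush the pending
     * stream-level WINDOW_UPDATE so that contiguous DATA frames for the
     * same stream produce a single WU.
     */
    if (h2c->rcvd_s && h2c->dsi != hdr.sid)
        h2c_send_strm_wu(h2c);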
BUG_ON() assertion to check for incomplete SETTINGS frame is incorrect.
It should check if frame length is greater, not smaller, than current
buffer data. Anyway, this BUG_ON() is useless as h3_decode_qcs()
prevents parsing of an incomplete frame, except for H3 DATA. Remove it
to fix this bug.
This bug was introduced in the current dev tree by commit
62eef85961
MINOR: mux-quic: simplify decode_qcs API
Thus it does not need to be backported.
This fixes crashes which happen with DEBUG_STRICT=2. Most notably, this
is reproducible with clients that emit more than just a SETTINGS frame
on the H3 control stream. It can be reproduced with aioquic for example.
The health-check attached to an email alert has no type. It is unexpected,
and since the 2.6, it is important because we rely on it to know the
application type in front of a connection at the stream-connector
level. Because the object type is not set, the SE descriptor is not properly
initialized, leading to a segfault when a connection to the SMTP server is
established.
This patch must be backported to 2.6 and may be backported as far as
2.0. However, it is only an issue for 2.6 and upper.
There is no server for email alerts. So the trace messages must be adapted
to handle this case. Information related to the server is now skipped for
email alerts and an "[EMAIL]" prefix is used.
This patch must be backported as far as 2.4.
Email alerts are based on health-checks but with no server. Thus, in
the __trace() function, responsible for writing a trace message, we must be
prepared to have no server and thus no proxy.
This patch must be backported as far as 2.4.
Building QUIC with gcc-4.4 on el6 shows this error:
src/xprt_quic.c: In function 'qc_release_lost_pkts':
src/xprt_quic.c:1905: error: unknown field 'loss' specified in initializer
compilation terminated due to -Wfatal-errors.
make: *** [src/xprt_quic.o] Error 1
make: *** Waiting for unfinished jobs....
Initializing an anonymous union like:
    struct quic_cc_event ev = {
        (...)
        .loss.time_sent = newest_lost->time_sent,
        (...)
    };
generates an error with gcc-4.4 but not when initializing the
fields outside of the declaration.
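The workaround thus looks like this (sketch; the event type constant is
assumed):

    struct quic_cc_event ev;

    memset(&ev, 0, sizeof(ev));
    ev.type = QUIC_CC_EVT_LOSS;
    ev.loss.time_sent = newest_lost->time_sent;
    /* ... remaining fields assigned the same way ... */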
Convert the return code to -1 when an error has been detected. This is
required since the previous API change on return values from the patch:
1f21ebdd76
MINOR: mux-quic/h3: adjust demuxing function return values
Without this, QUIC MUX won't consider the call as an error and will try
to remove one byte from the buffer. This may cause a BUG_ON failure if
the buffer is empty at this stage.
This bug was introduced in the current dev tree. Does not need to be
backported.
Clean the API used by decode_qcs() and transcoder internal functions.
Parsing functions now return a ssize_t which represents the number of
consumed bytes or a negative error code. The total number of consumed bytes
is returned via decode_qcs().
The API is now unified and cleaner. The MUX can thus simply use the
return value of decode_qcs() instead of subtracting the data bytes in
the buffer before and after the call. Transcoder functions are no longer
obliged to remove consumed bytes from the buffer, which was not
obvious.
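The unified contract can be summarized as follows (signature simplified):

    /* Decodes data from <b> for stream <qcs>. Returns the number of
     * consumed bytes, or a negative value on error. The MUX then
     * advances its own buffer by the returned amount.
     */
    ssize_t decode_qcs(struct qcs *qcs, struct buffer *b, int fin);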
Slightly modify the decode_qcs function used by transcoders. The MUX now
gives a buffer instance on which each transcoder is free to work. On
return from the function, the MUX removes consumed data from its own
buffer.
This reduces the number of invocations of qcs_consume() at the end of a
full demuxing process. The API is also cleaner, with the transcoders no
longer responsible for calling it with the risk of having the input buffer
freed if empty.
As a mirror to qcc/qcs types, add a h3c pointer into h3s struct. This
should help to clean up the H3 code and avoid using qcs.qcc.ctx to retrieve
the h3c instance.
smp_fc_http_major may be used to return the http version as an integer
used on the frontend or backend side. Previously, the handler only
checked for version 2 or 1 as a fallback. Extend it to support version 3
with the QUIC mux.
Commit d6c66f06a ("MINOR: ssl_ckch: Remove service context for "set ssl
crl-file" command") introduced a regression leading to a build error because
of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
A build error is reported about the path variable in the switch statement on
the commit type, in the cli_io_handler_commit_cafile_crlfile() function. The
enum contains only 2 values, but a default clause has been added to return an
error, to make GCC happy.
This patch must be backported as far as 2.5.
Commit 9a99e5478 ("BUG/MINOR: ssl_ckch: Dump CRL transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 5a2154bf7 ("BUG/MINOR: ssl_ckch: Dump CA transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
Commit 3e94f5d4b ("BUG/MINOR: ssl_ckch: Dump cert transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.2.
The same type is used for CA and CRL entries. So, in commit_cert_ctx
structure, there is no reason to have different fields for the CA and CRL
entries.
.next_ckchi_link field must be initialized to NULL instead of .next_ckchi in
the cli_parse_commit_crlfile() function. Only '.next_ckchi_link' is used in
the I/O handler.
This patch must be backported as far as 2.5 with some adaptations for the 2.5.
When loaded SSL certificates are displayed via the "show ssl cert" command,
the in-progress transaction, if any, is also displayed. However, if the
command yields, the transaction is displayed again and again.
To fix the issue, the old_ckchs field is used to remember that the
transaction was already displayed.
This patch must be backported as far as 2.2.
When loaded CA files are displayed via the "show ssl ca-file" command, the
in-progress transaction, if any, is also displayed. However, if the command
yields, the transaction is displayed again and again.
To fix the issue, the old_cafile_entry field is used to remember that the
transaction was already displayed.
This patch must be backported as far as 2.5.
When loaded CRL files are displayed via the "show ssl crl-file" command, the
in-progress transaction, if any, is also displayed. However, if the command
yields, the transaction is displayed again and again.
To fix the issue, the old_crlfile_entry field is used to remember that the
transaction was already displayed.
This patch must be backported as far as 2.5.
Because of a typo (I guess), an unknown type is used for the old entry in
the show_crlfile_ctx structure. Because this field is unused, there is no
compilation error. But it must be a cafile_entry and not a crlfile_entry.
Note this field is not used for now, but it will be.
This patch must be backported to 2.6.
Simplify cli_io_handler_commit_cafile_crlfile() handler function by
retrieving old and new entries at the beginning. In addition the path is
also retrieved at this stage. This removes several switch statements.
Note that the ctx was already validated by the corresponding parsing
function. Thus there is no reason to test the pointers.
While it is not a bug, this patch may help to fix issue #1731.
There is an enum to determine the entry type when changes are
committed on a CA/CRL entry. So use it in the service context instead of an
integer.
This patch may help to fix issue #1731.
Rewrite failures in http rules are reported as proxy errors (PRXCOND) in
logs. However, other rewrite errors are reported as internal errors. For
instance, it happens when we fail to add the X-Forwarded-For header. It is
not consistent and it is confusing. So now, all rewrite failures are
reported as proxy errors.
This patch may be backported if necessary.
The 'httpclient' command does not properly handle full buffer cases. When
the response buffer is full, we exit to retry later. However, the context
flags are updated. It means when this happens, we may lose part of the
response.
So now, flags are preserved when we fail to push data into the response
buffer. In addition, instead of dumping one part per call, we now try to
dump as much data as possible.
Finally, when there is no more data, because everything was dumped or
because we are waiting for more data from the HTTP client, the applet is
updated accordingly by calling applet_have_no_more_data(). Otherwise, when
some data are blocked, applet_putchk() already takes care to update the SE
flags. So, it is useless to call sc_need_room().
This patch should fix the issue #1723. It must be backported as far as
2.5. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted.
Commit 534645d6 ("BUG/MEDIUM: httpclient: Fix loop consuming HTX blocks from
the response channel") introduced a regression. When the response is
consumed, the HTX header blocks are removed before duplicating them. Thus,
the first header block is always lost.
This patch must be backported as far as 2.5.
'add ssl crt-list' command is also concerned. This patch is similar to the
previous ones. Full buffer cases when we try to push the reply are not
properly handled. To fix the issue, the functions responsible to add a
crt-list entry were reworked.
First, the error message is now part of the service context. This way, if we
cannot push the error message in the response buffer, we may retry later. To
do so, a dedicated state was created (ADDCRT_ST_ERROR). Then, the success
message is also handled in a dedicated state (ADDCRT_ST_SUCCESS). This way
we are able to retry to push it if necessary. Finally, the dot displayed for
each new instance is now immediately pushed in the response buffer, and
before the update. This way, we are able to retry too if necessary.
This patch should fix the issue #1724. It must be backported as far as
2.2. But a massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch will have to be adapted.
'commit ssl crl-file' command is also concerned. This patch is similar to
the previous one. Full buffer cases when we try to push the reply are not
properly handled. To fix the issue, the functions responsible to commit CA
or CRL entry changes were reworked.
First, the error message is now part of the service context. This way, if we
cannot push the error message in the response buffer, we may retry later. To
do so, a dedicated state was created (CACRL_ST_ERROR). Then, the success
message is also handled in a dedicated state (CACRL_ST_SUCCESS). This way we
are able to retry to push it if necessary. Finally, the dot displayed for
each updated CKCH instance is now immediately pushed in the response buffer,
and before the update. This way, we are able to retry too if necessary.
This patch should fix the issue #1722. It must be backported as far as
2.5. But a massive refactoring was performed in 2.6. So, for the 2.5, the
patch will have to be adapted.
When changes on a certificate are committed, a trash buffer is used to create
the response. Once done, the message is copied in the response buffer.
However, if the buffer is full, there is no way to retry and the message is
lost. The same issue may happen with the error message. It is a design issue
of cli_io_handler_commit_cert() function.
To fix it, the function was reworked. First, the error message is now part
of the service context. This way, if we cannot push the error message in the
response buffer, we may retry later. To do so, a dedicated state was created
(CERT_ST_ERROR). Then, the success message is also handled in a dedicated
state (CERT_ST_SUCCESS). This way we are able to retry to push it if
necessary. Finally, the dot displayed for each updated CKCH instance is now
immediately pushed in the response buffer, and before the update. This way,
we are able to retry too if necessary.
This patch should fix the issue #1725. It must be backported as far as
2.2. But massive refactoring was performed in 2.6. So, for the 2.5 and
below, the patch must be adapted.
When a CA or CRL entry is replaced (via 'set ssl ca-file' or 'set ssl
crl-file' commands), the path is duplicated and used to identify the ongoing
transaction. However, if the same command is repeated, the path is still
duplicated but the transaction is not changed and the duplicated path is not
released. Thus there is a memory leak.
By reviewing the code, it appears there is no reason to duplicate the
path. It is always the filename path of the old entry. So, a reference on it
is now used. This simplifies the code and this fixes the memory leak.
This patch must be backported as far as 2.5.
When a certificate entry is replaced (via 'set ssl cert' command), the path
is duplicated and used to identify the ongoing transaction. However, if the
same command is repeated, the path is still duplicated but the transaction
is not changed and the duplicated path is not released. Thus there is a
memory leak.
By reviewing the code, it appears there is no reason to duplicate the
path. It is always the path of the old entry. So, a reference on it is now
used. This simplifies the code and this fixes the memory leak.
This patch must be backported as far as 2.2.
When a CA or a CRL entry is being modified, we must take care not to delete
it because the corresponding ongoing transaction still references it. If we
do so, it leads to a null-deref and a crash may be experienced if changes
are committed.
This patch must be backported as far as 2.5.
When a certificate entry is being modified, we must take care not to delete
it because the corresponding ongoing transaction still references it. If we
do so, it leads to a null-deref and a crash may be experienced if changes
are committed.
This patch must be backported as far as 2.2.
On the CLI, if we fail to commit changes on a CA or a CRL entry, an error
message is returned. This error must be released.
This patch must be backported as far as 2.4.
On the CLI, if we fail to commit changes on a certificate entry, an error
message is returned. This error must be released.
This patch must be backported as far as 2.2.
When parsing QPACK encoder/decoder streams, h3_decode_qcs() displays an
error trace if they are empty. Change the return code used in QPACK code
to avoid this trace.
To uniformize with MUX/H3 code, 0 is now used to indicate success.
Beyond this spurious error trace, this bug has no impact.
The MUX now provides a single API for both uni and bidirectional
streams. It is responsible for rejecting reception on a local unidirectional
stream with the error STREAM_STATE_ERROR. This is already implemented in
qcc_recv(). As such, remove this duplicated check from xprt_quic.c.
The H3 frame demuxing code is incorrect when receiving a STREAM frame
which contains only a new H3 frame header without its payload.
In this case, the check on frames bigger than the buffer size is
incorrect. This is because the buffer has been freed via
qcs_consume()/qc_free_ncbuf() as it was emptied after H3 frame header
parsing. This causes the connection to be incorrectly closed with
H3_EXCESSIVE_LOAD error.
This bug was reproduced with xquic client on the interop and with the
command-line invocation:
$ ./interop_client -l d -k $SSLKEYLOGFILE -a <addr> -p <port> -D /tmp \
-A h3 -U https://<addr>:<port>/hello_world.txt
Note also that h3_is_frame_valid() invocation has been moved before the
new buffer size check. This ensures that first we check the frame
validity before returning from the function. It's also better
positioned as this is only needed when a new H3 frame header has been
parsed.
The H3 demuxing code was not fully correct. After parsing the H3 frame
header, the check between frame length and buffer data is wrong as we
compare a copy of the buffer made before the H3 header removal.
Fix this by improving the H3 demuxing code API. h3_decode_frm_header()
now uses a ncbuf instance, this prevents an unnecessary cast
ncbuf/buffer in h3_decode_qcs() which resolves this error.
This bug was not triggered at this moment. Its impact should be really
limited.
Replace ncb_blk_is_null() by ncb_is_null() as a prelude to ncb_data().
The result is the same : the function will return 0 if the buffer is
uninitialized. However, it is clearer to directly call ncb_is_null() to
reflect this.
There is no functional change with this commit.
Some server keywords are currently silently ignored in the peers
section, which is not good because it wastes time on user-side, trying
to make something work while it cannot by design.
With this patch we at least report a few of them (the most common ones),
which are init_addr, resolvers, check, agent-check. Others might follow.
This may be backported to 2.5 to encourage some cleaning of bogus configs.
When parsing a peers section, it's particularly difficult to make the
difference between the local peer which doesn't have any address, and
other peers which need one, and the error messages do not help because
with just:
    peers foo
        bind :8001
        server foo 127.0.0.1:8001
        server bar 127.0.0.2:8001
One can get such a confusing message when the local peer is "bar":
[peers.cfg:15] : 'server foo/bar' : unknown keyword '127.0.0.1:8001'.
It's not clear there why the other peer doesn't trigger an error.
With this commit we add a hint in the error message when no address
was expected. The error remains quite generic (since it comes from deep
inside the server code) but at least the user gets a hint about why the keyword
wasn't understood:
[peers.cfg:15] : 'server foo/bar' : unknown keyword '127.0.0.1:8001'.
Hint: no address was expected for this server.
For some poor historical reasons, the name of a peers proxy used to be
set to the name of the local peer itself. That causes some confusion when
multiple sections are present because the same proxy name appears at
multiple places in "show peers", but since 2.5 where parsing errors include
the proxy name, a config like this one :
    peers foo
        server foobar blah
Would report this when the local peer name isn't "foobar":
'server (null)/foobar' : invalid address: 'blah' in 'blah'
And this when it is foobar:
'server foobar/foobar' : invalid address: 'blah' in 'blah'
This is wrong, confusing and not very practical. This commit addresses
all this by using the peers section's name when it's created. This now
allows to report messages such as:
'server foo/foobar' : invalid address: 'blah' in 'blah'
Which make it clear that the section is called "foo" and the server
"foobar".
This may be backported to 2.5, though the patch may be simplified if
needed, by just adding the change at the output of init_peers_frontend().
By having the appctx in argument this function wouldn't have experienced
the previous bug. Better do that now to avoid proliferation of awkward
functions.
In the context of a CLI command, it's particularly not welcome to use
an "appctx" variable that is not the current one. In addition it was
created for use at exactly 6 places in 2 lines. Let's just remove it
and stick to peer->appctx which is used elsewhere in the function and
is unambiguous.
Commit d0a06d52f ("CLEANUP: applet: use applet_put*() everywhere possible")
replaced most accesses to the conn_stream with simpler accesses to the
appctx. Unfortunately, in all the CLI functions using an appctx, one
makes an exception where the appctx is not the caller's but the one being
inspected! When no peers connection is active, the early exit immediately
crashes.
No backport is needed.
stream.c and mux_fcgi.c may cause a warning for a possible NULL deref
at -Os, while that is not possible thanks to the previous test. Let's
just switch to __htx_get_head_blk() instead.
Remove an unneeded BUG_ON statement when find_http_meth() returns
HTTP_METH_OTHER.
This fix is necessary to support requests with unusual methods with
DEBUG_STRICT activated. This was detected when browsing with HTTP/3 over
a nextcloud instance which uses PROPFIND method for Webdav.
The prefix-integer encoding function was incomplete. It was not able to deal
correctly with values encoded on more than 2 bytes. This maximum value depends
on the size of the prefix, but values greater than 254 were all impacted.
Most notably, this change is required to support header names/values of
sizeable length. Previously, the length was incorrectly encoded. The client
thus closed the connection with QPACK_DECOMPRESSION_ERROR.
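For reference, prefix-integer encoding (RFC 7541 section 5.1, reused by
QPACK) saturates the N-bit prefix then emits 7-bit continuation bytes. A
standalone sketch, not haproxy's actual function:

    static size_t prefix_int_encode(unsigned char *out, size_t room,
                                    uint64_t val, int prefix_bits)
    {
        const unsigned char max = (1 << prefix_bits) - 1;
        size_t len = 0;

        if (room < 1)
            return 0;                  /* not enough space */
        if (val < max) {
            out[len++] = val;          /* fits in the prefix */
            return len;
        }
        out[len++] = max;              /* saturate the prefix */
        for (val -= max; val >= 128; val >>= 7) {
            if (len >= room)
                return 0;
            out[len++] = 0x80 | (val & 0x7f); /* continuation byte */
        }
        if (len >= room)
            return 0;
        out[len++] = val;              /* final byte, < 128 */
        return len;
    }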
Replace the bogus call to b_data() by b_room() to check if there is enough
space left in the buffer before encoding a prefix integer.
At this moment, no real scenario was found to trigger a bug related to
this change. This is probably because the buffer always contains data
(field section line and status code) before calling
qpack_encode_prefix_integer() which prevents an occurrence of this bug.
If the connection client timeout has expired, the mux is released.
If the client decides to initiate a new request, we send a STOP_SENDING
frame. Then, the client endlessly sends a RESET_STREAM frame.
At this time, we simulate the fact that we support the RESET_STREAM frame
thanks to this ridiculously minimalistic patch.
If the connection client timeout has expired, the mux is released.
If the client decides to initiate a new request, we do not ack its
request. This leads the client to endlessly send its request.
This patch makes a QUIC listener send a STOP_SENDING frame in such
a situation.
Add an ->inc_err_cnt new callback to the qcc_app_ops struct which can
be called from the xprt to increment the application level error code
counters. It takes the application context as first parameter to be generic
and support new QUIC applications to come.
Add h3_stats.c module with counters for all the frame types and error codes.
Rename "tune.quic.conn-buf-limit" to "tune.quic.frontend.conn-tx-buffers.limit"
to reflect the stream direction (TX) and the objects (frontends) which are
concerned.
Add new counters to count the number of dropped packets upon parsing error,
lost sent packets and the number of stateless reset packets sent.
Take the opportunity of this patch to rename CONN_OPENINGS to
QUIC_ST_HALF_OPEN_CONN (total number of half open connections) and
QUIC_ST_HDSHK_FAILS to QUIC_ST_HDSHK_FAIL.
When we select the next encryption level in qc_treat_rx_pkts() we
must reset the local largest_pn variable if we do not want to reuse its
previous value for this encryption level. This bug could only happen during
the handshake step and had no visible impact because this variable
is only used during the header protection removal step, which hopefully
supports packet reordering.
It is becoming difficult to distinguish the default values for
transport parameters which come with the RFC from our implementation
default values when not set by configuration (tunable parameters).
Add a comment to distinguish them.
Prefix these default values with QUIC_TP_DFLT_ to distinguish them from
QUIC_DFLT_* values, even if they are not numerous.
Furthermore, ->max_udp_payload_size must first be initialized to
QUIC_TP_DFLT_MAX_UDP_PAYLOAD_SIZE, especially for the received value.
Add tunable "tune.quic.frontend.max_streams_bidi" setting for QUIC frontends
to set the "initial_max_streams_bidi" transport parameter.
Add some documentation for this new setting.
Add two tunable settings both for backends and frontends "max_idle_timeout"
QUIC transport parameter, "tune.quic.frontend.max-idle-timeout" and
"tune.quic.backend.max-idle-timeout" respectively.
cfg_parse_quic_time() has been implemented to parse a time value thanks
to parse_time_err(). It should be reused for any tunable time value to be
parsed.
Add the documentation for this tunable setting only for frontend.
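A sketch of how such a parser typically uses parse_time_err() (surrounding
keyword callback simplified):

    const char *res;
    unsigned int value;

    res = parse_time_err(args[1], &value, TIME_UNIT_MS);
    if (res) {
        /* PARSE_TIME_OVER, PARSE_TIME_UNDER, or a pointer to the first
         * unparsable character: report a configuration error.
         */
        return -1;
    }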
Add a quic_transport_params_dump() static inline function to dump a
quic_transport_parameters struct passed as parameter.
We use the trace API to dump these transport parameters both
after they have been initialized (RX/local) and received (TX/remote).
Make the transport parameters as standalone as possible, as they
consist only in encoding/decoding data into/from buffers.
Reduce the size of xprt_quic.h. Unfortunately, I think we will
have to continue to include <xprt_quic-t.h> to use the trace API
in this module.
We do not want to count the out-of-packet padding as belonging
to an invalid packet, the first byte of a QUIC packet never being null.
Some browsers like Firefox proceed this way to add PADDING frames
after an Initial packet and increase the size of their Initial packets.
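A sketch of the idea while splitting a datagram into coalesced packets
(simplified loop):

    /* a QUIC packet never starts with a null byte: treat zero bytes
     * between/after packets as datagram padding, not as invalid packets.
     */
    while (pos < end) {
        if (!*pos) {
            pos++;
            continue;
        }
        /* ... parse the next coalesced packet at <pos> ... */
    }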
In muxes, the stream-endpoint descriptor of a stream is always defined. Thus,
in .show_fd callback functions, there is no reason to test it.
This patch should address the issue #1727.
Thanks to the recent refactoring, when the tcpcheck_main() function is called,
the stream-connector of the healthcheck is always defined. There is no reason
to still test it.
This patch should fix the issue #1721.
The only two places where it was used were to carefully preserve the
SE_FL_WILL_CONSUME flag (since others are irrelevant there and the
previous RXBLK* flags moved to the stconn). Now that the flag is
cleared by default there's no need to re-create a fresh one
when replacing the descriptor, so we can eliminate that remaining
trick.
Now that the data consumption from the endpoint is the default setting,
we can generalize the pre-clearing of the wont_consume flag, which is
no longer specific to applets. In practice it's not needed anymore to do
it, but since streams might be initiated from asynchronous applets,
these might have blocked their consumption side before creating the
stream thus it's safer to preserve the clearing of the flag.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the fcgi_strm struct and in the few functions using this. The
name was also updated in the "show fd" output.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the h2s struct and in the few functions using this. The name
was also updated in the "show fd" output.
The stream endpoint descriptor that was named "endp" is now called "sd"
both in the h1s struct and in the few functions using this. The name
was also updated in the "show fd" output.
In ssl_action_wait_for_hs() the local variable called "cs" is just a
copy of s->scf that's only used once, so it can be removed. In addition
the check was removed as well since it's not possible to have a NULL SC
on a stream.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There was also one place in traces
where "cs" used to display the stconn, which were turned to "sc".
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 2 places in traces
where "cs" used to display the stconn, which were turned to "sc".
The "nb_cs" struct field and "h2_has_too_many_cs()" functions were
also renamed.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 2 places in traces
where "cs" used to display the stconn, which were turned to "sc".
h1s_upgrade_cs() and h1s_new_cs() were both renamed to _sc.
Function arguments and local variables called "cs" were renamed to
"sc" to avoid future confusion. There were also 3 places in debugging
traces where "cs" used to display the stconn, which were turned to "sc"
for similar reasons. The number of streams "nb_cs" was turned to "nb_sc".
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. Both the core functions and the ones in the
resolvers files were updated.
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The HTTP analyser and the backend functions
were all updated after being reviewed. Function stream_update_both_cs()
was renamed to stream_update_both_sc()
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The "nb_cs" stream-connector counter was
renamed to "nb_sc" and qc_attach_cs() was renamed to qc_attach_sc().
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The change is huge (~580 lines), so extreme
care was given not to change anything else.
The check struct had a "cs" field renamed to "sc", which also required
a tiny update to a few functions using it to distinguish a check from
a stream (log.c, payload.c, ssl_sample.c, tcp_sample.c, tcpcheck.c,
connection.c).
Function arguments and local variables called "cs" were renamed to "sc".
The presence of one "cs=" in the debugging traces was also turned to
"sc=" for consistency.
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
QUIC was the last user of entities with "conn_stream" in their names,
though there's no more reason for this given that the pool names were
already pretty straightforward. The renaming does this:
qc_stream_desc: pool_head_quic_conn_stream -> pool_head_quic_stream_desc
qc_stream_buf: pool_head_quic_conn_stream_buf -> pool_head_quic_stream_buf
An equivalent applet_need_more_data() was added as well since that function
is mostly used from applet code. It makes it much clearer that the applet
is waiting for data from the stream layer.
These ones are essentially for the stream endpoint, let's give them a
name that matches the intent. Equivalent versions were provided in the
applet namespace to ease code legibility.
The following flags are not at all related to the endpoint but to the
connector itself:
- SE_FL_RXBLK_ROOM
- SE_FL_RXBLK_BUFF
- SE_FL_RXBLK_CHAN
As such they have no business staying in the endpoint descriptor and
they must move to the stream connector. They've also been renamed
accordingly to better match what they correspond to (the same name
as the function that sets them).
The rare occurrences of cs_rx_blocked() were replaced by an explicit
test on the list of flags. The reason is that cs_rx_blocked() used to
preserve some tests that are not needed at certain places since already
known. For the same reason SE_FL_RXBLK_ANY wasn't converted. As such it
will later be possible to carefully review these few locations and
eliminate the unneeded flags from the tests. No particular function
was made to test them since they're explicit enough.
It now looks like ci_putchk() and friends could very well place the flag
themselves on the connector when they detect a buffer full condition, as
this would significantly simplify the high-level API. But all usages must
first be reviewed before this simplification can be done. For now it
remains done by applet_put*() instead.
It's more explicit this way. The cs_rx_endp_ready() function could be
removed so that the flag is directly tested. In the future it should
be inverted and the few places where it's set (or preserved via
SE_FL_APP_MASK) could be dropped.
At plenty of places we combine multiple flags checks to determine if we
can receive (endp_ready, rx_blocked, cf_shutr etc). Let's group them
under a single function that is meant to replace existing tests.
Some tests were only checking the rxblk flags at the connection level,
so for now they were not converted, this requires a bit of auditing
first, and probably a test to determine whether or not to check for
cf_shutr (e.g. there is none if no stream is present).
The analysis of cs_rx_endp_more() showed that the purpose is for a stream
endpoint to inform the connector that it's ready to deliver more data to
that one, and conversely cs_rx_endp_done() that it's done delivering data
so it should not be bothered again for this.
This was modified two ways:
- the operation is no longer performed on the connector but on the
endpoint so that there is no more doubt when reading applet code
about what this rx refers to; it's the endpoint that has more or
no more data.
- an applet implementation is also provided and mostly used from
applet code since it saves the caller from having to access the
endpoint descriptor.
It's visible that the flag ought to be inverted because some places
have to set it by default for no reason.
These functions are used by the application layer to disable or enable
reading at the stream connector's level when the input buffer failed to
be allocated (or was finally allocated). The new names makes things
clearer.
These functions were used by the channel to inform the lower layer
whether reading was acceptable or not. Usually this directly mimics
the CF_DONT_READ flag from the channel, which may be set when it's
desired not to buffer incoming data that will not be processed, or
that the buffer wants to be flushed before starting to read again,
or that bandwidth limiting might be enforced, etc. It's always a
policy reason, not a purely resource-based one.
The new name more clearly indicates that a stream connector cannot make
any more progress because it needs room in the channel buffer, or that
it may be unblocked because the buffer now has more room available. The
testing function is sc_waiting_room(). This is mostly used by applets.
Note that the flags will change soon.
This makes SE_FL_APPLET_NEED_CONN autonomous, in that we check for it
everywhere we have a relevant cs_rx_blocked(), so that the flag no longer
needs to be covered by cs_rx_blocked(). Indeed, this flag doesn't
really translate a receive blocking condition but rather a refusal to
wake up an applet that is waiting for a connection to finish to setup.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
It's possible that cs_chk_rcv() could later be further simplified if
we can demonstrate that the two tests in it can be merged.
This flag is exclusively used when a front applet needs to wait for the
other side to connect (or fail to). Let's give it a more explicit name
and remove the ambiguous function that was used only once.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
This flag is no more needed, it was only set on shut read to be tested
by cs_rx_blocked() which is now properly tested for shutr as well. The
cs_rx_blk_shut() calls were removed. Interestingly it allowed to remove
a special case in the L7 retry code.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK.
One flag (RXBLK_SHUT) is always set with CF_SHUTR, so in order to remove
it, we first need to make sure we always check for CF_SHUTR where
cs_rx_blocked() is being used.
sc_is_send_allowed() is now used everywhere instead of the combination
of cs_tx_endp_ready() && !cs_tx_blocked(). There's no place where we
need them individually thus it's simpler. The test was placed in cs_util
as we'll complete it later.
First it applies to the stream endpoint and not the conn_stream, and
second it only tests and touches the flags so it makes sense to call
it se_fl_ like other functions which only manipulate the flags, as
it's just a special case of flags.
It returns an stconn from a connection and not the opposite, so the name
change was more appropriate. In addition it was moved to connection.h
which manipulates the connection stuff, and it happens that only
connection.c uses it.
The following functions which act on a connection-based stream connector
were renamed to sc_conn_* (~60 places):
cs_conn_drain_and_shut
cs_conn_process
cs_conn_read0
cs_conn_ready
cs_conn_recv
cs_conn_send
cs_conn_shut
cs_conn_shutr
cs_conn_shutw
The function doesn't return a pointer to the mux but to the mux stream
(h1s, h2s etc). Let's adjust its name to reflect this. It's rarely used,
the name can be enlarged a bit. And of course s/cs/sc to accommodate
the updated name.
These functions return the app-layer associated with an stconn, which
is a check, a stream or a stream's task. They're used a lot to access
channels, flags and for waking up tasks. Let's just name them
appropriately for the stream connector.
We're starting to propagate the stream connector's new name through the
API. Most call places of these functions that retrieve the channel or its
buffer are in applets. The local variable names are not changed in order
to keep the changes small and reviewable. There were ~92 uses of cs_ic(),
~96 of cs_oc() (due to co_get*() being less factorizable than ci_put*),
and ~5 accesses to the buffer itself.
This applies the change so that the applet code stops using ci_putchk()
and friends everywhere possible, for the much safer applet_put*() instead.
The change is mechanical but large. Two or three functions used to have no
appctx and a cs derived from the appctx instead, which was a reminiscence
of old times' stream_interface. These were simply changed to directly take
the appctx. No sensitive change was performed, and the old (more complex)
API is still usable when needed (e.g. the channel is already known).
The change touched roughly a hundred of locations, with no less than 124
lines removed.
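For illustration, a typical conversion looks like this (variable names
are generic; applet_putchk() is one of the new applet_put* helpers):

/* before: the applet reaches the channel through the conn_stream */
ci_putchk(cs_ic(cs), &trash);

/* after: the appctx alone is enough */
applet_putchk(appctx, &trash);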
It's worth noting that the stats applet, the oldest of the series, could
get a serious lifting, as it's still very channel-centric instead of
propagating the appctx along the chain. Given that this code doesn't
change often, there's no emergency to clean it up but it would look
better.
For historical reasons (stream-interface and connections), we used to
require two independent fields for the application level callbacks and
the transport-level functions. Over time the distinction faded away so
much that the low-level functions became specific to the application
and conversely. For example, applets may only work with streams on top
since they rely on the channels, and the stream-level functions differ
between applets and connections. Right now the application level only
contains a wake() callback and the low-level ones contain the functions
that act at the lower level to perform the shutr/shutw and at the upper
level to notify about readability and writability. Let's just merge them
together into a single set and get rid of this confusing distinction.
Note that the check ops do not define any app-level function since these
are only called by streams.
This also follows the natural naming. There are roughly 238 changes, all
totally trivial. conn_stream-t.h has become completely void of any
"conn_stream" related stuff now (except its name).
This renames the "struct conn_stream" to "struct stconn" and updates
the descriptions in all comments (and the rare help descriptions) to
"stream connector" or "connector". This touches a lot of files but
the change is minimal. The local variables were not even renamed, so
there's still a lot of "cs" everywhere.
Let's start to introduce the stream connector at the app_ops level.
This is entirely self-contained into conn_stream.c. The functions
were also updated to reflect the new name, and the comments were
updated.
Just like for the appctx, this is a pointer to a stream endpoint descriptor,
so let's make this explicit and not confuse it with the full endpoint. There
are very few changes thanks to the preliminary refactoring of the flags
manipulation.
Now at least it makes it obvious that it's the stream endpoint descriptor
and not an endpoint. There were few changes thanks to the previous refactor
of the flags.
After some discussion we found that the cs_endpoint was precisely the
descriptor for a stream endpoint, hence the naturally coming name,
stream endpoint descriptor.
This patch renames only the type everywhere and the new/init/free functions
to remain consistent with it. Future patches will address field names and
argument names in various code areas.
That's the "stream endpoint" pointer. Let's change it now while it's
not much spread. The function __cs_endp_target() wasn't yet renamed
because that will change more globally soon.
This changes all main uses of endp->flags to the se_fl_*() equivalent
by applying coccinelle script endp_flags.cocci. The se_fl_*() functions
themselves were manually excluded from the change, of course.
Note: 144 locations were touched, manually reviewed and found to be OK.
The script was applied with all includes:
spatch --in-place --recursive-includes -I include --sp-file $script $files
This changes all main uses of cs->endp->flags to the sc_ep_*() equivalent
by applying coccinelle script cs_endp_flags.cocci.
Note: 143 locations were touched, manually reviewed and found to be OK,
except a single one that was adjusted in cs_reset_endp() where the flags
are read and filtered to be used as-is and not as a boolean, hence was
replaced with sc_ep_get() & $FLAGS.
The script was applied with all includes:
spatch --in-place --recursive-includes -I include --sp-file $script $files
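As an example of the kind of transformation performed by the script
(illustrative lines, not taken from the actual diff):

/* before */
if (cs->endp->flags & CS_EP_ERROR)
	cs->endp->flags |= CS_EP_ERR_PENDING;

/* after */
if (sc_ep_test(cs, CS_EP_ERROR))
	sc_ep_set(cs, CS_EP_ERR_PENDING);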
This one is exclusively used by the connection, regardless its generic
name "ctx" is rather confusing. Let's make it a struct connection* and
call it "conn". This way there's no doubt about what it is and there's
no way it will be used by accident by being taken for something else.
This test in cs_update_rx() was introduced in 1.9 by commit b26a6f970
("MEDIUM: stream-int: make use of si_rx_chan_{rdy,blk} to control the
stream-int from the channel"), but by then already it was not needed
because the RX_WAIT_EP flag has never been part of RXBLK_ANY so there's
no point doing "flags & RXBLK_ANY & ~RX_WAIT_EP", that part is already
complicated enough like this.
Adjust the size of the sample buffer before we change the "area"
pointer. Otherwise, we end up not changing the size, because the area
pointer is already the same as "start" before we compute the difference
between the two.
This is similar to the change in b28430591d
but for the word converter instead of field.
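The fix thus amounts to reordering two statements, roughly as in the
sketch below (the sample fields are real, the surrounding code is
simplified):

/* before (buggy): the area pointer is moved first, so the computed
 * difference is always zero and the size never changes */
smp->data.u.str.area = start;
smp->data.u.str.data -= start - smp->data.u.str.area;

/* after (fixed): adjust the size before moving the pointer */
smp->data.u.str.data -= start - smp->data.u.str.area;
smp->data.u.str.area = start;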
Fix a typo that led to using the wrong pointer when loading a
certificate, which led to always using the pem loader for every
parameter.
Use the cert_ext->load() ptr instead of cert_exts->load() which was the
first element of the cert_exts[] array.
Enhance the error message with the field name.
Should fix issue #1716
Commit 2cb3be76b ("CLEANUP: init: address a coverity warning about
possible multiply overflow") was incomplete, two other locations were
present. This should address issue #1585.
Bring some improvements to the h3_parse_settings_frm() function. The
first one is the parsing, which now manipulates a buffer instead of a
plain char*.
This is more to unify with other parsing functions rather than dealing
with data wrapping : it's unlikely to happen as SETTINGS is only
received as the first frame on the control STREAM.
Various errors are now properly reported as connection errors:
* on an incomplete frame payload
* on a duplicated setting in the same frame
* on receipt of a reserved setting
As specified by HTTP/3 draft, an unknown unidirectional stream can be
aborted. To do this, use a new flag QC_SF_READ_ABORTED. When the MUX
detects this flag, QCS instance is automatically freed.
Previously, such streams were instead automatically drained. By aborting
them, we save some useless memcpy instructions. On future data
reception, QCS instance is not found in the tree and considered as
already closed. The frame payload is thus deleted without copying it.
Remove all unnecessary bits of code for H3 unidirectional streams. Most
notably, an individual tasklet is no longer required for each stream.
This is useless since the merge of RX/TX uni streams handling with
bidirectional streams code.
The whole QUIC stack is impacted by this change :
* at quic-conn level, a single function is now used to handle uni and
bidirectional streams. It uses the qcc_recv() function from the MUX.
* at MUX level, the qc_recv() io-handler function does not skip uni streams
* most changes are conducted at the app layer. Most notably, all received
data is handled by the decode_qcs operation.
Now that decode_qcs is the single app read function, the H3 layer can be
simplified. Uni streams parsing was extracted from h3_attach_ruqs() to
h3_decode_qcs().
h3_decode_qcs() is able to deal with all HTTP/3 frame types. It first
checks whether the frame is valid for the H3 stream type. Most notably,
SETTINGS parsing was moved from h3_control_recv() into h3_decode_qcs().
This commit has some major benefits besides removing duplicated code.
Mainly, QUIC flow control is now enforced for uni streams as with bidi
streams. Also, an unknown frame received on control stream does not set
an error : it is now silently ignored as required by the specification.
Some cleaning in H3 code is already done with this patch :
h3_control_recv() and h3_attach_ruqs() are removed as they are now
unused. A final patch should clean up the unneeded remaining bit.
Define a new function h3_parse_uni_stream_no_h3(). It can be used to
handle the payload of streams which do not convey H3 frames. This is
mainly useful for QPACK encoder/decoder streams. It can also be used for
a stream of unknown type which should be drained without parsing it.
This patch is useful to extract code in a dedicated function. It will be
simple to reuse it in h3_decode_qcs() when uni-stream reception is
unified with bidirectional streams, without using dedicated stream tasklets.
Define a new function h3_is_frame_valid(). It returns whether a frame is
valid or not depending on the stream which received it.
For the moment, it is used in h3_decode_qcs() which only deals with
bidirectional streams. Soon, uni streams will use the same function,
rendering the frame type check useful.
Define a new function h3_init_uni_stream(). This can be used to read the
stream type of a unidirectional stream. There is no functional change
with previous code.
This patch will be useful to unify reception for uni streams with
bidirectional ones.
Define a new enum h3s_t. This is used to differentiate between the
different stream types used in a HTTP/3 connection, including the QPACK
encoder/decoder streams.
For the moment, only the bidirectional stream type is set. This patch
will be useful to unify reception of uni streams with bidirectional
ones.
Replace h3_uqs type by qcs in stream callbacks. This change is done in
the context of unification between bidi and uni-streams. h3_uqs type
will be unneeded when this is achieved.
Remove the unneeded skip over unidirectional streams in qc_send(). This
unifies sending for both uni and bidi streams.
In fact, the only local unidirectional stream in use for the moment is
the H3 control stream responsible for SETTINGS emission. The frame was
already properly generated in qcs.tx.buf, but not sent due to the stream
skip in qc_send(). Now there is no need to ignore uni streams, so remove
this condition.
This fixes the emission of H3 SETTINGS, which are now properly emitted.
Uni and bidi streams use the same set of functions for sending. One of
the most notable gains is that flow control is now enforced for uni
streams.
Emit STREAM_STATE_ERROR connection error in two cases :
* if receiving data for send-only stream
* if receiving data on a locally initiated stream not open yet
For the moment the first case cannot be encountered as uni streams
reception does not use qcc_recv(). However, this will soon be
implemented with the unification between bidi and uni streams.
The whole frame payload must have been received to demux an H3 frame,
except for H3 DATA which can be fragmented into multiple HTX blocks.
If the frame is bigger than the buffer and is not a DATA frame, a
connection error is reported with error H3_EXCESSIVE_LOAD.
This should be completed in the future with the H3 settings to limit the
size of uncompressed header section.
This code is more generic: it can handle every H3 frame type. This is
done in order to be able to use h3_decode_qcs() to demux both uni and
bidir streams.
Similar to sending, read operations are disabled when a CONNECTION_CLOSE
frame has been emitted.
Most notably, this prevents unneeded loop demuxing when the H3 layer has
issued an error and cannot process the buffer payload anymore.
Note that read is not prevented for unidirectional streams for the
moment. This will soon be supported with the unification of bidir and
uni stream handling.
Complete quic-conn API for error reporting. A new parameter <app> is
defined in the function quic_set_connection_close(). This will transform
the frame into a CONNECTION_CLOSE_APP type.
This type of frame will be generated by the applicative layer, h3 or
hq-interop for the moment. A new function qcc_emit_cc_app() is exported
by the MUX layer for them.
The only change is that the H3_CF_SETTINGS_SENT flag if-condition is
replaced by a BUG_ON statement. This may help to catch multiple calls to
h3_control_send() instead of silently ignoring them.
h3_parse_settings_frm() read one byte past the frame payload. Fix the
parsing code. In most cases, this has no impact as we are inside an
allocated buffer but it could cause a segfault depending on the buffer
alignment.
struct h3 represents the whole HTTP/3 connection. A new type h3s was
recently introduced to represent a single HTTP/3 stream. To facilitate
the analogy with other haproxy code, most notably in the MUX, rename the
h3 type to h3c.
Do not allocate a cs_endpoint for every QCS instance in qcs_new().
Instead, this is delayed to qc_attach_cs() function.
In effect, with H3 as app protocol, cs_endpoint will be allocated on
HEADERS parsing. Thus, no cs_endpoint is allocated for H3 unidirectional
streams which do not convey any HTTP data.
h3_b_dup() is used to obtain a ncbuf representation as a struct
buffer. The ncbuf can thus be marked as a const parameter. This will
allow functions which already manipulate a const ncbuf to use it.
The previous fix:
BUG/MEDIUM: peers: fix segfault using multiple bind on peers
prevents declaring multiple listeners on a peers section, but if the
peers protocol is extended to support this, we could raise the bug
again.
Indeed, after allocating a new listener and adding it to a list, the
code mistakenly re-configured the first element of the list instead
of the newly added one, and the last one finally remained uninitialized.
The previous fix ensures there is no more than one listener in this
list, but this could change in the future.
This patch ensures we configure and initialize the newly added
listener instead of the first one in the list.
This patch could be backported until version 2.0 to complete
BUG/MEDIUM: peers: fix segfault using multiple bind on peers
If multiple "bind" lines were present on the "peers" section, multiple
listeners were added to a list but the code mistakenly initialized the
first member, and this first listener was re-configured instead of the
newly created one. The last one remained uninitialized, causing a null
dereference as soon as a connection was received.
In addition, the 'peers' sections and protocol are not currently designed to
handle multiple listeners.
This patch checks whether a listener is already configured on the 'peers'
section when we want to create a new one, and raises an error if so,
showing the file and line of the previous listener in the error
message.
To keep the file and line number of the previous listener available
for the error message, the 'bind_conf_uniq_alloc' function was modified
to keep the file/line where the struct 'bind_conf' was first allocated
(previously they were updated each time the 'bind_conf' was reused).
This patch should be backported until version 2.0
resolvers_deinit() function is called on error, during post-parsing stage,
or on deinit, when HAProxy is stopped. It releases all entities: resolvers,
resolutions and SRV requests. There is no reason to defer the resolutions
release by moving them in the death_row list because this function is
terminal. And it is in fact a bug. Resolutions must not be released at the
end of the function because resolvers were already freed. However some
resolutions may still be attached to a resolver. Thus, when we try to
remove one from the resolver's tree, in resolv_reset_resolution(), this
resolver was already released.
So now, resolutions are immediately released. It means there is no more
reason to track this function; calls to
enter_resolver_code()/leave_resolver_code() have been removed.
This patch should fix the issue #1680 and may be related to #1485. It must
be backported as far as 2.2.
We used to support both RTSP and HTTP protocol version names with and
without accept-invalid-http-request, but since this is based on the
characters themselves, any protocol made of chars {0-9/.HPRST} was
possible and not others. Now that such non-standard protocols are
restricted to accept-invalid-http-request, there's no reason for not
allowing other letters. With this patch, characters {0-9./A-Z} are
permitted when the option is set.
This patch hardens the verification of the HTTP/1.x version line
(i.e. the first line within an HTTP/1.x request) to verify that
the protocol name within the version actually reads "HTTP".
Previously protocols that superficially resembled the wire-format
of HTTP/1.x and having a 4-letter acronym as the protocol name, such
as RTSP would pass this check.
This patch fixes GitHub issue #540, it must be backported to all
supported versions. The legacy, non-HTX parser is affected as well,
and a separate fix must be created for it.
Note that such protocols can still be used when option
accept-invalid-http-request is set.
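A sketch of the resulting check (the function name and exact charset
handling are hypothetical, only meant to illustrate the two modes):

#include <ctype.h>
#include <string.h>

static int h1_version_is_valid(const char *p, size_t len, int lax)
{
	size_t i;

	if (!lax) /* strict mode: the protocol name must read "HTTP" */
		return len >= 5 && memcmp(p, "HTTP/", 5) == 0;

	/* lax mode (accept-invalid-http-request): {0-9./A-Z} permitted */
	for (i = 0; i < len; i++) {
		if (!isdigit((unsigned char)p[i]) &&
		    !isupper((unsigned char)p[i]) &&
		    p[i] != '.' && p[i] != '/')
			return 0;
	}
	return 1;
}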
In issue #1585 Coverity suspects a risk of multiply overflow when
calculating the SSL cache size, though in practice the cache is
limited to 2^32 anyway thus it cannot really happen. Nevertheless,
casting the operation should be sufficient to avoid marking it as a
false positive.
This reverts commit 118b2cbf84.
This patch was useful mainly for the docker image of QUIC interop to
have traces on stdout.
A better solution has been found by integrating this patch directly in
the qns repository which is used to build the docker image. Thus, this
hack is not required anymore in the main repository.
The wake handler detects if the frontend is closed. This can happen if
the proxy has been disabled individually or even on process soft-stop.
Before this patch, in this condition QCS instances were freed before
being detached from the cs_endpoint. This clearly violates the haproxy
connection architecture and causes a BUG_ON statement crash in cs_free().
To handle this properly, cs_endpoint is notified by setting RD_SH|WR_SH
on connection flags. The cs_endpoint will thus use the detach operation
which allows the QCS instance to be freed.
This code allows the soft-stop process to complete as soon as possible.
However, the client is not notified about the connection closing. It
should be done by emitting a H3 GOAWAY + CONNECTION_CLOSE. Sadly, this
is impossible at this stage because the listener sockets are closed so
the quic-conn cannot use it to emit new frames. At this stage the client
will most probably detect connection closing on its idle timeout
expiration.
Thus, to completely support proxy closing/soft-stop, important
architecture changes are required in QUIC socket management. This is
also linked with the reload feature.
The given size must be the size of the destination buffer, not the size of the
(binary) address representation.
This fixes GitHub issue #1599.
The bug was introduced in 92149f9a82 which is in
2.4+. The fix must be backported there.
If QUIC support is enabled both branches of the ternary conditional are
identical, upsetting Coverity. Move the full conditional into the non-QUIC
preprocessor branch to make the code more clear.
This resolves GitHub issue #1710.
When we receive a CONNECTION_CLOSE frame, we should decrement this counter
if the handshake state was not successful and if we have not received
a TLS alert from the TLS stack.
When a bind line is configured without the "ssl" keyword, a warning is
emitted and a crash happens at runtime:
bind quic4@:4449 crt rsa+dh2048.pem alpn h3 allow-0rtt
[WARNING] (17867) : config : Proxy 'decrypt': A certificate was specified but SSL was not enabled on bind 'quic4@:4449' at [quic-mini.cfg:24] (use 'ssl').
Let's automatically turn SSL on when QUIC is detected, as it doesn't
exist without SSL anyway. It solves the runtime issue, and also makes
sure it is not possible to accidentally configure a quic listener with
no certificate since the error is detected via the SSL checks.
A warning is emitted in this case, to encourage the user to fix the
configuration so that it remains reviewable.
When no mux protocol is configured on a bind line with "proto", and the
transport layer is QUIC, right now mux_h1 is being used, leading to a
crash.
Now when the transport layer of the bind line is already known as being
QUIC, let's automatically try to configure the QUIC mux, so that users
do not have to enter "proto quic" all the time while it's the only
supported option. This means that the following line now works:
bind quic4@:4449 ssl crt rsa+dh2048.pem alpn h3 allow-0rtt
Till now, placing "proto h1" or "proto h2" on a "quic" bind or placing
"proto quic" on a TCP line would parse fine but would crash when traffic
arrived. The reason is that there's a strong binding between the QUIC
mux and QUIC transport and that they're not expected to be called with
other types at all.
Now that we have the mux's type and we know the type of the protocol used
on the bind conf, we can perform such checks. This now returns:
[ALERT] (16978) : config : frontend 'decrypt' : stream-based MUX protocol 'h2' is incompatible with framed transport of 'bind quic4@:4448' at [quic-mini.cfg:27].
[ALERT] (16978) : config : frontend 'decrypt' : frame-based MUX protocol 'quic' is incompatible with stream transport of 'bind :4448' at [quic-mini.cfg:29].
This config tightening is only tagged MINOR since such a config,
despite not reporting an error, cannot work at all; so even if it breaks
experimental configs, they were just waiting for a single connection
to crash.
In order to be able to check compatibility between muxes and transport
layers, we'll need a new flag to tag muxes that work on framed transport
layers like QUIC. Only QUIC has this flag now.
We used to preset XPRT_SSL on bind_conf->xprt when parsing the "ssl"
keyword, which required to be careful about what QUIC could have set
before, and which makes it impossible to consider the whole line to
set all options.
Now that we have the BC_O_USE_SSL option on the bind_conf, it becomes
easier to set XPRT_SSL only once the bind_conf's args are parsed.
It used to be set when parsing the listeners' addresses but this comes
with some difficulties in that other places have to be careful not to
replace it (e.g. the "ssl" keyword parser).
Now we know what protocols a bind_conf line relies on, we can set it
after having parsed the whole line.
Now that we have a function to parse all bind keywords, and that we
know what types of sock-level and xprt-level protocols a bind_conf
is using, it's easier to centralize the check for stream vs dgram
conflict by putting it directly at the end of the args parser. This
way it also works for peers, provides better precision in the report,
and will also allow to validate transport layers. The check was even
extended to detect inconsistencies between xprt layer (which were not
covered before). It can even detect that there are two incompatible
"bind" lines in a single peers section.
Let's collect the set of xprt-level and sock-level dgram/stream protocols
seen on a bind line and store that in the bind_conf itself while they're
being parsed. This will make it much easier to detect incompatibilities
later than the current approach, which consists in scanning all listeners
in post-parsing.
This now makes sure that both the peers' "bind" line and the regular one
will use the exact same parser with the exact same behavior. Note that
the parser applies after the address and that it could be factored
further, since the peers one still does quite a bit of duplicated work.
The "bind" parsing code was duplicated for the peers section and as a
result it wasn't kept updated, resulting in slightly different error
behavior (e.g. errors were not freed, warnings were emitted as alerts).
Let's first unify it into a new dedicated function that properly reports
and frees the error.
There's been some great confusion between proto_type, ctrl_type and
sock_type. It turns out that ctrl_type was improperly chosen because
it's not the control layer that is of this or that type, but the
transport layer, and it turns out that the transport layer doesn't
(normally) denature the underlying control layer, except for QUIC
which turns dgrams to streams. The fact that the SOCK_{DGRAM|STREAM}
set of values was used added to the confusion.
Let's replace it with xprt_type which reuses the later introduced
PROTO_TYPE_* values, and update the comments to explain which one
works at what level.
Just trying "quic4@:4433" with USE_QUIC not set results in such a cryptic
error:
[ALERT] (14610) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported protocol family 2 for address 'quic4@:4433'
Let's at least add the stream and datagram statuses to indicate what was
being looked for:
[ALERT] (15252) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported stream protocol for datagram family 2 address 'quic4@:4433'
Still not very pretty but gives a little bit more info.
In case the str2listener() parser reports a generic error with no message
when parsing the argument of a "bind" statement in a "peers" section, the
reported error indicates an invalid address on the empty arg. This has
existed since 2.0 with commit 355b2033e ("MINOR: cfgparse: SSL/TLS binding
in "peers" sections."), so this must be backported till 2.0.
As specified by the RFC, reception of different STREAM data for the same
offset must be treated as a connection error of type PROTOCOL_VIOLATION.
Use the ncbuf API to detect this case: the add operation fails with
NCB_RET_DATA_REJ when using add mode NCB_ADD_COMPARE.
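In practice the detection looks like this sketch (the error-reporting
helper and error-code names are illustrative, not taken from the code):

enum ncb_ret ret;

ret = ncb_add(&qcs->rx.ncbuf, offset, data, len, NCB_ADD_COMPARE);
if (ret == NCB_RET_DATA_REJ) {
	/* different data received for an already buffered offset:
	 * close the whole connection */
	qcc_emit_connection_close(qcc, QC_ERR_PROTOCOL_VIOLATION);
	return 1;
}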
Send a CONNECTION_CLOSE on reception of a STREAM frame for a STREAM id
exceeding the maximum value enforced. Only implemented for bidirectional
streams for the moment.
Send a CONNECTION_CLOSE if the peer emits more data than authorized by
our flow-control. This is implemented for both stream and connection
level.
Fields have been added in the qcc/qcs structures to differentiate the
received offsets used for limit enforcement from the consumed offsets
used for sending MAX_DATA/MAX_STREAM_DATA frames.
Define an API to easily set a CONNECTION_CLOSE. This will mainly be
useful for the MUX when an error is detected which requires closing the
whole connection.
On the MUX side, a new flag is added when a CONNECTION_CLOSE has been
prepared. This will disable all future send operations.
We rely on the <conn_opening> stats counter and the
tune.quic.retry_threshold setting to dynamically start sending Retry
packets. We continue to send such packets when the "quic-force-retry"
setting is set. The difference is when we receive tokens: we check them
regardless of this setting because the Retry could have been dynamically
started. We must also send Retry packets when we receive Initial packets
without a token if the dynamic Retry threshold was reached, but only for
connections which are not currently opening, or in other words for
Initial packets without an already instantiated connection. Indeed, we
must not send Retry packets for all Initial packets without a token. For
instance a client may have already sent an Initial packet without
receiving a Retry packet because the Retry feature was not started, then
the Retry starts on exceeding the threshold value due to other
connections, then finally our client decides to send another Initial
packet (to ACK Initial CRYPTO data for instance). It does this without a
token. So, for this already existing connection we must not send a Retry
packet.
This QUIC specific keyword may be used to set the threshold, in number of
connection openings, beyond which the QUIC Retry feature will be
automatically enabled. Its default value is 100.
First commit to handle the QUIC stats counters. There is nothing special to say
except perhaps for ->conn_openings which is a gauge to count the number of
connection openings. It is incremented after having instantiated a quic_conn
struct, then decremented when the handshake was successful (handshake completed
state) or failed or when the connection timed out without reaching the handshake
completed state.
Move the code which finalizes the QUIC connections initialisations after
having called qc_new_conn() into this function to benefit from its
error handling to release the memory allocated for QUIC connections
the initialization of which could not be finalized.
Here is the format of a token:
- format (1 byte)
- ODCID (from 9 up to 21 bytes)
- creation timestamp (4 bytes)
- salt (16 bytes)
A format byte is required to distinguish the Retry token from others sent in
NEW_TOKEN frames.
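Illustrated as a C comment (the token is built and parsed byte by byte,
not through a packed struct; this is only a visual aid):

/* Retry token layout, clear-text view:
 *
 *   +--------+-----------------+-----------+----------+
 *   | format |      ODCID      | timestamp |   salt   |
 *   | 1 byte |  9 to 21 bytes  |  4 bytes  | 16 bytes |
 *   +--------+-----------------+-----------+----------+
 *
 * The format byte and the salt stay in clear; the rest is ciphered.
 */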
The Retry token is ciphered after having derived a strong secret from
the cluster secret and generated the AEAD AAD, as well as a 16-byte
salt. This salt is added to the token. Obviously it is not ciphered. The
format byte is not ciphered either.
The AAD is built by quic_generate_retry_token_aad(), which concatenates
the version, the client SCID and the IP address and port. We had to
implement quic_saddr_cpy() to copy the IP address and port to the AAD
buffer. Only the Retry SCID is generated on our side to build a Retry
packet; the other fields come from the first packet received from the
client. The client must reuse this Retry SCID in response to our Retry
packet, so we do not have to store it on our side. Everything is
offloaded to the client (stateless).
quic_generate_retry_token() must be used to generate a Retry packet. It calls
quic_pkt_encrypt() to cipher the token.
quic_generate_retry_check() must be used to check the validity of a Retry token.
It is able to decipher a token which arrives in an Initial packet in
response to a Retry packet. It calls parse_retry_token() after having
deciphered the token to store the ODCID into a local quic_cid struct
variable. Finally this ODCID may be stored into the transport parameters
thanks to qc_lstnr_params_init().
The Retry token lifetime is 10 seconds. This lifetime is also checked by
quic_generate_retry_check(). If quic_generate_retry_check() fails, the
received packet is dropped without any more packet processing at this
time.
This function does exactly the same thing as quic_tls_decrypt(), except
that it reuses its input buffer as the output buffer. This is needed to
decrypt the Retry token without modifying the packet buffer which
contains this token. Indeed, modifying it would prevent us from
decrypting the packet itself, as the token belongs to the AEAD AAD for
the packet.
This function must be used to derive strong secrets from a non
pseudo-random secret (the cluster-secret setting in our case) and an IV.
First it calls quic_hkdf_extract_and_expand() to derive a temporary
strong secret (tmpkey), then makes two calls to quic_hkdf_expand()
reusing this temporary secret to derive the final strong secret and IV.
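A sketch of this derivation chain (prototypes and labels are simplified
assumptions; the real signatures may differ):

unsigned char tmpkey[32], key[32], iv[12];

/* 1) extract-then-expand from the cluster secret and the IV */
quic_hkdf_extract_and_expand(md, tmpkey, sizeof(tmpkey),
                             secret, secret_len, iv_in, iv_in_len);

/* 2) expand twice from the temporary secret: final key, then IV */
quic_hkdf_expand(md, key, sizeof(key), tmpkey, sizeof(tmpkey),
                 label_key, label_key_len);
quic_hkdf_expand(md, iv, sizeof(iv), tmpkey, sizeof(tmpkey),
                 label_iv, label_iv_len);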
In issue #1563, Coverity reported a very interesting issue about a
possible UAF in the config parser if the config file ends with a very
large line followed by an empty one, and the large one causes an
allocation failure.
The issue essentially is that we try to go on with the next line in case
of allocation error, while there's no point doing so. If we failed to
allocate memory to read one config line, the same may happen on the next
one, and blatantly dropping the line while continuing to parse what
follows it makes no sense. In the best case, subsequent errors will be
incorrect due to this prior error (e.g. a large ACL definition with many
patterns, followed by a reference of this ACL).
Let's just immediately abort in such a condition where there's no recovery
possible.
This may be backported to all versions once the issue is confirmed to be
addressed.
Thanks to Ilya for the report.
If an unlisted errno is reported, abort the process. If a crash is
reported on this condition, we must determine whether the error code is
a bug, should interrupt emission on the fd, or if we can retry the
syscall.
If sendto returns an error, we should not retry the call and break from
the sending loop. An exception is made for EINTR, which allows the
syscall to be retried immediately.
This bug caused an infinite loop reproduced when the process is in the
closing state by SIGUSR1 but there is still QUIC data emission left.
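The resulting control flow can be sketched as follows (simplified; the
exact list of non-fatal errno values in the real code may differ):

while (b_data(buf)) {
	ssize_t ret;

	ret = sendto(fd, b_head(buf), b_data(buf), 0,
	             (struct sockaddr *)&peer_addr, peer_addr_len);
	if (ret < 0) {
		if (errno == EINTR)
			continue;   /* the only retryable case */
		if (errno == EAGAIN || errno == EWOULDBLOCK)
			break;      /* stop emission, wait for the poller */
		ABORT_NOW();        /* unlisted errno: treat as a bug */
	}
	b_del(buf, ret);
}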
Instead of using the health-check to subscribe to read/write events, we now
rely on the conn-stream. Indeed, on the server side, the conn-stream's
endpoint is a multiplexer. Thus it seems appropriate to handle
subscriptions for read/write events the same way as for the streams. Of
course, the I/O callback function is not the same. We use srv_chk_io_cb()
instead of cs_conn_io_cb().
event_srv_chk_io() function is renamed srv_chk_io_cb() to be consistent with
the I/O callback function of connections. In addition, this function is
exported. It will be required to use the conn-stream's subscriptions.
The connection is already closed by the health-check itself. Thus there
is no reason to duplicate this part in the .wake callback function. It is
enough to wake the health-check and wait.
The buffer wait list is used to deal with buffer allocation failures. But
at the end of a health-check, it must be reinitialized. There is no
reason to keep a buffer between two health-check runs. And in fact, the
associated flags, CHK_ST_IN_ALLOC and CHK_ST_OUT_ALLOC, are already
cleared at the end of a health-check.
This patch must be backported as far as 2.2. On the 2.2, MT_LIST_ADDED and
MT_LIST_DEL must be used instead of LIST_INLIST and LIST_DEL_INIT.
When the line parsing fails because the outline buffer must be
reallocated, and the my_realloc2() call fails, the buffer size must be
reset. Indeed, in this case the current line is skipped, a fatal error is
reported and we jump to the next line. At this stage the outline buffer
is NULL. If the buffer size is not reset, the next call to parse_line()
crashes because we try to write into the buffer: we fail to detect that
the outline buffer is too small to copy any character.
To fix the issue, the outlinesize variable must be set to 0 when the
outline allocation fails.
This patch should fix the issue #1563. It must be backported as far as 2.2.
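The fix can be sketched like this (surrounding logic simplified; the
jump label is illustrative):

if (outlen > outlinesize) {
	outline = my_realloc2(outline, outlen);
	if (outline == NULL) {
		ha_alert("parsing [%s:%d]: line too long, cannot allocate memory.\n",
		         file, linenum);
		outlinesize = 0; /* the fix: keep the size consistent with the NULL buffer */
		fatal++;
		goto next_line;
	}
	outlinesize = outlen;
}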
Release the QCS RX buffer if emptied after qcs_consume(). This improves
memory usage and avoids a QCS keeping an allocated buffer, particularly
when no data is received anymore. The buffer is automatically reallocated
if needed via qc_get_ncbuf().
qc_free_ncbuf() may now be used with a NCBUF_NULL buffer as parameter.
This is useful when using this function on a QCS with no allocated
buffer. This case was not reproduced for the moment, but it will soon
become more common as buffers will be released if emptied.
Also a call to offer_buffers() is added to conform with the dynamic
buffer management of haproxy.
This commit is similar to the previous one but deals with MAX_DATA for
connection-level data flow control. It uses the same function
qcc_consume_qcs() to update flow control level and generate a MAX_DATA
frame if needed.
Send MAX_STREAM_DATA frames when at least half of the allocated
flow-control window has been demuxed and cleared. This is necessary to
support a QUIC STREAM with received data greater than a buffer.
Transcoders must use the new function qcc_consume_qcs() to empty the QCS
buffer. This allows the current flow-control level to be monitored and a
MAX_STREAM_DATA frame to be generated if required. This frame will be
emitted via qc_io_cb().
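A conceptual sketch of the consuming path (all field and helper names
below are illustrative, not the real ones):

void qcc_consume_qcs(struct qcs *qcs, uint64_t bytes)
{
	qcs->rx.consumed += bytes;

	/* once at least half of the window has been consumed, enlarge
	 * the limit and schedule a MAX_STREAM_DATA frame for qc_io_cb() */
	if (qcs->rx.consumed >= qcs->rx.window / 2) {
		qcs->rx.limit += qcs->rx.consumed;
		qcs->rx.consumed = 0;
		qcs_send_max_stream_data(qcs, qcs->rx.limit);
	}
}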
Adjust the mechanism for MAX_STREAMS_BIDI emission. When a bidirectional
stream is removed, current flow-control level is checked. If needed, a
MAX_STREAMS_BIDI frame is generated and inserted in a new list in the
QCS instance. The new frames will be emitted at the start of qc_send().
This has no impact on the current MAX_STREAMS_BIDI behavior. However,
this mechanism is more flexible and will allow MAX_STREAM_DATA/MAX_DATA
emission to be implemented quickly.
Slightly change the interface for qcc_recv() between MUX and XPRT. The
MUX is now responsible for calling qcc_decode_qcs(). This is cleaner as now
the XPRT does not have to deal with an extra QCS parameter and the MUX
will call qcc_decode_qcs() only if really needed.
This change is possible since there is no extra buffering for
out-of-order STREAM frames and the XPRT does not have to handle buffered
frames.
Previously, qc_io_cb() of mux-quic only dealt with TX. Add support for
RX in it. This is done through a new function qc_recv(qcc). It loops
over all QCS instances and calls qcc_decode_qcs(qcs).
This has no impact from the quic-conn layer as qcc_decode_qcs(qcs) is
called directly. However, this provides a resume point for when demux is
blocked on a full upper-layer HTX buffer.
Note that for the moment, only RX for bidirectional streams is managed
in qc_io_cb(). Unidirectional streams use their own mechanism for both
TX/RX. It should be unified in the near future in a refactoring.
Flag the QCS if the HTX buffer is full on demux. This will block all
future operations on QCS demux and should limit unnecessary decode_qcs()
calls. The flag is cleared on the rcv_buf operation called by the
conn-stream.
Previously, the H3 demuxer refused to process the payload if the frame
was not entirely received and the QCS buffer was not full. This code was
duplicated from the H2 demuxer.
In H2, this is a justified optimization as only one frame at a time can
be demuxed. However, this is not the case in H3 with interleaved frames
in the lower layer QUIC STREAM frames.
This condition is now removed. The H3 demuxer will process the payload
as soon as possible. An exception is kept for the HEADERS frame as the
code is not able to deal with partial HEADERS.
With this change, the H3 demuxer should consume less memory. To ensure
that we never receive a HEADERS frame bigger than the RX buffer, we
should use the H3 SETTINGS_MAX_FIELD_SECTION_SIZE setting.
First, adjust some typos in comments inside the function. Second, change
the naming of some variables to reduce confusion.
A special case has been inserted when advance is done inside a GAP block
and this block is the last of the buffer. In this case, the whole buffer
will be emptied, equivalent to a ncb_init() operation.
ncb_is_empty() was plainly incorrect as it directly dereferenced the
memory to read offset blocks instead of using ncb_read_off(). The result
is undefined.
Also, the BUG_ON() statement was wrong when the buffer starts with a
data block. In this case, ncb_head() is not the first gap offset but
instead just random data. The calculated sum in the BUG_ON() statement
thus has no meaning and may cause an abort. Adjust this by reorganizing
the whole function. Only the first data block size is read. If and only
if it is not null, the first gap size is then checked.
ncb_is_full() has been rewritten to share the same model as
ncb_is_empty().
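The reorganized check may be sketched as follows (simplified from the
description above; the accessor names are assumed to follow the ncbuf
internals):

int ncb_is_empty(const struct ncbuf *buf)
{
	ncb_sz_t first_data, first_gap;

	/* read the first data block size through the proper accessor */
	first_data = ncb_read_off(buf, ncb_reserved(buf));
	if (first_data)
		return 0; /* data in front: not empty */

	/* no data in front: empty iff the first gap covers the buffer */
	first_gap = ncb_read_off(buf, ncb_head(buf));
	return first_gap == ncb_size(buf);
}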
The quic-conn manages a buffer to store received QUIC packets. When the
buffer wraps, the gap is filled until the end with junk and packets can
be inserted at the start of the buffer.
On the other end, deletion is implemented via quic_rx_pkts_del().
Packets are removed one by one if their refcount is null. If junk is
found, the buffer is emptied until its wrap.
This seems to work in most cases but a bug was found in a particular
case: on insertion, if the buffer gap is not at the end of the buffer.
In this case, the gap was filled, which is useless as now the buffer is
full and the packet cannot be inserted. Worse, on deletion, when junk is
removed, there is a risk of removing new packets. This can happen in the
following case:
1. buffer contig space is too small, junk is inserted in the middle of
it
2. on quic_rx_pkts_del() invocation, a packet is removed, but not the
next one because its refcount is still positive. When a new packet is
received, it will be stored after the junk.
3. on next quic_rx_pkts_del(), when junk is removed, all contig data is
cleared, with newer packets data too.
This will cause a transfer between a client and haproxy to be stalled.
This can be reproduced with big enough POST requests. I triggered it
with ngtcp2 and 10M of posted data.
Fortunately, the solution to this bug is simple. If the contiguous space
is not big enough to store a packet, and the space is not at the end of
the buffer, no junk is inserted and the packet is dropped as we cannot
buffer it. This ensures that junk is only present at the end of the
buffer, and that when it is removed no packet data is purged with it.
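The fixed insertion rule can be summarized with this sketch (all names
are illustrative):

if (free_contig_space < pkt->len) {
	if (space_is_at_buffer_end) {
		/* fill the tail with junk and wrap to the start */
		memset(pos, 0, free_contig_space);
		pos = buf_start;
	}
	else {
		/* gap in the middle: inserting junk there would make a
		 * later deletion purge fresh packets, drop instead */
		goto drop;
	}
}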
In the httpclient_applet_init() function, the ss_dst variable is always
defined before the call to sockaddr_alloc(). There is no reason to test it.
This patch should fix the issue #1706.
Labels used in goto statements were not in the right order. Thus, if
there is an error during the appctx startup, it is possible to leak a task.
This patch should fix the issue #1703. No backport needed.
Even if `unique_id` and `s->unique_id` are identical it is a bit odd to
`isttest()` `unique_id` and then use `s->unique_id` in the call to `http_add_header()`.
This "issue" was introduced in a17e66289c,
because before that commit the function returned the length of the ID, as it
was not an ist.
When loading providers with 'ssl-provider' global options, this
ssl-provider-path option can be used to set the search path that is to
be used by openssl. It behaves the same way as the OPENSSL_MODULES
environment variable.
Negation and default modifiers are not supported for this option. However,
this was already tested earlier in the cfg_parse_listen() function. Thus, when
"http-restrict-req-hdr-names" option is parsed, the keyword modifier is
always equal to KWM_STD. It is useless to test it again at this place.
This patch should solve the issue #1702.
The appctx is already the endpoint target. It is confusing to also use it to
set the endpoint context. So, never set the endpoint ctx when an appctx is
created or attached to an existing conn-stream.
When creating a new applet for a peer outgoing connection, we check the
load on each thread. Threads with the least applet count are preferred.
With this solution we avoid a situation where many outgoing connections
run on the same thread, causing significant load on a single CPU core.
It is now possible to start an appctx on a thread subset. Some controls were
added here and there. It is forbidden to start a backend appctx on another
thread than the local one. If a frontend appctx is started on another thread
or a thread subset, the applet .init callback function must be defined. This
callback function is responsible to finalize the appctx startup. It can be
performed synchronously. In this case, the appctx is started on the local
thread. It is not really useful but it is valid. Or it can be performed
asynchronously. In this case, .init callback function is called when the
appctx is woken up for the first time. When this happens, the appctx
affinity is set to the current thread to be able to start the session and
the stream.
In the same way as for the tasks, the applets API was changed to be able
to start a new appctx on a thread subset. For now the feature is
disabled. Only appctx_new_here() is working. But it will be possible to
start an appctx on a specific thread or a subset via a mask.
A .init callback function is defined for the peer_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and it handles the stream customization.
A .init callback function is defined for the sink_forward_applet applet.
This function finishes the appctx startup by calling
appctx_finalize_startup() and it handles the stream customization.
A .init callback function is defined for the httpclient_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and it handles the stream customization.
A .init callback function is defined for the update_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and it handles the stream customization.
A .init callback function is defined for the spoe_applet applet. This
function finishes the spoe_appctx initialization. It also finishes the
appctx startup by calling appctx_finalize_startup() and it handles the
stream customization.
A .init callback function is defined for the dns_session_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and it handles the stream customization.
appctx_free_on_early_error() must be used to release a freshly created
frontend appctx if an error occurred during the init stage. It takes care to
release the stream instead of the appctx if it exists. For a backend appctx,
it just calls appctx_free().
appctx_finalize_startup() may be used to finalize the frontend appctx
startup. It is responsible for creating the appctx's session and the frontend
conn-stream. On error, it is the caller's responsibility to release the
appctx. However, the session is released if it was created. On success, if
an error is encountered in the caller function, the stream must be released
instead of the appctx.
This function should ease the init stage when new appctx is created.
It is just a helper function that calls the .init applet callback function,
if it exists. This will simplify a bit the init stage when a new applet is
started. For now, this callback function is only used when a new service is
started.
The session created for frontend applets is now totally owned by the
corresponding appctx. It means the appctx is now responsible for releasing
it. This removes the hack in stream_free() about frontend applets to be sure
to release the session.
Applets were moved to the same level as multiplexers. Thus, gradually,
applets code is changed to be less dependent from the stream. With this
commit, the frontend appctx are ready to own the session. It means a
frontend appctx will be responsible to release the session.
If no private key can be found in a bind line's certificate and
ssl-load-extra-files is set to none we end up trying to call
X509_check_private_key with a NULL key, which crashes.
This fix should be backported to all stable branches.
Found with -Wmissing-prototypes:
src/hlua_fcn.c:53:5: fatal error: no previous prototype for function 'hlua_checkboolean' [-Wmissing-prototypes]
int hlua_checkboolean(lua_State *L, int index)
^
src/hlua_fcn.c:53:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int hlua_checkboolean(lua_State *L, int index)
^
static
1 error generated.
Found with -Wmissing-prototypes:
src/ssl_utils.c:22:5: fatal error: no previous prototype for function 'cert_get_pkey_algo' [-Wmissing-prototypes]
int cert_get_pkey_algo(X509 *crt, struct buffer *out)
^
src/ssl_utils.c:22:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int cert_get_pkey_algo(X509 *crt, struct buffer *out)
^
static
1 error generated.
When HAProxy is linked to an OpenSSLv3 library, this option can be used
to load a provider during init. You can specify multiple ssl-provider
options, which will be loaded in the order they appear. This does not
prevent OpenSSL from parsing its own configuration file in which some
other providers might be specified.
A linked list of the providers loaded from the configuration file is
kept so that all those providers can be unloaded during cleanup. The
providers loaded directly by OpenSSL will be freed by OpenSSL.
This option can be used to define a default property query used when
fetching algorithms in OpenSSL providers. It follows the format
described in https://www.openssl.org/docs/man3.0/man7/property.html.
It is only available when haproxy is built with SSL support and linked
to OpenSSLv3 libraries.
The random generator initialization needs to be performed before the
chroot but it is not needed before. If we want to add provider
configuration option to the configuration file, they need to be
processed before any call to a crypto-related OpenSSL function.
We can then delay the initialization until after the configuration file
is parsed and processed.
The "http-restrict-req-hdr-names" option can now be set to restrict allowed
characters in the request header names to the "[a-zA-Z0-9-]" charset.
Idea of this option is to not send header names with non-alphanumeric or
hyphen character. It is especially important for FastCGI application because
all those characters are converted to underscore. For instance,
"X-Forwarded-For" and "X_Forwarded_For" are both converted to
"HTTP_X_FORWARDED_FOR". So, header names can be mixed up by FastCGI
applications. And some HAProxy rules may be bypassed by mangling header
names. In addition, some non-HTTP compliant servers may incorrectly handle
requests when header names contain characters outside the "[a-zA-Z0-9-]"
charset.
When this option is set, the policy must be specified:
* preserve: It disables the filtering. It is the default mode for HTTP
proxies with no FastCGI application configured.
* delete: It removes request headers with a name containing a character
outside the "[a-zA-Z0-9-]" charset. It is the default mode for
HTTP backends with a configured FastCGI application.
* reject: It rejects the request with a 403-Forbidden response if it
contains a header name with a character outside the
"[a-zA-Z0-9-]" charset.
The option is evaluated per-proxy and after http-request rules evaluation.
This patch may be backported to avoid any security issue with FastCGI
applications (so as far as 2.2).
Using -Wall reveals several warnings when building the ncbuf testing
API. One of them was about a signedness mismatch; the other was an
incorrect print format.
ncbuf public API functions were not ready to deal with a NCBUF_NULL as
parameter. Strengthen these functions by handling it properly.
Most of the functions will consider the buffer as empty and silently
return. The only exception is ncb_init(buf), which cannot be called with
a NCBUF_NULL. It seems legitimate to consider this a bug and not fail
silently in this case.
The quic_rx_strm_frm type was used to buffer STREAM frames received out
of order. Now the MUX is able to deal directly with these frames and
buffers them inside its ncbuf.
Rx has been simplified since the conversion of the buffer to a ncbuf.
The old buffer can now be removed. The frms tree is also removed. It was
previously used to store out-of-order received STREAM frames. Now the
MUX is able to buffer them directly into the ncbuf.
This commit is the equivalent for uni-streams of previous commit
MEDIUM: mux-quic/h3/hq-interop: use ncbuf for bidir streams
All unidirectional stream data is now handled in the MUX Rx ncbuf. The
obsolete buffer is now unused and will be removed in the following
patches.
Add a ncbuf for data reception on the qcs. Thanks to this, the MUX is
able to buffer all received frames directly into the buffer. Flow control
parameters will be used to ensure there is never an overflow.
This change will simplify Rx path with the future deletion of acked
frames tree previously used for frames out of order.
Coverity reports that the data block generated by ncb_blk_first() has
its sz_data field uninitialized. This has no real impact as this field
has no meaning for a data block. Set it to 0 to hide the warning.
This should fix github issue #1695.
It is absolutely not possible to use the same "bind" line to listen to
both quic and tcp for example, because no single transport layer would
fit both modes and we'll need the type to choose one then to choose a
mux. Let's make sure this does not happen. This may be relaxed in the
future if we manage to instantiate transport layers on the fly, but the
SSL vs quic part might be tricky to handle.
The error message when mixing stream and dgram protocols in an
address speaks about sockets while it ought to speak about addresses,
let's fix this as in some contexts it can be a bit confusing.
Fred & Amaury found that I messed up with qc_detach() in commit 4201ab791
("CLEANUP: muxes: make mux->attach/detach take a conn_stream endpoint"),
causing a segv in this case with endp->cs == NULL being passed to
__cs_mux(). It obviously ought to have been endp->target like in other
muxes.
No backport needed.
Valerio Pachera explained [1] that external checks would benefit from
having a variable indicating if SSL is being used or not on the server
being checked, and the discussion evolved toward also indicating the
protocol in use.
This patch adds two environment variables for external checks:
- HAPROXY_SERVER_SSL: equals "0" when SSL is not used, "1" when it is
- HAPROXY_SERVER_PROTO: contains one of the following words to describe
the protocol used with this server:
- "cli": the haproxy CLI. Normally not seen
- "syslog": this is a syslog TCP server
- "peers": this is a peers TCP server
- "h1": this is an HTTP/1.x server
- "h2": this is an HTTP/2 server
- "tcp": this is any other TCP server
The patch is very simple, and may be backported to recent versions if
needed. This closes github issue #1692.
[1] https://www.mail-archive.com/haproxy@formilux.org/msg42233.html
The two functions became exact copies since there's no more special case
for the appctx owner. Let's merge them into a single one, that simplifies
the code.
This one is the pointer to the conn_stream which is always in the
endpoint that is always present in the appctx, thus it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm(), which does the equivalent.
The former takes a conn_stream still attached to a valid appctx,
which also complicates the termination of the applet. Instead, let's
pass the appctx which already points to the endpoint, this allows us
to properly detach the conn_stream before the call, which is cleaner
and safer.
The mux ->detach() function currently takes a conn_stream. This causes
an awkward situation where the caller cs_detach_endp() has to partially
mark it as released but not completely so that ->detach() finds its
endpoint and context, and it cannot be done later since it's possible
that ->detach() deletes the endpoint. As such the endpoint link between
the conn_stream and the mux's stream is in a transient situation while
we'd like it to be clean so that the mux's ->detach() code can call any
regular function it wants that knows the regular semantics of the
relation between the CS and the endpoint.
A better approach consists in slightly modifying the detach() API to
better match the reality, which is that the endpoint is detached but
still alive and that it's the only part the function is interested in.
As such, this patch modifies the function to take an endpoint there,
and by analogy (or simplicity) does the same for ->attach(), even
though it looks less important there since we're always attaching an
endpoint to a conn_stream anyway. It is possible that in the future
the API could evolve to use more endpoints that provide a bit more
flexibility in the API, but at this point we don't need to go further.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test strm->endp when
trying to access the flags or the CS.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test h2s->endp when
trying to access the flags or the CS.
There is always an endpoint link in a stream, and this endpoint link
contains a pointer to the conn_stream it's attached to, so the one in
the h1 stream is always a duplicate now. Let's always use endp->cs
instead and get rid of it.
Muxes and applets need to have both a pointer to the endpoint and to the
conn_stream. It would seem more natural that they only have a pointer to
the endpoint (that is always there) and that this one has an optional
pointer to the conn_stream. This would reduce the number of elements to
manipulate in lower level code. In addition, the conn_stream is not much
used from the lower layers (wake and exceptional events mostly).
The few applets that set CS_EP_EOI or CS_EP_ERROR used to set it on the
endpoint retrieved from the conn_stream while it's already available on
the appctx itself. Better use the appctx one to limit the unneeded
interactions between the two sides.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the qcs. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the fstrm.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the context.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the h2s. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the h1s. Let's
just do that.
Wherever we need to report an error, we have an even easier access to
the endpoint than the conn_stream. Let's first adjust the API to use
the endpoint and rename the function accordingly to cs_ep_set_error().
Since 2.5, for security reasons, HTTP/1.0 GET/HEAD/DELETE requests with a
payload are rejected (See e136bd12a "MEDIUM: mux-h1: Reject HTTP/1.0
GET/HEAD/DELETE requests with a payload" for details). However it may be an
issue for old clients.
To avoid any compatibility issue with such clients,
"h1-accept-payload-with-any-method" global option was added. It must only be
set if there is a good reason to do so because it may lead to a request
smuggling attack on some servers or intermediaries.
This patch should solve issue #1691. It may be backported to 2.5.
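For example, a minimal configuration enabling it would look like this
(to be used knowingly, given the request smuggling risks mentioned above):

  global
    h1-accept-payload-with-any-method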
In wdt_handler(), do not try to trigger the watchdog if
prev_cpu_time wasn't initialized.
This prevents an unexpected trigger of the watchdog when it wasn't
initialized yet. This case could happen in the master just after loading
the configuration. This would show a trace where the <diff> value is equal
to the <now> value in the trace, and the <poll> value would be 0.
For example:
Thread 1 is about to kill the process.
*>Thread 1 : id=0x0 act=1 glob=1 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=1 prof=0 harmless=0 wantrdv=0
cpu_ns: poll=0 now=6005541706 diff=6005541706
curr_task=0
Thanks to Christian Ruppert for reporting the problem.
Could be backported to all stable versions.
Lua API Channel.remove() and HTTPMessage.remove() expect 1 to 3
arguments (counting the manipulated object), with offset and length
being the 2nd and 3rd argument, respectively.
hlua_{channel,http_msg}_del_data() incorrectly gets the 3rd argument as
offset, and 4th (nonexistent) as length. hlua_http_msg_del_data() also
improperly checks arguments. This patch fixes argument handling in both.
Must be backported to 2.5.
Implement a series of unit test to validate ncbuf. This is written with
a main function which can be compiled independently using the following
command-line:
$ gcc -DSTANDALONE -lasan -I./include -o ncbuf src/ncbuf.c
The first part tests ncb_add()/ncb_advance(). After each
call a loop is done on the buffer blocks which should ensure that the
gap infos are correct.
The second part generates random offsets and insert them until the
buffer is full. The buffer is then reset and all random offsets are
re-inserted in reverse order: the buffer should be full once again.
The generated binary takes arguments to change the tests execution.
"usage: ncbuf [-r] [-s bufsize] [-h bufhead] [-p <delay_msec>]"
A new function ncb_advance() is implemented. This is used to advance the
buffer head pointer. This will consume the front data while forming a
new gap at the end for future data.
On success NCB_RET_OK is returned. The operation can be rejected if a
too small new gap is formed in front of the buffer.
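As an illustration, a caller consuming forwarded data could look like
this (a rough sketch; the exact prototypes should be checked in the
ncbuf header):

  #include <haproxy/ncbuf.h>

  /* consume <adv> bytes from the front of <buf>; returns 0 if rejected */
  static int consume_front(struct ncbuf *buf, ncb_sz_t adv)
  {
          if (ncb_advance(buf, adv) != NCB_RET_OK) {
                  /* rejected: the new gap formed in front of the buffer
                   * would be too small to store its header
                   */
                  return 0;
          }
          return 1;
  }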
Define three different ways to proceed with insertion. This configures
how overlapping data are treated.
- NCB_ADD_PRESERVE : in this mode, old data are kept during insertion.
- NCB_ADD_OVERWRT : new data will overwrite old ones.
- NCB_ADD_COMPARE : this mode adds a new test in check stage. The
overlapping old and new data must be identical or else the insertion
is not conducted. An error NCB_RET_DATA_REJ is used in this case.
The mode is specified with a new argument to the ncb_add() function.
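As a quick sketch, the same insert under the three policies would look
like this (signatures as described above, to be checked against the
ncbuf header):

  #include <haproxy/ncbuf.h>

  static void add_modes_example(struct ncbuf *buf)
  {
          /* keep old overlapping bytes as they are */
          ncb_add(buf, 10, "hello", 5, NCB_ADD_PRESERVE);

          /* let new bytes overwrite old overlapping ones */
          ncb_add(buf, 10, "hello", 5, NCB_ADD_OVERWRT);

          /* reject the insert if old and new overlapping bytes differ */
          if (ncb_add(buf, 10, "hello", 5, NCB_ADD_COMPARE) == NCB_RET_DATA_REJ) {
                  /* overlap mismatch: nothing was inserted */
          }
  }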
Implement a new function ncb_add() to insert data in ncbuf. This
operation is conducted in two stages. First, a simulation will be run to
ensure that the insertion can proceed. If a gap is formed, either
before or after the new data, it must be big enough to store its header,
or else the insertion is aborted.
After this check stage, the insertion is conducted block by block with
the function pair ncb_fill_data_blk()/ncb_fill_gap_blk().
A new type ncb_ret is used as a return value. For the moment, only
success or gap-size error is used. It is planned to add new error types
in the future when insertion will be extended.
Relax the constraint for gap storage when this is the last block.
ncb_blk API functions will consider that if a gap is stored near the end
of the buffer, without the space to store its header, the gap will
entirely cover the buffer end.
For these special cases, the gap size/data size are not write/read
inside the gap to prevent an overflow. Such a gap is designated in
functions as a "reduced gap" and will be flagged with the value
NCB_BK_F_FIN.
This should reduce the rejections on future add operations when receiving
data in-order. Without reduced gap handling, an insertion would be
rejected if it only partially covers the last buffer bytes, which can be
a very common case.
Implement two new functions to report the total data stored across the
whole buffer and the data stored at a specific offset until the next gap
or the buffer end.
To facilitate implementation of these new functions and also future
add/delete operations, a new abstraction is introduced: ncb_blk. This
structure represents a block of either data or gap in the buffer. It
simplifies operation when moving forward in the buffer. The first buffer
block can be retrieved via ncb_blk_first(buf). The block at a specific
offset is accessed via ncb_blk_find(buf, off).
This abstraction is purely used in functions but not stored in the ncbuf
structure per-se. This is necessary to keep the minimal memory
footprint.
Define the new type ncbuf. It can be used as a buffer with
non-contiguous data and wrapping support.
To reduce as much as possible the memory footprint, size of data and
gaps are stored in the gaps themselves. This puts some limitations on the
buffer usage. A reserved space is present just before the head to store
the size of the first data block. Also, add and delete operations will
be constrained to ensure minimal gap sizes are preserved.
The sizes stored in the gaps are represented by a custom type named
ncb_sz_t. This type is a typedef so it can easily be changed: this has a
direct impact on the maximum buffer size (MAX(ncb_sz_t) - sizeof(ncb_sz_t))
and the minimal gap sizes (sizeof(ncb_sz_t) * 2).
Currently, it is set to uint32_t.
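For reference, the type roughly looks like this (an approximate sketch;
the authoritative definition is in the ncbuf header):

  typedef uint32_t ncb_sz_t;     /* changing this typedef moves the limits */

  struct ncbuf {
          char *area;            /* storage; gaps embed their own sizes */
          ncb_sz_t size;         /* buffer size, incl. the reserved space */
          ncb_sz_t head;         /* offset of the first data byte */
  };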
Add send_stateless_reset() to send a stateless reset packet. It prepares
a packet to build a 1-RTT packet with quic_stateless_reset_token_cpy()
to copy a stateless reset token derived from the cluster secret with
the destination connection ID received as salt.
Also add a new QUIC_EV_STATELESS_RST trace event to at least have a trace
of the connections which are reset.
A server may send the stateless reset token associated with the current
connection in its transport parameters. So, let's copy it from
qc_lstnr_params_init().
The stateless reset token of a connection is generated from qc_new_conn() when
allocating the first connection ID. A QUIC server can copy it into its transport
parameters to allow the peer to reset the associated connection.
The latter is not easily reachable after having returned from qc_new_conn().
We want to be able to initialize the transport parameters from this function which
has an access to all the information to do so.
Extract the code used to initialize the transport parameters from qc_lstnr_pkt_rcv()
and make it callable from qc_new_conn(). qc_lstnr_params_init() is implemented
to accomplish this task for a haproxy listener.
Modify qc_new_conn() to reduce its number of parameters.
The source address coming from Initial packets is also copied from qc_new_conn().
Add quic_stateless_reset_token_init() wrapper function around
quic_hkdf_extract_and_expand() function to derive the stateless reset tokens
attached to the connection IDs from "cluster-secret" configuration setting
and call it each time we instantiate a QUIC connection ID.
This function will have to call another one from quic_tls.[ch] soon.
As we do not want to include quic_tls.h from xprt_quic.h because
quic_tls.h already includes xprt_quic.h, let's move it into
xprt_quic.c.
This is a wrapper function around OpenSSL HKDF API functions to
use the "extract-then-expand" HKDF mode as defined by rfc5869.
This function will be used to derive stateless reset tokens
from secrets ("cluster-secret" conf. keyword) and CIDs (as salts).
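As a rough illustration of what such an "extract-then-expand" wrapper can
be built upon, here is a minimal sketch using OpenSSL's EVP_PKEY_HKDF
interface (simplified error handling; this is not the exact haproxy
wrapper):

  #include <openssl/evp.h>
  #include <openssl/kdf.h>

  /* derive <outlen> bytes into <out> from <key> (the secret) and <salt>
   * (e.g. a CID), with <info> as context; returns 1 on success, 0 on error
   */
  static int hkdf_extract_and_expand(const EVP_MD *md,
                                     unsigned char *out, size_t outlen,
                                     unsigned char *key, int keylen,
                                     unsigned char *salt, int saltlen,
                                     unsigned char *info, int infolen)
  {
          EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
          int ret = 0;

          if (!ctx)
                  return 0;
          if (EVP_PKEY_derive_init(ctx) <= 0 ||
              EVP_PKEY_CTX_set_hkdf_mode(ctx, EVP_PKEY_HKDEF_MODE_EXTRACT_AND_EXPAND) <= 0 ||
              EVP_PKEY_CTX_set_hkdf_md(ctx, md) <= 0 ||
              EVP_PKEY_CTX_set1_hkdf_salt(ctx, salt, saltlen) <= 0 ||
              EVP_PKEY_CTX_set1_hkdf_key(ctx, key, keylen) <= 0 ||
              EVP_PKEY_CTX_add1_hkdf_info(ctx, info, infolen) <= 0)
                  goto leave;
          if (EVP_PKEY_derive(ctx, out, &outlen) > 0)
                  ret = 1;
   leave:
          EVP_PKEY_CTX_free(ctx);
          return ret;
  }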
It could be useful to set an ASCII secret which could be used for different
usages. For instance, it will be used to derive QUIC stateless reset tokens.
A new ->time_received member is added to quic_rx_packet to store the time
the packets are received. ->largest_time_received is added to the packet
number space structure to store this timestamp for the packet with a new
largest packet number to be acknowledged. A new flag,
QUIC_FL_PKTNS_NEW_LARGEST_PN, is added to mark a packet number space as
having to acknowledge a packet with a new largest packet number. In this
case, the packet number space ack delay
must be recalculated.
Add quic_compute_ack_delay_us() function to compute the ack delay from the value
of the time a packet was received. It is used only when a packet with a
new largest packet number is received.
As we do not have any task to be woken up by the poller after a sendto()
error, we add a sendto() error counter to the quic_conn struct.
Dump its values from qc_send_ppkts().
There are two reasons we can reject the creation of an h2 stream on the
frontend:
- its creation would violate the MAX_CONCURRENT_STREAMS setting
- there's no more memory available
And on the backend it's almost the same except that the setting might
have been negotiated after trying to set up the stream.
Let's add traces for such a situation so that it's possible to know why
the stream was rejected (currently we only know it was rejected).
It could be nice to backport this to the most recent versions.
When a client doesn't respect the h2 MAX_CONCURRENT_STREAMS setting, we
rightfully send RST_STREAM to it so that the client closes. But the
max_id is only updated on the successful path of h2c_handle_stream_new(),
which may be reentered for partial frames or CONTINUATION frames, and as
a result we don't increment it if an extraneous stream ID is rejected.
Normally it doesn't have any consequence. But on a POST it can have some
if the DATA frame immediately follows the faulty HEADERS frame: with
max_id not incremented, the stream remains in IDLE state, and the DATA
frame now lands in an invalid state from a protocol's perspective, which
must lead to a connection error instead of a stream error.
This can be tested by modifying the code to send an arbitrarily large
MAX_CONCURRENT_STREAM setting and using h2load to send more concurrent
streams than configured: with a GET, only a tiny fraction of them will
report an error (e.g. 101 streams for 100 accepted will result in ~1%
failure), but when sending data, most of the streams will be reported
as failed because the connection will be closed. By updating the max_id
earlier, the stream is now considered as closed when the DATA frame
arrives and it's silently discarded.
This must be backported to all versions but only if the code is exactly
the same. Under no circumstances may this ID be updated for a partial
frame (i.e. only update it before or just after calling
h2c_frt_stream_new()).
This patch adds a lock on the struct dgram_conn to ensure
that another thread cannot trash an fd or alter its status
while the current thread is processing it for send/receive/connect
operations.
Starting with the 2.4 version this could cause a crash when a DNS
request is failing, setting the FD of the dgram structure to -1. If the
dgram structure is reused after that, a read access to fdtab[-1] is
attempted. The crash was only triggered when compiled with ASAN.
In previous versions the concurrency issue also exists but is less
likely to crash.
This patch must be backported until v2.4 and should be
adapted for v < 2.4.
... or how a bogus warning forces you to do tricky changes in your code
and fail on a length test condition! Fortunately it changed in the right
direction that immediately broke, due to a missing "> sizeof(path)" that
had to be added to the already ugly condition.
This fixes recent commit 393e42ae5 ("BUILD: ssl: work around bogus warning
in gcc 12's -Wformat-truncation"). It may have to be backported if that
one is backported.
When building without threads, gcc 12 says that there's a null-deref in
_HA_ATOMIC_INC() called from listener_accept(). It's just that the code
was originally written in an attempt not to always have a proxy for a
listener and that there are two places where the pointer is tested before
being used, so the compiler concludes that the pointer might be null
hence that other places are null-derefs.
In practice the pointer cannot be null there (and never has been), but
since that code was initially built that way and it's only a matter of
adding a pair of braces to shut it up, let's respect that initial
attempt in case one day we need it.
This one was also reported by Ilya in issue #1513, though with threads
enabled in his case.
This may have to be backported if users complain about new breakage with
gcc-12.
As was first reported by Ilya in issue #1513, Gcc 12 incorrectly reports
a possible overflow from the concatenation of two strings whose size was
previously checked to fit:
src/ssl_crtlist.c: In function 'crtlist_parse_file':
src/ssl_crtlist.c:545:58: error: '%s' directive output may be truncated writing up to 4095 bytes into a region of size between 1 and 4096 [-Werror=format-truncation=]
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~
src/ssl_crtlist.c:545:25: note: 'snprintf' output between 2 and 8192 bytes into a destination of size 4097
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It would be a bit concerning to disable -Wformat-truncation because it
might detect real programming mistakes at other places. The solution
adopted in this patch is absolutely ugly and error-prone, but it works,
it consists in integrating the snprintf() call in the error condition
and to test the result again. Let's hope a smarter compiler will not
warn that this test is absurd since guaranteed by the first condition...
This may have to be backported for those suffering from a compiler upgrade.
The CRL file CLI update code was strongly based off the CA one and some
copy-paste issues were then introduced.
This patch fixes GitHub issue #1685.
It should be backported to 2.5.
The STAT_ST_* values have been abused by virtually every applet and CLI
keyword handler, and this must not continue as it's a source of bugs and
of overly complicated code.
This patch renames the states to STAT_STATE_*, and keeps the previous
enum while marking each entry as deprecated. This should be sufficient to
catch out-of-tree code that might rely on them and to let them know what
to do with that.
Now that the CLI's print context is alone in the appctx, it's possible
to refine the appctx's ctx layout so that the cli part matches exactly
a regular svcctx, and as such move the CLI context into an svcctx like
other applets. External code will still build and work because the
struct cli perfectly maps onto the struct cli_print_ctx that's located
into svc.storage. This is of course only to make a smooth transition
during 2.6 and will disappear immediately after.
A tiny change had to be applied to the opentracing addon which performs
direct accesses to the CLI's err pointer in its own print function. The
rest uses the standard cli_print_* which were the only ones that needed
a small change.
The whole "ctx.cli" struct could be tagged as deprecated so that any
possibly existing external code that relies on it will get build
warnings, and the comments in the struct are pretty clear about the
way to fix it, and the lack of future of this old API.
Just like for the TCP service, let's move the context away from
appctx.ctx. A new struct hlua_http_ctx was defined, reserved in
hlua_applet_http_init() and used everywhere else. Similarly, the
task dump code will no longer report decoded stack traces in case
these services are involved. That may be solved later.
The use-service mechanism for Lua in TCP mode relies on the
hlua_tcp storage in appctx->ctx. We can move its definition to
hlua.c and simply use appctx_reserve_svcctx() to reserve and access
the storage. One tiny side effect is that the task dump used in panics
will no longer show the Lua call stack in its trace. For this a
better API is needed from the Lua code to expose a function that does
the job from an appctx.
The Lua cosockets were using appctx.ctx.hlua_cosocket. Let's move this
to a local definition of "hlua_csk_ctx" in hlua.c, which is allocated
from the appctx by hlua_socket_new(). There's a notable change which is
that, while previously the xref link with the peer was established with
the appctx, it's now in the hlua_csk_ctx. This one must then hold a
pointer to the appctx. The code was adjusted accordingly, and now that
part of the code doesn't use the appctx.ctx anymore.
The context was moved to a local definition in the cache code, and
there's nothing specific to the cache anymore in the appctx. The
struct is stored into the appctx's storage area via the svcctx.
This one has been misused for a while as well, it's time to deprecate it
since we don't use it anymore. It will be removed in 2.7 and for now is
only marked as deprecated. Since we need to guarantee that it's zeroed
before starting any applet or CLI command, it was moved into an anonymous
union where its sibling is not marked as deprecated so that we can
continue to initialize it without triggering a warning.
If you found this commit after a bisect session you initiated to figure
why you got some build warnings and don't know what to do, have a look
at the code that deals with the "show fd", "show sess" or "show servers"
commands, as it's supposed to be self-explanatory about the tiny changes
to apply to your code to port it. If you find APPLET_MAX_SVCCTX to be
too small for your use case, either kindly ask for a tiny extension
(and try to get your code merged), or just use a pool.
The httpclient already uses its own pointer and only used to store this
single pointer into the appctx.ctx field. Let's just move it to the
svcctx and remove this entry from the appctx union.
The httpclient's CLI uses ctx.cli.i0 for its flags and .p0 for the client
instance. Let's have a locally defined structure for this so that we don't
need the generic cli variables anymore.
Let's create a show_sock_ctx to store the bind_conf and the listener.
The entry is reserved when entering the I/O handler since there's no
parser here. That's fine because the function doesn't touch the area.
The code was a bit convoluted by the use of a state machine around
st2 that was pointless since only the STAT_ST_LIST state was used, and
by the test of global.cli_fe inside the loop while it had better be
tested before entering there. Let's get rid of this unneeded state and
simplify the code. There's no more need for ->st2 now. The code looks
more changed than it really is due to the reindent caused by the
removal of the switch statement, but "git show -b" shows what really
changed.
There is the variable to start from (or environ) and an option to stop
after dumping the first one, just like "show fd". Let's have a small
locally-defined context with these two fields.
The "show fd" command used to rely on cli.i0 for the fd, and st2 just
to decide whether to stop after the first value or not. It could have
been possible to decide to use just a negative integer to dump a single
value, but it's as easy and more durable to declare a two-field struct
show_fd_ctx for this.
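A sketch of what this gives (field names here are assumptions; the real
ones live in src/cli.c), with the context reserved from the parser using
the reservation helper named elsewhere in this series:

  #include <stdlib.h>
  #include <haproxy/applet.h>

  /* dedicated context replacing the old cli.i0/st2 usage */
  struct show_fd_ctx {
          int fd;        /* first FD to dump, or the only one if show_one */
          int show_one;  /* stop after dumping the first FD */
  };

  static int cli_parse_show_fd(char **args, char *payload,
                               struct appctx *appctx, void *private)
  {
          struct show_fd_ctx *ctx = appctx_reserve_svcctx(appctx, sizeof(*ctx));

          if (*args[2]) {
                  ctx->fd = atoi(args[2]);
                  ctx->show_one = 1;
          }
          return 0;
  }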
The command uses a pointer to the current proxy being dumped, one to the
current server being dumped, an optional ID of the only proxy to dump
(which is in fact used as a boolean), and a flag indicating if we're
doing a "show servers conn" or a "show servers state". Let's move all
this to a struct show_srv_ctx.
The command uses a pointer to a cache instance and the next key to dump,
they were in cli.p0/i0 respectively, let's move them to a struct
show_cache_ctx.
The command uses this state but _INIT immediately turns to _LIST, which
turns to _FIN at the end without doing anything in that state, thus the
only existing state is _LIST so we don't need to store a state. Let's
just get rid of it.
The command was using cli.p0/p1/p2 to select which section to dump, the
current section and the current ns. Let's instead have a locally defined
"show_resolvers_ctx" section for this.
The ring code was using ctx.cli.i0/p0/o0 to store its context during CLI
dumps via "show events" or "show errors". Let's use a locally defined
context and drop that.
The ring watch flags (wait, seek end) were dangerously passed via ctx.cli.i0
from "show buf" in sink.c:cli_parse_show_events(), or implicitly reset in
"show errors". That's very unconvenient, difficult to follow, and prone to
short-term breakage.
Let's pass an extra argument to ring_attach_cli() to take these flags, now
defined in ring-t.h as RING_WF_*, and let the function set them itself
where appropriate (still ctx.cli.i0 for now).
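For reference, the flag definitions could look like this in ring-t.h
(values shown here are an assumption, to be checked against the header):

  /* ring watch flags to be used when watching the ring */
  #define RING_WF_WAIT_MODE 0x00000001   /* wait for new contents */
  #define RING_WF_SEEK_NEW  0x00000002   /* seek to new contents */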
The command only requires to store an int, but it will be useful later
to have a struct to pass extra info such as an "all" flag to dump all
FDs. The new context is now a struct dev_fd_ctx stored in svcctx.
Instead of having a struct that contains a single pointer in the appctx
context, let's directly use the generic context pointer and get rid of
the now unused sft.ptr entry.
Several steps are used during the addition of a crtlist to yield during
long operations, and states are used for this. Let's just not use the
st2 anymore and place the state inside the add_crtlist_ctx struct instead.
These commands were using cli.i0/p0/p1 and in a not very clean way since
they use the same parser but with different types depending on the I/O
handler. Given there was no explanation about what the variables were
supposed to be, they were named based on best guess and placed into a
new "show_crtlist_ctx" structure.
These two commands use distinct parse/release functions but a common
iohandler, thus they need to keep the same context. It was created
under the name "commit_cacrlfile_ctx" and holds a large part of the
pointers (6) and the ca_type field that helps distinguish between
the two commands for the I/O handler. It looks like some of these
fields could have been merged since apparently the CA part only
uses *cafile* and the CRL part *crlfile*, while both old and new
are of type cafile_entry and set only for each type. This could
probably even simplify some parts of the code that tries to use
the correct field.
These fields were the last ones to be migrated thus the appctx's
ssl context could finally be removed.
Just like for "set ssl cafile", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_crlfile_entry, new_crlfile_entry, and path.
These ones were reinstantiated into a new "set_crlfile_ctx" struct.
It could have been merged with the one used in "set cafile" if the
fields had been renamed since cafile and crlfile are of the same
type (probably one of them ought to be renamed?).
None of these fields could be dropped as they are still shared with
other commands.
Just like for "set ssl cert", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_cafile_entry, new_cafile_entry, and path.
These ones were reinstantiated into a new "set_cafile_ctx" struct.
None of them could be dropped as they are still shared with other
commands.
The command doesn't really need any storage since there's only a parser,
but since it used this context, there might have been plans for extension,
so better continue with a persistent one. Only old_ckchs, new_ckchs, and
path were being used from the appctx's ssl context. These ones moved to
the local definition, and the two former ones were removed from the appctx
since not used anymore.
This command only really uses old_ckchs, new_ckchs and next_ckchi
from the appctx's ssl context. The new structure "commit_cert_ctx"
only has these 3 fields, though none could be removed from the shared
ssl context since they're still used by other commands.
This command only really uses old_ckchs, cur_ckchs and the index
in which the transaction was stored. The new structure "show_cert_ctx"
only has these 3 fields, and the now unused "cur_ckchs" and "index"
could be removed from the shared ssl context.
Now this command doesn't share any context anymore with "show cafile"
nor with the other commands. The previous "cur_cafile_entry" field from
the applet's ssl context was removed as not used anymore. Everything was
moved to show_crlfile_ctx which only has 3 fields.
Saying that the layout and usage of the various variables in the ssl
applet context is a mess would be an understatement. It's very hard
to know what command uses what fields, even after having moved away
from the mix of cli and ssl.
Let's extract the parts used by "show cafile" into their own structure.
Only the "show_all" field would be removed from the ssl ctx, the other
fields are still shared with other commands.
This context is used by CLI keywords registered by Lua. We can take
it out of the appctx and use the generic command context allocation so
that the appctx doesn't have to declare a specific one anymore. The
context is created during parsing.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing (both in the CLI and HTTP).
The change looks large but it's particularly mechanical. The context
initialization appears in stats.c and http_ana.c. The context is used
in stats.c and resolvers.c since "show stat resolvers" points there.
That's the reason why the definition moved to stats.h. "show info"
and "show stat" continue to share the same state definition for now.
Nothing else was modified.
Historically the CLI was a second access to the stats and we've continued
to initialize only the stats part when initializing the CLI. Let's make
sure we do that on the whole ctx instead. It's probably not needed at
all nowadays but better stay on the safe side.
Let's instead define a 4-state enum solely for this use case, and place
it into the command's context. Note that END and FIN were already
aliases, which is why they were merged.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. The code also uses st2 which deserves being
addressed in separate commit.
There's no point checking the state before deciding to detach the backref
on "show map", it should always be done if the list is not empty. Note
that being empty guarantees that it's not linked into the list, and
conversely not being empty guarantees that it's in the list, hence the
test doesn't need to be performed under the lock.
The "show map"/"clear map" code used to rely on the cli's i0/i1 fields
to store the generation numbers to work with. That's particularly dirty
because it's done at places where ctx.map is also manipulated while they
are part of the same union, and the reason why this didn't cause trouble
is because cli.i0/i1 are at offset 216/224 while the map parts end at
204, so luckily there was no overlap.
Let's add these fields to the map context.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. Many commands, including pure parsers, use this
context but that's not a problem as it's designed to be used this way.
Due to this, many lines are changed but that's in fact a replacement of
"appctx->ctx.map" with "ctx->". Note that the code also uses st2 which
deserves being addressed in separate commit.
This state is pointless now, it just serves to initialize the initial
table pointer while this can be done more easily in the parser, so let's
do that and drop that state.
Let's instead define a 4-state enum solely for this use case, and place
it into the command's context. Note that END and FIN were already
aliases, which is why they were merged.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. The code also uses st2 which deserves being
addressed in separate commit.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing.
The code still has room for improvement, such as in the "flags" field
where bits are hard-coded, but they weren't modified.
The state was a constant, let's remove what remains of the switch/case.
The code from the "case" statement was only reindented as can be checked
with "git show -b".
This state is only an alias for "thr >= global.nbthread" which is the
sole condition that indicates the end and reaches it. Let's just drop
this state. There's only the STATE_LIST that's left.
This state was only used to preset the list element. Now that we can
guarantee that the context can be properly preset during the parsing
we don't need this state anymore. The first pointer has to be set to
point to the first stream during the initial call which is detected
by the pointer not yet being set (null). Thanks to this we can also
remove one state check on the abort path.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing.
Till now, appctx_new() used to allocate an entry from the pool and
pass it through appctx_init() to initialize a debatable part of it that
didn't correspond anymore to the comments, and fill other fields. It's
hard to say what is fully initialized and what is not.
Let's get rid of that, and always zero the whole appctx on allocation
(appctx are not that big anyway, even with the cache there's no
difference in performance), and initialize what remains. This is cleaner and more
resistant to new field additions.
The appctx_init() function was removed.
Instead of using existing fields and having to put keyword-specific
contexts in the applet definition, let's have the appctx provide a
generic storage area that's currently large enough for existing CLI
commands and small applets, and a function to allocate that storage.
The function will be responsible for verifying that the requested size
fits in the area so that the caller doesn't need to add specific checks;
it is validated during development as this size is static and will
not change at runtime. In addition the caller doesn't even need to
free() the area since it's part of an existing context. For the
caller's convenience, a context pointer "svcctx" for the command is
also provided so that the allocated area can be placed there (or
possibly any other one in case a larger area is needed).
The struct's layout has been temporarily complicated by adding one
level of anonymous union on top of the "ctx" one. This will allow us
to preserve "ctx" during 2.6 for compatibility with possible external
code and get rid of it in 2.7. This explains why the diff extends to
the whole "ctx" union, but a "git show -b" shows that only one extra
layer was added. In order to make both the svcctx pointer and its
storage accessible without further enlarging the appctx structure,
both svcctx and the storage share the same storage as the ctx part.
This is done by having them placed in the union with a protected
overlapping area for svcctx, for which a shadow member is also
present in the storage area:
  union {
          void* svcctx;              // variable accessed by services
          struct {
                  void *shadow;      // shadow of svcctx
                  char storage[];    // where most services store their data
          };
          union {                    // older commands store here and ignore svcctx
                  ...
          } ctx;
  };
I.e. new applications will use appctx->svcctx while older ones will be
able to continue to use appctx->ctx.*
The whole area (including the pointer's context) is zeroed before any
applet is initialized, and before CLI keyword processor's first invocation,
as it is an important part of the existing keyword processors, which makes
CLI keywords effectively behave like applets.
The io_handler in "add ssl crt_list" is built around a "while" loop that
only makes forward progress and that doesn't handle its final state as
it's not supposed to be called again once reached. This makes the code
confusing because its construct implies an infinite loop for such a
state (or any other unhandled one). Let's just remove that unneeded loop.
The "show ssl cert" command mixes some generic pointers from the
"ctx.cli" struct with context-specific ones from "ctx.ssl" while both
are in a union. Amazingly, despite the use of both p0 and i0 to store
respectively a pointer to the current ckchs and a transaction id, there
was no overlap with the other pointers used during these operations,
but should these fields be reordered or slightly updated this will break.
Comments were added above the faulty functions to indicate which fields
they are using.
This needs to be backported to 2.5.
The "show ssl crl-file" command mixes some generic pointers from the
"ctx.cli" struct with context-specific ones from "ctx.ssl" while both
are in a union. It's fortunate that the p1 pointer in use is located
before the first one used (it overlaps with old_cafile_entry). But
should these fields be reordered or slightly updated this will break.
This needs to be backported to 2.5.
The "show ssl ca-file <name>" command mixes some generic pointers from
the "ctx.cli" struct and context-specific ones from "ctx.ssl" while both
are in a union. The i0 integer used to store the current ca_index overlaps
with new_crlfile_entry which is thus harmless for now but is at the mercy
of any reordering or addition of these fields. Let's add dedicated fields
into the ssl structure for this.
Comments were added on top of the affected functions to indicate what they
use.
This needs to be backported to 2.5.
The "show ca-file" and "show crl-file" commands mix some generic pointers
from the "ctx.cli" struct and context-specific ones from "ctx.ssl" while
both are in a union. It's fortunate that the p0 pointer in use is located
immediately before the first one used (it overlaps with next_ckchi_link,
and old_cafile_entry is safe). But should these fields be reordered or
slightly updated this will break.
Comments were added on top of the affected functions to indicate what they
use.
This needs to be backported to 2.5.
When "show map" initializes itself, it first takes the reference to the
starting point under a lock, then releases it before switching to state
STATE_LIST, and takes the lock again. The problem is that it is possible
for another thread to remove the first element during this unlock/lock
sequence, and make the list run anywhere. This is of course extremely
unlikely but not impossible.
Let's initialize the pointer in the STATE_LIST part under the same lock,
which is simpler and more reliable.
This should be backported to all versions.
In case of write error in "show map", the backref is detached but
the list wasn't locked when this is done. The risk is very low but
it may happen that two concurrent "show map" sessions, one of which
fails, or one "show map" failing while the same entry is being updated,
could cause a crash.
This should be backported to all stable versions.
Commit e7f74623e ("MINOR: stats: don't output internal proxies (PR_CAP_INT)")
in 2.5 ensured we don't dump internal proxies on the stats page, but the
same is needed for "show backend", as since the addition of the HTTP client
it now appears there:
$ socat /tmp/sock1 - <<< "show backend"
# name
<HTTPCLIENT>
This needs to be backported to 2.5.
This command was introduced in 1.8 with commit eceddf722 ("MEDIUM: cli:
'show cli sockets' list the CLI sockets") but its yielding doesn't work.
Each time it enters, it restarts from the last bind_conf but enumerates
all listening sockets again, thus it loops forever. The risk that it
happens in field is low but it easily triggers on port ranges after
400-500 sockets depending on the length of their addresses:
global
stats socket /tmp/sock1 level admin
stats socket 192.168.8.176:30000-31000 level operator
$ socat /tmp/sock1 - <<< "show cli sockets"
(...)
ipv4@192.168.8.176:30426 operator all
ipv4@192.168.8.176:30427 operator all
ipv4@192.168.8.176:30428 operator all
ipv4@192.168.8.176:30000 operator all
ipv4@192.168.8.176:30001 operator all
ipv4@192.168.8.176:30002 operator all
^C
This patch adds the minimally needed restart point for the listener so
that it can easily be backported. Some more cleanup is needed though.
The "show resolvers" command is bogus, it tries to implement a yielding
mechanism except that if it yields it restarts from the beginning, until
it manages to fill the buffer with only line breaks, and faces error -2
that lets it reach the final state and exit.
The risk is low since it requires about 50 name servers to reach that
state, but it's not impossible, especially when using multiple sections.
In addition, the extraneous line breaks, if sent over an interactive
connection, will desynchronize the commands and make the client believe
the end was reached after the first nameserver. This cannot be fixed
separately because that would turn this bug into an infinite loop since
it's the line feed that manages to fill the buffer and stop it.
The fix consists in saving the current resolvers section into ctx.cli.p1
and the current nameserver into ctx.cli.p2.
This should be backported, but that code moved a lot since it was
introduced and has always been bogus. It looks like it has mostly
stabilized in 2.4 with commit c943799c86 so the fix might be backportable
to 2.4 without too much effort.
Try to create a "default" resolvers section at startup, without
displaying any error or warning. This section is initialized using the
/etc/resolv.conf of the system.
This is opportunistic and with no guarantee that it will work (but it should
on most systems).
This is useful for the httpclient as it allows to use the DNS resolver
without any configuration in most of the cases.
The function is called from the httpclient_pre_check() function to
ensure that we tried to create the section before trying to initiate the
httpclient. But it is also called from the resolvers.c to ensure the
section is created when the httpclient init was disabled.
Release the expression used by the set-{src,dst}[-port] actions so we
keep valgrind happy upon an exit or an haproxy -c.
Could be backported to every supported version.
Move the resolv.conf parser from the cfg_parse_resolvers so it could be
used separately.
Some changes were made in the memprintf in order to use a char **
instead of a char *. Also the variables are tested before each memprintf()
so they can be skipped if no warnmsg nor errmsg was set.
Cleanup the alert and warning handling in the "parse-resolve-conf"
parser to use the errmsg and warnmsg variables and memprintf.
This will allow splitting the parser and shutting the alerts/warnings
if needed.
The commit 2eb5243e7 ("BUG/MEDIUM: mux-h1: Set outgoing message to DONE when
payload length is reached") introduced a regression. An internal error is
reported when we try to forward a message with trailers while the
content-length header was specified. Indeed, this case does not exist for H1
messages but it is possible in H2.
This patch should solve the issue #1684. It must be backported as far as
2.4.
This bug was already fixed at many places (stats, promex, lua) but the FCGI
multiplexer is also affected. When there is no content-length specified in
the response and when the END_REQUEST record is delayed, the response may be
truncated because an abort is erroneously detected. If the connection is not
closed because "keep-conn" option is set, the response is aborted at the end
of the server timeout.
This bug is a design issue with the HTX. It should be addressed. But it will
probably not be possible to backport such a fix as far as 2.4. So, for now, the
only solution is to explicitly add an EOT block with the EOM flag in this
case.
This patch should fix the issue #1682. It must be backported as far as 2.4.
The commit a6c4a4834 ("BUG/MEDIUM: conn-stream: Don't erase endpoint flags
on reset") was too laxy on reset. Only app layer flags must be preserved. On
reset, the endpoint is detached. Thus all flags set by the endpoint itself
or concerning its type must be removed.
Without this fix, we can experience crashes when a stream is released while
a server connection attempt failed. Indeed, in this case, the endpoint of the
backend conn-stream is reset. But the endpoint type is still set. Thus when
the stream is released, the endpoint is detached again.
This patch is 2.6-specific. No backport needed. This commit depends on the
previous one ("MINOR: conn-stream: Add mask from flags set by endpoint or
app layer").
The httpclient.resolvers.prefer global keyword allows to configure an
ipv4 or ipv6 preference when resolving. This could be useful in
environment where the ipv6 is not supported.
By default the httpclient uses the resolvers section whose ID is
"default", the httpclient.resolvers.id global option allows to configure
another section to use.
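A configuration combining both keywords could look like this (keyword
names as introduced above; the "myresolvers" section is illustrative):

  global
    httpclient.resolvers.id myresolvers
    httpclient.resolvers.prefer ipv4

  resolvers myresolvers
    nameserver ns1 192.168.0.2:53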
The hard_error_ssl flag is set when the configuration is explicitly
done for the ssl in the httpclient.
If no configuration was made, the features are simply disabled and no
alert is emitted.
Cleanup the error handling in the initialization so we rely on the
ERR_CODE and use memprintf() to set the errmsg before printing it at the
end of the functions.
httpclient_set_dst() allows one to set the destination address instead
of using the one in the URL or resolving one from the host.
This function also supports other types of sockets like sockpair@, unix@,
anything that could be used on a server line.
In order to still support this behavior, the address must be set on the
backend in this particular case because the frontend connection does not
support anything other than ipv4 or ipv6.
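A hedged usage sketch (the exact prototype should be checked in the
httpclient code; <hc> is assumed to come from httpclient_new()):

  #include <haproxy/http_client.h>

  static void hc_force_dst(struct httpclient *hc)
  {
          /* bypass the URL host: any address usable on a server line works */
          httpclient_set_dst(hc, "unix@/run/backend.sock");
  }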
To allow the http-request set-dst to work for the httpclient DNS
resolving, some changes need to be done:
- The destination address need to be set in the frontend (s->csf->dst)
instead of the backend (s->csb->dst) to be able to use
tcp_action_req_set_dst()
- SRV_F_MAPPORTS need to be set on the proxy in order to allow the port
change in alloc_dst_address()
The httpclient_resolve_init() function adds http-request actions which do
the resolving using the Host header of the HTTP request.
The parse_http_req_cond function is directly used over an array of http
rules.
The do-resolve rule uses the "default" resolvers section. If this
section does not exist in the configuration, the resolving is disabled.
The httpclient DNS resolver will need a more efficient URL parser which
splits the URL into parts and does not try to resolve.
httpclient_spliturl uses the http_uri_parser in order to split the URL
into several ist.
The host part of the result is then processed by str2ip2(), which
fails if it is not an IP, allowing us to resolve the domain later.
We rely on the largest ID which was used to open streams to know if the
stream we received STREAM frames for is closed or not. If closed, we return
the same status as for a STREAM frame which was already received on an
open stream.
It is possible that we continue to receive retransmitted STREAM frames after
the mux has been released. We rely on the ->rx.streams[].nb_streams counter
to check that the stream was closed. If not, at this time, we drop the packet.
This is required to handle the retransmitted frame types once the mux is
released.
We add a counter for the number of streams which were opened or closed by the mux.
After the mux has been released, we can rely on this counter to know if the STREAM
frames are retransmitted ones or not.
That's similar to what was done for conn_streams and connections. The
flags were only set exactly when the relevant pointers were allocated,
so better test the pointer than the flag and stop setting the flag.
Just like for the conn_stream, now that these addresses are dynamically
allocated, there is no single case where the pointer is set without the
corresponding flag, and the flag is used as a permission to dereference
the pointer. Let's just replace the test of the flag with a test of the
pointer and remove all flag assignment. This makes the code clearer
(especially in "if" conditions) and saves the need for future code to
think about properly setting the flag after setting the pointer.
Some of the protocol-level ->connect() functions currently dereference
the connection's destination address while others test it and return an
error. There's normally no more non-bogus code path that calls such
functions without a valid destination address on the connection, so
let's unify these functions and just place a BUG_ON() there, and drop
the useless test that's supposed to return an internal error.
This flag is no longer needed now that it must always match the presence
of a destination address on the backend conn_stream. Worse, before previous
patch, if it were to be accidently removed while the address is present, it
could result in a leak of that address since alloc_dst_address() would first
be called to flush it.
Its usage has a long history where addresses were stored in an area shared
with the connection, but as this is no longer the case, there's no reason
for putting this burden onto application-level code that should not focus
on setting obscure flags.
The only place where that made a small difference is in the dequeuing code
in case of queue redistribution, because previously the code would first
clear the flag, and only later when trying to deal with the queue, would
release the address. It's not even certain whether there would exist a
code path going to connect_server() without calling pendconn_dequeue()
first (e.g. retries on queue timeout maybe?).
Now the pendconn_dequeue() code will rely on SF_ASSIGNED to decide to
clear and release the address, since that flag is always set while in
a server's queue, and its clearance implies that we don't want to keep
the address. At least it remains consistent and there's no more risk of
leaking it.
These functions dynamically allocate a source or destination address but
start by clearing the previous one. There's a non-null risk of leaking
addresses there in case of misuse. Better have them do nothing if the
address was already allocated.
HTTP/3 implementation must ignore unknown frame types to support protocol
evolution. Clients can deliberately use unknown types to test that the
server is conformant: this principle is called greasing.
Quiche client uses greasing on H3 frame type with a zero length frame.
This reveals a bug in H3 parsing code which causes the transfer to be
interrupted. Fix this by removing the break statement on the ret variable.
Now the parsing loop is only interrupted if the input buffer is empty or the
demux is blocked.
This should fix http/3 freeze transfers with the quiche client. Thanks
to Lucas Pardue from Cloudflare for his report on the bug. Frédéric
Lecaille quickly found the source of the problem, which helped me to write
this patch.
If the request channel buffer is full, H3 demuxing must be interrupted
on the stream until some read is performed. This condition is reported
if the HTX stream buffer qcs.rx.app_buf is full.
In this case, qcs instance is marked with a new flag QC_SF_DEM_FULL.
This flag causes the H3 demuxing to be interrupted. It is cleared when
the HTX buffer is read by the conn-stream layer through rcv_buf
operation.
When the flag is cleared, the MUX tasklet is woken up. However, as MUX
iocb does not treat Rx for the moment, this is useless. It must be fixed
to prevent possible freezes on POST transfers.
In practice, for the moment the HTX buffer is never full as the current
Rx code is limited by the quic-conn receive buffer size and the
incomplete flow-control implementation. So for now this patch is not
testable under the current conditions.
It seems this multiplier ended up in oblivion. Indeed a multiplier must be
applied to the loss delay expressed as an RTT multiplier: 9/8.
So, some packets were detected as lost too soon, leading them to be
retransmitted too early!
If the MUX cannot handle immediately nor buffer a STREAM frame, the
packet containing it must not be acknowledged. This is in conformance
with the RFC9000.
qcc_recv() return codes have been adjusted to differentiate an invalid
frame from an already fully received offset which must be acknowledged.
If a packet contains a STREAM frame but the MUX is not allocated, the
frame cannot be enqueued. According to the RFC9000, we must not
acknowledge the packet under this condition.
This may prevent a bug with Firefox which keeps refreshing
the web page. This issue had already been detected before the closing state
implementation: haproxy wasn't emitting CONNECTION_CLOSE and kept
acknowledging STREAM frames despite not handling them.
In the future, it might be necessary to respond with a CONNECTION_CLOSE
if the MUX has already been freed.
In issue #1468 it was reported that sometimes server-side connection
attempts were only validated after the "timeout connect" value, and
that would only happen with an H2 client. A long code analysis with the
output dumps showed only one possible call path: an I/O event on the
frontend while reading had just been disabled calls h2_wake() which in
turn wakes cs_conn_io_cb(), which tries cs_conn_process() and cs_notify(),
which sees that the other side is not blocked (already in CS_ST_CON)
and tries cs_chk_snd() on it. But on that side the connection had just
finished to be set up and not yet woken the stream up, cs_notify()
would then call cs_conn_send() which succeeds and passes the connection
to CS_ST_RDY. The problem is that nothing new happened on the frontend
side so there's no reason to wake the stream up and the backend-side
conn_stream remains in CS_ST_RDY state with the stream never being
woken up.
Once the "timeout connect" strikes, process_stream() is woken up and
finds the connection finally setup, so it ignores the timeout and goes
on.
The number of conditions to meet to reproduce this is huge, which also
explains why the reporter says it's "occasional" and we were never able
to reproduce it in the lab. It needs at least reads to be disabled and
immediately re-enabled on the frontend side (e.g. buffer full) with an
I/O event reported before the poller had an opportunity to be disabled
but with no subscribe being reinstalled, so that sock_conn_iocb() has
no other choice but calling h2_wake(), and exactly at the same time
the backend connection must finish to set up so that it was not yet
reported by the poller, the data were sent and the polling for writes
disabled.
Several factors are to be considered here:
- h2_wake() should probably not call h2_wake_some_streams() for
ret >= 0 (common case), but only if some special event is reported
for at least one stream; that part is sensitive though as in the
past we managed to lose some rare cases (e.g. restart processing
after a pause), and such wakeups are extremely rare so we'd better
make that effort once in a while.
- letting a lazy forward attempt on the frontend confirm a backend
connection establishment is too smart to be reliable. That wasn't
in fact the intent and it's inherited from the very old code where
muxes didn't exist and where it was guaranteed that an event at this
layer would wake everyone up.
Here the best thing to do is to refrain from attempting to forward data
until the connection is confirmed. This will let the poller report the
connect() event to the backend side which will process it as it should
and does in all other cases.
Thanks to Jimmy Crutchfield for having reported useful traces and
tested patches.
This will have to be backported to all stable branches after some
observation. Before 2.6 the function is stream_int_chk_snd_conn(),
and the flag to remove is SI_SB_CON.
The use of co_set_data() should be strictly limited to setting the amount of
existing data to be transmitted. It ought not be used to decrement the
output after the data have left the buffer, because doing so involves
performing incorrect calculations using co_data() that still comprises data
that are not in the buffer anymore. Let's use c_rew() for this, which is
made exactly for this purpose, i.e. decrement c->output by as much as
requested.
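A small sketch contrasting the two calls (assuming c_rew() decrements
chn->output by <adv>, as described above):

  #include <haproxy/channel.h>

  static void forward_and_account(struct channel *chn, size_t to_fwd,
                                  unsigned int sent)
  {
          /* declare <to_fwd> bytes of existing data as output */
          co_set_data(chn, to_fwd);

          /* ... transmission happens: <sent> bytes leave the buffer ... */

          /* account for them: decrement chn->output by <sent> */
          c_rew(chn, sent);
  }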
When HTX blocks are transferred from the HTTP client context to the request
channel, via htx_xfer_blks() function, the metadata must also be counted, in
addition to the data size. Otherwise, expected payload size will not be
copied because the metadata of an HTX block (8 bytes) will be reserved. And
if the payload size is lower than 8 bytes, nothing will be copied. Thus only
a zero-copy will be able to copy the payload.
This issue is 2.6-specific, no backport is needed.
When the HTTP client consumes the response, it loops on the HTX message to
copy blocks content and it removes blocks by calling htx_remove_blk(). But
this function removes a block and returns the next one in the HTX
message. The result must be used instead of using htx_get_next(). It is
especially important because the block used in the htx_get_next() loop was
just removed. It only works because the message is not defragmented during
the loop.
In addition, the loop on the response was simplified to iterate on blocks
This patch must be backported to 2.5.
Only CS_EP_ERROR flag is now removed from the endpoint when a reset is
performed. When a new endpoint is allocated, flags are preserved. It is
the caller's responsibility to remove other flags, depending on its needs.
Concretely, during a connection retry or an L7 retry, we must preserve
flags. In tcpcheck and the CLI, we reset flags.
This patch is 2.6-specific. No backport needed.
The SSL_SERVER_VERIFY_* constants were incorrectly set on the httpclient
server verify. The right constants are SSL_SOCK_VERIFY_* .
This could cause issues when using "httpclient-ssl-verify" or when the
SSL certificates can't be loaded.
No backport needed
This function is used to parse the QUIC packets carried by a UDP datagram.
When a correct packet could be found, the ->len RX packet structure value
is set to the packet length value. On the contrary, it is set to the remaining
number of bytes in the UDP datagram if no correct QUIC packet could be found.
So, there is no need to make this function return a status value. This allows
the caller to parse all the QUIC packets carried by a UDP datagram without it.
Any client Initial packet carried in a datagram smaller than QUIC_INITIAL_PACKET_MINLEN(200)
bytes must be discarded. This does not mean we must discard the entire datagram.
So we must at least try to parse the packet length before dropping the packet
and return its length from qc_lstnr_pkt_rcv().
A crash is possible under such circumstances:
- The congestion window is drastically reduced to its minimal value
when a quic listener is experiencing extreme packet loss;
- we enqueue several STREAM frames to be resent and some of them could not be
transmitted;
- some of the frames still in flight are acknowledged and trigger the
stream memory releasing;
- when we come back to send the remaining STREAM frames, haproxy
crashes when it tries to build them.
To fix this issue, we mark STREAM frames as lost as soon as they are detected
as lost. Then a lookup is performed for the stream the STREAM frames are
attached to before building them. They are released if the stream is no
longer available or if the data range of the frame is already consumed.
When we have to probe the peer, we must first try to send new data. This is
done by waking up the mux after having set the maximum number of datagrams
to send to QUIC_MAX_NB_PTO_DGRAMS (2). Of course, this is only done if the
mux was subscribed to SEND events.
This may happen in rare cases with extreme packet loss (30% for both TX and
RX) which leads the congestion window to decrease down to its minimal value
(two datagrams). Under such circumstances, no ack-eliciting frame can be
added to a packet by qc_build_frms(). In this case we must cancel the packet
building process if there is no ACK or probe (PING frame) to send.
This function must return a successful status as soon as it could build
a frame to be embedded in a packet. This behavior was broken by the last
modifications. This was due to a dangerous "ret = 1" statement inside
a loop. This statement must be reached only if we go out of a switch/case
after a "break" statement.
Add comments to mention this information.
When we are probing, we do not receive packets; furthermore all ACK frames
have already been sent. So it is useless to send ACKs when probing the peer.
This modification does not reset the flag which marks the connection as
requiring an ACK frame to be sent. If this is the case, this will be taken
into account after the probing process.
Make the two I/O handlers quic_conn_io_cb() and quic_conn_app_io_cb() call
qc_dgrams_retransmit() after the need for probing retransmissions was
detected by the timer task (qc_process_timer()).
We must modify qc_prep_pkts() to support QUIC_TLS_ENC_LEVEL_NONE as <next_tel>
parameter when called from qc_dgrams_retransmit().
When probing retransmissions with old data are needed for the connection,
we mark the packets as probing with old data to track them when
acknowledged: we do not resend frames with old data when lost.
Modify qc_send_app_pkt() to distinguish the case where it sends new data
from the case where it sends old data during probing retransmissions.
We add an <old_data> boolean parameter to this function to do so. The mux
never directly sends old data when probing retransmissions are needed by
the connection.
This function is used to requeue the TX frames from TX packets which have
been detected as lost. The modification consists in avoiding resending
frames from the duplicated frames used to probe the peer: this is useless.
Only the loss of the original frames must be taken into account, because
they are detected as lost before the retransmitted frames. If the latter
are also detected as lost, other duplicated frames would have been
retransmitted before their loss detection.
qc_prep_fast_retrans() and qc_prep_hdshk_fast_retrans() are modified to
take two lists of frames as parameters. Two lists are needed for
qc_prep_hdshk_fast_retrans() to build datagrams with two packets during the
handshake. qc_prep_fast_retrans() needs two lists of frames to send two
datagrams, with one list per datagram.
We want to be able to resend frames from lists of frames during handshakes,
to resend datagrams with the same frames as during the first transmissions.
This drastically decreases the chances of frame fragmentation due to the
variable lengths of the packet fields. Furthermore the frames were not
duplicated when retransmitted from one packet to another. This must be the
case only during packet loss detection.
qc_dup_pkt_frms() is there to duplicate the frames from an input list to an
output list. A distinction is made between STREAM frames and the other ones
because we can rely on the "acknowledged offset", the aim of which is to
track the number of bytes which were acknowledged from TX STREAM frames.
qc_release_frm(), in addition to releasing the frame passed as parameter,
also marks the duplicate STREAM frames as acknowledged.
qc_send_hdshk_pkts() is the qc_send_app_pkts() counterpart to send
datagrams from at most two lists of frames, to be able to coalesce packets
from two different packet number spaces.
qc_dgrams_retransmit() is there to probe the peer with datagrams depending
on the needs of the packet number spaces, which must be flagged with
QUIC_FL_PKTNS_PROBE_NEEDED by the PTO timer task (qc_process_timer()).
Add the QUIC_FL_CONN_RETRANS_NEEDED connection flag definition to mark a
quic_conn struct as needing a retransmission.
Add QUIC_FL_PKTNS_PROBE_NEEDED to mark a packet number space as needing
datagram probing.
Set these flags from process_timer() to trigger datagram probing.
Do not initiate datagram probing from any quic encryption level anymore.
This will be done from the I/O handlers (quic_conn_io_cb() during handshakes
and quic_conn_app_io_cb() after handshakes).
Add QUIC_FL_TX_PACKET_COALESCED flag to mark a TX packet as coalesced with others
to build a datagram.
Ensure we do not directly retransmit frames from such coalesced packets. They must
be retransmitted from their packet number spaces to avoid duplications.
We want to track the frames which have been duplicated during
retransmissions so as to avoid uselessly retransmitting frames which would
already have been acknowledged. The new ->origin member is there to store
the frame from which a copy was made, and ->reflist is a list to store the
frames which are copies.
Also ensure all the frames are zeroed and that their ->reflist list member is
initialized.
Add QUIC_FL_TX_FRAME_ACKED flag definition to mark a TX frame as acknowledged.
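A hedged sketch of the bookkeeping described above; the ->origin and
->reflist names come from the text, everything else is a simplified
stand-in for HAProxy's real quic_frame type:

/* minimal doubly-linked list head, standing in for HAProxy's struct list */
struct list_sketch {
    struct list_sketch *n, *p;
};

#define QUIC_FL_TX_FRAME_ACKED_SKETCH 0x01

struct quic_frame_sketch {
    struct list_sketch list;            /* attach point in a packet/list */
    struct quic_frame_sketch *origin;   /* frame this one was copied from */
    struct list_sketch reflist;         /* copies made from this frame */
    unsigned int flags;                 /* e.g. ..._FRAME_ACKED above */
};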
Add a loop in the bidi STREAM function. This will repeatedly call
qcc_decode_qcs() and dequeue buffered frames.
This is useful when reception of more data is interrupted because the
MUX buffer was full. qcc_decode_qcs() has probably freed some space so it
is useful to immediately retry reception of the buffered frames of the qcs
tree.
This may fix occurrences of stalled Rx transfers with large payloads.
Note however that there is still room for improvement. The conn-stream
layer is not able at this moment to retrigger demuxing. This is because
the mux io-handler does not treat Rx: this may continue to cause
stalled transfers.
Previously, the h3 layer was not able to demux a DATA frame that was not
fully received in the Rx buffer. This was an evident limitation which
prevented demuxing a frame bigger than the buffer.
Improve h3_data_to_htx() to support partial frame demuxing. The demux
state is preserved in the new h3s fields: this is useful to keep the
current type and length of the demuxed frame.
Define a new structure h3s used to provide context for an H3 stream. This
structure is allocated and stored in the qcs thanks to the previous commit
which provides app-layer context storage.
For now, h3s is empty. It will soon be completed to support stateful
demuxing: this is required to be able to demux an incomplete frame if the
rx buffer is full.
Define 2 new callbacks for qcc_app_ops: attach and detach. They are
called when a qcs instance is respectively allocated and freed. If
implemented, they can allocate a custom context stored in the new
abstract field ctx of the qcs.
For now, h3 and hq-interop do not use these new callbacks. They will
soon be implemented by the h3 layer to allocate a context used for
stateful demuxing.
This change is required to support the demuxing of H3 frames bigger than
a buffer.
Edit the functions used for HEADERS and DATA parsing. They now return
the number of bytes handled.
This change will help to demux H3 frames bigger than the buffer.
Improve the reception of STREAM frames. In qcc_recv(), if the frame is
bigger than the remaining space in the rx buffer, do not reject it wholly.
Instead, copy as much data as possible. The rest of the data is
buffered.
This is necessary to handle H3 frames bigger than a buffer. The H3 code
does not demux until the frame is complete or the buffer is full.
Without this, a transfer with a payload larger than the Rx buffer could
rapidly freeze.
Handle a wrapping buffer in h3_data_to_htx(). If the data wraps, first
copy the contiguous part, then copy the part at the front of the buffer.
Note that h3_headers_to_htx() is not able to handle wrapping data. For
the moment, a BUG_ON was added as a reminder. This case has never happened,
most probably because HEADERS is the first frame of the stream.
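A hedged, self-contained sketch of the two-step copy out of a wrapping
buffer (hypothetical ring type; HAProxy's struct buffer differs in detail):

#include <stddef.h>
#include <string.h>

struct ring_sketch {
    char  *area;   /* storage */
    size_t size;   /* total capacity */
    size_t head;   /* read offset */
    size_t data;   /* bytes stored */
};

static size_t ring_copy_out_sketch(struct ring_sketch *r, char *dst,
                                   size_t count)
{
    size_t contig = r->size - r->head;    /* bytes before wrapping */

    if (count > r->data)
        count = r->data;
    if (contig > count)
        contig = count;

    memcpy(dst, r->area + r->head, contig);         /* contiguous part */
    memcpy(dst + contig, r->area, count - contig);  /* wrapped part */

    r->head = (r->head + count) % r->size;
    r->data -= count;
    return count;
}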
Always set the HTX flag HTX_SL_F_XFER_LEN for http/3. This is correct
because the size of H3 requests is always known thanks to the protocol
framing.
This may fix occurrences of incomplete POST requests when the client side
of the connection has been closed before.
Add new qcs fields to count the sum of bytes received for each stream.
This is necessary to enforce flow-control for reception on the peer.
For the moment, the implementation is partial. No MAX_STREAM_DATA or
FLOW_CONTROL_ERROR frames are emitted. BUG_ON statements are here as a
reminder.
This means that for the moment we do not support POST payloads greater
than the initial max-stream-data announced (256k currently).
At least, we now ensure that we never buffer a frame which overflows the
flow-control limit: this ensures that the memory consumption per stream
should stay under control.
The qcs and its fields are not properly freed if the conn-stream allocation
fails in qcs_new(). Fix this by having proper deinit code with a
dedicated label.
Comments were not properly edited since the splitting of functions for
stream emission. Also "payload" argument has been renamed to "in" as it
better reflects the function purpose.
This adds a deinit_idle_conns() function that's called on deinit to
release the per-thread idle connection management tasks. The global
task was already taken care of.
The freeing of pre-check callbacks was missing when this feature was
recently added with commit b53eb8790 ("MINOR: init: add the pre-check
callback"), let's do it to make valgrind happy.
Tim reported in issue #1676 that just like startup_logs, trash buffers
are not released on deinit since they're thread-local, but valgrind
notices it when quitting before creating threads like "-c -f ...". Let's
just subscribe the function to deinit in addition to threads' end.
The two "free(x);x=NULL;" in free_trash_buffers_per_thread() were
also simplified using ha_free().
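For reference, the simplification relies on the pattern below; a minimal
sketch, HAProxy's real ha_free() macro adds extra safety checks:

#include <stdlib.h>

/* free the pointed-to object and clear the pointer in one step */
#define ha_free_sketch(x) do { free(*(x)); *(x) = NULL; } while (0)

/* usage: char *buf = malloc(32); ... ha_free_sketch(&buf); */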
Tim reported in issue #1676 that we don't release startup logs if we
warn during startup and quit before creating threads (e.g. -c -f ...).
Let's subscribe deinit_errors_buffers() to both thread's end and
deinit. That's OK since it uses both per-thread and global variables,
and is idempotent.
Low footprint client machines may not have enough memory to download a
complete 16KB TLS record at once. With the new option the maximum
record size can be defined on the server side.
Note: before limiting the record size on the server side, a client should
consider using the TLS Maximum Fragment Length Negotiation Extension defined
in RFC6066.
This patch fixes GitHub issue #1679.
In issue #1677, Tim reported that we don't correctly free some shared
pools on exit. What happens in fact is that pool_destroy() is meant to
be called once per pool *pointer*, as it decrements the use count for
each pass and only releases the pool when it reaches zero. But since
pool_destroy_all() iterates over the pools list, it visits each pool
only once and will not eliminate some of them, which thus remain in the
list.
In an ideal case, the function should loop over all pools for as long
as the list is not empty, but that's pointless as we know we're exiting,
so let's just set the users count to 1 before the call so that
pool_destroy() knows it can delete and release the entry.
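A hedged sketch of the exit-path trick, with simplified stand-ins for the
pool API (the real pool_destroy() does more work than this):

struct pool_sketch {
    unsigned int users;   /* number of pool pointers sharing this entry */
};

static void pool_destroy_sketch(struct pool_sketch *p)
{
    if (p && !--p->users) {
        /* reached zero: really release the pool's storage here */
    }
}

/* on exit: force the count to 1 so a single call is enough to release it */
static void pool_destroy_on_exit_sketch(struct pool_sketch *p)
{
    p->users = 1;
    pool_destroy_sketch(p);
}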
This could be backported to all versions (memory.c in 2.0 and older) but
it's not a real problem in practice.
We thought that we could get rid of some DISGUISE() with commit a80e4a354
("MINOR: fd: add functions to set O_NONBLOCK and FD_CLOEXEC") thanks to
the calls being in a function but that was without counting on Coverity.
Let's put it directly in the function since most if not all callers don't
care about this result.
It appears that it is safe to perform a clean deinit at this point, so
let's do this to exercise the deinit paths some more.
Running `valgrind --leak-check=full --show-leak-kinds=all ./haproxy -vv` with
this change reports:
==261864== HEAP SUMMARY:
==261864== in use at exit: 344 bytes in 11 blocks
==261864== total heap usage: 1,178 allocs, 1,167 frees, 1,102,089 bytes allocated
==261864==
==261864== 24 bytes in 1 blocks are still reachable in loss record 1 of 2
==261864== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==261864== by 0x324BA6: hap_register_pre_check (init.c:92)
==261864== by 0x155824: main (haproxy.c:3024)
==261864==
==261864== 320 bytes in 10 blocks are still reachable in loss record 2 of 2
==261864== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==261864== by 0x26E54E: cfg_register_postparser (cfgparse.c:4238)
==261864== by 0x155824: main (haproxy.c:3024)
==261864==
==261864== LEAK SUMMARY:
==261864== definitely lost: 0 bytes in 0 blocks
==261864== indirectly lost: 0 bytes in 0 blocks
==261864== possibly lost: 0 bytes in 0 blocks
==261864== still reachable: 344 bytes in 11 blocks
==261864== suppressed: 0 bytes in 0 blocks
which is looking pretty good.
A config like the following:
global
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
resolvers unbound
nameserver unbound 127.0.0.1:53
will report the following leak when running a configuration check:
==241882== 6,991 (6,952 direct, 39 indirect) bytes in 1 blocks are definitely lost in loss record 8 of 13
==241882== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==241882== by 0x25938D: cfg_parse_resolvers (resolvers.c:3193)
==241882== by 0x26A1E8: readcfgfile (cfgparse.c:2171)
==241882== by 0x156D72: init (haproxy.c:2016)
==241882== by 0x156D72: main (haproxy.c:3037)
because the `.px` member of `struct resolvers` is not freed.
The offending allocation was introduced in
c943799c86 which is a reorganization that
happened during development of 2.4.x. This fix can likely be backported without
issue to 2.4+ and is likely not needed for earlier versions as the leak happens
during deinit only.
To make the deinit function a proper inverse of the init function we need to
free the `http_err_chunks`:
==252081== 311,296 bytes in 19 blocks are still reachable in loss record 50 of 50
==252081== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==252081== by 0x2727EE: http_str_to_htx (http_htx.c:914)
==252081== by 0x272E60: http_htx_init (http_htx.c:1059)
==252081== by 0x26AC87: check_config_validity (cfgparse.c:4170)
==252081== by 0x155DFE: init (haproxy.c:2120)
==252081== by 0x155DFE: main (haproxy.c:3037)
A memory leak was introduced when the ignore-empty option was added to
redirect rules. If there is no location when this option is set, the
redirection is aborted and processing continues. But when this happened, the
trash buffer allocated to format the redirect response was not released.
The bug was introduced by commit bc1223be7 ("MINOR: http-rules: add a new
"ignore-empty" option to redirects.").
This patch should fix the issue #1675. It must be backported to 2.5.
If the "close-spread-time" option is set to "infinite", active
connection closing during a soft-stop can be disabled. The 'connection:
close' header or the GOAWAY frame will not be added anymore to the
server's response and active connections will only be closed once the
clients disconnect. Idle connections will not be closed all at once when
the soft-stop starts anymore, and each idle connection will follow its
own timeout based on the multiple timeouts set in the configuration (as
is the case during regular execution).
This feature request was described in GitHub issue #1614.
This patch should be backported to 2.5. It depends on 'MEDIUM: global:
Add a "close-spread-time" option to spread soft-stop on time window'.
HAProxy crashes when "show ssl ca-file" is called on a ca-file
which contains a lot of certificates (127 in our test with
/etc/ssl/certs/ca-certificates.crt).
The root cause is that the function does not yield when there is no
available space when writing the details, and we could write a lot.
To fix the issue, we try to put the data with ci_putchk() after every
show_cert_detail() and we yield if ci_putchk() fails.
This patch also cleans up the code a little bit:
- the end label is now a real end with a return 1;
- i0 is used instead of (long)p1
- the ID is stored upon yield
Since the httpclient verify now has a fallback which disables SSL in
the httpclient without exiting haproxy at startup, we can safely
re-enable it by default.
It can still be disabled with "httpclient-ssl-verify none".
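For instance, a hedged example of opting out in the global section:

global
    # verification is enabled again by default; disable it explicitly
    # if the system CA store cannot be used
    httpclient-ssl-verify none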
The cafile_tree was never freed upon deinit, making valgrind and ASAN
complain when haproxy quits.
This could be backported as far as 2.2 but it requires the
ssl_store_delete_cafile_entry() helper from
5daff3c8ab.
Emit a warning when the ca-file couldn't be loaded for the httpclient,
and disable the SSL of the httpclient.
We must never be in a case where the verify is disabled without any
configuration, so better disable the SSL completely.
Move the check on the scheme above the initialization of the applet so
we can abort before initializing the appctx.
There were plenty of leftovers from old code that were never removed
and that are not needed at all since these files do not use any
definition depending on fcntl.h, let's drop them.
This gets rid of most open-coded fcntl() calls, some of which were passed
through DISGUISE() to avoid a useless test. The FD_CLOEXEC was most often
set without preserving previous flags, which could become a problem once
new flags are created. Now this will not happen anymore.
Instead of seeing each location manipulate the fcntl() themselves and
often forget to check previous flags, let's centralize the functions to
do this. It also allows to drop fcntl.h from most call places and will
ease the adoption of different OS-specific mechanisms if needed. Note
that the fd_set_nonblock() function purposely doesn't check the previous
flags as it's meant to be used on new FDs only.
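A hedged sketch of the two helpers; HAProxy's real versions may differ in
detail, but the asymmetry is the point: one reads the old flags, the other
purposely does not:

#include <fcntl.h>

/* for new FDs only: no F_GETFL, the old flags are intentionally ignored */
static int fd_set_nonblock_sketch(int fd)
{
    return fcntl(fd, F_SETFL, O_NONBLOCK);
}

/* preserve any previously set FD flags before adding FD_CLOEXEC */
static int fd_set_cloexec_sketch(int fd)
{
    int flags = fcntl(fd, F_GETFD);

    if (flags < 0)
        return flags;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}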
Despite what the 'close-spread-time' option should do, the
'connection:close' header was always added to HTTP responses during
soft-stops even with a soft-stop window defined.
This patch adds the proper random based closing to HTTP connections
during a soft-stop (based on the time left in the soft close window).
It should be backported to 2.5 once 'MEDIUM: global: Add a
"close-spread-time" option to spread soft-stop on time window' is
backported as well.
Some older systems may routinely return EWOULDBLOCK for some syscalls
while we tend to check only for EAGAIN nowadays. Modern systems define
EWOULDBLOCK as EAGAIN so that solves it, but on a few older ones (AIX,
VMS etc) both are different, and for portability we'd need to test for
both or we never know if we risk to confuse some status codes with
plain errors.
There were few entries, the most annoying ones are the switch/case
because they require to only add the entry when it differs, but the
other ones are really trivial.
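The switch/case pattern looks roughly like this (a sketch; the guard avoids
a duplicate case label on systems where both names share a value):

#include <errno.h>

static int is_retryable_sketch(int err)
{
    switch (err) {
    case EAGAIN:
#if defined(EWOULDBLOCK) && EWOULDBLOCK != EAGAIN
    case EWOULDBLOCK:   /* only on systems where the values differ */
#endif
        return 1;
    default:
        return 0;
    }
}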
__comp_fetch_init() only presets the maxzlibmem, and only when both
USE_ZLIB and DEFAULT_MAXZLIBMEM are set. The intent is to preset a
default value to protect the system against excessive memory usage
when no setting is set by the user.
Nowadays the entry in the global struct is always there so there's no
point anymore in passing via a constructor to possibly set this value.
Let's go the cleaner way by always presetting DEFAULT_MAXZLIBMEM to 0
in defaults.h unless these conditions are met, and always assigning it
instead of pre-setting the entry to zero. This is more straightforward
and removes some ifdefs and the last constructor. In addition, now the
setting has a chance of being found.
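A hedged sketch of the defaults.h pattern this describes:

/* preset DEFAULT_MAXZLIBMEM to 0 unless zlib is in use and a value was
 * provided at build time; the global setting is then assigned from it
 * unconditionally instead of being preset to zero. */
#if !defined(USE_ZLIB) || !defined(DEFAULT_MAXZLIBMEM)
#undef  DEFAULT_MAXZLIBMEM
#define DEFAULT_MAXZLIBMEM 0
#endif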
The constructor present there could be replaced with an initcall.
This one is set at level STG_PREPARE because it also zeroes the
lock_stats, and it's a bit odd that it could possibly have been
scheduled to run after other constructors that might already
preset some of these locks by accident.
Transport layers (raw_sock, ssl_sock, xprt_handshake and xprt_quic)
were using 4 constructors and 2 destructors. The 4 constructors were
replaced with INITCALL and the destructors with REGISTER_POST_DEINIT()
so that we do not depend on this anymore.
Pollers are among the few remaining blocks still using constructors
to register themselves. That's not needed anymore since the initcalls
so better turn to initcalls.
On some systems, the hard limit for ulimit -n may be huge, in the order
of 1 billion, and using this to automatically compute maxconn doesn't
work as it requires way too much memory. Users tend to hard-code maxconn
but that's not convenient to manage deployments on heterogeneous systems,
nor when porting configs to developers' machines. The ulimit-n parameter
doesn't work either because it forces the limit. What most users seem to
want (and it makes sense) is to respect the system imposed limits up to
a certain value and cap this value. This is exactly what fd-hard-limit
does.
This addresses github issue #1622.
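A hedged configuration example:

global
    # respect the system's hard limit on FDs, but never use more than
    # this many to derive maxconn automatically
    fd-hard-limit 50000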
Almost all of our hash-based LB algorithms are implemented as special
cases of something that can now be achieved using sample expressions,
and some of them have adopted some options to adapt their behavior in
ways that could also be achieved using converters.
There are users who want to hash other parameters that are combined
into variables, and who set headers from these values and use
"balance hdr(name)" for this.
Instead of constantly implementing specific options and having users
hack around when they want a real hash, let's implement a native hash
mode that applies to a standard sample expression. This way, any
fetchable element (including variables) may be used to construct the
hash, even modified by any converter if desired.
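A hedged configuration example of the native hash mode; the fetch and
converter shown are only illustrative:

backend be_hash
    # hash any fetchable expression, here a header run through a converter
    balance hash req.hdr(x-user),djb2
    server s1 192.0.2.10:80
    server s2 192.0.2.11:80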
Any type except bool can be cast to bin, even though bool can be cast to
string. That's a bit inconsistent, and prevents a boolean from being used as
the entry of a hash function while any other type can. This is a
problem when passing via a variable where someone could use:
... set-var(txn.bar) always_false
to temporarily disable something, but this would result in an empty
hash output when later doing:
... var(txn.bar),sdbm
Instead of using c_int2bin() as is done for the string output, better
enforce a set of inputs of exactly 0 or 1 so that a poorly written sample
fetch function does not result in a difficult to debug hash output.
Surprisingly, while almost all calls to a sample cast function carefully
avoid calling c_none(), sample_fetch_as_type() makes no effort regarding
this, while by nature, the function is most often called with an expected
output type similar to the one of the expression. Let's add it to shorten
the most common call path.
The use_backend and use-server contexts were not enumerated in
smp_resolve_args, and while use-server doesn't currently take an
expression, at least use_backend supports that, and both entries ought
to be listed for completeness. Now an error in a use_backend rule
becomes more precise, from:
[ALERT] (12373) : config : parsing [use-srv.cfg:33]: unable to find
backend 'foo' referenced in arg 1 of sample fetch
keyword 'nbsrv' in proxy 'echo'.
to:
[ALERT] (12307) : config : parsing [use-srv.cfg:33]: unable to find
backend 'foo' referenced in arg 1 of sample fetch
keyword 'nbsrv' in use_backend expression in proxy
'echo'.
This may be backported though this is totally harmless.
Since commit dd7e6c6dc ("BUG/MINOR: http-rules: completely free incorrect
TCP rules on error") free_act_rule() is called on some error paths, and one
of them involves incomplete redirect rules that may cause a crash if the
rule wasn't yet initialized, as shown in this config snippet:
frontend ft
mode http
bind *:8001
http-request redirect location /%[always_false,sdbm]
Let's simply make release_http_redir() more robust against null redirect
rules.
No backport needed since it seems that the only way to trigger this was
the extra check above that was merged during 2.6-dev.
The function checking the captures defined in a tcp-request content ruleset
didn't use the right rule arguments: "arg.trk_ctr" was used instead of
"arg.cap".
This patch must be backported as far as 2.2.
Since 2.5, it is possible to define TCP/HTTP rulesets in defaults
sections. However, rules defining a capture in defaults sections were not
properly handled because they were not shared with the proxies inheriting
from the defaults section. This led to a crash when haproxy tried to store a
new capture.
So now, to fix the issue, when a new proxy is created, its list of captures
points to the list of its defaults section. It may be NULL or not. All new
captures are prepended to this list. It is not a problem to share the same
defaults section between several proxies, because it is not altered and we
take care not to release it when the corresponding proxies are freed but
only when the defaults proxies are freed. To do so, defaults proxies are now
unreferenced at the end of the free_proxy() function instead of at the
beginning.
This patch should fix the issue #1674. It must be backported to 2.5.
Captures must only be defined in proxies with frontend capabilities or
in defaults sections used by proxies with frontend capabilities. Thus,
an extra check is added to be sure a defaults section defining a capture
will never be referenced by a backend.
Note that in this case, only named captures in "tcp-request content" or
"http-request" rules are possible. It is not possible in a defaults section
to declare a capture slot. Not yet at least.
This patch must be backported to 2.5. It is related to issue #1674.
When using qc_stream_desc_ack(), the stream instance may be freed if
there is no more data in its buffers. This also means that all frames
still stored waiting for ACK for this stream are freed via
qc_stream_desc_free().
This is particularly important in quic_stream_try_to_consume() where we
loop over the frames tree of the stream. A use-after-free is present, in
case the stream has been freed, in the "stream consumed" trace which
dereferences the frame. Fix this by first checking if the stream has been
freed or not.
This bug was detected by using ASAN + quic traces enabled.
It's long been known that queues didn't scale with threads for various
reasons ranging from the cost of the queue lock to the cost of the
massive amount of inter-thread wakeups.
But some recent reports showing deplorable perfs with threads used at
100% CPU helped us notice that the two elements above add on top of
each other:
- with plenty of inter-thread wakeups, the scheduler takes a lot of
time to dequeue pending tasks from the shared queue ;
- the lock held by the scheduler to do this slows down subsequent
task_wakeup() calls from the queue that are made under the
queue's lock
- the queue's lock slows down addition of new requests to the queue
and adds up to the number of needed queue entries for a steady
traffic.
But the cost of the shared queue has no reason for being paid because
it had already been paid when process_stream() added the request to
the queue. As such an instant wakeup is perfectly fit for this.
This is exactly what this patch does, it uses tasklet_instant_wakeup()
to dequeue pending requests, which has the effect of not bloating the
shared queue, hence not requiring the global queue lock, which in turn
results in the wakeup to be much faster, and the queue lock to be much
shorter. In the end, a test with 4k concurrent connections that was
being limited to 40-80k requests/s before with 16 threads, some of
which were stuck at 100% CPU now reaches 570k req/s with 4% idle.
Given that it's been found that it was possible to trigger the watchdog
on the queue lock under extreme conditions, and that such conditions
could happen when users want to protect their servers during a DoS, it
would definitely make sense to backport it to the most recent releases
(2.5 and 2.4 seem like good candidates especially because their scheduler
is modern enough to receive the change above). If a backport is performed,
the following patch is needed:
MINOR: task: add a new task_instant_wakeup() function
Remove the CS_EP_EOS flag erroneously set in qc_rcv_buf().
This fixes POST with abortonclose. Previously, request was preemptively
aborted by haproxy due to the incorrect EOS flag.
For the moment, EOS flag is not set anymore. It should be set to warn
about a premature close from the client.
Since the idle connections management changed to use eb-trees instead of MT
lists, a lock must be acquired to manipulate the servers'
idle/safe/available connection lists. However, an unprotected use remained
in connect_server(), when a connection is removed from an idle list if the
mux has no more streams available. Thus it is possible to remove a
connection from an idle list on one thread, while another one is looking for
an idle connection. Of course, this may lead to a crash.
To fix the bug, we must take care to acquire the idle connections lock
first. The bug was introduced by the commit f232cb3e9 ("MEDIUM: connection:
replace idle conn lists by eb trees").
The patch must be backported as far as 2.4.
Temporarily disable the SSL verify by default in the httpclient. The
initialization of the @system-ca during the init of the httpclient is a
problem in some cases.
The verify can be reactivated with "httpclient-ssl-verify required" in
the global section.
The httpclient HTTPS requests now enable the "verify required" option.
To achieve this, the "@system-ca" ca-file is configured in the
httpclient ssl server. Which means all the system CAs will be loaded at
haproxy startup.
Change the init order of the httpclient: a different init sequence is
required to allow a more complicated init.
The init is split into two parts:
- the first part is executed before check_config_validity(), which
allows to create proxies and more advanced stuff than STG_INIT, because
we might want to use things already initialized in haproxy (trash
buffers for example)
- the second part is executed after check_config_validity();
currently it is used for the log configuration.
This adds a call to function <fct> to the list of functions to be called at
the step just before the configuration validity checks. This is useful when
you need to create things as would have been done during the configuration
parsing and where the initialization should continue in the configuration
check.
It could be used for example to generate a proxy with multiple servers using
the configuration parser itself. At this step the trash buffers are allocated.
Threads are not yet started so no protection is required. The function is
expected to return non-zero on success, or zero on failure. A failure will make
the process emit a succinct error message and immediately exit.
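A hedged usage sketch; the callback name is hypothetical and registration
goes through the REGISTER_PRE_CHECK() initcall wrapper mentioned earlier
(commit b53eb8790):

/* runs just before the configuration validity checks */
static int my_pre_check_sketch(void)
{
    /* e.g. build a proxy with multiple servers via the config parser;
     * trash buffers are allocated, threads are not started yet */
    return 1;   /* non-zero: success; zero makes the process exit */
}

REGISTER_PRE_CHECK(my_pre_check_sketch);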
A conn-stream is never detached from an endpoint or an application alone,
except on a reset. Thus, to avoid any error, these functions are now
private. And a cs_destroy() function is added to destroy a conn-stream. This
function is called when a stream is released, on the front and back
conn-streams, and when a health-check is finished.
When a client abort is detected with the server conn-stream in CS_ST_INI
state, there is no reason to detach the endpoint because we know there is no
endpoint attached to this conn-stream. This patch depends on the commit
"BUG/MEDIUM: conn-stream: Set back CS to RDY state when the appctx is
created".
When an appctx is created on the server side, we now set the corresponding
conn-stream to the ready state (CS_ST_RDY). When it happens, the backend
conn-stream is in CS_ST_INI state. It is not consistent to leave the
conn-stream in this state because it means it is possible to have a target
installed in CS_ST_INI state, while with a connection, the conn-stream is
switched to CS_ST_RDY or CS_ST_EST state.
It is especially ambiguous because we may be tempted to think there is no
endpoint attached to the conn-stream before the CS_ST_CON state. And it is
indeed the reason for a bug leading to a crash, because a cs_detach_endp()
is performed if an abort is detected on the backend conn-stream in CS_ST_INI
state. With a mux or an appctx attached to the conn-stream, the "->endp"
field is set to NULL. It is unexpected. The API will be changed to be sure
it is not possible. But it exposes a consistency issue with applets.
So, the conn-stream must not stay in CS_ST_INI state when an appctx is
attached. But there is no reason to set it to CS_ST_REQ. The conn-stream
must be set to CS_ST_RDY to handle applets and connections in the same
way. Note that if only the target is set but no appctx is created, the
backend conn-stream is switched from CS_ST_INI to CS_ST_REQ state to be able
to create the corresponding appctx. This part is unchanged.
This patch depends on the commit "MINOR: backend: Don't allow to change
backend applet".
The ambiguity exists on previous versions. But the issue is
2.6-specific. Thus, no backport is needed.
This part was inherited from haproxy-1.5. But for a while now (at least
since 1.8), the backend applet, once created, is no longer changed. Thus
there is no reason to still check if the target has changed. And in fact,
if it were still possible, there would be a memory leak because the old
applet would be lost and never released.
There is no reason to backport this fix because the leak only exists on a
dead code path.
When we want to serve a resource from the cache, if the applet creation
fails, the "cache-use" action must not yield. Otherwise, the stream will
hang. Instead, we now disable the cache. Thus the request may be served by
the server.
This patch must be backported as far as 1.8.
cs_applet_shut() now relies on the CS_EP_SH* flags to perform the applet
shutdown. It means the applet release callback is called if neither the
CS_EP_SHR nor the CS_EP_SHW flag is set. And it sets these flags, CS_EP_SHRR
and CS_EP_SHWN more specifically, before exiting.
This way, cs_applet_shut() is really the equivalent of cs_conn_shut().
This function does not release the applet but only calls the applet release
callback. It is equivalent to cs_conn_shut() but for applets. Thus the
function is renamed cs_applet_shut().
These functions don't close the connection but only perform shutdowns for
reads and writes at the mux level. It is a bit ambiguous. Thus,
cs_conn_close() is renamed cs_conn_shut() and cs_conn_drain_and_close() is
renamed cs_conn_drain_and_shut(). Both functions rely on
cs_conn_shutw() and cs_conn_shutr().
Since the recent changes about the conn-streams, the stream dump in the
"show sess all" command is a bit mangled. The front and back conn-streams
are now properly displayed (csf and csb). In addition, when there is no
backend endpoint, "APPCTX" was always reported. Now, "NONE" is reported in
this case.
It is 2.6-specific. No backport needed.
Since the previous patch
MINOR: mux-quic: split xfer and STREAM frames build
there is no way to report an error in qcs_xfer_data().
This should fix github issue #1669.
As anticipated in commit 211ea252d ("BUG/MINOR: logs: fix logsrv leaks
on clean exit"), there were indeed other corner cases that were not
properly covered. Setting the http client's ring_name to NULL makes the
sink lookup crash on startup in sink_find() with a config as simple as:
global
log ring@buf0 local0
The fields must be properly initialized (both config file name and
the ring_name). This only needs to be backported if/when the commit
above is backported.
Do not initialize the mux task timeout if the client timeout is set to 0 in
the configuration. Check for the task before queuing it in qc_io_cb() or
qc_detach().
This fixes a crash when the client timeout is 0 or undefined.
Unsubscribe from the lower layer on qc_release. This ensures that the lower
layer won't wake up a null tasklet after the MUX has been released, and may
prevent a crash.
It is possible that the xprt layer has to process retransmitted STREAM
frames after the mux was released. In this case, there is no need to try to
wake it up.
Starting from OpenSSLv3, providers are at the core of cryptography
functions. Depending on the provider used, the way the SSL
functionalities work could change. This new 'show ssl providers' CLI
command allows to show what providers were loaded by the SSL library.
This is required because the provider configuration is exclusively done
in the OpenSSL configuration file (/usr/local/ssl/openssl.cnf for
instance).
A new line is also added to the 'haproxy -vv' output containing the same
information.
Complete the qc_send() function. After having processed each qcs emission,
it will now retry sending on the qcs where the transfer can continue. This
is useful when the qc_stream_desc buffer is full and there is still data
present in the qcs buf.
To implement this, each eligible qcs is inserted in a new list
<qcc.send_retry_list>. This is done on send notification from the
transport layer through qcc_streams_sent_done(). Emission is retried until
send_retry_list is empty or the transport layer cannot accept more data.
Several send operations are now called in two different places. Thus a
new _qc_send_qcs() function is defined to factorize the code.
This change should maximize the throughput during QUIC transfers.
MUX streams can now allocate multiple buffers for sending. The quic-conn is
responsible for limiting the total count of allocated buffers. A
counter is stored in the new field <stream_buf_count>.
For the moment, the value is hardcoded to 30.
On stream buffer allocation failure, the qcc MUX is flagged with
QC_CF_CONN_FULL. The MUX is then woken up as soon as a buffer is freed,
most notably on ACK reception.
Acknowledgement of STREAM frames has become more complex with the
introduction of stream multi-buffers. Two functions were executing roughly
the same set of instructions in xprt_quic.c.
To simplify this, move the code complexity into a new function
qc_stream_desc_ack(). It will handle offset calculation, removal of
data, freeing the oldest buffer and freeing the stream instance if required.
The qc_stream_desc API is cleaner as the ambiguous
qc_stream_desc_free_buf() function has been removed.
Complete the qc_stream_desc type to support multiple buffers on
emission. The main objective is to increase the transfer throughput.
The MUX is now able to transfer more data without having to wait for ACKs.
To implement this feature, a new type qc_stream_buf is declared. It
encapsulates a buffer with a list element. New functions are defined to
retrieve the current buffer, release it or allocate a new one. Each
buffer is kept in the qc_stream_desc list until all of its data is
acknowledged.
On the MUX side, a qcs uses the current stream buffer to transfer data.
Once the buffer is full, it is released and a new one will be allocated
on a future qc_send() invocation.
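A hedged sketch of the relationship between the types (simplified stand-ins;
the real definitions differ in detail):

#include <stddef.h>

struct list_sk { struct list_sk *n, *p; };
struct buffer_sk { char *area; size_t size, data, head; };

/* one emission buffer; kept in the qc_stream_desc list until all of its
 * data has been acknowledged */
struct qc_stream_buf_sk {
    struct buffer_sk buf;
    struct list_sk   list;
};

/* per-stream descriptor: a list of buffers instead of a single one */
struct qc_stream_desc_sk {
    struct list_sk buf_list;        /* qc_stream_buf_sk elements */
    struct qc_stream_buf_sk *buf;   /* current buffer used by the MUX */
};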
Add a new member <qc> in the qc_stream_desc structure. This change is
possible since the previous patch which added a quic-conn argument to
qc_stream_desc_new().
The purpose of this change is to simplify the future evolution of the
qc-stream-desc API. This will avoid repeating qc as an argument in various
functions which already use a qc_stream_desc.
Simplify the qcs/qc_stream_desc model. Each type now has its own tree
node, stored respectively in the qcc and quic-conn trees. It is still
necessary to mark the stream as detached by the MUX once all data is
transferred to the lower layer.
This might slightly improve the performance of ACK management as now
only the lookup in quic-conn is necessary. On the other hand, the memory
size of the qcs structure is increased.
Regroup all type definitions and functions related to qc_stream_desc in
the source file src/quic_stream.c.
qc_stream_desc complexity will be increased with the development of Tx
multi-buffers. Having a dedicated module is useful to avoid mixing it with
pure transport/quic-conn code.
Split qcs_push_frame() into two functions.
The first one is qcs_xfer_data(). Its purpose is to transfer data from
qcs.tx.buf to the qc_stream_desc buffer. The second function is named
qcs_build_stream_frm(). It generates a STREAM frame using the
qc_stream_desc buffer as payload.
The trace events previously associated with qcs_push_frame() have also
been split in two to reflect the new code structure.
The purpose of this refactoring is first to better reflect how sending
is implemented. It will also simplify the implementation of Tx
multi-buffers per stream.
The DH parameters used for OpenSSL versions 1.1.1 and earlier were
changed. For OpenSSL 1.0.2 and LibreSSL the newly introduced
ssl_get_dh_by_nid function is not used since we keep the original
parameters.
DHE ciphers do not present a security risk if the key is big enough but
they are slow and mostly obsoleted by ECDHE. This patch removes any
default DH parameters. This will effectively disable all DHE ciphers
unless a global ssl-dh-param-file is defined, or
tune.ssl.default-dh-param is set, or a frontend has DH parameters
included in its PEM certificate. In this latter case, only the frontends
that have DH parameters will have DHE ciphers enabled.
Explicitly adding DHE ciphers on a "bind" line will not be enough to
actually enable DHE. We would still need to know which DH parameters to
use, so one of the three conditions described above must be met.
This request was described in GitHub issue #1604.
RFC7919 defined sets of DH parameters supposedly strong enough to be
used safely. We will then use them when we can instead of our hard coded
ones (namely the ffdhe2048 and ffdhe4096 named groups).
The ffdhe2048 and ffdhe4096 named groups were integrated in OpenSSL
starting with version 1.1.1. Instead of duplicating those parameters in
haproxy for older versions of OpenSSL, we will keep using our own
parameters when they are not provided by the SSL library.
We will also need to keep our 1024 bits DH parameters since they are
considered not safe enough to have a dedicated named group in RFC7919,
but we must still keep them for backward compatibility with old Java
clients.
This request was described in GitHub issue #1604.
MacOS can feed fc_rtt, fc_rttvar, fc_sacked, fc_lost and fc_retrans
so let's expose them on this platform.
Note that at the tcp(7) level, the API is slightly different, as
struct tcp_info is called tcp_connection_info and TCP_INFO is
called TCP_CONNECTION_INFO, so for convenience these ones were
defined to point to their equivalent. However there is a small
difference now in that tcpi_rtt is called tcpi_rttcur on this
platform, which forces us to make a special case for it before
other platforms.
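A hedged sketch of the compatibility mapping on macOS (illustrative; the
real code differs in detail):

#if defined(__APPLE__)
#include <netinet/tcp.h>
/* map the macOS names onto the ones used by the rest of the code */
#define TCP_INFO TCP_CONNECTION_INFO
#define tcp_info tcp_connection_info
/* note: the smoothed RTT field is tcpi_rttcur here, not tcpi_rtt */
#endif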
While there is some overlap between what each OS provides in terms of
retrievable info, each set is not a real subset of another one and this
results in increasing complexity when trying to add support for new OSes.
Let's just condition each item to the OS that support it. It's not pretty
but at least it will avoid a real mess later.
Note that fc_rtt and fc_rttvar are supported on any OS that has TCP_INFO,
not just linux/freebsd/netbsd, so we continue to expose them unconditionally.
If the response is compressed, we must update the HTX start-line flags and
the HTTP message flags. It is especially important if another filter is
enabled. Otherwise, there is no way to know the C-L header was removed and
a T-E one was added, except by looping on headers.
This patch is related to the issue #1660. It must be backported as far as
2.0 (for the HTX part only).
Instead of relying on the HTX start-line flags, it is better to rely on the
http_msg flags to know if a content-length header can be added or not. In
addition, if the header is added, the HTTP_MSGF_CNT_LEN flag must be added.
Because of this bug, an invalid message could be emitted when the response
was compressed, because it could contain both C-L and T-E headers.
This patch should fix the issue #1660. It must be backported as far as 2.2.
A released qc_stream_desc is freed as soon as all its buffer content has
been acknowledged. However, it may still contain other frames waiting
for ACKs and pointing to the deleted buffer content. This can happen on
retransmission.
When freeing a qc_stream_desc, free all its frames in the acked_frms tree
to fix the memory leak. This may also possibly fix a crash on
retransmission.
Now, the frames are properly removed from a packet. This ensures we do
not retransmit a frame whose buffer is deallocated.
The issue only concerns the backend connection. The conn-stream is now owned
by the stream and persists during the whole stream life. Thus we must not
overwrite it when the backend connection is released.
It is 2.6-specific. No backport is needed.
While it's often a pain to try to figure a UNIX socket address, the
server ones are reliable and may be emitted in the check provided
they are retrieved in time. We cannot rely on addr_to_str() because
it only reports "unix" since it may be used to log client addresses
or listener addresses (which are renamed).
The address length was extended to 256 chars to deal with long paths
as previously it was limited to INET6_ADDRSTRLEN+1.
This addresses github issue #101. There's no point backporting this,
external checks are almost never used.
During the config parsing we preset the server's address and port, but
that's pointless since it's replaced during each check in order to deal
with the possibility that the address was changed since.
Github issue #472 reports a problem with short client connections making
stick-table entries disappear. The problem is in fact totally different
and stems at the connection establishment step.
What happens is that the stick-table there has a single entry. The
"stick-on" directive is forced to purge an existing entry before being
able to create a new one. The new entry will be committed during the
call to process_store_rules() on the response path.
But if the client sends the FIN immediately after the connection is set
up (e.g. using nc -z) then the SHUTR is received and will cancel the
connection setup just after it starts. This cancellation will induce a
call to cs_shutw() which will in turn leave the server-side state in
ST_DIS. This transition from ST_CON to ST_DIS doesn't belong to the
list of handled transition during the connection setup so it will be
handled right after on the regular path, causing the connection to be
closed. Because of this, we never pass through back_establish() and
the backend's analysers are never set on the response channel, which
is why process_store_rules() is not called and the stick-tables entry
never committed.
The comment above the code that causes this transition clearly says
that the function is to be used after the connection is established
with the server, but there's no such protection, and we always have
the AUTO_CLOSE flag there (but there's hardly any available condition
to eliminate it).
This patch adds a test for the connection not being in ST_CON or for
option abortonclose being set. It's sufficient to do the job and it
should not cause issues.
One concern was that the transition could happen during cs_recv()
after the connection switches from CON to RDY then the read0 would
be taken into account and would cause DIS to appear, which is not
handled either. But that cannot happen because cs_recv() doesn't do
anything until it's in ST_EST state, hence the read0() cannot be
called from CON/RDY. Thus the transition from CON to DIS is only
possible in back_handle_st_con() and back_handle_st_rdy() both of
which are called when dealing with the transition already, or when
abortonclose is set and the client aborts before connect() succeeds.
It's possible that some further improvements could be made to detect
this specific transition but it doesn't seem like anything would have
to be added.
This issue was first reported on 2.1. The abortonclose area is very
sensitive so it would be wise to backport slowly, and probably no
further than 2.4.
Emit a CONNECTION_CLOSE if the app layer cannot be properly initialized
in qc_xprt_start. This forces the quic-conn to enter the closing state
before being closed.
Without this, quic-conn normal operations continue, despite the
app-layer reported as not initialized. This behavior is undefined, in
particular when handling STREAM frames.
Fix the return value used in quic-conn start callback for error. The
caller expects a negative value in this case.
Without this patch, the quic-conn and the connection stack are not
closed despite an initialization failure error, which is an undefined
behavior and may cause a crash in the end.
In quic_session_accept(), the connection is in charge of calling the
quic-conn start callback. If this callback fails for whatever reason,
there is a crash because of an explicit session_free().
This happens because the connection is now the owner of the session due
to the previous conn_complete_session() call. It will automatically call
session_free(). Fix this by skipping the explicit session_free() invocation
on error.
In practice, this has currently never happened as there are only limited
cases of failure for conn_xprt_start for QUIC.
Implement qc_destroy(). This callback is used to quickly release all MUX
resources.
session_free() uses this callback. Currently, it can only be called if
there was an error during connection initialization. If not defined, the
process crashes.
When an HTTP client is started on an HAProxy compiled without SSL
support, an error is triggered when HTTPS is used. In this case, the freshly
created conn-stream is released. But this code is specific to the non-SSL
part. Thus it is moved into the right #if/#else section.
This patch should fix the issue #1655.
The commit 744451c7c ("BUG/MEDIUM: mux-h1: Properly detect full buffer cases
during message parsing") introduced a regression if the trailers are not
received all at once. Indeed, in this case, nothing is appended in the
channel buffer, while there are some data in the input buffer. In this case,
we must not request more room from the upper layer, especially because the
channel buffer can be empty.
To fix the issue, on trailers parsing, we consider the H1 stream as
congested when the max size allowed is reached. Of course, the H1 stream is
also considered as congested if the trailers are too big and the channel
buffer is not empty.
This patch should fix the issue #1657. It must be backported as far as 2.0.
For all muxes, the function responsible for releasing a mux is always called
with a defined mux. Thus there is no reason to test if it is defined or not.
Note the patch may seem huge but it is just because of indentation changes.
Several muxes (h2, fcgi, quic) don't support protocol upgrades. For these
muxes, there is no reason to have code to support them. Thus in the destroy
callback, there is now a BUG_ON() and the release function is simplified
because the connection is always owned by the mux.
Once a mux is initialized, the underlying connection always exists from its
point of view and it is never removed until the mux is released. It may be
owned by another mux during an upgrade, but the pointer remains set. Thus
there is no reason to test it in the destroy callback function.
This patch should fix the issue #1652.
The doc states that if timeout http-keep-alive is not set, timeout
http-request is used instead. As implemented in commit 15a4733d5
("BUG/MEDIUM: mux-h2: make use of http-request and keep-alive timeouts"), we
use http-keep-alive unconditionally between requests, with a fallback on
client/server. Let's make sure http-request is always used as a fallback
for http-keep-alive first.
This needs to be backported wherever the commit above is backported.
Thanks to Christian Ruppert for spotting this.
Commit 15a4733d5 ("BUG/MEDIUM: mux-h2: make use of http-request and
keep-alive timeouts") omitted to check the side of the connection, and
as a side effect, automatically enabled timeouts on idle backend
connections, which is totally contrary to the principle that they
must be autonomous.
This needs to be backported wherever the patch above is backported.
If the client does not send an ALPN, the SSL ALPN negotiation callback
is not called. However, the handshake is reported as successful. Check
just after SSL_do_handshake whether an ALPN was negotiated. If not, emit a
CONNECTION_CLOSE with a TLS alert to close the connection.
This prevents a crash in qcc_install_app_ops() called with NULL as second
parameter value.
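A hedged sketch of the check using the standard OpenSSL API (error handling
simplified; the caller is assumed to emit the CONNECTION_CLOSE):

#include <openssl/ssl.h>

/* returns 0 if the handshake completed with a negotiated ALPN,
 * -1 otherwise */
static int check_alpn_sketch(SSL *ssl)
{
    const unsigned char *alpn;
    unsigned int alpn_len;

    if (SSL_do_handshake(ssl) != 1)
        return -1;

    SSL_get0_alpn_selected(ssl, &alpn, &alpn_len);
    if (!alpn_len)
        return -1;   /* handshake OK but no ALPN negotiated */
    return 0;
}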
Instead of testing if a conn-stream exists or not, we rely on the
CS_EP_ORPHAN endpoint flag. In addition, if possible, we access the endpoint
from the h1s. Finally, the endpoint flags are now reported in trace messages.
cs_free_cond() must now be used to remove a CS. cs_free() may be used on
an error path to release a freshly allocated but unused CS. But in all other
cases cs_free_cond() must be used. This function takes care to release the
CS only if it is possible (no app and detached from any endpoint).
In fact, this function is only used internally. From the outside,
cs_detach_* functions are used.
It is a partial revert of 54e85cbfc ("MAJOR: check: Use a persistent
conn-stream for health-checks"). But with the CS refactoring, the result is
cleaner now. A CS is allocated when a new health-check run is started. The
same CS is then used throughout the run. If there are several connections,
the endpoint is just reset. At the end of the run, the CS is released. It
means, in the tcp-check part, the CS is always defined.
process_stream() and all associated functions now manipulate conn-streams.
stream-interfaces are no longer used. In addition, function to dump info
about a stream no longer print info about stream-interfaces.
cs_conn_io_cb(), cs_conn_sync_recv() and cs_conn_sync_send() are moved in
conn_stream.c. Associated functions are moved too (cs_notify, cs_conn_read0,
cs_conn_recv, cs_conn_send and cs_conn_process).
The remaining flags and associated functions are moved into the conn-stream
scope. These flags are set on the endpoint and not the conn-stream
itself. This way it will be possible to get them from the mux or the
applet. The functions to get or set these flags are renamed accordingly with
the "cs_" prefix and updated to manipulate a conn-stream instead of a
stream-interface.
The si_conn_cb variable is renamed cs_data_conn_cb. In addition, its
associated functions are also renamed: si_cs_recv(), si_cs_send() and
si_cs_process() are renamed cs_conn_recv(), cs_conn_send() and
cs_conn_process(). These functions are updated to manipulate conn-streams
instead of stream-interfaces.
Data callbacks were only used for streams attached to a connection and
for health-checks. However there is a callback used by task_run_applet. So,
si_applet_wake_cb() is first renamed to cs_applet_process() and is
defined as the data callback for streams attached to an applet. This way,
this part now manipulates a conn-stream instead of a stream-interface. In
addition, applets are no longer handled as an exception in this part.
si_update_both() is renamed stream_update_both_cs() and moved into stream.c.
The function is slightly changed to manipulate the stream instead of the
front and back conn-streams.
si_update_rx(), si_update_tx() and si_update() are renamed cs_update_rx(),
cs_update_tx() and cs_update() and updated to manipulate a conn-stream
instead of a stream-interface.
It is a transient commit. It should ease next changes about the conn-stream
refactoring. At the end these functions will be moved in the conn-stream
scope.
si_register_applet() and si_applet_release() are renamed
cs_register_applet() and cs_applet_release() and now manipulate a
conn-stream instead of a stream-interface.
si_shutr(), si_shutw(), si_chk_rcv() and si_chk_snd() are moved in the
conn-stream scope and renamed, respectively, cs_shutr(), cs_shutw(),
cs_chk_rcv(), cs_chk_snd() and manipulate a conn-stream instead of a
stream-interface.
Some conn-stream functions are only used when there is a connection. Thus,
they were renamed with the "cs_conn_" prefix. In addition, we expect to have
a connection, so a BUG_ON is added to be sure the functions are never called
in another context.
The wait_event structure is moved into the conn-stream. The tasklet is only
created if the conn-stream is attached to a mux and released when the mux is
detached. This implies a subtle change: in the stream_int_chk_rcv()
function, the wakeup of the tasklet was removed because there is no longer a
tasklet at this stage (stream_int_chk_rcv() is a callback function of
si_embedded_ops).
To be able to move wait_event from the stream-interface to the conn-stream,
we must be prepared to handle errors when a mux is attached to a
conn-stream. Indeed, the wait_event's tasklet will be allocated when both a
mux and a stream are attached to a conn-stream. So, we must be prepared to
handle allocation errors.
These flags only concern the connection part. In addition, this is required
by a next commit, to avoid circular deps. Thus CS_SHR_* and CS_SHW_* were
renamed with the "CO_" prefix.
si_connect() is moved in backend.c and renamed as do_connect_server(). In
addition, the function now manipulates a stream instead of a
stream-interface.
si_retnclose() is used to send a reply to a client before closing. It is not
used on the server side, even though the function is generic. Thus, it is
renamed stream_retnclose() and moved into the stream scope. The function now
handles a stream and explicitly sends a message to the client.
The stream-interface state (SI_ST_*) is now in the conn-stream. It is a
mechanical replacement for now. Nothing special. SI_ST_* and SI_SB_* were
renamed accordingly. Utils functions to manipulate these infos were moved
under the conn-stream scope.
But it could be good to keep in mind that this part should be
reworked. Indeed, at the CS level, we only need to know if it is ready to
receive or to send. The conn-stream states from INI to EST are only used on
the server side. The client CS is immediately set to EST. Thus current
SI_ST_* states should probably be moved to the stream to reflect the server
connection state during the establishment stage.
Only the server side is concerned by the stream-interface error type. It is
useless to have an err_type field on the client side. So, it is now moved to
the stream. SI_ET_* are renamed STRM_ET_* and moved to the stream-t.h header
file.
The previous connection state on the client side was only used for debugging
purposes to report client close. But this may be handled when the client
stream-interface is switched from SI_ST_DIS to SI_ST_CLO.
So, there only remains the previous connection state on the server side that
is used by the stream, in process_stream(), to be able to set the correct
termination flags. Thus, instead of keeping this info in the
stream-interface for only one side, the info is now stored in the stream
itself.
Flag to get the source ip/port with getsockname is now handled at the stream
level. Thus SI_FL_SRC_ADDR stream-int flag is replaced by SF_SRC_ADDR stream
flag.
Flag to consider a stream as independent is now handled at the conn-stream
level. Thus SI_FL_INDEP_STR stream-int flag is replaced by CS_FL_INDEP_STR
conn-stream flag.
Flag to not wake the stream up on I/O is now handled at the conn-stream
level. Thus SI_FL_DONT_WAKE stream-int flag is replaced by CS_FL_DONT_WAKE
conn-stream flag.
Flags to disable lingering and half-close are now handled at the conn-stream
level. Thus SI_FL_NOLINGER and SI_FL_NOHALF stream-int flags are replaced by
CS_FL_NOLINGER and CS_FL_NOHALF conn-stream flags.
Instead of setting a stream-interface flag to then set the corresponding
conn-stream endpoint flag, we now only rely on the conn-stream endpoint. Thus
SI_FL_KILL_CON is replaced by CS_EP_KILL_CONN.
In addition si_must_kill_conn() is replaced by cs_must_kill_conn().
Instead of relying on the conn-stream error, via CS_FL_ERR flags, we now
directly use the error at the endpoint level with the flag CS_EP_ERROR. It
should be safe to do so. But we must be careful because it is still possible
that an error is processed too early. Anyway, a conn-stream always has a
valid endpoint, possibly detached from any mux or applet, but still valid.
SI_FL_ERR is removed and replaced by CS_FL_ERROR. It is a transient patch
because the idea is to rely on the endpoint to handle errors at this
level. But if for any reason it is not possible, the stream-interface flags
will still be replaced.
The expiration date in the stream-interface was only used on the server side
to set the connect, queue or turn-around timeout. It was checked on the
frontend stream-interface, but never used concretely. So it was removed and
replaced by a connect expiration date in the stream itself. Thus, SI_FL_EXP
flag in stream-interfaces is replaced by a stream flag, SF_CONN_EXP.
The source and destination addresses at the applicative layer are moved from
the stream-interface to the conn-stream. This simplifies the code a bit and
it is a logical step to remove the stream-interface.
The conn_retries counter was set to the max value and decremented at each
connection retry. Thus the counter reflected the number of retries left and
not the real number of retries. All calculations of redispatch or reporting
of number of retries experienced were made using subtracts from the
configured retries, which was complicated and didn't bring any benefit.
Now, this counter is set to 0 and incremented at each retry. We know we've
reached the maximum allowed connection retries by comparing it to the
configured value. In all other cases, we directly use the counter.
This patch should address the feature request #1608.
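As a rough sketch of the new counting direction (field names follow the
stream and proxy structs, assumed here; not the actual patch):

    s->conn_retries++;                        /* one more retry attempted */
    if (s->conn_retries >= s->be->conn_retries) {
        /* configured maximum reached: stop retrying */
    }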
The conn_retries counter may be moved into the stream structure. It only
concerns the connection establishment. The frontend stream-interface does not
use it. So it is a logical change.
The L7 retries only concern the stream when a server connection is
established. Thus instead of storing the L7 buffer into the
stream-interface, it may be moved to the stream. And because it is only
available for HTTP streams, it may be moved in the HTTP transaction.
Associated flags are also moved into the HTTP transaction.
At many places, we now use the new CS functions to get a stream or a channel
from a conn-stream instead of using the stream-interface API. It is the
first step to reduce the scope of the stream-interfaces. The main change
here is about the applet I/O callback functions. Before the refactoring, the
stream-interface was the appctx owner. Thus, it was heavily used. Now, as
far as possible, the conn-stream is used. Of course, many calls to the
stream-interface API remain.
CS_FL_ISBACK is a new flag, set on backend conn-streams. We must just be
careful to preserve this flag when the endpoint is detached from the
conn-stream.
Instead of testing if a conn-stream exists or not, we rely on CS_EP_ORPHAN
endpoint flag. In addition, if possible, we access the endpoint from the
mux_pt context. Finally, the endpoint flags are now reported in trace
messages.
All old flags CS_FL_* are now moved in the endpoint scope and renamed
CS_EP_* accordingly. It is a systematic replacement. There is no true change
except for the health-check and the endpoint reset. Here it is a bit special
because the same conn-stream is reused. Thus, we must handle endpoint
allocation errors. To do so, cs_reset_endp() has been adapted.
Thanks to this last change, it will now be possible to simplify the
multiplexer and probably the applets too. A review must also be performed to
remove some flags in the channel or the stream-interface. The HTX will
probably be simplified too. Finally, there is now some place in the
conn-stream to move info from the stream-interface.
The conn-stream endpoint is now shared between the conn-stream and the
applet or the multiplexer. If the mux or the applet is created first, it is
responsible to also create the endpoint and share it with the conn-stream.
If the conn-stream is created first, it is the opposite.
When the endpoint is only owned by an applet or a mux, it is called an
orphan endpoint (there is no conn-stream). When it is only owned by a
conn-stream, it is called a detached endpoint (there is no mux/applet).
The last entity that owns an endpoint is responsible to release it. When a
mux or an applet is detached from a conn-stream, the conn-stream
relinquishes the endpoint to recreate a new one. This way, the endpoint
state is never lost for the mux or the applet.
It is a transient commit to prepare next changes. Now, when a conn-stream is
created from an applet or a multiplexer, an endpoint is always provided. In
addition, the API to create a conn-stream was specialized to have one
function per type.
The next step will be to share the endpoint structure.
It is a transient commit to prepare next changes. It is possible to pass a
pre-allocated endpoint to create a new conn-stream. If it is NULL, a new
endpoint is created, otherwise the existing one is used. There is no other
change at the conn-stream level.
In the applets, all conn-streams are created with no pre-allocated
endpoint. But for multiplexers, an endpoint is systematically created before
creating the conn-stream.
Some CS flags, only related to the endpoint, are moved into the endpoint
struct. More will probably be moved later. Those are not critical. So it
is pretty safe to move them now and this will ease next changes.
Group the endpoint target of a conn-stream, its context and the associated
flags in a dedicated structure in the conn-stream. It is not inlined in the
conn-stream structure. There is a dedicated pool.
For now, there is no complexity. It is just an indirection to get the
endpoint or its context. But the purpose of this structure is to be able to
share a refcounted context between the mux and the conn-stream. This way, it
will be possible to preserve it when the mux is detached from the
conn-stream.
The function cs_init() is only called by cs_new(). The conn-stream
initialization will be reviewed. It is easier to do it in cs_new() instead
of using a dedicated function. cs_new() is pretty simple, there is no reason
to split the code in this case.
This change is only significant for the multiplexer part. For the applets,
the context and the endpoint are the same. Thus, there is not much change. For
the multiplexer part, the connection was used to set the conn-stream
endpoint and the mux's stream was the context. But it is a bit strange
because once a mux is installed, it takes over the connection. In a
wonderful world, the connection should be totally hidden behind the mux. The
stream-interface and, to a lesser extent, the stream, still access the
connection because that was inherited from the pre-multiplexer era.
Now, the conn-stream endpoint is the mux's stream (an opaque entity for the
conn-stream) and the connection is the context. Dedicated functions have
been added to attach an applet or a mux to a conn-stream.
The appctx owner is now always a conn-stream. Thus, it can be set during the
appctx allocation. But, to do so, the conn-stream must be created first. It
is not a problem on the server side because the conn-stream is created with
the stream. On the client side, we must take care to create the conn-stream
first.
This change should ease other changes about the applets bootstrapping.
This patch is mandatory to invert the endpoint and the context in the
conn-stream. There is no common type (at least for now) for the entity
representing a mux (h1s, h2s...), thus we must set its type when the
endpoint is attached to a conn-stream. There are 2 types of conn-stream
endpoints: the mux (CS_FL_ENDP_MUX) and the applet (CS_FL_ENDP_APP).
For now there is not much change. Only the appctx is passed as argument when
the .init callback function is called. And it is not possible to yield at
this stage. It is not a problem because the feature is not used. Only the
lua defines this callback function for the lua TCP/HTTP services. The idea
is to be able to use it for all applets to initialize the appctx context.
cs_free() must not be called when we fail to allocate the conn-stream in
h1s_new_cs() function. This bug was introduced by the commit cda94accb
("MAJOR: stream/conn_stream: Move the stream-interface into the
conn-stream").
It is 2.6-specific, no backport is needed.
It was mentioned in issue #12 that expired entries would appear with a
negative expire delay in "show cache". Instead of listing them, let's
just evict them.
This could be backported to all versions since this was reported on
1.8 already.
It was reported in issue #13 that a GOAWAY frame was sent on timeout even
if no SETTINGS frame was sent. The approach imagined by then was to track
the fact that a SETTINGS frame was already sent to avoid this, but that's
already what is done through the state, though it doesn't stand due to the
fact that we switch the frame to the error state. Thus instead what we're
doing here is to set the GOAWAY_FAILED flag in h2c_error() before
switching to the ERROR state when the state indicates we've not yet sent
settings, and refrain from sending anything from the h2c_send_goaway_error()
function for such states.
This could be backported to all versions where it applies well.
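A simplified sketch of the change in h2c_error() (state and flag names from
the H2 mux; the real condition is slightly more detailed):

    static void h2c_error(struct h2c *h2c, enum h2_err err)
    {
        /* before switching to ERROR, remember that no SETTINGS frame
         * was ever sent so that no GOAWAY will be emitted later */
        if (h2c->st0 < H2_CS_SETTINGS1)
            h2c->flags |= H2_CF_GOAWAY_FAILED;
        h2c->errcode = err;
        h2c->st0 = H2_CS_ERROR;
    }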
qcs by_id field has been replaced by a new field named "id". Adjust the
h3_debug_printf traces. This is the case since the introduction of the
qc_stream_desc type.
In issue #1184, cppcheck reports that an incorrect format "%d" was
used to print an unsigned in the debug code, though values are always
very small and this will never be an issue.
In issue #1184, cppcheck complains about some inconsistent printf
formats. At least the one in peer_prepare_hellomsg() that uses "%u"
for the int "min_ver" is wrong. Let's force other types to make it
happy, though constants cannot cause trouble.
cppcheck reports in issue #1184 a type mismatch between "%d" and the
unsigned int "misses" in the standalone debug code of lru.c. Let's
switch to "%u".
We used to check if the transport layer was ssl_sock to decide to log
"~" after a frontend's name. Now that QUIC is present, this doesn't work
anymore. Better rely on the transport layer's get_ssl_sock_ctx() method.
Coverity found in issue #1646 that I added a double-close bug in last
commit e4d09cedb ("MINOR: sock: check configured limits at the sock layer,
not the listener's") because the error path already closes the FD. No
backport needed.
In issue #1645, coverity suspects some dead code due to a pair of
remaining tests on "if (!ctx)". While all other functions test the
context earlier, these ones used to only test the connection and the
transport. It's still not very clear to me if there are certain error
cases that can lead to no SSL being initially set while the rest is
ready, and the SSL arriving later, but better preserve this original
construct by testing first the connection and only later the context.
First gcc, then now coverity report possible null derefs in situations
where we know these cannot happen since we call the functions in
contexts that guarantee the existence of the connection and the method
used. Let's introduce an unchecked version of the function for such
cases, just like we had to do with objt_*. This allows us to remove the
ALREADY_CHECKED() statements (which coverity doesn't see), and addresses
github issues #1643, #1644, #1647.
Some compilers see a possible null deref after conn_get_ssl_sock_ctx()
in ssl_sock_parse_heartbeat, which cannot happen there, so let's mark
it as safe. No backport needed.
It was supposed to be there, and probably was not placed there due to
historic limitations in listener_accept(), but now there does not seem
to be a remaining valid reason for keeping the quic_conn out of the
handle. In addition in new_quic_cli_conn() the handle->fd was incorrectly
set to the listener's FD.
By being able to return the ssl_sock_ctx, we're now enabling the whole
set of SSL sample fetch methods to work on the current SSL context of
the QUIC connection, as seen in the following test showing a request
forwarded to an HTTP/1 server with plenty of SSL headers filled:
00000001:decrypt.clireq[000f:ffffffff]: GET / HTTP/1.1
00000001:decrypt.clihdr[000f:ffffffff]: host: localhost
00000001:decrypt.clihdr[000f:ffffffff]: user-agent: nghttp3/ngtcp2 client
00000001:decrypt.clihdr[000f:ffffffff]: x-src: 127.0.0.1
00000001:decrypt.clihdr[000f:ffffffff]: x-dst: 127.0.0.4
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_f_serial: D16197E7D3E634E9
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_f_key_alg: rsaEncryption
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_f_sig_alg: RSA-SHA1
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc: 1
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_has_sni: 1
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_sni: blah
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_alpn: h3
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_protocol: TLSv1.3
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_cipher: TLS_AES_256_GCM_SHA384
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_alg_keysize: 256
00000001:decrypt.clihdr[000f:ffffffff]: x-ssl_fc_use_keysize: 256
00000001:decrypt.clihdr[000f:ffffffff]: x-forwarded-for: 127.0.0.1
The code is trivial, but this is marked as medium as there's always
the risk that some of the callable functions do not like being called
on such SSL contexts.
The SSL functions must not use conn->xprt_ctx anymore but find the context
by calling conn_get_ssl_sock_ctx(), which will properly pass through the
transport layers to retrieve the desired information. Otherwise when the
functions are called on a QUIC connection, they refuse to work for not
being called on the proper transport.
Historically there was a single way to have an SSL transport on a
connection, so detecting if the transport layer was SSL and a context
was present was sufficient to detect SSL. With QUIC, things have changed
because QUIC also relies on SSL, but the context is embedded inside the
quic_conn and the transport layer doesn't match expectations outside,
making it difficult to detect that SSL is in use over the connection.
The approach taken here to improve this consists in adding a new method
at the transport layer, get_ssl_sock_ctx(), to retrieve this often needed
ssl_sock_ctx, and to use this to detect the presence of SSL. This will
even allow some simplifications and cleanups to be made in the SSL code
itself, and QUIC will be able to provide one to export its ssl_sock_ctx.
These functions will allow the connection layer to retrieve a quic_conn's
source or destination when possible. The quic_conn holds the peer's address
but not the local one, and the sockets API doesn't always make that easy
for datagrams. Thus for frontend connections what we're doing here is to
retrieve the listener's address when the destination address is desired.
Now it finally becomes possible to fetch the source and destination using
"src" and "dst", and to pass an incoming connection's endpoints via the
proxy protocol.
The mux didn't have its flags nor name set, as seen in this output of
"haproxy -vv":
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
quic : mode=HTTP side=FE mux= flags=
h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
This might have random impacts at certain points like forcing some
connections to close instead of aborting a stream, or not always
handling certain streams as fully HTX-compliant.
Certain functions cannot be called on an FD-less conn because they are
normally called as part of the protocol-specific setup/teardown sequence.
Better place a few BUG_ON() to make sure none of them is called in other
situations. If any of them would trigger in ambiguous conditions, it would
always be possible to replace it with an error.
Some syscalls at the TCP level act directly on the FD. Some of them
are used by TCP actions like set-tos, set-mark, silent-drop, others
try to retrieve TCP info, get the source or destination address. These
ones must not be called with an invalid FD coming from an FD-less
connection, so let's add the relevant tests for this. It's worth
noting that all these ones already have fall back plans (do nothing,
error, or switch to alternate implementation).
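The guard looks like this sketch (hypothetical helper name, includes
omitted; CO_FL_FDLESS is the flag introduced by the commit below):

    static int sock_set_tos(struct connection *conn, int tos)
    {
        if (conn->flags & CO_FL_FDLESS)
            return 0;   /* no FD (e.g. QUIC): fall back to doing nothing */
        return setsockopt(conn->handle.fd, IPPROTO_IP, IP_TOS,
                          &tos, sizeof(tos));
    }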
QUIC connections do not use a file descriptor, instead they use the
quic equivalent which is the quic_conn. A number of our historical
functions at the connection level continue to unconditionally touch
the file descriptor and this may have consequences once QUIC starts
to be used.
This patch adds a new flag on QUIC connections, CO_FL_FDLESS, to
mention that the connection doesn't have a file descriptor, hence the
FD-based API must never be used on them.
From now on it will be possible to instrument existing functions to
panic when this flag is present.
listener_accept() used to continue to enforce the FD limits relative to
global.maxsock by itself while it's the last FD-specific test in the
whole file. This test has nothing to do there, it ought to be placed in
sock_accept_conn() which is the one in charge of FD allocation and tests.
Similar tests are already located there by the way. The only tiny
difference is that listener_accept() used to pause for one second when
this limit was reached, while other similar conditions were pausing only
100ms, so now the same 100ms will apply. But that's not important and
could even be considered as an improvement.
OpenSSL 3.0 warns that ERR_func_error_string() is deprecated. Using
ERR_peek_error_func() solves it instead, and this function was added to
the compat layer by commit 1effd9aa0 ("MINOR: ssl: Remove call to
ERR_func_error_string with OpenSSLv3").
The OpenSSL engine API is deprecated starting with OpenSSL 3.0.
In order to have a clean build this feature is now disabled by default.
It can be reactivated with USE_ENGINE=1 on the build line.
Shawn Heisey reported that the proxy's description was unreadable in dark
color scheme. This is because the text color is changed in the table but
not the cell's background.
This should be backported to 2.5.
In "haproxy -vv" we produce a list of available muxes with their
capabilities, but that list is often quite large for terminals due
to excess of spaces, so let's reduce them a bit to make the output
more readable.
The new 'close-spread-time' global option can be used to spread idle and
active HTTP connection closing after a SIGUSR1 signal is received. This
allows to limit bursts of reconnections when too many idle connections
are closed at once. Indeed, without this new mechanism, in case of
soft-stop, all the idle connections would be closed at once (after the
grace period is over), and all active HTTP connections would be closed
by appending a "Connection: close" header to the next response that goes
over it (or via a GOAWAY frame in case of HTTP2).
This patch adds the support of this new option for HTTP as well as HTTP2
connections. It works differently on active and idle connections.
On active connections, instead of sending systematically the GOAWAY
frame or adding the 'Connection: close' header like before once the
soft-stop has started, a random based on the remainder of the close
window is calculated, and depending on its result we could decide to
keep the connection alive. The random will be recalculated for any
subsequent request/response on this connection so the GOAWAY will still
end up being sent, but we might wait a few more round trips. This will
ensure that goaways are distributed along a longer time window than
before.
On idle connections, a random factor is used when determining the expire
field of the connection's task, which should naturally spread connection
closings on the time window (see h2c_update_timeout).
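As a rough sketch of the idle side (the tick helpers and
statistical_prng_range() exist in HAProxy; the actual code in
h2c_update_timeout() is more involved, and <t> stands for the
connection's task):

    /* spread the expiration over the remaining close window */
    if (tick_isset(global.close_spread_end)) {
        int remaining = tick_remain(now_ms, global.close_spread_end);

        if (remaining)
            t->expire = tick_add(now_ms, statistical_prng_range(remaining));
    }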
This feature request was described in GitHub issue #1614.
This patch should be backported to 2.5. It depends on "BUG/MEDIUM:
mux-h2: make use of http-request and keep-alive timeouts" which
refactorized the timeout management of HTTP2 connections.
We modify the key update feature implementation to support reusable cipher contexts
as this is done for the other cipher contexts for packet decryption and encryption.
To do so we attach a context to the quic_tls_kp struct and initialize it each time
the underlying secret key is updated. Same thing when we rotate the secret
keys: we rotate the contexts at the same time.
These settings are potentially cancelled by other settings initialization shared
with SSL sock bindings. This will have to be clarified when we adapt the
QUIC bindings configuration.
Add ->ctx new member field to quic_tls_secrets struct to store the cipher context
for each QUIC TLS context TX/RX parts.
Add quic_tls_rx_ctx_init() and quic_tls_tx_ctx_init() functions to initialize
these cipher contexts for the RX and TX parts respectively.
Make qc_new_isecs() call these two functions to initialize the cipher contexts
of the Initial secrets. Same thing for ha_quic_set_encryption_secrets() to
initialize the cipher contexts of the subsequent derived secrets (0RTT, Handshake,
1RTT).
Modify quic_tls_decrypt() and quic_tls_encrypt() to always use the same cipher
context without allocating it each time they are called.
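A minimal sketch of such an initializer using the OpenSSL EVP API
(simplified: no IV/AAD handling here; the TX variant is symmetric with
EVP_EncryptInit_ex()):

    static int quic_tls_rx_ctx_init(EVP_CIPHER_CTX **rx_ctx,
                                    const EVP_CIPHER *aead,
                                    const unsigned char *key)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        if (!ctx)
            return 0;
        if (!EVP_DecryptInit_ex(ctx, aead, NULL, key, NULL)) {
            EVP_CIPHER_CTX_free(ctx);
            return 0;
        }
        *rx_ctx = ctx;  /* reused for every packet until the next rotation */
        return 1;
    }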
When a QUIC connection is accepted, the wrong field is set from the
client's source address, it's the destination instead of the source.
No backport needed.
On qc_detach(), the qcs must clear the conn-stream context and set its
cs pointer to NULL. This prevents the qcs from pointing to a dangling
reference.
Without this, a SEGFAULT may occur in qc_wake_some_streams() when
accessing an already detached conn-stream instance through a qcs.
Here is the SEGFAULT observed on haproxy.org.
Program terminated with signal 11, Segmentation fault.
1234 else if (qcs->cs->data_cb->wake) {
(gdb) p qcs.cs.data_cb
$1 = (const struct data_cb *) 0x0
This can happen since the following patch:
commit fe035eca3a
MEDIUM: mux-quic: report errors on conn-streams
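The fix boils down to something like this sketch in qc_detach()
(simplified):

    static void qc_detach(struct conn_stream *cs)
    {
        struct qcs *qcs = cs->ctx;

        /* break the link in both directions so that qc_wake_some_streams()
         * cannot dereference a freed conn-stream through the qcs */
        cs->ctx = NULL;
        qcs->cs = NULL;
        /* ... */
    }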
The stream mux buffering has been reworked since the introduction of the
struct qc_stream_desc. A qcs is now able to quickly release its buffer
to the quic-conn.
For replace-path, replace-pathq and replace-uri actions, we must take care
to not match on the selected element if it is not defined.
regex_exec_match2() function expects to be called with a defined
subject. However, if the request path is invalid or not found, the function
is called with a NULL subject, leading to a crash when compiled without the
PCRE/PCRE2 support.
For instance the following rule crashes HAProxy on a CONNECT request:
http-request replace-path /short/(.) /\1
This patch must be backported as far as 2.0.
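The guard amounts to the following sketch (isttest() checks that an ist
points somewhere; variable names are approximate):

    /* the selected element may be undefined (e.g. no path on CONNECT) */
    if (!isttest(path))
        goto leave;   /* never pass a NULL subject to regex_exec_match2() */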
url_enc() encodes an input string by calling encode_string(). To do so, it
adds a trailing '\0' to the sample string. However it never restores the
sample string at the end. It is a problem for const samples. The sample
string may be in the middle of a buffer. For instance, the HTTP headers
values are concerned.
However, instead of modifying the sample string, it is easier to rely on
encode_chunk() function. It does the same but on a buffer.
This patch must be backported as far as 2.2.
When compiled in debug mode, a BUG_ON triggers an error when the payload is
fully transferred from the http-client buffer to the request channel
buffer. In fact, when channel_add_input() is called, the request buffer is
empty. So an error is reported when those data are directly forwarded,
because we try to add some output data on a buffer with no data.
To fix the bug, we must be sure to call channel_add_input() after the data
transfer.
The bug was introduced by the commit ccc7ee45f ("MINOR: httpclient: enable
request buffering"). So, this patch must be backported if the above commit
is backported.
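The corrected ordering looks like this sketch (buffer and variable names
approximate, <req> being the request channel):

    /* move the payload first, then declare it as channel input */
    ret = b_xfer(&req->buf, &hc->req.buf, b_data(&hc->req.buf));
    if (ret)
        channel_add_input(req, ret);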
When a message is sent, we can switch its state to MSG_DONE if all the
announced payload was processed. This way, if the EOM flag is not set on the
message when the last expected data block is processed, the message can
still be set to MSG_DONE state.
This bug is related to the previous ones. There is a design issue with the
HTX since 2.4. When the EOM HTX block was replaced by a flag, I tried
hard to be sure the flag is always set with the last HTX block on a
message. It works pretty well for all messages received from a client or a
server. But for internal messages, it is not so simple. Because applets
cannot always properly handle the end of messages. So, there are some cases
where the EOM flag is set on an empty message.
As a workaround, for chunked messages, we can add an EOT HTX block. It does
the trick. But for messages with a content-length, there is no empty DATA
block. Thus, the only way to be sure the end of the message was reached in
this case is to detect it in the H1 multiplexer.
We already count the amount of data processed when the payload length is
announced. Thus, we must only switch the message to the DONE state when the
last bytes of the payload are received. Or when the EOM flag is received, of
course.
This patch must be backported as far as 2.4.
In a lua HTTP applet, when the script is finished, we must be sure to not
set the EOM on an empty message. Otherwise, because there is no data to
send, the mux on the client side may miss the end of the message and
consider any shutdown as an abort.
See "UG/MEDIUM: stats: Be sure to never set EOM flag on an empty HTX
message" for details.
This patch must be backported as far as 2.4. On previous version there is
still the EOM HTX block.
During the last call to the stats I/O handler, it is possible to have nothing
to dump. It may happen for many reasons. For instance, all remaining proxies
are disabled or they don't match the specified scope. In HTML or in JSON, it
is not really an issue because there is a footer. So there are still some
data to push in the response channel buffer. In CSV, it is a problem because
there is no footer. It means it is possible to finish the response with no
payload at all in the HTX message. Thus, the EOM flag may be added on an
empty message. When this happens, a shutdown is performed on an empty HTX
message. Because there is nothing to send, the mux on the client side is not
notified that the message was properly finished and interprets the shutdown
as an abort.
The response is chunked. So an abort at this stage means the last CRLF is
never sent to the client. All data were sent but the message is invalid
because the response chunking is not finished. If the response is compressed,
because of a similar bug in the compression filter, the compression is also
aborted and the content is truncated because some data are lost in the
compression filter.
It is a design issue with the HTX. It must be addressed. And there is an
opportunity to do so with the recent conn-stream refactoring. It must be
carefully evaluated first. But it is possible. In the meantime, and to also
fix stable versions, to work around the bug, an end-of-trailers HTX block is
systematically added at the end of the message when the EOM flag is set if
the HTX message is empty. This way, there are always some data to send when
the EOM flag is set.
Note that with a H2 client, it is only a problem when the response is
compressed.
This patch should fix the issue #1478. It must be backported as far as
2.4. On previous versions there is still the EOM block.
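The workaround corresponds to this sketch (the HTX calls are the regular
API; surrounding context abridged):

    if (htx_is_empty(htx)) {
        /* make sure there is always something to send with EOM */
        if (!htx_add_endof(htx, HTX_BLK_EOT))
            return 0;
    }
    htx->flags |= HTX_FL_EOM;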
In the FCGI app, when a full response is received, if there is no
content-length and transfer-encoding headers, a content-length header is
automatically added. This avoids, as far as possible, chunking the
response. This trick was added because, most of the time, scripts don't add
those headers.
But this should not be performed for response to HEAD requests. Indeed, in
this case, there is no payload. If the payload size is not specified, we
must not add it by hand. Otherwise, a "content-length: 0" will always be
added while it is not the real payload size (unknown at this stage).
This patch should solve issue #1639. It must be backported as far as 2.2.
Define a new API to notify the MUX from the quic-conn when the
connection is about to be closed. This happens in the following cases :
- on idle timeout
- on CONNECTION_CLOSE emission or reception
The MUX wake callback is called on these conditions. The quic-conn
QUIC_FL_NOTIFY_CLOSE is set to only report once. On the MUX side,
connection flags CO_FL_SOCK_RD_SH|CO_FL_SOCK_WR_SH are set to interrupt
future emission/reception.
This patch is the counterpart to
"MEDIUM: mux-quic: report CO_FL_ERROR on send".
Now the quic-conn is able to report its closing, which may be translated
by the MUX into a CO_FL_ERROR on the connection for the upper layer.
This allows the MUX to properly react to the QUIC closing mechanism for
both idle-timeout and closing/draining states.
Complete the error reporting. For each attached stream, if CO_FL_ERROR
is set, mark them with CS_FL_ERR_PENDING|CS_FL_ERROR. This will notify
the upper layer to trigger streams detach and release of the MUX.
This reporting is implemented in a new function qc_wake_some_streams(),
called by qc_wake(). This ensures that a lower-layer error is quickly
reported to the individual streams.
Mark the connection with CO_FL_ERROR on qc_send() if the socket Tx is
closed. This flag is used by the upper layer to order a close on the
MUX. This requires checking CO_FL_ERROR in qcc_is_dead() to proceed to an
immediate MUX free when set.
The qc_wake() callback has been completed. Most notably, it now calls
qc_send() to report a possible CO_FL_ERROR. This is useful because
qc_wake() is called by the quic-conn on imminent closing.
Note that for the moment the error flag can never be set because the
quic-conn does not report when the Tx socket is closed. This will be
implemented in a following patch.
Regroup all features related to sending in qc_send(). This will be
useful when qc_send() will be called outside of the io-cb.
Flow-control frames generation is now automatically
integrated in qc_send().
Add a new app layer operation is_active. This can be used by the MUX to
check if the connection can be considered as active or not. This is used
inside qcc_is_dead as a first check.
For example on HTTP/3, if there is at least one bidir client stream
opened, the connection is active. This explicitly ignores the uni streams
used for control and qpack, as they can never be closed during the
connection lifetime.
Improve timeout handling on the MUX. When releasing a stream, first
check if the connection can be considered as dead and should be freed
immediately. This allows freeing resources faster when possible.
If the connection is still active, ensure there is no attached
conn-stream before scheduling the timeout. To do this, add a nb_cs field
in the qcc structure.
When freeing a quic-conn, the stream resources attached to it must be
cleared. This code is already implemented but the stream buffers were not
deallocated.
Fix this by using the qc_stream_desc_free() function. This existing
function centralizes all operations to properly free all stream
elements, attached both to the MUX and the quic-conn.
This fixes a memory leak which can happen for each released connection.
This flag was used to notify the MUX about a CONNECTION_CLOSE frame
reception. It is now unused on the MUX side and can be removed. A new
mechanism to detect quic-conn closing will be soon implemented.
Rationalize the lifetime of the quic-conn with regard to the MUX. The
quic-conn must not be freed if the MUX is still allocated.
This simplifies the MUX code when accessing the quic-conn and removes
possible segfaults.
To implement this, if the quic-conn timer expired, the quic-conn is
released only if the MUX is not allocated. Else, the quic-conn is
flagged with QUIC_FL_CONN_EXP_TIMER. The MUX is then responsible
to call quic_close() which will free the flagged quic-conn.
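In sketch form, inside the quic-conn timer task callback (names
approximate, the real callback does more):

    struct quic_conn *qc = ctx;

    if (qc->mux_state == QC_MUX_READY) {
        /* the MUX still references the quic-conn: flag it, the MUX
         * will release it later through quic_close() */
        qc->flags |= QUIC_FL_CONN_EXP_TIMER;
        return t;
    }
    quic_conn_release(qc);   /* no MUX attached: free immediately */
    return NULL;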
New received packets after sending CONNECTION_CLOSE frame trigger a new
CONNECTION_CLOSE frame to be sent. Each time such a frame is sent we
increase the number of packets required to send another CONNECTION_CLOSE
frame.
Rearm the idle timer only once when sending a CONNECTION_CLOSE frame.
In case an error provokes the release of the applet, we will never call
the end callback of the httpclient.
In the case of a lua script, it would mean that the lua task will never
be woken up after a yield, leaving the lua script stuck forever.
Fix the issue by moving the callback from the end of the iohandler to
the applet release function.
Must be backported in 2.5.
The request buffering is required for doing L7 retries. The IO handler of
the httpclient needs to be reworked for that.
This patch changes the IO handler so it partially copies the data instead
of swapping buffers. This is needed because b_xfer() will never work
if the destination buffer is not empty, which is the case when
buffering.
There were empty lines in the output of "show ssl ca-file <cafile>" and
"show ssl crl-file <crlfile>" commands when an empty line should only
mark the end of the output. This patch adds a space to those lines.
This patch should be backported to 2.5.
ssl_store_load_locations_file() is using X509_get_default_cert_dir()
when using '@system-ca' as a parameter.
This function could return a NULL if OpenSSL was built with a
X509_CERT_DIR set to NULL, this is uncommon but let's fix this.
No backport needed, 2.6 only.
Fix issue #1637.
This new converter is similar to the concat converter and can be used to
build new variables made of a succession of other variables, but the main
difference is that it checks whether adding a delimiter makes sense, which
wouldn't be the case if e.g. the current input sample is empty. That
situation would require 2 separate rules using the concat converter, where
the first rule would have to check if the current sample string is empty
before adding a delimiter. This resolves GitHub Issue #1621.
The previous patch accidentally broke upon an error when iterating
through a CA directory. This is not the expected behavior: the function
must keep processing the other files after the warning.
This patch implements the ability to load a certificate directory with
the "ca-file" directive.
The X509_STORE_load_locations() API does not allow to cache a directory
in memory at startup, it only references the directory to allow a lookup
of the files when needed. But that is not compatible with the way
HAProxy works, without any access to the filesystem.
The current implementation loads every ".pem", ".crt", ".cer", and
".crl" available in the directory which is what is done when using
c_rehash and X509_STORE_load_locations(). Those files are cached in the
same X509_STORE referenced by the directory name. When looking at "show ssl
ca-file", everything will be shown in the same entry.
This will eventually allow to load more easily the CA of the system,
which could already be done with "ca-file /etc/ssl/certs" in the
configuration.
Loading failures intentionally emit a warning instead of an alert,
letting HAProxy start when one of the files can't be loaded.
Known limitations:
- There is a bug in "show ssl ca-file", once the buffer is full, the
iohandler is not called again to output the next entries.
- The CLI API is kind of limited with this, since it does not allow to
add or remove an entry in a particular ca-file. And with a lot of
CAs you can't push them all in a buffer. It probably needs a "add ssl
ca-file" like its done with the crt-list.
Fix issue #1476.
As suggested in the comment, it's possible to read 32 bits at once in
big-endian order, and now we have the functions to do this, so let's
do it. This reduces the code on the fast path by 31 bytes on x86, and
more importantly performs single-operation 32-bit reads instead of
playing with shifts and additions.
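For reference, a single-operation big-endian read can be written like this
sketch (HAProxy provides a similar read_n32() helper; <string.h> and
<arpa/inet.h> assumed):

    static inline uint32_t read_n32(const void *p)
    {
        uint32_t v;

        memcpy(&v, p, sizeof(v));   /* compiles to one 32-bit load */
        return ntohl(v);            /* network (big-endian) order */
    }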
As reported in issue #1635, there's a subtle sign change when shifting
a uint8_t value to the left because integer promotion first turns any
smaller type to signed int *even if it was unsigned*. A warning was
reported about uint8_t shifted left 24 bits that couldn't fit in int
due to this.
It was verified that the emitted code didn't change, as expected, but
at least this allows to silence the code checkers. There's no need to
backport this.
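A two-line illustration of the promotion at work:

    uint8_t b = 0x80;
    uint32_t v1 = b << 24;             /* b is first promoted to signed int */
    uint32_t v2 = (uint32_t)b << 24;   /* the cast keeps the shift unsigned */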
This one has been detected by valgrind:
==2179331== Conditional jump or move depends on uninitialised value(s)
==2179331== at 0x1B6EDE: qcs_notify_recv (mux_quic.c:201)
==2179331== by 0x1A17C5: qc_handle_uni_strm_frm (xprt_quic.c:2254)
==2179331== by 0x1A1982: qc_handle_strm_frm (xprt_quic.c:2286)
==2179331== by 0x1A2CDB: qc_parse_pkt_frms (xprt_quic.c:2550)
==2179331== by 0x1A6068: qc_treat_rx_pkts (xprt_quic.c:3463)
==2179331== by 0x1A6C3D: quic_conn_app_io_cb (xprt_quic.c:3589)
==2179331== by 0x3AA566: run_tasks_from_lists (task.c:580)
==2179331== by 0x3AB197: process_runnable_tasks (task.c:883)
==2179331== by 0x357E56: run_poll_loop (haproxy.c:2750)
==2179331== by 0x358366: run_thread_poll_loop (haproxy.c:2921)
==2179331== by 0x3598D2: main (haproxy.c:3538)
==2179331==
This should be useful to get an idea of the frames which could be built
among the list of available frames when building packets.
Same thing about the frames which could not be built because of a lack of room
in the TX buffer.
During a handshake, after having prepared a probe upon a PTO expiration from
process_timer(), we wake up the I/O handler to make it send probing packets.
This handler first treats incoming packets, which may trigger a fast
retransmission leading to sending too many probing (duplicated) packets. In
this case we cancel the fast retransmission.
When discarding a packet number space, we at least reset the PTO backoff counter.
Doing this several times has an impact on the PTO duration calculation.
We must not discard a packet number space several times (this is already the case
for the handshake packet number space).
Before having a look at the next encryption level to build packets, if there
are no more ack-eliciting frames to send, we must check that we no longer have
to probe from the current encryption level. Otherwise, we only send one
datagram instead of two, giving less chance to recover from packet loss.
Due to an erroneous interpretation of RFC 9000 (quic-transport), ACK frames
were only sent after having received two ack-eliciting packets.
This could trigger useless retransmissions for tail packets on the peer side.
From now on, we send ACK frames as soon as we have ACKs to send, in the same
packets as the ack-eliciting frame packets, and we also send ACK frames after
having received 2 ack-eliciting packets since the last time we sent an ACK
frame with other ack-eliciting frames.
As such variables are handled by the QUIC connection I/O handler, which
always runs on the same thread, there is no need to continue using atomic
operations.
This bug came with this commit:
1fc5e16c4 MINOR: quic: More accurate immediately close
As mentioned in this commit, we no longer want to derive secrets when in the
closing state. But the flag denoting that secrets were derived was still set.
Add a label at the correct place to skip the secrets derivation without
setting this flag.
Over time we've tried hard to abstract connection errors from the upper
layers so that they're reported per stream and not per connection. As
early as 1.8-rc1, commit 4ff3b8964 ("MINOR: connection: make conn_stream
users also check for per-stream error flag") did precisely this, but
strangely only for rx, not for tx (probably that by then send errors
were not imagined to be reported that way).
And this lack of Tx error check was just revealed in 2.6 by recent commit
d1480cc8a ("BUG/MEDIUM: stream-int: do not rely on the connection error
once established") that causes wakeup loops between si_cs_send() failing
to send via mux_pt_snd_buf() and subscribing against si_cs_io_cb() in
loops because the function now rightfully only checks for CS_FL_ERROR
and not CO_FL_ERROR.
As found by Amaury, this makes an aborted "show events -w" cause
haproxy to loop at 100% CPU.
This fix theoretically needs to be backported to all versions, though
it will be necessary and sufficient to backport it wherever 4ff3b8964
gets backported.
The list of streams was modified in 2.4 to become per-thread with commit
a698eb673 ("MINOR: streams: use one list per stream instead of a global
one"). However the change applied to cli_parse_shutdown_session() is
wrong, as it uses the nullity of the stream pointer to continue on next
threads, but this one is not null once the list_for_each_entry() loop
is finished, it points to the list's head again, so the loop doesn't
check other threads, and no message is printed either to say that the
stream was not found.
Instead we should check if the stream is equal to the requested pointer
since this is the condition to break out of the loop.
This must be backported to 2.4. Thanks to Maciej Zdeb for reporting this.
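The corrected post-loop test amounts to this sketch (the per-thread list
head and variable names are placeholders; the outer loop iterates over
threads):

    list_for_each_entry(strm, &streams[thr], list)
        if (strm == wanted)
            break;
    if (strm != wanted)
        continue;   /* loop ended on the head: not found on this thread */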
The new qc_stream_desc type has a tree node for storage. Thus, we can
remove the node in the qcs structure.
When initializing a new stream, it is stored into the qcc streams_by_id
tree. When the MUX releases it, it will be freed as soon as its buffer is
emptied. Until then, the quic-conn is responsible for storing it inside
its own streams_by_id tree.
Move the xprt-buf and ack related fields from qcs to the qc_stream_desc
structure. In exchange, qcs has a pointer to the low-level stream. For
each new qcs, a qc_stream_desc is automatically allocated.
This simplifies the transport layer by removing qcs/mux manipulation
during ACK frame parsing. An additional check is done to not notify the
MUX on sending if the stream is already released: this case may now
happen on retransmission.
To complete this change, the quic_stream frame now references the
qc_stream_desc instance instead of a qcs.
Currently, the mux qcs streams manage the Tx buffering, even after
sending it to the transport layer. Buffers are emptied when
acknowledgements are treated by the transport layer. This complicates the
MUX release and we may lose some data after the MUX is freed.
Change this paradigm by moving the buffering to the transport layer. For
this goal, a new type is implemented as a low-level stream at the
transport layer, as a counterpart of qcs mux instances. This structure
is called qc_stream_desc. This will allow to free the qcs/qcc instances
without having to wait for acknowledge reception.
For the moment, the quic-conn is responsible to store the qc_stream_desc
in a new tree named streams_by_id. This will slightly change in the next
commits to remove the qcs node which has a similar purpose:
qc_stream_desc instances will be shared between the qcc MUX and the
quic-conn.
This patch only introduces the new type definition and the function to
manipulate it. The following commit will bring the rearchitecture in the
qcs structure.
Remove qcs instances left during qcc MUX release. This can happen when
the MUX is closed before the completion of all the transfers, such as on
a timeout or process termination.
This may fix some memory leaks on the connection.
Implement the release app-ops operation for the H3 layer. This is used to
clean up uni-directional streams and the h3 context.
This prevents a memory leak on H3 resources for each connection.
Define a new callback release inside qcc_app_ops. It is called when the
qcc MUX is freed via qc_release. This will allow implementing cleanup
on the app layer.
Regroup some cleaning operations inside a new function qcs_free. This
can be used for all streams, both through qcs_destroy and with
uni-directional streams.
The quic_stream frame stores the qcs instance. On ACK parsing, qcs is
accessed to clear the stream buffer. This can cause a segfault if the
MUX or the qcs is already released.
Consider the following scenario :
1. a STREAM frame is generated by the MUX
transport layer emits the frame with PKN=1
upper layer has finished the transfer so related qcs is detached
2. transport layer reemits the frame with PKN=2 because ACK was not
received
3. ACK for PKN=1 is received, stream buffer is cleared
at this stage, qcs may be freed by the MUX as it is detached
4. ACK for PKN=2 is received
qcs for STREAM frame is dereferenced which will lead to a crash
To prevent this, qcs is never accessed from the quic_stream during ACK
parsing. Instead, a lookup is done on the MUX streams tree. If the MUX
is already released, no lookup is done. These checks prevent a possible
segfault.
This change may have an impact on the perf as now we are forced to use a
tree lookup operation. If this is the case, an alternative solution may
be to implement a refcount on qcs instances.
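In sketch form, the ACK parsing path now does something like this
(simplified; at this point in the series the qcs still carries its by_id
node):

    struct eb64_node *node;
    struct qcs *qcs;

    if (!qcc)
        return;                      /* MUX already released: no lookup */
    node = eb64_lookup(&qcc->streams_by_id, id);
    if (!node)
        return;                      /* qcs already freed: ignore the ACK */
    qcs = eb64_entry(node, struct qcs, by_id);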
The CertCache.set() function allows to update an SSL certificate file
stored in the memory of the HAProxy process. This function does the same
as "set ssl cert" + "commit ssl cert" over the CLI.
This could be used to update the crt and key, as well as the OCSP, the
SCTL, and the OCSP issuer.
The implementation does yield every 10 ckch instances, the same way the
"commit ssl cert" do.
Simplify the "cert_exts" array which is used for the selection of the
parsing function depending on the extension.
It now uses a pointer to an array element instead of an index, which is
simpler for the declaration of the array.
This also allows having multiple extensions using the same type.
Extract the code that replace the ckch_store and its dependencies into
the ckch_store_replace() function.
This function must be used under the global ckch lock.
It frees everything related to the old ckch_store.
Note that we cannot reuse dump_act_rules() because the output format
may be adjusted depending on the call place (this is also used from
haproxy -vv). The principle is the same however.
There are very few but they're registered from constructors, hence
in a random order. The scope had to be copied when retrieving the
next keyword. Note that this also has the effect of listing them
sorted in haproxy -vv.
Like for previous keyword classes, we're sorting the output. But this
time as it's not trivial to do it with multiple words, instead we're
proceeding like the help command, we sort them on their usage message
when present, and fall back to the first word of the command when there
is no usage message (e.g. "help" command).
It's much more convenient to sort these keywords on output to detect
changes, and it's easy to do. The patch looks big but most of it is
only caused by an indent change in the loop, as "git diff -b" shows.
The output produced by dump_registered_keywords() really deserves to be
sorted in order to ease comparisons. The function now implements a tiny
sorting mechanism that's suitable for each two-level list, and makes
use of dump_act_rules() to dump rulesets. The code is not significantly
more complicated and some parts (e.g options) could even be factored.
The output is much more exploitable to detect differences now.
The new function dump_act_rules() now dumps the list of actions supported
by a ruleset. These actions are alphanumerically sorted first so that the
produced output is easy to compare.
When trying to sort sets of strings, it's often required to
compare 3 strings to see if the chosen one fits well between the two
others. That's what this function does, in addition to being able to
ignore extremities when they're NULL (typically for the first iteration
for example).
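A minimal sketch of such a helper (hypothetical name; either bound may be
NULL to ignore it):

    static int str_between(const char *low, const char *mid, const char *high)
    {
        return (!low  || strcmp(low, mid)  < 0) &&
               (!high || strcmp(mid, high) < 0);
    }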
Similar to the sample fetch keywords, let's also list the converter
keywords. They're much simpler since there's no compatibility matrix.
Instead the input and output types are listed. This is called by
dump_registered_keywords() for the "cnv" keywords class.
New function smp_dump_fetch_kw lists registered sample fetch keywords
with their compatibility matrix, mandatory and optional argument types,
and output types. It's called from dump_registered_keywords() with class
"smp".
New function acl_dump_kwd() dumps the registered ACL keywords and their
sample-fetch equivalent to stdout. It's called by dump_registered_keywords()
for keyword class "acl".
New function cli_list_keywords() scans the list of registered CLI keywords
and dumps them on stdout. It's now called from dump_registered_keywords()
for the class "cli".
Some keywords are valid for the master, they'll be suffixed with
"[MASTER]". Others are valid for the worker, they'll have "[WORKER]".
Those accessible only in expert mode will show "[EXPERT]" and the
experimental ones will show "[EXPERIM]".
When no output stream is passed, stdout is used with one entry per line,
and this is called from dump_registered_services() when passed the class
"svc".
When passing a NULL output buffer the function will now dump to stdout
with a more compact format that is more suitable for machine processing.
An entry was added to dump_registered_keywords() to call it when the
keyword class "flt" is requested.
All registered config keywords that are valid in the config parser are
dumped to stdout organized like the regular sections (global, listen,
etc). Some keywords that are known to only be valid in frontends or
backends will be suffixed with [FE] or [BE].
All regularly registered "bind" and "server" keywords are also dumped,
one per "bind" or "server" line. Those depending on ssl are listed after
the "ssl" keyword. Doing so required to export the listener and server
keyword lists that were static.
The function is called from dump_registered_keywords() for keyword
class "cfg".
It's difficult from outside haproxy to detect the supported keywords
and syntax. Interestingly, many of our modern keywords can be enumerated
since they're registered from constructors, so it's not very hard to
enumerate most of them.
This patch creates some basic infrastructure to support dumping existing
keywords from different classes on stdout. The format will differ depending
on the classes, but the idea is that the output could easily be passed to
a script that generates some simple syntax highlighting rules, completion
rules for editors, syntax checkers or config parsers.
The principle chosen here is that if "-dK" is passed on the command-line,
at the end of the parsing the registered keywords will be dumped for the
requested classes passed after "-dK". Special name "help" will show known
classes, while "all" will execute all of them. The reason for doing that
after the end of the config processor is that it will also enumerate
internally-generated keywords, Lua or even those loaded from external
code (e.g. if an add-on is loaded using LD_PRELOAD). A typical way to
call this with a valid config would be:
./haproxy -dKall -q -c -f /path/to/config
If there's no config available, feeding /dev/null will also do the job,
though it will not be able to detect dynamically created keywords, of
course.
This patch also updates the management doc.
For now nothing but the help is listed, various subsystems will follow
in subsequent patches.
In 2.4, two commits added support for sample fetch calls from
new config and CLI contexts, but these were not added to the visible
names, which may possibly cause "(null)" to appear in some error messages.
The commits in question were:
db5e0dbea ("MINOR: sample: add a new CLI_PARSER context for samples")
f9a7a8fd8 ("MINOR: sample: add a new CFG_PARSER context for samples")
This patch needs to be backported where these are present (2.4 and above).
211ea252d ("BUG/MINOR: logs: fix logsrv leaks on clean exit") introduced a
regression because the list element of a new log server is not initialized.
Thus HAProxy crashes on the error path when an invalid log server is released.
This patch should fix the issue #1636. It must be backported if the above commit
is backported. For now, it is 2.6-specific and no backport is needed.
When the destination buffer is full while there are still data to parse, the
h1s must be marked as congested to be able to restart the parsing
later. This works for headers and data parsing. But for trailers parsing, we
fail to do so when the buffer is full before the trailers are parsed. In this
case, we skip the trailers parsing but the h1s is not marked as
congested. This is important to be sure to wake up the mux to restart the
parsing when some room is made in the buffer.
Because of this bug, the message processing may hang till a timeout is
triggered. Note that for 2.3 and 2.2, the EOM processing is buggy too, for
the same reason. It should be fixed too on these versions. On the 2.0, only
trailers parsing is affected.
This patch must be backported as far as 2.0. On 2.3 and 2.2, the EOM parsing
must be fixed too.
h1_parse_msg_hdrs() and h1_parse_msg_tlrs() may return negative values if
the parsing fails or if more space is needed in the destination buffer. When
h1-htx was changed, the H1 mux was updated accordingly but not the FCGI
mux. Thus if a negative value is returned, it is ignored and cast to
a size_t, leading to an integer overflow on the <ofs> value, used to know
the position in the RX buffer.
This patch must be backported as far as 2.2.
url2sa() still has an unfortunate case where it reads 1 byte too far,
it happens when no port or path are specified in the URL, and could
crash if the byte after the URL is not allocated (mostly with ASAN).
This case is never triggered in old versions of haproxy because url2sa
is used with buffers which are way bigger than the URL. It is only
triggered with the httpclient.
Should be backported to every stable branch.
H3_DEBUG definition is removed from h3.c similarly to the commit
d96361b270
CLEANUP: qpack: suppress by default stdout traces
Also, a plain fprintf in h3_snd_buf has been made conditional on the
H3_DEBUG definition.
These changes reduce the default output on stdout with QUIC traffic.
Remove the definition of DEBUG_HPACK in qpack-dec.c, which forces the
QPACK decoding traces on stderr. Also rename it to a dedicated macro
for QPACK decoding, DEBUG_QPACK.
If the macro is not defined, some local variables are flagged as unused
by the compiler. Fix this by using the __maybe_unused attribute.
For now, the macro is defined in the qpack-dec.c. However, this will
change to not mess up the stderr output of haproxy with QUIC traffic.
This commit is similar to the following one :
commit 118b2cbf84
MINOR: quic: activate QUIC traces at compilation
If the macro ENABLE_QUIC_STDOUT_TRACES is defined, qmux traces are
automatically output on stdout. This is useful for the haproxy-qns
interop docker image.
Add a new qmux trace event QMUX_EV_QCS_PUSH_FRM. Its only purpose is to
display the meaningful result of a qcs_push_frame invocation.
A dedicated struct qcs_push_frm_trace_arg is defined to pass a series of
extra args for the trace output.
Define a new qmux event QMUX_EV_SEND_FRM. This allows passing a
quic_frame as an extra argument. Depending on the frame type, a special
format can be used to log the frame content.
Currently this event is only used in qc_send_max_streams. Thus the
handler is only able to handle MAX_STREAMS frames.
Convert all printfs in the mux-quic code into traces.
Note that some meaningful printfs were not converted because they use
extra args in a format-string. This is the case inside qcs_push_frame
and qc_send_max_streams. A dedicated trace event should be implemented
for them to be able to display the extra arguments.
Declare a new trace module for mux-quic named qmux. It will be used to
convert all printfs to regular traces. The handler qmux_trace can use a
connection and a qcs instance as extra arguments.
Move all inline functions with trace from quic_loss.h to a dedicated
object file. This allows removing the TRACE_SOURCE macro definition
outside of the include file.
This change is required to be able to define another TRACE_SOURCE inside
the mux_quic.c for a dedicated trace module.
Fix 8a91374 ("BUG/MINOR: tools: url2sa reads ipv4 too far") introduced a
regression in the value returned when parsing an ipv4 host.
The consumed length is supposed to go as far as the first character of
the path, but it is not computed correctly anymore and returns the length
minus the size of the scheme.
Fix the issue by restoring 'curr' and 'url' as they were before the
patch.
Must be backported to every stable branch where the 8a91374 patch was
backported.
After having consumed <i> bytes from <buf>, the remaining available room to be
passed to generate_retry_token() is sizeof(buf) - i.
This bug could be easily reproduced with quic-go as client which chooses a random
value as ODCID length.
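For illustration only, a minimal sketch of the corrected room computation (the stub prototype below is hypothetical; the real generate_retry_token() signature differs):
#include <stddef.h>
/* Hypothetical stub: writes at most <room> bytes into <out>. */
size_t generate_retry_token(unsigned char *out, size_t room);
size_t build_retry_sketch(unsigned char *buf, size_t bufsz, size_t i)
{
        /* <i> bytes of <buf> are already consumed, so only bufsz - i
         * bytes remain available for the token, not the full bufsz. */
        return generate_retry_token(buf + i, bufsz - i);
}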
When no DEBUG_STRICT is enabled, we get this build warning:
src/stream_interface.c: In function 'stream_int_chk_snd_conn':
src/stream_interface.c:1198:28: warning: unused variable 'conn' [-Wunused-variable]
1198 | struct connection *conn = cs_conn(cs);
| ^~~~
This was the result of the simplification of the code in commit
d1480cc8a ("BUG/MEDIUM: stream-int: do not rely on the connection error
once established") which removed the last user of this variable outside
of a BUG_ON().
If the patch above is backported, this one should be backported as well.
This commit is similar to the previous one but with MAX_DATA frames.
This allows to increase the connection level flow-control limit. If the
connection was blocked due to the QC_CF_BLK_MFCTL flag, the flag is reset.
Implement a MUX method to parse MAX_STREAM_DATA. If the limit is greater
than the previous one and the stream was blocked, the flag
QC_SF_BLK_SFCTL is removed.
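As a hedged sketch of this mechanism (illustrative types and flag value; the real qcs layout differs):
#include <stdint.h>
#define QC_SF_BLK_SFCTL_SK 0x1 /* illustrative flag value */
struct qcs_sketch {
        uint64_t tx_max_data; /* peer's per-stream flow-control limit */
        uint32_t flags;
};
/* Apply a MAX_STREAM_DATA update: only a greater limit is meaningful,
 * and it unblocks a stream stopped on the previous limit. */
static void qcs_apply_max_stream_data(struct qcs_sketch *qcs, uint64_t max)
{
        if (max > qcs->tx_max_data) {
                qcs->tx_max_data = max;
                qcs->flags &= ~QC_SF_BLK_SFCTL_SK;
        }
}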
This commit is similar to the previous one, but this time on the
connection level instead of the stream.
When the connection limit is reached, the connection is flagged with
QC_CF_BLK_MFCTL. This flag is checked in qc_send.
qcs_push_frame uses a new parameter which is used to not exceed the
connection flow-limit while calling it repeatedly over multiple stream
instances before transferring data to the transport layer.
Implement the flow-control max-streams-data limit on emission. We ensure
that we never push more than the offset limit set by the peer. When the
limit is reached, the stream is marked as blocked with a new flag
QC_SF_BLK_SFCTL to disable emission.
Currently, this is only implemented for bidirectional streams. It's
required to unify the sending for unidirectional streams via
qcs_push_frame from the H3 layer to respect the flow-control limit for
them.
Rename the fields used for flow-control in the qcc structure. The
objective is to have shorter names for better readability while keeping
their purpose clear. It will be useful when the flow-control will be
extended with new fields.
Add comments on qc_send and qcs_push_frame. Also adjust the return of
qc_send to reflect the total bytes sent. This has no impact as currently
the return value is not checked by the caller.
In MQTTv3.1, the protocol name is "MQIsdp" and the protocol level is 3. The mqtt
converters (mqtt_is_valid and mqtt_field_value) did not work for clients on
mqttv3.1 because mqtt_parse_connect() marked the CONNECT message invalid
if either the protocol name was not "MQTT" or the protocol version was other
than v3.1.1 or v5.0. To fix it, we have added the mqttv3.1 protocol name and
version as part of the checks.
This patch fixes the mqtt converters to support mqttv3.1 clients as well (issue #1600).
It must be backported to 2.4.
We must consider the peer address as validated as soon as we receive a
handshake packet. Requiring an ACK frame in a handshake packet was too
restrictive. Rename the concerned flag to reflect this situation.
We must be able to handle 1RTT packets after the mux has terminated its job
(qc->mux_state == QC_MUX_RELEASED). So the condition (qc->mux_state != QC_MUX_READY)
in qc_qel_may_rm_hp() is not correct when we want to wait for the mux to be started.
Add a check in qc_parse_pkt_frms() to ensure the mux is started before calling
it. All the STREAM frames will be ignored when the mux is released.
The most important one is the ->flags member which leads to an erratic xprt behavior.
For instance a non ack-eliciting packet could be seen as ack-eliciting, leading the
xprt to try to retransmit packets which are not ack-eliciting. In this case, the
xprt does nothing and remains indefinitely in a blocked state.
This could lead to erratic mux behavior. Sometimes the application layer could
not wake up the mux I/O handler because it estimated it had already subscribed
to write events (see h3_snd_buf() end of implementation).
This was revealed by libasan the first time qc_send_frames() is run:
=================================================================
==84177==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fbaaca2b3c8 at pc 0x560a4fdb7c2e bp 0x7fbaaca2b300 sp 0x7fbaaca2b2f8
READ of size 1 at 0x7fbaaca2b3c8 thread T6
#0 0x560a4fdb7c2d in qc_send_frames src/mux_quic.c:473
#1 0x560a4fdb83be in qc_send src/mux_quic.c:563
#2 0x560a4fdb8a6e in qc_io_cb src/mux_quic.c:638
#3 0x560a502ab574 in run_tasks_from_lists src/task.c:580
#4 0x560a502ad589 in process_runnable_tasks src/task.c:883
#5 0x560a501e3c88 in run_poll_loop src/haproxy.c:2675
#6 0x560a501e4519 in run_thread_poll_loop src/haproxy.c:2846
#7 0x7fbabd120ea6 in start_thread nptl/pthread_create.c:477
#8 0x7fbabcb19dee in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xfddee)
Address 0x7fbaaca2b3c8 is located in stack of thread T6 at offset 56 in frame
#0 0x560a4fdb7f00 in qc_send src/mux_quic.c:514
This frame has 1 object(s):
[32, 48) 'frms' (line 515) <== Memory access at offset 56 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
(longjmp and C++ exceptions *are* supported)
Thread T6 created by T0 here:
#0 0x7fbabd1bd2a2 in __interceptor_pthread_create ../../../../src/libsanitizer/asan/asan_interceptors.cpp:214
#1 0x560a5036f9b8 in setup_extra_threads src/thread.c:221
#2 0x560a501e70fd in main src/haproxy.c:3457
#3 0x7fbabca42d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: stack-buffer-overflow src/mux_quic.c:473 in qc_send_frames
There are rare, not yet identified cases where qc_build_frms() does not manage
to size frames to be encoded in a packet, leading qc_build_frm() to fail to add
such frames to the packet to be built. In such cases we must move such frames
back to their origin frame list passed as parameter to qc_build_frms() (<frms>),
because they were added to the packet frame list (but not built). If this
packet is not retransmitted, the frames are lost forever! Furthermore we must
not modify the buffer.
The TX packet refcounting had come with the multithreading support, but not only.
It is very useful to ease the management of the memory allocated for TX packets
with TX frames attached to them. At some locations of the code we have to move TX
frames from a packet to a new one during retransmission, when the packet has been
deemed lost or not. When deemed lost, the memory allocated for the packet must
be released, contrary to when its frames are retransmitted when probing (PTO).
From now on, thanks to this patch, we handle the TX packets memory this way. We
increment the packet refcount when:
- we insert it in its packet number space tree,
- we attach an ack-eliciting frame to it.
And reciprocally we decrement this refcount when:
- we remove an ack-eliciting frame from the packet,
- we delete the packet from its packet number space tree.
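A hedged sketch of this contract (illustrative types; the real code uses pool allocation and may use atomic operations):
#include <stdlib.h>
struct tx_pkt_sketch {
        int refcnt;
};
/* Incremented when inserting the packet in its packet number space tree
 * or when attaching an ack-eliciting frame to it. */
static void pkt_ref(struct tx_pkt_sketch *pkt)
{
        pkt->refcnt++;
}
/* Decremented when detaching an ack-eliciting frame or deleting the
 * packet from its tree; the memory goes away with the last reference. */
static void pkt_unref(struct tx_pkt_sketch *pkt)
{
        if (--pkt->refcnt == 0)
                free(pkt);
}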
Note that an optimization would NOT be to fully reuse (without releasing its
memory) a TX packet to retransmit its contents (its ack-eliciting frames): its
information (timestamp, in flight length) has to be processed by packet loss
detection and the congestion control.
When building a packet with an ACK frame, we store the largest acknowledged
packet number sent in this frame in the packet (quic_tx_packet struct).
When receiving an ack for such a packet we can purge the tree of acknowledged
packet number ranges from the range sent before this largest acknowledged
packet number.
This struct member stores the largest acked packet number which was received. It
is used to build (TX) packets. But it is confusing to store it in the TX packet
part of the packet number space structure even if it is used to build and transmit
packets.
Add qc_may_reuse_cbuf() function used by qc_prep_pkts() and qc_prep_app_pkts().
Simplify the factorized code section: there is no need to check that there
is enough room to mark the end of the data in the TX buf. This is done by
the callers (qc_prep_pkts() and qc_prep_app_pkts()). Add a diagram to explain
the conditions which must be verified to be able to reuse a cbuf struct.
This should improve the QUIC stack implementation maintainability.
Previous uses of `ist.cocci` did not add `--include-headers-for-types` and
`--recursive-includes` preventing Coccinelle from seeing `struct ist` members of
other structs.
Reapply the patch with proper flags to further clean up the use of the ist API.
The command used was:
spatch -sp_file dev/coccinelle/ist.cocci -in_place --include-headers --include-headers-for-types --recursive-includes --dir src/
If allocation of a new HTTP rule fails, we must not release it by calling
free_act_rule(). The regression was introduced by the commit dd7e6c6dc
("BUG/MINOR: http-rules: completely free incorrect TCP rules on error").
This patch must only be backported if the commit above is backported. It should
fix the issues #1627, #1628 and #1629.
dd7e6c6dc ("BUG/MINOR: http-rules: completely free incorrect TCP rules on
error") and 388c0f2a6 ("BUG/MINOR: tcp-rules: completely free incorrect TCP
rules on error") introduced a regression because the list element of a new
rule is not intialized. Thus HAProxy crashes when an incorrect rule is
released.
This patch must be backported if above commits are backported. Note that
new_act_rule() only exists since 2.5. It relies on the commit d535f807b
("MINOR: rules: add a new function new_act_rule() to allocate act_rules").
Christian Ruppert reported an issue explaining that it's not possible to
forcefully close H2 connections which do not receive requests anymore if
they continue to send control traffic (window updates, ping etc). This
will indeed refresh the timeout. In H1 we don't have this problem because
any single byte is part of the stream, so the control frames in H2 would
be equivalent to TCP acks in H1, that would not contribute to the timeout
being refreshed.
What H2 misses is the use of http-request and keep-alive timeouts.
These were not implemented because initially it was hard to see how they
could map to H2. But if we consider the real use of the keep-alive timeout,
that is, how long we keep a connection alive with no request, then it's
pretty obvious that it does apply to H2 as well. Similarly, http-request
may definitely be honored as soon as a HEADERS frame starts to appear
while there is no stream. This will also allow dealing with too long
CONTINUATION frames.
This patch moves the timeout update to a new function, h2c_update_timeout(),
which is in charge of this. It also adds an "idle_start" timestamp in the
connection, which is set when nb_cs reaches zero or when a HEADERS frame
starts to arrive, so that it cannot be delayed too long.
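A rough sketch of this decision, under simplified assumptions (illustrative fields; the real h2c_update_timeout() also handles the http-request timeout and haproxy's tick encoding):
struct h2c_sketch {
        int nb_cs;               /* number of attached conn-streams */
        unsigned int idle_start; /* set when nb_cs reaches zero */
};
/* Returns the mux-level expiry, or 0 when the streams own the timeouts. */
static unsigned int h2c_update_timeout_sketch(const struct h2c_sketch *h2c,
                                              unsigned int keepalive_to)
{
        if (h2c->nb_cs)
                return 0; /* attached streams: never time out here */
        return h2c->idle_start + keepalive_to; /* idle: mux takes over */
}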
This patch should be backported to recent stable releases after some
observation time. It depends on previous patch "MEDIUM: mux-h2: slightly
relax timeout management rules".
The H2 timeout rules were arranged to cover complex situations in 2.1
with commit c2ea47fb1 ("BUG/MEDIUM: mux-h2: do not enforce timeout on
long connections").
It turns out that such rules, while complex, do not perfectly cover all
use cases. The real intent is to say that as long as there are attached
streams, the connection must not timeout. Then once all these streams
have quit (possibly for timeout reasons) then the mux should take over
the management of timeouts.
We do have this nb_cs field which indicates the number of attached
streams, and it's updated even when leaving orphaned streams. So
checking it alone is sufficient to know whether it's the mux or the
streams that are in charge of the timeouts.
In its current state, this doesn't cause visible effects except that
it makes it impossible to implement more subtle parsing timeouts.
This would need to be backported as far as 2.0 along with the next
commit that will depend on it.
There's a rare race condition possible when trying to retrieve session from
a back connection's owner, that was fixed in 2.4 and described in commit
3aab17bd5 ("BUG/MAJOR: connection: reset conn->owner when detaching from
session list").
It also affects the trace code which does the same, so the same fix is
needed, i.e. check from conn->session_list that the connection is still
enlisted. It's visible when sending a few tens to hundreds of parallel
requests to an h2 backend and enabling traces in parallel.
This should be backported as far as 2.2 which is the oldest version
supporting traces.
Historically the stream-interface code used to check for connection
errors by itself. Later this was partially deferred to muxes, but
only once the mux is installed or the connection is at least in the
established state. But probably as a safety practice the connection
error tests remained.
The problem is that they are causing trouble when a response received
from a mux is mixed with an error report. The typical case is an upload
that is interrupted by the server sending an error or redirect without
draining all data, causing an RST to be queued just after the data. In
this case the mux has the data, the CO_FL_ERROR flag is present on the
connection, and unfortunately the stream-interface refuses to retrieve
the data due to this flag, and returns an error to the client.
It's about time to only rely on CS_FL_ERROR which is set by the mux, but
the stream-interface is still responsible for the connection during its
setup. However everywhere the CO_FL_ERROR is checked, CS_FL_ERROR is
also checked.
This commit addresses this by:
- adding a new function si_is_conn_error() that checks the SI state
and only reports the status of CO_FL_ERROR for states before
SI_ST_EST (a sketch follows this list).
- eliminating all checks for CO_FL_ERROR in places where CS_FL_ERROR
is already checked and either the presence of a mux was already
validated or the stream-int's state was already checked as being
SI_ST_EST or higher.
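A hedged sketch of that helper's contract, with simplified types (the real code reads the connection through the conn-stream):
enum si_state_sk { SI_ST_INI, SI_ST_CON, SI_ST_EST, SI_ST_DIS, SI_ST_CLO };
#define CO_FL_ERROR_SK 0x1 /* illustrative flag value */
struct conn_sketch { unsigned int flags; };
struct si_sketch { enum si_state_sk state; struct conn_sketch *conn; };
/* Report CO_FL_ERROR only before the established state; from SI_ST_EST
 * onward, errors are the mux's business (CS_FL_ERROR). */
static int si_is_conn_error_sketch(const struct si_sketch *si)
{
        if (si->state >= SI_ST_EST)
                return 0;
        return si->conn && (si->conn->flags & CO_FL_ERROR_SK);
}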
CO_FL_ERROR tests on the send() direction are also inappropriate as they
may cause the loss of pending data. Now this doesn't happen anymore and
such events are only converted to CS_FL_ERROR by the mux once notified of
the problem. As such, this must not cause the loss of any error event.
Now an early error reported on a backend mux doesn't prevent the queued
response from being read and forwarded to the client (the list of syscalls
below was trimmed and epoll_ctl is not represented):
recvfrom(10, "POST / HTTP/1.1\r\nConnection: clo"..., 16320, 0, NULL, NULL) = 66
sendto(11, "POST / HTTP/1.1\r\ntransfer-encodi"..., 47, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 47
epoll_wait(3, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=11, u64=11}}], 200, 15001) = 1
recvfrom(11, "HTTP/1.1 200 OK\r\ncontent-length:"..., 16320, 0, NULL, NULL) = 57
sendto(10, "HTTP/1.1 200 OK\r\ncontent-length:"..., 57, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 57
epoll_wait(3, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=11, u64=11}}], 200, 13001) = 1
epoll_wait(3, [{events=EPOLLIN, data={u32=10, u64=10}}], 200, 13001) = 1
recvfrom(10, "A\n0123456789\r\n0\r\n\r\n", 16320, 0, NULL, NULL) = 19
shutdown(10, SHUT_WR) = 0
close(11) = 0
close(10) = 0
Above, the server is an haproxy instance configured with the following:
listen blah
bind :8002
mode http
timeout connect 5s
timeout client 5s
timeout server 5s
option httpclose
option nolinger
http-request return status 200 hdr connection close
And the client takes care of sending requests and data in two distinct
parts:
while :; do
./dev/tcploop/tcploop 8001 C T S:"POST / HTTP/1.1\r\nConnection: close\r\nTransfer-encoding: chunked\r\n\r\n" P1 S:"A\n0123456789\r\n0\r\n\r\n" P R F;
done
With this, a small percentage of the requests will reproduce the behavior
above. Note that this fix requires the following patch to be applied for
the test above to work:
BUG/MEDIUM: mux-h1: only turn CO_FL_ERROR to CS_FL_ERROR with empty ibuf
This should be backported after a few weeks of observation, and
likely one version at a time. During the backports, the patch might
need to be adjusted at each check of CO_FL_ERROR to follow the
principles explained above.
A connection-level error must not be turned to a stream-level error if there
are still pending data for that stream, otherwise it can cause the truncation
of the last pending data.
This must be backported to affected releases, at least as far as 2.4,
maybe further.
CF_SHUTW_NOW shouldn't alone be a condition to exit the I/O handler; it
must be tested together with the emptiness of the response channel.
Must be backported to 2.5.
A server could reply with a response and a shutdown before the end of the htx
transfer; in this case the httpclient would leave before computing the
received response.
This patch fixes the issue by jumping to the "process_data" label instead of
the "more" label which doesn't do the si_shut.
Must be backported to 2.5.
Checking msg >= HTTP_MSG_DATA was useful to check if we received all the
data. However it does not work correctly in case of errors because we
don't reach this state, preventing the error from being caught in the httpclient.
The consequence of this problem is that we don't get the status code of
the error response upon an error.
Fix the issue by only checking co_data().
Must be backported to 2.5.
When a http-request or http-response rule fails to parse, we currently
free only the rule without its contents, which makes ASAN complain.
Now that we have a new function for this, let's completely free the
rule. This relies on this commit:
MINOR: actions: add new function free_act_rule() to free a single rule
It's probably not needed to backport this since we're on the exit path
anyway.
When a tcp-request or tcp-response rule fails to parse, we currently
free only the rule without its contents, which makes ASAN complain.
Now that we have a new function for this, let's completely free the
rule. Reg-tests are now completely OK with ASAN. This relies on this
commit:
MINOR: actions: add new function free_act_rule() to free a single rule
It's probably not needed to backport this since we're on the exit path
anyway.
There was free_act_rules() that frees all rules from a head but nothing
to free a single rule. Currently some rulesets partially free their own
rules on parsing error, and we're seeing some regtests emit errors under
ASAN because of this.
Let's first extract the code to free a rule into its own function so
that it becomes possible to use it on a single rule.
Log servers are a real mess because:
- entries are duplicated using memcpy() without their strings being
reallocated, which results in these ones not being freeable every
time.
- a new field, ring_name, was added in 2.2 by commit 99c453df9
("MEDIUM: ring: new section ring to declare custom ring buffers.")
but it's never initialized during copies, causing the same issue
- no attempt is made at freeing all that.
Of course, running "haproxy -c" under ASAN quickly notices that and
dumps a core.
This patch adds the missing strdup() and initialization where required,
adds a new free_logsrv() function to cleanly free() such a structure,
calls it from the proxy when iterating over logsrvs instead of silently
leaking their file names and ring names, and adds the same logsrv loop
to the proxy_free_defaults() function so that we don't leak defaults
sections on exit.
It looks a bit entangled, but it comes as a whole because all this stuff
is inter-dependent and was missing.
It's probably preferable not to backport this in the foreseeable future
as it may reveal other jokes if some obscure parts continue to memcpy()
the logsrv struct.
ASAN complains about the SNI expression not being freed upon haproxy
-c. Indeed the httpclient is now initialized with an SNI expression and
this one is never freed in the server release code.
Must be backported to 2.5 and could be backported to every stable
version.
src/http_client.c: In function ‘httpclient_cfg_postparser’:
src/http_client.c:1065:8: error: unused variable ‘errmsg’ [-Werror=unused-variable]
1065 | char *errmsg = NULL;
| ^~~~~~
src/http_client.c:1064:6: error: unused variable ‘err_code’ [-Werror=unused-variable]
1064 | int err_code = 0;
| ^~~~~~~~
Fix the build of the httpclient without SSL, the problem was introduced
with previous patch 71e3158 ("BUG/MINOR: httpclient: send the SNI using
the host header")
Must be backported to 2.5 as well.
Generate an SNI expression which uses the Host header of the request.
This is mandatory for most of the SSL servers nowadays.
Must be backported to 2.5 with the previous patch which exports
server_parse_sni_expr().
The appctx owner is not a stream-interface anymore. It is now a
conn-stream. However, sink code was not updated accordingly. It is now
fixed.
It is 2.6-specific, no backport is needed.
The appctx owner is not a stream-interface anymore. It is now a conn-stream.
In the cli I/O handler for the command "debug dev fd", we still handle it as
a stream-interface. It is now fixed.
It is 2.6-specific, no backport is needed.
Since the CS/SI refactoring, the .release callback function may be called
twice. The first call when a shutdown for read or for write is performed.
The second one when the applet is detached from its conn-stream. The second
call must be guarded, just like the first one, to only be performed if the
stream-interface is not in the disconnected (SI_ST_DIS) or closed
(SI_ST_CLO) state.
To simplify the fix, we now always rely on si_applet_release() function.
It is 2.6-specific, no backport is needed.
The httpclient lua code is lacking the end callback, which means it
won't be able to wake up the lua code after a longjmp if the connection
was closed without any data.
Must be backported to 2.5.
This commit reverts this one:
"d5066dd9d BUG/MEDIUM: quic: qc_prep_app_pkts() retries on qc_build_pkt() failures"
After having filled the congestion control window, qc_build_pkt() always fails.
Then depending on the relative position of the writer and reader indexes for the
TX buffer, this could lead this function to try to reuse the buffer even if not full.
In such case, we do not always mark the end of the data in this TX buffer. This
is something the reader cannot understand: it reads a false datagram length,
then a wrong packet address from the TX buffer, leading to an invalid pointer
dereferencing.
STREAM frames which are not acknowledged in order are inserted in ->tx.acked_frms
tree ordered by the STREAM frame offset values. Then, they are consumed in order
by qcs_try_to_consume(). But, when we retransmit frames, we possibly have to
insert the same STREAM frame node (with the same offset) in this tree.
The problem is when they have different lengths. Unfortunately the retransmitted
frames are not inserted because of the tree nature (EB_ROOT_UNIQUE). If the STREAM
frame which has been successfully inserted has a smaller length than the
retransmitted ones, when it is consumed there are trailing bytes in the STREAM
(retransmitted ones) which indefinitely remain in the STREAM TX buffer and
will never properly be consumed, leading to a blocking state.
At this time this may happen because we sometimes build STREAM frames
with null lengths. But this is another issue.
The solution is to use an EB_ROOT tree to support the insertion of STREAM frames
with the same offset but with different lengths. As qcs_try_to_consume() supports
the STREAM frames retransmission, this modification should not have any impact.
The httpclient mistakenly uses the htx_get_first{_blk}() functions instead
of the htx_get_head{_blk}() functions. This could stop the httpclient
because it would be without the start line, waiting for data that will never
come.
Must be backported to 2.5.
Remove the UNUSED blocks when iterating on headers, we should not stop
when encountering one. We should only stop iterating once we found the
EOH block. It doesn't provoke a problem, since we don't manipulate
the headers before treating them, but it could evolve in the future.
Must be backported to 2.5.
Consume partly the blocks in the httpclient I/O handler when there is
not enough room in the destination buffer for the whole block or when
the block is not contained entirely in the channel's output.
It prevents the I/O handler from being stuck in cases where we need to modify
the buffer with a filter, for example.
Must be backported to 2.5.
In httpclient_applet_io_handler(), on the response path, we don't check
if the data are in the output part of the channel, and could consume
them before they were analyzed.
To fix this issue, this patch checks for the stline and the headers if
the msg_state is >= HTTP_MSG_DATA which means the stline and headers
were analyzed. For the data part, it checks if each htx block is in the
output before copying it.
Must be backported to 2.5.
Dynamic servers feature is now judged to be stable enough. Remove the
experimental-mode requirement for "add/del server" commands. This should
facilitate dynamic servers adoption.
For server checks, SSL and PROXY are automatically inherited from the
server settings if no specific check port is specified. Change this
behavior for dynamic servers : explicit "check-ssl"/"check-send-proxy"
are required for them.
Without this change, it is impossible to add a dynamic server with
SSL/PROXY settings and checks without, if the check port is not
explicit. This is because "no-check-ssl"/"no-check-send-proxy" keywords
are not available for dynamic servers.
This change respects the principle that dynamic servers on the CLI
should not reuse the same shortcuts used during the config file parsing.
Mostly because we expect this feature to be manipulated by automated
tools, contrary to the config file which should aim to be the shortest
possible for human readability.
Update the documentation of the "check" keyword to reflect this change.
The current implementation of STREAM frames emission has some
limitation. Most notably when we cannot send all frames in a single
qc_send run.
In this case, frames are left in front of the MUX list. They will be
re-sent individually before other frames, possibly before another frame from
the same STREAM with new data. An opportunity to merge the frames is
lost here.
This method is now improved. If a frame cannot be sent entirely, it is
discarded. On the next qc_send run, we retry sending from this position. A
new field qcs.sent_offset is used to remember this. A new frame list is
used for each qc_send.
The impact of this change is not precisely known. The most notable point
is that it is a more logical method of emission. It might also improve
performance as we do not keep old STREAM frames which might delay other
streams.
Implement a new MUX function qcc_notify_send. This function must be
called by the transport layer to confirm the sending of STREAM data to
the MUX.
For the moment, the function has no real purpose. However, it will be
useful to solve limitations on push frame and implement the flow
control.
For the moment, the transport layer function qc_send_app_pkts lacks
features. Most notably, it only sends up to a single Tx buffer and won't
retry even if there are frames left and its Tx buffer is now empty.
To overcome this limitation, the MUX implements an opportunistic retry
sending mechanism. qc_send_app_pkts is repeatedly called until the
transport layer is blocked on an external condition (such as congestion
control or a sendto syscall error).
The blocking was detected by inspecting the frame list before and after
qc_send_app_pkts. If no frame has been popped by the function, we
considered the transport layer to be blocked and we stop sending. The
MUX is subscribed on the lower layer to send the frames left.
However, in case of STREAM frames, qc_send_app_pkts might use only a
portion of the data and update the frame offset. So, for STREAM frames,
a new mechanism is implemented : if the offset field of the first frame
has not been incremented, it means the transport layer is blocked.
This should improve transfer execution. Before this change, there is a
possibility of interrupted transfer if the mux has not sent everything
possible and is waiting on transport signaling which will never
happen.
In the future, qc_send_app_pkts should be extended to retry sending by
itself. All this code burden will be removed from the MUX.
For the moment, unidirectional streams handling is not identical to
bidirectional ones in MUX/H3 layer, both in Rx and Tx path. As a safety,
skip over uni streams in qc_send.
In fact, this change has no impact because qcs.tx.buf is emptied before
we start using qcs_push_frame, which prevents the call to
qcs_push_frame. However, this condition will soon change to improve
bidir streams emission, so an explicit check on stream type must be
done.
It is planned to unify uni and bidir streams handling in a future
stage. When implemented, the check will be removed.
The aim of the idle timeout is to silently close the connection after a period
of inactivity depending on the "max_idle_timeout" transport parameter advertised
by the endpoints. We add a new task to implement this timer. Its expiry is
updated each time we receive an ack-eliciting packet, and each time we send
an ack-eliciting packet if no other such packet was sent since we received
the last ack-eliciting packet. Such conditions may be implemented thanks
to the new QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ flag.
There is no need to use such a reference counter anymore since the QUIC
connections are always handled by the same thread.
quic_conn_drop() is removed. Its code is merged into quic_conn_release().
Andrew Suffield reported in issue #1596 that we've had a bug in
session_accept_fd() since 2.4 with commit 1b3c931bf ("MEDIUM:
connections: Introduce a new XPRT method, start().") where an error
label is wrong and may cause the leak of the freshly allocated session
in case conn_xprt_start() returns < 0.
The code was checked there and the only two transport layers available
at this point are raw_sock and ssl_sock. The former doesn't provide a
->start() method hence conn_xprt_start() will always return zero. The
second does provide such a function, but it may only return <0 if the
underlying transport (raw_sock) has such a method and fails, which is
thus not the case.
So fortunately it is not possible to trigger this leak.
The patch above also touched the accept code in quic_sock() which was
mostly a plain copy of the session code, but there the move didn't
have this impact, and since then it was simplified and the next change
moved it to its final destination with the proper error label.
This should be backported as far as 2.4 as a long-term safety measure
(e.g. if in the future we have a reason for making conn_xprt_start()
to start failing), but will not have any positive nor negative effect
in the short term.
These two sample fetch methods report respectively the file name and the
line number where the last final rule was located. This is aimed
at being used on log-format lines to help admins figure what rule in the
configuration gave a final verdict, and help understand the condition
that led to the action.
For example, it's now possible to log the last matched rule by adding
this to the log-format:
... lr=%[last_rule_file]:%[last_rule_line]
A regtest is provided to test various combinations of final rules, some
even on top of each other from different rulesets.
When a tcp-{request,response} content or http-request/http-response
rule delivers a final verdict (deny, accept, redirect etc), the last
evaluated one will now be recorded in the stream. The purpose is to
permit logging the last one that performed a final action. For now
the log is not produced.
In TCP, when a conn-stream is detached from a backend connection, the
connection must always be closed. It was only performed if an error or a
shutdown occurred or if there was no connection owner. But it is a problem,
because, since the 2.3, backend connections are always owned by a
session. This way it is possible to have idle connections attached to a
session instead of a server. But there are no idle connections in TCP. In
addition, when a session owns a connection it is responsible for closing it
when it is released. But it only works for idle connections. And it only
works if the session is released.
Thus there is room for bugs here. And indeed, a connection leak may
occur if a connection retry is performed because of a timeout. In this case,
the underlying connection is still alive and is waiting to be fully
established. Thus, when the conn-stream is detached from the connection, the
connection is not closed. Because the PT multiplexer is quite simple, there
is no timeout at this stage. We depend on the kernel to be notified and
finally close the connection. With an unreachable server, orphan backend
connections may accumulate for a while. It may be perceived as a leak.
Because there is no reason to keep such backend connections, we just close
them now. Frontend connections are still closed by the session or when an
error or a shutdown occurs.
This patch should fix the issue #1522. It must be backported as far as
2.0. Note that 2.2 and 2.0 are not affected by this bug because there is
no owner for backend TCP connections. But it is probably a good idea to
backport the patch on these versions to avoid any future bugs.
Found manually, while creating the previous commits to turn `struct proxy`
members into ists.
There is an existing Coccinelle rule to replace this pattern by `istadv()` in
`ist.cocci`:
@@
struct ist i;
expression e;
@@
- i.ptr += e;
- i.len -= e;
+ i = istadv(i, e);
But apparently it is not smart enough to match ists that are stored in another
struct. It would be useful to make the existing rule more generic, so that it
might catch similar cases in the future.
The server_id_hdr_name is already processed as an ist in various locations,
let's also just store it as such.
see 0643b0e7e ("MINOR: proxy: Make `header_unique_id` a `struct ist`") for a
very similar past commit.
The orgto_hdr_name is already processed as an ist in `http_process_request`,
let's also just store it as such.
see 0643b0e7e ("MINOR: proxy: Make `header_unique_id` a `struct ist`") for a
very similar past commit.
The fwdfor_hdr_name is already processed as an ist in `http_process_request`,
let's also just store it as such.
see 0643b0e7e ("MINOR: proxy: Make `header_unique_id` a `struct ist`") for a
very similar past commit.
The monitor_uri is already processed as an ist in `http_wait_for_request`, lets
let's also just store it as such.
see 0643b0e7e ("MINOR: proxy: Make `header_unique_id` a `struct ist`") for a
very similar past commit.
The channels' buffer state is displayed in the stream trace messages. However,
because of a typo, the request buffer was used instead of the response one.
This patch should be backported as far as 2.2.
The response analyzer of the master CLI only handles read errors. So if
there is a write error, the session remains stuck because some outgoing data
are blocked in the channel and the response analyzer waits for everything to be
sent. Because the maxconn is set to 10 for the master CLI, it may become
unresponsive if this happens too many times.
Now read and write errors, timeouts and client aborts are handled.
This patch should solve the issue #1512. It must be backported as far as
2.0.
In the I/O handler of the cache applet, we must update the underlying buffer
when the HTX message is loaded, using htx_from_buf() function instead of
htxbuf(). It is important because the applet will update the message by
adding new HTX blocks. This way, the state of the underlying buffer remains
consistent with the state of the HTX message.
It is especially important if HAProxy is compiled with "DEBUG_STRICT=2"
mode. Without this patch, channel_add_input() call crashed if the channel
was empty at the beginning of the I/O handler.
Note that it is more a build/debug issue than a bug. But this patch may
prevent future bugs. For now it is safe because htx_to_buf() function is
systematically called, updating accordingly the underlying buffer.
This patch may be backported as far as 2.0.
For now, for a stream, request analyzers are set at 2 stages. The first one
is when the stream is created. The session's listener analyzers, if any, are
set on the request channel. In addition, some HTTP analyzers are set for HTX
streams (AN_REQ_WAIT_HTTP and AN_REQ_HTTP_PROCESS_FE). The second one is
when the backend is set on the stream. At this stage, request analyzers are
updated using the backend settings.
It is an issue for client applets because there is no listener attached to
the stream. In addition, it may have no specific/dedicated backend. Thus,
several request analyzers are missing. Among others, the HTTP analyzers for
HTTP applets. The HTTP client is the only one affected for now.
To fix the bug, when a stream is created without a listener, we use the
frontend to set the request analyzers. Note that there is no issue with the
response channel because its analyzers are set when the server connection is
established.
This patch may be backported to all stable versions. Because only the HTTP
client is affected, it must at least be backported to 2.5. It is related to
the issue #1593.
This bug is the same as for the HTTP client. See "BUG/MINOR: httpclient:
Set conn-stream/channel EOI flags at the end of request" for details.
Note that because a filter is always attached to the stream when the cache
is used, there is no issue because there is no direct forwarding in this
case. Thus the stream analyzers are able to see the HTX_FL_EOM flag on the
HTX message.
This patch must be backported as far as 2.0. But only CF_EOI must be set
because applets are not attached to a conn-stream on older versions.
This bug is the same as for the HTTP client. See "BUG/MINOR: httpclient:
Set conn-stream/channel EOI flags at the end of request" for details.
This patch must be backported as far as 2.0. But only CF_EOI must be set
because applets are not attached to a conn-stream on older versions.
This bug is the same as for the HTTP client. See "BUG/MINOR: httpclient:
Set conn-stream/channel EOI flags at the end of request" for details.
This patch must be backported as far as 2.0. But only CF_EOI must be set
because applets are not attached to a conn-stream on older versions.
In HTX, the HTX_FL_EOM flag is added on the message to notify that the end of the
message was received. In addition, the producer must set the CS_FL_EOI flag on
the conn-stream. If it is a mux, the stream-interface is responsible for
setting the CF_EOI flag on the input channel. But, for now, if the producer is an
applet, in addition to the conn-stream flag, it must also set the channel
one.
These flags are used to notify the stream that the message is finished and
no more data are expected. It is especially important when the message
itself is directly forwarded from one side to the other. Because in this
case, the stream has no way to see the HTX_FL_EOM flag on the
message. Otherwise, the stream will detect a client or a server abort,
depending on the side.
For the HTTP client, it is not really easy to diagnose this error because
there is also another bug hiding this one. All HTTP request analyzers are
not set on the input channel. This will be fixed by another patch.
This patch must be backported to 2.5. It is related to the issue #1593.
In commit e9ed63e548 dark mode support was added to the stats page. The
initial commit does not include dark mode color overrides for the
.socket CSS class. This commit colors socket rows the same way as
backends that are active but do not have a health check defined.
This fixes an issue where reading information from socket lines became
really hard in dark mode due to suboptimal coloring of the cell
background and the font in it.
Change the return value to success in qc_handle_bidi_strm_frm for two
specific cases :
* if STREAM frame is an already received offset
* if application decoding failed
This ensures that the packet is not dropped and properly acknowledged.
Prior to this fix, the return code was set to error, which prevented
the ACK from being generated.
The impact of the bug might be noticeable in environments with packet
loss and retransmission. Due to haproxy not generating ACKs for packets
containing STREAM frames with already received offset, the client will
probably retransmit them again, which will worsen the network
transmission.
The "show sess" cli command only handles "http" or "tcp" as a fallback
mode, replace this by a call to proxy_mode_str() to show all the modes.
Could be backported to all maintained versions.
Some users with very large numbers of connections have been facing
extremely long malloc_trim() calls on reload that managed to trigger
the watchdog! That's a bit counter-productive. It's even possible
that some implementations are not perfectly reliable or that their
trimming time grows quadratically with the memory used. Instead of
constantly trying to work around these issues, let's offer an option
to disable this mechanism, since nobody had been complaining in the
past, and this was only meant to be an improvement.
This should be backported to 2.4 where trimming on reload started to
appear.
When in congestion avoidance state and when acknowledging an <acked> number of
bytes, we must increase the congestion window by at most one datagram
(<path->mtu>) per congestion window. So thanks to this patch we apply a ratio
to the current number of acked bytes : <acked> * <path->mtu> / <cwnd>.
So, when <cwnd> bytes are acked we precisely increment <cwnd> by <path->mtu>.
Furthermore we take into account the number of remaining acknowledged bytes
each time we increment the window by <acked>, storing their value in the
algorithm state struct (->remain_acked) so that it might be taken into account
at the next ACK event.
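For illustration, a roughly equivalent sketch of this rule (illustrative struct; the loop below applies the <acked> * <path->mtu> / <cwnd> ratio step by step, assuming cwnd >= mtu > 0):
#include <stdint.h>
struct cc_ca_sketch {
        uint64_t cwnd;         /* congestion window, in bytes */
        uint64_t mtu;          /* one datagram: path->mtu */
        uint64_t remain_acked; /* acked bytes carried to the next event */
};
/* Congestion avoidance: each time a full <cwnd> worth of bytes has been
 * acknowledged, grow the window by exactly one datagram, and keep the
 * remainder for the next ACK event. */
static void cc_ca_on_ack(struct cc_ca_sketch *cc, uint64_t acked)
{
        acked += cc->remain_acked;
        while (acked >= cc->cwnd) {
                acked -= cc->cwnd;
                cc->cwnd += cc->mtu;
        }
        cc->remain_acked = acked;
}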
Since the persistent congestion detection is done out of the congestion
controllers, there is no need to pass them information through quic_cc_event struct.
We remove its useless members. Also remove qc_cc_loss_event() which is no longer used.
We establish the persistent congestion out of any congestion controller
to improve the algorithms' genericity. This path characteristic detection may
be implemented regardless of the underlying congestion control algorithm.
Send congestion (loss) event using directly quic_cc_event(), so without
qc_cc_loss_event() wrapper function around quic_cc_event().
Take the opportunity of this patch to shorten "newest_time_sent" member field
of quic_cc_event to "time_sent".
We want to be able to make the congestion controllers re-enter the slow
start state outside of the congestion controllers themselves. So,
we add a callback ->slow_start() to do so.
Define this callback for NewReno algorithm.
The QUIC connection path's in-flight bytes is a variable which should not be
manipulated by the congestion controller. The latter's aim is to compute the
congestion window. So, we pass it as a parameter as little as possible to do so.
kFreeBSD needs to be treated as a distinct target from FreeBSD
since the underlying system libc is the GNU one. Thus, relying
only on __GLIBC__ no longer suffices.
- freebsd-glibc new target, key difference is including crypt.h
and linking to libdl like linux.
- cpu affinity available but the api is still the FreeBSD's.
- enabling auxiliary data access only for Linux.
Patch based on preliminary work done by @bigon.
closes #1555
Implement the locally flow-control streams limit for opened
bidirectional streams. Add a counter which is used to count the total
number of closed streams. If this number is big enough, emit a
MAX_STREAMS frame to increase the limit of remotely opened bidirectional
streams.
This is the first commit to implement QUIC flow-control. A series of
patches should follow to complete this.
This is required to be able to handle more than 100 client requests.
This should help to validate the Multiplexing interop test.
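A hedged sketch of this bookkeeping; the half-limit threshold below is an assumption for illustration, not necessarily the actual policy:
#include <stdint.h>
struct qcc_fctl_sketch {
        uint64_t max_streams; /* currently advertised bidi streams limit */
        uint64_t nb_closed;   /* remotely-opened streams closed since then */
};
/* Called when a remotely-opened bidirectional stream is closed. Returns 1
 * when a MAX_STREAMS frame advertising the new limit should be emitted. */
static int on_stream_closed(struct qcc_fctl_sketch *fctl)
{
        fctl->nb_closed++;
        if (fctl->nb_closed < fctl->max_streams / 2)
                return 0; /* not "big enough" yet */
        fctl->max_streams += fctl->nb_closed;
        fctl->nb_closed = 0;
        return 1; /* emit MAX_STREAMS carrying fctl->max_streams */
}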
This commit should fix the possible transfer interruption caused by the
previous commit. The MUX always retries to send frames if there is
remaining data after a send call on the transport layer. This is useful
if the transport layer is not blocked on the sending path.
In the future, the transport layer should retry by itself the send
operation if no blocking condition exists. The MUX layer will always
subscribe to retry later if remaining frames are reported which indicate
a blocking on the transport layer.
Modify the STREAM emission in qc_send. Use the new transport function
qc_send_app_pkts to directly send the list of constructed frames. This
allows to remove the tasklet wakeup on the quic_conn and should reduce
the latency.
If not all frames are sent after the transport call, subscribe the MUX
on the lower layer to be able to retry. Currently there is a bug because
the transport layer does not retry to send frames in excess after a
successful sendto. This might cause the transfer to be interrupted.
Improve the functions used to detect the stream characteristics :
uni/bidirectional and local/remote initiated.
Most notably, these functions are now designed to work transparently for
a MUX in the frontend or backend side. For this, we use the connection
to determine the current MUX side. This will be useful if QUIC is
implemented on the server side.
Since QUIC accept handling has been improved, the MUX is initialized
after the handshake completion. Thus it's safe to access transport
parameters in qc_init via the quic_conn.
Remove quic_mux_transport_params_update which was called by the
transport for the MUX. This improves the architecture by removing a
direct call from the transport to the MUX.
The deleted function body is not transferred to qc_init because this part
will change heavily in the near future when implementing the
flow-control.
We want to be able to build ack-eliciting frames to be embedded into QUIC packets
from a prebuilt list of ack-eliciting frames. This will be helpful for the mux
which would like to send STREAM frames asap after having built its own prebuilt
list.
To do so, we only add a parameter as struct list to this function to handle
such a prebuilt list.
We want to be able to send ack-eliciting packets from a list of ack-eliciting
frames. So, this patch adds such a parameter to the function responsible for
building 1RTT packets. The entry point function is qc_send_app_pkts() which
is used with the underlying packet number space TX frame list as parameter.
We want to get rid of the code used during the handshake step. The aim of
qc_prep_app_pkts() is to build short packets which are also datagrams.
Make quic_conn_app_io_cb() call this new function to prepare short packets.
As reported by Tim in issue #1428, our sources are clean, there are
just a few files with a few rare non-ASCII chars for the paragraph
symbol, a few typos, or in Fred's name. Given that Fred already uses
the non-accentuated form at other places like on the public list,
let's uniformize all this and make sure the code displays equally
everywhere.
Commit e81248c0c ("BUG/MINOR: pool: always align pool_heads to 64 bytes")
added a free of the allocated pool in pool_destroy() using ha_free(), but
it added a subtle bug by which once the pool is released, setting its
address to NULL inside the structure itself cannot work because the area
has just been freed.
This will need to be backported wherever the patch above is backported.
A segfault happens when receiving a CONNECTION_CLOSE during handshake.
This is because the mux is not initialized at this stage but the
transport layer dereferences it.
Fix this by ensuring that the MUX is initialized before. Thanks to Willy
for his help on this one. Welcome in the QUIC-men team !
This is the pool equivalent of commit 97ea9c49f ("BUG/MEDIUM: fd: always
align fdtab[] to 64 bytes"). After a careful code review, it turns out that
the pool heads are the other structures allocated with malloc/calloc that
claim to be aligned to a size larger than what the allocator can offer.
While no issue was reported on them, no memset() is performed and no type
is large, this is a problem waiting to happen, so better fix it. In
addition, it's relatively easy to do by storing the allocation address
inside the pool_head itself and use it at free() time. Finally, threads
might benefit from the fact that the caches will really be aligned and
that there will be no false sharing.
This should be backported to all versions where it applies easily.
When POSTing a request with a payload, and reusing the same httpclient
lua instance, one could encounter a spinning of the httpclient appctx.
Indeed the sent counter is not reset between 2 POSTs and the condition
for sending the EOM flag is never met.
Must fix issue #1593.
To be backported to 2.5.
Many inline functions involve some BUG_ON() calls and because of the
partial complexity of the functions, they're not inlined anymore (e.g.
co_data()). The reason is that the expression instantiates the message,
its size, sometimes a counter, then the atomic OR to taint the process,
and the back trace. That can be a lot for an inline function and most
of it is always the same.
This commit modifies this by delegating the common parts to a dedicated
function "complain()" that takes care of updating the counter if needed,
writing the message and measuring its length, and tainting the process.
This way the caller only has to check a condition, pass a pointer to the
preset message, and the info about the type (bug or warn) for the tainting,
then decide whether to dump or crash. Note that this part could also be
moved to the function but resulted in complain() always being at the top
of the stack, which didn't seem like an improvement.
Thanks to these changes, the BUG_ON() calls do not result in uninlining
functions anymore and the overall code size was reduced by 60 to 120 kB
depending on the build options.
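A hedged sketch of the shape of this split (the real complain() also tracks a per-condition counter and taints the process with an atomic OR):
#include <stdio.h>
#include <string.h>
/* Out-of-line helper: perform the heavy, shared work once, so that the
 * call sites remain small enough to stay inlined. */
void complain_sketch(int *once, const char *msg, int is_bug)
{
        if (once && (*once)++ > 0)
                return;  /* "warn once" semantics */
        (void)is_bug;    /* real code taints bugs and warnings differently */
        fwrite(msg, 1, strlen(msg), stderr);
}
#define BUG_ON_SKETCH(cond) do {                              \
        if (cond) {                                           \
                static const char msg[] = "BUG: " #cond "\n"; \
                complain_sketch(NULL, msg, 1);                \
        }                                                     \
} while (0)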
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
The 3 functions http_{req,res,after_res}_keywords_register() are
referenced in initcalls by their pointer, it makes no sense to declare
them inline. At best it causes function duplication, at worst it doesn't
build on older compilers.
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
Do not distinguish the direction (TX/RX) when setting TLS secrets flags.
There is no such distinction in RFC 9001.
Assemble them at the same level: at the upper context level.
This is required since this previous commit:
"MINOR: quic: Post handshake I/O callback switching"
If not, such packets remain endlessly in the RX buffer and cannot be parsed
by the new I/O callback used after the handshake has been confirmed.
Wakeup asap the timer task when setting its timer in the past.
Take also the opportunity of this patch to simplify quic_pto_pktns():
calling tick_first() is useless here to compare <lpto> with <tmp_pto>.
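Assuming haproxy's task/tick API (task_wakeup(), task_queue(), tick_is_expired() and the global now_ms), the intended behavior looks roughly like this sketch:
static void arm_timer_sketch(struct task *t, unsigned int expiry)
{
        t->expire = expiry;
        if (tick_is_expired(t->expire, now_ms))
                task_wakeup(t, TASK_WOKEN_TIMER); /* already in the past */
        else
                task_queue(t);
}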
Since the recent refactoring on the conn-streams, a stream always has
defined frontend and backend conn-streams. Thus, in stream_dump(), there is
no reason to still test if these conn-streams are defined.
In addition, still in stream_dump(), get the stream-interfaces using the
conn-streams and not the opposite.
This patch should fix issue #1589 and #1590.
Reorganize the Rx path for STREAM frames on bidirectional streams. A new
function qcc_recv is implemented on the MUX. It will handle the STREAM
frames copy and offset calculation from transport to MUX.
Another function named qcc_decode_qcs from the MUX can be called by
transport each time new STREAM data has been copied.
The architecture is now cleaner with the MUX layer in charge of parsing
the STREAM frames offsets. This is required to be able to implement the
flow-control on the MUX layer.
Note that as a convenience, a STREAM frame is not partially copied to
the MUX buffer. This simplifies the implementation for the moment but it
may change in the future to optimize the STREAM frames handling.
For the moment, only bidirectional streams benefit from this change. In
the future, it may be extended to unidirectional streams to unify the
STREAM frames processing.
FIN flag on a STREAM frame was not detected if the frame was previously
buffered on qcs.rx.frms before being handled.
To fix this, copy the fin field from the quic_stream instance to
quic_rx_strm_frm. This is required to properly notify the FIN flag on
qc_treat_rx_strm_frms for the MUX layer.
Without this fix, the request channel might be left opened after the
last STREAM frame reception if there are out-of-order frames on the Rx
path.
This flag is set when the STREAM frame with FIN set has been received on
a qcs instance. For now, this is only used as a BUG_ON guard to prevent
against multiple frames with FIN set. It will also be useful when
reorganizing the RX path and moving some of its code into the mux.
Adjust the function to handle buffered STREAM frames. If the offset of
the frame was already fully received, discard the frame. If only
partially received, compute the difference and copy only the newly
received part.
Before this change, a buffered frame representing a fully or partially
received offset caused the loop to be interrupted. The frame was
preserved, thus preventing frames with greater offset to be handled.
This may fix some occurrences of stalled transfers on the request channel
if there are out-of-order STREAM frames on the Rx path.
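A hedged sketch of the overlap handling (illustrative types; <rx_off> is the offset already contiguously received on the stream):
#include <stdint.h>
/* Returns 0 when the buffered frame only covers already-received data
 * (discard it); otherwise trims the known prefix and returns 1 so the
 * caller copies only the new bytes. */
static int trim_buffered_frm(uint64_t rx_off, uint64_t *frm_off,
                             uint64_t *frm_len, const unsigned char **data)
{
        if (*frm_off + *frm_len <= rx_off)
                return 0;
        if (*frm_off < rx_off) {
                uint64_t diff = rx_off - *frm_off;
                *frm_off += diff;
                *frm_len -= diff;
                *data += diff;
        }
        return 1;
}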
qc_strm_cpy can be simplified by simply using b_putblk which already
handles wrapping of the destination buffer. The function is kept to
update the frame length and offset fields.
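Roughly, assuming haproxy's buffer API (b_putblk() from <haproxy/buf.h>) and an illustrative frame struct, the helper reduces to:
#include <stdint.h>
#include <haproxy/buf.h>
struct strm_frm_sketch { uint64_t offset; uint64_t len; };
/* b_putblk() already deals with a wrapping destination, so only the
 * frame bookkeeping is left to do here. */
static size_t qc_strm_cpy_sketch(struct buffer *buf,
                                 struct strm_frm_sketch *frm,
                                 const char *data)
{
        size_t done = b_putblk(buf, data, frm->len);
        frm->offset += done;
        frm->len -= done;
        return done;
}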
With BUG_ON() being enabled by default it is more useful to use a BUG_ON()
instead of an effectively never-taken if, as any incorrect assumptions will
become much more visible.
see 488ee7fb6 ("BUG/MAJOR: proxy_protocol: Properly validate TLV lengths")
Transform the unreachability comment into a call to `my_unreachable()` to allow
the compiler to benefit from it.
see d1b15b6e9 ("MINOR: proxy_protocol: Ingest PP2_TYPE_UNIQUE_ID on incoming connections")
see 615f81eb5 ("MINOR: connection: Use a `struct ist` to store proxy_authority")
693b23bb1 ("MEDIUM: tree-wide: Use unsafe conn-stream API when it is
relevant") introduced a regression in DEBUG_STRICT mode because some BUG_ON
conditions were inverted. It should be ok now.
In addition, ALREADY_CHECKED macro was removed from appctx_wakeup() function
because it is useless now.
In htx_xfer_blks() function, when headers or trailers are partially
transferred, we rollback the copy by removing copied blocks. Internally, all
blocks between <dstref> and <dstblk> are removed. But if the transfer was
stopped because we failed to reserve a block, the variable <dstblk> is
NULL. Thus, we must not try to remove it. It is unexpected to call
htx_remove_blk() in this case.
htx_remove_blk() was updated to test <blk> variable inside the existing
BUG_ON(). The block must be defined.
For now, this bug may only be encountered when H2 trailers are copied. On H2
headers, the destination buffer is empty. Thus a swap is performed.
This patch should fix the issue #1578. It must be backported as far as 2.4.
When an HTTP health-check is performed in FCGI, we must not rely on the SI
source and destination addresses to set default parameters
(REMOTE_ADDR/REMOTE_PORT and SERVER_NAME/SERVER_PORT) because the backend
conn-stream is not attached to a stream but to a health-check. Thus, there is
no stream-interface. In addition, there is no client connection because it
is an "internal" session.
Thus, for now, in this case, there is only the server connection that can be
used. So src/dst addresses are retrieved from the server connection when the
CS application is a health-check.
This patch should solve issue #1572. It must be backported to 2.5. Note that
the CS API has changed. Thus, on HAProxy 2.5, we should test the session's
origin instead:
const struct sockaddr_storage *src = (cs_check(fstrm->cs) ? ...);
const struct sockaddr_storage *dst = (cs_check(fstrm->cs) ? ...);
This way the si_*_recv() and si_*_snd() APIs are defined the same
way. si_sync_snd/si_sync_recv are both exported and defined in the C
file. And si_cs_send/si_cs_recv are private and only used by
stream-interface internals.
The unsafe conn-stream API (__cs_*) is now used when we are sure the good
endpoint or application is attached to the conn-stream. This avoids compiler
warnings about possible null derefs. It also simplifies the code and clears up
any ambiguity about manipulated entities.
The use of co_set_data() should be strictly limited to setting the amount
of existing data to be transmitted. It ought not be used to decrement the
output after the data have left the buffer, because doing so involves
performing incorrect calculations using co_data() that still comprises
data that are not in the buffer anymore. Let's use c_rew() for this, which
is made exactly for this purpose, i.e. decrement c->output by as much as
requested. This is cleaner, faster, and will permit stricter checks.
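For illustration, a hedged sketch of the substitution, with <chn> and
<sent> as hypothetical names:

    /* before: recomputes the output from co_data(), which may still
     * account for data that already left the buffer */
    co_set_data(chn, co_data(chn) - sent);

    /* after: directly rewind c->output by the amount that left */
    c_rew(chn, sent);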
The only reason for warning once is to check if a condition really
happens. Let's use a term that better translates the intent, that's
important when reading the code.
The quic_frame instance containing the quic_stream must be freed when
the corresponding ACK has been received. However when implementing this
on qcs_try_to_consume, some data transfers are interrupted and cannot
complete (DC test from interop test suite).
The sending buffer of each stream is cleared when processing ACKs
corresponding to STREAM emitted frames. If the buffer is empty, free it
and offer it as with other dynamic buffers usage.
This should reduce memory consumption as previously an open stream
confiscated a buffer during its whole lifetime even if there was no more
data to transmit.
Simplify the data manipulation of STREAM frames on TX. Only the stream data
and len fields are used to generate valid STREAM frames from the
buffer. Do not use the offset field, which required that a single buffer
instance be shared for every frame on a single stream.
This one will maintain a static counter per call place and will only
emit the warning on the first call. It may be used to invite users to
report an unexpected event without spamming them with messages.
This is the same as BUG_ON() except that it never crashes and only emits
a warning and a backtrace, inviting users to report the problem. This will
be usable for non-fatal issues that should not happen and need to be fixed.
This way the BUG_ON() when using DEBUG_STRICT_NOCRASH is effectively an
equivalent of WARN_ON().
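A rough sketch of the intended semantics (not the actual bug.h code,
which is more elaborate):

    /* warn and dump a backtrace when <cond> is true, but never crash */
    #define WARN_ON(cond) do {                                       \
            if (unlikely(cond)) {                                    \
                    fprintf(stderr, "WARNING: \"%s\" at %s:%d\n",    \
                            #cond, __FILE__, __LINE__);              \
                    ha_backtrace_to_stderr();                        \
            }                                                        \
    } while (0)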
The functions needed to manipulate the "tainted" flags were located in
too high a level to be callable from the lower code layers. Let's move
them to bug.h.
get_tainted() was using an atomic store from the atomic value to a
local one instead of using an atomic load. In practice it has no effect
given the relatively rare updates of this field and the fact that it's
read only when dumping "show info" output, but better fix it.
There's probably no need to backport this.
GCC 6 was not very good at value propagation and is often misled about
risks of null derefs. Since 2.6-dev commit 13a35e575 ("MAJOR: conn_stream/
stream-int: move the appctx to the conn-stream"), it sees a risk of null-
deref in stream_upgrade_from_cs() after checking cs_conn_mux(cs). Let's
disguise the result so that it doesn't complain anymore. The output code
is exactly the same. The same method could be used to shut warnings at
-O1 that affect the same compiler by the way.
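The disguise trick roughly looks like this (a sketch; the real macro
lives in compiler.h):

    /* pass the value through an empty asm statement so the compiler
     * loses track of its origin and stops predicting a null deref */
    #define DISGUISE(v) ({ typeof(v) __v = (v); asm("" : "=rm"(__v) : "0"(__v)); __v; })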
Adjust the handling of ACK for STREAM frames. When receiving an ACK, the
corresponding frames from the acknowledged packet are retrieved. If a
frame is of type STREAM, we compare the frame STREAM offset with the
last offset known of the qcs instance.
The comparison was incomplete as it did not handle an acked offset smaller
than the known offset. Previously, the acked frame was incorrectly
buffered in the qcs.tx.acked_frms. On reception of future ACKs, when
trying to process the buffered acks via qcs_try_to_consume, the loop is
interrupted on the smallest offset different from the qcs known offset:
in this case it will be the previous smaller range. This is a real bug
as it prevents all buffered ACKs from being processed, eventually filling
the qcs sending buffer and causing the transfer to stall.
Fix this by properly handling smaller acked offsets. First check
if the offset plus length is greater than the qcs offset and mark the
difference as acknowledged on the qcs. If not, the frame is not
buffered and simply ignored.
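The decision may be sketched as follows (field and helper names are
hypothetical):

    if (offset + len <= qcs->tx.ack_offset) {
            /* range fully acknowledged already: simply ignore the frame */
    }
    else if (offset <= qcs->tx.ack_offset) {
            /* partially new: only acknowledge the difference */
            qcs_ack(qcs, offset + len - qcs->tx.ack_offset);
    }
    else {
            /* out-of-order: buffer the frame in qcs->tx.acked_frms */
    }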
As reported by Coverity in issue #1568, a missing initialization of the
error message pointer in parse_new_proxy() may result in displaying garbage
or crashing in case of memory allocation error when trying to create a new
proxy on startup.
This should be backported to 2.4.
Since recent changes related to the conn-stream/stream-interface
refactoring, GCC reports potential null pointer dereferences when we get the
appctx, the stream or the stream-interface from the conn-stream. Of course,
depending on the time, these entities may be null. But at many places, we
know they are defined and it is safe to get them without any check. Thus, we
use the ALREADY_CHECKED() macro to silence these warnings.
Note that the refactoring is unfinished, so it is not a real issue for now.
In the same way a stream always has valid conn-streams, when a health-check
is created, a conn-stream is now created and the health-check is attached to
it, as an app. This simplifies a bit the connect part when a health-check is
running.
cs_detach_app() function is added to detach an app from a conn-stream. And
now, both cs_detach_app() and cs_detach_endp() release the conn-stream when
both the app and the endpoint are detached.
Thanks to all previous changes, it is now possible to move the
stream-interface into the conn-stream. To do so, some SI functions are
removed and their conn-stream counterparts are added. In addition, the
conn-stream is now responsible to create and release the
stream-interface. While the stream-interfaces were inlined in the stream
structure, there is now a pointer in the conn-stream. stream-interfaces are
now dynamically allocated. Thus a dedicated pool is added. It is a temporary
change because, at the end, the stream-interface structure will most
probably disappear.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the sink part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the tcp-act part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the httpclient part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the http-act part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the dns part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the cache part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the hlua part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the debug part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the peers part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the proxy part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the frontend part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the log part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the cli part.
To be able to move the stream-interface from the stream to the conn-stream, all
access to the SI is done via the conn-stream. This patch is limited to the
http-ana part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the stream part.
To be able to move the stream-interface from the stream to the conn-stream,
all access to the SI is done via the conn-stream. This patch is limited to
the backend part.
frontend and backend conn-streams are now directly accessible from the
stream. This way, and with some other changes, it will be possible to remove
the stream-interfaces from the stream structure.
In the same way the conn-stream has a pointer to the stream endpoint, this
patch adds a pointer to the application entity in the conn-stream
structure. For now, it is a stream or a health-check. It is mandatory to
merge the stream-interface with the conn-stream.
Because appctx is now an endpoint of the conn-stream, there is no reason to
still have the stream-interface as appctx owner. Thus, the conn-stream is
now the appctx owner.
Thanks to previous changes, it is now possible to set an appctx as endpoint
for a conn-stream. This means the appctx is no longer linked to the
stream-interface but to the conn-stream. Thus, a pointer to the conn-stream
is explicitly stored in the stream-interface. The endpoint (connection or
appctx) can be retrieved via the conn-stream.
To be able to handle applets as a conn-stream endpoint, we must be prepared
to handle different types of endpoints. First of all, the conn-stream's
connection must no longer be used directly.
Because the backend conn-stream is no longer released during connection
retry and because it is valid to have conn-stream with no connection, it is
possible to allocate it when the stream is created. This means, from now on,
a stream always has valid frontend and backend conn-streams. It is the first
step to merge the SI and the CS.
The backend conn-stream is no longer released on connection retry. This
means the conn-stream is detached from the underlying connection but not
released. Thus, during connection retries, the stream always has an
allocated conn-stream with no connection. All previous changes were made to
make this possible.
Note that .attach() mux callback function was changed to get the conn-stream
as argument. The muxes are no longer responsible to create the conn-stream
when a server connection is attached to a stream.
In the same way as the previous commit, when a stream is created, the appctx
case is now handled before the conn-stream one. The purpose of this change
is to limit bugs during the SI/CS refactoring.
The conn-stream will progressively replace the stream-interface. Thus, a
stream will have to allocate the backend conn-stream during its
creation. This means it will be possible to have a conn-stream with no
connection. To prepare this change, we test the conn-stream's connection
when we retrieve it.
The 9 currently available debugging options may now be checked, set, or
cleared using -dM. The directive now takes a comma-delimited list of
options after the optional poisoning byte. With "help", the list of
available options is displayed with a short help and their current
status.
The management doc was updated.
New function pool_parse_debugging() is now dedicated to parsing options
of -dM. For now it only handles the optional memory poisoning byte, but
the function may already return an informative message to be printed for
help, a warning or an error. This way we'll reuse it for the settings
that will be needed for configurable debugging options.
The argument parser runs too late, we'll soon need it before creating
pools, hence just after init_early(). No visible change is expected but
this part is sensitive enough to be placed into its own commit for easier
bisection later if needed.
The cmdline argument parsing was performed quite late, which prevents
from retrieving elements that can be used to initialize the pools and
certain sensitive areas. The goal is to improve this by parsing command
line arguments right after the early init stage. This is possible
because the cmdline parser already does very little beyond retrieving
config elements that are used later.
Doing so requires to move the parser code to a separate function and
to externalize a few variables out of the function as they're used
later in the boot process, in the original function.
This patch creates init_args() but doesn't move it upfront yet, it's
still executed just before init(), which essentially corresponds to
what was done before (only the trash buffers, ACLs and Lua were
initialized earlier and are not needed for this).
The rest is not modified and as expected no change is observed.
Note that the diff doesn't do justice to the change as it makes it
look like the early init() code was moved to a new function after
the function was renamed, while in fact it's clearly the parser
itself which moved.
There are some delicate chicken-and-egg situations in the initialization
code, because the init() function currently does way too much (it goes
as far as parsing the config) and due to this it must be started very
late. But it's also in charge of initializing a number of variables that
are needed in early boot (e.g. hostname/pid for error reporting, or
entropy for random generators).
This patch carefully extracts all the early code that depends on
absolutely nothing, and places it immediately after the STG_LOCK init
stage. The only possible failures at this stage are allocation
errors and they continue to provoke an immediate exit().
Some environment variables, hostname, date, pid etc are retrieved at
this stage. The program's arguments are also copied there since they're
needed to be kept intact for the master process.
The STG_REGISTER init level is used to register known keywords and
protocol stacks. It must be called earlier because some of the init
code already relies on it to be known. For example, "haproxy -vv"
for now is constrained to start very late only because of this.
This patch moves it between STG_LOCK and STG_ALLOC, which is fine as
it's used for static registration.
Now -dM will set POOL_DBG_POISON for consistency with the rest of the
pool debugging options. As such now we only check for the new flag,
which allows the default value to be preset.
This option used to allow storing a marker at the end of the area, which
was used as a canary to detect wrong freeing while the object
is used, and as a pointer to the last pool_free() caller when back in cache.
Now that we can compute the offsets at runtime, let's check it at run time
and continue the code simplification.
This option used to allow storing a pointer to the caller of the last
pool_alloc() or pool_free() at the end of the area. Now that we can
compute the offsets at runtime, let's check it at run time and continue
the code simplification. In __pool_alloc() we now always calculate the
return address (which is quite cheap), and the POOL_DEBUG_TRACE_CALLER()
calls are conditioned on the status of the debugging option.
This macro is build-time dependent and is almost unused, except where it
cannot easily be avoided. Now that we store the distinction between
pool->size and pool->alloc_sz, we don't need to maintain it and we
can instead compute it on the fly when creating a pool. This is what
this patch does. The variables are for now pretty static, but this is
sufficient to kill the macro and will allow to set them more dynamically.
The allocated size is the visible size plus the extra storage. Since
for now we can store up to two extra elements (mark and tracer), it's
convenient because now we know that the mark is always stored at
->size, and the tracer is always before ->alloc_sz.
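The resulting layout can be sketched like this (illustrative only, the
flag names are assumptions):

    /*  [ visible area (pool->size) ][ mark ][ tracer ]
     *                               ^               ^
     *                            ->size       ->alloc_sz
     */
    pool->alloc_sz = pool->size
                   + (mark_enabled   ? sizeof(void *) : 0)
                   + (tracer_enabled ? sizeof(void *) : 0);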
Like previous patches, this replaces the build-time code paths that were
conditioned by CONFIG_HAP_POOLS with runtime paths conditioned by
!POOL_DBG_NO_CACHE. One trivial test had to be added in the hot path in
__pool_alloc() to refrain from calling pool_get_from_cache(), and another
one in __pool_free() to avoid calling pool_put_to_cache().
All cache-specific functions were instrumented with a BUG_ON() to make
sure we never call them with cache disabled. Additionally the cache[]
array was not initialized (remains NULL) so that we can later drop it
if not needed. It's particularly huge and should be turned to dynamic
with a pointer to a per-thread area where all the objects are located.
This will solve the memory usage issue and will improve locality, or
even help better deal with NUMA machines once each thread uses its own
arena.
There were very few functions left that were specific to global pools,
and even the checks they used to participate to are not directly on the
most critical path so they can suffer an extra "if".
What's done now is that pool_releasable() always returns 0 when global
pools are disabled (like the one before) so that pool_evict_last_items()
never tries to place evicted objects there. As such there will never be
any object in the free list. However pool_refill_local_from_shared() is
bypassed when global pools are disabled so that we even avoid the atomic
loads from this function.
The default global setting is still adjusted based on the original
CONFIG_NO_GLOBAL_POOLS that is set depending on threads and the allocator.
The global executable only grew by 1.1kB by keeping this code enabled,
and the code is simplified and will later support runtime options.
The test to decide whether or not to enforce integrity checks on cached
objects is now enabled at runtime and conditioned by this new debugging
flag. While previously it was not a concern to inflate the code size by
keeping the two functions static, they were moved to pool.c to limit the
impact. In pool_get_from_cache(), the fast code path remains fast by
having both flags tested at once to open a slower branch when either
POOL_DBG_COLD_FIRST or POOL_DBG_INTEGRITY are set.
When enabling pools integrity checks, we usually prefer to allocate cold
objects first in order to maximize the time the objects spend in the
cache. In order to make this configurable at runtime, let's introduce
a new debugging flag to control this allocation order. It is currently
preset by the DEBUG_POOL_INTEGRITY build-time setting.
This test used to appear at a single location in create_pool() to
enable a check on the pool name or unconditionally merge similarly
sized pools.
This patch introduces POOL_DBG_DONT_MERGE and conditions the test on
this new runtime flag, that is preset according to the aforementioned
debugging option.
The fail-alloc test used to be enabled/disabled at build time using
the DEBUG_FAIL_ALLOC macro, but it happens that the cost of the test
is quite cheap and that it can be enabled as one of the pool_debugging
options.
This patch thus introduces the first POOL_DBG_FAIL_ALLOC option, whose
default value depends on DEBUG_FAIL_ALLOC. The mem_should_fail() function
is now always built, but it was made static since it's never used outside.
This read-mostly variable will be used at runtime to enable/disable
certain pool-debugging features and will be set by the command-line
parser. A future option -dP will take a number of debugging features
as arguments to configure this variable's contents.
The poisoning performed on pool_free() used to help a little bit with
use-after-free detection, but usually did more harm than good in that
it was never possible to perform post-mortem analysis on released
objects once poisoning was enabled on allocation. Now that there is
a dedicated DEBUG_POOL_INTEGRITY, let's get rid of this annoyance
which is not even documented in the management manual.
There's no point keeping the vars_init_head() call in init() when we
already have a vars_init() registered at the right time to do that,
and it complexifies the boot sequence, so let's move it there.
Let's not use a trash there anymore. The function is called at very
early boot (for "haproxy -vv"), and the need for a trash prevents the
arguments from being parsed earlier. Moreover, the function only uses
a FILE* on output with fprintf(), so there's not even any benefit in
using chunk_printf() on an intermediary variable, emitting the output
directly is both clearer and safer.
REGISTER is meant to only assemble static lists, not to initialize
code that may depend on some elements possibly initialized at this
level. For example the init code currently looks up transport protocols
such as XPRT_RAW and XPRT_SSL which ought themselves to be registered
at the REGISTER stage, and which currently work only because they're
still registered directly from a constructor. INIT is perfectly suited
for this level.
Add the ability to set a "server timeout" on the httpclient with either
the httpclient_set_timeout() API or the timeout argument in a request.
Issue #1470.
In process_stream(), we force the response buffer allocation before any
processing to be able to return an error message. It is important because,
when an error is triggered, the stream is immediately closed. Thus we cannot
wait for the response buffer allocation.
When the allocation fails, the stream analysis is stopped and the expiration
date of the stream's task is updated before exiting process_stream(). But if
the stream was woken up because of a connection or an analysis timeout, the
expiration date remains blocked in the past. This means the stream is woken
up in loop as long as the response buffer is not properly allocated.
Alone, this behavior is already a bug. But because the mechanism to handle
buffer allocation failures has been totally broken for a while, this bug
becomes more problematic. Most of the time, the watchdog will kill HAProxy
in this case because it will detect a spinning loop.
To fix it, at least temporarily, an allocation failure at this stage is now
reported as an error and the processing is aborted. It's not satisfying but
it is better than nothing. If the buffers allocation mechanism is
refactored, this part will be reviewed.
This patch must be backported, probably as far as 2.0. It may be perceived
as a regression, but the actual behavior is probably even worse. And
because it was not reported, it is probably not a common situation.
The mem_poison_byte, mem_fail_rate, using_default_allocator and the
pools list are all only set once at boot time and never changed later,
while they're heavily used at run time. Let's optimize their usage from
all threads by marking them read-mostly so that they reside in a shared
cache line.
The recent change was not complete.
d1c76f24fd
MINOR: quic: do not modify offset node if quic_rx_strm_frm in tree
The frame length and data pointer should be incremented after the data
copy. A BUG_ON statement has been added to detect an incorrect decrement
operation.
Some variables were only checked via BUG_ON macro. If compiling without
DEBUG_STRICT, this instruction is a noop. Fix this by using an explicit
condition + ABORT_NOW.
This should fix the github issue #1549.
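For example, with a hypothetical <frm> variable:

    /* before: compiled out when DEBUG_STRICT is not set */
    BUG_ON(!frm);

    /* after: always enforced */
    if (!frm)
            ABORT_NOW();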
qc_rx_strm_frm_cpy is unsafe because it updates the offset field of the
frame. This is not safe as the frame is inserted in the tree when
calling this function and offset serves as the key node.
To fix this, the API is modified so that qc_rx_strm_frm_cpy does not
update the frame parameter. The caller is responsible to update
offset/length in case of a partial copy.
The impact of this bug is not known. It can only happened with received
STREAM frames out-of-order. This might be triggered with large h3 POST
requests.
In si_cs_recv(), the mux must never set CS_FL_WANT_ROOM flag on the
conn-stream if the input buffer is empty and nothing was copied. It is
important because, there is nothing the app layer can do in this case to
make some room. If this happens, this will most probably lead to a ping-pong
loop between the mux and the stream.
With this BUG_ON(), it will be easier to spot such bugs.
If a parsing error is detected and the corresponding HTX flag is set
(HTX_FL_PARSING_ERROR), we must be sure to always report it to the app
layer. It is especially important when the error occurs during the response
parsing, on the server side. In this case, the RX buffer contains an empty
HTX message to carry the flag. And it remains in this state till the info is
reported to the app layer. This must be done otherwise, on the conn-stream,
the CS_FL_ERR_PENDING flag cannot be switched to CS_FL_ERROR and the
CS_FL_WANT_ROOM flag is always set when h2_rcv_buf() is called. The result
is a ping-pong loop between the mux and the stream.
Note that this patch fixes a bug. But it also reveals a design issue. The
error must not be reported at the HTX level. The error is already carried by
the conn-stream. There is no reason to duplicate it. In addition, it is
error-prone to have an empty HTX message only to report the error to the app
layer.
This patch should fix the issue #1561. It must be backported as far as 2.0
but the bug only affects HAProxy >= 2.4.
After sending some data, we try to wake the H1 stream to resume data
processing at the stream level, except if the output buffer is still
full. However we must also be sure the mux is not blocked because of an
allocation failure on this buffer. Otherwise, it may lead to a ping-pong
loop between the stream and the mux to send more data with an unallocated
output buffer.
Note there is a mechanism to queue buffers allocations when a failure
happens. However this mechanism has been totally broken since the filters
were introduced in HAProxy 1.7. And it is worse now with the multiplexers. So
this patch fixes a possible loop needlessly consuming all the CPU. But
buffer allocation failures must remain pretty rare.
This patch must be backported as far as 2.0.
The qcc instance must be tested because a previous test implies that it
may be NULL. In this case, qc_timeout_task can be stopped.
This should fix github issue #1559.
A bug was uncovered by commit fc5912914 ("MINOR: httpclient: Don't limit
data transfer to 1024 bytes"), it happens that callers of b_xfer() and
b_force_xfer() are expected to check for available room in the target
buffer. Previously it was unlikely to be full but now with full buffer-
sized transfers, it happens more often and in practice it is possible
to crash the process with the debug command "httpclient" on the CLI by
going beyond the max buffer size. Other call places ought to be
rechecked by now and it might be time to rethink this API if it tends
to generalize.
This must be backported to 2.5.
The url2sa implementation is inconsistent when parsing an IPv4. Indeed
url2sa() takes a <ulen> as a parameter where the call to url2ipv4() takes
a null-terminated string. This means url2ipv4() could try to read more
than it is supposed to.
This function is only used from a buffer so it never reaches unallocated
space. It can only cause an issue when used from the httpclient which
uses it with an ist.
This patch fixes the issue by copying everything into the trash and
null-terminating it.
Must be backported to all supported versions.
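The fix may be sketched like this, assuming <url> and <ulen> as passed
to url2sa():

    struct buffer *trash = get_trash_chunk();

    memcpy(trash->area, url, ulen);
    trash->area[ulen] = 0;   /* url2ipv4() expects a terminated string */
    /* then parse trash->area instead of the caller's <url> */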
When calling ssl_ocsp_response_print which is used to display an OCSP
response's details when calling the "show ssl ocsp-response" on the CLI,
we use the BIO_read function that copies an OpenSSL BIO into a trash.
The return value was not checked though, which could lead to some
crashes since BIO_read can return a negative value in case of error.
This patch should be backported to 2.5.
When calling the "show ssl ocsp-response" CLI command some OpenSSL
objects need to be created in order to get some information related to
the OCSP response and some of them were not freed.
It should be backported to 2.5.
The b_istput function called to append the last data block to the end of
an OCSP response's detailed output was not checked in
ssl_ocsp_response_print. The ssl_ocsp_response_print return value checks
were added as well since some of them were missing.
This error was raised by Coverity (CID 1469513).
This patch fixes GitHub issue #1541.
It can be backported to 2.5.
The 'dst' optional field on a httpclient request can be used to set an
alternative server address in the haproxy address format. This means it
could be used with unix@, ipv6@ etc.
Should fix issue #1471.
When starting for the 2nd time a request from the same httpclient *hc
context, the flags are not reinitialized and the httpclient will stop
after the first call to the IO handler, because the END flag is always
present.
This patch also adds a test before httpclient_start() to ensure we don't
start a client already started.
Must be backported in 2.5.
The idle connection delay calculation before a request is a bit tricky,
especially for multiplexed protocols. It changed between 2.3 and 2.4 by
the integration of the idle delay inside the session itself with these
commits:
dd78921c6 ("MINOR: logs: Use session idle duration when no stream is provided")
7a6c51324 ("MINOR: stream: Always get idle duration from the session")
and by then it was only set by the H1 mux. But over multiple changes, what
used to be a zero idle delay + a request delay for H2 became a bit odd, with
the idle time slipping into the request time measurement. The effect is that,
as reported in GH issue #1395, some H2 request times look huge.
This patch introduces the calculation of the session's idle time on the
H2 mux before creating the stream. This is made possible because the
stream_new() code immediately copies this value into the stream for use
at log time. Thus we don't care about changing something that will be
touched by every single request. The idle time is calculated as documented,
i.e. the delay from the previous request to the current one. This also
means that when a single stream is present on a connection, a part of
the server's response time may appear in the %Ti measurement, but this
reflects the reality since nothing would prevent the client from using
the connection to fetch more objects. In addition this shows how long
it takes a client to find references to objects in an HTML page and
start to fetch them.
A different approach could have consisted in counting from the last time
the connection was left without any request (i.e. really idle), but this
would at least require a documentation change and it's not certain this
would provide a more useful information.
Thanks to Bart Butler and Luke Seelenbinder for reporting enough elements
to diagnose this issue.
This should be backported to 2.4.
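A sketch of the calculation performed before creating the stream,
assuming it matches what the stream code already does with these
session fields:

    /* idle time = time since accept minus the handshake duration */
    sess->t_idle = tv_ms_elapsed(&sess->tv_accept, &now) - sess->t_handshake;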
Sadly, despite particular care, commit 39a0a1e12 ("MEDIUM: h2/hpack: emit
a Dynamic Table Size Update after settings change") broke H2 when sending
DTSU. A missing negation on the flag caused the DTSU_EMITTED flag to be
lost and the DTSU to be sent again on the next stream, and possibly to
break flow control or a few other internal states.
This will have to be backported wherever the patch above was backported.
Thanks to Yves Lafon for notifying us with elements to reproduce the
issue!
There's a bug in spoe_release_appctx() which checks the presence of items
in the wrong list rt[tid].agents to run over rt[tid].waiting_queue and
zero their spoe_appctx. The effect is that these contexts are not zeroed
and if spoe_stop_processing() is called, "sa->cur_fpa--" will be applied
to one of these recently freed contexts and will corrupt random memory
locations, as found at least in bugs #1494 and #1525.
This must be backported to all stable versions.
Many thanks to Christian Ruppert from Babiel for exchanging so many
useful traces over the last two months, testing debugging code and
helping set up a similar environment to reproduce it!
Ensure calls to http_find_header() terminate. If a "Set-Cookie2"
header is found then the while(1) loop in
http_manage_server_side_cookies() will never terminate, resulting in
the watchdog firing and the process terminating via SIGABRT.
The while(1) loop becomes unbounded because an unmatched call to
http_find_header("Set-Cookie") will leave ctx->blk=NULL. Subsequent
calls to check for "Set-Cookie2" will now enumerate from the beginning
of all the blocks and will once again match on subsequent
passes (assuming a match first time around), hence the loop becoming
unbounded.
This issue was introduced with HTX and this fix should be backported
to all versions supporting HTX.
Many thanks to Grant Spence (gspence@redhat.com) for working through
this issue with me.
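The unbounded pattern may be sketched like this (simplified):

    struct http_hdr_ctx ctx = { .blk = NULL };

    while (1) {
            /* when this lookup fails it leaves ctx.blk = NULL ... */
            if (!http_find_header(htx, ist("Set-Cookie"), &ctx, 1) &&
                /* ... so this one restarts from the first block and can
                 * rematch the same "Set-Cookie2" header forever */
                !http_find_header(htx, ist("Set-Cookie2"), &ctx, 1))
                    break;
            /* handle the matched header here */
    }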
ists are not ended by '\0', leading to junk characters being displayed
when using %s to print the HTTP start line.
Fix the issue by replacing %s by %.*s + istlen.
Must be backported in 2.5.
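For example, with a hypothetical <line> ist:

    /* an ist carries a pointer and a length but no trailing '\0' */
    printf("%.*s\n", (int)istlen(line), istptr(line));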
If the same filename was specified in multiple calls of the jwt_verify
converter, we would have parsed the contents of the file every time it
was used instead of checking if the entry already existed in the tree.
This led to memory leaks because we would not insert the duplicated
entry and we would not free it (as well as the EVP_PKEY it referenced).
We now check the return value of ebst_insert and free the current entry
if it is a duplicate of an existing entry.
The order in which the tree insert and the pkey parsing happen was also
switched in order to avoid parsing key files in case of duplicates.
Should be backported to 2.5.
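The duplicate check may be sketched like this (the entry layout is
hypothetical):

    struct ebmb_node *node = ebst_insert(&jwt_cert_tree, &entry->node);

    if (node != &entry->node) {
            /* an entry for this file already exists: drop ours */
            EVP_PKEY_free(entry->pkey);
            free(entry);
    }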
When emptying the jwt_cert_tree during deinit, the entries are freed but
not the EVP_PKEY reference they kept, leading to a memory leak.
Should be backported in 2.5.
The node pointer was not moving properly along the jwt_cert_tree during
the deinit which ended in a double free during cleanup (or when checking
a configuration that used the jwt_verify converter with an explicit
certificate specified).
This patch fixes GitHub issue #1533.
It should be backported to 2.5.
Inspect return code of HEADERS/DATA parsing functions and use a BUG_ON
to signal an error. The stream should be closed to handle the error
in a cleaner fashion.
This is the same fix as for this commit:
"BUILD: tree-wide: avoid warnings caused by redundant checks of obj_types"
Should fix CID 1469649 for GH #1546
Remove this server specific code section. It is useless and not tested.
Furthermore this is really not the right place to retrieve the peer
transport parameters.
Add a new function h3_data_to_htx. This function is used to parse a H3
DATA frame and copy it in the mux stream HTX buffer. This is required to
support HTTP POST data.
Note that partial transfers when the HTX buffer is full are not properly
handled. This causes large DATA transfers to fail at the moment.
Move the HEADERS parsing code outside of generic h3_decode_qcs to a new
dedicated function h3_headers_to_htx. The benefit will be visible when
other H3 frames parsing will be implemented such as DATA.
Flags EOI/EOS must be set on the conn-stream when transferring the last data
of a stream in rcv_buf. This is activated if the qcs HTX buffer has the EOM
flag and has been fully transferred.
Implement the stream rcv_buf operation on QUIC mux.
A new buffer is stored in the qcs structure named app_buf. This new buffer
will contain HTX and will be filled for example on H3 DATA frame
parsing.
The rcv_buf operation transfers as much data as possible from the HTX
in app_buf to the conn-stream buffer. This is mainly identical to
mux-h2. This is required to support HTTP POST data.
Adjust the method to detect that a H3 HEADERS frame is the last one of
the stream. If this is true, the flags EOM and BODYLESS must be set on
the HTX message.
Pass the H3 frame length to QPACK decoding instead of the length of the
whole buffer.
Without this fix, if there is multiple H3 frames starting with a
HEADERS, QPACK decoding will be erroneously applied over all of them,
most probably leading to a decoding error.
If the last frame is not entirely copied and must be buffered, FIN
must not be signaled to the upper layer.
This might fix a rare bug which could cause the request channel to be
closed too early leading to an incomplete request.
If a CONNECTION_CLOSE is received during handshake or after mux release,
a segfault happens due to an invalid dereference of qc->qcc. Check
mux_state first to prevent this.
Move the QUIC datagram handlers outside of the receivers. Use a global
handler per-thread which is allocated on post-config. Implement a free
function on process deinit to avoid a memory leak.
Since the relaxation of the run-queue locks in 2.0 there has been a
very small but existing race between expired tasks and running tasks:
a task might be expiring and being woken up at the same time, on
different threads. This is protected against via the TASK_QUEUED and
TASK_RUNNING flags, but just after the task finishes executing, it
releases its TASK_RUNNING bit and only then it may go to task_queue().
This one will do nothing if the task's ->expire field is zero, but
if the field turns to zero between this test and the call to
__task_queue() then three things may happen:
- the task may remain in the WQ for the next 24 days if it's in
the future;
- the task may prevent any other task after it from expiring during
the next 24 days once it's queued
- if DEBUG_STRICT is set on 2.4 and above, an abort may happen
- since 2.2, if the task got killed in between, then we may
even requeue a freed task, causing random behaviour next time
it's found there, or possibly corrupting the tree if it gets
reinserted later.
The peers code is one call path that easily reproduces the case with
the ->expire field being reset, because it starts by setting it to
TICK_ETERNITY as the first thing when entering the task handler. But
other code parts also use multi-threaded tasks and rightfully expect
to be able to touch their expire field without causing trouble. No
trivial code path was found that would destroy such a shared task at
runtime, which already limits the risks.
This must be backported to 2.0.
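The window can be sketched as follows (simplified):

    /* thread A, after the handler released TASK_RUNNING,
     * inside task_queue(): */
    if (!t->expire)
            return;
    /* <-- thread B (e.g. the peers handler) may reset t->expire
     *     to TICK_ETERNITY exactly here */
    __task_queue(t, &timers);   /* queues a task whose expire is 0 */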
Along recent evolutions of the pools, we've lost the ability to reliably
detect double-frees because while in the past the same pointer was being
used to chain the objects in the cache and to store the pool's address,
since 2.0 they're different so the pool's address is never overwritten on
free() and a double-free will rarely be detected.
This patch sets the caller's return address there. It can never be equal
to a pool's address and will help guess what was the previous call path.
It will not work on exotic architectures nor with very old compilers but
these are not the environments where we're trying to get detailed bug
reports, and this is not done by default anyway so we don't care about
this limitation. Note that depending on the inlining status of the
function, the result may differ but that's no big deal either.
A test by placing a double free of an appctx inside the release handler
itself successfully reported the trouble during appctx_free() and showed
that the return address was in stream_int_shutw_applet() (this one calls
the release handler).
During global eviction we're visiting nodes from the LRU tail and we
determine their pool cache head and their pool. In order to make sure
we never mess up, let's add some backwards pointer to the thread number
and pool from the pool_cache_head. It's 64-byte aligned anyway so we're
not wasting space and it helps for debugging and will prevent memory
corruption as early as possible.
When refilling caches from the shared cache, it's pointless to set the
pointer to the local pool since it may be overwritten immediately after
by the LIST_INSERT(). This is a leftover from the pre-2.4 code in fact.
It didn't hurt, though.
When destroying a pool (e.g. at exit or when resizing buffers), it's
important to try to free all their local objects otherwise we can leave
some in the cache. This is particularly visible when changing "bufsize",
because "show pools" will then show two "trash" pools, one of which
contains a single object in cache (which is fortunately not reachable).
In all cases this happens while single-threaded so that's easy to do,
we just have to do it on the current thread.
The easiest way to do this is to pass an extra argument to function
pool_evict_from_local_cache() to force a full flush instead of a
partial one.
This can probably be backported to about all branches where this
applies, but at least 2.4 needs it.
With the introduction of DEBUG_POOL_TRACING in 2.6-dev with commit
add43fa43 ("DEBUG: pools: add new build option DEBUG_POOL_TRACING"), small
pools might be too short to store both the pool_cache_item struct and the
caller location, resulting in memory corruption and crashes when this debug
option is used.
What happens here is that the way the size is calculated is by considering
that the POOL_EXTRA part is only used while the object is in use, but this
is not true anymore for the caller's pointer which must absolutely be placed
after the pool_cache_item.
This patch makes sure that the caller part will always start after the
pool_cache_item and that the allocation will always be sufficient. This is
only tagged medium because the debug option is new and unlikely to be used
unless requested by a developer.
No backport is needed.
This should fix Coverity CID 375047 in GH #1536 where <buf_area> could leak
because it is not always freed by quic_conn_drop(), especially when not
stored in the <qc> variable.
The SSL_CTX_set_tmp_dh_callback function was marked as deprecated in
OpenSSLv3 so this patch replaces this callback mechanism by a direct set
of DH parameters during init.
DH structure is a low-level one that should not be used anymore with
OpenSSLv3. All functions working on DH were marked as deprecated and
this patch replaces the ones we used with new APIs recommended in
OpenSSLv3, be it in the migration guide or the multiple new manpages
they created.
This patch replaces all mentions of the DH type by the HASSL_DH one,
which will be replaced by EVP_PKEY with OpenSSLv3 and will remain DH on
older versions. It also uses all the newly created helper functions that
enable for instance to load DH parameters from a file into an EVP_PKEY,
or to set DH parameters into an SSL_CTX for use in a DHE negotiation.
The following deprecated functions will effectively disappear when
building with OpenSSLv3 : DH_set0_pqg, PEM_read_bio_DHparams, DH_new,
DH_free, DH_up_ref, SSL_CTX_set_tmp_dh.
Starting from OpenSSLv3, we won't rely on the
SSL_CTX_set_tmp_dh_callback mechanism so we will need to know the DH
size we want to use during init. In order for the default DH param size
to be used when no RSA or DSA private key can be found for a given bind
line, we will need to know the default size we want to use (which was
not possible the way the code was built, since the global default dh
size was set too late).
The current way the local DH structures are built relies on the fact
that the ssl_get_tmp_dh function would only be called as a callback
during a DHE negotiation, so after all the SSL contexts are built and
the init is over. With OpenSSLv3, this function will now be called
during init, so before those objects are actually built.
This patch ensures that when calling ssl_get_tmp_dh and trying to use
one of our hard-coded DH parameters, it will be created if it did not
exist yet.
The current DH parameter creation is also kept so that with versions
before OpenSSLv3 we don't end up creating this DH object during a
handshake.
Starting from OpenSSLv3, the DH_set0_pqg function is deprecated and the
use of DH objects directly is advised against so this new helper
function will be used to convert our hard-coded DH parameters into an
EVP_PKEY. It relies on the new OSSL_PARAM mechanism, as described in the
EVP_PKEY-DH manpage.
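The conversion may be sketched as follows, per the EVP_PKEY-DH manpage
(the <pbuf>/<gbuf> buffers holding the hard-coded numbers are
hypothetical):

    #include <openssl/core_names.h>
    #include <openssl/evp.h>

    OSSL_PARAM params[3];
    EVP_PKEY *pkey = NULL;
    EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_from_name(NULL, "DH", NULL);

    params[0] = OSSL_PARAM_construct_BN(OSSL_PKEY_PARAM_FFC_P, pbuf, plen);
    params[1] = OSSL_PARAM_construct_BN(OSSL_PKEY_PARAM_FFC_G, gbuf, glen);
    params[2] = OSSL_PARAM_construct_end();

    if (pctx && EVP_PKEY_fromdata_init(pctx) > 0)
            EVP_PKEY_fromdata(pctx, &pkey, EVP_PKEY_KEY_PARAMETERS, params);
    EVP_PKEY_CTX_free(pctx);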
This helper function will only be used with OpenSSLv3. It simply sets in
an SSL_CTX a set of DH parameters of the same size as a certificate's
private key. This logic is the same as the one used with older versions,
it simply relies on new APIs.
If no pkey can be found the SSL_CTX_set_dh_auto function will be called,
making the SSL_CTX rely on DH parameters provided by OpenSSL in case of
DHE negotiation.
Starting from OpenSSLv3, the SSL_CTX_set_tmp_dh function is deprecated
and it should be replaced by SSL_CTX_set0_tmp_dh_pkey, which takes an
EVP_PKEY instead of a DH parameter. Since this function is new to
OpenSSLv3 and its use requires an extra EVP_PKEY_up_ref call, we will
keep the two versions side by side, otherwise it would require to get
rid of all DH references in older OpenSSL versions as well.
This helper function is not used yet so this commit should be strictly
iso-functional, regardless of the OpenSSL version.
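A sketch of the v3 path; ownership of the EVP_PKEY is transferred on
success, hence the extra up-ref when we keep our own reference:

    EVP_PKEY_up_ref(pkey);
    if (!SSL_CTX_set0_tmp_dh_pkey(ctx, pkey))
            EVP_PKEY_free(pkey);   /* not consumed on failure */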
In the upcoming OpenSSLv3 specific patches, we will make use of the
newly created ssl_get_tmp_dh that returns an EVP_PKEY containing DH
parameters of the same size as a bind line's RSA or DSA private key.
The previously named ssl_get_tmp_dh function was renamed
ssl_get_tmp_dh_cbk because it is only used as a callback passed to
OpenSSL through SSL_CTX_set_tmp_dh_callback calls.
This new function makes use of the new OpenSSLv3 APIs that should be
used to load DH parameters from a file (or a BIO in this case) and that
should replace the deprecated PEM_read_bio_DHparams function.
Note that this function returns an EVP_PKEY when using OpenSSLv3 since
they now advise against using low level structures such as DH ones.
This helper function is not used yet so this commit should be strictly
iso-functional, regardless of the OpenSSL version.
ERR_func_error_string does not return anything anymore with OpenSSLv3,
it can be replaced by ERR_peek_error_func which did not exist on
previous versions.
When started in master-worker mode combined with daemon mode, HAProxy
will open() with O_TRUNC the pidfile when switching to wait mode.
In 2.5, it happens every time after trying to load the configuration,
since we switch to wait mode.
In previous version this happens upon a failure of the configuration
loading.
Fixes bug #1545.
Must be backported to every supported branch.
Rename quic_conn_to_buf to qc_snd_buf and remove it from xprt ops. This
is done to reflect the true usage of this function which is only a
wrapper around sendto but cannot be called by the upper layer.
qc_snd_buf is moved to quic-sock to mark its link with
quic_sock_fd_iocb which is the recvfrom counterpart.
Rename a local variable tid to cid_tid. This ensures there is no
confusion with the global tid. It is now more explicit that we are
manipulating a quic datagram handler from another thread in
quic_lstnr_dgram_dispatch.
HMAC_Init_ex being a function that acts on a low-level HMAC_CTX
structure was marked as deprecated in OpenSSLv3.
This patch replaces this call by EVP_MAC_CTX_set_params, as advised in
the migration_guide, and uses the new OSSL_PARAM mechanism to configure
the MAC context, as described in the EVP_MAC and EVP_MAC-HMAC manpages.
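A sketch of the replacement, per the EVP_MAC and EVP_MAC-HMAC manpages
(<key>/<keylen> are hypothetical):

    #include <openssl/core_names.h>
    #include <openssl/evp.h>

    EVP_MAC *mac = EVP_MAC_fetch(NULL, "HMAC", NULL);
    EVP_MAC_CTX *hctx = EVP_MAC_CTX_new(mac);
    char digest[] = "SHA256";
    OSSL_PARAM params[2];

    params[0] = OSSL_PARAM_construct_utf8_string(OSSL_MAC_PARAM_DIGEST,
                                                 digest, 0);
    params[1] = OSSL_PARAM_construct_end();
    EVP_MAC_CTX_set_params(hctx, params);
    EVP_MAC_init(hctx, key, keylen, NULL);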
SSL_CTX_set_tlsext_ticket_key_cb was deprecated on OpenSSLv3 because it
uses an HMAC pointer which is deprecated as well. According to the v3's
manpage it should be replaced by SSL_CTX_set_tlsext_ticket_key_evp_cb
which uses a EVP_MAC_CTX pointer.
This new callback was introduced in OpenSSLv3 so we need to keep the two
calls in the source base and to split the usage depending on the OpenSSL
version.
In the context of the 'generate-certificates' bind line option, if an
'ecdhe' option is present on the bind line as well, we use the
SSL_CTX_set_tmp_ecdh function which was marked as deprecated in
OpenSSLv3. As advised in the SSL_CTX_set_tmp_ecdh manpage, this function
should be replaced by the SSL_CTX_set1_groups one (or the
SSL_CTX_set1_curves one in our case which does the same but existed on
older OpenSSL versions as well).
The ECDHE behaviour with OpenSSL 1.0.2 is not the same when using the
SSL_CTX_set1_curves function as the one we have on newer versions.
Instead of looking for a code that would work exactly the same
regardless of the OpenSSL version, we will keep the original code on
1.0.2 and use newer APIs for other versions.
This patch should be strictly isofunctional.
The ecdhe option relies on the SSL_CTX_set_tmp_ecdh function which has
been marked as deprecated in OpenSSLv3. As advised in the
SSL_CTX_set_tmp_ecdh manpage, this function should be replaced by the
SSL_CTX_set1_groups one (or the SSL_CTX_set1_curves one in our case
which does the same but existed on older OpenSSL versions as well).
When using the "curves" option we have a different behaviour with
OpenSSL1.0.2 compared to later versions. On this early version an SSL
backend using a P-256 ECDSA certificate manages to connect to an SSL
frontend having a "curves P-384" option (when it fails with later
versions).
Even if the API used for later version than OpenSSL 1.0.2 already
existed then, for some reason the behaviour is not the same on the older
version which explains why the original code with the deprecated API is
kept for this version (otherwise we would risk breaking everything on a
version that might still be used by some people despite being pretty old).
This patch should be strictly isofunctional.
The sha2 converter's implementation used low level interfaces such as
SHA256_Update which are flagged as deprecated starting from OpenSSLv3.
This patch replaces those calls by EVP ones which already existed on
older versions. It should be fully isofunctional.
There were empty lines in the output of the CLI's "show ssl
ocsp-response <id>" command. The plain "show ssl ocsp-response" command
(without parameter) was already managed in commit
cc750efbc5. This patch adds an extra space
to those lines so that the only existing empty lines actually mark the
end of the output. This requires to post-process the buffer filled by
OpenSSL's OCSP_RESPONSE_print function (which produces the output of the
"openssl ocsp -respin <ocsp.pem>" command). This way the output of our
command still looks the same as openssl's one.
Must be backported in 2.5.
The same datagram could be passed to quic_lstnr_dgram_dispatch() before
being consumed by qc_lstnr_pkt_rcv(), leading to a wrong packet number
decryption, then a decryption error for the data. This was due to
a wrong datagram buffer passed to quic_lstnr_dgram_dispatch(). The datagram data
which must be passed to quic_lstnr_dgram_dispatch() are the same as the one
passed to recvfrom().
For debugging purposes, no more than 1024 bytes were copied at a time. But
there is no
reason to keep this limitation. Thus, it is removed.
This patch may be backported to 2.5.
Since the HTTP legacy mode was removed, it is unexpected to create an HTTP
stream without a valid request. Thanks to this change, the wait_for_request
analyzer was significantly simplified. And it is possible because HTTP
multiplexers already take care to have a valid request to create a stream.
But it means that any HTTP applet on the client side must do the same. The
httpclient client is one of them. And it is not a problem because the
request is generated before starting the applet. We must just take care to
set the right state.
For now it works "by chance", because the applet seems to be scheduled
before the stream itself. But if this changes, it will lead to a crash
because the stream expects to have a request when the wait_for_request
analyzer runs.
This patch should be backported to 2.5.
For now, these buffers are allocated when the httpclient is created and
freed when it is released. Usually, we try to avoid keeping buffers allocated
if they are not required. Empty buffers should be released ASAP. Apart from
that, there is no issue with the response side because a copy is always
performed. However, for the request side, a swap with the channel's buffer
is always performed. And there is no guarantee the channel's buffer is
allocated. Thus, after the swap, the httpclient can retrieve a null
buffer. In practice, this never happens. But this may change. And it will be
required for a future fix.
So, now, we systematically take care to have an allocated buffer when we
want to write in it. And it is released as soon as it becomes empty.
This patch should be backported to 2.5.
"mcli-debug-mode on" enables every command that were meant for a worker,
on the CLI of the master. Which mean you can issue, "show fd", show
stat" in order to debug the MASTER proxy.
You can also combine it with "expert-mode on" or "experimental-mode on"
to access to more commands.
When in expert or experimental mode on the master CLI, and issuing a
command for the master process, all commands are prefixed by
"mode-experimental -" or/and "mode-expert on -", however these commands
were not available in the master applet, so the help was issued for
each one.
Released version 2.6-dev1 with the following main changes :
- BUG/MINOR: cache: Fix loop on cache entries in "show cache"
- BUG/MINOR: httpclient: allow to replace the host header
- BUG/MINOR: lua: don't expose internal proxies
- MEDIUM: mworker: seamless reload use the internal sockpairs
- BUG/MINOR: lua: remove loop initial declarations
- BUG/MINOR: mworker: does not add the -sf in wait mode
- BUG/MEDIUM: mworker: FD leak of the eventpoll in wait mode
- MINOR: quic: do not reject PADDING followed by other frames
- REORG: quic: add comment on rare thread concurrence during CID alloc
- CLEANUP: quic: add comments on CID code
- MEDIUM: quic: handle CIDs to rattach received packets to connection
- MINOR: qpack: support litteral field line with non-huff name
- MINOR: quic: activate QUIC traces at compilation
- MINOR: quic: use more verbose QUIC traces set at compile-time
- MEDIUM: pool: refactor malloc_trim/glibc and jemalloc api addition detections.
- MEDIUM: pool: support purging jemalloc arenas in trim_all_pools()
- BUG/MINOR: mworker: deinit of thread poller was called when not initialized
- BUILD: pools: only detect link-time jemalloc on ELF platforms
- CI: github actions: add the output of $CC -dM -E-
- BUG/MEDIUM: cli: Properly set stream analyzers to process one command at a time
- BUILD: evports: remove a leftover from the dead_fd cleanup
- MINOR: quic: Set "no_application_protocol" alert
- MINOR: quic: More accurate immediately close.
- MINOR: quic: Immediately close if no transport parameters extension found
- MINOR: quic: Rename qc_prep_hdshk_pkts() to qc_prep_pkts()
- MINOR: quic: Possible crash when inspecting the xprt context
- MINOR: quic: Dynamically allocate the secrete keys
- MINOR: quic: Add a function to derive the key update secrets
- MINOR: quic: Add structures to maintain key phase information
- MINOR: quic: Optional header protection key for quic_tls_derive_keys()
- MINOR: quic: Add quic_tls_key_update() function for Key Update
- MINOR: quic: Enable the Key Update process
- MINOR: quic: Delete the ODCIDs asap
- BUG/MINOR: vars: Fix the set-var and unset-var converters
- MEDIUM: pool: Following up on previous pool trimming update.
- BUG/MEDIUM: mux-h1: Fix splicing by properly detecting end of message
- BUG/MINOR: mux-h1: Fix splicing for messages with unknown length
- MINOR: mux-h1: Improve H1 traces by adding info about http parsers
- MINOR: mux-h1: register a stats module
- MINOR: mux-h1: add counters instance to h1c
- MINOR: mux-h1: count open connections/streams on stats
- MINOR: mux-h1: add stat for total count of connections/streams
- MINOR: mux-h1: add stat for total amount of bytes received and sent
- REGTESTS: h1: Add a script to validate H1 splicing support
- BUG/MINOR: server: Don't rely on last default-server to init server SSL context
- BUG/MEDIUM: resolvers: Detach query item on response error
- MEDIUM: resolvers: No longer store query items in a list into the response
- BUG/MAJOR: segfault using multiple log forward sections.
- BUG/MEDIUM: h1: Properly reset h1m flags when headers parsing is restarted
- BUG/MINOR: resolvers: Don't overwrite the error for invalid query domain name
- BUILD: bug: Fix error when compiling with -DDEBUG_STRICT_NOCRASH
- BUG/MEDIUM: sample: Fix memory leak in sample_conv_jwt_member_query
- DOC: spoe: Clarify use of the event directive in spoe-message section
- DOC: config: Specify %Ta is only available in HTTP mode
- BUILD: tree-wide: avoid warnings caused by redundant checks of obj_types
- IMPORT: slz: use the correct CRC32 instruction when running in 32-bit mode
- MINOR: quic: fix segfault on CONNECTION_CLOSE parsing
- MINOR: h3: add BUG_ON on control receive function
- MEDIUM: xprt-quic: finalize app layer initialization after ALPN nego
- MINOR: h3: remove duplicated FIN flag position
- MAJOR: mux-quic: implement a simplified mux version
- MEDIUM: mux-quic: implement release mux operation
- MEDIUM: quic: detect the stream FIN
- MINOR: mux-quic: implement subscribe on stream
- MEDIUM: mux-quic: subscribe on xprt if remaining data after send
- MEDIUM: mux-quic: wake up xprt on data transferred
- MEDIUM: mux-quic: handle when sending buffer is full
- MINOR: quic: RX buffer full due to wrong CRYPTO data handling
- MINOR: quic: Race issue when consuming RX packets buffer
- MINOR: quic: QUIC encryption level RX packets race issue
- MINOR: quic: Delete remaining RX handshake packets
- MINOR: quic: Remove QUIC TX packet length evaluation function
- MINOR: hq-interop: fix tx buffering
- MINOR: mux-quic: remove unneeded code to check fin on TX
- MINOR: quic: add HTX EOM on request end
- BUILD: mux-quic: fix compilation with DEBUG_MEM_STATS
- MINOR: http-rules: Add capture action to http-after-response ruleset
- BUG/MINOR: cli/server: Don't crash when a server is added with a custom id
- MINOR: mux-quic: do not release qcs if there is remaining data to send
- MINOR: quic: notify the mux on CONNECTION_CLOSE
- BUG/MINOR: mux-quic: properly initialize flow control
- MINOR: quic: Compilation fix for quic_rx_packet_refinc()
- MINOR: h3: fix possible invalid dereference on htx parsing
- DOC: config: retry-on list is space-delimited
- DOC: config: fix error-log-format example
- BUG/MEDIUM: mworker/cli: crash when trying to access an old PID in prompt mode
- MINOR: hq-interop: refix tx buffering
- REGTESTS: ssl: use X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY for cert check
- MINOR: cli: "show version" displays the current process version
- CLEANUP: cfgparse: modify preprocessor guards around numa detection code
- MEDIUM: cfgparse: numa detect topology on FreeBSD.
- BUILD: ssl: unbreak the build with newer libressl
- MINOR: vars: Move UPDATEONLY flag test to vars_set_ifexist
- MINOR: vars: Set variable type to ANY upon creation
- MINOR: vars: Delay variable content freeing in var_set function
- MINOR: vars: Parse optional conditions passed to the set-var converter
- MINOR: vars: Parse optional conditions passed to the set-var actions
- MEDIUM: vars: Enable optional conditions to set-var converter and actions
- DOC: vars: Add documentation about the set-var conditions
- REGTESTS: vars: Add new test for conditional set-var
- MINOR: quic: Attach timer task to thread for the connection.
- CLEANUP: quic_frame: Remove a useless suffix to STOP_SENDING
- MINOR: quic: Add traces for STOP_SENDING frame and modify others
- CLEANUP: quic: Remove cdata_len from quic_tx_packet struct
- MINOR: quic: Enable TLS 0-RTT if needed
- MINOR: quic: No TX secret at EARLY_DATA encryption level
- MINOR: quic: Add quic_set_app_ops() function
- MINOR: ssl_sock: Set the QUIC application from ssl_sock_advertise_alpn_protos.
- MINOR: quic: Make xprt support 0-RTT.
- MINOR: qpack: Missing check for truncated QPACK fields
- CLEANUP: quic: Comment fix for qc_strm_cpy()
- MINOR: hq_interop: Stop BUG_ON() truncated streams
- MINOR: quic: Do not mix packet number space and connection flags
- CLEANUP: quic: Shorten a little bit the traces in lstnr_rcv_pkt()
- MINOR: mux-quic: fix trace on stream creation
- CLEANUP: quic: fix spelling mistake in a trace
- CLEANUP: quic: rename quic_conn conn to qc in quic_conn_free
- MINOR: quic: add missing lock on cid tree
- MINOR: quic: rename constant for haproxy CIDs length
- MINOR: quic: refactor concat DCID with address for Initial packets
- MINOR: quic: compare coalesced packets by DCID
- MINOR: quic: refactor DCID lookup
- MINOR: quic: simplify the removal from ODCID tree
- REGTESTS: vars: Remove useless ssl tunes from conditional set-var test
- MINOR: ssl: Remove empty lines from "show ssl ocsp-response" output
- MINOR: quic: Increase the RX buffer for each connection
- MINOR: quic: Add a function to list remaining RX packets by encryption level
- MINOR: quic: Stop emptying the RX buffer asap.
- MINOR: quic: Do not expect to receive only one O-RTT packet
- MINOR: quic: Do not forget STREAM frames received in disorder
- MINOR: quic: Wrong packet refcount handling in qc_pkt_insert()
- DOC: fix misspelled keyword "resolve_retries" in resolvers
- CLEANUP: quic: rename quic_conn instances to qc
- REORG: quic: move mux function outside of xprt
- MINOR: quic: add reference to quic_conn in ssl context
- MINOR: quic: add const qualifier for traces function
- MINOR: trace: add quic_conn argument definition
- MINOR: quic: use quic_conn as argument to traces
- MINOR: quic: add quic_conn instance in traces for qc_new_conn
- MINOR: quic: Add stream IDs to qcs_push_frame() traces
- MINOR: quic: unchecked qc_retrieve_conn_from_cid() returned value
- MINOR: quic: Wrong dropped packet skipping
- MINOR: quic: Handle the cases of overlapping STREAM frames
- MINOR: quic: xprt traces fixes
- MINOR: quic: Drop asap Retry or Version Negotiation packets
- MINOR: pools: work around possibly slow malloc_trim() during gc
- DEBUG: ssl: make sure we never change a servername on established connections
- MINOR: quic: Add traces for RX frames (flow control related)
- MINOR: quic: Add CONNECTION_CLOSE phrase to trace
- REORG: quic: remove qc_ prefix on functions which not used it directly
- BUG/MINOR: quic: upgrade rdlock to wrlock for ODCID removal
- MINOR: quic: remove unnecessary call to free_quic_conn_cids()
- MINOR: quic: store ssl_sock_ctx reference into quic_conn
- MINOR: quic: remove unnecessary if in qc_pkt_may_rm_hp()
- MINOR: quic: replace usage of ssl_sock_ctx by quic_conn
- MINOR: quic: delete timer task on quic_close()
- MEDIUM: quic: implement refcount for quic_conn
- BUG/MINOR: quic: fix potential null dereference
- BUG/MINOR: quic: fix potential use of uninit pointer
- BUG/MEDIUM: backend: fix possible sockaddr leak on redispatch
- BUG/MEDIUM: peers: properly skip conn_cur from incoming messages
- CI: Github Actions: do not show VTest failures if build failed
- BUILD: opentracing: display warning in case of using OT_USE_VARS at compile time
- MINOR: compat: detect support for dl_iterate_phdr()
- MINOR: debug: add ability to dump loaded shared libraries
- MINOR: debug: add support for -dL to dump library names at boot
- BUG/MEDIUM: ssl: initialize correctly ssl w/ default-server
- REGTESTS: ssl: fix ssl_default_server.vtc
- BUG/MINOR: ssl: free the fields in srv->ssl_ctx
- BUG/MEDIUM: ssl: free the ckch instance linked to a server
- REGTESTS: ssl: update of a crt with server deletion
- BUILD/MINOR: cpuset FreeBSD 14 build fix.
- MINOR: pools: always evict oldest objects first in pool_evict_from_local_cache()
- DOC: pool: document the purpose of various structures in the code
- CLEANUP: pools: do not use the extra pointer to link shared elements
- CLEANUP: pools: get rid of the POOL_LINK macro
- MINOR: pool: allocate from the shared cache through the local caches
- CLEANUP: pools: group list updates in pool_get_from_cache()
- MINOR: pool: rely on pool_free_nocache() in pool_put_to_shared_cache()
- MINOR: pool: make pool_is_crowded() always true when no shared pools are used
- MINOR: pool: check for pool's fullness outside of pool_put_to_shared_cache()
- MINOR: pool: introduce pool_item to represent shared pool items
- MINOR: pool: add a function to estimate how many may be released at once
- MEDIUM: pool: compute the number of evictable entries once per pool
- MINOR: pools: prepare pool_item to support chained clusters
- MINOR: pools: pass the objects count to pool_put_to_shared_cache()
- MEDIUM: pools: centralize cache eviction in a common function
- MEDIUM: pools: start to batch eviction from local caches
- MEDIUM: pools: release cached objects in batches
- OPTIM: pools: reduce local pool cache size to 512kB
- CLEANUP: assorted typo fixes in the code and comments (29th iteration)
- CI: github actions: update OpenSSL to 3.0.1
- BUILD/MINOR: tools: solaris build fix on dladdr.
- BUG/MINOR: cli: fix _getsocks with musl libc
- BUG/MEDIUM: http-ana: Preserve response's FLT_END analyser on L7 retry
- MINOR: quic: Wrong traces after rework
- MINOR: quic: Add trace about in flight bytes by packet number space
- MINOR: quic: Wrong first packet number space computation
- MINOR: quic: Wrong packet number space computation for PTO
- MINOR: quic: Wrong loss time computation in qc_packet_loss_lookup()
- MINOR: quic: Wrong ack_delay computation before calling quic_loss_srtt_update()
- MINOR: quic: Remove nb_pto_dgrams quic_conn struct member
- MINOR: quic: Wrong packet number space trace in qc_prep_pkts()
- MINOR: quic: Useless test in qc_prep_pkts()
- MINOR: quic: qc_prep_pkts() code moving
- MINOR: quic: Speeding up Handshake Completion
- MINOR: quic: Probe Initial packet number space more often
- MINOR: quic: Probe several packet number space upon timer expiration
- MINOR: quic: Comment fix.
- MINOR: quic: Improve qc_prep_pkts() flexibility
- MINOR: quic: Do not drop secret key but drop the CRYPTO data
- MINOR: quic: Prepare Handshake packets asap after completed handshake
- MINOR: quic: Flag asap the connection having reached the anti-amplification limit
- MINOR: quic: PTO timer too often reset
- MINOR: quic: Re-arm the PTO timer upon datagram receipt
- MINOR: proxy: add option idle-close-on-response
- MINOR: cpuset: switch to sched_setaffinity for FreeBSD 14 and above.
- CI: refactor spelling check
- CLEANUP: assorted typo fixes in the code and comments
- BUILD: makefile: add -Wno-atomic-alignment to work around clang abusive warning
- MINOR: quic: Only one CRYPTO frame by encryption level
- MINOR: quic: Missing retransmission from qc_prep_fast_retrans()
- MINOR: quic: Non-optimal use of a TX buffer
- BUG/MEDIUM: mworker: don't use _getsocks in wait mode
- BUG/MINOR: ssl: Store client SNI in SSL context in case of ClientHello error
- BUG/MAJOR: mux-h1: Don't decrement .curr_len for unsent data
- DOC: internals: document the pools architecture and API
- CI: github actions: clean default step conditions
- BUILD: cpuset: fix build issue on macos introduced by previous change
- MINOR: quic: Remaining TRACEs with connection as first arg
- MINOR: quic: Reset ->conn quic_conn struct member when calling qc_release()
- MINOR: quic: Flag the connection as being attached to a listener
- MINOR: quic: Wrong CRYPTO frame concatenation
- MINOR: quic: Add traces for quic_close() and quic_conn_io_cb()
- REGTESTS: ssl: Fix ssl_errors regtest with OpenSSL 1.0.2
- MINOR: quic: Do not dereference ->conn quic_conn struct member
- MINOR: quic: fix return of quic_dgram_read
- MINOR: quic: add config parse source file
- MINOR: quic: implement Retry TLS AEAD tag generation
- MEDIUM: quic: implement Initial token parsing
- MINOR: quic: define retry_source_connection_id TP
- MEDIUM: quic: implement Retry emission
- MINOR: quic: free xprt tasklet on its thread
- BUG/MEDIUM: connection: properly leave stopping list on error
- MINOR: pools: enable pools with DEBUG_FAIL_ALLOC as well
- MINOR: quic: As server, skip 0-RTT packet number space
- MINOR: quic: Do not wakeup the I/O handler before the mux is started
- BUG/MEDIUM: htx: Adjust length to add DATA block in an empty HTX buffer
- CI: github actions: use cache for OpenTracing
- BUG/MINOR: httpclient: don't send an empty body
- BUG/MINOR: httpclient: set default Accept and User-Agent headers
- BUG/MINOR: httpclient/lua: don't pop the lua stack when getting headers
- BUILD/MINOR: fix solaris build with clang.
- BUG/MEDIUM: server: avoid changing healthcheck ctx with set server ssl
- CI: refactor OpenTracing build script
- DOC: management: mark "set server ssl" as deprecated
- MEDIUM: cli: yield between each pipelined command
- MINOR: channel: add new function co_getdelim() to support multiple delimiters
- BUG/MINOR: cli: avoid O(bufsize) parsing cost on pipelined commands
- MEDIUM: h2/hpack: emit a Dynamic Table Size Update after settings change
- MINOR: quic: Retransmit the TX frames in the same order
- MINOR: quic: Remove the packet number space TX MT_LIST
- MINOR: quic: Splice the frames which could not be added to packets
- MINOR: quic: Add the number of TX bytes to traces
- CLEANUP: quic: Replace <nb_pto_dgrams> by <probe>
- MINOR: quic: Send two ack-eliciting packets when probing packet number spaces
- MINOR: quic: Probe regardless of the congestion control
- MINOR: quic: Speeding up handshake completion
- MINOR: quic: Release RX Initial packets asap
- MINOR: quic: Release asap TX frames to be transmitted
- MINOR: quic: Probe even if coalescing
- BUG/MEDIUM: cli: Never wait for more data on client shutdown
- BUG/MEDIUM: mcli: do not try to parse empty buffers
- BUG/MEDIUM: mcli: always realign wrapping buffers before parsing them
- BUG/MINOR: stream: make the call_rate only count the no-progress calls
- MINOR: quic: do not use quic_conn after dropping it
- MINOR: quic: adjust quic_conn refcount decrement
- MINOR: quic: fix race-condition on xprt tasklet free
- MINOR: quic: free SSL context on quic_conn free
- MINOR: quic: Add QUIC_FT_RETIRE_CONNECTION_ID parsing case
- MINOR: quic: Wrong packet number space selection
- DEBUG: pools: add new build option DEBUG_POOL_INTEGRITY
- MINOR: quic: add missing include in quic_sock
- MINOR: quic: fix indentation in qc_send_ppkts
- MINOR: quic: remove dereferencement of connection when possible
- MINOR: quic: set listener accept cb on parsing
- MEDIUM: quic/ssl: add new ex data for quic_conn
- MINOR: quic: initialize ssl_sock_ctx alongside the quic_conn
- MINOR: ssl: fix build in release mode
- MINOR: pools: partially uninline pool_free()
- MINOR: pools: partially uninline pool_alloc()
- MINOR: pools: prepare POOL_EXTRA to be split into multiple extra fields
- MINOR: pools: extend pool_cache API to pass a pointer to a caller
- DEBUG: pools: add new build option DEBUG_POOL_TRACING
- DEBUG: cli: add a new "debug dev fd" expert command
- MINOR: fd: register the write side of the poller pipe as well
- CI: github actions: use cache for SSL libs
- BUILD: debug/cli: condition test of O_ASYNC to its existence
- BUILD: pools: fix build error on DEBUG_POOL_TRACING
- MINOR: quic: refactor header protection removal
- MINOR: quic: handle app data according to mux/connection layer status
- MINOR: quic: refactor app-ops initialization
- MINOR: receiver: define a flag for local accept
- MEDIUM: quic: flag listener for local accept
- MINOR: quic: do not manage connection in xprt snd_buf
- MINOR: quic: remove wait handshake/L6 flags on init connection
- MINOR: listener: add flags field
- MINOR: quic: define QUIC flag on listener
- MINOR: quic: create accept queue for QUIC connections
- MINOR: listener: define per-thr struct
- MAJOR: quic: implement accept queue
- CLEANUP: mworker: simplify mworker_free_child()
- BUILD/DEBUG: lru: update the standalone code to support the revision
- DEBUG: lru: use a xorshift generator in the testing code
- BUG/MAJOR: compiler: relax alignment constraints on certain structures
- BUG/MEDIUM: fd: always align fdtab[] to 64 bytes
- MINOR: quic: No DCID length for datagram context
- MINOR: quic: Comment fix about the token found in Initial packets
- MINOR: quic: Get rid of a struct buffer in quic_lstnr_dgram_read()
- MINOR: quic: Remove the QUIC haproxy server packet parser
- MINOR: quic: Add new definition about DCIDs offsets
- MINOR: quic: Add a list to QUIC sock I/O handler RX buffer
- MINOR: quic: Allocate QUIC datagrams from sock I/O handler
- MINOR: proto_quic: Allocate datagram handlers
- MINOR: quic: Pass CID as a buffer to quic_get_cid_tid()
- MINOR: quic: Convert quic_dgram_read() into a task
- CLEANUP: quic: Remove useless definition
- MINOR: proto_quic: Wrong allocations for TX rings and RX bufs
- MINOR: quic: Do not consume the RX buffer on QUIC sock i/o handler side
- MINOR: quic: Do not reset a full RX buffer
- MINOR: quic: Attach all the CIDs to the same connection
- MINOR: quic: Make usage of by datagram handler trees
- MEDIUM: da: new optional data file download scheduler service.
- MEDIUM: da: update doc and build for new scheduler mode service.
- MEDIUM: da: update module to handle schedule mode.
- MINOR: quic: Drop Initial packets with wrong ODCID
- MINOR: quic: Wrong RX buffer tail handling when no more contiguous data
- MINOR: quic: Iterate over all received datagrams
- MINOR: quic: refactor quic CID association with threads
- BUG/MEDIUM: resolvers: Really ignore trailing dot in domain names
- DEV: flags: Add missing flags
- BUG/MINOR: sink: Use the right field in appctx context in release callback
- MINOR: sock: move the unused socket cleaning code into its own function
- BUG/MEDIUM: mworker: close unused transferred FDs on load failure
- BUILD: atomic: make the old HA_ATOMIC_LOAD() support const pointers
- BUILD: cpuset: do not use const on the source of CPU_AND/CPU_ASSIGN
- BUILD: checks: fix inlining issue on set_srv_agent_{addr,port}
- BUILD: vars: avoid overlapping field initialization
- BUILD: server-state: avoid using not-so-portable isblank()
- BUILD: mux_fcgi: avoid aliasing of a const struct in traces
- BUILD: tree-wide: mark a few numeric constants as explicitly long long
- BUILD: tools: fix warning about incorrect cast with dladdr1()
- BUILD: task: use list_to_mt_list() instead of casting list to mt_list
- BUILD: mworker: include tools.h for platforms without unsetenv()
- BUG/MINOR: mworker: fix a FD leak of a sockpair upon a failed reload
- MINOR: mworker: set the master side of ipc_fd in the worker to -1
- MINOR: mworker: allocate and initialize a mworker_proc
- CI: Consistently use actions/checkout@v2
- REGTESTS: Remove REQUIRE_VERSION=1.8 from all tests
- MINOR: mworker: sets used or closed worker FDs to -1
- MINOR: quic: Try to accept 0-RTT connections
- MINOR: quic: Do not try to treat 0-RTT packets without started mux
- MINOR: quic: Do not try to accept a connection more than one time
- MINOR: quic: Initialize the connection timer asap
- MINOR: quic: Do not use connection struct xprt_ctx too soon
- Revert "MINOR: mworker: sets used or closed worker FDs to -1"
- BUILD: makefile: avoid testing all -Wno-* options when not needed
- BUILD: makefile: validate support for extra warnings by batches
- BUILD: makefile: only compute alternative options if required
- DEBUG: fd: make sure we never try to insert/delete an impossible FD number
- MINOR: mux-quic: add comment
- MINOR: mux-quic: properly initialize qcc flags
- MINOR: mux-quic: do not consider CONNECTION_CLOSE for the moment
- MINOR: mux-quic: create a timeout task
- MEDIUM: mux-quic: delay the closing with the timeout
- MINOR: mux-quic: release idle conns on process stopping
- MINOR: listener: replace the listener's spinlock with an rwlock
- BUG/MEDIUM: listener: read-lock the listener during accept()
- MINOR: mworker/cli: set expert/experimental mode from the CLI
Allow setting the master CLI in expert or experimental mode. No commands
within the master are unlocked yet, but it gives the ability to send
expert or experimental commands to the workers.
echo "@1; experimental-mode on; del server be1/s2" | socat /var/run/haproxy.master -
echo "experimental-mode on; @1 del server be1/s2" | socat /var/run/haproxy.master -
Listeners might be disabled by other threads while running in
listener_accept() due to a stopping condition or possibly a rebinding
error after a failed stop/start. When this happens, the listener's FD
is -1 and accesses made by the lower layers to fdtab[-1] do not end up
well. This can occasionally be noticed if running at high connection
rates in master-worker mode when compiled with ASAN and hammered with
10 reloads per second. From time to time an out-of-bounds error will
be reported.
One approach could consist in keeping a copy of critical information
such as the FD before proceeding but that's not correct since in case of
close() the FD might be reassigned to another connection for example.
In fact what is needed is to read-lock the listener during this operation
so that it cannot change while we're touching it.
Tests have shown that using only a spinlock generally works well, but it
doesn't scale much with threads and we can see listener_accept() eat
10-15% CPU on a 24 thread machine at 300k conn/s. For this reason the
lock was turned to an rwlock by previous commit and this patch only takes
the read lock to make sure other operations do not change the listener's
state while threads are accepting connections. With this approach, no
performance loss was noticed at all and listener_accept() doesn't appear
in perf top.
This ought to be backported to about all branches that make use of the
unlocked listeners, but in practice it seems to mostly concern 2.3 and
above, since 2.2 and older will take the FD in the argument (and the
race exists there, this FD could end up being reassigned in parallel
but there's not much that can be done there to prevent that race; at
least a permanent error will be reported).
For backports, the current approach is preferred, with a preliminary
backport of previous commit "MINOR: listener: replace the listener's
spinlock with an rwlock". However if for any reason this commit cannot
be backported, the current patch can be modified to simply take a
spinlock (tested and works), it will just impact high performance
workloads (like DDoS protection).
We'll need to lock the listener a little bit more during accept() and
tests show that a spinlock is a massive performance killer, so let's
first switch to an rwlock for this lock.
This patch might have to be backported for the next patch to work, and
if so, the change is almost mechanical (look for LISTENER_LOCK), but do
not forget about the few HA_SPIN_INIT() in the file. There's no reference
to this lock outside of listener.c nor listener-t.h.
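As an illustration, a minimal sketch of the resulting read-locked accept
path (the lock label follows the text above; field names are assumptions):

  /* take the read lock so the listener cannot be disabled or rebound
   * under us; writers take the write lock for such state changes */
  HA_RWLOCK_RDLOCK(LISTENER_LOCK, &l->lock);
  if (l->rx.fd == -1)
          goto end;       /* disabled by another thread, FD is gone */
  /* ... accept loop safely dereferencing fdtab[l->rx.fd] ... */
 end:
  HA_RWLOCK_RDUNLOCK(LISTENER_LOCK, &l->lock);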
Implement the idle frontend connection cleanup for the QUIC mux. Each
connection is registered on the mux_stopping_list. On process closing,
the mux is notified via a new function, qc_wake. This function immediately
releases the connection if the parent proxy is stopped.
This allows the process to be closed quickly even if there are QUIC
connections stuck on a timeout.
Do not close the connection immediately if no bidirectional stream is
opened. Instead, schedule the mux timeout when this condition is
verified. On timer expiration, the mux/connection can be freed.
This task will be used to schedule a timer when there is no activity on
the mux. The timeout is set via the "timeout client" from the
configuration file.
The timeout task processing schedules the timeout only under specific
conditions. Currently, it's done if there is no opened bidirectional
stream.
For now this task is not used. This will be implemented in the following
commit.
Remove the condition on CONNECTION_CLOSE reception that closed streams
immediately. It could cause crashes as the QUIC xprt layer still accesses
the qcs to send data and handle ACKs.
The whole interface and buffering between QUIC xprt and mux must be
properly reorganized to better handle this case. Once this is done, it
may have some sense to free the qcs streams on CONNECTION_CLOSE
reception.
It's among the cases that would provoke memory corruption, let's add
some tests against negative FDs and those larger than the table. This
must never ever happen and would currently result in silent corruption
or a crash. Better have a noticeable one exhibiting the call chain if
that were to happen.
This reverts commit ea7371e934.
This can't work correctly as we need this FD in the worker to be
inserted in the fdtab. The correct way to do it would be to cleanup the
mworker_proc in the master after the fork().
In fact the xprt_ctx of the connection is first stored into the quic_conn
struct as soon as it is initialized from qc_conn_alloc_ssl_ctx().
As quic_conn_init_timer() runs after this function, we can associate
the timer's context with the one from the quic_conn struct.
We must move this initialization out of the xprt_start() callback, which
comes too late (after handshake completion for 1-RTT sessions). This timer
must be usable as soon as we have packets to send/receive. Let's initialize
it after the TLS context is initialized in qc_conn_alloc_ssl_ctx(). The
latter function initializes the I/O handler task (quic_conn_io_cb) to
send/receive packets.
We add a new flag to mark a connection as already enqueued for accept.
This is useful for 0-RTT sessions where a connection is first enqueued
as soon as 0-RTT RX secrets could be derived. Then, as for any other
connection, we could accept this connection one more time after handshake
completion, which led to very bad side effects.
Thank you to Amaury for this nice patch.
mworker_cli_sockpair_new() is used to create the socketpair CLI listener of
the worker. Its FD is referenced in the mworker_proc structure, however,
once it's assigned to the listener the reference should be removed so we
don't use it accidentally.
The same must be done in case of errors if the FDs were already closed.
When starting HAProxy in master-worker mode, the master pre-allocates a
struct mworker_proc and does a socketpair() before the configuration
parsing. If the configuration loading failed, the FDs were never closed
because they aren't part of a listener; they are not even in the fdtab.
This patch fixes the issue by cleaning the mworker_proc structures that
were not assigned a process, and closing their FDs.
Must be backported as far as 2.0; the srv_drop() only frees the memory
and could be dropped since it's done before an exec().
There were a few casts of list* to mt_list* that were upsetting some
old compilers (not sure about the effect on others). We had created
list_to_mt_list() purposely for this, let's use it instead of applying
this cast.
dladdr1() is used on glibc and takes a void**, but we pass it a
const ElfW(Sym)** and some compilers complain that we're aliasing.
Let's just set a may_alias attribute on the local variable to
address this. There's no need to backport this unless warnings are
reported on older distros or uncommon compilers.
At a few places in the code, switch/case statements on flags are tested
against 64-bit constants without these being explicitly marked as long
long. Some 32-bit compilers complain that the constant is too large for a
long, and others likely always use long long there. Better fix that, as
it's uncertain what the compilers which do not complain actually do. It
may be backported to avoid doubts on uncommon platforms if needed, as it
touches very few areas.
fcgi_trace() declares fconn as a const and casts its mbuf array to
(struct buffer*), which rightfully upsets some older compilers. Better
just declare it as a writable variable and get rid of the cast. It's
harmless anyway. This has been there since 2.1 with commit 5c0f859c2
("MINOR: mux-fcgi/trace: Register a new trace source with its events")
and doesn't need to be backported, though it would not harm either.
Once in a while we get rid of this one. isblank() is missing on old
C libraries and only matches two values, so let's just replace it.
It was brought with this commit in 2.4:
0bf268e18 ("MINOR: server: Be more strict on the server-state line parsing")
It may be backported though it's really not important.
Compiling vars.c with gcc 4.2 shows that we're initializing some local
structs field members in a not really portable way:
src/vars.c: In function 'vars_parse_cli_set_var':
src/vars.c:1195: warning: initialized field overwritten
src/vars.c:1195: warning: (near initialization for 'px.conf.args')
src/vars.c:1195: warning: initialized field overwritten
src/vars.c:1195: warning: (near initialization for 'px.conf')
src/vars.c:1201: warning: initialized field overwritten
src/vars.c:1201: warning: (near initialization for 'rule.conf')
It's totally harmless anyway, but better clean this up.
These functions are declared as external functions in check.h and
as inline functions in check.c. Let's move them as static inline in
check.h. This appeared in 2.4 with the following commits:
4858fb2e1 ("MEDIUM: check: align agentaddr and agentport behaviour")
1c921cd74 ("BUG/MINOR: check: consitent way to set agentaddr")
While harmless (it only triggers build warnings with some gcc 4.x),
it should probably be backported where the patches above are present
to keep the code consistent.
The man page indicates that CPU_AND() and CPU_ASSIGN() take a variable,
not a const on the source, even though it doesn't make much sense. But
with older libcs, this triggers a build warning:
src/cpuset.c: In function 'ha_cpuset_and':
src/cpuset.c:53: warning: initialization discards qualifiers from pointer target type
src/cpuset.c: In function 'ha_cpuset_assign':
src/cpuset.c:101: warning: initialization discards qualifiers from pointer target type
Better stick stricter to the documented API as this is really harmless
here. There's no need to backport it (unless build issues are reported,
which is quite unlikely).
When the master process is reloaded on a new config, it will try to
connect to the previous process' socket to retrieve all known
listening FDs to be reused by the new listeners. If listeners were
removed, their unused FDs are simply closed.
However there's a catch. In case a socket fails to bind, the master
will cancel its startup and switch to wait mode for a new operation
to happen. In this case it didn't close the possibly remaining FDs
that were left unused.
It is very hard to hit this case, but it can happen during a
troubleshooting session with fat fingers. For example, let's say
a config runs like this:
frontend ftp
bind 1.2.3.4:20000-29999
The admin wants to extend the port range down to 10000-29999 and
by mistake ends up with:
frontend ftp
bind 1.2.3.41:20000-29999
Upon restart the bind will fail if the address is not present, and the
master will then switch to wait mode without releasing the previous FDs
for 1.2.3.4:20000-29999 since they're now apparently unused. Then once
the admin fixes the config and does:
frontend ftp
bind 1.2.3.4:10000-29999
The service will start, but will bind new sockets, half of them
overlapping with the previous ones that were not properly closed. This
may result in a startup error (if SO_REUSEPORT is not enabled or not
available), in a FD number exhaustion (if the error is repeated many
times), or in connections being randomly accepted by the process if
they sometimes land on the old FD that nobody listens on.
This patch will need to be backported as far as 1.8, and depends on
previous patch:
MINOR: sock: move the unused socket cleaning code into its own function
Note that before 2.3 most of the code was located inside haproxy.c, so
the patch above should probably relocate the function there instead of
sock.c.
The startup code used to scan the list of unused sockets retrieved from
an older process, and to close them one by one. This also required that
the knowledge of the internal storage of these temporary sockets was
known from outside sock.c and that the code was copy-pasted at every
call place.
This patch moves this into sock.c under the name
sock_drop_unused_old_sockets(), and removes the xfer_sock_list
definition from sock.h since the rest of the code doesn't need to know
this.
This cleanup is minimal and preliminary to a future fix that will need
to be backported to all versions featuring FD transfers over the CLI.
In the release callback, ctx.peers was used instead of ctx.sft. Concretely,
it is not an issue because the appctx context is a union and both fields
are structures with a single pointer. But it will become a problem if
that changes.
This patch must be backported as far as 2.2.
When a string is converted to a domain name label, the trailing dot must be
ignored. In resolv_str_to_dn_label(), there is a test to do so. However, the
trailing dot is not really ignored. The character itself is not copied but
the string index is still moved to the next char. Thus, this trailing dot is
counted in the length of the last encoded part of the domain name. Worse,
because the copy is skipped, a garbage character is included in the domain
name.
This patch should fix the issue #1528. It must be backported as far as 2.0.
Do not use an extra DCID parameter on new_quic_cid() to associate a newly
generated CID with a thread ID. Simply do the computation inside the
function. The API is cleaner this way.
This also has the effect of improving the apparent randomness of CIDs.
With the previous version, the first byte of all CIDs was identical for a
connection, which could lead to privacy issues. This version may not be
totally perfect on this aspect but it improves the situation.
The producer must know where the trailing hole in the RX buffer is
when it purges consumed datagrams from it. This is done by allocating
a fake datagram whose length is the remaining number of bytes which
cannot be produced at the tail of the RX buffer.
The CID trees are no longer attached to the listener receiver but to the
underlying datagram handlers (one per thread), which always run on the same
thread. So, operations on these trees do not require any locking.
We copy the first octet of the original destination connection ID to any
CID for the connection calling new_quic_cid(). So this patch only modifies
this function to take a dcid as parameter.
As the RX buffer is not consumed by the sock I/O handler as soon as a
datagram is produced, an RX buffer must not be reset when full. The
remaining room is consumed without modifying it. The consumer has a
representation of its contents: a list of datagrams.
Rename quic_lstnr_dgram_read() to quic_lstnr_dgram_dispatch() to reflect
its new role. After calling the latter, the sock I/O handler must consume
the buffer only if the datagram it received is detected as wrong by
quic_lstnr_dgram_dispatch().
The datagram handler task marks the datagram as consumed by atomically
setting ->buf to NULL. The sock I/O handler is responsible for flushing its
RX buffer before using it. It also keeps a datagram among the consumed ones
so as to pass it to quic_lstnr_dgram_dispatch() and prevent it from
allocating a new one.
As mentioned in the comment, the tx_qrings and rxbufs members of the
receiver struct must be pointers to pointers!
Modify the functions responsible for their allocation accordingly.
Note that this code could work because the sizes of rxbuf and tx_qrings
are greater than the size of a pointer!
quic_dgram_read() parses all the QUIC packets from a UDP datagram. It is
the best candidate to be converted into a task, because its processing data
unit is the UDP datagram received by the QUIC sock I/O handler. If correct,
this datagram is added to the context of a task, quic_lstnr_dghdlr(), a
conversion of quic_dgram_read() into a task. This task pops a datagram from
an mt_list and passes it on to the packet handler (quic_lstnr_pkt_rcv()).
Modify the quic_dgram struct to play the role of the old quic_dgram_ctx struct when
passed to quic_lstnr_pkt_rcv().
Modify the datagram handlers allocation to set their tasks to quic_lstnr_dghdlr().
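A simplified sketch of such a datagram handler task (member names and the
parsing helper are assumptions based on the description above):

  struct task *quic_lstnr_dghdlr(struct task *t, void *ctx, unsigned int state)
  {
          struct quic_dghdlr *dghdlr = ctx;
          struct quic_dgram *dgram;

          /* pop the datagrams queued by the sock I/O handler and parse
           * all the coalesced QUIC packets they contain */
          while ((dgram = MT_LIST_POP(&dghdlr->dgrams,
                                      struct quic_dgram *, mt_list)))
                  quic_lstnr_dgram_parse(dgram); /* hypothetical helper */

          return t;
  }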
Add a new struct, quic_dghdlr, to define datagram handler tasks, one per
thread. Allocate them and attach them to the listener receiver part by
calling the newly implemented quic_alloc_dghdlrs_listener() function.
Add a new structure, quic_dgram, to store information about datagrams
received by the sock I/O handler (quic_sock_fd_iocb), along with its
associated pool.
Implement quic_get_dgram_dcid() to retrieve the datagram DCID, which must
be the same for all the packets in the datagram.
Modify quic_lstnr_dgram_read(), called by the sock I/O handler, to allocate
a quic_dgram each time a correct datagram is found and add it to the sock
I/O handler's rxbuf dgram list.
This function is not used anymore, is broken, and uses code shared with the
listener packet parser. It is becoming annoying to keep modifying
it without testing each time we modify the code it shares with the
listener packet parser.
This is to be sure xprt functions do not manipulate the buffer struct
passed as parameter to quic_lstnr_dgram_read() from low level datagram
I/O callback in quic_sock.c (quic_sock_fd_iocb()).
Mention that the token is sent only by servers in both the server and
listener packet parsers.
Remove a "TO DO" section in the listener packet parser because there is
nothing more to do in this function about the token.
This quic_dgram_ctx struct member is used to denote if we are parsing a new
datagram (null value), or a coalesced packet within the current datagram
(non-null value). But it was never set.
There's a risk that fdtab is not 64-byte aligned. The first effect is that
it may cause false sharing between cache lines resulting in contention
when adjacent FDs are used by different threads. The second is related
to what is explained in commit "BUG/MAJOR: compiler: relax alignment
constraints on certain structures", i.e. that modern compilers might
make use of aligned vector operations to zero some entries, and would
crash. We do not use any memset() or similar on fdtab, so the risk is
almost nonexistent, but that's not a reason for violating some valid
assumptions.
This patch addresses this by allocating 64 extra bytes and aligning the
structure manually (this is an extremely cheap solution for this specific
case). The original address is stored in a new variable "fdtab_addr" and
is the one that gets freed. This remains extremely simple and should be
easily backportable. A dedicated aligned allocator later would help, of
course.
This needs to be backported as far as 2.2. No issue related to this was
reported yet, but it could very well happen as compilers evolve. In
addition this should preserve high performance across restarts (i.e.
no more dependency on allocator's alignment).
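A sketch of this manual alignment trick (variable names follow the text;
the exact code may differ):

  /* allocate 64 extra bytes, keep the original pointer for free(),
   * and round the usable pointer up to the next 64-byte boundary */
  fdtab_addr = calloc(1, global.maxsock * sizeof(struct fdtab) + 64);
  if (!fdtab_addr)
          goto fail;
  fdtab = (struct fdtab *)(((uintptr_t)fdtab_addr + 63) & ~(uintptr_t)63);
  /* ... on deinit, free(fdtab_addr), never free(fdtab) ... */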
The standalone testing code used to rely on rand(), but switching to a
xorshift generator speeds up the test by 7% which is important to
accurately measure the real impact of the LRU code itself.
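For reference, a minimal xorshift32 generator of the kind described here
(the exact variant used by the test code is an assumption):

  /* George Marsaglia's xorshift32: three shifts/xors per draw, much
   * cheaper than rand(); the state must be seeded with a non-zero value */
  static unsigned int xs_state = 2463534242U;

  static unsigned int xorshift32(void)
  {
          unsigned int x = xs_state;

          x ^= x << 13;
          x ^= x >> 17;
          x ^= x << 5;
          return xs_state = x;
  }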
Do not proceed to a direct accept when creating a new quic_conn. Wait for
the QUIC handshake to succeed before inserting the quic_conn in the accept
queue. A tasklet is then woken up to call listener_accept() to accept the
quic_conn.
The most important effect is that the connection/mux layers are not
instantiated at the same time as the quic_conn. This forces some processing
to be delayed to be sure that the mux is allocated:
* initialization of mux transport parameters
* installation of the app-ops
Also, the mux instance is not checked now to wake up the quic_conn
tasklet. This is safe because the xprt-quic code is now ready to handle
the absence of the connection/mux layers.
Note that this commit has a deep impact as it changes significantly the
lower QUIC architecture. Most notably, it breaks the 0-RTT feature.
Create a new structure, li_per_thread. This is used as an array in the
listener structure, with an entry allocated per thread. The new function
li_init_per_thr() is responsible for the allocation.
For now, li_per_thread contains fields only useful for QUIC listeners.
As such, it is only allocated for QUIC listeners.
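A minimal sketch of this per-thread allocation (the struct contents shown
are illustrative; only QUIC-related fields exist for now):

  /* one li_per_thread entry per thread, allocated on demand */
  struct li_per_thread {
          struct mt_list quic_accept_list; /* hypothetical QUIC member */
  };

  static int li_init_per_thr(struct listener *l)
  {
          /* allocate one entry per configured thread */
          l->per_thr = calloc(global.nbthread, sizeof(*l->per_thr));
          return l->per_thr ? 0 : -1;
  }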
Create a new type, quic_accept_queue, to handle QUIC connection accepts.
A queue will be allocated for each thread. It contains a list of
listeners having at least one quic_conn ready to be accepted, and the
tasklet to run listener_accept() for these listeners.
Mark QUIC listeners with the flag LI_F_QUIC_LISTENER. It is set by the
proto-quic layer in the add listener callback. This allows overriding
the accept callback with quic_session_accept() more clearly.
The connection is allocated after finishing the QUIC handshake. Remove
the handshake/L6 flags when initializing the connection, as the handshake
has already finished successfully at this stage.
Remove the usage of the connection in quic_conn_from_buf. As the connection
and quic_conn are decorrelated, it is not logical to check connection flags
when using sendto.
This requires storing the L4 peer address in quic_conn to be able to use
sendto.
This change is required to delay the allocation of the connection.
QUIC connections are distributed across threads by xprt-quic according
to their CIDs. As such, disable the thread selection in listener_accept
for QUIC listeners.
This prevents a connection from migrating to another thread after its
allocation, which could result in unexpected side effects.
This flag is named RX_F_LOCAL_ACCEPT. It will be activated for special
receivers where connection balancing to threads is already handled
outside of listener_accept, such as with QUIC listeners.
Add a new function in mux-quic to install app-ops. For now this
function is called during the ALPN negotiation of the QUIC handshake.
This change will be useful when the connection accept queue is
implemented. It will thus be required to delay the app-ops
initialization because the mux won't be allocated during the
QUIC handshake anymore.
Define a new enum to represent the status of the mux/connection layer
above a quic_conn. This is important to know if it's possible to handle
application data, or if it should be buffered or dropped.
Adjust the function to check if header protection can be removed. It can
now be used both for a single packet in qc_lstnr_pkt_rcv and in the
quic_conn handler to handle buffered packets for a specific encryption
level.
When squashing commit add43fa43 ("DEBUG: pools: add new build option
DEBUG_POOL_TRACING") I managed to break the build and to fail to detect
it even after the rebase and a full rebuild :-(
David Carlier reported a build breakage on Haiku since commit
5be7c198e ("DEBUG: cli: add a new "debug dev fd" expert command")
due to O_ASYNC not being defined. Ilya also reported it broke the
build on Cygwin. It's not that portable and sometimes defined as
O_NONBLOCK for portability. But here we don't even need that, as
we already condition other flags, let's just ignore it if it does
not exist.
The poller's pipe was only registered on the read side since we don't
need to poll to write on it. But this leaves some known FDs so it's
better to also register the write side with no event. This will allow
to show them in "show fd" and to avoid dumping them as unhandled FDs.
Note that the only other type of unhandled FDs left are:
- stdin/stdout/stderr
- epoll FDs
The latter could be registered upon startup, but at least a dummy
handler would be needed to keep the fdtab clean.
This command will scan the whole file descriptors space to look for
existing FDs that are unknown to haproxy's fdtab, and will try to dump
as much information as possible about them (including type, mode, device,
size, uid/gid, cloexec, O_* flags, socket types and addresses when
relevant). The goal is to help detecting inherited FDs from parent
processes as well as potential leaks.
Some of those listed are actually known but handled so deep into some
systems that they're not in the fdtab (such as epoll FDs or inter-
thread pipes). This might be refined in the future so that these ones
become known and do not appear.
Example of output:
$ socat - /tmp/sock1 <<< "expert-mode on;debug dev fd"
0 type=tty. mod=0620 dev=0x8803 siz=0 uid=1000 gid=5 fs=0x16 ino=0x6 getfd=+0 getfl=O_RDONLY,O_APPEND
1 type=tty. mod=0620 dev=0x8803 siz=0 uid=1000 gid=5 fs=0x16 ino=0x6 getfd=+0 getfl=O_RDONLY,O_APPEND
2 type=tty. mod=0620 dev=0x8803 siz=0 uid=1000 gid=5 fs=0x16 ino=0x6 getfd=+0 getfl=O_RDONLY,O_APPEND
3 type=pipe mod=0600 dev=0 siz=0 uid=1000 gid=100 fs=0xc ino=0x18112348 getfd=+0
4 type=epol mod=0600 dev=0 siz=0 uid=0 gid=0 fs=0xd ino=0x3674 getfd=+0 getfl=O_RDONLY
33 type=pipe mod=0600 dev=0 siz=0 uid=1000 gid=100 fs=0xc ino=0x24af8251 getfd=+0 getfl=O_RDONLY
34 type=epol mod=0600 dev=0 siz=0 uid=0 gid=0 fs=0xd ino=0x3674 getfd=+0 getfl=O_RDONLY
36 type=pipe mod=0600 dev=0 siz=0 uid=1000 gid=100 fs=0xc ino=0x24af8d1b getfd=+0 getfl=O_RDONLY
37 type=epol mod=0600 dev=0 siz=0 uid=0 gid=0 fs=0xd ino=0x3674 getfd=+0 getfl=O_RDONLY
39 type=pipe mod=0600 dev=0 siz=0 uid=1000 gid=100 fs=0xc ino=0x24afa04f getfd=+0 getfl=O_RDONLY
41 type=pipe mod=0600 dev=0 siz=0 uid=1000 gid=100 fs=0xc ino=0x24af8252 getfd=+0 getfl=O_RDONLY
42 type=epol mod=0600 dev=0 siz=0 uid=0 gid=0 fs=0xd ino=0x3674 getfd=+0 getfl=O_RDONLY
This new option, when set, will cause the callers of pool_alloc() and
pool_free() to be recorded into an extra area in the pool that is expected
to be helpful for later inspection (e.g. in core dumps). For example it
may help figure that an object was released to a pool with some sub-fields
not yet released or that a use-after-free happened after releasing it,
with an immediate indication about the exact line of code that released
it (possibly an error path).
This only works with the per-thread cache, and even objects refilled from
the shared pool directly into the thread-local cache will have a NULL
there. That's not an issue since these objects have not yet been freed.
It's worth noting that pool_alloc_nocache() continues not to set any
caller pointer (e.g. when the cache is empty) because that would require
a possibly undesirable API change.
The extra cost is minimal (one pointer per object) and this complements
DEBUG_POOL_INTEGRITY well.
This adds a caller to pool_put_to_cache() and pool_get_from_cache()
which will optionally be used to pass a pointer to their callers. For
now it's not used, only the API is extended to support this pointer.
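A sketch of the extended API (the signatures below are assumptions based
on the description; the caller pointer is simply carried through for now):

  /* the optional <caller> pointer may later be stored next to the
   * object for post-mortem inspection; it is unused for now */
  void pool_put_to_cache(struct pool_head *pool, void *ptr, const void *caller);
  void *pool_get_from_cache(struct pool_head *pool, const void *caller);

  /* a typical call site would record its own return address */
  pool_put_to_cache(pool, ptr, __builtin_return_address(0));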
The pool_alloc() function was already a wrapper to __pool_alloc() which
was also inlined but took a set of flags. The latter was uninlined and
moved to pool.c, and pool_alloc()/pool_zalloc() turned into macros so that
they can more easily evolve to support debugging options.
The number of call places made this code grow over time and doing only
this change saved ~1% of the whole executable's size.
The pool_free() function has become a bit big over time due to the
extra consistency checks. It used to remain inline only to deal
cleanly with the NULL pointer free that's quite present on some
structures (e.g. in stream_free()).
Here we're splitting the function in two:
- __pool_free() does the inner block without the pointer test and
becomes a function ;
- pool_free() is now a macro that only checks the pointer and calls
__pool_free() if needed.
The use of a macro versus an inline function is only motivated by an
easier instrumentation of the code later.
With this change, the code size reduces by ~1%, which means that at
this point all pool_free() call places used to represent more than
1% of the total code size.
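The resulting shape looks roughly like this (a simplified sketch, not the
verbatim code):

  /* the heavy path is now out of line in pool.c */
  void __pool_free(struct pool_head *pool, void *ptr);

  /* the macro keeps only the cheap NULL test inline at call places */
  #define pool_free(pool, ptr)  do {                  \
                  if ((ptr) != NULL)                  \
                          __pool_free((pool), (ptr)); \
          } while (0)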
Fix a potential null pointer dereference. In fact, this case is not
possible; only a mistake in SSL ex-data initialization may cause it:
either the connection is set or the quic_conn, either of which allows
retrieving the bind_conf.
A BUG_ON was already present but this does not cover release builds.
Extract the allocation of ssl_sock_ctx from qc_conn_init to a dedicated
function qc_conn_alloc_ssl_ctx. This function is called just after
allocating a new quic_conn, without waiting for the initialization of
the connection. It allocates the ssl_sock_ctx and the quic_conn tasklet.
This change is now possible because the SSL callbacks are dealing with a
quic_conn instance.
This change is required to be able to delay the connection allocation
and handle handshake packets without it.
Allow registering quic_conn as ex-data in SSL callbacks. A new index is
used to identify it as ssl_qc_app_data_index.
Replace the connection by the quic_conn as SSL ex-data when initializing
the QUIC SSL session. When using SSL callbacks in a QUIC context, the
connection is now NULL. Use the quic_conn instead to retrieve the required
parameters.
The same clean-up is also conducted inside the QUIC SSL methods of
xprt-quic: connection instance usage is replaced by quic_conn.
Define a special accept cb for QUIC listeners: quic_session_accept().
This operation is conducted during the proto.add callback when creating
listeners.
Special care is now taken when setting the standard callback
session_accept_fd() not to overwrite it if already defined by the proto
layer.
Some functions of xprt-quic were still using connection instead of
quic_conn. This must be removed as the two are decorrelated: a
quic_conn can exist without a connection.
When enabled, objects picked from the cache are checked for corruption
by comparing their contents against a pattern that was placed when they
were inserted into the cache. Objects are also allocated in the reverse
order, from the oldest one to the most recent, so as to maximize the
ability to detect such a corruption. The goal is to detect writes after
free (or possibly hardware memory corruptions). Contrary to DEBUG_UAF
this cannot detect reads after free, but may possibly detect later
corruptions and will not consume extra memory. The CPU usage will
increase a bit due to the cost of filling/checking the area and for the
preference for cold cache instead of hot cache, though not as much as
with DEBUG_UAF. This option is meant to be usable in production.
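The idea can be sketched as follows (a simplified illustration; the actual
pattern and layout differ):

  #include <stddef.h>

  /* fill a freed object with a pattern derived from its address, and
   * verify that pattern when the object is picked from the cache */
  static void pool_fill_pattern(void *item, size_t size)
  {
          size_t *p = item;
          size_t i;

          for (i = 0; i < size / sizeof(*p); i++)
                  p[i] = (size_t)item ^ i;
  }

  static int pool_check_pattern(const void *item, size_t size)
  {
          const size_t *p = item;
          size_t i;

          for (i = 0; i < size / sizeof(*p); i++)
                  if (p[i] != ((size_t)item ^ i))
                          return 0; /* corrupted after free */
          return 1;
  }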
It is possible that the listener is in INITIAL state but has to probe
with Handshake packets. In this case, when entering qc_prep_pkts() there
is nothing to do. We must select the next packet number space (or
encryption level) to be able to probe with such a packet type.
Remove the unsafe call to tasklet_free() in quic_close(). At this stage the
tasklet may already be scheduled by another thread even if the
quic_conn refcount is now null. It would probably cause a crash on the
next tasklet processing.
Use tasklet_kill instead to ensure that the tasklet is freed in a
thread-safe way. Note that quic_conn_io_cb is not protected by the
refcount so only the quic_conn pinned thread must kill the tasklet.
Slightly adjust the refcount decrement code on quic_conn close. A new
function named quic_conn_release() is implemented. This function is
responsible for removing the quic_conn from the CID trees and decrementing
the refcount to free the quic_conn once all threads have finished working
with it.
For now, quic_close is responsible for calling it so the quic_conn is
scheduled to be freed by upper layers. In the future, it may be useful to
delay it to be able to send remaining data or wait for missing ACKs,
for example.
This simplifies quic_conn_drop(), which does not require the lock anymore.
Also, this can help free the connection more quickly in some cases.
quic_conn_drop() decrements the refcount and may free the quic_conn if it
reaches 0. The quic_conn should not be dereferenced again after that in
any case, even for traces.
We have an anti-looping protection in process_stream() that detects bugs
that used to affect a few filters like compression in the past which
sometimes forgot to handle a read0 or a particular error, leaving a
thread looping at 100% CPU forever. When such a condition is detected,
an alert is emitted and the process is killed so that it can be replaced
by a sane one:
[ALERT] (19061) : A bogus STREAM [0x274abe0] is spinning at 2057156
calls per second and refuses to die, aborting now! Please
report this error to developers [strm=0x274abe0,3 src=unix
fe=MASTER be=MASTER dst=<MCLI> txn=(nil),0 txn.req=-,0
txn.rsp=-,0 rqf=c02000 rqa=10000 rpf=88000021 rpa=8000000
sif=EST,40008 sib=DIS,84018 af=(nil),0 csf=0x274ab90,8600
ab=0x272fd40,1 csb=(nil),0
cof=0x25d5d80,1300:PASS(0x274aaf0)/RAW((nil))/unix_stream(9)
cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={}]
call trace(11):
| 0x4dbaab [c7 04 25 01 00 00 00 00]: stream_dump_and_crash+0x17b/0x1b4
| 0x4df31f [e9 bd c8 ff ff 49 83 7c]: process_stream+0x382f/0x53a3
(...)
One problem with this detection is that it used to only count the call
rate because we weren't sure how to make it more accurate, but the
threshold was high enough to prevent accidental false positives.
There is actually one case that manages to trigger it, which is when
sending huge amounts of requests pipelined on the master CLI. Some
short requests such as "show version" are sufficient to be handled
extremely fast and to cause a wake up of an analyser to parse the
next request, then an applet to handle it, back and forth. But this
condition is not an error, since some data are being forwarded by
the stream, and it's easy to detect it.
This patch modifies the detection so that update_freq_ctr() only
applies to calls made without CF_READ_PARTIAL nor CF_WRITE_PARTIAL
set on any of the channels, which really indicates that nothing is
happening at all.
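In code form, the refined check roughly looks like this (a sketch; the
surrounding logic is omitted):

  /* only count a call as "no progress" when neither channel moved */
  if (!((req->flags | res->flags) & (CF_READ_PARTIAL | CF_WRITE_PARTIAL)))
          rate = update_freq_ctr(&s->call_rate, 1);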
This is greatly sufficient and extremely effective, as the call above
is still caught (shutr being ignored by an analyser) while a loop on
the master CLI now has no effect. The "call_rate" field in the detailed
"show sess" output will now be much lower, except for bogus streams,
which may help spot them. This field is only there for developers
anyway so it's pretty fine to slightly adjust its meaning.
This patch could be backported to stable versions in case of reports
of such an issue, but as that's unlikely, it's not really needed.
Pipelined commands easily result in request buffers to wrap, and the
master-cli parser only deals with linear buffers since it needs contiguous
keywords to look for in a list. As soon as a buffer wraps, some commands
are ignored and the parser is called in loops because the wrapped data
do not leave the buffer.
Let's take the easiest path that's already used at the HTTP layer: we
simply realign the buffer if its input wraps. This rarely happens anyway
(typically once per buffer), remains reasonably cheap and guarantees this
cannot happen anymore.
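A sketch of the realignment, reusing the helper the HTTP layer already
relies on (the exact condition is an assumption):

  /* if the input data wraps around the end of the storage area,
   * realign the whole buffer so parsing sees contiguous bytes */
  if (b_head(buf) + b_data(buf) > b_wrap(buf))
          b_slow_realign(buf, trash.area, co_data(req));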
This needs to be backported as far as 2.0.
When pcli_parse_request() is called with an empty buffer, it still tries
to parse it and can go on believing it finds an empty request if the last
char before the beginning of the buffer is a '\n'. In this case it overwrites
it with a zero and processes it as an empty command, doing nothing but not
making the buffer progress. This results in an infinite loop that is stopped
by the watchdog. For a reason related to another issue (yet to be fixed),
this can easily be reproduced by pipelining lots of commands such as
"show version".
Let's add a length check after the search for a '\n'.
This needs to be backported as far as 2.0.
When a shutdown is detected on the cli, we try to execute all pending
commands first before closing the connection. It is required because
commands execution is serialized. However, when the last part is a partial
command, the cli connection is not closed, waiting for more data. Because
there is no timeout for now on the cli socket, the connection remains
infinitely in this state. And because the maxconn is set to 10, if it
happens several times, the cli socket quickly becomes unresponsive because
all its slots are waiting for more data on closed connections.
This patch should fix the issue #1512. It must be backported as far as 2.0.
Again, we fix a reminiscence of the way we probed before probing by packet.
When we were probing by datagram we inspected <prv_pkt> to know if we were
coalescing several packets. There is no need to do that at all when probing by packet.
Furthermore this could lead to blocking situations where we want to probe but
are limited by the congestion control (<cwnd> path variable). This must not be
the case. When probing we must do it regardless of the congestion control.
If a client resends Initial CRYPTO data, this is because it did not receive
all the server Initial CRYPTO data. With this patch we prepare a fast
retransmission without waiting for the PTO timer expiration, sending old
Initial CRYPTO data and coalescing them with Handshake CRYPTO data if
present in the same datagram. Furthermore, we also send a datagram made of
previously sent Handshake CRYPTO data, if any.
When probing, we must not take the congestion control window into account.
This was not completely correctly implemented: qc_build_frms() could fail
because of this limit when comparing the head of the packet against the
congestion control window. With this patch we make it fail only when
we are not probing.
This is to avoid too many PTO timer expirations for the 01RTT and Handshake
packet number spaces. Furthermore we are not limited by the anti-amplification
limit for the 01RTT packet number space. According to the RFC we can send up
to two packets.
This modification should have come with this commit:
"MINOR: quic: Remove nb_pto_dgrams quic_conn struct member"
where the nb_pto_dgrams quic_conn struct member was removed.
When building packets to send, we build frames computing their sizes
so that they have more chance to be added to new packets. There are rare
cases where such a packet could not be built because of the congestion
control, which may for instance prevent us from building a packet with
padding (retransmitted Initial packets). In such a case, the pre-built
frames were lost because they had been added to the packet frame list
but not moved back to the packet number space they came from.
With this patch we add the frames to the packet only if it could be built,
and move them back to the packet number space if not.
There is no need to use an MT_LIST to store frames to send from a packet
number space. This is a reminiscence of the multi-threading support for the TX part.
As reported by @jinsubsim in github issue #1498, there is an
interoperability issue between nghttp2 as a client and a few servers
among which haproxy (in fact likely all those which do not make use
of the dynamic headers table in responses or which do not intend to
use a larger table), when reducing the header table size below 4096.
These are easily testable this way:
nghttp -v -H":method: HEAD" --header-table-size=0 https://$SITE
It will result in a compression error for those which do not start
with an HPACK dynamic table size update opcode.
There is a possible interpretation of the H2 and HPACK specs that
says that an HPACK encoder must send an HPACK headers table update
confirming the new size it will be using after having acknowledged
it, because since it's possible for a decoder to advertise a late
SETTINGS and change it after transfers have begun, the initially
advertised value might very well be seen as a first change from the
initial setting, and the HPACK spec doesn't specify the side which
causes the change that triggers a DTSU update, which was essentially
summed up in this question from nghttp2's author when this issue
was already raised 6 years ago, but which didn't really find a solid
response by then:
https://lists.w3.org/Archives/Public/ietf-http-wg/2015OctDec/0107.html
The ongoing consensus based on what some servers are doing and that aims
at limiting interoperability issues seems to be that a DTSU is expected
for each reduction from the current size, which should be reflected in
the next revision of the H2 spec:
https://github.com/httpwg/http2-spec/pull/1005
Given that we do not make use of this table we can emit a DTSU of zero
before encoding any HPACK frame. However, some clients do not support
receiving DTSU with such values (e.g. VTest) so we cannot do it
unconditionally!
The current patch aims at sticking as close to the spec as possible by
proceeding this way:
- when a SETTINGS_HEADER_TABLE_SIZE is received, a flag is set
indicating that the value changed
- before sending any HPACK frame, this flag is checked to see if
an update is wanted and if none was sent
- in this case a DTSU of size zero is emitted and a flag is set
to mention it was emitted so that it never has to be sent again
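As a rough sketch of that flag dance (the flag and function names below are
made up for the example, they are not the real h2 mux identifiers; HPACK
encodes a dynamic table size update of zero as the single byte 0x20):

  /* illustrative only: these are not the real h2 mux flags */
  #define H2_CF_DTSU_WANTED  0x01u /* peer changed SETTINGS_HEADER_TABLE_SIZE */
  #define H2_CF_DTSU_SENT    0x02u /* DTSU of size zero already emitted */

  /* to be called before encoding any HPACK frame */
  static unsigned char *h2_maybe_send_dtsu(unsigned char *out, unsigned int *flags)
  {
      if ((*flags & (H2_CF_DTSU_WANTED | H2_CF_DTSU_SENT)) == H2_CF_DTSU_WANTED) {
          *out++ = 0x20;             /* HPACK DTSU, new size = 0 */
          *flags |= H2_CF_DTSU_SENT; /* never send it again */
      }
      return out;
  }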
This addresses the problem with nghttp2 without affecting VTest.
More context is available here:
https://github.com/nghttp2/nghttp2/issues/1660
https://lists.w3.org/Archives/Public/ietf-http-wg/2021OctDec/0235.html
Many thanks to @jinsubsim for this report and for participating in the
issue that led to an improvement of the H2 spec.
This should be backported to stable releases in a timely manner, ideally
as far as 2.4 once the h2spec update is merged, then to other versions
after a few months of observation or in case an issue around this is
reported.
Sending pipelined commands on the CLI using a semi-colon as a delimiter
has a cost that grows linearly with the buffer size, because co_getline()
is called for each word and looks up a '\n' in the whole buffer while
copying its contents into a temporary buffer.
This causes huge parsing delays, for example 3s for 100k "show version"
versus 110ms if parsed only once for a default 16k buffer.
This patch makes use of the new co_getdelim() function to support both
an LF and a semi-colon as delimiters so that it's no longer needed to parse
the whole buffer, and that commands are instantly retrieved. We still
need to rely on co_getline() in payload mode as escapes and semi-colons
are not used there.
It should likely be backported where CLI processing speed matters, but
will require to also backport previous patch "MINOR: channel: add new
function co_getdelim() to support multiple delimiters". It's worth noting
that backporting it without "MEDIUM: cli: yield between each pipelined
command" would significantly increase the ratio of disconnections caused
by empty request buffers, for the sole reason that the currently slow
parsing grants more time to request data to come in. As such it would
be better to backport the patch above before taking this one.
For now we have co_getline() which reads a buffer and stops on LF, and
co_getword() which reads a buffer and stops on one arbitrary delimiter.
But sometimes we'd need to stop on a set of delimiters (CR and LF, etc).
This patch adds a new function co_getdelim() which takes a set of delimiters
as a string, and constructs a small map (32 bytes) that's looked up during
parsing to stop after the first delimiter found within the set. It also
supports an optional escape character that skips a delimiter (typically a
backslash). For the rest it works exactly like the two other variants.
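The principle of the map can be sketched like this (a standalone illustration,
not the actual haproxy code; the escape handling is omitted):

  #include <string.h>

  /* returns the position of the first delimiter of <delim> found in
   * <str>, or NULL; the 32-byte map holds one bit per byte value */
  static const char *my_getdelim(const char *str, size_t len, const char *delim)
  {
      unsigned char map[32];
      size_t i;

      memset(map, 0, sizeof(map));
      for (; *delim; delim++)
          map[(unsigned char)*delim >> 3] |= 1 << ((unsigned char)*delim & 7);

      for (i = 0; i < len; i++) {
          unsigned char c = (unsigned char)str[i];

          if (map[c >> 3] & (1 << (c & 7)))
              return str + i;
      }
      return NULL;
  }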
Pipelining commands on the CLI is sometimes needed for batched operations
such as map deletion etc, but it causes two problems:
- some possibly long-running commands will be run in series without
yielding, possibly causing extremely long latencies that will affect
quality of service and even trigger the watchdog, as seen in github
issue #1515.
- short commands that end on a buffer size boundary, when not run in
interactive mode, will often cause the socket to be closed when
the last command is parsed, because the buffer is empty.
This patch proposes a small change to this: by yielding in the CLI applet
after processing a command when there are data left, we significantly
reduce the latency, since only one command is executed per call, and
we leave an opportunity for the I/O layers to refill the request buffer
with more commands, hence to execute all of them much more often.
With this change there's no more watchdog triggered on long series of
"del map" on large map files, and the operations are much less disturbed.
It would be desirable to backport this patch to stable versions after some
period of observation in recent versions.
While giving a fresh try to `set server ssl` (which I wrote), I realised
the behavior is a bit inconsistent. Indeed when using this command over
a server with ssl enabled for the data path but also for the health
check path we have:
- data and health check done using tls
- emit `set server be_foo/srv0 ssl off`
- data path and health check path becomes plain text
- emit `set server be_foo/srv0 ssl on`
- data path becomes tls and health check path remains plain text
while I thought the end result would be:
- data path and health check path comes back in tls
In the current code we indeed erase all connections while deactivating,
but restore only the data path while activating. I made this mistake in
the past because I was testing with a case where the health check was in
plain text by default.
There are several ways to solve this issue. The cleanest one would
probably be to avoid changing the health check connection when we use
`set server ssl` command, and create a new command `set server
ssl-check` to change this. For now I assumed this would be ok to simply
avoid changing the health check path and be more consistent.
This patch tries to address that and also update the documentation. It
should not break the existing usage with health check on plain text, as
in this case they should have `no-check-ssl` in defaults. Without this
patch, it makes the command unusable in an env where you have a list of
servers to add along the way with an initial `server-template`, and all
using tls for the data and health check paths.
For 2.6 we should probably reconsider and add `set server ssl-check`
command for better granularity of cases.
If this solution is accepted, this patch should be backported up to >=
2.4.
The alternative solution was to restore the previous state, but I
believe this will create even more confusion in the future.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
hlua_httpclient_table_to_hdrs() does a lua_pop(L, 1) at the end of the
function; this is supposed to be done in the caller, and it is already
done in hlua_httpclient_send().
This call has the consequence of popping the next parameter of the
httpclient, ignoring it.
This patch fixes the issue by removing the lua_pop(L, 1).
Must be backported in 2.5.
Forbid the httpclient from sending an empty chunked request when there is
no data to send. It also happens when doing a simple GET.
Must be backported in 2.5.
htx_add_data() is able to partially consume data. However there is a bug
when the HTX buffer is empty. The data length is not properly
adjusted. Thus, if it exceeds the HTX buffer size, no block is added. To fix
the issue, the length is now adjusted first.
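The shape of the fix, sketched (simplified from the real function, and
assuming the htx_free_data_space() helper):

  /* clamp <len> first so that an empty HTX buffer is handled too */
  if (len > htx_free_data_space(htx))
      len = htx_free_data_space(htx);
  if (!len)
      return 0;
  /* ... then append the data block as usual ... */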
This patch must be backported as far as 2.0.
If we wakeup the I/O handler before the mux is started, it is possible
it has enough time to parse the ClientHello TLS message and update the
mux transport parameters, leading to a crash.
So, we initialize the ->qcc quic_conn struct member at the very last moment,
when the mux is fully initialized. The condition to wake up the I/O handler
from lstnr_rcv_pkt() is: xprt context and mux both initialized.
Note that if the xprt context is initialized, it implies its tasklet is
initialized. So, we no longer check this latter condition.
The stopping-list management introduced by commit d3a88c1c3 ("MEDIUM:
connection: close front idling connection on soft-stop") missed two
error paths in the H1 and H2 muxes. The effect is that if a stream
or HPACK table couldn't be allocated for these incoming connections,
we would leave with the connection freed still attached to the
stopping_list and it would never leave it, resulting in use-after-free
hence either a crash or a data corruption.
This is marked as medium as it only happens under extreme memory pressure
or when playing with tune.fail-alloc. Other stability issues remain in
such a case so that abnormal behaviors cannot be explained by this bug
alone.
This must be backported to 2.4.
Free the ssl_sock_ctx tasklet in quic_close() instead of
quic_conn_drop(). This ensures that the tasklet is destroyed safely by
the same thread.
This has no impact as the free operation was previously conducted with
care and should not be responsible for any crash.
Implement the emission of Retry packets. These packets are emitted in
response to Initial from clients without token. The token from the Retry
packet contains the ODCID from the Initial packet.
By default, Retry packet emission is disabled and the handshake can
continue without address validation. To enable Retry, a new bind option
has been defined named "quic-force-retry". If set, the handshake must be
conducted only after receiving a token in the Initial packet.
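A hypothetical configuration snippet (only the quic-force-retry keyword comes
from this patch, the rest of the line is illustrative):

  frontend quic_fe
      bind quic4@:4443 ssl crt /etc/haproxy/site.pem alpn h3 quic-force-retry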
Implement the parsing of token from Initial packets. It is expected that
the token contains a CID which is the DCID from the Initial packet
received from the client without a token, which triggers a Retry packet.
This CID is then used for transport parameters.
Note that at the moment Retry packet emission is not implemented. This
will be achieved in a following commit.
Implement a new QUIC TLS related function
quic_tls_generate_retry_integrity_tag(). This function can be used to
calculate the AEAD tag of a Retry packet.
It is expected that quic_dgram_read() returns the total number of bytes
read. Fix the return value when the read has been successful. This bug
has no impact as in the end the return value is not checked by the
caller.
The ->conn quic_conn struct member is a connection struct object which may be
released from several places. With this patch we do our best to stop dereferencing
this member as much as we can.
This commit was not correct:
"MINOR: quic: Only one CRYPTO frame by encryption level"
Indeed, when receiving CRYPTO data from the TLS stack for a packet number space,
there are rare cases where there are already frames other than CRYPTO data frames
in the packet number space, especially for the 01RTT packet number space. This
happens very often with quant as a client.
There may be remaining locations where the ->conn quic_conn struct member
is used. So let's reset this.
Add a trace to have an idea of when this connection is released.
The build on macos was broken by recent commit df91cbd58 ("MINOR: cpuset:
switch to sched_setaffinity for FreeBSD 14 and above."), let's move the
variable declaration inside the ifdef.
A regression was introduced by commit 140f1a58 ("BUG/MEDIUM: mux-h1: Fix
splicing by properly detecting end of message"). To detect end of the
outgoing message, when the content-length is announced, we count the amount of
data already sent. But only data really sent must be counted.
If the output buffer is full, we can fail to send data (fully or
partially). In this case, we must take care to only count sent
data. Otherwise we may think too much data were sent and an internal error
may be erroneously reported.
This patch should fix issues #1510 and #1511. It must be backported as far
as 2.4.
If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since we use
the SSL_get_servername function to return this SNI in the ssl_fc_sni
sample fetch, that means that in case of error, such as an SNI mismatch
with a frontend having the strict-sni option enabled, the sample fetch
would not work (making strict-sni related errors hard to debug).
This patch fixes that by storing the SNI as an ex_data in the SSL
context in case the ClientHello callback returns an error. This way the
sample fetch can fallback to getting the SNI this way. It will still
call the SSL_get_servername function first since it is the proper
way of getting a client's SNI when the handshake succeeded.
In order to avoid memory allocations at runtime in this heavily used
function, a new memory pool was created to store those client
SNIs. Its entry size is set to 256 bytes since SNIs can't be longer than
255 characters.
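For instance, the dedicated pool might be declared along these lines (the
identifiers are assumptions; TLSEXT_MAXLEN_host_name is 255):

  /* 255 chars max for an SNI, plus the trailing NUL byte */
  DECLARE_POOL(pool_head_ssl_client_sni, "ssl_client_sni",
               TLSEXT_MAXLEN_host_name + 1);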
This fixes GitHub #1484.
It can be backported in 2.5.
Since version 2.5 the master is automatically re-executed in wait-mode
when the config is successfully loaded, putting corner cases of the wait
mode in plain sight.
When using the -x argument and with the right timing, the master will
try to get the FDs again in wait mode even though it's not needed
anymore, which will harm the worker by removing its listeners.
However, if it fails (and it's supposed to, sometimes), the
master will exit with EXIT_FAILURE because it does not have the
MODE_MWORKER flag, but only the MODE_MWORKER_WAIT flag. With the
consequence of killing the workers.
This patch fixes the issue by restricting the use of _getsocks to some
modes.
This patch must be backported to every supported version, even though
the impact should be more harmless in versions prior to 2.5.
In fact we must look for the first packet with some ack-eliciting frames
in the packet number space tree to retransmit from. Obviously there
may already be retransmitted packets which are not deemed as lost and
still present in the packet number space tree for TX packets.
When receiving CRYPTO data from the TLS stack, concatenate the CRYPTO data
to the first allocated CRYPTO frame if present. This reduces by one the number
of handshake packets built for a connection with a standard size certificate.
Avoid closing idle connections if a soft stop is in progress.
By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry
on write errors, this can result in errors while haproxy is reloading.
Even though a proper implementation should retry on connection/write
errors, this option was introduced to support back compat with haproxy <
v2.4. Indeed before v2.4, we were waiting for a last request to be able
to add a "connection: close" header and advise the client to close the
connection.
In a real life example, this behavior was seen in AWS using the ALB in
front of a haproxy. The end result was ALB sending 502 during haproxy
reloads.
This patch was tested on haproxy v2.4, with a regular reload on the
process, and a constant trend of requests coming in. Before the patch,
we see regular 502 returned to the client; when activating the option,
the 502 disappear.
This patch should help fixing github issue #1506.
In order to unblock some v2.3 to v2.4 migrations, this patch should be
backported up to v2.4 branch.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
[wt: minor edits to the doc to mention other options to care about]
Signed-off-by: Willy Tarreau <w@1wt.eu>
When blocked by the anti-amplification limit, it is the responsibility of the
client to unblock it by sending new datagrams. On the server side, even if not
well parsed, such datagrams must trigger the arming of the PTO timer.
Switch back to QUIC_HS_ST_SERVER_HANDSHAKE state after a completed handshake
if acks must be sent.
Also ensure we build post handshake frames only one time without using prev_st
variable and ensure we discard the Handshake packet number space only one time.
We need to be able to decrypt late Handshake packets after the TLS secret
keys have been discarded. If not, the peer may resend Handshake packets which
have not been acknowledged. But for such packets, we discard the CRYPTO data.
According to RFC 9002 par. 6.2.3, when receiving duplicate Initial CRYPTO
data, a server may send a packet containing unacknowledged CRYPTO data
before the PTO expiry.
These tests were there to initiate PTO probing but they are not correct.
Furthermore they may break the PTO probing process and lead to useless packet
building.
RFC 9002 5.3. Estimating smoothed_rtt and rttvar:
MUST use the lesser of the acknowledgment delay and the peer's max_ack_delay
after the handshake is confirmed.
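In code terms this boils down to something like (a sketch, not the actual
haproxy function):

  /* RFC 9002 5.3: once the handshake is confirmed, clamp the ack delay
   * fed into the smoothed_rtt/rttvar computation */
  if (handshake_confirmed && ack_delay > peer_max_ack_delay)
      ack_delay = peer_max_ack_delay;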
When a filter is attached on a stream, the FLT_END analyser must not be
removed from the response channel on L7 retry. It is especially important
because CF_FLT_ANALYZE flag is still set. This means the synchronization
between the two sides when the filter ends can be blocked. Depending on the
timing, this can freeze the stream infinitely or lead to a spinning loop.
Note that the synchronization between the two sides at the end of the
analysis was introduced because the stream was reused in HTTP between two
transactions. But, since the HTX was introduced, a new stream is created for
each transaction. So it is probably possible to remove this step for 2.2 and
higher.
This patch must be backported as far as 2.0.
With this patch pool_evict_last_items() builds clusters of up to
CONFIG_HAP_POOL_CLUSTER_SIZE entries so that accesses to the shared
pools are reduced by CONFIG_HAP_POOL_CLUSTER_SIZE and the inter-
thread contention is reduced by as much.
Since previous patch we can forcefully evict multiple objects from the
local cache, even when evicting based on the LRU entries. Let's define
a compile-time configurable setting to batch releasing of objects. For
now we set this value to 8 items per round.
This is marked medium because eviction from the LRU will slightly change
in order to group the last items that are freed within a single cache
instead of accurately scanning only the oldest ones exactly in their
order of appearance. But this is required in order to evolve towards
batched removals.
We currently have two functions to evict cold objects from local caches:
pool_evict_from_local_cache() to evict from a single cache, and
pool_evict_from_local_caches() to evict oldest objects from all caches.
The new function pool_evict_last_items() focuses on scanning oldest
objects from a pool and releasing a predefined number of them, either
to the shared pool or to the system. For now they're evicted one at a
time, but the next step will consist in creating clusters.
In order to support batched allocations and releases, we'll need to
prepare chains of items linked together and that can be atomically
attached and detached at once. For this we implement a "down" pointer
in each pool_item that points to the other items belonging to the same
group. For now it's always NULL though freeing functions already check
them when trying to release everything.
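A sketch of the resulting item layout (only the two pointers described above;
anything else about the real struct is omitted):

  struct pool_item {
      struct pool_item *next; /* next cluster in the shared pool's list */
      struct pool_item *down; /* other items of the same group, NULL for now */
  };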
In pool_evict_from_local_cache() we used to check for room left in the
pool for each and every object. Now we compute the value before entering
the loop and keep in a local list what has to be released, and call
the OS-specific functions for the other ones.
It should already save some cycles since it's not needed anymore to
recheck for the pool's filling status. But the main expected benefit
comes from the ability to pre-construct a list of all releasable
objects, that will later help with grouping them.
In order to support batch allocation from/to shared pools, we'll have to
support a specific representation for pool objects. The new pool_item
structure will be used for this. For now it only contains a "next"
pointer that matches exactly the current storage model. The few functions
that deal with the shared pool entries were adapted to use the new type.
There is no functionality difference at this point.
Instead of letting pool_put_to_shared_cache() pass the object to the
underlying OS layer when there's no more room, let's have the caller
check if the pool is full and either call pool_put_to_shared_cache()
or call pool_free_nocache().
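The caller-side pattern then looks like this (a sketch; pool_full() stands
for whatever room check is actually performed):

  if (pool_full(pool))
      pool_free_nocache(pool, item);      /* no room: back to the OS layer */
  else
      pool_put_to_shared_cache(pool, item);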
Doing this appreciably simplifies the code as this function now only has
to deal with a pool and an item and only for cases where there are
local caches and shared caches. As the code was simplified and the
calls more isolated, the function was moved to pool.c.
Note that it's only called from pool_evict_from_local_cache{,s}() and
that a part of its logic might very well move there when dealing with
batches.
One of the thread scaling challenges nowadays for the pools is the
contention on the shared caches. There's never any situation where we
have a shared cache and no local cache anymore, so we can technically
afford to transfer objects from the shared cache to the local cache
before returning them to the user via the regular path. This adds a
little bit more work per object per miss, but will permit batch
processing later.
This patch simply moves pool_get_from_shared_cache() to pool.c under
the new name pool_refill_local_from_shared(), and this function does
not return anything but it places the allocated object at the head of
the local cache.
The POOL_LINK macro is now only used for debugging, and it still requires
ifdefs around, which needlessly complicates the code. Let's replace it
and the calling code with a new pair of macros: POOL_DEBUG_SET_MARK()
and POOL_DEBUG_CHECK_MARK(), that respectively store and check the pool
pointer in the extra location at the end of the pool. This removes 4
pairs of ifdefs in the middle of the code.
This practice relying on POOL_LINK() dates from the era where there were
no pool caches, but given that the structures are a bit more complex now
and that pool caches do not make use of this feature, it is totally
useless since released elements have already been overwritten, and yet
it complicates the architecture and prevents from making simplifications
and optimizations. Let's just get rid of this feature. The pointer to
the origin pool is preserved though, as it helps detect incorrect frees
and serves as a canary for overflows.
For an unknown reason, despite the comment stating that we were evicting
oldest objects first from the local caches, due to the use of LIST_NEXT,
the newest were evicted, since pool_put_to_cache() uses LIST_INSERT().
Some tests on 16 threads show that evicting oldest objects instead can
improve performance by 0.5-1% especially when using shared pools.
This patch unlinks and frees the ckch instance linked to a server during
the free of this server.
This could have locked certificates in a "Used" state when removing
servers dynamically from the CLI. And could provoke a segfault once we
try to dynamically update the certificate after that.
This must be backported as far as 2.4.
A lot of frees are missing in ssl_sock_free_srv_ctx(); this could result
in memory leaking when dynamically removing a server via the CLI.
This must be backported to every branch, by removing the fields that
do not exist in the previous branches.
This bug was introduced by d817dc73 ("MEDIUM: ssl: Load client
certificates in a ckch for backend servers") in which the creation of
the SSL_CTX for a server was moved to the configuration parser when
using a "crt" keyword instead of being done in ssl_sock_prepare_srv_ctx().
The patch 0498fa40 ("BUG/MINOR: ssl: Default-server configuration ignored by
server") made it worse by setting the same SSL_CTX for every servers
using a default-server. Resulting in any SSL option on a server applied
to every server in its backend.
This patch fixes the issue by reintroducing a string which stores the
path of the certificate inside the server structure, and loading the
certificate in ssl_sock_prepare_srv_ctx() again.
This is a quick fix to backport; a cleaner way can be achieved by always
creating the SSL_CTX in ssl_sock_prepare_srv_ctx() and splitting
properly the ssl_sock_load_srv_cert() function.
This patch fixes issue #1488.
Must be backported as far as 2.4.
This is a second help to dump loaded library names late at boot, once
external code has already been initialized. The purpose is to provide
a format that makes it easy to pass to "tar" to produce an archive
containing the executable and the list of dependencies. For example
if haproxy is started as "haproxy -f foo.cfg", a config check only
will suffice to quit before starting, "-q" will be used to disable
undesired output messages, and -dL will be used to dump libraries.
This will result in such a command to trivially produce a tarball
of loaded libraries:
./haproxy -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz
Many times core dumps reported by users who experience trouble are
difficult to exploit due to missing system libraries. Sometimes,
having just a list of loaded libraries and their respective addresses
can already provide some hints about some problems.
This patch makes a step in that direction by adding a new "show libs"
command that will try to enumerate the list of object files that are
loaded in memory, relying on the dynamic linker for this. It may also
be used to detect that some foreign code embarks other undesired libs
(e.g. some external Lua modules).
At the moment it's only supported on glibc when USE_DL is set, but it's
implemented in a way that ought to make it reasonably easy to be extended
to other platforms.
The approach used for skipping conn_cur in commit db2ab8218 ("MEDIUM:
stick-table: never learn the "conn_cur" value from peers") was wrong,
it only works with simple tables but as soon as frequency counters or
arrays are exchanged after conn_cur, the stream is desynchronized and
incorrect values are read. This is because the fields have a variable
length depending on their types and cannot simply be skipped by a
"continue" statement.
Let's change the approach to make sure we continue to completely parse
these local-only fields, and only drop the value at the moment we're
about to store them, since this is exactly the intent.
A simpler approach could consist in having two sets of stktable_data_ptr()
functions, one for retrieval and one for storage, and to make the store
function return a NULL pointer for local types. For now this doesn't
seem worth the trouble.
This fixes github issue #1497. Thanks to @brenc for the reproducer.
This must be backported to 2.5.
A subtle change of target address allocation was introduced with commit
68cf3959b ("MINOR: backend: rewrite alloc of stream target address") in
2.4. Prior to this patch, a target address was allocated by function
assign_server_address() only if none was previously allocated. After
the change, the allocation became unconditional. Most of the time it
makes no difference, except when we pass multiple times through
connect_server() with SF_ADDR_SET cleared.
The most obvious fix would be to avoid allocating that address there
when already set, but the root cause is that since introduction of
dynamically allocated addresses, the SF_ADDR_SET flag lies. It can
be cleared during redispatch or during a queue redistribution without
the address being released.
This patch instead gives back all its correct meaning to SF_ADDR_SET
and guarantees that when not set no address is allocated, by freeing
that address at the few places the flag is cleared. The flag could
even be removed so that only the address is checked but that would
require to touch many areas for no benefit.
The easiest way to test it is to send requests to a proxy with l7
retries enabled, which forwards to a server returning 500:
  defaults
      mode http
      timeout client 1s
      timeout server 1s
      timeout connect 1s
      retry-on all-retryable-errors
      retries 1
      option redispatch

  listen proxy
      bind *:5000
      server app 0.0.0.0:5001

  frontend dummy-app
      bind :5001
      http-request return status 500
Issuing "show pools" on the CLI will show that pool "sockaddr" grows
as requests are redispatched, and remains stable with the fix. Even
"ps" will show that the process' RSS grows by ~160B per request.
This fix will need to be backported to 2.4. Note that before 2.5,
there's no strm->si[1].dst, strm->target_addr must be used instead.
This addresses github issue #1499. Special thanks to Daniil Leontiev
for providing a well-documented reproducer.
Properly initialize the ssl_sock_ctx pointer in qc_conn_init(). This is
required to avoid setting an undefined pointer in qc.xprt_ctx if the
*xprt_ctx argument is NULL.
Implement a refcount on quic_conn instance. By default, the refcount is
0. Two functions are implemented to manipulate it.
* qc_conn_take() which increments the refcount
* qc_conn_drop() which decrements it. If the refcount is 0 *BEFORE*
the subtraction, the instance is freed.
The refcount is incremented on retrieve_qc_conn_from_cid() or when
allocating a new quic_conn in qc_lstnr_pkt_rcv(). It is decremented most
notably by the xprt.close operation and at the end of
qc_lstnr_pkt_rcv(). The increments/decrements should be conducted under
the CID lock to guarantee thread-safety.
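A minimal sketch of the two helpers under this convention (the <refcount>
field name is an assumption, and the CID lock is expected to be held by the
caller):

  static inline void qc_conn_take(struct quic_conn *qc)
  {
      qc->refcount++;
  }

  static inline void qc_conn_drop(struct quic_conn *qc)
  {
      /* freed when the count is 0 *before* the subtraction */
      if (qc->refcount == 0)
          quic_conn_free(qc);
      else
          qc->refcount--;
  }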
The timer task is attached to the connection-pinned thread. Only this
thread can delete it. With the future refcount implementation of
quic_conn, every thread can be responsible to remove the quic_conn via
quic_conn_free(). Thus, the timer task deletion is moved out of it, to
the calling function quic_close().
Big refactoring on xprt-quic. A lot of functions were using the
ssl_sock_ctx as argument to only access the related quic_conn. All these
arguments are replaced by a quic_conn parameter.
As a convention, the quic_conn instance is always the first parameter of
these functions.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances.
Remove the shortcut to use the INITIAL encryption level when removing
header protection on first connection packet.
This change is useful for the following change which removes
ssl_sock_ctx in argument lists in favor of the quic_conn instance.
Add a pointer in quic_conn to its related ssl_sock_ctx. This change is
required to avoid to use the connection instance to access it.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances. It will be notably
useful when the connection allocation will be delayed.
free_quic_conn_cids() was called in quic_build_post_handshake_frames()
if an error occurred. However, the only error is an allocation failure of
the CID, which does not require calling it.
This change is required for the future refcount implementation. The CID lock
will be moved from free_quic_conn_cids() to the caller.
When a quic_conn is found in the DCID tree, it can be removed from the
ODCID tree. However, this operation must absolutely be run under a
write-lock to avoid race conditions. To avoid using the lock too
frequently, node.leaf_p is checked. This value is set to NULL after
ebmb_delete.
Some applications may send some information about the reason why they decided
to close a connection. Add them to CONNECTION_CLOSE frame traces.
Take the opportunity of this patch to shorten some too long variable names
without any impact.
Add traces about important frame types to chunk_tx_frm_appendf()
and call this function for any type of frame when parsing a packet.
Move it to quic_frame.c
Since this case was already met previously with commit 655dec81b
("BUG/MINOR: backend: do not set sni on connection reuse"), let's make
sure that we don't change reused connection settings. This could be
generalized to most settings that are only in effect before the handshake
in fact (like set_alpn and a few other ones).
During 2.4-dev, support for malloc_trim() was implemented to ease
release of memory in a stopping process. This was found to be quite
effective and later backported to 2.3.7.
Then it was found that sometimes malloc_trim() could take a huge time
to complete if it was competing with other threads still allocating and
releasing memory, reason why it was decided in 2.5-dev to move
malloc_trim() under the thread isolation that was already in place in
the shared pool version of pool_gc() (this was commit 26ed1835).
However, other instances of pool_gc() that used to call malloc_trim()
were not updated since they were not using thread isolation. Currently
we have two other such instances, one for when there is absolutely no
pool and one for when there are only thread-local pools.
Christian Ruppert reported in GH issue #1490 that he's sometimes seeing
an old process die upon reload when upgrading from 2.3 to 2.4, and
that this happens inside malloc_trim(). The problem is that since
2.4-dev11 with commit 0bae07592 we detect modern libc that provide a
faster thread-aware allocator and do not maintain shared pools anymore.
As such we're using again the simpler pool_gc() implementations that do
not use thread isolation around the malloc_trim() call.
All this code was cleaned up recently and the call moved to a new
function trim_all_pools(). This patch implements explicit thread isolation
inside that function so that callers do not have to care about this
anymore. The thread isolation is conditional so that this doesn't affect
the one already in place in the larger version of pool_gc(). This way it
will solve the problem for all callers.
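Sketched, the conditional isolation described above looks like this (close to
what the commit describes, details may differ):

  void trim_all_pools(void)
  {
      int isolated = thread_isolated();

      if (!isolated)
          thread_isolate();

      malloc_trim(0);

      if (!isolated)
          thread_release();
  }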
This patch must be backported as far as 2.3. It may possibly require
some adaptations. If trim_all_pools() is not present, copy-pasting the
tests in each version of pool_gc() will have the same effect.
Thanks to Christian for his detailed report and his testing.
This is the same treatment for bidi and uni STREAM frames. This is
duplicated code which should be removed by building a function for both
these types of streams.
The connection instance has been replaced by a quic_conn as first
argument to QUIC traces. It is possible to report the quic_conn instance
in the qc_new_conn(), contrary to the connection which is not
initialized at this stage.
Replace the connection instance for first argument of trace callback by
a quic_conn instance. The QUIC trace module is properly initialized with
the first argument referring to a quic_conn.
Replace every connection instances in TRACE_* macros invocation in
xprt-quic by its related quic_conn. In some cases, the connection is
still used to access the quic_conn. It may cause some problems in the
future when the connection will be completely separated from the xprt
layer.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances.
Prepare trace support for quic_conn instances as argument. This will be
used by the xprt-quic layer in replacement of the connection.
This commit is part of the rearchitecture of xprt-quic layers and the
separation between xprt and connection instances.
Add const qualifier on arguments of several dump functions used in the
trace callback. This is required to be able to replace the first trace
argument by a quic_conn instance. The first argument is a const pointer
and so the members accessed through it must also be const.
Add a new member in ssl_sock_ctx structure to reference the quic_conn
instance if used in the QUIC stack. This member is initialized during
qc_conn_init().
This is needed to be able to access the quic_conn without relying on
the connection instance. This commit is part of the rearchitecture of
xprt-quic layers and the separation between xprt and connection
instances.
Move qcc_get_qcs() function from xprt_quic.c to mux_quic.c. This
function is used to retrieve the qcs instance from a qcc with a stream
id. This clearly belongs to the mux-quic layer.
Use the convention of naming quic_conn instance as qc to not confuse it
with a connection instance. The changes occurred for qc_parse_pkt_frms(),
qc_build_frms() and qc_do_build_pkt().
The QUIC connection I/O handler qc_conn_io_cb() could be called just after
qc_pkt_insert() has inserted a packet in its tree, and before qc_pkt_insert()
has incremented the reference counter of this packet. As qc_conn_io_cb()
decrements this counter, the packet could be released before qc_pkt_insert()
gets to increment the counter, leading to possible crashes when trying to do so.
So, let's make qc_pkt_insert() increment this counter before inserting the packet
in its tree. No need to lock anything for that.
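The fixed ordering, sketched with simplified names (the refcount helper and
tree field are assumptions):

  static void qc_pkt_insert(struct quic_rx_packet *pkt, struct eb_root *pkts)
  {
      quic_rx_packet_refinc(pkt);       /* take the reference first... */
      eb64_insert(pkts, &pkt->pn_node); /* ...then publish the packet */
  }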
Add a function to process all STREAM frames received and ordered
by their offset (qc_treat_rx_strm_frms()) and modify
qc_handle_bidi_strm_frm() consequently.
There were empty lines in the output of the CLI's "show ssl
ocsp-response" command (after the certificate ID and between two
certificates). This patch removes them since an empty line should mark
the end of the output.
Must be backported in 2.5.
With the DCID refactoring, the locking is more centralized. It is
possible to simplify the code for removal of a quic_conn from the ODCID
tree.
This operation can be conducted as soon as the connection has been
retrieved from the DCID tree, meaning that the peer now uses the final
DCID. Remove the bit to flag a connection for removal and just uses
ebmb_delete() on each successful lookup on the DCID tree. If the
quic_conn has already been removed, it is just a noop thanks to
eb_delete() implementation.
A new function named qc_retrieve_conn_from_cid() now contains all the
code to retrieve a connection from a DCID. It handles all types of packets
and centralizes the locking on the ODCID/DCID trees.
This simplifies the qc_lstnr_pkt_rcv() function.
If a UDP datagram contains multiple QUIC packets, they must all use the
same DCID. The datagram context is used partly for this.
To ensure this, a comparison was made on the dcid_node of the DCID tree. As
this is a comparison based on pointer addresses, it can be faulty when
nodes are removed/readded on the same pointer address.
Replace this comparison by a proper comparison on the DCID data itself.
To this end, the dgram_ctx structure now contains a quic_cid member.
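The check now bears on the CID value itself, along these lines (sketch):

  /* all packets of a datagram must share the DCID of the first one */
  if (pkt->dcid.len != dgram_ctx->dcid.len ||
      memcmp(pkt->dcid.data, dgram_ctx->dcid.data, pkt->dcid.len))
      goto drop; /* mismatching DCID inside the same datagram */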
For first Initial packets, the socket source address is concatenated to
the DCID. This is used to be able to differentiate possible collisions
between several clients which used the same ODCID.
Refactor the code to manage DCID and the concatenation with the address.
Before this, the concatenation was done on the quic_cid struct and its
<len> field incremented. In the code it is difficult to differentiate a
normal DCID from a DCID with the address concatenated.
A new field <addrlen> has been added in the quic_cid struct. The <len>
field now only contains the size of the QUIC DCID. the <addrlen> is
first initialized to 0. If the address is concatenated, it will be
updated with the size of the concatenated address. This now means we
have to explicitly use either cid.len or cid.len + cid.addrlen to
access the DCID or the DCID + the address. The code should be clearer
thanks to this.
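The struct then looks roughly like this (a sketch; the exact buffer size is
an assumption):

  struct quic_cid {
      /* CID value, optionally followed by the concatenated address */
      unsigned char data[QUIC_CID_MAXLEN + sizeof(struct sockaddr_in6)];
      unsigned char len;     /* size of the QUIC DCID itself */
      unsigned char addrlen; /* 0, or size of the concatenated address */
  };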
The field <odcid_len> in quic_rx_packet struct is now useless and has
been removed. However, a new parameter must be added to the
qc_new_conn() function to specify the size of the ODCID addrlen.
In the haproxy implementation, generated DCIDs are 8 bytes long, the minimal
value allowed by the specification. Rename the constant representing
this size to inform that this is haproxy specific.
All operation on the ODCID/DCID trees must be conducted under a
read-write lock. Add a missing read-lock on the lookup operation inside
listener handler.
The packet number space flags were mixed with the connection level flags.
This led to ACKs being sent at the connection level without regard to
the underlying packet number space. But we want to be able to acknowledge
packets for a specific packet number space.
This is required if we do not want to make haproxy crash during zerortt
interop runner test which makes a client open multiple streams with
long request paths.
A client sends a 0-RTT data packet after an Initial one in the same datagram.
We must be able to parse such packets just after having parsed the Initial packets.