Regroup some cleanup operations inside a new function, qcs_free(). It can be
used for all streams, both through qcs_destroy() and for uni-directional
streams.
The quic_stream frame stores the qcs instance. On ACK parsing, the qcs is
accessed to clear the stream buffer. This can cause a segfault if the MUX or
the qcs has already been released.
Consider the following scenario :
1. a STREAM frame is generated by the MUX
   * the transport layer emits the frame with PKN=1
   * the upper layer has finished the transfer, so the related qcs is detached
2. the transport layer re-emits the frame with PKN=2 because the ACK was not
   received
3. the ACK for PKN=1 is received and the stream buffer is cleared
   * at this stage, the qcs may be freed by the MUX as it is detached
4. the ACK for PKN=2 is received
   * the qcs of the STREAM frame is dereferenced, which leads to a crash
To prevent this, the qcs is never accessed through the quic_stream frame
during ACK parsing. Instead, a lookup is performed on the MUX streams tree.
If the MUX has already been released, no lookup is done. These checks prevent
a possible segfault.
This change may have a performance impact since we are now forced to perform
a tree lookup operation. If this proves to be the case, an alternative
solution may be to implement a refcount on qcs instances.
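
Below is a minimal sketch of the lookup approach described above; it is
illustrative only, and the helper name qcs_from_ack as well as the exact
field names are assumptions, not the actual patch:

    #include <haproxy/mux_quic.h>

    /* Resolve the qcs from the MUX stream tree instead of trusting the
     * pointer stored in the quic_stream frame. Returns NULL when the MUX
     * or the stream has already been released, in which case there is
     * nothing left to clear.
     */
    static struct qcs *qcs_from_ack(struct qcc *qcc, uint64_t stream_id)
    {
            struct eb64_node *node;

            if (!qcc)       /* MUX already released */
                    return NULL;

            node = eb64_lookup(&qcc->streams_by_id, stream_id);
            if (!node)      /* stream already freed after being detached */
                    return NULL;

            return eb64_entry(node, struct qcs, by_id);
    }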
The CertCache.set() function allows to update an SSL certificate file
stored in the memory of the HAProxy process. This function does the same
as "set ssl cert" + "commit ssl cert" over the CLI.
This could be used to update the crt and key, as well as the OCSP, the
SCTL, and the OCSP issuer.
The implementation yields every 10 ckch instances, the same way
"commit ssl cert" does.
Simplify the "cert_exts" array which is used to select the parsing function
depending on the extension.
It now uses a pointer to an array element instead of an index, which is
simpler for the declaration of the array.
This also allows multiple extensions to use the same type.
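
A small self-contained sketch of the idea, using hypothetical names
(cert_ext, parse_pem, parse_ocsp) rather than the real haproxy table:

    #include <string.h>

    /* One entry per recognized extension; several extensions may share the
     * same parsing function, and the caller keeps a pointer to the matching
     * entry instead of an index into the array.
     */
    struct cert_ext {
            const char *ext;
            int (*parse)(const char *path);
    };

    static int parse_pem(const char *path)  { (void)path; return 0; }
    static int parse_ocsp(const char *path) { (void)path; return 0; }

    static const struct cert_ext cert_exts[] = {
            { "",     parse_pem  },
            { "key",  parse_pem  },  /* same parser as the bare certificate */
            { "ocsp", parse_ocsp },
            { NULL,   NULL       },
    };

    static const struct cert_ext *find_ext(const char *ext)
    {
            const struct cert_ext *e;

            for (e = cert_exts; e->ext; e++)
                    if (strcmp(e->ext, ext) == 0)
                            return e;  /* pointer to the element, not an index */
            return NULL;
    }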
Extract the code that replaces the ckch_store and its dependencies into
the ckch_store_replace() function.
This function must be used under the global ckch lock.
It frees everything related to the old ckch_store.
Note that we cannot reuse dump_act_rules() because the output format
may be adjusted depending on the call place (this is also used from
haproxy -vv). The principle is the same however.
There are very few but they're registered from constructors, hence
in a random order. The scope had to be copied when retrieving the
next keyword. Note that this also has the effect of listing them
sorted in haproxy -vv.
Like for previous keyword classes, we're sorting the output. But this time,
since it's not trivial to do with multiple words, we proceed like the help
command instead: entries are sorted on their usage message when present,
falling back to the first word of the command when there is no usage message
(e.g. the "help" command).
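
As a purely illustrative sketch (not the actual patch), the sort key can be
expressed as a tiny helper; the struct cli_kw field names below are assumed:

    #include <haproxy/cli-t.h>

    /* Key used to order CLI keywords: the usage message when present,
     * otherwise the first word of the command.
     */
    static const char *cli_kw_sort_key(const struct cli_kw *kw)
    {
            return kw->usage ? kw->usage : kw->str_kw[0];
    }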
It's much more convenient to sort these keywords on output to detect
changes, and it's easy to do. The patch looks big but most of it is
only caused by an indent change in the loop, as "git diff -b" shows.
The output produced by dump_registered_keywords() really deserves to be
sorted in order to ease comparisons. The function now implements a tiny
sorting mechanism that's suitable for each two-level list, and makes
use of dump_act_rules() to dump rulesets. The code is not significantly
more complicated and some parts (e.g. options) could even be factored.
The output is much more exploitable to detect differences now.
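
A self-contained sketch of such a tiny sorting mechanism (illustrative only):
each pass picks the smallest name greater than the last one printed, so the
list is dumped in order without allocating or copying anything:

    #include <stdio.h>
    #include <string.h>

    struct kw_item {
            const char *name;
            struct kw_item *next;
    };

    static void dump_sorted(const struct kw_item *list)
    {
            const char *prev = NULL;

            while (1) {
                    const struct kw_item *it;
                    const char *best = NULL;

                    for (it = list; it; it = it->next) {
                            if ((!prev || strcmp(it->name, prev) > 0) &&
                                (!best || strcmp(it->name, best) < 0))
                                    best = it->name;
                    }
                    if (!best)
                            break;
                    printf("%s\n", best);
                    prev = best;
            }
    }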
The new function dump_act_rules() now dumps the list of actions supported
by a ruleset. These actions are alphanumerically sorted first so that the
produced output is easy to compare.
When trying to sort sets of strings, it's often necessary to compare 3
strings to see whether the chosen one fits between the two others. That's
what this function does, in addition to being able to ignore extremities when
they're NULL (typically on the first iteration).
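
A minimal sketch of such a helper; the name str_between is illustrative and
not necessarily the one introduced by the patch:

    #include <string.h>

    /* Returns non-zero if <mid> sorts between <low> and <high>. A NULL
     * extremity is ignored, which is convenient on the first iteration when
     * no bound is known yet.
     */
    static int str_between(const char *low, const char *mid, const char *high)
    {
            return (!low  || strcmp(low, mid)  <= 0) &&
                   (!high || strcmp(mid, high) <= 0);
    }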
Similar to the sample fetch keywords, let's also list the converter
keywords. They're much simpler since there's no compatibility matrix.
Instead the input and output types are listed. This is called by
dump_registered_keywords() for the "cnv" keywords class.
New function smp_dump_fetch_kw() lists registered sample fetch keywords
with their compatibility matrix, mandatory and optional argument types,
and output types. It's called from dump_registered_keywords() with class
"smp".
New function acl_dump_kwd() dumps the registered ACL keywords and their
sample-fetch equivalent to stdout. It's called by dump_registered_keywords()
for keyword class "acl".
New function cli_list_keywords() scans the list of registered CLI keywords
and dumps them on stdout. It's now called from dump_registered_keywords()
for the class "cli".
Some keywords are valid for the master, they'll be suffixed with
"[MASTER]". Others are valid for the worker, they'll have "[WORKER]".
Those accessible only in expert mode will show "[EXPERT]" and the
experimental ones will show "[EXPERIM]".
When no output stream is passed, stdout is used with one entry per line,
and this is called from dump_registered_services() when passed the class
"svc".
When passing a NULL output buffer the function will now dump to stdout
with a more compact format that is more suitable for machine processing.
An entry was added to dump_registered_keywords() to call it when the
keyword class "flt" is requested.
All registered config keywords that are valid in the config parser are
dumped to stdout organized like the regular sections (global, listen,
etc). Some keywords that are known to only be valid in frontends or
backends will be suffixed with [FE] or [BE].
All regularly registered "bind" and "server" keywords are also dumped, one
per "bind" or "server" line. Those depending on ssl are listed after the
"ssl" keyword. Doing so required exporting the listener and server keyword
lists, which were static.
The function is called from dump_registered_keywords() for keyword
class "cfg".
It's difficult, from outside haproxy, to detect the supported keywords and
syntax. Interestingly, many of our modern keywords can be enumerated since
they're registered from constructors, so it's not very hard to list most of
them.
This patch creates some basic infrastructure to support dumping existing
keywords from different classes on stdout. The format will differ depending
on the classes, but the idea is that the output could easily be passed to
a script that generates some simple syntax highlighting rules, completion
rules for editors, syntax checkers or config parsers.
The principle chosen here is that if "-dK" is passed on the command-line,
at the end of the parsing the registered keywords will be dumped for the
requested classes passed after "-dK". Special name "help" will show known
classes, while "all" will execute all of them. The reason for doing that
after the end of config processing is that it will also enumerate
internally-generated keywords, Lua ones, or even those loaded from external
code (e.g. if an add-on is loaded using LD_PRELOAD). A typical way to
call this with a valid config would be:
./haproxy -dKall -q -c -f /path/to/config
If there's no config available, feeding /dev/null will also do the job,
though it will not be able to detect dynamically created keywords, of
course.
This patch also updates the management doc.
For now nothing but the help is listed, various subsystems will follow
in subsequent patches.
In 2.4, two commits added support for sample fetch calls from new config and
CLI contexts, but these were not added to the visible names, which may
possibly cause "(null)" to appear in some error messages.
The commits in question were:
db5e0dbea ("MINOR: sample: add a new CLI_PARSER context for samples")
f9a7a8fd8 ("MINOR: sample: add a new CFG_PARSER context for samples")
This patch needs to be backported where these are present (2.4 and above).
211ea252d ("BUG/MINOR: logs: fix logsrv leaks on clean exit") introduced a
regression because the list element of a new log server is not initialized.
Thus HAProxy crashes on the error path when an invalid log server is released.
This patch should fix issue #1636. It must be backported if the above commit
is backported. For now, it is 2.6-specific and no backport is needed.
When the destination buffer is full while there are still data to parse, the
h1s must be marked as congested to be able to restart the parsing later. This
works for headers and data parsing. But for trailers parsing, we fail to do so
when the buffer is already full before the trailers are parsed. In this case,
the trailers parsing is skipped but the h1s is not marked as congested. This
is required to make sure the mux is woken up to restart the parsing when some
room is made in the buffer.
Because of this bug, the message processing may hang until a timeout is
triggered. Note that for 2.3 and 2.2, the EOM processing is buggy too, for the
same reason. It should be fixed on these versions as well. On 2.0, only
trailers parsing is affected.
This patch must be backported as far as 2.0. On 2.3 and 2.2, the EOM parsing
must be fixed too.
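
A hedged sketch of the kind of check described above; the flag name is an
assumption and this is not the exact mux-h1 code:

    /* No room left in the destination buffer before parsing the trailers:
     * skip the parsing but remember that the stream is congested so the mux
     * is woken up once some room is freed.
     */
    if (b_full(dstbuf)) {
            h1s->flags |= H1S_F_RX_CONGESTED;  /* assumed flag name */
            goto out;
    }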
h1_parse_msg_hdrs() and h1_parse_msg_tlrs() may return negative values if
the parsing fails or if more space is needed in the destination buffer. When
h1-htx was changed, the H1 mux was updated accordingly but not the FCGI mux.
Thus if a negative value is returned, it is ignored and cast to a size_t,
leading to an integer overflow on the <ofs> value, which is used to track the
position in the RX buffer.
This patch must be backported as far as 2.2.
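
A small self-contained example of this overflow class (not the FCGI mux code
itself):

    #include <stdio.h>

    int main(void)
    {
            int ret = -1;   /* parser signals an error or a full buffer */
            size_t ofs = 0;

            /* buggy pattern: the signed error value silently wraps around */
            ofs += (size_t)ret;
            printf("buggy ofs = %zu\n", ofs);  /* prints SIZE_MAX */

            /* fixed pattern: test the signed value before accumulating */
            ret = -1;
            ofs = 0;
            if (ret < 0)
                    printf("parsing failed or needs more room\n");
            else
                    ofs += (size_t)ret;
            return 0;
    }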
Reverts 75df9d7a7 ("DOC: explain HTTP2 timeout behavior") since H2
connections now respect "timeout http-keep-alive".
If commit 15a4733d5d ("BUG/MEDIUM: mux-h2: make use of http-request
and keep-alive timeouts") is backported, this DOC change needs to
be backported along with it.
Released version 2.6-dev4 with the following main changes :
- BUG/MEDIUM: httpclient: don't consume data before it was analyzed
- CLEANUP: htx: remove unused co_htx_remove_blk()
- BUG/MINOR: httpclient: consume partly the blocks when necessary
- BUG/MINOR: httpclient: remove the UNUSED block when parsing headers
- BUG/MEDIUM: httpclient: must manipulate head, not first
- REGTESTS: fix the race conditions in be2hex.vtc
- BUG/MEDIUM: quic: Blocked STREAM when retransmitted
- BUG/MAJOR: quic: Possible crash with full congestion control window
- BUG/MINOR: httpclient/lua: stuck when closing without data
- BUG/MEDIUM: applet: Don't call .release callback function twice
- BUG/MEDIUM: cli/debug: Properly get the stream-int in all debug I/O handlers
- BUG/MEDIUM: sink: Properly get the stream-int in appctx callback functions
- DEV: udp: switch parser to getopt() instead of positional arguments
- DEV: udp: add support for random packet corruption
- MINOR: server: export server_parse_sni_expr() function
- BUG/MINOR: httpclient: send the SNI using the host header
- BUILD: httpclient: fix build without SSL
- BUG/MINOR: server/ssl: free the SNI sample expression
- BUG/MINOR: logs: fix logsrv leaks on clean exit
- MINOR: actions: add new function free_act_rule() to free a single rule
- BUG/MINOR: tcp-rules: completely free incorrect TCP rules on error
- BUG/MINOR: http-rules: completely free incorrect TCP rules on error
- BUG/MINOR: httpclient: only check co_data() instead of HTTP_MSG_DATA
- BUG/MINOR: httpclient: process the response when received before the end of the request
- BUG/MINOR: httpclient: CF_SHUTW_NOW should be tested with channel_is_empty()
- CI: github actions: switch to LibreSSL-3.5.1
- BUG/MEDIUM: mux-h1: only turn CO_FL_ERROR to CS_FL_ERROR with empty ibuf
- BUG/MEDIUM: stream-int: do not rely on the connection error once established
- BUG/MEDIUM: trace: avoid race condition when retrieving session from conn->owner
- MEDIUM: mux-h2: slightly relax timeout management rules
- BUG/MEDIUM: mux-h2: make use of http-request and keep-alive timeouts
- BUG/MINOR: rules: Initialize the list element when allocating a new rule
- BUG/MINOR: http-rules: Don't free new rule on allocation failure
- DEV: coccinelle: Fix incorrect replacement in ist.cocci
- CLEANUP: Reapply ist.cocci with `--include-headers-for-types --recursive-includes`
- DEV: coccinelle: Add a new pattern to ist.cocci
- CLEANUP: Reapply ist.cocci
- REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
- MINOR: quic: Code factorization (TX buffer reuse)
- CLEANUP: quic: "largest_acked_pn" pktns struc member moving
- MEDIUM: quic: Limit the number of ACK ranges
- MEDIUM: quic: Rework of the TX packets memory handling
- BUG/MINOR: quic: Possible crash in parse_retry_token()
- BUG/MINOR: quic: Possible leak in quic_build_post_handshake_frames()
- BUG/MINOR: quic: Unsent frame because of qc_build_frms()
- BUG/MINOR: mux-quic: Access to empty frame list from qc_send_frames()
- BUG/MINOR: mux-quic: Missing I/O handler events initialization
- BUG/MINOR: quic: Missing TX packet initializations
- BUG/MINOR: quic: 1RTT packets ignored after mux was released
- BUG/MINOR: quic: Incorrect peer address validation
- BUG/MINOR: quic: Non initialized variable in quic_build_post_handshake_frames()
- BUG/MINOR: quic: Wrong TX packet related counters handling
- MEDIUM: mqtt: support mqtt_is_valid and mqtt_field_value converters for MQTTv3.1
- DOC: config: Explictly add supported MQTT versions
- MINOR: quic: Add traces about stream TX buffer consumption
- MINOR: quic: Add traces in qc_set_timer() (scheduling)
- CLEANUP: mux-quic: change comment style to not mess with git conflict
- CLEANUP: mux-quic: adjust comment for coding-style
- MINOR: mux-quic: complete trace when stream is not found
- MINOR: mux-quic: add comments for send functions
- MINOR: mux-quic: use shorter name for flow-control fields
- MEDIUM: mux-quic: respect peer bidirectional stream data limit
- MEDIUM: mux-quic: respect peer connection data limit
- MINOR: mux-quic: support MAX_STREAM_DATA frame parsing
- MINOR: mux-quic: support MAX_DATA frame parsing
- BUILD: stream-int: avoid a build warning when DEBUG is empty
- BUG/MINOR: quic: Wrong buffer length passed to generate_retry_token()
- BUG/MINOR: tools: fix url2sa return value with IPv4
- MINOR: mux-quic: convert fin on push-frame as boolean
- BUILD: quic: add missing includes
- REORG: quic: use a dedicated quic_loss.c
- MINOR: mux-quic: declare the qmux trace module
- MINOR: mux-quic: replace printfs by traces
- MINOR: mux-quic: add trace event for frame sending
- MINOR: mux-quic: add trace event for qcs_push_frame
- MINOR: mux-quic: activate qmux traces on stdout via macro
- BUILD: qpack: fix unused value when not using DEBUG_HPACK
- CLEANUP: qpack: suppress by default stdout traces
- CLEANUP: h3: suppress by default stdout traces
- BUG/MINOR: tools: url2sa reads too far when no port nor path
url2sa() still has an unfortunate case where it reads 1 byte too far; it
happens when no port nor path is specified in the URL, and could crash if the
byte after the URL is not allocated (mostly with ASAN).
This case is never triggered in old versions of haproxy because url2sa() is
used with buffers which are way bigger than the URL. It is only triggered with
the httpclient.
Should be backported to every stable branch.
The H3_DEBUG definition is removed from h3.c, similarly to commit
d96361b270
    CLEANUP: qpack: suppress by default stdout traces
Also, a plain fprintf in h3_snd_buf has been made conditional on the
H3_DEBUG definition.
These changes reduce the default output on stdout with QUIC traffic.
Remove the definition of DEBUG_HPACK in qpack-dec.c, which forces the QPACK
decoding traces on stderr. Also rename the macro to a dedicated one for QPACK
decoding, DEBUG_QPACK.
If the macro is not defined, some local variables are flagged as unused
by the compiler. Fix this by using the __maybe_unused attribute.
For now, the macro is defined in qpack-dec.c. However, this will change so
as not to mess up the stderr output of haproxy with QUIC traffic.
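
A minimal self-contained illustration of the fix; __maybe_unused is defined
in haproxy's compiler.h, and the function and variable names below are only
examples:

    #include <stdio.h>

    #define __maybe_unused __attribute__((unused))  /* as in compiler.h */

    static void decode_block(const unsigned char *buf, size_t len)
    {
            /* only read when the debug macro is defined: without the
             * attribute, -Wunused-variable would warn on non-debug builds
             */
            size_t __maybe_unused total = len;

    #ifdef DEBUG_QPACK
            fprintf(stderr, "decoding %zu bytes\n", total);
    #endif
            (void)buf;
    }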
This commit is similar to the following one :
commit 118b2cbf84
MINOR: quic: activate QUIC traces at compilation
If the macro ENABLE_QUIC_STDOUT_TRACES is defined, qmux traces are
outputted automatically on stdout. This is useful for the haproxy-qns
interop docker image.
Add a new qmux trace event QMUX_EV_QCS_PUSH_FRM. Its only purpose is to
display the meaningful result of a qcs_push_frame invocation.
A dedicated struct qcs_push_frm_trace_arg is defined to pass a series of
extra args for the trace output.
Define a new qmux event QMUX_EV_SEND_FRM. This allows passing a
quic_frame as an extra argument. Depending on the frame type, a special
format can be used to log the frame content.
Currently this event is only used in qc_send_max_streams. Thus the
handler is only able to handle MAX_STREAMS frames.
Convert all printfs in the mux-quic code into traces.
Note that some meaningful printfs were not converted because they use
extra args in a format string. This is the case inside qcs_push_frame
and qc_send_max_streams. A dedicated trace event should be implemented
for them to be able to display the extra arguments.
Declare a new trace module for mux-quic named qmux. It will be used to
convert all printfs to regular traces. The handler qmux_trace can use a
connection and a qcs instance as extra arguments.
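
A simplified sketch following haproxy's usual trace-source registration
pattern; the exact events, callbacks and fields of the real qmux declaration
may differ:

    #include <haproxy/trace.h>

    static void qmux_trace(enum trace_level level, uint64_t mask,
                           const struct trace_source *src,
                           const struct ist where, const struct ist func,
                           const void *a1, const void *a2,
                           const void *a3, const void *a4);

    static struct trace_source trace_qmux = {
            .name = IST("qmux"),
            .desc = "QUIC multiplexer",
            .default_cb = qmux_trace,
            .report_events = ~0,
    };

    #define TRACE_SOURCE &trace_qmux
    INITCALL1(STG_REGISTER, trace_register_source, TRACE_SOURCE);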
Move all inline functions using traces from quic_loss.h to a dedicated
object file. This allows the TRACE_SOURCE macro definition to be removed
from the include file.
This change is required to be able to define another TRACE_SOURCE inside
mux_quic.c for a dedicated trace module.
Fix 8a91374 ("BUG/MINOR: tools: url2sa reads ipv4 too far") introduced a
regression in the value returned when parsing an ipv4 host.
The consumed length is supposed to extend up to the first character of
the path, but it is no longer computed correctly and returns the length
minus the size of the scheme.
Fix the issue by reverting 'curr' and 'url' to what they were before that
patch.
Must be backported in every stable branch where the 8a91374 patch was
backported.
After having consumed <i> bytes from <buf>, the remaining available room to be
passed to generate_retry_token() is sizeof(buf) - i.
This bug could easily be reproduced with quic-go as a client, which chooses a
random value as ODCID length.
When no DEBUG_STRICT is enabled, we get this build warning:
src/stream_interface.c: In function 'stream_int_chk_snd_conn':
src/stream_interface.c:1198:28: warning: unused variable 'conn' [-Wunused-variable]
1198 | struct connection *conn = cs_conn(cs);
| ^~~~
This was the result of the simplification of the code in commit
d1480cc8a ("BUG/MEDIUM: stream-int: do not rely on the connection error
once established") which removed the last user of this variable outside
of a BUG_ON().
If the patch above is backported, this one should be backported as well.
This commit is similar to the previous one but with MAX_DATA frames.
This allows increasing the connection-level flow-control limit. If the
connection was blocked due to the QC_CF_BLK_MFCTL flag, the flag is now reset.
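
A hedged sketch of the handling described above; only QC_CF_BLK_MFCTL comes
from the text, the qcc field names and the wakeup call are assumptions:

    /* On a MAX_DATA frame, raise the connection-level credit and unblock
     * the mux if it was flow-control blocked.
     */
    if (max > qcc->tx.max_data) {
            qcc->tx.max_data = max;

            if (qcc->flags & QC_CF_BLK_MFCTL) {
                    qcc->flags &= ~QC_CF_BLK_MFCTL;
                    tasklet_wakeup(qcc->wait_event.tasklet);  /* assumed */
            }
    }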