When a bind line is configured without the "ssl" keyword, a warning is
emitted and a crash happens at runtime:
bind quic4@:4449 crt rsa+dh2048.pem alpn h3 allow-0rtt
[WARNING] (17867) : config : Proxy 'decrypt': A certificate was specified but SSL was not enabled on bind 'quic4@:4449' at [quic-mini.cfg:24] (use 'ssl').
Let's automatically turn SSL on when QUIC is detected, as it doesn't
exist without SSL anyway. It solves the runtime issue, and also makes
sure it is not possible to accidentally configure a quic listener with
no certificate since the error is detected via the SSL checks.
A warning is emitted in this case, to encourage the user to fix the
configuration so that it remains reviewable.
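For instance, with the example above, both of the following lines now
work; the first one still triggers the warning encouraging the explicit
"ssl" keyword:

    bind quic4@:4449 crt rsa+dh2048.pem alpn h3 allow-0rtt
    bind quic4@:4449 ssl crt rsa+dh2048.pem alpn h3 allow-0rtt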
When no mux protocol is configured on a bind line with "proto", and the
transport layer is QUIC, right now mux_h1 is being used, leading to a
crash.
Now when the transport layer of the bind line is already known as being
QUIC, let's automatically try to configure the QUIC mux, so that users
do not have to enter "proto quic" all the time while it's the only
supported option. This means that the following line now works:
bind quic4@:4449 ssl crt rsa+dh2048.pem alpn h3 allow-0rtt
Till now, placing "proto h1" or "proto h2" on a "quic" bind or placing
"proto quic" on a TCP line would parse fine but would crash when traffic
arrived. The reason is that there's a strong binding between the QUIC
mux and QUIC transport and that they're not expected to be called with
other types at all.
Now that we have the mux's type and we know the type of the protocol used
on the bind conf, we can perform such checks. This now returns:
[ALERT] (16978) : config : frontend 'decrypt' : stream-based MUX protocol 'h2' is incompatible with framed transport of 'bind quic4@:4448' at [quic-mini.cfg:27].
[ALERT] (16978) : config : frontend 'decrypt' : frame-based MUX protocol 'quic' is incompatible with stream transport of 'bind :4448' at [quic-mini.cfg:29].
This config tightening is only tagged MINOR: such a config, despite not
reporting an error, cannot work at all, so even if the change breaks
experimental configs, those were just waiting for a single connection to
crash.
In order to be able to check compatibility between muxes and transport
layers, we'll need a new flag to tag muxes that work on framed transport
layers like QUIC. Only QUIC has this flag now.
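A minimal sketch of how such a check may look; the MX_FL_FRAMED flag name
and the bind_conf fields are assumptions, with xprt_type reusing the
PROTO_TYPE_* values mentioned elsewhere in this series:

    /* sketch: a framed mux requires a framed (dgram) transport and
     * vice versa; names are illustrative, not the actual ones.
     */
    int framed_mux  = !!(mux_proto->mux->flags & MX_FL_FRAMED);
    int framed_xprt = (bind_conf->xprt_type == PROTO_TYPE_DGRAM);

    if (framed_mux != framed_xprt) {
        ha_alert("%s-based MUX protocol is incompatible with %s transport "
                 "of 'bind %s' at [%s:%d].\n",
                 framed_mux ? "frame" : "stream",
                 framed_xprt ? "framed" : "stream",
                 bind_conf->arg, bind_conf->file, bind_conf->line);
        cfgerr++;
    }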
We used to preset XPRT_SSL on bind_conf->xprt when parsing the "ssl"
keyword, which required being careful about what QUIC could have set
before, and made it impossible to consider the whole line when setting
all options.
Now that we have the BC_O_USE_SSL option on the bind_conf, it becomes
easier to set XPRT_SSL only once the bind_conf's args are parsed.
It used to be set when parsing the listeners' addresses but this comes
with some difficulties in that other places have to be careful not to
replace it (e.g. the "ssl" keyword parser).
Now that we know what protocols a bind_conf line relies on, we can set it
after having parsed the whole line.
Now that we have a function to parse all bind keywords, and that we
know what types of sock-level and xprt-level protocols a bind_conf
is using, it's easier to centralize the check for stream vs dgram
conflict by putting it directly at the end of the args parser. This
way it also works for peers, provides better precision in the report,
and will also allow transport layers to be validated. The check was even
extended to detect inconsistencies between xprt layers (which were not
covered before). It can even detect that there are two incompatible
"bind" lines in a single peers section.
Let's collect the set of xprt-level and sock-level dgram/stream protocols
seen on a bind line and store that in the bind_conf itself while they're
being parsed. This will make it much easier to detect incompatibilities
later than the current approach, which consists in scanning all listeners
in post-parsing.
This now makes sure that both the peers' "bind" line and the regular one
will use the exact same parser with the exact same behavior. Note that
the parser applies after the address and that it could be factored
further, since the peers one still does quite a bit of duplicated work.
The "bind" parsing code was duplicated for the peers section and as a
result it wasn't kept updated, resulting in slightly different error
behavior (e.g. errors were not freed, warnings were emitted as alerts).
Let's first unify it into a new dedicated function that properly reports
and frees the error.
There's been some great confusion between proto_type, ctrl_type and
sock_type. It turns out that ctrl_type was improperly chosen because
it's not the control layer that is of this or that type, but the
transport layer, and it turns out that the transport layer doesn't
(normally) denature the underlying control layer, except for QUIC
which turns dgrams to streams. The fact that the SOCK_{DGRAM|STREAM}
set of values was used added to the confusion.
Let's replace it with xprt_type which reuses the later introduced
PROTO_TYPE_* values, and update the comments to explain which one
works at what level.
Just trying "quic4@:4433" with USE_QUIC not set results in such a cryptic
error:
[ALERT] (14610) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported protocol family 2 for address 'quic4@:4433'
Let's at least add the stream and datagram statuses to indicate what was
being looked for:
[ALERT] (15252) : config : parsing [quic-mini.cfg:44] : 'bind' : unsupported stream protocol for datagram family 2 address 'quic4@:4433'
Still not very pretty but gives a little bit more info.
In case the str2listener() parser reports a generic error with no message
when parsing the argument of a "bind" statement in a "peers" section, the
reported error indicates an invalid address on the empty arg. This has
existed since 2.0 with commit 355b2033e ("MINOR: cfgparse: SSL/TLS binding
in "peers" sections."), so this must be backported till 2.0.
As specified by the RFC, reception of different STREAM data for the same
offset must be treated with a CONNECTION_CLOSE with error
PROTOCOL_VIOLATION.
The ncbuf API is used to detect this case: the add operation fails with
NCB_RET_DATA_REJ when run in NCB_ADD_COMPARE mode.
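A minimal sketch of the resulting Rx-path check; the qcc_emit_cc() helper
and the error constant names are assumptions:

    /* detect different data received for an already filled offset */
    enum ncb_ret ret = ncb_add(&qcs->rx.ncbuf, off, data, len,
                               NCB_ADD_COMPARE);
    if (ret == NCB_RET_DATA_REJ) {
        /* RFC: PROTOCOL_VIOLATION, close the whole connection */
        qcc_emit_cc(qcs->qcc, QC_ERR_PROTOCOL_VIOLATION);
        return 1;
    }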
Send a CONNECTION_CLOSE on reception of a STREAM frame for a STREAM id
exceeding the maximum value enforced. Only implemented for bidirectional
streams for the moment.
Send a CONNECTION_CLOSE if the peer emits more data than authorized by
our flow-control. This is implemented at both the stream and connection
levels.
Fields have been added in the qcc/qcs structures to differentiate the
received offsets, used for limit enforcement, from the consumed offsets,
used for the sending of MAX_DATA/MAX_STREAM_DATA frames.
Define an API to easily set a CONNECTION_CLOSE. This will mainly be
useful for the MUX when an error is detected which requires closing the
whole connection.
On the MUX side, a new flag is added when a CONNECTION_CLOSE has been
prepared. This will disable all future send operations.
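A rough sketch of the shape of such an API; the function, field and flag
names are illustrative:

    /* register a CONNECTION_CLOSE error for the whole connection; the
     * flag also disables all future send operations.
     */
    void qcc_emit_cc(struct qcc *qcc, int err)
    {
        qcc->err = err;
        qcc->flags |= QC_CF_CC_EMIT;
        tasklet_wakeup(qcc->wait_event.tasklet);
    }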
We rely on the <conn_opening> stats counter and the tune.quic.retry_threshold
setting to dynamically start sending Retry packets. We continue to send
such packets when the "quic-force-retry" setting is set. The difference
lies in how tokens are received: we check them regardless of this setting
because the Retry mechanism could have been dynamically started. We must
also send Retry packets when we receive Initial packets without a token if
the dynamic Retry threshold was reached, but only for connections which are
not currently opening, in other words for Initial packets without an
already instantiated connection. Indeed, we must not send Retry packets for
all Initial packets without a token. For instance, a client may have
already sent an Initial packet without receiving a Retry packet because the
Retry feature was not started; then Retry starts upon exceeding the
threshold value due to other connections; then finally our client decides
to send another Initial packet (to ACK Initial CRYPTO data for instance).
It does this without a token. So, for this already existing connection we
must not send a Retry packet.
This QUIC-specific keyword may be used to set the threshold, in number of
connection openings, beyond which the QUIC Retry feature will be
automatically enabled. Its default value is 100.
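For example, with an arbitrary value (a hypothetical snippet; the keyword
is the tune.quic.retry_threshold setting referenced above):

    global
        tune.quic.retry_threshold 200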
First commit to handle the QUIC stats counters. There is nothing special to say
except perhaps for ->conn_openings which is a gauge to count the number of
connection openings. It is incremented after having instantiated a quic_conn
struct, then decremented when the handshake succeeds (handshake completed
state) or fails, or when the connection times out without reaching the
handshake completed state.
Move the code which finalizes the QUIC connection initialization after
having called qc_new_conn() into this function, to benefit from its error
handling and release the memory allocated for QUIC connections whose
initialization could not be finalized.
Here is the format of a token:
- format (1 byte)
- ODCID (from 9 up to 21 bytes)
- creation timestamp (4 bytes)
- salt (16 bytes)
A format byte is required to distinguish the Retry token from others sent in
NEW_TOKEN frames.
The Retry token is ciphered after having derived a strong secret from the
cluster secret and generated the AEAD AAD, as well as a 16-byte salt. This
salt is added to the token. Obviously it is not ciphered. The format byte
is not ciphered either.
The AAD is built by quic_generate_retry_token_aad(), which concatenates
the version, the client SCID and the IP address and port. We had to
implement quic_saddr_cpy() to copy the IP address and port to the AAD
buffer. Only the Retry SCID is generated on our side to build a Retry
packet; the other fields come from the first packet received from the
client. The client must reuse this Retry SCID in response to our Retry
packet, so we do not have to store it on our side. Everything is offloaded
to the client (stateless).
quic_generate_retry_token() must be used to generate a Retry packet. It calls
quic_pkt_encrypt() to cipher the token.
quic_generate_retry_check() must be used to check the validity of a Retry
token. It is able to decipher a token which arrives in an Initial packet
sent in response to a Retry packet. It calls parse_retry_token() after
having deciphered the token to store the ODCID into a local quic_cid
struct variable. Finally this ODCID may be stored into the transport
parameters thanks to qc_lstnr_params_init().
The Retry token lifetime is 10 seconds. This lifetime is also checked by
quic_generate_retry_check(). If quic_generate_retry_check() fails, the
received packet is dropped without any further packet processing.
This function does exactly the same thing as quic_tls_decrypt(), except
that it reuses its input buffer as the output buffer. This is needed to
decrypt the Retry token without modifying the packet buffer which contains
this token. Indeed, modifying it would prevent us from decrypting the
packet itself, as the token belongs to the AEAD AAD for the packet.
This function must be used to derive strong secrets from a
non-pseudo-random secret (the cluster-secret setting in our case) and an
IV. First it calls quic_hkdf_extract_and_expand() to do that for a
temporary strong secret (tmpkey), then makes two calls to
quic_hkdf_expand() reusing this temporary secret to derive the final
strong secret and IV.
In issue #1563, Coverity reported a very interesting issue about a
possible UAF in the config parser if the config file ends with a
very large line followed by an empty one and the large one causes an
allocation failure.
The issue essentially is that we try to go on with the next line in case
of allocation error, while there's no point doing so. If we failed to
allocate memory to read one config line, the same may happen on the next
one, so blatantly dropping it and trying to parse what follows makes
little sense. In the best case, subsequent errors will be incorrect due to
this prior error (e.g. a large ACL definition with many patterns, followed
by a reference to this ACL).
Let's just immediately abort in such a condition where there's no recovery
possible.
This may be backported to all versions once the issue is confirmed to be
addressed.
Thanks to Ilya for the report.
If an unlisted errno is reported, abort the process. If a crash is
reported on this condition, we must determine whether the error code is a
bug, should interrupt emission on the fd, or allows the syscall to be
retried.
If sendto returns an error, we should not retry the call and break from
the sending loop. An exception is made for EINTR, which allows the syscall
to be retried immediately.
This bug caused an infinite loop, reproduced when the process was put in
closing state by SIGUSR1 while there was still QUIC data left to emit.
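A condensed sketch of the resulting behavior; the exact list of errno
values that break from the loop is an assumption, and ABORT_NOW() stands
for the usual abort macro:

    #include <errno.h>
    #include <sys/socket.h>

    /* send one datagram, retrying only on EINTR */
    static int send_dgram(int fd, const void *data, size_t sz,
                          const struct sockaddr *addr, socklen_t alen)
    {
        while (sendto(fd, data, sz, 0, addr, alen) < 0) {
            if (errno == EINTR)
                continue;          /* retry the syscall immediately */
            if (errno == EAGAIN || errno == EWOULDBLOCK ||
                errno == ENOBUFS || errno == ECONNREFUSED)
                return 0;          /* break from the sending loop */
            ABORT_NOW();           /* unlisted errno: assume a bug */
        }
        return 1;
    }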
Instead of using the health-check to subscribe to read/write events, we now
rely on the conn-stream. Indeed, on the server side, the conn-stream's
endpoint is a multiplexer. Thus it seems appropriate to handle subscriptions
for read/write events the same way as for the streams. Of course, the I/O
callback function is not the same. We use srv_chk_io_cb() instead of
cs_conn_io_cb().
event_srv_chk_io() function is renamed srv_chk_io_cb() to be consistent with
the I/O callback function of connections. In addition, this function is
exported. It will be required to use the conn-stream's subscriptions.
The connection is already closed by the health-check itself. Thus there is
no reason to duplicate this part in the .wake callback function. It is
enough to wake the health-check up and wait.
The buffer wait list is used to deal with buffer allocation failure. But at
the end of a health-check, it must be reinitialized. There is no reason to
keep a buffer between two health-check runs. And in fact, the
associated flags, CHK_ST_IN_ALLOC and CHK_ST_OUT_ALLOC, are already cleared
at the end of a health-check.
This patch must be backported as far as 2.2. On the 2.2, MT_LIST_ADDED and
MT_LIST_DEL must be used instead of LIST_INLIST and LIST_DEL_INIT.
When line parsing fails because the outline buffer must be reallocated,
and the my_realloc2() call itself fails, the buffer size must be reset.
Indeed, in this case the current line is skipped, a fatal error is
reported and we jump to the next line. At this stage the outline buffer is
NULL. If the buffer size is not reset, the next call to parse_line()
crashes because we try to write into the buffer: we fail to detect that
the outline buffer is too small to copy any character.
To fix the issue, the outlinesize variable must be set to 0 when the
outline allocation fails.
This patch should fix the issue #1563. It must be backported as far as 2.2.
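A simplified sketch of the fix; variable names follow the description
above, not necessarily the real code:

    /* my_realloc2() frees the old area and returns NULL on failure */
    char *newline = my_realloc2(outline, newlinesize);
    if (newline == NULL) {
        outline = NULL;
        outlinesize = 0;  /* the fix: keep the size consistent */
        /* report the fatal error and skip to the next line */
    }
    else {
        outline = newline;
        outlinesize = newlinesize;
    }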
Release the QCS RX buffer if emptied after qcs_consume(). This improves
memory usage and avoids a QCS keeping an allocated buffer, particularly
when no data is received anymore. The buffer is automatically reallocated
if needed via qc_get_ncbuf().
qc_free_ncbuf() may now be used with a NCBUF_NULL buffer as parameter.
This is useful when using this function on a QCS with no allocated
buffer. This case was not reproduced for the moment, but it will soon
become more frequent as buffers will be released once emptied.
Also a call to offer_buffers() is added to conform with the dynamic
buffer management of haproxy.
This commit is similar to the previous one but deals with MAX_DATA for
connection-level data flow control. It uses the same function
qcc_consume_qcs() to update flow control level and generate a MAX_DATA
frame if needed.
Send MAX_STREAM_DATA frames when at least half of the allocated
flow-control has been demuxed and cleared. This is necessary to support a
QUIC STREAM with received data greater than a buffer.
Transcoders must use the new function qcc_consume_qcs() to empty the QCS
buffer. This allows the current flow-control level to be monitored and a
MAX_STREAM_DATA frame to be generated if required. This frame will be emitted
via qc_io_cb().
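A minimal sketch of the expected usage from a transcoder; the
h3_decode_into_htx() helper is hypothetical:

    /* demux as much as possible, then report what was consumed */
    size_t done = h3_decode_into_htx(qcs, htx);
    if (done)
        qcc_consume_qcs(qcs, done); /* updates the flow-control level and
                                     * may queue a MAX_STREAM_DATA frame
                                     * to be emitted via qc_io_cb() */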
Adjust the mechanism for MAX_STREAMS_BIDI emission. When a bidirectional
stream is removed, the current flow-control level is checked. If needed, a
MAX_STREAMS_BIDI frame is generated and inserted in a new list in the
QCS instance. The new frames will be emitted at the start of qc_send().
This has no impact on the current MAX_STREAMS_BIDI behavior. However,
this mechanism is more flexible and will allow MAX_STREAM_DATA/MAX_DATA
emission to be implemented quickly.
Slightly change the interface for qcc_recv() between MUX and XPRT. The
MUX is now responsible for calling qcc_decode_qcs(). This is cleaner as now
the XPRT does not have to deal with an extra QCS parameter and the MUX
will call qcc_decode_qcs() only if really needed.
This change is possible since there is no extra buffering for
out-of-order STREAM frames and the XPRT does not have to handle buffered
frames.
Previously, qc_io_cb() of mux-quic only dealt with TX. Add support for
RX in it. This is done through a new function qc_recv(qcc). It loops
over all QCS instances and calls qcc_decode_qcs(qcs).
This has no impact on the quic-conn layer, as qcc_decode_qcs(qcs) is still
called directly. However, this allows a resume point to exist when demux
is blocked on the upper layer's full HTX buffer.
Note that for the moment, only RX for bidirectional streams is managed
in qc_io_cb(). Unidirectional streams use their own mechanism for both
TX/RX. It should be unified in the near future in a refactoring.
Flag the QCS if the HTX buffer is full on demux. This will block all future
operations on QCS demux and should limit unnecessary decode_qcs() calls.
The flag is cleared on rcv_buf operation called by conn-stream.
Previously, the H3 demuxer refused to process the payload if the frame was
not entirely received and the QCS buffer was not full. This code was
duplicated from the H2 demuxer.
In H2, this is a justified optimization as only one frame at a time can
be demuxed. However, this is not the case in H3 with interleaved frames
in the lower layer QUIC STREAM frames.
This condition is now removed. The H3 demuxer will process the payload as
soon as possible. An exception is kept for the HEADERS frame, as the code
is not able to deal with partial HEADERS.
With this change, the H3 demuxer should consume less memory. To ensure
that we never receive a HEADERS frame bigger than the RX buffer, we should
use the H3 SETTINGS_MAX_FIELD_SECTION_SIZE.
First, some typos in comments inside the function were fixed. Second, some
variables were renamed to reduce confusion.
A special case has been inserted when advance is done inside a GAP block
and this block is the last of the buffer. In this case, the whole buffer
will be emptied, equivalent to a ncb_init() operation.
ncb_is_empty() was plainly incorrect as it directly dereferenced the
memory to read offset blocks instead of using ncb_read_off(). The result
is undefined.
Also, the BUG_ON() statement was wrong when the buffer starts with a data
block. In this case, ncb_head() is not the first gap offset but just
random data. The sum calculated in the BUG_ON() statement thus has no
meaning and may cause an abort. Adjust this by reorganizing the whole
function: only the first data block size is read; if, and only if, it is
not null, the first gap size is then checked.
ncb_is_full() has been rewritten to share the same model as
ncb_is_empty().
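A sketch of the reorganized check following the description above; the
ncb_reserved()/ncb_head() accessors are assumptions:

    int ncb_is_empty(const struct ncbuf *buf)
    {
        /* read the first data block size via ncb_read_off(), never by
         * dereferencing the memory directly.
         */
        if (ncb_read_off(buf, ncb_reserved(buf)))
            return 0;
        /* the buffer starts with a gap: it must cover the whole buffer */
        return ncb_read_off(buf, ncb_head(buf)) == buf->size;
    }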
The quic-conn manages a buffer to store received QUIC packets. When the
buffer wraps, the gap is filled until the end with junk and packets can
be inserted at the start of the buffer.
On the other end, deletion is implemented via quic_rx_pkts_del().
Packets are removed one by one if their refcount is null. If junk is
found, the buffer is emptied up to its wrap point.
This seems to work in most cases, but a bug was found in a particular
case: on insertion, when the buffer gap is not at the end of the buffer.
In this case, the gap was filled, which is useless as the buffer is now
full and the packet cannot be inserted. Worse, on deletion, when junk is
removed there is a risk of removing new packets. This can happen in the
following case:
1. buffer contig space is too small, junk is inserted in the middle of
it
2. on quic_rx_pkts_del() invocation, a packet is removed, but not the
next one because its refcount is still positive. When a new packet is
received, it will be stored after the junk.
3. on next quic_rx_pkts_del(), when junk is removed, all contig data is
cleared, with newer packets data too.
This will cause a transfer between a client and haproxy to be stalled.
This can be reproduced with big enough POST requests. I triggered it
with ngtcp2 and 10M of posted data.
Fortunately, the solution to this bug is simple. If the contig space is
not big enough to store a packet, but the space is not at the end of the
buffer, no junk is inserted and the packet is dropped as we cannot buffer
it. This ensures that junk is only present at the end of the buffer, and
that when it is removed no packet data is purged with it.
In the httpclient_applet_init() function, the ss_dst variable is always
defined before the call to sockaddr_alloc(). There is no reason to test it.
This patch should fix the issue #1706.
Labels used in goto statements were not ordered correctly. Thus, if there
is an error during the appctx startup, it is possible to leak a task.
This patch should fix the issue #1703. No backport needed.
Even if `unique_id` and `s->unique_id` are identical it is a bit odd to
`isttest()` `unique_id` and then use `s->unique_id` in the call to `http_add_header()`.
This "issue" was introduced in a17e66289c,
because before that commit the function returned the length of the ID, as it
was not an ist.
When loading providers with 'ssl-provider' global options, this
ssl-provider-path option can be used to set the search path that is to
be used by OpenSSL. It behaves the same way as the OPENSSL_MODULES
environment variable.
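For example (provider name and path are purely illustrative):

    global
        ssl-provider legacy
        ssl-provider-path /usr/lib/ossl-modules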
Negation and default modifiers are not supported for this option. However,
this was already tested earlier in the cfg_parse_listen() function. Thus,
when the "http-restrict-req-hdr-names" option is parsed, the keyword
modifier is always equal to KWM_STD. It is useless to test it again at
this point.
This patch should solve the issue #1702.
The appctx is already the endpoint target. It is confusing to also use it to
set the endpoint context. So, never set the endpoint ctx when an appctx is
created or attached to an existing conn-stream.
When creating a new applet for a peer outgoing connection, we check the
load on each thread. Threads with the least applet count are preferred.
With this solution we avoid a situation where many outgoing connections
run on the same thread, causing significant load on a single CPU core.
It is now possible to start an appctx on a thread subset. Some controls were
added here and there. It is forbidden to start a backend appctx on another
thread than the local one. If a frontend appctx is started on another thread
or a thread subset, the applet .init callback function must be defined. This
callback function is responsible for finalizing the appctx startup. It can
be performed synchronously. In this case, the appctx is started on the
local thread. It is not really useful but it is valid. Or it can be performed
asynchronously. In this case, the .init callback function is called when the
appctx is woken up for the first time. When this happens, the appctx
affinity is set to the current thread to be able to start the session and
the stream.
In the same way as for the tasks, the applets API was changed to be able
to start a new appctx on a thread subset. For now the feature is
disabled. Only appctx_new_here() is working. But it will be possible to
start an appctx on a specific thread or a subset via a mask.
A .init callback function is defined for the peer_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and handles the stream customization.
A .init callback function is defined for the sink_forward_applet applet.
This function finishes the appctx startup by calling
appctx_finalize_startup() and handles the stream customization.
A .init callback function is defined for the httpclient_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and handles the stream customization.
A .init callback function is defined for the update_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and handles the stream customization.
A .init callback function is defined for the spoe_applet applet. This
function finishes the spoe_appctx initialization. It also finishes the
appctx startup by calling appctx_finalize_startup() and handles the
stream customization.
A .init callback function is defined for the dns_session_applet applet. This
function finishes the appctx startup by calling appctx_finalize_startup()
and handles the stream customization.
appctx_free_on_early_error() must be used to release a freshly created
frontend appctx if an error occurred during the init stage. It takes care
of releasing the stream instead of the appctx if the stream exists. For a
backend appctx,
it just calls appctx_free().
appctx_finalize_startup() may be used to finalize the frontend appctx
startup. It is responsible for creating the appctx's session and the
frontend conn-stream. On error, it is the caller's responsibility to
release the appctx. However, the session is released if it was created. On
success, if
an error is encountered in the caller function, the stream must be released
instead of the appctx.
This function should ease the init stage when a new appctx is created.
It is just a helper function that calls the .init applet callback function,
if it exists. This will simplify a bit the init stage when a new applet is
started. For now, this callback function is only used when a new service is
started.
The session created for frontend applets is now totally owned by the
corresponding appctx. It means the appctx is now responsible for releasing
it. This removes the hack in stream_free() about frontend applets to be sure
to release the session.
Applets were moved to the same level as multiplexers. Thus, gradually, the
applets code is being changed to be less dependent on the stream. With this
commit, the frontend appctx are ready to own the session. It means a
frontend appctx will be responsible to release the session.
If no private key can be found in a bind line's certificate and
ssl-load-extra-files is set to none we end up trying to call
X509_check_private_key with a NULL key, which crashes.
This fix should be backported to all stable branches.
Found with -Wmissing-prototypes:
src/hlua_fcn.c:53:5: fatal error: no previous prototype for function 'hlua_checkboolean' [-Wmissing-prototypes]
int hlua_checkboolean(lua_State *L, int index)
^
src/hlua_fcn.c:53:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int hlua_checkboolean(lua_State *L, int index)
^
static
1 error generated.
Found with -Wmissing-prototypes:
src/ssl_utils.c:22:5: fatal error: no previous prototype for function 'cert_get_pkey_algo' [-Wmissing-prototypes]
int cert_get_pkey_algo(X509 *crt, struct buffer *out)
^
src/ssl_utils.c:22:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int cert_get_pkey_algo(X509 *crt, struct buffer *out)
^
static
1 error generated.
When HAProxy is linked to an OpenSSLv3 library, this option can be used
to load a provider during init. You can specify multiple ssl-provider
options, which will be loaded in the order they appear. This does not
prevent OpenSSL from parsing its own configuration file in which some
other providers might be specified.
A linked list of the providers loaded from the configuration file is
kept so that all those providers can be unloaded during cleanup. The
providers loaded directly by OpenSSL will be freed by OpenSSL.
This option can be used to define a default property query used when
fetching algorithms in OpenSSL providers. It follows the format
described in https://www.openssl.org/docs/man3.0/man7/property.html.
It is only available when haproxy is built with SSL support and linked
to OpenSSLv3 libraries.
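Assuming the keyword is named "ssl-propquery", a configuration could look
like this (the property string is illustrative):

    global
        ssl-propquery fips=yes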
The random generator initialization needs to be performed before the
chroot, but not any earlier than that. If we want to add provider
configuration options to the configuration file, they need to be processed
before any call to a crypto-related OpenSSL function.
We can then delay the initialization until after the configuration file
is parsed and processed.
The "http-restrict-req-hdr-names" option can now be set to restrict allowed
characters in the request header names to the "[a-zA-Z0-9-]" charset.
The idea of this option is to not send header names containing
non-alphanumeric or non-hyphen characters. It is especially important for
FastCGI applications because all those characters are converted to
underscores. For instance, "X-Forwarded-For" and "X_Forwarded_For" are
both converted to "HTTP_X_FORWARDED_FOR". So, header names can be mixed up
by FastCGI applications. And some HAProxy rules may be bypassed by
mangling header names. In addition, some non-HTTP-compliant servers may
incorrectly handle requests when header names contain characters outside
the "[a-zA-Z0-9-]" charset.
When this option is set, a policy must be specified:
* preserve: It disables the filtering. It is the default mode for HTTP
proxies with no FastCGI application configured.
* delete: It removes request headers with a name containing a character
outside the "[a-zA-Z0-9-]" charset. It is the default mode for
HTTP backends with a configured FastCGI application.
* reject: It rejects the request with a 403-Forbidden response if it
contains a header name with a character outside the
"[a-zA-Z0-9-]" charset.
The option is evaluated per-proxy and after http-request rules evaluation.
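For example, to reject such requests on a backend running a FastCGI
application (backend name illustrative):

    backend fcgi-app
        option http-restrict-req-hdr-names reject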
This patch may be backported to avoid any security issue with FastCGI
applications (so as far as 2.2).
Using -Wall reveals several warnings when building the ncbuf testing API.
One of them was about a signedness mismatch, the other about an incorrect
print format.
ncbuf public API functions were not ready to deal with a NCBUF_NULL as
parameter. Strengthen these functions by handling it properly.
Most of the functions will consider the buffer as empty and silently
return. The only exception is ncb_init(buf), which cannot be called with a
NCBUF_NULL. It seems legitimate to consider this a bug rather than failing
silently in this case.
The quic_rx_strm_frm type was used to buffer STREAM frames received out of
order. Now the MUX is able to deal directly with these frames and buffers
them inside its ncbuf.
Rx has been simplified since the conversion of the buffer to an ncbuf. The
old buffer can now be removed. The frms tree is also removed. It was
previously used to store out-of-order received STREAM frames. Now the MUX
is able to buffer them directly into the ncbuf.
This commit is the equivalent for uni-streams of previous commit
MEDIUM: mux-quic/h3/hq-interop: use ncbuf for bidir streams
All unidirectional stream data is now handled in the MUX Rx ncbuf. The
obsolete buffer is now unused and will be removed in the following
patches.
Add an ncbuf for data reception on the qcs. Thanks to this, the MUX is
able to buffer all received frames directly into the buffer. Flow control
parameters will be used to ensure there is never an overflow.
This change will simplify the Rx path with the future deletion of the
acked frames tree previously used for out-of-order frames.
Coverity reports that the data block generated by ncb_blk_first() has its
sz_data field uninitialized. This has no real impact as this field has no
meaning for data blocks. Set it to 0 to hide the warning.
This should fix github issue #1695.
It is absolutely not possible to use the same "bind" line to listen to
both quic and tcp for example, because no single transport layer would
fit both modes and we'll need the type to choose one then to choose a
mux. Let's make sure this does not happen. This may be relaxed in the
future if we manage to instantiate transport layers on the fly, but the
SSL vs quic part might be tricky to handle.
The error message when mixing stream and dgram protocols in an
address speaks about sockets while it ought to speak about addresses,
let's fix this as in some contexts it can be a bit confusing.
Fred & Amaury found that I messed up with qc_detach() in commit 4201ab791
("CLEANUP: muxes: make mux->attach/detach take a conn_stream endpoint"),
causing a segv in this case with endp->cs == NULL being passed to
__cs_mux(). It obviously ought to have been endp->target like in other
muxes.
No backport needed.
Valerio Pachera explained [1] that external checks would benefit from
having a variable indicating if SSL is being used or not on the server
being checked, and the discussion drifted to also indicating the protocol
in use.
This patch adds two environment variables for external checks:
- HAPROXY_SERVER_SSL: equals "0" when SSL is not used, "1" when it is
- HAPROXY_SERVER_PROTO: contains one of the following words to describe
the protocol used with this server:
- "cli": the haproxy CLI. Normally not seen
- "syslog": this is a syslog TCP server
- "peers": this is a peers TCP server
- "h1": this is an HTTP/1.x server
- "h2": this is an HTTP/2 server
- "tcp": this is any other TCP server
The patch is very simple, and may be backported to recent versions if
needed. This closes github issue #1692.
[1] https://www.mail-archive.com/haproxy@formilux.org/msg42233.html
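A minimal configuration sketch making these variables available to a
check script (command path and addresses are illustrative):

    backend app
        option external-check
        external-check command /usr/local/bin/check-backend
        server srv1 192.0.2.10:443 ssl verify none check

    # the script can then read HAPROXY_SERVER_SSL ("1" here) and
    # HAPROXY_SERVER_PROTO from its environment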
The two functions became exact copies since there's no more special case
for the appctx owner. Let's merge them into a single one, which simplifies
the code.
This one is the pointer to the conn_stream which is always in the
endpoint that is always present in the appctx, thus it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm(), which does the equivalent.
The former takes a conn_stream still attached to a valid appctx,
which also complicates the termination of the applet. Instead, let's
pass the appctx which already points to the endpoint, this allows us
to properly detach the conn_stream before the call, which is cleaner
and safer.
The mux ->detach() function currently takes a conn_stream. This causes
an awkward situation where the caller cs_detach_endp() has to partially
mark it as released but not completely so that ->detach() finds its
endpoint and context, and it cannot be done later since it's possible
that ->detach() deletes the endpoint. As such the endpoint link between
the conn_stream and the mux's stream is in a transient situation while
we'd like it to be clean so that the mux's ->detach() code can call any
regular function it wants that knows the regular semantics of the
relation between the CS and the endpoint.
A better approach consists in slightly modifying the detach() API to
better match the reality, which is that the endpoint is detached but
still alive and that it's the only part the function is interested in.
As such, this patch modifies the function to take an endpoint there,
and by analogy (or simplicity) does the same for ->attach(), even
though it looks less important there since we're always attaching an
endpoint to a conn_stream anyway. It is possible that in the future
the API could evolve to use more endpoints that provide a bit more
flexibility in the API, but at this point we don't need to go further.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test strm->endp when
trying to access the flags or the CS.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test h2s->endp when
trying to access the flags or the CS.
There is always an endpoint link in a stream, and this endpoint link
contains a pointer to the conn_stream it's attached to, so the one in
the h1 stream is always a duplicate now. Let's always use endp->cs
instead and get rid of it.
Muxes and applets need to have both a pointer to the endpoint and to the
conn_stream. It would seem more natural that they only have a pointer to
the endpoint (that is always there) and that this one has an optional
pointer to the conn_stream. This would reduce the number of elements to
manipulate in lower level code. In addition, the conn_stream is not much
used from the lower layers (wake and exceptional events mostly).
The few applets that set CS_EP_EOI or CS_EP_ERROR used to set it on the
endpoint retrieved from the conn_stream while it's already available on
the appctx itself. Better use the appctx one to limit the unneeded
interactions between the two sides.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the qcs. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the fstrm.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the context.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the h2s. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more long-term proof to take it from the h1s. Let's
just do that.
Wherever we need to report an error, we have an even easier access to
the endpoint than the conn_stream. Let's first adjust the API to use
the endpoint and rename the function accordingly to cs_ep_set_error().
Since 2.5, for security reasons, HTTP/1.0 GET/HEAD/DELETE requests with a
payload are rejected (See e136bd12a "MEDIUM: mux-h1: Reject HTTP/1.0
GET/HEAD/DELETE requests with a payload" for details). However it may be an
issue for old clients.
To avoid any compatibility issue with such clients,
"h1-accept-payload-with-any-method" global option was added. It must only be
set if there is a good reason to do so because it may lead to a request
smuggling attack on some servers or intermediaries.
This patch should solve the issue #1691. It may be backported to 2.5.
In wdt_handler(), do not try to trigger the watchdog if prev_cpu_time
wasn't initialized.
This prevents an unexpected trigger of the watchdog when it wasn't
initialized yet. This case could happen in the master just after loading
the configuration. This would show a trace where the <diff> value is equal
to the <now> value, and the <poll> value would be 0.
For example:
Thread 1 is about to kill the process.
*>Thread 1 : id=0x0 act=1 glob=1 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=1 prof=0 harmless=0 wantrdv=0
cpu_ns: poll=0 now=6005541706 diff=6005541706
curr_task=0
Thanks to Christian Ruppert for reporting the problem.
Could be backported to all stable versions.
Lua API Channel.remove() and HTTPMessage.remove() expects 1 to 3
arguments (counting the manipulated object), with offset and length
being the 2nd and 3rd argument, respectively.
hlua_{channel,http_msg}_del_data() incorrectly gets the 3rd argument as
offset, and 4th (nonexistent) as length. hlua_http_msg_del_data() also
improperly checks arguments. This patch fixes argument handling in both.
Must be backported to 2.5.
Implement a series of unit tests to validate ncbuf. This is written with
a main function which can be compiled independently using the following
command-line:
$ gcc -DSTANDALONE -lasan -I./include -o ncbuf src/ncbuf.c
The first part is used to test ncb_add()/ncb_advance(). After each call, a
loop is run on the buffer blocks, which should ensure that the gap info is
correct.
The second part generates random offsets and inserts them until the buffer
is full. The buffer is then reset and all random offsets are re-inserted
in reverse order: the buffer should be full once again.
The generated binary takes arguments to change the tests execution.
"usage: ncbuf [-r] [-s bufsize] [-h bufhead] [-p <delay_msec>]"
A new function ncb_advance() is implemented. This is used to advance the
buffer head pointer. This will consume the front data while forming a
new gap at the end for future data.
On success NCB_RET_OK is returned. The operation can be rejected if the
new gap formed at the front of the buffer would be too small.
Define three different ways to perform an insertion. This configures how
overlapping data is treated.
- NCB_ADD_PRESERVE : in this mode, old data are kept during insertion.
- NCB_ADD_OVERWRT : new data will overwrite old ones.
- NCB_ADD_COMPARE : this mode adds a new test in the check stage. The
overlapping old and new data must be identical, or else the insertion
is not conducted. The error NCB_RET_DATA_REJ is used in this case.
The mode is specified with a new argument to ncb_add() function.
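A short usage sketch of the strictest mode, using the names described
above (buffer setup omitted):

    /* insert <data> at <off>, rejecting it if overlapping bytes differ;
     * NCB_ADD_PRESERVE would keep the old bytes instead, and
     * NCB_ADD_OVERWRT would replace them.
     */
    enum ncb_ret ret = ncb_add(&buf, off, data, len, NCB_ADD_COMPARE);
    if (ret == NCB_RET_DATA_REJ) {
        /* overlapping old and new data are not identical */
    }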
Implement a new function ncb_add() to insert data in ncbuf. This
operation is conducted in two stages. First, a simulation is run to
ensure that the insertion can proceed. If a gap is formed, either
before or after the new data, it must be big enough to store its header,
or else the insertion is aborted.
After this check stage, the insertion is conducted block by block with
the function pair ncb_fill_data_blk()/ncb_fill_gap_blk().
A new type ncb_ret is used as a return value. For the moment, only
success or gap-size error is used. It is planned to add new error types
in the future when insertion will be extended.
Relax the constraint for gap storage when this is the last block.
ncb_blk API functions will consider that if a gap is stored near the end
of the buffer, without the space to store its header, the gap entirely
covers the buffer end.
For these special cases, the gap size/data size are not written/read
inside the gap, to prevent an overflow. Such a gap is designated in the
functions as a "reduced gap" and is flagged with the value NCB_BK_F_FIN.
This should reduce rejections on future add operations when receiving data
in order. Without reduced gap handling, an insertion would be rejected if
it only partially covers the last buffer bytes, which can be a very common
case.
Implement two new functions to report the total data stored across the
whole buffer, and the data stored at a specific offset until the next gap
or the buffer end.
To facilitate implementation of these new functions and also future
add/delete operations, a new abstraction is introduced: ncb_blk. This
structure represents a block of either data or gap in the buffer. It
simplifies operations when moving forward in the buffer. The first buffer
block can be retrieved via ncb_blk_first(buf). The block at a specific
offset is accessed via ncb_blk_find(buf, off).
This abstraction is purely used in functions but not stored in the ncbuf
structure per se. This is necessary to keep the memory footprint minimal.
Define the new type ncbuf. It can be used as a buffer with
non-contiguous data and wrapping support.
To reduce the memory footprint as much as possible, the sizes of data and
gaps are stored in the gaps themselves. This puts some limitations on
buffer usage. A reserved space is present just before the head to store
the size of the first data block. Also, add and delete operations will be
constrained to ensure minimal gap sizes are preserved.
The sizes stored in the gaps are represented by a custom type named
ncb_sz_t. This type is a typedef to easily change it : this has a
direct impact on the maximum buffer size (MAX(ncb_sz_t) - sizeof(ncb_sz_t))
and the minimal gap sizes (sizeof(ncb_sz_t) * 2).
Currently, it is set to uint32_t.
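A sketch of what the type may look like; only the ncb_sz_t typedef is
stated above, the field names are assumptions:

    typedef uint32_t ncb_sz_t;

    struct ncbuf {
        char *area;     /* storage, with a small reserved leading space */
        ncb_sz_t size;  /* usable size in bytes */
        ncb_sz_t head;  /* offset of the oldest data */
    };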
Add send_stateless_reset() to send a stateless reset packet. It builds a
1-RTT packet, using quic_stateless_reset_token_cpy() to copy a stateless
reset token derived from the cluster secret with the destination
connection ID received as salt.
Also add the new QUIC_EV_STATELESS_RST trace event, to at least have a
trace of the connections which are reset.
A server may send the stateless reset token associated with the current
connection in its transport parameters. So, let's copy it from
qc_lstnr_params_init().
The stateless reset token of a connection is generated from qc_new_conn() when
allocating the first connection ID. A QUIC server can copy it into its transport
parameters to allow the peer to reset the associated connection.
The latter is not easily reachable after having returned from
qc_new_conn(). We want to be able to initialize the transport parameters
from this function, which has access to all the information needed to do
so.
Extract the code used to initialize the transport parameters from qc_lstnr_pkt_rcv()
and make it callable from qc_new_conn(). qc_lstnr_params_init() is implemented
to accomplish this task for a haproxy listener.
Modify qc_new_conn() to reduce its number of parameters.
The source address coming from Initial packets is also copied from qc_new_conn().
Add quic_stateless_reset_token_init() wrapper function around
quic_hkdf_extract_and_expand() function to derive the stateless reset tokens
attached to the connection IDs from "cluster-secret" configuration setting
and call it each time we instantiate a QUIC connection ID.
This function will have to call another one from quic_tls.[ch] soon.
As we do not want to include quic_tls.h from xprt_quic.h, because
quic_tls.h already includes xprt_quic.h, let's move it into xprt_quic.c.
This is a wrapper function around OpenSSL HKDF API functions to
use the "extract-then-expand" HKDF mode as defined by rfc5869.
This function will be used to derive stateless reset tokens
from secrets ("cluster-secret" conf. keyword) and CIDs (as salts).
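A minimal sketch of such a wrapper built on OpenSSL's EVP_PKEY HKDF
interface, whose default mode is precisely extract-then-expand; parameter
names are illustrative and error handling is condensed:

    #include <openssl/evp.h>
    #include <openssl/kdf.h>

    static int hkdf_extract_and_expand(const EVP_MD *md,
                                       unsigned char *out, size_t outlen,
                                       const unsigned char *key, size_t keylen,
                                       const unsigned char *salt, size_t saltlen,
                                       const unsigned char *info, size_t infolen)
    {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
        int ret = ctx &&
            EVP_PKEY_derive_init(ctx) > 0 &&
            EVP_PKEY_CTX_set_hkdf_md(ctx, md) > 0 &&
            EVP_PKEY_CTX_set1_hkdf_salt(ctx, salt, saltlen) > 0 &&
            EVP_PKEY_CTX_set1_hkdf_key(ctx, key, keylen) > 0 &&
            EVP_PKEY_CTX_add1_hkdf_info(ctx, info, infolen) > 0 &&
            EVP_PKEY_derive(ctx, out, &outlen) > 0;
        EVP_PKEY_CTX_free(ctx);
        return ret;
    }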
It could be useful to set an ASCII secret which could be used for
different usages. For instance, it will be used to derive QUIC stateless
reset tokens.
A new ->time_received member is added to quic_rx_packet to store the time
packets are received. ->largest_time_received is added to the packet
number space structure to store this timestamp for the packet with a new
largest packet number to be acknowledged. The new
QUIC_FL_PKTNS_NEW_LARGEST_PN flag is added to mark a packet number space
as having to acknowledge a packet with a new largest packet number. In
this case, the packet number space ack delay must be recalculated.
Add quic_compute_ack_delay_us() function to compute the ack delay from the
time a packet was received. It is used only when a packet with a new
largest packet number is received.
As we do not have any task to be woken up by the poller after a sendto()
error, we add a sendto() error counter to the quic_conn struct.
Dump its value from qc_send_ppkts().
There are two reasons we can reject the creation of an h2 stream on the
frontend:
- its creation would violate the MAX_CONCURRENT_STREAMS setting
- there's no more memory available
And on the backend it's almost the same, except that the setting might
have been negotiated after trying to set up the stream.
Let's add traces for such a situation so that it's possible to know why
the stream was rejected (currently we only know it was rejected).
It could be nice to backport this to the most recent versions.
When a client doesn't respect the h2 MAX_CONCURRENT_STREAMS setting, we
rightfully send RST_STREAM to it so that the client closes. But the
max_id is only updated on the successful path of h2c_handle_stream_new(),
which may be reentered for partial frames or CONTINUATION frames, and as
a result we don't increment it if an extraneous stream ID is rejected.
Normally it doesn't have any consequence. But on a POST it can have some
if the DATA frame immediately follows the faulty HEADERS frame: with
max_id not incremented, the stream remains in IDLE state, and the DATA
frame now lands in an invalid state from a protocol's perspective, which
must lead to a connection error instead of a stream error.
This can be tested by modifying the code to send an arbitrarily large
MAX_CONCURRENT_STREAM setting and using h2load to send more concurrent
streams than configured: with a GET, only a tiny fraction of them will
report an error (e.g. 101 streams for 100 accepted will result in ~1%
failure), but when sending data, most of the streams will be reported
as failed because the connection will be closed. By updating the max_id
earlier, the stream is now considered as closed when the DATA frame
arrives and it's silently discarded.
This must be backported to all versions but only if the code is exactly
the same. Under no circumstance this ID may be updated for a partial frame
(i.e. only update it before or just after calling h2c_frt_stream_new()).
This patch adds a lock on the struct dgram_conn to ensure that another
thread cannot trash an fd or alter its status while the current thread is
processing it for send/receive/connect operations.
Starting with the 2.4 version this could cause a crash when a DNS
request is failing, setting the FD of the dgram structure to -1. If the
dgram structure is reused after that, a read access to fdtab[-1] is
attempted. The crash was only triggered when compiled with ASAN.
In previous versions the concurrency issue also exists but is less
likely to crash.
This patch must be backported to 2.4 and should be adapted for
versions < 2.4.
... or how a bogus warning forces you to make tricky changes in your code
and fail on a length test condition! Fortunately it failed in the right
direction and immediately broke, due to a missing "> sizeof(path)" that
had to be added to the already ugly condition.
This fixes recent commit 393e42ae5 ("BUILD: ssl: work around bogus warning
in gcc 12's -Wformat-truncation"). It may have to be backported if that
one is backported.
When building without threads, gcc 12 says that there's a null-deref in
_HA_ATOMIC_INC() called from listener_accept(). It's just that the code
was originally written in an attempt not to always have a proxy for a
listener and that there are two places where the pointer is tested before
being used, so the compiler concludes that the pointer might be null
hence that other places are null-derefs.
In practice the pointer cannot be null there (and never has been), but
since that code was initially built that way and it's only a matter of
adding a pair of braces to shut it up, let's respect that initial
attempt in case one day we need it.
This one was also reported by Ilya in issue #1513, though with threads
enabled in his case.
This may have to be backported if users complain about new breakage with
gcc-12.
As was first reported by Ilya in issue #1513, Gcc 12 incorrectly reports
a possible overflow from the concatenation of two strings whose size was
previously checked to fit:
src/ssl_crtlist.c: In function 'crtlist_parse_file':
src/ssl_crtlist.c:545:58: error: '%s' directive output may be truncated writing up to 4095 bytes into a region of size between 1 and 4096 [-Werror=format-truncation=]
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~
src/ssl_crtlist.c:545:25: note: 'snprintf' output between 2 and 8192 bytes into a destination of size 4097
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It would be a bit concerning to disable -Wformat-truncation because it
might detect real programming mistakes at other places. The solution
adopted in this patch is absolutely ugly and error-prone, but it works: it
consists in integrating the snprintf() call into the error condition and
testing the result again. Let's hope a smarter compiler will not warn that
this test is absurd since it is guaranteed by the first condition...
This may have to be backported for those suffering from a compiler upgrade.
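For reference, the shape of the workaround (a sketch, not the exact code):

    /* fold snprintf() into the length check and test its result too */
    if (strlen(global_ssl.crt_base) + 1 + strlen(crt_path) + 1 > sizeof(path) ||
        snprintf(path, sizeof(path), "%s/%s",
                 global_ssl.crt_base, crt_path) > sizeof(path)) {
        memprintf(err, "'%s' : path too long", crt_path);
        cfgerr |= ERR_ALERT | ERR_FATAL;
        goto error;
    }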
The CRL file CLI update code was strongly based off the CA one and some
copy-paste issues were then introduced.
This patch fixes GitHub issue #1685.
It should be backported to 2.5.
The STAT_ST_* values have been abused by virtually every applet and CLI
keyword handler, and this must not continue as it's a source of bugs and
of overly complicated code.
This patch renames the states to STAT_STATE_*, and keeps the previous
enum while marking each entry as deprecated. This should be sufficient to
catch out-of-tree code that might rely on them and to let them know what
to do with that.
Now that the CLI's print context is alone in the appctx, it's possible
to refine the appctx's ctx layout so that the cli part matches exactly
a regular svcctx, and as such move the CLI context into an svcctx like
other applets. External code will still build and work because the
struct cli perfectly maps onto the struct cli_print_ctx that's located
into svc.storage. This is of course only to make a smooth transition
during 2.6 and will disappear immediately after.
A tiny change had to be applied to the opentracing addon which performs
direct accesses to the CLI's err pointer in its own print function. The
rest uses the standard cli_print_* which were the only ones that needed
a small change.
The whole "ctx.cli" struct could be tagged as deprecated so that any
possibly existing external code that relies on it will get build
warnings, and the comments in the struct are pretty clear about the
way to fix it, and the lack of future of this old API.
Just like for the TCP service, let's move the context away from
appctx.ctx. A new struct hlua_http_ctx was defined, reserved in
hlua_applet_http_init() and used everywhere else. Similarly, the
task dump code will no longer report decoded stack traces when these
services are involved. That may be solved later.
The use-service mechanism for Lua in TCP mode relies on the
hlua_tcp storage in appctx->ctx. We can move its definition to
hlua.c and simply use appctx_reserve_svcctx() to reserve and access
the storage. One tiny side effect is that the task dump used in panics
will no longer show the Lua call stack in its trace. For this a
better API is needed from the Lua code to expose a function that does
the job from an appctx.
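A sketch of the pattern, with an assumed context struct name:

    /* reserve (or retrieve) the applet's service context */
    struct hlua_tcp_ctx *tcp_ctx =
        appctx_reserve_svcctx(appctx, sizeof(*tcp_ctx));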
The Lua cosockets were using appctx.ctx.hlua_cosocket. Let's move this
to a local definition of "hlua_csk_ctx" in hlua.c, which is allocated
from the appctx by hlua_socket_new(). There's a notable change which is
that, while previously the xref link with the peer was established with
the appctx, it's now in the hlua_csk_ctx. This one must then hold a
pointer to the appctx. The code was adjusted accordingly, and now that
part of the code doesn't use the appctx.ctx anymore.
The context was moved to a local definition in the cache code, and
there's nothing specific to the cache anymore in the appctx. The
struct is stored into the appctx's storage area via the svcctx.
This one has been misused for a while as well, it's time to deprecate it
since we don't use it anymore. It will be removed in 2.7 and for now is
only marked as deprecated. Since we need to guarantee that it's zeroed
before starting any applet or CLI command, it was moved into an anonymous
union where its sibling is not marked as deprecated so that we can
continue to initialize it without triggering a warning.
If you found this commit after a bisect session you initiated to figure
why you got some build warnings and don't know what to do, have a look
at the code that deals with the "show fd", "show sess" or "show servers"
commands, as it's supposed to be self-explanatory about the tiny changes
to apply to your code to port it. If you find APPLET_MAX_SVCCTX to be
too small for your use case, either kindly ask for a tiny extension
(and try to get your code merged), or just use a pool.
The httpclient already uses its own pointer and only used to store this
single pointer into the appctx.ctx field. Let's just move it to the
svcctx and remove this entry from the appctx union.
The httpclient's CLI uses ctx.cli.i0 for its flags and .p0 for the client
instance. Let's have a locally defined structure for this so that we don't
need the generic cli variables anymore.
Let's create a show_sock_ctx to store the bind_conf and the listener.
The entry is reserved when entering the I/O handler since there's no
parser here. That's fine because the function doesn't touch the area.
The code was a bit convoluted by the use of a state machine around st2
that is not needed since only the STAT_ST_LIST state was used, and by the
test of global.cli_fe inside the loop while it ought better be tested
before entering it. Let's get rid of this unneeded state and simplify the
code. There's no more need for ->st2 now. The code looks
more changed than it really is due to the reindent caused by the
removal of the switch statement, but "git show -b" shows what really
changed.
The command keeps the variable to start from (or the environ itself)
and an option to stop after dumping the first one, just like "show fd".
Let's have a small locally-defined context with these two fields.
The "show fd" command used to rely on cli.i0 for the fd, and st2 just
to decide whether to stop after the first value or not. It could have
been possible to decide to use just a negative integer to dump a single
value, but it's as easy and more durable to declare a two-field struct
show_fd_ctx for this.
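A sketch of what such a struct looks like (field names are assumed
from the description above):

  struct show_fd_ctx {
          int fd;        /* first FD to dump; used to be cli.i0 */
          int show_one;  /* stop after the first FD; used to be st2 */
  };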
The command uses a pointer to the current proxy being dumped, one to the
current server being dumped, an optional ID of the only proxy to dump
(which is in fact used as a boolean), and a flag indicating if we're
doing a "show servers conn" or a "show servers state". Let's move all
this to a struct show_srv_ctx.
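For illustration, a plausible layout (the field names are guesses
based on the description above):

  struct show_srv_ctx {
          struct proxy  *px;        /* current proxy being dumped */
          struct server *sv;        /* current server being dumped */
          int            only_pxid; /* ID of the only proxy to dump (used as a boolean) */
          int            show_conn; /* true for "show servers conn", false for "state" */
  };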
The command uses a pointer to a cache instance and the next key to dump,
they were in cli.p0/i0 respectively, let's move them to a struct
show_cache_ctx.
The command uses this state, but _INIT immediately turns to _LIST,
which turns to _FIN at the end without doing anything in that state;
thus the only effective state is _LIST and there's no need to store a
state at all. Let's just get rid of it.
The command was using cli.p0/p1/p2 to select which section to dump, the
current section and the current ns. Let's instead have a locally defined
"show_resolvers_ctx" structure for this.
The ring code was using ctx.cli.i0/p0/o0 to store its context during CLI
dumps via "show events" or "show errors". Let's use a locally defined
context and drop that.
The ring watch flags (wait, seek end) were dangerously passed via ctx.cli.i0
from "show buf" in sink.c:cli_parse_show_events(), or implicitly reset in
"show errors". That's very inconvenient, difficult to follow, and prone to
short-term breakage.
Let's pass an extra argument to ring_attach_cli() to take these flags, now
defined in ring-t.h as RING_WF_*, and let the function set them itself
where appropriate (still ctx.cli.i0 for now).
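A call site then looks roughly like this; the RING_WF_* prefix comes
from the text above, but the exact flag names and prototype here are
illustrative assumptions:

  /* request waiting for new events and seeking past existing ones */
  ring_attach_cli(ring, appctx, RING_WF_WAIT_MODE | RING_WF_SEEK_NEW);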
The command only needs to store an int, but it will be useful later
to have a struct to pass extra info such as an "all" flag to dump all
FDs. The new context is now a struct dev_fd_ctx stored in svcctx.
Instead of having a struct that contains a single pointer in the appctx
context, let's directly use the generic context pointer and get rid of
the now unused sft.ptr entry.
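In other words, the single pointer is stored and retrieved directly
(the type name below is an assumption for illustration):

  /* at attach time: */
  appctx->svcctx = sft;

  /* in the I/O handler: */
  struct sink_forward_target *sft = appctx->svcctx;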
The addition of a crtlist proceeds in several steps which need to yield
during long operations, and states are used for this. Let's just not use
st2 anymore and place the state inside the add_crtlist_ctx struct instead.
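Schematically (the enum values below are hypothetical):

  struct add_crtlist_ctx {
          /* ... command-specific pointers ... */
          enum {
                  ADDCRT_ST_INIT = 0,
                  ADDCRT_ST_GEN,
                  ADDCRT_ST_INSERT,
                  ADDCRT_ST_FIN,
          } state;  /* replaces appctx->st2 */
  };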
These commands were using cli.i0/p0/p1, and in a not very clean way
since they share the same parser but with different types depending on
the I/O handler. Given there was no explanation of what the variables
were supposed to be, they were named based on best guess and placed into
a new "show_crtlist_ctx" structure.
These two commands use distinct parse/release functions but a common
I/O handler, thus they need to keep the same context. It was created
under the name "commit_cacrlfile_ctx" and holds a large part of the
pointers (6) plus the ca_type field that helps distinguish between
the two commands in the I/O handler. It looks like some of these
fields could have been merged, since apparently the CA part only
uses the *cafile* fields and the CRL part the *crlfile* ones, while
both old and new entries are of type cafile_entry and are only set
for their respective type. This could probably even simplify some
parts of the code that try to use the correct field.
These fields were the last ones to be migrated, thus the appctx's
ssl context could finally be removed.
Just like for "set ssl cafile", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_crlfile_entry, new_crlfile_entry, and path.
These ones were reinstantiated into a new "set_crlfile_ctx" struct.
It could have been merged with the one used in "set cafile" if the
fields had been renamed since cafile and crlfile are of the same
type (probably one of them ought to be renamed?).
None of these fields could be dropped as they are still shared with
other commands.
Just like for "set ssl cert", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_cafile_entry, new_cafile_entry, and path.
These ones were reinstantiated into a new "set_cafile_ctx" struct.
None of them could be dropped as they are still shared with other
commands.
The command doesn't really need any storage since there's only a parser,
but since it used this context, there might have been plans for extension,
so better continue with a persistent one. Only old_ckchs, new_ckchs, and
path were being used from the appctx's ssl context. These ones moved to
the local definition, and the two former ones were removed from the appctx
since they are not used anymore.
This command only really uses old_ckchs, new_ckchs and next_ckchi
from the appctx's ssl context. The new structure "commit_cert_ctx"
only has these 3 fields, though none could be removed from the shared
ssl context since they're still used by other commands.
This command only really uses old_ckchs, cur_ckchs and the index
in which the transaction was stored. The new structure "show_cert_ctx"
only has these 3 fields, and the now unused "cur_ckchs" and "index"
could be removed from the shared ssl context.
Now this command doesn't share any context anymore with "show cafile"
nor with the other commands. The previous "cur_cafile_entry" field from
the applet's ssl context was removed as not used anymore. Everything was
moved to show_crlfile_ctx which only has 3 fields.
Saying that the layout and usage of the various variables in the ssl
applet context is a mess would be an understatement. It's very hard
to know what command uses what fields, even after having moved away
from the mix of cli and ssl.
Let's extract the parts used by "show cafile" into their own structure.
Only the "show_all" field would be removed from the ssl ctx, the other
fields are still shared with other commands.
This context is used by CLI keywords registered by Lua. We can take
it out of the appctx and use the generic command context allocation so
that the appctx doesn't have to declare a specific one anymore. The
context is created during parsing.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing (both in the CLI and HTTP).
The change looks large but it's mostly mechanical. The context
initialization appears in stats.c and http_ana.c. The context is used
in stats.c and resolvers.c since "show stat resolvers" points there.
That's the reason why the definition moved to stats.h. "show info"
and "show stat" continue to share the same state definition for now.
Nothing else was modified.
Historically the CLI was a second access to the stats, and we've continued
to initialize only the stats part when initializing the CLI. Let's make
sure we do that on the whole ctx instead. It's probably not needed at all
nowadays, but better to stay on the safe side.
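In code terms this amounts to something like the following sketch
(field names assumed):

  /* before: only the stats part of the context was zeroed */
  memset(&appctx->ctx.stats, 0, sizeof(appctx->ctx.stats));

  /* after: zero the whole shared context */
  memset(&appctx->ctx, 0, sizeof(appctx->ctx));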