The two functions became exact copies since there's no more special case
for the appctx owner. Let's merge them into a single one, that simplifies
the code.
This one is the pointer to the conn_stream, which is always in the
endpoint that is always present in the appctx, thus it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm(), which does the equivalent.
The former takes a conn_stream still attached to a valid appctx,
which also complicates the termination of the applet. Instead, let's
pass the appctx which already points to the endpoint, this allows us
to properly detach the conn_stream before the call, which is cleaner
and safer.
The mux ->detach() function currently takes a conn_stream. This causes
an awkward situation where the caller cs_detach_endp() has to partially
mark it as released but not completely so that ->detach() finds its
endpoint and context, and it cannot be done later since it's possible
that ->detach() deletes the endpoint. As such the endpoint link between
the conn_stream and the mux's stream is in a transient situation while
we'd like it to be clean so that the mux's ->detach() code can call any
regular function it wants that knows the regular semantics of the
relation between the CS and the endpoint.
A better approach consists in slightly modifying the detach() API to
better match the reality, which is that the endpoint is detached but
still alive and that it's the only part the function is interested in.
As such, this patch modifies the function to take an endpoint there,
and by analogy (or simplicity) does the same for ->attach(), even
though it looks less important there since we're always attaching an
endpoint to a conn_stream anyway. It is possible that in the future
the API could evolve to use more endpoints that provide a bit more
flexibility in the API, but at this point we don't need to go further.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test strm->endp when
trying to access the flags or the CS.
The principle that each mux stream should have an endpoint is not
guaranteed for closed streams that map to the dummy static streams.
Let's have a dummy endpoint for use with such streams. It only has
the DETACHED flag and a NULL conn_stream, and is referenced by all
the closed streams so that we can afford not to test h2s->endp when
trying to access the flags or the CS.
There is always an endpoint link in a stream, and this endpoint link
contains a pointer to the conn_stream it's attached to, so the one in
the h1 stream is now always a duplicate. Let's always use endp->cs
instead and get rid of it.
Muxes and applets need to have both a pointer to the endpoint and to the
conn_stream. It would seem more natural that they only have a pointer to
the endpoint (that is always there) and that this one has an optional
pointer to the conn_stream. This would reduce the number of elements to
manipulate in lower level code. In addition, the conn_stream is not much
used from the lower layers (wake and exceptional events mostly).
The few applets that set CS_EP_EOI or CS_EP_ERROR used to set it on the
endpoint retrieved from the conn_stream while it's already available on
the appctx itself. Better use the appctx one to limit the unneeded
interactions between the two sides.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more future-proof to take it from the qcs. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more future-proof to take it from the fstrm.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more future-proof to take it from the context.
Let's just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more future-proof to take it from the h2s. Let's
just do that.
At a few places the endpoint pointer was retrieved from the conn_stream
while it's safer and more future-proof to take it from the h1s. Let's
just do that.
Wherever we need to report an error, we have an even easier access to
the endpoint than the conn_stream. Let's first adjust the API to use
the endpoint and rename the function accordingly to cs_ep_set_error().
Since 2.5, for security reasons, HTTP/1.0 GET/HEAD/DELETE requests with a
payload are rejected (see e136bd12a "MEDIUM: mux-h1: Reject HTTP/1.0
GET/HEAD/DELETE requests with a payload" for details). However it may be an
issue for old clients.
To avoid any compatibility issue with such clients, the
"h1-accept-payload-with-any-method" global option was added. It must only be
set if there is a good reason to do so, because it may lead to a request
smuggling attack on some servers or intermediaries.
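For example (configuration sketch; only set it when such legacy clients are
a known requirement):

    global
        h1-accept-payload-with-any-method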
This patch should solve issue #1691. It may be backported to 2.5.
In wdt_handler(), do not try to trigger the watchdog if
prev_cpu_time wasn't initialized.
This prevents an unexpected trigger of the watchdog when it wasn't
initialized yet. This case could happen in the master just after loading
the configuration. It would show a trace where the <diff> value is equal
to the <now> value, and the <poll> value would be 0.
For example:
Thread 1 is about to kill the process.
*>Thread 1 : id=0x0 act=1 glob=1 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=1 prof=0 harmless=0 wantrdv=0
cpu_ns: poll=0 now=6005541706 diff=6005541706
curr_task=0
Thanks to Christian Ruppert for reporting the problem.
Could be backported to every stable version.
Lua API Channel.remove() and HTTPMessage.remove() expect 1 to 3
arguments (counting the manipulated object), with offset and length
being the 2nd and 3rd argument, respectively.
hlua_{channel,http_msg}_del_data() incorrectly gets the 3rd argument as
offset, and the 4th (nonexistent) one as length. hlua_http_msg_del_data()
also improperly checks its arguments. This patch fixes argument handling
in both.
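A sketch of the corrected argument retrieval, following the usual Lua C API
pattern (haproxy's MAY_LJMP() wrapper is assumed here):

    /* argument 1 is the manipulated object itself, so the optional
     * offset and length are arguments 2 and 3, not 3 and 4 */
    if (lua_gettop(L) >= 2)
        offset = MAY_LJMP(luaL_checkinteger(L, 2));
    if (lua_gettop(L) >= 3)
        len = MAY_LJMP(luaL_checkinteger(L, 3));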
Must be backported to 2.5.
Implement a series of unit tests to validate ncbuf. This is written with
a main function which can be compiled independently using the following
command line:
$ gcc -DSTANDALONE -lasan -I./include -o ncbuf src/ncbuf.c
The first part is used to test ncb_add()/ncb_advance(). After each
call a loop is run on the buffer blocks, which should ensure that the
gap info is correct.
The second part generates random offsets and inserts them until the
buffer is full. The buffer is then reset and all random offsets are
re-inserted in the reverse order: the buffer should be full once again.
The generated binary takes arguments to change the tests execution.
"usage: ncbuf [-r] [-s bufsize] [-h bufhead] [-p <delay_msec>]"
A new function ncb_advance() is implemented. This is used to advance the
buffer head pointer. This will consume the front data while forming a
new gap at the end for future data.
On success NCB_RET_OK is returned. The operation can be rejected if the
new gap formed at the front of the buffer would be too small.
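A plausible prototype, based on the description above (the exact signature
may differ):

    /* advance the head of <buf> by <adv> bytes, consuming front data;
     * rejected if the new gap formed at the front would be too small */
    enum ncb_ret ncb_advance(struct ncbuf *buf, ncb_sz_t adv);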
Define three different ways to proceed with insertion. This configures how
overlapping data is treated.
- NCB_ADD_PRESERVE: in this mode, old data are kept during insertion.
- NCB_ADD_OVERWRT: new data will overwrite old ones.
- NCB_ADD_COMPARE: this mode adds a new test in the check stage. The
  overlapping old and new data must be identical, or else the insertion
  is not conducted. The error NCB_RET_DATA_REJ is used in this case.
The mode is specified with a new argument to the ncb_add() function.
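A sketch of what this may look like (names taken from the description
above; the exact declarations may differ):

    enum ncb_add_mode {
        NCB_ADD_PRESERVE, /* keep the already stored data on overlap */
        NCB_ADD_OVERWRT,  /* let the new data overwrite the old ones */
        NCB_ADD_COMPARE,  /* reject insertion if old and new data differ */
    };

    enum ncb_ret ncb_add(struct ncbuf *buf, ncb_sz_t off,
                         const char *data, ncb_sz_t len,
                         enum ncb_add_mode mode);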
Implement a new function, ncb_add(), to insert data in an ncbuf. This
operation is conducted in two stages. First, a simulation is run to
ensure that the insertion can proceed. If a gap is formed, either
before or after the new data, it must be big enough to store its header,
or else the insertion is aborted.
After this check stage, the insertion is conducted block by block with
the function pair ncb_fill_data_blk()/ncb_fill_gap_blk().
A new type, ncb_ret, is used as a return value. For the moment, only
success or a gap-size error is used. It is planned to add new error types
in the future when insertion is extended.
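A sketch of the return type as described (only the two cases mentioned
exist at this point; names partly assumed):

    enum ncb_ret {
        NCB_RET_OK,       /* insertion performed */
        NCB_RET_GAP_SIZE, /* a formed gap would be too small for its header */
    };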
Relax the constraint for gap storage when this is the last block.
ncb_blk API functions will consider that if a gap is stored near the end
of the buffer, without the space to store its header, the gap covers
the buffer end entirely.
For these special cases, the gap size/data size are not written/read
inside the gap, to prevent an overflow. Such a gap is designated in
functions as a "reduced gap" and will be flagged with the value
NCB_BK_F_FIN.
This should reduce rejections on future add operations when receiving
data in order. Without reduced gap handling, an insertion would be
rejected if it only partially covers the last buffer bytes, which can be
a very common case.
Implement two new functions to report the total data stored across the
whole buffer and the data stored at a specific offset until the next gap
or the buffer end.
To facilitate the implementation of these new functions and also future
add/delete operations, a new abstraction is introduced: ncb_blk. This
structure represents a block of either data or gap in the buffer. It
simplifies operations when moving forward in the buffer. The first buffer
block can be retrieved via ncb_blk_first(buf). The block at a specific
offset is accessed via ncb_blk_find(buf, off).
This abstraction is purely used in functions but not stored in the ncbuf
structure per se. This is necessary to keep the memory footprint
minimal.
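For illustration, scanning the buffer may then be written like this
(sketch; ncb_blk_next(), the <off>/<sz> fields and ncb_size() are
assumptions):

    /* walk the alternating data/gap blocks of <buf> */
    struct ncb_blk blk;

    for (blk = ncb_blk_first(buf); blk.off < ncb_size(buf);
         blk = ncb_blk_next(buf, blk)) {
        /* blk.sz gives the length of this data block or gap */
    }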
Define the new type ncbuf. It can be used as a buffer with
non-contiguous data and wrapping support.
To reduce the memory footprint as much as possible, the sizes of data and
gaps are stored in the gaps themselves. This puts some limitations on the
buffer usage. A reserved space is present just before the head to store
the size of the first data block. Also, add and delete operations will
be constrained to ensure minimal gap sizes are preserved.
The sizes stored in the gaps are represented by a custom type named
ncb_sz_t. This type is a typedef to easily change it: it has a
direct impact on the maximum buffer size (MAX(ncb_sz_t) - sizeof(ncb_sz_t))
and the minimal gap sizes (sizeof(ncb_sz_t) * 2).
Currently, it is set to uint32_t.
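A sketch of the type as described (field names assumed):

    typedef uint32_t ncb_sz_t;   /* sizes stored in the gaps */

    struct ncbuf {
        char *area;     /* storage area */
        ncb_sz_t size;  /* allocated size */
        ncb_sz_t head;  /* offset of the oldest data; the reserved
                         * space sits just before this offset */
    };

The limits quoted above follow from this layout: a gap must at least hold
its two ncb_sz_t header fields, hence the minimal gap size of
sizeof(ncb_sz_t) * 2.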
Add send_stateless_reset() to send a stateless reset packet. It prepares
a 1-RTT packet, using quic_stateless_reset_token_cpy() to copy a
stateless reset token derived from the cluster secret with the
destination connection ID received as salt.
Also add a new trace event, QUIC_EV_STATELESS_RST, to at least have a
trace of the connections which are reset.
A server may send the stateless reset token associated with the current
connection in its transport parameters. So, let's copy it from
qc_lstnr_params_init().
The stateless reset token of a connection is generated from qc_new_conn() when
allocating the first connection ID. A QUIC server can copy it into its transport
parameters to allow the peer to reset the associated connection.
The latter is not easily reachable after having returned from qc_new_conn().
We want to be able to initialize the transport parameters from this function,
which has access to all the information needed to do so.
Extract the code used to initialize the transport parameters from qc_lstnr_pkt_rcv()
and make it callable from qc_new_conn(). qc_lstnr_params_init() is implemented
to accomplish this task for a haproxy listener.
Modify qc_new_conn() to reduce its number of parameters.
The source address coming from Initial packets is also copied from qc_new_conn().
Add quic_stateless_reset_token_init() wrapper function around
quic_hkdf_extract_and_expand() function to derive the stateless reset tokens
attached to the connection IDs from "cluster-secret" configuration setting
and call it each time we instantiate a QUIC connection ID.
This function will have to call another one from quic_tls.[ch] soon.
As we do not want to include quic_tls.h from xprt_quic.h, because
quic_tls.h already includes xprt_quic.h, let's move it into
xprt_quic.c.
This is a wrapper function around OpenSSL HKDF API functions to
use the "extract-then-expand" HKDF mode as defined by rfc5869.
This function will be used to derive stateless reset tokens
from secrets ("cluster-secret" conf. keyword) and CIDs (as salts).
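A minimal sketch of such a wrapper with OpenSSL's EVP_PKEY HKDF API (the
real prototype and error handling may differ):

    #include <openssl/evp.h>
    #include <openssl/kdf.h>

    int quic_hkdf_extract_and_expand(const EVP_MD *md,
                                     unsigned char *out, size_t outlen,
                                     const unsigned char *key, size_t keylen,
                                     const unsigned char *salt, size_t saltlen,
                                     const unsigned char *info, size_t infolen)
    {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
        int ret = ctx &&
            EVP_PKEY_derive_init(ctx) > 0 &&
            /* single-call "extract-then-expand" mode (rfc5869) */
            EVP_PKEY_CTX_hkdf_mode(ctx, EVP_PKEY_HKDEF_MODE_EXTRACT_AND_EXPAND) > 0 &&
            EVP_PKEY_CTX_set_hkdf_md(ctx, md) > 0 &&
            EVP_PKEY_CTX_set1_hkdf_salt(ctx, salt, saltlen) > 0 &&
            EVP_PKEY_CTX_set1_hkdf_key(ctx, key, keylen) > 0 &&
            EVP_PKEY_CTX_add1_hkdf_info(ctx, info, infolen) > 0 &&
            EVP_PKEY_derive(ctx, out, &outlen) > 0;

        EVP_PKEY_CTX_free(ctx);
        return ret;
    }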
It could be useful to set an ASCII secret which could be used for different
usages. For instance, it will be used to derive QUIC stateless reset tokens.
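For example (configuration sketch; the secret value is illustrative):

    global
        cluster-secret my-ascii-secret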
A new member, ->time_received, is added to quic_rx_packet to store the time
the packet is received. ->largest_time_received is added to the packet number
space structure to store this timestamp for the packet with a new largest
packet number to be acknowledged. A new flag, QUIC_FL_PKTNS_NEW_LARGEST_PN,
is added to mark a packet number space as having to acknowledge a packet with
a new largest packet number. In this case, the packet number space ack delay
must be recalculated.
Add the quic_compute_ack_delay_us() function to compute the ack delay from
the time a packet was received. It is used only when a packet with a new
largest packet number is received.
As we do not have any task to be woken up by the poller after a sendto()
error, we add a sendto() error counter to the quic_conn struct.
Dump its value from qc_send_ppkts().
There are two reasons we can reject the creation of an h2 stream on the
frontend:
- its creation would violate the MAX_CONCURRENT_STREAMS setting
- there's no more memory available
And on the backend it's almost the same, except that the setting might
have been negotiated after trying to set up the stream.
Let's add traces for such a situation so that it's possible to know why
the stream was rejected (currently we only know that it was rejected).
It would be nice to backport this to the most recent versions.
When a client doesn't respect the h2 MAX_CONCURRENT_STREAMS setting, we
rightfully send RST_STREAM to it so that the client closes. But the
max_id is only updated on the successful path of h2c_handle_stream_new(),
which may be reentered for partial frames or CONTINUATION frames, and as
a result we don't increment it if an extraneous stream ID is rejected.
Normally it doesn't have any consequence. But on a POST it can have some
if the DATA frame immediately follows the faulty HEADERS frame: with
max_id not incremented, the stream remains in IDLE state, and the DATA
frame now lands in an invalid state from a protocol's perspective, which
must lead to a connection error instead of a stream error.
This can be tested by modifying the code to send an arbitrarily large
MAX_CONCURRENT_STREAMS setting and using h2load to send more concurrent
streams than configured: with a GET, only a tiny fraction of them will
report an error (e.g. 101 streams for 100 accepted will result in ~1%
failure), but when sending data, most of the streams will be reported
as failed because the connection will be closed. By updating the max_id
earlier, the stream is now considered as closed when the DATA frame
arrives and it's silently discarded.
This must be backported to all versions, but only if the code is exactly
the same. Under no circumstances may this ID be updated for a partial frame
(i.e. only update it before or just after calling h2c_frt_stream_new()).
This patch adds a lock on the struct dgram_conn to ensure
that another thread cannot trash an fd or alter its status
while the current thread is processing it for send/receive/connect
operations.
Starting with the 2.4 version this could cause a crash when a DNS
request is failing, setting the FD of the dgram structure to -1. If the
dgram structure is reused after that, a read access to fdtab[-1] is
attempted. The crash was only triggered when compiled with ASAN.
In previous versions the concurrency issue also exists but is less
likely to crash.
This patch must be backported as far as v2.4 and should be
adapted for versions < 2.4.
... or how a bogus warning forces you to do tricky changes in your code
and fail on a length test condition! Fortunately it changed in the right
direction that immediately broke, due to a missing "> sizeof(path)" that
had to be added to the already ugly condition.
This fixes recent commit 393e42ae5 ("BUILD: ssl: work around bogus warning
in gcc 12's -Wformat-truncation"). It may have to be backported if that
one is backported.
When building without threads, gcc 12 says that there's a null-deref in
_HA_ATOMIC_INC() called from listener_accept(). It's just that the code
was originally written in an attempt not to always have a proxy for a
listener and that there are two places where the pointer is tested before
being used, so the compiler concludes that the pointer might be null
hence that other places are null-derefs.
In practice the pointer cannot be null there (and never has been), but
since that code was initially built that way and it's only a matter of
adding a pair of braces to shut it up, let's respect that initial
attempt in case one day we need it.
This one was also reported by Ilya in issue #1513, though with threads
enabled in his case.
This may have to be backported if users complain about new breakage with
gcc-12.
As was first reported by Ilya in issue #1513, Gcc 12 incorrectly reports
a possible overflow from the concatenation of two strings whose size was
previously checked to fit:
src/ssl_crtlist.c: In function 'crtlist_parse_file':
src/ssl_crtlist.c:545:58: error: '%s' directive output may be truncated writing up to 4095 bytes into a region of size between 1 and 4096 [-Werror=format-truncation=]
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~
src/ssl_crtlist.c:545:25: note: 'snprintf' output between 2 and 8192 bytes into a destination of size 4097
545 | snprintf(path, sizeof(path), "%s/%s", global_ssl.crt_base, crt_path);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It would be a bit concerning to disable -Wformat-truncation because it
might detect real programming mistakes at other places. The solution
adopted in this patch is absolutely ugly and error-prone, but it works,
it consists in integrating the snprintf() call in the error condition
and to test the result again. Let's hope a smarter compiler will not
warn that this test is absurd since guaranteed by the first condition...
This may have to be backported for those suffering from a compiler upgrade.
The CRL file CLI update code was strongly based on the CA one, and some
copy-paste issues were then introduced.
This patch fixes GitHub issue #1685.
It should be backported to 2.5.
The STAT_ST_* values have been abused by virtually every applet and CLI
keyword handler, and this must not continue as it's a source of bugs and
of overly complicated code.
This patch renames the states to STAT_STATE_*, and keeps the previous
enum while marking each entry as deprecated. This should be sufficient to
catch out-of-tree code that might rely on them and to let them know what
to do with that.
Now that the CLI's print context is alone in the appctx, it's possible
to refine the appctx's ctx layout so that the cli part matches exactly
a regular svcctx, and as such move the CLI context into an svcctx like
other applets. External code will still build and work because the
struct cli perfectly maps onto the struct cli_print_ctx that's located
into svc.storage. This is of course only to make a smooth transition
during 2.6 and will disappear immediately after.
A tiny change had to be applied to the opentracing addon which performs
direct accesses to the CLI's err pointer in its own print function. The
rest uses the standard cli_print_* which were the only ones that needed
a small change.
The whole "ctx.cli" struct could be tagged as deprecated so that any
possibly existing external code that relies on it will get build
warnings, and the comments in the struct are pretty clear about the
way to fix it, and the lack of future of this old API.
Just like for the TCP service, let's move the context away from
appctx.ctx. A new struct hlua_http_ctx was defined, reserved in
hlua_applet_http_init() and used everywhere else. Similarly, the
task dump code will no longer report decoded stack traces in case
these services are involved. That may be solved later.
The use-service mechanism for Lua in TCP mode relies on the
hlua_tcp storage in appctx->ctx. We can move its definition to
hlua.c and simply use appctx_reserve_svcctx() to reserve and access
the storage. One tiny side effect is that the task dump used in panics
will no longer show the Lua call stack in its trace. For this, a
better API is needed from the Lua code to expose a function that does
the job from an appctx.
The Lua cosockets were using appctx.ctx.hlua_cosocket. Let's move this
to a local definition of "hlua_csk_ctx" in hlua.c, which is allocated
from the appctx by hlua_socket_new(). There's a notable change which is
that, while previously the xref link with the peer was established with
the appctx, it's now in the hlua_csk_ctx. This one must then hold a
pointer to the appctx. The code was adjusted accordingly, and now that
part of the code doesn't use the appctx.ctx anymore.
The context was moved to a local definition in the cache code, and
there's nothing specific to the cache anymore in the appctx. The
struct is stored into the appctx's storage area via the svcctx.
This one has been misused for a while as well, it's time to deprecate it
since we don't use it anymore. It will be removed in 2.7 and for now is
only marked as deprecated. Since we need to guarantee that it's zeroed
before starting any applet or CLI command, it was moved into an anonymous
union where its sibling is not marked as deprecated so that we can
continue to initialize it without triggering a warning.
If you found this commit after a bisect session you initiated to figure
why you got some build warnings and don't know what to do, have a look
at the code that deals with the "show fd", "show sess" or "show servers"
commands, as it's supposed to be self-explanatory about the tiny changes
to apply to your code to port it. If you find APPLET_MAX_SVCCTX to be
too small for your use case, either kindly ask for a tiny extension
(and try to get your code merged), or just use a pool.
The httpclient already uses its own pointer and only used to store this
single pointer into the appctx.ctx field. Let's just move it to the
svcctx and remove this entry from the appctx union.
The httpclient's CLI uses ctx.cli.i0 for its flags and .p0 for the client
instance. Let's have a locally defined structure for this so that we don't
need the generic cli variables anymore.
Let's create a show_sock_ctx to store the bind_conf and the listener.
The entry is reserved when entering the I/O handler since there's no
parser here. That's fine because the function doesn't touch the area.
The code was a bit convoluted by the use of a state machine around
st2 that is not used since only the STAT_ST_LIST state was used, and
by the test of global.cli_fe inside the loop while it ought to be
tested before entering it. Let's get rid of this unneeded state and
simplify the code. There's no more need for ->st2 now. The code looks
more changed than it really is due to the reindent caused by the
removal of the switch statement, but "git show -b" shows what really
changed.
There is the variable to start from (or environ) and an option to stop
after dumping the first one, just like "show fd". Let's have a small
locally-defined context with these two fields.
The "show fd" command used to rely on cli.i0 for the fd, and st2 just
to decide whether to stop after the first value or not. It could have
been possible to decide to use just a negative integer to dump a single
value, but it's as easy and more durable to declare a two-field struct
show_fd_ctx for this.
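The resulting context may look like this (sketch; field names assumed):

    /* "show fd" context, reserved in the appctx storage area */
    struct show_fd_ctx {
        int fd;        /* first FD to dump (formerly cli.i0) */
        int show_one;  /* stop after dumping one FD (formerly st2) */
    };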
The command uses a pointer to the current proxy being dumped, one to the
current server being dumped, an optional ID of the only proxy to dump
(which is in fact used as a boolean), and a flag indicating if we're
doing a "show servers conn" or a "show servers state". Let's move all
this to a struct show_srv_ctx.
The command uses a pointer to a cache instance and the next key to dump,
they were in cli.p0/i0 respectively, let's move them to a struct
show_cache_ctx.
The command uses this state but _INIT immediately turns to _LIST, which
turns to _FIN at the end without doing anything in that state, thus the
only existing state is _LIST so we don't need to store a state. Let's
just get rid of it.
The command was using cli.p0/p1/p2 to select which section to dump, the
current section and the current ns. Let's instead have a locally defined
"show_resolvers_ctx" section for this.
The ring code was using ctx.cli.i0/p0/o0 to store its context during CLI
dumps via "show events" or "show errors". Let's use a locally defined
context and drop that.
The ring watch flags (wait, seek end) were dangerously passed via ctx.cli.i0
from "show buf" in sink.c:cli_parse_show_events(), or implicitly reset in
"show errors". That's very inconvenient, difficult to follow, and prone to
short-term breakage.
Let's pass an extra argument to ring_attach_cli() to take these flags, now
defined in ring-t.h as RING_WF_*, and let the function set them itself
where appropriate (still ctx.cli.i0 for now).
The command only requires to store an int, but it will be useful later
to have a struct to pass extra info such as an "all" flag to dump all
FDs. The new context is now a struct dev_fd_ctx stored in svcctx.
Instead of having a struct that contains a single pointer in the appctx
context, let's directly use the generic context pointer and get rid of
the now unused sft.ptr entry.
Several steps are used during the addition of a crtlist to yield during
long operations, and states are used for this. Let's just not use the
st2 anymore and place the state inside the add_crtlist_ctx struct instead.
These commands were using cli.i0/p0/p1 and in a not very clean way since
they use the same parser but with different types depending on the I/O
handler. Given there was no explanation about what the variables were
supposed to be, they were named based on best guess and placed into a
new "show_crtlist_ctx" structure.
These two commands use distinct parse/release functions but a common
iohandler, thus they need to keep the same context. It was created
under the name "commit_cacrlfile_ctx" and holds a large part of the
pointers (6) and the ca_type field that helps distinguish between
the two commands for the I/O handler. It looks like some of these
fields could have been merged since apparently the CA part only
uses *cafile* and the CRL part *crlfile*, while both old and new
are of type cafile_entry and set only for each type. This could
probably even simplify some parts of the code that tries to use
the correct field.
These fields were the last ones to be migrated thus the appctx's
ssl context could finally be removed.
Just like for "set ssl cafile", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_crlfile_entry, new_crlfile_entry, and path.
These ones were reinstantiated into a new "set_crlfile_ctx" struct.
It could have been merged with the one used in "set cafile" if the
fields had been renamed since cafile and crlfile are of the same
type (probably one of them ought to be renamed?).
None of these fields could be dropped as they are still shared with
other commands.
Just like for "set ssl cert", the command doesn't really need this
context which doesn't outlive the parsing function but it was there
for a purpose so it's maintained. Only 3 fields were used from the
appctx's ssl context: old_cafile_entry, new_cafile_entry, and path.
These ones were reinstantiated into a new "set_cafile_ctx" struct.
None of them could be dropped as they are still shared with other
commands.
The command doesn't really need any storage since there's only a parser,
but since it used this context, there might have been plans for extension,
so better continue with a persistent one. Only old_ckchs, new_ckchs, and
path were being used from the appctx's ssl context. These ones moved to
the local definition, and the two former ones were removed from the appctx
since they are not used anymore.
This command only really uses old_ckchs, new_ckchs and next_ckchi
from the appctx's ssl context. The new structure "commit_cert_ctx"
only has these 3 fields, though none could be removed from the shared
ssl context since they're still used by other commands.
This command only really uses old_ckchs, cur_ckchs and the index
in which the transaction was stored. The new structure "show_cert_ctx"
only has these 3 fields, and the now unused "cur_ckchs" and "index"
could be removed from the shared ssl context.
Now this command doesn't share any context anymore with "show cafile"
nor with the other commands. The previous "cur_cafile_entry" field from
the applet's ssl context was removed as not used anymore. Everything was
moved to show_crlfile_ctx which only has 3 fields.
Saying that the layout and usage of the various variables in the ssl
applet context is a mess would be an understatement. It's very hard
to know what command uses what fields, even after having moved away
from the mix of cli and ssl.
Let's extract the parts used by "show cafile" into their own structure.
Only the "show_all" field would be removed from the ssl ctx, the other
fields are still shared with other commands.
This context is used by CLI keywords registered by Lua. We can take
it out of the appctx and use the generic command context allocation so
that the appctx doesn't have to declare a specific one anymore. The
context is created during parsing.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing (both in the CLI and HTTP).
The change looks large but it's particularly mechanical. The context
initialization appears in stats.c and http_ana.c. The context is used
in stats.c and resolvers.c since "show stat resolvers" points there.
That's the reason why the definition moved to stats.h. "show info"
and "show stat" continue to share the same state definition for now.
Nothing else was modified.
Historically the CLI was a second access to the stats, and we've continued
to initialize only the stats part when initializing the CLI. Let's make
sure we do that on the whole ctx instead. It's probably no longer needed
at all nowadays, but better stay on the safe side.
Let's instead define a 4-state enum solely for this use case, and place
it into the command's context. Note that END and FIN were already
aliases, which is why they were merged.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. The code also uses st2, which deserves being
addressed in a separate commit.
There's no point checking the state before deciding to detach the backref
on "show map", it should always be done if the list is not empty. Note
that being empty guarantees that it's not linked into the list, and
conversely not being empty guarantees that it's in the list, hence the
test doesn't need to be performed under the lock.
The "show map"/"clear map" code used to rely on the cli's i0/i1 fields
to store the generation numbers to work with. That's particularly dirty
because it's done at places where ctx.map is also manipulated while they
are part of the same union, and the reason why this didn't cause trouble
is because cli.i0/i1 are at offset 216/224 while the map parts end at
204, so luckily there was no overlap.
Let's add these fields to the map context.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. Many commands, including pure parsers, use this
context but that's not a problem as it's designed to be used this way.
Due to this, many lines are changed but that's in fact a replacement of
"appctx->ctx.map" with "ctx->". Note that the code also uses st2 which
deserves being addressed in separate commit.
This state is pointless now, it just serves to initialize the initial
table pointer while this can be done more easily in the parser, so let's
do that and drop that state.
Let's instead define a 4-state enum solely for this use case, and place
it into the command's context. Note that END and FIN were already
aliases, which is why they were merged.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. The code also uses st2, which deserves being
addressed in a separate commit.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing.
The code still has room for improvement, such as in the "flags" field
where bits are hard-coded, but they weren't modified.
The state was a constant, let's remove what remains of the switch/case.
The code from the "case" statement was only reindented as can be checked
with "git show -b".
This state is only an alias for "thr >= global.nbthread" which is the
sole condition that indicates the end and reaches it. Let's just drop
this state. There's only the STATE_LIST that's left.
This state was only used to preset the list element. Now that we can
guarantee that the context can be properly preset during the parsing
we don't need this state anymore. The first pointer has to be set to
point to the first stream during the initial call which is detected
by the pointer not yet being set (null). Thanks to this we can also
remove one state check on the abort path.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing.
Till now, appctx_new() used to allocate an entry from the pool and
pass it through appctx_init() to initialize a debatable part of it that
didn't correspond anymore to the comments, and fill other fields. It's
hard to say what is fully initialized and what is not.
Let's get rid of that, and always zero the whole structure at
initialization (appctx are not that big anyway, and even with the cache
there's no difference in performance), then initialize what remains.
This is cleaner and more resistant to new field additions.
The appctx_init() function was removed.
Instead of using existing fields and having to put keyword-specific
contexts in the applet definition, let's have the appctx provide a
generic storage area that's currently large enough for existing CLI
commands and small applets, and a function to allocate that storage.
The function will be responsible for verifying that the requested size
fits in the area so that the caller doesn't need to add specific checks;
it is validated during development as this size is static and will
not change at runtime. In addition the caller doesn't even need to
free() the area since it's part of an existing context. For the
caller's convenience, a context pointer "svcctx" for the command is
also provided so that the allocated area can be placed there (or
possibly any other one in case a larger area is needed).
The struct's layout has been temporarily complicated by adding one
level of anonymous union on top of the "ctx" one. This will allow us
to preserve "ctx" during 2.6 for compatibility with possible external
code and get rid of it in 2.7. This explains why the diff extends to
the whole "ctx" union, but a "git show -b" shows that only one extra
layer was added. In order to make both the svcctx pointer and its
storage accessible without further enlarging the appctx structure,
both svcctx and the storage share the same storage as the ctx part.
This is done by having them placed in the union with a protected
overlapping area for svcctx, for which a shadow member is also
present in the storage area:
    union {
            void* svcctx;              // variable accessed by services
            struct {
                    void *shadow;      // shadow of svcctx;
                    char storage[];    // where most services store their data
            };
            union {                    // older commands store here and ignore svcctx
                    ...
            } ctx;
    };
I.e. new applications will use appctx->svcctx while older ones will be
able to continue to use appctx->ctx.*
The whole area (including the pointer's context) is zeroed before any
applet is initialized, and before CLI keyword processor's first invocation,
as it is an important part of the existing keyword processors, which makes
CLI keywords effectively behave like applets.
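As an illustration, a CLI keyword may now set up its context along these
lines (sketch; struct my_cmd_ctx and its field are hypothetical, and
appctx_reserve_svcctx() is the reservation function mentioned earlier):

    struct my_cmd_ctx {        /* hypothetical command context */
        struct proxy *px;      /* restart point for the dump */
    };

    static int cli_parse_my_cmd(char **args, char *payload,
                                struct appctx *appctx, void *private)
    {
        /* reserved from the appctx storage area, already zeroed;
         * appctx->svcctx is made to point to it */
        struct my_cmd_ctx *ctx = appctx_reserve_svcctx(appctx, sizeof(*ctx));

        ctx->px = NULL; /* start from the first proxy */
        return 0;
    }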
The io_handler in "add ssl crt_list" is built around a "while" loop that
only makes forward progress and that doesn't handle its final state as
it's not supposed to be called again once reached. This makes the code
confusing because its construct implies an infinite loop for such a
state (or any other unhandled one). Let's just remove that unneeded loop.
The "show ssl cert" command mixes some generic pointers from the
"ctx.cli" struct with context-specific ones from "ctx.ssl" while both
are in a union. Amazingly, despite the use of both p0 and i0 to store
respectively a pointer to the current ckchs and a transaction id, there
was no overlap with the other pointers used during these operations,
but should these fields be reordered or slightly updated this will break.
Comments were added above the faulty functions to indicate which fields
they are using.
This needs to be backported to 2.5.
The "show ssl crl-file" command mixes some generic pointers from the
"ctx.cli" struct with context-specific ones from "ctx.ssl" while both
are in a union. It's fortunate that the p1 pointer in use is located
before the first one used (it overlaps with old_cafile_entry). But
should these fields be reordered or slightly updated this will break.
This needs to be backported to 2.5.
The "show ssl ca-file <name>" command mixes some generic pointers from
the "ctx.cli" struct and context-specific ones from "ctx.ssl" while both
are in a union. The i0 integer used to store the current ca_index overlaps
with new_crlfile_entry which is thus harmless for now but is at the mercy
of any reordering or addition of these fields. Let's add dedicated fields
into the ssl structure for this.
Comments were added on top of the affected functions to indicate what they
use.
This needs to be backported to 2.5.
The "show ca-file" and "show crl-file" commands mix some generic pointers
from the "ctx.cli" struct and context-specific ones from "ctx.ssl" while
both are in a union. It's fortunate that the p0 pointer in use is located
immediately before the first one used (it overlaps with next_ckchi_link,
and old_cafile_entry is safe). But should these fields be reordered or
slightly updated this will break.
Comments were added on top of the affected functions to indicate what they
use.
This needs to be backported to 2.5.
When "show map" initializes itself, it first takes the reference to the
starting point under a lock, then releases it before switching to state
STATE_LIST, and takes the lock again. The problem is that it is possible
for another thread to remove the first element during this unlock/lock
sequence, and make the list run anywhere. This is of course extremely
unlikely but not impossible.
Let's initialize the pointer in the STATE_LIST part under the same lock,
which is simpler and more reliable.
This should be backported to all versions.
In case of a write error in "show map", the backref is detached but
the list wasn't locked when this is done. The risk is very low, but
two concurrent "show map" commands, one of which fails, or one
"show map" failing while the same entry is being updated, could
cause a crash.
This should be backported to all stable versions.
Commit e7f74623e ("MINOR: stats: don't output internal proxies (PR_CAP_INT)")
in 2.5 ensured we don't dump internal proxies on the stats page, but the
same is needed for "show backend", as since the addition of the HTTP client
it now appears there:
$ socat /tmp/sock1 - <<< "show backend"
# name
<HTTPCLIENT>
This needs to be backported to 2.5.
This command was introduced in 1.8 with commit eceddf722 ("MEDIUM: cli:
'show cli sockets' list the CLI sockets") but its yielding doesn't work.
Each time it enters, it restarts from the last bind_conf but enumerates
all listening sockets again, thus it loops forever. The risk that it
happens in field is low but it easily triggers on port ranges after
400-500 sockets depending on the length of their addresses:
global
stats socket /tmp/sock1 level admin
stats socket 192.168.8.176:30000-31000 level operator
$ socat /tmp/sock1 - <<< "show cli sockets"
(...)
ipv4@192.168.8.176:30426 operator all
ipv4@192.168.8.176:30427 operator all
ipv4@192.168.8.176:30428 operator all
ipv4@192.168.8.176:30000 operator all
ipv4@192.168.8.176:30001 operator all
ipv4@192.168.8.176:30002 operator all
^C
This patch adds the minimally needed restart point for the listener so
that it can easily be backported. Some more cleanup is needed though.
The "show resolvers" command is bogus, it tries to implement a yielding
mechanism except that if it yields it restarts from the beginning, until
it manages to fill the buffer with only line breaks, and faces error -2
that lets it reach the final state and exit.
The risk is low since it requires about 50 name servers to reach that
state, but it's not impossible, especially when using multiple sections.
In addition, the extraneous line breaks, if sent over an interactive
connection, will desynchronize the commands and make the client believe
the end was reached after the first nameserver. This cannot be fixed
separately because that would turn this bug into an infinite loop since
it's the line feed that manages to fill the buffer and stop it.
The fix consists in saving the current resolvers section into ctx.cli.p1
and the current nameserver into ctx.cli.p2.
This should be backported, but that code moved a lot since it was
introduced and has always been bogus. It looks like it has mostly
stabilized in 2.4 with commit c943799c86 so the fix might be backportable
to 2.4 without too much effort.
Try to create a "default" resolvers section at startup, but does not
display any error nor warning. This section is initialized using the
/etc/resolv.conf of the system.
This is opportunistic and with no guarantee that it will work (but it should
on most systems).
This is useful for the httpclient as it allows using the DNS resolver
without any configuration in most cases.
The function is called from the httpclient_pre_check() function to
ensure that we try to create the section before trying to initiate the
httpclient. But it is also called from resolvers.c to ensure the
section is created when the httpclient init was disabled.
Release the expression used by the set-{src,dst}[-port] actions so we
keep valgrind happy upon an exit or a haproxy -c.
Could be backported to every supported version.
Move the resolv.conf parser out of cfg_parse_resolvers so it can be
used separately.
Some changes were made to the memprintf() calls in order to use a char **
instead of a char *. Also, the variables are tested before each memprintf()
so they can be skipped if no warnmsg nor errmsg was set.
Clean up the alert and warning handling in the "parse-resolve-conf"
parser to use the errmsg and warnmsg variables with memprintf().
This will allow splitting the parser and shutting off the alerts and
warnings if needed.
The commit 2eb5243e7 ("BUG/MEDIUM: mux-h1: Set outgoing message to DONE when
payload length is reached") introduced a regression. An internal error is
reported when we try to forward a message with trailers while the
content-length header was specified. Indeed, this case does not exist for H1
messages but it is possible in H2.
This patch should solve the issue #1684. It must be backported as far as
2.4.
This bug was already fixed at many places (stats, promex, lua) but the FCGI
multiplexer is also affected. When there is no content-length specified in
the response and when the END_REQUEST record is delayed, the response may be
truncated because an abort is erroneously detected. If the connection is not
closed because "keep-conn" option is set, the response is aborted at the end
of the server timeout.
This bug is a design issue with the HTX. It should be addressed, but it will
probably not be possible to backport the fix as far as 2.4. So, for now, the
only solution is to explicitly add an EOT block with the EOM flag in this
case.
This patch should fix the issue #1682. It must be backported as far as 2.4.
The commit a6c4a4834 ("BUG/MEDIUM: conn-stream: Don't erase endpoint flags
on reset") was too lax on reset. Only app layer flags must be preserved. On
reset, the endpoint is detached. Thus all flags set by the endpoint itself
or concerning its type must be removed.
Without this fix, we can experience crashes when a stream is released while
a server connection attempt has failed. Indeed, in this case, the endpoint
of the backend conn-stream is reset. But the endpoint type is still set.
Thus when the stream is released, the endpoint is detached again.
This patch is 2.6-specific. No backport needed. This commit depends on the
previous one ("MINOR: conn-stream: Add mask from flags set by endpoint or
app layer").
The httpclient.resolvers.prefer global keyword allows configuring an
ipv4 or ipv6 preference when resolving. This could be useful in
environments where ipv6 is not supported.
By default the httpclient uses the resolvers section whose ID is
"default", the httpclient.resolvers.id global option allows to configure
another section to use.
The hard_error_ssl flag is set when the ssl configuration is explicitly
done for the httpclient.
If no configuration was made, the features are simply disabled and no
alert is emitted.
Cleanup the error handling in the initialization so we rely on the
ERR_CODE and use memprintf() to set the errmsg before printing it at the
end of the functions.
httpclient_set_dst() allows one to set the destination address instead
of using the one in the URL or resolving one from the host.
This function also supports other types of sockets like sockpair@, unix@,
anything that could be used on a server line.
In order to still support this behavior, the address must be set on the
backend in this particular case because the frontend connection does not
support anything other than ipv4 or ipv6.
To allow the http-request set-dst to work for the httpclient DNS
resolving, some changes needed to be done:
- The destination address needs to be set in the frontend (s->csf->dst)
instead of the backend (s->csb->dst) to be able to use
tcp_action_req_set_dst()
- SRV_F_MAPPORTS needs to be set on the proxy in order to allow the port
change in alloc_dst_address()
The httpclient_resolve_init() adds http-request actions which do the
resolving using the Host header of the HTTP request.
The parse_http_req_cond function is directly used over an array of http
rules.
The do-resolve rule uses the "default" resolvers section. If this
section does not exist in the configuration, the resolving is disabled.
The httpclient DNS resolver will need a more efficient URL parser which
splits the URL into parts and does not try to resolve.
httpclient_spliturl uses the http_uri_parser in order to split the URL
into several ists.
The host part of the result is then processed by str2ip2(), which fails
if it is not an IP, allowing us to resolve the domain later.
We rely on the largest ID which was used to open streams to know if the
stream we received STREAM frames for is closed or not. If closed, we return
the same status as the one for a STREAM frame which was already received
on an open stream.
It is possible that we continue to receive retransmitted STREAM frames after
the mux has been released. We rely on the ->rx.streams[].nb_streams counter
to check that the stream was closed. If not, at this time we drop the packet.
This is required to handle the retransmitted frame types when the mux is
released.
We add a counter for the number of streams which were opened or closed by the mux.
After the mux has been released, we can rely on this counter to know if the STREAM
frames are retransmitted ones or not.
That's similar to what was done for conn_streams and connections. The
flags were only set exactly when the relevant pointers were allocated,
so better test the pointer than the flag and stop setting the flag.
Just like for the conn_stream, now that these addresses are dynamically
allocated, there is no single case where the pointer is set without the
corresponding flag, and the flag is used as a permission to dereference
the pointer. Let's just replace the test of the flag with a test of the
pointer and remove all flag assignment. This makes the code clearer
(especially in "if" conditions) and saves the need for future code to
think about properly setting the flag after setting the pointer.
Some of the protocol-level ->connect() functions currently dereference
the connection's destination address while others test it and return an
error. There's normally no more non-bogus code path that calls such
functions without a valid destination address on the connection, so
let's unify these functions and just place a BUG_ON() there, and drop
the useless test that's supposed to return an internal error.
This flag is no longer needed now that it must always match the presence
of a destination address on the backend conn_stream. Worse, before the
previous patch, if it were to be accidentally removed while the address is
present, it
could result in a leak of that address since alloc_dst_address() would first
be called to flush it.
Its usage has a long history where addresses were stored in an area shared
with the connection, but as this is no longer the case, there's no reason
for putting this burden onto application-level code that should not focus
on setting obscure flags.
The only place where that made a small difference is in the dequeuing code
in case of queue redistribution, because previously the code would first
clear the flag, and only later when trying to deal with the queue, would
release the address. It's not even certain whether there would exist a
code path going to connect_server() without calling pendconn_dequeue()
first (e.g. retries on queue timeout maybe?).
Now the pendconn_dequeue() code will rely on SF_ASSIGNED to decide to
clear and release the address, since that flag is always set while in
a server's queue, and its clearance implies that we don't want to keep
the address. At least it remains consistent and there's no more risk of
leaking it.
These functions dynamically allocate a source or destination address but
start by clearing the previous one. There's a non-null risk of leaking
addresses there in case of misuse. Better have them do nothing if the
address was already allocated.
The HTTP/3 implementation must ignore unknown frame types to support
protocol evolution. Clients can deliberately use unknown types to test
that the server is conformant: this principle is called greasing.
Quiche client uses greasing on H3 frame type with a zero length frame.
This reveals a bug in H3 parsing code which causes the transfer to be
interrupted. Fix this by removing the break statement on ret variable.
Now the parsing loop is only interrupted if input buffer is empty or the
demux is blocked.
This should fix frozen HTTP/3 transfers with the quiche client. Thanks
to Lucas Pardue from Cloudflare for his report on the bug. Frédéric
Lecaille quickly found the source of the problem, which helped me write
this patch.
If the request channel buffer is full, H3 demuxing must be interrupted
on the stream until some read is performed. This condition is reported
if the HTX stream buffer qcs.rx.app_buf is full.
In this case, the qcs instance is marked with a new flag, QC_SF_DEM_FULL.
This flag causes the H3 demuxing to be interrupted. It is cleared when
the HTX buffer is read by the conn-stream layer through the rcv_buf
operation.
When the flag is cleared, the MUX tasklet is woken up. However, as the MUX
iocb does not handle Rx for the moment, this is useless. It must be fixed
to prevent possible freezes on POST transfers.
In practice, for the moment the HTX buffer is never full as the current
Rx code is limited by the quic-conn receive buffer size and the
incomplete flow-control implementation. So for now this patch is not
testable under the current conditions.
It seems this multiplier ended up in oblivion. Indeed, a multiplier must be
applied to the loss delay, expressed as an RTT multiplier: 9/8.
So, some packets were detected as lost too soon, causing them to be
retransmitted too early!
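For reference, RFC 9002 sets this time threshold to kTimeThreshold = 9/8,
applied to the greater of the smoothed and latest RTT; a computation sketch
(variable names assumed):

    /* loss_delay = 9/8 * max(latest_rtt, smoothed_rtt) */
    loss_delay = MAX(latest_rtt, smoothed_rtt);
    loss_delay += loss_delay >> 3; /* add 1/8 to apply the 9/8 multiplier */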
If the MUX can neither immediately handle nor buffer a STREAM frame, the
packet containing it must not be acknowledged. This is in conformance
with RFC 9000.
qcc_recv() return codes have been adjusted to differentiate an invalid
frame from an already fully received offset which must be acknowledged.
If a packet contains a STREAM frame but the MUX is not allocated, the
frame cannot be enqueued. According to the RFC9000, we must not
acknowledge the packet under this condition.
This may prevent a bug with Firefox, which kept trying to refresh the web
page. This issue had already been detected before the closing state
implementation: haproxy wasn't emitting CONNECTION_CLOSE and kept
acknowledging STREAM frames despite not handling them.
In the future, it might be necessary to respond with a CONNECTION_CLOSE
if the MUX has already been freed.
In issue #1468 it was reported that sometimes server-side connection
attempts were only validated after the "timeout connect" value, and
that would only happen with an H2 client. A long code analysis with the
output dumps showed only one possible call path: an I/O event on the
frontend while reading had just been disabled calls h2_wake() which in
turn wakes cs_conn_io_cb(), which tries cs_conn_process() and cs_notify(),
which sees that the other side is not blocked (already in CS_ST_CON)
and tries cs_chk_snd() on it. But on that side the connection had just
finished to be set up and not yet woken the stream up, cs_notify()
would then call cs_conn_send() which succeeds and passes the connection
to CS_ST_RDY. The problem is that nothing new happened on the frontend
side so there's no reason to wake the stream up and the backend-side
conn_stream remains in CS_ST_RDY state with the stream never being
woken up.
Once the "timeout connect" strikes, process_stream() is woken up and
finds the connection finally setup, so it ignores the timeout and goes
on.
The number of conditions to meet to reproduce this is huge, which also
explains why the reporter says it's "occasional" and we were never able
to reproduce it in the lab. It needs at least reads to be disabled and
immediately re-enabled on the frontend side (e.g. buffer full) with an
I/O event reported before the poller had an opportunity to be disabled
but with no subscribe being reinstalled, so that sock_conn_iocb() has
no other choice but calling h2_wake(), and exactly at the same time
the backend connection must finish to set up so that it was not yet
reported by the poller, the data were sent and the polling for writes
disabled.
Several factors are to be considered here:
- h2_wake() should probably not call h2_wake_some_streams() for
ret >= 0 (common case), but only if some special event is reported
for at least one stream; that part is sensitive though as in the
past we managed to lose some rare cases (e.g. restart processing
after a pause), and such wakeups are extremely rare so we'd better
make that effort once in a while.
- letting a lazy forward attempt on the frontend confirm a backend
connection establishment is too smart to be reliable. That wasn't
in fact the intent and it's inherited from the very old code where
muxes didn't exist and where it was guaranteed that an event at this
layer would wake everyone up.
Here the best thing to do is to refrain from attempting to forward data
until the connection is confirmed. This will let the poller report the
connect() event to the backend side which will process it as it should
and does in all other cases.
Thanks to Jimmy Crutchfield for having reported useful traces and
tested patches.
This will have to be backported to all stable branches after some
observation. Before 2.6 the function is stream_int_chk_snd_conn(),
and the flag to remove is SI_SB_CON.
The use of co_set_data() should be strictly limited to setting the amount of
existing data to be transmitted. It ought not to be used to decrement the
output after the data have left the buffer, because calculations based on
co_data() would then still include data that are not in the buffer
anymore. Let's use c_rew() for this, which is
made exactly for this purpose, i.e. decrement c->output by as much as
requested.
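For illustration, a minimal sketch of what such a rewind helper looks
like, assuming a channel whose <output> field counts the bytes scheduled
for transmission (the real helper lives in the channel API):

    static inline void c_rew(struct channel *c, unsigned int adv)
    {
        /* rewind the output count only; buffer contents are untouched */
        BUG_ON(adv > c->output); /* cannot rewind more than was scheduled */
        c->output -= adv;        /* decrement c->output by <adv> bytes */
    }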
When HTX blocks are transferred from the HTTP client context to the request
channel, via the htx_xfer_blks() function, the metadata must also be counted,
in addition to the data size. Otherwise, the expected payload size will not
be copied because room for the metadata of an HTX block (8 bytes) will be
reserved. And if the payload size is lower than 8 bytes, nothing will be
copied at all. Thus only a zero-copy would be able to transfer the payload.
This issue is 2.6-specific, no backport is needed.
When the HTTP client consumes the response, it loops on the HTX message to
copy the blocks' content and it removes blocks by calling htx_remove_blk().
But this function removes a block and returns the next one in the HTX
message. Its result must be used instead of calling htx_get_next(). It is
especially important because the block passed to htx_get_next() in the loop
was just removed. It only worked because the message is not defragmented
during the loop.
In addition, the loop on the response was simplified to iterate on blocks
instead of positions.
This patch must be backported to 2.5.
Only the CS_EP_ERROR flag is now removed from the endpoint when a reset is
performed. When a new endpoint is allocated, flags are preserved. It is
the caller's responsibility to remove other flags, depending on its needs.
Concretely, during a connection retry or an L7 retry, we must preserve
flags. In tcpcheck and the CLI, we reset flags.
This patch is 2.6-specific. No backport needed.
The SSL_SERVER_VERIFY_* constants were incorrectly set on the httpclient
server verify. The right constants are SSL_SOCK_VERIFY_* .
This could cause issues when using "httpclient-ssl-verify" or when the
SSL certificates can't be loaded.
No backport needed.
This function is used to parse the QUIC packets carried by a UDP datagram.
When a correct packet could be found, the ->len RX packet structure value
is set to the packet length value. On the contrary, it is set to the
remaining number of bytes in the UDP datagram if no correct QUIC packet
could be found. So, there is no need to make this function return a status
value: the caller can parse all the QUIC packets carried by a UDP datagram
without it.
Any client Initial packet carried in a datagram smaller than
QUIC_INITIAL_PACKET_MINLEN (200) bytes must be discarded. This does not mean
we must discard the entire datagram. So we must at least try to parse the
packet length before dropping the packet and return its length from
qc_lstnr_pkt_rcv().
A crash is possible under such circumstances:
- the congestion window is drastically reduced to its minimal value
when a quic listener is experiencing extreme packet loss;
- we enqueue several STREAM frames to be resent and some of them could not be
transmitted;
- some of the frames still in flight are acknowledged, which triggers the
release of the stream memory;
- when we come back to send the remaining STREAM frames, haproxy
crashes when it tries to build them.
To fix this issue, we mark the STREAM frames as lost upon loss detection.
Then a lookup is performed for the stream the STREAM frames are attached
to before building them. They are released if the stream is no longer
available or if the data range of the frame has already been consumed.
When we have to probe the peer, we must first try to send new data. This is
done here by waking up the mux after having set the maximum number of
datagrams to send to QUIC_MAX_NB_PTO_DGRAMS (2). Of course, this is only the
case if the mux was subscribed to SEND events.
This may happen in rare cases with extreme packet loss (30% for both TX and
RX) which leads the congestion window to decrease down to its minimal value
(two datagrams). Under such circumstances, no ack-eliciting frame can be
added to a packet by qc_build_frms(). In this case we must cancel the packet
building process if there is no ACK or probe (PING frame) to send.
This function must return a successful status as soon as it could build
a frame to be embedded in a packet. This behavior was broken by the last
modifications. This was due to a dangerous "ret = 1" statement inside
a loop. This statement must be reached only if we go out of a switch/case
after a "break" statement.
Add comments to mention this information.
When we are probing, we do not receive packets, and furthermore all ACK
frames have already been sent, so it is useless to send ACKs when probing
the peer. This modification does not reset the flag which marks the
connection as requiring an ACK frame to be sent. If this is the case, this
will be taken into account after the probing process.
Make the two I/O handlers quic_conn_io_cb() and quic_conn_app_io_cb() call
qc_dgrams_retransmit() after a probing retransmission need was detected by
the timer task (qc_process_timer()).
We must modify qc_prep_pkts() to support QUIC_TLS_ENC_LEVEL_NONE as <next_tel>
parameter when called from qc_dgrams_retransmit().
When probing retransmissions with old data are needed for the connection we
mark the packets as probing with old data so that we can track them when they
are acknowledged: we do not resend frames carrying old data when they are lost.
Modify qc_send_app_pkt() to distinguish the case where it sends new data
from the case where it sends old data during probing retransmissions.
We add an <old_data> boolean parameter to this function to do so. The mux
never directly sends old data when probing retransmissions are needed by
the connection.
This function is used to requeue the TX frames from TX packets which have
been detected as lost. The modifications consist in no longer resending
frames from frames which were duplicated to probe the peer: this is useless.
Only the loss of the original frames must be taken into account because they
are detected as lost before the retransmitted ones. If the latter are also
detected as lost, other duplicated frames would have been retransmitted
before their loss detection.
qc_prep_fast_retrans() and qc_prep_hdshk_fast_retrans() are modified to
take two lists of frames as parameters. Two lists are needed for
qc_prep_hdshk_fast_retrans() to build datagrams with two packets during the
handshake. qc_prep_fast_retrans() needs two lists of frames to be used
to send two datagrams with one list per datagram.
We want to be able to resend frames from lists of frames during handshakes,
to resend datagrams with the same frames as during the first transmissions.
This drastically decreases the chances of frame fragmentation due to the
variable lengths of the packet fields. Furthermore the frames are no longer
duplicated when retransmitted from one packet to another. This must be the
case only during packet loss detection.
qc_dup_pkt_frms() is there to duplicate the frames from an input list to an
output list. A distinction is made between STREAM frames and the other ones
because we can rely on the "acknowledged offset", the aim of which is to
track the number of bytes which were acknowledged from TX STREAM frames.
qc_release_frm(), in addition to releasing the frame passed as parameter,
also marks the duplicated STREAM frames as acknowledged.
qc_send_hdshk_pkts() is the qc_send_app_pkts() counterpart to send datagrams
from at most two lists of frames, to be able to coalesce packets from two
different packet number spaces.
qc_dgrams_retransmit() is there to probe the peer with datagrams depending on
the needs of the packet number spaces, which must be flagged with
QUIC_FL_PKTNS_PROBE_NEEDED by the PTO timer task (qc_process_timer()).
Add the QUIC_FL_CONN_RETRANS_NEEDED connection flag definition to mark a
quic_conn struct as needing a retransmission.
Add QUIC_FL_PKTNS_PROBE_NEEDED to mark a packet number space as needing a
datagram probing.
Set these flags from qc_process_timer() to trigger datagram probings.
Do not initiate datagram probings anymore from any QUIC encryption level.
This will be done from the I/O handlers (quic_conn_io_cb() during handshakes
and quic_conn_app_io_cb() after handshakes).
Add QUIC_FL_TX_PACKET_COALESCED flag to mark a TX packet as coalesced with others
to build a datagram.
Ensure we do not directly retransmit frames from such coalesced packets. They must
be retransmitted from their packet number spaces to avoid duplications.
We want to track the frames which have been duplicated during retransmissions
so as to avoid uselessly retransmitting frames which have already been
acknowledged. The new ->origin member is there to store the frame from which
a copy was made, and ->reflist is a list to store the frames which are copies.
Also ensure all the frames are zeroed and that their ->reflist list member is
initialized.
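As a hedged sketch, the relationship may be pictured like this (all other
members of the real frame structure are omitted):

    struct quic_frame {
        /* ... other members of the real structure are omitted ... */
        struct quic_frame *origin; /* frame this one was duplicated from, or NULL */
        struct list reflist;       /* frames which are copies of this one */
    };

    /* upon allocation, after zeroing the frame:
     *     LIST_INIT(&frm->reflist);
     */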
Add QUIC_FL_TX_FRAME_ACKED flag definition to mark a TX frame as acknowledged.
Add a loop in the bidi STREAM function. This will repeatedly call
qcc_decode_qcs() and dequeue buffered frames.
This is useful when the reception of more data is interrupted because the
MUX buffer was full. qcc_decode_qcs() has probably freed some space so it
is useful to immediately retry the reception of buffered frames from the
qcs tree.
This may fix occurrences of stalled Rx transfers with large payloads.
Note however that there is still room for improvement. The conn-stream
layer is not able at this moment to retrigger demuxing. This is because
the mux io-handler does not handle Rx: this may continue to cause
stalled transfers.
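A hedged sketch of the added loop; the dequeuing helper name is
illustrative:

    int progress;

    do {
        qcc_decode_qcs(qcc, qcs);             /* demux; may free buffer space */
        progress = qcs_dequeue_buffered(qcs); /* hypothetical: non-zero when a
                                               * buffered frame was consumed */
    } while (progress);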
Previously, the h3 layer was not able to demux a DATA frame if it was not
fully received in the Rx buffer. This causes an evident limitation and
prevents demuxing a frame bigger than the buffer.
Improve h3_data_to_htx() to support partial frame demuxing. The demux
state is preserved in new h3s fields: this is useful to keep the
current type and length of the frame being demuxed.
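As a hedged sketch, such a demux context may carry fields like the
following (names are illustrative):

    struct h3s {
        uint64_t demux_frame_len; /* remaining payload of the frame being demuxed */
        uint64_t demux_frame_type; /* type of the frame being demuxed */
    };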
Define a new structure h3s used to provide context for an H3 stream. This
structure is allocated and stored in the qcs thanks to the previous commit
which provides app-layer context storage.
For now, h3s is empty. It will soon be completed to support
stateful demuxing: this is required to be able to demux an incomplete
frame when the rx buffer is full.
Define 2 new callbacks for qcc_app_ops: attach and detach. They are
called when a qcs instance is respectively allocated and freed. If
implemented, they can allocate a custom context stored in the new
abstract field ctx of qcs.
For now, h3 and hq-interop do not use these new callbacks. They will
soon be implemented by the h3 layer to allocate a context used for
stateful demuxing.
This change is required to support the demuxing of H3 frames bigger than
a buffer.
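A hedged sketch of the extended ops structure (only the new callbacks are
shown, the exact signatures are illustrative):

    struct qcc_app_ops {
        /* ... existing callbacks (init, decode_qcs, release, ...) ... */
        int (*attach)(struct qcs *qcs);   /* called when a qcs is allocated */
        void (*detach)(struct qcs *qcs);  /* called when a qcs is freed */
    };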
Edit the functions used for HEADERS and DATA parsing. They now return
the number of bytes handled.
This change will help to demux H3 frames bigger than the buffer.
Improve the reception of STREAM frames. In qcc_recv(), if the frame is
bigger than the remaining space in the rx buffer, do not reject it wholly.
Instead, copy as much data as possible. The rest of the data is
buffered.
This is necessary to handle H3 frames bigger than a buffer. The H3 code
does not demux until the frame is complete or the buffer is full.
Without this, the transfer of payloads larger than the Rx buffer could
rapidly freeze.
Handle wrapping buffer in h3_data_to_htx(). If data is wrapping, first
copy the contiguous data, then copy the data in front of the buffer.
Note that h3_headers_to_htx() is not able to handle wrapping data. For
the moment, a BUG_ON was added as a reminder. This case never happened,
most probably because HEADERS is the first frame of the stream.
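For illustration, a minimal sketch of the two-step copy for a wrapping
buffer, assuming <buf> holds the frame payload of length <len> and <htx>
is the destination message; the real function also enforces the HTX room
limit:

    size_t contig = b_contig_data(buf, 0); /* bytes before the wrap point */

    if (contig > len)
        contig = len;
    htx_add_data(htx, ist2(b_head(buf), contig));           /* contiguous part */
    if (len > contig)                                       /* wrapped part at */
        htx_add_data(htx, ist2(b_orig(buf), len - contig)); /* buffer start    */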
Always set the HTX flag HTX_SL_F_XFER_LEN for http/3. This is correct
because the size of H3 requests is always known thanks to the protocol
framing.
This may fix occurrences of incomplete POST requests when the client side
of the connection has been closed before.
Add new qcs fields to count the sum of bytes received for each stream.
This is necessary to enforce flow-control for reception on the peer.
For the moment, the implementation is partial. No MAX_STREAM_DATA or
FLOW_CONTROL_ERROR frames are emitted. BUG_ON statements are here as a
reminder.
This means that for the moment we do not support POST payloads greater
than the initial max-stream-data announced (256k currently).
At least, we now ensure that we never buffer a frame which overflows the
flow-control limit: this ensures that the memory consumption per stream
should stay under control.
qcs and its fields are not properly freed if the conn-stream allocation
fails in qcs_new(). Fix this by adding proper deinit code with a
dedicated label.
Comments were not properly edited since the splitting of the functions for
stream emission. Also the "payload" argument has been renamed to "in" as it
better reflects the function's purpose.
This adds a deinit_idle_conns() function that's called on deinit to
release the per-thread idle connection management tasks. The global
task was already taken care of.
The freeing of pre-check callbacks was missing when this feature was
recently added with commit b53eb8790 ("MINOR: init: add the pre-check
callback"), let's do it to make valgrind happy.
Tim reported in issue #1676 that just like startup_logs, trash buffers
are not released on deinit since they're thread-local, but valgrind
notices it when quitting before creating threads like "-c -f ...". Let's
just subscribe the function to deinit in addition to threads' end.
The two "free(x);x=NULL;" in free_trash_buffers_per_thread() were
also simplified using ha_free().
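For reference, a hedged sketch of the ha_free() idiom (the real macro is
slightly more elaborate):

    #define ha_free(x) do {  \
        free(*(x));          \
        *(x) = NULL;         \
    } while (0)

    /* usage sketch: ha_free(&ptr); replaces free(ptr); ptr = NULL; */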
Tim reported in issue #1676 that we don't release startup logs if we
warn during startup and quit before creating threads (e.g. -c -f ...).
Let's subscribe deinit_errors_buffers() to both thread's end and
deinit. That's OK since it uses both per-thread and global variables,
and is idempotent.
Low footprint client machines may not have enough memory to download a
complete 16KB TLS record at once. With the new option the maximum
record size can be defined on the server side.
Note: Before limiting the record size on the server side, a client should
consider using the TLS Maximum Fragment Length Negotiation Extension defined
in RFC6066.
This patch fixes GitHub issue #1679.
In issue #1677, Tim reported that we don't correctly free some shared
pools on exit. What happens in fact is that pool_destroy() is meant to
be called once per pool *pointer*, as it decrements the use count for
each pass and only releases the pool when it reaches zero. But since
pool_destroy_all() iterates over the pools list, it visits each pool
only once and will not eliminate some of them, which thus remain in the
list.
In an ideal case, the function should loop over all pools for as long
as the list is not empty, but that's pointless as we know we're exiting,
so let's just set the users count to 1 before the call so that
pool_destroy() knows it can delete and release the entry.
This could be backported to all versions (memory.c in 2.0 and older) but
it's not a real problem in practice.
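A hedged sketch of the resulting loop (list and field names follow the
description above):

    void pool_destroy_all(void)
    {
        struct pool_head *entry, *back;

        list_for_each_entry_safe(entry, back, &pools, list) {
            /* we're exiting: force the use count so that pool_destroy()
             * deletes and releases the entry for sure */
            entry->users = 1;
            pool_destroy(entry);
        }
    }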
We thought that we could get rid of some DISGUISE() with commit a80e4a354
("MINOR: fd: add functions to set O_NONBLOCK and FD_CLOEXEC") thanks to
the calls being in a function but that was without counting on Coverity.
Let's put it directly in the function since most if not all callers don't
care about this result.
It appears that it is safe to perform a clean deinit at this point, so
let's do this to exercise the deinit paths some more.
Running `valgrind --leak-check=full --show-leak-kinds=all ./haproxy -vv` with
this change reports:
==261864== HEAP SUMMARY:
==261864== in use at exit: 344 bytes in 11 blocks
==261864== total heap usage: 1,178 allocs, 1,167 frees, 1,102,089 bytes allocated
==261864==
==261864== 24 bytes in 1 blocks are still reachable in loss record 1 of 2
==261864== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==261864== by 0x324BA6: hap_register_pre_check (init.c:92)
==261864== by 0x155824: main (haproxy.c:3024)
==261864==
==261864== 320 bytes in 10 blocks are still reachable in loss record 2 of 2
==261864== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==261864== by 0x26E54E: cfg_register_postparser (cfgparse.c:4238)
==261864== by 0x155824: main (haproxy.c:3024)
==261864==
==261864== LEAK SUMMARY:
==261864== definitely lost: 0 bytes in 0 blocks
==261864== indirectly lost: 0 bytes in 0 blocks
==261864== possibly lost: 0 bytes in 0 blocks
==261864== still reachable: 344 bytes in 11 blocks
==261864== suppressed: 0 bytes in 0 blocks
which is looking pretty good.
A config like the following:
global
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
resolvers unbound
nameserver unbound 127.0.0.1:53
will report the following leak when running a configuration check:
==241882== 6,991 (6,952 direct, 39 indirect) bytes in 1 blocks are definitely lost in loss record 8 of 13
==241882== at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==241882== by 0x25938D: cfg_parse_resolvers (resolvers.c:3193)
==241882== by 0x26A1E8: readcfgfile (cfgparse.c:2171)
==241882== by 0x156D72: init (haproxy.c:2016)
==241882== by 0x156D72: main (haproxy.c:3037)
because the `.px` member of `struct resolvers` is not freed.
The offending allocation was introduced in
c943799c86 which is a reorganization that
happened during development of 2.4.x. This fix can likely be backported without
issue to 2.4+ and is likely not needed for earlier versions as the leak happens
during deinit only.
To make the deinit function a proper inverse of the init function we need to
free the `http_err_chunks`:
==252081== 311,296 bytes in 19 blocks are still reachable in loss record 50 of 50
==252081== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==252081== by 0x2727EE: http_str_to_htx (http_htx.c:914)
==252081== by 0x272E60: http_htx_init (http_htx.c:1059)
==252081== by 0x26AC87: check_config_validity (cfgparse.c:4170)
==252081== by 0x155DFE: init (haproxy.c:2120)
==252081== by 0x155DFE: main (haproxy.c:3037)
A memory leak was introduced when ignore-empty option was added to redirect
rules. If there is no location, when this option is set, the redirection is
aborted and the processing continues. But when this happened, the trash buffer
allocated to format the redirect response was not released.
The bug was introduced by commit bc1223be7 ("MINOR: http-rules: add a new
"ignore-empty" option to redirects.").
This patch should fix the issue #1675. It must be backported to 2.5.
If the "close-spread-time" option is set to "infinite", active
connection closing during a soft-stop can be disabled. The 'connection:
close' header or the GOAWAY frame will not be added anymore to the
server's response and active connections will only be closed once the
clients disconnect. Idle connections will not be closed all at once when
the soft-stop starts anymore, and each idle connection will follow its
own timeout based on the multiple timeouts set in the configuration (as
is the case during regular execution).
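For illustration:

    global
        close-spread-time infinite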
This feature request was described in GitHub issue #1614.
This patch should be backported to 2.5. It depends on 'MEDIUM: global:
Add a "close-spread-time" option to spread soft-stop on time window'.
HAProxy crashes when "show ssl ca-file" is being called on a ca-file
which contains a lot of certificates. (127 in our test with
/etc/ssl/certs/ca-certificates.crt).
The root cause is that the function does not yield when there is no
available space left when writing the details, and we could write a lot.
To fix the issue, we try to put the data with ci_putchk() after every
show_cert_detail() and we yield if the ci_putchk() fails.
This patch also cleans up a little bit the code:
- the end label is now a real end with a return 1;
- i0 is used instead of (long)p1
- the ID is stored upon yield
Since the httpclient verify now has a fallback which disables the SSL in
the httpclient without exiting haproxy at startup, we can safely
re-enable it by default.
It could still be disabled with "httpclient-ssl-verify none".
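For example, to keep it disabled:

    global
        httpclient-ssl-verify none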
The cafile_tree was never freed upon deinit, making valgrind and ASAN
complain when haproxy quits.
This could be backported as far as 2.2 but it requires the
ssl_store_delete_cafile_entry() helper from
5daff3c8ab.
Emit a warning when the ca-file couldn't be loaded for the httpclient,
and disable the SSL of the httpclient.
We must never be in a case where the verify is disabled without any
configuration, so better disable the SSL completely.
Move the check on the scheme above the initialization of the applet so
we could abort before initializing the appctx.
There were plenty of leftovers from old code that were never removed
and that are not needed at all since these files do not use any
definition depending on fcntl.h, let's drop them.
This gets rid of most open-coded fcntl() calls, some of which were passed
through DISGUISE() to avoid a useless test. The FD_CLOEXEC was most often
set without preserving previous flags, which could become a problem once
new flags are created. Now this will not happen anymore.
Instead of seeing each location manipulate the fcntl() themselves and
often forget to check previous flags, let's centralize the functions to
do this. It also allows to drop fcntl.h from most call places and will
ease the adoption of different OS-specific mechanisms if needed. Note
that the fd_set_nonblock() function purposely doesn't check the previous
flags as it's meant to be used on new FDs only.
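A hedged sketch of the two centralized helpers (the exact return
conventions are illustrative):

    int fd_set_nonblock(int fd)
    {
        /* meant for freshly created FDs only, hence no F_GETFL here */
        return fcntl(fd, F_SETFL, O_NONBLOCK);
    }

    int fd_set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD); /* preserve previously set flags */

        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }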
Despite what the 'close-spread-time' option should do, the
'connection:close' header was always added to HTTP responses during
soft-stops even with a soft-stop window defined.
This patch adds the proper random based closing to HTTP connections
during a soft-stop (based on the time left in the soft close window).
It should be backported to 2.5 once 'MEDIUM: global: Add a
"close-spread-time" option to spread soft-stop on time window' is
backported as well.
Some older systems may routinely return EWOULDBLOCK for some syscalls
while we tend to check only for EAGAIN nowadays. Modern systems define
EWOULDBLOCK as EAGAIN so that solves it, but on a few older ones (AIX,
VMS etc) both are different, and for portability we'd need to test for
both or we never know if we risk to confuse some status codes with
plain errors.
There were few entries, the most annoying ones are the switch/case
because they require to only add the entry when it differs, but the
other ones are really trivial.
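The trivial entries amount to the following pattern; as a hedged
illustration (the macro name is hypothetical, the actual patch adds the
extra case explicitly at each place):

    #if defined(EWOULDBLOCK) && EWOULDBLOCK != EAGAIN
    # define ERRNO_IS_BLOCKING(e) ((e) == EAGAIN || (e) == EWOULDBLOCK)
    #else /* modern systems: EWOULDBLOCK == EAGAIN */
    # define ERRNO_IS_BLOCKING(e) ((e) == EAGAIN)
    #endif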
__comp_fetch_init() only presets the maxzlibmem, and only when both
USE_ZLIB and DEFAULT_MAXZLIBMEM are set. The intent is to preset a
default value to protect the system against excessive memory usage
when no setting is set by the user.
Nowadays the entry in the global struct is always there so there's no
point anymore in passing via a constructor to possibly set this value.
Let's go the cleaner way by always presetting DEFAULT_MAXZLIBMEM to 0
in defaults.h unless these conditions are met, and always assigning it
instead of pre-setting the entry to zero. This is more straightforward
and removes some ifdefs and the last constructor. In addition, now the
setting has a chance of being found.
The constructor present there could be replaced with an initcall.
This one is set at level STG_PREPARE because it also zeroes the
lock_stats, and it's a bit odd that it could possibly have been
scheduled to run after other constructors that might already
preset some of these locks by accident.
Transport layers (raw_sock, ssl_sock, xprt_handshake and xprt_quic)
were using 4 constructors and 2 destructors. The 4 constructors were
replaced with INITCALL and the destructors with REGISTER_POST_DEINIT()
so that we do not depend on this anymore.
Pollers are among the few remaining blocks still using constructors
to register themselves. That's not needed anymore since the initcalls
so better turn to initcalls.
On some systems, the hard limit for ulimit -n may be huge, in the order
of 1 billion, and using this to automatically compute maxconn doesn't
work as it requires way too much memory. Users tend to hard-code maxconn
but that's not convenient to manage deployments on heterogenous systems,
nor when porting configs to developers' machines. The ulimit-n parameter
doesn't work either because it forces the limit. What most users seem to
want (and it makes sense) is to respect the system imposed limits up to
a certain value and cap this value. This is exactly what fd-hard-limit
does.
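For illustration, a hedged configuration sketch:

    global
        # respect the system's hard limit, but never use more than 100000 FDs
        fd-hard-limit 100000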
This addresses github issue #1622.
Almost all of our hash-based LB algorithms are implemented as special
cases of something that can now be achieved using sample expressions,
and some of them have adopted some options to adapt their behavior in
ways that could also be achieved using converters.
There are users who want to hash other parameters that are combined
into variables, and who set headers from these values and use
"balance hdr(name)" for this.
Instead of constantly implementing specific options and having users
hack around when they want a real hash, let's implement a native hash
mode that applies to a standard sample expression. This way, any
fetchable element (including variables) may be used to construct the
hash, even modified by any converter if desired.
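For illustration, a hedged example hashing a variable through a converter
(the header name is illustrative):

    backend app
        http-request set-var(txn.key) req.hdr(x-tenant)
        balance hash var(txn.key),sdbm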
Any type except bool can be cast to bin, while bool can be cast to string.
That's a bit inconsistent, and it prevents a boolean from being used as
the input of a hash function while any other type can. This is a
problem when passing via variable where someone could use:
... set-var(txn.bar) always_false
to temporarily disable something, but this would result in an empty
hash output when later doing:
... var(txn.bar),sdbm
Instead of using c_int2bin() as is done for the string output, better
enforce a set of inputs of exactly 0 or 1 so that a poorly written sample
fetch function does not result in a difficult-to-debug hash output.
Surprisingly, while almost all calls to a sample cast function carefully
avoid calling c_none(), sample_fetch_as_type() makes no effort regarding
this, while by nature, the function is most often called with an expected
output type similar to the one of the expression. Let's add it to shorten
the most common call path.
The use_backend and use-server contexts were not enumerated in
smp_resolve_args, and while use-server doesn't currently take an
expression, at least use_backend supports that, and both entries ought
to be listed for completeness. Now an error in a use_backend rule
becomes more precise, from:
[ALERT] (12373) : config : parsing [use-srv.cfg:33]: unable to find
backend 'foo' referenced in arg 1 of sample fetch
keyword 'nbsrv' in proxy 'echo'.
to:
[ALERT] (12307) : config : parsing [use-srv.cfg:33]: unable to find
backend 'foo' referenced in arg 1 of sample fetch
keyword 'nbsrv' in use_backend expression in proxy
'echo'.
This may be backported though this is totally harmless.
Since commit dd7e6c6dc ("BUG/MINOR: http-rules: completely free incorrect
TCP rules on error") free_act_rule() is called on some error paths, and one
of them involves incomplete redirect rules that may cause a crash if the
rule wasn't yet initialized, as shown in this config snippet:
frontend ft
mode http
bind *:8001
http-request redirect location /%[always_false,sdbm]
Let's simply make release_http_redir() more robust against null redirect
rules.
No backport needed since it seems that the only way to trigger this was
the extra check above that was merged during 2.6-dev.
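A hedged sketch of the hardening (the real function then frees the rule's
members as before):

    void release_http_redir(struct act_rule *rule)
    {
        struct redirect_rule *redir = rule->arg.redir;

        if (!redir) /* the rule may not have been fully initialized */
            return;
        /* ... free the redirect rule's members as before ... */
    }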
The function checking captures defined in tcp-request content ruleset didn't
use the right rule arguments. "arg.trk_ctr" was used instead of "arg.cap".
This patch must be backported as far as 2.2.
Since 2.5, it is possible to define TCP/HTTP rulesets in defaults
sections. However, rules defining a capture in a defaults section were not
properly handled because they were not shared with the proxies inheriting
from the defaults section. This led to a crash when haproxy tried to store
a new capture.
So now, to fix the issue, when a new proxy is created, the list of captures
points to the list of its defaults section. It may be NULL or not. All new
captures are prepended to this list. It is not a problem to share the same
defaults section between several proxies, because it is not altered and we
take care not to release it when the corresponding proxies are freed but
only when the defaults proxies are freed. To do so, defaults proxies are now
unreferenced at the end of the free_proxy() function instead of the beginning.
This patch should fix the issue #1674. It must be backported to 2.5.
Captures must only be defined in proxies with the frontend capabilities or
in defaults sections used by proxies with the frontend capabilities. Thus,
an extra check is added to be sure a defaults section defining a capture
will never be referenced by a backend.
Note that in this case, only named captures in "tcp-request content" or
"http-request" rules are possible. It is not possible in a defaults section
to declare a capture slot. Not yet at least.
This patch must be backported to 2.5. It is related to issue #1674.
When using qc_stream_desc_ack(), the stream instance may be freed if
there is no more data in its buffers. This also means that all frames
still stored waiting for ACK for this stream are freed via
qc_stream_desc_free().
This is particularly important in quic_stream_try_to_consume() where we
loop over the frames tree of the stream. A use-after-free is present in
case the stream has been freed, in the trace "stream consumed" which
dereferences the frame. Fix this by first checking whether the stream has
been freed or not.
This bug was detected by using ASAN + quic traces enabled.
It's long been known that queues didn't scale with threads for various
reasons ranging from the cost of the queue lock to the cost of the
massive amount of inter-thread wakeups.
But some recent reports showing deplorable perfs with threads used at
100% CPU helped us notice that the two elements above add on top of
each other:
- with plenty of inter-thread wakeups, the scheduler takes a lot of
time to dequeue pending tasks from the shared queue ;
- the lock held by the scheduler to do this slows down subsequent
task_wakeup() calls from the queue that are made under the
queue's lock
- the queue's lock slows down addition of new requests to the queue
and adds up to the number of needed queue entries for a steady
traffic.
But the cost of the shared queue has no reason for being paid because
it had already been paid when process_stream() added the request to
the queue. As such an instant wakeup is perfectly suited for this.
This is exactly what this patch does, it uses tasklet_instant_wakeup()
to dequeue pending requests, which has the effect of not bloating the
shared queue, hence not requiring the global queue lock, which in turn
results in the wakeup to be much faster, and the queue lock to be much
shorter. In the end, a test with 4k concurrent connections that was
being limited to 40-80k requests/s before with 16 threads, some of
which were stuck at 100% CPU now reaches 570k req/s with 4% idle.
Given that it's been found that it was possible to trigger the watchdog
on the queue lock under extreme conditions, and that such conditions
could happen when users want to protect their servers during a DoS, it
would definitely make sense to backport it to the most recent releases
(2.5 and 2.4 seem like good candidates especially because their scheduler
is modern enough to receive the change above). If a backport is performed,
the following patch is needed:
MINOR: task: add a new task_instant_wakeup() function
Remove the CS_EP_EOS flag erroneously set in qc_rcv_buf().
This fixes POST with abortonclose. Previously, the request was preemptively
aborted by haproxy due to the incorrect EOS flag.
For the moment, the EOS flag is not set anymore. It should be set to warn
about a premature close from the client.
Since the idle connections management changed to use eb-trees instead of MT
lists, a lock must be acquired to manipulate servers idle/safe/available
connection lists. However, an unprotected use remained in
connect_server(), when a connection is removed from an idle list because the
mux has no more streams available. Thus it is possible to remove a connection
from an idle list on one thread, while another one is looking for an idle
connection. Of course, this may lead to a crash.
To fix the bug, we must take care to acquire the idle connections lock
first. The bug was introduced by the commit f232cb3e9 ("MEDIUM: connection:
replace idle conn lists by eb trees").
The patch must be backported as far as 2.4.
Temporarily disable the SSL verify by default in the httpclient. The
initialization of the @system-ca during the init of the httpclient is a
problem in some cases.
The verify can be reactivated with "httpclient-ssl-verify required" in
the global section.
The httpclient HTTPS requests now enable the "verify required" option.
To achieve this, the "@system-ca" ca-file is configured in the
httpclient ssl server, which means all the system CAs will be loaded at
haproxy startup.
Change the init order of the httpclient: a different init sequence is
required to allow a more complicated init.
The init is split in two parts:
- the first part is executed before check_config_validity(), which
allows creating a proxy and doing more advanced stuff than in STG_INIT,
because we might want to use stuff already initialized in haproxy (trash
buffers for example)
- the second part is executed after check_config_validity(), currently
it is used for the log configuration.
This adds a call to function <fct> to the list of functions to be called at
the step just before the configuration validity checks. This is useful when you
need to create things like it would have been done during the configuration
parsing and where the initialization should continue in the configuration
check.
It could be used for example to generate a proxy with multiple servers using
the configuration parser itself. At this step the trash buffers are allocated.
Threads are not yet started so no protection is required. The function is
expected to return non-zero on success, or zero on failure. A failure will make
the process emit a succinct error message and immediately exit.
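A hedged usage sketch, assuming the REGISTER_PRE_CHECK() initcall wrapper
around hap_register_pre_check() (the callback name is illustrative):

    static int my_pre_check(void)
    {
        /* runs just before the config validity checks; trash buffers are
         * allocated and threads are not started yet */
        return 1; /* non-zero = success, zero = failure */
    }

    REGISTER_PRE_CHECK(my_pre_check);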
A conn-stream is never detached from an endpoint or an application alone,
except on a reset. Thus, to avoid any error, these functions are now
private. And a cs_destroy() function is added to destroy a conn-stream. This
function is called when a stream is released, on the front and back
conn-streams, and when a health-check is finished.
When a client abort is detected with the server conn-stream in CS_ST_INI
state, there is no reason to detach the endpoint because we know there is no
endpoint attached to this conn-stream. This patch depends on the commit
"BUG/MEDIUM: conn-stream: Set back CS to RDY state when the appctx is
created".
When an appctx is created on the server side, we now set the corresponding
conn-stream to ready state (CS_ST_RDY). When it happens, the backend
conn-stream is in CS_ST_INI state. It is not consistent to let the
conn-stream in this state because it means it is possible to have a target
installed in CS_ST_INI state, while with a connection, the conn-stream is
switched to CS_ST_RDY or CS_ST_EST state.
It is especially ambiguous because we may be tempted to think there is no
endpoint attached to the conn-stream before the CS_ST_CON state. And it is
indeed the reason for a bug leading to a crash because a cs_detach_endp() is
performed if an abort is detected on the backend conn-stream in CS_ST_INI
state. With a mux or an appctx attached to the conn-stream, the "->endp"
field is set to NULL. It is unexpected. The API will be changed to be sure
it is not possible. But it exposes a consistency issue with applets.
So, the conn-stream must not stay in CS_ST_INI state when an appctx is
attached. But there is no reason to set it in CS_ST_REQ. The conn-stream
must be set to CS_ST_RDY to handle applets and connections in the same
way. Note that if only the target is set but no appctx is created, the
backend conn-stream is switched from CS_ST_INI to CS_ST_REQ state to be able
to create the corresponding appctx. This part is unchanged.
This patch depends on the commit "MINOR: backend: Don't allow to change
backend applet".
The ambiguity exists on previous versions. But the issue is
2.6-specific. Thus, no backport is needed.
This part was inherited from haproxy-1.5. But for a while now (at least
since 1.8), the backend applet, once created, is no longer changed. Thus
there is no
reason to still check if the target has changed. And in fact, if it was
still possible, there would be a memory leak because the old applet would be
lost and never released.
There is no reason to backport this fix because the leak only exists on a
dead code path.
When we want to serve a resource from the cache, if the applet creation
fails, the "cache-use" action must not yield. Otherwise, the stream will
hang. Instead, we now disable the cache. Thus the request may be served by
the server.
This patch must be backported as far as 1.8.
cs_applet_shut() now relies on the CS_EP_SH* flags to perform the applet
shutdown. It means the applet release callback is called if neither the
CS_EP_SHR nor the CS_EP_SHW flag is set. And it sets these flags, CS_EP_SHRR
and CS_EP_SHWN more specifically, before exiting.
This way, cs_applet_shut() is really the equivalent of cs_conn_shut().
This function does not release the applet but only calls the applet release
callback. It is equivalent to cs_conn_shut() but for applets. Thus the
function is renamed cs_applet_shut().
These functions don't close the connection but only perform shutdowns for
reads and writes at the mux level. It is a bit ambiguous. Thus,
cs_conn_close() is renamed cs_conn_shut() and cs_conn_drain_and_close() is
renamed cs_conn_drain_and_shut(). Both functions rely on
cs_conn_shutw() and cs_conn_shutr().
Since the recent changes on the conn-streams, the stream dump in the "show
sess all" command was a bit mangled. The front and back conn-streams are now
properly displayed (csf and csb). In addition, when there is no backend
endpoint, "APPCTX" was always reported. Now, "NONE" is reported in this
case.
It is 2.6-specific. No backport needed.
Since previous patch
MINOR: mux-quic: split xfer and STREAM frames build
there is no way to report an error in qcs_xfer_data().
This should fix github issue #1669.
As anticipated in commit 211ea252d ("BUG/MINOR: logs: fix logsrv leaks
on clean exit"), there were indeed other corner cases that were not
properly covered. Setting the http client's ring_name to NULL makes the
sink lookup crash on startup in sink_find() with a config as simple as:
global
log ring@buf0 local0
The fields must be properly initialized (both config file name and
the ring_name). This only needs to be backported if/when the commit
above is backported.
Do not initialize the mux task timeout if the client timeout is set to 0 in
the configuration. Check for the task before queuing it in qc_io_cb() or
qc_detach().
This fixes a crash when the client timeout is 0 or undefined.
Unsubscribe from lower layer on qc_release. This ensures that the lower
layer won't wake up a null tasklet after the MUX has been released and
may prevent a crash.
It is possible that the xprt layer has to process retransmitted STREAM
frames after the mux was released. In this case, there is no need to try to
wake it up.
Starting from OpenSSL v3, providers are at the core of cryptographic
functions. Depending on the provider used, the way the SSL
functionalities work could change. This new 'show ssl providers' CLI
command allows showing which providers were loaded by the SSL library.
This is required because the provider configuration is exclusively done
in the OpenSSL configuration file (/usr/local/ssl/openssl.cnf for
instance).
A new line is also added to the 'haproxy -vv' output containing the same
information.
Complete the qc_send() function. After having processed each qcs emission,
it will now retry sending on each qcs where the transfer can continue. This
is useful when the qc_stream_desc buffer is full and there is still data
present in the qcs buffer.
To implement this, each eligible qcs is inserted in a new list
<qcc.send_retry_list>. This is done on send notification from the
transport layer through qcc_streams_sent_done(). Emission is retried until
send_retry_list is empty or the transport layer cannot accept more
data.
Several send operations are now called in two different places. Thus a
new _qc_send_qcs() function is defined to factorize the code.
This change should maximize the throughput during QUIC transfers.
MUX streams can now allocate multiple buffers for sending. The quic-conn is
responsible for limiting the total count of allocated buffers. A
counter is stored in the new field <stream_buf_count>.
For the moment, the value is hardcoded to 30.
On stream buffer allocation failure, the qcc MUX is flagged with
QC_CF_CONN_FULL. The MUX is then woken up as soon as a buffer is freed,
most notably on ACK reception.
The acknowledgment of STREAM frames has become more complex with the
introduction of stream multi-buffers. Two functions were executing roughly
the same set of instructions in xprt_quic.c.
To simplify this, move the code complexity in a new function
qc_stream_desc_ack(). It will handle offset calculation, removal of
data, freeing oldest buffer and freeing stream instance if required.
The qc_stream_desc API is cleaner as the ambiguous qc_stream_desc_free_buf()
function has been removed.
Complete the qc_stream_desc type to support multiple buffers on
emission. The main objective is to increase the transfer throughput.
The MUX is now able to transfer more data without having to wait ACKs.
To implement this feature, a new type qc_stream_buf is declared. It
encapsulates a buffer with a list element. New functions are defined to
retrieve the current buffer, release it or allocate a new one. Each
buffer is kept in the qc_stream_desc list until all of its data is
acknowledged.
On the MUX side, a qcs uses the current stream buffer to transfer data.
Once the buffer is full, it is released and a new one will be allocated
on a future qc_send() invocation.
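A hedged sketch of the new type described above:

    struct qc_stream_buf {
        struct buffer buf; /* emission buffer */
        struct list list;  /* element in the qc_stream_desc buffer list */
    };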
Add a new member <qc> in the qc_stream_desc structure. This change is
possible since the previous patch which added a quic-conn argument to
qc_stream_desc_new().
The purpose of this change is to simplify the future evolution of the
qc-stream-desc API. This will avoid repeating qc as an argument in various
functions which already use a qc_stream_desc.
Simplify the qcs/qc_stream_desc model. Each type now has its own tree
node, stored respectively in the qcc and quic-conn trees. It is still
necessary to mark the stream as detached by the MUX once all data is
transferred to the lower layer.
This might slightly improve the performance of ACK management as now
only the lookup in the quic-conn tree is necessary. On the other hand, the
memory size of the qcs structure is increased.
Regroup all the type definitions and functions related to qc_stream_desc in
the source file src/quic_stream.c.
qc_stream_desc complexity will be increased with the development of Tx
multi-buffers. Having a dedicated module is useful to avoid mixing it with
pure transport/quic-conn code.
Split qcs_push_frame() in two functions.
The first one is qcs_xfer_data(). Its purpose is to transfer data from
qcs.tx.buf to qc_stream_desc buffer. The second function is named
qcs_build_stream_frm(). It generates a STREAM frame using qc_stream_desc
buffer as payload.
The trace events previously associated with qcs_push_frame() have also
been split in two to reflect the new code structure.
The purpose of this refactoring is first to better reflect how sending
is implemented. It will also simplify the implementation of Tx
multi-buffer per streams.
The DH parameters used for OpenSSL versions 1.1.1 and earlier were
changed. For OpenSSL 1.0.2 and LibreSSL the newly introduced
ssl_get_dh_by_nid function is not used since we keep the original
parameters.
DHE ciphers do not present a security risk if the key is big enough but
they are slow and mostly obsoleted by ECDHE. This patch removes any
default DH parameters. This will effectively disable all DHE ciphers
unless a global ssl-dh-param-file is defined, or
tune.ssl.default-dh-param is set, or a frontend has DH parameters
included in its PEM certificate. In this latter case, only the frontends
that have DH parameters will have DHE ciphers enabled.
Explicitly adding DHE ciphers in a "bind" line will not be enough to
actually enable DHE. We would still need to know which DH parameters to
use, so one of the three conditions described above must be met.
This request was described in GitHub issue #1604.
RFC7919 defined sets of DH parameters supposedly strong enough to be
used safely. We will then use them when we can instead of our hard coded
ones (namely the ffdhe2048 and ffdhe4096 named groups).
The ffdhe2048 and ffdhe4096 named groups were integrated in OpenSSL
starting with version 1.1.1. Instead of duplicating those parameters in
haproxy for older versions of OpenSSL, we will keep using our own
parameters when they are not provided by the SSL library.
We will also need to keep our 1024-bit DH parameters since they are
considered not safe enough to have a dedicated named group in RFC7919,
but they must be kept for backward compatibility with old Java clients.
This request was described in GitHub issue #1604.
MacOS can feed fc_rtt, fc_rttvar, fc_sacked, fc_lost and fc_retrans
so let's expose them on this platform.
Note that at the tcp(7) level, the API is slightly different, as
struct tcp_info is called tcp_connection_info and TCP_INFO is
called TCP_CONNECTION_INFO, so for convenience these ones were
defined to point to their equivalent. However there is a small
difference now in that tcpi_rtt is called tcpi_rttcur on this
platform, which forces us to make a special case for it before
other platforms.
While there is some overlap between what each OS provides in terms of
retrievable info, each set is not a real subset of another one and this
results in increasing complexity when trying to add support for new OSes.
Let's just condition each item to the OS that support it. It's not pretty
but at least it will avoid a real mess later.
Note that fc_rtt and fc_rttvar are supported on any OS that has TCP_INFO,
not just linux/freebsd/netbsd, so we continue to expose them unconditionally.
If the response is compressed, we must update the HTX start-line flags and
the HTTP message flags. It is especially important if there is another
filter enabled. Otherwise, there is no way to know the C-L header was
removed and the T-E one was added, except by looping on the headers.
This patch is related to the issue #1660. It must be backported as far as
2.0 (for the HTX part only).
Instead of relying on the HTX start-line flags, it is better to rely on
http_msg flags to know if a content-length header can be added or not. In
addition, if the header is added, HTTP_MSGF_CNT_LEN flag must be added.
Because of this bug, an invalid message can be emitted when the response is
compressed because it may contain both C-L and T-E headers.
This patch should fix the issue #1660. It must be backported as far as 2.2.
A released qc_stream_desc is freed as soon as all its buffer content has
been acknowledged. However, it may still contain other frames waiting
for ACK and pointing to deleted buffer content. This can happen on
retransmission.
When freeing a qc_stream_desc, free all its frames in the acked_frms tree
to fix the memory leak. This may also possibly fix a crash on
retransmission.
Now, the frames are properly removed from a packet. This ensures we do
not retransmit a frame whose buffer is deallocated.
The issue only concerns the backend connection. The conn-stream is now owned
by the stream and persists for the whole life of the stream. Thus we must not
crush it when the backend connection is released.
It is 2.6-specific. No backport is needed.
While it's often a pain to try to figure out a UNIX socket address, the
server ones are reliable and may be emitted in the check provided
they are retrieved in time. We cannot rely on addr_to_str() because
it only reports "unix" since it may be used to log client addresses
or listener addresses (which are renamed).
The address length was extended to 256 chars to deal with long paths
as previously it was limited to INET6_ADDRSTRLEN+1.
This addresses github issue #101. There's no point backporting this,
external checks are almost never used.
During the config parsing we preset the server's address and port, but
that's pointless since it's replaced during each check in order to deal
with the possibility that the address was changed since.
Github issue #472 reports a problem with short client connections making
stick-table entries disappear. The problem is in fact totally different
and stems from the connection establishment step.
What happens is that the stick-table there has a single entry. The
"stick-on" directive is forced to purge an existing entry before being
able to create a new one. The new entry will be committed during the
call to process_store_rules() on the response path.
But if the client sends the FIN immediately after the connection is set
up (e.g. using nc -z) then the SHUTR is received and will cancel the
connection setup just after it starts. This cancellation will induce a
call to cs_shutw() which will in turn leave the server-side state in
ST_DIS. This transition from ST_CON to ST_DIS doesn't belong to the
list of handled transition during the connection setup so it will be
handled right after on the regular path, causing the connection to be
closed. Because of this, we never pass through back_establish() and
the backend's analysers are never set on the response channel, which
is why process_store_rules() is not called and the stick-tables entry
never committed.
The comment above the code that causes this transition clearly says
that the function is to be used after the connection is established
with the server, but there's no such protection, and we always have
the AUTO_CLOSE flag there (but there's hardly any available condition
to eliminate it).
This patch adds a test for the connection not being in ST_CON or for
option abortonclose being set. It's sufficient to do the job and it
should not cause issues.
One concern was that the transition could happen during cs_recv()
after the connection switches from CON to RDY then the read0 would
be taken into account and would cause DIS to appear, which is not
handled either. But that cannot happen because cs_recv() doesn't do
anything until it's in ST_EST state, hence the read0() cannot be
called from CON/RDY. Thus the transition from CON to DIS is only
possible in back_handle_st_con() and back_handle_st_rdy() both of
which are called when dealing with the transition already, or when
abortonclose is set and the client aborts before connect() succeeds.
It's possible that some further improvements could be made to detect
this specific transition but it doesn't seem like anything would have
to be added.
This issue was first reported on 2.1. The abortonclose area is very
sensitive so it would be wise to backport slowly, and probably no
further than 2.4.
Emit a CONNECTION_CLOSE if the app layer cannot be properly initialized
on qc_xprt_start. This forces the quic-conn to enter the closing state
before being closed.
Without this, the quic-conn's normal operations continue, despite the
app layer being reported as not initialized. This behavior is undefined, in
particular when handling STREAM frames.
Fix the return value used in the quic-conn start callback on error. The
caller expects a negative value in this case.
Without this patch, the quic-conn and the connection stack are not
closed despite an initialization failure error, which is an undefined
behavior and may cause a crash in the end.
In quic_session_accept(), the connection is in charge of calling the
quic-conn start callback. If this callback fails for whatever reason,
there is a crash because of an explicit session_free().
This happens because the connection is now the owner of the session due
to the previous conn_complete_session() call, and it will automatically
call session_free(). Fix this by skipping the explicit session_free()
invocation on error.
In practice, currently this has never happened as there are only limited
cases of failure for conn_xprt_start for QUIC.
Implement qc_destroy. This callback is used to quickly release all MUX
resources.
session_free uses this callback. Currently, it can only be called if
there was an error during connection initialization. If not defined, the
process crashes.
When an HTTP client is started on an HAProxy compiled without SSL
support, an error is triggered when HTTPS is used. In this case, the
freshly created conn-stream is released. But this code is specific to the
non-SSL part. Thus it is moved into the right #if/#else section.
This patch should fix the issue #1655.
The commit 744451c7c ("BUG/MEDIUM: mux-h1: Properly detect full buffer cases
during message parsing") introduced a regression if trailers are not
received all at once. Indeed, in this case, nothing is appended to the
channel buffer, while there are some data in the input buffer. In this case,
we must not request more room from the upper layer, especially because the
channel buffer can be empty.
To fix the issue, on trailers parsing, we consider the H1 stream as
congested when the max size allowed is reached. Of course, the H1 stream is
also considered as congested if the trailers are too big and the channel
buffer is not empty.
This patch should fix the issue #1657. It must be backported as far as 2.0.
For all muxes, the function responsible to release a mux is always called
with a defined mux. Thus there is no reason to test if it is defined or not.
Note the patch may seem huge but it is just because of indentation changes.
Several muxes (h2, fcgi, quic) don't support protocol upgrades. For these
muxes, there is no reason to have code supporting them. Thus in the destroy
callback, there is now a BUG_ON() and the release function is simplified
because the connection is always owned by the mux.