The quic_rx_packet struct had a reference to the quic_conn instance. This is
useless as the qc instance is always passed through a function argument. In
fact, pkt.qc is used only in qc_pkt_decrypt() on key update, even though
qc is also passed as an argument.
Simplify this by removing the qc field from the quic_rx_packet structure
definition. Also clean up the qc_pkt_decrypt() documentation and interface
to align it with other quic-conn related functions.
This should be backported up to 2.7.
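A rough sketch of the result (prototypes are approximate and only meant
to illustrate the change):

    struct quic_rx_packet {
            /* ... */
            /* struct quic_conn *qc;  <- removed: redundant with the qc
             *                           argument that callers already pass */
    };

    /* qc now comes first, like in other quic-conn related functions */
    static int qc_pkt_decrypt(struct quic_conn *qc, struct quic_enc_level *qel,
                              struct quic_rx_packet *pkt);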
Rename the structure "cert_key_and_chain" to "ckch_data" in order to
avoid confusion with the store which is often called "ckchs".
The "cert_key_and_chain *ckch" variables were renamed to "ckch_data *data",
so we now have store->data instead of ckchs->ckch.
Marked medium because it changes the API.
Adding base code to provide a subscribe/publish API for internal
events processing.
event_hdl provides two complementary APIs, both are implemented
in src/event_hdl.c and include/haproxy/event_hdl{-t.h,.h}:
One API targeting developers that want to register event handlers
that will be notified on specific events.
(SUBSCRIBE)
One API targeting developers that want to notify registered handlers
about an event.
(PUBLISH)
This feature is being considered to address the following scenarios:
- mailers code refactoring (getting rid of deprecated
tcp-check ruleset implementation)
- server events from lua code (registering user defined
lua function that is executed with relevant data when a
server is dynamically added/removed or on server state change)
- providing a stable and easy to use API for upcoming
developments that rely on specific events to perform actions.
(e.g: resource cleanup when a server is deleted from haproxy)
At this time though, we don't have many use cases in mind in addition to
server events handling, but the API is aimed at being multipurpose
so that new event families, with their own particularities, can be
easily implemented afterwards (and hopefully) without requiring breaking
changes to the API.
Moreover, you should know that the API was not designed to cope well
with high rate event publishing, mostly because publishing means
iterating over an unsorted subscriber list. So it won't scale well as the
subscriber list grows, but this is intentional in order to keep the code
simple and versatile.
Instead, it is assumed that events implemented using this API
should be periodic events, and that events related to critical
io/networking processing should be handled using
dedicated facilities anyway.
(After all, this is meant to be a general purpose event API)
Apart from being easily extensible, one of the main goals of this API is
to make subscriber code as simple and safe as possible.
This is done by offering multiple event handling modes:
- SYNC mode:
publishing code directly
leverages handler code (callback function)
and handler code has a direct access to "live" event data
(pointers mostly, alongside with lock hints/context
so that accessing data pointers can be done properly)
- normal ASYNC mode:
handler is executed in a way that is backward compatible with SYNC mode,
so that it is easy to switch between SYNC and ASYNC modes.
Except that here the handler has access to "offline" event data, and
not "live" data (ptrs), so that data consistency is guaranteed.
By offline, you should understand "snapshot" of relevant data
at the time of the event, so that the handler can consume it
later (even if the associated resource is not valid anymore)
- advanced ASYNC mode:
same as normal ASYNC mode, but here the handler is not a function
that is executed with event data passed as argument: the handler is a
user defined tasklet that is notified when an event occurs.
The tasklet may consume pending events and associated data
through its own message queue.
ASYNC mode should be considered first if you don't rely on live event
data and you want to make sure that your code has the lowest impact
possible on publisher code. (ie: you don't want to break stuff)
Internal API documentation will follow:
you will find more details about the notions roughly touched on here.
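To give a rough idea of the intended usage, a hypothetical SYNC
subscriber could look like the sketch below (all names are illustrative,
not the final API; refer to the upcoming documentation):

    /* handler executed when an event of the subscribed family is published */
    static void my_srv_handler(const struct event_hdl_cb *cb, void *private)
    {
            /* SYNC mode: "live" event data is reachable through cb,
             * with lock hints telling how to access the pointers */
    }

    /* subscriber side: register the handler for server events */
    event_hdl_subscribe(NULL, EVENT_HDL_SUB_SERVER,
                        EVENT_HDL_SYNC(my_srv_handler, NULL, NULL));

    /* publisher side: notify every registered handler about one event */
    event_hdl_publish(NULL, EVENT_HDL_SUB_SERVER_ADD,
                      EVENT_HDL_CB_DATA(&srv_event_data));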
When digging into suspected memory leaks, it's cumbersome to count the
number of allocations and free calls. Here we're adding, at the end, a
summary of the sum of allocs minus the sum of frees, excluding realloc since
we can't know how much it releases upon each call. This means that when
doing many realloc+free the count may be negative but in practice there
are very few reallocs so that's not a problem. Also the size/call is signed
and corresponds to the average size allocated (e.g. leaked) per call.
It seems to work reasonably well for now:
> debug dev memstats match buf
quic_conn.c:2978 P_FREE size: 1239547904 calls: 75656 size/call: 16384 buffer
quic_conn.c:2960 P_ALLOC size: 1239547904 calls: 75656 size/call: 16384 buffer
mux_quic.c:393 P_ALLOC size: 9112780800 calls: 556200 size/call: 16384 buffer
mux_quic.c:383 P_ALLOC size: 17783193600 calls: 1085400 size/call: 16384 buffer
mux_quic.c:159 P_FREE size: 8935833600 calls: 545400 size/call: 16384 buffer
mux_quic.c:142 P_FREE size: 9112780800 calls: 556200 size/call: 16384 buffer
h3.c:776 P_ALLOC size: 8935833600 calls: 545400 size/call: 16384 buffer
quic_stream.c:166 P_FREE size: 975241216 calls: 59524 size/call: 16384 buffer
quic_stream.c:127 P_FREE size: 7960592384 calls: 485876 size/call: 16384 buffer
stream.c:772 P_FREE size: 8798208 calls: 537 size/call: 16384 buffer
stream.c:768 P_FREE size: 2424832 calls: 148 size/call: 16384 buffer
stream.c:751 P_ALLOC size: 8852062208 calls: 540287 size/call: 16384 buffer
stream.c:641 P_FREE size: 8849162240 calls: 540110 size/call: 16384 buffer
stream.c:640 P_FREE size: 8847360000 calls: 540000 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 2441216 calls: 149 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 5914624 calls: 361 size/call: 16384 buffer
dynbuf.c:55 P_FREE size: 32768 calls: 2 size/call: 16384 buffer
Total BALANCE size: 0 calls: 5606906 size/call: 0 (excl. realloc)
Let's see how useful this becomes over time.
Sometimes when debugging it's convenient to be able to focus only on
certain pools. Just like we did for "show pools", let's add a filter
based on a prefix on "debug dev memstats match <prefix>".
These two have absolutely zero impact on the process and do not need to
be restricted to the expert mode. The first one calculates a string hash
that can be used by anyone when checking a dump; the second one may be
used by anyone tracking a memory leak, and is cumbersome to use due to
the "expert-mode on" that needs to be prepended. In addition this gives
bad habits to users and needlessly taints the process. So let's drop
this restriction for these two commands.
This command is used to hash a section name using the current anon key,
it was brought in 2.7 by commit 54966dffd ("MINOR: anon: store the
anonymizing key in the CLI's appctx"). However the help message only
says "return msg hashed" which is misleading because if anon mode is
not enabled, it returns the string as-is. Let's just mention this
condition in the help message, and also fix the alphabetical ordering
and alignment on the line.
"debug dev memstats" supports various options but silently ignores the
unknown ones. Let's make sure it returns indications about what it
expects, as the help message is quite limited otherwise.
Just like for the H2 multiplexer, info about the H1 connection task is now
displayed in "show fd" output. The task pointer is displayed and, if not
null, its expiration date.
It may be useful to backport it.
If shards are in use, we must fill the shard number on incoming updates,
otherwise some entries are assigned shard number zero, and may be broadcast
everywhere once updated, instead of being sent only to the peers having the
same shard number.
This fixes commit 36d156564 ("MINOR: peers: Support for peer shards"). No
backport is needed.
In peer_treat_updatemsg(), the lower layers of the stick-table code are
reimplemented, and the key length is never really known for an entry
being processed: it depends on the type being parsed and on the moment
where it's done. This makes it quite difficult to stuff some shard
number calculation there.
This patch adds a keylen local variable that is always set to the length
of the current key depending on its type. It takes this opportunity for
reducing redundant expressions involving this length and always using the
new variable instead, limiting the risk of errors. Arguably that code
would have been way simpler by creating a dummy stktable_key and passing
it to stksess_new() as done anywhere else, but let's not change all that
a few days before the release.
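For illustration, the pattern roughly looks like this (type names and
fields are approximate):

    size_t keylen;

    if (table->type == SMP_T_STR)
            keylen = strlen(str_key);       /* string keys: measured length */
    else if (table->type == SMP_T_SINT)
            keylen = sizeof(long long);     /* integer keys: fixed size */
    else
            keylen = table->key_size;       /* other fixed-size key types */

    /* keylen is then reused everywhere the length was previously
     * re-derived inline */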
The function used to calculate the shard number currently requires a
stktable_key on input for this. Unfortunately, it happens that peers
currently miss this calculation and they do not provide stktable_key
at all, instead they're open-coding all the low-level stick-table work
(hence why it's missing). Thus we'll need to be able to calculate the
shard number in keys coming from peers as well but the current API does
not make it possible.
This commit addresses this by inverting the order where the length and
the shard number are used. Now the low-level function is independent on
stksess and stktable_key, it takes a table, pointer and length and does
all the job. The upper function takes care of the type and key to get
its length, and is for use only from stick-table code.
This doesn't change anything except that the low-level one will be usable
from outside (hence why it's exported now).
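A rough sketch of the resulting split (prototypes are approximate):

    /* low-level: needs neither stksess nor stktable_key, now exported so
     * that the peers code can use it on raw keys */
    unsigned int stktable_calc_shard_num(const struct stktable *t,
                                         const void *key, size_t len);

    /* upper level, for stick-table code only: derives the length from
     * the key type, then relies on the low-level function above */
    void stksess_setkey_shard(struct stktable *t, struct stksess *ts,
                              struct stktable_key *key);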
If the client closes the connection while there is no pending outgoing data,
the H1 connection must be released. However, it was switched to CLOSING
state instead. Thus the client connection was closed on client timeout.
It is a side effect of commit d1b573059a ("MINOR: mux-h1: Avoid useless
call to h1_send() if no error is sent"). Before, the extra call to h1_send()
was able to fix the H1C state.
To fix the bug and make the switch to a close state (CLOSING or CLOSED) less
error-prone, the h1_close() helper function is now systematically used.
It is a 2.7-specific bug. No backport needed.
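As an illustration, the helper roughly looks like this (hypothetical
shape, details of the real state handling differ):

    /* decide the close state in a single place */
    static void h1_close(struct h1c *h1c)
    {
            if (b_data(&h1c->obuf))
                    h1c->state = H1_CS_CLOSING; /* pending output: flush first */
            else
                    h1c->state = H1_CS_CLOSED;  /* nothing left: may be released */
    }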
We need to initialize the shard value in __stksess_init() because there is
not necessarily a key to make it happen later, resulting in an uninitialized
shard value appearing in the entry, typically when entries are learned from
peers. This fixes commit 36d156564 ("MINOR: peers: Support for peer shards"),
no backport is needed.
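A minimal sketch of the fix (illustrative):

    static struct stksess *__stksess_init(struct stktable *t, struct stksess *ts)
    {
            /* ... */
            ts->shard = 0; /* preset: peers may never set a key afterwards */
            /* ... */
    }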
Note however that it is not sufficient to completely fix the peers code, the
shard value remains zero because the setting of the key was open-coded in
the peers code and these parts were not identified when adding support for
shards.
Some issues such as #1929 seem to involve a task without timeout but we
can't find the condition to reproduce this in the code. However, not having
this info in the output doesn't help, so this patch adds the task pointer
and its timeout (when the task is non-null). It may be useful to backport
it.
qc_dgrams_retransmit() could reuse the same local list and could splice it two
times to the packet number space list of frames to be sent/resent. This
creates a loop in this list and makes qc_build_frms() possibly loop endlessly
when trying to build frames from that list. Then haproxy aborts.
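A simplified illustration of the mistake, with a hypothetical
list_splice() helper standing in for the real list macros:

    struct list local = LIST_HEAD_INIT(local);

    list_splice(&pktns->tx.frms, &local); /* first splice: fine */
    /* ... local is reused without being reinitialized ... */
    list_splice(&pktns->tx.frms, &local); /* second splice of the SAME
                                           * nodes: the elements are linked
                                           * twice, creating a cycle */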
This issue could easily be reproduced by patching the qc_build_frms() function
to set the <dlen> variable to 0 after having built at least 10 CRYPTO frames,
and by using ngtcp2 as client with 30% packet loss in both directions.
Thank you to @gabrieltz for having reported this issue in GH #1903.
Must be backported to 2.6.
ncbuf can be compiled for haproxy or standalone to run its unit test suite.
For the latter mode, the BUG_ON() macro has been re-implemented in a simple
version.
The inclusion of the default or the redefined macro relied on DEBUG_DEV.
Change this to now rely on DEBUG_STRICT as this is activated for the
default build.
This change is safe as only the BUG_ON_HOT() macro is used in ncbuf code,
which is activated only with the default value DEBUG_STRICT=2.
This should be backported up to 2.6.
The ncbuf API relies on a lot of small functions. Mark these functions as
inline to reduce call overhead and facilitate compiler optimizations
to reduce code size.
This should be backported up to 2.6.
ncb_blk structure is used to represent a block of data or a gap in a
non-contiguous buffer. This is used in several functions for ncbuf
implementation. Before this patch, ncb_blk was passed by value, which is
sub-optimal. Replace this by const pointer arguments.
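For instance (prototype shape only, approximate):

    /* before: the whole block is copied on each call */
    struct ncb_blk ncb_blk_next(const struct ncbuf *buf, struct ncb_blk blk);

    /* after: only a pointer is passed */
    struct ncb_blk ncb_blk_next(const struct ncbuf *buf,
                                const struct ncb_blk *prev);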
This has the side-effect of suppressing a compiler warning reported by
older GCC versions:
CC src/http_conv.o
src/ncbuf.c: In function 'ncb_blk_next':
src/ncbuf.c:170: warning: 'blk.end' may be used uninitialized in this function
This should be backported up to 2.6.
Stick-tables support sharding to multiple peers but there was no way to
know to what shard an entry was going to be sent. Let's display this in
the "show table" output to ease debugging.
Instead of using memcpy() to concatenate the table's name to the key when
allocating an stksess, let's compute once for all a per-table seed at boot
time and use it to calculate the key's hash. This saves two memcpy() and
the usage of a chunk, it's always nice in a fast path.
When tested under extreme conditions with a 80-byte long table name, it
showed a 1% performance increase.
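Roughly (field and helper names are illustrative):

    /* at boot, once per table: hash the table's name into a seed */
    t->hash_seed = XXH64(t->id, strlen(t->id), 0);

    /* fast path: hash the key with the per-table seed, no chunk and
     * no memcpy() of name + key anymore */
    hash = XXH64(key, keylen, t->hash_seed);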
Initially the code was placed into cli.c to keep activity.c small and
independent of the cli stuff, but that's no longer the case anyway and
keeping that code over there makes it harder to find. Let's move it to
its more natural place now.
It happened a few times that it was difficult to figure out whether a counter
was normal or not in "show activity" based on the uptime. Let's just emit the
uptime value along with the date.
With an OpenSSL library which uses the wrong OPENSSLDIR, HAProxy tries to
load the OPENSSLDIR/certs/ into @system-ca, but emits a warning when it
can't.
This patch fixes the issue by silencing the error when the SSL
configuration for the httpclient is not explicit.
Must be backported in 2.6.
After reading a datagram, it is enqueued for the thread attached to the
DCID. This is done via the quic_lstnr_dgram_dispatch() function. If this
step fails, we remove the datagram from the quic_receiver_buf buffer.
This step is faulty because we use b_del() instead of b_sub(). If
quic_receiver_buf was not empty, we would remove content from another
datagram while leaving the content of the last read datagram. This
probably causes valid datagrams to be dropped and may even result in a crash.
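For reference, the simplified semantics of the two helpers:

    /* b_del(b, n): advances the buffer's head, dropping <n> bytes of the
     * OLDEST data, i.e. possibly bytes of a previously queued datagram;
     * b_sub(b, n): decrements the data count, dropping <n> bytes from the
     * END, i.e. the datagram that was just appended. */
    b_del(&rbuf->buf, dgram_len); /* wrong here */
    b_sub(&rbuf->buf, dgram_len); /* correct here */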
As stated, this bug can only happen if quic_lstnr_dgram_dispatch() fails,
which happens on two occasions:
* on quic_dgram allocation failure, which should be pretty rare
* on a datagram labelled as invalid for the QUIC protocol. This may happen
more frequently depending on the network conditions. Thus, this bug
has been labelled as a medium one.
This should be backported up to 2.6.
There were a few leftovers of DEBUG_HPACK and HPACK_SHT_SIZE instead of
their QPACK equivalent in the QPACK debug code. There's no harm anyway,
but it could lead to confusing results if the tables are not sized
equally.
In issue #1939, Ilya mentions that cppcheck warned about use of "%d" to
report the QPACK table's index that's locally stored as an unsigned int.
While technically valid, this will never cause any trouble since indexes
are always small positive values, but better use %u anyway to silence
this warning.
In issue #1939, Ilya mentions that cppcheck warned about use of "%d" to
report the status state that's locally stored as an unsigned int. While
technically valid, this will never cause any trouble since in the end
what we store there are the applet's states (just a few enum values).
Better use %u anyway to silence this warning.
In GH issue #1940 cppcheck warns about a possible null-dereference in
check_user() when DEBUG_AUTH is enabled. Indeed, <ep> may potentially
be NULL because upon error crypt_r() and crypt() may return a null
pointer. However it's not directly dereferenced, it is only passed to
printf() with a '%s' fmt. While it is in practice fine with the printf
implementations we care about (that check strings against null before
printing them), it is undefined behavior according to the spec, hence
the warning.
Let's check <ep> before passing it to printf. This should partly
solve GH #1940.
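A minimal sketch of the fix (variable names approximate):

    ep = crypt_r(pass, u->pass, &cd);
#ifdef DEBUG_AUTH
    printf("crypt_r(): '%s'\n", ep ? ep : "(null)");
#endif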
Emeric noticed that producing many randoms to fill a stick table was
saturating on the rand_lock. Since 2.4 we have the statistical PRNG
for low-quality randoms like this one, there is no point in using the
one that was originally implemented for the purpose of creating safe
UUIDs, since the doc itself clearly states that these randoms are not
secure and they have not been in the past either. With this change,
locking contention is completely gone.
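For reference, the statistical PRNG is essentially a tiny lockless
per-thread xorshift generator, roughly:

    static THREAD_LOCAL unsigned int statistical_prng_state = 2463534242U;

    static inline unsigned int statistical_prng(void)
    {
            unsigned int x = statistical_prng_state;

            x ^= x << 13; /* xorshift32: fine for non-crypto randoms */
            x ^= x >> 17;
            x ^= x << 5;
            return statistical_prng_state = x;
    }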
This adds a USE_OPENSSL_WOLFSSL option; wolfSSL must be used with the
OpenSSL compatibility layer. This must be used with USE_OPENSSL=1.
WolfSSL build options:
./configure --prefix=/opt/wolfssl --enable-haproxy
HAProxy build options:
USE_OPENSSL=1 USE_OPENSSL_WOLFSSL=1 WOLFSSL_INC=/opt/wolfssl/include/ WOLFSSL_LIB=/opt/wolfssl/lib/ ADDLIB='-Wl,-rpath=/opt/wolfssl/lib'
Using at least the commit 54466b6 ("Merge pull request #5810 from
Uriah-wolfSSL/haproxy-integration") from WolfSSL. (2022-11-23).
This is still to be improved, reg-tests are not supported yet, and more
tests are to be done.
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
Gcc 6.5 is now well known for triggering plenty of false "may be used
uninitialized" warnings, particularly at -O1, and two of them happen in
quic, in quic_tp and quic_conn. Both of them were reviewed and easily
confirmed as wrong (gcc seems to ignore the control flow after the function
returns and believes error conditions are not met). Let's just preset
the variables that bother it. In quic_tp the initialization was moved
out of the loop since there's no point inflating the code just to
silence a stupid warning.
These includes brought by commit 9c76637ff ("MINOR: anon: add new macros
and functions to anonymize contents") resulted in an increase of exactly
20% of the number of lines to build. These includes are not needed there,
only tools.c needs xxhash.h.
cfgparse-quic accesses some members of the "global" struct but only
includes global-t.h. It actually used to work via tools.h due to a
long dependency chain that brought it, but it will be fixed and will
break cfgparse-quic, so let's fix it first.
Commit 36d156564 ("MINOR: peers: Support for peer shards") reintroduced
a direct dependency on import/xxhash.h which was previously dropped by
commit d5fc8fcb8 ("CLEANUP: Add haproxy/xxhash.h to avoid modifying
import/xxhash.h"). This results in xxhash.h being included twice, which
breaks the build on older compilers which do not like seeing XXH32_hash_t
being defined twice.
Let's just use include/haproxy/xxhash.h instead.
No backport is needed.
If we choose to not send an error to the client, there is no reason to call
h1_send() for nothing. This happens because functions handling errors return
1 instead of 0 when nothing is performed.
It is cleaner to remove this flag in the internal functions handling errors,
which are responsible for updating the H1 connection state, instead of doing
so in the calling functions. This will hopefully avoid bugs in the future.
When a timeout is detected waiting for the request, a 408-Request-Time-Out
response is sent. However, an error was introduced by commit 6858d76cd3
("BUG/MINOR: mux-h1: Obey dontlognull option for empty requests"). Instead
of inhibiting the log message, the option was stopping the error sending.
Of course, we must do the opposite.
This patch must be backported as far as 2.4.
When an idle H1 connection starts to process a new request, we must take
care to remove the H1C_F_WAIT_NEXT_REQ flag. This flag is used to know that
an idle H1 connection has already processed at least one request and is
waiting for the next one, but nothing was received yet.
Keeping this flag leads to a crash because some running H1 connections may
be erroneously released on a soft-stop. Indeed, only idle or closed
connections must be released.
This bug was reported in #1903. It is 2.7-specific. No backport needed.
In ssl_sock_bind_verifycbk(), when compiled without QUIC support, the
compiler may report an error during compilation about a possible NULL
dereference:
src/ssl_sock.c: In function ‘ssl_sock_bind_verifycbk’:
src/ssl_sock.c:1738:12: error: potential null pointer dereference [-Werror=null-dereference]
1738 | ctx->xprt_st |= SSL_SOCK_ST_FL_VERIFY_DONE;
| ~~~^~~~~~~~~
A BUG_ON() was added because it must never happen. But when compiled
without DEBUG_STRICT, there is nothing to help the compiler. Thus the
ALREADY_CHECKED() macro is used. The ssl-sock context and the bind config
are concerned.
This patch must be backported to 2.6.
In http_replace_req_uri(), if the URI was successfully replaced, it means
the start-line exists. However, the compiler reports an error about a
possible NULL pointer dereference:
src/http_htx.c: In function ‘http_replace_req_uri’:
src/http_htx.c:392:19: error: potential null pointer dereference [-Werror=null-dereference]
392 | sl->flags &= ~HTX_SL_F_NORMALIZED_URI;
So, the ALREADY_CHECKED() macro is used to silence the build error. This patch
must be backported with 84cdbe478a ("BUG/MINOR: http-htx: Don't consider an
URI as normalized after a set-uri action").
The recent refactoring of errors handling in the H1 multiplexer introduced a
bug on abort when the client is waiting for the server response. The bug only
exists if the abortonclose option is not enabled. Indeed, in this case,
when the end of the message is reached, the mux stops receiving data
because these data are part of the next request. However, errors on the
sending path are no longer fatal. An error on the reading path must be
caught for that. So, in case of a client abort, the error is not reported to
the upper layer and the H1 connection cannot be closed if some pending data
are blocked (remember, an error on the sending path was detected, blocking
outgoing data).
To be sure to have a chance to detect the abort in this case, when an error
is detected on the sending path, we force the subscription for reads.
This patch, with the previous one, should fix the issue #1943. It is
2.7-specific, no backport is needed.
When the H1 task times out, we must be careful to not release the H1
connection if there is still a H1 stream with a stream-connector
attached. In this case, we must wait. There are some tests to prevent this
from happening. But the last one only tests the UPGRADING state while there
is also the CLOSING state with a SC attached. So, in the end, it is safer to
test if there is a H1 stream with a SC attached.
This patch should partially fix the issue #1943. However, it only prevents
the segfault. There is another bug under the hood. BTW, this one is
2.7-specific. No backport is needed.
An absolute URI is marked as normalized if it comes from an H2 client. This
way, we know we can send a relative URI to an H1 server. But, after a
set-uri action, the URI must no longer be considered as normalized.
Otherwise there is no way to send an absolute URI on the server side.
If it is important to update a normalized absolute URI without altering this
property, the host, path and/or query-string must be set separately.
This patch should fix the issue #1938. It should be backported as far as
2.4.
Except for the CONNECT method, where a normalization is performed, we
expected to have an exact match between the authority and the host header
value. However it was too strict. Indeed, the default port must be handled
and the matching must respect RFC3986.
There is already a scheme-based normalization performed later on the URI,
on the HTX message. And we cannot normalize the URI during H1 parsing to be
able to report proper errors on the original raw buffer. And a systematic
read-only normalization to validate the authority would consume CPU for only
a few requests. In the end, we decided to perform extra checks when the
exact match fails. Now, the following authority/host values are considered
equivalent:
http: domain.com <=> domain.com:80 <=> domain.com:
https: domain.com <=> domain.com:443 <=> domain.com:
This patch depends on:
* MINOR: h1: Consider empty port as invalid in authority for CONNECT
* MINOR: http: Considere empty ports as valid default ports
It is a bug regarding RFC3986. Technically, it may be backported as far
as 2.4. However, this must be discussed first. If it is backported, the
commits above must be backported too.
Thanks to the previous commit ("MINOR: http: Considere empty ports as valid
default ports"), empty ports are now considered as valid default ports. Thus,
absolute URIs with empty port should be normalized.
So now, the following URIs are normalized:
http://example.com:/  --> http://example.com/
https://example.com:/ --> https://example.com/
This patch depends on:
* MINOR: h1: Consider empty port as invalid in authority for CONNECT
* MINOR: http: Considere empty ports as valid default ports
It is a bug regarding RFC3986. Technically, it may be backported as far
as 2.4. However, this must be discussed first. If backported, the commits
above must be backported too.
In RFC3986#6.2.3, the following URIs are considered as equivalent:
http://example.com
http://example.com/
http://example.com:/
http://example.com:80/
The third one is interesting because the port is empty and it is still
considered as a default port. Thus, http_get_host_port() no longer
returns IST_NULL when the port is empty. Now, an ist is returned; it points
to the first character after the colon (':') with a length of 0. In addition,
http_is_default_port() now considers an empty port as a default port,
regardless of the scheme.
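For instance (simplified):

    struct ist port = http_get_host_port(ist("example.com:"));
    /* before: port == IST_NULL
     * now:    port.ptr points just past the ':' and port.len == 0 */

    http_is_default_port(ist("http"), port);  /* now returns true */
    http_is_default_port(ist("https"), port); /* idem, whatever the scheme */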
If this patch is ever backported, it must not be backported without the
previous one ("MINOR: h1: Consider empty port as invalid in authority for
CONNECT").
For now, this change is useless because http_get_host_port() returns
IST_NULL when the port is empty. But this will change. For other methods,
empty ports are valid, but not for the CONNECT method. To still return a
400-Bad-Request if a CONNECT is performed with an empty port, istlen() is
used to test the port, instead of isttest().
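In short (sketch):

    /* isttest(port) only checks port.ptr != NULL: once empty ports are
     * returned as zero-length ists, it would wrongly accept them.
     * istlen(port) returns port.len, rejecting both cases: */
    if (!istlen(port))
            /* missing or empty port: 400-Bad-Request for CONNECT */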