7020 Commits

Author SHA1 Message Date
Amaury Denoyelle
31ea9177ac MINOR: listener: add flags field
Define a new field in the listener structure, named "flags".

For the moment, no flag is defined. This will notably be useful to
differentiate QUIC listeners once the QUIC connection accept queue is
implemented.
2022-01-26 15:25:45 +01:00
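As an illustration only (the "flags" field name comes from the commit, but
the example flag name and value below are assumptions), the change boils
down to something like:

    /* hypothetical flag, for illustration only */
    #define LI_F_QUIC_LISTENER  0x00000001

    struct listener {
        /* ... existing members ... */
        unsigned int flags;     /* LI_F_* flags, none defined yet */
    };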
Amaury Denoyelle
9fa15e5413 MINOR: quic: do not manage connection in xprt snd_buf
Remove the usage of the connection in quic_conn_from_buf. As the
connection and the quic_conn are now decoupled, it does not make sense
to check connection flags when using sendto.

This requires storing the L4 peer address in the quic_conn to be able
to use sendto.

This change is required to delay the allocation of the connection.
2022-01-26 15:25:38 +01:00
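A rough sketch of the idea, with hypothetical member and function names
(only the use of sendto and the stored L4 peer address come from the
commit message):

    #include <sys/types.h>
    #include <sys/socket.h>

    struct quic_conn_sketch {
        int fd;                            /* socket used for emission */
        struct sockaddr_storage peer_addr; /* L4 peer address kept in quic_conn */
        socklen_t peer_addr_len;
    };

    /* send a datagram without going through the connection object */
    static ssize_t qc_snd_buf_sketch(struct quic_conn_sketch *qc,
                                     const void *buf, size_t len)
    {
        return sendto(qc->fd, buf, len, 0,
                      (struct sockaddr *)&qc->peer_addr, qc->peer_addr_len);
    }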
Amaury Denoyelle
7f7713d6ef MINOR: receiver: define a flag for local accept
This flag is named RX_F_LOCAL_ACCEPT. It will be activated for special
receivers where connection balancing to threads is already handled
outside of listener_accept, such as with QUIC listeners.
2022-01-26 11:22:20 +01:00
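For illustration, such a flag and a helper checking it might look like this
(the bit value and the helper name are assumptions, not the actual code):

    #define RX_F_LOCAL_ACCEPT  0x00000004  /* hypothetical bit value */

    struct receiver_sketch {
        unsigned int flags;    /* RX_F_* */
    };

    /* does this receiver dispatch connections to threads by itself
     * (e.g. QUIC), so that listener_accept must not rebalance them?
     */
    static inline int rx_accepts_locally(const struct receiver_sketch *rx)
    {
        return (rx->flags & RX_F_LOCAL_ACCEPT) != 0;
    }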
Amaury Denoyelle
4b40f19f92 MINOR: quic: refactor app-ops initialization
Add a new function in mux-quic to install the app-ops. For now this
function is called during the ALPN negotiation of the QUIC handshake.

This change will be useful when the connection accept queue is
implemented: it will then be required to delay the app-ops
initialization because the mux won't be allocated anymore during the
QUIC handshake.
2022-01-26 10:59:33 +01:00
Amaury Denoyelle
0b1f93127f MINOR: quic: handle app data according to mux/connection layer status
Define a new enum to represent the status of the mux/connection layer
above a quic_conn. This is important to know if it's possible to handle
application data, or if it should be buffered or dropped.
2022-01-26 10:57:17 +01:00
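The enum could look roughly like the sketch below; the commit does not give
the constant names, so these are placeholders:

    /* status of the mux/connection layer sitting above a quic_conn */
    enum qc_mux_state_sketch {
        QC_MUX_NULL,      /* not allocated yet: app data may be buffered */
        QC_MUX_READY,     /* mux installed: app data can be handled */
        QC_MUX_RELEASED,  /* mux released: app data must be dropped */
    };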
Willy Tarreau
add43fa43e DEBUG: pools: add new build option DEBUG_POOL_TRACING
This new option, when set, will cause the callers of pool_alloc() and
pool_free() to be recorded into an extra area in the pool that is expected
to be helpful for later inspection (e.g. in core dumps). For example it
may help figure that an object was released to a pool with some sub-fields
not yet released or that a use-after-free happened after releasing it,
with an immediate indication about the exact line of code that released
it (possibly an error path).

This only works with the per-thread cache, and even objects refilled from
the shared pool directly into the thread-local cache will have a NULL
there. That's not an issue since these objects have not yet been freed.
It's worth noting that pool_alloc_nocache() continues not to set any
caller pointer (e.g. when the cache is empty) because that would require
a possibly undesirable API change.

The extra cost is minimal (one pointer per object) and this complements
DEBUG_POOL_INTEGRITY well.
2022-01-24 16:40:48 +01:00
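Conceptually, the caller is captured at the pool_alloc()/pool_free() call
place and stored in the extra area of the object; a simplified sketch of
that principle (names are illustrative, not HAProxy's exact macros):

    /* hypothetical layout: one extra pointer stored with each cached object */
    struct cached_obj_sketch {
        void *caller;   /* last pool_alloc()/pool_free() call site, or NULL */
        /* ... object payload follows ... */
    };

    /* captured in the pool_alloc()/pool_free() macros at the call place */
    #define POOL_CALLER()  __builtin_return_address(0)

    static inline void record_caller_sketch(struct cached_obj_sketch *o,
                                            const void *caller)
    {
        /* objects refilled straight from the shared pool keep caller == NULL */
        o->caller = (void *)caller;
    }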
Willy Tarreau
0e2a5b4b61 MINOR: pools: extend pool_cache API to pass a pointer to a caller
This adds a caller to pool_put_to_cache() and pool_get_from_cache()
which will optionally be used to pass a pointer to their callers. For
now it's not used, only the API is extended to support this pointer.
2022-01-24 16:40:48 +01:00
Willy Tarreau
7fa092b727 MINOR: pools: prepare POOL_EXTRA to be split into multiple extra fields
Here the idea is to calculate the POOL_EXTRA size that is appended at
the end of a pool object based on the sum of enabled optional fields
so that we can more easily compute offsets and sizes depending on build
options.

For this, POOL_EXTRA is replaced with POOL_EXTRA_MARK which itself is
set either to sizeof(void*) or zero depending on whether we enable
marking the origin pool or not upon allocation.
2022-01-24 16:40:48 +01:00
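A sketch of the idea (the POOL_EXTRA/POOL_EXTRA_MARK names come from the
commit, the enabling condition is simplified):

    /* size of the optional "origin pool" mark appended to each object */
    #if defined(DEBUG_POOL_MARK)            /* simplified condition */
    # define POOL_EXTRA_MARK  (sizeof(void *))
    #else
    # define POOL_EXTRA_MARK  (0)
    #endif

    /* total extra storage appended at the end of each pool object: the sum
     * of all enabled optional fields, so that offsets and sizes can be
     * derived from build options
     */
    #define POOL_EXTRA  (POOL_EXTRA_MARK)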
Willy Tarreau
d392973dcc MINOR: pools: partially uninline pool_alloc()
The pool_alloc() function was already a wrapper to __pool_alloc(), which
was also inlined but took a set of flags. The latter was uninlined and
moved to pool.c, and pool_alloc()/pool_zalloc() were turned into macros so
that they can more easily evolve to support debugging options.

The number of call places made this code grow over time and doing only
this change saved ~1% of the whole executable's size.
2022-01-24 16:40:48 +01:00
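The resulting shape is roughly the following sketch (the zeroing flag name
is indicative):

    struct pool_head;

    /* out-of-line allocator taking a set of flags (now in pool.c) */
    void *__pool_alloc(struct pool_head *pool, unsigned int flags);

    /* thin macros so that debugging options can easily be added later */
    #define pool_alloc(pool)   __pool_alloc((pool), 0)
    #define pool_zalloc(pool)  __pool_alloc((pool), POOL_F_MUST_ZERO)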
Willy Tarreau
15c322c413 MINOR: pools: partially uninline pool_free()
The pool_free() function has become a bit big over time due to the
extra consistency checks. It used to remain inline only to deal
cleanly with the NULL pointer free that's quite present on some
structures (e.g. in stream_free()).

Here we're splitting the function in two:
  - __pool_free() does the inner block without the pointer test and
    becomes a function ;

  - pool_free() is now a macro that only checks the pointer and calls
    __pool_free() if needed.

The use of a macro versus an inline function is only motivated by an
easier instrumentation of the code later.

With this change, the code size reduces by ~1%, which means that at
this point all pool_free() call places used to represent more than
1% of the total code size.
2022-01-24 16:40:48 +01:00
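In shape, this gives something like the following sketch (not the exact
code):

    struct pool_head;

    /* inner block, no NULL check, now a real function in pool.c */
    void __pool_free(struct pool_head *pool, void *ptr);

    /* keep accepting pool_free(pool, NULL) cheaply at every call place */
    #define pool_free(pool, ptr)                        \
        do {                                            \
            if ((ptr) != NULL)                          \
                __pool_free((pool), (ptr));             \
        } while (0)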
Amaury Denoyelle
9320dd5385 MEDIUM: quic/ssl: add new ex data for quic_conn
Allow the quic_conn to be registered as ex-data in SSL callbacks. A new
index, ssl_qc_app_data_index, is used to identify it.

Replace the connection by the quic_conn as SSL ex-data when initializing
the QUIC SSL session. When using SSL callbacks in a QUIC context, the
connection is now NULL; use the quic_conn instead to retrieve the
required parameters. The related code is also cleaned up.

The same changes are conducted inside the QUIC SSL methods of xprt-quic:
the connection instance usage is replaced by quic_conn.
2022-01-24 10:30:49 +01:00
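The ex-data mechanism used here is the standard OpenSSL one; a minimal
sketch of how such an index is registered and used (the helper functions
are hypothetical):

    #include <openssl/ssl.h>

    static int ssl_qc_app_data_index = -1;

    /* done once at init time: reserve an ex-data slot for the quic_conn */
    static void init_qc_ex_index(void)
    {
        ssl_qc_app_data_index = SSL_get_ex_new_index(0, NULL, NULL, NULL, NULL);
    }

    /* when creating the SSL session of a QUIC connection */
    static void attach_qc(SSL *ssl, void *qc)
    {
        SSL_set_ex_data(ssl, ssl_qc_app_data_index, qc);
    }

    /* inside an SSL callback: the connection is NULL, fetch the quic_conn */
    static void *get_qc(SSL *ssl)
    {
        return SSL_get_ex_data(ssl, ssl_qc_app_data_index);
    }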
Amaury Denoyelle
57af069571 MINOR: quic: set listener accept cb on parsing
Set quic_session_accept() as a special accept callback for QUIC listeners.
This operation is conducted during the proto.add callback when creating
listeners.

Special care is now taken when setting the standard callback
session_accept_fd() so as not to overwrite a callback already defined by
the proto layer.
2022-01-24 10:30:49 +01:00
Willy Tarreau
0575d8fd76 DEBUG: pools: add new build option DEBUG_POOL_INTEGRITY
When enabled, objects picked from the cache are checked for corruption
by comparing their contents against a pattern that was placed when they
were inserted into the cache. Objects are also allocated in the reverse
order, from the oldest one to the most recent, so as to maximize the
ability to detect such a corruption. The goal is to detect writes after
free (or possibly hardware memory corruptions). Contrary to DEBUG_UAF
this cannot detect reads after free, but may possibly detect later
corruptions and will not consume extra memory. The CPU usage will
increase a bit due to the cost of filling/checking the area and for the
preference for cold cache instead of hot cache, though not as much as
with DEBUG_UAF. This option is meant to be usable in production.
2022-01-21 19:07:48 +01:00
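In essence, the cached area is filled with a deterministic pattern when the
object enters the cache and verified when it leaves it; a standalone sketch
of that principle (not the actual HAProxy pattern or function names):

    #include <stdint.h>
    #include <stdlib.h>

    /* fill the object with a pattern derived from its address before caching */
    static void fill_pattern_sketch(void *obj, size_t size)
    {
        uint64_t *p = obj;
        uint64_t pat = (uintptr_t)obj ^ 0xA55A5AA5A55A5AA5ULL;

        for (size_t i = 0; i < size / sizeof(*p); i++)
            p[i] = pat + i;
    }

    /* on allocation from the cache, verify the pattern; abort on corruption */
    static void check_pattern_sketch(void *obj, size_t size)
    {
        const uint64_t *p = obj;
        uint64_t pat = (uintptr_t)obj ^ 0xA55A5AA5A55A5AA5ULL;

        for (size_t i = 0; i < size / sizeof(*p); i++)
            if (p[i] != pat + i)
                abort();    /* write-after-free detected */
    }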
Willy Tarreau
6c539c4b8c BUG/MINOR: stream: make the call_rate only count the no-progress calls
We have an anti-looping protection in process_stream() that detects bugs
that used to affect a few filters like compression in the past which
sometimes forgot to handle a read0 or a particular error, leaving a
thread looping at 100% CPU forever. When such a condition is detected,
an alert is emitted and the process is killed so that it can be replaced
by a sane one:

  [ALERT]    (19061) : A bogus STREAM [0x274abe0] is spinning at 2057156
             calls per second and refuses to die, aborting now! Please
             report this error to developers [strm=0x274abe0,3 src=unix
             fe=MASTER be=MASTER dst=<MCLI> txn=(nil),0 txn.req=-,0
             txn.rsp=-,0 rqf=c02000 rqa=10000 rpf=88000021 rpa=8000000
             sif=EST,40008 sib=DIS,84018 af=(nil),0 csf=0x274ab90,8600
             ab=0x272fd40,1 csb=(nil),0
             cof=0x25d5d80,1300:PASS(0x274aaf0)/RAW((nil))/unix_stream(9)
             cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={}]
    call trace(11):
    |       0x4dbaab [c7 04 25 01 00 00 00 00]: stream_dump_and_crash+0x17b/0x1b4
    |       0x4df31f [e9 bd c8 ff ff 49 83 7c]: process_stream+0x382f/0x53a3
    (...)

One problem with this detection is that it used to only count the call
rate because we weren't sure how to make it more accurate, but the
threshold was high enough to prevent accidental false positives.

There is actually one case that manages to trigger it, which is when
sending huge amounts of requests pipelined on the master CLI. Some
short requests such as "show version" are handled extremely fast and
cause a wake-up of an analyser to parse the next request, then of an
applet to handle it, back and forth. But this condition is not an
error, since some data are being forwarded by the stream, and it's
easy to detect it.

This patch modifies the detection so that update_freq_ctr() only
applies to calls made without CF_READ_PARTIAL nor CF_WRITE_PARTIAL
set on any of the channels, which really indicates that nothing is
happening at all.

This is greatly sufficient and extremely effective, as the call above
is still caught (shutr being ignored by an analyser) while a loop on
the master CLI now has no effect. The "call_rate" field in the detailed
"show sess" output will now be much lower, except for bogus streams,
which may help spot them. This field is only there for developers
anyway so it's pretty fine to slightly adjust its meaning.

This patch could be backported to stable versions in case of reports
of such an issue, but as that's unlikely, it's not really needed.
2022-01-20 18:56:57 +01:00
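In spirit, the change gates the counter update on the absence of progress
on either channel; a pseudo-code sketch reusing the flag and field names
quoted above (not the actual patch):

    /* only account for calls during which nothing moved on either channel */
    if (!((req->flags | res->flags) & (CF_READ_PARTIAL | CF_WRITE_PARTIAL)))
        rate = update_freq_ctr(&s->call_rate, 1);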
Frédéric Lécaille
82468ea98e MINOR: quic: Remove the packet number space TX MT_LIST
There is no need to use an MT_LIST to store the frames to send from a packet
number space. This is a remnant of the multi-threading support for the TX part.
2022-01-20 16:43:06 +01:00
Willy Tarreau
c514365317 MINOR: channel: add new function co_getdelim() to support multiple delimiters
For now we have co_getline() which reads a buffer and stops on LF, and
co_getword() which reads a buffer and stops on one arbitrary delimiter.
But sometimes we'd need to stop on a set of delimiters (CR and LF, etc).

This patch adds a new function co_getdelim() which takes a set of delimiters
as a string, and constructs a small map (32 bytes) that's looked up during
parsing to stop after the first delimiter found within the set. It also
supports an optional escape character that skips a delimiter (typically a
backslash). For the rest it works exactly like the two other variants.
2022-01-19 19:16:47 +01:00
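The 32-byte map mentioned above is simply a 256-bit membership bitmap over
byte values; a standalone sketch of the technique (escape-character handling
omitted, and not the actual co_getdelim() code):

    #include <stdint.h>
    #include <string.h>

    /* build a 256-bit (32-byte) membership map from a delimiter string */
    static void make_delim_map_sketch(uint8_t map[32], const char *delims)
    {
        memset(map, 0, 32);
        for (; *delims; delims++)
            map[(uint8_t)*delims >> 3] |= 1 << ((uint8_t)*delims & 7);
    }

    /* cheap lookup while parsing */
    static inline int is_delim_sketch(const uint8_t map[32], char c)
    {
        return (map[(uint8_t)c >> 3] >> ((uint8_t)c & 7)) & 1;
    }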
William Lallemand
bad9c8cac4 BUG/MINOR: httpclient: set default Accept and User-Agent headers
Some servers require at least an Accept and a User-Agent header in the
request. This patch sets default values for them.

Must be backported in 2.5.
2022-01-14 20:46:21 +01:00
Willy Tarreau
39fd546d4b MINOR: pools: enable pools with DEBUG_FAIL_ALLOC as well
During 2.4-dev, fault injection was enabled for cached pools with commit
207c09509 ("MINOR: pools: move the fault injector to __pool_alloc()"),
except that the condition for CONFIG_HAP_POOLS still depended on
DEBUG_FAIL_ALLOC not being set, which limits the usability to cases
where the define is set by hand. Let's remove it from the equation as
this is not a constraint anymore. While a bit old, there's no need to
backport this as it's only used during development.
2022-01-12 17:31:01 +01:00
Amaury Denoyelle
b76ae69513 MEDIUM: quic: implement Retry emission
Implement the emission of Retry packets. These packets are emitted in
response to Initial packets from clients without a token. The token of the
Retry packet contains the ODCID from the Initial packet.

By default, Retry packet emission is disabled and the handshake can
continue without address validation. To enable Retry, a new bind option
has been defined named "quic-force-retry". If set, the handshake must be
conducted only after receiving a token in the Initial packet.
2022-01-12 11:08:48 +01:00
Amaury Denoyelle
c3b6f4d484 MINOR: quic: define retry_source_connection_id TP
Define a new QUIC transport parameter, retry_source_connection_id. This
parameter is set only by the server, after issuing a Retry packet.
2022-01-12 11:08:48 +01:00
Amaury Denoyelle
5ff1c9778c MEDIUM: quic: implement Initial token parsing
Implement the parsing of the token from Initial packets. It is expected
that the token contains a CID, which is the DCID of the Initial packet
received from the client without a token (the one that triggered the Retry
packet). This CID is then used for the transport parameters.

Note that at the moment Retry packet emission is not implemented. This
will be achieved in a following commit.
2022-01-12 11:08:48 +01:00
Amaury Denoyelle
6efec292ef MINOR: quic: implement Retry TLS AEAD tag generation
Implement a new QUIC TLS related function
quic_tls_generate_retry_integrity_tag(). This function can be used to
calculate the AEAD tag of a Retry packet.
2022-01-12 11:08:48 +01:00
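For context, such a tag is an AES-128-GCM AEAD tag computed with no
plaintext, the Retry pseudo-packet as AAD, and the fixed key/nonce from
RFC 9001 section 5.8; a hedged OpenSSL sketch (not HAProxy's actual
function, and the RFC constants are passed in rather than hard-coded):

    #include <openssl/evp.h>

    static int retry_tag_sketch(unsigned char tag[16],
                                const unsigned char *aad, int aad_len,
                                const unsigned char key[16],
                                const unsigned char nonce[12])
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        unsigned char dummy[16];
        int len, ret = 0;

        if (!ctx)
            return 0;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, nonce) &&
            EVP_EncryptUpdate(ctx, NULL, &len, aad, aad_len) && /* AAD only */
            EVP_EncryptFinal_ex(ctx, dummy, &len) &&
            EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_AEAD_GET_TAG, 16, tag))
            ret = 1;
        EVP_CIPHER_CTX_free(ctx);
        return ret;
    }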
Frédéric Lécaille
ba85acdc70 MINOR: quic: Add traces to quic_close() and quic_conn_io_cb()
This is to have an idea of possible remaining issues regarding the
connection terminations.
2022-01-11 16:56:04 +01:00
Frédéric Lécaille
2fe8b3be20 MINOR: quic: Flag the connection as being attached to a listener
We do not rely on connection objects to know if we are a listener or not.
2022-01-11 16:12:31 +01:00
Remi Tricot-Le Breton
a996763619 BUG/MINOR: ssl: Store client SNI in SSL context in case of ClientHello error
If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since we use
the SSL_get_servername function to return this SNI in the ssl_fc_sni
sample fetch, that means that in case of error, such as an SNI mismatch
with a frontend having the strict-sni option enabled, the sample fetch
would not work (making strict-sni related errors hard to debug).

This patch fixes that by storing the SNI as an ex_data in the SSL
context in case the ClientHello callback returns an error. This way the
sample fetch can fall back to getting the SNI this way. It will still
call the SSL_get_servername function first since it is the proper
way of getting a client's SNI when the handshake succeeded.

In order to avoid memory allocations at runtime in this heavily used
function, a new memory pool was created to store those client SNIs.
Its entry size is set to 256 bytes since SNIs can't be longer than
255 characters.

This fixes GitHub #1484.

It can be backported in 2.5.
2022-01-10 16:31:22 +01:00
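The resulting lookup in the sample fetch is essentially a two-step
fallback; a sketch (the ex-data index variable is hypothetical):

    #include <openssl/ssl.h>

    static int ssl_client_sni_index = -1;  /* hypothetical ex-data index */

    /* return the client SNI: prefer the normal API, fall back to the copy
     * stored from the ClientHello callback when the handshake failed early
     */
    static const char *get_client_sni_sketch(SSL *ssl)
    {
        const char *sni = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

        if (!sni)
            sni = SSL_get_ex_data(ssl, ssl_client_sni_index);
        return sni;
    }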
William Dauchy
a9dd901143 MINOR: proxy: add option idle-close-on-response
Avoid closing idle connections if a soft stop is in progress.

By default, idle connections will be closed during a soft stop. In some
environments, a client talking to the proxy may have prepared some idle
connections in order to send requests later. If there is no proper retry
on write errors, this can result in errors while haproxy is reloading.
Even though a proper implementation should retry on connection/write
errors, this option was introduced to support backward compatibility with
haproxy < v2.4. Indeed, before v2.4 we were waiting for a last request to
be able to add a "connection: close" header and advise the client to
close the connection.

In a real-life example, this behavior was seen in AWS with an ALB in
front of haproxy. The end result was the ALB returning 502s during
haproxy reloads.
This patch was tested on haproxy v2.4, with regular reloads of the
process and a constant stream of incoming requests. Before the patch,
502s were regularly returned to the client; after activating the option,
they disappeared.

This patch should help fix github issue #1506.
In order to unblock some v2.3 to v2.4 migrations, this patch should be
backported up to the v2.4 branch.

Signed-off-by: William Dauchy <wdauchy@gmail.com>
[wt: minor edits to the doc to mention other options to care about]
Signed-off-by: Willy Tarreau <w@1wt.eu>
2022-01-06 09:09:51 +01:00
Frédéric Lécaille
6b6631593f MINOR: quic: Re-arm the PTO timer upon datagram receipt
When blocked by the anti-amplification limit, it is the responsibility of
the client to unblock it by sending new datagrams. On the server side, such
datagrams must trigger the arming of the PTO timer even if they are not
well parsed.
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
de6f7c503e MINOR: quic: Prepare Handshake packets asap after completed handshake
Switch back to the QUIC_HS_ST_SERVER_HANDSHAKE state after a completed
handshake if acks must be sent.
Also ensure we build post-handshake frames only once, without using the
prev_st variable, and ensure we discard the Handshake packet number space
only once.
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
917a7dbdc7 MINOR: quic: Do not drop secret key but drop the CRYPTO data
We need to be able to decrypt late Handshake packets after the TLS secret
keys have been discarded; if not, the peer may keep sending Handshake
packets which have not been acknowledged. But for such packets, we discard
the CRYPTO data.
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
466e9da145 MINOR: quic: Remove nb_pto_dgrams quic_conn struct member
From now on we rely on the tx->pto_probe pktns struct member to inform
the packet building function that we want to probe.
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
22576a2e55 MINOR: quic: Wrong ack_delay computation before calling quic_loss_srtt_update()
RFC 9002 5.3. Estimating smoothed_rtt and rttvar:
MUST use the lesser of the acknowledgment delay and the peer's max_ack_delay
after the handshake is confirmed.
2022-01-04 17:30:00 +01:00
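For reference, the RFC 9002 section 5.3 computation this fix aligns with,
in simplified form (a sketch, not HAProxy's quic_loss_srtt_update()):

    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* simplified RFC 9002 5.3 update; all values in the same time unit */
    static void srtt_update_sketch(uint64_t *srtt, uint64_t *rttvar,
                                   uint64_t min_rtt, uint64_t latest_rtt,
                                   uint64_t ack_delay, uint64_t max_ack_delay,
                                   int handshake_confirmed)
    {
        uint64_t adjusted_rtt = latest_rtt;
        uint64_t var;

        /* once the handshake is confirmed, MUST use the lesser of the
         * acknowledgment delay and the peer's max_ack_delay
         */
        if (handshake_confirmed)
            ack_delay = MIN(ack_delay, max_ack_delay);

        if (latest_rtt >= min_rtt + ack_delay)
            adjusted_rtt = latest_rtt - ack_delay;

        var = *srtt > adjusted_rtt ? *srtt - adjusted_rtt : adjusted_rtt - *srtt;
        *rttvar = (3 * *rttvar + var) / 4;
        *srtt   = (7 * *srtt + adjusted_rtt) / 8;
    }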
Frédéric Lécaille
09e0f8319d MINOR: quic: Wrong packet number space computation for PTO
This led quic_pto_pktns() to return the 01RTT packet number space when
initiating a probing even though the handshake was not completed!
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
1f6cf18183 MINOR: quic: Wrong first packet number space computation
I really do not know where these inversions came from.
2022-01-04 17:30:00 +01:00
Frédéric Lécaille
fde2a98dd1 MINOR: quic: Wrong traces after rework
TRACE_*() macros must take a quic_conn struct as first argument.
2022-01-04 17:30:00 +01:00
William Lallemand
148d7a0301 BUG/MINOR: cli: fix _getsocks with musl libc
In ticket #1413, the transfer of FDs couldn't work correctly on Alpine
Linux. After a few tests with musl on another distribution, it seems to
be a limitation of this libc.

The number of FDs that could be sent per sendmsg was set to 253, which
does not seem to work with musl; decreasing it to 252 seems to work
better, so let's set this value everywhere since it does not have that
much impact.

This must be backported in every maintained version.
2022-01-03 19:50:34 +01:00
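The underlying mechanism is plain SCM_RIGHTS file-descriptor passing, with
the per-message count capped at the 252 value the patch settles on; a
minimal standalone sketch (not the actual _getsocks code):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define MAX_SEND_FD 252  /* per-sendmsg cap that also works with musl */

    /* send up to MAX_SEND_FD file descriptors over a unix socket */
    static int send_fds_sketch(int sock, const int *fds, int nfd)
    {
        char iobuf[1] = { 0 };
        struct iovec iov = { .iov_base = iobuf, .iov_len = sizeof(iobuf) };
        union {
            char buf[CMSG_SPACE(MAX_SEND_FD * sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = { 0 };
        struct cmsghdr *cmsg;

        if (nfd > MAX_SEND_FD)
            nfd = MAX_SEND_FD;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = CMSG_SPACE(nfd * sizeof(int));

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(nfd * sizeof(int));
        memcpy(CMSG_DATA(cmsg), fds, nfd * sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : nfd;
    }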
Ilya Shipitsin
5e87bcf870 CLEANUP: assorted typo fixes in the code and comments
This is the 29th iteration of typo fixes.
2022-01-03 14:40:58 +01:00
Willy Tarreau
f5e94b2f47 OPTIM: pools: reduce local pool cache size to 512kB
Now that we support batched allocations/releases, it appears that we can
reach the same performance on H2 with shared pools and 256kB thread-local
cache as without shared pools, a fast allocator and 1MB thread-local cache.
With 512kB we're up to 10% faster on highly multiplexed H2 than without the
shared cache. This was tested on a 16-core ARM machine. Thus it's time to
slightly reduce the per-thread memory cost, which may also improve the
performance on machines with smaller L2 caches. It essentially reverts
commit f587003fe ("MINOR: pools: double the local pool cache size to 1 MB").
2022-01-02 19:52:15 +01:00
Willy Tarreau
43937e920f MEDIUM: pools: start to batch eviction from local caches
Since the previous patch we can forcefully evict multiple objects from the
local cache, even when evicting based on the LRU entries. Let's define
a compile-time configurable setting to batch the releasing of objects. For
now we set this value to 8 items per round.

This is marked medium because eviction from the LRU will slightly change
in order to group the last items that are freed within a single cache
instead of accurately scanning only the oldest ones exactly in their
order of appearance. But this is required in order to evolve towards
batched removals.
2022-01-02 19:35:26 +01:00
Willy Tarreau
337410c5a4 MINOR: pools: pass the objects count to pool_put_to_shared_cache()
This is in order to let the caller build the cluster of items to be
released. For now single items are released hence the count is always
1.
2022-01-02 19:35:26 +01:00
Willy Tarreau
148160b027 MINOR: pools: prepare pool_item to support chained clusters
In order to support batched allocations and releases, we'll need to
prepare chains of items linked together and that can be atomically
attached and detached at once. For this we implement a "down" pointer
in each pool_item that points to the other items belonging to the same
group. For now it's always NULL though freeing functions already check
them when trying to release everything.
2022-01-02 19:35:26 +01:00
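Combined with the pool_item introduction a few commits below, the structure
essentially becomes the following sketch:

    /* one item in a shared pool's free list */
    struct pool_item {
        struct pool_item *next;  /* next cluster head in the shared free list */
        struct pool_item *down;  /* other items of the same cluster; NULL for now */
    };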
Willy Tarreau
91a8e28f90 MINOR: pool: add a function to estimate how many may be released at once
At the moment we count the number of objects releasable to a shared pool
one by one. The way the formula is built makes it possible to pre-compute
the number of available slots, so let's add a function for that so that
callers can do it once before iterating.

This takes into account the average number of entries needed and the
minimum availability per pool. The function is not used yet.
2022-01-02 19:35:26 +01:00
Willy Tarreau
c16ed3b090 MINOR: pool: introduce pool_item to represent shared pool items
In order to support batch allocation from/to shared pools, we'll have to
support a specific representation for pool objects. The new pool_item
structure will be used for this. For now it only contains a "next"
pointer that matches exactly the current storage model. The few functions
that deal with the shared pool entries were adapted to use the new type.
There is no functionality difference at this point.
2022-01-02 19:35:26 +01:00
Willy Tarreau
b46674a283 MINOR: pool: check for pool's fullness outside of pool_put_to_shared_cache()
Instead of letting pool_put_to_shared_cache() pass the object to the
underlying OS layer when there's no more room, let's have the caller
check if the pool is full and either call pool_put_to_shared_cache()
or call pool_free_nocache().

Doing this noticeably simplifies the code as this function now only has
to deal with a pool and an item and only for cases where there are
local caches and shared caches. As the code was simplified and the
calls more isolated, the function was moved to pool.c.

Note that it's only called from pool_evict_from_local_cache{,s}() and
that a part of its logic might very well move there when dealing with
batches.
2022-01-02 19:35:26 +01:00
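The caller-side pattern described here looks roughly like this pseudo-code
sketch, reusing the function names from these commits (signatures are
assumptions):

    /* in the local-cache eviction path: decide where the object goes */
    if (!pool_is_crowded(pool))
        pool_put_to_shared_cache(pool, item, 1);  /* single item for now */
    else
        pool_free_nocache(pool, item);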
Willy Tarreau
a06f78b376 MINOR: pool: make pool_is_crowded() always true when no shared pools are used
This function is used to know whether the shared pools are full or if we
can store more objects in them. Right now it cannot be used in a generic
way because when shared pools are not used it will return false, letting
one think pools can accept objects. Let's make one variant for each build
model.
2022-01-02 19:35:26 +01:00
Willy Tarreau
57c5c6db0c MINOR: pool: rely on pool_free_nocache() in pool_put_to_shared_cache()
At the moment pool_put_to_shared_cache() checks if the pool is crowded,
and if so it does the exact same job as pool_free_nocache(), otherwise
it adds the object there.

This patch rearranges the code so that the function is split in two and
either uses one path or the other, and always relies on pool_free_nocache()
in case we don't want to store the object. This way there will be a common
path with the variant not using the shared cache. The patch is better viewed
using git show -b since a whole block got reindented.

It's worth noting that there is a tiny difference now in the local cache
usage measurement, as the decrement of "used" used to be performed before
checking for pool_is_crowded() instead of being done after. This used to
result in always one less object being kept in the cache than what was
configured in minavail. The rearrangement of the code aligns it with
other call places.
2022-01-02 19:35:26 +01:00
Willy Tarreau
594775d17c CLEANUP: pools: group list updates in pool_get_from_cache()
Some changes affect the list element and others affect the pool stats.
Better group them together, as the compiler may not detect certain
possible optimizations after the casts made by the list macros.
2022-01-02 19:34:19 +01:00
Willy Tarreau
afe2c4a1fc MINOR: pool: allocate from the shared cache through the local caches
One of the thread scaling challenges nowadays for the pools is the
contention on the shared caches. There's never any situation where we
have a shared cache and no local cache anymore, so we can technically
afford to transfer objects from the shared cache to the local cache
before returning them to the user via the regular path. This adds a
little bit more work per object per miss, but will permit batch
processing later.

This patch simply moves pool_get_from_shared_cache() to pool.c under
the new name pool_refill_local_from_shared(), and this function does
not return anything but it places the allocated object at the head of
the local cache.
2022-01-02 19:27:57 +01:00
Willy Tarreau
8c4927098e CLEANUP: pools: get rid of the POOL_LINK macro
The POOL_LINK macro is now only used for debugging, and it still requires
ifdefs around, which needlessly complicates the code. Let's replace it
and the calling code with a new pair of macros: POOL_DEBUG_SET_MARK()
and POOL_DEBUG_CHECK_MARK(), that respectively store and check the pool
pointer in the extra location at the end of the pool. This removes 4
pairs of ifdefs in the middle of the code.
2022-01-02 12:44:19 +01:00
Willy Tarreau
799f6143ca CLEANUP: pools: do not use the extra pointer to link shared elements
This practice relying on POOL_LINK() dates from the era where there were
no pool caches, but given that the structures are a bit more complex now
and that pool caches do not make use of this feature, it is totally
useless since released elements have already been overwritten, and yet
it complicates the architecture and prevents from making simplifications
and optimizations. Let's just get rid of this feature. The pointer to
the origin pool is preserved though, as it helps detect incorrect frees
and serves as a canary for overflows.
2022-01-02 12:44:19 +01:00
Willy Tarreau
4859984a5b DOC: pool: document the purpose of various structures in the code
The pools have become complex with the shared pools and the thread-local
caches, and the purpose of certain structures is never easy to grasp.
Let's add a bit of documentation there to save some long and painful
analysis to those touching that area.
2022-01-02 12:44:19 +01:00