17544 Commits

Author SHA1 Message Date
Frédéric Lécaille
745d1a269b MEDIUM: map/acl: Improve pat_ref_set_elt() efficiency (for "set-map", "add-acl" action perfs)
Store a pointer to the expression (struct pattern_expr) into the data structures
used to chain/store the map element references (struct pat_ref_elt), i.e.
struct pattern_tree when the element is stored into an ebtree, or
struct pattern_list when it is chained to a list.

Modify pat_ref_set_elt() to stop inspecting all the expressions attached to a map
when looking for the <elt> element passed as parameter to retrieve the sample data
to be parsed. Indeed, thanks to the pointer added above to each pattern tree node
or list element, they can all be inspected directly from the <elt> passed as
parameter through its ->tree_head and ->list_head members: the pattern tree nodes
are stored in elt->tree_head, and the pattern list elements are chained to the
elt->list_head list. This inspection was also the job of pattern_find_smp(), which
is no longer useful. This patch removes the code of this function.
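
A minimal sketch of the resulting relationship; the types and field names are
simplified for illustration and do not reproduce the exact HAProxy layout:

  /* Simplified sketch: every node keeps a back-pointer to its expression,
   * and all nodes belonging to one reference element can be walked from
   * the element itself instead of scanning every expression of the map.
   */
  struct pattern_expr;                    /* one compiled expression */

  struct pattern_node {                   /* stands for pattern_tree / pattern_list */
      struct pattern_expr *expr;          /* back-pointer added by this patch */
      struct pattern_node *next_from_elt; /* chaining from the pat_ref_elt */
  };

  struct pat_ref_elt {
      const char *pattern;
      struct pattern_node *tree_head;     /* nodes stored in ebtrees */
      struct pattern_node *list_head;     /* nodes chained to lists */
  };

  /* Update all expressions holding <elt> without scanning the whole map. */
  static void visit_elt_nodes(struct pat_ref_elt *elt,
                              void (*update)(struct pattern_expr *))
  {
      struct pattern_node *n;

      for (n = elt->tree_head; n; n = n->next_from_elt)
          update(n->expr);
      for (n = elt->list_head; n; n = n->next_from_elt)
          update(n->expr);
  }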
2023-08-25 15:41:59 +02:00
Frédéric Lécaille
0844bed7d3 MEDIUM: map/acl: Improve pat_ref_set() efficiency (for "set-map", "add-acl" action perfs)
Organize the references to the pattern elements of a map (struct pat_ref_elt) into an ebtree:
  - add an eb_root member to the map (pat_ref struct) and an ebpt_node to its
    element (pat_ref_elt struct),
  - modify the code to insert these nodes into their ebtrees each time they are
    allocated. This is done in pat_ref_append().

Note that the ->head member (struct list) of the map (struct pat_ref) could have
been removed but was not: it is still necessary to dump the map contents from the
CLI in the order the map elements have been inserted.

This patch also modifies http_action_set_map(), which is the callback used at
least by the "set-map" action. The pat_ref_elt element returned by
pat_ref_find_elt() is no longer ignored, but reused, if not NULL, by pat_ref_set()
as the first element to look up from. The latter is also modified to use the
ebtree attached to the map in place of the ->head list chaining the map elements
(struct pat_ref_elt).

Also modify pat_ref_find_elt() to make it use the ->eb_root ebtree added to the
map by this patch in place of inspecting all the elements with strcmp() calls.
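
For illustration, a hedged sketch of the indexing, assuming the ebtree
"indirect string" API (ebis_insert()/ebis_lookup() over struct ebpt_node whose
key points to the pattern string); the structures are reduced to the relevant
fields and are not the exact final layout:

  #include <import/ebistree.h>
  #include <haproxy/list-t.h>

  struct pat_ref {
      struct list head;        /* kept: preserves CLI dump insertion order */
      struct eb_root eb_root;  /* new: elements indexed by pattern string */
  };

  struct pat_ref_elt {
      struct ebpt_node node;   /* node.key points to <pattern> below */
      char *pattern;
  };

  /* insertion done once, when the element is allocated */
  static void elt_index(struct pat_ref *ref, struct pat_ref_elt *elt)
  {
      elt->node.key = elt->pattern;
      ebis_insert(&ref->eb_root, &elt->node);
  }

  /* O(log n) lookup replacing the strcmp() walk over ref->head */
  static struct pat_ref_elt *elt_find(struct pat_ref *ref, const char *key)
  {
      struct ebpt_node *n = ebis_lookup(&ref->eb_root, key);
      return n ? ebpt_entry(n, struct pat_ref_elt, node) : NULL;
  }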
2023-08-25 15:41:56 +02:00
Willy Tarreau
ff9e653859 BUG/MINOR: ssl_sock: fix possible memory leak on OOM
That's the classical realloc() issue: if it returns NULL, the old area
is not freed but we erase the pointer. It was brought by commit e18d4e828
("BUG/MEDIUM: ssl: backend TLS resumption with sni and TLSv1.3"), and
should be backported where this commit was backported.
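
The shape of the fix is the classical safe-realloc pattern; a generic,
self-contained illustration (not the exact hunk):

  #include <stdlib.h>

  /* On realloc() failure the old block must be released (or kept
   * reachable), never overwritten by the NULL return value.
   */
  static int grow_buffer(char **buf, size_t new_size)
  {
      char *tmp = realloc(*buf, new_size);

      if (!tmp) {
          free(*buf);      /* avoid leaking the old area on OOM */
          *buf = NULL;
          return -1;
      }
      *buf = tmp;
      return 0;
  }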
2023-08-25 14:32:50 +02:00
Aurelien DARRAGON
ee1891ccbe BUG/MINOR: hlua_fcn: potentially unsafe stktable_data_ptr usage
As reported by Coverity in GH #2253, stktable_data_ptr() usage in
hlua_stktable_dump() func is potentially unsafe because
stktable_data_ptr() may return NULL and the returned value is
dereferenced as-is without precautions.

In practice, this should not happen because some error checking was
already performed prior to calling stktable_data_ptr(). But since we're
using the safe stktable_data_ptr() function, all the error checking is
already done within the function, so all we need to do is check ptr
against NULL to protect against NULL dereferences.

This should be backported to every stable version.
2023-08-25 11:52:43 +02:00
Amaury Denoyelle
6bd994d5d7 BUG/MINOR: h2: fix reverse if no timeout defined
h2c.task is not allocated in h2_init() if the client/server timeout is not
defined, depending on the connection side. This caused a crash on
connection reversal due to the systematic requeuing of h2c.task in
h2_conn_reverse().

To fix this, check h2c.task in h2_conn_reverse(). If the old timeout was
undefined but the new one is, h2c.task must be allocated as it was not in
h2_init(). In the opposite situation, if the old timeout was defined and the
new one is not, h2c.task is freed. In this case, or if neither timeout is
defined, skip the task requeuing.
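
A hedged sketch of this decision logic (the helper names are hypothetical,
this is not the actual h2_conn_reverse() code):

  #include <stdbool.h>

  struct task;                           /* opaque for this sketch */
  struct task *task_alloc(void);         /* hypothetical helpers */
  void task_release(struct task *t);
  void task_requeue(struct task *t);

  static void reverse_timeout_task(struct task **ptask,
                                   bool old_timeout_set, bool new_timeout_set)
  {
      if (!old_timeout_set && new_timeout_set)
          *ptask = task_alloc();         /* was skipped in h2_init() */
      else if (old_timeout_set && !new_timeout_set) {
          task_release(*ptask);          /* no longer needed after reversal */
          *ptask = NULL;
      }

      if (*ptask)
          task_requeue(*ptask);          /* skip requeuing when no task exists */
  }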

This bug is easily reproduced by using a reverse bind or server with an
undefined client/server timeout, depending on the connection reverse
direction.

This bug has been introduced by reverse connect support.
No need to backport it.
2023-08-24 17:58:14 +02:00
Amaury Denoyelle
5053e89142 MEDIUM: h2: prevent stream opening before connection reverse completed
HTTP/2 demux must be handled with care for active reverse connections.
Until accept has been completed, it should be forbidden to handle a
HEADERS frame as the session is not yet ready to handle streams.

To implement this, use the flag H2_CF_DEM_TOOMANY which blocks demux
process. This flag is automatically set just after conn_reverse()
invocation. The flag is removed on rev_accept_conn() callback via a new
H2 ctl enum. H2 tasklet is woken up to restart demux process.

As a side-effect, error reporting in the H2 mux may be blocked as demux
functions are used to convert error status at the connection level with
CO_FL_ERROR. To ensure the error is reported for a reverse connection, check
h2c_is_dead() specifically for this case in h2_wake(). This change also
has its own side-effect: h2c_is_dead() conditions have been adjusted to
always exclude the !h2c->conn->owner condition, which is always true for
reverse connections, or else the H2 mux may kill them unexpectedly.
2023-08-24 17:03:08 +02:00
Amaury Denoyelle
6820b9b393 MEDIUM: h2: implement active connection reversal
Implement active reverse on h2_conn_reverse().

Only minimal steps are done here: HTTP version session counters are
incremented on the listener instance. Also, the connection is inserted
in the mux_stopping_list to ensure it will be actively closed on process
shutdown/listener suspend.
2023-08-24 17:03:08 +02:00
Amaury Denoyelle
b130f8dbc3 MINOR: proto_reverse_connect: handle early error before reversal
An error can occur on a reverse connection before accept is completed.
In this case, no parent session can be notified. Instead, wake up the
receiver task on conn_create_mux().

As a counterpart to this, the receiver task is extended to match the CO_FL_ERROR
flag on the pending connection. In this case, the connection is freed. The
task is then requeued with a 1 second delay to start a new reverse
connection attempt.
2023-08-24 17:03:08 +02:00
Amaury Denoyelle
47f502df5e MEDIUM: proto_reverse_connect: bootstrap active reverse connection
Implement active reverse connection initialization. This is done through
a new task stored in the receiver structure. This task is instantiated
via bind callback and first woken up via enable callback.

The task handler is separated into two halves. In the first step, a new
connection is allocated and stored in the <pend_conn> member of the
receiver. This new client connection will proceed to connect using the
server instance referenced in the bind_conf.

When the connect has successfully been executed and the HTTP/2 connection is
ready for exchanges after SETTINGS, the reverse_connect task is woken up. As
<pend_conn> is still set, the second half is executed, which only
calls listener_accept(). This will in turn execute the accept_conn
callback, which is defined to return the pending connection.

The task is automatically requeued inside accept_conn callback if bind
maxconn is not yet reached. This allows specifying how many connections
should be opened. Each connection is instantiated and reversed serially,
one by one, until maxconn is reached.

conn_free() has been modified to handle the case where a reverse connection
fails before being accepted. In this case, no session exists to notify
about the failure. Instead, the reverse_connect task is requeued with a 1
second delay, giving time to fix a possible network issue. This will
allow a new reverse connection attempt.

Note that for the moment connection rebinding after accept is disabled
for simplicity. Extra operations are required to migrate an existing
connection and its stack to a new thread which will be implemented
later.
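
A hedged sketch of the two-step handler described above; all names are
hypothetical and the HAProxy task/listener APIs are reduced to plain
functions for illustration only:

  struct connection;                                   /* opaque here */

  struct reverse_task {
      struct connection *pend_conn;   /* connection being reversed, if any */
  };

  struct connection *start_reverse_connection(void);   /* hypothetical: step 1 */
  void run_listener_accept(struct reverse_task *t);    /* hypothetical: step 2 */

  static void reverse_connect_handler(struct reverse_task *t)
  {
      if (!t->pend_conn) {
          /* first half: allocate the client connection and connect to the
           * server referenced by the bind_conf; the task will be woken up
           * again once the connection is ready after SETTINGS */
          t->pend_conn = start_reverse_connection();
          return;
      }

      /* second half: the pending connection is ready, run the accept so
       * that the accept_conn callback returns it; the task is requeued
       * from there until the bind maxconn is reached */
      run_listener_accept(t);
      t->pend_conn = NULL;
  }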
2023-08-24 17:03:06 +02:00
Amaury Denoyelle
b781a1bb09 MINOR: connection: prepare init code paths for active reverse
When an active reverse connection is initialized, it has no stream-conn
attached to it, contrary to other backend connections. This forces us to add
extra checks on stream existence in conn_create_mux() and h2_init().

There are also extra checks required in session_accept_fd() after the
reverse and accept are done. This is because, contrary to other frontend
connections, reversed connections have already initialized their mux and
transport layers. This forces us to skip the majority of the
session_accept_fd() initialization part.

Finally, if session_accept_fd() is interrupted due to an early error, a
reverse connection cannot be freed directly or else mux will remain
alone. Instead, the mux destroy callback is used to free all connection
elements properly.
2023-08-24 17:02:37 +02:00
Amaury Denoyelle
0747e493a0 MINOR: proto_reverse_connect: parse rev@ addresses for bind
Implement parsing for "rev@" addresses on bind line. On config parsing,
server name is stored on the bind_conf.

Several new callbacks are defined on reverse_connect protocol to
complete parsing. listen callback is used to retrieve the server
instance from the bind_conf server name. If found, the server instance
is stored on the receiver. Checks are implemented to ensure that only the
HTTP/2 protocol is used by the server.
2023-08-24 17:02:37 +02:00
Amaury Denoyelle
008e8f67ee MINOR: connection: extend conn_reverse() for active reverse
Implement active reverse support inside conn_reverse(). This is used to
transfer the connection from the backend to the frontend side.

A new flag is defined CO_FL_REVERSED which is set just after this
transition. This will be used to identify connections which were
reversed but not yet accepted.
2023-08-24 17:02:37 +02:00
Amaury Denoyelle
5db6dde058 MINOR: proto: define dedicated protocol for active reverse connect
A new protocol named "reverse_connect" is created. This will be used to
instantiate connections that are opened by a reverse bind.

For the moment, only a minimal set of callbacks are defined with no real
work. This will be extended along the next patches.
2023-08-24 17:02:37 +02:00
Amaury Denoyelle
1723e21af2 MINOR: connection: use attach-srv name as SNI reuse parameter on reverse
On connection passive reverse from frontend to backend, its hash node is
calculated to be able to select it from the idle server pool. If
attach-srv rule defined an associated name, reuse it as the value for
SNI prehash.

This change allows a client to select a reverse connection by its name,
simply by configuring the SNI on its server line.
2023-08-24 17:02:34 +02:00
Amaury Denoyelle
0b3758e18f MINOR: tcp-act: define optional arg name for attach-srv
Add an optional argument 'name' for attach-srv rule. This contains an
expression which will be used as an identifier inside the server idle
pool after reversal. To match this connection for a future transfer
through the server, the SNI server parameter must match this name. If no
name is defined, match will only occur with an empty SNI value.

For the moment, only the parsing step is implemented. An extra check is
added to ensure that the reverse server uses SSL with an SNI. Indeed, if a
name is defined but the server does not use an SNI, connections would never
be selected for reuse after reversal due to a hash mismatch.
2023-08-24 15:28:38 +02:00
Amaury Denoyelle
58cb76d7e1 MINOR: tcp-act: parse 'tcp-request attach-srv' session rule
Create a new tcp-request session rule 'attach-srv'.

The parsing handler is used to extract the server targeted with the
notation 'backend/server'. The server instance is stored in the act_rule
instance under the new union variant 'attach_srv'.

Extra checks are implemented in parsing to ensure attach-srv is only
used for proxies in HTTP mode and with listeners/servers with no explicit
protocol reference or with HTTP/2 only.

The action handler itself is really simple. It assigns the stored server
instance to the 'reverse' member of the connection instance. It will be
used in a future patch to implement passive reverse-connect.
2023-08-24 15:02:32 +02:00
Amaury Denoyelle
6e428dfaf2 MINOR: backend: only allow reuse for reverse server
A reverse server relies solely on its pool of idle connections to
transfer requests; this pool will be populated through a new tcp-request
rule 'attach-srv'.

Several changes are required on connect_server() to implement this.
First, reuse mode is forced to "always" for this type of server. Then, if
no idle connection is found, the request will be aborted. This results
in a 503 HTTP error code, similarly to when no server is available.
2023-08-24 14:49:03 +02:00
Amaury Denoyelle
e6223a3188 MINOR: server: define reverse-connect server
Implement reverse-connect server. This server type cannot instantiate
its own connection on transfer. Instead, it can only reuse connection
from its idle pool. These connections will be populated using the future
'tcp-request session attach-srv' rule.

A reverse-connect server has no address. Instead, it uses a new custom server
notation with an '@' character prefix. For the moment, only '@reverse' is
defined. An extra check is implemented to ensure the server is used in an
HTTP proxy.
2023-08-24 14:49:03 +02:00
Amaury Denoyelle
4fb538d4b6 MEDIUM: h2: reverse connection after SETTINGS reception
Reverse the connection after SETTINGS reception if it was set as reversible.
This operation is done in a new function h2_conn_reverse(). It regroups
common changes which are needed for both reversal directions:
H2_CF_IS_BACK is set or unset and timeouts are inverted.

For the moment, only passive reverse is fully implemented. Once done,
the connection instance is directly inserted in its targeted server
pool. It can then be used immediately for future transfers using this
server.
2023-08-24 14:49:03 +02:00
Amaury Denoyelle
1f76b8ae07 MEDIUM: connection: implement passive reverse
Define a new method conn_reverse(). This method is used to reverse a
connection from frontend to backend or vice-versa depending on its
initial status.

For the moment, passive reverse only is implemented. This covers the
transition from frontend to backend side. The connection is detached
from its owner session which can then be freed. Then the connection is
linked to the server instance.

For now, this is only used for passive connections on the frontend side,
to transfer them to the backend side. This requires freeing the
connection's session after detaching the connection from it.
2023-08-24 14:44:33 +02:00
Amaury Denoyelle
d8d9122a02 MINOR: connection: centralize init/deinit of backend elements
A connection contains extra elements which are only used for the backend
side. Regroup their allocation and deallocation in two new functions
named conn_backend_init() and conn_backend_deinit().

No functional change is introduced with this commit. The new functions
are reused in place of manual alloc/dealloc in conn_new() / conn_free().
This patch will be useful for reverse connect support with connection
conversion from backend to frontend side and vice-versa.
2023-08-24 14:44:33 +02:00
Amaury Denoyelle
fbe35afaa4 MINOR: proxy: simplify parsing 'backend/server'
Several CLI handlers use a server argument specified with the format
'<backend>/<server>'. The parsing of this argument is done in two
steps: first splitting the string on the '/' delimiter and then using
get_backend_server() to retrieve the server instance.

Refactor these code sections with the following changes:
* splitting is reimplemented using the ist API
* get_backend_server() is removed. Instead, use the already existing
  proxy_be_by_name() then server_find_by_name(), which contained code
  duplicated from the now-removed function.

No functional change occurs with this commit. However, it will be useful
for adding new configuration options reusing the same '<backend>/<server>'
format for reverse connect.
2023-08-24 14:44:33 +02:00
Willy Tarreau
821fc95146 MINOR: pattern: do not needlessly lookup the LRU cache for empty lists
If a pattern list is empty, there's no way we can find its elements in
the pattern cache, so let's avoid this expensive lookup. This can happen
for ACLs or maps loaded from files that may optionally be empty for
example. Doing so improves the request rate by roughly 10% for a single
such match for only 8 threads. That's normal because the LRU cache
pre-creates an entry that is about to be committed for the case the list
lookup succeeds after a miss, so we bypass all this.
2023-08-22 07:27:01 +02:00
William Lallemand
3fde27d980 BUG/MINOR: quic: ssl_quic_initial_ctx() uses error count not error code
ssl_quic_initial_ctx() is supposed to use the error count and not the error
code.

Bug was introduced by 557706b3 ("MINOR: quic: Initialize TLS contexts
for QUIC openssl wrapper").

No backport needed.
2023-08-21 15:35:17 +02:00
William Lallemand
8c004153e5 BUG/MINOR: quic: allow-0rtt warning must only be emitted with quic bind
When built with USE_QUIC_OPENSSL_COMPAT, a warning is emitted when using
allow-0rtt. However, this warning is emitted for every allow-0rtt
keyword on the bind line, which is confusing; it must only be done when
the bind is a QUIC one. Also, this does not handle the case where
the allow-0rtt keyword is in the crt-list.

This patch moves the warning to ssl_quic_initial_ctx() in order to emit
the warning in all useful cases.
2023-08-21 15:33:26 +02:00
Frédéric Lécaille
2677dc1c32 MINOR: quic+openssl_compat: Emit an alert for "allow-0rtt" option
QUIC 0-RTT is not supported when haproxy is linked against a TLS stack with
limited QUIC support (OpenSSL).

Modify the "allow-0rtt" option callback to make it emit a warning if set on
a QUIC listener "bind" line.
2023-08-17 15:44:03 +02:00
Frédéric Lécaille
0e13325f23 MINOR: quic+openssl_compat: Do not start without "limited-quic"
Add a check for limited-quic in check_config_validity() when compiled
with USE_QUIC_OPENSSL_COMPAT so that we prevent a config from starting
accidentally with limited QUIC support. If a QUIC listener is found
when using the compatibility mode and limited-quic is not set, an error
message is reported explaining that the SSL library is not compatible
and suggesting that the user enable limited-quic if that's what they want,
and the startup fails.

This partially reverts commit 7c730803d ("MINOR: quic: Warning for
OpenSSL wrapper QUIC bindings without "limited-quic"") since a warning
was not sufficient.
2023-08-17 15:44:03 +02:00
Willy Tarreau
544c2f2d9e MINOR: pools: use EBO to wait for unlock during pool_flush()
pool_flush() could become a source of contention on the pool's free list
if there are many competing threads using that pool. Let's make sure we
use EBO and not just a simple CPU relaxation there, to avoid disturbing
them.
2023-08-17 09:09:20 +02:00
Aurelien DARRAGON
7eb05891d8 BUG/MINOR: stktable: allow sc-add-gpc from tcp-request connection
Following the previous commit's logic, we enable the use of sc-add-gpc
from tcp-request connection: it was probably forgotten in the first
place for sc-set-gpt0, and since sc-add-gpc was inspired from it, it
lacks this support as well.

As sc-add-gpc was implemented in 5a72d03a58 ("MINOR: stick-table: implement
the sc-add-gpc() action"), this should only be backported to 2.8
2023-08-14 09:03:49 +02:00
Aurelien DARRAGON
6c79309fda BUG/MINOR: stktable: allow sc-set-gpt(0) from tcp-request connection
Both the documentation and original developer intents seem to suggest
that sc-set-gpt/sc-set-gpt0 actions should be available from
tcp-request connection.

Yet because it was probably forgotten when expr support was added to
sc-set-gpt0 in 0d7712dff0 ("MINOR: stick-table: allow sc-set-gpt0 to
set value from an expression") it doesn't work and will report this
kind of errors:
 "internal error, unexpected rule->from=0, please report this bug!"

Fixing the code to comply with the documentation and the expected
behavior.

This must be backported to every stable version.

[for < 2.5, as only sc-set-gpt0 existed back then, the patch must be
manually applied to skip irrelevant parts]
2023-08-14 09:03:44 +02:00
Willy Tarreau
2d18717fb8 BUILD: pools: fix build error on clang with inline vs forceinline
clang is more picky than gcc regarding duplicate "inline". The functions
declared with "forceinline" don't need to have "inline" since it's already
in the macro.
2023-08-12 19:58:17 +02:00
Willy Tarreau
29eed99b50 MINOR: pools: make pool_evict_last_items() use pool_put_to_os_no_dec()
The bucket is already known, no need to calculate it again. Let's just
call the lower level functions directly.
2023-08-12 19:04:34 +02:00
Willy Tarreau
7bf829ace1 MAJOR: pools: move the shared pool's free_list over multiple buckets
This aims at further reducing the contention on the free_list when using
global pools. The free_list pointer now appears for each bucket, and both
the alloc and the release code skip to the next bucket when ending on a
contended entry. The default entry used for allocations and releases
depends on the thread ID so that locality is preserved as much as possible
under low contention.

It would be nice to improve the situation to make sure that releases to
the shared pools don't consider the first entry's pointer but only an
argument that would be passed and that would correspond to the bucket in
the thread's cache. This would reduce computations and make sure that the
shared cache only contains items whose pointers match the same bucket.
This was not yet done. One possibility could be to keep the same splitting
in the local cache.

With this change, an h2load test with 5 * 160 conns & 40 streams on 80
threads that was limited to 368k RPS with the shared cache jumped to
3.5M RPS for 8 buckets, 4M RPS for 16 buckets, 4.7M RPS for 32 buckets
and 5.5M RPS for 64 buckets.
2023-08-12 19:04:34 +02:00
Willy Tarreau
8a0b5f783b MINOR: pools: move the failed allocation counter over a few buckets
The failed allocation counter cannot depend on a pointer, but since it's
a perpetually increasing counter and not a gauge, we don't care where
it's incremented. Thus instead we're hashing on the TID. There's no
contention there anyway, but it's better not to waste the room in
the pool's heads and to move that with the other counters.
2023-08-12 19:04:34 +02:00
Willy Tarreau
da6999f839 MEDIUM: pools: move the needed_avg counter over a few buckets
That's the same principle as for ->allocated and ->used. Here we return
the sum of the raw values, so the result still needs to be fed to
swrate_avg(). It also means that we now use the local ->used instead
of the global one for the calculations and do not need to call pool_used()
anymore on fast paths. The number of samples should likely be divided by
the number of buckets, but that's not done yet (better observe first).

A function pool_needed_avg() was added to report aggregated values for
the "show pools" command.

With this change, an h2load made of 5 * 160 conn * 40 streams on 80
threads raised from 1.5M RPS to 6.7M RPS.
2023-08-12 19:04:34 +02:00
Willy Tarreau
9e5eb586b1 MEDIUM: pools: move the used counter over a few buckets
That's the same principle as for ->allocated. The small difference here
is that it's no longer possible to decrement ->used in batches when
releasing clusters from the cache to the shared cache, so the counter
has to be decremented for each of them. But as it provides less
contention and it's done only during forced eviction, it shouldn't be
a problem.

A function "pool_used()" was added to return the sum of the entries.
It's used by pool_alloc_nocache() and pool_free_nocache() which need
to count the number of used entries. It's not a problem since such
operations are done when picking/releasing objects to/from the OS,
but it is a reminder that the number of buckets should remain small.

With this change, an h2load test made of 5 * 160 conn * 40 streams on
80 threads raised from 812k RPS to 1.5M RPS.
2023-08-12 19:04:34 +02:00
Willy Tarreau
cdb711e42b MEDIUM: pools: spread the allocated counter over a few buckets
The ->used counter is one of the most stressed, and it heavily
depends on the ->allocated one, so let's first move ->allocated
to a few buckets.

A function "pool_allocated()" was added to return the sum of the entries.
It's important not to abuse it as it does iterate, so wherever it's
possible to avoid it by keeping a local counter, it's better to do so. Currently
it's used for limited pools which need to make sure they do not allocate
too many objects. That's an acceptable tradeoff to save CPU on large
machines at the expense of spending a little bit more on small ones which
normally are not under load.
2023-08-12 19:04:34 +02:00
Willy Tarreau
06885aaea7 MINOR: pools: introduce the use of multiple buckets
On many threads and without the shared cache, there can be extreme
contention on the ->allocated counter, the ->free_list pointer, and
the ->used counter. It's possible to limit this contention by spreading
the counters a little bit over multiple entries, that are summed up when
a consultation is needed. The criterion used to spread the values cannot
be related to the thread ID due to migrations, since we need to keep
consistent stats (allocated vs used).

Instead we'll just hash the pointer, it provides an index that does the
job and that is consistent for the object. When having just a few entries
(16 here as it showed almost identical performance between global and
non-global pools) even iterations should be short enough during
measurements to not be a problem.

A pair of functions designed to ease pointer hash bucket calculation were
added, with one of them doing it for thread IDs because allocation failures
will be associated with a thread and not a pointer.

For now this patch only brings in the relevant parts of the infrastructure,
the CONFIG_HAP_POOL_BUCKETS_BITS macro that defaults to 6 bits when 512
threads or more are supported, 5 bits when 128 or more are supported, 4
bits when 16 or more are supported, otherwise 3 bits for small setups.
The array in the pool_head and the two utility functions are already
added. It should have no measurable impact beyond inflating the pool_head
structure.
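
A generic, self-contained illustration of the bucketing scheme (names and
constants are simplified, this is not the actual pool code):

  #include <stdatomic.h>
  #include <stdint.h>

  #define POOL_BUCKETS 16

  struct pool_head_sketch {
      _Atomic uint64_t allocated[POOL_BUCKETS];
      _Atomic uint64_t used[POOL_BUCKETS];
  };

  /* hash a pointer to a bucket index: consistent for a given object and
   * independent of the thread, so allocated/used stay coherent */
  static inline unsigned int ptr_to_bucket(const void *ptr)
  {
      uint64_t p = (uint64_t)(uintptr_t)ptr;

      p ^= p >> 33;
      p *= 0xff51afd7ed558ccdULL;      /* murmur-style finalizer */
      p ^= p >> 33;
      return (unsigned int)(p % POOL_BUCKETS);
  }

  static inline void note_alloc(struct pool_head_sketch *ph, const void *obj)
  {
      atomic_fetch_add_explicit(&ph->allocated[ptr_to_bucket(obj)], 1,
                                memory_order_relaxed);
  }

  /* consultation: iterate and sum, hence kept short and used sparingly */
  static uint64_t pool_allocated_sketch(struct pool_head_sketch *ph)
  {
      uint64_t sum = 0;
      for (unsigned int i = 0; i < POOL_BUCKETS; i++)
          sum += atomic_load_explicit(&ph->allocated[i], memory_order_relaxed);
      return sum;
  }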
2023-08-12 19:04:34 +02:00
Willy Tarreau
29ad61fb00 OPTIM: pools: make pool_get_from_os() / pool_put_to_os() not update ->allocated
The pool's allocation counter doesn't strictly require to be updated
from these functions, it may more efficiently be done in the caller
(even out of a loop for pool_flush() and pool_gc()), and doing so will
also help us spread the counters over an array later. The functions
were renamed _noinc and _nodec to make sure we catch any possible
user in an external patch. If needed, the original functions may easily
be reimplemented in an inline function.
2023-08-12 19:04:34 +02:00
Willy Tarreau
feeda4132b OPTIM: pools: use exponential back-off on shared pool allocation/release
Running a stick-table stress with -dMglobal under 56 threads shows
extreme contention on the pool's free_list because it has to be
processed in two phases, with only a cpu_relax() used on the retry path.

Let's at least implement exponential back-off here to limit the neighbor's
noise and reduce the time needed to successfully acquire the pointer. Just
doing so shows there's still contention but almost doubled the performance,
from 1.1 to 2.1M req/s.
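
A generic, self-contained illustration of the exponential back-off idea
(plain C11, not the actual HAProxy helpers):

  #include <stdatomic.h>

  /* Portable stand-in for a CPU "pause"/relax hint. */
  static inline void cpu_relax_sketch(void)
  {
  #if defined(__x86_64__) || defined(__i386__)
      __builtin_ia32_pause();
  #else
      atomic_signal_fence(memory_order_seq_cst); /* at least a compiler barrier */
  #endif
  }

  /* On each failed attempt, spin twice as long before retrying, up to a cap,
   * instead of hammering the contended cache line in a tight loop. */
  static void *grab_busy_ptr(_Atomic(void *) *slot, void *busy_marker)
  {
      unsigned int backoff = 1;
      void *cur;

      for (;;) {
          cur = atomic_exchange_explicit(slot, busy_marker, memory_order_acquire);
          if (cur != busy_marker)
              return cur;                 /* acquired: caller must restore it */

          for (unsigned int i = 0; i < backoff; i++)
              cpu_relax_sketch();
          if (backoff < 65536)
              backoff <<= 1;              /* exponential back-off */
      }
  }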
2023-08-12 19:04:34 +02:00
Willy Tarreau
45eeaad45f MEDIUM: peers: drop the stick-table lock before entering peer_send_teachmsgs()
The function drops the lock very early, and the only operations that
are performed in the entry code are updates to the current peer's
last_local_table, which doesn't need to be protected. Thus it's
easier to drop the lock before entering the function and it further
limits its scope.

This has raised the peak RPS from 2050k to 2355k/s with a peers section on
the 80-core machine.
2023-08-11 19:03:35 +02:00
Willy Tarreau
cfeca3a3a3 MEDIUM: stick-table: touch updates under an upgradable read lock
Instead of taking the update's write lock in stktable_touch_with_exp(),
while most of the time under high load there is nothing to update because
the entry is touched again before having been synchronized, let's do
the check under a read lock and upgrade it to perform the update if
needed. These updates are rare and the contention is not expected to be
very high, so at the first failure to upgrade we retry directly with a
write lock.

By doing so the performance has almost doubled again, from 1140k to 2050k/s
with a peers section enabled. The contention is now on taking the read
lock itself, so there's little to be gained beyond this in this function.
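
A generic illustration of the pattern with a plain pthread rwlock (HAProxy
uses its own upgradable lock API, so this only shows the equivalent
drop-and-relock shape):

  #include <pthread.h>
  #include <stdbool.h>

  struct table_sketch {
      pthread_rwlock_t updt_lock;
  };

  static void touch_entry(struct table_sketch *t, bool (*needs_update)(void *),
                          void (*do_update)(void *), void *entry)
  {
      pthread_rwlock_rdlock(&t->updt_lock);
      bool update = needs_update(entry);
      pthread_rwlock_unlock(&t->updt_lock);

      if (!update)
          return;                      /* the common case under high load */

      /* rare case: redo the check under the write lock before updating,
       * since the state may have changed while we were unlocked */
      pthread_rwlock_wrlock(&t->updt_lock);
      if (needs_update(entry))
          do_update(entry);
      pthread_rwlock_unlock(&t->updt_lock);
  }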
2023-08-11 19:03:35 +02:00
Willy Tarreau
87e072eea5 MEDIUM: stick-table: use a distinct lock for the updates tree
Updating an entry in the updates tree is currently performed under the
table's write lock, which causes huge contention with other accesses
such as lookups and frees. Aside from the updates tree and the update,
localupdate and commitupdate variables, nothing else is manipulated, so
let's create a distinct lock (updt_lock) to protect these together
and remove this contention. This required adding an extra lock in the
few places where we delete the update (though only if we're really
going to delete it) to protect the tree. This is very convenient
because now peer_send_teachmsgs() only needs to take this read lock,
and there is very little contention left on the stick-table.

With this alone, the performance jumped from 614k to 1140k/s on a
80-thread machine with a peers section! Stick-table updates with
no peers however now have to stand two locks and slightly regressed
from 4.0-4.1M/s to 3.9-4.0M/s. This is fairly minimal compared to the
significant unlocking of the peers updates and considered totally
acceptable.
2023-08-11 19:03:35 +02:00
Willy Tarreau
29982ea769 MEDIUM: peers: only read-lock peer_send_teachmsgs()
This function doesn't need to be write-locked. It performs a lookup
of the next update at its index, atomically updates the ref_cnt on
the stksess, updates some shared_table fields on the local thread,
and updates the table's commitupdate. Now that this update is atomic
we don't need to keep the write lock during that period. In addition
this function's callers do not rely on the write lock to be held
either since it was dropped during peer_send_updatemsg() anyway.

Now, when the function is entered with a write lock, it's downgraded
to a read lock, otherwise a read lock is grabbed. Updates are looked
up under the read lock and the message is sent without the lock. The
commitupdate is still performed under the read lock (so as not to
break the code too much), and the write lock is re-acquired when
leaving if needed. This allows multiple peers to look up updates in
parallel and to avoid stalling stick-table lookups.
2023-08-11 19:03:35 +02:00
Willy Tarreau
d4f8286e45 MEDIUM: peers: drop then re-acquire the wrlock in peer_send_teachmsgs()
This function maintains the write lock for a while. In practice it does
not need to hold it that long, and some parts could be performed under a
read lock. This patch first drops then re-acquires the write lock at the
function's entry. The purpose is simply to break the end-to-end atomicity
to prove that it has no impact in case something needs to be bisected
later. In fact the write lock is already dropped while calling
peer_send_updatemsg().
2023-08-11 19:03:35 +02:00
Willy Tarreau
4eddf26f58 MEDIUM: peers: update ->commitupdate out of the lock using a CAS
The ->commitupdate index doesn't need to be kept consistent with other
operations, it only needs to be correct and to reflect the last known
value. Right now it's updated under the stick-table lock, which is
expensive and maintains this lock longer than needed. Let's move it
outside of the lock, and update it using a CAS. This patch simply
replaces the assignment with a CAS and makes sure all reads are atomic.
On failed CAS we use a simple cpu_relax(), no need for more as there
should not be that much contention here (updates are not that fast).
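
A generic illustration of the CAS update (not the exact stick-table code,
which also has to deal with index wrapping):

  #include <stdatomic.h>

  /* Move <commitupdate> forward with a CAS loop instead of assigning it
   * under the table's write lock. */
  static void bump_commitupdate(_Atomic unsigned int *commitupdate,
                                unsigned int new_val)
  {
      unsigned int old = atomic_load_explicit(commitupdate, memory_order_relaxed);

      while (old < new_val &&        /* only move forward */
             !atomic_compare_exchange_weak_explicit(commitupdate, &old, new_val,
                                                    memory_order_release,
                                                    memory_order_relaxed)) {
          /* <old> was refreshed by the failed CAS: just retry (a real
           * implementation would insert a cpu_relax() here) */
      }
  }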
2023-08-11 19:03:35 +02:00
Willy Tarreau
7968fe3889 MEDIUM: stick-table: change the ref_cnt atomically
Due to the ts->ref_cnt being manipulated and checked inside wrlocks,
we continue to have it updated under plenty of read locks, which have
an important cost on many-thread machines.

This patch turns them all to atomic ops and carefully moves them outside
of locks every time this is possible:
  - the ref_cnt is incremented before write-unlocking on creation otherwise
    the element could vanish before we can do it
  - the ref_cnt is decremented after write-locking on release
  - for all other cases it's updated out of locks since it's guaranteed by
    the sequence that it cannot vanish
  - checks are done before locking every time it's used to decide
    whether we're going to release the element (saves several write locks)
  - expiration tests are just done using atomic loads, since there's no
    particular ordering constraint there, we just want consistent values.

For Lua, the loop that is used to dump stick-tables could switch to read
locks only, but this was not done.

For peers, the loop that builds updates in peer_send_teachmsgs is extremely
expensive in write locks and it doesn't seem this is really needed since
the only updated variables are last_pushed and commitupdate, the first
one being on the shared table (thus not used by other threads) and the
commitupdate could likely be changed using a CAS. Thus all of this could
theoretically move under a read lock, but that was not done here.

On a 80-thread machine with a peers section enabled, the request rate
increased from 415 to 520k rps.
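
A generic illustration of the refcount pattern with C11 atomics (not
HAProxy's own atomic wrappers):

  #include <stdatomic.h>

  /* The reference count is manipulated with atomic ops so it no longer
   * needs to be covered by the table lock. */
  struct entry_sketch {
      _Atomic unsigned int ref_cnt;
  };

  static inline void entry_take(struct entry_sketch *e)
  {
      atomic_fetch_add_explicit(&e->ref_cnt, 1, memory_order_relaxed);
  }

  /* returns non-zero when the caller dropped the last reference */
  static inline int entry_drop(struct entry_sketch *e)
  {
      return atomic_fetch_sub_explicit(&e->ref_cnt, 1, memory_order_acq_rel) == 1;
  }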
2023-08-11 19:03:35 +02:00
Willy Tarreau
73b1dea4d1 MINOR: stick-table: move the task_wakeup() call outside of the lock
The write lock in stktable_touch_with_exp() is quite expensive and should
be shortened as much as possible. There's no need for it when calling
task_wakeup() so let's move it out.

On a 80-thread machine with a peers section, the request rate increased
from 397k to 415k rps.
2023-08-11 19:03:35 +02:00
Willy Tarreau
322e4ab9d2 MINOR: stick-table: move the task_queue() call outside of the lock
The write lock in stktable_requeue_exp() is quite expensive and should
be shortened as much as possible. There's no need for it when calling
task_queue() so let's move it out.

On a 80-thread machine with a peers section, the request rate increased
from 368k to 397k rps.
2023-08-11 19:03:35 +02:00
Aurelien DARRAGON
09133860bf BUG/MEDIUM: hlua: streams don't support mixing lua-load with lua-load-per-thread
Michel Mayen reported that mixing lua actions loaded from 'lua-load'
and 'lua-load-per-thread' directives within a single http/tcp session
yields unexpected results.

When executing action defined in another running context from the one of
the previously executed action (from lua-load, then from
lua-load-per-thread or the opposite, order doesn't matter), it would yield
this kind of error:

"Lua function 'name': [state-id x] runtime error: attempt to call a nil value from ."

He also noted that when loading all actions using the same loading
directive, the issue is gone.

This is due to the fact that for lua actions, fetches and converters, lua
code is being executed from the stream lua context. However, the stream
lua context, which is created on the fly when first executing some lua
code related to the stream, is reused between multiple lua executions.

But the thing is, despite successive executions referring to the same
parent "stream" (which is also assigned to a given thread id), they don't
necessarily depend on the same running context from lua point of view.

Indeed, since the function which is about to be executed could have been
loaded from either 'lua-load' or 'lua-load-per-thread', the function
declaration and related dependencies are defined in a specific stack ID
which is known by calling fcn_ref_to_stack_id() on the given function.

Thus, in order to make streams capable of chaining lua actions, fetches and
converters loaded in different lua stacks, we add a new detection logic
in hlua_stream_ctx_prepare() to be able to recreate the lua context in the
proper stack space when the existing one conflicts with the expected stack
id.

This must be backported to every stable version.

It depends on:
 - "MINOR: hlua: add hlua_stream_prepare helper function"
   [for < 2.5, skip the filter part since they didn't exist]

[wt: warning, wait a little bit before backporting too far, we
 need to be certain the added BUG_ON() will never trigger]
2023-08-11 19:02:59 +02:00