7859 Commits

Author SHA1 Message Date
Aurelien DARRAGON
f751a97a11 MINOR: event_hdl: pause/resume for subscriptions
While working on event handling from lua, the need for a pause/resume
function to temporarily disable a subscription was raised.

We solve this by introducing the EHDL_SUB_F_PAUSED flag for
subscriptions.

The flag is set via _pause() and cleared via _resume(), and it is
checked prior to notifying the subscription in publish function.

Pause and Resume functions are also available via lookups for
identified subscriptions.
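
As an illustration, here is a minimal hedged sketch of how a subscriber could
use this; the function names for the pause/resume calls are assumptions
derived from the _pause()/_resume() naming above:

    /* sketch only: temporarily mute a subscription around some maintenance
     * work; event_hdl_pause()/event_hdl_resume() and do_maintenance() are
     * assumed/illustrative names, 'sub' is a previously obtained
     * subscription pointer */
    event_hdl_pause(sub);     /* publish now skips this subscription */
    do_maintenance();         /* hypothetical work done while paused */
    event_hdl_resume(sub);    /* notifications are delivered again */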

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
b4b7320a6a MINOR: event_hdl: add event_hdl_async_equeue_size() function
Use event_hdl_async_equeue_size() in an advanced async task handler to
get the near real-time event queue size.

By near real-time, you should understand that the queue size is not
updated during element insertion/removal, but shortly before insertion
and shortly after removal, so the size should reflect the approximate
queue size at a given time but should definitely not be used as a
unique source of truth.
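
For illustration, a hedged sketch of how an advanced async task handler might
use it (the 'equeue' pointer and the threshold are illustrative, as is the way
the queue is reached from the handler's context):

    /* sketch only: the returned size is an approximation, as explained
     * above, so it is only used as a hint here */
    if (event_hdl_async_equeue_size(equeue) > 1000) {
        /* e.g. log a warning or batch-process pending events */
    }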

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
9e98a27d6a MINOR: event_hdl: add event_hdl_async_equeue_isempty() function
Add event_hdl_async_equeue_isempty() to check if the event queue is
empty from an advanced async task handler.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
b289fd1420 MINOR: event_hdl: normal tasks support for advanced async mode
The advanced async mode (EVENT_HDL_ASYNC_TASK) provided full support for
custom tasklet registration.

Due to the similarities between tasks and tasklets, it may be useful
to use the advanced mode with an existing task (not a tasklet).
While the API did not explicitly disallow this usage, things would
go wrong if we tried to wake up a task using tasklet_wakeup() to notify
the task about new events.

To make the API support both custom tasks and tasklets, we use the
TASK_IS_TASKLET() macro to call the proper waking function depending
on the task's type:

  - For tasklets: we use tasklet_wakeup()
  - For tasks: we use task_wakeup()
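
The dispatch boils down to something like the following sketch ('task' is the
consumer registered by the subscriber; the wake-up reason is only chosen for
the example):

    /* sketch only: wake the subscriber depending on its type */
    if (TASK_IS_TASKLET(task))
        tasklet_wakeup((struct tasklet *)task);
    else
        task_wakeup(task, TASK_WOKEN_OTHER);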

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
ef6ca67176 BUG/MEDIUM: event_hdl: clean soft-stop handling
Soft-stop was not explicitly handled in the event_hdl API.

Because of this, event_hdl was causing some leaks on deinit paths.
Moreover, a task responsible for handling events could require some
additional cleanups (ie: advanced async task), and as the task was not
protected against abort when soft-stopping, such cleanup could not be
performed unless the task itself implements the required protections,
which is not optimal.

Consider this new approach:
 The 'jobs' global variable is incremented whenever an async subscription is
 created to prevent the related task from being aborted before the task
 acknowledges the final END event.

 Once the END event is acknowledged and freed by the task, the 'jobs'
 variable is decremented, and the deinit process may continue (including
 the aborting of remaining tasks not guarded by the 'jobs' variable).
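
A hedged sketch of that guard (call sites simplified; 'jobs' and the
HA_ATOMIC_*() macros are the existing ones, everything else is illustrative):

    /* sketch only: take a job slot when the async subscription is created
     * so that soft-stop waits for the consumer task */
    HA_ATOMIC_INC(&jobs);

    /* sketch only: release it once the task has acknowledged and freed
     * the final END event, allowing deinit to proceed */
    HA_ATOMIC_DEC(&jobs);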

To do this, a new global mt_list is required: known_event_hdl_sub_list.
This list tracks the known (initialized) subscription lists within the
process.

sub_lists are automatically added to the "known" list when calling
event_hdl_sub_list_init(), and are removed from the list with
event_hdl_sub_list_destroy().

This allows us to implement a global thread-safe event_hdl deinit()
function that is automatically called on soft-stop thanks to signal(0).
When event_hdl deinit() is initiated, we simply iterate over the known
subscription lists to destroy them.

event_hdl_subscribe_ptr() was slightly modified to make sure that a sub_list
may not accept new subscriptions once it is destroyed (removed from the
known list).
This can occur between the time the soft-stop is initiated (signal(0)) and
the time haproxy actually enters the deinit() function (once tasks are either
finished or aborted and other threads have already joined).

It is safe to destroy() the subscription list multiple times as long
as the pointer is still valid (ie: first on soft-stop when handling
the '0' signal, then from regular deinit() path): the function does
nothing if the subscription list is already removed.

We partially reverted "BUG/MINOR: event_hdl: make event_hdl_subscribe thread-safe"
since we can use the parent mt_list locking instead of a dedicated lock to make
the check against duplicate subscription IDs.
(insert_lock is not useful anymore)

The check in itself is not changed, only the locking method.

sizeof(event_hdl_sub_list) slightly increases: from 24 bytes to 32 bytes due
to the additional mt_list struct within it.

With that said, having a thread-safe list to store known subscription lists
is a good thing: it could help to implement additional management
logic for subscription lists and could be useful to add some stats or
debugging tools in the future.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
3a81e997ac MINOR: event_hdl: global sublist management clarification
event_hdl_sub_list_init() and event_hdl_sub_list_destroy() don't expect
to be called with a NULL argument (to use the global subscription list
implicitly), simply because the global subscription list init and
destroy are internally managed.

Adding BUG_ON() to detect such invalid usages, and updating some comments
to prevent confusion around these functions.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
d514ca45c6 BUG/MINOR: event_hdl: make event_hdl_subscribe thread-safe
List insertion in event_hdl_subscribe() was not thread-safe when dealing
with unique identifiers. Indeed, in this case the list insertion is
conditional (we check for a duplicate, then we insert). And while we're
using mt lists for this, the whole operation is not atomic: there is a
race between the check and the insertion.
This could lead to the same ID being registered multiple times with
concurrent calls to event_hdl_subscribe() on the same ID.

To fix this, we add a dedicated 'insert_lock' lock in the subscription
list struct. The lock's cost is nearly 0 since it is only used when
registering identified subscriptions and the lock window is very short:
we only guard the duplicate check and the list insertion to make the
conditional insertion "atomic" within a given subscription list.
This is the only place where we need the lock: as soon as the item is
properly inserted we're out of trouble because all other operations on
the list are already thread-safe thanks to mt lists.
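
A hedged sketch of the resulting pattern (struct field names and the
duplicate-lookup helper are illustrative; LOCK_EHDL is the lock hint
mentioned below):

    /* sketch only: duplicate check and insertion both happen under
     * insert_lock so no other thread can insert the same ID in between */
    HA_SPIN_LOCK(LOCK_EHDL, &sub_list->insert_lock);
    if (!lookup_id_sketch(sub_list, id))              /* illustrative helper */
        MT_LIST_APPEND(&sub_list->head, &new_sub->mt_list);
    HA_SPIN_UNLOCK(LOCK_EHDL, &sub_list->insert_lock);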

A new lock hint is introduced: LOCK_EHDL, which is dedicated to event_hdl.

The patch may seem quite large since we had to rework the logic around
the subscribe function and switch from a simple mt_list to a dedicated
struct wrapping both the mt_list and the insert_lock for the
event_hdl_sub_list type.
(sizeof(event_hdl_sub_list) is now 24 bytes instead of 16)

However, all the changes are internal: we don't break the API.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
53eb6aecce BUG/MINOR: event_hdl: fix rid storage type
rid is stored as a uint32_t within struct server, but it was stored as
a signed int within the server event data struct.

Switching from signed int to uint32_t in event_hdl_cb_data_server struct
to make sure it won't overflow.

If 129ecf441 ("MINOR: server/event_hdl: add support for SERVER_ADD and SERVER_DEL events")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Thierry Fournier
1edf36a369 MEDIUM: hlua_fcn: dynamic server iteration and indexing
This patch proposes to enumerate servers using the internal HAProxy list.
Also, remove the SRV_F_NON_PURGEABLE flag which made the server non
purgeable each time Lua used the server.

Removing reg-tests/cli_delete_server_lua.vtc since this test is no
longer relevant (we don't set the SRV_F_NON_PURGEABLE flag anymore)
and we already have a more generic test:
  reg-tests/server/cli_delete_server.vtc

Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
2023-04-05 08:58:16 +02:00
Thierry Fournier
467913c84e MEDIUM: hlua: Dynamic list of frontend/backend in Lua
When HAProxy is loaded with a lot of frontends/backends (tested with 300k),
it is slow to start and it uses a lot of memory just for indexing backends
in the Lua tables.

This patch uses the internal frontend/backend index of HAProxy in place of
the Lua tables.

HAProxy startup is now quicker as each frontend/backend object is created
on demand and not at init.
This comes with some cost: the execution of Lua will be a little bit
slower.
2023-04-05 08:58:16 +02:00
Thierry Fournier
599f2311a8 MINOR: hlua: Fix two functions that return nothing useful
Two Lua init functions seem to return something useful, but this
is not the case. The function "hlua_concat_init" seems to return
a failure status, but the function never fails. The function
"hlua_fcn_reg_core_fcn" seems to return the number of elements pushed on
the stack, but it is not the case.
2023-04-05 08:58:16 +02:00
Aurelien DARRAGON
f175b08bfb BUG/MINOR: server/del: fix srv->next pointer consistency
We recently discovered a bug which affects dynamic server deletion:

When a server is deleted, it is removed from the "visible" server list.
But as we've seen in previous commit
("MINOR: server: add SRV_F_DELETED flag"), it can still be accessed by
someone who keeps a reference on it (waiting for the final srv_drop()).
Throughout this transient state, the server ptr is still valid (may be
dereferenced) and the SRV_F_DELETED flag is set.

However, as the server is no longer part of the server list, we have
an issue: the srv->next pointer won't be updated anymore, as the only place
where we perform such an update is in cli_parse_delete_server() by
iterating over the "visible" server list.

Because of this, we cannot guarantee that a server with the
SRV_F_DELETED flag has a valid 'next' ptr: 'next' could be pointing
to a fully removed (already freed) server.

This problem can be easily demonstrated with server dumping in
the stats:

server list dumping is performed in stats_dump_proxy_to_buffer().
The function can be interrupted and resumed later by design,
ie: when the output buffer is full, a partial dump is done and the dump
finishes after the flush.

This is implemented by calling srv_take() on the server being dumped,
and only releasing it when we're done with it using srv_drop().
(drop can be delayed after function resume if buffer is full)

While the function design seems OK, it works with the assumption that
srv->next will still be valid after the function resumes, which is
not true. (especially if multiple servers are being removed in between
the 2 dumping attempts)

In practice, this did not cause any crash yet (at least this was not
reported so far), because server dumping is so fast that it is very
unlikely that multiple server deletions make their way between 2
dumping attempts in most setups. But still, this is a problem that we
need to address because some upcoming work might depend on this
assumption as well and for the moment it is not safe at all.

========================================================================

Here is a quick reproducer:

With this patch, we're creating a large deletion window of 3s as soon
as we reach a server named "t2" while iterating over the list.

This will give us plenty of time to perform multiple deletions before
the function is resumed.

    |  diff --git a/src/stats.c b/src/stats.c
    |  index 84a4f9b6e..15e49b4cd 100644
    |  --- a/src/stats.c
    |  +++ b/src/stats.c
    |  @@ -3189,11 +3189,24 @@ int stats_dump_proxy_to_buffer(struct stconn *sc, struct htx *htx,
    |                    * Temporarily increment its refcount to prevent its
    |                    * anticipated cleaning. Call free_server to release it.
    |                    */
    |  +                struct server *orig = ctx->obj2;
    |                   for (; ctx->obj2 != NULL;
    |                          ctx->obj2 = srv_drop(sv)) {
    |
    |                           sv = ctx->obj2;
    |  +                        printf("sv = %s\n", sv->id);
    |                           srv_take(sv);
    |  +                        if (!strcmp("t2", sv->id) && orig == px->srv) {
    |  +                                printf("deletion window: 3s\n");
    |  +                                thread_idle_now();
    |  +                                thread_harmless_now();
    |  +                                sleep(3);
    |  +                                thread_harmless_end();
    |  +
    |  +                                thread_idle_end();
    |  +
    |  +                                goto full; /* simulate full buffer */
    |  +                        }
    |
    |                           if (htx) {
    |                                   if (htx_almost_full(htx))
    |  @@ -4353,6 +4366,7 @@ static void http_stats_io_handler(struct appctx *appctx)
    |           struct channel *res = sc_ic(sc);
    |           struct htx *req_htx, *res_htx;
    |
    |  +        printf("http dump\n");
    |           /* only proxy stats are available via http */
    |           ctx->domain = STATS_DOMAIN_PROXY;
    |

Ok, we're ready, now we start haproxy with the following conf:

       global
          stats socket /tmp/ha.sock  mode 660  level admin  expose-fd listeners thread 1-1
          nbthread 2

       frontend stats
           mode http
           bind *:8081 thread 2-2
           stats enable
           stats uri /

       backend farm
               server t1 127.0.0.1:1899  disabled
               server t2 127.0.0.1:18999 disabled
               server t3 127.0.0.1:18998 disabled
               server t4 127.0.0.1:18997 disabled

And finally, we execute the following script:

       curl localhost:8081/stats&
       sleep .2
       echo "del server farm/t2" | nc -U /tmp/ha.sock
       echo "del server farm/t3" | nc -U /tmp/ha.sock

This should be enough to reveal the issue; I easily managed to
consistently crash haproxy, getting the following output:

    http dump
    sv = t1
    http dump
    sv = t1
    sv = t2
    deletion window = 3s
    [NOTICE]   (2940566) : Server deleted.
    [NOTICE]   (2940566) : Server deleted.
    http dump
    sv = t2
    sv = �����U
    [1]    2940566 segmentation fault (core dumped)  ./haproxy -f ttt.conf

========================================================================

To fix this, we add a prev_deleted mt_list in the server struct.
For a given "visible" server, this list will contain the pending
"deleted" server references that point to it using their 'next' ptr.

This way, whenever this "visible" server is going to be deleted via
cli_parse_delete_server() it will check for servers in its
'prev_deleted' list and update their 'next' pointer so that they no
longer point to it, and then it will push them in its
'next->prev_deleted' list to transfer the update responsibility to the
next 'visible' server (if next != NULL).

Then, following the same logic, the server about to be removed in
cli_parse_delete_server() will push itself as well into its
'next->prev_deleted' list (if next != NULL) so that it may still use its
'next' ptr for the time it is in transient removal state.

In srv_drop(), right before the server is finally freed, we make sure
to remove it from the 'next->prev_deleted' list so that 'next' won't
try to perform the pointers update for this server anymore.
This has to be done atomically to prevent 'next' srv from accessing a
purged server.

As a result:
  for a valid server, either deleted or not, 'next' ptr will always
  point to a non deleted (ie: visible) server.
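
For illustration only, here is a simplified, non-thread-safe sketch of the
handover performed at deletion time (the real code uses mt_list under the
proper locking; the fields and helper below are reduced/illustrative):

    /* sketch only: plain-list version of the 'prev_deleted' handover */
    struct srv_sketch {
        struct srv_sketch *next;       /* next visible server, or NULL */
        struct list prev_deleted;      /* deleted servers whose ->next points here */
        struct list prev_deleted_node; /* our node in next->prev_deleted */
    };

    static void delete_server_sketch(struct srv_sketch *srv)
    {
        struct srv_sketch *prev, *back;

        /* hand over pending deleted servers to our successor */
        list_for_each_entry_safe(prev, back, &srv->prev_deleted, prev_deleted_node) {
            prev->next = srv->next;
            LIST_DELETE(&prev->prev_deleted_node);
            if (srv->next)
                LIST_APPEND(&srv->next->prev_deleted, &prev->prev_deleted_node);
        }
        /* register ourselves so that our own 'next' stays consistent too;
         * srv_drop() later removes us from next->prev_deleted before freeing */
        if (srv->next)
            LIST_APPEND(&srv->next->prev_deleted, &srv->prev_deleted_node);
    }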

With the proposed fix, and several removal combinations (including
unordered cli_parse_delete_server() and srv_drop() calls), I cannot
reproduce the crash anymore.

Example tricky removal sequence that is now properly handled:

sv list: t1,t2,t3,t4,t5,t6

ops:
   take(t2)
   del(t4)
   del(t3)
   del(t5)
   drop(t3)
   drop(t4)
   drop(t5)
   drop(t2)
2023-04-05 08:58:16 +02:00
Aurelien DARRAGON
75b9d1c041 MINOR: server: add SRV_F_DELETED flag
Set the SRV_F_DELETED flag when a server is removed from the cli.

When removing a server from the cli (in cli_parse_delete_server()),
we update the "visible" server list so that the removed server is no
longer part of the list.

However, despite the server being removed from "visible" server list,
one could still access the server data from a valid ptr (ie: srv_take())

Deleted flag helps detecting when a server is in transient removal
state: that is, removed from the list, thus not visible but not yet
purged from memory.
2023-04-05 08:58:16 +02:00
Christopher Faulet
7faac7cf34 MINOR: tree-wide: Simplifiy some tests on SHUT flags by accessing SCs directly
At many places, we simplify the tests on SHUT flags to remove calls to
the chn_prod() or chn_cons() functions because the corresponding SC is available.
2023-04-05 08:57:06 +02:00
Christopher Faulet
87633c3a11 MEDIUM: tree-wide: Move flags about shut from the channel to the SC
The purpose of this patch is only a one-to-one replacement, as far as
possible.

CF_SHUTR(_NOW) and CF_SHUTW(_NOW) flags are now carried by the
stream-connector. The CF_ prefix is replaced by the SC_FL_ one. Of course, it is not
so simple because at many places, we were testing if a channel was shut for
reads and writes at the same time. To do the same, shut for reads must be tested
on one side on the SC and shut for writes on the other side on the opposite
SC. Special care was taken with process_stream(). Flags of SCs must be
saved to be able to detect changes, just like for the channels.
2023-04-05 08:57:06 +02:00
Christopher Faulet
904763f562 MINOR: stconn/channel: Move CF_EOI into the SC and rename it
The channel flag CF_EOI is renamed to SC_FL_EOI and moved into the
stream-connector.
2023-04-05 08:57:06 +02:00
Christopher Faulet
84d3ef982c MINOR: stconn/channel: Move CF_EXPECT_MORE into the SC and rename it
The channel flag CF_EXPECT_MORE is renamed to SC_FL_SND_EXP_MORE and moved
into the stream-connector.
2023-04-05 08:57:05 +02:00
Christopher Faulet
68ef218a72 MINOR: stconn/channel: Move CF_NEVER_WAIT into the SC and rename it
The channel flag CF_NEVER_WAIT is renamed to SC_FL_SND_NEVERWAIT and moved
into the stream-connector.
2023-04-05 08:57:05 +02:00
Christopher Faulet
5c281d58ea MINOR: stconn/channel: Move CF_SEND_DONTWAIT into the SC and rename it
The channel flag CF_SEND_DONTWAIT is renamed to SC_FL_SND_ASAP and moved
into the stream-connector.
2023-04-05 08:57:05 +02:00
Christopher Faulet
9a790f63ed MINOR: stconn/channel: Move CF_READ_DONTWAIT into the SC and rename it
The channel flag CF_READ_DONTWAIT is renamed to SC_FL_RCV_ONCE and moved
into the stream-connector.
2023-04-05 08:57:05 +02:00
Christopher Faulet
26e0935681 MEDIUM: applet/trace: Register a new trace source with its events
Traces are now supported for applets. The first argument is always the
appctx. This will help to debug applets.
2023-04-05 08:46:06 +02:00
Christopher Faulet
a5915eb1dd MINOR: applet: Uninline appctx_free()
This function is uninlined and moved to src/applet.c. This is mandatory to add
traces for applets.
2023-04-05 08:46:06 +02:00
Remi Tricot-Le Breton
26e1432436 BUG/MINOR: ssl: Undefined reference when building with OPENSSL_NO_DEPRECATED
If OPENSSL_NO_DEPRECATED is set, we get an 'error: ‘RSA_PKCS1_PADDING’
undeclared' when building jwt.c. The symbol is not deprecated, we are
just missing an include.

This was raised in GitHub issue #2098.
It does not need to be backported.
2023-04-03 11:46:54 +02:00
Frédéric Lécaille
7d6270a845 BUG/MAJOR: quic: Congestion algorithms states shared between the connection
This very old bug has been there since the first implementation of the newreno
congestion algorithm. It was a very bad idea to put a state variable
into the quic_cc_algo struct which only defines the congestion control algorithm used
by a QUIC listener, typically its type and its callbacks.
This bug could lead to crashes since BUG_ON() calls have been added to each algorithm
implementation. This was revealed by interop tests, but not very often, as several
connections rarely ran at the same time during these tests.

Fortunately this was also reported by Tristan in GH #2095.

Move the congestion algorithm state to the correct structures which are private
to a connection (see cubic and nr structs).

Must be backported to 2.7 and 2.6.
2023-04-02 13:10:13 +02:00
Ilya Shipitsin
07be66d21b CLEANUP: assorted typo fixes in the code and comments
This is the 35th iteration of typo fixes
2023-04-01 18:33:40 +02:00
Frédéric Lécaille
db4bc6b4f3 MINOR: quic: Add a fake congestion control algorithm named "nocc"
This algorithm does nothing except initializing the congestion control window
to a fixed value. Very smart!

Modify the QUIC congestion control configuration parser to support this new
algorithm. The congestion control algorithm must be set as follows:

     quic-cc-algo nocc-<cc window size (KB)>

For instance if "nocc-15" is provided as quic-cc-algo keyword value, this
will set a fixed window of 15KB.
2023-03-31 17:09:03 +02:00
Frédéric Lécaille
d721571d26 MEDIUM: quic: Ack delay implementation
Reuse the idle timeout task to delay the acknowledgments. The time of the
idle timer expiration is from now on stored in ->idle_expire. The one to
trigger the acknowledgements is stored in ->ack_expire.
Add the new QUIC_FL_CONN_ACK_TIMER_FIRED connection flag to mark a connection
as having had its acknowledgement timer triggered.
Modify qc_may_build_pkt() to prevent the sending of "ack only" packets and
to allow the connection to send packets when the ack timer has fired.
It is possible that acks are sent before the ack timer has triggered. In
this case it is cancelled only if ACK frames are really sent.
The idle timer expiration must be set again when the ack timer has been
triggered or when it is cancelled.

Must be backported to 2.7.
2023-03-31 13:41:17 +02:00
Frédéric Lécaille
8f991948f5 MINOR: quic: Traces adjustments at proto level.
Dump variables displayed by TRACE_ENTER() or TRACE_LEAVE() by calls to TRACE_PROTO().
No more variables are displayed by the two former macros. From now on, this information
is accessible from the proto level.
Add new calls to TRACE_PROTO() at important locations in relation with the QUIC transport
protocol.
When relevant, try to prefix such traces with a TX or RX keyword to identify the
concerned subpart (transmission or reception) of the protocol.

Must be backported to 2.7.
2023-03-31 09:54:59 +02:00
Frédéric Lécaille
acc9cfdf79 MINOR: quic: Adjustments for generic control congestion traces
Display the elapsed time since packets were sent in place of the timestamp, which
does not provide easy-to-read information.

Must be backported to 2.7.
2023-03-31 09:54:59 +02:00
Frédéric Lécaille
d7243318c4 BUG/MINOR: quic: Wrong use of now_ms timestamps (cubic algo)
As now_ms may wrap, one must use the ticks API to protect the cubic congestion
control algorithm implementation from side effects due to this.

Furthermore, to make the cubic congestion control algorithm more readable and easier
to maintain, add a new state ("in recovery period", new QUIC_CC_ST_RP enum value).
Implement quic_cc_cubic_rp_cb() which is the callback for
this new state.

Must be backported to 2.7 and 2.6.
2023-03-31 09:54:59 +02:00
Aurelien DARRAGON
7e64d8720e BUG/MINOR: backend: make be_usable_srv() consistent when stopping
When a proxy enters the STOPPED state, it will no longer accept new
connections.

However, it doesn't mean that it's completely inactive yet: it will
still be able to handle already pending / keep-alive connections,
thus finishing ongoing work before effectively stopping.

be_usable_srv(), which is used by nbsrv converter and sample fetch,
will return 0 if the proxy is either stopped or disabled.

nbsrv behaves this way since it was originally implemented in b7e7c4720
("MINOR: Add nbsrv sample converter").

(Since then, multiple refactors were performed around this area, but
the current implementation still follows the same logic)

It was found that if nbsrv is used in a proxy section to perform routing
logic, unexpected decisions are made when the proxy is in the STOPPED state,
since in-flight requests will suffer from nbsrv returning 0 instead of the
current number of usable servers which may still process existing
connections.
For instance, this can happen during process soft-stop, or even when
stopping the proxy from the cli / lua.

To fix this, we now make sure be_usable_srv() always returns the
current number of usable servers, unless the proxy is explicitly
disabled (from the config, not at runtime).
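
A hedged sketch of the adjusted check (simplified; PR_FL_DISABLED is the flag
set for configuration-disabled proxies, and the counters are the proxy's
active/backup server counts):

    /* sketch only: report 0 only for a configuration-disabled proxy;
     * a proxy that is merely stopping keeps reporting its usable servers */
    static int be_usable_srv_sketch(struct proxy *be)
    {
        if (be->flags & PR_FL_DISABLED)
            return 0;
        return be->srv_act ? be->srv_act : be->srv_bck;
    }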

This could be backported up to 2.6.
For older versions, the need for a backport should be evaluated first.

--
Note for 2.4: proxy flags did not exist; they were introduced by fd10ab5e
("MINOR: proxy: Introduce proxy flags to replace disabled bitfield")

For 2.2: STOPPED and DISABLED states were not separated, so we have no
easy way to apply the fix anyway.
2023-03-31 07:45:08 +02:00
Martin DOLEZ
110e4a8733 MINOR: http_fetch: add case insensitive support for smp_fetch_url_param
This commit adds a new argument to smp_fetch_url_param
that makes the parameter key comparison case-insensitive.
Several levels of callers were modified to pass this info.
2023-03-30 14:11:10 +02:00
Aurelien DARRAGON
2c5b9ded9b CLEANUP: proxy: remove stop_time related dead code
Since eb77824 ("MEDIUM: proxy: remove the deprecated "grace" keyword"),
stop_time is never set, so the related code in manage_proxy() is not
relevant anymore.

Removing code that refers to p->stop_time, since it was probably
overlooked.
2023-03-28 20:26:47 +02:00
Frédéric Lécaille
c425e03b28 BUG/MINOR: quic: Missing STREAM frame type updated
This patch follows this commit which was not sufficient:
  BUG/MINOR: quic: Missing STREAM frame data pointer updates

Indeed, after updating the ->offset field, the bit which informs the
frame builder of its presence must be systematically set.

This bug was revealed by the following BUG_ON() from
quic_build_stream_frame() :
  bug condition "!!(frm->type & 0x04) != !!stream->offset.key" matched at src/quic_frame.c:515

This should fix the last crash that occurred in GitHub issue #2074.

Must be backported to 2.6 and 2.7.
2023-03-27 16:01:44 +02:00
Willy Tarreau
1751db140a MINOR: pools: report a replaced memory allocator instead of just malloc_trim()
Instead of reporting the inaccurate "malloc_trim() support" on -vv, let's
report the case where the memory allocator was actively replaced from the
one used at build time, as this is the corner case we want to be cautious
about. We also put a tainted bit when this happens so that it's possible
to detect it at run time (e.g. the user might have inherited it from an
environment variable during a reload operation).

The now unused is_trim_enabled() function was finally dropped.
2023-03-22 18:05:02 +01:00
Willy Tarreau
7aee683541 MINOR: pools: export trim_all_pools()
This way it will be usable from outside instead of malloc_trim().
2023-03-22 17:30:28 +01:00
Willy Tarreau
eaba76b02d MINOR: pools: intercept malloc_trim() instead of trying to plug holes
As reported by Miroslav in commit d8a97d8f6 ("BUG/MINOR: illegal use of
the malloc_trim() function if jemalloc is used") there are still occasional
cases where it's discovered that malloc_trim() is being used without its
suitability being checked first. This is a problem when using another
incompatible allocator. But there's a class of use cases we'll never be
able to cover: dynamic libraries loaded from Lua. In order to address
this more reliably, we now define our own malloc_trim() that calls the
previous one after checking that the feature is supported and that the
allocator is the expected one. This way child libraries that would call
it will also be safe.
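
A hedged sketch of the interception idea (helper names are illustrative):

    /* sketch only: our malloc_trim() overrides the libc symbol and only
     * forwards when the detected allocator actually supports trimming;
     * my_trim_supported() and real_malloc_trim() are illustrative names */
    int malloc_trim(size_t pad)
    {
        if (!my_trim_supported())
            return 0;                 /* foreign allocator: do nothing */
        return real_malloc_trim(pad); /* forward to the real implementation */
    }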

The function is intentionally left defined all the time so that it will
be possible to clean up some code that uses it by removing ifdefs.
2023-03-22 17:30:28 +01:00
Amaury Denoyelle
1d0ed1a2e9 BUG/MINOR: trace: fix hardcoded level for TRACE_PRINTF
The level argument was not honored by TRACE_PRINTF due to a hardcoded value
of TRACE_LEVEL_DEVELOPER inside the macro.

This must be backported up to 2.6.
2023-03-22 15:31:55 +01:00
Miroslav Zagorac
d8a97d8f60 BUG/MINOR: illegal use of the malloc_trim() function if jemalloc is used
In the event that HAProxy is linked with the jemalloc library, it is still
shown that malloc_trim() is enabled when executing "haproxy -vv":
  ..
  Support for malloc_trim() is enabled.
  ..

The problem is not so much the report itself as the fact that malloc_trim() is
called in the pat_ref_purge_range() function without any check.

This was solved by setting the using_default_allocator variable to the
correct value in the detect_allocator() function, and by checking, before
calling malloc_trim(), whether the function should be called at all.
2023-03-22 14:14:50 +01:00
Willy Tarreau
0de1e6180a BUILD: thread: implement thread_harmless_end_sig() for threadless builds
Building without thread support was broken in 2.8-dev2 with commit
7e70bfc8c ("MINOR: threads: add a thread_harmless_end() version that
doesn't wait") that forgot to define the function for the threadless
cases. No backport is needed.
2023-03-22 10:40:06 +01:00
Willy Tarreau
69869e6354 MINOR: dynbuf: set POOL_F_NO_FAIL on buffer allocation
b_alloc() is used to allocate a buffer. We can provoke fault injection
based on forced memory allocation failures using -dMfail on the command
line, but we know that the buffer_wait list is a bit weak and doesn't
always recover well. As such, submitting buffer allocation to such a
treatment seriously limits the usefulness of -dMfail which cannot really
be used for other purposes. Let's just disable it for buffers for now.
2023-03-21 09:15:13 +01:00
Willy Tarreau
ac78c4fd9d MINOR: ssl-sock: pass the CO_SFL_MSG_MORE info down the stack
Despite having replaced the SSL BIOs to use our own raw_sock layer, we
still didn't exploit the CO_SFL_MSG_MORE flag which is pretty useful to
avoid sending incomplete packets. It's particularly important for SSL
since the extra overhead almost guarantees that each send() will be
followed by an incomplete (and often odd-sized) segment.

We already have an xprt_st set of flags to pass info to the various
layers, so let's just add a new one, SSL_SOCK_SEND_MORE, that is set
or cleared during ssl_sock_from_buf() to transfer the knowledge of
CO_SFL_MSG_MORE. This way we can recover this information and pass
it to raw_sock.
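
A hedged sketch of the flag transfer (simplified; 'ctx' stands for the SSL
xprt context carrying the xprt_st flags mentioned above, and 'flags' for the
send flags passed to ssl_sock_from_buf()):

    /* sketch only: remember CO_SFL_MSG_MORE across the SSL layer so that
     * the underlying raw_sock send can coalesce outgoing segments */
    if (flags & CO_SFL_MSG_MORE)
        ctx->xprt_st |= SSL_SOCK_SEND_MORE;
    else
        ctx->xprt_st &= ~SSL_SOCK_SEND_MORE;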

This alone is sufficient to increase by ~5-10% the H2 bandwidth over
SSL when multiple streams are used in parallel.
2023-03-17 16:43:51 +01:00
Frédéric Lécaille
ca07979b97 BUG/MINOR: quic: Missing STREAM frame data pointer updates
This patch follows this one which was not sufficient:
    "BUG/MINOR: quic: Missing STREAM frame length updates"
Indeed, it is not sufficient to update the ->len and ->offset members
of a STREAM frame to move it forward. The data pointer must also be updated.
This is not done by the STREAM frame builder.

Must be backported to 2.6 and 2.7.
2023-03-17 09:21:18 +01:00
Willy Tarreau
9824f8c890 MINOR: buffer: add br_single() to check if a buffer ring has more than one buf
It's cheaper and cleaner than using br_count()==1 given that it just compares
two indexes, and that a ring having a single buffer is a special case where
it is between empty and used up-to-1. In other words it's not congested.
2023-03-16 18:45:46 +01:00
Willy Tarreau
e5a26eb2de MINOR: buffer: add br_count() to return the number of allocated bufs
We have no way to know how many buffers are currently allocated in a
buffer ring. Let's add br_count() for this.
2023-03-16 18:45:46 +01:00
Christopher Faulet
3a7b539b12 BUG/MEDIUM: connection: Preserve flags when a conn is removed from an idle list
The commit 5e1b0e7bf ("BUG/MEDIUM: connection: Clear flags when a conn is
removed from an idle list") introduced a regression. CO_FL_SAFE_LIST and
CO_FL_IDLE_LIST flags are used when the connection is released to properly
decrement used/idle connection counters. If a connection is idle, these
flags must be preserved till the connection is really released. It may be
removed from the list but not immediately released. If these flags are lost
when it is finally released, the current number of used connections is
erroneously decremented. It means this counter may become negative and the
counter tracking the number of idle connections is not decremented,
suggesting a leak.

So, the above commit is reverted and instead we improve a bit the way to
detect an idle connection. The function conn_get_idle_flag() must now be
used to know if a connection is in an idle list. It returns the connection
flag corresponding to the idle list if the connection is idle
(CO_FL_SAFE_LIST or CO_FL_IDLE_LIST) or 0 otherwise. But if the connection
is scheduled to be removed, 0 is also returned, regardless of the connection
flags.

This new function is used when the connection is temporarily removed from
the list to be used, mainly in muxes.

This patch should fix #2078 and #2057. It must be backported as far as 2.2.
2023-03-16 15:34:20 +01:00
Remi Tricot-Le Breton
a6c0a59e9a MINOR: ssl: Use ocsp update task for "update ssl ocsp-response" command
Instead of having a dedicated httpclient instance and its own code
separate from the actual auto update one, the "update ssl
ocsp-response" command will now use the update task in order to perform updates.

Since the cli command allows updating responses that were never
included in the auto update tree, a new flag was added to the
certificate_ocsp structure so that the said entry can be inserted into
the tree "by hand" and it won't be reinserted back into the tree after
the update process is performed. The 'update_once' flag "stole" a bit
from the 'fail_count' counter since it is the one least likely to reach
UINT_MAX among the ocsp counters of the certificate_ocsp structure.

This new logic required that every certificate_ocsp entry contained all
the ocsp-related information at all times since entries that are not
supposed to be configured automatically can still be updated through the
cli. The logic of the ssl_sock_load_ocsp was changed accordingly.
2023-03-14 11:07:32 +01:00
Willy Tarreau
8f6da64641 MINOR: quic_sock: un-statify quic_conn_sock_fd_iocb()
This one is printed as the iocb in the "show fd" output, and arguably
this wasn't very convenient as-is:
    293 : st=0x000123(cl heopI W:sRa R:sRA) ref=0 gid=1 tmask=0x8 umask=0x0 prmsk=0x8 pwmsk=0x0 owner=0x7f488487afe0 iocb=0x50a2c0(main+0x60f90)

Let's unstatify it and export it so that the symbol can now be resolved
from the various points that need it.
2023-03-10 14:30:01 +01:00
William Lallemand
2078d4b1f7 BUG/MINOR: mworker: use MASTER_MAXCONN as default maxconn value
In environments where SYSTEM_MAXCONN is defined when compiling, the
master will use this value instead of the original minimal value which
was set to 100. When this happens, the master process could allocate
RAM excessively since it does not need to have a high maxconn. (For
example if SYSTEM_MAXCONN was set to 100000 or more)

This patch fixes the issue by using the new define MASTER_MAXCONN which
defines a default maxconn of 100 for the master process.
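
A hedged sketch of the new default (declared like the other overridable
build-time defaults; the exact location in the headers is not shown here):

    /* sketch only: default maxconn for the master process, independent
     * from SYSTEM_MAXCONN and overridable at build time */
    #ifndef MASTER_MAXCONN
    #define MASTER_MAXCONN 100
    #endif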

Must be backported as far as 2.5.
2023-03-09 14:28:44 +01:00
Willy Tarreau
cd8914bc52 BUG/MAJOR: fd/threads: close a race on closing connections after takeover
As mentioned in commit 237e6a0d6 ("BUG/MAJOR: fd/thread: fix race between
updates and closing FD"), a race was found during stress tests involving
heavy backend connection reuse with many competing closes.

Here the problem is complex. The analysis in commit f69fea64e ("MAJOR:
fd: get rid of the DWCAS when setting the running_mask") that removed
the DWCAS in 2.5 overlooked a few races.

First, a takeover from thread1 could happen just after fd_update_events()
in thread2 validates it holds the tmask bit in the CAS loop. Since thread1
releases running_mask after the operation, thread2 will succeed the CAS
and both will believe the FD is theirs. This does explain the occasional
crashes seen with h1_io_cb() being called on a bad context, or
sock_conn_iocb() seeing conn->subs vanish after checking it. This issue
can be addressed using a DWCAS in both fd_takeover() and fd_update_events()
as it was before the patch above but this is not portable to all archs and
is not easy to adapt for those lacking it, due to some operations still
happening only on individual masks after the thread groups were added.

Second, the checks after fd_clr_running() for the current thread being
the last one are not sufficient: at the exact moment the operation
completes, another thread may also set and drop the running bit and see
itself as alone, and both can call _fd_delete_orphan() in parallel. In
order to prevent this from happening, we cannot rely on the absence of
others, we need an explicit flag indicating that the FD must be closed.
One approach that was attempted consisted in playing with the thread_mask
but that was not reliable since it could still match between the late
deletion and the early insertion that follows. Instead, a new FD flag
was added, FD_MUST_CLOSE, that exactly indicates that the call to
_fd_delete_orphan() must be done. It is set by fd_delete(), and
atomically cleared by the first one which checks it, and which is the
only one to call _fd_delete_orphan().
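
A hedged sketch of the "first to clear the flag does the close" rule (field
and macro usage are simplified/illustrative):

    /* sketch only: only the thread that atomically clears FD_MUST_CLOSE
     * calls _fd_delete_orphan(), so the close happens exactly once */
    if (HA_ATOMIC_FETCH_AND(&fdtab[fd].state, ~FD_MUST_CLOSE) & FD_MUST_CLOSE)
        _fd_delete_orphan(fd);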

With both points addressed, there's no more visible race left:

- takeover() only happens under the connection list's lock and cannot
  compete with fd_delete() since fd_delete() must first remove the
  connection from the list before deleting the FD. That's also why it
  doesn't need to call _fd_delete_orphan() when dropping its running
  bit.

- takeover() sets its running bit then atomically replaces the thread
  mask, so that until that's done, it doesn't validate the condition
  to end the synchronization loop in fd_update_events(). Once it's OK,
  the previous thread's bit is lost, and this is checked for in
  fd_update_events()

- fd_update_events() can compete with fd_delete() at various places
  which are explained above. Since fd_delete() clears the thread mask
  after setting its running bit and after setting the FD_MUST_CLOSE
  bit, the synchronization loop guarantees that the thread mask is seen
  before going further, and that once it's seen, the FD_MUST_CLOSE flag
  is already present.

- fd_delete() may start while fd_update_events() has already started,
  but fd_delete() must hold a bit in thread_mask before starting, and
  that is checked by the first test in fd_update_events() before setting
  the running_mask.

- the poller's _update_fd() will not compete against _fd_delete_orphan()
  nor fd_insert() thanks to the fd_grab_tgid() that's always done before
  updating the polled_mask, and guarantees that we never pretend that a
  polled_mask has a bit before the FD is added.

The issue is very hard to reproduce and is extremely time-sensitive.
Some tests were required with a 1-ms timeout with request rates
closely matching 1 kHz per server, though certain tests sometimes
benefitted from saturation. It was found that adding the following
slowdown at a few key places helped a lot and managed to trigger the
bug in 0.5 to 5 seconds instead of tens of minutes on a 20-thread
setup:

    { volatile int i = 10000; while (i--); }

Particularly, placing it at key places where only one of running_mask
or thread_mask is set and not the other one yet (e.g. after the
synchronization loop in fd_update_events or after dropping the
running bit) did yield great results.

Many thanks to Olivier Houchard for this expert help analysing these
races and reviewing candidate fixes.

The patch must be backported to 2.5. Note that 2.6 does not have tgid
in FDs, and that it requires a change of output on fd_clr_running() as
we need the previous bit. This is provided by carefully backporting
commit d6e1987612 ("MINOR: fd: make fd_clr_running() return the previous
value instead"). Tests have shown that the lack of tgid is a showstopper
for 2.6 and that unless a better workaround is found, it could still be
preferable to backport the minimum pieces required for fd_grab_tgid()
to 2.6 so that it stays stable in the long run.
2023-03-09 14:01:48 +01:00