8446 Commits

Amaury Denoyelle
796446a15e MAJOR: mux-quic: support pacing emission
Support pacing emission for STREAM frames at the QUIC MUX layer. This is
implemented by adding a quic_pacer engine into QCC structure.

The main changes have been written into qcc_io_send(). It now
differentiates the cases where some frames have been rejected by the
transport layer. As before, this can occur due to congestion or a full FD
buffer, which requires subscribing on the transport layer. The new case
is when emission has been interrupted due to pacing timing. In this case,
the QUIC MUX I/O tasklet is rescheduled to run with the flag
TASK_F_USR1.

On tasklet execution, if TASK_F_USR1 is set, all standard processing for
emission and reception is skipped. Instead, a new function
qcc_purge_sending() is called. Its purpose is to retry emission with the
saved STREAM frames list. Either all remaining frames can now be sent, a
subscribe is performed on transport error, or the tasklet must be
rescheduled again to continue the pacing purge.

In the meantime, if the tasklet is rescheduled due to other conditions,
TASK_F_USR1 is reset. This will trigger a full regeneration of STREAM
frames. In this case, pacing expiration must be checked before calling
qcc_send_frames() to ensure emission is now allowed.
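
For illustration, here is a minimal sketch of the tasklet-side logic
described above. The callback shape, field names and the helpers other
than qcc_purge_sending() are simplified assumptions, not the actual code:

  /* illustrative sketch only, not the real QUIC MUX I/O callback */
  struct task *qcc_io_cb(struct task *t, void *ctx, unsigned int state)
  {
          struct qcc *qcc = ctx;

          if (state & TASK_F_USR1) {
                  /* wakeup scheduled purely for pacing: skip standard
                   * processing and only retry emission of the saved
                   * STREAM frames list.
                   */
                  qcc_purge_sending(qcc);
                  return NULL;
          }

          /* regular wakeup: the STREAM frames list will be fully
           * regenerated before emission (placeholder helper below).
           */
          qcc_io_process(qcc);
          return NULL;
  }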
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
ede4cd4c2e MINOR: mux-quic: encapsulate QCC tasklet wakeup
QUIC MUX will be responsible for driving emission with pacing. This will
be implemented by setting TASK_F_USR1 before the I/O tasklet wakeup. To
prepare for this, encapsulate each I/O tasklet wakeup into a new function
qcc_wakeup().

This commit is purely a refactoring prior to the pacing implementation in
the QUIC MUX.
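
As an illustration, the wrapper is expected to be as simple as the
following sketch (the wait_event field shown here is an assumption):

  static void qcc_wakeup(struct qcc *qcc)
  {
          /* single place where the I/O tasklet is woken up, so that
           * pacing can later set TASK_F_USR1 here.
           */
          tasklet_wakeup(qcc->wait_event.tasklet);
  }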
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
4a94a018f0 MINOR: mux-quic: define a tx STREAM frame list member
For STREAM emission, the QUIC MUX previously used a local list defined
inside qcc_io_send(). This was suitable as either all frames were sent,
or emission had to be interrupted due to transport congestion or a fatal
error. In the latter case, the list was emptied anyway and a new frame
list was built on the next qcc_io_send() invocation.

For pacing, the QUIC MUX may have to save the frame list if pacing is
applied across emissions. This is necessary to avoid unnecessarily
rebuilding the STREAM frame list between each paced emission. To support
this, the STREAM list is now stored as a member of the QCC structure.

Ensure the frame list is always freed, even on QCC release, using the
newly defined utility function qcc_tx_frms_free().
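
A minimal sketch of what such a helper may look like, assuming the list
is stored under a <tx.frms> member and released by a frame free routine;
both names are placeholders, not necessarily the real ones:

  static void qcc_tx_frms_free(struct qcc *qcc)
  {
          struct quic_frame *frm, *back;

          /* purge every STREAM frame left in the saved TX list; the
           * release call stands for whatever frame free routine applies.
           */
          list_for_each_entry_safe(frm, back, &qcc->tx.frms, list) {
                  LIST_DELETE(&frm->list);
                  qc_release_frm(qcc, frm);
          }
  }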
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
886a7c475c MINOR: quic/pacing: add burst support
qc_send_mux() has been extended previously to support pacing emission.
This ensures that no more than one datagram will be emitted during each
invocation. However, to achieve better performance, it may be necessary
to emit a batch of several datagrams in one turn.

A so-called burst value can be specified by the user in the
configuration. However, some congestion control algorithms may define
their own dynamic value. As such, a new CC callback pacing_burst is
defined.

quic_cc_default_pacing_burst() can be used for algorithms without pacing
interaction, such as cubic. It returns a static value based on the
user-selected configuration.
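
As an illustration, the default callback may be as simple as the sketch
below; the tune setting name used here is hypothetical:

  /* pacing_burst callback for algorithms with no dynamic burst logic */
  static uint quic_cc_default_pacing_burst(const struct quic_cc *cc)
  {
          /* report the statically configured burst size */
          return global.tune.quic_pacing_burst;
  }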
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
8039fe43e6 MINOR: quic/pacing: support pacing emission on quic_conn layer
Pacing will be implemented for STREAM frames emission. As such, the
qc_send_mux() API has been extended with an argument pointing to a
quic_pacer engine.

If non-NULL, the engine will be used to pace emission. In short, no more
than one datagram will be emitted for each qc_send_mux() invocation. The
pacer is then notified about the emission and a timer for a future
emission is calculated. qc_send_mux() will return a PACING error value to
inform the QUIC MUX layer that it is responsible for retrying emission
after some delay.
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
ab82fab442 MINOR: quic/pacing: implement quic_pacer engine
Extend quic_pacer engine to support pacing emission. Several functions
are defined.
* quic_pacing_sent_done() to notify engine about an emission of one or
  several datagrams
* quic_pacing_expired() to check if emission should be delayed or can be
  conducted immediately
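
A minimal sketch of how these two helpers may articulate, assuming the
pacer stores the date of the next allowed emission in a <next> field and
an inter-datagram delay in <delay_ns> (field names and the clock helper
are assumptions):

  /* returns true if emission may be conducted immediately */
  static inline int quic_pacing_expired(const struct quic_pacer *pacer)
  {
          return !pacer->next || pacer->next <= now_mono_time();
  }

  /* notify the engine that <sent> datagrams were just emitted */
  static inline void quic_pacing_sent_done(struct quic_pacer *pacer, int sent)
  {
          /* schedule the next allowed emission from the pacing rate */
          pacer->next = now_mono_time() + sent * pacer->delay_ns;
  }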
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
3e11492c99 MINOR: quic: define quic_pacing module
Add a new module quic_pacing. A new structure quic_pacer is defined.
This will be used as a pacing engine to implement smooth emission of
QUIC data.
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
7fd48a5723 MINOR: quic: extend qc_send_mux() return type with a dedicated enum
This commit is part of an adjustment of the QUIC transport send API to
support pacing. Here, the qc_send_mux() return type has been changed to
use a new enum quic_tx_err.

This is useful to distinguish the different failure causes of an
emission. For now, only two values have been defined: NONE and FATAL.
When pacing is implemented, a new value will be added to specify that
emission was interrupted by pacing. This won't be a fatal error, as it
allows emission to be retried, just not immediately.
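
As an illustration, the enum is expected to look roughly like this sketch
(only the NONE and FATAL values exist for now, as stated above):

  enum quic_tx_err {
          QUIC_TX_ERR_NONE,  /* emission performed without error */
          QUIC_TX_ERR_FATAL, /* unrecoverable error, emission must stop */
          /* a PACING value will be added once pacing is implemented */
  };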
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
5cb8f8a622 MINOR: quic: support a max number of built packet per send iteration
Extend the QUIC transport emission function to support a maximum
datagram argument. The purpose is to ensure that qc_send() won't emit
more than the specified value, unless it is 0, which is considered
unlimited.

In qc_prep_pkts(), a counter of built datagrams has been added to support
this. The packet building loop is interrupted when it reaches the
specified maximum value. Also, its return value has been changed to the
number of prepared datagrams. This is reused by qc_send() to interrupt
its work when the specified max datagram argument is reached over one or
several iterations of prepared/sent datagrams.

This change is necessary to support pacing emission. Note that ideally,
the total length in bytes of emitted datagrams should be taken into
account instead of the raw number of datagrams. However, for a first
implementation, it was deemed easier to start with the latter.
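
A simplified sketch of the interruption logic, with placeholder helper
names standing in for the real packet building code:

  /* returns the number of datagrams prepared, never more than
   * <max_dgrams> unless it is 0 (unlimited).
   */
  static int qc_prep_pkts_sketch(struct quic_conn *qc, int max_dgrams)
  {
          int prep = 0;

          while (qc_has_tx_data(qc)) {
                  if (max_dgrams && prep >= max_dgrams)
                          break;
                  qc_build_dgram(qc);
                  prep++;
          }
          return prep;
  }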
2024-11-19 16:16:48 +01:00
Amaury Denoyelle
4069873403 MINOR: mux-quic: add missing values for show flags
Add the QCC QC_CF_WAIT_FOR_HS and QCS QC_SF_TXBUB_OOB flags to their
respective show_flags to be able to decipher them via the dev flags
utility.

These values have been added in the current dev version, thus no need to
backport this patch.
2024-11-19 16:16:48 +01:00
Christopher Faulet
bc967758a2 MINOR: mux-h1: Return 414 or 431 when appropriate
When the request is too large to fit in a buffer, a 414 or a 431 error
message is returned, depending on the error state of the request parser.
A 414 is returned if the URI is too long, otherwise a 431 is returned.

This patch should fix the issue #1309.
2024-11-19 15:29:40 +01:00
Christopher Faulet
62dc8750a9 MINOR: http: Add support for HTTP 414/431 status codes
414-Uri-Too-Long and 431-Request-Header-Fields-Too-Large are now part of
the supported status codes that can be defined as error files. The hash
table defined in http_get_status_idx() was updated accordingly.
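
As a hypothetical configuration example (the file paths are placeholders):

   defaults
        errorfile 414 /etc/haproxy/errors/414.http
        errorfile 431 /etc/haproxy/errors/431.http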
2024-11-19 15:29:40 +01:00
Christopher Faulet
1be7140ade MINOR: http-ana: Add support for "set-cookie-fmt" option to redirect rules
It is now possible to use a log-format string to define the "Set-Cookie"
header value of a response generated by a redirect rule. There is no special
check on the resulting format and none can be performed during
configuration parsing. It is probably not a big deal because the already
existing "set-cookie" and "clear-cookie" options don't perform any check
either.

Here is an example:

  http-request redirect location https://someurl.com/ set-cookie haproxy="%[var(txn.var)]"

This patch should fix the issue #1784.
2024-11-19 15:20:02 +01:00
Christopher Faulet
b2877db47c MINOR: http-ana: Add option to keep query-string on a location-based redirect
On a prefix-based redirect, there is an option to drop the query-string
of the location. Here it is the opposite: an option is added to preserve
the query-string of the original URI for a location-based redirect.

By setting the "keep-query" option, for a location-based redirect only,
the query-string of the original URI is appended to the location. If
there is no query-string, nothing is added (no empty '?'). If there is
already a non-empty query-string on the location, the original one is
appended using a '&' separator.
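
A hypothetical example (the target URL is a placeholder); with such a
rule, a request for "/foo?a=1" is redirected to
"https://example.com/login?a=1":

  http-request redirect location https://example.com/login keep-query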

This patch should fix issue #2728.
2024-11-19 15:20:02 +01:00
Willy Tarreau
82f190f882 MINOR: tools: make parse_size_err() support 32/64 bits
parse_size_err() is currently a function working only on a uint. It's
not convenient for certain elements such as rings on large machines.

This commit addresses this by having one function for uints and one
for ullong, and making parse_size_err() a macro that automatically
calls one or the other. It also has the benefit of automatically
supporting compatible types (long, size_t etc).
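
As an illustration, the dispatch may be sketched as below; the helper
names and the exact selection mechanism are assumptions, not necessarily
the real code:

  const char *parse_size_err_u32(const char *text, unsigned int *ret);
  const char *parse_size_err_u64(const char *text, unsigned long long *ret);

  /* pick the parser matching the width of the output variable, so that
   * compatible types (long, size_t, ...) are transparently supported.
   */
  #define parse_size_err(text, ret)                                   \
          (sizeof(*(ret)) > sizeof(unsigned int) ?                    \
           parse_size_err_u64((text), (unsigned long long *)(ret)) :  \
           parse_size_err_u32((text), (unsigned int *)(ret)))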
2024-11-19 10:50:42 +01:00
Willy Tarreau
9c6ccb8dbb MEDIUM: config: warn on unitless timeouts < 100 ms
From time to time we face a configuration with very small timeouts which
look accidental because there could be expectations that they're expressed
in seconds and not milliseconds.

This commit adds a check for non-null unitless values smaller than 100
and emits a warning suggesting to append an explicit unit if that was
the intent.
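
For example, a hypothetical configuration like the one below would now
trigger the warning on the first line, while the explicit units keep the
other ones silent:

   defaults
        timeout connect 50      # unitless and < 100: warning suggesting a unit
        timeout client  50ms    # explicit unit: no warning
        timeout server  50s     # explicit unit: no warning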

Only the common timeouts, the server check intervals and the resolvers
hold and timeout values were covered for now. All the code needs to be
manually reviewed to verify if it supports emitting warnings.

This may break some configs using "zero-warning", but greps in existing
configs indicate that these are extremely rare and only done
intentionally during tests. And even if a user leaves such a value behind
after a test, reading 10ms will make it more obvious that something is
probably not correct.
2024-11-19 10:33:20 +01:00
Willy Tarreau
e72b525832 MINOR: cfgparse: parse tune.bufsize.small as a size
Till now this value was parsed as raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "4k". Let's make use of parse_size_err() on it so that
units are supported. This requires to turn it to uint as well, which
was verified to be OK.
2024-11-18 19:07:05 +01:00
Willy Tarreau
a344d37fad MINOR: cfgparse: parse tune.bufsize as a size
Till now this value was parsed as raw integer using atol() and would
silently ignore any trailing suffix, preventing haproxy from starting when set
e.g. to "64k". Let's make use of parse_size_err() on it so that units are
supported. This requires to turn it to uint as well, and to explicitly
limit its range to INT_MAX - 2*sizeof(void*), which was previously
partially handled as part of the sign check.
2024-11-18 19:06:25 +01:00
Willy Tarreau
2f0c6ff3a5 MINOR: cfgparse: parse tune.recv_enough as a size
Till now this value was parsed as raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "512k". Let's make use of parse_size_err() on it so that
units are supported. This requires to turn it to uint as well, and
since it's sometimes compared to an int, we limit its range to
0..INT_MAX.
2024-11-18 19:01:28 +01:00
Willy Tarreau
a90a7d4d60 MINOR: cfgparse: parse tune.pipesize as a size
Till now this value was parsed as raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "512k". Let's make use of parse_size_err() on it so that
units are supported. This requires to turn it to uint as well, which
was verified to be OK.
2024-11-18 18:51:31 +01:00
Willy Tarreau
f9f28b7584 MINOR: cfgparse: parse tune.{rcvbuf,sndbuf}.{frontend,backend} as sizes
Till now these values were parsed as raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "512k". Let's make use of parse_size_err() on them so that
units are supported. This requires to turn them to uint as well, which
is OK.
2024-11-18 18:50:02 +01:00
Willy Tarreau
a923c72357 MINOR: cfgparse: parse tune.{rcvbuf,sndbuf}.{client,server} as sizes
Till now these values were parsed as raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "512k". Let's make use of parse_size_err() on them so that
units are supported. This requires to turn them to uint as well, which
is OK.
2024-11-18 18:49:01 +01:00
Willy Tarreau
00fcda1ff2 MINOR: acl: export find_acl_default()
It will be needed in a future patch, so let's export it (it was static).
2024-11-18 15:15:54 +01:00
Willy Tarreau
4fd6d15344 MINOR: mux-quic/h3: count glitches when they're reported
The qcc_report_glitch() function is now replaced with a macro to support
enumerating counters for each individual glitch line. For now this adds
36 such counters. The macro supports an optional description, though that
is not being used for now.

As a reminder, this requires to build with -DDEBUG_GLITCHES=1.
2024-11-14 20:43:33 +01:00
Aurelien DARRAGON
42710b7320 MEDIUM: uri_auth: implement clean uri_auth cleaning
The proxy auth_uri struct was manually cleaned up during deinit, but the
logic behind it was kind of awkward because it required finding out which
ones were shared and which were not. Instead, let's switch to a proper
refcount mechanism and free the auth_uri struct directly in
proxy_free_common().
2024-11-14 15:03:38 +01:00
Aurelien DARRAGON
e1ec37ea51 MINOR: uri_auth: add stats_uri_auth_free helper
Let's now leverage stats_uri_auth_free() helper to free uri_auth struct
instead of manually performing the cleanup, which is error-prone.
2024-11-14 15:03:33 +01:00
Willy Tarreau
502790ed7e MINOR: debug: add a new counter type for glitches
COUNT_GLITCH() will implement an unconditional counter on its declaration
line when DEBUG_GLITCHES is set, and do nothing otherwise. The output will
be reported as "GLT" and can be filtered as "glt" on the CLI. The purpose
is to help figure out what's happening if some glitch counters start going
through the roof. The macro supports an optional string argument to
describe the cause of the glitch (e.g. "truncated header"), which is then
reported in the dump.
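
A rough sketch of the idea behind the macro; the real implementation
plugs into the debug counters infrastructure, which is not reproduced
here:

  #if defined(DEBUG_GLITCHES)
  /* each expansion gets its own block-scoped static counter, so every
   * declaration line is counted independently; the optional description
   * is only meaningful when dumping the counters.
   */
  #define COUNT_GLITCH(...) do {                  \
          static unsigned long long glt_cnt;      \
          glt_cnt++;                              \
  } while (0)
  #else
  #define COUNT_GLITCH(...) do { } while (0)
  #endif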

For now this is conditioned by DEBUG_GLITCHES but if it turns out to be
light enough, maybe we'll keep it enabled full time. In this case it
might have to be moved away from debug dev, or at least documented (or
done as debug counters maybe so that dev can remain undocumented and
updatable within a branch?).
2024-11-14 08:49:38 +01:00
Willy Tarreau
e119095290 MINOR: debug: explicitly permit the counter condition to be empty
In order to count new event types, we'll need to support empty conditions
so that we don't have to fake an "if (1)" that would pollute the output. This
change checks if #cond is an empty string before concatenating it with
the optional var args, and avoids dumping the colon on the dump if the
whole description is empty.
2024-11-14 08:47:00 +01:00
Valentine Krasnobaeva
d5d41dee3d MINOR: startup: replace HAPROXY_LOAD_SUCCESS with global load_status
After the master-worker refactoring, the master performs a re-exec only
once, upon receiving the "reload" command or the USR2 signal. There is no
longer a second master re-exec to free unused memory. Thus, there is no
longer any need to export the environment variable HAPROXY_LOAD_SUCCESS
with the worker process load status. This status can simply be saved in
the global variable load_status.
2024-11-13 09:50:05 +01:00
Amaury Denoyelle
8e0e7d9d1a BUG/MINOR: guid/server: ensure thread-safety on GUID insert/delete
Since 3.0, it is possible to assign a GUID to proxies, listeners and
servers. These objects are stored in a global tree guid_tree.

Proxies and listeners are static. However, servers may be added or
deleted at runtime, which implies that guid_tree must be protected. Fix
this by declaring a read-write lock to protect tree access.

For now, only guid_insert() and guid_remove() are protected using a
write lock. Outside of these, the GUID tree is not accessed at runtime.
If server CLI commands are extended to support GUID as a server
identifier, the lookup operation should be extended with a read lock
protection.
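
A minimal sketch of the protection, where the lock label and the
unlocked helper are illustrative placeholders, not the actual code:

  static HA_RWLOCK_T guid_lock;

  int guid_insert(enum obj_type *objt, const char *uid, char **errmsg)
  {
          int ret;

          /* tree updates may now happen at runtime (dynamic servers),
           * hence the write lock around the insertion.
           */
          HA_RWLOCK_WRLOCK(GUID_LOCK, &guid_lock);
          ret = __guid_insert(objt, uid, errmsg);
          HA_RWLOCK_WRUNLOCK(GUID_LOCK, &guid_lock);
          return ret;
  }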

Note that during stat-file preloading, the GUID tree is accessed for
lookups. However, as this is performed on startup, which is
single-threaded, there is no need for a lock here. A BUG_ON() has been
added to ensure this precondition remains true.

This bug could cause a segfault when using dynamic servers with GUID.
However, it has not been reproduced so far.

This must be backported up to 3.0. To avoid a conflict issue, the
previous cleanup patch can be merged before it.
2024-11-07 18:17:03 +01:00
Amaury Denoyelle
b70880cdc9 CLEANUP: guid: remove global tree export
guid_tree is not directly used outside of functions provided by the guid
module. Remove its export from the include file.
2024-11-07 17:20:00 +01:00
Aurelien DARRAGON
79a346aa28 MINOR: event_hdl: add event_hdl_sub_list_empty() helper func
event_hdl_sub_list_empty() may be used to know whether the subscription
list passed as argument is empty or not (ie: whether there currently are
any subscribers). It can be useful to know that the subscription list is
empty in order to avoid unnecessary preparation work and skip event
publishing, saving CPU time when we already know that no one is
interested in tracking the changes for a given subscription list.
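
A short usage sketch (the event publishing helper is a placeholder):

  /* only prepare and publish the event if someone subscribed */
  if (!event_hdl_sub_list_empty(&srv->e_subs))
          publish_srv_event(srv);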
2024-11-07 11:35:55 +01:00
Willy Tarreau
84dd05e7d8 DEBUG: wdt: add a stats counter "BlockedTrafficWarnings" in show info
Every time a warning is issued about traffic being blocked, let's
increment a global counter so that we can check for this situation
in "show info".
2024-11-06 18:35:42 +01:00
Willy Tarreau
148eb5875f DEBUG: wdt: better detect apparently locked up threads and warn about them
In order to help users detect when threads are behaving abnormally, let's
try to emit a warning when one is no longer making any progress. This will
allow catching faulty situations more accurately, instead of occasionally
triggering just after the long task. It will also let users know that there
is something wrong with their configuration, and inspect the call trace to
figure out whether they're using excessively long rules or Lua for example (the
usual warnings about lua-load vs lua-load-per-thread are still reported).

The warning will only be emitted for threads not yet marked as stuck so
as not to interfere with panic dumps and avoid sending a warning just
before a panic. A tainted flag is set when this happens however (0x2000).
2024-11-06 18:35:42 +01:00
Willy Tarreau
0950778b3a MINOR: debug: add a function to dump a stuck thread
There's currently no way to just emit a warning informing that a thread
is stuck without crashing. This is a problem because sometimes users
would benefit from this info to clean up their configuration (e.g. abuse
of map_regm, lua-load etc).

This commit adds a new function ha_stuck_warning() that will emit a
warning indicating that the designated thread has been stuck for XX
milliseconds, with a number of streams blocked, and will make that
thread dump its own state. The warning will then be sent to stderr,
along with some reminders about the impacts of such situations to
encourage users to fix their configuration.

In order not to disrupt operations, a local 4kB buffer is allocated
in the stack. This should be quite sufficient.

For now the function is not used.
2024-11-06 18:35:42 +01:00
Willy Tarreau
1f34a0fd27 CLEANUP: stats: fix misleading comment on top of stat_idx_info
The comment asks to update the "metrics_info" array, which does not
exist, instead it's called stat_cols_info[] and is in stats.c. Let's
mention all that to save time searching for the needed info.

Since no version seems to have ever known that "metrics_info", there is
no need to backport this, as it's only a comment.
2024-11-06 18:35:42 +01:00
Amaury Denoyelle
1767196d5b BUG/MINOR: quic: repeat packet parsing to deal with fragmented CRYPTO
A ClientHello may be split across several different CRYPTO frames, then
mixed in a single QUIC packet. This is used notably by clients such
as Chrome to render the first Initial packet opaque to middleboxes.

Each packet frame is handled sequentially. Out-of-order CRYPTO frames
are buffered in an ncbuf, until gaps are filled and data is transferred
to the SSL stack. If CRYPTO frames are heavily split into small
fragments, buffering may fail as ncbuf does not support small gaps. This
causes the whole packet to be rejected and unacknowledged. It could be
solved if the client re-emits its ClientHello after remixing its CRYPTO
frames.

This patch is written to improve CRYPTO frame parsing. Each CRYPTO
frame which cannot be buffered due to the ncbuf limitation is now stored
in a temporary list. Packet parsing is completed until all frames have
been handled. If the temporary list is not empty, reparsing is done on
the stored frames. With the newly buffered CRYPTO frames, the ncbuf
insert operation may this time succeed if a frame now covers a whole gap.
Reparsing will loop until either no progress can be made or it has been
done at least 3 times, to limit CPU utilization.
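
A simplified sketch of the reparsing loop described above, with
placeholder names for the retry list, the parsing helper and its result:

  struct quic_frame *frm, *back;
  int pass = 0, progress = 1;

  /* retry the frames rejected by the ncbuf until no progress is made
   * or a maximum of 3 passes is reached.
   */
  while (!LIST_ISEMPTY(&retry_list) && progress && pass < 3) {
          progress = 0;
          list_for_each_entry_safe(frm, back, &retry_list, list) {
                  if (handle_crypto_frm(qc, frm) == CRYPTO_RET_OK) {
                          /* the frame now covers a whole gap */
                          LIST_DELETE(&frm->list);
                          progress = 1;
                  }
          }
          pass++;
  }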

This patch should fix github issue #2776.

This should be backported up to 2.6, after a period of observation. Note
that it relies on the following refactor patches :
  MINOR: quic: extend return value of CRYPTO parsing
  MINOR: quic: use dynamically allocated frame on parsing
  MINOR: quic: simplify qc_parse_pkt_frms() return path
2024-11-06 14:29:14 +01:00
Amaury Denoyelle
d65e782c8c MINOR: quic: extend return value of CRYPTO parsing
qc_handle_crypto_frm() is the function used to handle a newly received
CRYPTO frame. Change its API to use a newly dedicated return type. This
allows reporting whether the frame was properly handled, ignored because
it was already parsed previously, or rejected after a fatal error.

This commit does not have any functional changes. However, it allows
simplifying the qc_handle_crypto_frm() API by removing <fast_retrans> as
an output parameter. Also, this patch will be necessary to support
multiple iterations of packet parsing for CRYPTO frames.
2024-11-06 14:28:14 +01:00
Aurelien DARRAGON
24dd7154a6 MINOR: http: don't %-encode the payload when not relevant
As reported by Pierre Maoui in GH #2477, it's not possible to render
control chars from variables or expressions verbatim in the payload part
of http-return statements. That's a problem because this part should not
require to be encoded at all (we could even imagine building favicons on
the fly for example).

In fact it is the LOG_OPT_HTTP option when passed as default options on
parse_logformat_string() which tells the log encoder that the payload
should be http-encoded using lf_chunk() instead of being printed using the
per-type encoder.

This option was set when parsing logformat expressions for lf-string
expression under http-return statements, as well as logformat expressions
for set-map action. While it is true that those actions may only be
used in an HTTP context, the LOG_OPT_HTTP logformat option is not relevant
there, because the payload is expected to be used without being encoded.

So let's simply get rid of this option when parsing logformat expressions
for set-map action key/value and lf-string from http-request return
action, and add a note next to LOG_OPT_HTTP option to indicate that it is
used to tell the log encoder that the payload should be HTTP-encoded.

Thanks to Pierre for having reported the issue and Willy for the
analysis and patch proposal.
2024-11-06 10:21:15 +01:00
Willy Tarreau
601b34fe7b MINOR: connection: add new sample fetch functions fc_err_name and bc_err_name
These functions return a symbolic error code such as ECONNRESET to keep
logs compact while making them human-readable. It's a good alternative
to the numeric code in that it's more expressive, and a good one to the
full message since it's shorter and more precise (some codes even match
errno names).
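
A hypothetical log-format line making use of these fetches:

   frontend fe
        log-format "%ci:%cp [%tr] %ft %ST fc_err=%[fc_err_name] bc_err=%[bc_err_name]"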

The doc was updated so that the symbolic names appear in the table. It
could be useful to backport this feature to help with troubleshooting
some issues, though backporting the doc might possibly be more annoying
in case users have local patches already, so maybe the table update does
not need to be backported in this case.
2024-11-05 18:57:43 +01:00
Willy Tarreau
00c383ff65 MINOR: connection: add more connection error codes to cover common errno
While we get reports of connection setup errors in fc_err/bc_err, we
don't have the equivalent for the recv/send/splice syscalls. Let's
add provisions for new codes that cover the common errno values that
recv/send/splice can return, i.e. ECONNREFUSED, ENOMEM, EBADF, EFAULT,
EINVAL, ENOTCONN, ENOTSOCK, ENOBUFS, EPIPE. We also add a special case
for when the poller reported the error itself. It's worth noting that
EBADF/EFAULT/EINVAL will generally indicate serious bugs in the code
and should not be reported.

The only thing is that it's quite hard to forcefully (and reliably)
trigger these errors in automated tests as the timing is critical.
Using iptables to manually reset established connections in the
middle of large transfers at least permits seeing some ECONNRESET
and/or EPIPE, but the other ones are harder to trigger.
2024-11-05 18:57:43 +01:00
Willy Tarreau
393957908b CLEANUP: connection: properly name the CO_ER_SSL_FATAL enum entry
It was the only one prefixed with "CO_ERR_", making it harder to batch
process and to look up. It was added in 2.5 by commit 61944f7a73 ("MINOR:
ssl: Set connection error code in case of SSL read or write fatal failure")
so it can be backported as far as 2.6 if needed to help integrate other
patches.
2024-11-05 18:57:42 +01:00
Willy Tarreau
b300db55f6 BUILD: compiler: define __builtin_prefetch() for tcc
We're using a few occurrences of __builtin_prefetch() but tcc doesn't
know about it so let's give it a dummy definition. Now the code builds
and works again with tcc without thread support.
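
The dummy definition boils down to something like the sketch below (the
exact guard used upstream may differ):

  #if defined(__TINYC__) && !defined(__builtin_prefetch)
  /* tcc has no __builtin_prefetch(): make it a no-op hint */
  #define __builtin_prefetch(addr, ...) do { (void)(addr); } while (0)
  #endif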
2024-11-05 15:43:17 +01:00
Willy Tarreau
033db091fc BUILD: import/mt_list: support building with TCC
TCC is often convenient to quickly test builds, run CI tests etc. It has
limited thread support (e.g. no thread-local stuff) but that is often
sufficient for testing. TCC lacks __atomic_exchange_n() but has the
exactly equivalent __atomic_exchange(), and doesn't have any barrier.
For this reason we force the atomic_exchange to use the stricter SEQ_CST
mem ordering, which allows the barrier to be ignored.

[wt: that's upstream commit ca8b865 ("BUILD: support building with TCC")]
2024-11-05 15:43:17 +01:00
William Lallemand
e75a019fba MINOR: startup: tune.renice.{startup,runtime} allow to change priorities
This commit introduces the tune.renice.startup and tune.renice.runtime
global keywords that allow changing the priority with setpriority().

tune.renice.startup is parsed and applied in the worker or the standalone
process for configuration parsing. If this keyword is used alone, the
nice value is changed to the previous one after configuration parsing.

tune.renice.runtime is applied after configuration parsing, so in the
worker or a standalone process. Combined with tune.renice.startup it
allows to have a different nice value during configuration parsing and
during runtime.

The feature was discussed in github issue #1919.

Example:

   global
        tune.renice.startup 15
        tune.renice.runtime 0
2024-11-04 17:48:58 +01:00
Christopher Faulet
64554a55f4 MINOR: stream: Add http-buffer-request option in the waiting entities
When the http-buffer-request option is set on a proxy, the processing is
paused to wait for the full request payload or a full buffer. So it is an
entity that blocks the processing, just like a rule or a filter that
yields. So now, it is reported as a waiting entity if an error or a
timeout occurs.

To do so, a stream entity type is added for this option. There is no
pointer, and the "waiting_entity" sample fetch returns the option name.
2024-10-31 20:24:50 +01:00
Christopher Faulet
c64712b085 MINOR: stream: Use an enum to identify last and waiting entities for streams
Instead of using 1 for last/waiting rule and 2 for last/waiting filter, an
enum is used. It is less ambiguous this way.
2024-10-31 20:24:37 +01:00
Christopher Faulet
537f20eb3e MINOR: stream: Save the entity waiting to continue its processing
When a rule or a filter yields because it waits for something to be able to
continue its processing, this entity is saved in the stream. If an error or
a timeout occurred, info on this entity may be retrieved via the
"waiting_entity" sample fetch, for instance to dump it in the logs. This
info may be useful to find the root cause of some bugs because it is a way
to know that the processing was temporarily stopped. This may explain
timeouts, for instance.
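
A hypothetical way to expose it in the logs once documented (the format
string is only an example):

   frontend fe
        log-format "%ci [%tr] %ft %ST waiting=%[waiting_entity]"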

The sample fetch is not documented yet.
2024-10-31 16:40:09 +01:00
Christopher Faulet
53de6da1c0 MINOR: stream: Save the last filter evaluated interrupting the processing
It is very similar to the last evaluated rule. When a filter returns an
error that interrupts the processing, it is saved in the stream, in the
last_entity field, with the type 2. The pointer to the filter config is
saved. This pointer never changes during runtime and is part of the
proxy's structure. It is an element of the filter_configs list in the
proxy structure.

The "last_entity" sample fetch was updated accordingly. The filter
identifier is returned, if defined. Otherwise the saved pointer is
returned.
2024-10-31 16:39:04 +01:00
Christopher Faulet
c9fa78e747 MINOR: stream: Replace last_rule_file/line fields by a more generic field
The last evaluated rule is now saved in a generic structure, named
last_entity, with a type to identify it. The idea is to be able to store
other kind of entity that may interrupt a specific processing.

The type of the last evaluated rule is set to 1. It will be replaced later
by an enum to be more explicit. In addition, the pointer to the rule itself
is saved instead of its location.

The sample fetch "last_entity" was added to retrieve the information about
it. In this case, it is the rule location: the config file containing the
rule followed by the line where the rule is defined, separated by a
colon. This sample fetch is not documented yet.
2024-10-31 16:36:39 +01:00