Commit 44537379fc ("MINOR: tools: add errname to print errno macro
name") brought a facility to report errno using a symbolic string
when known instead of showing only the value. However, among the
listed options, ETIME is mentioned but is unknown on FreeBSD, where
it breaks the build. Let's simply drop it; we don't use ETIME anyway,
and even if it were reported, the default code path still reports the
numeric value, so there's no harm. If other entries fail to build in
the future, they can be handled the same way.
Thanks to the previous patch, it is now possible to explicitly rely on
the stream's events to shut it down. The right event is set in
stream_shutdown(), before waking up the stream, via an atomic operation. In
process_stream(), this event will be handled as expected.
Thus, the TASK_F_UEVT* flags are no longer used. They are not removed
though, since they remain usable for other tasks.
This patch depends on "MEDIUM: stream: Map task wake up reasons to dedicated
stream events".
To fix thread-safety issues when a stream must be shut, three new task
states were added. These states are generic (UEVT1, UEVT2 and UEVT3);
the task callback function is responsible for deciding what to do with
them. However, this is not really scalable.
The best approach is to use an atomic field in the stream structure
itself to deal with these dedicated events. There is already the
"pending_events" field, which saves wake-up reasons (TASK_WOKEN_*) so
they are not lost if process_stream() is interrupted before it has had
a chance to handle them.
So the idea is to introduce a new field to handle the stream's dedicated
events and merge them with the task's wake-up reasons used by the
stream. This means a mapping must be performed between some task wake-up
reasons and stream events. Note that not all task wake-up reasons will
be mapped.
In this patch, the "new_events" field is introduced. It is an atomic
bit-field. Streams events (STRM_EVT_*) are also introduced to map the task
wake up reasons used by process_stream(). Only TASK_WOKEN_TIMER and
TASK_WOKEN_MSG are mapped, in addition to TASK_F_UEVT* flags. In
process_stream(), "pending_events" field is now filled with new stream
events and the mapping of the wake up reasons.
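As a rough illustration of the mechanism (a minimal sketch: the flag
values, names and types below are simplified, not the ones from the
patch):

    #include <stdatomic.h>

    #define STRM_EVT_TIMER  0x00000001U  /* mapped from TASK_WOKEN_TIMER */
    #define STRM_EVT_MSG    0x00000002U  /* mapped from TASK_WOKEN_MSG */
    #define STRM_EVT_SHUT   0x00000004U  /* dedicated stream event */

    struct stream_sketch {
            _Atomic unsigned int new_events;  /* async-safe event bits */
            unsigned int pending_events;      /* consumed by the handler */
    };

    /* may be called from any thread before waking the stream's task */
    static void strm_event_report(struct stream_sketch *s, unsigned int evts)
    {
            atomic_fetch_or(&s->new_events, evts);
    }

    /* in process_stream(): merge the mapped wake-up reasons with the
     * events atomically consumed from new_events */
    static void strm_event_consume(struct stream_sketch *s, unsigned int mapped)
    {
            s->pending_events |= mapped | atomic_exchange(&s->new_events, 0);
    }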
The shutdown-backup-sessions action of the on-marked-up directive does
not work anymore since the stream_shutdown() function was modified to be
async-safe.
When stream_shutdown() was modified to be async-safe, dedicated task
events were added to map the reasons to shut a stream down. SF_ERR_DOWN
was mapped to TASK_F_UEVT1 and SF_ERR_KILLED to TASK_F_UEVT2. The
reverse mapping was
performed by process_stream() to shut the stream with the appropriate reason.
However, SF_ERR_UP reason, used by shutdown-backup-sessions action to shut a
stream down because a preferred server became available, was not mapped in the
same way. So since commit b8e3b0a18d ("BUG/MEDIUM: stream: make
stream_shutdown() async-safe"), this action is ignored and does not work
anymore.
To fix the issue, and to be able to backport the fix, a third task event
was added: TASK_F_UEVT3 is now mapped to SF_ERR_UP.
This patch should fix issue #2848. It must be backported as far as 2.6.
For both servers and proxies, use one connection queue per thread-group,
instead of only one. Having only one can lead to severe performance
issues on NUMA machines, it is actually trivial to get the watchdog to
trigger on an AMD machine, having a server with a maxconn of 96, and an
injector that uses 160 concurrent connections.
We now have one queue per thread group. However, when dequeuing, we
dequeue up to MAX_SELF_USE_QUEUE (currently 9) pendconns from our own
queue before dequeuing one from another thread group, if available, to
make sure every group keeps making progress.
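The dequeuing rule may be sketched as follows (a minimal sketch with
illustrative types, not the actual implementation):

    #define MAX_SELF_USE_QUEUE 9

    struct grp_q {
            int len;          /* pendconns waiting in this group's queue */
            int self_served;  /* consecutive local dequeues */
    };

    /* pick the thread group to dequeue from: prefer our own queue, but
     * after MAX_SELF_USE_QUEUE local dequeues, serve another group once
     * so that no group is left waiting forever */
    static int pick_grp(struct grp_q *q, int my_grp, int nb_grp)
    {
            int g;

            if (q[my_grp].len && q[my_grp].self_served < MAX_SELF_USE_QUEUE) {
                    q[my_grp].self_served++;
                    return my_grp;
            }
            q[my_grp].self_served = 0;
            for (g = 0; g < nb_grp; g++)
                    if (g != my_grp && q[g].len)
                            return g;
            return q[my_grp].len ? my_grp : -1;
    }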
For both proxies and servers, properly calculate queueslength, the total
number of elements across all queues (as proxies and servers currently
use only one queue, it is equal to that queue's length), and use it
instead of the queue's length.
Add a per-thread-group queue and associated fields to the per-thread-group
field in struct server, as well as a new field, queueslength.
This is currently unused, so should change nothing.
Add a per-thread group field to struct proxy, that will contain a struct
queue, as well as a new field, "queueslength".
This is currently unused, so should change nothing.
Please note that proxy_init_per_thr() must now be called for each proxy
once the thread groups number is known.
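The resulting layout may be pictured as follows (a hedged sketch: the
field and type names are approximations of what these two patches add):

    struct list { struct list *n, *p; };

    struct queue {
            struct list head;     /* pending connections (pendconns) */
            unsigned int length;  /* number of elements in this queue */
    };

    /* per-thread-group part of struct proxy (similar for servers) */
    struct per_tgrp {
            struct queue queue;
    };

    struct proxy_sketch {
            struct per_tgrp *per_tgrp;  /* one entry per thread group */
            unsigned int queueslength;  /* sum of all per-group lengths */
    };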
A few times in the past we've seen cases where epoll was caught reporting
a wrong event that caused trouble (e.g. spuriously reporting HUP or RDHUP
after a successful connect()). The new tune.epoll.mask-events directive
permits masking events such as ERR, HUP and RDHUP and converting them to
IN events that are processed by the regular receive path. This should
help better diagnose and troubleshoot such issues, as well as rule out
such a cause when similar issues are reported:
  https://github.com/haproxy/haproxy/issues/2368
  https://www.spinics.net/lists/netdev/msg876470.html
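For example (only the directive name comes from this patch; the exact
value syntax below is an assumption):

    global
        tune.epoll.mask-events err,hup,rdhup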
It should be harmless to backport this if necessary.
acl_match_cond() combines acl_exec_cond() + acl_pass() and a check on
the condition->pol (to check if the cond is inverted) in order to return
either 0 if the cond doesn't match or 1 if it matches (or if the cond is
NULL).
Thanks to this we can actually simplify some redundant constructs that
iterate over rules and evaluate if the condition matches or not.
Conditions for tcp-request inspect-content and tcp-response
inspect-content couldn't be simplified because they perform an extra
check for missing data, and thus still need to leverage acl_exec_cond().
It's best to display the patch using "-w", like "git show xxxx -w",
because some blocks had to be re-indented after the cleanup, which
makes the patch hard to review by default.
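The simplified pattern looks roughly like this (the call-site variables
are illustrative):

    /* before: the same three steps repeated at each call site */
    ret = acl_exec_cond(rule->cond, px, sess, strm, opt);
    ret = acl_pass(ret);
    if (rule->cond->pol == ACL_COND_UNLESS)
            ret = !ret;

    /* after: a single call */
    ret = acl_match_cond(rule->cond, px, sess, strm, opt);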
As ssl_sock_gencert_load_ca and ssl_sock_gencert_free_ca are compiled
only if SSL_NO_GENERATE_CERTIFICATES is not defined, let's align things
and move these declarations into ssl_gencert.h.
ssl_sock_load_ca is defined in ssl_gencert.c and compiled only if
SSL_NO_GENERATE_CERTIFICATES is not defined. Its name is a bit
confusing, as one may think at first glance that it's a generic function
which is also used to load the CA file provided via the 'ca-file'
keyword; ssl_set_verify_locations_file is used in that case.
So let's rename ssl_sock_load_ca to ssl_sock_gencert_load_ca. The same
is applied to ssl_sock_free_ca.
ssl_sock_load_ca and ssl_sock_free_ca definitions are compiled only if
SSL_NO_GENERATE_CERTIFICATES is not set. When this define is set and
haproxy is built, the linker throws an error. So, let's fix this.
This should be backported to all stable versions.
Released version 3.2-dev4 with the following main changes :
- BUG/MINOR: stktable: fix big-endian compatiblity in smp_to_stkey()
- MINOR: stktable: add stkey_to_smp() helper
- MINOR: stktable: add stksess_getkey() helper
- MINOR: stktable: add sc[0-2]_key fetches
- BUG/MEDIUM: queues: Adjust the proxy counters when appropriate
- MINOR: trace: add help message for -dt argument
- MINOR: trace: ensure -dt priority over traces config section
- MINOR: trace: support all source alias on -dt
- BUG/MINOR: quic: reject NEW_TOKEN frames from clients
- MINOR: stktable: fix potential build issue in smp_to_stkey
- BUG/MEDIUM: stktable: fix missing lock on some table converters
- BUG/MEDIUM: promex: Use right context pointers to dump backends extra-counters
- MINOR: stktable: fix potential build issue in smp_to_stkey (2nd try)
- MINOR: stktable: add smp_fetch_stksess() helper function
- MEDIUM: stktable: split src-based key smp_fetch_sc functions
- MEDIUM: stktable: split sc_ and src_ fetch lookup logics
- MEDIUM: stktable: leverage smp_fetch_* helpers from sample conv
- DOC: config: unify sample conv|fetches optional arguments syntax
- DOC: config: stick-table converters support implicit <table> argument
- DOC: config: stick-table converter do accept ANY-typed input
- DOC: config: clarify return type for some stick-table converters
- DOC: config: refer to canonical sticktable converters for src_* fetches
- CLEANUP: stktable: move sample_conv_table_bytes_out_rate()
- MINOR: stktable: add table_{inc,clr}_gpc* converters
- BUG/MAJOR: quic: reject too large CRYPTO frames
- BUG/MAJOR: log/sink: possible sink collision in sink_new_from_srv()
- BUG/MINOR: init: set HAPROXY_STARTUP_VERSION from the variable, not the macro
- REORG: version: move the remaining BUILD_* stuff from haproxy.c to version.c
- BUG/MINOR: quic: ensure a detached coalesced packet can't access its neighbours
- MINOR: quic: Add a BUG_ON() on quic_tx_packet refcount
- BUILD: quic: Move an ASSUME_NONNULL() for variable which is not null
- BUG/MEDIUM: mux-h1: Properly close H1C if an error is reported before sending data
- CLEANUP: quic: remove unused prototype
- MINOR: quic: rename pacing_rate cb to pacing_inter
- BUG/MINOR: quic: do not increase congestion window if app limited
- MINOR: mux-quic: increment pacing retry counter on expired
- MEDIUM: quic: implement credit based pacing
- MEDIUM: mux-quic: reduce pacing CPU usage with passive wait
- MEDIUM: quic: use dynamic credit for pacing
- MINOR: quic: remove unused pacing burst in bind_conf/quic_cc_path
- MINOR: quic: adapt credit based pacing to BBR
- MINOR: tools: add errname to print errno macro name
- MINOR: debug: debug_parse_cli_show_dev: use errname
- MINOR: debug: show boot and runtime process settings in table
Let's use errname, introduced in the previous commit, in the output of
"show dev". This output is destined for engineers, so there is no need
to provide the long errno descriptions given by strerror.
Add a helper to print the name of an errno's corresponding macro, for
example "EINVAL" for errno=22. This may be helpful for debugging and for
use in the output of some CLI commands. The switch-case in errname()
contains only the errnos currently used in the code, so it will need to
be extended if new syscalls start being used.
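A minimal sketch of the idea (the real helper covers many more errno
values):

    #include <errno.h>

    /* return the macro name of a known errno, or NULL so that the
     * caller can fall back to the numeric value */
    static const char *errname(int err)
    {
            switch (err) {
            case EINVAL: return "EINVAL";
            case ENOENT: return "ENOENT";
            case EAGAIN: return "EAGAIN";
            default:     return NULL;
            }
    }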
Credit-based pacing has been further refined to dynamically calculate
the burst size based on congestion parameters. However, the BBR
algorithm already provides a pacing rate and a burst size (labelled as
send_quantum) for 1ms of emission.
Adapt quic_pacing_reload() to use the BBR values to compute the pacing
credit. This is done via the pacing_burst callback, which is now only
defined for BBR. For other algorithms, the burst size is determined over
1ms from the congestion window size and the RTT.
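For non-BBR algorithms, the fallback computation may be sketched like
this (names and units are illustrative):

    #include <stdint.h>

    /* packets allowed over 1ms: the window, in packets, spread over
     * one smoothed RTT expressed in milliseconds */
    static uint64_t burst_1ms(uint64_t cwnd_bytes, uint64_t srtt_ms,
                              uint64_t mtu)
    {
            uint64_t wnd_pkts = cwnd_bytes / mtu;
            uint64_t burst = wnd_pkts / (srtt_ms ? srtt_ms : 1);
            return burst ? burst : 1;  /* always allow one packet */
    }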
This should be backported up to 3.1.
Pacing burst size is now dynamic. As such, configuration value has been
removed and related fields in bind_conf and quic_cc_path structures can
be safely removed.
This should be backported up to 3.1.
Major improvements have been introduced in pacing recently. Most
notably, QMUX schedules emission with a millisecond resolution, which
allows using a passive wait that is much more CPU-friendly.
However, an issue remains with the pacing max credit. Unless BBR is
used, it is fixed to the configured value from quic-cc-algo bind
statement. This is not practical: if too low, it may drastically reduce
performance due to the 1ms sleep resolution; if too high, some clients
will suffer from too much packet loss.
This commit fixes the issue by implementing a dynamic maximum credit
value based on the network conditions specific to each client.
The calculation fixes a maximum value which should allow the current
QMUX tasklet context to emit enough data to cover the delay until the
next tasklet invocation. As such, avg_loop_us is used to detect the
process load. If it is too small, 1.5ms is used as a minimal value, to
cover the extra delay incurred by the system for a default 1ms sleep.
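The computation may be sketched as follows (a simplification with
illustrative names and units):

    #include <stdint.h>

    /* maximum credit in packets: enough to cover the expected delay
     * until the next tasklet run, with a 1.5ms floor accounting for
     * the default 1ms sleep resolution plus system overhead */
    static uint64_t max_credit(uint64_t avg_loop_us, uint64_t pkt_rate_per_s)
    {
            uint64_t delay_us = avg_loop_us < 1500 ? 1500 : avg_loop_us;
            uint64_t pkts = pkt_rate_per_s * delay_us / 1000000ULL;
            return pkts ? pkts : 1;
    }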
This should be backported up to 3.1.
The pacing algorithm has been revamped in the previous commit to
implement a credit-based solution. This is a far more adaptive solution,
which in particular allows catching up when the pause between pacing
emissions was longer than expected.
This allows QMUX to remove the active loop based on tasklet wake-up.
Instead, a new task is used when emission should be paced. The main
advantage is that CPU usage is drastically reduced.
The new pacing task timer is reset each time qcc_io_send() is invoked.
The timer will be set only if the pacing engine reports that emission
must be interrupted. In this case the timer is set via
qcc_wakeup_pacing() to the delay reported by the congestion algorithm,
or to 1ms if the delay is too short. At the end of qcc_io_cb(), the
pacing task is queued if the timer has been set.
Pacing task execution is simple enough: it immediately wakes up the QCC
I/O handler.
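The timer arming may be sketched as follows (the function name comes
from this commit; the structure and helper shape are assumptions):

    #include <stdint.h>

    struct pacing_sketch {
            uint64_t task_expire_ms;  /* absolute wake-up date */
    };

    /* arm the pacing task at the delay reported by the congestion
     * algorithm, with a 1ms floor matching the scheduler resolution */
    static void qcc_wakeup_pacing(struct pacing_sketch *p, uint64_t now_ms,
                                  uint64_t delay_ns)
    {
            uint64_t delay_ms = delay_ns / 1000000ULL;
            p->task_expire_ms = now_ms + (delay_ms ? delay_ms : 1);
    }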
Note that to have decent performance, it requires a large enough burst
to be defined in the quic-cc-algo configuration. However, this value is
common to every client of a listener, which may cause too much loss
under certain network conditions. This will be addressed in a future
patch.
This should be backported up to 3.1.
Implement a new method for QUIC pacing emission based on credit. This
represents the number of packets which can be emitted in a single burst.
After emission, the number of emitted packets is decremented from the
credit. Several emissions can be conducted in the same sequence until
the credit is completely consumed.
When a new emission sequence is initiated (i.e. under a new QMUX tasklet
invocation), the credit is refilled according to the delay that occurred
between the last and the current emission contexts.
The main advantage of this new mechanism is that it allows conducting
several emissions in the same task context without having to wait
between each invocation. A wait is only forced if pacing is expired,
which is now equivalent to having a null credit.
Furthermore, if the delay between two emission sequences is smaller than
expected, the credit is only partially refilled. This allows restarting
emission without having to wait for the whole credit to be available.
On the implementation side, a new field <credit> is available in the
quic_pacer structure. It is automatically decremented on
quic_pacing_sent_done() invocation. Also, a new function,
quic_pacing_reload(), must be used by the QUIC MUX when a new emission
sequence is initiated to refill the credit. The <next> field from
quic_pacer has been removed.
For the moment, credit is based on the burst configured via quic-cc-algo
keyword, or directly reported by BBR.
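The accounting may be sketched as follows (a simplification under
assumed field names; the real work is done by quic_pacing_reload() and
quic_pacing_sent_done()):

    #include <stdint.h>

    struct pacer_sketch {
            uint64_t credit;        /* packets which may be sent now */
            uint64_t max_credit;    /* burst ceiling */
            uint64_t inter_ns;      /* inter-packet delay, non-null */
            uint64_t last_sent_ns;  /* date of the last emission */
    };

    /* on a new emission sequence: convert the elapsed time into newly
     * earned packets, capped to the maximum burst */
    static void pacing_reload(struct pacer_sketch *p, uint64_t now_ns)
    {
            uint64_t earned = (now_ns - p->last_sent_ns) / p->inter_ns;

            p->credit += earned;
            if (p->credit > p->max_credit)
                    p->credit = p->max_credit;
            p->last_sent_ns = now_ns;
    }

    /* after emission: consume the credit; a null credit means pacing
     * is expired and emission must pause */
    static void pacing_sent_done(struct pacer_sketch *p, uint64_t sent)
    {
            p->credit = sent < p->credit ? p->credit - sent : 0;
    }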
This should be backported up to 3.1.
A field <paced_sent_ctr> of the quic_pacer structure is used to report
the number of occurrences where emission was interrupted due to pacing.
However, it was not incremented when the QUIC MUX had to immediately
pause emission because pacing had not expired yet.
Fix this by incrementing <paced_sent_ctr> in qcc_io_send() prior to
emission if pacing is expired. Note that the incrementation is only done
once even if the tasklet is then repeatedly woken up until the timer
expires.
This should be backported up to 3.1.
Previously, the congestion window was increased each time a new
acknowledgment was received. However, it did not take into account the
window filling level. In a network condition with negligible loss, this
will cause the window to be incremented until the maximum value (by
default 480k), even though the application does not have enough data to
fill it.
In most cases, this issue is not noticeable. However, it may lead to
excessive memory consumption when a QUIC connection is suddenly
interrupted, as in this case haproxy will fill the window with
retransmissions. It has even caused OOM crashes when thousands of
clients were interrupted at once on a local network benchmark.
Fix this by first checking the window filling level prior to every
incrementation, via a new helper function quic_cwnd_may_increase(). It
was arbitrarily decided that the window must be at least 50% full when
the ACK is handled before incrementing it. This value is a good
compromise to keep the window in check while still allowing fast
increments when needed.
Note that this patch only concerns the cubic and newreno algorithms. BBR
already has its notion of application-limited which ensures the window
is only incremented when necessary.
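The heuristic boils down to a predicate of this shape (the function name
comes from the commit; the signature is simplified):

    #include <stdint.h>

    /* allow window growth only when the window is at least half full
     * at the time the ACK is handled */
    static inline int quic_cwnd_may_increase(uint64_t cwnd, uint64_t in_flight)
    {
            return 2 * in_flight >= cwnd;
    }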
This should be backported up to 2.6.
Rename one of the congestion algorithms' pacing callbacks from
pacing_rate to pacing_inter. This better reflects that this function
returns a delay (in nanoseconds) which should be applied between each
packet emission to fill the congestion window with a perfectly smoothed
emission.
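In other words, a perfectly smoothed emission spreads the whole window
over one RTT, which gives an inter-packet delay along these lines
(illustrative sketch):

    #include <stdint.h>

    /* delay between two packets to emit cwnd/mtu packets in one SRTT */
    static uint64_t pacing_inter_ns(uint64_t srtt_ns, uint64_t cwnd,
                                    uint64_t mtu)
    {
            uint64_t wnd_pkts = cwnd / mtu;
            return wnd_pkts ? srtt_ns / wnd_pkts : srtt_ns;
    }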
This should be backported up to 3.1.
It is possible to have front H1 connections waiting for the client
timeout while they should be closed, because a connection error was
reported before sending an error message to the client. It is not a
leak, because the connections are closed when the timeout expires, but
it is a waste of resources, especially if the client timeout is high.
When an early error message must be sent to the client, if an error was
already detected, no data are sent and the output buffer is released. At
this stage, the H1 connection is in CLOSING state and it must be
released. But because of a bug, this is not performed. The client timeout is
rearmed and the H1 connection is only closed when it expires.
To fix the issue, the condition to close a H1C must also be evaluated when
an error is detected before sending data.
It is only an issue with idle client connections, because there is no H1
stream in that case and the error message is generated by the mux itself.
This patch must be backported as far as 2.8.
Some new compilers warn that the <oldest_lost> variable can be null,
even though this cannot be the case, as mentioned in the comment next to
an already present ASSUME_NONNULL() call:
src/quic_loss.c: In function ‘qc_release_lost_pkts’:
src/quic_loss.c:307:86: error: potential null pointer dereference [-Werror=null-dereference]
307 | unsigned int period = newest_lost->time_sent_ms - oldest_lost->time_sent_ms;
| ~~~~~~~~~~~^~~~~~~~~~~~~~
Move this ASSUME_NONNULL() statement up to please these compilers.
Must be backported as far as 2.6 to ease any further backports around
this code part.
It is definitely a bug to call quic_tx_packet_refdec() to decrement the
reference counter of a TX packet, and possibly to release its memory,
when this counter is already null or negative.
This counter is incremented when a TX frame with some allocated memory
is attached to the packet, and when the packet is inserted into a data
structure (list or tree), if needed.
Should be easily backported as far as 2.6 to ease any further backport around
this code part.
Reset the ->prev and ->next fields of a coalesced TX packet to ensure it
cannot access its neighbours again after it is supposed to have been
detached from them by quic_tx_packet_dgram_detach().
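The detach operation thus ends along these lines (a sketch; only the
->prev/->next reset is the point of the fix):

    struct pkt_sketch {
            struct pkt_sketch *prev, *next;
    };

    /* unlink pkt from its coalesced neighbours, then forget them so a
     * later pass over pkt cannot reach freed packets */
    static void tx_packet_detach(struct pkt_sketch *pkt)
    {
            if (pkt->prev)
                    pkt->prev->next = pkt->next;
            if (pkt->next)
                    pkt->next->prev = pkt->prev;
            pkt->prev = NULL;
            pkt->next = NULL;
    }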
There are two cases where a packet can be coalesced with a previously
built one: when it is built into the same datagram without GSO (and
flagged with QUIC_FL_TX_PACKET_COALESCED), or when it is sent from the
same sendto() syscall with GSO (not flagged with
QUIC_FL_TX_PACKET_COALESCED).
This fix may be related to GH #2839.
Must be backported as far as 2.6.
version.c tries to centralize all variables conveying version information,
but there's still an issue with the BUILD_* variables which are only
passed to haproxy.o and are only updated when that one is rebuilt. This
is not very logical given that we can end up with values there which
contradict info from version.c.
Better move all of these to version.c which is systematically rebuilt.
Most of these variables only end up as string concatenation at the
moment. Some of them are even duplicated. In version.c we now have one
variable (or constant) for each of them and haproxy.c references them
in messages. This is much more logical and easier to maintain in a
consistent state.
The patch looks a bit large but it really only moves the ifdefed string
assignment from one file to another, placing them into variables.
This environment variable was added by commit d4c0be6b20 ("MINOR: startup:
HAPROXY_STARTUP_VERSION contains the version used to start"). However, it's
set from the macro that is passed during the build process instead of being
set from the variable that's kept up to date in version.c. The difference
is visible only during debugging/bisecting because only changed files and
version.o are rebuilt, but not necessarily haproxy.o, which is where the
environment variable is set. This means that the version exposed in the
environment is not necessarily the same as the one presented in
"haproxy -v" during such debugging sessions.
This should be backported to 2.8. It has no impact at all on regularly
built binaries.
sink_new_from_srv() leverages sink_new_buf() with the server id as name,
sink_new_buf() then calls __sink_new() with the provided name.
Unfortunately sink_new() is designed in such a way that it will first look
up in the list of existing sinks to check if a sink already exists with
given name, in which case the existing sink is returned. While this
behavior may be error-prone, it is actually up to the caller to ensure
that the provided name is unique if it really expects a unique sink
pointer.
Due to this bug in sink_new_from_srv(), multiple tcp servers with the same
name defined in distinct log backends would end up sharing the same sink,
which means messages sent to one of the servers would also be forwarded to
all servers with the same name across all log backend sections defined in
the config, which is obviously an issue and could even raise security
concerns.
Example:

    defaults
        log backend@log-1 local0

    backend log-1
        mode log
        server s1 127.0.0.1:514

    backend log-2
        mode log
        server s1 127.0.0.1:5114
With the above config, logs sent to log-1/s1 would also end up being sent
to log-2/s1 due to server id "s1" being used for tcp servers in distinct
log backends.
To fix the issue, we now prefix the sink name with the backend name: the
back_name/srv_id combination is known to be unique (the backend name
serves as a namespace).
This bug was reported by GH user @landon-lengyel under #2846.
UDP servers (with udp@ prefix before the address) are not affected as they
don't make use of the sink facility.
As a workaround, one should manually ensure that all tcp servers across
different log backends (backends with "mode log" enabled) use unique
names.
This bug was introduced in e58a9b4 ("MINOR: sink: add sink_new_from_srv()
function") thus it exists since the introduction of log backends in 2.9,
which means this patch should be backported up to 2.9.
Received CRYPTO frames are inserted in a ncbuf to handle out-of-order
reception via ncb_add(). They are stored at the position given by the
frame offset, minus a base offset which corresponds to the in-order data
length already handled.
Previously, no check was implemented on the frame offset value prior to
ncb_add(), which could easily trigger a crash if the relative offset was
too large. Fix this by first ensuring that the frame can be stored in
the buffer before the ncb_add() invocation. If this is not the case, the
connection is closed with error CRYPTO_BUFFER_EXCEEDED, as required by
the QUIC specification.
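The added check boils down to something like this (an illustrative
sketch; the exact variable names differ):

    #include <stdint.h>
    #include <stddef.h>

    /* return 0 if the frame data cannot fit in the crypto ncbuf, in
     * which case the connection must be closed with
     * CRYPTO_BUFFER_EXCEEDED */
    static int crypto_frm_can_fit(uint64_t frm_off, uint64_t frm_len,
                                  uint64_t base_off, size_t buf_size)
    {
            if (frm_off + frm_len < frm_off)   /* offset overflow */
                    return 0;
            if (frm_off + frm_len <= base_off) /* already handled data */
                    return 1;
            return frm_off + frm_len - base_off <= buf_size;
    }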
This should fix github issue #2842.
This must be backported up to 2.6.
As discussed in GH #2423, there are some cases where src_{inc,clr}_gpc*
is not sufficient because we need to perform the lookup on a specific
key. Indeed, just like we did in e642916 ("MEDIUM: stktable: leverage
smp_fetch_* helpers from sample conv"), we can easily implement new
table converters based on existing fetches. This is what we do in
this patch.
Also the doc was updated so that src_{inc,clr}_gpc* fetches now point to
their generic equivalent table_{inc,clr}_gpc*. Indeed, src_{inc,clr}_gpc*
are simply aliases.
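For example, this now makes it possible to key on something other than
src (a hedged sketch: the table name, header and threshold are
illustrative):

    backend api
        stick-table type string size 1m expire 10m store gpc0
        # bump gpc0 using a header value as the key instead of src
        http-request deny if { hdr(x-api-key),table_inc_gpc0(api) gt 100 }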
This should fix GH #2423.
sample_conv_table_bytes_out_rate() was defined in the middle of other
stick-table sample convs without any ordering logic. Let's put it
where it belongs, right after sample_conv_table_bytes_in_rate().
When available, to prevent doc duplication, let's make the src_* fetches
point to the equivalent table_* converters, as they are in fact aliases
for the "src,table_*" combination.
Some stick-table converters such as "table_gpt" erroneously suggest that
the returned type is a boolean while in fact it is an integer, as
properly documented for the equivalent sample fetches.
Since 2d17db58 ("MINOR: stick-table: change all stick-table converters'
inputs to SMP_T_ANY"), all stick-table converters accept ANY input
type as parameter; this means that it no longer restricts the key to a
string representation of the input. However, the doc wasn't updated when
the change was made. Moreover, some converters document the updated
behavior while others don't, which is confusing; let's fix that.
As with stick-table sample fetches, the <table> argument is not strictly
needed and defaults to the current proxy's stick-table when not
provided.
Let's update the doc and prototype to reflect the current behavior.
The most common way (and the proper way, it seems) to declare optional
arguments in a sample fetch or converter's prototype is to declare them
between square brackets, including the leading comma (because the comma
should be omitted if the argument is not provided). Also, when multiple
optional arguments are found, we should apply the same logic
but recursively.
In this patch we fix prototypes that include optional arguments and don't
follow this syntax. This improves readability and sets the norm for
upcoming sample fetches/converters.
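For instance, a prototype with one mandatory and two nested optional
arguments is now written like this (hypothetical fetch name):

    some_fetch(<arg1>[,<arg2>[,<arg3>]])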
In this patch we try to prevent code duplication: some fetches and sample
converters do the exact same thing, except that the converter takes the
argument as input data. Until now, both the converter and the fetch had
their own (copy-pasted) implementation, with the fetch-specific or
converter-specific lookup part.
Thanks to previous commits, we now have generic sample fetch helpers
that take the stkctr as argument, so let's leverage them directly from
the converter functions when available. This allows removing a lot of
code duplication and should make code maintenance easier in the future.
While this patch adds more insertions than deletions, it actually tries
to simplify the lookup logic for the sc_ and src_ sticktable fetches.
Indeed, the smp_create_src_stkctr() and smp_fetch_sc_stkctr()
combination was used everywhere the fetch supports the sc_ and src_
forms, and smp_fetch_sc_stkctr() even integrated some of the
src-oriented fetch logic.
Not only this was confusing, but it made the task of adding new generic
fetches even more complex.
Thus in this patch we completely dedicate smp_fetch_sc_stkctr() to sc_
oriented fetches, while smp_create_src_stkctr() is now renamed to
smp_fetch_src_stkctr() and can now work on its own for src_ oriented
fetches. It takes an additional parameter, "create", to tell the
function whether the entry should be created if it doesn't exist yet.
Now it's up to the calling function to know if it should be using the
sc_ oriented fetch or the src_ oriented one based on the input keyword.
In this patch we split several sample fetch functions that are leveraged
by the "src-" fetches such as smp_fetch_sc_inc_gpc().
Indeed, for all of them, we add an intermediate helper function that takes
a stkctr pointer as parameter and performs the logic, leaving the lookup
part in the calling function. Before this patch, these functions were
doing the lookup plus the fetch logic. Thanks to this patch, it will
become easier to add generic converters taking the lookup key as input.
List of targeted functions:
- smp_fetch_sc_inc_gpc()
- smp_fetch_sc_inc_gpc0()
- smp_fetch_sc_inc_gpc1()
- smp_fetch_sc_clr_gpc()
- smp_fetch_sc_clr_gpc0()
- smp_fetch_sc_clr_gpc1()
- smp_fetch_sc_conn_cnt()
- smp_fetch_sc_conn_rate()
- smp_fetch_sc_updt_conn_cnt()
- smp_fetch_sc_conn_curr()
- smp_fetch_sc_glitch_cnt()
- smp_fetch_sc_glitch_rate()
- smp_fetch_sc_sess_cnt()
- smp_fetch_sc_sess_rate()
- smp_fetch_sc_http_req_cnt()
- smp_fetch_sc_http_req_rate()
- smp_fetch_sc_http_err_cnt()
- smp_fetch_sc_http_err_rate()
- smp_fetch_sc_http_fail_cnt()
- smp_fetch_sc_http_fail_rate()
- smp_fetch_sc_kbytes_in()
- smp_fetch_sc_bytes_in_rate()
- smp_fetch_kbytes_out()
- smp_fetch_sc_gpc1_rate()
- smp_fetch_sc_gpc0_rate()
- smp_fetch_sc_gpc_rate()
- smp_fetch_sc_get_gpc1()
- smp_fetch_sc_get_gpc0()
- smp_fetch_sc_get_gpc()
- smp_fetch_sc_get_gpt0()
- smp_fetch_sc_get_gpt()
- smp_fetch_sc_bytes_out_rate()
Please note that this patch doesn't render well using "git show" or
"git diff". For all the functions listed above, a new helper function was
defined right above it, with the same name without "_sc". These new
functions perform the fetch part, while the original ones (with "_sc")
now simply perform the lookup and then leverage the corresponding fetch
helper.
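Schematically, the split looks like this for one of them (a sketch with
simplified signatures; the real functions take the usual sample fetch
arguments, and lookup_sc_stkctr() stands in for the lookup logic):

    struct stkctr { unsigned int conn_cnt; };
    struct sample { long long sint; };

    struct stkctr *lookup_sc_stkctr(void);  /* hypothetical lookup */

    /* new helper: pure fetch logic on an already-resolved stkctr */
    static int smp_fetch_conn_cnt(struct stkctr *stkctr, struct sample *smp)
    {
            smp->sint = stkctr->conn_cnt;  /* illustrative counter access */
            return 1;
    }

    /* original fetch: keeps only the lookup, then defers to the helper */
    static int smp_fetch_sc_conn_cnt(struct sample *smp)
    {
            struct stkctr *stkctr = lookup_sc_stkctr();

            if (!stkctr)
                    return 0;
            return smp_fetch_conn_cnt(stkctr, smp);
    }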
smp_fetch_stksess(table, smp, create) performs a lookup in <table> by
using <smp> as a key. It returns the matching entry on success and NULL
on failure. <create> can be set to 1 to force the entry creation.
We then use this helper everywhere relevant to prevent code duplication.
As discussed in GH #2838, the previous fix f399dbf
("MINOR: stktable: fix potential build issue in smp_to_stkey") which
attempted to remove conversion ambiguity and prevent a build warning
proved to be insufficient.
This time, we implement Willy's suggestion, which is to use a union to
perform the conversion.
Hopefully this should fix GH #2838. If that's the case (and only in that
case), then this patch may be backported with f399dbf (else the patch
won't apply) anywhere b59d1fd ("BUG/MINOR: stktable: fix big-endian
compatiblity in smp_to_stkey()") was backported.
When backends extra counters are dumped, the wrong pointer was used in
the promex context to retrieve the stats module: p[1] must be used
instead of p[2]. Because of this typo, an infinite loop could be
experienced if the output buffer is full during this stage. But in all
cases an overflow is possible, leading to memory corruption.
This patch may be related to issue #2831. It must be backported as far as
3.0.
In 819fc6f563
("MEDIUM: threads/stick-tables: handle multithreads on stick tables"),
sample fetch and action functions were properly guarded with stksess
read/write locks for read and write operations respectively, but the
sample_conv_table functions leveraged by "table_*" converters were
overlooked.
This bug was not known to cause issues in existing deployments yet (at
least it was not reported), but due to its nature it can theoretically lead
to inconsistent values being reported by "table_*" converters if the value
is being updated by another thread in parallel.
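The shape of the fix is simply to guard the reads the same way the
sample fetch counterparts do, for example (a hedged sketch; the lock
macros and accessors follow HAProxy's usual pattern, the data type is
illustrative):

    HA_RWLOCK_RDLOCK(STK_SESS_LOCK, &ts->lock);
    ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CNT);
    if (ptr)
            smp->data.u.sint = stktable_data_cast(ptr, std_t_uint);
    HA_RWLOCK_RDUNLOCK(STK_SESS_LOCK, &ts->lock);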
It should be backported to all stable versions.
[ada: for versions < 3.0, glitch_cnt and glitch_rate samples should be
ignored as they first appeared in 3.0]
smp_to_stkey() uses an ambiguous cast from a 64-bit integer to a 32-bit
unsigned integer. While it is intended, let's make the cast less
ambiguous by explicitly casting the right part of the assignment to the
proper type.
This should fix GH #2838.
As specified by RFC 9000, reject NEW_TOKEN frames emitted by clients.
Close the connection with error code PROTOCOL_VIOLATION.
This must be backported up to 2.6.