In h2_nego_ff(), it is important to report resets and errors to the
app-layer stream and to send the RST_STREAM frame accordingly. It is not
clear whether this is an issue in itself, but it is clearly a difference
from the classical forwarding via h2_snd_buf(), and it is mandatory for
the next fix.
This patch should be backported to 3.2. But it is probably a good idea not
to backport it to older versions, unless a bug is reported in this area.
This only happens when a connection error is detected or when the H2
connection is in the ERR/ERR2 state. The demux buffer is explicitly reset
in these cases, so it is important to also remove the flag reporting this
buffer as full.
It is probably worth backporting this patch to 3.2, but it is not
mandatory on older versions because it does not fix any known issue.
When the mbuf ring buffer is full, the H2_CF_DEM_MROOM flag is set on the
H2 connection to block any demux. This is important to properly handle ACK
frames. However, we must take care to restart reading when some data is
removed from the mbuf; otherwise, we may block the demux for no reason.
This is especially an issue if the demux buffer is full: in that case, the
H2 connection is blocked, waiting for the timeout.
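For illustration only, a minimal sketch of the idea (h2c->flags,
H2_CF_DEM_MROOM and tasklet_wakeup() are real names from mux-h2; the
surrounding logic is assumed):

    /* sketch: once some room was freed in the mbuf, unblock the demux
     * and wake the connection tasklet up so pending input is parsed
     */
    if (room_freed && (h2c->flags & H2_CF_DEM_MROOM)) {
            h2c->flags &= ~H2_CF_DEM_MROOM;
            tasklet_wakeup(h2c->wait_event.tasklet);
    }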
This patch should be backported to 3.2. But it is probably a good idea not
to backport it to older versions, unless a bug is reported in this area.
The H2 connection is switched to ERR when a GOAWAY must be sent, and to
ERR2 when it is sent. In these states, no more data can be emitted by the
mux. But there is no reason not to try to process incoming data or to
receive more data. It is especially important to be able to get the
shutdown from the TCP connection when an SSL connection was previously
detected. Otherwise, an H2 connection may remain blocked until its timeout
expires before it can be closed.
This patch should be backported to 3.2. But it is probably a good idea not
to backport it to older versions, unless a bug is reported in this area.
When a send error is detected on the underlying connection, a pending
error is reported to the H2 connection by setting the H2_CF_ERR_PENDING
flag. When this happens, the tail of the mux ring buffer is reset. However,
some blocking flags remain set and have no chance of being removed later
because of the pending error, especially the H2_CF_DEM_MROOM flag which
blocks data demultiplexing. Thus, it is possible to block an H2 connection
with unparsed incoming data.
Worse, if a read event is received, it can lead to a wakeup loop between
the H2 connection and the underlying SSL connection. The H2 connection is
unable to convert the pending error into a fatal error because the
demultiplexing is blocked. In the meantime, it tries to receive more data
because of the not-consumed read event. On the underlying connection side,
the error detected earlier blocks the read, but the H2 connection is woken
up to handle the error.
To fix the issue, the blocking flags, H2_CF_MUX_MFULL and H2_CF_DEM_MROOM,
must be removed when a send error is caught. And there is no reason to
release only the tail of the mbuf ring: when a send error is detected, all
outgoing data can be flushed. So, now, in h2_send(), the h2_release_mbuf()
function is called on a pending error. The mbuf ring is fully released and
the H2_CF_MUX_MFULL and H2_CF_DEM_MROOM flags are removed.
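A minimal sketch of the resulting flow in h2_send() (the condition is
assumed; h2_release_mbuf() and the flag names come from this patch):

    /* sketch: on pending error, flush all outgoing data and drop the
     * blocking flags so the demux can run and promote the error
     */
    if (h2c->flags & H2_CF_ERR_PENDING) {
            h2_release_mbuf(h2c);
            h2c->flags &= ~(H2_CF_MUX_MFULL | H2_CF_DEM_MROOM);
    }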
Many thanks to Krzysztof Kozłowski for his help in spotting this issue.
This patch could be backported at least as far as 2.8. But it is a bit
sensitive, so it is probably a good idea to backport it to 3.2 for now and
wait for bug reports on older versions.
Previously, GSO emission was explicitly disabled on the backend side. This
is no longer true since the following patch, thus GSO can be used, for
example when transferring large POST requests to an HTTP/3 backend.
commit e064e5d46171d32097a84b8f84ccc510a5c211db
MINOR: quic: duplicate GSO unsupp status from listener to conn
However, GSO on the backend side may cause a crash when handling EIO. In
this case, GSO must be completely disabled. Previously, this was performed
by flagging the listener instance. On the backend side, this would cause a
crash as the listener is NULL.
This patch fixes it by supporting a GSO disable flag for servers. Thus, in
qc_send_ppkts(), EIO can be converted either to a listener or a server
flag depending on the quic_conn proxy side. On the backend side, the
server instance is retrieved via <qc.conn.target>. This is enough to
guarantee that the server is not deleted.
This does not need to be backported.
Historically, when the purge of pools was forced by sending a SIGQUIT to
haproxy, information about the pools was first dumped. This is now totally
pointless because this information can be retrieved via the CLI. It is
even less relevant now because the purge is typically forced when there
are memory issues, and dumping pools information requires allocating data.
The dump_pools_info() function was simplified because it is now called
only from an applet. There is no reason to still try to dump info on
stderr.
The "show pools" CLI command was not designed to dump information exceeding
the size of a buffer. But there is now much more pools than few years ago
and when detailed information are dumped, we exceeds the buffer limit and
the output is truncated.
To fix the issue, the command must be refactored to be able to stream the
result. To do so, the array containing the pools info is now part of the
command context and is dynamically allocated. A dedicated function was
created to fill all the info. In addition, the index of the next pool to
dump is also saved in the command context, to properly handle resumption.
Finally, global information about pools is also stored in the command
context for convenience.
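As an illustration, the command context could look like this (the field
names below are assumptions, not the actual code):

    /* sketch: per-command context kept across applet calls so the dump
     * can resume where it stopped when the output buffer fills up
     */
    struct show_pools_ctx {
            struct pool_dump_info *pools; /* dynamically allocated array */
            int nbpools;                  /* number of entries in <pools> */
            int next;                     /* index of next pool to dump */
            /* ... plus the global totals, filled once at start ... */
    };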
This patch should fix issue #3067. It must be backported to 3.2. On older
releases, the buffer limit is never reached.
The below patch fixes padding emission for small packets, which is
required to ensure that header protection removal can be performed by
the recipient.
commit d7dea408c64c327cab6aebf4ccad93405b675565
BUG/MINOR: quic: too short PADDING frame for too short packets
In addition to the proper fix, the constant QUIC_HP_SAMPLE_LEN was removed
and replaced by QUIC_TLS_TAG_LEN. However, it still makes sense to have a
dedicated constant representing the size of the sample used for header
protection. Thus, this patch restores it.
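For reference, RFC 9001 defines the header protection sample as 16 bytes,
which happens to match the AEAD tag length, hence the initial substitution:

    /* size of the ciphertext sample used for header protection (RFC 9001) */
    #define QUIC_HP_SAMPLE_LEN 16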
Special instructions for backport: the above patch mentions that no
backport is needed. However, this is incorrect, as the bug is introduced
by another patch scheduled for backport up to 2.6. Thus, it is first
mandatory to schedule d7dea408c64c327cab6aebf4ccad93405b675565 after it.
Then, this patch can also be taken for the sake of code clarity.
Define a new "quic_tx" unit-test which is used to test the QUIC TX module.
For the moment, a single test is performed on qc_do_build_pkt(). It checks
that PADDING is correctly added for HP sampling in the case of a small
packet.
As reported in GH #3104, there remained a place where (1 << shift) was
used to set or remove bits from a uint64_t users bitfield. This is
incorrect and could lead to bugs for bit positions above 31.
Instead, let's use 1ULL to ensure the operation remains 64-bit consistent.
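For example:

    uint64_t users = 0;
    int shift = 40;

    /* wrong: 1 is a 32-bit int, so the shift is undefined for
     * shift >= 32 and bits 32..63 can never be set
     */
    users |= (1 << shift);

    /* correct: a 64-bit constant makes all 64 bits reachable */
    users |= (1ULL << shift);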
No backport needed.
Willy reported that the following config would segfault right after the
"removing incomplete section 'peer'" warning is emitted:
peers peers
bind :2300
server n10 127.0.0.1:2310
listen dummy
bind localhost:9999
This is caused by the fact that stop_proxy(), which tries to read shared
counters, is called during early init while shared counters are not yet
initialized. To fix the crash, let's check if we're still in the starting
phase, in which case we assume the counters are not initialized and use a
value of 0 instead.
No backport needed unless 16eb0fab31 ("MAJOR: counters: dispatch counters
over thread groups") is.
Mimic the behavior of SSL/TCP connections to implement SSL session reuse.
Extract the code which tries to reuse the SSL session for SSL/TCP
connections into ssl_sock_srv_try_reuse_sess().
Call this function from the QUIC ->init() xprt callback (qc_conn_init()),
as is done for SSL/TCP connections.
When kTLS is compiled in, make sure msg_controllen is initialized to 0.
If we're not actually using kTLS, it won't be set, but we'll later check
that it is non-zero to know whether we have ancillary data.
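A minimal sketch of the pattern (not the actual raw_sock code):

    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));   /* msg_controllen starts at 0 */
    /* ... recvmsg() only fills the control buffer when kTLS is used ... */
    if (msg.msg_controllen) {
            /* ancillary data present: inspect the TLS record type */
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    }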
This does not need to be backported.
This should fix CID 1620865, as reported in github issue #3106.
As found in GH issue #3103, CPU_ISSET() on musl 1.2.5 doesn't match the
man page, which says it returns an int. The reason is pretty simple: it's
a macro that operates on the bits directly and returns the result of the
bit field applied to the mask as an unsigned long. Bits above 31 will
simply be dropped if returned as an int, which causes CPUs 32..63 to
appear as absent from cpu_sets.
The fix is trivial, it consists in just comparing the result against zero
(i.e. turning it into a boolean), but before it's merged and deployed
we'll have to face such deployments, so better to implement the same
workaround in the code here, since we have access to the raw long value.
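The difference boils down to this (illustrative snippet, not the actual
cpu-topo code):

    int present1, present2;

    /* broken on musl for CPUs >= 32: the unsigned long result of
     * CPU_ISSET() is truncated to int, dropping the high bits
     */
    present1 = CPU_ISSET(cpu, &cpuset);

    /* workaround: collapse the raw long into a boolean first */
    present2 = !!CPU_ISSET(cpu, &cpuset);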
This workaround should be backported to 3.0.
This bug arrived with this commit:
MINOR: quic: centralize padding for HP sampling on packet building
What was missed is the fact that, at the centralization point for the
PADDING frame to add to too short packets, the <len> payload length
already includes <*pn_len>, the packet number field length value.
So when computing the length of the PADDING frame, the packet number field
length must not be added again to the payload length (<len>).
This bug led to too short PADDING frames in too short packets. This was
the case most of the time with Application level packets with a 1-byte
packet number field followed by a 1-byte PING frame. A 1-byte PADDING
frame was added in this case in place of a correct 2-byte PADDING frame.
The header packet protection of such packets could not be removed by
clients, as for instance ngtcp2 with such traces:
I00001828 0x5a135c81e803f092c74bac64a85513b657 pkt could not decrypt packet number
As the header protection could not be removed, the key update bit in the
header could also not be read by packet analyzers such as pyshark used
during the keyupdate tests.
No need to backport.
Similarly to the automatic SNI selection for regular SSL traffic, the SNI
of HTTPS health-check connections is now automatically set by default
using the Host header value. The "check-sni-auto" and "no-check-sni-auto"
server settings were added to change this behavior.
Only implicit HTTPS health-checks can take advantage of this feature. In
this case, the Host header value from the "option httpchk" directive is
used to extract the SNI. The feature is disabled if http-check rules are
used; in that case, the SNI must still be explicitly specified via an
"http-check connect" rule.
This patch should partially fix issue #3081.
For HTTPS outgoing connections, the SNI is now automatically set using the
Host header value if no other value is already set (via the "sni" server
keyword). This is now the default behavior. It can be disabled with the
"no-sni-auto" server keyword, and the "sni-auto" server keyword may be
used to reset any previous "no-sni-auto" setting. This option can be
inherited from "default-server" settings. Finally, if no connection name
is set via the "pool-conn-name" setting, the selected value is used.
The automatic selection of the SNI is enabled by default for all outgoing
connections, but it is concretely used for HTTPS connections only. The
expression used is "req.hdr(host),host_only".
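A hypothetical configuration snippet illustrating the new keywords:

    backend app
        # default: SNI (and pool-conn-name) derived from the Host header
        server s1 app1.example.com:443 ssl verify none
        # opt out and keep an explicit SNI instead
        server s2 app2.example.com:443 ssl verify none no-sni-auto sni str(app2.example.com)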
This patch should partially fix issue #3081. It only covers the server
part. Another patch will add the feature for HTTP health-checks.
When we try to reuse a connection to perform a health-check, the SNI, from
the tcpcheck connection or the health-check itself, must not be used as
the connection name for non-SSL connections.
This patch must be backported to 3.2.
There is no reason to set the SNI and ALPN for non-SSL connections. It is
not really an issue because the ssl_sock_set_servername() and
ssl_sock_set_alpn() functions will do nothing. But it is cleaner this way
and could avoid bugs in the future.
No backport needed, because there is no bug.
There is no reason to set the SNI for non-SSL connections. It is not
really an issue because the ssl_sock_set_servername() function will do
nothing. But there is no reason to uselessly evaluate an expression.
No backport needed, because there is no bug.
There is no reason to set the SNI for non-SSL connections. It is not
really an issue because the ssl_sock_set_servername() function will do
nothing. But there is no reason to uselessly evaluate an expression.
No backport needed, because there is no bug.
Not all changes are concerned, but when SSL is enabled or disabled for a
server, the health-check xprt must eventually be updated too. This happens
when the health-check relies on the server settings.
In the same spirit, when the health-check address and port are updated, we
must fall back to the raw xprt if SSL is not explicitly enabled for the
health-check with a "check-ssl" parameter.
This patch should be backported to all stable versions.
By default, for a given server, when no pool-conn-name is specified, the
configured sni is used. However, this must only be done when SSL is in use
for the server. Of course, it is uncommon to have a sni expression for a
non-SSL server, but this may happen.
In addition, SSL may be disabled via the CLI. In that case, the
pool-conn-name must be discarded if it was copied from the sni. And we
must of course take care to set it if SSL is enabled.
Finally, when the attach-srv action is checked, we now check the
pool-conn-name expression.
This patch should be backported as far as 3.0. It relies on "MINOR: server:
Parse sni and pool-conn-name expressions in a dedicated function" which
should be backported too.
This change is mandatory to fix an issue. The parsing of sni and
pool-conn-name expressions (from string to expression) is now handled in a
dedicated function. This avoids duplicating the same code in different
places.
There is a typo in commit c51ddd5c3 ("MINOR: acl: Only allow one '-m'
matching method"): '*m' was reported in the error message instead of '-m'.
In addition, it is now mentioned that only the last one should be kept if
an old config triggers the error.
No backport needed, except if the commit above is backported.
I keep mistakenly setting the traces using "-dtXXX" and having to refer to
the doc to figure out that it requires a separate argument and differs
from some other options. Worse, "-dthelp" doesn't say anything and
silently ignores the argument.
Let's make the parser take whatever follows "-dt" as the argument if
present, otherwise take the next one (as it currently does). Doing this
even allows to simplify the code, and makes the syntax easier to figure
out since "-dthelp" now works.
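A sketch of the resulting parsing logic (simplified, not the exact
haproxy.c code):

    /* accept both "-dtXXX" (inline) and "-dt XXX" (separate argument) */
    else if (strncmp(flag, "dt", 2) == 0) {
            const char *trace_arg = NULL;

            if (flag[2])                /* e.g. "-dthelp", "-dth2:data" */
                    trace_arg = flag + 2;
            else if (argc > 1)          /* e.g. "-dt h2:data" */
                    trace_arg = *(++argv), argc--;
            /* trace_arg is then handed over to the trace command parser */
    }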
gcc-13.3 at -Og emits an incorrect build warning in trace.c about a
possibly uninitialized variable:
In file included from include/haproxy/api.h:35,
from src/trace.c:22:
src/trace.c: In function 'trace_parse_cmd':
include/haproxy/bug.h:431:17: warning: 'arg' may be used uninitialized [-Wmaybe-uninitialized]
431 | free(*__x); \
| ^~~~~~~~~~
src/trace.c:1136:9: note: in expansion of macro 'ha_free'
1136 | ha_free(&oarg);
| ^~~~~~~
src/trace.c:1008:15: note: 'arg' was declared here
1008 | char *arg, *oarg;
| ^~~
The warning is obviously wrong since the variable is initialized in one of
the two branches of an "if" whose complementary one returns. But the
compiler doesn't seem to see this because the if is in fact two ifs, each
with an opposite condition: "if (arg_src)" then "if (!arg_src)". Let's
just move the default one that returns upwards and eliminate the other
one. Reading the diff with "git diff -b" better shows the tiny change.
It could be backported to 3.0.
This patch introduces three new command line flags to display HAProxy version
info more flexibly:
- `-vqs` outputs the short version string without commit info (e.g., "3.3.1").
- `-vqb` outputs only the branch (major.minor) part of the version (e.g., "3.3").
- `-vq` outputs the full version string with suffixes (e.g., "3.3.1-dev5-1bb975-71").
This allows easier parsing of version info in automation while keeping existing -v and -vv behaviors.
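For example (outputs taken from the examples above):

    $ haproxy -vq
    3.3.1-dev5-1bb975-71
    $ haproxy -vqs
    3.3.1
    $ haproxy -vqb
    3.3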
The command line argument parsing now calls `display_version_plain()` with a
display_mode parameter to select the desired output format. The function handles
stripping of commit or patch info as needed, depending on the mode.
Signed-off-by: Nikita Kurashkin <nkurashkin@stsoft.ru>
This commit adds the base2 converter to turn binary input into its string
representation. Each input byte is converted into a series of eight
characters, each either '0' or '1', by bit-wise comparison.
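A minimal sketch of the conversion loop (illustrative, not the actual
converter code):

    /* each input byte yields 8 output characters, MSB first */
    for (i = 0; i < len; i++) {
            for (b = 7; b >= 0; b--)
                    *out++ = (in[i] & (1 << b)) ? '1' : '0';
    }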
Like many exposed network daemons, haproxy normally does not need to run
as root, and doing so is strongly recommended against unless strictly
necessary. On some operating systems, capabilities even totally alleviate
this need.
Lately, maybe due to a rise of containerization or automated config
generation or a bit of both, we've observed a resurgence of this bad
practice, possibly because users are just not aware of the conditions
under which they're running their daemon.
Let's add a warning at boot when starting as root without having requested
it using "uid" or "user", and take this opportunity to warn the user about
the existence of capabilities when supported, and to encourage the use of
a chroot.
This is achieved by leaving global.uid set to -1 by default, allowing us
to detect whether it was explicitly set or not.
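A minimal sketch of the detection (the message wording is hypothetical):

    /* global.uid stays -1 unless "uid"/"user" was set in the config */
    if (global.uid == -1 && getuid() == 0)
            ha_warning("haproxy is running as root without an explicit "
                       "'uid'/'user' setting; consider dropping "
                       "privileges and using a chroot.\n");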
As reported in GH #3099, upon a memory error, add_to_logformat_list() will
return an error but fails to properly free memory which was allocated
within the function, which could result in a memory leak.
Let's free all relevant variables allocated by the function before
returning.
No backport needed unless 22ac1f5ee ("BUG/MINOR: log: Add OOM checks for
calloc() and malloc() in logformat parser and dup_logger()") is.
When an SNI is set on a QUIC server line, ssl_sock_set_servername() is
called from connect_server() (backend.c). This leads some BUG_ON() to be
triggered because the CO_FL_WAIT_L6_CONN | CO_FL_SSL_WAIT_HS flags were
not set. This must be done in the ->init() xprt callback. This patch moves
the flag settings from ->start() to the ->init() callback.
Indeed, connect_server() calls these functions in this order:
->init(),
ssl_sock_set_servername() # => crash if CO_FL_WAIT_L6_CONN | CO_FL_SSL_WAIT_HS not set
->start()
Furthermore, ssl_sock_set_servername() has the side effect of resetting
the SSL_SESSION object (attached to the SSL object) by calling
SSL_set_session(), leading to crashes as follows:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./haproxy -f quic_srv.cfg'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 tls_process_server_hello (s=0x560c259733b0, pkt=0x7fffac239f20)
at ssl/statem/statem_clnt.c:1624
1624 if (s->session->session_id_length > 0) {
[Current thread is 1 (Thread 0x7fc364e53dc0 (LWP 35514))]
(gdb) bt
#0 tls_process_server_hello (s=0x560c259733b0, pkt=0x7fffac239f20)
at ssl/statem/statem_clnt.c:1624
#1 0x00007fc36540fba4 in ossl_statem_client_process_message (s=0x560c259733b0,
pkt=0x7fffac239f20) at ssl/statem/statem_clnt.c:1042
#2 0x00007fc36540d028 in read_state_machine (s=0x560c259733b0) at ssl/statem/statem.c:646
#3 0x00007fc36540ca70 in state_machine (s=0x560c259733b0, server=0)
at ssl/statem/statem.c:439
#4 0x00007fc36540c576 in ossl_statem_connect (s=0x560c259733b0) at ssl/statem/statem.c:250
#5 0x00007fc3653f1698 in SSL_do_handshake (s=0x560c259733b0) at ssl/ssl_lib.c:3835
#6 0x0000560c22620327 in qc_ssl_do_hanshake (qc=qc@entry=0x560c25961f60,
ctx=ctx@entry=0x560c25963020) at src/quic_ssl.c:863
#7 0x0000560c226210be in qc_ssl_provide_quic_data (len=90, data=<optimized out>,
ctx=0x560c25963020, level=ssl_encryption_initial, ncbuf=0x560c2588bb18)
at src/quic_ssl.c:1071
#8 qc_ssl_provide_all_quic_data (qc=qc@entry=0x560c25961f60, ctx=0x560c25963020)
at src/quic_ssl.c:1123
#9 0x0000560c2260ca5f in quic_conn_io_cb (t=0x560c25962f80, context=0x560c25961f60,
state=<optimized out>) at src/quic_conn.c:791
#10 0x0000560c228255ed in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:648
#11 0x0000560c22825f7a in process_runnable_tasks () at src/task.c:889
#12 0x0000560c22793dc7 in run_poll_loop () at src/haproxy.c:2836
#13 0x0000560c22794481 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3056
#14 0x0000560c2259082d in main (argc=<optimized out>, argv=<optimized out>)
at src/haproxy.c:3667
<s> is the SSL object, and <s->session> is the SSL_SESSION object.
For the client, this is the first call to SSL_do_handshake() which
initializes this SSL_SESSION object from the ->init() xprt callback. Then
it is reset by ssl_sock_set_servername(), and then the
tls_process_server_hello() TLS stack function is called with a NULL value
for s->session when receiving the ServerHello TLS message.
To fix this, simply move the first call to SSL_do_handshake() to the
->start() xprt callback (qc_xprt_start()).
No need to backport.
Over their lifetime, connections are attached to different lists. These
lists depend on whether the connection is on the frontend or backend side.
Attach point members are stored via a union in struct connection. The next
commit reorganizes them so that a proper frontend/backend separation is
performed:
commit a96f1286a75246fef6db3e615fabdef1de927d83
BUG/MINOR: connection: rearrange union list members
On conn_free(), the connection instance must be removed from these lists
to ensure there is no use-after-free. However, the code was still shaky
there, despite no real issue. Indeed, <toremove_list> was detached for all
connections, despite being used on the backend side only.
This patch streamlines the freeing of connections. Now, the
<toremove_list> detach is performed in conn_backend_deinit(). Moreover, a
new helper conn_frontend_deinit() is defined; it ensures that the
<stopping_list> detach is done. Prior to this, it was performed
individually by the muxes.
Note that a similar procedure is performed when the connection is
reversed. Hence, conn_frontend_deinit() is now used there as well,
rendering reversal from FE to BE or vice versa symmetrical.
As mentioned above, no crash occurred prior to this patch, but the code
was fragile, in particular the access to <toremove_list> for frontend
connections. Thus this patch is considered a bug fix worthy of a backport
along with the above-mentioned patch, currently up to 3.0.
When a connection is reversed, some elements must be reset prior to
reusing it. Most notably, the connection must be removed from the lists
specific to the frontend/backend sides.
When the reversal was performed from the frontend to the backend side, the
connection was not removed via its <stopping_list> attach point. On
previous releases, this did not cause any issue. However, crashes started
to occur recently, probably due to the recent reorganization of connection
list attach points in the following patch.
commit a96f1286a75246fef6db3e615fabdef1de927d83
BUG/MINOR: connection: rearrange union list members
To fix this, simply ensure that <stopping_list> detach is performed via
conn_reverse().
This patch must be backported up to 3.0 release.
For many years, an unset load balancing algorithm would use "roundrobin".
It was shown several times that "random" with at least 2 draws (the
default) generally provides better performance and fairness in that
it will automatically adapt to the server's load and capacity. This
was further described with numbers in this discussion:
https://www.mail-archive.com/haproxy@formilux.org/msg46011.html
https://github.com/orgs/haproxy/discussions/3042
BTW there were no objections, only support for the change.
The goal of this patch is to change the default algo when none is
specified, from "roundrobin" to "random". This way, users who don't
care and don't set the load balancing algorithm will benefit from a
better one in most cases, while those who have good reasons to prefer
roundrobin (for session affinity or for reproducible sequences like used
in regtests) can continue to specify it.
The vast majority of users should not notice a difference.
The keyword check-reuse-pool allows reusing an idle connection to perform
a health check instead of opening a new one. It is implemented similarly
to HTTP transfer reuse: a hash is calculated over a subset of properties
to look up a connection with the same characteristics.
One of these properties is the destination address. Initially it was
always set to NULL prior to the reuse check, as this is necessary to match
connections on a reverse-HTTP server. However, this prevents reuse on
other servers with a proper address configured. Indeed, in this case the
destination address is always used as a key for connections inserted in
the idle pool.
This patch fixes this by properly setting the destination address for
check reuse. By default, the address from the server is reused. The only
exception is if the server is using reverse-HTTP, in which case the
address remains NULL.
A new test is also performed prior to trying check reuse to ensure this is
not performed on a transparent server. Indeed, in this case the server
address would be unset. Anyway, a check cannot reuse a connection in this
case, so this is OK. Note that this does not prevent the check from
continuing with a new connection with a NULL address: this should be
handled more properly in another patch.
This must be backported up to 3.2.
SSL may be activated implicitly if a server relies on SSL, even without
the check-ssl keyword. This is performed by the init_srv_check() function.
The main operation is to change the xprt layer for the check to SSL.
Prior to this patch, the <use_ssl> check member was also set, despite not
being strictly necessary. This has the negative side effect of rendering
check-reuse-pool ineffective. Indeed, reuse on check is only performed if
no specific check configuration has been specified (see
tcpcheck_use_nondefault_connect()).
This patch fixes check reuse with SSL: <use_ssl> is not set in case SSL is
inherited implicitly from the server configuration. Thus, <use_ssl> is now
only set if an explicit check-ssl keyword is present, which disables
connection reuse for the check.
This must be backported up to 3.2.
Add two BUG_ON() in shm_stats_file_prepare() which will trigger if the
exported structures (shm_stats_file_hdr and shm_stats_file_object) change
in size, because it means they will become incompatible with older
versions. Thus precautions should be taken by the developer to ensure
compatibility with older versions, or at least to detect incompatible
versions by changing the version number to prevent bugs resulting from
inconsistent mapping between versions. The BUG_ON() may then be safely
adjusted.
Please note that this doesn't protect against accidental struct member
re-ordering if the resulting struct size stays equal.
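Illustrative only (the expected sizes are placeholders; the real values
depend on the layout chosen for the structures):

    /* fires at once if a developer grows one of the exported structs,
     * instead of silently breaking cross-version mapping
     */
    BUG_ON(sizeof(struct shm_stats_file_hdr) != EXPECTED_HDR_SIZE);
    BUG_ON(sizeof(struct shm_stats_file_object) != EXPECTED_OBJ_SIZE);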
shm_stats_file_reuse_object() has a non-negligible cost, especially if the
shm file contains a lot of objects, because the function scans the whole
shm file to find available slots.
During startup, if no existing object could be mapped in the shm file,
shm_stats_file_add_object() is called for each object (server, fe, be or
listener) with a GUID set. On large configs it means
shm_stats_file_add_object() could be called a lot of times in a row. With
the current implementation, each shm_stats_file_add_object() call
leverages shm_stats_file_reuse_object(), so the more objects are defined
in the config, the slower the startup will be.
To try to optimize startup time a bit with large configs, we don't
systematically call shm_stats_file_reuse_object(), especially when we know
that the previous attempt to reuse objects failed. In this case we add a
small delay between failed attempts to reuse objects because we assume a
new attempt would probably fail anyway. (For slots to become available,
either an old process has to clean its entries, or they have to time out,
which implies that the clock needs to be updated.)
This is the last patch of the shm stats file series; in this patch we
implement the logic to store and fetch shm stats objects and associate
them with existing shared counters in the current process.
Shm objects are stored in the same memory location as the shm stats file
header; in fact they are stored right after it. All objects (struct
shm_stats_file_object) have the same size (no matter their type), which
allows easy object traversal without having to check the object's type,
and could permit the use of external tools to scan the SHM in the future.
Each object stores a guid (of GUID_MAX_LEN+1 size) and a tgid, which allow
matching the corresponding shared counter indexes. Also, as stated before,
each object stores the list of users making use of it. Objects are never
released (the map can only grow), but when no more users or active users
are found in an object's users field, the object is automatically
recycled. Also, each object stores its type, which defines how the
object's generic data member should be handled.
Upon startup (or reload), haproxy first tries to scan the existing shm to
find objects that could be associated with frontends, backends, listeners
or servers in the current config based on their GUID. For associations
that couldn't be made, haproxy will automatically create the missing
objects in the SHM during late startup. When haproxy matches an existing
object, it means the counter from an older process is preserved in the new
process, so multiple processes temporarily share the same counter for as
long as required for the older processes to eventually exit.
Now that all processes tied to the same shm stats file share a common
clock source, we introduce the process slot notion in this patch.
Each living process registers itself in a map at a free index: each slot
stores information about the process' PID and heartbeat. Each process is
responsible for updating its heartbeat, and a slot is considered "free" if
the heartbeat was never set or if it has expired (60 seconds of
inactivity). The total number of slots is set to 64 on purpose, because it
allows easily storing the "users" of a given shm object in a 64-bit
bitmask. Given that when haproxy is reloaded older processes are supposed
to die eventually, it should be large enough (64 simultaneous processes)
to be safe. If we manage to reach this limit someday, more slots could be
added by splitting the "users" bitmask across multiple 64-bit variables.
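A simplified sketch of the idea (field names are assumed, and the real
code must claim slots atomically):

    /* a slot is free if its heartbeat never ticked or expired >60s ago */
    for (slot = 0; slot < 64; slot++) {
            uint32_t hb = hdr->slots[slot].heartbeat;

            if (!hb || tick_is_expired(tick_add(hb, 60000), now_ms)) {
                    hdr->slots[slot].pid = getpid();
                    hdr->slots[slot].heartbeat = now_ms;
                    break;
            }
    }
    /* the slot index doubles as the bit position in an object's mask */
    obj->users |= 1ULL << slot;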
The use of the "shm-stats-file" directive now implies that all processes
using the same file share a common clock source; this is required for
consistency regarding time-related operations.
The clock source is stored in the shm stats file header.
When the directive is set, all processes share the same clock
(global_now_ms and global_now_ns both point to variables in the map);
this is required for time-based counters such as freq counters to work
consistently. Since all processes manipulate the global clock exclusively
with atomic operations during runtime, and don't systematically rely on it
(thanks to the local now_ms and now_ns), it is pretty much transparent.
Add initial support for the "shm-stats-file" directive and the associated
"shm-stats-file-max-objects" directive. For now they are flagged as
experimental directives.
The shared memory file is automatically created by the first process. The
file is created using open(), so it is up to the user to provide a
relevant path (either on a regular filesystem or on a ramfs for
performance reasons). The directive takes only one argument, which is the
path of the shared memory file. It is passed as-is to open().
Upon initial creation, the main shm stats file header is provisioned with
the version, which must remain the same for processes to be compatible.
The maximum number of objects per thread-group (hard limit) that can be
stored in the shm is defined by the "shm-stats-file-max-objects"
directive and defaults to 2k, which means approximately 1MB max per thread
group and should cover most setups. When the limit is reached (during
startup), an error is reported by haproxy, which invites the user to
increase "shm-stats-file-max-objects" if desired, but this means more
memory will be allocated. Actual memory usage is low at start, because
only the mmap (mapping) is provisioned with the maximum number of objects,
to avoid relocating the memory area during runtime; the actual shared
memory file is dynamically resized when objects are added (following a
half power of 2 curve, see upcoming commits).
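A sketch of such a growth curve (hypothetical helper, rounding up to the
next value of the form 2^n or 3*2^n):

    static size_t next_half_pow2(size_t n)
    {
            size_t p = 1;

            while (p < n)
                    p <<= 1;              /* next power of 2 >= n */
            if (p >= 2 && p - (p >> 2) >= n)
                    return p - (p >> 2);  /* the 3*2^n midpoint */
            return p;
    }
    /* yields 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, ... */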
For now, only the file is created; further logic will be implemented in
upcoming commits.
counters_{fe,be}_shared_prepare() now take an extra <errmsg> parameter
that contains additional hints about the error in case of failure. It must
be freed accordingly since it is allocated using memprintf().
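Typical usage pattern (the exact prototype is an assumption for
illustration):

    char *errmsg = NULL;

    if (!counters_fe_shared_prepare(counters, guid, &errmsg)) {
            ha_alert("failed to prepare shared counters: %s\n", errmsg);
            ha_free(&errmsg);  /* allocated by memprintf(), must be freed */
            return ERR_ALERT | ERR_FATAL;
    }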
It helps keep the contention level low: when we hold the update lock,
which we know other parts may be relying on (peers, track-sc, etc.), we
decrease the remaining visit counters 4 times as fast to further reduce
contention. At this point no more warnings are seen during intense
synchronization (2x64 cores, 1.5M req/s with a track-sc each, 5M entries
in use).
As reported by Felipe in GH issue #3084, on large systems it's not
sufficient to leave the expiration process after a certain number of
expired entries, because if entries accumulate too fast, it's still
possible to spend a long time visiting many of them (e.g. those still in
use).
Thus here we're taking a stricter approach consisting in counting the
number of visited entries, which allows us to leave early if we can't do
the expected work in a reasonable amount of time.
In order to avoid always stopping on the first shards and never visiting
the last ones, we always start from a random shard number and loop from
there. This way, even if we always leave early, all shards will be handled
equally.
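A sketch of the loop shape (expire_shard() and the budget handling are
assumptions):

    /* start at a random shard so early bail-outs don't always starve
     * the last shards
     */
    start = statistical_prng() % CONFIG_HAP_TBL_BUCKETS;
    for (i = 0; i < CONFIG_HAP_TBL_BUCKETS && budget > 0; i++) {
            shard = (start + i) % CONFIG_HAP_TBL_BUCKETS;
            budget -= expire_shard(table, shard, budget);
    }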
This should be backported to 3.2.
When the expire task is running fast (i.e. running almost alone), it's
super hard to grab the update lock, and peers can easily trigger the
watchdog because the time it takes to grab this lock is multiplied by the
number of updates to perform. This is easier to trigger at the end of an
injection session where the expire task is omnipresent. Let's just record
that we failed once and not fail a second time in the loop.
This should be backported to 3.2, but probably not further given that
this area changed significantly in 3.2.