Since 2.6 we have a free_logsrv() function that is used to release log
servers. It must be called from deinit() instead of manually iterating
over the log servers, otherwise some parts of the structure are not
freed (namely the ring name), as reported by ASAN.
This should be backported to 2.6.
A few loops waiting for threads to synchronize, such as thread_isolate(),
rightfully filter the thread masks via the threads_enabled field that
contains the list of enabled threads. However, they don't use an atomic
load on it. Before 2.7, the equivalent variables were marked as volatile
and were always reloaded. In 2.7 they are fields in ha_tgroup_ctx[], and
the risk that the compiler keeps them in a register inside a loop is very
real. In practice, when ha_thread_relax() issues a sched_yield() call or
an x86 PAUSE instruction, it could be verified that the variable is
always reloaded. When neither is available (e.g. an architecture
providing neither solution), the asm code shows that the variables are
not reloaded. In this case, if a thread exits right between the moment
the two values are read, the loop could spin forever.
This patch adds the required _HA_ATOMIC_LOAD() on the relevant
threads_enabled fields. It must be backported to 2.7.
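For illustration, here is a minimal sketch (not haproxy code; names are
simplified) of such a wait loop written with C11 atomics, where the
explicit atomic load is what prevents the compiler from caching the mask
in a register:

    #include <sched.h>
    #include <stdatomic.h>

    /* simplified stand-ins for ha_tgroup_ctx[].threads_enabled and the
     * mask of threads having reached the synchronization point
     */
    static _Atomic unsigned long threads_enabled;

    static void wait_for_sync(const _Atomic unsigned long *done)
    {
        for (;;) {
            /* the atomic load forces a reload on every iteration, so a
             * thread exiting (and clearing its bit) cannot leave the
             * loop spinning on a stale value kept in a register
             */
            unsigned long enabled = atomic_load(&threads_enabled);

            if ((atomic_load(done) & enabled) == enabled)
                break;
            sched_yield();  /* stands in for ha_thread_relax() */
        }
    }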
Released version 2.8-dev1 with the following main changes :
- MEDIUM: 51d: add support for 51Degrees V4 with Hash algorithm
- MINOR: debug: support pool filtering on "debug dev memstats"
- MINOR: debug: add a balance of alloc - free at the end of the memstats dump
- LICENSE: wurfl: clarify the dummy library license.
- MINOR: event_hdl: add event handler base api
- DOC/MINOR: api: add documentation for event_hdl feature
- MEDIUM: ssl: rename the struct "cert_key_and_chain" to "ckch_data"
- MINOR: quic: remove qc from quic_rx_packet
- MINOR: quic: complete traces in qc_rx_pkt_handle()
- MINOR: quic: extract datagram parsing code
- MINOR: tools: add port for ipcmp as optional criteria
- MINOR: quic: detect connection migration
- MINOR: quic: ignore address migration during handshake
- MINOR: quic: startup detect for quic-conn owned socket support
- MINOR: quic: test IP_PKTINFO support for quic-conn owned socket
- MINOR: quic: define config option for socket per conn
- MINOR: quic: allocate a socket per quic-conn
- MINOR: quic: use connection socket for emission
- MEDIUM: quic: use quic-conn socket for reception
- MEDIUM: quic: move receive out of FD handler to quic-conn io-cb
- MINOR: mux-quic: rename duplicate function names
- MEDIUM: quic: requeue datagrams received on wrong socket
- MINOR: quic: reconnect quic-conn socket on address migration
- MINOR: quic: activate socket per conn by default
- BUG/MINOR: ssl: initialize SSL error before parsing
- BUG/MINOR: ssl: initialize WolfSSL before parsing
- BUG/MINOR: quic: fix fd leak on startup check quic-conn owned socket
    - BUG/MEDIUM: stconn: Flush output data before forwarding close to write side
- MINOR: server: add srv->rid (revision id) value
- MINOR: stats: add server revision id support
- MINOR: server/event_hdl: add support for SERVER_ADD and SERVER_DEL events
- MINOR: server/event_hdl: add support for SERVER_UP and SERVER_DOWN events
- BUG/MEDIUM: checks: do not reschedule a possibly running task on state change
- BUG/MINOR: checks: make sure fastinter is used even on forced transitions
- CLEANUP: assorted typo fixes in the code and comments
- MINOR: mworker: display an alert upon a wait-mode exit
- BUG/MEDIUM: mworker: fix segv in early failure of mworker mode with peers
- BUG/MEDIUM: mworker: create the mcli_reload socketpairs in case of upgrade
- BUG/MINOR: checks: restore legacy on-error fastinter behavior
- MINOR: check: use atomic for s->consecutive_errors
- MINOR: stats: properly handle ST_F_CHECK_DURATION metric
- MINOR: mworker: remove unused legacy code in mworker_cleanlisteners
- MINOR: peers: unused code path in process_peer_sync
- BUG/MINOR: init/threads: continue to limit default thread count to max per group
- CLEANUP: init: remove useless assignment of nbthread
- BUILD: atomic: atomic.h may need compiler.h on ARMv8.2-a
- BUILD: makefile/da: also clean Os/ in Device Atlas dummy lib dir
- BUG/MEDIUM: httpclient/lua: double LIST_DELETE on end of lua task
- CLEANUP: pools: move the write before free to the uaf-only function
- CLEANUP: pool: only include pool-os from pool.c not pool.h
- REORG: pool: move all the OS specific code to pool-os.h
- CLEANUP: pools: get rid of CONFIG_HAP_POOLS
- DEBUG: pool: show a few examples in -dMhelp
- MINOR: pools: make DEBUG_UAF a runtime setting
- BUG/MINOR: promex: create haproxy_backend_agg_server_status
- MINOR: promex: introduce haproxy_backend_agg_check_status
- DOC: promex: Add missing backend metrics
- BUG/MAJOR: fcgi: Fix uninitialized reserved bytes
- REGTESTS: fix the race conditions in iff.vtc
- CI: github: reintroduce openssl 1.1.1
- BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
- BUG/MINOR: quic: handle alloc failure on qc_new_conn() for owned socket
- CLEANUP: mux-quic: remove unused attribute on qcs_is_close_remote()
- BUG/MINOR: mux-quic: remove qcs from opening-list on free
- BUG/MINOR: mux-quic: handle properly alloc error in qcs_new()
- CI: github: split ssl lib selection based on git branch
- REGTESTS: startup: check maxconn computation
- BUG/MINOR: startup: don't use internal proxies to compute the maxconn
- REGTESTS: startup: change the expected maxconn to 11000
- CI: github: set ulimit -n to a greater value
- REGTESTS: startup: activate automatic_maxconn.vtc
- MINOR: sample: add param converter
- CLEANUP: ssl: remove check on srv->proxy
- BUG/MEDIUM: freq-ctr: Don't compute overshoot value for empty counters
- BUG/MEDIUM: resolvers: Use tick_first() to update the resolvers task timeout
- REGTESTS: startup: add alternatives values in automatic_maxconn.vtc
- BUG/MEDIUM: h3: reject request with invalid header name
- BUG/MEDIUM: h3: reject request with invalid pseudo header
- MINOR: http: extract content-length parsing from H2
- BUG/MEDIUM: h3: parse content-length and reject invalid messages
- CI: github: remove redundant ASAN loop
- CI: github: split matrix for development and stable branches
- BUG/MEDIUM: mux-h1: Don't release H1 stream upgraded from TCP on error
- BUG/MINOR: mux-h1: Fix test instead a BUG_ON() in h1_send_error()
- MINOR: http-htx: add BUG_ON to prevent API error on http_cookie_register
- BUG/MEDIUM: h3: fix cookie header parsing
- BUG/MINOR: h3: fix memleak on HEADERS parsing failure
- MINOR: h3: check return values of htx_add_* on headers parsing
- MINOR: ssl: Remove unneeded buffer allocation in show ocsp-response
- MINOR: ssl: Remove unnecessary alloc'ed trash chunk in show ocsp-response
- BUG/MINOR: ssl: Fix memory leak of find_chain in ssl_sock_load_cert_chain
- MINOR: stats: provide ctx for dumping functions
- MINOR: stats: introduce stats field ctx
- BUG/MINOR: stats: fix show stat json buffer limitation
- MINOR: stats: make show info json future-proof
- BUG/MINOR: quic: fix crash on PTO rearm if anti-amplification reset
- BUILD: 51d: fix build issue with recent compilers
- REGTESTS: startup: disable automatic_maxconn.vtc
- BUILD: peers: peers-t.h depends on stick-table-t.h
- BUG/MEDIUM: tests: use tmpdir to create UNIX socket
- BUG/MINOR: mux-h1: Report EOS on parsing/internal error for not running stream
    - BUG/MINOR: mux-h1: Never handle error at mux level for running connection
- BUG/MEDIUM: stats: Rely on a local trash buffer to dump the stats
- OPTIM: pool: split the read_mostly from read_write parts in pool_head
- MINOR: pool: make the thread-local hot cache size configurable
- MINOR: freq_ctr: add opportunistic versions of swrate_add()
- MINOR: pool: only use opportunistic versions of the swrate_add() functions
- REGTESTS: ssl: enable the ssl_reuse.vtc test for WolfSSL
- BUG/MEDIUM: mux-quic: fix double delete from qcc.opening_list
- BUG/MEDIUM: quic: properly take shards into account on bind lines
- BUG/MINOR: quic: do not allocate more rxbufs than necessary
- MINOR: ssl: Add a lock to the OCSP response tree
- MINOR: httpclient: Make the CLI flags public for future use
- MINOR: ssl: Add helper function that extracts an OCSP URI from a certificate
- MINOR: ssl: Add OCSP request helper function
- MINOR: ssl: Add helper function that checks the validity of an OCSP response
- MINOR: ssl: Add "update ssl ocsp-response" cli command
- MEDIUM: ssl: Add ocsp_certid in ckch structure and discard ocsp buffer early
- MINOR: ssl: Add ocsp_update_tree and helper functions
- MINOR: ssl: Add crt-list ocsp-update option
- MINOR: ssl: Store 'ocsp-update' mode in the ckch_data and check for inconsistencies
- MEDIUM: ssl: Insert ocsp responses in update tree when needed
- MEDIUM: ssl: Add ocsp update task main function
- MEDIUM: ssl: Start update task if at least one ocsp-update option is set to on
- DOC: ssl: Add documentation for ocsp-update option
- REGTESTS: ssl: Add tests for ocsp auto update mechanism
- MINOR: ssl: Move OCSP code to a dedicated source file
- BUG/MINOR: ssl/ocsp: check chunk_strcpy() in ssl_ocsp_get_uri_from_cert()
- CLEANUP: ssl/ocsp: add spaces around operators
- BUG/MEDIUM: mux-h2: Refuse interim responses with end-stream flag set
- BUG/MINOR: pool/stats: Use ullong to report total pool usage in bytes in stats
- BUG/MINOR: ssl/ocsp: httpclient blocked when doing a GET
- MINOR: httpclient: don't add body when istlen is empty
- MEDIUM: httpclient: change the default log format to skip duplicate proxy data
- BUG/MINOR: httpclient/log: free of invalid ptr with httpclient_log_format
- MEDIUM: mux-quic: implement shutw
- MINOR: mux-quic: do not count stream flow-control if already closed
- MINOR: mux-quic: handle RESET_STREAM reception
- MEDIUM: mux-quic: implement STOP_SENDING emission
- MINOR: h3: use stream error when needed instead of connection
- CI: github: enable github api authentication for OpenSSL tags read
- BUG/MINOR: mux-quic: ignore remote unidirectional stream close
- CI: github: use the GITHUB_TOKEN instead of a manually generated token
- BUILD: makefile: build the features list dynamically
- BUILD: makefile: move common options-oriented macros to include/make/options.mk
- BUILD: makefile: sort the features list
- BUILD: makefile: initialize all build options' variables at once
- BUILD: makefile: add a function to collect all options' CFLAGS/LDFLAGS
- BUILD: makefile: start to automatically collect CFLAGS/LDFLAGS
- BUILD: makefile: ensure that all USE_* handlers appear before CFLAGS are used
- BUILD: makefile: clean the wolfssl include and lib generation rules
- BUILD: makefile: make sure to also ignore SSL_INC when using wolfssl
- BUILD: makefile: reference libdl only once
- BUILD: makefile: make sure LUA_INC and LUA_LIB are always initialized
- BUILD: makefile: do not restrict Lua's prepend path to empty LUA_LIB_NAME
- BUILD: makefile: never force -latomic, set USE_LIBATOMIC instead
- BUILD: makefile: add an implicit USE_MATH variable for -lm
- BUILD: makefile: properly report USE_PCRE/USE_PCRE2 in features
- CLEANUP: makefile: properly indent ifeq/ifneq conditional blocks
- BUILD: makefile: rework 51D to split v3/v4
- BUILD: makefile: support LIBCRYPT_LDFLAGS
- BUILD: makefile: support RT_LDFLAGS
- BUILD: makefile: support THREAD_LDFLAGS
- BUILD: makefile: support BACKTRACE_LDFLAGS
- BUILD: makefile: support SYSTEMD_LDFLAGS
- BUILD: makefile: support ZLIB_CFLAGS and ZLIB_LDFLAGS
- BUILD: makefile: support ENGINE_CFLAGS
- BUILD: makefile: support OPENSSL_CFLAGS and OPENSSL_LDFLAGS
- BUILD: makefile: support WOLFSSL_CFLAGS and WOLFSSL_LDFLAGS
- BUILD: makefile: support LUA_CFLAGS and LUA_LDFLAGS
- BUILD: makefile: support DEVICEATLAS_CFLAGS and DEVICEATLAS_LDFLAGS
- BUILD: makefile: support PCRE[2]_CFLAGS and PCRE[2]_LDFLAGS
- BUILD: makefile: refactor support for 51DEGREES v3/v4
- BUILD: makefile: support WURFL_CFLAGS and WURFL_LDFLAGS
- BUILD: makefile: make all OpenSSL variants use the same settings
- BUILD: makefile: remove the special case of the SSL option
- BUILD: makefile: only consider settings from enabled options
- BUILD: makefile: also list per-option settings in 'make opts'
- BUG/MINOR: debug: don't mask the TH_FL_STUCK flag before dumping threads
- MINOR: cfgparse-ssl: avoid a possible crash on OOM in ssl_bind_parse_npn()
- BUG/MINOR: ssl: Missing goto in error path in ocsp update code
- BUG/MINOR: stick-table: report the correct action name in error message
- CI: Improve headline in matrix.py
- CI: Add in-memory cache for the latest OpenSSL/LibreSSL
- CI: Use proper `if` blocks instead of conditional expressions in matrix.py
- CI: Unify the `GITHUB_TOKEN` name across matrix.py and vtest.yml
- CI: Explicitly check environment variable against `None` in matrix.py
- CI: Reformat `matrix.py` using `black`
- MINOR: config: add environment variables for default log format
- REGTESTS: Remove REQUIRE_VERSION=1.9 from all tests
- REGTESTS: Remove REQUIRE_VERSION=2.0 from all tests
- REGTESTS: Remove tests with REQUIRE_VERSION_BELOW=1.9
- BUG/MINOR: http-fetch: Only fill txn status during prefetch if not already set
- BUG/MAJOR: buf: Fix copy of wrapping output data when a buffer is realigned
- DOC: config: fix alphabetical ordering of http-after-response rules
- MINOR: http-rules: Add missing actions in http-after-response ruleset
- DOC: config: remove duplicated "http-response sc-set-gpt0" directive
- BUG/MINOR: proxy: free orgto_hdr_name in free_proxy()
- REGTEST: fix the race conditions in json_query.vtc
- REGTEST: fix the race conditions in add_item.vtc
- REGTEST: fix the race conditions in digest.vtc
- REGTEST: fix the race conditions in hmac.vtc
- BUG/MINOR: fd: avoid bad tgid assertion in fd_delete() from deinit()
- BUG/MINOR: http: Memory leak of http redirect rules' format string
- MEDIUM: stick-table: set the track-sc limit at boottime via tune.stick-counters
- MINOR: stick-table: implement the sc-add-gpc() action
The number of stick-counter entries usable by track-sc rules is currently
set at build time. There is no good value for this since the vast majority
of users don't need any, most need only a few and rare users need more.
Adding more counters for everyone increases memory and CPU usage for no
reason.
This patch moves the per-session and per-stream arrays to a pool of a size
defined at boot time. This way it becomes possible to set the number of
entries at boot time via a new global setting "tune.stick-counters" that
sets the limit for the whole process. When not set, the MAX_SESS_STKCTR
build-time value (3 by default) still applies, as before.
It is also possible to lower the value to 0 to save a bit of memory if
not used at all.
Note that a few low-level sample-fetch functions had to be protected due
to the ability to use sample-fetches in the global section to set some
variables.
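For example, a setup that needs more trackable counters can raise the
limit with a single global-section line (the value below is arbitrary):

    global
        tune.stick-counters 10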
This patch provides a convenient way to override the default TCP, HTTP
and HTTPS log formats. Instead of having to look into the documentation
to figure out what the appropriate default log format is, three new
environment variables can be used: HAPROXY_TCP_LOG_FMT,
HAPROXY_HTTP_LOG_FMT and HAPROXY_HTTPS_LOG_FMT. Their contents are
substituted verbatim.
These variables are set before parsing the configuration and are unset
just after all configuration files are successfully parsed.
Example:
# Instead of writing this long log-format line...
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
lr=last_rule_file:last_rule_line"
# ...the HAPROXY_HTTP_LOG_FMT can be used to provide the default
# http log-format string
log-format "${HAPROXY_HTTP_LOG_FMT} lr=last_rule_file:last_rule_line"
Please note that nothing prevents users from unsetting the variables or
overriding their content in a global section.
Signed-off-by: Sébastien Gross <sgross@haproxy.com>
Till now it was only possible to change the thread-local hot cache size
at build time using CONFIG_HAP_POOL_CACHE_SIZE. But during benchmarks,
huge contention was sometimes noticed in the lower-level memory
allocators, indicating that larger caches could be beneficial, especially
on machines with large CPU L2 caches.
Given that the checks against this value are no longer on a hot path,
there was no reason to keep forcing it to be tuned at build time. So this
patch allows it to be set with tune.memory.hot-size.
It's worth noting that during the boot phase the value remains zero, so
that it's possible to know whether the value was set or not, which opens
the possibility of automatically adjusting it based on the per-CPU L2
cache size or the use of certain protocols (none of this is done yet).
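For instance, raising the per-thread hot cache to one megabyte is a
single global-section line (the value is a size in bytes and purely
illustrative):

    global
        tune.memory.hot-size 1048576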
In ticket #1956, it was reported that an upgrade from 2.6 to 2.7 via a
reload would stop the master process.
When upgrading the binary, the new process is considered a reexec and
does not try to create the socketpair for the mcli_reload listener, then
tries to bind on -1 since the socket doesn't exist. The failure provokes
an exit() of the master.
This patch fixes the issue by trying to create the mcli_reload sockets
only when they don't exist, instead of creating them at first start.
This way we also avoid a possible FD leak since we always try to use the
existing FDs first.
Must be backported to 2.7.
When the mworker wait mode fails, it exits, but there is no error
message saying why it exits.
Add a message specifying that the error is non-recoverable.
Could be backported to 2.7 and possibly earlier branches.
Activate the per-connection QUIC socket by default to achieve the best
performance. The previous behavior can be restored via the
tune.quic.socket-owner configuration option.
This change is part of the quic-conn owned socket implementation.
Contrary to its sibling patches, I suggest not backporting it to 2.7.
This should ensure that the stable release's behavior is preserved. If a
user faces issues with QUIC performance on 2.7, they can nonetheless
change the default configuration.
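For reference, restoring the previous per-listener behavior takes a
single global-section line, assuming the documented "listener" value:

    global
        tune.quic.socket-owner listener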
This adds a USE_OPENSSL_WOLFSSL option; wolfSSL is used through its
OpenSSL compatibility layer, so this must be combined with USE_OPENSSL=1.
WolfSSL build options:
./configure --prefix=/opt/wolfssl --enable-haproxy
HAProxy build options:
USE_OPENSSL=1 USE_OPENSSL_WOLFSSL=1 WOLFSSL_INC=/opt/wolfssl/include/ WOLFSSL_LIB=/opt/wolfssl/lib/ ADDLIB='-Wl,-rpath=/opt/wolfssl/lib'
This requires at least WolfSSL commit 54466b6 ("Merge pull request #5810
from Uriah-wolfSSL/haproxy-integration") (2022-11-23).
This is still to be improved; reg-tests are not supported yet and more
testing remains to be done.
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
If no cluster-secret is defined by the user, a random one is silently
generated.
This ensures that at least QUIC Retry tokens are generated if abnormal
conditions are detected. However, it is advisable to specify it in the
configuration so that tokens remain valid even after a reload or across
LB instances in the same cluster.
This should be backported up to 2.6.
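For example, a shared secret can be set with a single global-section
line (the value below is obviously just a placeholder):

    global
        cluster-secret my-shared-cluster-secret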
The SSL_load_error_strings function was marked as deprecated in OpenSSL
1.1.0 so compiling HAProxy with OPENSSL_NO_DEPRECATED set and a recent
OpenSSL library would fail.
The manpages say that this function was replaced by OPENSSL_init_crypto
and OPENSSL_init_ssl, which are already called at startup by the SSL
lib. We do not seem to be in a case where an explicit call to those
functions is required.
This patch fixes GitHub issue #1813.
It can be backported to 2.6.
Once in a while we spot a bug in the deinit code that is complex,
especially when it has to deal with incomplete initializations, and the
ability to bypass this step has regularly been raised. In addition, for
fast-reloading setups, it could theoretically save some time. Tests have
shown that very large configs can barely save ~100-150ms by skipping the
deinit step. However the ability not to crash if a bug is encountered can
occasionally help.
This patch adds an option to do exactly this. It's obviously not enabled
by default and the documentation discourages using it, but this might
be useful in the future.
When compiled with USE_SHM_OPEN=1, the startup-logs are now able to use
a shm which is used to keep the logs when switching to mworker wait
mode. This allows keeping the logs of a failed reload.
When allocating the startup-logs at the first start of the process,
haproxy does a shm_open() with a unique path built from the PID of the
process; the file is unlinked immediately so we don't leave unwanted
files behind. The fd resulting from this shm is stored in the
HAPROXY_STARTUPLOGS_FD environment variable so the area can be mmapped
again when switching to wait mode.
When forking children, the process copies the mmapped area to a
malloc'ed ring so we never share the same memory section between the
master and the workers. When switching to wait mode, the shm is not used
anymore either, as it is also copied to a malloc'ed structure.
This allows using the "show startup-logs" command over the master CLI
to get the logs of the latest startup or reload. This way the logs of
the latest failed reload are also kept.
This is only activated on the linux-glibc target for now.
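The allocation pattern described above roughly boils down to the
following sketch (simplified, error handling omitted; the size constant
and function name are illustrative, not haproxy's). Depending on the
libc, linking with -lrt may be required for shm_open():

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define STARTUP_LOGS_SIZE 65536   /* illustrative size */

    static void *startup_logs_alloc_shm(void)
    {
        char path[64], fdstr[16];
        void *area;
        int fd;

        /* unique, immediately unlinked name: nothing is left on disk */
        snprintf(path, sizeof(path), "/haproxy_startup_logs_%d", getpid());
        fd = shm_open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
        shm_unlink(path);
        ftruncate(fd, STARTUP_LOGS_SIZE);
        area = mmap(NULL, STARTUP_LOGS_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);

        /* keep the fd across the re-exec into wait mode so the area
         * can be mmapped again from the new process
         */
        fcntl(fd, F_SETFD, 0);  /* clear FD_CLOEXEC set by shm_open() */
        snprintf(fdstr, sizeof(fdstr), "%d", fd);
        setenv("HAPROXY_STARTUPLOGS_FD", fdstr, 1);
        return area;
    }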
As seen in issue #1866, some environments will not allow changing the
current FD limit, and we don't actually need to do it; we only do it as
a byproduct of adjusting the limit to the one that fits. Here we're
replacing calls to setrlimit() with calls to raise_rlim_nofile(), which
avoids making the setrlimit() syscall when the desired value is lower
than the current process's one.
This depends on previous commit "MINOR: fd: add a new function to only
raise RLIMIT_NOFILE" and may need to be backported to 2.6, possibly
earlier, depending on users' experience in such environments.
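The idea behind such a helper can be sketched as follows (a simplified
illustration with a made-up name, not haproxy's exact implementation):

    #include <sys/resource.h>

    /* only issue setrlimit() when the requested soft limit is higher
     * than the current one, so that restricted environments which
     * forbid any change are left alone when the limit already fits
     */
    static int raise_nofile_limit(const struct rlimit *wanted)
    {
        struct rlimit cur;

        if (getrlimit(RLIMIT_NOFILE, &cur) == 0 &&
            wanted->rlim_cur <= cur.rlim_cur)
            return 0;
        return setrlimit(RLIMIT_NOFILE, wanted);
    }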
The xprt_quic module was too large and did not reflect the true
architecture, in contrast with the other protocols in haproxy.
Extract the code related to the XPRT layer and keep it under the
xprt_quic module. This code should only contain a simple API to
communicate between the QUIC lower layer and the connection/MUX.
The vast majority of the code has been moved into a new module named
quic_conn. This module is responsible for the implementation of the QUIC
lower layer. Conceptually, it overlaps with the kernel's TCP
implementation when comparing haproxy's QUIC and HTTP/1-2 stacks.
This should be backported up to 2.6.
Add an option to dump the line numbers of the configuration file when
it is dumped. Other options can be easily added. Options are separated
by ',' on the command line:
'./haproxy -dC[key],line -f [file]'
No backport needed, unless the anonymization mechanism is backported.
The environment variable HAPROXY_LOAD_SUCCESS stores "1" if the
configuration was successfully loaded and the process started, "0"
otherwise.
The "_loadstatus" master CLI command displays either
"Loading failure!\n" or "Loading success.\n"
When using the "reload" command over the master CLI, all connections to
the master CLI were cut, this was unfortunate because it could have been
used to implement a synchronous reload command.
This patch implements an architecture to keep the connection alive after
the reload.
The master CLI is now equipped with a listener which uses a socketpair,
the 2 FDs of this socketpair are stored in the mworker_proc of the
master, which the master keeps via the environment variable.
ipc_fd[1] is used as a listener for the master CLI. During the "reload"
command, the CLI will send the FD of the current session over ipc_fd[0],
then the reload is achieved, so the master won't handle the recv of the
FD. Once reloaded, ipc_fd[1] receives the FD of the session, so the
connection is preserved. Of course it is a new context, so settings
such as the "prompt mode" are lost.
Only the FD which performs the reload is kept.
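Passing the session FD from ipc_fd[0] to ipc_fd[1] relies on the usual
SCM_RIGHTS ancillary-data mechanism; a minimal sketch of the sending
side (error handling omitted, helper name made up for the example):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* send the fd of the current CLI session over the socketpair so
     * the re-executed master can resume the connection
     */
    static ssize_t send_session_fd(int sock, int fd_to_pass)
    {
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg;

        memset(u.buf, 0, sizeof(u.buf));
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));
        return sendmsg(sock, &msg, 0);
    }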
This commit adds a new command line option -dC to dump the configuration
file. An optional key may be appended to -dC in order to produce an
anonymized dump using this key. The anonymizing process uses the same
algorithm as the CLI so that the same key will produce the same hashes
for the same identifiers. This way an admin may share an anonymized
extract of a configuration to match against live dumps. Note that key 0
will not anonymize the output. However, in any case, the configuration
is dumped after tokenizing, thus comments are lost.
Add self-wake in signal_handler() to fix a race condition with a signal
coming in between checking signal_queue_len and entering polling sleep.
The changes in commit 43c891dda ("BUG/MINOR: signals/poller: set the
poller timeout to 0 when there are signals") were insufficient.
Move the signal_queue_len check from the poll implementations to
run_poll_loop() to keep that logic in one place.
The poll loops are terminated either by the parameter wake being set or
by being woken up by a write to their poller_wr_pipe performed by
wake_thread() in signal_handler().
This fixes issue #1841.
Must be backported to every stable version.
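The underlying pattern is the classic self-pipe trick: the handler
writes to a pipe the poller watches, so a signal landing after the
signal_queue_len check but before the poller goes to sleep still makes
the sleep end immediately. A generic sketch (not haproxy's actual code):

    #include <signal.h>
    #include <unistd.h>

    static int poller_wr_pipe[2];        /* read side watched by the poller */
    static volatile sig_atomic_t signal_queue_len;

    static void handle_signal(int sig)
    {
        char c = (char)sig;

        signal_queue_len++;
        /* self-wake: even if the poller already passed its queue check,
         * this write makes the pending poll/epoll call return at once
         */
        (void)write(poller_wr_pipe[1], &c, 1);
    }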
Christopher bisected that recent commit d0b73bca71 ("MEDIUM: listener:
switch bind_thread from global to group-local") broke the master socket
in that only the first out of the N initial connections would work,
where N is the number of threads, after which they all work.
The cause is that the master socket was bound to multiple threads,
despite global.nbthread being 1 there, so the incoming connection load
balancing would try to send incoming connections to non-existing threads,
however the bind_thread mask would nonetheless include multiple threads.
What happened is that in 1.9 we forced "nbthread" to 1 in the master's poll
loop with commit b3f2be338b ("MEDIUM: mworker: use the haproxy poll loop").
In 2.0, nbthread detection was enabled by default in commit 149ab779cc
("MAJOR: threads: enable one thread per CPU by default"). From this point
on, the operation above is unsafe because everything during startup is
performed with nbthread corresponding to the default value, then it
changes to one when starting the polling loop. But by then we weren't
using the wait mode except for reload errors, so even if it would have
happened nobody would have noticed.
In 2.5 with commit fab0fdce9 ("MEDIUM: mworker: reexec in waitpid mode
after successful loading") we started to re-execute all the time, not just
for errors, so as to release precious resources and to possibly spot bugs
that were rarely exposed in this mode. By then the incoming connection LB
was enforcing all_threads_mask on the listener's thread mask so that the
incorrect value was being corrected while using it.
Finally in 2.7 commit d0b73bca71 ("MEDIUM: listener: switch bind_thread
from global to group-local") replaces the all_threads_mask there with
the listener's bind_thread, but that one was never adjusted by the
starting master, whose thread group was filled to N threads by the
automatic detection during early setup.
The best approach here is to set nbthread to 1 very early in init()
when we're in the master in wait mode, so that we don't try to guess
the best value and don't end up with incorrect bindings anymore. This
patch does this and also sets nbtgroups to 1 in preparation for a
possible future where this will also be automatically calculated.
There is no need to backport this patch since no other versions were
affected, but if it were to be discovered that the incorrect bind mask
on some of the master's FDs could be responsible for any trouble in
older versions, then the backport should be safe (provided that
nbtgroups is dropped of course).
As reported in github issue #1765, some people get trapped into building
haproxy and companion libraries on Windows using a compiler following the
LLP64 model. This has no chance to work, and definitely causes nasty bugs
everywhere when pointers are passed as longs. Let's save them time and
detect this at boot time.
The message and detection were factored with the existing ones for
-fwrapv since we need the same info and actions.
This should be backported to all recent supported versions (the ones
that are likely to be tried on such platforms when people don't know).
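Conceptually the boot-time check boils down to something as simple as
this (a sketch; the actual message is shared with the -fwrapv check):

    #include <stdio.h>
    #include <stdlib.h>

    /* an LLP64 compiler (e.g. 64-bit Windows toolchains) has 32-bit
     * longs but 64-bit pointers, which the code base does not support
     */
    static void check_data_model(void)
    {
        if (sizeof(long) < sizeof(void *)) {
            fprintf(stderr, "Unsupported LLP64 data model detected, "
                            "please rebuild with an LP64 compiler.\n");
            exit(1);
        }
    }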
When updating from 2.4 to 2.6, the child->reloads++ instruction changed
place, resulting in a former worker from the 2.4 process still being
identified as a current worker once in 2.6, because its reload counter
is still 0.
Unfortunately this counter is used to choose the mworker_proc structure
that will be used for the new worker.
What happens next, is that the mworker_proc structure of the previous
process is selected, and this one has ipc_fd[1] set to -1, because this
structure was supposed to be in the master.
The process then forks, and mworker_sockpair_register_per_thread() tries
to register ipc_fd[1] which is set to -1, instead of the fd of the new
socketpair.
This patch fixes the issue by checking if child->pid is equal to -1 when
selecting proc_self. This way we can be sure it isn't a previous
process.
Should fix issue #1785.
This must be backported as far as 2.4 to fix the issue related to the
reload computation difference. However, backporting it to every stable
branch will strengthen the reload process.
Since these are not used anymore, let's now remove them. Given the
number of places where we're using ti->ltid_bit, maybe an equivalent
might be useful though.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
These old legacy pollers are not designed for this. They're still
using a shared list of events for all threads; this will not scale at
all, so there's no point in enabling thread groups there. Modern
systems have epoll, kqueue or event ports and do not need these ones.
We arrange to fail at boot time, only when thread-groups > 1, so
that existing setups will remain unaffected.
If there's a compelling reason for supporting thread groups with these
pollers in the future, the rework should not be too hard, it would just
consume a lot of memory to have an fd_evts[] array per thread, but that
is doable.
The sd_notify API is not able to change the "Active:" line in "systemctl
status". However, a message can still be displayed on a "Status:" line,
even if the service is still green and "active (running)".
When startup succeeds, the Status will be set to "Ready."; upon a reload
it will be set to "Reloading Configuration.", then back to "Ready." if
the reload succeeds. However, if the reload failed, it will be set to
"Reload failed!".
Keep in mind that the "Active:" line won't change upon a reload failure,
and will still be green.
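The Status line is driven through the standard sd_notify(3) call from
libsystemd; for example (a sketch assuming libsystemd headers are
available, the helper names are made up):

    #include <systemd/sd-daemon.h>

    /* only the "Status:" line of "systemctl status" is affected; the
     * "Active:" line stays "active (running)"
     */
    static void report_ready(void)
    {
        sd_notify(0, "READY=1\nSTATUS=Ready.\n");
    }

    static void report_reload_failure(void)
    {
        sd_notify(0, "STATUS=Reload failed!\n");
    }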
As much as possible we should take care not to leave bits from stopped
threads in shared thread masks. It avoids issues like the one addressed
by the previous fix and will also make debugging less confusing.
When soft-stopping, there's a comparison between stopping_threads and
threads_enabled to make sure all threads are stopped, but this is not
correct and is racy since the threads_enabled bit is removed when a
thread is stopped but not its stopping_threads bit. The consequence is
that depending on timing, when stopping, if the first stopping thread
is fast enough to remove its bit from threads_enabled, the other threads
will see that stopping_threads doesn't match threads_enabled anymore and
will wait forever. As such the mask must be applied to stopping_threads
during the test. This issue was introduced in recent commit ef422ced9
("MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups"),
no backport is needed.
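In short, the test must only consider the bits of threads that are still
enabled; conceptually the fix turns an exact comparison into a masked
one, as in this simplified sketch (pseudo-variables, not the exact code):

    /* simplified illustration of the corrected test */
    static void wait_stopped(const volatile unsigned long *stopping_threads,
                             const volatile unsigned long *threads_enabled)
    {
        /* racy form: a stopped thread has already cleared its bit in
         * threads_enabled, so equality may never be reached:
         *     while (*stopping_threads != *threads_enabled) ;
         * fixed form: ignore bits of threads that are no longer enabled
         */
        while ((*stopping_threads & *threads_enabled) != *threads_enabled)
            ;
    }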
Commit ef422ced9 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") moved the stopping_threads mask to per-group, but one
test in the loop preserved its global value instead, resulting in stopping
threads never sleeping on stop and eating 100% CPU until all were stopped.
No backport is needed.
Commit 377e37a80 ("MINOR: tinfo: add the mask of enabled threads in each
group") forgot -1 on the tgid, thus the groups was not always correctly
tested, which is visible only when running with more than one group. No
backport is needed.
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure out which thread is the
first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.
Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to clarify which threads are counted there (in case we allow stopping
some in the future).
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
frontend test
bind :8888 maxconn 10 thread 1-2
rate-limit sessions 5
Alternatively, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
Right now when an inter-thread wakeup happens, we first check whether the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean, because the sleeping mask cannot
be entirely trusted: a thread that's about to wake up will already have
its sleeping bit removed.
This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
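A minimal sketch of the resulting wake-up pattern using C11 atomics (the
flag names are taken from the description above; everything else,
including the flag values and helpers, is simplified for the example):

    #include <stdatomic.h>

    #define TH_FL_SLEEPING  0x01   /* illustrative values */
    #define TH_FL_NOTIFIED  0x02

    struct th_flags { _Atomic unsigned int flags; };

    /* notifier side: a single fetch-and-or sets NOTIFIED and tells us in
     * one operation whether the target was sleeping and not yet notified,
     * in which case its poller must be woken up (e.g. via the poller pipe)
     */
    static int notify_thread(struct th_flags *th)
    {
        unsigned int prev = atomic_fetch_or(&th->flags, TH_FL_NOTIFIED);

        return (prev & (TH_FL_SLEEPING | TH_FL_NOTIFIED)) == TH_FL_SLEEPING;
    }

    /* woken thread: clear NOTIFIED before scanning the task lists so
     * that wakeups arriving from this point on are considered again
     */
    static void ack_notification(struct th_flags *th)
    {
        atomic_fetch_and(&th->flags, ~TH_FL_NOTIFIED);
    }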
In bug #1751, it was reported that haproxy consumes too much memory
since the 2.4 version. This is because of a change in the master, which
completely loses its configuration in wait mode, including its maxconn.
Without the maxconn, haproxy will try to compute one itself and will
allocate RAM accordingly, too much in our case. This means the master
will have a maxconn that is too high and too much RAM allocated.
The patch fixes the issue by setting the maxconn to the default value
when re-executing the master in wait mode.
Must be backported as far as 2.5.
It is becoming difficult to distinguish the default values for
transport parameters which come from the RFC from our implementation's
default values when not set by configuration (tunable parameters).
Add a comment to distinguish them.
Prefix these default values with QUIC_TP_DFLT_ to distinguish them from
QUIC_DFLT_* values, even if they are not numerous.
Furthermore, ->max_udp_payload_size must first be initialized to
QUIC_TP_DFLT_MAX_UDP_PAYLOAD_SIZE, especially for received values.
Add tunable "tune.quic.frontend.max_streams_bidi" setting for QUIC frontends
to set the "initial_max_streams_bidi" transport parameter.
Add some documentation for this new setting.
Add two tunable settings both for backends and frontends "max_idle_timeout"
QUIC transport parameter, "tune.quic.frontend.max-idle-timeout" and
"tune.quic.backend.max-idle-timeout" respectively.
cfg_parse_quic_time() has been implemented to parse a time value thanks
to parse_time_err(). It should be reused for any tunable time value to be
parsed.
Add the documentation for this tunable setting only for the frontend.
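For example, both frontend tunables go in the global section (the values
below are arbitrary):

    global
        tune.quic.frontend.max-streams-bidi 100
        tune.quic.frontend.max-idle-timeout 30s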
Commit 2cb3be76b ("CLEANUP: init: address a coverity warning about
possible multiply overflow") was incomplete; two other locations were
still present. This should address issue #1585.
In issue #1585, Coverity suspects a risk of multiply overflow when
calculating the SSL cache size, though in practice the cache is
limited to 2^32 anyway, thus it cannot really happen. Nevertheless,
casting the operation should be sufficient to avoid having to mark it
as a false positive.
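The pattern is simply to promote one operand before the multiplication
so it is performed on 64 bits, e.g. (illustrative values and variable
names):

    #include <stdio.h>

    int main(void)
    {
        unsigned int cache_entries   = 3000000;  /* illustrative values */
        unsigned int bytes_per_entry = 2048;

        /* performed on 32 bits: silently overflows */
        unsigned long long wrong = cache_entries * bytes_per_entry;

        /* one cast is enough to get a 64-bit multiplication */
        unsigned long long right = (unsigned long long)cache_entries *
                                   bytes_per_entry;

        printf("%llu vs %llu\n", wrong, right);
        return 0;
    }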