This is the continuation of the previous commits. Here, after the fork, the
worker process closes the "master" end of the copied CLI sockpair and binds
its end, ipc_fd[1], to the GLOBAL proxy listener.
mworker_cli_global_proxy_new_listener() guarantees that the GLOBAL proxy
will be created if it does not exist yet.
The master process at first allocates the MASTER proxy, creates the master
CLI listener (-S command line option) and the reload sockpair, and then
closes the "worker" end of the copied CLI sockpair and binds its end,
ipc_fd[0], to the created MASTER proxy.
Usage of the new PROC_O_INIT state helps to reduce the number of test
conditions needed to find the newly forked worker.
To distinguish the new worker from the previous one, let's add a new process
state PROC_O_INIT and set it when the memory is allocated for the new worker
in the processes list.
For the master process we always need to create a MASTER proxy, even if the
master CLI settings were not provided on the command line, because we now
bind a listener at ipc_fd[0] in the master process context. So, the MASTER
proxy must already be allocated at this point.
The main idea here is to create the inherited master CLI sockpair just
before the master-worker fork, and only then, after the fork, to let each
process bind the needed listener to its end of this sockpair. This way,
master and worker processes can close the unused "end" of their sockpair
copy (ipc_fd[0] for the worker and ipc_fd[1] for the master), as the sketch
below illustrates.
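A minimal sketch of this pattern (plain socketpair() and fork(), not the
actual haproxy code) could look like this:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int ipc_fd[2];

        /* inherited master CLI sockpair, created just before the fork */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, ipc_fd) < 0) {
            perror("socketpair");
            return 1;
        }

        switch (fork()) {
        case -1:
            perror("fork");
            return 1;
        case 0:
            /* worker: close the unused "master" end and keep ipc_fd[1],
             * on which the GLOBAL proxy listener will be bound
             */
            close(ipc_fd[0]);
            break;
        default:
            /* master: close the unused "worker" end and keep ipc_fd[0],
             * on which the MASTER proxy listener will be bound
             */
            close(ipc_fd[1]);
            break;
        }
        return 0;
    }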
When this sockpair creation happens inside
mworker_cli_global_proxy_new_listener(), it is not possible for the master
to close ipc_fd[1], bound to the GLOBAL proxy listener, as this triggers a
BUG_ON(fd->owner) in fd_insert() in the master context: the master process
has already entered its polling loop and the poller, in its turn, tries to
reuse the closed fd.
Let's rename mworker_cli_sockpair_new() to
mworker_cli_global_proxy_new_listener() to outline that this function creates
the GLOBAL proxy, allocates the listener with "master-socket" bind conf and
attaches this listener to this GLOBAL proxy. The listener is bound to
ipc_fd[1] of the sockpair inherited in the master and in the worker (the
master CLI sockpair).
*thread keyword parsers are sensitive to the global section position: if
such keywords are present, the global section must be the first section of
the configuration. The *thread parsers logic is based on the
non_global_section_parsed counter. So, we need to reset it explicitly before
the second configuration read done by the worker or in standalone mode.
This commit is part of the series to add support of discovery mode in the
configuration parser and in the initialization sequence.
In order to support discovery mode, we need to read the configuration twice.
So, we need to split the stage where we load all configuration files from
the stage where we parse them. To do this, let's move the part of read_cfg()
that loads the configuration files into a separate function, load_cfg().
This way we can call only the parsing part as many times as we need.
Before reading the configuration for the first time we set MODE_DISCOVERY.
After this read the mode is immediately unset, as the real runtime mode has
already been set by the discovery keyword parsers.
The second read is performed once all primary runtime modes (daemon,
master-worker) are applied, because we should not read the configuration
twice in the master process.
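The intended two-pass flow can be sketched as a standalone C program, where
load_cfg(), read_cfg() and MODE_DISCOVERY are stand-ins for the haproxy
internals named above:

    #include <stdio.h>

    #define MODE_DISCOVERY 0x1
    static int mode;

    static void load_cfg(void) { puts("configuration files loaded in memory"); }

    static void read_cfg(void)
    {
        if (mode & MODE_DISCOVERY)
            puts("pass 1: discovery keywords only");
        else
            puts("pass 2: full parsing");
    }

    int main(void)
    {
        load_cfg();              /* load every file only once */

        mode |= MODE_DISCOVERY;  /* first read: discovery mode */
        read_cfg();
        mode &= ~MODE_DISCOVERY; /* runtime mode is known from now on */

        /* ... primary runtime modes applied, master-worker fork ... */

        read_cfg();              /* second read, e.g. in the worker */
        return 0;
    }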
MODE_MWORKER_WAIT becomes redundant with MODE_MWORKER, due to moving the
master-worker fork into init(). With this change the master no longer needs
to perform a reexec just after forking in order to free additional memory.
As we set the 'master' variable after the fork in the master process
context, we can now replace MODE_MWORKER_WAIT in some 'if' statements by a
simple check of this 'master' variable.
Let's also continue to get rid of the HAPROXY_MWORKER_WAIT_ONLY environment
variable, as it's no longer needed either.
In cfg_program_postparser(), which is used to check whether cmdline is
defined to launch a program, we completely remove the mode check for now,
because the master process does not parse the configuration for the moment.
'program' section parsing will be reintroduced in the master in the next
commits.
As we no longer support MODE_MWORKER_WAIT for the master (it became
redundant with MODE_MWORKER after moving the master-worker fork into
init()), let's rename the exit_on_waitmode_failure() callback to just
exit_on_failure().
This is the first commit to prepare the removal of MODE_MWORKER_WAIT
support. It has become redundant with MODE_MWORKER, due to moving the
master-worker fork into init(). The master process no longer performs a
reexec to free additional memory after forking, and no longer changes its
mode to MODE_MWORKER_WAIT, in which it used to enter its wait polling loop
and handle signals. Now the master enters this loop almost immediately after
forking a worker, always staying in MODE_MWORKER.
So, we can remove the mworker_reexec_waitmode() wrapper, which was used to
set the HAPROXY_MWORKER_WAIT_ONLY variable and to call mworker_reexec(). But
let's keep for the moment the logic of the reexec_on_failure() atexit
callback for the master, in case we need to support this again in the
future.
Due to moving the master-worker fork into init(), we need to protect the
prepare_caps_from_permitted_set() call, which is executed after init(). This
call makes sense only for the worker, daemon and foreground mono-process
modes.
prepare_caps_from_permitted_set() allows reading Linux capabilities from the
haproxy binary and moving some of them into the process Effective set, if
the 'setcap' keyword lists the needed capabilities in the global section.
There are two set_identity() calls, both under almost the same condition:
'if ((global.mode & (MODE_MWORKER|MODE_DAEMON...)...'
The first call serves to change uid/gid and to set some needed Linux
capabilities, but only for a process in foreground mode. The second comes
after the master-worker fork and allows doing the same in daemon and worker
modes.
Due to moving the master-worker fork into init() in a previous commit, the
second set_identity() is no longer under the 'if'. So, it is executed for
all modes except MODE_MWORKER. In MODE_MWORKER the process now enters its
wait polling loop just after forking a worker, and it terminates almost
immediately if it exits this loop.
Worker, daemon and foreground processes will perform set_identity() as
before, but now it is called in one place in main().
global.last_checks should be verified just after the set_identity() call. As
stated in the comments, some configuration options may require full
privileges, or some Linux capabilities may need to be granted to the
process. set_identity(), via prepare_caps_for_setuid(), may put the
configured capabilities into the process Effective set and, hence, remove
the respective flag from global.last_checks.
There are two 'chroot' code blocks, both under almost the same condition:
'if ((global.mode & (MODE_MWORKER|MODE_DAEMON...)...'
The first block serves to perform the chroot only for a process in
foreground mode. The second comes after the master-worker fork and allows
doing the chroot in daemon and worker modes.
Due to moving the master-worker fork into init() in a previous commit, the
second 'chroot' code block is no longer under the 'if'. So, it is executed
for all modes except MODE_MWORKER. In MODE_MWORKER the process now enters
its wait polling loop just after forking a worker, and it terminates almost
immediately if it exits this loop.
Worker, daemon and foreground processes will perform the chroot as before,
but now it is done in one place in main().
mworker_create_master_cli() creates the MASTER proxy and allocates the
listeners attached to this proxy. It also creates the reload sockpair.
So, it's more appropriate to check that we are in MODE_MWORKER, when master
CLI settings were provided on the command line, just after the config
parsing. Only then, if the runtime mode and the command line settings are
coherent, do we try to perform the master-worker fork and to create the
master CLI.
After moving the master-worker fork into init() and reintroducing it into a
switch-case (see the previous commit), it is more appropriate to set
nbthread=1 and nbtgroups=1 immediately in the 'case' for the parent
process.
Before this fix, the startup logs ring was duplicated before the fork(), so
master and worker both had the original startup_logs ring and the duplicated
one. In the worker context we freed the original ring and used the
duplicated one. In the master context we did nothing, yet we still created a
duplicated copy again and again during each reload.
So, let's duplicate the startup logs ring only in the worker context. The
master continues to use the original ring initialized in init() before its
fork().
This refactoring simplifies the 'master-worker' logic. With this change the
master process forks a worker very early at the initialization stage, which
allows performing the configuration parsing only in the worker. In reality
only the worker process needs to parse and apply the whole configuration.
The master process just polls the master CLI sockets, watches the worker
status, catches its termination state and handles the signals. With this
refactoring there is no longer a need for the master to perform a
re-execution after reading the whole configuration file in order to free
additional memory. And there is no longer a need for the worker to register
atexit callbacks to free the memory when it fails to apply the new
configuration. In contrast, we now need to set the proc_self pointer to the
new worker entry in the processes list just after the fork, in the worker
process context. proc_self is dereferenced in
mworker_sockpair_register_per_thread(), which is called when the worker
enters its polling loop.
Following patches will try to gather more 'worker'- and 'master'-specific
code in the dedicated cases of this new fork() switch, or in separate
functions.
Let's move the PID handling from the main() code into init(). It is more
appropriate to open the pidfile and write the PID of the process just after
the daemonization fork. In daemon mono-process mode, we simply write the PID
of the process, which is already in the background. In 'master-worker' mode,
we keep the previous behaviour and write only the PID of the master process.
This allows removing redundant tests of the process execution mode, tests of
the pidfd value and the consequent writes to this pidfd. This patch prepares
the refactoring of the master-worker fork by moving it into the init()
function as well.
Let's move the daemonization fork into init(). We always need to perform
this fork before forking a worker process, in order to be able to launch the
master and then its worker in daemon, i.e. background, mode, if haproxy was
started with the '-D' option.
This refactoring is a preparation step, needed for moving the master-worker
fork into init() as well. This allows the master process not to read the
whole configuration file and not to perform a re-execution to free
additional memory once the worker is forked. In the new refactored design
only the worker process reads and applies the new configuration, while the
master arrives very quickly in its polling loop to wait for the worker's
termination and to handle signals. See more details in the following
commits.
As the master process performs an execvp() syscall in mworker_reexec() to
handle the USR2 and HUP signals, let's add the O_CLOEXEC flag when we open
'/dev/null', in order to avoid an fd leak.
This is a preparation step to refactor the master-worker logic. See more
details in the next commits.
In proxies, stick-tables, servers, etc... at plenty of places we store
a file name and a line number. Some file names are the result of strdup()
(e.g. in proxies), others not (e.g. stick-tables) and leave dangling
pointers at the end of parsing. The risk of double-free is not null
either.
In order to stop this, let's first add a simple tool that allows to
register short strings inside a global list, these strings happening
to be file names. The strings are either duplicated and stored upon
failure to find them, or just added to this storage. Since file names
are not expected to disappear before the end of the process, for now
we don't even implement refcounting, and we free them all at the end.
There's already a drop_file_name() function to reset the pointer like
ha_free() used to do, and even if not strictly needed it's a good
habit to get used to doing it.
The strings are returned as const so that they're stored as-is in
structs, and that nasty free() calls are easily caught. The pointer
points to the char[] storage inside the node itself. This way later
if we want to implement refcounting, it will be trivial to just look
up a string and change its associated node's refcount. If needed,
comparisons can also be made on pointers.
For now they're not used yet and are released on deinit().
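A simplified sketch of such a registry, assuming a plain linked list where
the real implementation may use a different lookup structure:

    #include <stdlib.h>
    #include <string.h>

    struct file_name_node {
        struct file_name_node *next;
        char name[];                   /* storage inside the node itself */
    };

    static struct file_name_node *file_names;

    /* return the stored copy of <name>, duplicating it on first sight */
    const char *copy_file_name(const char *name)
    {
        struct file_name_node *node;

        for (node = file_names; node; node = node->next)
            if (strcmp(node->name, name) == 0)
                return node->name;     /* already registered, share it */

        node = malloc(sizeof(*node) + strlen(name) + 1);
        if (!node)
            return NULL;
        strcpy(node->name, name);
        node->next = file_names;
        file_names = node;
        return node->name;
    }

    /* release all registered names at once, e.g. on deinit() */
    void free_all_file_names(void)
    {
        struct file_name_node *node;

        while ((node = file_names)) {
            file_names = node->next;
            free(node);
        }
    }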
This fixes 7b78e1571 ("MINOR: mworker: restore initial env before wait
mode").
In cases when haproxy starts without any configuration, for example
'haproxy -vv', the init_env array used to back up env variables is never
allocated. So, when we free its memory in deinit(), we need to check that
init_env is not a NULL ptr.
This patch is the follow-up of 1811d2a6ba (MINOR: tools: add helpers to
backup/clean/restore env).
In order to avoid unexpected behaviour in master-worker mode during a
process reload with a new configuration, when the old one contained '*env'
keywords, let's back up the initial environment before calling parse_cfg(),
and let's clean and restore it in the master process context just before it
enters its wait polling loop.
This guarantees that new workers will get a freshly updated environment and
not the previous one inherited from the master, which does not read the
configuration when it's in wait mode.
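A minimal sketch of the backup step (the actual helpers from 1811d2a6ba may
differ in details):

    #include <stdlib.h>
    #include <string.h>

    extern char **environ;
    static char **init_env;    /* backup made before parse_cfg() */

    /* duplicate environ so that '*env' keywords applied during parsing
     * can be rolled back later in the master process context
     */
    static int backup_env(void)
    {
        size_t n = 0;

        while (environ[n])
            n++;
        init_env = calloc(n + 1, sizeof(*init_env));
        if (!init_env)
            return -1;
        for (size_t i = 0; i < n; i++) {
            init_env[i] = strdup(environ[i]);
            if (!init_env[i])
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        return backup_env() ? 1 : 0;
    }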
Define a new buffer pool reserved for allocating smaller memory areas. For
the moment, its usage will be restricted to QUIC; as such it is declared in
the quic_stream module.
Add a new config option "tune.bufsize.small" to specify the size of the
allocated objects. A special check ensures that it is not greater than the
default bufsize, to avoid unexpected effects.
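The bounds check may be pictured like this (the variable names are stand-ins
for the real tunables):

    #include <stdio.h>

    static int tune_bufsize = 16384;       /* tune.bufsize */
    static int tune_bufsize_small = 1024;  /* tune.bufsize.small */

    /* post-parsing sanity check: the small buffer size must not exceed
     * the default bufsize
     */
    int main(void)
    {
        if (tune_bufsize_small > tune_bufsize) {
            fprintf(stderr, "tune.bufsize.small (%d) must not be greater "
                    "than tune.bufsize (%d)\n",
                    tune_bufsize_small, tune_bufsize);
            return 1;
        }
        return 0;
    }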
The QUIC MUX buffer allocation limit is now directly based on the underlying
congestion window size. The previous static limit based on conn-tx-buffers
is now unused. As such, this commit adds a warning to inform users that this
setting is now obsolete.
Secondly, update the max-window-size setting. It is now the main entrypoint
to limit both the maximum congestion window size and the number of QUIC MUX
buffers allocated on emission. Remove its special value '0', which was used
to automatically adjust it on the now unused conn-tx-buffers.
Define a new global keyword tune.quic.frontend.max-window-size. It allows
setting globally the maximum congestion window size for each QUIC frontend
connection.
The default value is 0. It is a special value which automatically derives
the size from the configured QUIC connection buffer limit. This is similar
to the previous "quic-cc-algo" behavior, which can still be used to override
the maximum window size per bind line.
As readcfgfile no longer opens configuration files and reads them with fgets,
but performs only the parsing of provided data, let's rename it to parse_cfg by
analogy with read_cfg in haproxy.c.
Let's call the load_cfg_in_ram() helper for each configuration file to load
its content into an area in memory. Adapt the readcfgfile() parser function
accordingly. In order to limit the changes in its scope, we pass as an
argument a cfgfile structure, already filled in init_args() and in
load_cfg_in_ram() with the file metadata and content.
The parser function (readcfgfile()) now uses fgets_from_mem() instead of the
standard fgets() from libc, as sketched below.
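For illustration, an fgets()-style reader over a memory area could have the
following shape (the real fgets_from_mem() signature may differ):

    #include <stdio.h>
    #include <string.h>

    /* read at most <size>-1 bytes up to and including the next newline,
     * starting at *<pos>, and advance *<pos> past what was consumed
     */
    char *fgets_from_mem(char *dst, int size, const char **pos, const char *end)
    {
        const char *p = *pos;
        const char *nl;
        int len;

        if (p >= end || size <= 1)
            return NULL;

        len = end - p;
        if (len > size - 1)
            len = size - 1;

        nl = memchr(p, '\n', len);     /* stop after a newline, like fgets() */
        if (nl)
            len = nl - p + 1;

        memcpy(dst, p, len);
        dst[len] = '\0';
        *pos = p + len;
        return dst;
    }

    int main(void)
    {
        const char cfg[] = "global\n    daemon\n";
        const char *pos = cfg;
        char line[64];

        while (fgets_from_mem(line, sizeof(line), &pos, cfg + sizeof(cfg) - 1))
            printf("line: %s", line);
        return 0;
    }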
The SPOE filter parses its own configuration file, pointed to by the
'config' keyword, in the configuration already loaded in memory. So, let's
allocate and fill for it a supplementary cfgfile structure, which is not
referenced in the cfg_cfgfiles list. This structure and the memory holding
the SPOE filter configuration content are freed immediately in
parse_spoe_flt(), when readcfgfile() returns.
The HAProxy OpenTracing filter also uses its own configuration file. So,
let's follow the same logic as for the SPOE filter.
This commit prepares read_cfg() to call the load_cfg_in_mem() helper in
order to load configuration files in memory. Before, read_cfg() called the
parser for all files from the cfg_cfgfiles list and accumulated the parser's
errors and memprintf's errors in a for_each loop. memprintf's errors did not
stop this loop and were accounted for just after it.
Now, as we plan to load configuration files in memory, we stop the loop if
memprintf() fails, and we show an appropriate error message with ha_alert.
Then the process terminates. So not all accumulated syntax-related errors
will be shown before the exit in this case, and we have to stop because we
have run out of memory.
If we can't open the current file, or if we fail to allocate memory to store
some configuration line, the previous behaviour is kept: the process emits
an appropriate alert message and exits.
If the parser returns some syntax-related error on the current file, the
previous behaviour is kept as well. We accumulate such errors for all parsed
files and check them just after the loop. All syntax-related errors for all
files are then shown, as before, in ha_alert messages line by line during
startup. Then the process exits with 1.
As the cfg_cfgfiles list now contains pointers to memory areas with the
configuration files content, and this content could be big, it's better to
free the list explicitly when parsing is finished. So, let's change
read_cfg() to return an integer value to its caller, init(), and let's
perform the free routine at the caller level, as the cfg_cfgfiles list was
initialized and initially filled at that level.
The list_append_word() helper was previously used only to chain
configuration file names in a list. As we now start to use the cfgfile
structure, which represents an entire file in memory along with its
metadata, let's adapt this helper to use this structure and rename it to
list_append_cfgfile().
Adapt the functions which process configuration files and directories to use
the cfgfile structure and list_append_cfgfile() instead of a wordlist.
Let's check the global counter of "ha_warning" messages a second time, if
zero-warning is set, and let's do this just before forking. At this moment
we are sure that we've already done all the init operations where we could
emit a "ha_warning", and we still have the stderr fd open.
Even with the second check, we could miss some late and rare warnings about
failing to drop supplementary groups and about re-enabling core dumps. Notes
about this are added into the 'zero-warning' keyword description.
This patch is done in order to prepare the move of the handlers that compute
and check process-related limits such as maxconn, maxsock and maxpipes.
These handlers are no longer static because of the future move.
We also add the handler declarations in limits.h in this patch, in order to
keep the next patch, dedicated to the code move, free of any additional
modifications.
Such a split also ensures that this patch can be compiled separately from
the next one, where we move the handlers. This is important in case of
git-bisect.
The code which gets, sets and checks the initial and current fd limits and
the process-related limits (maxconn, maxsock, ulimit-n, fd-hard-limit) is
spread across different functions in haproxy.c and fd.c. Let's group it
together in dedicated limits.c and limits.h files.
This patch is done in order to prepare the move of limits-related functions
from different places to the new 'limits' compilation unit. It helps to keep
the next patch, which will only do the move without any additional
modifications, clean.
Such a detailed split is needed in order to be sure not to accidentally
break the limits logic, and to be able to compile each commit separately in
case of git-bisect.
This commit fixes 41275a691 ("MEDIUM: init: set default for fd_hard_limit
via DEFAULT_MAXFD").
fd_hard_limit is taken into account implicitly via the 'ideal_maxconn' value
in all maxconn adjustments, when global.rlimit_memmax is set:
MIN(global.maxconn, capped by global.rlimit_memmax, ideal_maxconn);
It also caps the provided global.rlimit_nofile, if it couldn't be set as the
current process fd limit (see more details in the main() code).
So, let's set the default value for fd_hard_limit only when no other
haproxy-specific limit is provided, i.e. rlimit_memmax, maxconn,
rlimit_nofile. Otherwise we may break users' configs.
Please note that in master-worker mode the master does not need
DEFAULT_MAXFD (1048576) either, as we explicitly limit its maxconn to 100.
Must be backported in all stable versions until v2.6.0, including v2.6.0,
like the commit above.
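Schematically, the fixed logic boils down to this (the globals are stand-ins
for the real ones):

    #include <stdio.h>

    #ifndef DEFAULT_MAXFD
    #define DEFAULT_MAXFD 1048576    /* fallback when not set at build time */
    #endif

    static struct {
        int fd_hard_limit, rlimit_memmax, maxconn, rlimit_nofile;
    } global;

    int main(void)
    {
        /* apply the default only when no other limit was configured */
        if (!global.fd_hard_limit && !global.rlimit_memmax &&
            !global.maxconn && !global.rlimit_nofile)
            global.fd_hard_limit = DEFAULT_MAXFD;

        printf("fd-hard-limit: %d\n", global.fd_hard_limit);
        return 0;
    }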
Let's provide a default value for fd_hard_limit, if it's not set in the
configuration. With this patch a specific default can also be set via the
compile-time variable DEFAULT_MAXFD. Hopefully this will be helpful for
haproxy package maintainers.
make -j 8 TARGET=linux-glibc DEBUG=-DDEFAULT_MAXFD=50000
If haproxy is compiled without DEFAULT_MAXFD defined, the default will be
set to 1048576.
This is done to avoid the process being killed by its watchdog, when it was
started without any limitations in its configuration or on the command line
and the hard RLIMIT_NOFILE is extremely huge (~1000000000). In this case we
use compute_ideal_maxconn() to calculate maxconn and maxsock; maxsock
defines the size of the internal fdtab, which becomes very large as well.
When the process starts to simply loop over this fdtab (O(n)), this takes a
lot of time, so the watchdog does its job.
To avoid this, maxconn is now always reduced to some reasonable value,
either by an explicit global.fd-hard-limit from the configuration, or by its
default. The default may be changed at build time and then overridden by
global.fd-hard-limit at runtime. An explicit global.fd-hard-limit from the
configuration always has precedence over DEFAULT_MAXFD, if set.
Must be backported in all stable versions until v2.6.0, including v2.6.0.
The haproxy master process should not read its configuration a second time
after performing a reexec and passing into MODE_MWORKER_WAIT. So, to make
this part of the init() function more readable and to better distinguish the
point where the configs have been read, let's encapsulate it in a separate
function.
Let's encapsulate the creation of the 'reload' sockpair and of the master
CLI listeners into a separate function, as we need this only in
master-worker runtime mode. This makes the code of init() more readable.
As the MODE_CHECK_CONDITION logic terminates the process anyway, no matter
whether the test of the provided condition succeeded or not, let's
encapsulate it in a separate function. This makes the code of init() more
readable.
In MODE_CHECK_CONDITION we only parse the check_condition string, provided
via '-cc', and then we evaluate it. The haproxy process terminates at the
end of the {if..else} block anyway, whether the test failed or passed. So,
it is more appropriate to perform the MODE_CHECK_CONDITION test first and
then do all the other process runtime mode verifications.
The 'show dev' command is very convenient for obtaining haproxy debugging
information while the process runs in a container. Let's extend its output
with the version and the cmdline. The cmdline is useful because it shows the
absolute binary path and its arguments; sometimes the person debugging a
failing container is not the one who created and deployed it.
argc and argv are stored in the exported global structure, because
feed_post_mortem() is added as a post-check function callback in the
post_check_list. So we can't simply change the signature of
feed_post_mortem() without breaking the other post-check callback APIs.
Parsers are not supposed to modify argv, so we can safely pass its pointer
to debug_parse_cli_show_dev(), without copying all the argument strings
somewhere on the heap or on the stack.
The previous commit introduced a new protection mechanism to forbid
communications with clients which use a privileged source port. By default,
this mechanism is disabled for every protocol.
This patch changes the default value and activates the protection mechanism
for the QUIC protocol. This is justified as such traffic is a probable sign
of a DNS/NTP amplification attack.
This is labelled as major as it can be a breaking change with some network
environments.
This commit introduces a new global setting named
harden.reject_privileged_ports.{tcp|quic}. When active, communications with
clients which use privileged source ports are forbidden. Such behavior is
considered suspicious as it can be used for spoofing or a DNS/NTP
amplification attack.
The value is configured per transport protocol. For TCP and QUIC, distinct
code locations are impacted by this setting. The first one is
sock_accept_conn(), which acts as a filter for all TCP-based communications
just after accept() returns a new connection. The second one is dedicated to
QUIC communication, in quic_recv(). In both cases, if a privileged source
port is used and the setting is enabled, the received message is silently
dropped.
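The core of the check could look like this standalone sketch (not the
literal haproxy code):

    #include <netinet/in.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* a client source port below 1024 is treated as suspicious */
    static bool src_port_is_privileged(const struct sockaddr_storage *src)
    {
        unsigned int port = 0;

        if (src->ss_family == AF_INET)
            port = ntohs(((const struct sockaddr_in *)src)->sin_port);
        else if (src->ss_family == AF_INET6)
            port = ntohs(((const struct sockaddr_in6 *)src)->sin6_port);

        return port > 0 && port < 1024;
    }

    int main(void)
    {
        struct sockaddr_in sin = { .sin_family = AF_INET,
                                   .sin_port = htons(123) };

        /* with the setting enabled, such a message would be dropped */
        printf("privileged source port: %d\n",
               src_port_is_privileged((struct sockaddr_storage *)&sin));
        return 0;
    }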
By default, the protection is disabled for both protocols. This is to be
able to backport it without breaking changes on stable releases.
This should be backported as it is an interesting security feature, yet
relatively simple to implement.
This patch fixes commit eea152ee68 ("BUG/MINOR: signals/poller: ensure
wakeup from signals").
There is some probability that run_poll_loop() becomes infinite, if
TH_FL_SLEEPING is withdrawn from all threads in the second signal_queue_len
check, when a signal has been received just after the first one.
In this particular case, the 'wake' variable, which is used to terminate the
thread's poll loop, is never reset to 0. So, we never enter the "stopping"
part of run_poll_loop(), and the threads, except the one with id 0 (tid 0
handles signals), continue to call _do_poll() eternally and never sleep, as
their TH_FL_SLEEPING flag was unset.
This flag needs to be removed only for tid 0, as was done in the first
signal_queue_len check.
This fixes issue #2537 "infinite loop when shutting down".
This fix must be backported in every stable version.
If the 'namespace' keyword is used in the backend server settings and/or in
the bind string, the haproxy process will call setns() to change its default
namespace to the configured one, and will then create a socket in this new
namespace. The setns() syscall requires the CAP_SYS_ADMIN capability in the
process Effective set (see man 2 setns), as the sketch below illustrates.
Otherwise, the process must be run as root.
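A minimal standalone illustration of why the capability is needed (the
namespace path "blue" is hypothetical):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int ns_fd = open("/var/run/netns/blue", O_RDONLY);

        if (ns_fd < 0) {
            perror("open");
            return 1;
        }
        if (setns(ns_fd, CLONE_NEWNET) < 0) {
            perror("setns");    /* EPERM without CAP_SYS_ADMIN */
            close(ns_fd);
            return 1;
        }
        close(ns_fd);
        /* sockets created from now on belong to namespace "blue" */
        return 0;
    }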
To avoid running haproxy as root, let's add the cap_sys_admin capability in
the same way as we already added the support for some other network
capabilities.
As CAP_SYS_ADMIN belongs to the CAP_SYS_* capabilities type, let's add a
separate flag, LSTCHK_SYSADM, for it. This flag is set if the 'namespace'
keyword was found during configuration parsing. The flag may be unset only
in prepare_caps_for_setuid() or in prepare_caps_from_permitted_set(), which
inspect the process EUID/RUID and its Effective and Permitted capabilities
sets.
If the system doesn't support Linux capabilities, or 'cap_sys_admin' was not
set in 'setcap' but the 'namespace' keyword is present in the configuration,
we keep the previous strict behaviour: a process that has changed its uid to
a non-privileged user will terminate with an alert. This alert invites the
user to recheck the configuration.
In the case when haproxy starts and runs under a non-root user and
'cap_sys_admin' is not set, but the 'namespace' keyword is present, this
patch does not change the previous behaviour either: we still let the user
try the configuration, but we inform via a warning that unexpected things,
like socket creation errors, may occur.
Since the introduction of the automatic seamless reload using the
internal socketpair, there is no way of disabling the seamless reload.
Previously we just needed to remove -x from the startup command line,
and remove any "expose-fd" keyword on stats socket lines.
This was introduced in 2be557f7c ("MEDIUM: mworker: seamless reload use
the internal sockpairs").
This patch copies /dev/null again and passes it to the next exec, so we
never try to get sockets via -x.
Must be backported as far as 2.6.
This commit is the first one of a series to implement the preloading of
haproxy counters via stats-file parsing.
This patch defines a basic apply_stats_file() function. It implements
line-by-line reading of a stats-file, without any parsing for the moment. It
is called automatically on process startup via init().
This commit is part of a series to align counter usage between
frontends/listeners on one side and backends/servers on the other.
The "stot" metric refers to the total number of sessions. On the backend
side, it is interpreted as a number of streams. Previously, this was
accounted for using the <cum_sess> be_counters field for servers, but
<cum_conn> instead for backend proxies.
Adjust this by using <cum_sess> for both proxies and servers. As such, the
<cum_conn> field can be removed from be_counters.
Note that several diagnostic messages which report total frontend and
backend connections were adjusted to use <cum_sess>. However, this is
outdated and misleading information, as it reports a stream count on the
backend side. These messages should be fixed in a separate commit.
This should be backported to all stable releases.
Limiting the total allocatable process memory (VSZ) by setting an RLIMIT_AS
limit is no longer effective at restricting memory consumption at run time.
We can see from the process memory map below that there are many holes
within the process VA space, which bump its VSZ to 1.5G. These holes exist
for many reasons, the first one being the full randomization of the system
VA space, which is now usually enabled in Linux kernels by default. There
are always gaps around the process stack area to trap overflows. Holes
before and after shared libraries can be explained by the fact that on many
architectures libraries have a 'preferred' address to be loaded at; putting
them elsewhere requires relocation work, and probably some unshared pages.
The repetitive holes of 65380K most probably correspond to the header that
malloc has to allocate in front of each claimed memory block. This header is
used by malloc to link allocated chunks together and for its internal
bookkeeping.
$ sudo pmap -x -p `pidof haproxy`
127136: ./haproxy -f /home/haproxy/haproxy/haproxy_h2.cfg
Address Kbytes RSS Dirty Mode Mapping
0000555555554000 388 64 0 r---- /home/haproxy/haproxy/haproxy
00005555555b5000 2608 1216 0 r-x-- /home/haproxy/haproxy/haproxy
0000555555841000 916 64 0 r---- /home/haproxy/haproxy/haproxy
0000555555926000 60 60 60 r---- /home/haproxy/haproxy/haproxy
0000555555935000 116 116 116 rw--- /home/haproxy/haproxy/haproxy
0000555555952000 7872 5236 5236 rw--- [ anon ]
00007fff98000000 156 36 36 rw--- [ anon ]
00007fff98027000 65380 0 0 ----- [ anon ]
00007fffa0000000 156 36 36 rw--- [ anon ]
00007fffa0027000 65380 0 0 ----- [ anon ]
00007fffa4000000 156 36 36 rw--- [ anon ]
00007fffa4027000 65380 0 0 ----- [ anon ]
00007fffa8000000 156 36 36 rw--- [ anon ]
00007fffa8027000 65380 0 0 ----- [ anon ]
00007fffac000000 156 36 36 rw--- [ anon ]
00007fffac027000 65380 0 0 ----- [ anon ]
00007fffb0000000 156 36 36 rw--- [ anon ]
00007fffb0027000 65380 0 0 ----- [ anon ]
...
00007ffff7fce000 4 4 0 r-x-- [ anon ]
00007ffff7fcf000 4 4 0 r---- /usr/lib/x86_64-linux-gnu/ld-2.31.so
00007ffff7fd0000 140 140 0 r-x-- /usr/lib/x86_64-linux-gnu/ld-2.31.so
...
00007ffff7ffe000 4 4 4 rw--- [ anon ]
00007ffffffde000 132 20 20 rw--- [ stack ]
ffffffffff600000 4 0 0 --x-- [ anon ]
---------------- ------- ------- -------
total kB 1499288 75504 72760
This inflated VSZ makes it impossible to start an haproxy process with a
200M memory limit set at its initialization stage as RLIMIT_AS. We usually
get such cryptic output at stderr in this case:
$ haproxy -m 200 -f haproxy_quic.cfg
(null)(null)(null)(null)(null)(null)
At the same time the process RSS (the memory really used) is only 75.5M.
So, to make process memory accounting more realistic, let's base the memory
limit set by the -m option on the RSS measurement, and let's use
RLIMIT_DATA instead of RLIMIT_AS.
RLIMIT_AS was used before because earlier versions of haproxy always
allocated memory buffers for new connections, while data was not written
there immediately. So these buffers were not instantly counted in RSS, but
were always counted in VSZ. Now we allocate new buffers only when we are
about to write some data into them immediately, so using RLIMIT_DATA becomes
more appropriate, as in the sketch below.
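The change amounts to switching the resource being limited, as in this
minimal sketch for a 200M limit:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit limit = {
            .rlim_cur = 200 * 1024 * 1024,   /* -m 200 */
            .rlim_max = 200 * 1024 * 1024,
        };

        /* RLIMIT_DATA limits the heap, which malloc() actually consumes
         * and which shows up in RSS; RLIMIT_AS also counted unused VA
         * holes such as those listed above
         */
        if (setrlimit(RLIMIT_DATA, &limit) < 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }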
The CPU variable, when used, is almost always exclusively used with
"generic" to disable any CPU-specific optimizations, or "native" to
enable "-march=native". Other options are not used and are just making
CPU_CFLAGS more confusing.
This commit just drops all pre-configured variants and replaces them
with documentation about examples of supported options. CPU_CFLAGS is
preserved as it appears that it's mostly used as a proxy to inject the
distro's CFLAGS, and it's just empty by default.
The CPU variable is checked, and if set to anything but "generic", it
emits a warning about its deprecation and invites the user to read
INSTALL.
Users who would just set CPU_CFLAGS will be able to continue to do so,
those who were using CPU=native will have to pass CPU_CFLAGS=-march=native
and those who were passing one of the other options will find it in the
doc as well.
Note that this also removes the "CPU=" line from haproxy -vv, that most
users got used to seeing set to "generic" or occasionally "native"
anyway, thus that didn't provide any useful information.
Since the addition of Linux capabilities support (see commit bd84387beb
("MEDIUM: capabilities: enable support for Linux capabilities")), we can
also check the haproxy process effective and permitted capabilities sets
when it starts and runs as non-root.
This way, if the needed network capabilities are present only in the process
permitted set, we can fetch this information with capget and put them into
the process effective set via capset, as the sketch below shows. To do this
properly, let's introduce prepare_caps_from_permitted_set().
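The promotion boils down to a capget/capset pair, as in this standalone
sketch (raising CAP_NET_RAW from the permitted to the effective set):

    #include <linux/capability.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct __user_cap_header_struct hdr = {
            .version = _LINUX_CAPABILITY_VERSION_3,
            .pid     = 0,                  /* current process */
        };
        struct __user_cap_data_struct data[2];

        if (syscall(SYS_capget, &hdr, data) < 0) {
            perror("capget");
            return 1;
        }
        /* promote the capability only if the permitted set holds it */
        if (data[0].permitted & (1u << CAP_NET_RAW)) {
            data[0].effective |= 1u << CAP_NET_RAW;
            if (syscall(SYS_capset, &hdr, data) < 0) {
                perror("capset");
                return 1;
            }
        }
        return 0;
    }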
First, it checks whether the binary effective set has CAP_NET_ADMIN or
CAP_NET_RAW. If there is a match, LSTCHK_NETADM is removed from the
global.last_checks list to avoid a warning, because in the initialization
sequence some last configuration checks are based on the LSTCHK_NETADM flag
and the haproxy process euid may stay unprivileged.
If there is neither CAP_NET_ADMIN nor CAP_NET_RAW in the effective set, the
permitted set is checked, and only the capabilities given in the 'setcap'
keyword are promoted into the process effective set. LSTCHK_NETADM is also
removed in this case, for the same reason. In order to be transparent, we
promote from the permitted set only the capabilities given by the user in
the 'setcap' keyword. So, if the caplist doesn't include CAP_NET_ADMIN or
CAP_NET_RAW, LSTCHK_NETADM will not be unset and a warning about missing
privileges will be emitted at initialization.
This needs to be called before protocol_bind_all() to allow binding to
privileged ports under non-root; 'setcap cap_net_bind_service' must be set
in the global section in this case.
Linux capabilities support, and the ability to preserve capabilities for a
running process after switching to a global.uid, was added recently by
commit bd84387beb ("MEDIUM: capabilities: enable support for Linux
capabilities").
This new feature hasn't yet been taken into account by the last config
checks, which are performed at the initialization stage.
So, to update this, let's perform the check after the set_identity() call.
This way, the current EUID has already been changed to global.uid, and
prepare_caps_for_setuid() will unset the LSTCHK_NETADM flag only if the
capabilities given in the 'setcap' keyword in the configuration file were
preserved.
Otherwise, if the system doesn't support Linux capabilities or they were not
set via 'setcap', we keep the previous strict behaviour: the process will
terminate with an alert, in order to insist that the user either needs to
change the run UID (worst case: start and run as root), or needs to
set/recheck the capabilities listed as 'setcap' arguments.
In the case when haproxy starts and runs under a non-root user, this patch
doesn't change the previous behaviour: we'll still let the user try the
configuration, but we inform via a warning that unexpected things may occur.
Needs to be backported until v2.9, including v2.9.
PR_FL_CHECKED is set on a proxy once the proxy configuration has been fully
checked (including postparsing checks).
This information may be useful to functions that need to know whether some
config-related proxy properties are likely to change due to parsing or
postparsing/check logic. Also, during runtime, except for some rare cases,
config-related proxy properties are not supposed to change.
As per the `sd_notify` manual:
> A field carrying the monotonic timestamp (as per CLOCK_MONOTONIC) formatted
> in decimal in μs, when the notification message was generated by the client.
> This is typically used in combination with "RELOADING=1", to allow the
> service manager to properly synchronize reload cycles. See systemd.service(5)
> for details, specifically "Type=notify-reload".
Thus this change allows users with a recent systemd to switch to
`Type=notify-reload`, should they desire to do so. Correct behavior was
verified with a Fedora 39 VM.
see systemd/systemd#25916
[wla: the service file should be updated this way:]
diff --git a/admin/systemd/haproxy.service.in b/admin/systemd/haproxy.service.in
index 22a53d8aab..8c6dadb5e5 100644
--- a/admin/systemd/haproxy.service.in
+++ b/admin/systemd/haproxy.service.in
@@ -8,12 +8,11 @@ EnvironmentFile=-/etc/default/haproxy
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "EXTRAOPTS=-S /run/haproxy-master.sock"
ExecStart=@SBINDIR@/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS
-ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -c $EXTRAOPTS
-ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
SuccessExitStatus=143
-Type=notify
+Type=notify-reload
+ReloadSignal=SIGUSR2
# The following lines leverage SystemD's sandboxing options to provide
# defense in depth protection at the expense of restricting some flexibility
Signed-off-by: William Lallemand <wlallemand@haproxy.com>
Thanks to the previous commit, we can now build with USE_SYSTEMD=1 on any
system without requiring any parts from systemd. It just turns out that
there was one remaining include in haproxy.c that needed to be replaced with
haproxy/systemd.h to build correctly. That's what this commit does.
Now the rings have one wait queue per group. This should limit the
contention on systems such as EPYC CPUs where the performance drops
dramatically when using more than one CCX.
Tests were run with different numbers and it was showed that value
6 outperforms all other ones at 12, 24, 48, 64 and 80 threads on an
EPYC, a Xeon and an Ampere CPU. Value 7 sometimes comes close and
anything around these values degrades quickly. The value has been
left tunable in the global section.
This commit only introduces everything needed to set up the queue count
so that it's easier to adjust it in the forthcoming patches, but it was
initially added after the series, making it harder to compare.
It was also shown that trying to group the threads in queues by their
thread groups is counter-productive and that it was more efficient to
do that by applying a modulo on the thread number. As surprising as it
seems, it does have the benefit of well balancing any number of threads.
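The distribution itself is a simple modulo on the thread number, as this
standalone illustration shows (queue count hardcoded to the value that
performed best above):

    #include <stdio.h>

    #define RING_WAIT_QUEUES 6

    int main(void)
    {
        /* modulo assignment evenly balances any thread count */
        for (int tid = 0; tid < 12; tid++)
            printf("thread %2d -> wait queue %d\n", tid, tid % RING_WAIT_QUEUES);
        return 0;
    }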
Similarly to "expose-exprimental-directives" option, there is no a global
option to expose some deprecated directives. Idea is to have a way to silent
warnings about deprecated directives when there is no alternative solution.
Of course, deprecated directives covered by this option are not listed and
may change. It is only a best effort to let users upgrade smoothly.
-dI allows enabling "insecure-fork-wanted" directly from the command line,
which is useful when you want to run ASAN with addr2line with a lot of
configuration files without editing them.
Let's say that the largest packet number acknowledged by the peer is #10.
When inspecting the not yet acknowledged packets to detect whether they are
lost or not, this is the case at least if the difference between this
largest packet number and their packet numbers is bigger than or equal to
the packet reordering threshold as defined by RFC 9002. The latter must not
be less than QUIC_LOSS_PACKET_THRESHOLD (3). With such a value, packets #7
and older are detected as lost if not acknowledged, contrary to packets #8
or #9.
So, the packet loss detection is very sensitive to a network characteristic
where non-acknowledged packets are distant from each other by their packet
number differences.
Do not use this static value anymore for the packet reordering threshold
which is used as a criterion to detect packet loss. Instead, make it depend
on the difference between the number of the last transmitted packet and the
number of the oldest one among the packets which are still in flight, before
they are inspected to be deemed lost.
Add a new tune.quic.reorder-ratio setting to apply a ratio in percent to
this dynamic packet reordering threshold.
Should be backported to 2.6.
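One plausible reading of the dynamic threshold (the exact haproxy formula
may differ):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define QUIC_LOSS_PACKET_THRESHOLD 3

    /* threshold derived from the distance between the newest transmitted
     * packet and the oldest one still in flight, scaled by
     * tune.quic.reorder-ratio (percent), floored at the RFC 9002 minimum
     */
    static uint64_t pkt_reorder_threshold(uint64_t largest_sent_pn,
                                          uint64_t oldest_inflight_pn,
                                          unsigned int reorder_ratio)
    {
        uint64_t thr = (largest_sent_pn - oldest_inflight_pn)
                       * reorder_ratio / 100;

        return thr < QUIC_LOSS_PACKET_THRESHOLD
               ? QUIC_LOSS_PACKET_THRESHOLD : thr;
    }

    int main(void)
    {
        printf("threshold: %" PRIu64 "\n", pkt_reorder_threshold(100, 60, 50));
        return 0;
    }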
Final diags were added in 2.4 by commit 5a6926dcf ("MINOR: diag: create
cfgdiag module"), but it's called too late in the startup process,
because when "-c" is passed, the call is not made, while it's its primary
use case. Let's just move the call earlier.
Note that currently the check in this function is limited to verifying
unicity of server cookies in a backend, so it can be backported as far
as 2.4, but there is little value in insisting if it doesn't backport
easily.
Released version 3.0-dev1 with the following main changes :
- MINOR: channel: Use dedicated functions to deal with STREAMER flags
- MEDIUM: applet: Handle channel's STREAMER flags on applets size
- MINOR: applets: Use channel's field to compute amount of data received
- MEDIUM: cache: Save body size of cached objects and track it on delivery
- MEDIUM: cache: Add support for endp-to-endp fast-forwarding
- MINOR: cache: Add global option to enable/disable zero-copy forwarding
- MINOR: pattern: Use reference name as filename to read patterns from a file
- MEDIUM: pattern: Add support for virtual and optional files for patterns
- DOC: config: Add section about name format for maps and ACLs
- DOC: management/lua: Update commands about map and acl
- MINOR: promex: Add support for specialized front/back/li/srv metric names
- MINOR: promex: Export active/backup metrics per-server
- BUG/MINOR: ssl: Double free of OCSP Certificate ID
- MINOR: ssl/cli: Add ha_(warning|alert) msgs to CLI ckch callback
- BUG/MINOR: ssl: Wrong OCSP CID after modifying an SSL certficate
- BUG/MINOR: lua: Wrong OCSP CID after modifying an SSL certficate (LUA)
- DOC: configuration: typo req.ssl_hello_type
- MINOR: hq-interop: add fastfwd support
- CLEANUP: mux_quic: rename ffwd function with prefix qmux_strm_
- MINOR: mux-quic: add traces for 0-copy/fast-forward
- BUG/MINOR: mworker/cli: fix set severity-output support
- CLEANUP: mworker/cli: add comments about pcli_find_and_exec_kw()
- BUG/MEDIUM: quic: Possible buffer overflow when building TLS records
- BUILD: ssl: update types in wolfssl cert selection callback
- MINOR: ssl: activate the certificate selection callback for WolfSSL
- CI: github: switch to wolfssl git-c4b77ad for new PR
- BUG/MEDIUM: map/acl: pat_ref_{set,delete}_by_id regressions
- BUG/MINOR: ext-check: cannot use without preserve-env
- CLEANUP: mux-quic: remove unused prototype
- MINOR: mux-quic: clean up qcs Rx buffer allocation API
- MINOR: mux-quic: clean up qcs Tx buffer allocation API
- CLEANUP: mux-quic: clean up app ops callback definitions
- MINOR: mux-quic: factorize QC_SF_UNKNOWN_PL_LENGTH set
- MINOR: h3: complete traces for sending
- MINOR: h3: adjust zero-copy sending related code
- MINOR: hq-interop: use zero-copy to transfer single HTX data block
- BUG/MEDIUM: quic: QUIC CID removed from tree without locking
- BUG/MEDIUM: stconn: Block zero-copy forwarding if EOS/ERROR on consumer side
- BUG/MEDIUM: mux-h1: Cound data from input buf during zero-copy forwarding
- BUG/MEDIUM: mux-h1: Explicitly skip request's C-L header if not set originally
- CLEANUP: mux-h1: Fix a trace message about C-L header addition
- BUG/MEDIUM: mux-h2: Report too large HEADERS frame only when rxbuf is empty
- BUG/MEDIUM: mux-quic: report early error on stream
- DOC: config: add arguments to sample fetch methods in the table
- DOC: config: also add arguments to the converters in the table
- BUG/MINOR: resolvers: default resolvers fails when network not configured
- SCRIPTS: mk-patch-list: produce a list of patches
- DEV: patchbot: add the AI-based bot to pre-select candidate patches to backport
- BUG/MEDIUM: mux-h2: Switch pending error to error if demux buffer is empty
- BUG/MEDIUM: mux-h2: Only Report H2C error on read error if demux buffer is empty
- BUG/MEDIUM: mux-h2: Don't report error on SE if error is only pending on H2C
- BUG/MEDIUM: mux-h2: Don't report error on SE for closed H2 streams
- DOC: config: Update documentation about local haproxy response
- DEV: patchbot: use checked buttons as reference instead of internal table
- DEV: patchbot: allow to show/hide backported patches
- MINOR: h3: remove quic_conn only reference
- BUG/MINOR: server: Use the configured address family for the initial resolution
- MINOR: mux-quic: remove qcc_shutdown() from qcc_release()
- MINOR: mux-quic: use qcc_release in case of init failure
- MINOR: mux-quic: adjust error code in init failure
- MINOR: h3: add traces for connection init stage
- BUG/MINOR: h3: properly handle alloc failure on finalize
- MINOR: h3: use INTERNAL_ERROR code for init failure
- BUG/MAJOR: stconn: Disable zero-copy forwarding if consumer is shut or in error
- MINOR: stats: store the parent proxy in stats ctx (http)
- BUG/MEDIUM: stats: unhandled switching rules with TCP frontend
- MEDIUM: proxy: set PR_O_HTTP_UPG on implicit upgrades
- MINOR: proxy: monitor-uri works with tcp->http upgrades
- OPTIM: server: eb lookup for server_find_by_name()
- OPTIM: server: ebtree lookups for findserver_unique_* functions
- MINOR: server/event_hdl: add server_inetaddr struct to facilitate event data usage
- MINOR: server/event_hdl: update _srv_event_hdl_prepare_inetaddr prototype
- BUG/MINOR: server/event_hdl: propagate map port info through inetaddr event
- MINOR: server: ensure connection cleanup on server addr changes
- CLEANUP: server/event_hdl: remove purge_conn hint in INETADDR event
- MEDIUM: server: merge srv_update_addr() and srv_update_addr_port() logic
- CLEANUP: server: remove unused server_parse_addr_change_request() function
- CLEANUP: resolvers: remove duplicate func prototype
- MINOR: resolvers: add unique numeric id to nameservers
- MEDIUM: server: make server_set_inetaddr() updater serializable
- MINOR: server/event_hdl: expose updater info through INETADDR event
- MINOR: server: add dns hint in server_inetaddr_updater struct
- MEDIUM: server/dns: clear RMAINT when addr resolves again
- BUG/MINOR: server/dns: use server_set_inetaddr() to unset srv addr from DNS
- BUG/MEDIUM: server/dns: perform svc_port updates atomically from SRV records
- MEDIUM: peers: use server as stream target
- CLEANUP: peers: remove unused sock_init_arg struct member
- CLEANUP: peers: remove unused "proto" and "xprt" struct members
- MINOR: peers: rely on srv->addr and remove peer->addr
- DOC: config: add context hint for server keywords
- MINOR: stktable: add table_process_entry helper function
- MINOR: stktable: use {show,set,clear} table with ptr
- MINOR: map: add map_*_key converters to provide the matching key
- DOC: fix typo for fastfwd QUIC option
- BUG/MINOR: mux-quic: always report error to SC on RESET_STREAM emission
- MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS
- BUG/MINOR: mux-quic: disable fast-fwd if connection on error
- BUG/MINOR: quic: Wrong keylog callback setting.
- BUG/MINOR: quic: Missing call to TLS message callbacks
- MINOR: h3: check connection error during sending
- BUG/MINOR: h3: close connection on header list too big
- BUG/MINOR: h3: close connection on sending alloc errors
- BUG/MINOR: h3: disable fast-forward on buffer alloc failure
- Revert "MINOR: mux-quic: Disable zero-copy forwarding for send by default"
- MINOR: stktable: stktable_data_ptr() cannot fail in table_process_entry()
- CLEANUP: assorted typo fixes in the code and comments
- CI: use semantic version compare for determing "latest" OpenSSL
- CLEANUP: server: remove ambiguous check in srv_update_addr_port()
- CLEANUP: resolvers: remove unused RSLV_UPD_OBSOLETE_IP flag
- CLEANUP: resolvers: remove some more unused RSLV_UDP flags
- MEDIUM: server: simplify snr_set_srv_down() to prevent confusions
- MINOR: backend: export get_server_*() functions
- MINOR: tcpcheck: export proxy_parse_tcpcheck()
- MEDIUM: udp: allow to retrieve the frontend destination address
- MINOR: global: export a way to list build options
- MINOR: debug: add features and build options to "show dev"
- BUG/MINOR: server: fix server_find_by_name() usage during parsing
- REGTESTS: check attach-srv out of order declaration
- CLEANUP: quic: Remaining useless code into server part
- BUILD: quic: Missing quic_ssl.h header protection
- BUG/MEDIUM: h3: fix incorrect snd_buf return value
- MINOR: h3: do not consider missing buf room as error on trailers
- BUG/MEDIUM: stconn: Forward shutdown on write timeout only if it is forwardable
- BUG/MEDIUM: stconn: Set fsb date if zero-copy forwarding is blocked during nego
- BUG/MEDIUM: spoe: Never create new spoe applet if there is no server up
- MINOR: mux-h2: support limiting the total number of H2 streams per connection
- CLEANUP: mux-h2: remove the printfs from previous commit on h2 streams limit.
- DEV: h2: add the ability to emit literals in mkhdr
- DEV: h2: add the preface as well in supported output types
- DEV: h2: support passing raw data for a frame
- IMPORT: ebtree: implement and use flsnz_long() to count bits
- IMPORT: ebtree: switch the sizes and offsets to size_t and ssize_t
- IMPORT: ebtree: rework the fls macros to better deal with arch-specific ones
- IMPORT: ebtree: make string_equal_bits turn back to unsigned char
- IMPORT: ebtree: use unsigned ints for flznz()
- IMPORT: ebtree: make string_equal_bits() return an unsigned
The new function hap_get_next_build_opt() will iterate over the list of
build options. This will be used for debugging, so that the build options
can be retrieved from the CLI.
This reverts commit 18f2ccd244.
The issues found with QUIC fast-forward were resolved (see github issue
#2372). Re-enable it by default. If any issue arises, it can be disabled
using the global statement:
tune.quic.zero-copy-fwd-send off
This can be backported to 2.9, but only after a sensible period of
observation.
There is at least one bug for now in this part and it is still unstable.
Thus it is better to disable it by default for now. It can be enabled by
setting tune.quic.zero-copy-fwd-send to 'on'.
The zero-copy fast-forwarding feature is quite new and a bit sensitive.
There is an option to disable it globally. However, not all protocols have
the same maturity. For instance, for the PT multiplexer, there is nothing
really new: zero-copy fast-forwarding is only another name for the kernel
splicing. However, for QUIC/H3, it is pretty new, not really optimized, and
it will evolve. And soon, support will be added for the cache applet.
In this context, it is useful to be able to enable/disable zero-copy
fast-forwarding per protocol and applet and, when applicable, on sends or
receives separately. So, instead of having one flag to disable it globally,
there is now a dedicated bitfield, global.tune.no_zero_copy_fwd.
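A toy illustration of the per-protocol bitfield (the flag names here are
illustrative, not the real ones):

    #include <stdio.h>

    #define NO_ZERO_COPY_FWD          0x01  /* disable everywhere (-dZ) */
    #define NO_ZERO_COPY_FWD_PT       0x02  /* pass-through mux */
    #define NO_ZERO_COPY_FWD_QUIC_SND 0x04  /* QUIC mux, send side */

    static unsigned int no_zero_copy_fwd;

    int main(void)
    {
        no_zero_copy_fwd |= NO_ZERO_COPY_FWD_QUIC_SND;

        /* a mux checks the global bit and its own bit before using it */
        int quic_snd_on = !(no_zero_copy_fwd &
                            (NO_ZERO_COPY_FWD | NO_ZERO_COPY_FWD_QUIC_SND));

        printf("QUIC send fast-forward %s\n",
               quic_snd_on ? "enabled" : "disabled");
        return 0;
    }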
Add an optional argument for "-dt". This argument is interpreted as a list
of several trace statements separated by commas. For each statement, a
specific trace name can be specified, or none to act on all sources. Using a
double-colon separator, it is possible to add specifications on the wanted
level and verbosity.
Add the '-dt' haproxy process argument. This automatically activates all
trace sources on stderr with the error level. This could be useful to
troubleshoot issues such as protocol violations.
The mworker mode never had a proper 'hard-stop' (-st) for the reload; this
was commonly used with the daemon mode, but it was never implemented in
mworker mode.
This patch fixes the problem by implementing a "hard-reload" command over
the master CLI. It does the same as the "reload" command, but instead of
waiting for the connections to stop in the previous process, it immediately
quits the previous process after binding.
When debugging a core, it's difficult to match a given gdb thread number
against an internal thread. Let's just store the pthread ID and the stack
pointer in each tinfo. This could help in the future by allowing to just
glance over them and pick the right one depending on what info is found
first.
On CONNECTION_CLOSE reception/emission, QUIC connections enter the CLOSING
state. At this stage, only CONNECTION_CLOSE can be reemitted and all other
exchanges are stopped.
Previously, on haproxy process stopping, if all QUIC connections were in
CLOSING state, they were released before their closing timer expiration so
as not to block the process shutdown. However, since a recent commit, the
closing timer has been shortened to a more reasonable delay. It is now
considered viable to respect the connections' closing state even on process
shutdown. As such, the stopping-specific code in the QUIC connections idle
timer task was removed.
A specific function, quic_handle_stopping(), was implemented to notify QUIC
connections of shutdown from the main() function. It should have been
deleted along with that removal in the QUIC idle timer task. This patch just
does this.
This bug prevented the GTUNE_SOCKET_TRANSFER option from being set when
haproxy is neither in daemon mode nor in mworker mode. So it basically only
impacts the foreground mode.
The fix moves the code outside the 'if (global.mode & (MODE_DAEMON |
MODE_MWORKER | MODE_MWORKER_WAIT))' condition.
The bug was introduced with 7f80eb23 ("MEDIUM: proxy: zombify proxies only
when the expose-fd socket is bound").
Must be backported in every stable version.
There was still a totally outdated comment speaking about issues affecting
Solaris on 1.1.8pre4 (April 2002, 21 years old)! This proves that comments
in headers are never read, so let's take this opportunity to also remove the
outdated one recommending to read the "updated" RFC7230.
Since 2.7 and mcli_reload_bind_conf (56f73b21a5), upon a reload failure
because of a bind error, mcli_reload_bind_conf goes through sock_unbind().
This is not supposed to do anything when a listener is RX_F_INHERITED in the
master, but unfortunately this is done too early and provokes an exit of the
master.
We already suspected in the past that setting the 'master' variable this
late could have a negative impact.
The fix sets the master variable earlier, before the bind.
This must be backported at least to 2.7. It could be backported earlier, but
it's better to wait for any feedback on the fix.
MODE_CHECK does not output "Configuration file is valid" by default
anymore. To display this message, the -V option must be used together with
-c. However, warnings and errors are still output by default if they exist.
This allows cleaning the output of the systemd unit, which is doing a -c.
Defining a master CLI without the master-worker mode has emitted a warning
since version 1.8. This patch enforces the behavior by forbidding the usage
of the -S option without the master-worker mode.
The zero-copy forwarding, or mux-to-mux forwarding, is a way to fast-forward
data without using the channel buffers. Data is transferred from one mux to
the other. The kernel splicing is an optimization of the zero-copy
forwarding. But it can also use normal buffers (though not the channel
ones). This way, it is possible to fast-forward data with muxes not
supporting kernel splicing (the H2 and H3 muxes), but also with applets.
However, this mode can introduce regressions or bugs in the future (just
like the kernel splicing). Thus, it could be useful to disable this
optimization. To do so, the global tune setting
'tune.disable-zero-copy-forwarding' may be set in a global section, or the
'-dZ' command line parameter may be used to start HAProxy. Of course, this
also disables the kernel splicing.
When 'log' directive was implemented, the internal representation was
named 'struct logsrv', because the 'log' directive would directly point
to the log target, which used to be a (UDP) log server exclusively at
that time, hence the name.
But things have become more complex, since today 'log' directive can point
to ring targets (implicit, or named) for example.
Indeed, a 'log' directive no longer references the "final" server to
which the log will be sent, but instead it describes which log API and
parameters to use for transporting the log messages to the proper log
destination.
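For instance, an illustrative snippet where 'log' targets a named ring rather
than a log server:

    ring myring
        server log1 127.0.0.1:6514

    global
        log ring@myring local0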
So now the term 'logsrv' is rather confusing and prevents us from
introducing a new level of abstraction, because any new abstraction would
end up mixed with logsrv.
So in order to better designate this 'log' directive, and make it more
generic, we chose the word 'logger' which now replaces logsrv everywhere
it was used in the code (including related comments).
This is an internal rewording, so no functional change should be expected
on the user side.
The regtests are using the "feature()" predicate but this one can only
rely on build-time options. It would be nice if some runtime-specific
options could be detected at boot time so that regtests could more
flexibly adapt to what is supported (capabilities, splicing, etc).
Similarly, certain features that are currently enabled with USE_XXX
could also be automatically detected at build time using ifdefs and
would simplify the configuration, but then we'd lose the feature
report in the feature list which is convenient for regtests.
This patch makes sure that haproxy -vv shows the variable's contents
and not the macro's contents, and adds a new hap_register_feature()
to allow the code to register a new keyword.
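A minimal sketch of how boot-time detection could use it; the probe function
is hypothetical, only hap_register_feature() comes from this patch:

    /* at boot, after probing the runtime environment */
    if (kernel_supports_splicing())      /* hypothetical runtime probe */
        hap_register_feature("SPLICE");  /* reported by -vv, usable in feature() */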
cpu_map is 8.2kB/entry and there's one such entry per group, that's
~520kB total. In addition, the init code is still in haproxy.c enclosed
in ifdefs. Let's make this a dynamically allocated array in the cpuset
code and remove that init code.
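A minimal sketch of the idea, assuming a per-group cpu_map array (names follow
the existing cpuset code, but this is illustrative):

    /* allocate one cpu_map entry per possible thread group at boot */
    cpu_map = calloc(MAX_TGROUPS, sizeof(*cpu_map));
    if (!cpu_map)
        exit(1); /* out of memory during startup */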
Later we may even consider reallocating it once the number of threads
and groups is known, in order to shrink it a little bit, as the typical
setup with a single group will only need 8.2kB, thus saving half a MB
of RAM. This would require that the upper bound is placed in a variable
though.
More and more utility functions rely on the trash while most of the init
code doesn't have access to it because it's initialized very late (in
PRE_CHECK for the initial one). It's a pool, and it purposely supports
being reallocated, so let's initialize it in STG_POOL so that early
STG_INIT code can at least use it.
The function generate_random_cluster_secret(), which initializes the cluster
secret when not supplied by configuration, is buggy: there was a 1/256 chance
that the cluster secret string would be empty.
To fix this, the cluster secret is now stored in a reduced size: the first 128
bits of its own SHA1 (160-bit) digest, if defined by configuration. If this is
not the case, it is initialized with a 128-bit random value. Either way, the
cluster secret is now always initialized.
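A hedged sketch of the resulting logic (helper names assumed from HAProxy's
built-in SHA1 and random helpers):

    unsigned char secret[16];                    /* 128-bit cluster secret */

    if (cfg_secret) {                            /* supplied by configuration */
        blk_SHA_CTX ctx;
        unsigned char digest[20];                /* SHA1 yields 160 bits */

        blk_SHA1_Init(&ctx);
        blk_SHA1_Update(&ctx, cfg_secret, strlen(cfg_secret));
        blk_SHA1_Final(digest, &ctx);
        memcpy(secret, digest, sizeof(secret));  /* keep the first 128 bits */
    }
    else {
        for (size_t i = 0; i < sizeof(secret); i += sizeof(uint64_t)) {
            uint64_t r = ha_random64();          /* random 128-bit fallback */
            memcpy(secret + i, &r, sizeof(r));
        }
    }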
As the cluster secret is always initialized, several tests become
useless. This patch removes such tests (if (global.cluster_secret))
in the QUIC code and at parsing time: no need to check that a cluster
secret was initialized with the "quic-force-retry" option.
Must be backported as far as 2.6.
This detects when there are more threads bound via cpu-map than CPUs
enabled in cpu-map, or when there are more total threads than the total
number of CPUs available at boot (for unbound threads) and configured
for bound threads. In this case, a warning is emitted, explaining the
problems this will cause and how to address the situation.
Note that some configurations will not be detected as faulty because
the algorithmic complexity to resolve all arrangements grows in O(N!).
This means that having 3 threads on 2 CPUs and one thread on 2 CPUs
will not be detected as it's 4 threads for 4 CPUs. But at least configs
such as T0:(1,4) T1:(1,4) T2:(2,4) T3:(3,4) will not trigger a warning
since they're valid.
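As an illustration, a config like this one (4 threads pinned to a single CPU)
would now draw the warning:

    global
        nbthread 4
        cpu-map 1/1-4 1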
It's very easy to mess up with some cpu-map directives and to leave
some threads unbound. Let's add a test that checks that either all
threads are bound or none are bound, but that we do not face the
intermediary situation where some are pinned and others are left
wandering around, possibly on the same CPUs as bound ones.
Note that this should not be backported, or maybe turned into a
notice only, as it appears that it will easily catch invalid
configs and that may break updates for some users.
This adds a new Makefile option, USE_OPENSSL_AWSLC, and
updates the documentation with instructions to use HAProxy with
AWS-LC.
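For example (install paths illustrative):

    $ make TARGET=linux-glibc USE_OPENSSL_AWSLC=1 \
           SSL_INC=/opt/aws-lc/include SSL_LIB=/opt/aws-lc/lib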
Update the type of the OCSP callback retrieved with
SSL_CTX_get_tlsext_status_cb to match the actual type for
libcrypto versions greater than 1.0.2. This doesn't affect
OpenSSL, which casts the callback to void* in SSL_CTX_ctrl.
For a while there has been the constraint of having to run as root for
transparent proxying, and we're starting to see some cases where QUIC is
not running in socket-per-connection mode due to the missing capability
that would be needed to bind a privileged port. It's not realistic to
ask all QUIC users on port 443 to run as root, so instead let's provide
a basic support for capabilities at least on linux. The ones currently
supported are cap_net_raw, cap_net_admin and cap_net_bind_service. The
mechanism was made OS-specific with a dedicated file because it really
is. It can be easily refined later for other OSes if needed.
A new keyword "setcap" is added to the global section, to enumerate the
capabilities that must be kept when switching from root to non-root. This
is ignored in other situations though. HAProxy has to be built with
USE_LINUX_CAP=1 for this to be supported, which is enabled by default
for linux-glibc, linux-glibc-legacy and linux-musl.
A good way to test this is to start haproxy with such a config:
    global
        uid 1000
        setcap cap_net_bind_service

    frontend test
        mode http
        timeout client 3s
        bind quic4@:443 ssl crt rsa+dh2048.pem allow-0rtt
and run it under "sudo strace -e trace=bind,setuid", then connect to it
from an H3 client. The bind() syscall must succeed despite the
user id having been switched.
This field used to store the cpumap of the first thread in a group, and
was used till 2.4 to hold some default settings, after which it was no
longer used. Let's just drop it.
The per-process CPU affinity settings are only applied during forking,
which means that cpu-map are ignored when running in foreground (e.g.
haproxy started with -db). This is historic due to the original semantics
of a process array, but isn't documented and causes surprises when trying
to debug affinity settings.
Let's make sure the setting is applied to the workers themselves even
in foreground. This may be backported to 2.6 though it is really not
important. If backported, it also depends on previous commit:
BUG/MINOR: cpuset: remove the bogus "proc" from the cpu_map struct
We're currently having a problem with the porting of cpu_map from
processes to thread-groups as it happened in 2.7 with commit 5b09341c0
("MEDIUM: cpu-map: replace the process number with the thread group
number"), though it seems that it has deeper roots even in 2.0 and
that it was progressively made wrong over time.
The issue stems from the way the per-process and per-thread cpu-sets were
employed over time. Originally only processes were supported. Then
threads were added after an optional "/" and it was documented that
"cpu-map 1" is exactly equivalent to "cpu-map 1/all" (this was clarified
in 2.5 by commit 317804d28 ("DOC: update references to process numbers
in cpu-map and bind-process")).
The reality is different: when processes were still supported, setting
"cpu-map 1" would apply the mask to the process itself (and only when
run in the background, which is not documented either and is also a
bug addressed in another fix), and would be combined with any possible
per-thread
mask when calculating the threads' affinity, possibly resulting in empty
sets. However, "cpu-map 1/all" would only set the mask for the threads
and not the process. As such the following:
    cpu-map 1 odd
    cpu-map 1/1-8 even

would leave no CPU, while doing:

    cpu-map 1/all odd
    cpu-map 1/1-8 even

would allow all CPUs.
While such configs are very unlikely to ever be met (which is why this
bug is tagged minor), this has become much more visible while testing
automatic CPU binding during 2.9 development, because due to this bug
it's much more common to end up with incorrect bindings.
This patch fixes it by simply removing the .proc entry from cpu_map and
always setting all threads' maps. The process is no longer arbitrarily
bound to the group 1's mask, but in case threads are disabled, we'll
use thread 1's mask since it contains the configured CPUs.
This fix should be backported at least to 2.6, but no need to insist if
it resists as it's easier to break cpu-map than to fix an unlikely issue.