This commit introduces the tune.renice.startup and tune.renice.runtime
global keywords, which allow changing the process priority with setpriority().
tune.renice.startup is parsed and applied in the worker or the standalone
process during configuration parsing. If this keyword is used alone, the
nice value is restored to the previous one after configuration parsing.
tune.renice.runtime is applied after configuration parsing, thus in the
worker or in a standalone process. Combined with tune.renice.startup it
allows having different nice values during configuration parsing and at
runtime.
The feature was discussed in GitHub issue #1919.
Example:
    global
        tune.renice.startup 15
        tune.renice.runtime 0
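As a rough model of the intended behaviour, the sequence below uses only the
standard getpriority()/setpriority() calls; the variable names are
illustrative, not the actual internal ones, and error handling is omitted:

    #include <sys/resource.h>

    /* sketch: apply tune.renice.startup around config parsing, then either
     * tune.renice.runtime or the saved previous value */
    static void apply_renice_example(int startup_nice, int runtime_nice,
                                     int runtime_set)
    {
        /* save the current nice value (errno checking omitted here) */
        int prev = getpriority(PRIO_PROCESS, 0);

        setpriority(PRIO_PROCESS, 0, startup_nice); /* during parsing */
        /* ... configuration parsing happens here ... */
        setpriority(PRIO_PROCESS, 0, runtime_set ? runtime_nice : prev);
    }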
As the master-worker fork now happens before step_init_2(), where pollers are
initialized and polling settings are then dumped to stdout in verbose and debug
modes, it turns out that the master and the worker dump the same polling
settings separately. This creates long and messy output in these modes.
Polling settings are the same for the master and the worker process for the
moment. Even if they diverge in the future, we are interested here in the
worker's settings. So, when started in master-worker mode, let's dump them only
in the worker context.
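A sketch of the kind of guard this implies, assuming a flag that identifies the
master after the fork (the dump itself is a stand-in, not the real code):

    #include <stdio.h>

    /* sketch: only the worker (or a standalone process) dumps its polling
     * settings in -V/-d modes; the master stays quiet */
    static void maybe_dump_polling_example(int verbose_or_debug, int is_master)
    {
        if (!verbose_or_debug || is_master)
            return;
        printf("Available polling systems: ...\n"); /* stand-in for real dump */
    }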
This doesn't need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dK*, after the master-worker refactoring we
dump the registered keywords to stdout twice: in the master and in the worker
process. This information is redundant and the output no longer has the right
format. So, as keyword registration happens very early, before the fork, let's
dump keywords only in the worker context if haproxy was launched with -W.
This does not need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dL, after the master-worker refactoring we dump
the linked libraries to stdout twice: in the master and in the worker process.
This information is redundant. So let's show the linked libraries only in the
worker context if haproxy was also started with -W.
This does not need to be backported, as it is related to the latest
master-worker rework.
Don't perform the master-worker fork if MODE_CHECK is detected on the command
line along with master-worker mode. In MODE_CHECK we should exit after the
configuration parsing and validation. So, with the new master-worker
architecture it's better to align this mode with the standalone one.
This patch does not need to be backported, as it is related to the latest
master-worker rework.
The MODE_STARTING flag should be unset for the master just before freeing the
startup logs ring, as it triggers the copy of process logs to this ring; see
the code of print_message().
Moreover, with this flag set, if the startup logs ring pointer is NULL, any
print_message() triggered just before the execvp() in mworker_reexec() will
call startup_logs_init(). So the ring will be allocated again "discreetly" and
after the execvp() we will lose its address, as startup_logs_init() is called
again in step_init_1().
No need to backport this fix as it's related to the latest master-worker
refactoring.
During re-execution the master always keeps the "reload" sockpair FDs and the
shared sockpair ipc_fd[0] open; the latter is used to transfer listener sockets
from the previously forked worker to the new one. So, these master FDs are
inherited by the newly forked worker and must be closed in its context.
The inherited "reload" sockpair FDs and the shared sockpair FD (ipc_fd[0]) are
closed separately, because the master doesn't recreate the "reload" sockpair
after each re-exec. It always keeps the same FDs for this "reload" sockpair, so
in the worker context it can be closed immediately after the fork.
By contrast, the shared sockpair is created anew after each reload, when the
new worker is forked. So, if N previous workers still exist at this moment, the
new worker will inherit N ipc_fd[0] FDs from the master. It is therefore safer
to close all these FDs after the get_listeners_fd() and bind_listeners() calls.
Otherwise, FD numbers freed too early in the worker context would immediately
be reused when binding listeners, and we could potentially have some bugs.
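A simplified sketch of the ordering described above; the parameter names are
placeholders, the real code walks the HAProxy processes list:

    #include <unistd.h>

    /* sketch: in the worker, right after the fork */
    static void worker_close_inherited_fds_example(int reload_fd0, int reload_fd1,
                                                   int *shared_ipc_fd0, int nb_old)
    {
        /* the "reload" sockpair FDs never change across master re-execs,
         * so they can be closed immediately after the fork */
        close(reload_fd0);
        close(reload_fd1);

        /* ... get_listeners_fd() and bind_listeners() run here, so that
         * freshly freed FD numbers are not reused by listeners too early ... */

        /* only now close the ipc_fd[0] ends inherited from the master,
         * one per still-existing previous worker */
        for (int i = 0; i < nb_old; i++)
            close(shared_ipc_fd0[i]);
    }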
There are two parts in mworker_cli_proxy_create(): allocating and setting up
the MASTER proxy, and allocating and setting up servers on the ipc_fd[0] ends
of the sockpairs shared with workers.
So, let's split mworker_cli_proxy_create() into two functions accordingly.
Each of them takes **errmsg as an argument to report an error message, which
may be produced by some sub-calls. The content of this errmsg allows extending
the final alert message shown to the user if these new functions fail.
The main goals of this split are to allow moving these two parts independently
in the future and to make the haproxy initialization code in haproxy.c more
transparent.
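A sketch of what the split and its error reporting could look like; the two
function names below are illustrative only, and the alert is printed with
plain fprintf() to keep the example self-contained:

    #include <stdio.h>
    #include <stdlib.h>

    /* illustrative prototypes: the real names result from splitting
     * mworker_cli_proxy_create() as described above */
    int mworker_cli_master_proxy_new(char **errmsg);      /* MASTER proxy setup */
    int mworker_cli_attach_worker_servers(char **errmsg); /* servers on ipc_fd[0] ends */

    /* caller sketch: the errmsg content extends the final alert on failure */
    static int setup_master_cli_example(void)
    {
        char *errmsg = NULL;

        if (mworker_cli_master_proxy_new(&errmsg) < 0 ||
            mworker_cli_attach_worker_servers(&errmsg) < 0) {
            fprintf(stderr, "Failed to set up the master CLI: %s\n",
                    errmsg ? errmsg : "unknown error");
            free(errmsg);
            return -1;
        }
        return 0;
    }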
Before the master-worker architecture refactoring, the resources to set up the
master CLI for the new worker process (shared sockpair, entry in proc_list)
were created in init() before parsing the configuration and binding the
listening sockets. So, during its re-exec, the master had to clean up the new
worker's resources in case it failed at some initialization step before the
fork.
Now the fork happens very early and the worker parses its configuration by
itself. If it fails during the initialization stage, all cleanups (deleting the
FDs of the shared sockpair, proc_list cleanup) are performed in the SIGCHLD
handler, upon catching the SIGCHLD corresponding to this new worker. So, there
is no longer any need to call mworker_cleanup_proc() in mworker_reexec().
As for mworker_cleanlisteners(), there is no longer any need to call this
function. The master now parses only the "global" and "program" sections, so it
allocates only the MASTER proxy, which is stopped in mworker_reexec() by
mworker_cli_proxy_stop().
Let's keep the definitions of mworker_cleanlisteners() and
mworker_cleanup_proc() in mworker.c for the moment. We may reuse parts of their
code later.
As the master-worker fork now happens at the early init stage and the worker
then parses its configuration and performs all initialization steps, let's
duplicate the startup logs ring for it just before the moment when it enters
its polling loop. The startup logs ring content is shown as the output of the
"reload" master CLI command and we should be able to dump the worker
initialization logs there.
Log messages are written to the startup logs ring only when MODE_STARTING is
set (see print_message()). So, to be able to keep the last worker alerts in the
startup logs, let's withdraw MODE_STARTING and reset the user messages context
accordingly just before entering the polling loop.
This fix does not need to be backported as it is part of the previous patches
from this version, which refactor the master-worker architecture.
This patch simplifies the code of startup_logs_init_shm(). We no longer re-exec
the master process twice after each reload to free the unused memory it had to
allocate because it parsed all configuration sections. So, there is no longer
any need to keep the SHM fd open between one reload and the next. We can
completely remove HAPROXY_STARTUPLOGS_FD.
In step_init_1() we continue to call startup_logs_init_shm() to open the SHM
and to allocate the startup logs ring area within it. In master-worker mode,
the worker duplicates the initial startup logs ring after sending its READY
state to the master. Sharing the same ring between the two processes until the
worker finishes its initialization allows showing the worker's startup logs on
the master CLI output.
During the next reload the master process should free the memory allocated for
the ring structure. Then, after the execvp(), it will reopen and map the SHM
area again and reallocate the ring structure.
When the master enters recovery mode after an unsuccessful reload,
HAPROXY_LOAD_SUCCESS should be set to 0. This way
cli_io_handler_show_cli_sock() can dump on the master CLI the warnings and
alerts saved in the startup logs ring.
No need to backport this fix, as it is related to the previous patches in this
version which refactor the master-worker architecture.
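In other words, something along these lines when entering recovery mode (a
sketch, not the exact call site):

    #include <stdlib.h>

    /* sketch: entering master recovery mode after a failed reload */
    static void enter_recovery_mode_example(void)
    {
        /* mark the last reload as failed so that cli_io_handler_show_cli_sock()
         * reports the warnings and alerts saved in the startup logs ring */
        setenv("HAPROXY_LOAD_SUCCESS", "0", 1);
    }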
In daemon mode the daemonization fork now happens at the early init stage,
before parsing and applying the configuration, so we can't close
stdin/stdout/stderr immediately after forking. We keep them open until most of
the configuration, including chroot, has been applied, in order to show alerts
if there are any problems. To achieve this, /dev/null is opened just before
calling chroot(), and after the chroot block it is used to close stdin and all
standard outputs. At this point we no longer need the /dev/null fd, so we can
close it as well.
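A condensed sketch of that sequence, with error handling omitted and the chroot
directory passed in as a plain parameter:

    #include <fcntl.h>
    #include <unistd.h>

    /* sketch: keep stdio open long enough to report configuration alerts */
    static void chroot_and_silence_example(const char *chroot_dir)
    {
        /* open /dev/null before chroot(): it may not exist inside the jail */
        int devnull = open("/dev/null", O_RDWR);

        if (chroot_dir) {
            chroot(chroot_dir);
            chdir("/");
        }

        /* alerts about the steps above could still reach the terminal;
         * only now detach stdin and the standard outputs */
        dup2(devnull, STDIN_FILENO);
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        close(devnull); /* the /dev/null fd itself is no longer needed */
    }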
The setenv/resetenv/presetenv/unsetenv keywords in the configuration modify the
process environment. In the case of master-worker and programs we need to
restore the initial process environment before a reload, as the configuration
could change in between, and newly forked workers and programs should be
launched in the environment corresponding to this new configuration.
To achieve this we back up the initial process environment before the first
configuration read, when the 'global' and 'program' sections are read. Then we
clean up the master process environment and restore the initial one from the
backup in mworker_reexec().
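A minimal model of the backup/restore idea, built directly on the standard
environ array; the function names are illustrative and allocation failures are
not handled:

    #include <stdlib.h>
    #include <string.h>

    extern char **environ;

    /* sketch: copy the environment before the first (discovery) config read */
    static char **backup_env_example(void)
    {
        int n = 0;
        while (environ[n])
            n++;
        char **copy = calloc(n + 1, sizeof(*copy));
        for (int i = 0; i < n; i++)
            copy[i] = strdup(environ[i]);
        return copy;
    }

    /* sketch: drop setenv/presetenv/... side effects and restore the backup
     * before the execvp() in mworker_reexec(); clearenv() is glibc-specific */
    static void restore_env_example(char **backup)
    {
        clearenv();
        for (int i = 0; backup[i]; i++)
            putenv(backup[i]);
    }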
Let's reintroduce systemd support in the refactored master-worker mode.
As of now, the master-worker fork happens during the early initialization steps
and the master process then receives the "READY" status message from the newly
forked worker that has successfully loaded. Let's propagate this "READY" status
message to systemd at this moment, from the master process context
(_send_status()). We use the master process to send messages to systemd,
because it is the only process monitored by systemd.
In master recovery mode, we also need to send the "READY" message to systemd,
but with the status "Reload failed". "READY" signals to systemd that the master
process is still alive, because it doesn't exit in recovery mode and it keeps
the existing worker. The "Reload failed" status signals to the user that
something went wrong with the configuration. The same message logic was
originally preserved for the case when the worker fails to read its
configuration; see on_new_child_failure() for more details.
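With libsystemd's sd_notify() taken as a reference for the notification
protocol, the two cases look roughly as follows; this only illustrates the
messages sent, not the actual call sites:

    #include <systemd/sd-daemon.h>

    /* sketch: master context, once the worker reported READY via _send_status,
     * or after a failed reload in recovery mode */
    static void notify_systemd_example(int reload_failed)
    {
        if (!reload_failed)
            sd_notify(0, "READY=1\n");
        else
            /* recovery mode: the master stays alive with the old worker,
             * so it is still READY, but the status flags the failure */
            sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");
    }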
This patch is part of a series to reintroduce program support in the new
master-worker architecture.
Let's add here the mworker_ext_launch_all() call before the master-worker fork
to start external programs. We keep the order and the place of these two forks
(program and master-worker) the same as before the refactoring, in order to
avoid regressions.
This patch is part of a series to reintroduce program support in the new
master-worker architecture.
For the moment we keep the order of the program and worker forks the same as
before the refactoring, as we need to be sure that this won't introduce
regressions. So, programs are forked before the new worker process.
Before the program fork we already need the deserialized processes list to find
the programs launched before the reload and to stop them. The processes list is
saved before the reload in the HAPROXY_PROCESSES variable. It should be
deserialized before the first configuration read in discovery mode, because the
resetenv keyword could be present in the global section.
So, let's move mworker_env_to_proc_list() from mworker_create_master_cli() to
main(). We need to call it only after a reload in master-worker mode, thus
HAPROXY_MWORKER_REEXEC and HAPROXY_PROCESSES should still be present in the
re-executing process environment before the first configuration read.
Let's encapsulate the logic to set the verbosity modes (MODE_DEBUG and
MODE_VERBOSE) in a separate function, set_verbosity(). This makes the code of
main() more readable and allows calling set_verbosity() for the master process
in recovery mode. So, in this mode, the verbosity settings from before the
master re-execution will be re-applied to the master. set_verbosity() will be
extended in future commits to reduce the verbosity of the master, in order not
to dump the pollers list and filters if it was started with -V or -d.
In this commit we add run_master_in_recovery_mode(), which groups all the
initialization steps the master should perform to be able to enter its polling
loop (run_master()) when it fails while parsing its new config.
As exit_on_failure() is now adapted for master recovery mode, let's register it
as an atexit handler when the master enters this mode. And let's remove the
atexit_flag variable for the master, because we no longer use it.
We also slightly refactor read_cfg_in_discovery_mode() here in order to call
run_master_in_recovery_mode() for the case described above. Emitting the
warning messages before calling run_master_in_recovery_mode() is mandatory, as
this allows stopping haproxy with an error if it was launched in zero-warning
mode.
So, in recovery mode the master does not launch any worker. It just performs
its necessary initialization routines and enters its polling loop to continue
monitoring the existing worker process.
Master recovery mode replaces the former wait-mode, with the difference that
the master in this case doesn't try to fork a new worker process. But it still
needs to enter its polling loop in order to monitor the previous worker. The
master performs some initialization steps for this and recreates its master
CLI. During these initialization steps, the master could potentially fail
again. As we currently use some common routines for the master init steps
(step_init_2() and step_init_3()), there is no way there to signal to the user
that the failure happened in the master and, moreover, in its recovery mode.
So, in such a case exit_on_failure() can still be useful to print an
appropriate alert, as we can register this function as an atexit handler for
the master.
Let's encapsulate here the code to load and read the configuration for the
first time in MODE_DISCOVERY. This makes the code of main() more readable and
adds the structure needed for the master initialization routines required to
support master recovery mode.
Let's encapsulate the master's code (the steps it performs before entering its
polling loop and the deinitialization routines after it) in a separate
run_master() function. This makes the code of main() more readable. In the
future we plan to put more master-process-related code in run_master(), in
order to completely clean up init_step_2(), init_step_3() and init_step_4().
Let's encapsulate here another part of main(), after binding the listener
sockets and before calling the master's code in master-worker mode. This block
contains the code which applies verbosity settings, checks limits and updates
the ready date. It will take some time to figure out which of these parts are
really needed for the master and which ones it could skip. So, for the moment,
let's put all of this in step_init_4() and call it for all modes.
Let's encapsulate the code which tries to bind the listeners for the new
process into a separate function. This will make the main() code more readable.
The master process, even if it has failed while reading its new configuration,
has to bind its master CLI sockets. This way we will be able to call this
function in master recovery mode.
The master CLI socket address and port for external connections (user,
monitoring tools) are provided for now only via the command line. So, even
after this failure, the master can and must re-establish its master CLI
connections.
Let's encapsulate the code that calls sock_get_old_sockets() to obtain listener
sockets from the previous process into a separate function. This will make the
code of main() more readable and will let us move this new function (if we ever
need to) in the future.
MODE_CHECK and MODE_CHECK_CONDITION are now applied very early, in
step_init_1() and step_init_2(), in order to check the configuration or to
check some condition provided via the command line. When these checks have
finished, the main process exits. So, there is no longer any need to verify
these modes at the point where the current process has already completed its
basic initialization routines and is asking for listener sockets from the
previously started one.
The first part of main(), just after calling the former init() and before
trying to bind listeners, also needs to be encapsulated as-is into a separate
step_init_3(). It contains important blocks to register signals, to apply
memory and nofile limits, etc. The order of these blocks should also be
preserved (especially the signals part).
For the moment step_init_3() must also be executed for all runtime modes.
This is the first commit in a series to add support for the 5th reload use
case, when the master process fails to read its new configuration. In this case
it just needs to perform its initialization steps and keep the existing worker.
To add support for this last use case we need to split init() and main() into
shorter steps, in order to encapsulate the necessary initialization routines
into separate functions.
Let's first make progname a global variable for haproxy.c, as it will be used
in error messages in the initialization functions. Then let's split init() into
separate routines, which set and apply modes, write the process PID into a
pidfile, etc.
The big part of the former init(), which called functions to allocate pools,
initialize proxies, calculate maxconn and perform some post checks, was simply
encapsulated as-is into step_init_2(). It will take some time to figure out
exactly which parts of this initialization block are really necessary for the
master process and which ones it could skip. So, for the moment, step_init_2()
is called for all runtime modes.
Before refactoring the master-worker mode, in all runtime modes, when the new
process had successfully parsed its configuration and bound to its sockets, it
sent either SIGUSR1 or SIGTERM to the previous one in order to terminate it.
Let's keep this logic as-is for standalone mode. In addition, in standalone
mode we need to send the signal to the old process before calling
set_identity(), because set_identity() may change the effective user or group.
So, the order is important here.
In master-worker mode after the refactoring, the master terminates the previous
worker by itself upon receiving the "READY" status from the new one in
_send_status(). At this moment the master also sets the HAPROXY_LOAD_SUCCESS
env variable and checks whether there are other workers to terminate that have
exceeded max_reloads.
So, in master-worker mode we now terminate old workers only when the new one
has successfully completed all initialization steps and has sent the "READY"
status to the master.
In the new master-worker architecture, when a worker process is forked and
successfully initialized, it somehow needs to communicate its "READY" state to
the master, so that the master can terminate the previous worker and any
workers that might have exceeded the max_reloads counter.
So, let's implement a new master CLI command, _send_status, for this. A new
worker can send its status string "READY" to the master when it is about to
enter the run poll loop and can thus start to receive data.
In _send_status(), in the master context, we update the status of the new
worker: the PROC_O_INIT flag is withdrawn.
When the TERM signal is sent to a worker, the worker terminates and this
triggers the mworker_catch_sigchld() handler in the master. This handler
deletes the exiting process entry from the processes list.
In _send_status() we loop over the processes list twice: the first time in
order to stop workers that have exceeded the max_reloads counter, the second
time in order to stop the worker forked before the last reload. In the corner
case where max_reloads=1, we avoid sending SIGTERM twice to the same worker by
setting a sigterm_sent flag during the first loop.
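A schematic version of the two passes; the list and flag definitions below are
modeled loosely on the description above and are not the real structures:

    #include <sys/types.h>
    #include <signal.h>

    struct proc {
        struct proc *next;
        pid_t pid;
        unsigned int reloads;      /* reloads this worker has survived */
        unsigned int options;      /* PROC_O_* flags */
        int sigterm_sent;          /* set in the first pass to avoid a double kill */
    };

    #define PROC_O_LEAVING  0x1    /* illustrative values only */

    /* sketch of the termination logic run by the master in _send_status() */
    static void stop_old_workers_example(struct proc *list, unsigned int max_reloads)
    {
        /* first pass: workers that exceeded the max_reloads counter */
        for (struct proc *p = list; p; p = p->next) {
            if (p->reloads > max_reloads) {
                kill(p->pid, SIGTERM);
                p->sigterm_sent = 1;
            }
        }
        /* second pass: the worker forked before the last reload */
        for (struct proc *p = list; p; p = p->next) {
            if ((p->options & PROC_O_LEAVING) && !p->sigterm_sent)
                kill(p->pid, SIGTERM);
        }
    }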
When the master performs a re-exec it should set the PROC_O_LEAVING flag on any
already existing worker. It means that the existing worker is marked as the
previous one and will be terminated after the reload.
In the previous implementation the master process needed to do the re-exec
twice (the first time to parse its configuration and the second time to free
unused resources). So the logic of setting PROC_O_LEAVING was based on
comparing the number of reloads performed by each process in the processes
list, except the master.
Now, as mentioned before, the re-exec is performed only once. So, in this case
we need to set the PROC_O_LEAVING flag when we deserialize the list. It is done
for all processes whose number of reloads is strictly positive.
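Reusing the illustrative struct proc from the previous sketch, the rule now
reduces to something like this while rebuilding the list from
HAPROXY_PROCESSES:

    /* sketch: mark every worker that already survived at least one reload */
    static void mark_leaving_example(struct proc *list)
    {
        for (struct proc *p = list; p; p = p->next) {
            /* a strictly positive reload count means this is a previous
             * worker, so it is marked as leaving */
            if (p->reloads > 0)
                p->options |= PROC_O_LEAVING;
        }
    }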
Previously, reexec_on_failure() was called when the process had failed after a
reload, while it was parsing its configuration or trying to apply it.
reexec_on_failure() called mworker_reexec() and the master process was
re-executed.
With the new architecture there is no longer any need to re-execute the master
process again after its reload in such cases. It simply keeps the previous
worker, forked before the reload, and lets the new one exit with an error. But
we still need the code which increments the number of failed reloads and which
notifies systemd with the new "Reload failed!" status. So, let's reuse and
adapt reexec_on_failure() for this and rename it to on_new_child_failure().
Basically, this is the continuation of the previous commits. So, here, after
the fork the worker process closes the "master" end of the copied CLI sockpair
and binds its own end, ipc_fd[1], to the GLOBAL proxy listener.
mworker_cli_global_proxy_new_listener() guarantees that the GLOBAL proxy will
be created if it wasn't already.
The master process, at first, allocates the MASTER proxy, creates the master
CLI listener (-S command line option) and the reload sockpair, and then closes
the "worker" end of the copied CLI sockpair and binds its own end, ipc_fd[0],
to the created MASTER proxy.
Using the new PROC_O_INIT state helps reduce the test conditions needed to find
the newly forked worker.
Here, to distinguish between the new worker and the previous one, let's add a
new process state, PROC_O_INIT, and set it when the memory is allocated for the
new worker in the processes list.
For the master process we always need to create a MASTER proxy, even if master
CLI settings were not provided via the command line, because we now bind a
listener in the master process context on ipc_fd[0]. So, the MASTER proxy
should already be allocated at this moment.
The main idea here is to create the inherited master CLI sockpair just before
the master-worker fork, and only then, after the fork, let each process bind
the needed listener to its own end of this sockpair.
This way the master and worker processes can close the unused "ends" of their
sockpair copies (ipc_fd[0] for the worker and ipc_fd[1] for the master), as in
the sketch below.
When this sockpair creation happens inside
mworker_cli_global_proxy_new_listener(), it is not possible for the master to
close ipc_fd[1] bound to the GLOBAL proxy listener, as this triggers a
BUG_ON(fd->owner) in fd_insert() in the master context, because the master
process has already entered its polling loop and the poller, in its turn, tries
to reuse the closed fd.
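A bare-bones sketch of the ordering only (the actual listener binding and error
handling are omitted):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* sketch: create the master CLI sockpair before the fork, then let each
     * side keep only the end it binds a listener on */
    static pid_t fork_with_cli_sockpair_example(int ipc_fd[2])
    {
        socketpair(AF_UNIX, SOCK_STREAM, 0, ipc_fd);

        pid_t pid = fork();
        if (pid == 0) {
            /* worker: ipc_fd[1] will back the GLOBAL proxy listener */
            close(ipc_fd[0]);
        }
        else if (pid > 0) {
            /* master: ipc_fd[0] will back the MASTER proxy side */
            close(ipc_fd[1]);
        }
        return pid;
    }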
Let's rename mworker_cli_sockpair_new() to
mworker_cli_global_proxy_new_listener() to outline that this function creates
the GLOBAL proxy, allocates the listener with the "master-socket" bind conf and
attaches this listener to this GLOBAL proxy. The listener is bound to ipc_fd[1]
of the sockpair inherited in the master and in the worker (the master CLI
sockpair).
The *thread keyword parsers are sensitive to the global section position. If
these keywords are present there, the global section must be the first section
in the configuration. The *thread parsers' logic is based on the
non_global_section_parsed counter. So, we need to reset it explicitly before
the second configuration read, done by the worker or in standalone mode.
This commit is part of the series to add support for discovery mode in the
configuration parser and in the initialization sequence.
In order to support discovery mode, we need to read the configuration twice.
So, we need to split the stage where we load all configuration files from the
stage where we parse them. To do this, let's extract the part of read_cfg()
where we load the configuration files into a separate function, load_cfg().
This way we can call only the parsing part as many times as we need.
Before reading the configuration for the first time we set MODE_DISCOVERY.
After this read the mode is immediately unset, as the real runtime mode has
already been set by the discovery keyword parsers.
The second read is performed when all primary runtime modes (daemon,
master-worker) have been applied, because we should not read the configuration
twice in the master process.
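Schematically, the resulting call sequence looks like this; arguments, return
checks and the exact flag handling are omitted, so this is only the shape of
the flow, not compilable code:

    /* sketch of the two-pass configuration read */
    load_cfg();                      /* load all configuration files once */

    global.mode |= MODE_DISCOVERY;   /* first pass: discovery keywords only */
    read_cfg();                      /* sets the real runtime mode */
    global.mode &= ~MODE_DISCOVERY;

    /* daemon / master-worker forks are applied here; the master stops at
     * this point and never parses the full configuration */

    read_cfg();                      /* second pass: full parsing, performed
                                        by the worker or in standalone mode */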
MODE_MWORKER_WAIT became redundant with MODE_MWORKER due to moving the
master-worker fork into init(). This change lets the master no longer perform a
re-exec just after forking in order to free additional memory.
As we set the 'master' variable after the fork in the master process, we can
now replace MODE_MWORKER_WAIT in some 'if' statements with a simple check of
this 'master' variable.
Let's also continue to get rid of the HAPROXY_MWORKER_WAIT_ONLY environment
variable, as it's no longer needed either.
In cfg_program_postparser(), which is used to check whether cmdline is defined
to launch a program, we completely remove the mode check for now, because the
master process does not parse the configuration for the moment. 'program'
section parsing will be reintroduced in the master later, in the next commits.
As we no longer support MODE_MWORKER_WAIT for the master (it became redundant
with MODE_MWORKER after moving the master-worker fork into init()), let's
rename the exit_on_waitmode_failure() callback to just exit_on_failure().
This is a first commit to prepare the removal of MODE_MWORKER_WAIT support. It
became redundant with MODE_MWORKER due to moving the master-worker fork into
init(). The master process no longer performs a re-exec to free additional
memory after forking and no longer changes its mode to MODE_MWORKER_WAIT, in
which it entered its wait polling loop and handled signals. Now the master
enters this loop almost immediately after forking a worker, always staying in
MODE_MWORKER.
So, we can remove the mworker_reexec_waitmode() wrapper, which was used to set
the HAPROXY_MWORKER_WAIT_ONLY variable and to call mworker_reexec(). But let's
keep for the moment the logic of the reexec_on_failure() atexit callback for
the master, in case we need to support this again in the future.
Due to moving the master-worker fork into init(), we need to guard the
prepare_caps_from_permitted_set() call, which is executed after init(). This
call makes sense only for the worker, daemon and foreground mono-process modes.
prepare_caps_from_permitted_set() allows reading Linux capabilities from the
haproxy binary and moving some of them into the process Effective set, if the
'setcap' keyword lists the needed capabilities in the global section.
There are two set_identity() calls, both under nearly the same condition:
'if ((global.mode & (MODE_MWORKER|MODE_DAEMON...)...'
The first call serves to change uid/gid and to set some needed Linux
capabilities only for a process in foreground mode. The second comes after the
master-worker fork and allows doing the same in daemon and worker modes.
Due to moving the master-worker fork into init() in a previous commit, the
second set_identity() is no longer under the 'if'. So, it is executed for all
modes except MODE_MWORKER. In MODE_MWORKER the process now enters its wait
polling loop just after forking a worker and terminates almost immediately if
it exits this loop.
The worker, the daemon and a process in foreground mode will perform
set_identity() as before, but now it will be called in one place, in main().
global.last_checks should be verified just after the set_identity() call. As
stated in the comments, some configuration options may require full privileges
or some Linux capabilities to be granted to the process. set_identity(), via
prepare_caps_for_setuid(), may put the configured capabilities into the process
Effective set and, hence, remove the respective flag from global.last_checks.
There are two 'chroot' code blocks, both under nearly the same condition:
'if ((global.mode & (MODE_MWORKER|MODE_DAEMON...)...'
The first block serves to perform the chroot only for a process in foreground
mode. The second comes after the master-worker fork and allows doing the chroot
in daemon and worker modes.
Due to moving the master-worker fork into init() in a previous commit, the
second 'chroot' code block is no longer under the 'if'. So, it is executed for
all modes except MODE_MWORKER. In MODE_MWORKER the process now enters its wait
polling loop just after forking a worker and terminates almost immediately if
it exits this loop.
The worker, the daemon and a process in foreground mode will perform the chroot
as before, but now it will be done in one place, in main().
mworker_create_master_cli() creates the MASTER proxy and allocates the
listeners which are attached to this proxy. It also creates the reload
sockpair.
So, it's more appropriate to check that we are in MODE_MWORKER, when master CLI
settings were provided via the command line, just after the config parsing. And
only then, if the runtime mode and the command line settings are coherent, try
to perform the master-worker fork and to create the master CLI.
After moving the master-worker fork into init() and reintroducing it into a
switch-case (see the previous commit), it is more appropriate to set nbthread=1
and nbtgroups=1 immediately in the 'case' for the parent process.
Before this fix, the startup logs ring was duplicated before the fork(), so the
master and the worker both had the original startup_logs ring and the
duplicated one. In the worker context we freed the original ring and used the
duplicated one. In the master context we did nothing, yet we still created a
duplicated copy again and again during each reload.
So, let's duplicate the startup logs ring only in the worker context. The
master continues to use the original ring initialized in init() before its
fork().
This refactoring allows simplifying the 'master-worker' logic. With this change
the master process will fork a worker very early, at the initialization stage,
which allows performing configuration parsing only in the worker. In reality
only the worker process needs to parse and apply the whole configuration. The
master process just polls the master CLI sockets, watches the worker status,
catches its termination state and handles the signals. With this refactoring
there is no longer any need for the master to perform a re-execution after
reading the whole configuration file in order to free additional memory. And
there is no longer any need for the worker to register atexit callbacks in
order to free the memory when it fails to apply the new configuration. In
contrast, we now need to set the proc_self pointer to the new worker entry in
the processes list just after the fork, in the worker process context.
proc_self is dereferenced in mworker_sockpair_register_per_thread(), which is
called when the worker enters its polling loop.
Following patches will try to gather more 'worker'- and 'master'-specific code
into the dedicated cases of this new fork() switch, or into separate functions.