mworker_prepare_master() performs some preparation routines for the new worker
process, which will be forked during startup. It's called only in
master-worker mode, so let's move it into mworker.c.
mworker_on_new_child_failure() performs some routines for a worker process
that has failed the reload. As it's called only in mworker_catch_sigchld()
from mworker.c, let's move mworker_on_new_child_failure() into mworker.c as
well. This way it can also be declared static.
This patch prepares the move of the on_new_child_failure definition into
mworker.c. So, let's rename it accordingly and also update its
description.
In 3.1-dev10, commit 8dd4efe42f ("MAJOR: mworker: move master-worker
fork in init()"), the FD associated with /dev/null was made CLOEXEC
using O_CLOEXEC. Unfortunately this is not portable to older OSes, it
doesn't build on Solaris for example, and was even reported as breaking
moderately old Linux OSes for other projects. Better not use it unless
absolutely certain it will work (currently we only use it for Linux
namespaces, which are optional), and use the conventional FD_CLOEXEC
instead.
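A minimal sketch of the portable approach (the surrounding code is simplified
for illustration):

    #include <fcntl.h>

    int fd = open("/dev/null", O_RDWR);

    if (fd >= 0)
        fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);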
No backport is needed.
This fixes commit d6ccd1738b
("MINOR: startup: set HAPROXY_LOCALPEER only once").
The comment "/* preset some environment variables */" is now useless here, as
HAPROXY_LOCALPEER is set later during the initialization stage and only once.
This should not be backported, as it is related to the latest master-worker
refactoring.
This fixes commit d6ccd1738b
("MINOR: startup: set HAPROXY_LOCALPEER only once"). HAPROXY_LOCALPEER can
be checked in the configuration to set some server or listener settings. So,
we need to set it just before we read the configuration for the second time.
Let's mark HAPROXY_LOCALPEER as "usable" in the configuration in the related
documentation chapter.
This should not be backported, as it is related to the latest master-worker
refactoring.
Let's store progname in a global variable, as it is handy to use it in
different parts of the code to format messages sent to stdout.
This also reduces the number of arguments that we need to pass to some
functions.
In init_early(), global.log_tag is initialized from the progname string and
global.log_tag.area points to the same memory.
If the log-tag keyword is provided in the configuration, its parser first
frees global.log_tag.area and then allocates new memory to copy the log-tag
argument into. So, progname no longer points to valid memory.
To fix this, let's always keep progname and global.log_tag.area in separate
memory areas. If log-tag is redefined in the configuration, its parser will
free the memory allocated for the default value via chunk_destroy(). The
memory allocated for progname will be freed in deinit().
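A minimal sketch of the intended ownership (the buffer field handling below is
simplified and illustrative only):

    /* init_early(): the default log tag gets its own copy instead of aliasing progname */
    progname = strdup(argv[0]);                    /* freed in deinit() */
    global.log_tag.area = strdup(progname);        /* default value, released via chunk_destroy() */
    global.log_tag.data = strlen(progname);
    global.log_tag.size = global.log_tag.data + 1; /* non-zero size so chunk_destroy() frees it */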
This should not be backported, as it is related to the latest master-worker
refactoring.
Before this patch, the HAPROXY_LOCALPEER variable could be set in init_early(),
in init_args() and in cfg_parse_global(). In master-worker mode, if the
localpeer keyword is set in the global section, HAPROXY_LOCALPEER in the worker
environment takes this keyword's value, but in the master environment it
still keeps the default, the local hostname. This is confusing.
To fix this, let's set HAPROXY_LOCALPEER only once, after the worker or
standalone process has finished parsing its configuration. And let's set this
variable only for the worker process or for the standalone process,
because the master doesn't need it.
HAPROXY_LOCALPEER takes the value saved in the localpeer global variable, which
is set by default in init_early() to the local hostname. Then, localpeer
may be overridden in init_args() (-L option) and in cfg_parse_global() (while
parsing the "localpeer" keyword).
Since sd_notify() is now implemented in src/systemd.c, there is no need
anymore to build its support conditionally with USE_SYSTEMD.
This patch adds support for -Ws in every build and removes the
USE_SYSTEMD build option. It also removes every reference to USE_SYSTEMD
in the documentation and the CI.
This also allows running the reg-tests with -Ws thanks to the new VTest
support.
Before this patch, HAPROXY_BRANCH was unset just after configuration parsing.
Let's keep it, as it can be used in conditional blocks and some
configuration directives, and it's handy to check its runtime value via "show
env".
In master-worker mode, this variable is set to the same value for both
processes.
The proxy auth_uri struct was manually cleaned up during deinit, but the logic
behind it was kind of awkward because it was required to find out which ones
were shared and which were not. Instead, let's switch to a proper refcount
mechanism and free the auth_uri struct directly in proxy_free_common().
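A sketch of the refcount approach (the field name and helper below are
illustrative, not the actual code):

    /* each proxy referencing the uri_auth takes a reference at setup time;
     * proxy_free_common() drops it and the last drop frees the struct */
    static void uri_auth_drop(struct uri_auth *ua)
    {
        if (!ua || --ua->refcount > 0)
            return;
        /* ... free rules, scopes and users here ... */
        free(ua);
    }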
When uri_auth admin rules were implemented in 474be415
("[MEDIUM] stats: add an admin level"), no attempt was made to free the
list of allocated rules, which makes valgrind unhappy upon deinit when
"stats admin" is used in the config.
To fix the issue, let's clean up the admin rules list upon deinit, where
uri_auth freeing is already handled.
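A hedged sketch of the cleanup (struct and field names are illustrative):

    struct stats_admin_rule *rule, *back;

    list_for_each_entry_safe(rule, back, &uap->admin_rules, list) {
        LIST_DELETE(&rule->list);
        free_acl_cond(rule->cond);
        free(rule);
    }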
While this could be backported to every stable version, given how minor
this is and has no impact on the dying process, it is probably not worth
the effort.
load_cfg() is called only once before the first reading of the configuration
(only the global section is parsed here). Then, before reading the rest of the
sections (second call of read_cfg()), we call clean_env(). As
HAPROXY_CFGFILES is set in load_cfg(), which is called only once, clean_env()
erases it. Thus, it's no longer shown in the "show env" output.
To fix this, let's set HAPROXY_CFGFILES in read_cfg(). This way, in
master-worker mode, it is set for both the master and the worker processes, as
it was before the refactoring.
This fix doesn't need to be backported, as it is related to the latest
master-worker architecture change.
After the master-worker refactoring, the master performs a re-exec only once,
upon receiving the "reload" command or the USR2 signal. There is no longer a
second master re-exec to free unused memory. Thus, there is no longer a need to
export the HAPROXY_LOAD_SUCCESS environment variable with the worker process
load status. This status can simply be saved in a global variable, load_status.
cfg_program_postparser() contains 2 parts:
- check the combination of MODE_MWORKER and the "program" section: if a
"program" section was parsed, MODE_MWORKER is mandatory;
- check the "command" keyword, which is mandatory for this section as
well.
It is more appropriate now, after the master-worker refactoring, to do the
first part in read_cfg_in_discovery_mode(), where we already check the
combination of MODE_MWORKER and the -S option.
We need to do the second part just below, in read_cfg_in_discovery_mode() as
well, because only the master process now parses the "program" section, and
programs are forked before the postparser functions are run in step_init_2().
Otherwise, if the 'command' keyword is absent, mworker_ext_launch_all() will
emit a log message saying that the program has started, while actually nothing
has been launched.
This does not need to be backported, as it is related to the master-worker
refactoring.
This commit introduces the tune.renice.startup and tune.renice.runtime
global keywords, which allow changing the process priority with setpriority().
tune.renice.startup is parsed and applied in the worker or the standalone
process for configuration parsing. If this keyword is used alone, the
nice value is restored to the previous one after configuration parsing.
tune.renice.runtime is applied after configuration parsing, also in the
worker or a standalone process. Combined with tune.renice.startup, it
allows having a different nice value during configuration parsing and
during runtime.
The feature was discussed in github issue #1919.
Example:

    global
        tune.renice.startup 15
        tune.renice.runtime 0
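For illustration, such a keyword could be applied with something like the
following sketch (not the actual parser code):

    #include <sys/resource.h>

    /* apply a nice value to the current process; returns -1 on failure,
     * e.g. when lowering the nice value without the required privilege */
    static int apply_renice(int value)
    {
        return setpriority(PRIO_PROCESS, 0, value);
    }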
As the master-worker fork now happens before step_init_2(), where pollers are
initialized and polling settings are then dumped to stdout in verbose and debug
modes, it turns out that the master and the worker dump the same polling
settings separately. This creates long and messy output in these modes.
Polling settings are the same for the master and the worker process for the
moment. Even if they diverge in the future, we are interested here in the
worker's settings. So, when started in master-worker mode, let's dump them only
in the worker context.
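A sketch of the intended check (the exact condition and output stream are
assumptions):

    /* only dump pollers from the worker (or a standalone process) */
    if (!master && (global.mode & (MODE_VERBOSE | MODE_DEBUG)))
        list_pollers(stdout);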
This doesn't need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dK*, after the master-worker refactoring we
dump the registered keywords to stdout twice: in the master and in the worker
process. This information is redundant and the output no longer has the right
format. So, as keyword registration happens very early, before the fork, let's
dump keywords only in the worker context if haproxy was launched with -W.
This does not need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dL, after the master-worker refactoring we dump
the libs to stdout twice: in the master and in the worker process. This
information is redundant. So let's show the linked libraries only in the worker
context if haproxy was also started with -W.
This does not need to be backported, as it is related to the latest
master-worker rework.
Don't do the master-worker fork if MODE_CHECK is detected from the command line
along with master-worker mode. In MODE_CHECK we should exit after
configuration parsing and validation. So, with the new master-worker
architecture, it's better to align this mode with the standalone one.
This patch does not need to be backported, as it is related to the latest
master-worker rework.
The MODE_STARTING flag should be unset for the master just before freeing the
startup logs ring, as it triggers the copy of process logs to this ring, see
the code of print_message().
Moreover, with this flag set, if the startup logs ring pointer is NULL, any
print_message() triggered just before the execvp in mworker_reexec() will call
startup_logs_init(). So the ring will be allocated again "discreetly", and
after the execvp we will lose its address, as startup_logs_init() will be
called again in step_init_1().
No need to backport this fix as it's related to the latest master-worker
refactoring.
During re-execution the master always keeps the "reload" sockpair FDs and the
shared sockpair ipc_fd[0] open; the latter is used to transfer listener sockets
from the previously forked worker to the new one. So, these master FDs are
inherited by the newly forked worker and must be closed in its context.
The inherited "reload" sockpair FDs and the shared sockpair FD (ipc_fd[0]) are
closed separately, because the master doesn't recreate the "reload" sockpair
after each re-exec; it always keeps the same FDs for it. So, in the worker
context, it can be closed immediately after the fork.
In contrast, the shared sockpair is created anew after each reload, when the
new worker is about to be forked. So, if N previous workers still exist at this
moment, the new worker will inherit N ipc_fd[0] FDs from the master. Thus, it's
safer to close all these FDs after the get_listeners_fd() and bind_listeners()
calls. Otherwise, FDs closed too early in the worker context would be
immediately reused for listeners and we could potentially have some bugs.
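A sketch of the two closing points in the worker (variable names follow the
description above and are simplified):

    /* right after the fork: the "reload" sockpair FDs inherited from the master */
    close(mworker_reload_sockpair[0]);
    close(mworker_reload_sockpair[1]);

    /* after get_listeners_fd() and bind_listeners(): the ipc_fd[0] ends
     * inherited for every process still present in the list */
    struct mworker_proc *child;

    list_for_each_entry(child, &proc_list, list) {
        if (child->ipc_fd[0] >= 0) {
            close(child->ipc_fd[0]);
            child->ipc_fd[0] = -1;
        }
    }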
There are two parts in mworker_cli_proxy_create(): allocating and setting up
the MASTER proxy, and allocating and setting up the servers attached to
ipc_fd[0] of the sockpairs shared with the workers.
So, let's split mworker_cli_proxy_create() into two functions accordingly.
Each of them takes **errmsg as an argument to report an error message, which
may be produced by some subcalls. The content of this errmsg allows extending
the final alert message shown to the user if these new functions fail.
The main goal of this split is to allow moving these two parts independently
in the future and to make the haproxy initialization code in haproxy.c more
transparent.
Before the master-worker architecture refactoring, the resources to set up the
master CLI for the new worker process (shared sockpair, entry in proc_list)
were created in init(), before parsing the configuration and binding the
listening sockets. So, during its re-exec, the master had to clean up the new
worker's resources in case it failed at some initialization step before the
fork.
Now the fork happens very early and the worker parses its configuration by
itself. If it fails during the initialization stage, all cleanups (deleting the
FDs of the shared sockpair, proc_list cleanup) are performed in the SIGCHLD
handler upon catching the SIGCHLD corresponding to this new worker. So, there
is no longer a need to call mworker_cleanup_proc() in mworker_reexec().
As for mworker_cleanlisteners(), there is no longer a need to call this
function either. The master now parses only the "global" and "program"
sections, so it allocates only the MASTER proxy, which is stopped in
mworker_reexec() by mworker_cli_proxy_stop().
Let's keep the definitions of mworker_cleanlisteners() and
mworker_cleanup_proc() in mworker.c for the moment. We may reuse parts of their
code later.
As the master-worker fork now happens at the early init stage and the worker
then parses its configuration and performs all initialization steps, let's
duplicate the startup logs ring for it just before the moment when it enters
its polling loop. The startup logs ring content is shown in the output of the
"reload" master CLI command, and we should be able to dump the worker
initialization logs there.
Log messages are written to the startup logs ring only when MODE_STARTING is
set (see print_message()). So, to be able to keep the last worker alerts in the
startup logs, let's withdraw MODE_STARTING and reset the user messages
context accordingly just before entering the polling loop.
This fix does not need to be backported, as it is a part of the previous
patches from this version, which refactor the master-worker architecture.
This patch simplifies the code of startup_logs_init_shm(). We no longer re-exec
the master process twice after each reload to free the unused memory it had to
allocate because it parsed all configuration sections. So, there is no
longer a need to keep the SHM fd open between the first and the next reloads.
We can completely remove HAPROXY_STARTUPLOGS_FD.
In step_init_1() we continue to call startup_logs_init_shm() to open the SHM
and to allocate the startup logs ring area within it. In master-worker mode,
the worker duplicates the initial startup logs ring after sending its READY
state to the master. Sharing the same ring between the two processes until the
worker finishes its initialization allows showing the worker's startup logs in
the master CLI output.
During the next reload, the master process should free the memory allocated
for the ring structure. Then, after the execvp(), it will reopen and map the
SHM area again and reallocate the ring structure.
When the master enters recovery mode after an unsuccessful reload,
HAPROXY_LOAD_SUCCESS should be set to 0. This way,
cli_io_handler_show_cli_sock() can dump on the master CLI the warnings and
alerts saved in the startup logs ring.
No need to backport this fix, as it is related to the previous patches in
this version which refactor the master-worker architecture.
In daemon mode, the daemonization fork now happens at the early init stage,
before parsing and applying the configuration, so we can't close
stdin/stdout/stderr immediately after forking. We keep them open until most of
the configuration, including chroot, is applied, in order to show alerts if
there are some problems. To achieve this, /dev/null is opened just before
calling chroot(), and after the chroot block it is used to close stdin and all
standard outputs. At this point we no longer need the fd of /dev/null, so we
can close it as well.
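A simplified sketch of the sequence (error handling mostly omitted):

    /* opened before chroot() so that /dev/null is still reachable afterwards */
    int devnullfd = open("/dev/null", O_RDWR);

    if (global.chroot && (chroot(global.chroot) == -1 || chdir("/") == -1))
        exit(1);

    /* detach the standard descriptors once alerts can no longer be shown */
    dup2(devnullfd, STDIN_FILENO);
    dup2(devnullfd, STDOUT_FILENO);
    dup2(devnullfd, STDERR_FILENO);
    close(devnullfd);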
The setenv/resetenv/presetenv/unsetenv keywords in the configuration modify the
process environment. In the case of master-worker mode and programs, we need to
restore the initial process environment before the reload, as the configuration
could have changed in between, and newly forked workers and programs should be
launched in the environment corresponding to this new configuration.
To achieve this, we back up the initial process environment before the first
configuration read, when the 'global' and 'program' sections are read. Then we
clean up the master process environment and restore the initial one from the
backup in mworker_reexec().
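A minimal sketch of the backup step, assuming a helper that simply duplicates
environ into a NULL-terminated array:

    #include <stdlib.h>
    #include <string.h>

    extern char **environ;

    static char **backup_env(void)
    {
        int n = 0;
        char **bck;

        while (environ[n])
            n++;
        bck = calloc(n + 1, sizeof(*bck));
        if (!bck)
            return NULL;
        for (int i = 0; i < n; i++)
            bck[i] = strdup(environ[i]);
        return bck;
    }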
Let's reintroduce systemd support in the refactored master-worker mode.
For now, the master-worker fork happens during the early initialization steps
and then the master process receives the "READY" status message from the newly
forked worker that has successfully loaded. Let's propagate this "READY" status
message to systemd at this moment, from the master process context
(_send_status()). We use the master process to send messages to systemd,
because it is the only process monitored by systemd.
In master recovery mode, we also need to send the "READY" message to systemd,
but with the status "Reload failed". "READY" signals to systemd
that the master process is still alive, because it doesn't exit in recovery
mode and keeps the existing worker. The "Reload failed" status signals to the
user that something went wrong with the configuration. The same message logic
was originally used for the case when the worker fails to read its
configuration, see on_new_child_failure() for more details.
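For illustration, the notifications sent from the master context could look
like this (a sketch around the in-tree sd_notify() implementation; the exact
status strings are assumptions):

    /* the new worker reported READY through _send_status() */
    sd_notify(0, "READY=1\nSTATUS=Ready.\n");

    /* the master enters recovery mode after a failed reload */
    sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");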
This patch is a part of a series to reintroduce program support in the new
master-worker architecture.
Let's add here the mworker_ext_launch_all() call before the master-worker fork
to start the external programs. We keep the order and the place of these two
forks (program and master-worker) the same as before the refactoring, in order
to avoid regressions.
This patch is a part of a series to reintroduce program support in the new
master-worker architecture.
For the moment we keep the order of the program and worker forks the same as
before the refactoring, as we need to be sure that this won't introduce
regressions. So, programs are forked before the new worker process.
Before the program fork we already need the deserialized processes list, to
find the programs launched before the reload and to stop them. The processes
list is saved before the reload in the HAPROXY_PROCESSES variable. It should be
deserialized before the first configuration read in discovery mode, because the
resetenv keyword could be present in the global section.
So, let's move mworker_env_to_proc_list() from mworker_create_master_cli() to
main(). We need to call it only after a reload in master-worker mode, thus
HAPROXY_MWORKER_REEXEC and HAPROXY_PROCESSES should still be present in the
re-executing process environment before the first configuration read.
Let's encapsulate the logic to set the verbosity modes (MODE_DEBUG and
MODE_VERBOSE) in a separate function, set_verbosity(). This makes the code of
main() more readable and allows calling set_verbosity() for the master process
in recovery mode. So, in this mode, the verbosity settings from before the
master re-execution will be re-applied to the master. set_verbosity() will be
extended in future commits to reduce the verbosity of the master in order not
to dump the pollers list and filters if it was started with -V or -d.
In this commit we add run_master_in_recovery_mode(), which groups all the
necessary initialization steps that the master should perform to be able to
enter its polling loop (run_master()) when it fails while parsing its new
config.
exit_on_failure() is now adapted for the master recovery mode, so let's
register it as an atexit handler when the master enters this mode. And let's
remove the atexit_flag variable for the master, because we no longer use it.
We also slightly refactor read_cfg_in_discovery_mode() here, in order to call
run_master_in_recovery_mode() for the case described above. Warning messages
are mandatory before calling run_master_in_recovery_mode(), as this allows
stopping haproxy with an error if it was launched in zero-warning mode.
So, in recovery mode, the master does not launch any worker. It just performs
its necessary initialization routines and enters its polling loop to continue
monitoring the existing worker process.
Master recovery mode replaces the former wait mode, with the difference that
the master in this case doesn't try to fork a new worker process. But it still
needs to enter its polling loop in order to monitor the previous worker.
The master performs some initialization steps for this and it recreates its
master CLI. During these initialization steps, the master could potentially
fail again.
As, for the moment, we use some common routines (step_init_2() and
step_init_3()) for the master init steps, there is no way there to signal to
the user that the failure happened in the master and, in addition, in its
recovery mode. So, in such a case, exit_on_failure() can still be useful in
order to print an appropriate alert, as we can register this function as an
atexit handler for the master.
Let's encapsulate here the code to load and read the configuration for the
first time in MODE_DISCOVERY. This makes the code of main() more readable and
adds the structure for adding the master initialization routines necessary
to support the master recovery mode.
Let's encapsulate the master's code (the steps it performs before entering its
polling loop and the deinitialization routines after it) in a separate
run_master() function. This makes the code of main() more readable. In the
future we plan to put more master process related code into run_master(), in
order to completely clean up init_step_2(), init_step_3() and init_step_4().
Let's encapsulate here another part of main(), after binding the listener
sockets and before calling the master's code in master-worker mode. This block
contains the code which applies the verbosity settings, checks the limits and
updates the ready date. It will take some time to figure out which of these
parts are really needed for the master and which ones it could skip. So, for
the moment, let's put all of them in step_init_4() and call it for all modes.
Let's encapsulate here the code which tries to bind the listeners for the new
process into a separate function. This will make the main() code more readable.
The master process, even if it has failed while reading its new configuration,
has to bind its master CLI sockets. So, this way, we will be able to call this
function in the master recovery mode.
The master CLI socket address and port for external connections (user,
monitoring tools) are provided for now only via the command line. So, even
after this failure, the master can and must re-establish the master CLI
connections.
Let's encapsulate here the code that calls sock_get_old_sockets() to obtain the
listener sockets from the previous process into a separate function. This
will make the code of main() more readable, and we will be able to move this
new function (if we ever need to) in the future.
MODE_CHECK and MODE_CHECK_CONDITION are now applied very early in
step_init_1() and step_init_2() in order to check the configuration or to check
some condition provided via the command line. When these checks have
finished, the main process exits. So, there is no longer a need to verify these
modes at the moment when the current process has already done its basic
initialization routines and is asking for the listener sockets from the
previously started one.
The first part of main(), just after calling the former init() and before
trying to bind the listeners, also needs to be encapsulated as-is into a
separate step_init_3(). It contains important blocks to register signals, to
apply memory and nofile limits, etc. The order of these blocks should also be
preserved (especially the signals part).
For the moment, step_init_3() must also be executed for all runtime modes.
This is the first commit in a series to add support for the 5th reload
use case, when the master process fails to read its new configuration. In this
case it just needs to perform its initialization steps and keep the existing
worker.
To add support for this last use case, we need to split init() and main()
into shorter steps in order to encapsulate the necessary initialization
routines into separate functions.
First, let's make progname a global variable for haproxy.c, as it
will be used in error messages in the initialization functions. Then let's
split init() into separate routines, which set and apply modes, write the
process PID to a pidfile, etc.
The big part of the former init(), which called functions to allocate pools,
to initialize proxies, to calculate maxconn and to perform some post checks,
was just encapsulated as-is into step_init_2(). It will take some time to
figure out exactly which parts of this initialization block are really
necessary for the master process and which ones it could skip. So, for the
moment, step_init_2() is called for all runtime modes.
Before refactoring the master-worker mode, in all runtime modes, when the new
process had successfully parsed its configuration and bound to its sockets, it
sent either SIGUSR1 or SIGTERM to the previous one in order to terminate it.
Let's keep this logic as-is for standalone mode. In addition, in standalone
mode we need to send the signal to the old process before calling
set_identity(), because the effective user or group may change in
set_identity(). So, the order is important here.
In master-worker mode after the refactoring, the master terminates the previous
worker by itself upon receiving the "READY" status from the new one in
_send_status(). At this moment, the master also sets the HAPROXY_LOAD_SUCCESS
env variable and checks if there are some other workers to terminate because
they exceeded max_reloads.
So, now in master-worker mode we terminate the old workers only when the new
one has successfully completed all initialization steps and has sent its
"READY" status to the master.
In the new master-worker architecture, when a worker process is forked and
successfully initialized, it needs to somehow communicate its "READY" state to
the master, in order to terminate the previous worker and any workers that
might have exceeded the max_reloads counter.
So, let's implement for this a new master CLI command, _send_status. A new
worker can send its status string "READY" to the master when it's about to
enter the run poll loop, and can thus start to receive data.
In _send_status(), in the master context, we update the status of the new
worker: the PROC_O_INIT flag is withdrawn.
When the TERM signal is sent to a worker, the worker terminates and this
triggers the mworker_catch_sigchld() handler in the master. This handler
deletes the exiting process entry from the processes list.
In _send_status() we loop over the processes list twice: the first time in
order to stop the workers that exceeded the max_reloads counter, and the second
time in order to stop the worker forked before the last reload. In the corner
case where max_reloads=1, we avoid sending SIGTERM twice to the same worker by
setting a sigterm_sent flag during the first loop.
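An illustrative sketch of the two passes (simplified; field names follow the
mworker_proc list but are not the exact code):

    struct mworker_proc *child;

    /* first pass: workers that exceeded max_reloads */
    list_for_each_entry(child, &proc_list, list) {
        if ((child->options & PROC_O_TYPE_WORKER) &&
            max_reloads != -1 && child->reloads > max_reloads) {
            kill(child->pid, SIGTERM);
            child->sigterm_sent = 1; /* don't signal it again below */
        }
    }

    /* second pass: the worker forked before the last reload */
    list_for_each_entry(child, &proc_list, list) {
        if ((child->options & PROC_O_LEAVING) && !child->sigterm_sent)
            kill(child->pid, SIGTERM);
    }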
When the master performs a re-exec, it should set the PROC_O_LEAVING flag for
the already existing worker. It means that the existing worker is marked as the
previous one and will be terminated after the reload.
In the previous implementation, the master process needed to do the re-exec
twice (the first time to parse its configuration and the second time to free
unused resources). So the logic of setting PROC_O_LEAVING was based on
comparing the number of reloads performed by each process from the processes
list, except the master.
Now, as mentioned before, the re-exec is performed only once. So, in this case
we need to set the PROC_O_LEAVING flag when we deserialize the list. It is done
for all processes whose number of reloads is strictly positive.
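A minimal sketch of the new logic applied while deserializing
HAPROXY_PROCESSES:

    /* any process that already survived at least one reload is the previous one */
    if (child->reloads > 0)
        child->options |= PROC_O_LEAVING;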
Previously, reexec_on_failure() was called in cases where the process failed
after a reload, while it was parsing its configuration or trying to apply
it. reexec_on_failure() called mworker_reexec() and the master process was
re-executed.
With the new architecture, in such cases there is no longer a need to
re-execute the master process after its reload again. It simply keeps the
previous worker, forked before the reload, and lets the new one exit with an
error. But we still need the code which increments the number of failed reloads
and which notifies systemd with the new "Reload failed!" status. So, let's
reuse and adapt reexec_on_failure() for this, and let's rename it to
on_new_child_failure().