Now that we have the guarantee that init calls happen before any other
thread starts, we no longer need the workaround installed by commit
1605c7ae6 ("BUG/MEDIUM: threads/mworker: fix a race on startup") and we
can instead rely on a regular per-thread initcall for this function. It
will only be performed on worker thread #0, the other ones and the master
have nothing to do, just like in the original code that was only moved
to the function.
It's a bit dangerous to let threads initialize at different speeds on
startup. Some are still in their init functions while others are already
running. It was even subject to some race condition bugs like the one
fixed by commit 1605c7ae6 ("BUG/MEDIUM: threads/mworker: fix a race on
startup").
Here in order to secure all this, we take a very simplistic approach
consisting in using half of the rendez-vous point, which is made
exactly for this purpose : we first initialize the mask of the threads
requesting a rendez-vous to the mask of all threads, and we simply call
thread_release() once the init is complete. This guarantees that no
thread will go further than the initialization code during this time.
This could even safely be backported if any other issue related to an
init race was discovered in a stable release.
It's always a pain to have to stuff lots of #ifdef USE_OPENSSL around
ssl headers, it even results in some of them appearing in a random order
and multiple times just to benefit from an existing ifdef block. Let's
make these headers safe for inclusion when USE_OPENSSL is not defined,
they now perform the test themselves and do nothing if USE_OPENSSL is
not defined. This allows removing no less than 8 such ifdef blocks
and make include blocks more readable.
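For illustration, a header made safe this way looks roughly like the
following sketch (guard and file names are only examples):

    #ifndef _PROTO_SSL_SOCK_H
    #define _PROTO_SSL_SOCK_H
    #ifdef USE_OPENSSL

    #include <openssl/ssl.h>
    /* ... declarations that require OpenSSL ... */

    #endif /* USE_OPENSSL */
    #endif /* _PROTO_SSL_SOCK_H */

Callers can then include the header unconditionally.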
Since we're providing a compatibility layer for multiple OpenSSL
implementations and their derivatives, it is important that no C file
directly includes openssl headers but only passes via openssl-compat
instead. As a bonus this also gets rid of redundant complex rules for
inclusion of certain files (engines etc).
They were all checked to comply with the advertised openssl version. Now
that libressl doesn't pretend to be a more recent openssl anymore, we
can simply rely on the regular openssl version tests without having to
deal with exceptions for libressl.
Most tests on OPENSSL_VERSION_NUMBER have become complex and break all
the time because this number is fake for some derivatives like LibreSSL.
This patch creates a new macro, HA_OPENSSL_VERSION_NUMBER, which will
carry the real openssl version defining the compatibility level, and
this version will be adjusted depending on the variants.
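A simplified sketch of the mapping (the exact version values used for each
derivative are only illustrative):

    #if defined(LIBRESSL_VERSION_NUMBER)
    /* LibreSSL advertises 2.x but only provides roughly the 1.0.1 API level */
    #define HA_OPENSSL_VERSION_NUMBER 0x1000107fL
    #else
    #define HA_OPENSSL_VERSION_NUMBER OPENSSL_VERSION_NUMBER
    #endif

    /* compatibility tests then become straightforward again */
    #if HA_OPENSSL_VERSION_NUMBER >= 0x1010000fL
    /* ... 1.1.0+ only code ... */
    #endif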
As with every single OpenSSL fix, LibreSSL build broke again, this time
after commit 56996dabe ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on
reload"). A definitive solution will have to be found quickly. For now,
let's exclude libressl from the version test.
This patch must be backported to 1.9 since the fix above was already
backported there.
This patch implements a new global parameter for the master-worker mode.
When setting the mworker-max-reloads value, a worker receives a SIGTERM
if its number of reloads is greater than this value.
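Example: with the configuration below, a worker that has survived more than
50 reloads receives a SIGTERM:

    global
        master-worker
        mworker-max-reloads 50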
Since previous commit it's not needed anymore to test a task pointer
before calling task_destroy() so let's just remove these tests from
the various callers before they become confusing. The function's
arguments were also documented. The same should probably be done
with tasklet_free() which involves a test in roughly half of the
call places.
In commit 1b8e68e ("MEDIUM: stick-table: Stop handling stick-tables as
proxies."), the ->table member of proxy struct was replaced by a pointer
that is not always checked and in some situations can cause a segfault,
e.g. during reload or while using "show table" on the CLI socket.
No backport is needed.
From OpenSSL 1.1.1, the default behaviour is to maintain open FDs to any
random devices that get used by the random number library. As a result,
those FDs leak when the master re-execs on reload; since those FDs are
not marked FD_CLOEXEC or O_CLOEXEC, they also get inherited by children.
Eventually both master and children run out of FDs.
OpenSSL 1.1.1 introduces a new function to control whether the random
devices are kept open. When clearing the keep-open flag, it also closes
any currently open FDs, so it can be used to clean-up open FDs too.
Therefore, a call to this function is made in mworker_reload prior to
re-exec.
The call is guarded by whether SSL is in use, because it will cause
initialisation of the OpenSSL random number library if that has not
already been done.
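A sketch of the guarded call as described above (the prototype comes from
<openssl/rand.h>; the exact guard used in the patch may differ slightly):

    #if defined(USE_OPENSSL) && (OPENSSL_VERSION_NUMBER >= 0x10101000L)
        /* clear the keep-open flag: OpenSSL then closes any FD it kept open
         * on the random devices, so they are not inherited across execve() */
        RAND_keep_random_devices_open(0);
    #endif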
This should be backported to 1.9 and 1.8.
Now we atomically allocate the my_regex struct within function
regex_comp() and compile the regex or free both in case of failure. The
pointer to the allocated my_regex struct is returned directly. The
my_regex* argument to regex_comp() is removed.
Function regex_free() was modified so that it systematically frees the
my_regex entry. The function does nothing when called with NULL as an
argument (like free()). This avoids the existing risk of not properly
freeing the initialized area.
Other structures are also updated in order to be compatible (the ones
related to Lua and action rules).
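A usage sketch of the resulting API (argument names and order are
assumptions, error handling abridged):

    char *err = NULL;
    struct my_regex *preg;

    /* allocates and compiles the regex in one go, or frees everything and
     * returns NULL on failure */
    preg = regex_comp("^/img/.*", 1 /* case sensitive */, 1 /* captures */, &err);
    if (!preg) {
        /* report and free <err> */
    }

    regex_free(preg);  /* safe to call with NULL, like free() */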
This patch adds the support for the "table" line parsing in "peers" sections
to declare stick-tables in such sections. This also prevents the user from having
to declare dummy backend sections with a unique stick-table inside.
Even if still supported, this usage will become deprecated.
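For example, a stick-table may now be declared directly in a "peers" section
and referenced using the "peers_section/table" notation (illustrative
configuration):

    peers mypeers
        peer haproxy1 192.168.0.1:10000
        peer haproxy2 192.168.0.2:10000
        table src_track type ip size 100k expire 5m store conn_cur

    frontend fe_main
        bind :80
        http-request track-sc0 src table mypeers/src_track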
To do so, the ->table member of proxy struct which is a stktable struct is replaced
by a pointer to a stktable struct allocated at parsing time in src/cfgparse-listen.c
for the dummy stick-table backends and in src/cfgparse.c for "peers" sections.
This has an impact on the code for stick-table sample converters and on the stickiness
rules parsers which first store the name of the dummy before resolving the rules.
This patch replaces proxy_tbl_by_name() calls by stktable_find_by_name() calls
to look up stick-tables stored in the "stktable_by_name" ebtree at parsing time.
There is only one remaining place where proxy_tbl_by_name() is used: src/hlua.c.
At several places in the code we relied on the fact that the ->size member of a
stick-table was equal to zero to consider the stick-table as present but not
configured. This no longer makes sense now that the ->table member of struct
proxy is a pointer. These tests are replaced by a test on the ->table value itself.
In "peers" section we do not have to temporary store the name of the section the
stick-table are attached to because this name is obviously already known just after
having entered this "peers" section.
About the CLI stick-table I/O handler, the pointer to proxy struct is replaced by
a pointer to a stktable struct.
Currently the thread array is a local variable inside a function block
and there is no access to it from outside, which often complicates
debugging. Let's make it global and export it. Also the allocation
return is now checked.
It's still obscure how we managed to initialize an array of integers
with values always equal to the index, just to retrieve the value
from an opaque pointer to the index instead of directly using it! I
suspect it's a leftover from the very early threading experiments.
This commit gets rid of this and simply passes the thread ID as the
argument to run_thread_poll_loop(), thus significantly simplifying the
few call places and removing the need to allocate then free an array
of identities.
When we initially experimented with threads and processes support, we
needed to implement arrays of threads per process for cpu-map, but this
is not needed anymore since we support either threads or processes.
Let's simply make the thread-based cpu-map per thread only, not per
thread and per process, since that combination is not used anymore. Doing so reduces
the global struct from 33kB to 1.5kB.
When using the "use_backend" configuration directive, the configuration
file name stored as rule->file was not freed in some situations. This
was introduced in commit 4ed1c95 ("MINOR: http/conf: store the
use_backend configuration file and line for logs").
This patch should be backported to 1.9, 1.8 and 1.7.
As by default we add all keepalive connections to the idle pool, if we run
into a pathological case where no client does keepalive but the server
does, and haproxy is configured to only reuse "safe" connections, we will
soon find ourselves with lots of idle connections that are unusable for new
sessions, while no file descriptors remain available to create new connections.
To fix this, add 2 new global settings, "pool-low-fd-ratio" and "pool-high-fd-ratio".
pool-low-fd-ratio is the % of fds we're allowed to use (against the maximum
number of fds available to haproxy) before we stop adding connections to the
idle pool, and destroy them instead. The default is 20. pool-high-fd-ratio is
the % of fds we're allowed to use (against the maximum number of fds available
to haproxy) before we start killing idling connection in the event we have to
create a new outgoing connection, and no reuse is possible. The default is 25.
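Example, using the keyword spelling described above and the default
percentages:

    global
        maxconn 100000
        # stop adding connections to the idle pool past 20% of FD usage
        pool-low-fd-ratio  20
        # start killing idle connections past 25% of FD usage when a new
        # outgoing connection is needed and no reuse is possible
        pool-high-fd-ratio 25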
task_delete() was never used without calling task_free() just after, and
task_free() was only used on error paths to destroy a just-created task,
so merge them into task_destroy(), that will remove the task from the
wait queue, and make sure the task is either destroyed immediately if it's
not in the run queue, or destroyed when it's supposed to run.
It's always a pain to get a core dump when enabling user/group setting
(which disables the dumpable flag on Linux), when using a chroot and/or
when haproxy is started by a service management tool which requires
complex operations to just raise the core dump limit.
This patch introduces a new "set-dumpable" global directive to work
around these troubles by doing the following :
- remove file size limits (equivalent of ulimit -f unlimited)
- remove core size limits (equivalent of ulimit -c unlimited)
- mark the process dumpable again (equivalent of suid_dumpable=1)
Some of these will depend on the operating system. This way it becomes
much easier to retrieve a core file. Temporarily moving the chroot to
a user-writable place is generally enough.
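Example of a global section combining the usual hardening directives with
the new one:

    global
        set-dumpable
        user  haproxy
        group haproxy
        chroot /var/lib/haproxy   # pick a user-writable place to get the core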
Since the introduction of the options field, we can use it to store the
type of process.
type = 'm' is replaced by PROC_O_TYPE_MASTER
type = 'w' is replaced by PROC_O_TYPE_WORKER
type = 'e' is replaced by PROC_O_TYPE_PROG
The old values are still used in the HAPROXY_PROCESSES environment
variable to pass the information during a reload.
Pavlos Parissis reported an interesting case where some map identifiers
were not assigned (appearing as -1 in show map). It turns out that it
only happens for log-format expressions parsed in check_config_validity()
that involve maps (log-format, use_backend, unique-id-header), as in the
sample configuration below :
frontend foo
bind :8001
unique-id-format %[src,map(addr.lst)]
log-format %[src,map(addr.lst)]
use_backend %[src,map(addr.lst)]
The reason stems from the initial introduction of unique IDs in 1.5 via
commit af5a29d5f ("MINOR: pattern: Each pattern is identified by unique
id.") : the unique_id assignment was done before calling
check_config_validity() so all maps loaded after this call are not
properly configured. From what the function does, it seems they will not
be able to use a cache, will not have a unique_id assigned and will not
be updatable from the CLI.
This fix must be backported to all supported versions.
This patch implements the external binary support in the master worker.
To configure an external process, you need to use the program section,
for example:
program dataplane-api
command ./dataplane_api
Those processes are launched at the same time as the workers.
During a reload of HAProxy, those processes are dealing with the same
sequence as a worker:
- the master is re-executed
- the master sends a USR1 signal to the program
- the master launches a new instance of the program
During a stop, or restart, a SIGTERM is sent to the program.
The children variable is still used in haproxy, but it is not required
anymore since we have the information about the current workers in the
mworker_proc linked list.
The oldpids array is also replaced by this linked list when we
generate the arguments for the master re-exec.
The current initcall implementation relies on dedicated sections (one
section per init stage) to store the initcall descriptors. Then upon
startup, these sections are scanned from beginning to end and all items
found there are called in sequence.
On platforms like AIX or Cygwin it seems difficult to figure out the
beginning and end of sections as the linker doesn't seem to provide
the corresponding symbols. In order to replace this, this patch
simply implements an array of singly linked lists (one per init stage)
which are fed using constructors for each register call. These
constructors are declared static, with a name depending on their
line number in the file, in order to avoid name clashes. The final
effect is the same, except that the method is slightly more expensive
in that it explicitly produces code to register these initcalls :
$ size haproxy.sections haproxy.constructor
text data bss dec hex filename
4060312 249176 1457652 5767140 57ffe4 haproxy.sections
4062862 260408 1457652 5780922 5835ba haproxy.constructor
This mechanism is enabled as an alternative to the default one when
build option USE_OBSOLETE_LINKER is set. This option is currently
enabled by default only on AIX and Cygwin, and may be attempted for
any target which fails to build complaining about missing symbols
__start_init_* and/or __stop_init_*.
Once confirmed as a reliable fix, this will likely have to be backported
to 1.9 where AIX and Cygwin do not build anymore.
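A simplified sketch of the constructor-based registration (stage names and
macros are reduced to the bare idea; the real initcall API is richer):

    enum init_stage { STG_PREPARE, STG_ALLOC, STG_REGISTER, STG_INIT, STG_SIZE };

    struct initcall {
        void (*fn)(void *arg);
        void *arg;
        struct initcall *next;
    };

    static struct initcall *initcall_head[STG_SIZE]; /* one list per stage */

    /* two-level concatenation so __LINE__ is expanded before pasting,
     * giving each generated constructor a unique static name */
    #define INITCALL_CONCAT2(a, b) a##b
    #define INITCALL_CONCAT(a, b)  INITCALL_CONCAT2(a, b)

    #define INITCALL(stage, function, argument)                        \
        __attribute__((constructor))                                   \
        static void INITCALL_CONCAT(__initcb_, __LINE__)(void)         \
        {                                                              \
            static struct initcall ic;                                 \
            ic.fn   = (function);                                      \
            ic.arg  = (argument);                                      \
            ic.next = initcall_head[(stage)];                          \
            initcall_head[(stage)] = &ic;                              \
        }

    /* at startup, each stage's list is walked and the callbacks invoked */
    static void run_initcalls(enum init_stage stage)
    {
        struct initcall *ic;

        for (ic = initcall_head[stage]; ic; ic = ic->next)
            ic->fn(ic->arg);
    }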
A bug occurs when the sigchld handler is called after a child which is
not in the process list just left, or when the process list is empty.
The child variable is then left uninitialized or
set to the wrong child entry, which can lead to a free of this
uninitialized variable or of the wrong child.
This can lead to a crash of the master during a stop or a reload.
It is not supposed to happen with a worker which was created by the
master. A cause could be a fork made by a dependency. (openssl, lua ?)
This patch strengthens the case of the missing child by doing the free
only if the child was found.
This patch must be backported to 1.9.
It's not convenient not to know the status of default options, and
requires the user to know what option is enabled by default in each
target. With this patch, a new "Features list" line is added to the
output of "haproxy -vv" to report the whole list of known features
with their respective status. They're prefixed with a "+" when enabled
or a "-" when disabled. The "USE_" prefix is removed for clarity.
It's never easy to guess what services are built in. We currently have
the prometheus exporter in contrib/ which is the only extension for now.
Let's enumerate all available ones just like we do for filters and pollers.
Each thread uses one epoll_fd or kqueue_fd, and a pipe (thus two FDs).
These ones have to be accounted for in the maxsock calculation, otherwise
we can reach maxsock before maxconn. This is difficult to observe but it
in fact happens when a server connects back to the frontend and has checks
enabled : the check uses its FD and serves to fill the loop. In this case
all FDs planned for the datapath are used for this.
This needs to be backported to 1.9 and 1.8.
Some packages used to rely on DEFAULT_MAXCONN to set the default global
maxconn value to use regardless of the initial ulimit. The recent changes
set the lowest bound to 100 so that it is compatible with almost any
environment. Now that DEFAULT_MAXCONN is not needed for anything else, we
can use it as the lowest bound applied when maxconn is not configured. This
way it retains its original purpose of setting the default maxconn value
even though most of the time the effective value will be higher thanks to
the automatic computation based on "ulimit -n".
This entry was still set to 2000 but never used anymore. The only places
where it appeared was as an alias to SYSTEM_MAXCONN which forces it, so
let's turn these ones to SYSTEM_MAXCONN and remove the default value for
DEFAULT_MAXCONN. SYSTEM_MAXCONN still defines the upper bound however.
The global maxconn value is often a pain to configure :
- in development the user never has the permissions to increase the
rlim_cur value too high and gets warnings all the time ;
- in some production environments, users may have limited actions on
it or may only be able to act on rlim_fd_cur using ulimit -n. This
is sometimes particularly true in containers or whatever environment
where the user has no privilege to upgrade the limits.
- keeping config homogenous between machines is even less easy.
We already had the ability to automatically compute maxconn from the
memory limits when they were set. This patch goes a bit further by also
computing the limit permitted by the configured limit on the number of
FDs. For this it simply reverses the rlim_fd_cur calculation to determine
maxconn based on the number of reserved sockets for listeners & checks,
the number of SSL engines and the number of pipes (absolute or relative).
This way it becomes possible to make maxconn always be the highest possible
value resulting in maxsock matching what was set using "ulimit -n", without
ever setting it. Note that we adjust to the soft limit, not the hard one,
since it's what is configured with ulimit -n. This allows users to also
limit to low values if needed.
Just like before, the calculated value is reported in verbose mode.
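In rough terms the reversed computation looks like the sketch below
(simplified; the real code accounts for pipes, SSL engine FDs and the two
sides of a connection more precisely):

    /* deduce the highest maxconn fitting in the FD soft limit: each
     * connection may use up to two FDs (frontend + backend side), and a
     * number of FDs are reserved for listeners, checks, pipes, etc. */
    static int compute_ideal_maxconn(int fd_soft_limit, int reserved_fds)
    {
        return (fd_soft_limit - reserved_fds) / 2;
    }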
We'll need to know the global maxsock before the maxconn calculation.
Actually only two components were calculated too late, the peers FD
and the stats FD. Let's move them a few lines upward.
The default number of pipes is adjusted based on the sum of frontends
and backends maxconn/fullconn settings. Now that it is possible to have
a null maxconn on a frontend to indicate "unlimited" with commit
c8d5b95e6 ("MEDIUM: config: don't enforce a low frontend maxconn value
anymore"), the sum of maxconn may remain low and limited to the only
frontends/backends where this limit is set.
This patch considers this new unlimited case when doing the check, and
automatically switches to the default value which is maxconn/4 in this
case. All the calculation was moved to a distinct function for ease of
use. This function also supports returning unlimited (-1) when the
value depends on global.maxconn and the latter is not yet set.
When the master re-execs itself on reload, it doesn't restore the initial
rlim_fd_cur/rlim_fd_max values, which have been modified by the ulimit-n
or global maxconn directives. This is a problem, because if these values
were set really low it could prevent the process from restarting, and if
they were set very high, this could have some implications on the restart
time, or later on the computed maxconn.
Let's simply reset these values to the ones we had at boot to maintain
the system in a consistent state.
A backport could be performed to 1.9 and maybe 1.8. This patch depends on
the two previous ones.
If a ulimit-n value is set, we must not lower the rlim_max value if the
new value is lower, we must only adjust the rlim_cur one. The effect is
that on very low values, this could prevent a master-worker reload, or
make an external check fail by lack of FDs.
This may be backported to 1.9 and earlier, but it depends on this patch
"MINOR: global: keep a copy of the initial rlim_fd_cur and rlim_fd_max
values".
Let's keep a copy of these initial values. They will be useful to
compute automatic maxconn, as well as to restore proper limits when
doing an execve() on external checks.
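The mechanism boils down to standard getrlimit()/setrlimit() calls, roughly
as sketched below (variable names are assumptions):

    #include <sys/resource.h>

    static struct rlimit limit_fd_at_boot;

    /* very early in main(), before ulimit-n / maxconn are applied */
    getrlimit(RLIMIT_NOFILE, &limit_fd_at_boot);

    /* in mworker_reload(), just before re-executing the master, and before
     * the execve() of an external check */
    setrlimit(RLIMIT_NOFILE, &limit_fd_at_boot);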
Historically the default frontend's maxconn used to be quite low (2000),
which was sufficient two decades ago but often proved to be a problem
when users had purposely set the global maxconn value but forgot to set
the frontend's.
There is no point in keeping this arbitrary limit for frontends : when
the global maxconn is lower, it's already too high and when the global
maxconn is much higher, it becomes a limiting factor which causes trouble
in production.
This commit allows the value to be set to zero, which becomes the new
default value, to mean it's not directly limited, or in fact it's set
to the global maxconn. Since this operation used to be performed before
computing a possibly automatic global maxconn based on memory limits,
the calculation of the maxconn value and its propagation to the backends'
fullconn has now moved to a dedicated function, proxy_adjust_all_maxconn(),
which is called once the global maxconn is stabilized.
This comes with two benefits :
1) a configuration missing "maxconn" in the defaults section will not
limit itself to a magically hardcoded value but will scale up to the
global maxconn ;
2) when the global maxconn is not set and memory limits are used instead,
the frontends' maxconn automatically adapts, and the backends' fullconn
as well.
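Illustration of the new behaviour:

    global
        maxconn 100000        # or left unset and computed from memory/FD limits

    frontend fe_main
        bind :80
        # no "maxconn" here anymore: it defaults to 0, meaning the frontend
        # scales up to the global maxconn instead of the old hardcoded 2000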
Threads have long matured by now, yet for most users their usage is
still not trivial. It's about time to enable them by default on platforms
where we know the number of CPUs bound. This patch does this: it counts
the number of CPUs the process is bound to upon startup, and enables as
many threads by default. Of course, "nbthread" still overrides this, but
if it's not set the default behaviour is to start one thread per CPU.
The default number of threads is reported in "haproxy -vv". Simply using
"taskset -c" is now enough to adjust this number of threads so that there
is no more need for playing with cpu-map. And thanks to the previous
patches on the listener, the vast majority of configurations will not
need to duplicate "bind" lines with the "process x/y" statement anymore
either, so a simple config will automatically adapt to the number of
processors available.
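For example, binding the process to four CPUs at startup is now enough to
get four threads, without touching the configuration:

    $ taskset -c 0-3 haproxy -f /etc/haproxy/haproxy.cfg
    # four threads are started by default; no "nbthread" nor "cpu-map" needed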
tune.listener.multi-queue { on | off }
Enables ('on') or disables ('off') the listener's multi-queue accept which
spreads the incoming traffic to all threads a "bind" line is allowed to run
on instead of taking them for itself. This provides a smoother traffic
distribution and scales much better, especially in environments where threads
may be unevenly loaded due to external activity (network interrupts colliding
with one thread for example). This option is enabled by default, but it may
be forcefully disabled for troubleshooting or for situations where it is
estimated that the operating system already provides a good enough
distribution and connections are extremely short-lived.
Instead of having one task per thread and per server that cleans the
idle connections, have only one global task for all servers.
That task parses all the servers that currently have idle connections,
and removes half of them, putting them in a per-thread list of connections
to kill. For each thread that has connections to kill, a task is woken up
so that the cleaning is done in the context of said thread.
Released version 2.0-dev1 with the following main changes :
- MINOR: mux-h2: only increase the connection window with the first update
- REGTESTS: remove the expected window updates from H2 handshakes
- BUG/MINOR: mux-h2: make empty HEADERS frame return a connection error
- BUG/MEDIUM: mux-h2: mark that we have too many CS once we have more than the max
- MEDIUM: mux-h2: remove padlen during headers phase
- MINOR: h2: add a bit-based frame type representation
- MINOR: mux-h2: remove useless check for empty frame length in h2s_decode_headers()
- MEDIUM: mux-h2: decode HEADERS frames before allocating the stream
- MINOR: mux-h2: make h2c_send_rst_stream() use the dummy stream's error code
- MINOR: mux-h2: add a new dummy stream for the REFUSED_STREAM error code
- MINOR: mux-h2: fail stream creation more cleanly using RST_STREAM
- MINOR: buffers: add a new b_move() function
- MINOR: mux-h2: make h2_peek_frame_hdr() support an offset
- MEDIUM: mux-h2: handle decoding of CONTINUATION frames
- CLEANUP: mux-h2: remove misleading comments about CONTINUATION
- BUG/MEDIUM: servers: Don't try to reuse connection if we switched server.
- BUG/MEDIUM: tasks: Decrement tasks_run_queue in tasklet_free().
- BUG/MINOR: htx: send the proper authenticate header when using http-request auth
- BUG/MEDIUM: mux_h2: Don't add to the idle list if we're full.
- BUG/MEDIUM: servers: Fail if we fail to allocate a conn_stream.
- BUG/MAJOR: servers: Use the list api correctly to avoid crashes.
- BUG/MAJOR: servers: Correctly use LIST_ELEM().
- BUG/MAJOR: sessions: Use an unlimited number of servers for the conn list.
- BUG/MEDIUM: servers: Flag the stream_interface on handshake error.
- MEDIUM: servers: Be smarter when switching connections.
- MEDIUM: sessions: Keep track of which connections are idle.
- MINOR: payload: add sample fetch for TLS ALPN
- BUG/MEDIUM: log: don't mark log FDs as non-blocking on terminals
- MINOR: channel: Add the function channel_add_input
- MINOR: stats/htx: Call channel_add_input instead of updating channel state by hand
- BUG/MEDIUM: cache: Be sure to end the forwarding when XFER length is unknown
- BUG/MAJOR: htx: Return the good block address after a defrag
- MINOR: lb: allow redispatch when using consistent hash
- CLEANUP: mux-h2: fix end-of-stream flag name when processing headers
- BUG/MEDIUM: mux-h2: always restart reading if data are available
- BUG/MINOR: mux-h2: set the stream-full flag when leaving h2c_decode_headers()
- BUG/MINOR: mux-h2: don't check the CS count in h2c_bck_handle_headers()
- BUG/MINOR: mux-h2: mark end-of-stream after processing response HEADERS, not before
- BUG/MINOR: mux-h2: only update rxbuf's length for H1 headers
- BUG/MEDIUM: mux-h1: use per-direction flags to indicate transitions
- BUG/MEDIUM: mux-h1: make HTX chunking consistent with H2
- BUG/MAJOR: stream-int: Update the stream expiration date in stream_int_notify()
- BUG/MEDIUM: proto-htx: Set SI_FL_NOHALF on server side when request is done
- BUG/MEDIUM: mux-h1: Add a task to handle connection timeouts
- MINOR: mux-h2: make h2c_decode_headers() return a status, not a count
- MINOR: mux-h2: add a new dummy stream : h2_error_stream
- MEDIUM: mux-h2: make h2c_decode_headers() support recoverable errors
- BUG/MINOR: mux-h2: detect when the HTX EOM block cannot be added after headers
- MINOR: mux-h2: remove a misleading and impossible test
- CLEANUP: mux-h2: clean the stream error path on HEADERS frame processing
- MINOR: mux-h2: check for too many streams only for idle streams
- MINOR: mux-h2: set H2_SF_HEADERS_RCVD when a HEADERS frame was decoded
- BUG/MEDIUM: mux-h2: decode trailers in HEADERS frames
- MINOR: h2: add h2_make_h1_trailers to turn H2 headers to H1 trailers
- MEDIUM: mux-h2: pass trailers to H1 (legacy mode)
- MINOR: htx: add a new function to add a block without filling it
- MINOR: h2: add h2_make_htx_trailers to turn H2 headers to HTX trailers
- MEDIUM: mux-h2: pass trailers to HTX
- MINOR: mux-h1: parse the content-length header on output and set H1_MF_CLEN
- BUG/MEDIUM: mux-h1: don't enforce chunked encoding on requests
- MINOR: mux-h2: make HTX_BLK_EOM processing idempotent
- MINOR: h1: make the H1 headers block parser able to parse headers only
- MEDIUM: mux-h2: emit HEADERS frames when facing HTX trailers blocks
- MINOR: stream/htx: Add info about the HTX structs in "show sess all" command
- MINOR: stream: Add the subscription events of SIs in "show sess all" command
- MINOR: mux-h1: Add the subscription events in "show fd" command
- BUG/MEDIUM: h1: Get the h1m state when restarting the headers parsing
- BUG/MINOR: cache/htx: Be sure to count partial trailers
- BUG/MEDIUM: h1: In h1_init(), wake the tasklet instead of calling h1_recv().
- BUG/MEDIUM: server: Defer the mux init until after xprt has been initialized.
- MINOR: connections: Remove a stall comment.
- BUG/MEDIUM: cli: make "show sess" really thread-safe
- BUILD: add a new file "version.c" to carry version updates
- MINOR: stream/htx: add the HTX flags output in "show sess all"
- MINOR: stream/cli: fix the location of the waiting flag in "show sess all"
- MINOR: stream/cli: report more info about the HTTP messages on "show sess all"
- BUG/MINOR: lua: bad args are returned for Lua actions
- BUG/MEDIUM: lua: dead lock when Lua tasks are trigerred
- MINOR: htx: Add an helper function to get the max space usable for a block
- MINOR: channel/htx: Add HTX version for some helper functions
- BUG/MEDIUM: cache/htx: Respect the reserve when cached objects are served
- BUG/MINOR: stats/htx: Respect the reserve when the stats page is dumped
- DOC: regtest: make it clearer what the purpose of the "broken" series is
- REGTEST: mailers: add new test for 'mailers' section
- REGTEST: Add a reg test for health-checks over SSL/TLS.
- BUG/MINOR: mux-h1: Close connection on shutr only when shutw was really done
- MEDIUM: mux-h1: Clarify how shutr/shutw are handled
- BUG/MINOR: compression: Disable it if another one is already in progress
- BUG/MINOR: filters: Detect cache+compression config on legacy HTTP streams
- BUG/MINOR: cache: Disable the cache if any compression filter precedes it
- REGTEST: Add some informatoin to test results.
- MINOR: htx: Add a function to truncate all blocks after a specific offset
- MINOR: channel/htx: Add the HTX version of channel_truncate/erase
- BUG/MINOR: proto_htx: Use HTX versions to truncate or erase a buffer
- BUG/CRITICAL: mux-h2: re-check the frame length when PRIORITY is used
- DOC: Fix typo in req.ssl_alpn example (commit 4afdd138424ab...)
- DOC: http-request cache-use / http-response cache-store expects cache name
- REGTEST: "capture (request|response)" regtest.
- BUG/MINOR: lua/htx: Respect the reserve when data are send from an HTX applet
- REGTEST: filters: add compression test
- BUG/MEDIUM: init: Initialize idle_orphan_conns for first server in server-template
- BUG/MEDIUM: ssl: Disable anti-replay protection and set max data with 0RTT.
- DOC: Be a bit more explicit about allow-0rtt security implications.
- MINOR: mux-h1: make the mux_h1_ops struct static
- BUILD: makefile: add an EXTRA_OBJS variable to help build optional code
- BUG/MEDIUM: connection: properly unregister the mux on failed initialization
- BUG/MAJOR: cache: fix confusion between zero and uninitialized cache key
- REGTESTS: test case for map_regm commit 271022150d
- REGTESTS: Basic tests for concat,strcmp,word,field,ipmask converters
- REGTESTS: Basic tests for using maps to redirect requests / select backend
- DOC: REGTESTS README varnishtest -Dno-htx= define.
- MINOR: spoe: Make the SPOE filter compatible with HTX proxies
- MINOR: checks: Store the proxy in checks.
- BUG/MEDIUM: checks: Avoid having an associated server for email checks.
- REGTEST: Switch to vtest.
- REGTEST: Adapt reg test doc files to vtest.
- BUG/MEDIUM: h1: Make sure we destroy an inactive connectin that did shutw.
- BUG/MINOR: base64: dec func ignores padding for output size checking
- BUG/MEDIUM: ssl: missing allocation failure checks loading tls key file
- MINOR: ssl: add support of aes256 bits ticket keys on file and cli.
- BUG/MINOR: backend: don't use url_param_name as a hint for BE_LB_ALGO_PH
- BUG/MINOR: backend: balance uri specific options were lost across defaults
- BUG/MINOR: backend: BE_LB_LKUP_CHTREE is a value, not a bit
- MINOR: backend: move url_param_name/len to lbprm.arg_str/len
- MINOR: backend: make headers and RDP cookie also use arg_str/len
- MINOR: backend: add new fields in lbprm to store more LB options
- MINOR: backend: make the header hash use arg_opt1 for use_domain_only
- MINOR: backend: remap the balance uri settings to lbprm.arg_opt{1,2,3}
- MINOR: backend: move hash_balance_factor out of chash
- MEDIUM: backend: move all LB algo parameters into an union
- MINOR: backend: make the random algorithm support a number of draws
- BUILD/MEDIUM: da: Necessary code changes for new buffer API.
- BUG/MINOR: stick_table: Prevent conn_cur from underflowing
- BUG: 51d: Changes to the buffer API in 1.9 were not applied to the 51Degrees code.
- BUG/MEDIUM: stats: Get the right scope pointer depending on HTX is used or not
- DOC: add a missing space in the documentation for bc_http_major
- REGTEST: checks basic stats webpage functionality
- BUG/MEDIUM: servers: Make assign_tproxy_address work when ALPN is set.
- BUG/MEDIUM: connections: Add the CO_FL_CONNECTED flag if a send succeeded.
- DOC: add github issue templates
- MINOR: cfgparse: Extract some code to be re-used.
- CLEANUP: cfgparse: Return asap from cfg_parse_peers().
- CLEANUP: cfgparse: Code reindentation.
- MINOR: cfgparse: Useless frontend initialization in "peers" sections.
- MINOR: cfgparse: Rework peers frontend init.
- MINOR: cfgparse: Simplication.
- MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.
- MINOR: peers: Make outgoing connection to SSL/TLS peers work.
- MINOR: cfgparse: SSL/TLS binding in "peers" sections.
- DOC: peers: SSL/TLS documentation for "peers"
- BUG/MINOR: startup: certain goto paths in init_pollers fail to free
- BUG/MEDIUM: checks: fix recent regression on agent-check making it crash
- BUG/MINOR: server: don't always trust srv_check_health when loading a server state
- BUG/MINOR: check: Wake the check task if the check is finished in wake_srv_chk()
- BUG/MEDIUM: ssl: Fix handling of TLS 1.3 KeyUpdate messages
- DOC: mention the effect of nf_conntrack_tcp_loose on src/dst
- BUG/MINOR: proto-htx: Return an error if all headers cannot be received at once
- BUG/MEDIUM: mux-h2/htx: Respect the channel's reserve
- BUG/MINOR: mux-h1: Apply the reserve on the channel's buffer only
- BUG/MINOR: mux-h1: avoid copying output over itself in zero-copy
- BUG/MAJOR: mux-h2: don't destroy the stream on failed allocation in h2_snd_buf()
- BUG/MEDIUM: backend: also remove from idle list muxes that have no more room
- BUG/MEDIUM: mux-h2: properly abort on trailers decoding errors
- MINOR: h2: declare new sets of frame types
- BUG/MINOR: mux-h2: CONTINUATION in closed state must always return GOAWAY
- BUG/MINOR: mux-h2: headers-type frames in HREM are always a connection error
- BUG/MINOR: mux-h2: make it possible to set the error code on an already closed stream
- BUG/MINOR: hpack: return a compression error on invalid table size updates
- MINOR: server: make sure pool-max-conn is >= -1
- BUG/MINOR: stream: take care of synchronous errors when trying to send
- CLEANUP: server: fix indentation mess on idle connections
- BUG/MINOR: mux-h2: always check the stream ID limit in h2_avail_streams()
- BUG/MINOR: mux-h2: refuse to allocate a stream with too high an ID
- BUG/MEDIUM: backend: never try to attach to a mux having no more stream available
- MINOR: server: add a max-reuse parameter
- MINOR: mux-h2: always consider a server's max-reuse parameter
- MEDIUM: stream-int: always mark pending outgoing SI_ST_CON
- MINOR: stream: don't wait before retrying after a failed connection reuse
- MEDIUM: h2: always parse and deduplicate the content-length header
- BUG/MINOR: mux-h2: always compare content-length to the sum of DATA frames
- CLEANUP: h2: Remove debug printf in mux_h2.c
- MINOR: cfgparse: make the process/thread parser support a maximum value
- MINOR: threads: make MAX_THREADS configurable at build time
- DOC: nbthread is no longer experimental.
- BUG/MINOR: listener: always fill the source address for accepted socketpairs
- BUG/MINOR: mux-h2: do not report available outgoing streams after GOAWAY
- BUG/MINOR: spoe: corrected fragmentation string size
- BUG/MINOR: task: fix possibly missed event in inter-thread wakeups
- BUG/MEDIUM: servers: Attempt to reuse an unfinished connection on retry.
- BUG/MEDIUM: backend: always call si_detach_endpoint() on async connection failure
- SCRIPTS: add the issue tracker URL to the announce script
- MINOR: peers: Extract some code to be reused.
- CLEANUP: peers: Indentation fixes.
- MINOR: peers: send code factorization.
- MINOR: peers: Add new functions to send code and reduce the I/O handler.
- MEDIUM: peers: synchronizaiton code factorization to reduce the size of the I/O handler.
- MINOR: peers: Move update receive code to reduce the size of the I/O handler.
- MINOR: peers: Move ack, switch and definition receive code to reduce the size of the I/O handler.
- MINOR: peers: Move high level receive code to reduce the size of I/O handler.
- CLEANUP: peers: Be more generic.
- MINOR: peers: move error handling to reduce the size of the I/O handler.
- MINOR: peers: move messages treatment code to reduce the size of the I/O handler.
- MINOR: peers: move send code to reduce the size of the I/O handler.
- CLEANUP: peers: Remove useless statements.
- MINOR: peers: move "hello" message treatment code to reduce the size of the I/O handler.
- MINOR: peers: move peer initializations code to reduce the size of the I/O handler.
- CLEANUP: peers: factor the error handling code in peer_treet_updatemsg()
- CLEANUP: peers: factor error handling in peer_treat_definedmsg()
- BUILD/MINOR: peers: shut up a build warning introduced during last cleanup
- BUG/MEDIUM: mux-h2: only close connection on request frames on closed streams
- CLEANUP: mux-h2: remove two useless but misleading assignments
- BUG/MEDIUM: checks: Check that conn_install_mux succeeded.
- BUG/MEDIUM: servers: Only destroy a conn_stream we just allocated.
- BUG/MEDIUM: servers: Don't add an incomplete conn to the server idle list.
- BUG/MEDIUM: checks: Don't try to set ALPN if connection failed.
- BUG/MEDIUM: h2: In h2_send(), stop the loop if we failed to alloc a buf.
- BUG/MEDIUM: peers: Handle mux creation failure.
- BUG/MEDIUM: servers: Close the connection if we failed to install the mux.
- BUG/MEDIUM: compression: Rewrite strong ETags
- BUG/MINOR: deinit: tcp_rep.inspect_rules not deinit, add to deinit
- CLEANUP: mux-h2: remove misleading leftover test on h2s' nullity
- BUG/MEDIUM: mux-h2: wake up flow-controlled streams on initial window update
- BUG/MEDIUM: mux-h2: fix two half-closed to closed transitions
- BUG/MEDIUM: mux-h2: make sure never to send GOAWAY on too old streams
- BUG/MEDIUM: mux-h2: do not abort HEADERS frame before decoding them
- BUG/MINOR: mux-h2: make sure response HEADERS are not received in other states than OPEN and HLOC
- MINOR: h2: add a generic frame checker
- MEDIUM: mux-h2: check the frame validity before considering the stream state
- CLEANUP: mux-h2: remove stream ID and frame length checks from the frame parsers
- BUG/MINOR: mux-h2: make sure request trailers on aborted streams don't break the connection
- DOC: compression: Update the reasons for disabled compression
- BUG/MEDIUM: buffer: Make sure b_is_null handles buffers waiting for allocation.
- DOC: htx: make it clear that htxbuf() and htx_from_buf() always return valid pointers
- MINOR: htx: never check for null htx pointer in htx_is_{,not_}empty()
- MINOR: mux-h2: consistently rely on the htx variable to detect the mode
- BUG/MEDIUM: peers: Peer addresses parsing broken.
- BUG/MEDIUM: mux-h1: Don't add "transfer-encoding" if message-body is forbidden
- BUG/MEDIUM: connections: Don't forget to remove CO_FL_SESS_IDLE.
- BUG/MINOR: stream: don't close the front connection when facing a backend error
- BUG/MEDIUM: mux-h2: wait for the mux buffer to be empty before closing the connection
- MINOR: stream-int: add a new flag to mention that we want the connection to be killed
- MINOR: connstream: have a new flag CS_FL_KILL_CONN to kill a connection
- BUG/MEDIUM: mux-h2: do not close the connection on aborted streams
- BUG/MINOR: server: fix logic flaw in idle connection list management
- MINOR: mux-h2: max-concurrent-streams should be unsigned
- MINOR: mux-h2: make sure to only check concurrency limit on the frontend
- MINOR: mux-h2: learn and store the peer's advertised MAX_CONCURRENT_STREAMS setting
- BUG/MEDIUM: mux-h2: properly consider the peer's advertised max-concurrent-streams
- MINOR: xref: Add missing barriers.
- MINOR: muxes: Don't bother to LIST_DEL(&conn->list) before calling conn_free().
- MINOR: debug: Add an option that causes random allocation failures.
- BUG/MEDIUM: backend: always release the previous connection into its own target srv_list
- BUG/MEDIUM: htx: check the HTX compatibility in dynamic use-backend rules
- BUG/MINOR: tune.fail-alloc: Don't forget to initialize ret.
- BUG/MINOR: backend: check srv_conn before dereferencing it
- BUG/MEDIUM: mux-h2: always omit :scheme and :path for the CONNECT method
- BUG/MEDIUM: mux-h2: always set :authority on request output
- BUG/MEDIUM: stream: Don't forget to free s->unique_id in stream_free().
- BUG/MINOR: threads: fix the process range of thread masks
- BUG/MINOR: config: fix bind line thread mask validation
- CLEANUP: threads: fix misleading comment about all_threads_mask
- CLEANUP: threads: use nbits to calculate the thread mask
- OPTIM: listener: optimize cache-line packing for struct listener
- MINOR: tools: improve the popcount() operation
- MINOR: config: keep an all_proc_mask like we have all_threads_mask
- MINOR: global: add proc_mask() and thread_mask()
- MINOR: config: simplify bind_proc processing using proc_mask()
- MINOR: threads: make use of thread_mask() to simplify some thread calculations
- BUG/MINOR: compression: properly report compression stats in HTX mode
- BUG/MINOR: task: close a tiny race in the inter-thread wakeup
- BUG/MAJOR: config: verify that targets of track-sc and stick rules are present
- BUG/MAJOR: spoe: verify that backends used by SPOE cover all their callers' processes
- BUG/MAJOR: htx/backend: Make all tests on HTTP messages compatible with HTX
- BUG/MINOR: config: make sure to count the error on incorrect track-sc/stick rules
- DOC: ssl: Clarify when pre TLSv1.3 cipher can be used
- DOC: ssl: Stop documenting ciphers example to use
- BUG/MINOR: spoe: do not assume agent->rt is valid on exit
- BUG/MINOR: lua: initialize the correct idle conn lists for the SSL sockets
- BUG/MEDIUM: spoe: initialization depending on nbthread must be done last
- BUG/MEDIUM: server: initialize the idle conns list after parsing the config
- BUG/MEDIUM: server: initialize the orphaned conns lists and tasks at the end
- MINOR: config: make MAX_PROCS configurable at build time
- BUG/MAJOR: spoe: Don't try to get agent config during SPOP healthcheck
- BUG/MINOR: config: Reinforce validity check when a process number is parsed
- BUG/MEDIUM: peers: check that p->srv actually exists before using p->srv->use_ssl
- CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy
- BUG/MINOR: mux-h1: verify the request's version before dropping connection: keep-alive
- BUG: 51d: In Hash Trie, multi header matching was affected by the header names stored globaly.
- MEDIUM: 51d: Enabled multi threaded operation in the 51Degrees module.
- BUG/MAJOR: stream: avoid double free on unique_id
- BUILD/MINOR: stream: avoid a build warning with threads disabled
- BUILD/MINOR: tools: fix build warning in the date conversion functions
- BUILD/MINOR: peers: remove an impossible null test in intencode()
- BUILD/MINOR: htx: fix some potential null-deref warnings with http_find_stline
- BUG/MEDIUM: peers: Missing peer initializations.
- BUG/MEDIUM: http_fetch: fix the "base" and "base32" fetch methods in HTX mode
- BUG/MEDIUM: proto_htx: Fix data size update if end of the cookie is removed
- BUG/MEDIUM: http_fetch: fix "req.body_len" and "req.body_size" fetch methods in HTX mode
- BUILD/MEDIUM: initcall: Fix build on MacOS.
- BUG/MEDIUM: mux-h2/htx: Always set CS flags before exiting h2_rcv_buf()
- MINOR: h2/htx: Set the flag HTX_SL_F_BODYLESS for messages without body
- BUG/MINOR: mux-h1: Add "transfer-encoding" header on outgoing requests if needed
- BUG/MINOR: mux-h2: Don't add ":status" pseudo-header on trailers
- BUG/MINOR: proto-htx: Consider a XFER_LEN message as chunked by default
- BUG/MEDIUM: h2/htx: Correctly handle interim responses when HTX is enabled
- MINOR: mux-h2: Set HTX extra value when possible
- BUG/MEDIUM: htx: count the amount of copied data towards the final count
- MINOR: mux-h2: make the H2 MAX_FRAME_SIZE setting configurable
- BUG/MEDIUM: mux-h2/htx: send an empty DATA frame on empty HTX trailers
- BUG/MEDIUM: servers: Use atomic operations when handling curr_idle_conns.
- BUG/MEDIUM: servers: Add a per-thread counter of idle connections.
- MINOR: fd: add a new my_closefrom() function to close all FDs
- MINOR: checks: use my_closefrom() to close all FDs
- MINOR: fd: implement an optimised my_closefrom() function
- BUG/MINOR: fd: make sure my_closefrom() doesn't miss some FDs
- BUG/MAJOR: fd/threads, task/threads: ensure all spin locks are unlocked
- BUG/MAJOR: listener: Make sure the listener exist before using it.
- MINOR: fd: Use closefrom() as my_closefrom() if supported.
- BUG/MEDIUM: mux-h1: Report the right amount of data xferred in h1_rcv_buf()
- BUG/MINOR: channel: Set CF_WROTE_DATA when outgoing data are skipped
- MINOR: htx: Add function to drain data from an HTX message
- MINOR: channel/htx: Add function to skips output bytes from an HTX channel
- BUG/MAJOR: cache/htx: Set the start-line offset when a cached object is served
- BUG/MEDIUM: cache: Get objects from the cache only for GET and HEAD requests
- BUG/MINOR: cache/htx: Return only the headers of cached objects to HEAD requests
- BUG/MINOR: mux-h1: Always initilize h1m variable in h1_process_input()
- BUG/MEDIUM: proto_htx: Fix functions applying regex filters on HTX messages
- BUG/MEDIUM: h2: advertise to servers that we don't support push
- MINOR: standard: Add a function to parse uints (dotted notation).
- MINOR: arg: Add support for ARGT_PBUF_FNUM arg type.
- MINOR: http_fetch: add "req.ungrpc" sample fetch for gRPC.
- MINOR: sample: Add two sample converters for protocol buffers.
- DOC: sample: Add gRPC related documentation.
Add a per-thread counter of idling connections, and use it to determine
how many connections we should kill after the timeout, instead of using
the global counter, or we're likely to just kill most of the connections.
This should be backported to 1.9.
For some embedded systems, it's pointless to have 32- or even 64-entry
arrays of processes when it's known that much fewer processes will be
used in the worst case. Let's introduce this MAX_PROCS define which
contains the highest number of processes allowed to run at once. It
still defaults to LONGBITS but may be lowered.
Since all of them are exclusive, let's move them to a union instead
of eating memory with the sum of all of them. We're using a transparent
union to limit the code changes.
Doing so reduces the struct lbprm from 392 bytes to 372, and thanks
to these changes, the struct proxy is now down to 6480 bytes vs 6624
before the changes (144 bytes saved per proxy).
While testing fixes, it's sometimes confusing to rebuild only one C file
(e.g. a mux) and not to have the correct commit ID reported in "haproxy -v"
nor on the stats page.
This patch adds a new "version.c" file which is always rebuilt. It's
very small and contains only 3 variables derived from the various
version strings. These variables are used instead of the macros at the
few places showing the version. This way the output version of the
running code is always correct for the parts that were rebuilt.
First, it's a pain to always have to think about updating this date,
second for a long time I've not been the only developer there, and third,
some users contact me hoping to get help that I can't deliver. It's about
time to redirect them to the main site where all the useful links should
be.
If the reload fails after the parsing of the configuration, the
mworker_proc structures are created for the processes it tried to
create.
The mworker_proc_list_to_env() function was exporting these uninitialized
structures in the "HAPROXY_PROCESSES" environment variable, which was
leading to this kind of output in "show proc":
4294967295 worker [was: 1] 1 17879d 16h26m28s
Since HTX casts the buffer to a struct and stores relative pointers at the
end, it is mandatory that its end is properly aligned. This patch enforces
a buffer size rounding up to the next multiple of two void*, thus 8 on
32-bit and 16 on 64-bit, to match what malloc() already does on the beginning
of the buffer. In practice it will never be really noticeable since default
sizes already are such multiples.
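The rounding itself is straightforward, as sketched below on the
tune.bufsize value:

    /* round the buffer size up to the next multiple of 2*sizeof(void *)
     * (8 on 32-bit, 16 on 64-bit) so that the end of the area is properly
     * aligned for the relative pointers HTX stores there */
    global.tune.bufsize = ((global.tune.bufsize + 2 * sizeof(void *) - 1)
                           / (2 * sizeof(void *))) * (2 * sizeof(void *));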
The master is not supposed to run (at the moment) any task before the
polling loop; the created tasks should be run only in the workers, and in
the master they should be disabled or removed.
No backport needed.
The previous code was only stopping the listeners in the master, not the
entire proxy.
Since we now have a polling loop in the master, there might be some side
effects; indeed some things are still initialized. For example the
checks were still running.
Add a new keyword for servers, "idle-timeout". If set, unused connections are
kept alive until the timeout happens, and will be picked for reuse if no
other connection is available.
signal_init(), init_log(), init_stream(), and init_task() all used to
only preset some values and lists. This needs to be done very early to
provide a reliable interface to all other users. The calls used to be
explicit in haproxy.c:init(). Now they're placed in initcalls at the
STG_PREPARE stage. The functions are not exported anymore.
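The registrations then look like this (sketch based on the initcall API
described earlier in this file):

    /* in src/task.c, src/stream.c, etc.: run at the STG_PREPARE stage
     * instead of being called explicitly from haproxy.c:init() */
    INITCALL0(STG_PREPARE, init_task);
    INITCALL0(STG_PREPARE, init_stream);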
Instead of exporting a number of pools and having to manually delete
them in deinit() or to have dedicated destructors to remove them, let's
simply kill all pools on deinit().
For this a new function pool_destroy_all() was introduced. As its name
implies, it destroys and frees all pools (provided they don't have any
user anymore of course).
This made it possible to remove 4 implicit destructors, 2 explicit ones, and 11
individual calls to pool_destroy(). In addition it properly removes
the mux_pt_ctx pool which was not cleared on exit (no backport needed
here since it's 1.9 only). The sig_handler pool doesn't need to be
exported anymore and became static now.
This commit replaces the explicit pool creation that are made in
constructors with a pool registration. Not only this simplifies the
pools declaration (it can be done on a single line after the head is
declared), but it also removes references to pools from within
constructors. The only remaining create_pool() calls are those
performed in init functions after the config is parsed, so there
are no more users of potentially uninitialized pools now.
This was also the opportunity to remove no less than 12 constructors
and 6 init functions.
We reintroduced some FD leaks by using a poller and some listeners in
the master.
The master proxy needs to be stopped to avoid leaking its listeners, the
polling loop needs to be deinitialized, and the thread waker pipe needs to be
closed too.
No backport needed.
Valgrind reports:
==3389== Warning: invalid file descriptor -1 in syscall close()
Check for >= 0 before closing.
This bug was introduced in commit ce83b4a5dd
and is specific to 1.9. No backport needed.
At the moment the situation with activity measurement is quite tricky
because the struct activity is defined in global.h and declared in
haproxy.c, with operations made in time.h and relying on freq_ctr
which are defined in freq_ctr.h which itself includes time.h. It's
barely possible to touch any of these files without breaking all the
circular dependency.
Let's move all this stuff to activity.{c,h} and be done with it. The
measurement of active and stolen time is now done in a dedicated
function called just after tv_before_poll() instead of mixing the two,
which used to be a lazy (but convenient) decision.
No code was changed, stuff was just moved around.
The signal_register_fct() function does not remove the handlers assigned to a
signal, but adds a new handler to a list.
We accidentally inherited the handlers of the main() function in the
master process, which is a problem because they act on the proxies.
The side effect was to stop the MASTER proxy which handles the master CLI
on a SIGUSR1, and to display some debug info when doing a SIGHUP and a
SIGQUIT.
The mworker waitpid mode (which is used when a reload failed to apply
the new configuration) was still using a specific initialisation path.
That's a problem since we use a polling loop in the master now, yet with
this path the master proxy is not initialized and the master CLI is not activated.
This patch removes the initialisation code of the wait mode and
introduces the MODE_MWORKER_WAIT mode in order to use the same init path as
the MODE_MWORKER with some exceptions. This allows using the master proxy
and the master CLI during the waitpid mode.
This patch allows a process to properly quit when some jobs are still
active. This feature is handled by the unstoppable_jobs variable, which
must be atomically incremented.
During each new iteration of run_poll_loop() the break condition of the
loop is now (jobs - unstoppable_jobs) == 0.
The only usage of this at the moment is to handle the socketpair CLI
of the worker during the stopping of the process. During the soft
stop, we could mark the CLI listener as an unstoppable job and still
handle new connections until every other job is stopped.
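Sketch of the two sides of the mechanism (simplified from the real code):

    /* marking a job as unstoppable, e.g. the worker's CLI socketpair */
    HA_ATOMIC_ADD(&unstoppable_jobs, 1);

    /* in run_poll_loop(): the break condition as described above */
    if ((jobs - unstoppable_jobs) == 0)
        break;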
When using the CLI proxy of the master and trying to access a worker
with the @ prefix, the worker just crashes.
The commit 7216032 ("MEDIUM: mworker: leave when the master die")
reintroduced the old code of the pipe, which was not trying to access
the pointers before. The owner of the FD was modified to a different
value; this is a problem since we now call listener_accept() in most cases
from mworker_accept_wrapper(), and it casts the owner variable to
get the listener.
This patch fixes the issue by setting back the previous owner of the FD.
The process was aborting with nbthread > 1.
The mworker_pipe_register() function could be called several times in
multithread mode, and we don't want to abort() there.
When the master dies, the worker should exit too. This is achieved by
checking whether the FD of the socketpair/pipe between the master and
the worker was closed.
In the former architecture of the master-worker, there was only a pipe
between the master and the workers, and it was easy to check an EOF on
the pipe FD to exit() the worker.
With the new architecture, we use a socketpair by process, and this
socketpair is also used to accept new connections with the
listener_accept() callback.
This accept callback can't handle the EOF and the exit of the process,
because it's very specific to the master-worker. This is why we
transformed the mworker_pipe_handler() function into a wrapper which checks
whether there is an EOF and exits the process, and otherwise calls
listener_accept() to perform the accept.
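A condensed sketch of the wrapper (error handling trimmed; the real
function is a bit more careful):

    #include <stdlib.h>
    #include <sys/socket.h>

    void mworker_accept_wrapper(int fd)
    {
        char c;

        if (recv(fd, &c, 1, MSG_PEEK | MSG_DONTWAIT) == 0) {
            /* EOF: the master closed its end, so it is gone; leave */
            exit(EXIT_FAILURE);
        }
        /* otherwise behave as a regular accept callback */
        listener_accept(fd);
    }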
The former behavior was to exit() the master process with the latest
status code known, which was the one of the last process to exit.
The problem is that the master process was not exiting with the status
code which provoked the exit-on-failure.
The active peers output indicates both the number of established peers
connections and the number of peers connection attempts. The new counter
"ConnectedPeers" also indicates the number of currently connected peers.
This helps detect that some peers cannot be reached for example. It's
worth mentioning that this value changes over time because unused peers
are often disconnected and reconnected. Most of the time it should be
equal to ActivePeers.
Peers are the last type of activity which can maintain a job present, so
it's important to report that such an entity is still active to explain
why the job count may be higher than zero. Here by "ActivePeers" we report
peers sessions, which include both established connections and outgoing
connection attempts.
This patch introduces mworker_cli_proxy_new_listener() which allows the
creation of new listeners for the CLI proxy.
Using this function it is possible to create new listeners from the
program arguments with -Sa <unix_socket>. It is allowed to create
multiple listeners with several -Sa.
This patch implements a listen proxy within the master. It uses the
sockpair of all the workers as servers.
In the current state of the code, the proxy is only doing round robin on
the CLI of the workers. A CLI mode will be needed to know to which CLI
to send the requests.
The init code of the mworker_proc structs has been moved before the
init of the listeners.
Each socketpair is now connected to a CLI within the workers, which
allows the master to access their CLI.
The inherited flag of the worker side socketpair is removed so the
socket can be closed in the master.
The listeners with the LI_O_INHERITED flag were deleted but not unbound,
which is a problem since we have a polling loop in the master.
This patch unbinds every listener which is not required for the master,
but does not close the FD of those that have the LI_O_INHERITED flag.
This bug appeared only if nbthread > 1. While handling the pipe with the
master, multiple threads of the same worker could process the deinit().
In addition, deinit() was called while some other threads were still
performing some tasks.
This patch assigns the handler of the pipe with the master to the first
thread only, and removes the call to deinit() before exiting with an error.
This patch should be backported to 1.8.
These ones are mostly called from cfgparse.c for the parsing and do
not depend on the HTTP representation. The functions' prototypes
were moved to proto/http_rules.h, making this file work exactly like
tcp_rules. Ideally we should stop calling these functions directly
from cfgparse and register keywords, but there are a few cases where
that wouldn't work (stats http-request) so it's probably not worth
trying to go this far.
Cyril Bonté reported that commit f9cc07c25b broke the build without
threads.
We don't need to initialise tid = 0 in mworker_loop, so we can
completely remove it.
These error codes and messages are agnostic to the version, even if
they are represented as HTTP/1.0 messages. Ultimately they will have
to be transformed into internal HTTP messages to be used everywhere.
The HTTP/1.1 100 Continue message was turned to an IST and the local
copy in the Lua code was removed.
We need to clean up the FDs registered manually in the poller to avoid FD
leaks during a reload of the master.
This patch calls the per-thread deinit function which closes the thread
waker pipe.
In order to communicate with the workers, the master pipe has been
replaced by a socketpair() per worker.
The goal is to use these sockets as stats sockets and be able to access
them from the master.
When reloading, the master serializes the information about the workers
and puts it in an environment variable. Once the master has been
reexecuted, it deserializes that information and is able to close the FDs
of the leaving children.
The master now uses a poll loop, which should be initialized even in wait
mode. We need to init some variables if we did not succeed in loading the
configuration file.
If haproxy failed to load its configuration, the process is reexecuted
but the poller was never initialized. So we must not try to deinit the
poller before the exec().
With the new way of handling the signals in the master worker, we are no
longer staying in a waitpid() loop, which means that we need to catch the
SIGCHLD signals to call waitpid().
The problem is that when the master is reloading, this signal is neither
registered nor blocked, so we lose all signals between the restart and
the call to mworker_loop().
This patch blocks the SIGCHLD signal before the reload and ensures it is
not unblocked before the master has registered the SIGCHLD handler.
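A minimal sketch of the blocking sequence, assuming it runs in the master
around the re-exec (the helper names are illustrative; ha_sigmask() is the
thread-safe wrapper described later in this log):

  #include <signal.h>

  /* called just before re-executing the master: keep SIGCHLD pending so
   * that a child dying during the reload is not lost */
  static void mworker_block_sigchld(void)
  {
      sigset_t set;
      sigemptyset(&set);
      sigaddset(&set, SIGCHLD);
      ha_sigmask(SIG_BLOCK, &set, NULL);
  }

  /* called from mworker_loop() only once the SIGCHLD handler is registered,
   * so any pending signal is delivered straight to the handler */
  static void mworker_unblock_sigchld(void)
  {
      sigset_t set;
      sigemptyset(&set);
      sigaddset(&set, SIGCHLD);
      ha_sigmask(SIG_UNBLOCK, &set, NULL);
  }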
In order to reorganize the code of the master worker, the mworker_wait()
function, which was the main function, was split. This function was
handling a wait() loop, but it does not need it anymore since the code
will use the poll loop of haproxy instead.
The function was split into several functions:
- mworker_catch_sigterm(), a signal handler for SIGTERM and SIGUSR1 that
sends the signals to the workers
- mworker_catch_sigchld(), the code handling the leaving of a child
- mworker_catch_sighup(), which basically calls the mworker_restart()
function
- mworker_loop(), the function calling the main poll loop in the master
Now we try to synchronously push updates as they come using the new rdv
point, so that the call to the server update function from the main poll
loop is not needed anymore.
It further reduces the apparent latency in the health checks as the response
time almost always appears as 0 ms, resulting in a slightly higher check rate
of ~1960 conn/s. Despite this, the CPU consumption has slightly dropped again
to ~32% for the same test.
The only trick is that the checks code is built with a bit of recursion
because srv_update_status() calls server_recalc_eweight(), and the latter
needs to signal srv_update_status() in case of updates. Thus we added an
extra argument to this function to indicate whether or not it must
propagate updates (no if it comes from srv_update_status).
This partially reverts commit d8fd2af ("BUG/MEDIUM: threads: Use the sync
point to check active jobs and exit") which used to address an issue in
the way the sync point used to check for present threads, which was later
addressed by commit ddb6c16 ("BUG/MEDIUM: threads: Fix the exit condition
of the thread barrier"). Thus there is no need anymore to use the sync
point for exiting and we can completely remove this call in the main loop.
The current sync point causes some important stress when a high number
of threads is in use on a config with lots of checks, because it wakes
up all threads every time a server state changes.
A config like the following can easily saturate a 4-core machine reaching
only 750 checks per second out of the ~2000 configured :
  global
      nbthread 4
  defaults
      mode http
      timeout connect 5s
      timeout client 5s
      timeout server 5s
  frontend srv
      bind :8001 process 1/1
      redirect location / if { method OPTIONS } { rand(100) ge 50 }
      stats uri /
  backend chk
      option httpchk
      server-template srv 1-100 127.0.0.1:8001 check rise 1 fall 1 inter 50
The reason is that the random on the fake server causes the responses
to randomly match an HTTP check, and results in a lot of up/down events
that are broadcasted to all threads. It's worth noting that the CPU usage
already dropped by about 60% between 1.8 and 1.9 just due to the scheduler
updates, but the sync point remains expensive.
In addition, it's visible on the stats page that a lot of requests end up
with an L7TOUT status in ~60ms. With smaller timeouts, it's even L4TOUT
around 20-25ms.
By not using THREAD_WANT_SYNC() anymore and only calling the server updates
under thread_isolate(), we can avoid all these wakeups. The CPU usage on
the same config drops to around 44% on the same machine, with all checks
being delivered at ~1900 checks per second, and the stats page shows no
more timeouts, even at 10 ms check interval. The difference is mainly
caused by the fact that there's no more need to wait for a thread to wake
up from poll() before starting to process check results.
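As an illustration, the update path boils down to something like this (a
hedged sketch assuming srv_update_status() is the entry point for the state
change; thread_isolate()/thread_release() are the rendez-vous point
primitives):

  /* sketch: run the server state update with every other thread parked at
   * the rendez-vous point, then let them resume */
  static void push_server_update(struct server *srv)
  {
      thread_isolate();          /* wait for all other threads to pause */
      srv_update_status(srv);    /* no concurrent access possible here */
      thread_release();          /* resume normal operation */
  }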
Released version 1.9-dev1 with the following main changes :
- BUG/MEDIUM: kqueue: Don't bother closing the kqueue after fork.
- DOC: cache: update sections and fix some typos
- BUILD/MINOR: deviceatlas: enable thread support
- BUG/MEDIUM: tcp-check: Don't lock the server in tcpcheck_main
- BUG/MEDIUM: ssl: don't allocate shctx several time
- BUG/MEDIUM: cache: bad computation of the remaining size
- BUILD: checks: don't include server.h
- BUG/MEDIUM: stream: fix session leak on applet-initiated connections
- BUILD/MINOR: haproxy : FreeBSD/cpu affinity needs pthread_np header
- BUILD/MINOR: Makefile : enabling USE_CPU_AFFINITY
- BUG/MINOR: ssl: CO_FL_EARLY_DATA removal is managed by stream
- BUG/MEDIUM: threads/peers: decrement, not increment jobs on quitting
- BUG/MEDIUM: h2: don't report an error after parsing a 100-continue response
- BUG/MEDIUM: peers: fix some track counter rules dont register entries for sync.
- BUG/MAJOR: thread/peers: fix deadlock on peers sync.
- BUILD/MINOR: haproxy: compiling config cpu parsing handling when needed
- MINOR: config: report when "monitor fail" rules are misplaced
- BUG/MINOR: mworker: fix validity check for the pipe FDs
- BUG/MINOR: mworker: detach from tty when in daemon mode
- MINOR: threads: Fix pthread_setaffinity_np on FreeBSD.
- BUG/MAJOR: thread: Be sure to request a sync between threads only once at a time
- BUILD: Fix LDFLAGS vs. LIBS re linking order in various makefiles
- BUG/MEDIUM: checks: Be sure we have a mux if we created a cs.
- BUG/MINOR: hpack: fix debugging output of pseudo header names
- BUG/MINOR: hpack: must reject huffman literals padded with more than 7 bits
- BUG/MINOR: hpack: reject invalid header index
- BUG/MINOR: hpack: dynamic table size updates are only allowed before headers
- BUG/MAJOR: h2: correctly check the request length when building an H1 request
- BUG/MINOR: h2: immediately close if receiving GOAWAY after the last stream
- BUG/MINOR: h2: try to abort closed streams as soon as possible
- BUG/MINOR: h2: ":path" must not be empty
- BUG/MINOR: h2: fix a typo causing PING/ACK to be responded to
- BUG/MINOR: h2: the TE header if present may only contain trailers
- BUG/MEDIUM: h2: enforce the per-connection stream limit
- BUG/MINOR: h2: do not accept SETTINGS_ENABLE_PUSH other than 0 or 1
- BUG/MINOR: h2: reject incorrect stream dependencies on HEADERS frame
- BUG/MINOR: h2: properly check PRIORITY frames
- BUG/MINOR: h2: reject response pseudo-headers from requests
- BUG/MEDIUM: h2: remove connection-specific headers from request
- BUG/MEDIUM: h2: do not accept upper case letters in request header names
- BUG/MINOR: h2: use the H2_F_DATA_* macros for DATA frames
- BUG/MINOR: action: Don't check http capture rules when no id is defined
- BUG/MAJOR: hpack: don't pretend large headers fit in empty table
- BUG/MINOR: ssl: support tune.ssl.cachesize 0 again
- BUG/MEDIUM: mworker: also close peers sockets in the master
- BUG/MEDIUM: ssl engines: Fix async engines fds were not considered to fix fd limit automatically.
- BUG/MEDIUM: checks: a down server going to maint remains definitely stucked on down state.
- BUG/MEDIUM: peers: set NOLINGER on the outgoing stream interface
- BUG/MEDIUM: h2: fix handling of end of stream again
- MINOR: mworker: Update messages referencing exit-on-failure
- MINOR: mworker: Improve wording in `void mworker_wait()`
- CONTRIB: halog: Add help text for -s switch in halog program
- BUG/MEDIUM: email-alert: don't set server check status from a email-alert task
- BUG/MEDIUM: threads/vars: Fix deadlock in register_name
- MINOR: systemd: remove comment about HAPROXY_STATS_SOCKET
- DOC: notifications: add precisions about thread usage
- BUG/MEDIUM: lua/notification: memory leak
- MINOR: conn_stream: add new flag CS_FL_RCV_MORE to indicate pending data
- BUG/MEDIUM: stream-int: always set SI_FL_WAIT_ROOM on CS_FL_RCV_MORE
- BUG/MEDIUM: h2: automatically set CS_FL_RCV_MORE when the output buffer is full
- BUG/MEDIUM: h2: enable recv polling whenever demuxing is possible
- BUG/MEDIUM: h2: work around a connection API limitation
- BUG/MEDIUM: h2: debug incoming traffic in h2_wake()
- MINOR: h2: store the demux padding length in the h2c struct
- BUG/MEDIUM: h2: support uploading partial DATA frames
- MINOR: h2: don't demand that a DATA frame is complete before processing it
- BUG/MEDIUM: h2: don't switch the state to HREM before end of DATA frame
- BUG/MEDIUM: h2: don't close after the first DATA frame on tunnelled responses
- BUG/MEDIUM: http: don't disable lingering on requests with tunnelled responses
- BUG/MEDIUM: h2: fix stream limit enforcement
- BUG/MINOR: stream-int: don't try to receive again after receiving an EOS
- MINOR: sample: add len converter
- BUG: MAJOR: lb_map: server map calculation broken
- BUG: MINOR: http: don't check http-request capture id when len is provided
- MINOR: sample: rename the "len" converter to "length"
- BUG/MEDIUM: mworker: Set FD_CLOEXEC flag on log fd
- DOC/MINOR: intro: typo, wording, formatting fixes
- MINOR: netscaler: respect syntax
- MINOR: netscaler: remove the use of cip_magic only used once
- MINOR: netscaler: rename cip_len to clarify its uage
- BUG/MEDIUM: netscaler: use the appropriate IPv6 header size
- BUG/MAJOR: netscaler: address truncated CIP header detection
- MINOR: netscaler: check in one-shot if buffer is large enough for IP and TCP header
- MEDIUM: netscaler: do not analyze original IP packet size
- MEDIUM: netscaler: add support for standard NetScaler CIP protocol
- MINOR: spoe: add force-set-var option in spoe-agent configuration
- CONTRIB: iprange: Fix compiler warning in iprange.c
- CONTRIB: halog: Fix compiler warnings in halog.c
- BUG/MINOR: h2: properly report a stream error on RST_STREAM
- MINOR: mux: add flags to describe a mux's capabilities
- MINOR: stream-int: set flag SI_FL_CLEAN_ABRT when mux supports clean aborts
- BUG/MEDIUM: stream: don't consider abortonclose on muxes which close cleanly
- BUG/MEDIUM: checks: a server passed in maint state was not forced down.
- BUG/MEDIUM: lua: fix crash when using bogus mode in register_service()
- MINOR: http: adjust the list of supposedly cacheable methods
- MINOR: http: update the list of cacheable status codes as per RFC7231
- MINOR: http: start to compute the transaction's cacheability from the request
- BUG/MINOR: http: do not ignore cache-control: public
- BUG/MINOR: http: properly detect max-age=0 and s-maxage=0 in responses
- BUG/MINOR: cache: do not force the TX_CACHEABLE flag before checking cacheability
- MINOR: http: add a function to check request's cache-control header field
- BUG/MEDIUM: cache: do not try to retrieve host-less requests from the cache
- BUG/MEDIUM: cache: replace old object on store
- BUG/MEDIUM: cache: respect the request cache-control header
- BUG/MEDIUM: cache: don't cache the response on no-cache="set-cookie"
- BUG/MAJOR: connection: refine the situations where we don't send shutw()
- BUG/MEDIUM: checks: properly set servers to stopping state on 404
- BUG/MEDIUM: h2: properly handle and report some stream errors
- BUG/MEDIUM: h2: improve handling of frames received on closed streams
- DOC/MINOR: configuration: typo, formatting fixes
- BUG/MEDIUM: h2: ensure we always know the stream before sending a reset
- BUG/MEDIUM: mworker: don't close stdio several time
- MINOR: don't close stdio anymore
- BUG/MEDIUM: http: don't automatically forward request close
- BUG/MAJOR: hpack: don't return direct references to the dynamic headers table
- MINOR: h2: add a function to report pseudo-header names
- DEBUG: hpack: make hpack_dht_dump() expose the output file
- DEBUG: hpack: add more traces to the hpack decoder
- CONTRIB: hpack: add an hpack decoder
- MEDIUM: h2: prepare a graceful shutdown when the frontend is stopped
- BUG/MEDIUM: h2: properly handle the END_STREAM flag on empty DATA frames
- BUILD: ssl: silence a warning when building without NPN nor ALPN support
- CLEANUP: rbtree: remove
- BUG/MEDIUM: ssl: cache doesn't release shctx blocks
- BUG/MINOR: lua: Fix default value for pattern in Socket.receive
- DOC: lua: Fix typos in comments of hlua_socket_receive
- BUG/MEDIUM: lua: Fix IPv6 with separate port support for Socket.connect
- BUG/MINOR: lua: Fix return value of Socket.settimeout
- MINOR: dns: Handle SRV record weight correctly.
- BUG/MEDIUM: mworker: execvp failure depending on argv[0]
- MINOR: hathreads: add support for gcc < 4.7
- BUILD/MINOR: ancient gcc versions atomic fix
- BUG/MEDIUM: stream: properly handle client aborts during redispatch
- MINOR: spoe: add register-var-names directive in spoe-agent configuration
- MINOR: spoe: Don't queue a SPOE context if nothing is sent
- DOC: clarify the scope of ssl_fc_is_resumed
- CONTRIB: debug: fix a few flags definitions
- BUG/MINOR: poll: too large size allocation for FD events
- MINOR: sample: add date_us sample
- BUG/MEDIUM: peers: fix expire date wasn't updated if entry is modified remotely.
- MINOR: servers: Don't report duplicate dyncookies for disabled servers.
- MINOR: global/threads: move cpu_map at the end of the global struct
- MINOR: threads: add a MAX_THREADS define instead of LONGBITS
- MINOR: global: add some global activity counters to help debugging
- MINOR: threads/fd: Use a bitfield to know if there are FDs for a thread in the FD cache
- BUG/MEDIUM: threads/polling: Use fd_cache_mask instead of fd_cache_num
- BUG/MEDIUM: fd: maintain a per-thread update mask
- MINOR: fd: add a bitmask to indicate that an FD is known by the poller
- BUG/MEDIUM: epoll/threads: use one epoll_fd per thread
- BUG/MEDIUM: kqueue/threads: use one kqueue_fd per thread
- BUG/MEDIUM: threads/mworker: fix a race on startup
- BUG/MINOR: mworker: only write to pidfile if it exists
- MINOR: threads: Fix build when we're not compiling with threads.
- BUG/MINOR: threads: always set an owner to the thread_sync pipe
- BUG/MEDIUM: threads/server: Fix deadlock in srv_set_stopping/srv_set_admin_flag
- BUG/MEDIUM: checks: Don't try to release undefined conn_stream when a check is freed
- BUG/MINOR: kqueue/threads: Don't forget to close kqueue_fd[tid] on each thread
- MINOR: threads: Use __decl_hathreads instead of #ifdef/#endif
- BUILD: epoll/threads: Add test on MAX_THREADS to avoid warnings when complied without threads
- BUILD: kqueue/threads: Add test on MAX_THREADS to avoid warnings when complied without threads
- CLEANUP: sample: Fix comment encoding of sample.c
- CLEANUP: sample: Fix outdated comment about sample casts functions
- BUG/MINOR: sample: Fix output type of c_ipv62ip
- CLEANUP: Fix typo in ARGT_MSK6 comment
- CLEANUP: standard: Use len2mask4 in str2mask
- MINOR: standard: Add str2mask6 function
- MINOR: config: Add support for ARGT_MSK6
- MEDIUM: sample: Add IPv6 support to the ipmask converter
- MINOR: config: Enable tracking of up to MAX_SESS_STKCTR stick counters.
- BUG/MINOR: cli: use global.maxsock and not maxfd to list all FDs
- MINOR: polling: make epoll and kqueue not depend on maxfd anymore
- MINOR: fd: don't report maxfd in alert messages
- MEDIUM: polling: start to move maxfd computation to the pollers
- CLEANUP: fd/threads: remove the now unused fdtab_lock
- MINOR: poll: more accurately compute the new maxfd in the loop
- CLEANUP: fd: remove the unused "new" field
- MINOR: fd: move the hap_fd_{clr,set,isset} functions to fd.h
- MEDIUM: select: make use of hap_fd_* functions
- MEDIUM: fd: use atomic ops for hap_fd_{clr,set} and remove poll_lock
- MEDIUM: select: don't use the old FD state anymore
- MEDIUM: poll: don't use the old FD state anymore
- MINOR: fd: pass the iocb and owner to fd_insert()
- BUG/MINOR: threads: Update labels array because of changes in lock_label enum
- MINOR: stick-tables: Adds support for new "gpc1" and "gpc1_rate" counters.
- BUG/MINOR: epoll/threads: only call epoll_ctl(DEL) on polled FDs
- DOC: don't suggest using http-server-close
- MINOR: introduce proxy-v2-options for send-proxy-v2
- BUG/MEDIUM: spoe: Always try to receive or send the frame to detect shutdowns
- BUG/MEDIUM: spoe: Allow producer to read and to forward shutdown on request side
- MINOR: spoe: Remove check on min_applets number when a SPOE context is queued
- MINOR: spoe: Always link a SPOE context with the applet processing it
- MINOR: spoe: Replace sending_rate by a frequency counter
- MINOR: spoe: Count the number of frames waiting for an ack for each applet
- MEDIUM: spoe: Use an ebtree to manage idle applets
- MINOR: spoa_example: Count the number of frames processed by each worker
- MINOR: spoe: Add max-waiting-frames directive in spoe-agent configuration
- MINOR: init: make stdout unbuffered
- MINOR: early data: Don't rely on CO_FL_EARLY_DATA to wake up streams.
- MINOR: early data: Never remove the CO_FL_EARLY_DATA flag.
- MINOR: compiler: introduce offsetoff().
- MINOR: threads: Introduce double-width CAS on x86_64 and arm.
- MINOR: threads: add test and set/reset operations
- MINOR: pools/threads: Implement lockless memory pools.
- MAJOR: fd/threads: Make the fdcache mostly lockless.
- MEDIUM: fd/threads: Make sure we don't miss a fd cache entry.
- MAJOR: fd: compute the new fd polling state out of the fd lock
- MINOR: epoll: get rid of the now useless fd_compute_new_polled_status()
- MINOR: kqueue: get rid of the now useless fd_compute_new_polled_status()
- MINOR: poll: get rid of the now useless fd_compute_new_polled_status()
- MINOR: select: get rid of the now useless fd_compute_new_polled_status()
- CLEANUP: fd: remove the now unused fd_compute_new_polled_status() function
- MEDIUM: fd: make updt_fd_polling() use atomics
- MEDIUM: poller: use atomic ops to update the fdtab mask
- MINOR: fd: move the fd_{add_to,rm_from}_fdlist functions to fd.c
- BUG/MINOR: fd/threads: properly dereference fdcache as volatile
- MINOR: fd: remove the unneeded last CAS when adding an fd to the list
- MINOR: fd: reorder fd_add_to_fd_list()
- BUG/MINOR: time/threads: ensure the adjusted time is always correct
- BUG/MEDIUM: standard: Fix memory leak in str2ip2()
- MINOR: init: emit warning when -sf/-sd cannot parse argument
- BUILD: fd/threads: fix breakage build breakage without threads
- DOC: Describe routing impact of using interface keyword on bind lines
- DOC: Mention -Ws in the list of available options
- BUG/MINOR: config: don't emit a warning when global stats is incompletely configured
- BUG/MINOR: fd/threads: properly lock the FD before adding it to the fd cache.
- BUG/MEDIUM: threads: fix the double CAS implementation for ARMv7
- BUG/MEDIUM: ssl: Don't always treat SSL_ERROR_SYSCALL as unrecovarable.
- BUILD/MINOR: memory: stdint is needed for uintptr_t
- BUG/MINOR: init: Add missing brackets in the code parsing -sf/-st
- DOC: lua: new prototype for function "register_action()"
- DOC: cfgparse: Warn on option (tcp|http)log in backend
- BUG/MINOR: ssl/threads: Make management of the TLS ticket keys files thread-safe
- MINOR: sample: add a new "concat" converter
- BUG/MEDIUM: ssl: Shutdown the connection for reading on SSL_ERROR_SYSCALL
- BUG/MEDIUM: http: Switch the HTTP response in tunnel mode as earlier as possible
- BUG/MEDIUM: ssl/sample: ssl_bc_* fetch keywords are broken.
- MINOR: ssl/sample: adds ssl_bc_is_resumed fetch keyword.
- CLEANUP: cfgparse: Remove unused label end
- CLEANUP: spoe: Remove unused label retry
- CLEANUP: h2: Remove unused labels from mux_h2.c
- CLEANUP: pools: Remove unused end label in memory.h
- CLEANUP: standard: Fix typo in IPv6 mask example
- BUG/MINOR: pools/threads: don't ignore DEBUG_UAF on double-word CAS capable archs
- BUG/MINOR: debug/pools: properly handle out-of-memory when building with DEBUG_UAF
- MINOR: debug/pools: make DEBUG_UAF also detect underflows
- MINOR: stats: display the number of threads in the statistics.
- BUG/MINOR: h2: Set the target of dbuf_wait to h2c
- BUG/MEDIUM: h2: always consume any trailing data after end of output buffers
- BUG/MEDIUM: buffer: Fix the wrapping case in bo_putblk
- BUG/MEDIUM: buffer: Fix the wrapping case in bi_putblk
- BUG/MEDIUM: spoe: Remove idle applets from idle list when HAProxy is stopping
- Revert "BUG/MINOR: send-proxy-v2: string size must include ('\0')"
- MINOR: ssl: extract full pkey info in load_certificate
- MINOR: ssl: add ssl_sock_get_pkey_algo function
- MINOR: ssl: add ssl_sock_get_cert_sig function
- MINOR: connection: add proxy-v2-options ssl-cipher,cert-sig,cert-key
- MINOR: connection: add proxy-v2-options authority
- MINOR: systemd: Add section for SystemD sandboxing to unit file
- MINOR: systemd: Add SystemD's Protect*= options to the unit file
- MINOR: systemd: Add SystemD's SystemCallFilter option to the unit file
- CLEANUP: h2: rename misleading h2c_stream_close() to h2s_close()
- MINOR: h2: provide and use h2s_detach() and h2s_free()
- MEDIUM: h2: use a single buffer allocator
- MINOR/BUILD: fix Lua build on Mac OS X
- BUILD/MINOR: fix Lua build on Mac OS X (again)
- BUG/MINOR: session: Fix tcp-request session failure if handshake.
- CLEANUP: .gitignore: Ignore binaries from the contrib directory
- BUG/MINOR: unix: Don't mess up when removing the socket from the xfer_sock_list.
- DOC: buffers: clarify the purpose of the <from> pointer in offer_buffers()
- BUG/MEDIUM: h2: also arm the h2 timeout when sending
- BUG/MINOR: cli: Fix a crash when passing a negative or too large value to "show fd"
- CLEANUP: ssl: Remove a duplicated #include
- CLEANUP: cli: Remove a leftover debug message
- BUG/MINOR: cli: Fix a typo in the 'set rate-limit' usage
- BUG/MEDIUM: fix a 100% cpu usage with cpu-map and nbthread/nbproc
- BUG/MINOR: force-persist and ignore-persist only apply to backends
- BUG/MEDIUM: threads/unix: Fix a deadlock when a listener is temporarily disabled
- BUG/MAJOR: threads/queue: Fix thread-safety issues on the queues management
- BUG/MINOR: dns: don't downgrade DNS accepted payload size automatically
- TESTS: Add a testcase for multi-port + multi-server listener issue
- CLEANUP: dns: remove duplicate code in src/dns.c
- BUG/MINOR: seemless reload: Fix crash when an interface is specified.
- BUG/MINOR: cli: Ensure all command outputs end with a LF
- BUG/MINOR: cli: Fix a crash when sending a command with too many arguments
- BUILD: ssl: Fix build with OpenSSL without NPN capability
- BUG/MINOR: spoa-example: unexpected behavior for more than 127 args
- BUG/MINOR: lua: return bad error messages
- CLEANUP: lua/syntax: lua is a name and not an acronym
- BUG/MEDIUM: tcp-check: single connect rule can't detect DOWN servers
- BUG/MINOR: tcp-check: use the server's service port as a fallback
- BUG/MEDIUM: threads/queue: wake up other threads upon dequeue
- MINOR: log: stop emitting alerts when it's not possible to write on the socket
- BUILD/BUG: enable -fno-strict-overflow by default
- BUG/MEDIUM: fd/threads: ensure the fdcache_mask always reflects the cache contents
- DOC: log: more than 2 log servers are allowed
- MINOR: hash: add new function hash_crc32c
- MINOR: proxy-v2-options: add crc32c
- MINOR: accept-proxy: support proxy protocol v2 CRC32c checksum
- REORG: compact "struct server"
- MINOR: samples: add crc32c converter
- BUG/MEDIUM: h2: properly account for DATA padding in flow control
- BUG/MINOR: h2: ensure we can never send an RST_STREAM in response to an RST_STREAM
- BUG/MINOR: listener: Don't decrease actconn twice when a new session is rejected
- CLEANUP: map, stream: remove duplicate code in src/map.c, src/stream.c
- BUG/MINOR: lua: the function returns anything
- BUG/MINOR: lua funtion hlua_socket_settimeout don't check negative values
- CLEANUP: lua: typo fix in comments
- BUILD/MINOR: fix build when USE_THREAD is not defined
- MINOR: lua: allow socket api settimeout to accept integers, float, and doubles
- BUG/MINOR: hpack: fix harmless use of uninitialized value in hpack_dht_insert
- MINOR: cli/threads: make "show fd" report thread_sync_io_handler instead of "unknown"
- MINOR: cli: make "show fd" report the mux and mux_ctx pointers when available
- BUILD/MINOR: cli: fix a build warning introduced by last commit
- BUG/MAJOR: h2: remove orphaned streams from the send list before closing
- MINOR: h2: always call h2s_detach() in h2_detach()
- MINOR: h2: fuse h2s_detach() and h2s_free() into h2s_destroy()
- BUG/MEDIUM: h2/threads: never release the task outside of the task handler
- BUG/MEDIUM: h2: don't consider pending data on detach if connection is in error
- BUILD/MINOR: threads: always export thread_sync_io_handler()
- MINOR: mux: add a "show_fd" function to dump debugging information for "show fd"
- MINOR: h2: implement a basic "show_fd" function
- MINOR: cli: report cache indexes in "show fd"
- BUG/MINOR: h2: remove accidental debug code introduced with show_fd function
- BUG/MEDIUM: h2: always add a stream to the send or fctl list when blocked
- BUG/MINOR: checks: check the conn_stream's readiness and not the connection
- BUG/MINOR: fd: Don't clear the update_mask in fd_insert.
- BUG/MINOR: email-alert: Set the mailer port during alert initialization
- BUG/MINOR: cache: fix "show cache" output
- BUG/MAJOR: cache: fix random crashes caused by incorrect delete() on non-first blocks
- BUG/MINOR: spoe: Initialize variables used during conf parsing before any check
- BUG/MINOR: spoe: Don't release the context buffer in .check_timeouts callbaclk
- BUG/MINOR: spoe: Register the variable to set when an error occurred
- BUG/MINOR: spoe: Don't forget to decrement fpa when a processing is interrupted
- MINOR: spoe: Add metrics in to know time spent in the SPOE
- MINOR: spoe: Add options to store processing times in variables
- MINOR: log: move 'log' keyword parsing in dedicated function
- MINOR: log: Keep the ref when a log server is copied to avoid duplicate entries
- MINOR: spoe: Add loggers dedicated to the SPOE agent
- MINOR: spoe: Add support for option dontlog-normal in the SPOE agent section
- MINOR: spoe: use agent's logger to log SPOE messages
- MINOR: spoe: Add counters to log info about SPOE agents
- BUG/MAJOR: cache: always initialize newly created objects
- MINOR: servers: Support alphanumeric characters for the server templates names
- BUG/MEDIUM: threads: Fix the max/min calculation because of name clashes
- BUG/MEDIUM: connection: Make sure we have a mux before calling detach().
- BUG/MINOR: http: Return an error in proxy mode when url2sa fails
- MINOR: proxy: Add fe_defbe fetcher
- MINOR: config: Warn if resolvers has no nameservers
- BUG/MINOR: cli: Guard against NULL messages when using CLI_ST_PRINT_FREE
- MINOR: cli: Ensure the CLI always outputs an error when it should
- MEDIUM: sample: Extend functionality for field/word converters
- MINOR: export localpeer as an environment variable
- BUG/MEDIUM: kqueue: When adding new events, provide an output to get errors.
- BUILD: sample: avoid build warning in sample.c
- BUG/CRITICAL: h2: fix incorrect frame length check
- DOC: lua: update the links to the config and Lua API
- BUG/MINOR: pattern: Add a missing HA_SPIN_INIT() in pat_ref_newid()
- BUG/MAJOR: channel: Fix crash when trying to read from a closed socket
- BUG/MINOR: log: t_idle (%Ti) is not set for some requests
- BUG/MEDIUM: lua: Fix segmentation fault if a Lua task exits
- MINOR: h2: detect presence of CONNECT and/or content-length
- BUG/MEDIUM: h2: implement missing support for chunked encoded uploads
- BUG/MINOR: spoe: Fix counters update when processing is interrupted
- BUG/MINOR: spoe: Fix parsing of dontlog-normal option
- MEDIUM: cli: Add payload support
- MINOR: map: Add payload support to "add map"
- MINOR: ssl: Add payload support to "set ssl ocsp-response"
- BUG/MINOR: lua/threads: Make lua's tasks sticky to the current thread
- MINOR: sample: Add strcmp sample converter
- MINOR: http: Add support for 421 Misdirected Request
- BUG/MINOR: config: disable http-reuse on TCP proxies
- MINOR: ssl: disable SSL sample fetches when unsupported
- MINOR: ssl: add fetch 'ssl_fc_session_key' and 'ssl_bc_session_key'
- BUG/MINOR: checks: Fix check->health computation for flapping servers
- BUG/MEDIUM: threads: Fix the sync point for more than 32 threads
- BUG/MINOR, BUG/MINOR: lua: Put tasks to sleep when waiting for data
- MINOR: backend: implement random-based load balancing
- DOC/MINOR: clean up LUA documentation re: servers & array/table.
- MINOR: lua: Add server name & puid to LUA Server class.
- MINOR: lua: add get_maxconn and set_maxconn to LUA Server class.
- BUG/MINOR: map: correctly track reference to the last ref_elt being dumped
- BUG/MEDIUM: task: Don't free a task that is about to be run.
- MINOR: fd: Make the lockless fd list work with multiple lists.
- BUG/MEDIUM: pollers: Use a global list for fd shared between threads.
- MINOR: pollers: move polled_mask outside of struct fdtab.
- BUG/MINOR: lua: schedule socket task upon lua connect()
- BUG/MINOR: lua: ensure large proxy IDs can be represented
- BUG/MEDIUM: pollers/kqueue: use incremented position in event list
- BUG/MINOR: cli: don't stop cli_gen_usage_msg() when kw->usage == NULL
- BUG/MEDIUM: http: don't always abort transfers on CF_SHUTR
- BUG/MEDIUM: ssl: properly protect SSL cert generation
- BUG/MINOR: lua: Socket.send threw runtime error: 'close' needs 1 arguments.
- BUG/MINOR: spoe: Mistake in error message about SPOE configuration
- BUG/MEDIUM: spoe: Flags are not encoded in network order
- CLEANUP: spoe: Remove unused variables the agent structure
- DOC: spoe: fix a typo
- BUG/MEDIUM: contrib/mod_defender: Use network order to encode/decode flags
- BUG/MEDIUM: contrib/modsecurity: Use network order to encode/decode flags
- DOC: add some description of the pending rework of the buffer structure
- BUG/MINOR: ssl/lua: prevent lua from affecting automatic maxconn computation
- MINOR: lua: Improve error message
- BUG/MEDIUM: cache: don't cache when an Authorization header is present
- MINOR: ssl: set SSL_OP_PRIORITIZE_CHACHA
- BUG/MEDIUM: dns: Delay the attempt to run a DNS resolution on check failure.
- BUG/BUILD: threads: unbreak build without threads
- BUG/MEDIUM: servers: Add srv_addr default placeholder to the state file
- BUG/MEDIUM: lua/socket: Length required read doesn't work
- MINOR: tasks: Change the task API so that the callback takes 3 arguments.
- MAJOR: tasks: Create a per-thread runqueue.
- MAJOR: tasks: Introduce tasklets.
- MINOR: tasks: Make the number of tasks to run at once configurable.
- MAJOR: applets: Use tasks, instead of rolling our own scheduler.
- BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
- MINOR: http: Log warning if (add|set)-header fails
- DOC: management: add the new wrew stats column
- MINOR: stats: also report the failed header rewrites warnings on the stats page
- BUG/MEDIUM: tasks: Don't forget to increase/decrease tasks_run_queue.
- BUG/MEDIUM: task: Don't forget to decrement max_processed after each task.
- MINOR: task: Also consider the task list size when getting global tasks.
- MINOR: dns: Implement `parse-resolv-conf` directive
- BUG/MEDIUM: spoe: Return an error when the wrong ACK is received in sync mode
- MINOR: task/notification: Is notifications registered ?
- BUG/MEDIUM: lua/socket: wrong scheduling for sockets
- BUG/MAJOR: lua: Dead lock with sockets
- BUG/MEDIUM: lua/socket: Notification error
- BUG/MEDIUM: lua/socket: Sheduling error on write: may dead-lock
- BUG/MEDIUM: lua/socket: Buffer error, may segfault
- DOC: contrib/modsecurity: few typo fixes
- DOC: SPOE.txt: fix a typo
- MAJOR: spoe: upgrade the SPOP version to 2.0 and remove the support for 1.0
- BUG/MINOR: contrib/spoa_example: Don't reset the status code during disconnect
- BUG/MINOR: contrib/mod_defender: Don't reset the status code during disconnect
- BUG/MINOR: contrib/modsecurity: Don't reset the status code during disconnect
- BUG/MINOR: contrib/mod_defender: update pointer on the end of the frame
- BUG/MINOR: contrib/modsecurity: update pointer on the end of the frame
- MINOR: task: Fix a compiler warning by adding a cast.
- MINOR: stats: also report the nice and number of calls for applets
- MINOR: applet: assign the same nice value to a new appctx as its owner task
- MINOR: task: Fix compiler warning.
- BUG/MEDIUM: tasks: Use the local runqueue when building without threads.
- MINOR: tasks: Don't define rqueue if we're building without threads.
- BUG/MINOR: unix: Make sure we can transfer abns sockets on seamless reload.
- MINOR: lua: Increase debug information
- BUG/MEDIUM: threads: handle signal queue only in thread 0
- BUG/MINOR: don't ignore SIG{BUS,FPE,ILL,SEGV} during signal processing
- BUG/MINOR: signals: ha_sigmask macro for multithreading
- BUG/MAJOR: map: fix a segfault when using http-request set-map
- DOC: regression testing: Add a short starting guide.
- MINOR: tasks: Make sure we correctly init and deinit a tasklet.
- BUG/MINOR: tasklets: Just make sure we don't pass a tasklet to the handler.
- BUG/MINOR: lua: Segfaults with wrong usage of types.
- BUG/MAJOR: ssl: Random crash with cipherlist capture
- BUG/MAJOR: ssl: OpenSSL context is stored in non-reserved memory slot
- BUG/MEDIUM: ssl: do not store pkinfo with SSL_set_ex_data
- MINOR: tests: First regression testing file.
- MINOR: reg-tests: Add reg-tests/README file.
- MINOR: reg-tests: Add a few regression testing files.
- DOC: Add new REGTEST tag info about reg testing.
- BUG/MEDIUM: fd: Don't modify the update_mask in fd_dodelete().
- MINOR: Some spelling cleanup in the comments.
- BUG/MEDIUM: threads: Use the sync point to check active jobs and exit
- MINOR: threads: Be sure to remove threads from all_threads_mask on exit
- REGTEST/MINOR: Wrong URI in a reg test for SSL/TLS.
- REGTEST/MINOR: Set HAPROXY_PROGRAM default value.
- REGTEST/MINOR: Add levels to reg-tests target.
- BUG/MAJOR: Stick-tables crash with segfault when the key is not in the stick-table
- BUG/BUILD: threads: unbreak build without threads
- BUG/MAJOR: stick_table: Complete incomplete SEGV fix
- MINOR: stick-tables: make stktable_release() do nothing on NULL
- BUG/MEDIUM: lua: possible CLOSE-WAIT state with '\n' headers
- MINOR: startup: change session/process group settings
- MINOR: systemd: consider exit status 143 as successful
- REGTEST/MINOR: Wrong URI syntax.
- CLEANUP: dns: remove obsolete macro DNS_MAX_IP_REC
- CLEANUP: dns: inacurate comment about prefered IP score
- MINOR: dns: fix wrong score computation in dns_get_ip_from_response
- MINOR: dns: new DNS options to allow/prevent IP address duplication
- REGTEST/MINOR: Unexpected curl URL globling.
- BUG/MINOR: ssl: properly ref-count the tls_keys entries
- MINOR: h2: keep a count of the number of conn_streams attached to the mux
- BUG/MEDIUM: h2: don't accept new streams if conn_streams are still in excess
- MINOR: h2: add the mux and demux buffer lengths on "show fd"
- BUG/MEDIUM: h2: never leave pending data in the output buffer on close
- BUG/MEDIUM: h2: make sure the last stream closes the connection after a timeout
- MINOR: tasklet: Set process to NULL.
- MINOR: buffer: implement a new file for low-level buffer manipulation functions
- MINOR: buffer: switch buffer sizes and offsets to size_t
- MINOR: buffer: add a few basic functions for the new API
- MINOR: buffer: Introduce b_sub(), b_add(), and bo_add()
- MINOR: buffer: Add b_set_data().
- MINOR: buffer: introduce b_realign_if_empty()
- MINOR: compression: pass the channel to http_compression_buffer_end()
- MINOR: channel: add a few basic functions for the new buffer API
- MINOR: channel/buffer: use c_realign_if_empty() instead of buffer_realign()
- MINOR: channel/buffer: replace buffer_slow_realign() with channel_slow_realign() and b_slow_realign()
- MEDIUM: channel: make channel_slow_realign() take a swap buffer
- MINOR: h2: use b_slow_realign() with the trash as a swap buffer
- MINOR: buffer: remove buffer_slow_realign() and the swap_buffer allocation code
- MINOR: channel/buffer: replace b_{adv,rew} with c_{adv,rew}
- MINOR: buffer: replace calls to buffer_space_wraps() with b_space_wraps()
- MINOR: buffer: remove bi_getblk() and bi_getblk_nc()
- MINOR: buffer: split bi_contig_data() into ci_contig_data and b_config_data()
- MINOR: buffer: remove bi_ptr()
- MINOR: buffer: remove bo_ptr()
- MINOR: buffer: remove bo_end()
- MINOR: buffer: remove bi_end()
- MINOR: buffer: remove bo_contig_data()
- MINOR: buffer: merge b{i,o}_contig_space()
- MINOR: buffer: replace bo_getblk() with direction agnostic b_getblk()
- MINOR: buffer: replace bo_getblk_nc() with b_getblk_nc() which takes an offset
- MINOR: buffer: replace bi_del() and bo_del() with b_del()
- MINOR: buffer: convert most b_ptr() calls to c_ptr()
- MINOR: h1: make h1_measure_trailers() take the byte count in argument
- MINOR: h2: clarify the fact that the send functions are unsigned
- MEDIUM: h2: prevent the various mux encoders from modifying the buffer
- MINOR: h1: make h1_skip_chunk_crlf() not depend on b_ptr() anymore
- MINOR: h1: make h1_parse_chunk_size() not depend on b_ptr() anymore
- MINOR: h1: make h1_measure_trailers() use an offset and a count
- MEDIUM: h2: do not use buf->o anymore inside h2_snd_buf's loop
- MEDIUM: h2: don't use b_ptr() nor b_end() anymore
- MINOR: buffer: get rid of b_end() and b_to_end()
- MINOR: buffer: make b_getblk_nc() take const pointers
- MINOR: buffer: make b_getblk_nc() take size_t for the block sizes
- MEDIUM: connection: make xprt->snd_buf() take the byte count in argument
- MEDIUM: mux: make mux->snd_buf() take the byte count in argument
- MEDIUM: connection: make xprt->rcv_buf() use size_t for the count
- MEDIUM: mux: make mux->rcv_buf() take a size_t for the count
- MINOR: connection: add a flags argument to rcv_buf()
- MINOR: connection: add a new receive flag : CO_RFL_BUF_WET
- MINOR: buffer: get rid of b_ptr() and convert its last users
- MINOR: buffer: use b_room() to determine available space in a buffer
- MINOR: buffer: replace buffer_not_empty() with b_data() or c_data()
- MINOR: buffer: replace buffer_empty() with b_empty() or c_empty()
- MINOR: buffer: make bo_putchar() use b_tail()
- MINOR: buffer: replace buffer_full() with channel_full()
- MINOR: buffer: replace bi_space_for_replace() with ci_space_for_replace()
- MINOR: buffer: replace buffer_pending() with ci_data()
- MINOR: buffer: replace buffer_flush() with c_adv(chn, ci_data(chn))
- MINOR: buffer: use c_head() instead of buffer_wrap_sub(c->buf, p-o)
- MINOR: buffer: use b_orig() to replace most references to b->data
- MINOR: buffer: Use b_add()/bo_add() instead of accessing b->i/b->o.
- MINOR: channel: remove almost all references to buf->i and buf->o
- MINOR: channel: Add co_set_data().
- MEDIUM: channel: adapt to the new buffer API
- MINOR: checks: adapt to the new buffer API
- MEDIUM: h2: update to the new buffer API
- MINOR: buffer: remove unused bo_add()
- MEDIUM: spoe: use the new buffer API for the SPOE buffer
- MINOR: stats: adapt to the new buffers API
- MINOR: cli: use the new buffer API
- MINOR: cache: use the new buffer API
- MINOR: stream-int: use the new buffer API
- MINOR: stream: use wrappers instead of directly manipulating buffers
- MINOR: backend: use new buffer API
- MEDIUM: http: use wrappers instead of directly manipulating buffers states
- MINOR: filters: convert to the new buffer API
- MINOR: payload: convert to the new buffer API
- MEDIUM: h1: port to new buffer API.
- MINOR: flt_trace: adapt to the new buffer API
- MEDIUM: compression: start to move to the new buffer API
- MINOR: lua: use the wrappers instead of directly manipulating buffer states
- MINOR: buffer: convert part bo_putblk() and bi_putblk() to the new API
- MINOR: buffer: adapt buffer_slow_realign() and buffer_dump() to the new API
- MAJOR: start to change buffer API
- MINOR: buffer: remove the check for output on b_del()
- MINOR: buffer: b_set_data() doesn't truncate output data anymore
- MINOR: buffer: rename the "data" field to "area"
- MEDIUM: buffers: move "output" from struct buffer to struct channel
- MINOR: buffer: replace bi_fast_delete() with b_del()
- MINOR: buffer: replace b{i,o}_put* with b_put*
- MINOR: buffer: add a new file for ist + buffer manipulation functions
- MINOR: checks: use b_putist() instead of b_putstr()
- MINOR: buffers: remove b_putstr()
- CLEANUP: buffer: minor cleanups to buffer.h
- MINOR: buffers/channel: replace buffer_insert_line2() with ci_insert_line2()
- MINOR: buffer: replace buffer_replace2() with b_rep_blk()
- MINOR: buffer: rename the data length member to '->data'
- MAJOR: buffer: finalize buffer detachment
- MEDIUM: chunks: make the chunk struct's fields match the buffer struct
- MAJOR: chunks: replace struct chunk with struct buffer
- DOC: buffers: document the new buffers API
- DOC: buffers: remove obsolete docs about buffers
- MINOR: tasklets: Don't attempt to add a tasklet in the list twice.
- MINOR: connections/mux: Add a new "subscribe" method.
- MEDIUM: connections/mux: Revamp the send direction.
- MINOR: connection: simplify subscription by adding a registration function
- BUG/MINOR: http: Set brackets for the unlikely macro at the right place
- BUG/MINOR: build: Fix compilation with debug mode enabled
- BUILD: Generate sha256 checksums in publish-release
- MINOR: debug: Add check for CO_FL_WILL_UPDATE
- MINOR: debug: Add checks for conn_stream flags
- MINOR: ist: Add the function isteqi
- BUG/MEDIUM: threads: Fix the exit condition of the thread barrier
- BUG/MEDIUM: mux_h2: Call h2_send() before updating polling.
- MINOR: buffers: simplify b_contig_space()
- MINOR: buffers: split b_putblk() into __b_putblk()
- MINOR: buffers: add b_xfer() to transfer data between buffers
- DOC: add some design notes about the new layering model
- MINOR: conn_stream: add a new CS_FL_REOS flag
- MINOR: conn_stream: add an rx buffer to the conn_stream
- MEDIUM: conn_stream: add cs_recv() as a default rcv_buf() function
- MEDIUM: stream-int: automatically call si_cs_recv_cb() if the cs has data on wake()
- MINOR: h2: make each H2 stream support an intermediary input buffer
- MEDIUM: h2: make h2_frt_decode_headers() use an intermediary buffer
- MEDIUM: h2: make h2_frt_transfer_data() copy via an intermediary buffer
- MEDIUM: h2: centralize transfer of decoded frames in h2_rcv_buf()
- MEDIUM: h2: move headers and data frame decoding to their respective parsers
- MEDIUM: buffers: make b_xfer() automatically swap buffers when possible
- MEDIUM: h2: perform a single call to the data layer in demux()
- MEDIUM: h2: don't call data_cb->recv() anymore
- MINOR: h2: make use of CS_FL_REOS to indicate that end of stream was seen
- MEDIUM: h2: use the default conn_stream's receive function
- DOC: add more design feedback on the new layering model
- MINOR: h2: add the error code and the max/last stream IDs to "show fd"
- BUG/MEDIUM: stream-int: don't immediately enable reading when the buffer was reportedly full
- BUG/MEDIUM: stats: don't ask for more data as long as we're responding
- BUG/MINOR: servers: Don't make "server" in a frontend fatal.
- BUG/MEDIUM: tasks: make sure we pick all tasks in the run queue
- BUG/MEDIUM: tasks: Decrement rqueue_size at the right time.
- BUG/MEDIUM: tasks: use atomic ops for active_tasks_mask
- BUG/MEDIUM: tasks: Make sure there's no task left before considering inactive.
- MINOR: signal: don't pass the signal number anymore as the wakeup reason
- MINOR: tasks: extend the state bits from 8 to 16 and remove the reason
- MINOR: tasks: Add a flag that tells if we're in the global runqueue.
- BUG/MEDIUM: tasks: make __task_unlink_rq responsible for the rqueue size.
- MINOR: queue: centralize dequeuing code a bit better
- MEDIUM: queue: make pendconn_free() work on the stream instead
- DOC: queue: document the expected locking model for the server's queue
- MINOR: queue: make sure pendconn->strm->pend_pos is always valid
- MINOR: queue: use a distinct variable for the assigned server and the queue
- MINOR: queue: implement pendconn queue locking functions
- MEDIUM: queue: get rid of the pendconn lock
- MINOR: tasks: Make active_tasks_mask volatile.
- MINOR: tasks: Make global_tasks_mask volatile.
- MINOR: pollers: Add a way to wake a thread sleeping in the poller.
- MINOR: threads/queue: Get rid of THREAD_WANT_SYNC in the queue code.
- BUG/MEDIUM: threads/sync: use sched_yield when available
- MINOR: ssl: BoringSSL matches OpenSSL 1.1.0
- BUG/MEDIUM: h2: prevent orphaned streams from blocking a connection forever
- BUG/MINOR: config: stick-table is not supported in defaults section
- BUILD/MINOR: threads: unbreak build with threads disabled
- BUG/MINOR: threads: Handle nbthread == MAX_THREADS.
- BUG/MEDIUM: threads: properly fix nbthreads == MAX_THREADS
- MINOR: threads: move "nbthread" parsing to hathreads.c
- BUG/MEDIUM: threads: unbreak "bind" referencing an incorrect thread number
- MEDIUM: proxy_protocol: Convert IPs to v6 when protocols are mixed
- BUILD/MINOR: compiler: fix offsetof() on older compilers
- SCRIPTS: git-show-backports: add missing quotes to "echo"
- MINOR: threads: add more consistency between certain variables in no-thread case
- MEDIUM: hathreads: implement a more flexible rendez-vous point
- BUG/MEDIUM: cli: make "show fd" thread-safe
When threads are disabled, some variables such as tid and tid_bit are
still checked everywhere, the MAX_THREADS_MASK macro is ~0UL while
MAX_THREADS is 1, and the all_threads_mask variable is replaced with a
macro forced to zero. The compiler cannot optimize away all this code
involving checks on tid and tid_bit, and we end up in special cases
where all_threads_mask has to be specifically tested for being zero or
not. It is not even certain the code paths are always equivalent when
testing without threads and with nbthread 1.
Let's change this to make sure we always present a single thread when
threads are disabled, and have the relevant values declared as constants
so that the compiler can optimize all the tests away. Now we have
MAX_THREADS_MASK set to 1, all_threads_mask set to 1, tid set to zero
and tid_bit set to 1. Doing just this has removed 4 kB of code in the
no-thread case.
A few checks for all_threads_mask==0 have been removed since it never
happens anymore.
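A hedged sketch of what the single-thread definitions amount to (the real
header differs in its details, but these are the constants listed above):

  #ifndef USE_THREAD
  #define MAX_THREADS        1
  #define MAX_THREADS_MASK   1UL
  #define all_threads_mask   1UL   /* exactly one thread, always present */
  #define tid                0     /* constants: tests can be folded away */
  #define tid_bit            1UL
  #endif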
The purpose is to make sure that all variables which directly depend
on this nbthread argument are set at the right moment. For now only
all_threads_mask needs to be set. It used to be set while calling
thread_sync_init() which is called too late for certain checks. The
same function handles threads and non-threads, which removes the need
for some thread-specific knowledge from cfgparse.c.
While moving Olivier's patch for nbthread==MAX_THREADS in commit
3e12304 ("BUG/MINOR: threads: Handle nbthread == MAX_THREADS.") to
hathreads.c, I missed one place resulting in the computed thread mask
being used as the thread count, which is worse than the initial bug.
Let's fix it properly this time.
This fix must be backported to 1.8 just like the other one.
Add a new pipe, one per thread, so that we can write on it to wake a thread
sleeping in a poller, and use it to wake threads supposed to take care of a
task, if they are all sleeping.
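A minimal sketch of the mechanism (the array name is illustrative; the read
side of each pipe is registered in the corresponding thread's poller):

  #include <unistd.h>

  static int poller_wr_pipe[MAX_THREADS];   /* write side of each thread's pipe */

  /* wake thread <thr> if it is sleeping in its poller: a single byte is
   * enough to make its poll()/epoll_wait() return immediately */
  void wake_thread(int thr)
  {
      char c = 'c';

      if (write(poller_wr_pipe[thr], &c, 1) < 0) {
          /* a short or failed write just means a wakeup is already pending */
      }
  }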
Chunks are only a subset of a buffer (a non-wrapping version with no head
offset). Despite this we still carry a lot of duplicated code between
buffers and chunks. Replacing chunks with buffers would significantly
reduce the maintenance efforts. This first patch renames the chunk's
fields to match the name and types used by struct buffers, with the goal
of isolating the code changes from the declaration changes.
Most of the changes were made with spatch using this coccinelle script :
@rule_d1@
typedef chunk;
struct chunk chunk;
@@
- chunk.str
+ chunk.area
@rule_d2@
typedef chunk;
struct chunk chunk;
@@
- chunk.len
+ chunk.data
@rule_i1@
typedef chunk;
struct chunk *chunk;
@@
- chunk->str
+ chunk->area
@rule_i2@
typedef chunk;
struct chunk *chunk;
@@
- chunk->len
+ chunk->data
Some minor updates to 3 http functions had to be performed so that they
take size_t instead of int in order to match the unsigned length used here.
Now the buffers only contain the header and a pointer to the storage
area which can be anywhere. This will significantly simplify buffer
swapping and will make it possible to map chunks on buffers as well.
The buf_empty variable was removed, as now it's enough to have size==0
and area==NULL to designate the empty buffer (thus a non-allocated head
is the empty buffer by default). buf_wanted for now is indicated by
size==0 and area==(void *)1.
The channels and the checks now embed the buffer's head, and the only
pointer is to the storage area. This slightly increases the unallocated
buffer size (3 extra ints for the empty buffer) but considerably
simplifies dynamic buffer management. It will also later permit to
detach unused checks.
The way the struct buffer is arranged has proven quite efficient on a
number of tests, which makes sense given that size is always accessed
and often first, followed by the other ones.
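For reference, the resulting header is along these lines (a sketch of the
layout described above):

  struct buffer {
      size_t size;   /* total allocated size of the storage area */
      char  *area;   /* pointer to the storage area, which can be anywhere */
      size_t data;   /* amount of data present, including wrapping */
      size_t head;   /* start offset of the data within the area */
  };

  /* empty, non-allocated buffer : size == 0 && area == NULL
   * "wanted" buffer placeholder : size == 0 && area == (void *)1 */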
Change the way the process groups are set. Indeed setsid() was called
for every process, which caused the worker to have a different process
group than the master.
This patch behaves in a better way, as sketched below:
- In daemon mode only, each child does a setsid()
- In master worker + daemon mode, the setsid() is done in the master before
forking the children
- In any foreground mode, we don't do a setsid()
Could be backported to 1.8, but the master-worker mode is mostly used
with systemd, which relies on cgroups, so that won't affect many people.
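The sketch below summarizes where setsid() ends up being called; it is only
an illustration, the mode flags being the ones haproxy already uses:

  #include <unistd.h>

  /* process-group policy as described above */
  static void apply_session_policy(void)
  {
      if (global.mode & MODE_MWORKER) {
          if (global.mode & MODE_DAEMON)
              setsid();   /* once in the master, before forking the workers */
          return;         /* foreground master-worker: no setsid() */
      }
      if (global.mode & MODE_DAEMON)
          setsid();       /* plain daemon mode: done in each forked child */
      /* any other foreground mode: nothing to do */
  }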
The build without threads was once again broken.
This issue was introduced in commit ba86c6c ("MINOR: threads: Be sure to
remove threads from all_threads_mask on exit").
This is exactly the same problem as last time it happened, because of
all_threads_mask not being defined with USE_THREAD=
This must be backported in 1.8
When HAProxy is started with several threads, each running thread holds a
bit in the bitfield all_threads_mask. This bitfield is used here and there
to check which threads are registered to take part in a specific
processing. So when a thread exits, it seems normal to remove it from
all_threads_mask.
No direct impact could be identified with this right now, but it would
be better to backport it to 1.8 as a preventive measure to avoid complex
situations like the one in the previous bug.
When HAProxy is shutting down, it exits the polling loop when there are no
jobs anymore (jobs == 0). When there is no thread, it works pretty well, but
when HAProxy is started with several threads, a thread can decide to exit
because the jobs variable reached 0 while another one is still processing a
task (e.g. a health check). At this stage, the running thread could decide
to request a synchronization. But because at least one of them has already
gone, the others will wait indefinitely in the sync point and the process
will never die.
To fix the bug, when the first thread (and only this one) detects there are
no active jobs anymore, it requests a synchronization. And in the sync
point, all threads check whether the jobs variable reached 0 to exit the
polling loop.
This patch must be backported in 1.8.
The behavior of sigprocmask in a multithreaded environment is
undefined.
The new macro ha_sigmask() calls either pthread_sigmask() or
sigprocmask(), depending on whether haproxy was built with thread support
or not.
This should be backported to 1.8.
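The wrapper essentially boils down to this (a sketch assuming the usual
USE_THREAD build flag):

  #include <signal.h>

  #ifdef USE_THREAD
  /* sigprocmask() is undefined with threads, use the pthread API instead */
  #define ha_sigmask(how, set, oldset)  pthread_sigmask(how, set, oldset)
  #else
  /* single-threaded build: the classic call is fine */
  #define ha_sigmask(how, set, oldset)  sigprocmask(how, set, oldset)
  #endif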
Signals were handled in all threads, which caused some signals to be lost
from time to time. To avoid a complicated lock system (threads+signals),
we prefer handling the signals in one thread, avoiding concurrent access.
The side effect of this bug was that some processes would sometimes not
leave during a reload.
This patch must be backported in 1.8.
There's no real reason to have a specific scheduler for applets anymore, so
nuke it and just use tasks. This comes with some benefits, the first one
being that applets cannot induce high latencies anymore since they share
nice values with other tasks. Later it will be possible to configure the
applets' nice value. The second benefit is that the applet scheduler was
not very thread-friendly, having a big lock around it in anticipation of this
change. Thus applet-intensive workloads should now scale much better with
threads.
Some more improvement is possible now : some applets also use a task to
handle timers and timeouts. These ones could now be simplified to use only
one task.
In preparation for thread-specific runqueues, change the task API so that
the callback takes 3 arguments, the task itself, the context, and the state,
those were retrieved from the task before. This will allow these elements to
change atomically in the scheduler while the application uses the copied
value, and even to have NULL tasks later.
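The change to the callback shape is roughly the following (a sketch; the
typedef names are only illustrative):

  struct task;

  /* before: the handler had to dereference the task to find its context and
   * state, so the scheduler could not update them atomically meanwhile */
  typedef struct task *(*old_task_handler)(struct task *t);

  /* after: context and state are passed as copies, letting the scheduler
   * change them (or even pass a NULL task later) while the handler runs */
  typedef struct task *(*new_task_handler)(struct task *t, void *context,
                                           unsigned short state);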
Export localpeer as the environment variable $HAPROXY_LOCALPEER,
making it possible to use this variable in the configuration file.
This is useful in the case of a configuration synchronized between
peers.
When doing a seamless reload, while receiving the sockets from the old
process, the new process will die if the socket has been bound to a
specific interface.
This happens because the code that parses this information bogusly tries
to set xfer_sock->namespace, while it should be setting xfer_sock->iface.
This should be backported to 1.8.
Krishna Kumar reported a 100% cpu usage with a configuration using
cpu-map and a high number of threads.
Indeed, this minimal configuration is enough to reproduce the issue :
  global
      nbthread 40
      cpu-map auto:1/1-40 0-39
  frontend test
      bind :8000
This is due to a wrong type in a shift operator (int vs unsigned long int),
causing an endless loop while applying the cpu affinity on threads. The same
issue may also occur with nbproc under FreeBSD. This commit addresses both
cases.
This patch must be backported to 1.8.
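The faulty pattern is the classic one below (an illustration of the type
issue, not the literal haproxy code):

  static unsigned long cpu_to_mask(int cpu)
  {
      /* BUG: with "1 << cpu" the constant is an int, so for cpu >= 32 the
       * shift is undefined and the affinity loop never terminates */
      /* return 1 << cpu; */

      /* fix: shift an unsigned long constant instead */
      return 1UL << cpu;
  }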
The code tries to strip trailing spaces from arguments but due to missing
brackets, it will always exit.
It can be reproduced with this (silly) example:
$ haproxy -f /etc/haproxy/haproxy.cfg -sf 1234 "1235 " 1236
$ echo $?
1
This was introduced in commit 236062f7c ("MINOR: init: emit warning when
-sf/-sd cannot parse argument")
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@gmail.com>
Previously, -sf and -sd command line parsing used atol which cannot
detect errors. I had a problem where I was doing -sf "$pid1 $pid2 $pid"
and it was sending the gracefully terminate signal only to the first pid.
The change uses strtol and checks endptr and errno to see if the parsing
worked. It will exit when the pid list is not parsed.
[wt: this should be backported to 1.8]
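A minimal sketch of the stricter parsing (the helper name is illustrative;
as noted above, the real code additionally strips trailing spaces):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* parse one pid given to -sf/-st and exit on anything that is not a
   * clean positive integer, instead of silently stopping like atol() */
  static long parse_oldpid_or_die(const char *arg)
  {
      char *endptr;
      long pid;

      errno = 0;
      pid = strtol(arg, &endptr, 10);
      if (errno != 0 || endptr == arg || *endptr != '\0' || pid <= 0) {
          fprintf(stderr, "Cannot parse pid '%s'\n", arg);
          exit(1);
      }
      return pid;
  }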
fd_insert() is currently called just after setting the owner and iocb,
but proceeding like this prevents the operation from being atomic and
requires a lock to protect the maxfd computation in another thread from
meeting an incompletely initialized FD and computing a wrong maxfd.
Fortunately for now all fdtab[].owner are set before calling fd_insert(),
and the first lock in fd_insert() enforces a memory barrier so the code
is safe.
This patch moves the initialization of the owner and iocb to fd_insert()
so that the function will be able to properly arrange its operations and
remain safe even when modified to become lockless. There's no other change
beyond the internal API.
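Conceptually, a call site changes from assigning the fields by hand to passing
them to fd_insert(); a sketch assuming the new prototype takes at least the fd,
the owner and the iocb (the real one may carry more arguments):

    /* before: not atomic, another thread could observe a half-initialized entry */
    fdtab[fd].owner = conn;
    fdtab[fd].iocb  = conn_fd_handler;
    fd_insert(fd);

    /* after: fd_insert() sets owner and iocb itself, so it can order its
     * operations and never expose a half-initialized entry
     */
    fd_insert(fd, conn, conn_fd_handler);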
Since only select() and poll() still make use of maxfd, let's move
its computation right there in the pollers themselves, and only
during each fd update pass. The computation doesn't need a lock
anymore, only a few atomic ops. It will be accurate, be done much
less often and will not be required anymore in the FD's fast path.
This provides a small performance increase of about 1% in connection
rate when using epoll since we get rid of this computation which was
performed under a lock.
An #ifdef/#endif on USE_THREAD was added in commit 0048dd04 ("MINOR: threads:
Fix build when we're not compiling with threads.") to conditionally define the
start_lock variable, because HA_SPINLOCK_T is only defined when HAProxy is
compiled with threads.
In fact, to do that, we should use the macro __decl_hathreads instead.
If commit 0048dd04 is backported to 1.8, this one can also be backported.
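A sketch of the difference, assuming __decl_hathreads() expands to its argument
when USE_THREAD is defined and to nothing otherwise:

    /* before: explicit conditional compilation at the declaration site */
    #ifdef USE_THREAD
    static HA_SPINLOCK_T start_lock;
    #endif

    /* after: the macro hides the condition */
    __decl_hathreads(static HA_SPINLOCK_T start_lock);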
Only declare the start_lock if threads are compiled in, otherwise
HA_SPINLOCK_T won't be defined.
This should be backported to 1.8 when/if
1605c7ae61 is backported.
A missing test causes a write(-1, $PID) to appear in strace output when
in master-worker mode. This is totally harmless though.
This fix must be backported to 1.8.
Marc Fournier reported an interesting case when using threads with the
master-worker mode : sometimes, a listener would have its FD closed
during startup. Sometimes it could even be health checks seeing this.
What happens is that after the threads are created, and the pollers
enabled on each thread, the master-worker pipe is registered, and at
the same time a close() is performed on the write side of this pipe
since the children must not use it.
But since this is replicated in every thread, what happens is that the
first thread closes the pipe, thus releases the FD, and the next thread
starting a listener in parallel gets this FD reassigned. Then another
thread closes the FD again, which this time corresponds to the listener.
It can also happen with the health check sockets if they're started
early enough.
This patch splits the mworker_pipe_register() function in two, so that
the close() of the write side of the FD is performed very early after the
fork() and long before threads are created (we don't need to delay it
anyway). Only the pipe registration is done in the threaded code since
it is important that the pollers are properly allocated for this.
The mworker_pipe_register() function now takes care of registering the
pipe only once, and this is guaranteed by a new surrounding lock.
The call to protocol_enable_all() looks fragile in theory since it
scans the list of proxies and their listeners, though in practice
all threads scan the same list and take the same locks for each
listener so it's not possible that any of them escapes the process
and finishes before all listeners are started. And the operation is
idempotent.
This fix must be backported to 1.8. Thanks to Marc for providing very
detailed traces clearly showing the problem.
fd_cache_num is the number of FDs in the FD cache. It is a global variable. So
it is suboptimal because we may be led to consider there are waiting FDs
for the current thread in the FD cache while in fact all FDs are assigned to the
other threads. So, in such cases, the polling loop will be evaluated many more
times than necessary.
Instead, we now check if the thread id is set in the bitfield fd_cache_mask.
[wt: it's not exactly a bug, rather a design limitation of the threads code
which was not addressed in time for the 1.8 release. It can appear more
often than we initially predicted, when more threads are running than
the number of assigned CPU cores, or when certain threads spend
milliseconds computing crypto keys while other threads spin on
epoll_wait(0)=0]
This patch should be backported to 1.8.
A number of counters have been added at special places helping better
understanding certain bug reports. These counters are maintained per
thread and are shown using "show activity" on the CLI. The "clear
counters" commands also reset these counters. The output is sent as a
single write(), which currently produces up to about 7 kB of data for
64 threads. If more counters are added, it may be necessary to write
into multiple buffers, or to reset the counters.
To backport to 1.8 to help collect more detailed bug reports.
This one avoids inflating some structures when threads are
disabled. Now struct global is 1.4 kB instead of 33 kB.
Should be backported to 1.8 for ease of backporting of upcoming
patches.
The copy_argv() function lacks a check on '-' to remove the -x, -sf and
-st parameters.
When reloading a master process with a path starting with /st, /sf, or
/x.. the copy_argv() function skipped argv[0], leading to an execvp()
without the binary.
Closing the standard IO FDs (0,1,2) can be troublesome, especially in
the case of the master-worker.
Instead of closing those FDs, they are now pointing to /dev/null which
prevents sending debugging messages to the wrong FDs.
This patch could be backported in 1.8.
This patch makes sure that a frontend socket that gets created after
initialization won't be closed when the master gets re-executed.
When used in daemon mode, the master-worker is closing the FDs 0, 1, 2
after the fork of the children.
When the master was reloading, those FDs were assigned again during the
parsing of the configuration (probably for some listeners), and the
workers were closing them thinking it was the stdio.
This patch must be backported to 1.8.
The number of async fds is computed considering the maxconn, the number
of sides using ssl and the number of engines using async mode.
This patch should be backported to haproxy 1.8
There's a nasty case related to signaling all processes via SIGUSR1.
Since the master process still holds the peers sockets, the old process
trying to connect to the new one to teach it its tables has a risk to
connect to the master instead, which will not do anything, causing the
old process to hang instead of quitting.
This patch ensures we correctly close the peers in the master process
on startup, just like it is done for proxies. Ultimately we would rather
have a complete list of listeners to avoid such issues. But that's a bit
trickier as it would require using unbind_all() and avoiding side effects
the master could cause to other processes (like unlinking unix sockets).
To be backported to 1.8.
As with the call to cpuset_setaffinity(), FreeBSD expects the argument to
pthread_setaffinity_np() to be a cpuset_t, not an unsigned long, so the call
was silently failing.
This should probably be backported to 1.8.
This allows a calling script to show the first startup output and
know when to stop reading from stdout so haproxy can daemonize.
To be backported to 1.8.
Check if master-worker pipe getenv succeeded, also allow pipe fd 0 as
valid. On FreeBSD in quiet mode the stdin/stdout/stderr are closed
which lets the mworker_pipe use fd 0 and fd 1. Additionally, exit()
upon failure to create or get the master-worker pipe.
This needs to be backported to 1.8.
This patch changes the behavior of the master during the exit of a
worker.
When a worker exits with an error code, for example in the case of a
segfault, all workers are now killed and the master leaves.
If you don't want this behavior you can use the option
"master-worker no-exit-on-failure".
During the migration to the second version of the pools, the new
functions and pool pointers were all called "pool_something2()" and
"pool2_something". Now there's no more pool v1 code and it's a real
pain to still have to deal with this. Let's clean this up now by
removing the "2" everywhere, and by renaming the pool heads
"pool_head_something".
Rename the global variable "proxy" to "proxies_list".
There have been multiple proxies in haproxy for quite some time, and "proxy"
is a potential source of bugs, a number of functions have a "proxy" argument,
and some code used "proxy" when it really meant "px" or "curproxy". It worked
by pure luck, because it usually happened while parsing the config, and thus
"proxy" pointed to the currently parsed proxy, but we should probably not
rely on this.
[wt: some of these are definitely fixes that are worth backporting]
Now, it is possible to bind CPUs at the thread level instead of the process level
by defining a thread set in "cpu-map" directives. Thus, its format is now:
cpu-map [auto:]<process-set>[/<thread-set>] <cpu-set>...
where <process-set> and <thread-set> must follow the format:
all | odd | even | number[-[number]]
Having both a process range and a thread range at the same time with the "auto:"
prefix is not supported. Only one range is supported, the other one must be a
fixed number. But both are allowed when there is no "auto:" prefix.
Because it is possible to define a mapping for a process and another for a
thread on this process, threads will be bound on the intersection of their
mapping and the one of the process on which they are attached. If the
intersection is null, no specific binding will be set for the threads.
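For example, following the format above (illustrative values only):

    cpu-map auto:1/1-4 0-3   # threads 1..4 of process 1 bound to CPUs 0..3 respectively
    cpu-map 1/all 0-3        # all threads of process 1 may run on CPUs 0 to 3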
While using mmap() to allocate pools for debugging purposes, kill -USR1 caused
libc aborts in deinit() on two calls to free() on proxies' tasks and the global
listener task. The issue comes from the fact that we're using free() to release
a task instead of task_free(), so the task was allocated from a pool and released
using a different method.
This bug has been there since at least 1.5, so a backport is desirable to all
maintained versions.
Since we switched to notify mode in the systemd unit file in commit
d6942c8, haproxy won't start if the daemon keyword is present in the
configuration.
This change makes sure that haproxy remains in foreground when using
systemd mode and adds a note in the documentation.
This patch adds support for `Type=notify` to the systemd unit.
Supporting `Type=notify` improves both starting and reloading
of the unit, because systemd will be notified when the action has completed.
See this quote from `systemd.service(5)`:
> Note however that reloading a daemon by sending a signal (as with the
> example line above) is usually not a good choice, because this is an
> asynchronous operation and hence not suitable to order reloads of
> multiple services against each other. It is strongly recommended to
> set ExecReload= to a command that not only triggers a configuration
> reload of the daemon, but also synchronously waits for it to complete.
By making systemd aware of a reload in progress it is able to wait until
the reload actually succeeded.
This patch introduces both a new `USE_SYSTEMD` build option which controls
including the sd-daemon library as well as a `-Ws` runtime option which
runs haproxy in master-worker mode with systemd support.
When haproxy is running in master-worker mode with systemd support it will
send status messages to systemd using `sd_notify(3)` in the following cases:
- The master process forked off the worker processes (READY=1)
- The master process entered the `mworker_reload()` function (RELOADING=1)
- The master process received the SIGUSR1 or SIGTERM signal (STOPPING=1)
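For instance, the READY notification above boils down to a call like the
following sketch, using the public sd-daemon API (the surrounding master
code is omitted):

    #include <systemd/sd-daemon.h>

    static void notify_systemd_ready(void)
    {
            /* tell systemd the workers are forked and the service is up;
             * the first argument controls whether NOTIFY_SOCKET is unset.
             * RELOADING=1 and STOPPING=1 are sent the same way later on.
             */
            sd_notify(0, "READY=1");
    }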
Change the unit file to specify `Type=notify` and replace master-worker
mode (`-W`) with master-worker mode with systemd support (`-Ws`).
Future evolutions of this feature could include making use of the `STATUS`
feature of `sd_notify()` to send information about the number of active
connections to systemd. This would require bidirectional communication
between the master and the workers and thus is left for future work.
applets_active_queue is the active queue size. It is a global variable, which is
suboptimal because we may be led to consider there are active applets for a
thread while in fact all active applets are assigned to the other threads. So, in
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_applets_mask.
This is specific to threads, no backport is needed.
tasks_run_queue is the run queue size. It is a global variable, which is
suboptimal because we may be led to consider there are active tasks for a
thread while in fact all active tasks are assigned to the other threads. So, in
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_tasks_mask.
Another change has been made in process_runnable_tasks. Now, we always limit the
number of tasks processed to 200.
This is specific to threads, no backport is needed.
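The per-thread test itself is just a bitmask check; a minimal sketch, assuming
tid_bit is (1UL << tid) for the current thread:

    /* skip the expensive run queue scan when no task is assigned to us */
    if (!(active_tasks_mask & tid_bit))
            return;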
Since the commit cd7879adc ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too"), the log buffers are allocated after the proxies startup. So log
messages produced during this startup was ignored.
To fix the bug, we restore the initialization of these buffers before proxies
startup.
This is specific to threads, no backport is needed.
At the end of the master initialisation, a call to protocol_unbind_all()
was made, in order to close all the FDs.
Unfortunately, this function closes the inherited FDs (fd@), so upon reload
the master wasn't able to reload a configuration with those FDs.
The create_listeners() function now stores a flag to specify whether the fd
was inherited or not.
The protocol_unbind_all() call is replaced by mworker_cleanlisteners() +
deinit_pollers().
Do not use the deinit() function during a reload: it's dangerous and
might be subject to double frees, segfaults and hazardous behavior if
it's called twice in the case of an execvp failure.
After execvp fails, the signals were ignored, preventing any new reload
attempt. This is now fixed by returning to the top of the mworker_wait()
function once execvp has failed.
When the master worker fails the execvp, it reports the wrong error
"Cannot allocate memory".
We now display the accurate error corresponding to the errno value.
If haproxy is started using the name of the binary only (i.e.
not using a relative or absolute path) the `execv` in
`mworker_reload` fails with `ENOENT`, because it does not
examine the `PATH`:
[WARNING] 315/161139 (7) : Reexecuting Master process
[WARNING] 315/161139 (7) : Cannot allocate memory
[WARNING] 315/161139 (7) : Failed to reexecute the master processs [7]
The error messages are misleading, because the return value of
`execv` is not checked. This should be fixed in a separate commit.
Once this happened the master process ignores any further
signals sent by the administrator.
Replace `execv` with `execvp` to establish the expected
behaviour.
This bug was introduced in commit 73b85e75b3.
At a number of places, bitmasks are used for process affinity and to map
listeners to processes. Every time 1UL<<(relative_pid-1) is used. Let's
create a "pid_bit" variable corresponding to this value to clean this up.
The first pid in the pidfile is now the parent's, which is more convenient for
supervising the processes.
You can now reload haproxy in master-worker mode with a convenient command
like: kill -USR2 $(head -1 /tmp/haproxy.pid)
This patch introduces a new struct conn_stream. It's the stream-side of
a multiplexed connection. A pool is created and destroyed on exit. For
now the conn_streams are not used at all.
There was a flaw in the way the threads were created. The main one was just used
to create all the others and then simply waited for them to exit. Now, it is used
to run a poll loop. So we only create nbthread-1 threads.
This also fixes a bug in the compression filter when there is only 1 thread
(nbthread == 1 or no thread support). The bug was in the way thread-local
resources were initialized. Per-thread init/deinit callbacks were never called
for the main thread. So, with nbthread set to 1, some buffers remained
uninitialized.
By default, no affinity is set for threads. To bind threads to CPUs, you must
define a "thread-map" in the global section. The format is the same as the
"cpu-map" parameter, with one small difference: the process number must be
defined, with the same format as cpu-map ("all", "even", "odd" or a number
between 1 and 31/63).
A thread will be bound to the intersection of its mapping and that of the
process to which it is attached. If the intersection is empty, no specific
binding will be set for the thread.
A lock for LB parameters has been added inside the proxy structure and atomic
operations have been used to update server variables related to lb.
The only significant change is about lb_map. Because the servers' status is
updated in the sync-point, we can call the recalc_server_map function synchronously
in the map_set_server_status_up/down functions.
This list is used to save changes on the servers' state. So when several threads
are used, it must be locked. The changes are then applied in the sync-point. To
do so, servers_update_status has been moved into the sync-point. So it is useless
to lock it at this step because the sync-point is itself a protected area.
Now, each proxy contains a lock that must be used when necessary to protect
it. Moreover, all proxy's counters are now updated using atomic operations.
2 global locks have been added to protect, respectively, the run queue and the
wait queue. And a process mask has been added on each task. Like for FDs, this
mask is used to know which threads are allowed to process a task.
For many tasks, all threads are granted. And this must be your first intention
when you create a new task, unless you have a good reason to make a task sticky on
some threads. It is then the responsibility of the process callback to lock
what has to be locked in the task context.
Nevertheless, all tasks linked to a session must be sticky on the thread
creating the session. It is important that I/O handlers processing session FDs
and these tasks run on the same thread to avoid conflicts.
Many changes have been made to do so. First, the fd_updt array, where all
pending FDs for polling are stored, is now a thread-local array. Then 3 locks
have been added to protect, respectively, the fdtab array, the fd_cache array
and poll information. In addition, a lock for each entry in the fdtab array has
been added to protect all accesses to a specific FD or its information.
The way concurrency is managed differs from one poller to another. There is a
poller loop on each thread, so the set of monitored FDs may need to be
protected. epoll and kqueue are thread-safe per se, so there are few things to
do to protect these pollers. This is not possible with select and poll, so
there is no sharing between the threads. The poller on each thread is
independent from the others.
Finally, per-thread init/deinit functions are used for each poller and for the
FD part to manage thread-local resources.
Now, you must be careful when an FD is created during the HAProxy startup. All
updates to the FD state must be made in the threads' context and never before
their creation. This is mandatory because the fd_updt array is thread-local and
initialized only for threads. Because there is no poller for the main thread, this
array remains uninitialized in this context. For this reason, listeners are now
enabled in the run_thread_poll_loop function, just like the worker pipe.
The function sync_poll_loop is called at the end of each loop inside the
run_poll_loop function. It is a protected area where all threads have a chance
to execute tricky tasks with the guarantee that no concurrent access is
possible. Of course, it comes with a cost because all threads must be
synchronized. So such changes must remain uncommon.
[WARNING] For now, HAProxy is not thread-safe, so from this commit, it will be
broken for a while, when compiled with threads.
When the nbthread parameter is greater than 1, HAProxy will create the corresponding
number of threads. If nbthread is set to 1, nothing should be done. So if there
are concurrency issues (and be sure there will be, unfortunately), an obvious
workaround is to disable the multithreading...
Each created thread will run a polling loop. So, in a certain way, it is pretty
similar to the nbproc mode ("outside" the bugs and the lock
contention). Nevertheless, there is an init and a deinit step for each thread
to deal with per-thread allocation.
Each thread has a tid (thread-id), numbered from 0 to (nbthread-1). It is used in
many places to do bitwise operations or to improve debugging information.
The hap_register_per_thread_init and hap_register_per_thread_deinit functions have
been added to register functions to do, for each thread, respectively, some
initialization and deinitialization. These functions are added in the global
lists per_thread_init_list and per_thread_deinit_list.
These functions are called only when HAProxy is started with more than 1 thread
(global.nbthread > 1).
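A sketch of how a subsystem might use these hooks (the callbacks are
hypothetical and the exact prototypes, such as the init hook's return type,
may differ):

    /* allocate some thread-local data; return non-zero on success */
    static int my_per_thread_init()
    {
            return 1;
    }

    /* release the thread-local data */
    static void my_per_thread_deinit()
    {
    }

    /* somewhere in the subsystem's startup code: */
    hap_register_per_thread_init(my_per_thread_init);
    hap_register_per_thread_deinit(my_per_thread_deinit);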
This is a huge patch with many changes, all about the DNS. Initially, the idea
was to update the DNS part to ease the threads support integration. But quickly,
I started to refactor some parts. And after several iterations, it was
impossible for me to commit the different parts atomically. So, instead of
adding tens of patches, often reworking the same parts, it was easier to merge
all my changes in a single patch. Here are all the changes made on the DNS.
First, the DNS initialization has been refactored. The DNS configuration parsing
remains untouched, in cfgparse.c. But all checks have been moved into a post-check
callback. In the function dns_finalize_config, for each resolvers section, the
nameservers configuration is tested and the task used to manage DNS resolutions
is created. The links between the backend's servers and the resolvers are also
created at this step. Here, no connections are kept alive, so there is no need
anymore to reopen them after the HAProxy fork. Connections used to send DNS
queries will be opened on demand.
Then, the way DNS requesters are linked to a DNS resolution has been
reworked. The resolution used by a requester is now referenced in the
dns_requester structure, and the resolution pointers in the server and dns_srvrq
structures have been removed. The wait and curr lists of requesters for a DNS
resolution have been replaced by a single list. And finally, the way a requester
is removed from a DNS resolution has been simplified. Now everything is done in
dns_unlink_resolution.
The srv_set_fqdn function has been simplified. Now, there is only 1 way to set
the server's FQDN, whether it is done by the CLI or when an SRV record is
resolved.
The static DNS resolutions pool has been replaced by a dynamic pool. This part
has been modified by Baptiste Assmann.
The way the DNS resolutions are triggered by the task or by a health-check has
been totally refactored. Now, all timeouts are respected, especially
hold.valid. The default frequency to wake up a resolvers section is now
configurable using the "timeout resolve" parameter.
Now, as documented, as long as invalid responses are received, we really wait for
all name servers' responses before retrying.
As far as possible, resources allocated during DNS configuration parsing are
released when HAProxy is shut down.
Besides all these changes, the code has been cleaned to ease code review and the
doc has been updated.
In order to prepare multi-thread development, code was re-worked
to propagate changes asynchronously.
Servers with pending status changes are registered in a list
and this one is processed and emptied only once per 'run poll' loop iteration.
Operational status changes are performed before administrative
status changes.
In case of multiple operational or administrative status changes
in the same 'run poll' loop iteration, those changes are
merged to reach only the targeted status.
The server state and weight were reworked to handle
"pending" values updated by checks/CLI/LUA/agent.
These values are committed in order to be propagated to the
LB stack.
In further development related to multi-threading, the commit
will be handled in a sync point.
Pending values are named using the prefix 'next_'.
Current values used by the LB stack are named 'cur_'.
This string is used in sample fetches so it is safe to use a preallocated trash
chunk instead of a buffer dynamically allocated during HAProxy startup.
First, this variable does not need to be publicly exposed because it is only
used by stick_table functions. So we declare it as a global static in
stick_table.c file. Then, it is useless to use a pointer. Using a plain struct
variable avoids any dynamic allocation.
swap_buffer is a global variable only used by buffer_slow_realign. So it has
been moved from global.h to buffer.c and it is allocated by the init_buffer
function. The deinit_buffer function has been added to release it. It is also used
to destroy the buffers' pool.
During the configuration parsing, log buffers are reallocated when
global.max_syslog_len is updated. This can happen several times. So, instead of
doing it several times, we do it only once after the configuration parsing.
Now, we use init_log_buffers and deinit_log_buffers to, respectively, initialize
and deinitialize log buffers used for syslog messages.
These functions have been introduced to be used by threads, to deal with
thread-local log buffers.
Trash buffers are reallocated when "tune.bufsize" parameter is changed. Here, we
just move the realloc after the configuration parsing.
Given that the config parser doesn't rely on the trash size, it should be
harmless.
Now, we use init_trash_buffers and deinit_trash_buffers to, respectively,
initialize and deinitialize trash buffers (trash, trash_buf1 and trash_buf2).
These functions have been introduced to be used by threads, to deal with
thread-local trash buffers.
Use a cpuset_t instead of assuming the cpu mask is an unsigned long.
This should fix setting the CPU affinity on FreeBSD >= 11.
This patch should be backported to stable releases.
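For illustration, on FreeBSD the affinity call takes a cpuset_t rather than a
plain mask; a minimal sketch (not the exact haproxy code):

    #include <sys/param.h>
    #include <sys/cpuset.h>

    /* pin the current process to CPU <cpu> on FreeBSD */
    static int bind_process_to_cpu(int cpu)
    {
            cpuset_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            return cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                                      sizeof(set), &set);
    }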
As mentionned in commit cf4e496c9 ("BUG/MEDIUM: build without openssl broken"),
commit 872f9c213 ("MEDIUM: ssl: add basic support for OpenSSL crypto engine")
broke the build without openssl support. But the former only fixed the case where
openssl is not enabled, not the case where it's not installed on the system:
In file included from src/haproxy.c:112:
include/proto/ssl_sock.h:24:25: openssl/ssl.h: No such file or directory
In file included from src/haproxy.c:112:
include/proto/ssl_sock.h:45: error: syntax error before "SSL_CTX"
include/proto/ssl_sock.h:75: error: syntax error before '*' token
include/proto/ssl_sock.h:75: warning: type defaults to `int' in declaration of `ssl_sock_create_cert'
include/proto/ssl_sock.h:75: warning: data definition has no type or storage class
include/proto/ssl_sock.h:76: error: syntax error before '*' token
include/proto/ssl_sock.h:76: warning: type defaults to `int' in declaration of `ssl_sock_get_generated_cert'
include/proto/ssl_sock.h:76: warning: data definition has no type or storage class
include/proto/ssl_sock.h:77: error: syntax error before '*' token
Now we also surround the include with #ifdef USE_OPENSSL to fix this. No
backport is needed since openssl async engines were not backported.
When several stick-tables were configured with several peers sections,
only a part of them could be synchronized: the ones attached to the last
parsed 'peers' section. This was due to the fact that, at least, the peer I/O handler
referred to the wrong peers section list, in fact always the same one: the last one parsed.
The fact that the global peers section list was named "struct peers *peers"
led to this issue. This variable name is dangerous ;).
So this patch renames the global 'peers' variable to 'cfg_peers' to ensure that
no such wrong references are still in use, then all the functions which used
the old 'peers' variable have been modified to refer to the correct peer list.
Must be backported to 1.6 and 1.7.
When starting the master worker with -sf or -st, the PIDs will be reused
on the next reload, which is a problem if new processes on the system
took those PIDs.
This patch ensures that we don't register old PIDs in the reload system
when launching the master worker.
Don't copy the -x argument anymore in copy_argv() since it's already
allocated in mworker_reload().
Make the copy_argv() more consistent when used with multiple arguments
to strip.
It prevents multiple -x on reload, which is not supported.
This patch fixes a segfault in the command line parser.
When haproxy is launched with -x with no argument and -x is the last
option in argv, it segfaults.
Use usage() instead of exit() on error.
Commit cb11fd2 ("MEDIUM: mworker: wait mode on reload failure")
introduced a regression: when HAProxy is used in daemon mode, it exits with 1
after forking its children.
HAProxy should exit(0); the exit(EXIT_FAILURE) was expected to be used
when the master fails in master-worker mode.
Thanks to Emmanuel Hocdet for reporting this bug. No backport needed.
The commit 872f9c213 ("MEDIUM: ssl: add basic support for OpenSSL crypto
engine") broke the build without openssl support.
The ssl_free_dh() function is not defined when USE_OPENSSL is not
defined and leads to a compilation failure.
This patch ensures that the children will exit when the master quits,
even if the master didn't send any signal.
The master and the workers are connected through a pipe; when the pipe
closes, the children leave.
This option exits every worker when one of the current workers dies.
It allows you to monitor the master process in order to relaunch
everything on a failure.
For example it can be used with systemd and Restart=on-failure in a unit
file.
In master worker mode, you can't specify the stats socket where you get
your listeners' FDs on a reload, because the command line of the re-exec
is launched by the master.
To solve the problem, when -x is found on the command line, its
parameter is rewritten on a reexec with the first stats socket with the
capability to send sockets. It tries to reuse the original parameter if
it has this capability.
In Master Worker mode, when the reloading of the configuration fails,
the process exits, leaving the children without their father.
To handle this, we register an exit function with atexit(3), which
re-executes the binary in a special mode. This particular mode of
HAProxy doesn't reload the configuration, it only loops on wait().
The master-worker will reload itself on SIGUSR2/SIGHUP.
This is inherited from the systemd wrapper: when the SIGUSR2 signal is
received, the master process will re-execute itself with the -sf flag
followed by the PIDs of the children.
In the systemd wrapper, the children were using a pipe to notify when
the config had been parsed and when the new process was ready. The goal
was to ensure that the process couldn't reload during the parsing of the
configuration, before signals were sent to the old process.
With the new mworker model, the master parses the configuration and is
aware of all the children. We don't need a pipe, but we need to block
those signals before the end of a reload, to ensure that the process
won't be killed during a reload.
The SIGUSR1 signal is forwarded to the children to soft-stop HAProxy.
The SIGTERM and SIGINT signals are forwarded to the children in order to
terminate them.
This commit removes the -Ds systemd mode in HAProxy in order to replace
it with a more generic master worker system. It aims to entirely replace
the systemd wrapper in the near future.
The master worker mode implements a new way of managing HAProxy
processes. The master is in charge of parsing the configuration
file and is responsible for spawning child processes.
The master worker mode can be invoked by using the -W flag. It can be
used either in background mode (-D) or foreground mode. When used in
background mode, the master will fork to daemonize.
In master worker background mode, chroot, setuid and setgid are done in
each child rather than in the master process, because the master process
will still need access to filesystem to reload the configuration.
This patch adds the global 'ssl-engine' keyword. The first arg is an engine
identifier followed by a list of default_algorithms the engine will
operate on.
If the openssl version is too old, an error is reported when the option
is used.
When HAProxy is running with multiple processes and some listeners
are bound to processes, the unused sockets were not closed in the other
processes. The aim was to be able to send those listening sockets using
the -x option.
However, to preserve the previous behavior, which was to close those
sockets, we provided the "no-unused-socket" global option.
This patch changes this behavior: it will close unused sockets which are
not in the same process as an expose-fd socket, making the
"no-unused-socket" option useless.
The "no-unused-socket" option was removed in this patch.
Overall we do have an issue with the severity of a number of errors. Most
fatal errors are reported with ERR_FATAL (which prevents startup) and not
ERR_ABORT (which stops parsing ASAP), but check_config_validity() is still
called on ERR_FATAL, and will most of the time report bogus errors. This
is what caused smp_resolve_args() to be called on a number of unparsable
ACLs, and it also is what reports incorrect ordering or unresolvable
section names when certain entries could not be properly parsed.
This patch stops this domino effect by simply aborting before trying to
further check and resolve the configuration when it's already known that
there are fatal errors.
A concrete example comes from this config :
  userlist users :
      user foo insecure-password bar
  listen foo
      bind :1234
      mode htttp
      timeout client 10S
      timeout server 10s
      timeout connect 10s
      stats uri /stats
      stats http-request auth unless { http_auth(users) }
      http-request redirect location /index.html if { path / }
It contains a colon after the userlist name, a typo in the client timeout value,
another one in "mode http" which causes some other configuration elements not to
be properly handled.
Previously it would confusingly report :
[ALERT] 108/114851 (20224) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
[ALERT] 108/114851 (20224) : parsing [err-report.cfg:6] : unknown proxy mode 'htttp'.
[ALERT] 108/114851 (20224) : parsing [err-report.cfg:7] : unexpected character 'S' in 'timeout client'
[ALERT] 108/114851 (20224) : Error(s) found in configuration file : err-report.cfg
[ALERT] 108/114851 (20224) : parsing [err-report.cfg:11] : unable to find userlist 'users' referenced in arg 1 of ACL keyword 'http_auth' in proxy 'foo'.
[WARNING] 108/114851 (20224) : config : missing timeouts for proxy 'foo'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 108/114851 (20224) : config : 'stats' statement ignored for proxy 'foo' as it requires HTTP mode.
[WARNING] 108/114851 (20224) : config : 'http-request' rules ignored for proxy 'foo' as they require HTTP mode.
[ALERT] 108/114851 (20224) : Fatal errors found in configuration.
The "requires HTTP mode" errors are just pollution resulting from the
improper spelling of this mode earlier. The unresolved reference to the
userlist is caused by the extra colon on the declaration, and the warning
regarding the missing timeouts is caused by the wrong character.
Now it more accurately reports :
[ALERT] 108/114900 (20225) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
[ALERT] 108/114900 (20225) : parsing [err-report.cfg:6] : unknown proxy mode 'htttp'.
[ALERT] 108/114900 (20225) : parsing [err-report.cfg:7] : unexpected character 'S' in 'timeout client'
[ALERT] 108/114900 (20225) : Error(s) found in configuration file : err-report.cfg
[ALERT] 108/114900 (20225) : Fatal errors found in configuration.
Despite not really being a fix, this patch should be backported at least to 1.7,
possibly even 1.6, and 1.5 since it hardens the config parser against
certain bad situations like the recently reported use-after-free and the
last null dereference.
When running with multiple processes, if some proxies are just assigned
to some processes, the other processes will just close the file descriptors
for the listening sockets. However, we may still have to provide those
sockets when reloading, so instead we just try hard to pretend those proxies
are dead, while keeping the sockets opened.
A new global option, "no-reused-socket", has been added, to restore the old
behavior of closing the sockets not bound to this process.
Add the "-x" flag, that takes a path to a unix socket as an argument. If
used, haproxy will connect to the socket, and asks to get all the
listening sockets from the old process. Any failure is fatal.
This is needed to get seamless reloads on linux.
Released version 1.8-dev1 with the following main changes :
- BUG/MEDIUM: proxy: return "none" and "unknown" for unknown LB algos
- BUG/MINOR: stats: make field_str() return an empty string on NULL
- DOC: Spelling fixes
- BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
- BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for tunneled txn
- BUG/MINOR: filters: Protect args in macros HAS_DATA_FILTERS and IS_DATA_FILTER
- BUG/MINOR: filters: Invert evaluation order of HTTP_XFER_BODY and XFER_DATA analyzers
- BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in tunnel mode
- BUG/MAJOR: stream: fix session abort on resource shortage
- OPTIM: stream-int: don't disable polling anymore on DONT_READ
- BUG/MINOR: cli: allow the backslash to be escaped on the CLI
- BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
- DOC: Fix map table's format
- DOC: Added 51Degrees conv and fetch functions to documentation.
- BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
- DOC: mention that req_tot is for both frontends and backends
- BUG/MEDIUM: variables: some variable name can hide another ones
- MINOR: lua: Allow argument for actions
- BUILD: rearrange target files by build time
- CLEANUP: hlua: just indent functions
- MINOR: lua: give HAProxy variable access to the applets
- BUG/MINOR: stats: fix be/sessions/max output in html stats
- MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
- DOC: lua: Documentation about some entry missing
- DOC: lua: Add documentation about variable manipulation from applet
- MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
- DOC: Add undocumented argument of the trace filter
- DOC: Fix some typo in SPOE documentation
- MINOR: cli: Remove useless call to bi_putchk
- BUG/MINOR: cli: be sure to always warn the cli applet when input buffer is full
- MINOR: applet: Count number of (active) applets
- MINOR: task: Rename run_queue and run_queue_cur counters
- BUG/MEDIUM: stream: Save unprocessed events for a stream
- BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
- BUILD/MEDIUM: Fixing the build using LibreSSL
- BUG/MEDIUM: lua: In some case, the return of sample-fetches is ignored (2)
- SCRIPTS: git-show-backports: fix a harmless typo
- SCRIPTS: git-show-backports: add -H to use the hash of the commit message
- BUG/MINOR: stream-int: automatically release SI_FL_WAIT_DATA on SHUTW_NOW
- CLEANUP: applet/lua: create a dedicated ->fcn entry in hlua_cli context
- CLEANUP: applet/table: add an "action" entry in ->table context
- CLEANUP: applet: remove the now unused appctx->private field
- DOC: lua: documentation about time parser functions
- DOC: lua: improve links
- DOC: lua: section declared twice
- MEDIUM: cli: 'show cli sockets' list the CLI sockets
- BUG/MINOR: cli: "show cli sockets" wouldn't list all processes
- BUG/MINOR: cli: "show cli sockets" would always report process 64
- CLEANUP: lua: rename one of the lua appctx union
- BUG/MINOR: lua/cli: bad error message
- MEDIUM: lua: use memory pool for hlua struct in applets
- MINOR: lua/signals: Remove Lua part from signals.
- DOC: cli: show cli sockets
- MINOR: cli: automatically enable a CLI I/O handler when there's no parser
- CLEANUP: memory: remove the now unused cli_parse_show_pools() function
- CLEANUP: applet: group all CLI contexts together
- CLEANUP: stats: move a misplaced stats context initialization
- MINOR: cli: add two general purpose pointers and integers in the CLI struct
- MINOR: appctx/cli: remove the cli_socket entry from the appctx union
- MINOR: appctx/cli: remove the env entry from the appctx union
- MINOR: appctx/cli: remove the "be" entry from the appctx union
- MINOR: appctx/cli: remove the "dns" entry from the appctx union
- MINOR: appctx/cli: remove the "server_state" entry from the appctx union
- MINOR: appctx/cli: remove the "tlskeys" entry from the appctx union
- CONTRIB: tcploop: add limits.h to fix build issue with some compilers
- MINOR/DOC: lua: just precise one thing
- DOC: fix small typo in fe_id (backend instead of frontend)
- BUG/MINOR: Fix the sending function in Lua's cosocket
- BUG/MINOR: lua: memory leak executing tasks
- BUG/MINOR: lua: bad return code
- BUG/MINOR: lua: memleak when Lua/cli fails
- MEDIUM: lua: remove Lua struct from session, and allocate it with memory pools
- CLEANUP: haproxy: statify unexported functions
- MINOR: haproxy: add a registration for build options
- CLEANUP: wurfl: use the build options list to report it
- CLEANUP: 51d: use the build options list to report it
- CLEANUP: da: use the build options list to report it
- CLEANUP: namespaces: use the build options list to report it
- CLEANUP: tcp: use the build options list to report transparent modes
- CLEANUP: lua: use the build options list to report it
- CLEANUP: regex: use the build options list to report the regex type
- CLEANUP: ssl: use the build options list to report the SSL details
- CLEANUP: compression: use the build options list to report the algos
- CLEANUP: auth: use the build options list to report its support
- MINOR: haproxy: add a registration for post-check functions
- CLEANUP: checks: make use of the post-init registration to start checks
- CLEANUP: filters: use the function registration to initialize all proxies
- CLEANUP: wurfl: make use of the late init registration
- CLEANUP: 51d: make use of the late init registration
- CLEANUP: da: make use of the late init registration code
- MINOR: haproxy: add a registration for post-deinit functions
- CLEANUP: wurfl: register the deinit function via the dedicated list
- CLEANUP: 51d: register the deinitialization function
- CLEANUP: da: register the deinitialization function
- CLEANUP: wurfl: move global settings out of the global section
- CLEANUP: 51d: move global settings out of the global section
- CLEANUP: da: move global settings out of the global section
- MINOR: cfgparse: add two new functions to check arguments count
- MINOR: cfgparse: move parsing of "ca-base" and "crt-base" to ssl_sock
- MEDIUM: cfgparse: move all tune.ssl.* keywords to ssl_sock
- MEDIUM: cfgparse: move maxsslconn parsing to ssl_sock
- MINOR: cfgparse: move parsing of ssl-default-{bind,server}-ciphers to ssl_sock
- MEDIUM: cfgparse: move ssl-dh-param-file parsing to ssl_sock
- MEDIUM: compression: move the zlib-specific stuff from global.h to compression.c
- BUG/MEDIUM: ssl: properly reset the reused_sess during a forced handshake
- BUG/MEDIUM: ssl: avoid double free when releasing bind_confs
- BUG/MINOR: stats: fix be/sessions/current out in typed stats
- MINOR: tcp-rules: check that the listener exists before updating its counters
- MEDIUM: spoe: don't create a dummy listener for outgoing connections
- MINOR: listener: move the transport layer pointer to the bind_conf
- MEDIUM: move listener->frontend to bind_conf->frontend
- MEDIUM: ssl: remote the proxy argument from most functions
- MINOR: connection: add a new prepare_bind_conf() entry to xprt_ops
- MEDIUM: ssl_sock: implement ssl_sock_prepare_bind_conf()
- MINOR: connection: add a new destroy_bind_conf() entry to xprt_ops
- MINOR: ssl_sock: implement ssl_sock_destroy_bind_conf()
- MINOR: server: move the use_ssl field out of the ifdef USE_OPENSSL
- MINOR: connection: add a minimal transport layer registration system
- CLEANUP: connection: remove all direct references to raw_sock and ssl_sock
- CLEANUP: connection: unexport raw_sock and ssl_sock
- MINOR: connection: add new prepare_srv()/destroy_srv() entries to xprt_ops
- MINOR: ssl_sock: implement and use prepare_srv()/destroy_srv()
- CLEANUP: ssl: move tlskeys_finalize_config() to a post_check callback
- CLEANUP: ssl: move most ssl-specific global settings to ssl_sock.c
- BUG/MINOR: backend: nbsrv() should return 0 if backend is disabled
- BUG/MEDIUM: ssl: for a handshake when server-side SNI changes
- BUG/MINOR: systemd: potential zombie processes
- DOC: Add timings events schemas
- BUILD: lua: build failed on FreeBSD.
- MINOR: samples: add xx-hash functions
- MEDIUM: regex: pcre2 support
- BUG/MINOR: option prefer-last-server must be ignored in some case
- MINOR: stats: Support "select all" for backend actions
- BUG/MINOR: sample-fetches/stick-tables: bad type for the sample fetches sc*_get_gpt0
- BUG/MAJOR: channel: Fix the definition order of channel analyzers
- BUG/MINOR: http: report real parser state in error captures
- BUILD: scripts: automatically update the branch in version.h when releasing
- MINOR: tools: add a generic hexdump function for debugging
- BUG/MAJOR: http: fix risk of getting invalid reports of bad requests
- MINOR: http: custom status reason.
- MINOR: connection: add sample fetch "fc_rcvd_proxy"
- BUG/MINOR: config: emit a warning if http-reuse is enabled with incompatible options
- BUG/MINOR: tools: fix off-by-one in port size check
- BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
- MEDIUM: server: split the address and the port into two different fields
- MINOR: tools: make str2sa_range() return the port in a separate argument
- MINOR: server: take the destination port from the port field, not the addr
- MEDIUM: server: disable protocol validations when the server doesn't resolve
- BUG/MEDIUM: tools: do not force an unresolved address to AF_INET:0.0.0.0
- BUG/MINOR: ssl: EVP_PKEY must be freed after X509_get_pubkey usage
- BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL
- MINOR: Use "500 Internal Server Error" for 500 error/status code message.
- MINOR: proto_http.c 502 error txt typo.
- DOC: add deprecation notice to "block"
- MINOR: compression: fix -vv output without zlib/slz
- BUG/MINOR: Reset errno variable before calling strtol(3)
- MINOR: ssl: don't show prefer-server-ciphers output
- OPTIM/MINOR: config: Optimize fullconn automatic computation loading configuration
- BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
- MAJOR: ssl: bind configuration per certificat
- MINOR: ssl: add curve suite for ECDHE negotiation
- MINOR: checks: Add agent-addr config directive
- MINOR: cli: Add possiblity to change agent config via CLI/socket
- MINOR: doc: Add docs for agent-addr configuration variable
- MINOR: doc: Add docs for agent-addr and agent-send CLI commands
- BUILD: ssl: fix to build (again) with boringssl
- BUILD: ssl: fix build on OpenSSL 1.0.0
- BUILD: ssl: silence a warning reported for ERR_remove_state()
- BUILD: ssl: eliminate warning with OpenSSL 1.1.0 regarding RAND_pseudo_bytes()
- BUILD: ssl: kill a build warning introduced by BoringSSL compatibility
- BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
- BUG/MINOR: unix: fix connect's polling in case no data are scheduled
- MINOR: server: extend the flags to 32 bits
- BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
- MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
- BUG/MAJOR: dns: restart sockets after fork()
- MINOR: chunks: implement a simple dynamic allocator for trash buffers
- BUG/MEDIUM: http: prevent redirect from overwriting a buffer
- BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
- BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
- BUG/MINOR: http: Return an error when a replace-header rule failed on the response
- BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
- BUG/MAJOR: ssl: fix a regression in ssl_sock_shutw()
- BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
- BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
- MINOR: http: don't close when redirect location doesn't start with "/"
- MEDIUM: boringssl: support native multi-cert selection without bundling
- BUG/MEDIUM: ssl: fix verify/ca-file per certificate
- BUG/MEDIUM: ssl: switchctx should not return SSL_TLSEXT_ERR_ALERT_WARNING
- MINOR: ssl: removes SSL_CTX_set_ssl_version call and cleanup CTX creation.
- BUILD: ssl: fix build with -DOPENSSL_NO_DH
- MEDIUM: ssl: add new sample-fetch which captures the cipherlist
- MEDIUM: ssl: remove ssl-options from crt-list
- BUG/MEDIUM: ssl: in bind line, ssl-options after 'crt' are ignored.
- BUG/MINOR: ssl: fix cipherlist captures with sustainable SSL calls
- MINOR: ssl: improved cipherlist captures
- BUG/MINOR: spoe: Fix soft stop handler using a specific id for spoe filters
- BUG/MINOR: spoe: Fix parsing of arguments in spoe-message section
- MAJOR: spoe: Add support of pipelined and asynchronous exchanges with agents
- MINOR: spoe: Add support for pipelining/async capabilities in the SPOA example
- MINOR: spoe: Remove SPOE details from the appctx structure
- MINOR: spoe: Add status code in error variable instead of hardcoded value
- MINOR: spoe: Send a log message when an error occurred during event processing
- MINOR: spoe: Check the scope of sample fetches used in SPOE messages
- MEDIUM: spoe: Be sure to wakeup the good entity waiting for a buffer
- MINOR: spoe: Use the min of all known max_frame_size to encode messages
- MAJOR: spoe: Add support of payload fragmentation in NOTIFY frames
- MINOR: spoe: Add support for fragmentation capability in the SPOA example
- MAJOR: spoe: refactor the filter to clean up the code
- MINOR: spoe: Handle NOTIFY frames cancellation using ABORT bit in ACK frames
- REORG: spoe: Move struct and enum definitions in dedicated header file
- REORG: spoe: Move low-level encoding/decoding functions in dedicated header file
- MINOR: spoe: Improve implementation of the payload fragmentation
- MINOR: spoe: Add support of negation for options in SPOE configuration file
- MINOR: spoe: Add "pipelining" and "async" options in spoe-agent section
- MINOR: spoe: Rely on alertif_too_many_arg during configuration parsing
- MINOR: spoe: Add "send-frag-payload" option in spoe-agent section
- MINOR: spoe: Add "max-frame-size" statement in spoe-agent section
- DOC: spoe: Update SPOE documentation to reflect recent changes
- MINOR: config: warn when some HTTP rules are used in a TCP proxy
- BUG/MEDIUM: ssl: Clear OpenSSL error stack after trying to parse OCSP file
- BUG/MEDIUM: cli: Prevent double free in CLI ACL lookup
- BUG/MINOR: Fix "get map <map> <value>" CLI command
- MINOR: Add nbsrv sample converter
- CLEANUP: Replace repeated code to count usable servers with be_usable_srv()
- MINOR: Add hostname sample fetch
- CLEANUP: Remove comment that's no longer valid
- MEDIUM: http_error_message: txn->status / http_get_status_idx.
- MINOR: http-request tarpit deny_status.
- CLEANUP: http: make http_server_error() not set the status anymore
- MEDIUM: stats: Add JSON output option to show (info|stat)
- MEDIUM: stats: Add show json schema
- BUG/MAJOR: connection: update CO_FL_CONNECTED before calling the data layer
- MINOR: server: Add dynamic session cookies.
- MINOR: cli: Let configure the dynamic cookies from the cli.
- BUG/MINOR: checks: attempt clean shutw for SSL check
- CONTRIB: tcploop: make it build on FreeBSD
- CONTRIB: tcploop: fix time format to silence build warnings
- CONTRIB: tcploop: report action 'K' (kill) in usage message
- CONTRIB: tcploop: fix connect's address length
- CONTRIB: tcploop: use the trash instead of NULL for recv()
- BUG/MEDIUM: listener: do not try to rebind another process' socket
- BUG/MEDIUM server: Fix crash when dynamic is defined, but not key is provided.
- CLEANUP: config: Typo in comment.
- BUG/MEDIUM: filters: Fix channels synchronization in flt_end_analyze
- TESTS: add a test configuration to stress handshake combinations
- BUG/MAJOR: stream-int: do not depend on connection flags to detect connection
- BUG/MEDIUM: connection: ensure to always report the end of handshakes
- MEDIUM: connection: don't test for CO_FL_WAKE_DATA
- CLEANUP: connection: completely remove CO_FL_WAKE_DATA
- BUG: payload: fix payload not retrieving arbitrary lengths
- BUILD: ssl: simplify SSL_CTX_set_ecdh_auto compatibility
- BUILD: ssl: fix OPENSSL_NO_SSL_TRACE for boringssl and libressl
- BUG/MAJOR: http: fix typo in http_apply_redirect_rule
- MINOR: doc: 2.4. Examples should be 2.5. Examples
- BUG/MEDIUM: stream: fix client-fin/server-fin handling
- MINOR: fd: add a new flag HAP_POLL_F_RDHUP to struct poller
- BUG/MINOR: raw_sock: always perfom the last recv if RDHUP is not available
- OPTIM: poll: enable support for POLLRDHUP
- MINOR: kqueue: exclusively rely on the kqueue returned status
- MEDIUM: kqueue: take care of EV_EOF to improve polling status accuracy
- MEDIUM: kqueue: only set FD_POLL_IN when there are pending data
- DOC/MINOR: Fix typos in proxy protocol doc
- DOC: Protocol doc: add checksum, TLV type ranges
- DOC: Protocol doc: add SSL TLVs, rename CHECKSUM
- DOC: Protocol doc: add noop TLV
- MEDIUM: global: add a 'hard-stop-after' option to cap the soft-stop time
- MINOR: dns: improve DNS response parsing to use as many available records as possible
- BUG/MINOR: cfgparse: loop in tracked servers lists not detected by check_config_validity().
- MINOR: server: irrelevant error message with 'default-server' config file keyword.
- MINOR: server: Make 'default-server' support 'backup' keyword.
- MINOR: server: Make 'default-server' support 'check-send-proxy' keyword.
- CLEANUP: server: code alignement.
- MINOR: server: Make 'default-server' support 'non-stick' keyword.
- MINOR: server: Make 'default-server' support 'send-proxy' and 'send-proxy-v2 keywords.
- MINOR: server: Make 'default-server' support 'check-ssl' keyword.
- MINOR: server: Make 'default-server' support 'force-sslv3' and 'force-tlsv1[0-2]' keywords.
- CLEANUP: server: code alignement.
- MINOR: server: Make 'default-server' support 'no-ssl*' and 'no-tlsv*' keywords.
- MINOR: server: Make 'default-server' support 'ssl' keyword.
- MINOR: server: Make 'default-server' support 'send-proxy-v2-ssl*' keywords.
- CLEANUP: server: code alignement.
- MINOR: server: Make 'default-server' support 'verify' keyword.
- MINOR: server: Make 'default-server' support 'verifyhost' setting.
- MINOR: server: Make 'default-server' support 'check' keyword.
- MINOR: server: Make 'default-server' support 'track' setting.
- MINOR: server: Make 'default-server' support 'ca-file', 'crl-file' and 'crt' settings.
- MINOR: server: Make 'default-server' support 'redir' keyword.
- MINOR: server: Make 'default-server' support 'observe' keyword.
- MINOR: server: Make 'default-server' support 'cookie' keyword.
- MINOR: server: Make 'default-server' support 'ciphers' keyword.
- MINOR: server: Make 'default-server' support 'tcp-ut' keyword.
- MINOR: server: Make 'default-server' support 'namespace' keyword.
- MINOR: server: Make 'default-server' support 'source' keyword.
- MINOR: server: Make 'default-server' support 'sni' keyword.
- MINOR: server: Make 'default-server' support 'addr' keyword.
- MINOR: server: Make 'default-server' support 'disabled' keyword.
- MINOR: server: Add 'no-agent-check' server keyword.
- DOC: server: Add docs for "server" and "default-server" new "no-*" and other settings.
- MINOR: doc: fix use-server example (imap vs mail)
- BUG/MEDIUM: tcp: don't require privileges to bind to device
- BUILD: make the release script use shortlog for the final changelog
- BUILD: scripts: fix typo in announce-release error message
- CLEANUP: time: curr_sec_ms doesn't need to be exported
- BUG/MEDIUM: server: Wrong server default CRT filenames initialization.
- BUG/MEDIUM: peers: fix buffer overflow control in intdecode.
- BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers
- BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request
- CLEANUP: http: Remove channel_congested function
- CLEANUP: buffers: Remove buffer_bounce_realign function
- CLEANUP: buffers: Remove buffer_contig_area and buffer_work_area functions
- MINOR: http: remove useless check on HTTP_MSGF_XFER_LEN for the request
- MINOR: http: Add debug messages when HTTP body analyzers are called
- BUG/MEDIUM: http: Fix blocked HTTP/1.0 responses when compression is enabled
- BUG/MINOR: filters: Don't force the stream's wakeup when we wait in flt_end_analyze
- DOC: fix parenthesis and add missing "Example" tags
- DOC: update the contributing file
- DOC: log-format/tcplog/httplog update
- MINOR: config parsing: add warning when log-format/tcplog/httplog is overridden in "defaults" sections
When SIGUSR1 is received, haproxy enters soft-stop and quits when no
connection remains.
It can happen that the instance remains alive for a long time, depending
on timeouts and traffic. This option ensures that soft-stop won't run
for too long.
Example:
    global
        hard-stop-after 30s   # Once in soft-stop, the instance will remain
                              # alive for at most 30 seconds.
The trash buffers are becoming increasingly complex to deal with due to
the code's modularity allowing some functions to be chained and causing
the same chunk buffers to be used multiple times along the chain, possibly
corrupting each other. In fact the trash buffers were designed from scratch
explicitly not to survive a function call, but string manipulation makes
this impossible most of the time while not fulfilling the need for
reliable temporary chunks.
Here we introduce the ability to allocate a temporary trash chunk which
is reserved, so that it will not conflict with the trash chunks other
functions use, and will even support reentrant calls (eg: build_logline).
For this, we create a new pool which is exactly the size of a usual chunk
buffer plus the size of the chunk struct so that these chunks when allocated
are exactly the same size as the ones returned by get_trash_buffer(). These
chunks may fail to be allocated, so the caller must check the result, and is also
responsible for freeing them.
The code focuses on minimal changes and ease of reliable backporting
because it will be needed in stable versions in order to support next
patch.
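For illustration, the intended usage pattern looks roughly like this (a minimal
sketch of a fragment inside haproxy code; it assumes the allocation/release
helpers are named alloc_trash_chunk()/free_trash_chunk()):

    struct chunk *tmp;

    tmp = alloc_trash_chunk();            /* taken from the dedicated pool, may be NULL */
    if (!tmp)
        return 0;                         /* allocation failure must be handled by the caller */
    chunk_printf(tmp, "some %s value", "temporary");
    /* ... call functions which may themselves use the regular shared trash ... */
    free_trash_chunk(tmp);                /* the caller owns the chunk and must release it */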
UDP sockets used to send DNS queries are created before fork happens and
this is a big problem because all the processes (in case of a
configuration starting multiple processes) share the same socket. Some
processes may consume responses dedicated to another one, some servers
may be disabled, some IPs changed, etc...
As a workaround, this patch closes the existing socket and creates a new
one after the fork() has happened.
[wt: backport this to 1.7]
The function dns_init_resolvers() is used to initialize the socket used to
send DNS queries.
This patch gives the function the ability to close a socket before
re-opening it.
[wt: this needs to be backported to 1.7 for next fix]
In systemd mode (-Ds), the master haproxy process is waiting for each
child to exit in a specific order. If a process dies when it's not its
turn, it will become a zombie process until every process exits.
The master is now waiting for any process to exit in any order.
This patch should be backported to 1.7, 1.6 and 1.5.
Historically a lot of SSL global settings were stored into the global
struct, but we've reached a point where there are 3 ifdefs in it just
for this, and others in haproxy.c to initialize it.
This patch moves all the private fields to a new struct "global_ssl"
stored in ssl_sock.c. This includes :
    char *crt_base;
    char *ca_base;
    char *listen_default_ciphers;
    char *connect_default_ciphers;
    int listen_default_ssloptions;
    int connect_default_ssloptions;
    int tune.sslprivatecache; /* Force to use a private session cache even if nbproc > 1 */
    unsigned int tune.ssllifetime; /* SSL session lifetime in seconds */
    unsigned int tune.ssl_max_record; /* SSL max record size */
    unsigned int tune.ssl_default_dh_param; /* SSL maximum DH parameter size */
    int tune.ssl_ctx_cache; /* max number of entries in the ssl_ctx cache. */
The "tune" part was removed (useless here) and the occasional "ssl"
prefixes were removed as well. Thus for example instead of
global.tune.ssl_default_dh_param
we now have :
global_ssl.default_dh_param
A few initializers were present in the constructor; they could be brought
back to the structure declaration.
A few other entries had to stay in global for now. They concern memory
calculation (used in haproxy.c) and stats (used in stats.c).
The code is already much cleaner now, especially for global.h and haproxy.c
which become readable.
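As a rough sketch of the result (only default_dh_param is a name confirmed
above; the other member names and comments are illustrative), the private
settings now live in a file-local structure:

    /* in src/ssl_sock.c */
    static struct global_ssl {
        char *crt_base;                    /* base directory path for certificates */
        char *ca_base;                     /* base directory path for CAs and CRLs */
        /* ... the other former global SSL settings, without "tune"/"ssl" prefixes ... */
        unsigned int default_dh_param;     /* was global.tune.ssl_default_dh_param */
    } global_ssl;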
tlskeys_finalize_config() was the only reason for haproxy.c to still
require ifdef and includes for ssl_sock. This one fits perfectly well
in the late initializers so it was changed to be registered with
hap_register_post_check().
Now we can simply check the transport layer at run time and decide
whether or not to initialize or destroy these entries. This removes
other ifdefs and includes from cfgparse.c, haproxy.c and hlua.c.
Instead of hard-coding all SSL destruction in cfgparse.c and haproxy.c,
we now register this new function as the transport layer's destroy_bind_conf()
and call it only when defined. This removes some non-obvious SSL-specific
code and #ifdefs from cfgparse.c and haproxy.c
This finishes cleaning up the zlib-specific parts. It also unbreaks recent
commit b97c6fb ("CLEANUP: compression: use the build options list to report
the algos") which broke USE_ZLIB due to MAXWBITS not being defined anymore
in haproxy.c.
We replaced global.deviceatlas with global_deviceatlas since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in da.c. The file da.h was now
removed because it was only used to load dac.h, which is more easily
loaded directly from da.c. It provides another good example of how to
integrate code in the future without touching the core parts.
We replaced global._51degrees with global_51degrees since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in 51d.c. The file 51d.h was now
removed because it was only used to load 51Degrees.h, which is more easily
loaded from 51d.c. It provides a good example of how to integrate code in
the future without touching the core parts.
We replaced global.wurfl with global_wurfl since there's no need to store
all this into the global section. This removes the last #ifdefs, and now
the code is 100% self-contained in wurfl.c. It provides a good example of
how to integrate code in the future without touching the core parts.
deinit_51degrees() is not called anymore from haproxy.c, removing
2 #ifdefs and one include. The function was made static. The include
file still includes 51Degrees.h which is needed by global.h and 51d.c
so it was not touched beyond this last function removal.
By registering the deinit function we avoid another #ifdef in haproxy.c.
The ha_wurfl_deinit() function has been made static and unexported. Now
proto/wurfl.h is totally empty, the code being self-contained in wurfl.c,
so the useless .h has been removed.
The 3 device detection engines stop at the same place in deinit()
with the usual #ifdefs. Similar to the other functions we can have
some late deinitialization functions. These functions do not return
anything however so we have to use a different type.
Instead of having a #ifdef in the main init code we now use the registered
init functions. Doing so also enables error checking as errors were previously
reported as alerts but ignored. Also they were incorrect as the 'status'
variable was hidden by a second one and was always reporting DA_SYS (which
is apparently an error) in every case including the case where no file was
loaded. The init_deviceatlas() function was unexported since it's not used
outside of this place anymore.
This removes some #ifdefs from the main haproxy code path. Function
init_51degrees() now returns ERR_* instead of exit(1) on error, and
this function was made static and is not exported anymore.
This removes some #ifdefs from the main haproxy code path and enables
error checking. The current code only makes use of warnings even for
some errors that look serious. While this choice is questionable, it
has been kept as-is, and only the return codes were adapted to ERR_WARN
to at least report that some warnings were emitted. ha_wurfl_init() was
unexported as it's not needed anymore.
Instead of calling the checks directly from the init code, we now
register the start_checks() function to be run at this point. This
also allows to unexport the check init function and to remove one
include from haproxy.c.
There's a significant amount of late initialization calls which are
performed after the point where we exit in check mode. These calls
are used to allocate resources and perform certain slow operations.
Let's have a way to register some functions which need to be called
there instead of having this multitude of #ifdef in the init path.
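For illustration, the resulting pattern looks roughly like this (a sketch: the
callback name is made up, and it assumes the registration helper is
hap_register_post_check() with callbacks returning a combination of ERR_* codes):

    static int init_my_extension(void)
    {
        if (!my_extension_load())               /* hypothetical slow/allocating init */
            return ERR_ALERT | ERR_FATAL;       /* reported after config parsing */
        return 0;
    }

    __attribute__((constructor))
    static void __register_my_extension(void)
    {
        hap_register_post_check(init_my_extension);
    }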
This removes 2 #ifdef, an include, an ugly construct and a wild "extern"
declaration from haproxy.c. The message indicating that compression is
*not* enabled is not there anymore.
Many extensions now report some build options to ease debugging, but
this is now being done at the expense of code maintainability. Let's
provide a registration function to do this so that we can start to
remove most of the #ifdefs from haproxy.c (18 currently just for a
single function).
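A sketch of the expected usage from an extension's own file (assuming the
helper is hap_register_build_opts() taking the string and a flag telling
whether it must be freed on exit):

    /* requires <zlib.h> for ZLIB_VERSION */
    __attribute__((constructor))
    static void __zlib_build_opts(void)
    {
        /* second argument: 0 = static string, nothing to free at exit */
        hap_register_build_opts("Built with zlib version : " ZLIB_VERSION, 0);
    }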
<run_queue> is used to track the number of tasks in the run queue and
<run_queue_cur> is a copy used for reporting purposes. These counters have
been renamed, respectively, <tasks_run_queue> and <tasks_run_queue_cur>, so the
naming is consistent between tasks and applets.
[wt: needed for next fixes, backport to 1.7 and 1.6]
As for tasks, 2 counters have been added to track :
* the total number of applets : nb_applets
* the number of active applets : applets_active_queue
[wt: needed for next fixes, to backport to 1.7 and 1.6]
Now it is possible to use variables attached to a process. The scope name is
'proc'. These variables are released only when HAProxy is stopped.
'tune.vars.proc-max-size' directive has been added to configure the maximum
amount of memory used by "proc" variables. And because memory accounting is
hierarchical for variables, memory for "proc" vars includes memory for "sess"
vars.
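A minimal configuration sketch (the size and the variable name are only
examples; the "proc" scope is selected directly in the variable name):

    global
        tune.vars.proc-max-size 1048576   # bytes available to "proc" (and nested) vars

    frontend fe
        bind :8080
        http-request set-var(proc.req_seen) int(1)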
This code has been moved from haproxy.c to sample.c and the function
release_sample_expr can now be called from anywhere to release a sample
expression. This function will be used by the stream processing offload engine
(SPOE).
It is very common when validating a configuration out of production not to
have access to the same resolvers and to fail on server address resolution,
making it difficult to test a configuration. This option simply appends the
"none" method to the list of address resolution methods for all servers,
ensuring that even if the libc fails to resolve an address, the startup
sequence is not interrupted.
Server addresses are not resolved anymore upon the first pass so that we
don't fail if an address cannot be resolved by the libc. Instead they are
processed all at once after the configuration is fully loaded, by the new
function srv_init_addr(). This function only acts on the server's address
if this address uses an FQDN, which appears in server->hostname.
For now the function does two things, following HAProxy's historical
default behavior:
1. apply server IP address found in server-state file if runtime DNS
resolution is enabled for this server
2. use the DNS resolver provided by the libc
If none of the 2 options above can find an IP address, then an error is
returned.
All of this will be needed to support the new server parameter "init-addr".
For now, the biggest user-visible change is that all server resolution errors
are dumped at once instead of causing a startup failure one by one.
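As an illustration of where this is heading, a hedged sketch of the kind of
configuration the "init-addr" parameter is meant to allow (the syntax shown
here is indicative only):

    defaults
        # try the state file first, then the libc resolver, and finally start
        # with no address instead of aborting the whole startup
        default-server init-addr last,libc,none

    backend be_app
        server app1 app1.example.com:8080 check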
Currently, the function which applies server states provided by the
"old" process is applied after configuration sanity check. This results
in the impossibility to check the validity of the state file during a
regular config check, implying a full start is required, which can be
a problem sometimes.
This patch moves the loading of server_state file before MODE_CHECK.
The only reason wurfl/wurfl.h was needed outside of wurfl.c was to expose
wurfl_handle which is a pointer to a structure, referenced by global.h.
By just storing a void* there instead, we can confine all wurfl code to
wurfl.c, which is really nice.
WURFL is a high-performance and low-memory footprint mobile device
detection software component that can quickly and accurately detect
over 500 capabilities of visiting devices. It can differentiate between
portable mobile devices, desktop devices, SmartTVs and any other types
of devices on which a web browser can be installed.
In order to add WURFL device detection support, you would need to
download Scientiamobile InFuze C API and install it on your system.
Refer to www.scientiamobile.com to obtain a valid InFuze license.
Any useful information on how to configure HAProxy working with WURFL
may be found in:
doc/WURFL-device-detection.txt
doc/configuration.txt
examples/wurfl-example.cfg
Please find more information about the WURFL device detection API
at https://docs.scientiamobile.com/documentation/infuze/infuze-c-api-user-guide
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the servers' order, some
tracking servers may not be marked down. For example this configuration
does not work as expected :
server s1 1.1.1.1:8000 track s2
server s2 1.1.1.1:8000 track s3
server s3 1.1.1.1:8000 track s4
server s4 wtap:8000 check inter 1s disabled
It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.
The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).
This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down,
and not anymore with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.
This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.
This commit relies on previous patch to silence warnings on startup.
Pierre Cheynier found that there's a persistent issue with the systemd
wrapper. Too fast reloads can lead to certain old processes not being
signaled at all and continuing to run. The problem was tracked down as
a race between the startup and the signal processing : nothing prevents
the wrapper from starting new processes while others are still starting,
and the resulting pid file will only contain the latest pids in this
case. This can happen with large configs and/or when a lot of SSL
certificates are involved.
In order to solve this we want the wrapper to wait for the new processes
to complete their startup. But we also want to ensure it doesn't wait
needlessly in case of error.
The solution found here is to create a pipe between the wrapper and the
sub-processes. The wrapper waits on the pipe and the sub-processes are
expected to close this pipe once they completed their startup. That way
we don't queue up new processes until the previous ones have registered
their pids to the pid file. And if anything goes wrong, the wrapper is
immediately released. The only thing is that we need the sub-processes
to know the pipe's file descriptor. We pass it in an environment variable
called HAPROXY_WRAPPER_FD.
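For illustration, the sub-process side boils down to something like this once
its startup is complete (a sketch of the relevant fragment; error handling
omitted):

    const char *wrapper_fd = getenv("HAPROXY_WRAPPER_FD");

    if (wrapper_fd) {
        int fd = atoi(wrapper_fd);

        if (fd >= 0)
            close(fd);   /* closing our end of the pipe unblocks the waiting wrapper */
    }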
It was confirmed both by Pierre and myself that this completely solves
the "zombie" process issue so that only the new processes continue to
listen on the sockets.
It seems that in the future this stuff could be moved to the haproxy
master process, also getting rid of an environment variable.
This fix needs to be backported to 1.6 and 1.5.
With Linux officially introducing SO_REUSEPORT support in 3.9 and
its mainstream adoption we have seen more people running into strange
SO_REUSEPORT related issues (a process management issue turning into
hard to diagnose problems because the kernel load-balances between the
new and an obsolete haproxy instance).
Also some people simply want the guarantee that the bind fails when
the old process is still bound.
This change makes SO_REUSEPORT configurable, introducing the command
line argument "-dR" and the noreuseport configuration directive.
A backport to 1.6 should be considered.
pcre_version() returns the running PCRE release, not the release
haproxy was built with.
This simple string fix should be backported to supported releases,
as the output may be confusing.
When the requested amount of FDs cannot be allocated, setrlimit() fails.
That's bad because if the limit is set to 1024 and we need 10000, we
stay on 1024 while we could possibly raise it to 4096 thanks to rlim_max.
This patch takes care of trying to assign rlim_cur to rlim_max on failure
so that we get as much as possible if we can't get all we need. The case
is particularly visible when starting haproxy as a non-privileged user
and a large maxconn is specified in the configuration.
Another point of doing this is that it is the only way to allow us to
close inherited FDs upon fork(), ie those between rlim_cur and rlim_max.
This patch may be backported to 1.6 and 1.5.
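The resulting logic is roughly the following (a sketch; "wanted" stands for the
number of FDs computed from the configuration, and <sys/resource.h> provides
the rlimit API):

    struct rlimit limit;

    limit.rlim_cur = limit.rlim_max = wanted;
    if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
        /* we can't get what we want: take as much as the hard limit permits */
        getrlimit(RLIMIT_NOFILE, &limit);
        limit.rlim_cur = limit.rlim_max;
        setrlimit(RLIMIT_NOFILE, &limit);
    }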
global.rlimit_nofile contains the max number of file descriptors that
can be allocated, except if the user is not allowed to reach this limit,
where it still contains the initially requested value. It is important
that this value always matches what is really configured so that it is
properly reported in the stats and that we can use it later to close
all FDs without wasting time closing impossible FDs.
This fix may be backported to 1.6 and 1.5.
This patch removes setlocale from the main function. It was introduced
by commit 379d9c7 ("MEDIUM: init: allow directory as argument of -f")
in 1.7-dev a few commits ago after a discussion on the mailing list.
Some regex may have different behaviours depending on the
locale. Some LUA scripts may change their behaviour too
(http://lua-users.org/wiki/LuaLocales).
Without this patch (haproxy is using setlocale) :
$ cat locale.cfg
defaults
mode http
frontend test
bind :9000
mode http
use_backend testbk if { hdr_reg(X-Test) ^\w+$ }
backend testbk
mode http
server s 127.0.0.1:80
$ LANG=fr_FR.UTF-8 ./haproxy -f locale.cfg
$ curl -i -H "X-Test: chec" localhost:9000
HTTP/1.1 200 OK
...
$ LANG=C ./haproxy -f locale.cfg
$ curl -i -H "X-Test: chec" localhost:9000
HTTP/1.0 503 Service Unavailable
...
If the -f argument is a directory, add all the files (and only files) it
contains to the config files list.
These files are added in lexical order (respecting LC_COLLATE).
Only files with the ".cfg" extension are added.
Only non-hidden files (not prefixed with ".") are added.
Symlinks are followed.
The -f order is still respected:
$ tree -a rootdir
rootdir
|-- dir1
|   |-- .6.cfg
|   |-- 1.cfg
|   |-- 2
|   |-- 3.cfg
|   |-- 4.cfg -> 1.cfg
|   |-- 5 -> 1.cfg
|   |-- 7.cfg -> .
|   `-- dir4
|       `-- 8.cfg
|-- dir2
|   |-- 10.cfg
|   `-- 9.cfg
|-- dir3
|   `-- 11.cfg
|-- link -> dir3/
|-- root1
|-- root2
`-- root3
$ ./haproxy -C rootdir -f root2 -f dir2 -f root3 -f dir1 \
-f link -f root1
root2
dir2/10.cfg
dir2/9.cfg
root3
dir1/1.cfg
dir1/3.cfg
dir1/4.cfg
link/11.cfg
root1
This can be useful on systemd where you can't change the haproxy
command line options on service reload.
Released version 1.7-dev3 with the following main changes :
- MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
- BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
- BUILD: namespaces: fix a potential build warning in namespaces.c
- MINOR: da: Using ARG12 macro for the sample fetch and the convertor.
- DOC: add encoding to json converter example
- BUG/MINOR: conf: "listener id" expects integer, but its not checked
- DOC: Clarify tunes.vars.xxx-max-size settings
- CLEANUP: chunk: adding NULL check to chunk_dup allocation.
- CLEANUP: connection: fix double negation on memcmp()
- BUG/MEDIUM: peers: fix incorrect age in frequency counters
- BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
- BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
- BUG/MINOR: lua: can't load external libraries
- BUG/MINOR: prevent the dump of uninitialized vars
- CLEANUP: map: it seems that the maps were planned to be chained
- MINOR: lua: move class registration facilities
- MINOR: lua: remove some useless checks
- CLEANUP: lua: Remove two same functions
- MINOR: lua: refactor the Lua object registration
- MINOR: lua: precise message when a critical error is caught
- MINOR: lua: post initialization
- MINOR: lua: Add internal function which strip spaces
- MINOR: lua: convert field to lua type
- DOC: "addr" parameter applies to both health and agent checks
- DOC: timeout client: pointers to timeout http-request
- DOC: typo on stick-store response
- DOC: stick-table: amend paragraph blaming the loss of table upon reload
- DOC: typo: ACL subdir match
- DOC: typo: maxconn paragraph is wrong due to a wrong buffer size
- DOC: regsub: parser limitation about the inability to use closing square brackets
- DOC: typo: req.uri is now replaced by capture.req.uri
- DOC: name set-gpt0 mismatch with the expected keyword
- MINOR: http: sample fetch which returns unique-id
- MINOR: dumpstats: extract stats fields enum and names
- MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
- MINOR: dumpstats: split stats_dump_fe_stats() in two parts
- MINOR: dumpstats: split stats_dump_li_stats() in two parts
- MINOR: dumpstats: split stats_dump_sv_stats() in two parts
- MINOR: dumpstats: split stats_dump_be_stats() in two parts
- MINOR: lua: dump general info
- MINOR: lua: add class proxy
- MINOR: lua: add class server
- MINOR: lua: add class listener
- BUG/MEDIUM: stick-tables: some sample-fetch doesn't work in the connection state.
- MEDIUM: proxy: use dynamic allocation for error dumps
- CLEANUP: remove unneeded casts
- CLEANUP: uniformize last argument of malloc/calloc
- DOC: fix "needed" typo
- BUG/MINOR: dumpstats: fix write to global chunk
- BUG/MINOR: dns: inappropriate way out after a resolution timeout
- BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
- CLEANUP: proto_http: few corrections for gcc warnings.
- BUG/MINOR: DNS: resolution structure change
- BUG/MINOR : allow to log cookie for tarpit and denied request
- BUG/MEDIUM: ssl: rewind the BIO when reading certificates
- OPTIM/MINOR: session: abort if possible before connecting to the backend
- DOC: http: rename the unique-id sample and add the documentation
- BUG/MEDIUM: trace.c: rdtsc() is defined in two files
- BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
- BUG/MINOR: server: risk of over reading the pref_net array.
- BUG/MINOR: cfgparse: couple of small memory leaks.
- BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
- DOC: fix discrepancy in the example for http-request redirect
- MINOR: acl: Add predefined METH_DELETE, METH_PUT
- CLEANUP: .gitignore cleanup
- DOC: Clarify IPv4 address / mask notation rules
- CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
- BUG/MEDIUM: fix maxaccept computation on per-process listeners
- BUG/MINOR: listener: stop unbound listeners on startup
- BUG/MINOR: fix maxaccept computation according to the frontend process range
- TESTS: add blocksig.c to run tests with all signals blocked
- MEDIUM: unblock signals on startup.
- MINOR: filters: Print the list of existing filters during HA startup
- MINOR: filters: Typo in an error message
- MINOR: filters: Filters must define the callbacks struct during config parsing
- DOC: filters: Add filters documentation
- BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
- BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
- BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
- BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
- MINOR: Add ability for agent-check to set server maxconn
- CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates
- MINOR: filters: add opaque data
- BUG/MEDIUM: lua: protects the upper boundary of the argument list for converters/fetches.
- MINOR: lua: migrate the argument mask to 64 bits type.
- BUG/MINOR: dumpstats: Fix the "Total bytes saved" counter in backends stats
- BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>
- BUG/MEDIUM: http: fix incorrect reporting of server errors
- MINOR: channel: add new function channel_congested()
- BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
- BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
- BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
- BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
- BUG/MEDIUM: stats: show servers state may show an empty or incomplete result
- BUG/MEDIUM: stats: show backend may show an empty or incomplete result
- MINOR: stats: fix typo in help messages
- MINOR: stats: show stat resolvers missing in the help message
- BUG/MINOR: dns: fix DNS header definition
- BUG/MEDIUM: dns: fix alignment issue when building DNS queries
- CLEANUP: don't ignore scripts in .gitignore
- BUILD: add a few release and backport scripts in scripts/
In C89, "void *" is automatically promoted to any pointer type. Casting
the result of malloc/calloc to the type of the LHS variable is therefore
unneeded.
Most of this patch was built using this Coccinelle patch:
@@
type T;
@@
- (T *)
(\(lua_touserdata\|malloc\|calloc\|SSL_get_app_data\|hlua_checkudata\|lua_newuserdata\)(...))
@@
type T;
T *x;
void *data;
@@
x =
- (T *)
data
@@
type T;
T *x;
T *data;
@@
x =
- (T *)
data
Unfortunately, either Coccinelle or I is too limited to detect situations
where a complex RHS expression is of type "void *" and therefore casting
is not needed. Those cases were manually examined and corrected.
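A typical instance of the change (the type name is only illustrative, and the
two assignments stand for the "before" and "after" forms):

    struct chunk *c;

    c = (struct chunk *)calloc(1, sizeof(*c));   /* before: redundant cast */
    c = calloc(1, sizeof(*c));                   /* after */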
Released version 1.7-dev2 with the following main changes :
- DOC: lua: fix lua API
- DOC: mailers: typo in 'hostname' description
- DOC: compression: missing mention of libslz for compression algorithm
- BUILD/MINOR: regex: missing header
- BUG/MINOR: stream: bad return code
- DOC: lua: fix some errors and add implicit types
- MINOR: lua: add set/get priv for applets
- BUG/MINOR: http: fix several off-by-one errors in the url_param parser
- BUG/MINOR: http: Be sure to process all the data received from a server
- MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
- BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
- DOC: ssl: fixed some formatting errors in crt tag
- MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
- MINOR: chunks: add chunk_strcat() and chunk_newstr()
- MINOR: chunk: make chunk_initstr() take a const string
- MEDIUM: tools: add csv_enc_append() to preserve the original chunk
- MINOR: tools: make csv_enc_append() always start at the first byte of the chunk
- MINOR: lru: new function to delete <nb> least recently used keys
- DOC: add Ben Shillito as the maintainer of 51d
- BUG/MINOR: 51d: Ensures a unique domain for each configuration
- BUG/MINOR: 51d: Aligns Pattern cache implementation with HAProxy best practices.
- BUG/MINOR: 51d: Releases workset back to pool.
- BUG/MINOR: 51d: Aligned const pointers to changes in 51Degrees.
- CLEANUP: 51d: Aligned if statements with HAProxy best practices and removed casts from malloc.
- MINOR: rename master process name in -Ds (systemd mode)
- DOC: fix a few spelling mistakes
- DOC: fix "workaround" spelling
- BUG/MINOR: examples: Fixing haproxy.spec to remove references to .cfg files
- MINOR: fix the return type for dns_response_get_query_id() function
- MINOR: server state: missing LF (\n) on error message printed when parsing server state file
- BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver
- BUG/MAJOR: servers state: server port is erased when dns resolution is enabled on a server
- BUG/MEDIUM: servers state: server port is used uninitialized
- BUG/MEDIUM: config: Adding validation to stick-table expire value.
- BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
- BUG/MEDIUM: channel: fix miscalculation of available buffer space.
- MEDIUM: pools: add a new flag to avoid rounding pool size up
- BUG/MEDIUM: buffers: do not round up buffer size during allocation
- BUG/MINOR: stream: don't force retries if the server is DOWN
- BUG/MINOR: counters: make the sc-inc-gpc0 and sc-set-gpt0 touch the table
- MINOR: unix: don't mention free ports on EAGAIN
- BUG/CLEANUP: CLI: report the proper field states in "show sess"
- MINOR: stats: send content-length with the redirect to allow keep-alive
- BUG: stream_interface: Reuse connection even if the output channel is empty
- DOC: remove old tunnel mode assumptions
- BUG/MAJOR: http-reuse: fix risk of orphaned connections
- BUG/MEDIUM: http-reuse: do not share private connections across backends
- BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
- BUG/MINOR: stats: fix missing comma in stats on agent drain
- MAJOR: filters: Add filters support
- MINOR: filters: Do not reset stream analyzers if the client is gone
- REORG: filters: Prepare creation of the HTTP compression filter
- MAJOR: filters/http: Rewrite the HTTP compression as a filter
- MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
- MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
- MEDIUM: filters: Replace filter_http_headers callback by an analyzer
- MEDIUM: filters/http: Move body parsing of HTTP messages in dedicated functions
- MINOR: filters: Add stream_filters structure to hide filters info
- MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
- MINOR: filters: Remove unused or useless stuff and do small optimizations
- MEDIUM: filters: Optimize the HTTP compression for chunk encoded response
- MINOR: filters/http: Slightly update the parsing of chunks
- MINOR: filters/http: Forward remaining data when a channel has no "data" filters
- MINOR: filters: Add a filter example
- MINOR: filters: Extract proxy stuff from the struct filter
- MINOR: map: Add regex matching replacement
- BUG/MINOR: lua: unsafe initialization
- DOC: lua: fix some errors
- MINOR: lua: file dedicated to unsafe functions
- MINOR: lua: add "now" time function
- MINOR: standard: add RFC HTTP date parser
- MINOR: lua: Add date functions
- MINOR: lua: move common function
- MINOR: lua: merge function
- MINOR: lua: Add concat class
- MINOR: standard: add function "escape_chunk"
- MEDIUM: log: add a new log format flag "E"
- DOC: add server name at rate-limit sessions example
- BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
- BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
- DOC: LUA: fix some typos and syntax errors
- MINOR: cli: add a new "show env" command
- MEDIUM: config: allow to manipulate environment variables in the global section
- MEDIUM: cfgparse: reject incorrect 'timeout retry' keyword spelling in resolvers
- MINOR: mailers: increase default timeout to 10 seconds
- MINOR: mailers: use <CRLF> for all line endings
- BUG/MAJOR: lua: segfault using Concat object
- DOC: lua: copyrights
- MINOR: common: mask conversion
- MEDIUM: dns: extract options
- MEDIUM: dns: add a "resolve-net" option which allow to prefer an ip in a network
- MINOR: mailers: make it possible to configure the connection timeout
- BUG/MAJOR: lua: applets can't sleep.
- BUG/MINOR: server: some prototypes are renamed
- BUG/MINOR: lua: Useless copy
- BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
- BUG/MINOR: server: fix the format of the warning on address change
- CLEANUP: server: add "const" to some message strings
- MINOR: server: generalize the "updater" source
- BUG/MEDIUM: chunks: always reject negative-length chunks
- BUG/MINOR: systemd: ensure we don't miss signals
- BUG/MINOR: systemd: report the correct signal in debug message output
- BUG/MINOR: systemd: propagate the correct signal to haproxy
- MINOR: systemd: ensure a reload doesn't mask a stop
- BUG/MEDIUM: cfgparse: wrong argument offset after parsing server "sni" keyword
- CLEANUP: stats: Avoid computation with uninitialized bits.
- CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
- CLEANUP: map: Avoid memory leak in out-of-memory condition.
- BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
- BUG/MAJOR: samples: check smp->strm before using it
- MINOR: sample: add a new helper to initialize the owner of a sample
- MINOR: sample: always set a new sample's owner before evaluating it
- BUG/MAJOR: vars: always retrieve the stream and session from the sample
- CLEANUP: payload: remove useless and confusing nullity checks for channel buffer
- BUG/MINOR: ssl: fix usage of the various sample fetch functions
- MINOR: stats: create fields types suitable for all CSV output data
- MINOR: stats: add all the "show info" fields in a table
- MEDIUM: stats: fill all the show info elements prior to displaying them
- MINOR: stats: add a function to emit fields into a chunk
- MINOR: stats: add stats_dump_info_fields() to dump one field per line
- MEDIUM: stats: make use of stats_dump_info_fields() for "show info"
- MINOR: stats: add a declaration of all stats fields
- MINOR: stats: don't hard-code the CSV fields list anymore
- MINOR: stats: create stats fields storage and CSV dump function
- MEDIUM: stats: convert stats_dump_fe_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_fe_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_li_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_li_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_be_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_be_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_sv_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_sv_stats() use the stats field for HTML
- MEDIUM: stats: move the server state coloring logic to the server dump function
- MINOR: stats: do not use srv->admin & STATS_ADMF_MAINT in HTML dumps
- MINOR: stats: do not check srv->state for SRV_ST_STOPPED in HTML dumps
- MINOR: stats: make CSV report server check status only when enabled
- MINOR: stats: only report backend's down time if it has servers
- MINOR: stats: prepend '*' in front of the check status when in progress
- MINOR: stats: make HTML stats dump rely on the table for the check status
- MINOR: stats: add agent_status, agent_code, agent_duration to output
- MINOR: stats: add check_desc and agent_desc to the output fields
- MINOR: stats: add check and agent's health values in the output
- MEDIUM: stats: make the HTML server state dump use the CSV states
- MEDIUM: stats: only report observe errors when observe is set
- MEDIUM: stats: expose the same flags for CLI and HTTP accesses
- MEDIUM: stats: report server's address in the CSV output
- MEDIUM: stats: report the cookie value in the server & backend CSV dumps
- MEDIUM: stats: compute the color code only in the HTML form
- MEDIUM: stats: report the listeners' address in the CSV output
- MEDIUM: stats: make it possible to report the WAITING state for listeners
- REORG: stats: dump the frontend's HTML stats via a generic function
- REORG: stats: dump the socket stats via the generic function
- REORG: stats: dump the server stats via the generic function
- REORG: stats: dump the backend stats via the generic function
- MEDIUM: stats: add a new "mode" column to report the proxy mode
- MINOR: stats: report the load balancing algorithm in CSV output
- MINOR: stats: add 3 fields to report the frontend-specific connection stats
- MINOR: stats: report number of intercepted requests for frontend and backends
- MINOR: stats: introduce stats_dump_one_line() to dump one stats line
- CLEANUP: stats: make stats_dump_fields_html() not rely on proxy anymore
- MINOR: stats: add ST_SHOWADMIN to pass the admin info in the regular flags
- MINOR: stats: make stats_dump_fields_html() not use &trash by default
- MINOR: stats: add functions to emit typed fields into a chunk
- MEDIUM: stats: support "show info typed" on the CLI
- MEDIUM: stats: implement a typed output format for stats
- DOC: document the "show info typed" and "show stat typed" output formats
- MINOR: cfgparse: warn when uid parameter is not a number
- MINOR: cfgparse: warn when gid parameter is not a number
- BUG/MINOR: standard: Avoid free of non-allocated pointer
- BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
- CLEANUP: http: fix a build warning introduced by a recent fix
- BUG/MINOR: log: GMT offset not updated when entering/leaving DST
GMT offset used in local time formats was computed at startup, but was not updated when DST status changed while running.
For example these two RFC5424 syslog traces were emitted 5 seconds apart, just before and after DST changed:
<14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...
<14>1 2016-03-27T03:00:03+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...
It looked like they were emitted more than 1 hour apart, unlike with the fix:
<14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 3381 - - Connect ...
<14>1 2016-03-27T03:00:03+02:00 bunch-VirtualBox haproxy 3381 - - Connect ...
This patch should be backported to 1.6 and partially to 1.5 (no fix needed in log.c).
The +E mode escapes characters '"', '\' and ']' with '\' as prefix. It
mostly makes sense to use it in the RFC5424 structured-data log formats.
Example:
log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]]
HTTP compression has been rewritten to use the filter API. This is more a PoC
than anything else for now. It allocates memory to work. So, if only for that, it
should be rewritten.
In the mean time, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:
- No filter placed after the compression one is allowed to change input data
(in 'http_data' callback).
- No filter placed before the compression one is allowed to change forwarded
data (in 'http_forward_data' callback).
For now, these limitations are informal, so you should be careful when you use
several filters.
About the configuration, 'compression' keywords are still supported and must be
used to configure the HTTP compression behavior. In the absence of a 'filter' line
for the compression filter, it is added in the filter chain when the first
'compression' line is parsed. This is an easy way to do it when you do not use other
filters. But if another filter exists, an error is reported so that the user must
explicitly declare the filter.
For example:
    listen tst
        ...
        compression algo gzip
        compression offload
        ...
        filter flt_1
        filter compression
        filter flt_2
        ...
This patch adds the support of filters in HAProxy. The main idea is to have a
way to "easely" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy behavior in a programmatic way.
To do so, many entry points has been added in code to let filters to hook up to
different steps of the processing. A filter must define a flt_ops sutrctures
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:
struct flt_ops {
/*
* Callbacks to manage the filter lifecycle
*/
int (*init) (struct proxy *p);
void (*deinit)(struct proxy *p);
int (*check) (struct proxy *p);
/*
* Stream callbacks
*/
void (*stream_start) (struct stream *s);
void (*stream_accept) (struct stream *s);
void (*session_establish)(struct stream *s);
void (*stream_stop) (struct stream *s);
/*
* HTTP callbacks
*/
int (*http_start) (struct stream *s, struct http_msg *msg);
int (*http_start_body) (struct stream *s, struct http_msg *msg);
int (*http_start_chunk) (struct stream *s, struct http_msg *msg);
int (*http_data) (struct stream *s, struct http_msg *msg);
int (*http_last_chunk) (struct stream *s, struct http_msg *msg);
int (*http_end_chunk) (struct stream *s, struct http_msg *msg);
int (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
int (*http_end_body) (struct stream *s, struct http_msg *msg);
void (*http_end) (struct stream *s, struct http_msg *msg);
void (*http_reset) (struct stream *s, struct http_msg *msg);
int (*http_pre_process) (struct stream *s, struct http_msg *msg);
int (*http_post_process) (struct stream *s, struct http_msg *msg);
void (*http_reply) (struct stream *s, short status,
const struct chunk *msg);
};
To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:
    frontend test
        ...
        filter <FILTER-NAME> [OPTIONS...]
The filter referenced by the <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in a backend section. But this is only a matter of
time. Documentation will also be added later. This is the first commit of a long
list about filters.
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.
The filter API has been highly refactored. Main changes are:
* Now, HA supports an infinite number of filters per proxy. To do so, filters
are stored in a list.
* Because filters are stored in a list, the filter state has been moved from the
channel structure to the filter structure. This is cleaner because there is no
more info about filters in the channel structure.
* It is possible to define filters on backends only. For such filters,
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
- tcp_data: it is called when new data are available or when old unprocessed
data are still waiting.
- tcp_forward_data: it is called when some data can be consumed.
* New callbacks attached to channel were added:
- channel_start_analyze: it is called when a filter is ready to process data
exchanged through a channel. 2 new analyzers (a frontend and a backend)
are attached to channels to call this callback. For a frontend filter, it
is called before any other analyzer. For a backend filter, it is called
when a backend is attached to a stream. So some processing cannot be
filtered in that case.
- channel_analyze: it is called before each analyzer attached to a channel,
except analyzers responsible for data sending.
- channel_end_analyze: it is called when all other analyzers have finished
their processing. A new analyzer is attached to channels to call this
callback. For a TCP stream, this is always the last one called. For an HTTP
one, the callback is called when a request/response ends, so it is called
once for each request/response.
* 'session_established' callback has been removed. Everything that is done in
this callback can be handled by 'channel_start_analyze' on the response
channel.
* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
'channel_analyze'.
* 'http_start' callback has been replaced by 'http_headers'. This new one is
called just before headers sending and parsing of the body.
* 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible to
do it for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE : HTTP_MSG_ENDING.
Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not forwarded yet
by the filter and parsed data considered as forwarded by the filter. A any time,
we have the warranty that a filter cannot parse or forward more input than
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros has been added to retrieve these offets: FLT_NXT and FLT_FWD.
In addition, 2 functions has been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effet on all
previous filters that must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alter forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.
Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to undefined
behavior.
The HTTP compression code had to be moved.
This simplifies the http_response_forward_body function. To do so, the way the data
are forwarded has changed. Now, a filter (and only one) can forward data. In a
commit to come, this limitation will be removed to let all filters take part in
data forwarding. There are 2 new functions that filters should use to deal with
this feature:
* flt_set_http_data_forwarder: This function sets the filter (using its id)
that will forward data for the specified HTTP message. It is possible if it
was not already set by another filter _AND_ if no data was yet forwarded
(msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: This function returns the filter id that will
forward data for the specified HTTP message. If there is no forwarder set, it
returns -1.
When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
When memmax is forced using "-m", the per-process memory limit is enforced
using setrlimit(), but this value is not used to compute the automatic
maxconn limit. In addition, the per-process memory limit didn't consider
the fact that the shared SSL cache only needs to be accounted once.
The doc was also fixed to clearly state that "-m" is global and not per
process. It makes sense because people who use -m want to protect the
system's resources regardless of whatever appears in the configuration.
In order to properly enable sched_setaffinity, some versions of Linux require
_GNU_SOURCE rather than __USE_GNU (spotted on Alpine Linux for instance).
This is also more consistent since __USE_GNU does not seem to be used anywhere
else in the code, and it seems to be the preferred way to enable non-portable
code on Linux.
On glibc-based Linux versions, _GNU_SOURCE defines __USE_GNU, so it should be
safe enough.
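For reference, the construct this enables looks like the following (a
standalone sketch):

    #define _GNU_SOURCE          /* must be defined before any libc header */
    #include <sched.h>

    static void pin_to_cpu0(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);
        sched_setaffinity(0, sizeof(set), &set);   /* pid 0 = current process */
    }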
It's pointless to reserve this amount of memory when zlib is not used.
Adding the condition will make build scripts easier to manage. This may
be backported to 1.6.
Causes HAProxy to emit a static string to the agent on every check,
so that you can independently control multiple services running
behind a single agent port.
It was accidentally discovered that limiting haproxy to 5000 MB leads to
an effective limit of 904 MB. This is because the computation for the
size limit is performed by multiplying rlimit_memmax by 1048576, and
doing so causes the operation to be performed on an int instead of a
long or long long. Just switch to 1048576ULL as is done at other places
to fix this.
This bug affects all supported versions, the backport is desired, though
it rarely affects users since few people apply memory limits.
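The issue and the fix can be summed up as follows (a sketch assuming a
platform with 32-bit int, where the wrap-around matches the figures above):

    int rlimit_memmax = 5000;    /* megabytes requested with -m */

    /* both operands are int, so the multiplication wraps around 2^32
     * before being widened: roughly 904 MB instead of 5000 MB */
    unsigned long long bad  = rlimit_memmax * 1048576;

    /* the ULL suffix forces 64-bit arithmetic and gives the expected limit */
    unsigned long long good = rlimit_memmax * 1048576ULL;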
HAProxy could already support being passed a file list on the command
line, by passing multiple times "-f" followed by a file name. People
have been complaining that it made it hard to pass file lists from init
scripts.
This patch introduces an end of arguments using the common "--" tag,
after which only file names may appear. These files are then added to
the existing list of other files specified using -f and are loaded in
their declaration order. Thus it becomes possible to do something like
this :
haproxy -sf $(pidof haproxy) -- /etc/haproxy/global.cfg /etc/haproxy/customers/*.cfg
Given that all command line arguments start with a '-' and that
no pid number can start with this character, there's no constraint
to make the pid list the last argument. Let's relax this rule.
Michael Ezzell reported a bug causing haproxy to segfault during startup
when trying to send a syslog message from Lua. The function __send_log() can
be called with *p that is NULL and/or when the configuration is not fully
parsed, as is the case with Lua.
This patch fixes this problem by using individual vectors instead of the
pre-generated strings log_htp and log_htp_rfc5424.
Also, this patch fixes a problem causing haproxy to write the wrong pid in
the logs -- the log_htp(_rfc5424) strings were generated at the haproxy
start, but "pid" value would be changed after haproxy is started in
daemon/systemd mode.
When peers are stopped due to not being running on the appropriate
process, we want to completely release them and unregister their signals
and task in order to ensure there's no way they may be called in the
future.
Note: ideally we should have a list of all tables attached to a peers
section being disabled in order to unregister them and void their
sync_task. It doesn't appear to be *that* easy for now.
This patch adds a new RFC5424-specific log-format for the structured-data
that is automatically sent by __send_log() when the sender is in RFC5424
mode.
A new statement "log-format-sd" should be used in order to set log-format
for the structured-data part in RFC5424 formatted syslog messages.
Example:
log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
The function __send_log() iterates over senders and passes the header as
the first vector to sendmsg(), thus it can send a logger-specific header
in each message.
A new logger arguments "format rfc5424" should be used in order to enable
RFC5424 header format. For example:
log 10.2.3.4:1234 len 2048 format rfc5424 local2 info
At the moment we have to call snprintf() for every log line just to
rebuild a constant. Thanks to sendmsg(), we send the message in 3 parts:
time-based header, proxy-specific hostname+log-tag+pid, session-specific
message.
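For illustration, the send path now looks roughly like this (a sketch; the
buffers, their lengths, the destination address and the socket are assumed to
be prepared elsewhere):

    struct iovec  iov[3];
    struct msghdr msg = { 0 };

    iov[0].iov_base = hdr;    iov[0].iov_len = hdr_len;    /* time-based header */
    iov[1].iov_base = tag;    iov[1].iov_len = tag_len;    /* hostname + log-tag + pid */
    iov[2].iov_base = line;   iov[2].iov_len = line_len;   /* session-specific message */

    msg.msg_name    = &logsrv_addr;
    msg.msg_namelen = sizeof(logsrv_addr);
    msg.msg_iov     = iov;
    msg.msg_iovlen  = 3;

    sendmsg(log_fd, &msg, MSG_DONTWAIT | MSG_NOSIGNAL);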
static_table_key, get_http_auth_buff and swap_buffer static variables
are now freed during deinit and the two functions introduced previously are
called as well. In addition, the 'trash' string buffer is cleared.
The tune.maxrewrite parameter used to be pre-initialized to half of
the buffer size since the very early days when buffers were very small.
It has grown to absurdly large values over the years to reach 8kB for a
16kB buffer. This prevents large requests from being accepted, which is
the opposite of the initial goal.
Many users fix it to 1024 which is already quite large for header
addition.
So let's change the default setting policy :
- pre-initialize it to 1024
- let the user tweak it
- in any case, limit it to tune.bufsize / 2
This results in 15kB usable to buffer HTTP messages instead of 8kB, and
doesn't affect existing configurations which already force it.
This is not a real run queue and we're facing ugly bugs because
of this : if an applet removes another applet from the queue,
typically the next one after itself, the list iterator loops
forever because the list's backup pointer is not valid anymore.
Before creating a run queue, let's rename this list.
This was the first transparent proxy technology supported by haproxy
circa 2005 but it was obsoleted in 2007 by Tproxy 4.0 which removed a
lot of the earlier versions' shortcomings and was finally merged into
the kernel. Since nobody has been using cttproxy for many years now
and nobody has even just tried to compile the files, it's time to
remove it. The doc was updated as well.
This patch is the first of a series which merges all the action structs. The
functions "tcp-request content", "tcp-response-content", "http-request" and
"http-response" have the same values and the same process for some defined
actions, but the struct and the prototype of the declared function are
different.
This patch tries to unify all of these entries.
This patch adds a few checks on "global._51degrees.data_file_path" and allows
haproxy to start even when the pattern or trie data file is not specified.
If the "51d" converter is used, a new function "_51d_conv_check" will check
"global._51degrees.data_file_path" and displays a warning if necessary.
In src/haproxy.c, the global 51Degrees "cache_size" has moved outside of the
FIFTYONEDEGREES_H_PATTERN_INCLUDED ifdef block.
This cache is used by 51d converter. The input User-Agent string, the
converter args and a random seed are used as a hashing key. The cached
entries contains a pointer to the resulting string for specific
User-Agent string detection.
The cache size can be tuned using 51degrees-cache-size parameter.
Moved 51Degrees code from src/haproxy.c, src/sample.c and src/cfgparse.c
into separate files src/51d.c and include/import/51d.h.
Added two new functions init_51degrees() and deinit_51degrees(), updated
Makefile and other code reorganizations related to 51Degrees.
Released version 1.6-dev2 with the following main changes :
- BUG/MINOR: ssl: Display correct filename in error message
- MEDIUM: logs: Add HTTP request-line log format directives
- BUG/MEDIUM: check: tcpcheck regression introduced by e16c1b3f
- BUG/MINOR: check: fix tcpcheck error message
- MINOR: use an int instead of calling tcpcheck_get_step_id
- MINOR: tcpcheck_rule structure update
- MINOR: include comment in tcpcheck error log
- DOC: tcpcheck comment documentation
- MEDIUM: server: add support for changing a server's address
- MEDIUM: server: change server ip address from stats socket
- MEDIUM: protocol: add minimalist UDP protocol client
- MEDIUM: dns: implement a DNS resolver
- MAJOR: server: add DNS-based server name resolution
- DOC: server name resolution + proto DNS
- MINOR: dns: add DNS statistics
- MEDIUM: http: configurable http result codes for http-request deny
- BUILD: Compile clean when debug options defined
- MINOR: lru: Add the possibility to free data when an item is removed
- MINOR: lru: Add lru64_lookup function
- MEDIUM: ssl: Add options to forge SSL certificates
- MINOR: ssl: Export functions to manipulate generated certificates
- MEDIUM: config: add DeviceAtlas global keywords
- MEDIUM: global: add the DeviceAtlas required elements to struct global
- MEDIUM: sample: add the da-csv converter
- MEDIUM: init: DeviceAtlas initialization
- BUILD: Makefile: add options to build with DeviceAtlas
- DOC: README: explain how to build with DeviceAtlas
- BUG/MEDIUM: http: fix the url_param fetch
- BUG/MEDIUM: init: segfault if global._51d_property_names is not initialized
- MAJOR: peers: peers protocol version 2.0
- MINOR: peers: avoid re-scheduling of pending stick-table's updates still not pushed.
- MEDIUM: peers: re-schedule stick-table's entry for sync when data is modified.
- MEDIUM: peers: support of any stick-table data-types for sync
- BUG/MAJOR: sample: regression on sample cast to stick table types.
- CLEANUP: deinit: remove codes for cleaning p->block_rules
- DOC: Fix L4TOUT typo in documentation
- DOC: set-log-level in Logging section preamble
- BUG/MEDIUM: compat: fix segfault on FreeBSD
- MEDIUM: check: include server address and port in the send-state header
- MEDIUM: backend: Allow redispatch on retry intervals
- MINOR: Add TLS ticket keys reference and use it in the listener struct
- MEDIUM: Add support for updating TLS ticket keys via socket
- DOC: Document new socket commands "show tls-keys" and "set ssl tls-key"
- MINOR: Add sample fetch which identifies if the SSL session has been resumed
- DOC: Update doc about weight, act and bck fields in the statistics
- BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
- MINOR: ssl: add a destructor to free allocated SSL ressources
- MEDIUM: ssl: add the possibility to use a global DH parameters file
- MEDIUM: ssl: replace standards DH groups with custom ones
- MEDIUM: stats: Add enum srv_stats_state
- MEDIUM: stats: Separate server state and colour in stats
- MEDIUM: stats: Only report drain state in stats if server has SRV_ADMF_DRAIN set
- MEDIUM: stats: Differentiate between DRAIN and DRAIN (agent)
- MEDIUM: Lower priority of email alerts for log-health-checks messages
- MEDIUM: Send email alerts when servers are marked as UP or enter the drain state
- MEDIUM: Document when email-alerts are sent
- BUG/MEDIUM: lua: bad argument number in analyser and in error message
- MEDIUM: lua: automatically converts strings in proxy, tables, server and ip
- BUG/MINOR: utf8: remove compilator warning
- MEDIUM: map: uses HAProxy facilities to store default value
- BUG/MINOR: lua: error in detection of mandatory arguments
- BUG/MINOR: lua: set current proxy as default value if it is possible
- BUG/MEDIUM: http: the action set-{method|path|query|uri} doesn't run.
- BUG/MEDIUM: lua: undetected infinite loop
- BUG/MAJOR: http: don't read past buffer's end in http_replace_value
- BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't respect the HTTP syntax
- MEDIUM/CLEANUP: http: rewrite and lighten http_transform_header() prototype
- BUILD: lua: it miss the '-ldl' directive
- MEDIUM: http: allows 'R' and 'S' in the protocol alphabet
- MINOR: http: split the function http_action_set_req_line() in two parts
- MINOR: http: split http_transform_header() function in two parts.
- MINOR: http: export function inet_set_tos()
- MINOR: lua: txn: add function set_(loglevel|tos|mark)
- MINOR: lua: create and register HTTP class
- DOC: lua: fix some typos
- MINOR: lua: add log functions
- BUG/MINOR: lua: Fix SSL initialisation
- DOC: lua: some fixes
- MINOR: lua: (req|res)_get_headers return more than one header value
- MINOR: lua: map system integration in Lua
- BUG/MEDIUM: http: functions set-{path,query,method,uri} breaks the HTTP parser
- MINOR: sample: add url_dec converter
- MEDIUM: sample: fill the struct sample with the session, proxy and stream pointers
- MEDIUM: sample change the prototype of sample-fetches and converters functions
- MINOR: sample: fill the struct sample with the options.
- MEDIUM: sample: change the prototype of sample-fetches functions
- MINOR: http: split the url_param in two parts
- CLEANUP: http: bad indentation
- MINOR: http: add body_param fetch
- MEDIUM: http: url-encoded parsing function can run throught wrapped buffer
- DOC: http: req.body_param documentation
- MINOR: proxy: custom capture declaration
- MINOR: capture: add two "capture" converters
- MEDIUM: capture: Allow capture with slot identifier
- MINOR: http: add array of generic pointers in http_res_rules
- MEDIUM: capture: adds http-response capture
- MINOR: common: escape CSV strings
- MEDIUM: stats: escape some strings in the CSV dump
- MINOR: tcp: add custom actions that can continue tcp-(request|response) processing
- MINOR: lua: Lua tcp action are not final action
- DOC: lua: schematics about lua socket organization
- BUG/MINOR: debug: display (null) in place of "meth"
- DOC: mention the "lua action" in documentation
- MINOR: standard: add function that converts signed int to a string
- BUG/MINOR: sample: wrong conversion of signed values
- MEDIUM: sample: Add type any
- MINOR: debug: add a special converter which display its input sample content.
- MINOR: tcp: increase the opaque data array
- MINOR: tcp/http/conf: extends the keyword registration options
- MINOR: build: fix build dependency
- MEDIUM: vars: adds support of variables
- MINOR: vars: adds get and set functions
- MINOR: lua: Variable access
- MINOR: samples: add samples which returns constants
- BUG/MINOR: vars/compil: fix some warnings
- BUILD: add 51degrees options to makefile.
- MINOR: global: add several 51Degrees members to global
- MINOR: config: add 51Degrees config parsing.
- MINOR: init: add 51Degrees initialisation code
- MEDIUM: sample: add fiftyone_degrees converter.
- MEDIUM: deinit: add cleanup for 51Degrees to deinit
- MEDIUM: sample: add trie support to 51Degrees
- DOC: add 51Degrees notes to configuration.txt.
- DOC: add build indications for 51Degrees to README.
- MEDIUM: cfgparse: introduce weak and strong quoting
- BUG/MEDIUM: cfgparse: incorrect memmove in quotes management
- MINOR: cfgparse: remove line size limitation
- MEDIUM: cfgparse: expand environment variables
- BUG/MINOR: cfgparse: fix typo in 'option httplog' error message
- BUG/MEDIUM: cfgparse: segfault when userlist is misused
- CLEANUP: cfgparse: remove reference to 'ruleset' section
- MEDIUM: cfgparse: check section maximum number of arguments
- MEDIUM: cfgparse: max arguments check in the global section
- MEDIUM: cfgparse: check max arguments in the proxies sections
- CLEANUP: stream-int: remove a redundant clearing of the linger_risk flag
- MINOR: connection: make conn_sock_shutw() actually perform the shutdown() call
- MINOR: stream-int: use conn_sock_shutw() to shutdown a connection
- MINOR: connection: perform the call to xprt->shutw() in conn_data_shutw()
- MEDIUM: stream-int: replace xprt->shutw calls with conn_data_shutw()
- MINOR: checks: use conn_data_shutw_hard() instead of call via xprt
- MINOR: connection: implement conn_sock_send()
- MEDIUM: stream-int: make conn_si_send_proxy() use conn_sock_send()
- MEDIUM: connection: make conn_drain() perform more controls
- REORG: connection: move conn_drain() to connection.c and rename it
- CLEANUP: stream-int: remove inclusion of fd.h that is not used anymore
- MEDIUM: channel: don't always set CF_WAKE_WRITE on bi_put*
- CLEANUP: lua: don't use si_ic/si_oc on known stream-ints
- BUG/MEDIUM: peers: correctly configure the client timeout
- MINOR: peers: centralize configuration of the peers frontend
- MINOR: proxy: store the default target into the frontend's configuration
- MEDIUM: stats: use frontend_accept() as the accept function
- MEDIUM: peers: use frontend_accept() instead of peer_accept()
- CLEANUP: listeners: remove unused timeout
- MEDIUM: listener: store the default target per listener
- BUILD: fix automatic inclusion of libdl.
- MEDIUM: lua: implement a simple memory allocator
- MEDIUM: compression: postpone buffer adjustments after compression
- MEDIUM: compression: don't send leading zeroes with chunk size
- BUG/MINOR: compression: consider the expansion factor in init
- MINOR: http: check the algo name "identity" instead of the function pointer
- CLEANUP: compression: statify all algo-specific functions
- MEDIUM: compression: add a distinction between UA- and config- algorithms
- MEDIUM: compression: add new "raw-deflate" compression algorithm
- MEDIUM: compression: split deflate_flush() into flush and finish
- CLEANUP: compression: remove unused reset functions
- MAJOR: compression: integrate support for libslz
- BUG/MEDIUM: http: hdr_cnt would not count any header when called without name
- BUG/MAJOR: http: null-terminate the http actions keywords list
- CLEANUP: lua: remove the unused hlua_sleep memory pool
- BUG/MAJOR: lua: use correct object size when initializing a new converter
- CLEANUP: lua: remove hard-coded sizeof() in object creations and mallocs
- CLEANUP: lua: fix confusing local variable naming in hlua_txn_new()
- CLEANUP: hlua: stop using variable name "s" alternately for hlua_txn and hlua_smp
- CLEANUP: lua: get rid of the last "*ht" for struct hlua_txn.
- CLEANUP: lua: rename last occurrences of "*s" to "*htxn" for hlua_txn
- CLEANUP: lua: rename variable "sc" for struct hlua_smp
- CLEANUP: lua: get rid of the last two "*hs" for hlua_smp
- REORG/MAJOR: session: rename the "session" entity to "stream"
- REORG/MEDIUM: stream: rename stream flags from SN_* to SF_*
- MINOR: session: start to reintroduce struct session
- MEDIUM: stream: allocate the session when a stream is created
- MEDIUM: stream: move the listener's pointer to the session
- MEDIUM: stream: move the frontend's pointer to the session
- MINOR: session: add a pointer to the session's origin
- MEDIUM: session: use the pointer to the origin instead of s->si[0].end
- CLEANUP: sample: remove useless tests in fetch functions for l4 != NULL
- MEDIUM: http: move header captures from http_txn to struct stream
- MINOR: http: create a dedicated pool for http_txn
- MAJOR: http: move http_txn out of struct stream
- MAJOR: sample: don't pass l7 anymore to sample fetch functions
- CLEANUP: lua: remove unused hlua_smp->l7 and hlua_txn->l7
- MEDIUM: http: remove the now useless http_txn from {req/res} rules
- CLEANUP: lua: don't pass http_txn anymore to hlua_request_act_wrapper()
- MAJOR: sample: pass a pointer to the session to each sample fetch function
- MINOR: stream: provide a few helpers to retrieve frontend, listener and origin
- CLEANUP: stream: don't set ->target to the incoming connection anymore
- MINOR: stream: move session initialization before the stream's
- MINOR: session: store the session's accept date
- MINOR: session: don't rely on s->logs.logwait in embryonic sessions
- MINOR: session: implement session_free() and use it everywhere
- MINOR: session: add stick counters to the struct session
- REORG: stktable: move the stkctr_* functions from stream to sticktable
- MEDIUM: streams: support looking up stkctr in the session
- MEDIUM: session: update the session's stick counters upon session_free()
- MEDIUM: proto_tcp: track the session's counters in the connection ruleset
- MAJOR: tcp: make tcp_exec_req_rules() only rely on the session
- MEDIUM: stream: don't call stream_store_counters() in kill_mini_session() nor session_accept()
- MEDIUM: stream: move all the session-specific stuff of stream_accept() earlier
- MAJOR: stream: don't initialize the stream anymore in stream_accept
- MEDIUM: session: remove the task pointer from the session
- REORG: session: move the session parts out of stream.c
- MINOR: stream-int: make appctx_new() take the applet in argument
- MEDIUM: peers: move the appctx initialization earlier
- MINOR: session: introduce session_new()
- MINOR: session: make use of session_new() when creating a new session
- MINOR: peers: make use of session_new() when creating a new session
- MEDIUM: peers: initialize the task before the stream
- MINOR: session: set the CO_FL_CONNECTED flag on the connection once ready
- CLEANUP: stream.c: do not re-attach the connection to the stream
- MEDIUM: stream: isolate connection-specific initialization code
- MEDIUM: stream: also accept appctx as origin in stream_accept_session()
- MEDIUM: peers: make use of stream_accept_session()
- MEDIUM: frontend: make ->accept only return +/-1
- MEDIUM: stream: return the stream upon accept()
- MEDIUM: frontend: move some stream initialisation to stream_new()
- MEDIUM: frontend: move the fd-specific settings to session_accept_fd()
- MEDIUM: frontend: don't restrict frontend_accept() to connections anymore
- MEDIUM: frontend: move some remaining stream settings to stream_new()
- CLEANUP: frontend: remove one useless local variable
- MEDIUM: stream: don't rely on the session's listener anymore in stream_new()
- MEDIUM: lua: make use of stream_new() to create an outgoing connection
- MINOR: lua: minor cleanup in hlua_socket_new()
- MINOR: lua: no need for setting timeouts / conn_retries in hlua_socket_new()
- MINOR: peers: no need for setting timeouts / conn_retries in peer_session_create()
- CLEANUP: stream-int: swap stream-int and appctx declarations
- CLEANUP: namespaces: fix protection against multiple inclusions
- MINOR: session: maintain the session count stats in the session, not the stream
- MEDIUM: session: adjust the connection flags before stream_new()
- MINOR: stream: pass the pointer to the origin explicitly to stream_new()
- CLEANUP: poll: move the conditions for waiting out of the poll functions
- BUG/MEDIUM: listener: don't report an error when resuming unbound listeners
- BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
- BUG/MAJOR: tcp/http: fix current_rule assignment when restarting over a ruleset
- BUG/MEDIUM: stream-int: always reset si->ops when si->end is nullified
- DOC: update the entities diagrams
- BUG/MEDIUM: http: properly retrieve the front connection
- MINOR: applet: add a new "owner" pointer in the appctx
- MEDIUM: applet: make the applet not depend on a stream interface anymore
- REORG: applet: move the applet definitions out of stream_interface
- CLEANUP: applet: rename struct si_applet to applet
- REORG: stream-int: create si_applet_ops dedicated to applets
- MEDIUM: applet: add basic support for an applet run queue
- MEDIUM: applet: implement a run queue for active appctx
- MEDIUM: stream-int: add a new function si_applet_done()
- MAJOR: applet: now call si_applet_done() instead of si_update() in I/O handlers
- MAJOR: stream: use a regular ->update for all stream interfaces
- MEDIUM: dumpstats: don't unregister the applet anymore
- MEDIUM: applet: centralize the call to si_applet_done() in the I/O handler
- MAJOR: stream: do not allocate request buffers anymore when the left side is an applet
- MINOR: stream-int: add two flags to indicate an applet's wishes regarding I/O
- MEDIUM: applet: make the applets only use si_applet_{cant|want|stop}_{get|put}
- MEDIUM: stream-int: pause the appctx if the task is woken up
- BUG/MAJOR: tcp: only call registered actions when they're registered
- BUG/MEDIUM: peers: fix applet scheduling
- BUG/MEDIUM: peers: recent applet changes broke peers updates scheduling
- MINOR: tools: provide an rdtsc() function for time comparisons
- IMPORT: lru: import simple ebtree-based LRU functions
- IMPORT: hash: import xxhash-r39
- MEDIUM: pattern: add a revision to all pattern expressions
- MAJOR: pattern: add LRU-based cache on pattern matching
- BUG/MEDIUM: http: remove content-length from chunked messages
- DOC: http: update the comments about the rules for determining transfer-length
- BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to HTTP/1.1
- BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad request
- BUG/MEDIUM: http: remove content-length form responses with bad transfer-encoding
- MEDIUM: http: restrict the HTTP version token to 1 digit as per RFC7230
- MEDIUM: http: disable support for HTTP/0.9 by default
- MEDIUM: http: add option-ignore-probes to get rid of the floods of 408
- BUG/MINOR: config: clear proxy->table.peers.p for disabled proxies
- MEDIUM: init: don't stop proxies in parent process when exiting
- MINOR: stick-table: don't attach to peers in stopped state
- MEDIUM: config: initialize stick-tables after peers, not before
- MEDIUM: peers: add the ability to disable a peers section
- MINOR: peers: store the pointer to the signal handler
- MEDIUM: peers: unregister peers that were never started
- MEDIUM: config: propagate the table's process list to the peers sections
- MEDIUM: init: stop any peers section not bound to the correct process
- MEDIUM: config: validate that peers sections are bound to exactly one process
- MAJOR: peers: allow peers section to be used with nbproc > 1
- DOC: relax the peers restriction to single-process
- DOC: document option http-ignore-probes
- DOC: fix the comments about the meaning of msg->sol in HTTP
- BUG/MEDIUM: http: wait for the exact amount of body bytes in wait_for_request_body
- BUG/MAJOR: http: prevent risk of reading past end with balance url_param
- MEDIUM: stream: move HTTP request body analyser before process_common
- MEDIUM: http: add a new option http-buffer-request
- MEDIUM: http: provide 3 fetches for the body
- DOC: update the doc on the proxy protocol
- BUILD: pattern: fix build warnings introduced in the LRU cache
- BUG/MEDIUM: stats: properly initialize the scope before dumping stats
- CLEANUP: config: fix misleading information in error message.
- MINOR: config: report the number of processes using a peers section in the error case
- BUG/MEDIUM: config: properly compute the default number of processes for a proxy
- MEDIUM: http: add new "capture" action for http-request
- BUG/MEDIUM: http: fix the http-request capture parser
- BUG/MEDIUM: http: don't forward client shutdown without NOLINGER except for tunnels
- BUILD/MINOR: ssl: fix build failure introduced by recent patch
- BUG/MAJOR: check: fix breakage of inverted tcp-check rules
- CLEANUP: checks: fix double usage of cur / current_step in tcp-checks
- BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
- CLEANUP: checks: simplify the loop processing of tcp-checks
- BUG/MAJOR: checks: always check for end of list before proceeding
- BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
- BUG/MAJOR: checks: break infinite loops when tcp-checks starts with comment
- MEDIUM: http: make url_param iterate over multiple occurrences
- BUG/MEDIUM: peers: apply a random reconnection timeout
- MEDIUM: config: reject invalid config with name duplicates
- MEDIUM: config: reject conflicts in table names
- CLEANUP: proxy: make the proxy lookup functions more user-friendly
- MINOR: proxy: simply ignore duplicates in proxy name lookups
- MINOR: config: don't open-code proxy name lookups
- MEDIUM: config: clarify the conflicting modes detection for backend rules
- CLEANUP: proxy: remove now unused function findproxy_mode()
- MEDIUM: stick-table: remove the now duplicate find_stktable() function
- MAJOR: config: remove the deprecated reqsetbe / reqisetbe actions
- MINOR: proxy: add a new function proxy_find_by_id()
- MINOR: proxy: add a flag to memorize that the proxy's ID was forced
- MEDIUM: proxy: add a new proxy_find_best_match() function
- CLEANUP: http: explicitly reference request in http_apply_redirect_rules()
- MINOR: http: prepare support for parsing redirect actions on responses
- MEDIUM: http: implement http-response redirect rules
- MEDIUM: http: no need to close the request on redirect if data was parsed
- BUG/MEDIUM: http: fix body processing for the stats applet
- BUG/MINOR: da: fix log-level comparison to emove annoying warning
- CLEANUP: global: remove one ifdef USE_DEVICEATLAS
- CLEANUP: da: move the converter registration to da.c
- CLEANUP: da: register the config keywords in da.c
- CLEANUP: adjust the envelope name in da.h to reflect the file name
- CLEANUP: da: remove ifdef USE_DEVICEATLAS from da.c
- BUILD: make 51D easier to build by defaulting to 51DEGREES_SRC
- BUILD: fix build warning when not using 51degrees
- BUILD: make DeviceAtlas easier to build by defaulting to DEVICEATLAS_SRC
- BUILD: ssl: fix recent build breakage on older SSL libs
Implementation of a DNS client in HAProxy to perform name resolution to
IP addresses.
It relies on the freshly created UDP client to perform the DNS
resolution. For now, all UDP socket calls are performed in the
DNS layer, but this might change later when the protocols are
extended to be more suited to datagram mode.
A new section called 'resolvers' is introduced thanks to this patch. It
is used to describe the DNS servers' IP addresses as well as several
parameters.
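A minimal sketch of such a section (addresses, names and timing values are
purely illustrative, and keyword names follow the 1.6 documentation):

    resolvers mydns
        nameserver dns1 10.0.0.1:53
        nameserver dns2 10.0.0.2:53
        resolve_retries 3
        timeout retry   1s
        hold valid      10s

    backend app
        server app1 app.example.com:80 check resolvers mydns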
With this patch, it is possible to configure HAProxy to forge the SSL
certificate sent to a client using the SNI servername. We do it in the SNI
callback.
To enable this feature, you must pass the following bind options:
* ca-sign-file <FILE> : This is the PEM file containing the CA certificate
and the CA private key used to create and sign the server certificates.
* (optionally) ca-sign-pass <PASS>: This is the CA private key passphrase, if
any.
* generate-certificates: Enable the dynamic generation of certificates for a
listener.
Because generating certificates is expensive, there is an LRU cache to store
them. Its size can be customized by setting the global parameter
'tune.ssl.ssl-ctx-cache-size'.
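For example (file names and the passphrase are illustrative), a listener
using this feature could be declared as follows; note that a default
certificate is still required on an 'ssl' bind line:

    global
        tune.ssl.ssl-ctx-cache-size 256

    frontend https-in
        bind :443 ssl crt /etc/haproxy/default.pem generate-certificates ca-sign-file /etc/haproxy/ca.pem ca-sign-pass mysecret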
When using the "51d" converter without specifying the list of 51Degrees
properties to detect (see parameter "51degrees-property-name-list"), the
"global._51d_property_names" could be left uninitialized which will lead to
segfault during init.
Since all rules listed in p->block_rules have been moved to the beginning of
the http-request rules in check_config_validity(), there is no need to clean
p->block_rules in deinit().
Signed-off-by: Godbach <nylzhaowei@gmail.com>
An ifdef was missing to avoid declaring these variables :
src/haproxy.c: In function 'deinit':
src/haproxy.c:1253:47: warning: unused variable '_51d_prop_nameb' [-Wunused-variable]
src/haproxy.c:1253:30: warning: unused variable '_51d_prop_name' [-Wunused-variable]
This diff initialises a few DeviceAtlas struct members with their
default values.
Furthermore, the specific DeviceAtlas configuration keywords are
registered, the module is initialised, and all necessary resources
are freed during the deinit phase.
These ones were already obsoleted in 1.4, marked for removal in 1.5,
and not documented anymore. They used to emit warnings, and do still
require quite some code to stay in place. Let's remove them now.
Until now, HAProxy needed to be restarted to change the TLS ticket
keys. With this patch, the TLS keys can be updated on a per-file
basis using the admin socket. Two new socket commands have been
introduced: "show tls-keys" and "set ssl tls-key".
Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
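For example (paths are illustrative), a bind line can reference a ticket key
file with the 'tls-ticket-keys' keyword, and the keys in that file can then
be rotated at runtime with the socket commands above:

    frontend https-in
        bind :443 ssl crt /etc/haproxy/site.pem tls-ticket-keys /etc/haproxy/tls_ticket_keys.txt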
This will prevent the peers section from remaining in listen state on
the incorrect process. The peers_fe pointer is set to NULL, which will
tell the peers task to commit suicide if it was already scheduled.
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.
Regarding the risk of collisions, 10k entries at 10k req/s mean a 1% risk
of a collision after 60 years, which is already much less than the memory's
reliability in most machines and more durable than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry of the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.
A test on a file containing 10k small regexes showed that the regex
matching was limited to 6k lookups per second instead of the 70k achieved
with regular strings. When enabling the LRU cache, the performance was
back to 70k/s.
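The cache size is expressed as a number of entries and is set in the global
section; setting it to zero disables the cache (the value below is only an
example):

    global
        tune.pattern.cache-size 10000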
The new function is called for each round of polling in order to call any
active appctx. For now we pick the stream interface from the appctx's
owner. At the moment there's no appctx queued yet, but we have everything
needed to queue them and remove them.
We have to allow 32 or 64 processes depending on the machine's word
size, and on 64-bit machines only the first 32 processes were properly
bound.
This fix should be backported to 1.5.
The poll() functions have become a bit dirty because they now check the
size of the signal queue, the FD cache and the number of tasks. It's not
their job, this must be moved to the caller. In the end it simplifies the
code because the expiration date is now set to now_ms if we must not wait,
and this achieves exactly the same result in a cleaner way. The change
looks large due to the change of indent for blocks which were inside an
"if" block.
This one will not necessarily be allocated for each stream, and we want
to use the fact that it is NULL to know it's not present so that we
can always deduce its presence from the stream pointer.
This commit only creates the new pool.
There is now a pointer to the session in the stream, which is NULL
for now. The session pool is created as well. Some parts will move
from the stream to the session now.
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
Thanks to MSIE/IIS, the "deflate" name is ambiguous. According to the RFC
it's a zlib-wrapped deflate stream, but IIS used to send only a raw deflate
stream, which is the only format MSIE understands for "deflate". The other
widely used browsers do support both formats. For this reason some people
prefer to emit a raw deflate stream on "deflate" to serve more users even
if that means violating the standards. Haproxy only follows the standard,
so they cannot do this.
This patch makes it possible to have one algorithm name in the configuration
and another one in the protocol. It paves the way for a new configuration
token adding a different algorithm, so that users can decide whether they
want a raw deflate stream or the standard one.
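For example (the content types are illustrative), the "raw-deflate"
algorithm added in this release relies on this mechanism: it is selected
under its own name in the configuration while it is meant to be advertised
as "deflate" to the client:

    backend app
        compression algo raw-deflate
        compression type text/html text/plain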
Released version 1.6-dev1 with the following main changes :
- CLEANUP: extract temporary $CFG to eliminate duplication
- CLEANUP: extract temporary $BIN to eliminate duplication
- CLEANUP: extract temporary $PIDFILE to eliminate duplication
- CLEANUP: extract temporary $LOCKFILE to eliminate duplication
- CLEANUP: extract quiet_check() to avoid duplication
- BUG/MINOR: don't start haproxy on reload
- DOC: Address issue where documentation is excluded due to a gitignore rule.
- BUG/MEDIUM: systemd: set KillMode to 'mixed'
- BUILD: fix "make install" to support spaces in the install dirs
- BUG/MINOR: config: http-request replace-header arg typo
- BUG: config: error in http-response replace-header number of arguments
- DOC: missing track-sc* in http-request rules
- BUILD: lua: missing ifdef related to SSL when enabling LUA
- BUG/MEDIUM: regex: fix pcre_study error handling
- MEDIUM: regex: Use pcre_study always when PCRE is used, regardless of JIT
- BUG/MINOR: Fix search for -p argument in systemd wrapper.
- MEDIUM: Improve signal handling in systemd wrapper.
- DOC: fix typo in Unix Socket commands
- BUG/MEDIUM: checks: external checks can't change server status to UP
- BUG/MEDIUM: checks: segfault with external checks in a backend section
- BUG/MINOR: checks: external checks shouldn't wait for timeout to return the result
- BUG/MEDIUM: auth: fix segfault with http-auth and a configuration with an unknown encryption algorithm
- BUG/MEDIUM: config: userlists should ensure that encrypted passwords are supported
- BUG/MINOR: config: don't propagate process binding for dynamic use_backend
- BUG/MINOR: log: fix request flags when keep-alive is enabled
- BUG/MEDIUM: checks: fix conflicts between agent checks and ssl healthchecks
- MINOR: checks: allow external checks in backend sections
- MEDIUM: checks: provide environment variables to the external checks
- MINOR: checks: update dynamic environment variables in external checks
- DOC: checks: environment variables used by "external-check command"
- BUG/MEDIUM: backend: correctly detect the domain when use_domain_only is used
- MINOR: ssl: load certificates in alphabetical order
- BUG/MINOR: checks: prevent http keep-alive with http-check expect
- MINOR: lua: typo in an error message
- MINOR: report the Lua version in -vv
- MINOR: lua: add a compilation error message when compiled with an incompatible version
- BUG/MEDIUM: lua: segfault when calling haproxy sample fetches from lua
- BUILD: try to automatically detect the Lua library name
- BUILD/CLEANUP: systemd: avoid a warning due to mixed code and declaration
- BUG/MEDIUM: backend: Update hash to use unsigned int throughout
- BUG/MEDIUM: connection: fix memory corruption when building a proxy v2 header
- MEDIUM: connection: add new bit in Proxy Protocol V2
- BUG/MINOR: ssl: rejects OCSP response without nextupdate.
- BUG/MEDIUM: ssl: Fix to not serve expired OCSP responses.
- BUG/MINOR: ssl: Fix OCSP resp update fails with the same certificate configured twice.
- BUG/MINOR: ssl: Fix external function in order not to return a pointer on an internal trash buffer.
- MINOR: add fetchs 'ssl_c_der' and 'ssl_f_der' to return DER formatted certs
- MINOR: ssl: add statement to force some ssl options in global.
- BUG/MINOR: ssl: correctly initialize ssl ctx for invalid certificates
- BUG/MEDIUM: ssl: fix bad ssl context init can cause segfault in case of OOM.
- BUG/MINOR: samples: fix unnecessary memcopy converting binary to string.
- MINOR: samples: adds the bytes converter.
- MINOR: samples: adds the field converter.
- MINOR: samples: add the word converter.
- BUG/MINOR: server: move the directive #endif to the end of file
- BUG/MAJOR: buffer: check the space left is enough or not when input data in a buffer is wrapped
- DOC: fix a few typos
- CLEANUP: epoll: epoll_events should be allocated according to global.tune.maxpollevents
- BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
- BUG/MINOR: parse: refer curproxy instead of proxy
- BUG/MINOR: parse: check the validity of size string in a more strict way
- BUILD: add new target 'make uninstall' to support uninstalling haproxy from OS
- DOC: expand the docs for the provided stats.
- BUG/MEDIUM: unix: do not unlink() abstract namespace sockets upon failure.
- MEDIUM: ssl: Certificate Transparency support
- MEDIUM: stats: proxied stats admin forms fix
- MEDIUM: http: Compress HTTP responses with status codes 201,202,203 in addition to 200
- BUG/MEDIUM: connection: sanitize PPv2 header length before parsing address information
- MAJOR: namespace: add Linux network namespace support
- MINOR: systemd: Check configuration before start
- BUILD: ssl: handle boringssl in openssl version detection
- BUILD: ssl: disable OCSP when using boringssl
- BUILD: ssl: don't call get_rfc2409_prime when using boringssl
- MINOR: ssl: don't use boringssl's cipher_list
- BUILD: ssl: use OPENSSL_NO_OCSP to detect OCSP support
- MINOR: stats: fix minor typo in HTML page
- MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
- MEDIUM: Add support for configurable TLS ticket keys
- DOC: Document the new tls-ticket-keys bind keyword
- DOC: clearly state that the "show sess" output format is not fixed
- MINOR: stats: fix minor typo fix in stats_dump_errors_to_buffer()
- DOC: httplog does not support 'no'
- BUG/MEDIUM: ssl: Fix a memory leak in DHE key exchange
- MINOR: ssl: use SSL_get_ciphers() instead of directly accessing the cipher list.
- BUG/MEDIUM: Consistently use 'check' in process_chk
- MEDIUM: Add external check
- BUG/MEDIUM: Do not set agent health to zero if server is disabled in config
- MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero
- MEDIUM: Remove connect_chk
- MEDIUM: Refactor init_check and move to checks.c
- MEDIUM: Add free_check() helper
- MEDIUM: Move proto and addr fields struct check
- MEDIUM: Attach tcpcheck_rules to check
- MEDIUM: Add parsing of mailers section
- MEDIUM: Allow configuration of email alerts
- MEDIUM: Support sending email alerts
- DOC: Document email alerts
- MINOR: Remove trailing '.' from email alert messages
- MEDIUM: Allow suppression of email alerts by log level
- BUG/MEDIUM: Do not consider an agent check as failed on L7 error
- MINOR: deinit: fix memory leak
- MINOR: http: export the function 'smp_fetch_base32'
- BUG/MEDIUM: http: tarpit timeout is reset
- MINOR: sample: add "json" converter
- BUG/MEDIUM: pattern: don't load more than once a pattern list.
- MINOR: map/acl/dumpstats: remove the "Done." message
- BUG/MAJOR: ns: HAProxy segfault if the cli_conn is not from a network connection
- BUG/MINOR: pattern: error message missing
- BUG/MEDIUM: pattern: some entries are not deleted with case insensitive match
- BUG/MINOR: ARG6 and ARG7 don't fit in a 32 bits word
- MAJOR: poll: only rely on wake_expired_tasks() to compute the wait delay
- MEDIUM: task: call session analyzers if the task is woken by a message.
- MEDIUM: protocol: automatically pick the proto associated to the connection.
- MEDIUM: channel: wake up any request analyzer on response activity
- MINOR: converters: add a "void *private" argument to converters
- MINOR: converters: give the session pointer as converter argument
- MINOR: sample: add private argument to the struct sample_fetch
- MINOR: global: export function and permits to not resolve DNS names
- MINOR: sample: add function for browsing samples.
- MINOR: global: export many symbols.
- MINOR: includes: fix a lot of missing or useless includes
- MEDIUM: tcp: add register keyword system.
- MEDIUM: buffer: make bo_putblk/bo_putstr/bo_putchk return the number of bytes copied.
- MEDIUM: http: change the code returned by the response processing rule functions
- MEDIUM: http/tcp: permit to resume http and tcp custom actions
- MINOR: channel: functions to get data from a buffer without copy
- MEDIUM: lua: lua integration in the build and init system.
- MINOR: lua: add ease functions
- MINOR: lua: add runtime execution context
- MEDIUM: lua: "com" signals
- MINOR: lua: add the configuration directive "lua-load"
- MINOR: lua: core: create "core" class and object
- MINOR: lua: post initialisation bindings
- MEDIUM: lua: add coroutine as tasks.
- MINOR: lua: add sample and args type converters
- MINOR: lua: txn: create class TXN associated with the transaction.
- MINOR: lua: add shared context in the lua stack
- MINOR: lua: txn: import existing sample-fetches in the class TXN
- MINOR: lua: txn: add lua function in TXN that returns an array of http headers
- MINOR: lua: register and execute sample-fetches in LUA
- MINOR: lua: register and execute converters in LUA
- MINOR: lua: add bindings for tcp and http actions
- MINOR: lua: core: add sleep functions
- MEDIUM: lua: socket: add "socket" class for TCP I/O
- MINOR: lua: core: pattern and acl manipulation
- MINOR: lua: channel: add "channel" class
- MINOR: lua: txn: object "txn" provides two objects "channel"
- MINOR: lua: core: can set the nice of the current task
- MINOR: lua: core: can yield an execution stack
- MINOR: lua: txn: add binding for closing the client connection.
- MEDIUM: lua: Lua initialisation "on demand"
- BUG/MAJOR: lua: send function fails and return bad bytes
- MINOR: remove unused declaration.
- MINOR: lua: remove some #define
- MINOR: lua: use bitfield and macro in place of integer and enum
- MINOR: lua: set skeleton for Lua execution expiration
- MEDIUM: lua: each yielding function returns a wake up time.
- MINOR: lua: adds "forced yield" flag
- MEDIUM: lua: interrupt the Lua execution for running other process
- MEDIUM: lua: change the sleep function core
- BUG/MEDIUM: lua: the execution timeout is ignored in yield case
- DOC: lua: Lua configuration documentation
- MINOR: lua: add the struct session in the lua channel struct
- BUG/MINOR: lua: set buffer if it is nnot avalaible.
- BUG/MEDIUM: lua: reset flags before resuming execution
- BUG/MEDIUM: lua: fix infinite loop about channel
- BUG/MEDIUM: lua: the Lua process is not waked up after sending data on requests side
- BUG/MEDIUM: lua: many errors when we try to send data with the channel API
- MEDIUM: lua: use the Lua-5.3 version of the library
- BUG/MAJOR: lua: some function are not yieldable, the forced yield causes errors
- BUG/MEDIUM: lua: can't handle the response bytes
- BUG/MEDIUM: lua: segfault with buffer_replace2
- BUG/MINOR: lua: check buffers before initializing socket
- BUG/MINOR: log: segfault if there are no proxy reference
- BUG/MEDIUM: lua: sockets don't have buffer to write data
- BUG/MEDIUM: lua: cannot connect socket
- BUG/MINOR: lua: sockets receive behavior doesn't follows the specs
- BUG/BUILD: lua: The strict Lua 5.3 version check is not done.
- BUG/MEDIUM: buffer: one byte miss in buffer free space check
- MEDIUM: lua: make the functions hlua_gethlua() and hlua_sethlua() faster
- MINOR: replace the Core object by a simple model.
- MEDIUM: lua: change the objects configuration
- MEDIUM: lua: create a namespace for the fetches
- MINOR: converters: add function to browse converters
- MINOR: lua: wrapper for converters
- MINOR: lua: replace function (req|get)_channel by a variable
- MINOR: lua: fetches and converters can return an empty string in place of nil
- DOC: lua api
- BUG/MEDIUM: sample: fix random number upper-bound
- BUG/MINOR: stats:Fix incorrect printf type.
- BUG/MAJOR: session: revert all the crappy client-side timeout changes
- BUG/MINOR: logs: properly initialize and count log sockets
- BUG/MEDIUM: http: fetch "base" is not compatible with set-header
- BUG/MINOR: counters: do not untrack counters before logging
- BUG/MAJOR: sample: correctly reinitialize sample fetch context before calling sample_process()
- MINOR: stick-table: make stktable_fetch_key() indicate why it failed
- BUG/MEDIUM: counters: fix track-sc* to wait on unstable contents
- BUILD: remove TODO from the spec file and add README
- MINOR: log: make MAX_SYSLOG_LEN overridable at build time
- MEDIUM: log: support a user-configurable max log line length
- DOC: provide an example of how to use ssl_c_sha1
- BUILD: checks: external checker needs signal.h
- BUILD: checks: kill a minor warning on Solaris in external checks
- BUILD: http: fix isdigit & isspace warnings on Solaris
- BUG/MINOR: listener: set the listener's fd to -1 after deletion
- BUG/MEDIUM: unix: failed abstract socket binding is retryable
- MEDIUM: listener: implement a per-protocol pause() function
- MEDIUM: listener: support rebinding during resume()
- BUG/MEDIUM: unix: completely unbind abstract sockets during a pause()
- DOC: explicitly mention the limits of abstract namespace sockets
- DOC: minor fix on {sc,src}_kbytes_{in,out}
- DOC: fix alphabetical sort of converters
- MEDIUM: stick-table: implement lookup from a sample fetch
- MEDIUM: stick-table: add new converters to fetch table data
- MINOR: samples: add two converters for the date format
- BUG/MAJOR: http: correctly rewind the request body after start of forwarding
- DOC: remove references to CPU=native in the README
- DOC: mention that "compression offload" is ignored in defaults section
- DOC: mention that Squid correctly responds 400 to PPv2 header
- BUILD: fix dependencies between config and compat.h
- MINOR: session: export the function 'smp_fetch_sc_stkctr'
- MEDIUM: stick-table: make it easier to register extra data types
- BUG/MINOR: http: base32+src should use the big endian version of base32
- MINOR: sample: allow IP address to cast to binary
- MINOR: sample: add new converters to hash input
- MINOR: sample: allow integers to cast to binary
- BUILD: report commit ID in git versions as well
- CLEANUP: session: move the stick counters declarations to stick_table.h
- MEDIUM: http: add the track-sc* actions to http-request rules
- BUG/MEDIUM: connection: fix proxy v2 header again!
- BUG/MAJOR: tcp: fix a possible busy spinning loop in content track-sc*
- OPTIM/MINOR: proxy: reduce struct proxy by 48 bytes on 64-bit archs
- MINOR: log: add a new field "%lc" to implement a per-frontend log counter
- BUG/MEDIUM: http: fix inverted condition in pat_match_meth()
- BUG/MEDIUM: http: fix improper parsing of HTTP methods for use with ACLs
- BUG/MINOR: pattern: remove useless allocation of unused trash in pat_parse_reg()
- BUG/MEDIUM: acl: correctly compute the output type when a converter is used
- CLEANUP: acl: cleanup some of the redundancy and spaghetti after last fix
- BUG/CRITICAL: http: don't update msg->sov once data start to leave the buffer
- MEDIUM: http: enable header manipulation for 101 responses
- BUG/MEDIUM: config: propagate frontend to backend process binding again.
- MEDIUM: config: properly propagate process binding between proxies
- MEDIUM: config: make the frontends automatically bind to the listeners' processes
- MEDIUM: config: compute the exact bind-process before listener's maxaccept
- MEDIUM: config: only warn if stats are attached to multi-process bind directives
- MEDIUM: config: report it when tcp-request rules are misplaced
- DOC: indicate in the doc that track-sc* can wait if data are missing
- MINOR: config: detect the case where a tcp-request content rule has no inspect-delay
- MEDIUM: systemd-wrapper: support multiple executable versions and names
- BUG/MEDIUM: remove debugging code from systemd-wrapper
- BUG/MEDIUM: http: adjust close mode when switching to backend
- BUG/MINOR: config: don't propagate process binding on fatal errors.
- BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
- BUG/MINOR: tcp-check: report the correct failed step in the status
- DOC: indicate that weight zero is reported as DRAIN
- BUG/MEDIUM: config: avoid skipping disabled proxies
- BUG/MINOR: config: do not accept more track-sc than configured
- BUG/MEDIUM: backend: fix URI hash when a query string is present
- BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
- BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
- BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
- BUILD/MINOR: ssl: de-constify "ciphers" to avoid a warning on openssl-0.9.8
- BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
- BUG/BUILD: revert accidental change in the makefile from latest SSL fix
- BUG/MEDIUM: ssl: force a full GC in case of memory shortage
- MEDIUM: ssl: add support for smaller SSL records
- MINOR: session: release a few other pools when stopping
- MINOR: task: release the task pool when stopping
- BUG/MINOR: config: don't inherit the default balance algorithm in frontends
- BUG/MAJOR: frontend: initialize capture pointers earlier
- BUG/MINOR: stats: correctly set the request/response analysers
- MAJOR: polling: centralize calls to I/O callbacks
- DOC: fix typo in the body parser documentation for msg.sov
- BUG/MINOR: peers: the buffer size is global.tune.bufsize, not trash.size
- MINOR: sample: add a few basic internal fetches (nbproc, proc, stopping)
- DEBUG: pools: apply poisonning on every allocated pool
- BUG/MAJOR: sessions: unlink session from list on out of memory
- BUG/MEDIUM: patterns: previous fix was incomplete
- BUG/MEDIUM: payload: ensure that a request channel is available
- BUG/MINOR: tcp-check: don't condition data polling on check type
- BUG/MEDIUM: tcp-check: don't rely on random memory contents
- BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
- BUG/MINOR: config: fix typo in condition when propagating process binding
- BUG/MEDIUM: config: do not propagate processes between stopped processes
- BUG/MAJOR: stream-int: properly check the memory allocation return
- BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
- BUG/MAJOR: namespaces: conn->target is not necessarily a server
- BUG/MEDIUM: compression: correctly report zlib_mem
- CLEANUP: lists: remove dead code
- CLEANUP: memory: remove dead code
- CLEANUP: memory: replace macros pool_alloc2/pool_free2 with functions
- MINOR: memory: cut pool allocator in 3 layers
- MEDIUM: memory: improve pool_refill_alloc() to pass a refill count
- MINOR: stream-int: retrieve session pointer from stream-int
- MINOR: buffer: reset a buffer in b_reset() and not channel_init()
- MEDIUM: buffer: use b_alloc() to allocate and initialize a buffer
- MINOR: buffer: move buffer initialization after channel initialization
- MINOR: buffer: only use b_free to release buffers
- MEDIUM: buffer: always assign a dummy empty buffer to channels
- MEDIUM: buffer: add a new buf_wanted dummy buffer to report failed allocations
- MEDIUM: channel: do not report full when buf_empty is present on a channel
- MINOR: session: group buffer allocations together
- MINOR: buffer: implement b_alloc_fast()
- MEDIUM: buffer: implement b_alloc_margin()
- MEDIUM: session: implement a basic atomic buffer allocator
- MAJOR: session: implement a wait-queue for sessions who need a buffer
- MAJOR: session: only allocate buffers when needed
- MINOR: stats: report a "waiting" flags for sessions
- MAJOR: session: only wake up as many sessions as available buffers permit
- MINOR: config: implement global setting tune.buffers.reserve
- MINOR: config: implement global setting tune.buffers.limit
- MEDIUM: channel: implement a zero-copy buffer transfer
- MEDIUM: stream-int: support splicing from applets
- OPTIM: stream-int: try to send pending spliced data
- CLEANUP: session: remove session_from_task()
- DOC: add missing entry for log-format and clarify the text
- MINOR: logs: add a new per-proxy "log-tag" directive
- BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
- MINOR: config: extend the default max hostname length to 64 and beyond
- BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
- BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
- MINOR: channel: add channel_in_transit()
- MEDIUM: channel: make buffer_reserved() use channel_in_transit()
- MEDIUM: channel: make bi_avail() use channel_in_transit()
- BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
- CLEANUP: channel: rename channel_reserved -> channel_is_rewritable
- MINOR: channel: rename channel_full() to !channel_may_recv()
- MINOR: channel: rename buffer_reserved() to channel_reserved()
- MINOR: channel: rename buffer_max_len() to channel_recv_limit()
- MINOR: channel: rename bi_avail() to channel_recv_max()
- MINOR: channel: rename bi_erase() to channel_truncate()
- BUG/MAJOR: log: don't try to emit a log if no logger is set
- MINOR: tools: add new round_2dig() function to round integers
- MINOR: global: always export some SSL-specific metrics
- MINOR: global: report information about the cost of SSL connections
- MAJOR: init: automatically set maxconn and/or maxsslconn when possible
- MINOR: http: add a new fetch "query" to extract the request's query string
- MINOR: hash: add new function hash_crc32
- MINOR: samples: provide a "crc32" converter
- MEDIUM: backend: add the crc32 hash algorithm for load balancing
- BUG/MINOR: args: add missing entry for ARGT_MAP in arg_type_names
- BUG/MEDIUM: http: make http-request set-header compute the string before removal
- MEDIUM: args: use #define to specify the number of bits used by arg types and counts
- MEDIUM: args: increase arg type to 5 bits and limit arg count to 5
- MINOR: args: add type-specific flags for each arg in a list
- MINOR: args: implement a new arg type for regex : ARGT_REG
- MEDIUM: regex: add support for passing regex flags to regex_exec_match()
- MEDIUM: samples: add a regsub converter to perform regex-based transformations
- BUG/MINOR: sample: fix case sensitivity for the regsub converter
- MEDIUM: http: implement http-request set-{method,path,query,uri}
- DOC: fix missing closing brackend on regsub
- MEDIUM: samples: provide basic arithmetic and bitwise operators
- MEDIUM: init: continue to enforce SYSTEM_MAXCONN with auto settings if set
- BUG/MINOR: http: fix incorrect header value offset in replace-hdr/replace-value
- BUG/MINOR: http: abort request processing on filter failure
- MEDIUM: tcp: implement tcp-ut bind option to set TCP_USER_TIMEOUT
- MINOR: ssl/server: add the "no-ssl-reuse" server option
- BUG/MAJOR: peers: initialize s->buffer_wait when creating the session
- MINOR: http: add a new function to iterate over each header line
- MINOR: http: add the new sample fetches req.hdr_names and res.hdr_names
- MEDIUM: task: always ensure that the run queue is consistent
- BUILD: Makefile: add -Wdeclaration-after-statement
- BUILD/CLEANUP: ssl: avoid a warning due to mixed code and declaration
- BUILD/CLEANUP: config: silent 3 warnings about mixed declarations with code
- MEDIUM: protocol: use a family array to index the protocol handlers
- BUILD: lua: cleanup many mixed occurrences declarations & code
- BUG/MEDIUM: task: fix recently introduced scheduler skew
- BUG/MINOR: lua: report the correct function name in an error message
- BUG/MAJOR: http: fix stats regression consecutive to HTTP_RULE_RES_YIELD
- Revert "BUG/MEDIUM: lua: can't handle the response bytes"
- MINOR: lua: convert IP addresses to type string
- CLEANUP: lua: use the same function names in C and Lua
- REORG/MAJOR: move session's req and resp channels back into the session
- CLEANUP: remove now unused channel pool
- REORG/MEDIUM: stream-int: introduce si_ic/si_oc to access channels
- MEDIUM: stream-int: add a flag indicating which side the SI is on
- MAJOR: stream-int: only rely on SI_FL_ISBACK to find the requested channel
- MEDIUM: stream-interface: remove now unused pointers to channels
- MEDIUM: stream-int: make si_sess() use the stream int's side
- MEDIUM: stream-int: use si_task() to retrieve the task from the stream int
- MEDIUM: stream-int: remove any reference to the owner
- CLEANUP: stream-int: add si_ib/si_ob to dereference the buffers
- CLEANUP: stream-int: add si_opposite() to find the other stream interface
- REORG/MEDIUM: channel: only use chn_prod / chn_cons to find stream-interfaces
- MEDIUM: channel: add a new flag "CF_ISRESP" for the response channel
- MAJOR: channel: only rely on the new CF_ISRESP flag to find the SI
- MEDIUM: channel: remove now unused ->prod and ->cons pointers
- CLEANUP: session: simplify references to chn_{prod,cons}(&s->{req,res})
- CLEANUP: session: use local variables to access channels / stream ints
- CLEANUP: session: don't needlessly pass a pointer to the stream-int
- CLEANUP: session: don't use si_{ic,oc} when we know the session.
- CLEANUP: stream-int: limit usage of si_ic/si_oc
- CLEANUP: lua: limit usage of si_ic/si_oc
- MINOR: channel: add chn_sess() helper to retrieve session from channel
- MEDIUM: session: simplify receive buffer allocator to only use the channel
- MEDIUM: lua: use CF_ISRESP to detect the channel's side
- CLEANUP: lua: remove the session pointer from hlua_channel
- CLEANUP: lua: hlua_channel_new() doesn't need the pointer to the session anymore
- MEDIUM: lua: remove struct hlua_channel
- MEDIUM: lua: remove hlua_sample_fetch
As with the other libraries used by haproxy, it can be useful to display the
Lua version used at compilation time.
A new line is added to "haproxy -vv", which shows whether Lua is supported by
the binary, and with which version it was compiled.
This system permits executing some Lua functions after HAProxy completes
its initialisation. These functions are executed between the end of the
configuration parsing and checking and the start of the scheduler.
This is the first step of the Lua integration. We add the useful
files to the HAProxy project. These files contain the main
includes, the Makefile options and an empty initialisation function.
It is the Lua skeleton.
Currently, HAProxy uses the functions "process_runnable_tasks" and
"wake_expired_tasks" to get the next task which can expire.
If a task is added with "task_schedule" or another method during
the execution of another task, the expiration of this new task
is not taken into account, and its execution can happen too late.
In practice, HAProxy does not seem to be sensitive to this bug.
This fix moves the call to process_runnable_tasks() before the timeout
calculation and ensures that all wakeups are processed together. Only
wake_expired_tasks() needs to return a timeout now.
Commit d025648 ("MAJOR: init: automatically set maxconn and/or maxsslconn
when possible") resulted in a case where if enough memory is available,
a maxconn value larger than SYSTEM_MAXCONN could be computed, resulting
in possibly overflowing other system resources (e.g. kernel socket buffers,
conntrack entries, etc). Let's bound any automatic maxconn to SYSTEM_MAXCONN
if it is defined. Note that the value is set to DEFAULT_MAXCONN since
SYSTEM_MAXCONN forces DEFAULT_MAXCONN, thus it is not an error.
This one will be used when a regex is expected. It is automatically
resolved after the parsing and compiled into a regex. Some optional
flags are supported in the type-specific flags that should be set by
the optional arg checker. One is used during the regex compilation :
ARGF_REG_ICASE to ignore case.
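As an illustration, this argument type is presumably what the new "regsub"
converter uses for its regex argument; a case-insensitive substitution (the
path pattern below is purely illustrative) could be written as:

    http-request set-path %[path,regsub(^/old/,/new/,i)]

where the 'i' flag requests the case-insensitive compilation described
above.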
If a memory size limit is enforced using "-m" on the command line and
one or both of maxconn / maxsslconn are not set, instead of using the
build-time values, haproxy now computes the number of sessions that can
be allocated depending on a number of parameters among which :
- global.maxconn (if set)
- global.maxsslconn (if set)
- maxzlibmem
- tune.ssl.cachesize
- presence of SSL in at least one frontend (bind lines)
- presence of SSL in at least one backend (server lines)
- tune.bufsize
- tune.cookie_len
The purpose is to ensure that haproxy will not run out of memory
when maxing out all parameters. If neither maxconn nor maxsslconn are
used, it will consider that 100% of the sessions involve SSL on sides
where it's supported. That means that it will typically optimize maxconn
for SSL offloading or SSL bridging on all connections. This generally
means that the simple act of enabling SSL in a frontend or in a backend
will significantly reduce the global maxconn but, in exchange, it
will guarantee that it will not fail.
All metrics may be enforced using #defines to accommodate variations in
SSL libraries or various allocation sizes.
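For example (the limit value is illustrative), starting haproxy with only a
memory limit and neither maxconn nor maxsslconn set:

    $ haproxy -m 512 -f /etc/haproxy/haproxy.cfg

lets it derive a maxconn compatible with the 512 MB limit, assuming full SSL
usage on the sides where SSL is configured.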
We've already experimented with three wake up algorithms when releasing
buffers : the first naive one used to wake up far too many sessions,
causing many of them not to get any buffer. The second approach which
was still in use prior to this patch consisted in waking up either 1
or 2 sessions depending on the number of FDs we had released. And this
was still inaccurate. The third one tried to cover the accuracy issues
of the second and took into consideration the number of FDs the sessions
would be willing to use, but most of the time we ended up waking up too
many of them for nothing, or deadlocking by lack of buffers.
This patch completely removes the need to allocate two buffers at once.
Instead it splits allocations into critical and non-critical ones and
implements a reserve in the pool for this. The deadlock situation happens
when all buffers are allocated for requests pending in a maxconn-limited
server queue, because then there's no more way to allocate buffers for
responses, and these responses are critical to release the server's
connection in order to release the pending requests. In fact maxconn on
a server creates a dependency between sessions and particularly between
the oldest session's responses and the latest session's requests. Thus, it
is mandatory to get a free buffer for a response in order to release a
server connection, which in turn permits releasing a request buffer.
Since we definitely have non-symmetrical buffers, we need to implement
this logic in the buffer allocation mechanism. What this commit does is
implement a reserve of buffers which can only be allocated for responses
and that will never be allocated for requests. This is made possible by
the requester indicating how much margin it wants to leave after the
allocation succeeds. Thus it is a cooperative allocation mechanism : the
requester (process_session() in general) prefers not to get a buffer in
order to respect the other sessions' need for response buffers. The session management
code always knows if a buffer will be used for requests or responses, so
that is not difficult :
- either there's an applet on the initiator side and we really need
the request buffer (since currently the applet is called in the
context of the session)
- or we have a connection and we really need the response buffer (in
order to support building and sending an error message back)
This reserve ensures that we don't take all allocatable buffers for
requests waiting in a queue. The downside is that the extra reserve buffers
are really allocated up front, just to guarantee that they can be allocated
when needed. But with small values it is not an issue.
With this change, we don't observe any more deadlocks even when running
with maxconn 1 on a server under severely constrained memory conditions.
The code becomes a bit tricky, it relies on the scheduler's run queue to
estimate how many sessions are already expected to run so that it doesn't
wake up everyone with too few resources. A better solution would probably
consist in having two queues, one for urgent requests and one for normal
requests. A failed allocation for a session dealing with an error, a
connection event, or the need for a response (or request when there's an
applet on the left) would go to the urgent request queue, while other
requests would go to the other queue. Urgent requests would be served
from 1 entry in the pool, while the regular ones would be served only
according to the reserve. Despite not yet having this, it works
remarkably well.
This mechanism is quite efficient, we don't perform too many wake up calls
anymore. For 1 million sessions elapsed during massive memory contention,
we observe about 4.5M calls to process_session() compared to 4.0M without
memory constraints. Previously we used to observe up to 16M calls, which
roughly means 12M failures.
During a test run under high memory constraints (limit enforced to 27 MB
instead of the 58 MB normally needed), performance used to drop by 53% prior
to this patch. Now with this patch instead it *increases* by about 1.5%.
The best effect of this change is that by limiting the memory usage to about
2/3 to 3/4 of what is needed by default, it's possible to increase performance
by up to about 18% mainly due to the fact that pools are reused more often
and remain hot in the CPU cache (observed on regular HTTP traffic with 20k
objects, buffers.limit = maxconn/10, buffers.reserve = limit/2).
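As a sketch of the kind of tuning used in the measurement above (assuming
the tune.buffers.* global keywords exposing these limits; the maxconn
figure is only an example) :
    global
        maxconn 20000
        tune.buffers.limit   2000   # buffers.limit = maxconn/10
        tune.buffers.reserve 1000   # buffers.reserve = limit/2, kept for responses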
Below is an example of scenario which used to cause a deadlock previously :
- connection is received
- two buffers are allocated in process_session() then released
- one is allocated when receiving an HTTP request
- the second buffer is allocated then released in process_session()
for request parsing then connection establishment.
- poll() says we can send, so the request buffer is sent and released
- process_session() gets notified that the connection is now established
and allocates two buffers then releases them
- all other sessions do the same till one cannot get the request buffer
without hitting the margin
- and now the server responds. stream_interface allocates the response
buffer and manages to get it since it's higher priority being for a
response.
- but process_session() cannot allocate the request buffer anymore
=> We could end up with all buffers used by responses so that none may
be allocated for a request in process_session().
When the applet processing leaves the session context, the test will have
to be changed so that we always allocate a response buffer regardless of
the left side (eg: H2->H1 gateway). A final improvement would consist in
being able to only retry the failed I/O operation without waking up a
task, but to date all experiments to achieve this have proven not to be
reliable enough.
This patch makes it possible to create binds and servers in separate
namespaces. This can be used to proxy between multiple completely independent
virtual networks (with possibly overlapping IP addresses) and a
non-namespace-aware proxy implementation that supports the proxy protocol (v2).
The setup is something like this:
net1 on VLAN 1 (namespace 1) -\
net2 on VLAN 2 (namespace 2) -- haproxy ==== proxy (namespace 0)
net3 on VLAN 3 (namespace 3) -/
The proxy is configured to make server connections through haproxy, sending
the expected source/target addresses to haproxy using the proxy protocol.
The network namespace setup on the haproxy node is something like this:
= 8< =
$ cat setup.sh
ip netns add 1
ip link add link eth1 type vlan id 1
ip link set eth1.1 netns 1
ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
ip netns exec 1 ip link set eth1.$id up
...
= 8< =
= 8< =
$ cat haproxy.cfg
frontend clients
bind 127.0.0.1:50022 namespace 1 transparent
default_backend scb
backend server
mode tcp
server server1 192.168.122.4:2222 namespace 2 send-proxy-v2
= 8< =
A bind line creates the listener in the specified namespace, and connections
originating from that listener also have their network namespace set to
that of the listener.
A server line either forces the connection to be made in a specified
namespace or may use the namespace from the client-side connection if that
was set.
For more documentation please read the documentation included in the patch
itself.
Signed-off-by: KOVACS Tamas <ktamas@balabit.com>
Signed-off-by: Sarkozi Laszlo <laszlo.sarkozi@balabit.com>
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
Google's boringssl doesn't have OPENSSL_VERSION_TEXT, SSLeay_version()
or SSLEAY_VERSION; in fact, it doesn't have any real versioning, it's
just git-based.
So in case we build against boringssl, we can't access those values.
Instead, we just inform the user that HAProxy was built against
boringssl.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
This patch removes all references to the standard regex API in haproxy.
The last remaining references are only in the regex.[ch] files.
In the file src/checks.c, the original function used a "pmatch" array.
In fact this array was unused, so this patch removes it.
This patch adds two new actions to http-request and http-response rulesets :
- replace-header : replace a whole header line, suited for headers
which might contain commas
- replace-value : replace a single header value, suited for headers
defined as lists.
The match consists in a regex, and the replacement string takes a log-format
and supports back-references.
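For example (a hedged sketch; the header names and regexes are made up for
the illustration) :
    # rewrite a whole Cookie line, which may contain commas :
    http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%[src];\2
    # rewrite a single element of the comma-delimited X-Forwarded-For list :
    http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1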
When no static DH parameters are specified, this patch makes haproxy
use standardized (rfc 2409 / rfc 3526) DH parameters with prime lengths
of 1024, 2048, 4096 or 8192 bits for DHE key exchange. The size of the
temporary/ephemeral DH key is computed as the minimum of the RSA/DSA server
key size and the value of a new option named tune.ssl.default-dh-param.
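For example, to make sure ephemeral DH keys never exceed 2048 bits even
with a larger server key (the figure is only an example) :
    global
        tune.ssl.default-dh-param 2048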
Using the systemd daemon mode the parent doesn't exit but waits for
its children without closing its listening sockets.
As Linux 3.9 introduced the SO_REUSEPORT option (always enabled in
haproxy if available), this causes unhandled connection problems
after an haproxy reload with open connections.
The problem is that when, on reload, a new parent is started (-Ds
$oldchildspids), haproxy.c's main calls start_proxies which, without
SO_REUSEPORT, should fail (as the old processes are already
listening), and so a SIGTTOU is sent to the old processes. On this
signal the old children will call (in pause_listener) a shutdown() on
the listening fd. From my tests (if I understand it correctly) this
affects the in-kernel file (so listening is really disabled for all
the processes, including the parent).
Instead, with SO_REUSEPORT, the call to start_proxies doesn't fail and
so SIGTTOU is never sent. Only SIGUSR1 is sent and listening isn't
disabled for the parent; only the children will stop listening (with
a call to close()).
So, with SO_REUSEPORT, the old children will close their listening
sockets but will wait for the current connections to finish or
time out, and, as their parent has its listening socket open, the
kernel will schedule some connections on it. These connections will
never be accepted by the parent as it's in the waitpid loop.
This fix will close all the listeners on the parent before entering the
waitpid loop.
Signed-off-by: Simone Gotti <simone.gotti@gmail.com>
Servers used to have 3 flags to store a state, now they have 4 states
instead. This avoids lots of confusion for the 4 remaining undefined
states.
The encoding from the previous to the new states can be represented
this way :
SRV_STF_RUNNING
| SRV_STF_GOINGDOWN
| | SRV_STF_WARMINGUP
| | |
0 x x SRV_ST_STOPPED
1 0 0 SRV_ST_RUNNING
1 0 1 SRV_ST_STARTING
1 1 x SRV_ST_STOPPING
Note that the case where all bits were set used to exist and was randomly
dealt with. For example, the task was not stopped, the throttle value was
still updated and reported in the stats and in the http_server_state header.
It was the same if the server was stopped by the agent or for maintenance.
It's worth noting that the internal function names are still quite confusing.
Till now, the server's state and flags were all saved as a single bit
field. It causes some difficulties because we'd like to have an enum
for the state and separate flags.
This commit starts by splitting them into two distinct fields. The first
one is srv->state (with its counterpart srv->prev_state); both are now
enums, but they still contain bits (SRV_STF_*).
The flags now lie in their own field (srv->flags).
The function srv_is_usable() was updated to use the enum as input, since
it already used to deal only with the state.
Note that currently, the maintenance mode is still in the state for
simplicity, but it must move as well.
Some consistency checks cannot be performed between frontends, backends
and peers at the moment because there is no way to check for intersections
between the sets of processes sections are bound to when the number of
processes is higher than the number of bits in a word.
So first, let's limit the number of processes to the machine's word size.
This means nbproc will be limited to 32 on 32-bit machines and 64 on 64-bit
machines. This is far more than enough considering that configs rarely go
above 16 processes due to scalability and management issues, so 32 or 64
should be fine.
This way we'll ensure we can always build a mask of all the processes a
section is bound to.
Since it became possible to use log-format expressions in use_backend,
having a mandatory condition becomes annoying because configurations
are full of "if TRUE". Let's relax the check to accept no condition
like many other keywords (eg: redirect).
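For example (a hedged sketch; the backend naming scheme is an assumption) :
    frontend fe_main
        bind :80
        # no more "if TRUE" : the backend name is a log-format expression
        use_backend bk_%[req.hdr(host),lower]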
When compiled with USE_GETADDRINFO, make sure we use getaddrinfo(3) to
perform name lookups. On default dual-stack setups this will change the
behavior, using IPv6 first. The global configuration option
'nogetaddrinfo' can be used to revert to the deprecated gethostbyname(3).
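For example (sketch) :
    global
        # force the historic gethostbyname(3) resolution even when
        # built with USE_GETADDRINFO
        nogetaddrinfo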
Pattern references are stored with two identifiers: the unique_id and
the reference.
The reference identifies a file. Each file with the same name points to
the same reference, so a single file can be registered many times. If
the file is modified, all of its dependencies are modified as well. The
reference can be used with map or acl.
The unique_id identifies an inline acl. The unique id is unique for each
acl; you cannot force the same id in the configuration file, as this
would report an error.
The format of the acl and map listing through the "socket" has changed
in order to display these new ids.
Sometimes it can be useful to generate a random value, at least
for debugging purposes, but also to take routing decisions or to
pass such a value to a backend server.
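For instance, a hedged sketch using the rand() sample fetch for a routing
decision (the backend names and the 10% ratio are assumptions) :
    frontend fe_main
        bind :80
        # send roughly 10% of the requests to a canary backend
        use_backend bk_canary if { rand(100) lt 10 }
        default_backend bk_main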
The ability to globally override the default client and server cipher
suites has been requested multiple times since the introduction of SSL.
This commit adds two new keywords to the global section for this :
- ssl-default-bind-ciphers
- ssl-default-server-ciphers
It is still possible to preset them at build time by setting the macros
LISTEN_DEFAULT_CIPHERS and CONNECT_DEFAULT_CIPHERS.
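For example (the cipher strings below are placeholders, not recommendations) :
    global
        ssl-default-bind-ciphers   ECDHE+AESGCM:ECDHE+AES256:!aNULL:!MD5
        ssl-default-server-ciphers ECDHE+AESGCM:ECDHE+AES256:!aNULL:!MD5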
The new tune.idletimer value allows one to set a different value for
idle stream detection. The default value remains set to one second.
It is possible to disable it using zero, and to change the default
value at build time using DEFAULT_IDLE_TIMER.
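For example (the 500 ms figure is only an illustration) :
    global
        tune.idletimer 500   # milliseconds; 0 disables idle stream detection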
Released version 1.5-dev22 with the following main changes :
- MEDIUM: tcp-check new feature: connect
- MEDIUM: ssl: Set verify 'required' as global default for servers side.
- MINOR: ssl: handshake optim for long certificate chains.
- BUG/MINOR: pattern: pattern comparison executed twice
- BUG/MEDIUM: map: segmentation fault with the stats's socket command "set map ..."
- BUG/MEDIUM: pattern: Segfault in binary parser
- MINOR: pattern: move functions for grouping pat_match_* and pat_parse_* and add documentation.
- MINOR: standard: The parse_binary() returns the length consumed and its documentation is updated
- BUG/MINOR: payload: the patterns of the acl "req.ssl_ver" are not parsed with the right function.
- BUG/MEDIUM: pattern: "pat_parse_dotted_ver()" set bad expect_type.
- BUG/MINOR: sample: The c_str2int converter does not fail if the entry is not an integer
- BUG/MEDIUM: http/auth: Sometimes the authentication credentials can be mixed up between two requests
- MINOR: doc: Bad cli function name.
- MINOR: http: smp_fetch_capture_header_* fetch captured headers
- BUILD: last release inadvertently prepended a "+" in front of the date
- BUG/MEDIUM: stream-int: fix the keep-alive idle connection handler
- BUG/MEDIUM: backend: do not re-initialize the connection's context upon reuse
- BUG: Revert "OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes"
- BUG/MINOR: checks: successful check completion must not re-enable MAINT servers
- MINOR: http: try to stick to same server after status 401/407
- BUG/MINOR: http: always disable compression on HTTP/1.0
- OPTIM: poll: restore polling after a poll/stop/want sequence
- OPTIM: http: don't stop polling for read on the client side after a request
- BUG/MEDIUM: checks: unchecked servers could not be enabled anymore
- BUG/MEDIUM: stats: the web interface must check the tracked servers before enabling
- BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
- BUG/MINOR: stream-int: do not clear the owner upon unregister
- MEDIUM: stats: add support for HTTP keep-alive on the stats page
- BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous patch
- Revert "MEDIUM: stats: add support for HTTP keep-alive on the stats page"
- MAJOR: channel: add a new flag CF_WAKE_WRITE to notify the task of writes
- OPTIM: session: set the READ_DONTWAIT flag when connecting
- BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between requests
- MINOR: session: factor out the connect time measurement
- MEDIUM: session: prepare to support earlier transitions to the established state
- MEDIUM: stream-int: make si_connect() return an established state when possible
- MINOR: checks: use an inline function for health_adjust()
- OPTIM: session: put unlikely() around the freewheeling code
- MEDIUM: config: report a warning when multiple servers have the same name
- BUG: Revert "OPTIM: poll: restore polling after a poll/stop/want sequence"
- BUILD/MINOR: listener: remove a glibc warning on accept4()
- BUG/MAJOR: connection: fix mismatch between rcv_buf's API and usage
- BUILD: listener: fix recent accept4() again
- BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
- BUG/MEDIUM: polling: ensure we update FD status when there's no more activity
- MEDIUM: listener: fix polling management in the accept loop
- MINOR: protocol: improve the proto->drain() API
- MINOR: connection: add a new conn_drain() function
- MEDIUM: tcp: report in tcp_drain() that lingering is already disabled on close
- MEDIUM: connection: update callers of ctrl->drain() to use conn_drain()
- MINOR: connection: add more error codes to report connection errors
- MEDIUM: tcp: report connection error at the connection level
- MEDIUM: checks: make use of chk_report_conn_err() for connection errors
- BUG/MEDIUM: unique_id: HTTP request counter is not stable
- DOC: fix misleading information about SIGQUIT
- BUG/MAJOR: fix freezes during compression
- BUG/MEDIUM: stream-interface: don't wake the task up before end of transfer
- BUILD: fix VERDATE exclusion regex
- CLEANUP: polling: rename "spec_e" to "state"
- DOC: add a diagram showing polling state transitions
- REORG: polling: rename "spec_e" to "state" and "spec_p" to "cache"
- REORG: polling: rename "fd_spec" to "fd_cache"
- REORG: polling: rename the cache allocation functions
- REORG: polling: rename "fd_process_spec_events()" to "fd_process_cached_events()"
- MAJOR: polling: rework the whole polling system
- MAJOR: connection: remove the CO_FL_WAIT_{RD,WR} flags
- MEDIUM: connection: remove conn_{data,sock}_poll_{recv,send}
- MEDIUM: connection: add check for readiness in I/O handlers
- MEDIUM: stream-interface: the polling flags must always be updated in chk_snd_conn
- MINOR: stream-interface: no need to call fd_stop_both() on error
- MEDIUM: connection: no need to recheck FD state
- CLEANUP: connection: use conn_ctrl_ready() instead of checking the flag
- CLEANUP: connection: use conn_xprt_ready() instead of checking the flag
- CLEANUP: connection: fix comments in connection.h to reflect new behaviour.
- OPTIM: raw-sock: don't speculate after a short read if polling is enabled
- MEDIUM: polling: centralize polled events processing
- MINOR: polling: create function fd_compute_new_polled_status()
- MINOR: cli: add more information to the "show info" output
- MEDIUM: listener: add support for limiting the session rate in addition to the connection rate
- MEDIUM: listener: apply a limit on the session rate submitted to SSL
- REORG: stats: move the stats socket states to dumpstats.c
- MINOR: cli: add the new "show pools" command
- BUG/MEDIUM: counters: flush content counters after each request
- BUG/MEDIUM: counters: fix stick-table entry leak when using track-sc2 in connection
- MINOR: tools: add very basic support for composite pointers
- MEDIUM: counters: stop relying on session flags at all
- BUG/MINOR: cli: fix missing break in command line parser
- BUG/MINOR: config: correctly report when log-format headers require HTTP mode
- MAJOR: http: update connection mode configuration
- MEDIUM: http: make keep-alive + httpclose be passive mode
- MAJOR: http: switch to keep-alive mode by default
- BUG/MEDIUM: http: fix regression caused by recent switch to keep-alive by default
- BUG/MEDIUM: listener: improve detection of non-working accept4()
- BUILD: listener: add fcntl.h and unistd.h
- BUG/MINOR: raw_sock: correctly set the MSG_MORE flag
If no CA file is specified on a server line, the config parser will show an error.
Adds a cmdline option '-dV' to re-set verify 'none' as the global default on
the server side (previous behavior).
Also adds 'ssl-server-verify' global statement to set global default to
'none' or 'required'.
WARNING: this changes the default verify mode from "none" to "required" on
the server side, and it *will* break insecure setups.
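For example, to restore the previous behavior globally (equivalent to
starting with -dV) :
    global
        ssl-server-verify none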
It's becoming increasingly difficult to ignore unwanted function returns in
debug code with gcc. Now even when you try to work around it, it suggests a
way to write your code differently. For example :
src/frontend.c:187:65: warning: if statement has empty body [-Wempty-body]
if (write(1, trash.str, trash.len) < 0) /* shut gcc warning */;
^
src/frontend.c:187:65: note: put the semicolon on a separate line to silence this warning
1 warning generated.
This is totally unacceptable, this code already had to be written this way
to shut it up in earlier versions. And now it comments on the form ? What's
the purpose of the C language if you can't write the code that does what
you want anymore ?
Emeric proposed to just keep a global variable to drain such useless results
so that gcc stops complaining all the time it believes people who write code
are monkeys. The solution is acceptable because the useless assignment is done
only in debug code so it will not impact performance. This patch implements
this, until gcc becomes even "smarter" to detect that we tried to cheat.