All includes that were not absolutely necessary were removed because
checks.h very often happens to be part of dependency loops. A warning
was added about this in check-t.h. The fields, enums and structs were
tidied a bit because it's particularly tedious to find anything there.
It would make sense to split this in two or more files (at least
extract tcp-checks).
The file was renamed to the singular because it was one of the rare
exceptions to have an "s" appended to its name compared to the struct
name.
The type file is becoming a mess: half of it is for the proxy protocol,
another good part describes conn_streams and mux ops, and it deserves
to be split again. At least it was reordered so that elements are easier
to find, with the PP-stuff left at the end. The MAX_SEND_FD macro was moved
to compat.h as it's said to be the value for Linux.
The TASK_IS_TASKLET() macro was moved to the proto file instead of the
type one. The proto part was reordered a bit to remove a number of ugly
forward declarations of static inline functions. About ten C and H
files had their dependency dropped since they were not using anything
from task.h.
global.h was one of the messiest files, it has accumulated tons of
implicit dependencies and declares many globals that make almost all
other files include it. It managed to silence a dependency loop between
server.h and proxy.h by being well placed to pre-define the required
structs, forcing struct proxy and struct server to be forward-declared
in a significant number of files.
It was split in two: one part with the global struct definition and the
few macros and flags, and the rest containing the function prototypes.
The UNIX_MAX_PATH definition was moved to compat.h.
There is no C file for this one, the code was placed into sample.c which
thus has a dependency on this file, which itself includes sample.h. It
would probably be wise to split that later.
This one is particularly tricky to move because everyone uses it
and it depends on a lot of other types. For example it cannot include
arg-t.h and must absolutely only rely on forward declarations to avoid
dependency loops between vars -> sample_data -> arg. In order to address
this one, it would be nice to split the sample_data part out of sample.h.
There's no type file, it only contains fetch_rdp_cookie_name() and
val_payload_lv() which probably ought to move somewhere else instead
of staying there.
It was moved as-is, except for the extern declaration of pattern_reference.
A few C files used to include it but didn't need it anymore after having
been split apart so this was cleaned.
One function prototype makes reference to struct mworker_proc which was
not defined there but in global.h instead. This definition, along with
the PROC_O_* fields, was moved to mworker-t.h instead.
The file mostly contained struct definitions but there was also a
variable export. Most of the stuff currently lies in checks.h and
should definitely move here!
The STATS_DEFAULT_REALM and STATS_DEFAULT_URI were moved to defaults.h.
It was required to include types/pattern.h and types/sample.h since they
are mentioned in function prototypes.
It would be wise to merge this with uri_auth.h later.
The list.h include was missing for LIST_ADDQ(). A few unneeded includes of action.h
were removed from certain files.
This one still relies on applet.h and stick-table.h.
A few includes had to be added, namely list-t.h in the type file and
types/proxy.h in the proto file. actions.h was including http-htx.h
but didn't need it so it was dropped.
The sink files could be moved with almost no change at all since they
didn't rely on anything fancy. ssize_t required sys/types.h and
thread.h was needed for the locks.
A few includes were missing in each file. A definition of
struct polled_mask was moved to fd-t.h. The MAX_POLLERS macro was
moved to defaults.h.
stdio.h used to be silently inherited from whatever include path but it's needed
for list_pollers() which takes a FILE* and which can thus not be
forward-declared.
And also rename standard.c to tools.c. The original split between
tools.h and standard.h dates from version 1.3-dev and was mostly an
accident. This patch moves the files back to what they were expected
to be, and takes care of not changing anything else. However this
time tools.h was split between functions and types, because it contains
a small number of commonly used macros and structures (e.g. name_desc)
which in turn cause the massive list of includes of tools.h to conflict
with the callers.
They remain the ugliest files of the whole project and definitely need
to be cleaned and split apart. A few types are defined there only for
functions provided there, and some parts are even OS-specific and should
move somewhere else, such as the symbol resolution code.
The protocol.h files are pretty low in the dependency chain and (sadly) used
by some files from common/. Almost nothing was changed except lifting a
few comments.
The file was moved almost verbatim (only stdio.h was dropped as useless).
It was not split between types and functions because it's only included
from direct C code (fcgi.c and mux_fcgi.c) as well as fcgi_app.h, included
from the same ones, which should also be remerged as a single one.
The various hpack files are self-contained, but hpack-tbl was one of
those showing difficulties when pools were added because that began
to add quite some dependencies. Now when built in standalone mode,
it still uses the bare minimum pool definitions and doesn't require
to know the prototypes anymore when only the structures are needed.
Thus the files were moved verbatim except for hpack-tbl which was
split between types and prototypes.
Most of the file was a large set of HTX element manipulation functions
and a few types, so splitting them allowed to further reduce dependencies
and shrink the build time. Doing so revealed that a few files (h2.c,
mux_pt.c) needed haproxy/buf.h and were previously getting it through
htx.h. They were fixed.
The file was moved as-is. There was a wrong dependency on dynbuf.h
instead of buf.h which was addressed. There was no benefit to
splitting this between types and functions.
There's only one struct and 2 inline functions. It could have been
merged into http.h but that would have added a massive dependency on
the hpack parts for nothing, so better keep it this way since hpack
is already freestanding and portable.
So the enums and structs were placed into http-t.h and the functions
into http.h. This revealed that several files were depending on http.h
but not including it, as it was silently inherited via other files.
The type is the only element needed by applet.h and hlua.h, while hlua.c
needs the various functions. XREF_BUSY was placed into the types as well
since it's better to have the special values there.
The regex headers are essentially included for myregex_t, but it turns
out that several of the C files didn't include them directly, relying on
the one included by their own .h. This has been cleanly addressed so
that only the type is included by the H files which need it, and by
adding the missing includes in the other ones.
The type was moved out as it's used by standard.h for netns_entry.
Instead of just being a forward declaration when not used, it's an
empty struct, which makes gdb happier (the resulting stripped executable
is the same).
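For illustration, the difference boils down to this (a sketch; the empty
definition relies on GNU C accepting empty structs):

  /* forward declaration only: the type stays opaque to the debugger */
  struct netns_entry;

  /* empty definition when namespaces are disabled: gdb can resolve the
   * type, and the resulting stripped executable remains identical
   */
  struct netns_entry {
  };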
The pretty confusing "buffer.h" was in fact not the place to look for
the definition of "struct buffer" but the one responsible for dynamic
buffer allocation. As such it defines the struct buffer_wait and the
few functions to allocate a buffer or wait for one.
This patch moves it, renaming it to dynbuf.h. The type definition was
moved to its own file since it's included in a number of other structs.
Doing this cleanup revealed that a significant number of files used to
rely on this one to inherit struct buffer through it but didn't need
anything from this file at all.
This moves types/activity.h to haproxy/activity-t.h and
proto/activity.h to haproxy/activity.h.
The macros defining the bit field values for the profiling variable
were moved to the type file to be more future-proof.
Now the file is ready to be stored into its final destination. A few
minor reorderings were performed to keep the file properly organized,
making the various sections more visible (cache & lockless).
In addition and to stay consistent, memory.c was renamed to pool.c.
Till now the local pool caches were implemented only when lockless pools
were in use. This was mainly due to the difficulty of disentangling the
code parts. However the locked pools would further benefit from the local
cache, and having this would reduce the variants in the code.
This patch does just this. It adds a new debug macro DEBUG_NO_LOCAL_POOLS
to forcefully disable local pool caches, and makes sure that the high
level functions are now strictly the same between locked and lockless
(pool_alloc(), pool_alloc_dirty(), pool_free(), pool_get_first()). The
pool index calculation was moved inside the CONFIG_HAP_LOCAL_POOLS guards.
This allowed to move them out of the giant #ifdef and to significantly
reduce the code duplication.
A quick perf test shows that with locked pools the performance increases
by roughly 10% on 8 threads and gets closer to the lockless one.
Just as for the allocation path, the release path was not symmetrical.
It was not logical to have pool_put_to_cache() free the objects itself,
it was pool_free's job. In addition, just because of a variable export
issue, the insertion of the object to free back into the local cache
couldn't be inlined even though it was very cheap.
This patch just slightly reorganizes this code path by making pool_free()
decide whether or not to put the object back into the cache via
pool_put_to_cache(), or to place it back into the global pool using
__pool_free(). Then pool_put_to_cache() adds the item to the local
cache and only calls pool_evict_from_cache() if the cache is too big.
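For illustration, the resulting release path can be sketched like this
(simplified: the predicates and the cache-insertion helper are hypothetical
names, and the real code also handles the debugging and locking variants):

  static inline void pool_put_to_cache(struct pool_head *pool, void *ptr)
  {
          pool_cache_insert(pool, ptr);      /* hypothetical helper: cheap
                                              * and now inlinable */
          if (pool_cache_too_big(pool))      /* hypothetical predicate */
                  pool_evict_from_cache(pool);
  }

  void pool_free(struct pool_head *pool, void *ptr)
  {
          if (ptr) {
                  if (pool_may_cache(pool))              /* hypothetical predicate */
                          pool_put_to_cache(pool, ptr);  /* keep it thread-local */
                  else
                          __pool_free(pool, ptr);        /* back to the global pool */
          }
  }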
The memory.h file is particularly complex due to the combination of
debugging options. This patch extracts the OS-level interface and
places it into a new file: pool-os.h.
Doing this also moves pool_alloc_area() and pool_free_area() out of
the #ifndef CONFIG_HAP_LOCKLESS_POOLS, making them usable from
__pool_refill_alloc(), pool_free(), pool_flush() and pool_gc() instead
of having direct calls to malloc/free there that are hard to wrap for
debugging purposes.
This is the beginning of the move and cleanup of memory.h. This first
step only extracts type definitions and basic macros that are needed
by the files which reference a pool. They're moved to pool-t.h (since
"pool" is more obvious than "memory" when looking for pool-related
stuff). 3 files which didn't need to include the whole memory.h were
updated.
In memory.h we had to reimplement the swrate* functions just because of
a broken circular dependency around freq_ctr.h. Now that this one is
solved, let's get rid of this copy and use the original ones instead.
types/freq_ctr.h was moved to haproxy/freq_ctr-t.h and proto/freq_ctr.h
was moved to haproxy/freq_ctr.h. Files were updated accordingly, no other
change was applied.
Some of them were simply removed as unused (possibly some leftovers
from an older cleanup session), some were turned to haproxy/bitops.h
and a few had to be added (hlua.c and stick-table.h need standard.h
for parse_time_err; htx.h requires chunk.h but used to get it through
standard.h).
This one is included almost everywhere and used to rely on a few other
.h that are not needed (unistd, stdlib, standard.h). It could possibly
make sense to split it into multiple parts to distinguish operations
performed on timers and the internal time accounting, but at this point
it does not appear very important.
This splits the hathreads.h file into types+macros and functions. Given
that most users of this file used to include it only to get the definition
of THREAD_LOCAL and MAXTHREADS, the bare minimum was placed into thread-t.h
(i.e. types and macros).
All the thread management was left to haproxy/thread.h. It's worth noting
the drop of the trailing "s" in the name, to remove the permanent confusion
that arises between this one and the system implementation (no "s") and the
makefile's option (no "s").
For consistency, src/hathreads.c was also renamed thread.c.
A number of files were updated to only include thread-t which is the one
they really needed.
Some future improvements are possible like replacing empty inlined
functions with macros for the thread-less case, as building at -O0 disables
inlining and causes these ones to be emitted. But this really is cosmetic.
A few files were including it while not needing it (anymore). Some
only required access to the atomic ops and got haproxy/atomic.h in
exchange. Others didn't need it at all. A significant number of
files still include it only for THREAD_LOCAL definition.
The hathreads.h file has quickly become a total mess because it contains
thread definitions, atomic operations and locking operations, all this for
multiple combinations of threads, debugging and architectures, and all this
done with random ordering!
This first patch extracts all the atomic ops code from hathreads.h to move
it to haproxy/atomic.h. The code there still contains several sections
based on non-thread vs thread, and GCC versions in the latter case. Each
section was arranged in the exact same order to ease finding.
The redundant HA_BARRIER() which was the same as __ha_compiler_barrier()
was dropped in favor of the latter which follows the naming convention
of all other barriers. It was only used in freq_ctr.c which was updated.
Additionally, __ha_compiler_barrier() was defined unconditionally but
used only for thread-related operations, so it was made thread-only like
HA_BARRIER() used to be. We'd still need to have two types of compiler
barriers, one for the general case (e.g. signals) and another one for
concurrency, but this was not addressed here.
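For reference, such a compiler-only barrier is classically written as
follows (a sketch, not necessarily the exact haproxy definition):

  static inline void __ha_compiler_barrier(void)
  {
          /* prevents the compiler from reordering loads/stores across this
           * point; no CPU fence instruction is emitted
           */
          __asm__ volatile("" ::: "memory");
  }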
Some comments were added at the beginning of each section to inform about
the use case and warn about the traps to avoid.
Some files which continue to include hathreads.h solely for atomic ops
should now be updated.
Half of the users of this include only need the type definitions and
not the manipulation macros nor the inline functions. Moving the various
types into mini-clist-t.h makes the files cleaner. The other one had all
its includes grouped at the top. A few files continued to reference it
without using it and were cleaned.
In addition it was about time that we'd rename that file, it's not
"mini" anymore and contains a bit more than just circular lists.
File buf.h is one common cause of pain in the dependencies. Many files in
the code need it to get the struct buffer definition, and a few also need
the inlined functions to manipulate a buffer, but the file used to depend
on a long chain only for BUG_ON() (addressed by last commit).
Now buf.h is split into buf-t.h which only contains the type definitions,
and buf.h for all inlined functions. Callers who don't care can continue
to use buf.h but files in types/ must only use buf-t.h. sys/types.h had
to be added to buf.h to get ssize_t as used by b_move(). It's worth noting
that ssize_t is only supposed to be a size_t supporting -1, so b_move()
ought to be rethought regarding this.
The files were moved to haproxy/ and all their users were updated
accordingly. A dependency issue was addressed on fcgi whose C file didn't
include buf.h.
This one used to be stored into debug.h but the debug tools got larger
and require a lot of other includes, which can't use BUG_ON() anymore
because of this. It does not make sense and instead this macro should
be placed into the lower includes and given its omnipresence, the best
solution is to create a new bug.h with the few surrounding macros needed
to trigger bugs and place assertions anywhere.
Another benefit is that it won't be required to add include <debug.h>
anymore to use BUG_ON, it will automatically be covered by api.h. No
less than 32 occurrences were dropped.
The FSM_PRINTF macro was dropped since not used at all anymore (probably
since 1.6 or so).
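The core of such a macro can be sketched as follows (an assumed shape; the
real ones offer more variants and richer reporting):

  #include <stdio.h>
  #include <stdlib.h>

  /* crash immediately with some context when <cond> is true */
  #define BUG_ON(cond) do {                                  \
          if (__builtin_expect(!!(cond), 0)) {               \
                  fprintf(stderr, "BUG at %s:%d: %s\n",      \
                          __FILE__, __LINE__, #cond);        \
                  abort();                                   \
          }                                                  \
  } while (0)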
Fortunately that file wasn't made dependent upon haproxy since it was
integrated, better isolate it before it's too late. Its dependency on
api.h was the result of the change from config.h, which in turn wasn't
correct. It was changed back to stddef.h for size_t and sys/types.h for
ssize_t. The recently added reference to MAX() was changed as it was
placed only to avoid a zero length in the non-free-standing version and
was causing a build warning in the hpack encoder.
This file is to openssl what compat.h is to the libc, so it makes sense
to move it to haproxy/. It could almost be part of api.h but given the
amount of openssl stuff that gets loaded I fear it could increase the
build time.
Note that this file contains lots of inlined functions. But since it
does not depend on anything else in haproxy, it remains safe to keep
all that together.
All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:
- common/config.h
- common/compat.h
- common/compiler.h
- common/defaults.h
- common/initcall.h
- common/tools.h
The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.
In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.
No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
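The resulting convention can be illustrated like this:

  /* in a types-only header (e.g. foo-t.h): */
  #include <haproxy/api-t.h>   /* compiler, compat, defaults, initcall types */

  /* in a header providing inline functions, or in a .c file: */
  #include <haproxy/api.h>     /* the above plus the functional parts */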
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
No further cleanup was done yet in order to keep changes minimal.
As part of the include files cleanup, we're going to kill the ebtree
directory. For this we need to host its C files in a different location
and src/ is the right one.
Fix a trash buffer leak when we can't take the lock of the ckch, or when
"set ssl cert" is wrongly used.
The bug was mentioned in this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg37539.html
The bug was introduced by commit bc6ca7c ("MINOR: ssl/cli: rework 'set
ssl cert' as 'set/commit'").
Must be backported to 2.1.
When HAProxy is started with a '--' option, all following parameters are
considered configuration files. You can't add new options after a '--'.
The current reload system of the master-worker adds extra options at the
end of the arguments list, which is a problem if HAProxy was started with
'--'.
This patch fixes the issue by copying the new option at the beginning of
the arguments list instead of the end.
This patch must be backported as far as 1.8.
There is no reason the -S option can't take an argument which starts with
a -. This limitation must only be used for options that take a
non-finite list of parameters (-sf/-st).
This can be backported only if the previous patch which fixes
copy_argv() is backported too.
Could be backported as far as 1.9.
There is no reason the -x option can't take an argument which starts with
a -. This limitation must only be used for options that take a
non-finite list of parameters (-sf/-st).
This can be backported only if the previous patch which fixes
copy_argv() is backported too.
Could be backported as far as 1.8.
The copy_argv() function, which is used to copy and remove some of the
arguments of the command line in order to re-exec() the master process,
is poorly implemented.
The function tries to remove the -x and the -sf/-st options but without
taking into account that some of the options could take a parameter
starting with a dash.
In issue #644, haproxy starts with "-L -xfoo" which is perfectly
correct. However, the re-exec is done without "-xfoo" because the master
tries to remove the "-x" option. Indeed, the copy_argv() function does
not know how many arguments an option can have, and just assumes that
everything starting with a dash is an option. So haproxy is exec() with
"-L" but without a parameter, which is wrong and leads to the exit of
the master, with usage().
To fix this issue, copy_argv() must know how many parameters an option
takes, and copy or skip the parameters correctly.
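The core of the fixed loop can be sketched like this (simplified, with
assumed variable names):

  while (i < argc) {
          if (strcmp(argv[i], "-x") == 0) {
                  i += 2;                 /* skip "-x" and its single parameter */
          }
          else if (strcmp(argv[i], "-sf") == 0 || strcmp(argv[i], "-st") == 0) {
                  i++;                    /* skip the option itself... */
                  while (i < argc && argv[i][0] != '-')
                          i++;            /* ...and its non-finite PID list */
          }
          else
                  newargv[n++] = argv[i++];   /* keep everything else as-is */
  }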
This fix is a first step but it should evolve to a cleaner way of
declaring the options to avoid duplication of the parsing code, so we
avoid new bugs.
Should be backported with care as far as 1.8, by removing the options
that do not exist in the previous versions.
When checking the config validity of the http-check rulesets, the test on the
ruleset type is inverted. So a warning about ignored directive is emitted when
the config is valid and omitted when it should be reported.
No backport needed.
The pattern references lock must be held to perform set/add/del
operations. Unfortunately, this is not true for the lua functions manipulating acl
and map files.
This patch should fix the issue #664. It must be backported as far as 1.8.
Before executing a lua action, the analyse expiration timeout of the
corresponding channel must be reset. Otherwise, when it expires, for instance
because of a call to core.msleep(), if the action yields, an expired timeout
will be used for the stream's task, leading to a loop.
This patch should fix the issue #661. It must be backported in all versions
supporting Lua.
By default, HAProxy is able to implicitly upgrade an H1 client connection to an
H2 connection if the first request it receives from a given HTTP connection
matches the HTTP/2 connection preface. This way, it is possible to support H1
and H2 clients on non-SSL connections. It could be a problem if, for any
reason, the H2 upgrade is not acceptable. "option disable-h2-upgrade" may now be
used to disable it, per proxy. The main purpose of this option is to let an
admin totally disable the H2 support for security reasons. Recently, a
critical issue in the HPACK decoder was fixed, forcing everyone to upgrade their
HAProxy version to fix the bug. It is possible to disable H2 for SSL
connections, but not on clear ones. This option would have been a viable
workaround.
This reverts commit 4fed93eb72.
This commit was simplifying the certificate chain loading with
SSL_CTX_add_extra_chain_cert() which is available in all SSL libraries.
Unfortunately this function is not compatible with the
multi-certificates bundles, which have the effect of concatenating the
chains of all certificate types instead of creating a chain for each
type (RSA, ECDSA etc.)
Should fix issue #655.
The support for reqrep and friends was removed in 2.1 but the
chain_regex() function and the "action" field in the regex struct
was still there. This patch removes them.
One point worth mentioning though. There is a check_replace_string()
function whose purpose was to validate the replacement strings passed
to reqrep. It should also be used for other replacement regex, but is
never called. Callers of exp_replace() should be checked and a call to
this function should be added to detect the error early.
Network types were directly and mistakenly mapped on sample types.
This patch fixes the doc with the values effectively used to keep backward
compatibility with existing implementations.
In addition it adds an internal/network mapping for key types to avoid
further mistakes when adding or modifying internal types.
This patch should be backported on all maintained branches,
particularly until v1.8 included for documentation part.
The recently added ring section post-processing added this benign
warning on 32-bit archs:
src/sink.c: In function 'cfg_post_parse_ring':
src/sink.c:994:15: warning: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'size_t {aka unsigned int}' [-Wformat=]
ha_warning("ring '%s' event max length '%u' exceeds size, forced to size '%lu'.\n",
^
Let's just cast b_size() to unsigned long here.
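That is, something along these lines (illustrative, variable names assumed):

  ha_warning("ring '%s' event max length '%u' exceeds size, forced to size '%lu'.\n",
             ring_name, maxlen, (unsigned long)b_size(buf));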
Since 8177ad9 ("MINOR: ssl: split config and runtime variable for
ssl-{min,max}-ver"), the dump for ssl-min-ver and ssl-max-ver is fixed,
so we can remove the comment.
Using ssl-max-ver without ssl-min-ver is ambiguous.
When the ssl-min-ver is not configured, and ssl-max-ver is set to a
value lower than the default ssl-min-ver (which is TLSv1.2 currently),
set the ssl-min-ver to the value of ssl-max-ver, and emit a warning.
log-proto <logproto>
  The "log-proto" specifies the protocol used to forward event messages to
  a server configured in a ring section. Possible values are "legacy"
  and "octet-count" corresponding respectively to "Non-transparent-framing"
  and "Octet counting" in rfc6587. "legacy" is the default.

Note: a separate io_handler was created to avoid per-message tests
and to prepare the code to set different log protocols such as
request-response based ones.
This patch adds a new statement "server" into the ring section, and the
related "timeout connect" and "timeout server".

server <name> <address> [param*]
  Used to configure a syslog tcp server to forward messages from ring buffer.
  This supports all "server" parameters found in section 5.2. Some of these
  parameters are irrelevant for "ring" sections.

timeout connect <timeout>
  Set the maximum time to wait for a connection attempt to a server to succeed.

  Arguments :
    <timeout> is the timeout value specified in milliseconds by default, but
              can be in any other unit if the number is suffixed by the unit,
              as explained at the top of this document.

timeout server <timeout>
  Set the maximum time for pending data staying into output buffer.

  Arguments :
    <timeout> is the timeout value specified in milliseconds by default, but
              can be in any other unit if the number is suffixed by the unit,
              as explained at the top of this document.

Example:
  global
    log ring@myring local7

  ring myring
    description "My local buffer"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    server mysyslogsrv 127.0.0.1:6514
Commit 04f5fe87d3 introduced an rwlock in the pools to deal with the risk
that pool_flush() dereferences an area being freed, and commit 899fb8abdc
turned it into a spinlock. The pools already contain a spinlock in case of
locked pools, so let's use the same and simplify the code by removing ifdefs.
At this point I'm really suspecting that if pool_flush() would instead
rely on __pool_get_first() to pick entries from the pool, the concurrency
problem could never happen since only one user would get a given entry at
once, thus it could not be freed by another user. It's not certain this
would be faster however because of the number of atomic ops to retrieve
one entry compared to a locked batch.
Since HAProxy 1.8, the TLS default minimum version was set to TLSv1.0 to
avoid using the deprecated SSLv3.0. Since then, the standard changed and
the recommended TLS version is now TLSv1.2.
This patch changes the minimum default version to TLSv1.2 on bind lines.
If you need to use prior TLS version, this is still possible by
using the ssl-min-ver keyword.
When a tcpcheck ruleset is created, it is automatically inserted in a global
tree. Unfortunately for applicative health checks (redis, mysql...), the created
ruleset is inserted a second time during the directive parsing. The leads to a
infinite loop when haproxy is stopped when we try to scan the tree to release
all tcpcheck rulesets.
Now, only the function responsible to create the tcpcheck ruleset insert it into
the tree.
No backport needed.
In issue #657, Coverity found a bug in the "nameserver" parser for the
resolv.conf when "parse-resolv-conf" is set. What happens is that if an
unparsable address appears on a "nameserver" line, it will destroy the
previously allocated pointer before reporting the warning, then the next
"nameserver" line will dereference it again and wlil cause a crash. If
the faulty nameserver is the last one, it will only be a memory leak.
Let's just make sure we preserve the pointer when handling the error.
The patch also fixes a typo in the warning.
The bug was introduced in 1.9 with commit 44e609bfa ("MINOR: dns:
Implement `parse-resolv-conf` directive") so the fix needs to be backported
up to 1.9 or 2.0.
This patch removes all trailing LFs and Zeros from
log messages. Previously only the last LF was removed.
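The trimming now amounts to something like this (a sketch, names assumed):

  /* strip all trailing LFs and NULs instead of only the last LF */
  while (size && (message[size - 1] == '\n' || message[size - 1] == '\0'))
          size--;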
It's a regression from e8ea0ae6f6 "BUG/MINOR: logs:
prevent double line returns in some events."
This should fix github issue #654
In hlua_stktable_lookup(), the key length is never set so all
stktable:lookup("key") calls return nil from lua.
This patch must be backported as far as 1.9.
[Cf: I slightly updated the patch to use lua_tolstring() instead of
luaL_checkstring() + strlen()]
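The fix pattern looks like this (a sketch; the sample field names are
assumed from the 2.x API):

  size_t len;
  const char *key = lua_tolstring(L, 2, &len);  /* value and length in one call */

  smp.data.type = SMP_T_STR;
  smp.data.u.str.area = (char *)key;
  smp.data.u.str.data = len;                    /* the length previously left unset */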
Most of the code in the event_srv_chk_io() function is inherited from the checks before
the recent refactoring. Now, it is enough to only call wake_srv_chk(). Since the
refactoring, the removed code is dead and never called. wake_srv_chk() may only
return 0 if tcpcheck_main() returns 0 and the check status is unknown
(CHK_RES_UNKNOWN). When this happens, nothing is performed in event_srv_chk_io().
When a health check is waiting for a connection establishment, it subscribes to
receive or send events, depending on the next rule to be evaluated. For
subscriptions to send events, there is no problem, it works as expected. For
subscriptions to receive events, it only works for HTTP checks because the
underlying multiplexer takes care to do a receive before subscribing again,
updating the fd state. For TCP checks, the PT multiplexer only forwards
subscriptions at the transport layer. So the check itself is woken up. This
leads to a subscribe/notify loop waiting the connection establishment or a
timeout, uselessly eating CPU.
Thus, when a check is waiting for a connect, instead of blindly resubscribing to
receive events when it is woken up, we now try to receive data.
This patch should fix the issue #635. No backport needed.
HTTP_1XX, HTTP_3XX and HTTP_4XX message templates are no longer used. Only
HTTP_302 and HTTP_303 are used during configuration parsing by "errorloc" family
directives. So these templates are removed from the generic http code. And
HTTP_302 and HTTP_303 templates are moved as static strings in the function
parsing "errorloc" directives.
Now http-request auth rules are evaluated in a dedicated function and no longer
handled "in place" during the HTTP rules evaluation. Thus the action name
ACT_HTTP_REQ_AUTH is removed. In addition, http_reply_40x_unauthorized() is
also removed. This part is now handled in the new action_ptr callback function.
There is no reason to not use proxy's error replies to emit 401/407
responses. The function http_reply_40x_unauthorized(), responsible to emit those
responses, is not really complex. It only adds a
WWW-Authenticate/Proxy-Authenticate header to a generic message.
So now, error replies can be defined for 401 and 407 status codes, using
errorfile or http-error directives. When an http-request auth rule is evaluated,
the corresponding error reply is used. For 401 responses, all occurrences of the
WWW-Authenticate header are removed and replaced by a new one with a basic
authentication challenge for the configured realm. For 407 responses, the same
is done on the Proxy-Authenticate header. If the error reply must not be
altered, "http-request return" rule must be used instead.
This adds a sliding estimate of the pools' usage. The goal is to be able
to use this to start to more aggressively free memory instead of keeping
lots of unused objects in pools. The average is calculated as a sliding
average over the last 1024 consecutive measures of ->used during calls to
pool_free(), and is bumped up for 1/4 of its history from ->allocated when
allocation from the pool fails and results in a call to malloc().
The result is a floating value between ->used and ->allocated, that tries
to react fast to under-estimates that result in expensive malloc() but
still maintains itself well in case of stable usage, and progressively
goes down if usage shrinks over time.
This new metric is reported as "needed_avg" in "show pools".
Sadly due to yet another include dependency hell, we couldn't reuse the
functions from freq_ctr.h so they were temporarily duplicated into memory.h.
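The principle of such a sliding average can be sketched as follows
(simplified, non-atomic version of the swrate_* functions):

  #define POOL_AVG_SAMPLES 1024

  /* adds <v> to the sliding sum <sum> over <n> samples: each call evicts
   * roughly 1/n of the previous sum, so sum/n converges to the average of
   * about the last n measures
   */
  static inline unsigned int swrate_add(unsigned int *sum, unsigned int n,
                                        unsigned int v)
  {
          return *sum = *sum - (*sum + n - 1) / n + v;
  }

  /* returns the average over the last <n> samples */
  static inline unsigned int swrate_avg(unsigned int sum, unsigned int n)
  {
          return (sum + n - 1) / n;
  }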
In connect_server(), if we can't create the mux immediately because we have
to wait until the alpn is negotiated, store the session as the connection's
owner. conn_create_mux() expects it to be set, and provides it to the mux
init() method. Failure to do so will result in crashes later if the
connection is private, and even when it doesn't crash it would prevent connection
reuse for private connections.
This should fix github issue #651.
When a PROXY protocol line must be sent, it is important to always get the
stream if it exists. It is mandatory to send a unique ID when the unique-id
option is enabled. In conn_si_send_proxy(), to get the stream, we first retrieve
the conn-stream attached to the backend connection. Then if the conn-stream data
callback is si_conn_cb, it is possible to get the stream. But for now, it only
works for connections with a multiplexer. Thus, for mux-less connections, the
unique ID is never sent. This happens for all SSL connections relying on the
alpn to choose the right multiplexer. But it is possible to use the context of
such connections to get the conn-stream.
The bug was introduced by the commit cf6e0c8a8 ("MEDIUM: proxy_protocol: Support
sending unique IDs using PPv2"). Thus, this patch must be backported to the same
versions as the commit above.
It is possible to send a unique ID when the PROXY protocol v2 is used. It relies
on the stream to do so. So we must be sure to have a stream. Locally initiated
connections may not be linked to a stream. For instance, outgoing connections
created by health checks have no stream. Moreover, the stream is not retrieved
for mux-less connections (this bug will be fixed in another commit).
Unfortunately, in make_proxy_line_v2() function, the stream is not tested before
generating the unique-id. This bug leads to a segfault when a health check is
performed for a server with the PROXY protocol v2 and the unique-id option
enabled. It also crashes for servers using SSL connections with alpn. The bug
was introduced by the commit cf6e0c8a8 ("MEDIUM: proxy_protocol: Support sending
unique IDs using PPv2")
This patch should fix the issue #640. It must be backported to the same versions
as the commit above.
During a health check execution, the conn-stream and the connection may only be
NULL before the evaluation of the first rule, which is always a connect, or if
the first conn-stream allocation failed. Thus, in tcpcheck_main(), useless tests
on the conn-stream or on the connection have been removed. A comment has been
added to make it clear.
No backport needed.
When a connect rule is evaluated, the conn-stream and the connection must be
refreshed in tcpcheck_main(). Otherwise, in case of synchronous connect, these
variables point on outdated values (NULL for the first connect or released
otherwise).
No backport needed.
It is possible to globally declare ring-buffers, to be used as target for log
servers or traces.

ring <ringname>
  Creates a new ring-buffer with name <ringname>.

description <text>
  The description is an optional description string of the ring. It will
  appear on CLI. By default, <name> is reused to fill this field.

format <format>
  Format used to store events into the ring buffer.

  Arguments:
    <format> is the log format used when generating syslog messages. It may be
             one of the following :

      iso       A message containing only the ISO date, followed by the text.
                The PID, process name and system name are omitted. This is
                designed to be used with a local log server.

      raw       A message containing only the text. The level, PID, date, time,
                process name and system name are omitted. This is designed to
                be used in containers or during development, where the severity
                only depends on the file descriptor used (stdout/stderr). This
                is the default.

      rfc3164   The RFC3164 syslog message format.
                (https://tools.ietf.org/html/rfc3164)

      rfc5424   The RFC5424 syslog message format.
                (https://tools.ietf.org/html/rfc5424)

      short     A message containing only a level between angle brackets such
                as '<3>', followed by the text. The PID, date, time, process
                name and system name are omitted. This is designed to be used
                with a local log server. This format is compatible with what
                the systemd logger consumes.

      timed     A message containing only a level between angle brackets such
                as '<3>', followed by ISO date and by the text. The PID,
                process name and system name are omitted. This is designed to
                be used with a local log server.

maxlen <length>
  The maximum length of an event message stored into the ring, including
  formatted header. If an event message is longer than <length>, it will be
  truncated to this length.

size <size>
  This is the optional size in bytes for the ring-buffer. Default value is
  set to BUFSIZE.

Example:
  global
    log ring@myring local7

  ring myring
    description "My local buffer"
    format rfc3164
    maxlen 1200

Note: ring names are resolved during post configuration processing.
As discussed in GitHub issue #624 Lua scripts should not use
variables that are never going to be read, because the memory
for variable names is never going to be freed.
Add an optional `ifexist` parameter to the `set_var` function
that allows a Lua developer to set variables that are going to
be ignored if the variable name was not used elsewhere before.
Usually this means that there is no `var()` sample fetch for the
variable in question within the configuration.
With "MINOR: lua: Use vars_unset_by_name_ifexist()" the last user was
removed and as outlined in that commit there is no good reason for this
function to exist.
May be backported together with the commit mentioned above.
There is no good reason to register a variable name, just to unset
that value that could not even be set without the variable existing.
This change should be safe, may be backported if desired.
With the checks refactoring, all connections are performed into a tcp-check
ruleset. So, in process_chk_conn(), it is no longer required to check
synchronous error on the connect when a health check is started. This part is
now handled in tcpcheck_main(). And because it is impossible to start a health
check with a connection, all tests on the connection during the health check
startup can be safely removed.
This patch should fix the issue #636. No backport needed.
When "hdr" arguments of an http reply are parsed, the allocated header may leak
on error path. Adding it to the header list earlier fixes the issue.
This patch should partly fix the issue #645.
No backport needed.
The http reply must be released in the function responsible to release it. This
leak was introduced when the http return was refactored to use http reply.
This patch should partly fix the issue #645.
No backport needed.
A bug in the PROXY protocol v2 implementation was present in HAProxy up to
version 2.1, causing it to emit a PROXY command instead of a LOCAL command
for health checks. This is particularly minor but confuses some servers'
logs. Sadly, the bug was discovered very late and revealed that some servers
which possibly only tested their PROXY protocol implementation against
HAProxy fail to properly handle the LOCAL command, and permanently remain in
the "down" state when HAProxy checks them. When this happens, it is possible
to enable this global option to revert to the older (bogus) behavior for the
time it takes to contact the affected components' vendors and get them fixed.
This option is disabled by default and acts on all servers having the
"send-proxy-v2" statement.
Older versions were reverted to the old behavior and should not attempt to
be fixed by default again. However a variant of this patch could possibly
be implemented to ask to explicitly send LOCAL if needed by some servers.
More context here:
https://www.mail-archive.com/haproxy@formilux.org/msg36890.html
https://www.mail-archive.com/haproxy@formilux.org/msg37218.html
When a check port or a check address is specified, the check transport layer is
ignored. So it is impossible to do an SSL check in this case. This bug was
introduced by the commit 8892e5d30 ("BUG/MEDIUM: server/checks: Init server
check during config validity check").
This patch should fix the issue #643. It must be backported to all branches
where the above commit was backported.
The http-error directive can now be used instead of errorfile to define an error
message in a proxy section (including default sections). This directive uses the
same syntax as http return rules. The only real difference is the limitation
on status code that may be specified. Only status codes supported by errorfile
directives are supported for this new directive. Parsing of errorfile directive
remains independent from http-error parsing. But functionally, it may be
expressed in terms of http-errors :
errorfile <status> <file> ==> http-error status <status> errorfile <file>
When an error response is sent to a client, the write of the http reply in the
channel buffer and its sending are performed in different functions. The
http_reply_to_htx() function is used to write an http reply in HTX message. This
way, it could be possible to use the http replies in a different context.
The htx_copy_msg() function can now be used to copy the HTX message stored in a
buffer in an existing HTX message. It takes care to not overwrite existing
data. If the destination message is empty, a raw copy is performed. Either
the whole message is copied, or nothing at all.
This function is used instead of channel_htx_copy_msg().
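Typical usage looks like this (a sketch; the exact return convention is
assumed):

  struct htx *htx = htx_from_buf(&res->buf);  /* destination HTX message */

  if (!htx_copy_msg(htx, &reply_buf))         /* source buffer holding an HTX msg */
          goto fail;                          /* not enough room: nothing was copied */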
When HAProxy returns an http error message, the corresponding http reply is now
used instead of the buffer containing the corresponding HTX message. So,
http_error_message() function now returns the http reply to use for a given
stream. And the http_reply_and_close() function now relies on
http_reply_message() to send the response to the client.
The txn flag TX_CONST_REPLY may now be used to prevent after-response ruleset
evaluation. It is used if this ruleset evaluation failed on an internal error
response. Before, it was done incrementing the parameter <final>. But it is not
really convenient if an intermediary function is used to produce the
response. Using a txn flag could also be a good way to prevent after-response
ruleset evaluation in a different context.
When an http reply is configured to use an error message from an http-errors
section, instead of referencing the error message, the http reply is used. To do
so the new http reply type HTTP_REPLY_INDIRECT has been added.
Error messages defined in proxy section or inherited from a default section are
now also referenced using an array of http replies. This is done during the
configuration validity check.
During configuration parsing, error messages resulting of parsing of errorloc
and errorfile directives are now also stored as an http reply. So, for now,
these messages are stored as a buffer and as an http reply. To be able to
release all these http replies when haproxy is stopped, a global list is
used. We must do that because the same http reply may be referenced several
times by different proxies if it is defined in a default section.
Error messages specified in an http-errors section are now also stored in an
array of http replies. So, for now, these messages are stored as a buffer and as
an http reply.
Default error messages are stored as a buffer, in http_err_chunks global array.
Now, they are also stored as an http reply, in the http_err_replies global array.
"http-request deny", "http-request tarpit" and "http-response deny" rules now
use the same syntax as http return rules and internally rely on the http
replies. The behaviour is not the same when no argument is specified (or only
the status code). For http replies, a dummy response is produced, with no
payload. For old deny/tarpit rules, the proxy's error messages are used. Thus,
to be compatible with existing configuration, the "default-errorfiles" parameter
is implied. For instance :
http-request deny deny_status 404
is now an alias of
http-request deny status 404 default-errorfiles
The http_reply_message() function may be used to send an http reply to a
client. This function is responsible to convert the reply in HTX, to push it in
the response buffer and to forward it to the client. It is also responsible to
terminate the transaction.
This function is used during evaluation of http return rules.
A dedicated function is added to check the validity of an http reply object,
after parsing. It is used to check the validity of http return rules.
For now, this function is only used to find the right error message in an
http-errors section for http replies of type HTTP_REPLY_ERRFILES (using
"errorfiles" argument). On success, such replies are updated to point on the
corresponding error message and their type is set to HTTP_REPLY_ERRMSG. If an
unknown http-errors section is referenced, an error is returned. If an unknown
error message is referenced inside an existing http-errors section, a warning is
emitted and the proxy's error messages are used instead.
A dedicated function to parse arguments and create an http_reply object is
added. It is used to parse http return rule. Thus, following arguments are
parsed by this function :
... [status <code>] [content-type <type>]
[ { default-errorfiles | errorfile <file> | errorfiles <name> |
file <file> | lf-file <file> | string <str> | lf-string <fmt> } ]
[ hdr <name> <fmt> ]*
Because the status code argument is optional, a default status code must be
defined when this function is called.
No real change here. Instead of using an internal structure to the action rule,
the http return rules are now stored as an http reply. The main change is about
the action type. It is now always set to ACT_CUSTOM. The http reply type is used
to know how to evaluate the rule.
The structure owns an error message, most of time loaded from a file, and
converted to HTX. It is created when an errorfile or errorloc directive is
parsed. It is renamed to avoid ambiguities with http_reply structure.
For HTTP rules, this flag is only used to trigger a warning during HAProxy
startup when a final rule without ACL is not the last one. So this patch is
marked as a bug, but its impact is really limited.
No backport needed because http return rules were introduced in the 2.2.
TX_CLDENY, TX_CLALLOW, TX_SVDENY and TX_SVALLOW flags are unused. Only
TX_CLTARPIT is used to make the difference between an http deny rule and an http
tarpit rule. So these unused flags are removed.
In the CLI command 'show ssl crt-list', the ssl-min-ver and the
ssl-max-ver arguments were always displayed because the dumped versions
were the actual version computed and used by haproxy, instead of the
version found in the configuration.
To fix the problem, this patch separates the variables to have one with
the configured version, and one with the actual version used. The dump
only shows the configured version.
This reverts commit 957ec59571.
As discussed with Emeric, the current syntax is not extensible enough,
this will be turned to a section instead in a forthcoming patch.
The ring to applet communication was only made to deal with CLI functions
but it's generic. Let's have generic appctx functions and have the CLI
rely on these instead. This patch introduces ring_attach_appctx() and
ring_detach_appctx().
A few fields, including a generic list entry, were added to the CLI context
by commit 300decc8d9 ("MINOR: cli: extend the CLI context with a list and
two offsets"). It turns out that the list entry (l0) is solely used to
consult rings and that the generic ring_write() code is restricted to a
consumer on the CLI due to this, which was not the initial intent. Let's
make it a general purpose wait_entry field that is properly initialized
during appctx_init(). This will allow any applet to wait on a ring, not
just the CLI.
The LIST_ADDQ() and LIST_DEL_INIT() calls made to attach/detach a waiter
to the ring were made under a read lock which was sufficient in front of
the writer's write lock. But it's not sufficient against other readers!
Thus theoretically multiple "show events" on the same ring buffer on the
CLI could result in a crash, even though for now I couldn't manage to
reproduce it.
This fixes commit 1d181e489c ("MEDIUM: ring: implement a wait mode for
watchers") so it must be backported to 2.1, possibly further if the ring
code gets backported.
Because of a typo in the conditions to exit the sending loop, it is possible
to loop infinitely in the fcgi_snd_buf() function. Instead of checking that the
FCGI stream is not blocked before continuing to send data, the FCGI connection
is checked instead. So
it is possible to have a stream blocked because there is not enough space in the
mux buffers to copy more data but continue to loop to send more data.
This patch should fix the issue #637. It must be backported to 2.1.
Instead of using malloc/free to allocate an HPACK table, let's declare
a pool. However the HPACK size is configured by the H2 mux, so it's
also this one which allocates it after post_check.
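A sketch of the resulting allocation scheme (the pool name and size
variable are assumed):

  static struct pool_head *pool_head_hpack_tbl;

  /* created by the H2 mux after post_check, once the configured table
   * size is known
   */
  pool_head_hpack_tbl = create_pool("hpack_tbl", hpack_tbl_size, MEM_F_SHARED);

  /* allocation and release then go through the pool instead of malloc/free */
  dht = pool_alloc(pool_head_hpack_tbl);
  pool_free(pool_head_hpack_tbl, dht);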
This patch adds the new global statement:

ring <name> [desc <desc>] [format <format>] [size <size>] [maxlen <length>]
  Creates a named ring buffer which could be used on log line for instance.

  <desc>   is an optional description string of the ring. It will appear on
           CLI. By default, <name> is reused to fill this field.

  <format> is the log format used when generating syslog messages. It may be
           one of the following :

    iso       A message containing only the ISO date, followed by the text.
              The PID, process name and system name are omitted. This is
              designed to be used with a local log server.

    raw       A message containing only the text. The level, PID, date, time,
              process name and system name are omitted. This is designed to be
              used in containers or during development, where the severity only
              depends on the file descriptor used (stdout/stderr). This is
              the default.

    rfc3164   The RFC3164 syslog message format.
              (https://tools.ietf.org/html/rfc3164)

    rfc5424   The RFC5424 syslog message format.
              (https://tools.ietf.org/html/rfc5424)

    short     A message containing only a level between angle brackets such as
              '<3>', followed by the text. The PID, date, time, process name
              and system name are omitted. This is designed to be used with a
              local log server. This format is compatible with what the
              systemd logger consumes.

    timed     A message containing only a level between angle brackets such as
              '<3>', followed by ISO date and by the text. The PID, process
              name and system name are omitted. This is designed to be
              used with a local log server.

  <length> is the maximum length of an event message stored into the ring,
           including formatted header. If the event message is longer
           than <length>, it will be truncated to this length.

  <name>   is the ring identifier, which follows the same naming convention as
           proxies and servers.

  <size>   is the optional size in bytes. Default value is set to BUFSIZE.
Note: Historically sink's name and desc were refs on const strings. But with new
configurable rings a dynamic allocation is needed.
Before this patch, they relied directly on ring_write(), bypassing
part of the sink API.
Now the maxlen parameter of the log will apply only to the text
message part (and not the header; for this you would prefer
to use the maxlen parameter on the sink/ring).
The sink_write prototype was also reviewed to return the number of bytes
written, to be compliant with the other write functions.
This patch extends the sink_write prototype and code to
handle the rfc5424 and rfc3164 header.
It uses header building tools from log.c. Doing this, some
functions/vars have been externalized.
facility and minlevel have been removed from the struct sink
and passed as arguments to sink_write because they depend on the log
and not on the sink (they remained unused by the rest of the code
until now).
Historically some messages used to already contain the trailing LF but
not all, and __do_send_log adds a new one in needed cases. It also does
trim a trailing LF in certain cases while computing the max message
length, as a result of subtracting 1 to the available room in the
destination buffer. But the way it's done is wrong since some messages
still contain it.
So the code was fixed to always trim the trailing LF from messages if
present, and then only subtract 1 from the destination buffer room
instead of the size..
Note: new sink API is not designed to receive a trailing LF on
event messages
This could be backported to relevant stable versions with particular
care since the logic of the code changed a bit since 1.6 and there
may be other locations that need to be adjusted.
MySQL 4.1 is old enough to be the default mode for mysql checks. So now, once a
username is defined, post-41 mode is automatically used. To do mysql checks on
previous MySQL versions, the argument "pre-41" must be used.
Note, it is a compatibility breakage for everyone using an antique and
unsupported MySQL version.
Helper functions are used to dump bind, server or filter keywords. These
functions are used to report errors during the configuration parsing. To have a
coherent API, these functions are now prepared to handle a null pointer as
argument. If so, no action is performed and functions immediately return.
This patch should fix the issue #631. It is not a bug. There is no reason to
backport it.
parse_cache_flt() is the registered callback for the "cache" filter keyword. It
is only called when the "cache" keyword is found on a filter line. So, it is
useless to test the filter name in the callback function.
This patch should fix the issue #634. It may be backported as far as 1.9.
The CI revealed that the boringssl build is still broken because of some
ifdef misplacement.
Bug introduced by dad3105 ("REORG: ssl: move ssl configuration to
cfgparse-ssl.c").
No backport needed.
Fix issue #633.
Coverity found unused variable assignment
CID 1299671 (#1 of 1): Unused value (UNUSED_VALUE)assigned_pointer:
Assigning value from args[arg + 1] to word here, but that stored
value is overwritten before it can be used.
958 word = args[arg + 1];
959 arg = arg_end;
In issue #632 the boringssl build was broken by the lack of the errno.h
include in ssl_crtlist.c.
Bug introduced by 6e9556b ("REORG: ssl: move crtlist functions to src/ssl_crtlist.c").
No backport needed.
Expose native cum_req metric for a server: so far it was calculated as a
sum of all responses. Rename it from Cum. HTTP Responses to Cum. HTTP
Requests to be consistent with Frontend and Backend.
In order to move all SSL sample fetches in another file, moving the
ssl_sock_ctx definition in a .h file is required.
Unfortunately it became a cross-dependency hell to solve, because of
the struct wait_event field, so <types/connection.h> is needed which
created other problems.
The ssl_sock.c file contains a lot of macros and structure definitions
that should be in a .h. Move them to the more appropriate
types/ssl_sock.h file.
Make use of ssl_sock_register_msg_callback(). Function ssl_sock_msgcbk()
is now split into two dedicated functions for heartbeat and clienthello.
They are both registered by using a new callback mechanism for SSL/TLS
protocol messages.
This patch adds the ability to register callbacks for SSL/TLS protocol
messages by using the function ssl_sock_register_msg_callback().
All registered callback functions will be called when observing received
or sent SSL/TLS protocol messages.
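A registration would then look like this (a sketch; the callback prototype,
mirroring OpenSSL's message callback with the connection added, is assumed):

  static void my_msg_cb(struct connection *conn, int write_p, int version,
                        int content_type, const void *buf, size_t len,
                        SSL *ssl)
  {
          /* inspect the received or sent SSL/TLS protocol message here */
  }

  ssl_sock_register_msg_callback(my_msg_cb);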
Only allow L7 retries when using HTTP, it only really makes sense for HTTP,
anyway, and as the L7 retries code assumes the message will be HTX, it will
crash when used with mode TCP.
This should fix github issue #627.
This should be backported to 2.1 and 2.0.
In do_l7_retry(), remove the SF_ADDR_SET flag. Otherwise,
assign_server_address() won't be called again, which means for 2.1 or 2.2,
we will always retry to connect to the server that just failed, and for 2.0,
that we will try to use whatever address was set on the connection,
probably that of the last server used by that connection before it was pool_free()
and reallocated.
This should be backported to 2.1 and 2.0.
Commit 42a50bd19 ("BUG/MINOR: pollers: remove uneeded free in global
init") removed the 'fail_revt' label from the _do_init() function
in src/ev_select.c but left the local label declaration, which makes
clang unhappy and unable to build.
These labels are only historic and unneeded anyway so let's remove them.
This should be backported where 42a50bd19 is backported.
When the first thread stops and wakes others up, it's possible some of
them will also start to wake others in parallel. Let's give this
notification task to the very first one only, since it's enough and
can reduce the amount of needless (though harmless) wakeup calls.
Currently the soft-stop can lead to old processes remaining alive for as
long as two seconds after receiving a soft-stop signal. What happens is
that when receiving SIGUSR1, one thread (usually the first one) wakes up,
handles the signal, sets "stopping", goes into run_poll_loop(), and
discovers that stopping is set, so it also sets its own bit in the
stopping_thread_mask bit mask. After this it sees that other threads are
not yet willing to stop, so it continues to wait.
From there, other threads which were waiting in poll() expire after one
second on poll timeout and enter run_poll_loop() in turn. That's already
one second of wait time. They discover each in turn that they're stopping
and see that other threads are not yet stopping, so they go back waiting.
After the end of the first second, all threads know they're stopping and
have set their bit in stopping_thread_mask. It's only now that those who
started to wait first wake up again on timeout to discover that all other
ones are stopping, and can now quit. One second later all threads will
have done it and the process will quit.
This is effectively strictly larger than one second and up to two seconds.
What the current patch does is simple: when the first thread stops, it sets
its own bit in stopping_thread_mask, then wakes up all other threads so they
also set theirs. This kills the first second, which corresponds to the time
needed to discover the stopping state. Second, when a thread exits, it wakes all
other ones again because some might have gone back sleeping waiting for
"jobs" to go down to zero (i.e. closing the last connection). This kills
the last second of wait time.
Thanks to this, SIGUSR1 now acts instantly again if there's no active
connection, and otherwise the process stops immediately after the last
connection has left.
This should be backported as far as 2.0.
While reading the code, I thought it was clearer to put one instruction
per line, as is mostly done elsewhere.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Since commit d4604adeaa ("MAJOR: threads/fd: Make fd stuffs
thread-safe"), we init pollers per thread using a helper. It was still
correct for mono-thread mode until commit cd7879adc2 ("BUG/MEDIUM:
threads: Run the poll loop on the main thread too"). We now use a deinit
helper for all threads, making those frees unneeded.
Only poll and select are affected by this very minor issue.
It could be backported from v1.8 to v2.1.
Fixes: cd7879adc2 ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In dump_pools_to_trash() we happen to use %d to display unsigned ints!
This has probably been there since "show pools" was introduced so this
fix must be backported to all versions. The impact is negligible since
no pool uses 2 billion entries. It could possibly affect the report of
failed allocation counts, but in this case there's a bigger problem to
solve!
In the commit 2fabd9d53 ("BUG/MEDIUM: checks: Subscribe to I/O events on an
unfinished connect"), we force the subscribtion to I/O events when a new
connection is opened if it is not fully established. But it must only be done if
a mux was immediately installed. If there is no mux, no subscription must be
performed.
No backport needed.
Make the digest and HMAC functions of OpenSSL accessible to the user via
converters. They can be used to sign and validate content.
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
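For example, assuming the converters are exposed as digest(<algorithm>) and
hmac(<algorithm>,<key>) with a base64-encoded key, a configuration could use
them like this (header names are purely illustrative):

  http-request set-header X-Data-Digest "%[req.hdr(x-data),digest(sha256),hex]"
  http-request set-header X-Data-Sig "%[req.hdr(x-data),hmac(sha256,c2VjcmV0),hex]"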
aes_gcm_dec is independent of the TLS implementation and fits better
in the sample.c file with other hash functions.
[Cf: I slightly updated this patch to move the aes_gcm_dec converter in
sample.c instead of the new file crypto.c]
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
It is useless to try to send outgoing data if the check is still waiting to be
able to send data.
No backport needed.
(cherry picked from commit d94653700437430864c03090d710b95f4e860321)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
In tcpcheck_main(), when we are waiting for a connection, we must rely on the
next connect/send/expect rule to subscribe to I/O events, not on the immediate
next rule. Because, if it is a set-var or an unset-var rule, we will not
subscribe to I/O events while it is in fact mandatory because a send or an
expect rule is coming. It is required to wake up the health check as soon as
I/O is possible, instead of hitting a timeout.
No backport needed.
(cherry picked from commit 758d48f54cc3372c2d8e7c34b926d218089c533a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Subscription to I/O events should not be performed if the check is already
subscribed.
No backport needed.
(cherry picked from commit 9e0b3e92f73b6715fb2814e3d09b8ba62270b417)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
In tcp-check based health checks, when a new connection is opened, we must
wait until it is really established before moving to the next rule. But at this
stage, we must also be sure to subscribe to I/O events. Otherwise, depending on
the timing, the health check may remain asleep until the timeout.
No backport needed. This patch should fix the issue #622.
(cherry picked from commit b2a4c0d473e3c5dcb87f7d16f2ca410bafc62f64)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
For 6 years now we've been seeing a warning suggesting to set dh-param
beyond 1024 if possible when it was not set. It's about time to do it
and get rid of this warning since most users seem to already use 2048.
It will remain possible to set a lower value of course, so only those
who were experiencing the warning and were relying on the default value
may notice a change (higher CPU usage). For more context, please refer
to this thread :
https://www.mail-archive.com/haproxy@formilux.org/msg37226.html
This commit removes a big chunk of code which happened to be needed
exclusively to figure out whether it was required to emit a warning or not :-)
This fixes OSS Fuzz issue https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=21931.
OSS Fuzz detected a hang on configuration parsing for a 200kB line with a
large number of invalid escape sequences, most likely due to the amount of
error output generated.
This issue is very minor, because usually generated configurations are to be trusted.
The bug exists since at the very least HAProxy 1.4. The patch may be backported if desired.
Commit 9df188695f ("BUG/MEDIUM: http-ana: Handle NTLM messages correctly.")
tried to address an HTTP-reuse issue reported in github issue #511 by making
sure we properly detect extended NTLM responses, but made the match
case-sensitive while, being a token, it is case-insensitive.
This should be backported to the same versions as the commit above.
During use_backend and use-server post-parsing, if the log-format string used to
specify the backend or the server is just a single string, the log-format string
(a list, internally) is replaced by the string itself. Because the field is a
union, the list is not emptied properly. The element, when
released, is not removed from the list. There is no bug, but it is clearly not
obvious and error-prone.
This patch should fix issue #544. The fix for the use_backend post-parsing may be
backported to all stable releases. use-server is static in 2.1 and prior.
The issue can easily be reproduced with the "stick on" statement:
backend BE_NAME
stick-table type ip size 1k
stick on src
and calling the dump() method on the BE_NAME stick table from Lua.
Before the fix, HAProxy would return 500 and log something like
the following:
runtime error: attempt to index a string value from [C] method 'dump'
Where one would expect a Lua table like this:
{
["IP_ADDR"] = {
["server_id"] = 1,
["server_name"] = "srv1"
}
}
This patch needs to be backported to 1.9 and later releases.
A default statement in the switch testing the header name has been added to be
sure it is impossible to eval the value pattern on an uninitialized header
value. It should never happen of course. But this way, it is explicit.
And a comment has been added to make clear that ctx.value is always defined when
it is evaluated.
This patch fixes the issue #619.
For http-check send rules, it is now possible to use a log-format string to set
the request's body. The keyword "body-lf" should be used instead of "body". If the
string eval fails, no body is added.
For http-check send rules, it is now possible to use a log-format string to set
the request URI. The keyword "uri-lf" should be used instead of "uri". If the
string eval fails, we fall back on the default uri "/".
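A minimal sketch combining both keywords (the check.path variable is purely
illustrative):

  http-check send meth POST uri-lf "/probe/%[var(check.path)]" body-lf "srv=%[srv_name]"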
Since commit bb86986 ("MINOR: init: report the haproxy version and
executable path once on errors") the master-worker displays its version
and path upon a successful exit of a worker, which is kind of confusing
upon a clean exit.
This is due to the fact that that commit displays this upon an ha_alert()
which was used during the exit of the process.
Replace the ha_alert() by an ha_warning() if the process exits correctly
and was supposed to.
It still displays the message upon a SIGINT since the workers are
catching the signal.
A sample must always have a session defined. Otherwise, it is a bug. So it is
unnecessary to test if it is defined when called from a health check context.
This patch fixes the issue #616.
Extra parameters on http-check expect rules, for the header matching method, to
use log-format strings or to match full header lines have been removed. There
are now separate matching methods to match a full header line or to match each
comma-separated value. "http-check expect fhdr" must be used in the first case,
and "http-check expect hdr" in the second one. In addition, to match a
log-format header name or value, the "-lf" suffix must be added to the "name"
or "value" keyword. For instance:
http-check expect hdr name "set-cookie" value-lf -m beg "sessid=%[var(check.cookie)]"
Thanks to these changes, each parameter may only be interpreted in one way.
The following actions have been added to send log-format strings from a
tcp-check ruleset instead of using the "log-format" parameter:
* tcp-check send-lf <fmt>
* tcp-check send-binary-lf <fmt>
It is easier for tools generating configurations. Each action may only be
interpreted in one way.
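For example (a sketch; check.hexreq is an illustrative variable holding a
hexadecimal string):

  tcp-check send-lf "GET /status/%[srv_name] HTTP/1.0\r\n\r\n"
  tcp-check send-binary-lf "%[var(check.hexreq)]"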
All sample fetches in the scope "check." have been removed. Response sample
fetches must be used instead. It avoids keyword duplication. So, for instance,
res.hdr() must now be used instead of check.hdr().
To do so, the following sample fetches have been added on the response :
* res.body, res.body_len and res.body_size
* res.hdrs and res.hdrs_bin
Sample fetches dealing with the response's body are only useful in the health
check context. When called from a stream context, there is no guarantee on the
body's presence. There is no option to wait for the response's body.
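For instance, a rule that previously relied on check.* fetches in an on-error
message could now be written as (a sketch):

  http-check expect status 200 on-error "body: %[res.body]"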
It is now possible to use a log-format string (or a hexadecimal string for the
binary version) to match content in tcp-check based expect rules. For a
hexadecimal log-format string, the conversion to binary is performed after the
string evaluation, during health check execution. The pattern keywords to use
are "string-lf" for the log-format string and "binary-lf" for the hexadecimal
log-format string.
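For example (a sketch):

  tcp-check expect string-lf "pong from %[srv_name]"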
TCP and HTTP expect rules may fail because of unexpected internal errors,
mainly during log-format string evaluation. The error reporting is improved by
this patch.
An additional argument has been added to smp_prefetch_htx() function in the
commit 778f5ed47 ("MEDIUM: checks/http-fetch: Support htx prefetch from a check
for HTTP samples"). But forgot to update call in 51d module.
No need to backport.
An additional argument has been added to smp_prefetch_htx() function in the
commit 778f5ed47 ("MEDIUM: checks/http-fetch: Support htx prefetch from a check
for HTTP samples"). But forgot to update call in wurfl module.
No need to backport.
An additional argument has been added to smp_prefetch_htx() function in the
commit 778f5ed47 ("MEDIUM: checks/http-fetch: Support htx prefetch from a check
for HTTP samples"). But forgot to update call in da module.
No need to backport.
It is now possible to add http-check expect rules matching HTTP header names and
values. Here is the format of these rules:
http-check expect header name [ -m <meth> ] <name> [log-format] \
[ value [ -m <meth> ] <value> [log-format] [full] ]
The name pattern (name ...) is mandatory but the value pattern (value ...) is
optional. If not specified, only the header presence is verified. <meth> is the
matching method, applied on the header name or the header value. Supported
matching methods are:
* "str" (exact match)
* "beg" (prefix match)
* "end" (suffix match)
* "sub" (substring match)
* "reg" (regex match)
If not specified, the exact matching method is used. If the "log-format" option
is used, the pattern (<name> or <value>) is evaluated as a log-format string.
This option cannot be used with the regex matching method. Finally, by default,
the header value is considered as a comma-separated list. Each part may be
tested. The "full" option may be used to test the full header line. Note that
matching is case-insensitive on the header names.
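For example, following the format above (a sketch; the header name is
illustrative):

  http-check expect header name "x-health" value -m sub "ok" full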
For an http-check ruleset, it should be allowed to set a chain of expect
rules. But an error is triggered during the post-parsing because of a wrong
test, inherited from the evaluation mode before the refactoring.
No need to backport.
The statuses (ok, error and timeout) of a TCP or HTTP expect rule are set to
HCHK_STATUS_UNKNOWN by default, when not specified, during the configuration
parsing. This does not change the default status used for a terminal expect rule
(ok=L7OK, err=L7RSP and tout=L7TOUT). But this way, it is possible to know if a
specific status was forced by config or not.
It is now possible to use different matching methods to look for header names in
an HTTP message:
* The exact match. It is the default method. http_find_header() uses this
method. http_find_str_header() is an alias.
* The prefix match. It matches header names starting with a prefix.
http_find_pfx_header() must be called to use this method.
* The suffix match. It matches header names ending with a suffix.
http_find_sfx_header() must be called to use this method.
* The substring match. It matches header names containing a string.
http_find_sub_header() must be called to use this method.
* The regex match. It matches header names against a regular expression.
http_match_header() must be called to use this method.
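As an illustration of the intended usage (a sketch assuming the new functions
share http_find_header()'s prototype, with <htx> being a parsed HTX message):

  struct http_hdr_ctx ctx = { .blk = NULL };

  /* iterate over all headers whose name starts with "x-" */
  while (http_find_pfx_header(htx, ist("x-"), &ctx, 1)) {
          struct ist name  = htx_get_blk_name(htx, ctx.blk);
          struct ist value = ctx.value;
          /* <name>/<value> hold the current header */
  }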
HTTP sample fetches acting on the response can now be called from any sample
expression or log-format string in a tcp-check based ruleset. To avoid any
ambiguities, all these sample fetches are in the check scope, for instance
check.hdr() or check.cook().
SSL sample fetches acting on the server connection can now be called from any
sample expression or log-format string in a tcp-check based ruleset. ssl_bc and
ssl_bc_* sample fetches are concerned.
It is now possible to call be_id, be_name, srv_id and srv_name sample fetches
from any sample expression or log-format string in a tcp-check based ruleset.
It is now possible to call check.payload(), check.payload_lv() and check.len()
sample fetches from any sample expression or log-format string in a tcp-check
based ruleset. In fact, check.payload() was already added. But instead of
having a specific function to handle this sample fetch, we use the same one as
req.payload().
These sample fetches act on the check input buffer, containing data received
from the server. So they should be part of or placed after an expect rule, but
before any send rule, because the input buffer is cleared at this stage.
Some HTTP sample fetches will be accessible from the context of an http-check
health check. Thus, the prefetch function responsible for returning the HTX
message has been updated to handle a check, in addition to a channel. Both
cannot be used at the same time, so there is no ambiguity.
Binary sample data can be converted, implicitly or not, to a string by cutting
the buffer at the first null byte.
I guess this patch should be backported to all stable versions.
srv_cleanup_connections() is supposed to be static, so mark it as so.
This patch should be backported where commit 6318d33ce6
("BUG/MEDIUM: connections: force connections cleanup on server changes")
will be backported, that is to say v1.9 to v2.1.
Fixes: 6318d33ce6 ("BUG/MEDIUM: connections: force connections cleanup
on server changes")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
After we call SSL_SESSION_get_id(), the length of the id in bytes is
stored in "len", which was never checked. This could cause unexpected
behavior when using the "ssl_fc_session_id" or "ssl_bc_session_id"
fetchers (eg. the result can be an empty value).
The issue was introduced with commit 105599c ("BUG/MEDIUM: ssl: fix
several bad pointer aliases in a few sample fetch functions").
This patch must be backported to 2.1, 2.0, and 1.9.
When only request headers are parsed, the host header should not be compared to
the request authority because no start-line was parsed. Thus there is no
authority.
Till now this bug was hidden because this parsing mode was only used for the
response in the FCGI multiplexer. Since the HTTP checks refactoring, the request
headers may now also be parsed without the start-line.
This patch fixes the issue #610. It must be backported to 2.1.
I've been trying to understand a change of behaviour between v2.2dev5 and
v2.2dev6. Indeed our probe regularly tests adding and removing
servers on a given backend, such as:
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -
-> curl on the corresponding frontend: reply from server:31255
# echo "set server be_foo/srv1 addr 10.236.139.34 port 31257" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31257' by 'stats socket command'
# echo "set server be_foo/srv1 weight 256" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio /var/lib/haproxy/stats
health check port updated.
# echo "set server be_foo/srv1 state ready" | sudo socat stdio /var/lib/haproxy/stats
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31257 -
-> curl on the corresponding frontend: reply for server:31257
(notice the difference of weight)
# echo "set server be_foo/srv1 state maint" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 addr 0.0.0.0 port 0" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '10.236.139.34' to '0.0.0.0', port changed from '31257' to '0' by 'stats socket command'
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -
-> curl on the corresponding frontend: reply from server:31255
# echo "set server be_foo/srv1 addr 10.236.139.34 port 31256" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31256' by 'stats socket command'
# echo "set server be_foo/srv1 weight 256" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio /var/lib/haproxy/stats
health check port updated.
# echo "set server be_foo/srv1 state ready" | sudo socat stdio /var/lib/haproxy/stats
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31256 -
-> curl on the corresponding frontend: reply from server:31257 (!)
Here we would indeed expect to get an answer from server:31256. The issue
is highly linked to the usage of `pool-purge-delay`, with a value which
is higher than the duration of the test, 10s in our case.
A git bisect between dev5 and dev6 seems to show commit
079cb9af22 ("MEDIUM: connections: Revamp the way idle connections are killed")
being the origin of this new behaviour.
So if I understand the latter correctly, it seems that it was more a
matter of chance that we did not see the issue earlier.
My patch proposes to force clean idle connections in the two following
cases:
- we set a (still running) server to maintenance
- we change the ip/port of a server
This commit should be backported to 2.1, 2.0, and 1.9.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When a stream is detached from its connection, we try to move the connection
to an idle list to keep it open, the session one or the server one. But it must
only be done if there is no connection error and if we want to keep it
open. This last statement is true if the FCGI_CF_KEEP_CONN flag is set. But the
test is inverted at this stage.
This patch must be backported to 2.1.
The fcgi_release() function is responsible for releasing a FCGI connection.
But the release of the connection itself is missing.
This patch must be backported to 2.1.
When the last stream is detached from a FCGI connection, if the server doesn't
add the connection to its idle list, the connection is destroyed. Thus it is
important to exit immediately from the detach function. A return statement is
missing here.
This bug was introduced in the commit 2444aa5b6 ("MEDIUM: sessions: Don't be
responsible for connections anymore.").
It is a 2.2-dev bug. No need to backport.
Now we very rarely catch spinning streams, and whenever we catch one it
seems a filter is involved, but we currently report no info about them.
Let's print the list of enabled filters on the stream with such a crash
to help with the reports. A typical output will now look like this:
[ALERT] 121/165908 (1110) : A bogus STREAM [0x7fcaf4016a60] is spinning at 2 calls per second and refuses to die, aborting now! Please report this error to developers [strm=0x7fcaf4016a60 src=127.0.0.1 fe=l1 be=l1 dst=<CACHE> rqf=6dc42000 rqa=48000 rpf=a0040223 rpa=24000000 sif=EST,10008 sib=DIS,80110 af=(nil),0 csf=0x7fcaf4023c00,10c000 ab=0x7fcaf40235f0,4 csb=(nil),0 cof=0x7fcaf4016610,1300:H1(0x7fcaf4016840)/RAW((nil))/tcpv4(29) cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={0x7fcaf4016fb0="cache store filter", 0x7fcaf4017080="compression filter"}]
This may be backported to 2.0.
I changed my mind twice on this one and pushed after the last test with
threads disabled, without re-enabling long long, causing this rightful
build warning.
This needs to be backported if the previous commit ff64d3b027 ("MINOR:
threads: export the POSIX thread ID in panic dumps") is backported as
well.
It is very difficult to map a panic dump against a gdb thread dump
because the thread numbers do not match. However gdb provides the
pthread ID but this one is supposed to be opaque and not to be cast
to a scalar.
This patch provides a function, ha_get_pthread_id(), which retrieves
the pthread ID of the indicated thread and casts it to an unsigned
long long so as to lose the least possible amount of information from
it. This is done cleanly using a union to maintain alignment, so as
long as these IDs are stored on 1..8 bytes they will be properly
reported. This ID is now presented in the panic dumps so it now
becomes possible to map these threads. When threads are disabled,
zero is returned.
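A sketch of the approach (the ha_thread_info[].pthread field name is an
assumption based on the description above):

  /* Returns the pthread ID of thread <thr> as an unsigned long long, using
   * a zero-initialized union to keep alignment and return zero-padded
   * values when pthread_t is smaller than 8 bytes.
   */
  unsigned long long ha_get_pthread_id(unsigned int thr)
  {
          union {
                  pthread_t t;
                  unsigned long long ll;
          } u = { 0 };

          u.t = ha_thread_info[thr].pthread;
          return u.ll;
  }

For example, this is a panic dump: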
Thread 1 is about to kill the process.
*>Thread 1 : id=0x7fe92b825180 act=0 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
stuck=1 prof=0 harmless=0 wantrdv=0
cpu_ns: poll=5119122 now=2009446995 diff=2004327873
curr_task=0xc99bf0 (task) calls=4 last=0
fct=0x592440(task_run_applet) ctx=0xca9c50(<CLI>)
strm=0xc996a0 src=unix fe=GLOBAL be=GLOBAL dst=<CLI>
rqf=848202 rqa=0 rpf=80048202 rpa=0 sif=EST,200008 sib=EST,204018
af=(nil),0 csf=0xc9ba40,8200
ab=0xca9c50,4 csb=(nil),0
cof=0xbf0e50,1300:PASS(0xc9cee0)/RAW((nil))/unix_stream(20)
cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0)
call trace(20):
| 0x59e4cf [48 83 c4 10 5b 5d 41 5c]: wdt_handler+0xff/0x10c
| 0x7fe92c170690 [48 c7 c0 0f 00 00 00 0f]: libpthread:+0x13690
| 0x7ffce29519d9 [48 c1 e2 20 48 09 d0 48]: linux-vdso:+0x9d9
| 0x7ffce2951d54 [eb d9 f3 90 e9 1c ff ff]: linux-vdso:__vdso_gettimeofday+0x104/0x133
| 0x57b484 [48 89 e6 48 8d 7c 24 10]: main+0x157114
| 0x50ee6a [85 c0 75 76 48 8b 55 38]: main+0xeaafa
| 0x50f69c [48 63 54 24 20 85 c0 0f]: main+0xeb32c
| 0x59252c [48 c7 c6 d8 ff ff ff 44]: task_run_applet+0xec/0x88c
Thread 2 : id=0x7fe92b6e6700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=786738 now=1086955 diff=300217
curr_task=0
Thread 3 : id=0x7fe92aee5700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=828056 now=1129738 diff=301682
curr_task=0
Thread 4 : id=0x7fe92a6e4700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=818900 now=1153551 diff=334651
curr_task=0
And this is the gdb output:
(gdb) info thr
Id Target Id Frame
* 1 Thread 0x7fe92b825180 (LWP 15234) 0x00007fe92ba81d6b in raise () from /lib64/libc.so.6
2 Thread 0x7fe92b6e6700 (LWP 15235) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
3 Thread 0x7fe92a6e4700 (LWP 15237) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
4 Thread 0x7fe92aee5700 (LWP 15236) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
We can clearly see that while threads 1 and 2 are the same, gdb's
threads 3 and 4 respectively are haproxy's threads 4 and 3.
This may be backported to 2.0 as it removes some confusion in github issues.
We tried hard to make sure we report threads as not stuck at various
crucial places, but one of them is special, it's the listener_accept()
function. The reason it is special is because it will loop a certain
number of times (default: 64) accepting incoming connections, allocating
resources, dispatching them to other threads or running L4 rules on them,
and while all of this is supposed to be extremely fast, when the machine
slows down or runs low on memory, the expectedly small delays in malloc()
caused by contention with other threads can quickly accumulate and suddenly
become critical to the point of triggering the watchdog. Furthermore, it
is technically possible to trigger this by pure configuration by setting
a huge tune.maxaccept value, and a configuration setting should never be
able to cause that.
Given that each operation isn't related to the same task but to a different
one each time, it is appropriate to mark the thread as not stuck each time
it accepts new work that possibly gets dispatched to other threads which
execute it.
This looks like a plausible cause for the problem reported in issue #388.
This fix must be backported to 2.0.
Building without threads now shows this warning:
src/ssl_sock.c: In function 'cli_io_handler_commit_cert':
src/ssl_sock.c:12121:24: warning: unused variable 'bind_conf' [-Wunused-variable]
struct bind_conf *bind_conf = ckchi->bind_conf;
^~~~~~~~~
This is because the variable is needed only to unlock the structure, and
the unlock operation does nothing in this case. Let's mark the variable
__maybe_unused for this, but it would be convenient in the long term if
we could make the thread macros pretend they consume the argument so that
this remains less visible outside.
No backport is needed.
Because in HTTP, the host header and the request authority, if any, must be
identical, we keep both synchronized. It means the right flags are set on the
HTX start-line by calling http_update_host(). There is no host header when it
happens, but it is not an issue. Then, if a Host header is inserted,
http_update_authority() is called.
Note that for now, the host header is not automatically added when required.
Connection, content-length and transfer-encoding headers are ignored for
http-check send rules. For now, the keep-alive is not supported and the
"connection: close" header is always added to the request. And the
content-length header is automatically added.
cpu_calls, cpu_ns_avg, cpu_ns_tot, lat_ns_avg and lat_ns_tot depend on the
stream to find the current task and must check for it or they may cause a
crash if misused or used in a log-format string after commit 5f940703b3
("MINOR: log: Don't depends on a stream to process samples in log-format
string").
This must be backported as far as 1.9.
get_http_auth() expects a valid stream but this is not mentioned, though
fortunately it's always called from places which already check this.
smp_prefetch_htx() performs all the required checks and is the key to the
stability of almost all sample fetch functions, so let's make this clearer.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The unique-id sample fetch function, if called without a stream, will result
in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.7.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The http_first_req sample fetch function, if called without a stream, will
result in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture.req.hdr, capture.res.hdr, capture.req.method, capture.req.uri,
capture.req.ver and capture.res.ver sample fetches used to assume the
presence of a stream, which is not necessarily the case (especially after
the commit above) and would crash haproxy if incorrectly used. Let's make
sure they check for this stream.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture-req and capture-res converters were in this case and could
crash the process if misused.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Mux-h1 currently heavily relies on the presence of an upper stream, even
when waiting for a new request after one is being finished, and it's that
upper stream that's in charge of request and keep-alive timeouts for now.
But since recent commit 493d9dc6ba ("MEDIUM: mux-h1: do not blindly wake
up the tasklet at end of request anymore") that assumption was broken as
the purpose of this change was to avoid initiating processing of a request
when there's no data in the buffer. The side effect is that there's no more
timeout to handle the front connection, resulting in dead front connections
stacking up as clients get kicked off the net.
This fix makes sure we always enable the timeout when there's no stream
attached to the connection. It doesn't do this for back connections since
they may purposely be left idle.
No backport is needed as this bug was introduced in 2.2-dev4.
A bug was introduced in the commit 2edcd4cbd ("BUG/MINOR: checks: Avoid
incompatible cast when a binary string is parsed"). The length of the
destination buffer must be set before calling the parse_binary() function.
No backport needed.
It can sometimes be useful to measure the total time of a request as seen
from an end user, including TCP/TLS negotiation, server response time
and transfer time. "Tt" currently provides something close to that, but
it also takes client idle time into account, which is problematic for
keep-alive requests as idle time can be very long. "Ta" is also not
sufficient as it hides the TCP/TLS negotiation time. To improve that,
introduce a "Tu" timer, without idle time and everything else. It roughly
estimates the time spent from the user's point of view (without DNS
resolution time), assuming network latency is the same in both directions.
When a tcp-check line is parsed, a warning may be reported if the keyword is
used for a frontend. The return value must be used to report it. But this info
is lost before the end of the function.
Partly fixes issue #600. No backport needed.
When an error is found during the parsing of an expect rule (tcp or http),
everything is released at the same place, at the end of the function.
Partly fixes issue #600. No backport needed.
The parse_binary() function must be called with a pointer to an integer. So
don't pass a pointer to a size_t element cast to a pointer to an integer.
Partly fixes issue #600. No backport needed.
The variable s, pointing to the check's server, may be NULL when a connection
is opened. It happens for email alerts. To avoid ambiguities, its use is now
more explicit. Comments have been added in some places and tests on the
variable have been added elsewhere (useless but explicit).
Partly fixes issue #600.
If a message is not fully received from a mysql server, depending on last_read
value, an error must be reported or we must wait for more data. The first if
statement must not check last_read.
Partly fixes issue #600. No backport needed.
The version is partially parsed to set the flag HTX_SL_F_VER_11 on the HTX
message. But exactly 8 chars are expected. So if "HTTP/2" is specified, the flag
is not set. Thus, the version parsing has been updated to handle "HTTP/2" and
"HTTP/2.0" the same way.
For PostgreSQL health check, there is a regex on the backend authentication
packet. It must match to succeed. But there are 6 types of authentication
packets and the regex only matched the first one (AuthenticationOK). This patch
fixes the regex to match all authentication packets.
No backport needed.
At the end of a tcp-check based health check, if there is still a connection
attached to the check, it must be closed. But it must be done before releasing
the session, because the session may still be referenced by the mux. For
instance, an h2 stream may still have a reference on the session.
No need to backport.
When a new connection is established, if an old connection is still attached to
the current check, it must be destroyed. When it happens, the old conn-stream
must be used to unsubscribe for events, not the new one.
No backport is needed.
The variable 'ret' must only be declared when HAProxy is compiled with the SSL
support (more precisely SSL_CTRL_SET_TLSEXT_HOSTNAME must be defined).
No backport needed.
Since the tcp-check based health checks use the best multiplexer for a
connection, the mux-pt is no longer the only possible choice. So event
subscriptions and unsubscriptions must be done with the mux.
No backport needed.
It is now possible to match on a comma-separated list of status codes or range
of codes. In addition, instead of a string comparison to match the response's
status code, an integer comparison is performed. Here is an example:
http-check expect status 200,201,300-310
When default parameters are set in a request message, we get the client
connection using the session's origin. But it is not necessarily a
connection. For instance, for health checks, thanks to recent changes, it may
be a check object. At this stage, the client connection may be NULL. Thus, we
must be sure to have a client connection before using it.
This patch must be backported to 2.1.
It is now possible to force the mux protocol for a tcp-check based health check
using the server keyword "check-proto". If set, this parameter overwrites the
server one.
In the same way, a "proto" parameter has been added for tcp-check and http-check
connect rules. If set, this mux protocol overwrites all others for the current
connection.
First, when a server health check is initialized, it inherits the mux protocol
from the server if it is not already specified. Because there is no option to
specify the mux protocol for the checks, it is always inherited from the server
for now.
Then, if the connect rule is configured to use the server options, the mux
protocol of the check is used, if defined. Of course, if a mux protocol is
already defined for the connect rule, it is used in priority. But for now, it is
not possible.
Thus, if a server is configured to use, for instance, the h2 protocol, it is
possible to do the same for the health-checks.
No backport needed.
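For example, with the two forms described above (a sketch):

  server srv1 192.0.2.10:8080 check check-proto h2
  tcp-check connect port 8081 proto h2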
This reverts commit 1979943c30ef285ed04f07ecf829514de971d9b2.
Captures in comments were only used when a tcp-check expect based on a
negative regex matching failed, to eventually report what was captured while it
was not expected. It is a bit far-fetched to be usable IMHO. on-error and
on-success log-format strings are far more usable. For now there are few check
sample fetches (in fact only one...). But it could be really powerful to report
info in logs.
HTTP health checks now use HTX multiplexers. So it is important to really send
the amount of outgoing data for such checks because the HTX buffers always
appear full.
No backport needed.
Since all tcp-check rulesets are globally stored, it is a problem to use a
list. For configurations with many backends, the lookups in the list may be
costly and slow down HAProxy startup. To solve this problem, tcp-check rulesets
are now stored in a tree.
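A sketch of the resulting lookup pattern with an indirect-string ebtree
(the exact structure layout and the use of ebis_lookup() are assumptions):

  struct tcpcheck_ruleset {
          struct list rules;      /* the tcp-check rules of this set */
          struct ebpt_node node;  /* node in the shared tree, key = name */
  };

  static struct eb_root shared_tcpchecks = EB_ROOT;

  static struct tcpcheck_ruleset *find_tcpcheck_ruleset(const char *name)
  {
          struct ebpt_node *n = ebis_lookup(&shared_tcpchecks, name);

          return n ? ebpt_entry(n, struct tcpcheck_ruleset, node) : NULL;
  }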
When some data is scheduled to be sent, we must be sure to subscribe for sends
if nothing was sent. Because of a bug, when nothing was sent, connection errors
were checked instead. If no error was found, we exited, waiting for more data,
without any subscription on send events.
Instead of accessing the ist fields directly, the ist API is now used to get
an ist's length or pointer, to release it or to duplicate it. It is more
readable this way.
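Illustration of the pattern (the istlen()/istptr() accessors and the
istdup()/istfree() helpers are assumed to be the ones referred to):

  struct ist s = ist("hello");
  struct ist copy = istdup(s);   /* allocated duplicate */

  /* istlen(s)/istptr(s) instead of s.len/s.ptr */
  printf("%zu bytes\n", istlen(s));
  istfree(&copy);                /* release and reset the copy */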
The patch is not obvious at first glance. But it is just a reorg. Functions
have been grouped and ordered in a more logical way. Some structures and flags
are now private to the checks module (so moved from the .h to the .c file).
Thanks to the previous change, it is now possible to remove all code handling
pure tcp checks. Now all connection-based health checks are handled by the
tcpcheck_main() function. __event_srv_chk_w() and __event_srv_chk_r() have been
removed. And all connection establishment is handled at one place.
Default health checks, without any option, doing only a connection check, are
now based on tcp-checks. An implicit default tcp-check connect rule is used. A
shared tcp-check ruleset, named "*tcp-check", is created to support these
checks.
When a tcp-check connect rule is evaluated, the mux protocol corresponding to
the health-check is chosen. So for TCP based health-checks, the mux-pt is
used. For HTTP based health checks, the mux-h1 is used. The connection is
marked as private to be sure not to reuse a regular HTTP connection for
health checks. Connection reuse will be evaluated later.
The functions evaluating HTTP send rules and expect rules have been updated to
be HTX compliant. The main change for users is that HTTP health-checks are now
stricter on the HTTP message format. While before, the HTTP formatting and
parsing were minimalist, now messages should be well formatted.
Before, the server was used as origin during session creation. It was only used
to get the check associated with the server when a variable is read or set in
the check scope or when a check sample fetch was called. So it seems easier to
use the check as the origin of a session. It is also more logical because the
session is created by the health check itself and not by its server.
A dedicated function is now used to receive data. Fundamentally, it should
perform the same operations as before. But the way data is received has been
reworked to be closer to the si_cs_recv() function.
First tests before executing the loop on tcp-check rules in tcpcheck_main()
function have been slightly modified to be more explicit and easier to
understand.
HTTP health-checks are now internally based on tcp-checks. Of course all the
configuration parsing of the "http-check" keyword and the httpchk option has
been rewritten. But the main change is that now, as for tcp-check rulesets, it
is possible to perform several send/expect sequences in the same
health check. Thus the connect rule is now also available from HTTP checks,
just like the set-var, unset-var and comment rules.
Because the request defined by the "option httpchk" line is used for the first
request only, it is now possible to set the method, the uri and the version on a
"http-check send" line.
The get_last_tcpcheck_rule() function iterates over a rule list in reverse
order and returns the first rule that is neither a comment nor an action-kw
rule. If no such rule is found, NULL is returned.
Instead of having 2 independent integers, used as boolean values, to know if
the expect rule is inverted and to know if the matching regex has captures, we
now use a 32-bit bitfield.
All tcp-check rules are now stored in the global shared list. The ones created
to parse a specific protocol, for instance redis, are already stored in this
list. Now pure tcp-check rules are also stored in it. The ruleset name is
created using the proxy name and its config file and line. tcp-check rules
declared in a defaults section are also stored this way using "defaults" as
proxy name.
For now, all tcp-check rulesets are stored in a list. But it could be a bit
slow to look for a specific ruleset with a huge number of backends. So, it
could be a good idea to use a tree instead.
It is now possible to specify the healthcheck status to use on success of a
tcp-check rule, if it is the last evaluated rule. The option "ok-status"
supports the "L4OK", "L6OK", "L7OK" and "L7OKC" statuses.
A shared tcp-check ruleset is now created to support agent checks. The following
sequence is used :
tcp-check send "%[var(check.agent_string)] log-format
tcp-check expect custom
The custom function to evaluate the expect rule does the same that it was done
to handle agent response when a custom check was used.
Parsing of the following keywords has been moved to the checks.c file: addr, check,
check-send-proxy, check-via-socks4, no-check, no-check-send-proxy, rise, fall,
inter, fastinter, downinter and port.
A shared tcp-check ruleset is now created to support SPOP checks. This way no
extra memory is used if several backends use a SPOP check.
The following sequence is used :
tcp-check send-binary SPOP_REQ
tcp-check expect custom min-recv 4
The spop request is the result of the function
spoe_prepare_healthcheck_request() and the expect rule relies on a custom
function calling spoe_handle_healthcheck_response().
A shared tcp-check ruleset is now created to support LDAP checks. This way no
extra memory is used if several backends use an LDAP check.
The following sequence is used :
tcp-check send-binary "300C020101600702010304008000"
tcp-check expect rbinary "^30" min-recv 14 \
on-error "Not LDAPv3 protocol"
tcp-check expect custom
The last expect rule relies on a custom function to check the LDAP server reply.
A shared tcp-check ruleset is now created to support MySQL checks. This way no
extra memory is used if several backends use a MySQL check.
One of the following sequences is used :
## If no extra params are set
tcp-check connect default linger
tcp-check expect custom ## will test the initial handshake
## If the username is defined
tcp-check connect default linger
tcp-check send-binary MYSQL_REQ log-format
tcp-check expect custom ## will test the initial handshake
tcp-check expect custom ## will test the reply to the client message
The log-format hexa string MYSQL_REQ depends on 2 preset variables, the packet
header containing the packet length and the sequence ID (check.header) and the
username (check.username). It is also different depending on whether the
"post-41" option is set or not. Expect rules rely on custom functions to check
the MySQL server packets.
A shared tcp-check ruleset is now created to support postgres checks. This way
no extra memory is used if several backends use a pgsql check.
The following sequence is used :
tcp-check connect default linger
tcp-check send-binary PGSQL_REQ log-format
tcp-check expect !rstring "^E" min-recv 5 \
error-status "L7RSP" on-error "%[check.payload(6,0)]"
tcp-check expect rbinary "^520000000800000000 min-recv "9" \
error-status "L7STS" \
on-success "PostgreSQL server is ok" \
on-error "PostgreSQL unknown error"
The log-format hexa string PGSQL_REQ depends on 2 preset variables, the packet
length (check.plen) and the username (check.username).
A shared tcp-check ruleset is now created to support smtp checks. This way no
extra memory is used if several backends use an smtp check.
The following sequence is used :
tcp-check connect default linger
tcp-check expect rstring "^[0-9]{3}[ \r]" min-recv 4 \
error-status "L7RSP" on-error "%[check.payload(),cut_crlf]"
tcp-check expect rstring "^2[0-9]{2}[ \r]" min-recv 4 \
error-status "L7STS" \
on-error "%[check.payload(4,0),ltrim(' '),cut_crlf]" \
status-code "check.payload(0,3)"
tcp-check send "%[var(check.smtp_cmd)]\r\n" log-format
tcp-check expect rstring "^2[0-9]{2}[- \r]" min-recv 4 \
error-status "L7STS" \
on-error "%[check.payload(4,0),ltrim(' '),cut_crlf]" \
on-success "%[check.payload(4,0),ltrim(' '),cut_crlf]" \
status-code "check.payload(0,3)"
The variable check.smtp_cmd is by default the string "HELO localhost" but may
be customized by setting the <helo> and <domain> parameters on the option
smtpchk line. Note there is a difference with the old smtp check: the server
greeting message is checked before sending the HELO/EHLO command.
A shared tcp-check ruleset is now created to support ssl-hello checks. This
way no extra memory is used if several backends use an ssl-hello check.
The following sequence is used :
tcp-check send-binary SSLV3_CLIENT_HELLO log-format
tcp-check expect rbinary "^1[56]" min-recv 5 \
error-status "L6RSP" tout-status "L6TOUT"
SSLV3_CLIENT_HELLO is a log-format hexa string representing an SSLv3 CLIENT
HELLO packet. It is the same as the one used by the old ssl-hello check, except
that the sample expression "%[date(),htonl,hex]" is used to set the date field.
A shared tcp-check ruleset is now created to support redis checks. This way no
extra memory is used if several backends use a redis check.
The following sequence is used :
tcp-check send "*1\r\n$4\r\nPING\r\n"
tcp-check expect string "+PONG\r\n" error-status "L7STS" \
on-error "%[check.payload(),cut_crlf]" on-success "Redis server is ok"
It is now possible to set a custom function to evaluate a tcp-check expect
rule. It is an internal and undocumented option because the right function
pointer must be set and it is not possible to express it in the
configuration. It will be used to convert some protocol healthchecks to
tcp-checks.
Custom functions must have the following signature:
enum tcpcheck_eval_ret (*custom)(struct check *, struct tcpcheck_rule *, int);
A list of variables is now associated with each tcp-check ruleset. It is more
or less a list of set-var expressions. This list may be filled during the
configuration parsing. The listed variables will then be set during each
execution of the tcp-check healthcheck, at the beginning, before execution of
the first tcp-check rule.
This patch is mandatory to convert all protocol checks to tcp-checks. It is a
way to customize shared tcp-check rulesets.
This option defines a sample expression, evaluated as an integer, to set the
status code (check->code) if a tcp-check healthcheck ends on the corresponding
expect rule.
These options define log-format strings used to produce the info message if a
tcp-check expect rule fails (on-error option) or succeeds (on-success
option). For this last option, it must be the ending rule, otherwise the
parameter is ignored.
It is now possible to extract information from the check input buffer using the
check.payload sample fetch. As with req.payload or res.payload, an offset and a
length must be specified.
A new section has been added in the configuration manual. Now check sample
fetches will have to be documented under the section 7.3.7 (Fetching
health-check samples).
When a tcp-check healthcheck fails on a specific rule with no dedicated
comment, we look in previous rules for a comment rule. Now, instead of doing
it during tcp-checks execution, we assign the comment to the corresponding
rules during the configuration parsing. So after HAProxy startup, no comment
rules remain in a tcp-check ruleset.
It is now possible to specify the healthcheck status to use on error or on
timeout for tcp-check expect rules. First, to define the error status, the
option "error-status" must be used followed by "L4CON", "L6RSP", "L7RSP" or
"L7STS". Then, to define the timeout status, the option "tout-status" must be
used followed by "L4TOUT", "L6TOUT" or "L7TOUT".
These options will be used to convert specific protocol healthchecks (redis,
pgsql...) to tcp-check ones.
This converter transforms an integer into its binary representation in network
byte order. Integers are already automatically converted to binary during
sample expression evaluation. But because samples own 8-byte integers, the
conversion produces 8 bytes. The htonl converter does the same but for 4-byte
integers.
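For instance, within a log-format hexadecimal string, the current date can be
emitted as a 4-byte big-endian field, as done by the ssl-hello check described
earlier:

  tcp-check send-binary "%[date(),htonl,hex]" log-format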
A global list of tcp-check rulesets can now be used to share common rulesets
with all backends without any duplication. It is mandatory to convert all
specific protocol checks (redis, pgsql...) to tcp-check healthchecks.
To do so, a flag is now attached to each tcp-check ruleset to know if it is a
shared ruleset or not. tcp-check rules defined in a backend are still directly
attached to the proxy and not shared. In addition a second flag is used to know
if the ruleset is inherited from the defaults section.
When a log-format string is parsed, if a sample fetch is found, the flag
LW_REQ is systematically added on the proxy. Unfortunately, this produces a
warning during HAProxy start-up when a log-format string is used for a
tcp-check send rule. Now this flag is only added if the parsed sample fetch
depends on HTTP information.
When a log-format string is evaluated, there is no reason to process sample
fetches only when a stream is defined. Several sample fetches are available
outside the stream scope. All others should handle calls without a stream. This
patch is mandatory to support log-format strings in tcp-check rules.
An extra parameter for tcp-check send rules can be specified to handle the
string or the hexa string as a log-format string. Using the "log-format" option,
instead of considering the data to send as raw data, it is parsed as a
log-format string. Thus it is possible to call sample fetches to customize data
sent to a server. Of course, because we have no stream attached to healthchecks,
not all sample fetches are available. So be careful.
tcp-check set-var(check.port) int(8000)
tcp-check set-var(check.uri) str(/status)
tcp-check connect port var(check.port)
tcp-check send "GET %[check.uri] HTTP/1.0\r\n" log-format
tcp-check send "Host: %[srv_name]\r\n" log-format
tcp-check send "\r\n"
Since we have a session attached to tcp-check healthchecks, it is possible to
use sample expressions and variables. In addition, it is possible to add
tcp-check set-var rules to define custom variables. So, now, a sample
expression can be used to define the port used to establish the connection for
a tcp-check connect rule. For instance:
tcp-check set-var(check.port) int(8888)
tcp-check connect port var(check.port)
With this option, it is now possible to use a specific address to open the
connection for a tcp-check connect rule. If the port option is also specified,
it is used in priority.
With this option, it is possible to open a connection from a tcp-check connect
rule using all the parameters of the server line, like any other healthcheck.
For now, this parameter is exclusive with all other options for a tcp-check
connect rule.
With this option, it is possible to establish the connection opened by a
tcp-check connect rule through an upstream SOCKS4 proxy. Info from the socks4
parameter on the server line is used.
Evaluate the registered action_ptr associated with each CHK_ACTION_KW rule
from a ruleset. Currently only the 'set-var' and 'unset-var' rules are parsed
by the tcp-check parser. Thus it is now possible to set or unset variables. It
is possible to use such rules before the first connect of the ruleset.
Register the custom action rules "set-var" and "unset-var", that will
call the parse_store() command upon parsing.
These rules are thus built and integrated to the tcp-check ruleset, but
have no further effect for the moment.
Add a dedicated vars scope for checks. This scope is considered as part of the
session scope for accounting purposes.
The scope can be addressed by a valid session, even embryonic. The stream is not
necessary.
The scope is initialized after the check session is created. All variables are
then pruned before the session is destroyed.
Create a session for each healthcheck relying on a tcp-check ruleset. When
such a check is started, a session is allocated, which will be freed when the check
finishes. A dummy static frontend is used to create these sessions. This will be
useful to support variables and sample expression. This will also be used,
later, by HTTP healthchecks to rely on HTTP muxes.
The loop in the tcpcheck_main() function is quite hard to understand. Depending
on where we are in the loop, current_step is either the currently executed rule
or the one to execute on the next call to tcpcheck_main(). When the check
result is reported, we rely on the rule pointed to by last_started_step or the
one pointed to by current_step. In addition, the loop does not use the common
list_for_each_entry macro and is thus quite confusing.
So the loop has been totally rewritten and split into several functions to
simplify its reading and its understanding. Tcp-check rules are evaluated in
dedicated functions. And a common for_each loop is used and only one rule is
referenced, the current one.
After the configuration parsing, during its validity check, an implicit
tcp-check connect rule is added in front of the tcp-check ruleset if the first
non-comment rule is not a connect one. This implicit rule is flagged to use the
default check parameters.
This means that now all tcp-check rulesets begin with a connect rule and are
never empty. When tcp-check healthchecks are used, all connections are thus
handled by the tcpcheck_main() function.
The keyword 'tcp-check' is now parsed in a dedicated callback function. Thus
the code to parse these rules is now located in checks.c. In addition, a deinit
function has been added to release proxy tcp-check rules, on error or when
HAProxy is stopped.
This patch is based on Gaetan Rivet's work. It uses a callback function
registered on the 'tcp-check' keyword instead, but the spirit is the same.
To allow reusing these blocks without consuming more memory, their list
should be static and shareable across uses. The head of the list will
be shared as well.
It is thus necessary to extract the head of the rule list from the proxy
itself. Transform it into a pointer instead, that can be easily set to
an external dynamically allocated head.
Parse back-references in comments of tcp-check expect rules. If references are
made, capture groups in the match and replace references to them within the
comment when logging the error. Both text and binary regexes can capture groups
and reference them in the expect rule comment.
[Cf: I slightly updated the patch. The exp_replace() function is used instead
of a custom one. And if the trash buffer is too small to contain the comment
during
the substitution, the comment is ignored.]
The loop to find the id corresponding to the current rule in the
tcpcheck_get_step_id() function has been simplified. And the
tcpcheck_get_step_comment() function now only relies on the current rule to
find the right comment string. The step id is no longer used. To do so, we
iterate backward from the current step to find the first COMMENT rule
immediately preceding the expect rule chain.
The rbinary match works similarly to the rstring match type; however, the
received data is rewritten as a hex string before the match operation is
done.
This allows using regexes on binary content even with the POSIX regex
engine.
[Cf: I slightly updated the patch. mem2hex function was removed and dump_binary
is used instead.]
On the input buffer, it was mainly done to call the regex_exec() function. But
regex_exec2() can be used instead; this way, it is no longer required to add
the terminating null byte. For the output buffer, it was only done for
debugging purposes.
Allow declaring tcpcheck connect commands with a new parameter,
"linger". This option will configure the connection to avoid using an
RST segment to close, instead following the four-way termination
handshake. Some servers would otherwise log each healthcheck as
an error.
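A minimal sketch of a ruleset using the new parameter (port and ssl being the
usual connect options):

    backend be
        option tcp-check
        # close with a four-way termination handshake instead of an RST
        tcp-check connect port 443 ssl linger
        tcp-check expect string OK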
Some expect rules cannot be satisfied due to inherent ambiguity in the
received data: in the absence of a match, the current behavior is to wait for
either the end of the connection or a full buffer, whichever comes first. Only
then is the matching diagnostic considered conclusive. For instance:
tcp-check connect
tcp-check expect !rstring "^error"
tcp-check expect string "valid"
This check will only succeed if the connection is closed by the server before
the check timeout. Otherwise the first expect rule will wait for more data until
"^error" regex matches or the check expires.
Allow the user to explicitly define an amount of data that will be
considered enough to determine the value of the check.
This allows negative rstring rules to succeed: previously, in the valid case,
no match happened and the matching was repeated until the end of the
connection. This could time out the check while no error was happening.
[Cf: I slightly updated the patch. The parameter was renamed and the value is a
signed integer to support -1 as a default value to ignore the parameter.]
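Taking the example above, and assuming the renamed parameter is "min-recv" (an
assumption based on the final syntax), the ambiguity can be resolved like this:

    tcp-check connect
    # consider the first 50 received bytes sufficient to conclude: if
    # "^error" did not match within them, the negative rule succeeds
    tcp-check expect min-recv 50 !rstring "^error"
    tcp-check expect string "valid"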
Reduce duplication of the parsing code that is common to all three types of
expect actions.
This reduces the amount of code, helping maintainability and limiting the
spread of future changes.
Functionality is identical.
When receiving additional data while chaining multiple tcp-check expects,
previous inverse expects might have a different result with the new data. They
need to be evaluated again against the new data.
Add a pointer to the first inverse expect rule of the current expect chain
(possibly of length one) to each expect rule. When receiving new data, the
currently evaluated tcp-check rule is set back to this pointed rule.
Functionally speaking, it is a bug and it has existed since the introduction of
the feature. But there is no way for now to hit it, because when an expect rule
does not match, we wait for more data, independently of the inverse flag. The
only way to move to the following rule is to be sure no more data will be
received.
This patch depends on the commit "MINOR: mini-clist: Add functions to iterate
backward on a list".
[Cf: I slightly updated the patch. First, it only concerns inverse expect
rules; normal expect rules are not concerned. Then, I removed the BUG tag
because, for now, it is not possible to move to the following rule when the
current one does not match while more data can still be received.]
TCP check expect rules matching strings or binary patterns are able to know,
prior to applying their match function, whether the available data is already
sufficient to attempt the match or not.
As such, on insufficient data the expect is postponed. This behavior avoids
unnecessary match attempts when the data could not possibly match.
When chaining expects, however, upon passing the previous rule and going on to
the next one, this length check is not done again. The match is then attempted
and will necessarily fail, triggering a new wait for more data. The end result
is the same, at a slightly higher cost.
Check received data length for all expects in a chain.
This bug exists since the introduction of the feature:
Fixes: 5ecb77f4c7 ("MEDIUM: checks: add send/expect tcp based check")
Version 1.5+ impacted.
The options and directives related to the configuration of checks in a backend
may be defined after the servers declarations. So, initialization of the check
of each server must not be performed during configuration parsing, because some
info may be missing. Instead, it must be done during the configuration validity
check.
Thus, callback functions are registered to be called for each server after the
config validity check, one for the server check and another one for the server
agent-check. In addition deinit callback functions are also registered to
release these checks.
This patch should be backported as far as 1.7. But per-server post_check
callback functions are only supported since 2.1, and the initcall mechanism
does not exist before 1.9. Finally, in 1.7, the code is totally different, so
the backport will be harder on older versions.
This option is used to force a non-SSL connection to check an SSL server, or to
invert a check-ssl option inherited from the defaults section. The use_ssl
field in the check structure is used to know whether an SSL connection must be
used (use_ssl=1) or not (use_ssl=0). The server configuration is used by
default.
The problem is that we cannot distinguish the default case (no specific SSL
check option) from the case of an explicit non-SSL check: in both, use_ssl is
set to 0, so the server configuration is always used. For an SSL server, when
the no-check-ssl option is set, the check is still performed using an SSL
configuration.
To fix the bug, instead of a boolean value (0=TCP, 1=SSL), we use a ternary value:
* 0 = use server config
* 1 = force SSL
* -1 = force non-SSL
The same is done for the server parameter. It is not really necessary for now,
but it is a good way to know whether the server no-ssl option is set.
In addition, the PR_O_TCPCHK_SSL proxy option is no longer used to set use_ssl
to 1 for a check. Instead the flag is directly tested to prepare or destroy the
server SSL context.
This patch should be backported as far as 1.8.
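For illustration, a sketch of a configuration that the fix makes behave as
intended:

    backend be
        # all servers inherit check-ssl from the default-server line
        default-server ssl verify none check-ssl
        server s1 192.168.0.11:443 check
        # force a plain-TCP health check on this server despite the
        # inherited check-ssl: use_ssl is now -1 instead of 0
        server s2 192.168.0.12:443 check no-check-ssl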
Error codes ERR_WARN and ERR_ALERT are used to signal that the error
given is of the corresponding level. All errors are displayed as ALERT
in the display_parser_err() function.
Differentiate the display level based on the error code. If both
ERR_WARN and ERR_ALERT are used, ERR_ALERT is given priority.
The 'http-check send' directive has been added to add headers and optionally a
payload to the request sent during HTTP healthchecks. The request line may be
customized by the "option httpchk" directive, but there was no official way to
add extra headers. An old trick consisted in hiding these headers at the end of
the version string, on the "option httpchk" line. And it was impossible to add
an extra payload with an "http-check expect" directive because of the
"Connection: close" header appended to the request (see issue #16 for details).
So, to make things official and fully support payload additions, the
"http-check send" directive has been added:
option httpchk POST /status HTTP/1.1
http-check send hdr Content-Type "application/json;charset=UTF-8" \
hdr X-test-1 value1 hdr X-test-2 value2 \
body "{id: 1, field: \"value\"}"
When a payload is defined, the Content-Length header is automatically added, so
chunk-encoded requests are not supported yet. For now, there is no special
validity check on the extra headers.
This patch is inspired by Kiran Gavali's work. It should fix issue #16 and, as
far as possible, may be backported, at least as far as 1.8.
Server address and port may change at runtime. So the address and port passed as
arguments and as environment variables when an external check is executed must
be updated. The current number of connections on the server was already updated
before executing the command. So the same mechanism is used for the server
address and port. But in addition, command arguments are also updated.
This patch must be backported to all stable versions. It should fix the
issue #577.
This is the intended behaviour. But because of a bug, the 500 error resulting
from a rewrite failure during http-after-response ruleset evaluation is also
rewritten. So if, at this step, there is also a rewrite error, the session is
closed and no error message is returned.
Instead, we must be sure not to evaluate the http-after-response rules on an
error message if it was produced because of a rewrite failure on a previous
error message.
It is a 2.2-dev2+ bug. No need to backport. This patch should fix the issue
pool_gc() causes quite some stress on the memory allocator because
it calls a lot of free() calls while other threads are using malloc().
In addition, pool_gc() needs to take care of possible locking because
it may be called from pool allocators. One way to avoid all this is to
use thread_isolate() to make sure the gc runs alone. By putting less
pressure on the pools and getting rid of the locks, it may even take
less time to complete.
The url_decode() function used by the url_dec converter and a few other
call points is ambiguous on its processing of the '+' character which
itself isn't stable in the spec. This one belongs to the reserved
characters for the query string but not for the path nor the scheme,
in which it must be left as-is. It's only in argument strings that
follow the application/x-www-form-urlencoded encoding that it must be
turned into a space, that is, in query strings and POST arguments.
The problem is that the function is used to process full URLs and
paths in various configs, and to process query strings from the stats
page for example.
This patch updates the function to differentiate the situation where
it's parsing a path and a query string. A new argument indicates if a
query string should be assumed, otherwise it's only assumed after seeing
a question mark.
The various locations in the code making use of this function were
updated to take care of this (most call places were using it to decode
POST arguments).
The url_dec converter is usually called on path or url samples, so it
needs to remain compatible with this and will default to parsing a path
and turning the '+' to a space only after a question mark. However in
situations where it would explicitly be extracted from a POST or a
query string, it now becomes possible to enforce the decoding by passing
a non-null value in argument.
It seems to be what was reported in issue #585. This fix may be
backported to older stable releases.
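A sketch of both behaviors (the converter argument shown here follows the
description above):

    # default: '+' is turned into a space only after a question mark
    http-request set-var(txn.path) path,url_dec
    # query strings are known to be form-encoded: decode '+' everywhere
    http-request set-var(txn.args) query,url_dec(1)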
A typo resulted in '||' being used to combine trace flags, which would only
set a flag of value '1' there. Noticed by clang 10 and reported in issue #588.
No backport is needed, this trace was added in 2.2-dev.
When checking www-authenticate headers, we don't want to just accept
"NTLM" as the value, because the server may send "NTLM <base64 value>".
Instead, just check that the value starts with NTLM.
This should be backported to 2.1, 2.0, 1.9 and 1.8.
Documentation states that default settings for SSL server options can be set
using either the ssl-default-server-options or default-server directives. In
practice, not all SSL server options could take default values, such as
ssl-min-ver, ssl-max-ver, etc.
This patch adds the missing ssl options in srv_ssl_settings_cpy() and srv_parse_ssl(),
making it possible to write configurations like the following examples, and have them
behave as expected.
global
ssl-default-server-options ssl-max-ver TLSv1.2
defaults
mode http
listen l1
bind 1.2.3.4:80
default-server ssl verify none
server s1 1.2.3.5:443
listen l2
bind 2.2.3.4:80
default-server ssl verify none ssl-max-ver TLSv1.3 ssl-min-ver TLSv1.2
server s1 1.2.3.6:443
This should be backported as far as 1.8.
This fixes issue #595.
This option activates the feature introduced in commit 16739778:
"MINOR: ssl: skip self issued CA in cert chain for ssl_ctx".
The patch disables the feature by default.
When trying to insert a new certificate into a directory with "add ssl
crt-list", no check was done on the path of the new certificate.
To be more consistent with the HAProxy reload, when adding a file to
a crt-list, if this crt-list is a directory, the certificate will need
to have the directory in its path.
Allowing the use of SSL options and filters when adding a file to a
directory is not really consistent with the reload of HAProxy. Disable
the ability to use these options if one tries to use them with a directory.
This patch adds the sysname, release, version and machine fields from
the uname results to the version output. It intentionally leaves out the
nodename, because it is usually not useful and users might not want to
expose their machine names for privacy reasons.
May be backported if it is considered useful for debugging.
If haproxy fails to start and emits an alert, then it can be useful
to have it also emit the version and the path used to load it. Some
users may be mistakenly launching the wrong binary due to a misconfigured
PATH variable and this will save them some troubleshooting time when it
reports that some keywords are not understood.
What we do here is that we *try* to extract the binary name from the
AUX vector on glibc, and we report this as a NOTICE tag before the
very first alert is emitted.
Some portability issues were met a few times in the past depending on
compiler versions, but this one was not reported in haproxy -vv output
while it's trivial to add it. This patch tries to be the most accurate
by explicitly reporting the clang version if detected, otherwise the
gcc version.
Since some systems switched to service managers which hide all warnings
by default, some users are not aware of some possibly important warnings
and get caught too late with errors that could have been detected earlier.
This patch adds a new global keyword, "zero-warning" and an equivalent
command-line option "-dW" to refuse to start in case any warning is
detected. It is recommended to use these with configurations that are
managed by humans in order to catch mistakes very early.
This helps quickly checking if the config produces any warning. For
this we reuse the "warned" bit field to add a new WARN_ANY bit that is
set by ha_warning(). The rest of the bit field was also cleaned from
unused bits.
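Both forms are equivalent:

    global
        # refuse to start if any warning is emitted
        zero-warning

    $ haproxy -dW -c -f /etc/haproxy/haproxy.cfg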
Before supporting "server" line in "peers" section, such sections without
any local peer were removed from the configuration to get it validated.
This patch fixes the issue where a "server" line without address and port which
is a remote peer without address and port makes the configuration parsing fail.
When encoutering such cases we now ignore such lines remove them from the
configuration.
Thank you to Jrme Magnin for having reported this bug.
Must be backported to 2.1 and 2.0.
Commit 7f26391bc5 ("BUG/MINOR: connection: make sure to correctly tag
local PROXY connections") revealed that some implementations do not
properly ignore addresses in LOCAL connections (at least Dovecot was
spotted). More context information in the thread below:
https://www.mail-archive.com/haproxy@formilux.org/msg36890.html
The patch above was using LOCAL on top of local addresses in order to
minimize the risk of breakage, but this proved worse than a clean fix. So
let's partially revert it and send pure LOCAL connections instead now.
After a bit of observation, this patch should be progressively backported
to stable branches. However if it reveals new breakage, the backport of
the patch above will have to be reverted from stable branches while other
products work on fixing their code based on the master branch.
When reading a crt-list file, the SSL options between square brackets
are parsed; however, the calling function sets the ssl_conf pointer to NULL,
leading to all options being ignored, and a memory leak.
This is a leftover from the previous code which was forgotten.
This bug was introduced by 97b0810 ("MINOR: ssl: split the line parsing
of the crt-list").
Create a ckch_store_new() function which allocates and initializes a
ckch_store, allowing us to remove duplicated code and to avoid wrong
initializations in the future.
Bug introduced by d9d5d1b ("MINOR: ssl: free instances and SNIs with
ckch_inst_free()").
Upon a 'commit ssl cert', the HA_RWLOCK_WRUNLOCK of the SNI lock was done
using the bind_conf pointer of the ckch_inst which had been freed.
Fix the problem by using an intermediate variable to store the
bind_conf pointer.
Replace ckchs_free() by ckch_store_free(), which frees the ckch_store but
now also all its ckch_inst with ckch_inst_free().
Also remove the "ckchs" naming since it's confusing.
In 'commit ssl cert', instead of trying to regenerate a list of filters
from the SNIs, use the list provided by the crtlist_entry used to
generate the ckch_inst.
This list of filters doesn't need to be free'd anymore since they are
always reused from the crtlist_entry.
Use the refcount of the SSL_CTXs to free them, instead of freeing them on
certain conditions. That way we can free the SSL_CTX everywhere its
pointer is used.
When deleting the previous SNI entries with 'set ssl cert', the old
SSL_CTXs were not freed, which probably prevented the completion of the
free of the X509 in the old ckch_store, because of the refcounts in the
SSL library.
This bug was introduced by 150bfa8 ("MEDIUM: cli/ssl: handle the
creation of SSL_CTX in an IO handler").
Must be backported to 2.1.
crtlist_load_cert_dir() caches the directory name without trailing
slashes, while ssl_sock_load_cert_list_file() looks it up without
cleaning the trailing slashes.
This bug leads to creating the crtlist twice and prevents a crtlist_entry
from being removed correctly, since it exists in the several crtlists
created by accident.
Move the trailing slash cleanup into ssl_sock_load_cert_list_file() to
fix the problem.
This bug was introduced by 6be66ec ("MINOR: ssl: directories are loaded
like crt-list")
Delete a certificate store from HAProxy and free its memory. The
certificate must be unused and removed from any crt-list or directory.
The deletion doesn't work with a certificate referenced directly with
the "crt" directive in the configuration.
Bundles are deprecated and can't be used with the crt-list command of
the CLI, improve the error output when trying to use them so the users
can disable them.
The cli_parse_del_crtlist() function does unlock the ckch big lock, but it
does not lock it at the beginning of the function, which is dangerous.
As a side effect, it left the structures unlocked once it called the unlock.
This bug was introduced by 0a9b941 ("MINOR: ssl/cli: 'del ssl crt-list'
delete an entry")
Issue #574 reported an unclear error when trying to open a file with
insufficient permissions.
[ALERT] 096/032117 (835) : parsing [/etc/haproxy/haproxy.cfg:54] : 'bind :443' : error encountered while processing 'crt'.
[ALERT] 096/032117 (835) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 096/032117 (835) : Fatal errors found in configuration.
Improve the error to give us more information:
[ALERT] 097/142030 (240089) : parsing [test.cfg:22] : 'bind :443' : cannot open the file 'kikyo.pem.rsa'.
[ALERT] 097/142030 (240089) : Error(s) found in configuration file : test.cfg
[ALERT] 097/142030 (240089) : Fatal errors found in configuration.
This patch could be backported in 2.1.
The dump and show ssl crt-list commands do the same thing: they dump
the content of a crt-list, but the 'show' one displays an ID in the first
column. Delete the 'dump' command so it is replaced by the 'show' one.
The old 'show' behavior is replaced by a '-n' option to dump the ID.
And the ID, which was a pointer, is replaced by a line number placed
after a colon in the filename.
Example:
$ echo "show ssl crt-list -n kikyo.crt-list" | socat /tmp/sock1 -
# kikyo.crt-list
kikyo.pem.rsa:1 secure.domain.tld
kikyo.pem.ecdsa:2 secure.domain.tld
Delete an entry in a crt-list. This is done by iterating over the
ckch_inst structures in the crtlist_entry. For each ckch_inst, the
bind_conf lock is held during the deletion of the sni_ctx in the SNI
trees. Everything is then freed.
If there are several entries with the same certificate, a line number
must be provided to choose which entry to delete.
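A sketch of the expected usage, reusing the certificate:line notation shown by
'show ssl crt-list -n' (the exact syntax is an assumption):

    $ echo "del ssl crt-list kikyo.crt-list kikyo.pem.rsa:2" | socat /tmp/sock1 -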
Initialize fcount to 0 when 'add ssl crt-list' does not contain any
filters. This bug could lead to trying to read some filters even though
they don't exist.
The HPACK header table is implemented as a wrapping list inside a contiguous
area. Header names and values are stored from right to left while indexes
are stored from left to right. When there's no more room to store a new one,
we wrap to the right again, or possibly defragment the area if needed. The
condition to use the right part (called tailroom) or the left part (called
headroom) depends on the location of the last inserted header. After wrapping
happens, the code forces itself to stick to the tailroom by pretending there's
no more headroom, so that the size-fit test always fails. The problem is that
nothing prevents storing a header with an empty name and an empty value,
resulting in a total size of zero bytes, which satisfies the condition to use
the headroom. Doing this in a wrapped buffer results in changing the "front"
header index and causes miscalculations of the available size and of the
addresses of the next headers. This may even make it possible to overwrite
some parts of the index, opening the possibility to perform arbitrary writes
into a 32-bit relative address space.
This patch fixes the issue by making sure the headroom is considered only
when the buffer does not wrap, instead of relying on the zero size. This
must be backported to all versions supporting H2, which is as far as 1.8.
Many thanks to Felix Wilhelm of Google Project Zero for responsibly
reporting this problem with a reproducer and a detailed analysis.
CVE-2020-11100 was assigned to this issue.
Add the support for filters and SSL options in the CLI command
"add ssl crt-list".
The feature was implemented by applying the same parser as the crt-list
file to the payload.
The new options are passed to the command as a payload with the same
format that is supported by the crt-list file itself, so you can easily
copy a line from a file and push it via the CLI.
Example:
printf "add ssl crt-list localhost.crt-list <<\necdsa.pem [verify none allow-0rtt] localhost !www.test1.com\n\n" | socat /tmp/sock1 -
In order to reuse the crt-list line parsing in "add ssl crt-list",
the line parsing was extracted from crtlist_parse_file() to a new
function crtlist_parse_line().
This function fills a crtlist_entry structure with a bind_ssl_conf and
the list of filters.
We can't expect the DNS answer to always match the case we used for the
request, so we can't just use memcmp() to compare the DNS answer with what
we expected.
Instead, introduce dns_hostname_cmp(), which compares each string in a
case-insensitive way.
This should fix github issue #566.
This should be backported to 2.1, 2.0, 1.9 and 1.8.
This patch fixes issue #53, where it was noticed that when an active
server is set to DRAIN, it no longer has the blue color reflected
within the stats page. This patch addresses that and adds the
color back for drain. Note that backup servers are configured to
have an orange color when they are draining.
Should be backported as far as 1.7.
The new 'add ssl crt-list' command allows one to add a new crt-list entry in
a crt-list (or a directory, since they are handled the same way).
The principle is basically the same as the "commit ssl cert" command, with
the exception that it iterates on every bind_conf that uses the crt-list
and generates a ckch instance (ckch_inst) for each of them.
The IO handler yields every 10 bind_confs so HAProxy does not get stuck in
a generation consuming too much time when it needs to generate too many
SSL_CTXs.
This command does not handle the SNI filters and the SSL configuration
yet.
Example:
$ echo "new ssl cert foo.net.pem" | socat /tmp/sock1 -
New empty certificate store 'foo.net.pem'!
$ echo -e -n "set ssl cert foo.net.pem <<\n$(cat foo.net.pem)\n\n" | socat /tmp/sock1 -
Transaction created for certificate foo.net.pem!
$ echo "commit ssl cert foo.net.pem" | socat /tmp/sock1 -
Committing foo.net.pem
Success!
$ echo "add ssl crt-list one.crt-list foo.net.pem" | socat /tmp/sock1 -
Inserting certificate 'foo.net.pem' in crt-list 'one.crt-list'......
Success!
$ echo "show ssl crt-list one.crt-list" | socat /tmp/sock1 -
# one.crt-list
0x55d17d7be360 kikyo.pem.rsa [ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3]
0x55d17d82cb10 foo.net.pem
The crtlist_entry structure uses a pointer to the store as its key.
That's a problem with the dynamic update of a certificate over the CLI,
because it allocates a new ckch_store, so the pointers need to be
updated. To achieve that, a linked list of the crtlist_entries was added
in the ckch_store, so it's easy to iterate over this list to update the
pointers. Another solution would have been to rework the system so we
don't allocate a new ckch_store, but that requires a rework of the ckch
code.
When updating a ckch_store, we may want to update its pointer in the
crtlist_entries which use it. To do this, we need the list of the entries
using the store.
The instances were wrongly inserted in the crtlist entries: all
instances of a crt-list were inserted in the last crt-list entry,
which was kind of handy to free all instances upon error.
Now that it's done correctly, the error path was changed: it must
iterate over the entries and find the ckch_insts which were generated for
this bind_conf. To avoid wasting time, it stops the iteration once it
finds the first unsuccessful generation.
In order to be able to add new certificate in a crt-list, we need the
list of bind_conf that uses this crt-list so we can create a ckch_inst
for each of them.
The original algorithm always killed half the idle connections. This doesn't
take into account the way the load can change. Instead, we now kill half
of the exceeding connections (an exceeding connection being one of the
used + idle connections past the last maximum of used connections reached).
That way if we reach a peak, we will kill far fewer, and the count will
slowly go back down when there is less usage.
Add a counter to know the current number of used connections, as well as the
max, this will be used later to refine the algorithm used to kill idle
connections, based on current usage.
The server-template directive introduced the possibility to scale the
number of servers in a backend without needing a configuration change
and the associated reload. On the other hand, it became impractical to
write use-server rules for these servers, as such rules would only accept
existing server labels as argument. This patch allows the use of
log-format notation to describe the targets of use-server rules, as
in the example below:
listen test
bind *:1234
use-server %[hdr(srv)] if { hdr(srv) -m found }
use-server s1 if { path / }
server s1 127.0.0.1:18080
server s2 127.0.0.1:18081
If a use-server rule is applied because it was conditioned by an
ACL returning true, but the target of the use-server rule cannot be
resolved, no other use-server rule is evaluated and we fall back to
load balancing.
This feature was requested on the ML, and bumped with issue #563.
Add a sample fetch for the name of a bind. This can be useful to
take decisions when PROXY protocol is used and we can't rely on dst,
such as the sample config below.
defaults
mode http
listen bar
bind 127.0.0.1:1111
server s1 127.0.1.1:1234 send-proxy
listen foo
bind 127.0.1.1:1234 name foo accept-proxy
http-request return status 200 hdr dst %[dst] if { dst 127.0.1.1 }
First: a self-issued CA, aka root CA, is the anchor for chain validation;
there is no need to send it, the client must have it. HAProxy can skip it in
the ssl_ctx.
Second: the main motivation to skip the root CA in the ssl_ctx is to be able
to provide it in the chain without drawback. The use case is to provide the
issuer for OCSP without the need for a .issuer file, and to be able to share
it in issuers-chain-path. This concerns all certificates without intermediate
certificates. It's useless for BoringSSL: .issuer is ignored because the OCSP
bits don't need it.
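A minimal sketch of the intended setup (the directory name is only
illustrative):

    global
        # directory holding the shared issuer chains; a certificate
        # without intermediates can find its OCSP issuer here, and the
        # root CA can stay in the chain without being sent in the ssl_ctx
        issuers-chain-path /etc/haproxy/issuers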
13a9232ebc introduced parsing of the Additional DNS response section to
pick up IP addresses when available.
That said, this introduced a side effect for other query types (A and
AAAA), leading those responses to be considered invalid when parsing the
Additional section.
This patch avoids this situation by ensuring the Additional section is
parsed only for SRV queries.
In h1_detach(), if our input buffer isn't empty, don't just subscribe(): we
may hold a new request, and there's nothing left to read. Instead, call
h1_process() directly, so that a new stream is created.
Failure to do so means that if we received the new request too early, the
connection will just hang, as happens when using svn.
When a "peers" section has not any local peer, it is removed of the list
of "peers" sections by check_config_validity(). But a stick-table which
refers to a "peers" section stores a pointer to this peers section.
These pointer must be reset to NULL value for each stick-table refering to
such a "peers" section to prevent stktable_init() to start the peers frontend
attached to the peers section dereferencing the invalid pointer.
Furthemore this patch stops the peers frontend as this is done for other
configurations invalidated by check_config_validity().
Thank you to Olivier D for having reported this issue with such a
simple configuration file which made haproxy crash when started with
-c option for configuration file validation.
defaults
mode http
peers mypeers
peer toto 127.0.0.1:1024
backend test
stick-table type ip size 10k expire 1h store http_req_rate(1h) peers mypeers
Must be backported to 2.1 and 2.0.
Fix an infinite loop which was added in an attempt to fix issue #558:
if peers_fe is NULL, it will loop forever.
Must be backported with a2cfd7e as far as 1.8.
Tim reported that in master-worker mode, if a stick-table is declared
but not used in the configuration, its associated peers listener won't
bind.
This problem is due to the fact that the master-worker and daemon modes
depend on the bind_proc field of the peers proxy to disable the listener.
Unfortunately, bind_proc is left to 0 if no stick-table was used in
the configuration, stopping the listener on all processes.
This fix sets bind_proc to the first process if it wasn't
initialized.
Should fix bug #558. Should be backported as far as 1.8.
SSL_CTX_set1_chain is used for openssl >= 1.0.2, and a loop with
SSL_CTX_add_extra_chain_cert for openssl < 1.0.2.
SSL_CTX_add_extra_chain_cert also exists with openssl >= 1.0.2 and can be
used with all openssl versions (its new name is SSL_CTX_add0_chain_cert).
This patch uses SSL_CTX_add_extra_chain_cert to remove any #ifdef for
compatibility. In addition, sk_X509_num()/sk_X509_value() replace
sk_X509_shift() to extract the CA from the chain, as is done in other parts
of the code.
This bug was introduced by 85888573 "BUG/MEDIUM: ssl: chain must be
initialized with sk_X509_new_null()". No need to set find_chain with
sk_X509_new_null(), use find_chain conditionally to fix issue #516.
This bug was referenced by issue #559.
[wla: fix some alignment/indentation issue]
In run_thread_poll_loop() we test both for (global_tasks_mask & tid_bit)
and thread_has_tasks(), but the former is useless since this test is
already part of the latter.
Commit 4b3f27b ("BUG/MINOR: haproxy/threads: try to make all threads
leave together") improved the soft-stop synchronization but it left a
small race open because it looks at tasks_run_queue, which can drop
to zero then back to one while another thread picks the task from the
run queue to insert it into the tasklet_list. The risk is very low but
not null. In addition the condition didn't consider the possible presence
of signals in the queue.
This patch moves the stopping detection just after the "wake" calculation
which already takes care of the various queues' sizes and signals. It
avoids needlessly duplicating these tests.
The bug was discovered during a code review but will probably never be
observed. This fix may be backported to 2.1 and 2.0 along with the commit
above.
In the various muxes, add a comment documenting that once
srv_add_to_idle_list() has been called, any thread may pick that connection
up, so it is unsafe to access the mux context or the connection; the only
thing we can do is return.
In h1_detach(), make sure we subscribe before we call
srv_add_to_idle_list(), not after. As soon as srv_add_to_idle_list() is
called, and it is put in an idle list, another thread can take it, and
we're no longer allowed to subscribe.
This fixes a race condition when another thread grabs a connection as soon
as it is put, the original owner would subscribe, and thus the new thread
would fail to do so, and to activate polling.
Fix a potential NULL dereference in "show ssl cert" when we can't
allocate the <out> trash buffer.
This patch creates a new label so we could jump without trying to do the
ci_putchk in this case.
This bug was introduced by ea987ed ("MINOR: ssl/cli: 'new ssl cert'
command"). 2.2 only.
This bug was referenced by issue #556.
In connect_server(), make sure we properly free a newly created connection
if we somehow fail and it has not yet been attached to a conn_stream, or
it would lead to a memory leak.
This should appease coverity for backend.c, as reported in issue #556.
This should be backported to 2.1, 2.0 and 1.9.
Fix a memory leak that could happen upon a "show ssl cert" if notBefore:
or notAfter: failed to extract its ASN1 string.
Introduced by d4f946c ("MINOR: ssl/cli: 'show ssl cert' give information
on the certificates"). 2.2 only.
In `crtlist_dup_filters()` add the `1` to the number of elements instead of
the size of a single element.
This bug was introduced in commit 2954c478eb,
which is 2.2+. No backport needed.
In `ckch_inst_sni_ctx_to_sni_filters` use `calloc()` to allocate the filter
array. When the function fails to allocate memory for a single entry the
whole array will be `free()`d using free_sni_filters(). With the previous
`malloc()` the pointers for entries after the failing allocation could
possibly be a garbage value.
This bug was introduced in commit 38df1c8006,
which is 2.2+. No backport needed.
In conn_backend_get(), when we manage to get an idle connection from the
current thread's pool, don't forget to decrement the idle connection
counters, or we may end up not reusing connections when we could, and/or
killing connections when we shouldn't.
In connect_server(), if we notice we have more file descriptors opened than
we should, there's no reason not to close a connection just because we're
reusing one, so do it anyway.
In connect_server(), if we no longer have any idle connections for the
current thread, attempt to use the new "takeover" mux method to steal a
connection from another thread.
This should have no impact right now, given no mux implements it.
Make the "list" element a struct mt_list, and explicitely use
list_from_mt_list to get a struct list * where it is used as such, so that
mt_list_for_each_entry will be usable with it.
Implement a new function, fd_takeover(), that lets a thread become the one
responsible for an fd. On architectures that do not have a double-width CAS,
use a global rwlock.
fd_set_running() was also changed to be able to compete with fd_takeover(),
either using a double-width CAS on both running_mask and thread_mask, or
by claiming a reader slot on the global rwlock. This extra operation should
not have any measurable impact on modern architectures where threading is
relevant.
Revamp the server connection lists. We now have 3 lists:
- idle_conns, which contains idling connections
- safe_conns, which contains idling connections that are safe to use even
for the first request
- available_conns, which contains connections that are not idling, but can
still accept new streams (those are HTTP/2 or FastCGI, and are always
considered safe).
Make it so sessions are not responsible for connections anymore, except for
connections that are private and thus can't be shared; otherwise, as soon
as a request is done, the session will just add the connection to the
orphan connections pool.
This will break http-reuse safe, but it is expected to be fixed later.
The CLI command "new ssl cert" allows one to create a new certificate
store in memory. It can be filed with "set ssl cert" and "commit ssl
cert".
This patch also made a small change in "show ssl cert" to handle an
empty certificate store.
Multi-certificate bundles are not supported since they will probably be
removed soon.
This feature alone is useless since there is no way to associate the
store to a crt-list yet.
Example:
$ echo "new ssl cert foobar.pem" | socat /tmp/sock1 -
New empty certificate store 'foobar.pem'!
$ printf "set ssl cert foobar.pem <<\n$(cat localhost.pem.rsa)\n\n" | socat /tmp/sock1 -
Transaction created for certificate foobar.pem!
$ echo "commit ssl cert foobar.pem" | socat /tmp/sock1 -
Committing foobar.pem
Success!
$ echo "show ssl cert foobar.pem" | socat /tmp/sock1 -
Filename: foobar.pem
[...]
The flush_lock was introduced mostly to be sure that pool_gc() will never
dereference a pointer that has been freed. __pool_get_first() was acquiring
the lock too; the fear was that otherwise such a pointer could get freed
later, and then pool_gc() would attempt to dereference it. However, that
cannot happen, because the only functions that can free a pointer, when
using lockless pools, are pool_gc() and pool_flush(), and as long as those
two are mutually exclusive, nobody will be able to free the pointer while
pool_gc() attempts to access it.
So change the flush_lock to a spinlock, and don't bother acquiring/releasing
it in __pool_get_first(); that way callers of __pool_get_first() won't have
to wait while the pool is flushed. The worst that can happen is that we call
__pool_refill_alloc() while the pool is getting flushed, and memory gets
allocated just to be freed.
This may help with github issue #552
This may be backported to 2.1, 2.0 and 1.9.
When running __signal_process_queue(), we ignore most signals. We can't,
however, ignore WDTSIG and DEBUGSIG, otherwise that thread may end up
waiting for another one that could hold a glibc lock, while the other thread
waits for this one to enter debug_handler().
So make sure WDTSIG and DEBUGSIG aren't ignored, if they are defined.
This probably explains the watchdog deadlock described in github issue
This should be backported to 2.1, 2.0 and 1.9.
Move the definition of WDTSIG and DEBUGSIG from wdt.c and debug.c into
types/signal.h, so that we can access them in another file.
We need those definition to avoid blocking those signals when running
__signal_process_queue().
This should be backported to 2.1, 2.0 and 1.9.
The behavior of calloc() when being passed `0` as `nelem` is implementation
defined. It may return a NULL pointer.
Avoid this issue by checking before allocating. While doing so, adjust the
local integer variables that are used to refer to memory offsets to `size_t`.
This issue was introduced in commit f91ac19299. This
patch should be backported together with that commit.
There is a memory leak of the entry structure in crtlist_load_cert_dir(), in
the case we can't stat the file, or when this is not a regular file. Let's
move the entry allocation so it's done after these tests.
Fix issue #551.
When tasklets were introduced, it was decided not to provide the tasklet
to the callback, but NULL instead. While it may have been reasonable back
then, maybe to be able to differentiate a task from a tasklet in the
callback, it also means that we can't access the tasklet from the handler if
the context provided can't be trusted.
As no handler is shared between a task and a tasklet, and there are now
other means of distinguishing between task and tasklet, just pass the
tasklet pointer too.
This may be backported to 2.1, 2.0 and 1.9 if needed.
A memory leak happens in an error case when ckchs_load_cert_file()
returns NULL in crtlist_parse_file().
This bug was introduced by commit 2954c47 ("MEDIUM: ssl: allow crt-list caching")
This patch fixes bug #551.
In the struct fdtab, introduce a new mask, running_mask. Each thread should
add its bit before using the fd.
Use the running_mask instead of a lock, in fd_insert/fd_delete, we'll just
spin as long as the mask is non-zero, to be sure we access the data
exclusively.
fd_set_running_excl() spins until the mask is 0, fd_set_running() just
adds the thread bit, and fd_clr_running() removes it.
Implement 2 new commands on the CLI:
show ssl crt-list [<filename>]: Without a specified filename, display
the list of crt-lists used by the configuration. If a filename is
specified, it displays the content of this crt-list, with a line
identifier at the beginning of each line. This output must not be used
as a crt-list file.
dump ssl crt-list <filename>: Dump the content of a crt-list; the output
can be used as a crt-list file.
Note: it currently displays the default ssl-min-ver and ssl-max-ver,
which are potentially not in the original file.
Don't bother trying to remove the connection from the idle list, as the
only connections the mux_pt handles are now the TCP-mode connections, and
those are never added to the idle list.
The agent's engine_id was not duplicated from the trash: all engine_ids
pointed to the same address, "&trash.area", so the engine_id changed at run
time and would be double-freed when releasing the agents and the trash.
This bug was introduced by the commit ee3bcddef ("MINOR: tools: add a generic
function to generate UUIDs").
No backport is needed, this is 2.2-dev.
The commit 6be66ec ("MINOR: ssl: directories are loaded like crt-list")
broke the directory loading of the certificates: the <crtlist> wasn't
filled by the crtlist_load_cert_dir() function, and the entries were
not correctly initialized, leading to a segfault during startup.
Generate a directory cache with the crtlist and crtlist_entry structures.
With this new model, directories are a special case of crt-lists: a
directory is a crt-list which allows only one occurrence of each file,
without SSL configuration (ssl_bind_conf) and without filters.
The crtlist structure defines a crt-list in the HAProxy configuration.
It contains crtlist_entry structures which are the lines in a crt-list
file.
crt-lists are now loaded in memory using the crtlist and crtlist_entry
structures. The file is read only once. The generation algorithm changed
a little bit: new ckch instances are generated from the crtlist
structures, instead of being generated during the file loading.
The loading function was split in two: one that loads and caches the
crt-list and certificates, and one that looks for a crt-list and creates
the ckch instances.
Filters are also stored in crtlist_entry->filters as a char ** so we can
generate the sni_ctx again if needed. It is no longer necessary to parse
the sni_ctx to do that.
A crtlist_entry stores the list of all ckch_inst that were generated
from this entry.
Pass a pointer to the struct ckch_inst to the ssl_sock_load_ckchs()
function so we can manipulate the ckch_inst from
ssl_sock_load_cert_list_file() and ssl_sock_load_cert().
This bug was introduced with peers.c code re-work (7d0ceeec80):
"struct peer" flags are mistakenly checked instead of
"struct peers" flags to check the resync status of the local peer.
The issue was reported here:
https://github.com/haproxy/haproxy/issues/545
This bug affects all branches >= 2.0 and should be backported.
It's more generic and versatile than the previous shut_your_big_mouth_gcc()
that was used to silence annoying warnings, as it's not limited to ignoring
syscall returns only. This allows us to get rid of the aforementioned
function and the shut_your_big_mouth_gcc_int variable, which started to
look ugly in multi-threaded environments.
This message comes when we run:
haproxy -c -V -f /etc/haproxy/haproxy.cfg
[ALERT] 072/233727 (30865) : parsing [/etc/haproxy/haproxy.cfg:34] : The 'rspirep' directive is not supported anymore sionce HAProxy 2.1. Use 'http-response replace-header' instead.
[ALERT] 072/233727 (30865) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 072/233727 (30865) : Fatal errors found in configuration.
These are mostly comments in the code. A few error messages were fixed
and are of low enough importance not to deserve a backport. Some regtests
were also fixed.
This adds the missing blank lines in `make_proxy_line_v2` and
`conn_recv_proxy`. It also adjusts the type of the temporary variable
used for the return value of `recv` to be `ssize_t` instead of `int`.
This patch adds the `unique-id` option to `proxy-v2-options`. If this
option is set, a unique ID will be generated based on the `unique-id-format`
while sending the proxy protocol v2 header, and stored as the unique id for
the first stream of the connection.
This feature is meant to be used in `tcp` mode. It works in HTTP mode, but
might result in inconsistent unique IDs for the first request on a keep-alive
connection, because the unique ID for the first stream is generated earlier
than the others.
Now that we can send unique IDs in `tcp` mode, the `%ID` log variable is made
available in TCP mode.
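A minimal sketch (the unique-id-format value is only an example):

    defaults
        mode tcp
        unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts
    backend b
        # the ID generated from unique-id-format is sent in the PPv2
        # header and also stored as the first stream's unique ID
        server s1 192.168.0.1:443 send-proxy-v2 proxy-v2-options unique-id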
There's a small issue with soft stop combined with the incoming
connection load balancing. A thread may dispatch a connection to
another one at the moment stopping=1 is set, and the second one could
stop by seeing (jobs - unstoppable_jobs) == 0 in run_poll_loop(),
without ever picking these connections from the queue. This is
visible in that it may occasionally cause a connection drop on
reload since no remaining thread will ever pick that connection
anymore.
In order to address this, this patch adds a stopping_thread_mask
variable by which threads acknowledge their willingness to stop
when their runqueue is empty. And all threads will only stop at
this moment, so that if finally some late work arrives in the
thread's queue, it still has a chance to process it.
This should be backported to 2.1 and 2.0.
When stopping there is a risk that other threads are already in the
process of stopping, so let's not add new work in their queue and
instead keep the incoming connection local.
This should be backported to 2.1 and 2.0.
Surprisingly, the variable was never initialized, though on most
platforms it's zeroed at boot, and it is relatively harmless
anyway since in the worst case the bits are updated around poll().
This was introduced by commit 79321b95a8 and needs to be backported
as far as 1.9.
In pool_gc(), when we're not using lockless pools, always update free_list,
and read from it the next element to free. As we now unlock the pool while
we're freeing the item, another thread could have updated free_list behind
our back. Not doing so could lead to segfaults when pool_gc() is called.
This should be backported to 2.1.
Don't assume the connection always has a valid session in "owner".
Instead, attempt to retrieve the session from the stream, and modify
the error snapshot code to not assume we always have a session, or the proxy
for the other end.
x86_64 and ARM64 do support the double-word atomic CAS. However, on
ARM it must be done only on aligned data. The random generator makes
use of such a double-word atomic CAS when available but didn't enforce
alignment, which caused ARM64 to crash early in the startup since
commit 52bf839 ("BUG/MEDIUM: random: implement a thread-safe and
process-safe PRNG").
This commit just unconditionally aligns the arrays. It must be
backported to all branches where the commit above is backported
(likely till 2.0).
Remove the list of private connections from server, it has been largely
unused, we only inserted connections in it, but we would never actually
use it.
When a maximum memory setting is passed to haproxy and maxconn is not set
and ulimit-n is not set, it is expected that maxconn will be set to the
highest value permitted by this memory setting, possibly affecting the
FD limit.
When maxconn was changed to be deduced from the current process's FD limit,
the automatic setting above was partially lost because it now remains
limited to the current FD limit in addition to being limited to the
memory usage. For unprivileged processes it does not change anything,
but for privileged processes the difference is important. Indeed, the
previous behavior ensured that the new FD limit could be enforced on
the process as long as the user had the privilege to do so. Now this
does not happen anymore, and some people rely on this for automatic
sizing in VM environments.
This patch implements the ability to verify if the setting will be
enforceable on the process or not. First it computes maxconn based on
the memory limits alone, then checks if the process is willing to accept
them, otherwise tries again by respecting the process' hard limit.
Thanks to this we now have the best of the pre-2.0 behavior and the
current one, in that privileged users will be able to get as high a
maxconn as they need just based on the memory limit, while unprivileged
users will still get as high a setting as permitted by the intersection
of the memory limit and the process' FD limit.
Ideally, after some observation period, this patch along with the
previous one "MINOR: init: move the maxsock calculation code to
compute_ideal_maxsock()" should be backported to 2.1 and 2.0.
Thanks to Baptiste for raising the issue.
The maxsock value is currently derived from global.maxconn and a few other
settings, some of which also depend on global.maxconn. This makes it
difficult to check if a limit is already too high or not during the maxconn
automatic sizing.
Let's move this code into a new function, compute_ideal_maxsock() which now
takes a maxconn in argument. It performs the same operations and returns
the maxsock value if global.maxconn were to be set to that value. It now
replaces the previous code to compute maxsock.
When calling MT_LIST_DEL_SAFE(), give it the temporary pointer "tmpelt",
as that's what is expected. We want to be able to set that pointer to NULL,
to let other parts of the code know we deleted an element.
In order to store and cache the directory loading, the directory loading
was separated from ssl_sock_load_cert() and put in a new function
ssl_sock_load_cert_dir() to be more readable.
This patch only splits the function in two.
SI_TKILL is not necessarily defined on older systems and is used only
with the pthread_kill() call a few lines below, so it should also be
subject to the USE_THREAD condition.
Technically speaking the call was implemented in glibc 2.3 so we must
rely on this and not on __USE_GNU which is an internal define of glibc
to track use of GNU_SOURCE.
The splice() syscall has been supported in glibc since version 2.5 issued
in 2006 and is present on supported systems so there's no need for having
our own arch-specific syscall definitions anymore.
This was made to support epoll on patched 2.4 kernels, and on early 2.6
using alternative libcs thanks to the arch-specific syscall definitions.
All the features we support have been around since 2.6.2 and present in
glibc since 2.3.2, neither of which is found in the field anymore. Let's
simply drop this and use epoll normally.
The accept4() syscall has been present for a while now, there is no more
reason for maintaining our own arch-specific syscall implementation for
systems lacking it in libc but having it in the kernel.
This was introduced 10 years ago to squeeze a few CPU cycles per syscall
on 32-bit x86 machines and was already quite old by then, requiring to
explicitly enable support for this in the kernel. We don't even know if
it still builds, let alone if it works at all on recent kernels! Let's
completely drop this now.
Since commit 244b070 ("MINOR: ssl/cli: support crt-list filters"),
HAProxy generates a list of filters based on the sni_ctx in memory.
However, it's not always relevant: sometimes no filters were configured
and the CN/SAN in the new certificate are not the same.
This patch fixes the issue by using a "filters" flag in the ckch_inst, so
we are able to know whether there were filters or not. In the latter case,
it uses the CN/SAN of the new certificate to generate the sni_ctx.
Note: filters are still only used in the crt-list at the moment.
We currently have two UUID generation functions, one for the sample
fetch and the other one in the SPOE filter. Both were a bit complicated
since they were made to support random() implementations returning an
arbitrary number of bits, and were throwing away 33 bits every 64. Now
we don't need this anymore, so let's have a generic function consuming
64 bits at once and use it as appropriate.
The rand() sample fetch supports being limited to a certain range, but
it only uses 31 bits and scales them as requested, which means that when
the requested output range is larger than 31 bits, the least significant
bit is not random and may even be constant.
Let's make use of the whole 32 bits now that we have access to them.
In order to honor spread_checks we currently call rand(), which is not
thread safe and whose internal state must never be turned to zero. Let's
use ha_random() instead. This is a complement to commit 52bf839394
("BUG/MEDIUM: random: implement a thread-safe and process-safe PRNG")
and may be backported with it.
For the random LB algorithm we need a random 32-bit hashing key that used
to be made of two calls to random(). Now we can simply perform a single
call to ha_random32() and get rid of the useless operations.
This is the replacement for the failed attempt to add thread safety and
per-process sequences of random numbers initially tried with commit
1c306aa84d ("BUG/MEDIUM: random: implement per-thread and per-process
random sequences").
This new version takes a completely different approach and doesn't try
to work around the horrible OS-specific and non-portable random API
anymore. Instead it implements "xoroshiro128**", a reputedly high
quality random number generator, which is one of the many variants of
xorshift, which passes all quality tests and which is described here:
http://prng.di.unimi.it/
While not cryptographically secure, it is fast and features a 2^128-1
period. It supports fast jumps allowing to cut the period into smaller
non-overlapping sequences, which we use here to support up to 2^32
processes each having their own, non-overlapping sequence of 2^96
numbers (~7*10^28). This is enough to provide 1 billion randoms per
second and per process for 2200 billion years.
The implementation was made thread-safe either by using a double 64-bit
CAS on platforms supporting it (x86_64, aarch64) or by using a local
lock for the time needed to perform the shift operations. This ensures
that all threads pick numbers from the same pool so that it is not
needed to assign per-thread ranges. For processes we use the fast jump
method to advance the sequence by 2^96 for each process.
Before this patch, the following config:
global
nbproc 8
frontend f
bind :4445
mode http
log stdout format raw daemon
log-format "%[uuid] %pid"
redirect location /
Would produce this output:
a4d0ad64-2645-4b74-b894-48acce0669af 12987
a4d0ad64-2645-4b74-b894-48acce0669af 12992
a4d0ad64-2645-4b74-b894-48acce0669af 12986
a4d0ad64-2645-4b74-b894-48acce0669af 12988
a4d0ad64-2645-4b74-b894-48acce0669af 12991
a4d0ad64-2645-4b74-b894-48acce0669af 12989
a4d0ad64-2645-4b74-b894-48acce0669af 12990
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12987
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12992
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12986
(...)
And now produces:
f94b29b3-da74-4e03-a0c5-a532c635bad9 13011
47470c02-4862-4c33-80e7-a952899570e5 13014
86332123-539a-47bf-853f-8c8ea8b2a2b5 13013
8f9efa99-3143-47b2-83cf-d618c8dea711 13012
3cc0f5c7-d790-496b-8d39-bec77647af5b 13015
3ec64915-8f95-4374-9e66-e777dc8791e0 13009
0f9bf894-dcde-408c-b094-6e0bb3255452 13011
49c7bfde-3ffb-40e9-9a8d-8084d650ed8f 13014
e23f6f2e-35c5-4433-a294-b790ab902653 13012
There are multiple benefits to using this method. First, it doesn't
depend anymore on a non-portable API. Second it's thread safe. Third it
is fast and more proven than any hack we could attempt to try to work
around the deficiencies of the various implementations around.
This commit depends on the previous patches "MINOR: tools: add 64-bit rotate
operators" and "BUG/MEDIUM: random: initialize the random pool a bit
better", all of which will need to be backported at least as far as
version 2.0. It no longer requires backporting the build fixes for the
circular include file dependency.
This reverts commit 1c306aa84d.
It breaks the build on all non-glibc platforms. I got confused by the
man page (which possibly is the most confusing man page I've ever read
about a standard libc function) and mistakenly understood that random_r
was portable, especially since it appears in latest freebsd source as
well but not in released versions, and with a slightly different API :-/
We need to find a different solution with a fallback. Among the
possibilities, we may reintroduce this one with a fallback relying on
locking around the standard functions, keeping fingers crossed for no
other library function to call them in parallel, or we may also provide
our own PRNG, which is not necessarily more difficult than working
around the totally broken design of the portable API.
As mentioned in previous patch, the random number generator was never
made thread-safe, which used not to be a problem for health checks
spreading, until the uuid sample fetch function appeared. Currently
it is possible for two threads or processes to produce exactly the
same UUID. In fact it's extremely likely that this will happen for
processes, as can be seen with this config:
global
    nbproc 8
frontend f
    bind :4445
    mode http
    log stdout format raw daemon
    log-format "%[uuid] %pid"
    redirect location /
It typically produces this log:
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30645
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30641
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30644
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30646
07764439-c24d-4e6f-a5a6-0138be59e7a8 30645
07764439-c24d-4e6f-a5a6-0138be59e7a8 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30643
07764439-c24d-4e6f-a5a6-0138be59e7a8 30646
b6773fdd-678f-4d04-96f2-4fb11ad15d6b 30646
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30642
07764439-c24d-4e6f-a5a6-0138be59e7a8 30642
What this patch does is to use a distinct per-thread and per-process
seed to make sure the same sequences will not appear, and will then
extend these seeds by "burning" a number of randoms that depends on
the global random seed, the thread ID and the process ID. This adds
roughly 20 extra bits of randomness, resulting in 52 bits total per
thread and per process.
It only takes a few milliseconds to burn these randoms and given
that threads start with a different seed, we know they will not
catch each other. So these random extra bits are essentially added
to ensure randomness between boots and cluster instances.
This replaces all uses of random() with ha_random() which uses the
thread-local state.
This must be backported as far as 2.0 or any version having the
UUID sample-fetch function since it's the main victim here.
It's important to note that this patch, in addition to depending on
the previous one "BUG/MEDIUM: init: initialize the random pool a bit
better", also depends on the preceding build fixes to address a
circular dependency issue in the include files that prevented it
from building. Part or all of these patches may need to be backported
or adapted as well.
Since the UUID sample fetch was created, some people noticed that in
certain virtualized environments they manage to get exact same UUIDs
on different instances started exactly at the same moment. It turns
out that the randoms were only initialized to spread the health checks
originally, not to provide "clean" randoms.
This patch changes this and collects more randomness from various
sources, including existing randoms, /dev/urandom when available,
RAND_bytes() when OpenSSL is available, as well as the timing for such
operations, then applies a SHA1 on all this to keep a 160 bits random
seed available, 32 of which are passed to srandom().
It's worth mentioning that there's no clean way to pass more than 32
bits to srandom() as even initstate() provides an opaque state that
must absolutely not be tampered with since known implementations
contain state information.
At least this allows to have up to 4 billion different sequences
from the boot, which is not that bad.
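For illustration, the collection logic can be sketched as follows, assuming
a generic SHA1 Init/Update/Final API (the sha1_*()/SHA1_CTX names below are
placeholders, and error handling is omitted):

  #include <fcntl.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/time.h>

  static void init_random_seed(void)
  {
      unsigned char out[20];   /* 160-bit SHA1 digest */
      unsigned char buf[32];
      struct timeval tv;
      unsigned int seed;
      SHA1_CTX ctx;            /* placeholder SHA1 API */
      int fd;

      sha1_init(&ctx);

      gettimeofday(&tv, NULL);             /* timing entropy */
      sha1_update(&ctx, &tv, sizeof(tv));

      fd = open("/dev/urandom", O_RDONLY); /* OS entropy when available */
      if (fd >= 0) {
          if (read(fd, buf, sizeof(buf)) > 0)
              sha1_update(&ctx, buf, sizeof(buf));
          close(fd);
      }
      /* when built with OpenSSL, RAND_bytes(buf, sizeof(buf)) and the
       * time the calls took would be mixed in here as well */
      sha1_final(out, &ctx);

      memcpy(&seed, out, sizeof(seed));    /* only 32 bits fit */
      srandom(seed);
  }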
Note that the thread safety was still not addressed, which is another
issue for another patch.
This must be backported to all versions containing the UUID sample
fetch function, i.e. as far as 2.0.
In the same way as for the request, when a redirect rule is applied, the
transaction is aborted. This must be done by returning HTTP_RULE_RES_ABRT
from the http_res_get_intercept_rule() function.
No backport needed because on previous versions, the action return values are
not handled the same way.
Backend counters must be incremented only if a backend was already assigned to
the stream (when the stream exists). Otherwise, it means we are still on the
frontend side.
This patch may be backported as far as 1.6.
When an action interrupts a transaction, returning a response or not, it must
return the ACT_RET_ABRT value and not ACT_RET_STOP. ACT_RET_STOP is reserved to
stop the processing of the current ruleset.
No backport needed because on previous versions, the action return values are
not handled the same way.
When at least one filter is attached to a stream, FLT_END analyzers must be
preserved on the request and response channels.
This patch should be backported as far as 1.7.
Since the HTX mode is the only mode to process HTTP messages, the stream is
created for a single transaction. Keep-alive is handled at the mux level. So
the compression filter can be initialized when the stream is created and
released with the stream. Concretely, the .channel_start_analyze and
.channel_end_analyze callback functions are replaced by .attach and .detach
ones.
With this change, it is no longer necessary to call FLT_START_FE/BE and FLT_END
analysers for the compression filter.
Since the HTX mode is the only mode to process HTTP messages, the stream is
created for a single transaction. Keep-alive is handled at the mux level. So
the cache filter can be initialized when the stream is created and released
with the stream. Concretely, the .channel_start_analyze and
.channel_end_analyze callback functions are replaced by .attach and .detach
ones.
With this change, it is no longer necessary to call FLT_START_FE/BE and FLT_END
analysers for the cache filter.
A typo was introduced by the commit c5bb5a0f2 ("BUG/MINOR: http-rules: Preserve
FLT_END analyzers on reject action").
This patch must be backported with the commit c5bb5a0f2.
When at least one filter is attached to a stream, FLT_END analyzers must be
preserved on the request and response channels.
This patch should be backported as far as 1.8.
When an action interrupts a transaction, returning a response or not, it must
return the ACT_RET_ABRT value and not ACT_RET_DONE. ACT_RET_DONE is reserved to
stop the processing on the current channel but some analysers may still be
active. When ACT_RET_ABRT is returned, all analysers are removed, except FLT_END
if it is set.
No backport needed because on previous versions, the action return value was
not handled the same way.
It is stated in the comment that the return action returns ACT_RET_ABRT on
success. It is the right code to use to abort a transaction. ACT_RET_DONE must
be used when the message processing must be stopped. This does not mean the
transaction is interrupted.
No backport needed.
The wake_time of a lua context is now always set to TICK_ETERNITY when the
context is initialized and every time the execution of the lua stack is
started. It is mandatory not to set an arbitrary wake_time when an action
yields.
No backport needed.
This flag was used in some internal functions to be sure the current stream is
able to handle HTTP content. It was introduced when the legacy HTTP code was
still there. Now, it is possible to rely on the stream's flags to be sure we
have an HTX stream.
So the flag HLUA_TXN_HTTP_RDY can be removed. Everywhere it was tested, it is
replaced by a call to the IS_HTX_STRM() macro.
This patch is mandatory to allow the support of the filters written in lua.
In these functions, the lua txn was not used. So it can be removed from the
function argument list.
This patch is mandatory to allow the support of the filters written in lua.
In this function, the lua txn was only used to retrieve the stream. But it can
be retrieved from the HTTP message, using its channel pointer. So the lua txn
can be removed from the function argument list.
This patch is mandatory to allow the support of the filters written in lua.
In this function, the lua txn was only used to test if the HTTP transaction is
defined. But it is always used in a context where it is true. So, the lua txn
can be removed from the function argument list.
This patch is mandatory to allow the support of the filters written in lua.
The Lua function Channel.is_full() should not take care of the reserve because
it is not called from a producer (an applet for instance). From an action, it is
allowed to overwrite the buffer reserve.
This patch should be backported as far as 1.7. But it must be adapted for 1.8
and lower because there is no HTX on these versions.
When a lua action aborts a transaction by calling the txn:done() function, the
action must return ACT_RET_ABRT instead of ACT_RET_DONE. It is mandatory to
abort the message analysis.
This patch must be backported everywhere the commit 7716cdf45 ("MINOR: lua: Get
the action return code on the stack when an action finishes") was
backported. For now, no backport needed.
When an error occurs on the response side, request analysers must be reset. At
this stage, only the AN_REQ_HTTP_XFER_BODY analyser remains, and possibly
AN_REQ_FLT_END, if at least one filter is attached to the stream. So it is safe
to remove the AN_REQ_HTTP_XFER_BODY analyser. An error was already handled and a
response was already returned to the client (or it was at least scheduled to be
sent). So there is no reason to continue processing the request payload. It may
cause some trouble for the filters because when an error occurs, data from
the request buffer are truncated.
This patch must be backported as far as 1.9, for the HTX part only. I don't know
if the legacy HTTP code is affected.
During the payload filtering, the offset is relative to the head of the HTX
message and not its first index. This index is the position of the first block
to (re)start the HTTP analysis. It must be used during HTTP analysis but not
during the payload forwarding.
So, from the compression filter point of view, when we loop on the HTX blocks to
compress the response payload, we must start from the head of the HTX
message. To ease the loop, we use the function htx_find_offset().
This patch must be backported as far as 2.0. It depends on the commit "MINOR:
htx: Add a function to return a block at a specific offset". So this one must
be backported first.
During the payload filtering, the offset is relative to the head of the HTX
message and not its first index. This index is the position of the first block
to (re)start the HTTP analysis. It must be used during HTTP analysis but not
during the payload forwarding.
So, from the cache point of view, when we loop on the HTX blocks to cache the
response payload, we must start from the head of the HTX message. To ease the
loop, we use the function htx_find_offset().
This patch must be backported as far as 2.0. It depends on the commit "MINOR:
htx: Add a function to return a block at a specific offset". So this one must
be backported first.
If a filter enables data filtering, in TCP or in HTTP, but does not
define the corresponding callback function (http_payload() or
tcp_payload()), it is ignored. If all configured data filters do the same,
we must be sure to forward everything. Otherwise nothing will be forwarded at
all.
This patch must be backported as far as 1.9.
When the tcp or http payload is filtered, it is important to use the filter
offset to deduce the amount of forwarded data because this offset may change
during the call to the callback function. So we should not rely on a local
variable defined before this call.
For now, existing HAProxy filters don't change this offset, so this bug may only
affect external filters.
This patch must be backported as far as 1.9.
The htx_find_offset() function may be used to look for a block at a specific
offset in an HTX message, starting from the message head. A compound result is
returned, an htx_ret structure, with the found block and the position of the
offset in the block. If the offset is outside of the HTX message, the returned
block is NULL.
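Expected usage looks roughly like this (a fragment; htx_ret is assumed to
expose the block pointer and the relative position, as described above):

  /* look up the block containing byte <offset> from the message head */
  struct htx_ret htxret = htx_find_offset(htx, offset);
  struct htx_blk *blk = htxret.blk;

  if (!blk)
      return 0; /* offset is outside the HTX message */

  /* the first block is processed starting at byte <htxret.ret>, the
   * following ones from their first byte */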
This patch fixes PROXYv2 parsing when the payload of the TCP connection is
fused with the PROXYv2 header within a single recv() call.
Previously HAProxy ignored the PROXYv2 header length when attempting to
parse the TLV, possibly interpreting the first byte of the payload as a
TLV type.
This patch adds proper validation (see the sketch after this list). It ensures that:
1. TLV parsing stops when the end of the PROXYv2 header is reached.
2. TLV lengths cannot exceed the PROXYv2 header length.
3. The PROXYv2 header ends together with the last TLV, not allowing for
"stray bytes" to be ignored.
A reg-test was added to ensure proper behavior.
This patch tries to find the sweet spot between a small and easily
backportable fix and a cleaner one that's more easily adaptable to
older versions, hence it merges the "if" and "while" blocks, which
causes a reindent of the whole block. It should be used as-is for
versions 1.9 to 2.1, the block about PP2_TYPE_AUTHORITY should be
dropped for 2.0 and the block about CRC32C should be dropped for 1.8.
This bug was introduced when TLV parsing was added. This happened in commit
b3e54fe387. This commit was first released
with HAProxy 1.6-dev1.
A similar issue was fixed in commit 7209c204bd.
This patch must be backported to HAProxy 1.6+.
James Stroehmann reported something working as documented but that can
be considered as a regression in the way the automatic maxconn is
calculated from the process' limits :
https://www.mail-archive.com/haproxy@formilux.org/msg36523.html
The purpose of the changes in 2.0 was to have maxconn default to the
highest possible value permitted to the user based on the ulimit -n
setting, however the calculation starts from the soft limit, which
can be lower than what users were allowed with previous versions,
where the default value of 2000 would force a higher ulimit -n as
long as it fitted in the hard limit.
Usually this is not noticeable if the user changes the limits, because
quite commonly setting a new value restricts both the soft and hard
values.
Let's instead always use the max between the hard and soft limits, as
we know these values are permitted. This was tried on the following
setup:
$ cat ulimit-n.cfg
global
    stats socket /tmp/sock1 level admin
$ ulimit -n
1024
Before the change the limits would show like this:
$ socat - /tmp/sock1 <<< "show info" | grep -im2 ^Max
Maxsock: 1023
Maxconn: 489
After the change the limits are now much better and more in line with
the default settings in earlier versions:
$ socat - /tmp/sock1 <<< "show info" | grep -im2 ^Max
Maxsock: 4095
Maxconn: 2025
The difference becomes even more obvious when running moderately large
configs with hundreds of checked servers and hundreds of listeners:
$ cat ulimit-n.cfg
global
    stats socket /tmp/sock1 level admin
listen l
    bind :10000-10300
    server-template srv- 300 0.0.0.0 check disabled
          Before   After
Maxsock     1024    4096
Maxconn      189    1725
This issue is tagged as minor since a trivial config change fixes it,
but it would help new users to have it backported as far as 2.0.
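A minimal sketch of the changed computation (hypothetical helper; the real
code feeds this value into the rest of the maxconn calculation):

  #include <sys/resource.h>

  /* use the larger of the soft and hard fd limits as the basis for the
   * automatic maxconn, since raising the soft limit up to the hard one
   * is always permitted */
  static rlim_t max_usable_fds(void)
  {
      struct rlimit limit;

      if (getrlimit(RLIMIT_NOFILE, &limit) != 0)
          return 1024; /* conservative fallback */
      if (limit.rlim_max == RLIM_INFINITY)
          return limit.rlim_cur;
      return limit.rlim_max > limit.rlim_cur ? limit.rlim_max : limit.rlim_cur;
  }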
pattern_finalize_config() uses an inefficient algorithm which is a
problem with very large configuration files. This affects startup, and
therefore reload time. When haproxy is deployed as a router in a
Kubernetes cluster the generated configuration file may be large and
reloads occur frequently, which makes this a significant issue.
The old algorithm is O(n^2):
* allocate missing uids - O(n^2)
* sort linked list - O(n^2)
The new algorithm is O(n log n) (a code sketch follows the list):
* find the user allocated uids - O(n)
* store them for efficient lookup - O(n log n)
* allocate missing uids - n times O(log n)
* sort all uids - O(n log n)
* convert back to linked list - O(n)
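The core of the new approach can be sketched as follows (struct pat_ref is
reduced here to the unique_id field this algorithm relies on):

  #include <stdlib.h>

  struct pat_ref {      /* minimal stand-in for the real structure */
      int unique_id;
      /* ... */
  };

  static int cmp_pat_ref(const void *a, const void *b)
  {
      const struct pat_ref *ra = *(const struct pat_ref * const *)a;
      const struct pat_ref *rb = *(const struct pat_ref * const *)b;

      return (ra->unique_id > rb->unique_id) - (ra->unique_id < rb->unique_id);
  }

  /* refs[0..n-1] is filled by a single O(n) walk over the linked list */
  static void finalize_uids(struct pat_ref **refs, size_t n)
  {
      qsort(refs, n, sizeof(*refs), cmp_pat_ref); /* O(n log n) */
      /* missing uids are then assigned in one pass over the sorted
       * array, and the list is relinked in sorted order, both O(n) */
  }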
Performance examples, startup time in seconds:
pat_refs     old      new
    1000     0.02     0.01
   10000     2.1      0.04
   20000    12.3      0.07
   30000    27.9      0.10
   40000    52.5      0.14
   50000    77.5      0.17
Please backport to 1.8, 2.0 and 2.1.
This command is used to produce an arbitrary amount of data on the
output. It can be used to test the CLI's state machine as well as
the internal parts related to applets and I/O. A typical test consists
in asking for all sizes from 0 to 16384:
$ (echo "prompt;expert-mode on";for i in {0..16384}; do
echo "debug dev write $i"; done) | socat - /tmp/sock1 | wc -c
134258738
A better test would consist in first waiting for the response before
sending a new request.
This command is not restricted to the admin since it's harmless.
There's a build error reported here:
c9c6cdbf9c/checks
It's just caused by an unconditional assignment of tmp_filter to
*sni_filter without it having been initialized, though it's harmless because
this return pointer is not used when fcount is NULL, which is the only
case where this happens.
No backport is needed as this was brought today by commit 38df1c8006
("MINOR: ssl/cli: support crt-list filters").
It was only possible to go down from the ckch_store to the sni_ctx but
not to go up from the sni_ctx to the ckch_store.
To allow that, 2 pointers were added:
- a ckch_inst pointer in the struct sni_ctx
- a ckch_store pointer in the struct ckch_inst
Generate a list of the previous filters when updating a certificate
which uses filters in a crt-list. Then pass this list to the function
generating the sni_ctx during the commit.
This feature allows the update of crt-list certificates which use
filters with "set ssl cert".
This function could probably be replaced by creating a new
ckch_inst_new_load_store() function which takes the previous sni_ctx list as
an argument instead of the char **sni_filter, avoiding the
allocation/copy during runtime for each filter. But since we are still
handling the multi-cert bundles, it's better this way to avoid code
duplication.
When building with DEBUG_STRICT, there are still some BUG_ON(events&event_type)
in the subscribe() code which are not welcome anymore since we explicitly
permit waking the caller up on readiness. This causes some regtests to fail
since 2c1f37d353 ("OPTIM: mux-h1: subscribe rather than waking up at a few
other places") when built with this option.
No backport is needed, this is 2.2-dev.
This is another round of conversion from a blind tasklet_wakeup() to
a more careful subscribe(). It has significantly improved the number
of function calls per HTTP request (/?s=1k/t=20) :
before after
tasklet_wakeup: 3 2
conn_subscribe: 3 2
h1_iocb: 3 2
h1_process: 3 2
h1_parse_msg_hdrs: 4 3
h1_rcv_buf: 5 3
h1_send: 5 4
h1_subscribe: 2 1
h1_wake_stream_for_send: 5 4
http_wait_for_request: 2 1
process_stream: 3 2
si_cs_io_cb: 4 2
si_cs_process: 3 1
si_cs_rcv: 5 3
si_sync_send: 2 1
si_update_both: 2 1
stream_int_chk_rcv_conn: 3 2
stream_int_notify: 3 1
stream_release_buffers: 9 4
In order to save a lot on syscalls, we currently don't disable receiving
on a file descriptor anymore if its handler was already woken up. But if
the run queue is huge and the poller collects a lot of events, this causes
excess wakeups which take CPU time that is not used to flush these tasklets.
This patch simply considers the run queue size to decide whether or not to
stop receiving. Tests show that by stopping receiving when the run queue
reaches ~16 times its configured size, we can still hold maximal performance
in extreme situations like maxpollevents=20k for runqueue_depth=2, and still
totally avoid calling epoll_ctl() under moderate load using default settings
on keep-alive connections.
We used to call ->wake() for any I/O event for which there was no
subscriber. But this is a problem because this causes massive wake()
storms since we disabled fd_stop_recv() to save syscalls.
The only reason for the io_available condition is to detect that an
asynchronous connect() just finished and will not be handled by any
registered event handler. Since we now properly handle synchronous
connects, we can detect this situation by the fact that we had a
success on conn_fd_check() and no requested I/O took over.
In the rare case of an immediate connect() (unix sockets, socket pairs, and
occasionally TCP over the loopback), it is counter-productive to subscribe
for sending and then get immediately back to process_stream() after
having passed through si_cs_process() just to update the connection. We
already know it is established and it doesn't have any handshake anymore
so we just have to complete it and return to process_stream() with the
stream_interface in the SI_ST_RDY state. In this case, process_stream will
simply loop back to the beginning to synchronize the state and turn it to
SI_ST_EST/ASS/CLO/TAR etc.
This will save us from having to needlessly subscribe in the connect()
code, something which in addition cannot work with edge-triggered pollers.
With commit 065a025610 ("MEDIUM: connection: don't stop receiving events
in the FD handler") we disabled a number of fd_stop_* in conn_fd_handler(),
in order to wait for their respective handlers to deal with them. But it
is not correct to do that for the send direction, as we may very well
have nothing to send. This is visible when connecting in TCP mode to
a server with no data to send, there's nobody anymore to disable the
polling for the send direction.
And it is logical, on the recv() path we know the system has data to
deliver and that some code will be in charge of it. On the send
direction we simply don't know if it was the result of a successful
connect() or if there is still something to send. In any case we
almost never fill the network buffer on a single send() after being
woken up by the system, so disabling the FD immediately or much later
will not change the number of operations.
No backport is needed, this is 2.2-dev.
The approach was wrong. USE_DL is for the makefile to know if it's required
to append -ldl at link time. Some platforms do not need it (and in fact do
not have it) yet they have a working dladdr(). The real condition is related
to ELF. Given that due to Lua, all platforms that require -ldl already have
USE_DL set, let's replace USE_DL with __ELF__ here and consider that dladdr
is always needed on ELF, which basically is already the case.
resolve_sym_name() doesn't build when USE_DL is set on non-GNU platforms
because "ElfW()" isn't defined. Since it's only used for dladdr1(), let's
refactor all this so that we can completely ifdef out that part on other
platforms. Now we have a separate function to perform the call depending
on the platform, and it only returns the size when available.
Instead of special-casing the use of the symbol resolving to decide
whether to dump a partial or complete trace, let's simply start over
and dump everything when we reach the end after having found nothing.
It will be more robust against dirty traces as well.
It happens that on aarch64 backtrace() only returns one entry (tested
with gcc 4.7.4, 5.5.0 and 7.4.1). It probably refrains from unwinding
the stack due to the risk of hitting a bad pointer. Here we can use
may_access() to know when it's safe, so we can actually unwind the stack
without taking risks. It happens that the faulting function (the one
just after the signal handler) is not listed here, very likely because
the signal handler uses a special stack and did not create a new frame.
So this patch creates a new my_backtrace() function in standard.h that
either calls backtrace() or does its own unrolling. The choice depends
on HA_HAVE_WORKING_BACKTRACE which is set in compat.h based on the build
target.
It's useful, as an indication of unresolved entries or memory
corruption, to have the apparent depth of the stack trace in the
output, especially if we dump nothing.
At least FreeBSD has a fully functional CLOCK_THREAD_CPUTIME but it
cannot create a timer on it. This is not a problem since our timer is
only used to measure each thread's usage via now_cpu_time_thread().
So by just replacing this clock with CLOCK_REALTIME we allow such
platforms to periodically call the wdt and check the thread's CPU usage.
The consequence is that even on a totally idle system there will still
be a few extra periodic wakeups, but the watchdog becomes usable there
as well.
On operating systems that do not support creating a timer on
POSIX_THREAD_CPUTIME, we emit a warning but return an error, so the
process fails to start, which is absurd. Let's return success once
the warning is emitted instead.
This may be backported to 2.1 and 2.0.
It's only available on the bind line. "ca-verify-file" allows CA
certificates to be separated from "ca-file". The CA names sent in the
server hello message are only computed from "ca-file". Typically, "ca-file"
must be defined with the intermediate certificates and "ca-verify-file"
with the certificates ending the chain, like root CAs.
This fixes issue #404.
Calling backtrace() will access libgcc at runtime. We don't want to do
it after the chroot, so let's perform a first call to have it ready in
memory for later use.
When a panic() occurs due to a stuck thread, we'll try to dump a
backtrace of this thread if the config directive USE_BACKTRACE is
set (which is the case on linux+glibc). For this we use the
backtrace() call provided by glibc and iterate the pointers through
resolve_sym_name(). In order to minimize the output (which is limited
to one buffer), we only do this for stuck threads, and we start the
dump above ha_panic()/ha_thread_dump_all_to_trash(), and stop when
meeting known points such as main/run_tasks_from_list/run_poll_loop.
If enabled without USE_DL, the dump will be complete with no details
except that pointers will all be given relative to main, which is
still better than nothing.
The new USE_BACKTRACE config option is enabled by default on glibc since
it has been present for ages. When it is set, the export-dynamic linker
option is enabled so that all non-static symbols are properly resolved.
Now in "show threads", the task/tasklet handler will be resolved
using this function, which will provide more detailed results and
will still support offsets to main for unresolved symbols.
We use various hacks at a few places to try to identify known function
pointers in debugging outputs (show threads & show fd). Let's centralize
this into a new function dedicated to this. It already knows about the
functions matched by "show threads" and "show fd", and when built with
USE_DL, it can rely on dladdr1() to resolve other functions. There are
some limitations, as static functions are not resolved, linking with
-rdynamic is mandatory, and even then some functions will not necessarily
appear. It's possible to do a better job by rebuilding the whole symbol
table from the ELF headers in memory but it's less portable and the gains
are still limited, so this solution remains a reasonable tradeoff.
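For illustration, the portable core of such a resolver boils down to a
dladdr() lookup with a fallback on the offset from main(); dladdr() is shown
here instead of glibc's dladdr1(), which additionally returns the symbol
size:

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdio.h>

  int main(int argc, char **argv);

  /* print <addr> as "symbol+0xoff" when the dynamic symbol table knows
   * it (requires linking with -rdynamic), else relative to main() */
  static void print_sym(const void *addr)
  {
      Dl_info info;

      if (dladdr(addr, &info) && info.dli_sname && addr >= info.dli_saddr)
          printf("%s+%#lx", info.dli_sname,
                 (unsigned long)((const char *)addr - (const char *)info.dli_saddr));
      else
          printf("main%+ld", (long)((const char *)addr - (const char *)main));
  }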
This function dumps <n> bytes from <addr> in hex form into buffer <buf>
enclosed in brackets after the address itself, formatted on 14 chars
including the "0x" prefix. This is meant to be used as a prefix for code
areas. For example: "0x7f10b6557690 [48 c7 c0 0f 00 00 00 0f]: "
It relies on may_access() to know if the bytes are dumpable, otherwise "--"
is emitted. An optional prefix is supported.
Since commit 4c2ae48375 ("MINOR: trace: implement a very basic trace()
function") merged in 2.1, trace() is an inline function. It must not
appear in standard.c anymore and may break build depending on includes.
This can be backported to 2.1.
It happens that just sending the debug signal to the process makes one
thread wait for its turn while nobody wants to dump. We need to at
least verify that a dump was really requested for this thread.
This can be backported to 2.1 and 2.0.
Often in crash dumps we see unknown function pointers. Let's display
them relative to main, which helps quite a lot to figure out the
function from an executable, for example:
(gdb) x/a main+645360
0x4c56a0 <h1_timeout_task>: 0x2e6666666666feeb
This could be backported to 2.0.
In commit 4eee38a ("BUILD/MINOR: tools: fix build warning in the date
conversion functions") we added some return checks to shut build
warnings but the last test is useless since the tested pointer is not
updated by the last call to utoa_pad() used to convert the milliseconds.
It turns out the original code from 2012 already skipped this part,
probably in order to avoid the risk of seeing a millisecond field not
belonging to the 0-999 range. Better keep the check and put the code
into stricter shape.
No backport is needed. This fixes issue #526.
Commit 80b53ffb1c ("MEDIUM: arg: make make_arg_list() stop after its
own arguments") changed the way we detect the empty list because we
cannot stop by looking up the closing parenthesis anymore, thus for
the first missing arg we have to enter the parsing loop again. And
there, finding an empty arg means we go to the empty_err label, where
it was not initially planned to handle this condition. This results
in %[date()] to fail while %[date] works. Let's simply check if we've
reached the minimally supported args there (it used to be done during
the function entry).
Thanks to Jérôme for reporting this issue. No backport is needed,
this is 2.2-dev2+ only.
Since commit "MEDIUM: connection: make the subscribe() call able to wakeup
if ready" we have the guarantee that the tasklet will be woken up if
subscribing to a connection for an even that's ready. Since we have too
many tasklet_wakeup() calls in mux-h1, let's now use this property to
improve the situation a bit.
With this change, no syscall count changed, however the number of useless
calls to some functions significantly went down. Here are the differences
for the test below (100k req), in number of calls per request :
$ ./h1load -n 100000 -t 4 -c 1000 -T 20 -F 127.0.0.1:8001/?s=1k/t=20
before after change note
tasklet_wakeup: 3 1 -66%
h1_io_cb: 4 3 -25%
h1_send: 6.7 5.4 -19%
h1_wake: 0.73 0.44 -39%
h1_process: 4.7 3.4 -27%
h1_wake_stream_for_send: 6.7 5.5 -18%
si_cs_process 3.7 3.4 -7.8%
conn_fd_handler 2.7 2.4 -10%
raw_sock_to_buf: 4 2 -50%
pool_free: 4 2 -50% from failed rx calls
Note that the situation could be further improved by having muxes lazily
subscribe to Rx events in case the FD is already being polled. However
this requires deeper changes to implement a LAZY_RECV subscribe mode,
and to replace the FD's active bit by 3 states representing the desired
action to perform on the FD during the update, among NONE (no need to
change), POLL (can't proceed without), and STOP (buffer full). This
would only impact Rx since on Tx we know what we have to send. The
savings to expect from this might be more visible with splicing and/or
when dealing with many connections having long think times.
The remaining epoll_ctl() calls are exclusively caused by the disagreement
between conn_fd_handler() and the mux receiving the data: the fd handler
wants to stop after having woken up the tasklet, then the mux after
receiving data wants to receive again. Given that they don't happen in
the same poll loop when there are many FDs, this causes a lot of state
changes.
As suggested by Olivier, if the task is already scheduled for running,
we don't need to disable the event because it's in the run queue, poll()
cannot stop, and reporting it again will be harmless. What *might*
happen however is that a sampling-based poller like epoll() would report
the same event many times and have trouble getting to the others behind.
But if that happened, it would still indicate the run queue has plenty of
pending operations, so it would in fact only displace the problem from
the poller to the run queue, which doesn't seem to be worse (and in
fact we do support priorities while the poller does not).
By doing this change, the keep-alive test with 1k conns and 100k reqs
completely gets rid of the per-request epoll_ctl changes, while still
not causing extra recvfrom() :
$ ./h1load -n 100000 -t 4 -c 1000 -T 20 -F 127.0.0.1:8001/?s=1k/t=20
200000 sendto 1
200000 recvfrom 1
10762 epoll_wait 1
3664 epoll_ctl 1
1999 recvfrom -1
In close mode, it didn't change anything, we're still in the optimal
case (2 epoll per connection) :
$ ./h1load -n 100000 -r 1 -t 4 -c 1000 -T 20 -F 127.0.0.1:8001/?s=1k/t=20
203764 epoll_ctl 1
200000 sendto 1
200000 recvfrom 1
6091 epoll_wait 1
2994 recvfrom -1
There's currently an internal API limitation at the connection layer
regarding conn_subscribe(). We must not subscribe if we haven't yet
met EAGAIN or a similar condition, so we sometimes force ourselves to read
in order to meet this condition and be allowed to call subscribe.
But reading cannot always be done (e.g. at the end of a loop where we
cannot afford to retrieve new data and start again) so we instead
perform a tasklet_wakeup() of the requester's io_cb. This is what is
done in mux_h1 for example. The problem with this is that it forces
a new receive when we're not necessarily certain we need one. And if
the FD is not ready and was already being polled, it's a useless
wakeup.
The current patch improves the connection-level subscribe() so that
it only manipulates the polling if the FD is marked not-ready, and
instead schedules the argument tasklet for a wakeup if the FD is
ready. This guarantees we'll wake this tasklet up in any case once the
FD is ready, either immediately or after polling.
By doing so, a test on pure close mode shows we cut in half the number
of epoll_ctl() calls and almost eliminate failed recvfrom():
$ ./h1load -n 100000 -r 1 -t 4 -c 1000 -T 20 -F 127.0.0.1:8001/?s=1k/t=20
before:
399464 epoll_ctl 1
200007 recvfrom 1
200000 sendto 1
100000 recvfrom -1
7508 epoll_wait 1
after:
205739 epoll_ctl 1
200000 sendto 1
200000 recvfrom 1
6084 epoll_wait 1
2651 recvfrom -1
On keep-alive there is no change however.
This partially reverts commit 1113116b4a ("MEDIUM: raw-sock: remove
obsolete calls to fd_{cant,cond,done}_{send,recv}") so that we can mark
the FD not ready as required since commit 19bc201c9f ("MEDIUM: connection:
remove the intermediary polling state from the connection"). Indeed, with
the removal of the latter we don't have any other reliable indication that
the FD is blocked, which explains why there are so many EAGAIN in traces.
It's worth noting that a short read or a short write are also reliable
indicators of exhausted buffers and are even documented as such in the
epoll man page in case of edge-triggered mode. That's why we also report
the FD as blocked in such a case.
With this change we completely got rid of EAGAIN in keep-alive tests, but
they were expectedly transferred to epoll_ctl:
$ ./h1load -n 100000 -t 4 -c 1000 -T 20 -F 127.0.0.1:8001/?s=1k/t=20
before:
266331 epoll_ctl 1
200000 sendto 1
200000 recvfrom 1
135757 recvfrom -1
8626 epoll_wait 1
after:
394865 epoll_ctl 1
200000 sendto 1
200000 recvfrom 1
10748 epoll_wait 1
1999 recvfrom -1
When a header is added or modified, in http_add_header() or
http_replace_header(), a comparison is performed on its name to know if it is
the Host header and if the authority part of the uri must be updated or
not. This comparison must be case-insensitive.
This patch should fix the issue #522. It must be backported to 2.1.
As per issue #435 a hostname with a trailing dot confuses our DNS code,
as for a zero length DNS label we emit a null-byte. This change makes us
ignore the zero length label instead.
Must be backported to 1.8.
Even when there isn't a chain, it must be initialized to an empty X509
stack with sk_X509_new_null().
This patch fixes a segfault which appears with older versions of the SSL
libs (openssl 0.9.8, libressl 2.8.3...) because X509_chain_up_ref() does
not check the pointer.
This bug was introduced by b90d2cb ("MINOR: ssl: resolve issuers chain
later").
Should fix issue #516.
Previously, when the `unique-id-format` contained non-deterministic parts,
such as the `uuid` fetch, each use of the `unique-id` fetch would generate
a new unique ID, replacing the old one. The following configuration shows
the error:
global
    log stdout format short daemon
listen test
    log global
    log-format "%ID"
    unique-id-format %{+X}o\ TEST-%[uuid]
    mode http
    bind *:8080
    http-response set-header A %[unique-id]
    http-response set-header B %[unique-id]
    server example example.com:80
Without the patch the contents of the `A` and `B` response header would
differ.
This bug was introduced in commit f4011ddcf5,
which was first released with HAProxy 1.7-dev3.
This fix should be backported to HAProxy 1.7+.
valgrind complains that epoll_ctl() uses an epoll_event in which we
have only set the part we use from the data field (i.e. the fd). Tests
show that pre-initializing the struct in the stack doesn't have a
measurable impact so let's do it.
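The fix amounts to fully initializing the event on the stack, e.g.:

  #include <sys/epoll.h>

  static int add_fd(int epfd, int fd)
  {
      /* designated initializers zero the rest of the struct, so no
       * unused union bytes are passed to the kernel uninitialized */
      struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

      return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
  }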
In issue #471 it was reported that valgrind sometimes complains about
timer_create() being called with uninitialized bytes. These are in fact
the bits from sigev_value.sival_ptr that are not part of sival_int that
are tagged as such, as valgrind has no way to know we're using the int
instead of the ptr in the union. It's cheap to initialize the field so
let's do it.
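A sketch of the corresponding initialization (the notification setup shown
here is illustrative, not copied from the watchdog code):

  #include <signal.h>
  #include <string.h>
  #include <time.h>

  static int create_wdt_timer(timer_t *timer, int thr)
  {
      struct sigevent sev;

      memset(&sev, 0, sizeof(sev));     /* defines all of sigev_value */
      sev.sigev_notify = SIGEV_SIGNAL;
      sev.sigev_signo = SIGALRM;
      sev.sigev_value.sival_int = thr;  /* only the int part is used */
      return timer_create(CLOCK_THREAD_CPUTIME_ID, &sev, timer);
  }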
Since commit 92919f7fd5 ("MEDIUM: h2: make the request parser rebuild
a complete URI") we make sure to rebuild a complete URI. Unfortunately
the test for an empty :path pseudo-header that is mandated by #8.1.2.3
happened to be performed on the URI before this patch; the URI is never
empty anymore after being rebuilt, causing h2spec to complain:
8. HTTP Message Exchanges
8.1. HTTP Request/Response Exchange
8.1.2. HTTP Header Fields
8.1.2.3. Request Pseudo-Header Fields
- 1: Sends a HEADERS frame with empty ":path" pseudo-header field
-> The endpoint MUST respond with a stream error of type PROTOCOL_ERROR.
Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
RST_STREAM Frame (Error Code: PROTOCOL_ERROR)
Connection closed
Actual: DATA Frame (length:0, flags:0x01, stream_id:1)
It's worth noting that this error doesn't trigger when calling h2spec
with a timeout as some scripts do, which explains why it wasn't detected
after the patch above.
This fixes one half of issue #471 and should be backported to 2.1.
The goal is to use the ckch to store data from PEM files or <payload> and
only for that. This patch addresses the ckch->ocsp_issuer case. The issuers
chain is looked up in ssl_sock_put_ckch_into_ctx() if no chain is present in
the ckch; filling the ocsp_issuer from the chain must be done after that.
It changes the way '.issuer' is managed: it tries to load '.issuer' into
ckch->ocsp_issuer first and then looks for the issuer in the chain later
(in ssl_sock_load_ocsp()). "ssl-load-extra-files" without the "issuer"
parameter can disable the extra '.issuer' file check.
The goal is to use the ckch to store data from a loaded PEM file or a
<payload> and only for that. This patch addresses the ckch->chain case.
Looking for the issuers chain, if no chain is present in the ckch, can
be done in ssl_sock_put_ckch_into_ctx(). This way it is possible to know
the origin of the certificate chain without an extra state.
This lock was only needed to protect the buffer_wq list, but now we have
the mt_list for this. This patch simply turns the buffer_wq list to an
mt_list and gets rid of the lock.
It's worth noting that the whole buffer_wait thing still looks totally
wrong especially in a threaded context: the wakeup_cb() callback is
called synchronously from any thread and may end up calling some
connection code that was not expected to run on a given thread. The
whole thing should probably be reworked to use tasklets instead and be
a bit more centralized.
Move the cert_issuer_tree outside the global_ssl structure since it's
not a configuration variable. And move the declaration of the
issuer_chain structure to types/ssl_sock.h.
Reorder the 'show ssl cert' output so it's easier to see if the whole
chain is correct.
For a chain to be correct, an "Issuer" line must have the same
content as the next "Subject" line.
Example:
Subject: /C=FR/ST=Paris/O=HAProxy Test Certificate/CN=test.haproxy.local
Issuer: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 2/CN=ca2.haproxy.local
Chain Subject: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 2/CN=ca2.haproxy.local
Chain Issuer: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 1/CN=ca1.haproxy.local
Chain Subject: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 1/CN=ca1.haproxy.local
Chain Issuer: /C=FR/ST=Paris/O=HAProxy Test Root CA/CN=root.haproxy.local
For each certificate in the chain, display the issuer, so it's easy to
know if the chain is right.
Also rename "Chain" to "Chain Subject".
Example:
Chain Subject: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 2/CN=ca2.haproxy.local
Chain Issuer: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 1/CN=ca1.haproxy.local
Chain Subject: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 1/CN=ca1.haproxy.local
Chain Issuer: /C=FR/ST=Paris/O=HAProxy Test Root CA/CN=root.haproxy.local
Display the subject of each certificate contained in the chain in the
output of "show ssl cert <filename>".
Each subject is on its own line prefixed by "Chain: "
Example:
Chain: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 2/CN=ca2.haproxy.local
Chain: /C=FR/ST=Paris/O=HAProxy Test Intermediate CA 1/CN=ca1.haproxy.local
This directive has never made any sense and has already caused trouble
by forcing the process to stay in foreground during the boot process.
Let's emit a warning mentioning it's deprecated and will be removed in
2.3.
As reported in issue #511, when sending an outgoing local connection
(e.g. health check) we must set the "local" tag and not a "proxy" tag.
The issue comes from the historic support of v1, which required stealing
the address of the outgoing connection for such connections, creating
confusion in the v2 code, which believes it sees the incoming connection.
In order not to risk to break existing setups which might rely on seeing
the LB's address in the connection's source field, let's just change the
connection type from proxy to local and keep the addresses. The protocol
spec states that for local, the addresses must be ignored anyway.
This problem has always existed, this can be backported as far as 1.5,
though it's probably not a good idea to change such setups, thus maybe
2.0 would be more reasonable.
There were 8 strict aliasing warnings there due to the dereferences
casting to uint32_t of input and output. We can achieve the same using
two write_u64() and four read_u64() which do not cause this issue and
even let the compiler use 64-bit operations.
Enabling strict aliasing fails on the cache's hash which is a series of
20 bytes cast as u32. And in practice it could even fail on some archs
if the http_txn didn't guarantee the hash was properly aligned. Let's
use read_u32() to read the value and write_u32() to set it, this makes
sure the compiler emits the correct code to access these and knows about
the intentional aliasing.
Enabling strict aliasing fails in fd.c when using the double-word CAS,
let's get rid of the (void**)(void*)&cur_list junk and use a union
instead. This way the compiler knows they do alias.
Sample fetch functions ssl_x_sha1(), ssl_fc_npn(), ssl_fc_alpn(),
ssl_fc_session_id(), as well as the CLI's "show cert details" handler
used to dereference the output buffer's <data> field by casting it to
"unsigned *". But while doing this could work before 1.9, it broke
starting with commit 843b7cbe9d ("MEDIUM: chunks: make the chunk struct's
fields match the buffer struct") which merged chunks and buffers, causing
the <data> field to become a size_t. The impact is only on 64-bit platforms
and depends on the endianness: on little endian, there should never be any
non-zero bits in the field as it is supposed to have been zeroed before the
call, so it should be harmless; on big endian, only the high part of the
value is written instead of the lower one, often making the result appear
4 billion times larger, and making such values dropped everywhere due to
being larger than a buffer.
It seems that it would be wise to try to re-enable strict-aliasing to
catch such errors.
This must be backported till 1.9.
About every time there's a pointer cast in the code, there's a hidden
bug, and this one was no exception, as it passes the first octet of the
native representation of an integer as a single-character string, which
obviously only works on little endian machines. On big-endian machines,
something as simple as "str(foo),json" only returns zeroes.
This bug was introduced with the JSON converter in 1.6-dev1 by commit
317e1c4f1e ("MINOR: sample: add "json" converter"), the fix may be
backported to all stable branches.
The isalnum(), isalpha(), isdigit() etc functions from ctype.h are
supposed to take an int in argument which must either reflect an
unsigned char or EOF. In practice on some platforms they're implemented
as macros referencing an array, and when passed a char, they either cause
a warning "array subscript has type 'char'" when lucky, or cause random
segfaults when unlucky. It's quite inconvenient by the way since none of
them may return true for negative values. The recent introduction of
cygwin to the list of regularly tested build platforms revealed a lot
of breakage there due to the same issues again.
So this patch addresses the problem all over the code at once. It adds
unsigned char casts to every valid use case, and also drops the unneeded
double cast to int that was sometimes added on top of it.
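The pattern applied throughout is simply:

  #include <ctype.h>

  static const char *skip_spaces(const char *p)
  {
      /* wrong: isspace(*p) is undefined when *p is a negative char and
       * may index out of bounds when the macro is an array lookup */

      /* right: make the char unsigned first, as ctype.h requires */
      while (isspace((unsigned char)*p))
          p++;
      return p;
  }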
It may be backported by dropping irrelevant changes if that helps better
support uncommon platforms. It's unlikely to fix bugs on platforms which
would already not emit any warning though.
A build failure on cygwin was reported on github actions here:
https://github.com/haproxy/haproxy/runs/466507874
It's caused by a signed char being passed to isspace(), and this one
being implemented as a macro instead of a function as the man page
suggests. It's the same issue that regularly pops up on Solaris. This
comes from commit 98263291cc which was merged in 1.8-dev1. A backport
is possible though not incredibly useful.
`curr_idle_thr` is of type `unsigned int`, not `int`. Fix this issue by
taking the size of the dereferenced `curr_idle_thr` array.
This issue was introduced when adding the `curr_idle_thr` struct member
in commit f131481a0a. This commit is first
tagged in 2.0-dev1 and marked for backport to 1.9.
This used to be a minor optimization on ix86 where registers are scarce
and the calling convention not very efficient, but this platform is not
relevant enough anymore to warrant all this dirt in the code for the sake
of saving 1 or 2% of performance. Modern platforms don't use this at all
since their calling convention already defaults to using several registers,
so better to get rid of this once and for all.
Don't try to load a .key in a directory without loading its associated
certificate file.
This patch ignores the .key files when iterating over the files in a
directory.
Introduced by 4c5adbf ("MINOR: ssl: load the key from a dedicated
file").
For a certificate on a bind line, if the private key was not found in
the PEM file, look for a .key and load it.
This default behavior can be changed by using the ssl-load-extra-files
directive in the global section
This feature was mentioned in issue #221.
Now that we have flags indicating the CPU's capabilities, better use
them instead of missing some updates for new CPU families (ARMv8 was
missing there).
The blocksize and the extra field are not necessarily aligned on a
machine word. This can result in crashing an align-sensitive machine
when initializing the shctx area. Let's round both sizes up to a pointer
size to make this safe everywhere.
This fixes issue #512. This should be backported as far as 1.8.
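The rounding itself is a one-liner per size, e.g. (hypothetical macro and
variable names):

  /* round <sz> up to a multiple of the pointer size so that objects
   * carved out of the shctx area stay aligned everywhere */
  #define ALIGN_PTR_SIZE(sz) \
      (((sz) + sizeof(void *) - 1) & ~(sizeof(void *) - 1))

  blocksize = ALIGN_PTR_SIZE(blocksize);
  extra     = ALIGN_PTR_SIZE(extra);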
In sample_fetch_path we can use iststop() instead of a for loop
to find the '?' and return the correct length. This requires commit
"MINOR: ist: add an iststop() function".
In http_action_replace_uri() we call http_get_path() in the case of
a replace-path rule. http_get_path() will return an ist pointing to
the start of the path, but uri.ptr + uri.len points to the end of the
uri. As a result, we are matching against a string containing the
query, which we append to the "path" later, effectively duplicating
the query string.
This patch uses the iststop() function introduced in "MINOR: ist: add
an iststop() function" to find the '?' character and update the ist
length when needed.
This fixes issue #510.
The bug was introduced by commit 262c3f1a ("MINOR: http: add a new
"replace-path" action"), which was backported to 2.1 and 2.0.
When we're in H1_MSG_RQBEFORE or H1_MSG_RPBEFORE, we know that the
first message is highly likely the only one and that it's pointless
to try to perform a second recvfrom() to complete a first partial
read. This is similar to what used to be done in the older I/O methods
with the CF_READ_DONTWAIT flag on the channel. So let's pass
CO_RFL_READ_ONCE to the transport layer during rcv_buf() in this case.
By doing so, in a test involving keep-alive connections with a non-null
client think time, we remove 20% of the recvfrom() calls, all of which
used to systematically fail. More precisely, we observe a drop from 5.0
recvfrom() per request with 60% failure to 4.0 per request with 50%
failure.
This flag is currently supported by raw_sock to perform a single recv()
attempt and avoid subscribing. Typically on the request and response
paths with keep-alive, with short messages we know that it's very likely
that the first message is enough.
This marks the end of the transition from the connection polling states
introduced in 1.5-dev12 to the subscriptions that arrived in 1.9.
The socket layer can now safely use its FD while all upper layers rely
exclusively on subscriptions. These old functions were removed. Some may
deserve some renaming to improve clarity though. The single call to
conn_xprt_stop_both() was dropped in favor of conn_cond_update_polling()
which already does the same.
The last few calls to conn_xprt_{want,stop}_{recv,send} in the central
connection code were replaced with their strictly exact equivalent fd_*,
adding the call to conn_ctrl_ready() when it was missing.
Just like previous commit, we don't need to pass through the connection
layer anymore to enable polling during a connect(), we know the FD, so
let's simply call fd_want_send().
Now that we know that the connection layer is transparent for polling
changes, we have no reason for hiding behind conn_xprt_stop_send() and
can safely call fd_stop_send() on the FD once the buffer is empty.
Historically we used to require that the connections held the desired
polling states for the data layer and the socket layer. Then with muxes
these were more or less merged into the transport layer, and now it
happens that with all transport layers having their own state, the
"transport layer state" as we have it in the connection (XPRT_RD_ENA,
XPRT_WR_ENA) is only an exact copy of the undelying file descriptor
state, but with a delay. All of this is causing some difficulties at
many places in the code because there are still some locations which
use the conn_want_* API to remain clean and only rely on connection,
and count on a later collection call to conn_cond_update_polling(),
while others need an immediate action and directly use the FD updates.
Since our updates are now much cheaper, most of them being only an
atomic test-and-set operation, and since our I/O callbacks are deferred,
there's no benefit anymore in trying to "cache" the transient state
change in the connection flags hoping to cancel them before they
become an FD event. Better make such calls transparent indirections
to the FD layer instead and get rid of the deferred operations which
needlessly complicate the logic inside.
This removes flags CO_FL_XPRT_{RD,WR}_ENA and CO_FL_WILL_UPDATE.
A number of functions related to polling updates were either greatly
simplified or removed.
Two places were using CO_FL_XPRT_WR_ENA as a hint to know if more data
were expected to be sent after a PROXY protocol or SOCKSv4 header. These
ones were simply replaced with a check on the subscription, which is
where we ought to get the authoritative information from.
Now the __conn_xprt_want_* and their conn_xprt_want_* counterparts
are the same. conn_stop_polling() and conn_xprt_stop_both() are the
same as well. conn_cond_update_polling() only causes errors to stop
polling. It also becomes way more obvious that muxes should not at
all employ conn_xprt_{want|stop}_{recv,send}(), and that the call
to __conn_xprt_stop_recv() in case a mux failed to allocate a buffer
is inappropriate, it ought to unsubscribe from reads instead. All of
this definitely requires a serious cleanup.
Historically we used to have a global epoll_event for various
manipulations involving epoll_ctl() and when threads were added,
this was turned to a thread_local, which is needlessly expensive
since it's just a temporary variable. Let's move it to a local
variable wherever it's called instead.
While trying to address issue #253, Commit 5909380c ("BUG/MINOR: checks:
stop polling for write when we have nothing left to send") made sure that
we stop polling for writes when the buffer is empty. This was actually
more a workaround than a bug fix because by doing so we may be stopping
polling for an intermediary transport layer without acting on the check
itself in case there's SSL or send-proxy in the chain for example, thus
the approach is wrong. In practice due to the small size of check
requests, this will not have any impact. At best, we ought to unsubscribe
for sending, but that's already the case when we arrive in this function.
But given that the root cause of the issue was addressed later in commits
cc705a6b, c5940392 and ccf3f6d1, we can now safely revert this change.
It was confirmed on the faulty config that this change doesn't have any
effect anymore on the test.
In H1/H2/FCGI, the *_get_buf() functions try to disable receipt of data
when there's no buffer available. But they do so at the lowest possible
level, which is unrelated to the upper transport layers which may still
be trying to feed data based on subscription. The correct approach here
would theoretically be to only disable subscription, though when we get
there, the subscription will already have been dropped, so we can safely
just remove that call.
It's unlikely that this could have had any practical impact, as the upper
xprt layer would call this callback, which would fail and not resubscribe.
Having the lowest layer disabled would just be temporary since when
re-enabling reading, a subscribe at the end of data would re-enable it.
A backport should not harm but seems useless at this point.
If a client error is reported on the request channel (CF_READ_ERROR) while a
session is tarpitted, no error is returned to the client. Concretely, the
http_reply_and_close() function is not called. This function is responsible
for forwarding the error to the client, but not only that: it is also
responsible for aborting the request. Because this function is not called
when a read error is reported on the request channel, and because the tarpit
analyzer is the last one, there is nothing preventing a connection attempt
on a server while it is totally unexpected.
So a useless connection to a backend server may be performed because of this
bug. If an HTTP load-balancing algorithm is used on the backend side, it leads
to a crash of HAProxy because the request was already erased.
If you have tarpit rules and use an HTTP load-balancing algorithm on your
backends, you must apply this patch. Otherwise a simple TCP reset on a tarpitted
connection will most likely crash your HAProxy. A safe workaround is to use a
silent-drop rule or a deny rule instead of a tarpit.
This bug also affects the legacy code. It is in fact a very old hidden bug,
but the refactoring of process_stream() in 1.9 made it visible. And,
unfortunately, with the HTX, it is easier to hit because much processing has
been moved to lower layers, into the muxes.
It must be backported as far as 1.9. For 2.0 and 1.9, the legacy HTTP
code must also be patched the same way. For older versions, it may be
backported but the bug seems not to impact them.
Thanks to Olivier D <webmaster@ajeux.com> for reporting the bug and providing
all the information needed to analyze it.
gcc complains rightfully:
src/ssl_sock.c: In function ‘ssl_load_global_issuers_from_path’:
src/ssl_sock.c:9860:4: warning: format not a string literal and no format arguments [-Wformat-security]
ha_warning(warn);
^
Introduced in 70df7bf19c.
If the monitor-uri starts with a slash ('/'), the matching is performed
against the request's path instead of the request's uri. It is a workaround
to let HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients are
encouraged to send absolute URIs only.
This patch is not tagged as a bug, because the previous behavior matched
exactly what the doc describes. But it may be surprising that HTTP/2 requests
don't match the monitor-uri.
This patch may be backported to 2.1 because HTTP/2 URIs are stored using the
absolute-form starting with this version. For previous versions, this patch
will only help explicitly absolute HTTP/1 requests (and only the HTX part,
because in the legacy HTTP code, the whole URI is matched).
It should fix the issue #509.
The monitor-uri should be case-sensitive. In reality, the scheme and the host
part are case-insensitive and only the path is case-sensitive. But concretely,
since the start, the matching on the monitor-uri has been case-sensitive. And
it is probably the expected behavior of almost all users.
This patch must be backported as far as 1.9. For HAProxy 2.0 and 1.9, it must be
applied on src/proto_htx.c.
Certificates loaded with "crt" and "crt-list" commonly share the same
intermediate certificate in their PEM files. "issuers-chain-path" is a global
directive to share intermediate chain certificates from a directory. If the
certificate chain is not included in a certificate's PEM file, haproxy will
complete the chain if the issuer matches the first certificate of a chain
stored via the "issuers-chain-path" directive. Such chains will be shared
in memory.
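For example (the path is illustrative only):
    global
        issuers-chain-path /etc/haproxy/issuers
A PEM file containing only a leaf certificate then gets its chain completed
from whichever file in that directory starts with its issuer's certificate.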
As reported in issue #507, since commit 07e1e3c93e ("MINOR: sample:
regsub now supports backreferences") we must not proceed in regsub()
if we fail to allocate a trash (which in practice never happens). No
backport needed.
This counter is already incremented when a new request is received (or if an
error occurred while waiting for it). So it must not be incremented when the
stream is terminated, at the end of process_stream(). This bug was introduced
by commit cff0f739e ("MINOR: counters: Review conditions to increment
counters from analysers").
No backport needed.
When a Host header is updated, the authority part, if any, is also updated to
keep both synchronized. But, when the update is performed while there is no
change, a failure is reported even though, in reality, no update is necessary.
This bug was introduced by commit d7b7a1ce5 ("MEDIUM: http-htx: Keep the Host
header and the request start-line synchronized").
This commit was pushed in 2.1. But in this version, the bug is hidden because
rewrite errors are silently ignored. And because it happens when there is no
change, if the rewrite fails, no one notices it. But since 2.2, rewrite errors
are fatal by default. So when the bug is hit, a 500 error is returned to the
client. Without this fix, a workaround is to disable the strict rewriting mode
(see the "strict-mode" HTTP rule).
The following HTTP rules are a good way to reproduce the bug if a request with
an authority is received. In HTTP/2, it is pretty common.
acl host_header_exists req.hdr(host) -m found
http-request set-header host %[req.hdr(host)] if host_header_exists
This patch must be backported to 2.1 and everywhere the commit d7b7a1ce5 is
backported. It should fix issue #494.
In the flt_analyze_http_headers() HTTP analyzer, we must not systematically
forward the headers. We must only count them as filtered data (i.e. increment
the offset by the right size). It is the http_payload callback's
responsibility to decide whether to forward the headers or not, by forwarding
at least 1 byte of payload. And there is always at least 1 byte of payload to
forward, the EOM block.
This patch depends on following commits:
* MINOR: filters: Forward data only if the last filter forwards something
* MINOR: http-htx: Add a function to retrieve the headers size of an HTX message
This patch must be backported with commits above as far as 1.9. In HAProxy 2.0
and 1.9, the patch must be adapted because of the legacy HTTP code.
In flt_tcp_payload() and flt_http_payload(), if the last filter does not
forward anything, nothing is forwarded, not even the already filtered data.
For now, this patch is useless because the last filter is always in sync with
the stream's offset. But it will be mandatory for a bugfix.
The http_get_hdrs_size() function may now be used to get the number of bytes
held by headers in an HTX message. It only works if the headers were not
already forwarded. Metadata are not counted here.
Now that the configuration parser is more flexible with samples,
converters and their arguments, we can leverage this to enable
support for backreferences in regsub.
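For example (a hypothetical rule, not from the patch; the sample expression
is single-quoted and the regex double-quoted so that the capture group's
parentheses and the comma reach the argument parser intact, per the quoting
support described below):
    http-request set-path '%[path,regsub("^/old/(.*)","/new/\1")]'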
Recent commit 807aef8a14 ("BUG/MINOR: arg: report an error if an argument
is larger than bufsize") aimed at fixing the argument length check but
relied on the fact that the end of string was not reached, except that
it forgot to consider the delimiters (comma and parenthesis) which are
valid conditions to break out of the loop. This used to break simple
expressions like "hdr(xff,1)". Thanks to Jérôme for reporting this.
No backport is needed.
Commit ef21facd99 ("MEDIUM: arg: make make_arg_list() support quotes
in arguments") removed the chunk_strncpy() used to fill the trash buffer
as the input is being parsed, and accidentally dropped the jump to the
error path in case the argument is too large, which is now fixed.
No backport is needed, this is for 2.2. This addresses issue #502.
For a very long time it used to be impossible to pass a closing square
bracket as a valid character in argument to a sample fetch function or
to a converter because the LF parser used to stop on the first such
character found and to pass what was between the first '[' and the first
']' to sample_parse_expr().
This patch addresses this by passing the whole string to sample_parse_expr()
which is the only one authoritative to indicate the first character that
does not belong to the expression. The LF parser then verifies it matches
a ']' or fails. As a result it is finally possible to write rules such as
the following, which is totally valid and unambiguous :
http-request redirect location %[url,regsub([.:/?-],!,g)]
|-----| | |
arg1 | `---> arg3
`-----> arg2
|-----------------|
converter
|---------------------|
sample expression
|------------------------|
log-format tag
When an end pointer is passed, instead of complaining that a comma is
missing after a keyword, sample_parse_expr() will silently return the
pointer to the current location into this return pointer so that the
caller can continue its parsing. This will be used by more complex
expressions which embed sample expressions, and may even permit to
embed sample expressions into arguments of other expressions.
Now it becomes possible to use quotes within arguments, allowing
the parser to distinguish a ',' or ')' that is part of the value from
one which delimits the argument. In addition, ',' and ')' may be escaped
using a backslash. However, it is also important to keep in mind that
just like in shell, quotes are first resolved by the word tokenizer, so
in order to pass quotes that are visible to the argument parser, a second
level is needed, either using backslash escaping, or by using an alternate
type of quotes.
For example, it's possible to write this to append a comma:
http-request add-header paren-comma-paren "%[str('(--,--)')]"
or this:
http-request add-header paren-comma-paren '%[str("(--,--)")]'
or this:
http-request add-header paren-comma-paren %[str(\'(--,--)\')]
or this:
http-request add-header paren-comma-paren %[str(\"(--,--)\")]
or this:
http-request add-header paren-comma-paren %[str(\"(\"--\',\'--\")\")]
Note that due to the wide use of '\' in front of parenthesis in regex,
the backslash character will purposely *not* escape parenthesis, so that
'\)' placed in quotes is passed verbatim to a regex engine.
For each and every argument parsed by make_arg_list(), there was a
strndup() call, just so that we have a trailing zero for most functions,
and this temporary buffer was released afterwards except for strings, where
it was kept.
Proceeding like this is not convenient because 1) it performs a huge
malloc/free dance, and 2) it forces to decide upfront where the argument
ends, which is what prevents commas and right parenthesis from being used.
This patch makes the function copy the temporary argument into the trash
instead, so that we avoid the malloc/free dance for almost all non-string
args (e.g. integers, addresses, time, size etc), and that we can later
produce the contents on the fly while parsing the input. It adds a length
check to make sure that the argument is not longer than the buffer size,
which should obviously never be the case but who knows what people put in
their configuration.
The main problem we're having with argument parsing is that at the
moment the caller looks for the first character looking like an end
of arguments (')') and calls make_arg_list() on the sub-string inside
the parenthesis.
Let's first change the way it works so that make_arg_list() also
consumes the parenthesis and returns the pointer to the first char not
consumed. This will later permit refining each argument's parsing.
For now there is no functional change.
Instead of scanning a string looking for an end of line, ')' or ',',
let's only accept characters which are actually valid identifier
characters. This will let the parser know that in %[src], only "src"
is the sample fetch name, not "src]". This was done both for samples
and ACLs since they are the same here.
Now, only one capture is mandatory in the path-info regex, the one matching
the script-name. The path-info capture is optional. Of course, it must be
defined to fill the PATH_INFO parameter, but it is not mandatory. This way,
it is possible to get the script-name part from the path, excluding the
path-info.
This patch is small enough to be backported to 2.1.
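As an illustration (a hypothetical fcgi-app section, other settings omitted),
the second capture below is optional thanks to the '?' quantifier, so a plain
"/index.php" still matches and yields a SCRIPT_NAME with no PATH_INFO:
    fcgi-app php-fpm
        path-info ^(/.+\.php)(/.*)?$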
If a regex to match the PATH_INFO parameter is configured, it systematically
fails if a newline or a null character is present in the URL-decoded path. So,
from the moment there is at least a "%0a" or a "%00" in the request path, we
always fail to get the PATH_INFO parameter, and the whole decoded path is
used for the SCRIPT_NAME parameter.
It is probably not the expected behavior. Because, most of the time, these
characters are not expected at all in a path, an error is now triggered when
one of these characters is found in the URL-decoded path before trying to
execute the path_info regex. However, this test is not performed if there is
no regex configured.
Note that in reality, the newline character is only a problem when HAProxy is
compiled with the pcre or pcre2 library and, conversely, the null character is
only a problem for the libc's regex library. But both are always excluded to
avoid any inconsistency depending on compile options.
An alternative, not implemented yet, is to replace these characters by another
one. If someone complains about this behavior, it will be re-evaluated.
This patch must be backported to all versions supporting the FastCGI
applications, so to 2.1 for now.
When calling the mux "destroy" method, the argument should be the mux
context, not the connection. In a few instances in the mux code, the
connection was used (mainly when the session wouldn't handle the idle
connection, and the server pool was full), and that could lead to random
segfaults.
This should be backported to 2.1, 2.0, and 1.9
we cannot return right after socket opening as we need to move back to
the default namespace first
this should fix github issue #500
this might be backported to all versions >= 1.6
Fixes: b3e54fe387 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
when `getsockopt` previously failed, we were trying to set defaultmss
with a value of -2.
this is a followup of github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
we were trying to close the file descriptor even when the `socket` call
failed.
this should fix github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When initializing a listener, let's make sure the bind_thread mask is
always limited to all_threads_mask when inserting the FD. This will
avoid seeing listening FDs with bits corresponding to threads that are
not active (e.g. when using "bind ... process 1/even"). The side effect
is very limited, all that was identified is that atomic operations are
used in fd_update_events() when not necessary. It's more a matter of
long-term correctness in practice.
This fix might be backported as far as 1.8 (then proto_sockpair must
be dropped).
In bug #495 we found that it is possible to resume a listener on a
nonexistent thread. This happens when a bind's thread_mask contains bits
out of the active threads mask, such as when using "1/odd" or "1/even".
The thread_mask was used as-is to pick a thread number to re-enable the
listener, and given that the highest number is used, 1/odd or 1/even can
produce quite high thread numbers and crash the process by queuing some
entries into non-existent lists.
This bug comes from an incomplete fix in commit 413e926ba ("BUG/MAJOR:
listener: fix thread safety in resume_listener()") though it will only
trigger if
some bind lines are explicitly bound to thread numbers higher than the
thread count. The fix must be backported to all branches having the fix
above (as far as 1.8, though the code is different there, see the commit
message in 1.8 for changes).
There are a few other places where bind_thread is used without
enforcing all_thread_mask, namely when doing fd_insert() while creating
listeners. It seems harmless but would probably deserve another fix.
Issue #490 reports that there are a few bogus constructs of the famous
"do { if (cond) continue; } while (0)" in the connection code, that are
used to retry on I/O failures caused by receipt of a signal. Let's turn
them into the more correct "while (1) { if (cond) continue; break; }"
instead. This may or may not be backported, it shouldn't have any
visible effect.
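To make the pitfall concrete, here is a minimal C sketch (send() stands in
for any syscall that may be interrupted by a signal; fd/buf/len/flags are
assumed to be set up by the caller):
    /* bogus: "continue" re-evaluates the while (0) condition,
     * so it exits the loop instead of retrying the call */
    do {
        ret = send(fd, buf, len, flags);
        if (ret == -1 && errno == EINTR)
            continue;
    } while (0);

    /* correct: retry until the call is not interrupted */
    while (1) {
        ret = send(fd, buf, len, flags);
        if (ret == -1 && errno == EINTR)
            continue;
        break;
    }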
We do have some checks for the UNIX socket path length to validate the
full pathname of a unix socket, but the pathname extension is only taken
into account when using a bind_prefix. The second check only matches
against MAXPATHLEN. So this means that path names between 98 and 108
characters might successfully parse but fail to bind. Let's adjust the
check in the address parser and refine the error checking at the bind()
step.
This addresses bug #493.
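A minimal sketch of the kind of parse-time check involved (illustrative
code, not the actual patch, which also accounts for the bind_prefix
extension):
    #include <string.h>
    #include <sys/un.h>

    /* the path must fit in sun_path with its trailing zero,
     * otherwise bind() will fail even though parsing succeeded */
    static int check_unix_path(const char *path)
    {
            struct sockaddr_un un;

            if (strlen(path) + 1 > sizeof(un.sun_path))
                    return -1; /* too long: reject at parse time */
            return 0;
    }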
In commit 477902b ("MEDIUM: connections: Get rid of the xprt_done
callback.") we added an unconditional call to h2_wake_some_streams()
in h2_wake(), though we must not do it if the connection is destroyed
or we end up with a use-after-free. In this case it's already done in
h2_process() before destroying the connection anyway.
Let's just add this test for now. A cleaner approach might consist in
doing it in the h2_process() function itself when a connection status
change is detected.
No backport is needed, this is purely 2.2.
A bug was introduced during the TCP rules refactoring by commit ac98d81f4
("MINOR: http-rule/tcp-rules: Make track-sc* custom actions"). There is no
stream when L4/L5 TCP rules are evaluated. For these rulesets, in track-sc*
actions, we must take care to rely on the session instead of the stream.
Because of this bug, any evaluation of L4/L5 TCP rules using a track-sc* action
leads to a crash of HAProxy.
No backport needed, except if the above commit is backported.
The code which is supposed to apply the bind_conf configuration on the
SSL_CTX was not called correctly. Indeed it was called with the previous
SSL_CTX so the new ones were left with default settings. For example the
ciphers were not changed.
This patch fixes #429.
Must be backported in 2.1.
This patch fixes memory leaks and a null pointer dereference found by coverity
on the error path when an HTTP return action is parsed. See issue #491.
No need to backport this patch except if the HTTP return action is backported
too.
In action_http_set_status(), when a rewrite error occurred, the stream error
flag must be set before returning the error.
No need to backport this patch except if commit 333bf8c33 ("MINOR: http-rules:
Set SF_ERR_PRXCOND termination flag when a header rewrite fails") is
backported. This bug was reported in issue #491.
The patch adding this check initially only issued a warning, instead of
being fatal. It was changed before committing. However when making this
change the type of the log message was not changed from `ha_warning` to
`ha_alert`. This patch makes this forgotten adjustment.
see 0cf811a5f9
No backport needed. The initial patch was backported as a warning, thus
the log message type is correct.
Consider a configuration like this:
> acl t always_true
> acl or always_false
>
> http-response set-header Foo Bar if t or t
The 'or' within the condition will be treated as a logical disjunction
and the header will be set, despite the ACL 'or' being falsy.
This patch makes it an error to declare such an ACL that will never
work. This patch may be backported to stable releases, turning the
error into a warning only (the code was written in a way to make this
trivial). It should not break anything and might improve the users'
lives.
If the longjmp() call is not flagged as "noreturn", for example, because the
operating system doesn't target a gcc-compatible compiler, we may get this
warning when building Lua :
src/hlua.c: In function 'hlua_panic_ljmp':
src/hlua.c:128:1: warning: no return statement in function returning non-void [-Wreturn-type]
static int hlua_panic_ljmp(lua_State *L) { longjmp(safe_ljmp_env, 1); }
^~~~~~
The function's prototype cannot be changed because it must be compatible
with Lua's callbacks. Let's simply enclose the call inside WILL_LJMP()
which we created exactly to signal a call to longjmp(). It lets the compiler
know we won't get back into the function and that the return statement is
not needed.
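Based on the snippet above, the resulting one-liner presumably looks like
this (a sketch, not a quote from the patch):
    static int hlua_panic_ljmp(lua_State *L) { WILL_LJMP(longjmp(safe_ljmp_env, 1)); }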
It is now possible to intercept HTTP messages from a lua action and reply to
clients. To do so, a reply object must be provided to the function
txn:done(). It may contain a status code with a reason, a header list and a
body. By default, if an empty reply object is used, an empty 200 response is
returned. If no reply is passed when txn:done() is called, the previous
behaviour is respected, the transaction is terminated and nothing is returned to
the client. The same is done for TCP streams. When txn:done() is called, the
action is terminated with the code ACT_RET_DONE on success and ACT_RET_ERR on
error, interrupting the message analysis.
The reply object may be created by hand, from lua. Or txn:reply() may be
called. Either way, this object provides some methods to fill it:
* Reply:set_status(<status>[, <reason>]) : Set the status and optionally the
reason. If no reason is provided, the default one corresponding to the status
code is used.
* Reply:add_header(<name>, <value>) : Add a header. For a given name, the
values are stored in an ordered list.
* Reply:del_header(<name>) : Removes all occurrences of a header name.
* Reply:set_body(<body>) : Set the reply body.
Here are some examples, all doing the same:
-- ex. 1
txn:done{
status = 400,
reason = "Bad request",
headers = {
["content-type"] = { "text/html" },
["cache-control"] = { "no-cache", "no-store" },
},
body = "<html><body><h1>invalid request</h1></body></html>"
}
-- ex. 2
local reply = txn:reply{
status = 400,
reason = "Bad request",
headers = {
["content-type"] = { "text/html" },
["cache-control"] = { "no-cache", "no-store" }
},
body = "<html><body><h1>invalid request</h1></body></html>"
}
txn:done(reply)
-- ex. 3
local reply = txn:reply()
reply:set_status(400, "Bad request")
reply:add_header("content-length", "text/html")
reply:add_header("cache-control", "no-cache")
reply:add_header("cache-control", "no-store")
reply:set_body("<html><body><h1>invalid request</h1></body></html>")
txn:done(reply)
This function may be used to define a timeout when a lua action returns
act.YIELD. It is a way to force the script to be re-executed after a short
time (defined in milliseconds).
Unlike core:sleep() or core:yield(), the script is fully re-executed if it
returns act.YIELD. With the core functions to yield, the script is
interrupted and restarts from the yield point. When a script returns
act.YIELD, it is finished but the message analysis is blocked on the action,
waiting for its end.
ACT_RET_* codes are now available from lua scripts. The global object "act"
is used to register these codes as constants. Now, lua actions can return any
of the following codes :
* act.CONTINUE for ACT_RET_CONT
* act.STOP for ACT_RET_STOP
* act.YIELD for ACT_RET_YIELD
* act.ERROR for ACT_RET_ERR
* act.DONE for ACT_RET_DONE
* act.DENY for ACT_RET_DENY
* act.ABORT for ACT_RET_ABRT
* act.INVALID for ACT_RET_INV
For instance, the following script denies all requests :
core.register_action("deny", { "http-req" }, function (txn)
return act.DENY
end)
Thus "http-request lua.deny" do exactly the same than "http-request deny".
When an action successfully finishes, the action return code (ACT_RET_*) is
now retrieved from the stack, if the first element is an integer. In
addition, in hlua_txn_done(), the value ACT_RET_DONE is pushed onto the stack
before exiting. Thus, when a script uses this function, the corresponding
action still finishes with the right code. Thanks to this change, the flag
HLUA_STOP is now useless, so it has been removed.
It is a mandatory step to allow a lua action to return any action return code.
In the http_process_res_common() analyzer, when an invalid response is
reported, the failed_resp counters must be incremented.
No need to backport this patch, except if the commit b8a5371a ("MEDIUM:
http-ana: Properly handle internal processing errors") is backported too.
It is not possible anymore to alter the HTTP parser state from lua sample
fetches or lua actions. So there is no reason to still check for the parser
state consistency.
It is now possible to append extra headers to the responses generated by HTTP
return actions, as long as they are not based on an errorfile. For return
actions based on errorfiles, these extra headers are ignored. To define an
extra header, a "hdr" argument must be used with a name and a value. The
value is a log-format string. For instance:
http-request return status 200 hdr "x-src" "%[src]" hdr "x-dst" "%[dst]"
Thanks to this new action, it is now possible to return any response from
HAProxy, with any status code, based on an errorfile, a file or a string.
Unlike the other internal messages generated by HAProxy, these ones are not
interpreted as errors. And it is not necessary to use a file containing a
full HTTP response, although it is still possible. In addition, using a
log-format string or a log-format file, it is possible to have responses with
dynamic content. This action can be used on the request path or the response
path. The only constraint is that the response must be smaller than a buffer.
And to avoid any warning, the buffer space reserved for header rewriting
should also be left free.
When a response is returned with a file or a string as payload, it only contains
the content-length header and the content-type header, if applicable. Here are
examples:
http-request return content-type image/x-icon file /var/www/favicon.ico \
if { path /favicon.ico }
http-request return status 403 content-type text/plain \
lf-string "Access denied. IP %[src] is blacklisted." \
if { src -f /etc/haproxy/blacklist.lst }
This patch introduces the 'http-after-response' rules. These rules are
evaluated at the end of the response analysis, just before data forwarding,
on ALL HTTP responses, the server ones but also all responses generated by
HAProxy. Thanks to this ruleset, it is now possible, for instance, to add
some headers to the responses generated by the stats applet (see the example
after the list). The following actions are supported :
* allow
* add-header
* del-header
* replace-header
* replace-value
* set-header
* set-status
* set-var
* strict-mode
* unset-var
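As an illustration (a hypothetical rule, not from the patch), the following
is applied to every response, including those produced internally by the
stats applet:
    http-after-response set-header Strict-Transport-Security "max-age=31536000"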
Call the http_forward_proxy_resp() function when an internal response is
returned. It concerns redirect, auth and error responses, but also
100-Continue and 103-Early-Hints responses. For errors, there is a subtlety:
if the forward fails, an HTTP 500 error is generated if it is not already an
internal error. For now http_forward_proxy_resp() cannot fail. But it will
become possible when the new ruleset applied on all responses is added.
Operations performed when internal responses (redirect/deny/auth/errors) are
returned are always the same. The http_forward_proxy_resp() function is added to
group all of them under a unique function.
The http_server_error() function now relies on http_reply_and_close(). Both
do almost the same actions. In addition, http_server_error() sets the error
flag and the final state flag on the stream.
The rule direction must be tested to do specific processing on the request
path. The intercepted_req counter should be updated if the rule is evaluated
on the frontend, and the remaining request analyzers must be removed, but
only on the request path. The rule direction must also be tested to set the
right final stream state flag.
This patch depends on the commit "MINOR: http-rules: Add a flag on redirect
rules to know the rule direction". Both must be backported to all stable
versions.
HTTP redirect rules can be evaluated on the request or the response path. So
when a redirect rule is evaluated, it is important to have this information
because some specific processing may be performed depending on the direction. So
the REDIRECT_FLAG_FROM_REQ flag has been added. It is set when applicable on the
redirect rule during the parsing.
This patch is mandatory to fix a bug on redirect rule. It must be backported to
all stable versions.
It is important not to forget to specify that the HTX response was internally
generated when a server performs a redirect. This information is used by the
H1 multiplexer to choose the right connection mode when the response is sent
to the client.
This patch must be backported to 2.1.
The first index in an HTX message is the HTX block index from which the HTTP
analysis must be performed. When HAProxy sends an HTTP response, on error or
redirect, this index must be reset because all pending incoming data are
considered as forwarded. For now, it is only a bug for the 103-Early-Hints
response. For other responses, it is not a problem. But it will be when the
new ruleset applied on all responses is added. For 103 responses, if the
first index is not reset and there are rewriting rules on server responses,
the generated 103 responses, if any, are evaluated too.
This patch must be backported, and probably adapted, at least for 103
responses, as far as 1.9.
<.arg.dns.dns_opts> field in the act_rule structure is now dynamically allocated
when a do-resolve rule is parsed. This drastically reduces the structure size.
When an error is returned to a client, the right message is injected into the
response buffer. It is performed by http_server_error() or
http_reply_and_close(). Both ignore any data already present in the channel's
buffer. While it is legitimate to remove all input data, it is important not
to remove any outgoing data.
So now, we try to append the error message to the response buffer, only removing
input data. We rely on the channel_htx_copy_msg() function to do so. So this
patch depends on the following two commits:
* MINOR: htx: Add a function to append an HTX message to another one
* MINOR: htx/channel: Add a function to copy an HTX message in a channel's buffer
This patch must be backported as far as 1.9. However, above patches must be
backported first.
The htx_append_msg() function can now be used to append an HTX message to
another one. Either the whole message is copied, or nothing is. If an error
occurs during the copy, all changes are rolled back.
This patch is mandatory to fix a bug in http_reply_and_close() function. Be
careful to backport it first.
If an error file is too big and, once converted to HTX, runs over the buffer
space reserved for header rewriting, a warning is emitted. Because a new set
of rules will be added to allow header rewriting on all responses, including
HAProxy ones, it is important to always keep this space free for error files.
When a header rewrite fails, an internal error is triggered. But
SF_ERR_INTERNAL is documented to be the consequence of a bug and must be
reported to the dev team. So, when this happens, the SF_ERR_PRXCOND
termination flag is now set.
When the global structure is initialized, instead of setting tune.maxrewrite
to -1, its default value can be set immediately. This way, it is always
defined during the configuration validity check. Otherwise, the only way to
have it set at this stage is to explicitly set it in the global section.
Since the strict rewriting mode was introduced, actions manipulating headers
(set/add/replace) always rely on the request message to test whether the
HTTP_MSGF_SOFT_RW flag is set or not. But, of course, we must only rely on
the request for http-request rules. For http-response rules, we must use the
response message.
This patch must be backported if the strict rewriting is backported too.
The function was added in commit 6c39198b57,
but was only used within a single function, `free_dcache`, which was itself
unused.
see issue #301
see commit 10ce0c2f31 which removed
`free_dcache`
The function was changed to be static in commit
6c39198b57, but even that commit
no longer used it. The purpose of the change vs. outright removal
is unclear.
see issue #301
As reported in issue #485 the test for !len at the end of the
loop in get_var_int() is useless since it was already done inside
the loop. Actually the code is more readable if we remove the first
one so let's do this instead. The resulting code is exactly the same
since the compiler already optimized the test away.
In ssl_sock_load_dh_params(), if haproxy fails to apply the dhparam
with SSL_CTX_set_tmp_dh(), it will apply the DH with
SSL_CTX_set_dh_auto().
The problem is that we don't clear the OpenSSL errors when leaving this
function, so it could fail to load the certificate, even though it's only a
warning.
Fixes bug #483.
Must be backported in 2.1.
We have the ability, per bind option, to ignore certain errors (CA, crt, ...),
and for this we use a 64-bit field. In issue #479 coverity reports a risk of
too large a left shift. For now, as of OpenSSL 1.1.1, the highest error value
that may be reported by X509_STORE_CTX_get_error() seems to be around 50 so
there should be no risk yet, but it's enough of a warning to add a check so
that we don't accidentally hide random errors in the future.
This may be backported to relevant stable branches.
This new setting in the global section alters the way HAProxy will look
for unspecified files (.ocsp, .sctl, .issuer, bundles) during the
loading of the SSL certificates.
By default, HAProxy discovers automatically a lot of files not specified
in the configuration, and you may want to disable this behavior if you
want to optimize the startup time.
This patch sets flags in global_ssl.extra_files and then checks them
before trying to load an extra file.
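Assuming the directive described here is the "ssl-load-extra-files" global
setting (its name is not stated above), disabling the discovery would look
like:
    global
        ssl-load-extra-files none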
When using lockless pools, add a new rwlock, flush_pool. Read-lock it when
getting memory from the pool, so that concurrent accesses are still
allowed, but write-lock it when we're about to free memory, in
pool_flush() and pool_gc().
The problem is, when removing an item from the pool, we dereference it
to get the next one; however, that pointer may have been freed in the
meanwhile, and that could provoke a crash if the pointer has been unmapped.
It should be OK to use a rwlock, as normal operations will still be able
to access the pool concurrently, and calls to pool_flush() and pool_gc()
should be pretty rare.
This should be backported to 2.1, 2.0 and 1.9.
In pool_create(), only initialize the pool spinlock if we just created the
pool, in the event we're reusing it, there's no need to initialize it again.
In pool_flush(), we can't just set the free_list to NULL, or we may suffer
the ABA problem. Instead, use a double-width CAS and update the sequence
number.
This should be backported to 2.1, 2.0 and 1.9.
This may, or may not, be related to github issue #476.
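As a generic sketch of that technique (illustrative names, not HAProxy's
actual pool code), pairing the head pointer with a sequence number bumped on
every update prevents a recycled pointer from being mistaken for the current
head:
    #include <stdint.h>
    #include <stddef.h>

    struct item { struct item *next; };

    /* double-width head: pointer + ABA sequence counter */
    struct head { struct item *ptr; uintptr_t seq; } __attribute__((aligned(16)));

    static struct item *pop(struct head *h)
    {
            struct head old, new;

            __atomic_load(h, &old, __ATOMIC_ACQUIRE);
            do {
                    if (!old.ptr)
                            return NULL;
                    new.ptr = old.ptr->next;
                    new.seq = old.seq + 1; /* changes even if ptr repeats */
            } while (!__atomic_compare_exchange(h, &old, &new, 0,
                                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
            return old.ptr;
    }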
We can't clear flags on tasklets because we don't know if they're still
present upon return (they all return NULL; maybe that could change in
the future). As a side effect, once TASK_RUNNING is set, it's never
cleared anymore, which is misleading and resulted in some incorrect
flagging of bulk tasks in the recent scheduler changes. And the only
reason for setting TASK_RUNNING on tasklets was to detect self-wakers,
which is now done using a dedicated flag. So instead of setting this
flag with no opportunity to clear it, let's simply not set it.
Now that we can more accurately watch which connection is really
being woken up from itself, it was desirable to re-adjust the CPU BW
thresholds based on measurements. New tests with 60000 concurrent
connections were run at 100 Gbps with unbounded queues and showed
the following distribution:
scenario TC0 TC1 TC2 observation
-------------------+---+---+----+---------------------------
TCP conn rate : 32, 51, 17
HTTP conn rate : 34, 41, 25
TCP byte rate : 2, 3, 95 (2 MB objects)
splicing byte rate: 11, 6, 83 (2 MB objects)
H2 10k object : 44, 23, 33 client-limited
mixed traffic : 18, 10, 72 2*1m+1*0: 11kcps, 36 Gbps
The H2 test experienced a huge change since it uses a persistent connection
that was accidentally flagged in the previous test. The splicing test
exhibits a higher need for short tasklets, so does the mixed traffic
test. Given that latency mainly matters for conn rate and H2 here,
the ratios were readjusted as 33% for TC0, 50% for TC1 and 17% for
TC2, keeping in mind that whatever is not consumed by one class is
automatically shared in equal proportions by the next one(s). This
setting immediately provided a nice improvement as with the default
settings (maxpollevents=200, runqueue-depth=200), the same ratios as
above are still reported, while the time to request "show activity"
on the CLI dropped to 30-50ms. The average loop time is around 5.7ms
on the mixed traffic.
In addition, one extra stress test at 90.5 Gbps with 5100 conn/s shows
70-100ms CLI request time, with an average loop time of 17 ms.
sched->current is used to know the current task/tasklet, and is currently
only used by the panic dump code. However it turns out it was not set for
tasklets, which prevents us from using it for more usages, despite the
panic handling code already handling this case very well. Let's make sure
it's now set.
Commit a17664d829 ("MEDIUM: tasks: automatically requeue into the bulk
queue an already running tasklet") tried to inflict a penalty on
self-requeuing tasks/tasklets, which correspond to those involved in
large, high-latency data transfers, for the benefit of all other
processing which requires a low latency. However, it turns out that
while it ought to do this on a case-by-case basis, basing itself on
the RUNNING flag isn't accurate because this flag is never cleared on
tasklets, so we'd rather need a distinct flag to tag such tasklets.
This commit introduces TASK_SELF_WAKING to mark tasklets acting like
this. For now it's still set when TASK_RUNNING is present but this
will have to change. The flag is kept across wakeups.
Commit 140237471e made sure we hold the
toremove_lock for the corresponding thread before removing a connection
from its idle_orphan_conns list, however it failed to unlock it if we
found a connection, leading to a deadlock, so add the missing unlock.
This should be backported to 2.1 and 2.0.
Measures with unbounded execution ratios under 40000 concurrent
connections at 100 Gbps showed the following CPU bandwidth
distribution between task classes depending on traffic scenarios:
scenario TC0 TC1 TC2 observation
-------------------+---+---+----+---------------------------
TCP conn rate : 29, 48, 23 221 kcps
HTTP conn rate : 29, 47, 24 200 kcps
TCP byte rate : 3, 5, 92 53 Gbps
splicing byte rate: 5, 10, 85 70 Gbps
H2 10k object : 10, 21, 74 client-limited
mixed traffic : 4, 7, 89 2*1m+1*0: 11kcps, 36 Gbps
Thus it seems that we always need a bit of bulk tasks even for short
connections, which seems to imply a suboptimal processing somewhere,
and that there are roughly twice as many tasks (TC1=normal) as regular
tasklets (TC0=urgent). This ratio stands even when data forwarding
increases. So at first glance it looks reasonable to enforce the
following ratio by default:
- 16% for TL_URGENT
- 33% for TL_NORMAL
- 50% for TL_BULK
With this, the TCP conn rate climbs to ~225 kcps, and the mixed traffic
pattern shows a more balanced 17 kcps + 35 Gbps with 35 ms CLI request
time instead of 11 kcps + 36 Gbps and 400 ms response time. The
byte rate tests (1M objects) are not affected at all. This setting
looks "good enough" to allow immediate merging, and could be refined
later.
It's worth noting that it resists very well to massive increase of
run queue depth and maxpollevents: with the run queue depth changed
from 200 to 10000 and maxpollevents to 10000 as well, the CLI's
request time is back to the previous ~400ms, but the mixed traffic
test reaches 52 Gbps + 7500 CPS, which was never met with the previous
scheduling model, while the CLI used to show ~1 minute response time.
The reason is that in the bulk class it becomes possible to perform
multiple rounds of recv+send and eliminate objects at once, increasing
the L3 cache hit ratio, and keeping the connection count low, without
degrading too much the latency.
Another test with mixed traffic involving 2/3 splicing on huge objects
and 1/3 on empty objects without touching any setting reports 51 Gbps +
5300 cps and 35ms CLI request time.
We used to mix high latency tasks and low latency tasklets in the same
list, and to even refill bulk tasklets there, causing some unfairness
in certain situations (e.g. poll-less transfers between many connections
saturating the machine with similarly-sized in and out network interfaces).
This patch changes the mechanism to split the load into 3 lists depending
on the task/tasklet's desired classes :
- URGENT: this is mainly for tasklets used as deferred callbacks
- NORMAL: this is for regular tasks
- BULK: this is for bulk tasks/tasklets
Arbitrary ratios of max_processed are picked from each of these lists in
turn, with the ability to complete in one list from what was not picked
in the previous one. After some quick tests, the following setup gave
apparently good results both for raw TCP with splicing and for H2-to-H1
request rate:
- 0 to 75% for urgent
- 12 to 50% for normal
- 12 to what remains for bulk
Bulk is not used yet.
The new function run_tasks_from_list() will run over a tasklet list and will
run all the tasks and tasklets it finds there within a limit of <max>
that is passed as an argument. This is a preliminary work for scheduler QoS
improvements.
Previous patch 160287b676 ("MEDIUM: pipe/thread: maintain a per-thread
local cache of recently used pipes") didn't replace all pipe counter
updates with atomic ops since some were already under a lock, which is
obviously not a valid reason since these ones can be updated in parallel
to other atomic ops. The result was that the pipes_used could seldom be
seen as negative in the stats (harmless) but also this could result in
slightly more pipes being allocated than permitted, thus stealing a few
file descriptors that were not usable for connections anymore. Let's use
pure atomic ops everywhere these counters are updated.
No backport is needed.
In order to completely remove the pipe locking cost and try to reuse
hot pipes, each thread now maintains a local cache of recently used pipes
that is no larger than its share (maxpipes/nbthreads). All extra pipes
are instead refilled into the global pool. Allocations are made from the
local pool first, and fall back to the global one before allocating a new one.
This completely removes the observed pipe locking cost at high bit rates,
which was still around 5-6%.
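A minimal sketch of that allocation order (illustrative names and layout,
not HAProxy's actual code):
    #include <stdlib.h>

    struct pipe { int prod, cons; struct pipe *next; };

    static __thread struct pipe *local_pipes;   /* per-thread free list */
    static __thread int local_pipes_used;       /* kept <= maxpipes/nbthreads */

    extern struct pipe *global_pipe_get(void);  /* shared pool, under lock (assumed helper) */

    static struct pipe *get_pipe_sketch(void)
    {
            struct pipe *p = local_pipes;

            if (p) {                             /* 1. lock-free local cache */
                    local_pipes = p->next;
                    local_pipes_used--;
                    return p;
            }
            if ((p = global_pipe_get()) != NULL) /* 2. then the global pool */
                    return p;
            return calloc(1, sizeof(*p));        /* 3. finally a fresh one */
    }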
In a quick test involving splicing, we can see that get_pipe() and
put_pipe() together consume up to 12% of the CPU. That's not surprising
considering how much work is performed under the lock, including the
pipe struct allocation, the pipe creation and its initialization. Same
for releasing, we don't need a lock there to call close() nor to free
to the pool.
Changing this alone was enough to cut the overhead in half. A better
approach should consist in having a per-thread pipe cache, which will
also help keep pages hot in the CPU caches.
src/ssl_sock.c: In function ‘cli_io_handler_show_cert’:
src/ssl_sock.c:10214:6: warning: unused variable ‘n’ [-Wunused-variable]
int n;
^
Fix this problem in the io handler of the "show ssl cert" function.
Given that raw_sock's functions solely act on connections and that all its
callers properly use subscribe() when they want to receive/send more, there
is no more reason for calling fd_{cant,cond,done}_{send,recv} anymore as
this call is immediately overridden by the subscribe call. It's also worth
noting that fd_cond_recv(), whose purpose was to speculatively
enable reading in the FD cache if the FD was active but not yet polled, was
made to save on expensive epoll_ctl() calls and was implicitly covered more
cleanly by recent commit 5d7dcc2a8e ("OPTIM: epoll: always poll for recv if
neither active nor ready").
No change on the number of calls to epoll_ctl() was noticed consecutive to
this change.
this log could sometimes be a bit confusing (depending on the number in
fact) when you read it (e.g. is it the number of active connections?) -
only trained eyes know haproxy outputs a different log when closing
active connections while stopping.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
hostnames were limited to 62 chars, which is not RFC 1035 compliant;
- the parsing loop should stop when above the max label chars
- fix the label length test where d[i] was wrongly used
- simplify the whole function to avoid using two extra char* variables
this should fix github issue #387
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Acked-by: Baptiste <bedis9@gmail.com>
triggered by coverity; src_port is set earlier.
this should fix github issue #467
Fixes: 7fec021537 ("MEDIUM: proxy_protocol: Convert IPs to v6 when
protocols are mixed")
This should be backported to 1.8.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Many HTTP actions rely on <.arg.http> in the act_rule structure. Not all actions
use the log-format expression, but it must be initialized anyway. Otherwise,
HAProxy may crash during the deinit when the release function is called.
No backport needed. This patch should fix issue #468.
In issue #465, we see that Coverity detected dead code in checks.c
which is in fact a missing parenthesis when building the connect() flags
following the API change in commit fdcb007ad8 ("MEDIUM: proto:
Change the prototype of the connect() method.").
The impact should be imperceptible as in the best case it may have
resulted in a missed optimization trying to save a syscall or to merge
outgoing packets.
It may be backported as far as 2.0 though it's not critical.
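The commit does not show the exact expression, but the general shape of such
a precedence trap is (names are illustrative only):
    /* '|' binds tighter than '?:', so FLAG_A is swallowed into the
     * condition and the ':' branch becomes dead code */
    flags = FLAG_A | cond ? FLAG_B : 0;    /* bogus: parsed as (FLAG_A | cond) ? FLAG_B : 0 */
    flags = FLAG_A | (cond ? FLAG_B : 0);  /* intended */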
In ssl_sock_init(), if we fail to allocate the BIO, don't forget to free
the SSL *, or we'd end up with a memory leak.
This should be backported to 2.1 and 2.0.
As the server early data buffer is allocated in the middle of the loop
used to allocate the SSL session, without being freed before retrying,
this leads to a memory leak.
To fix this, we move the section of code responsible for this early data
buffer allocation after the one responsible for allocating the SSL session.
Must be backported to 2.1 and 2.0.
When commit 477902bd2e made the conn_stream
allocation unconditional, it unfortunately moved the code doing the allocation
inside #if USE_OPENSSL, which means anybody compiling haproxy without
openssl wouldn't allocate any conn_stream, and would get a segfault later.
Fix that by moving the code that does the allocation outside #if USE_OPENSSL.
In process_stream(), when a client or a server abort is handled, the
corresponding listener's counter is incremented. But, we must be sure to have a
listener attached to the session. This bug was introduced by the commit
cff0f739e5.
Thanks to Fred for reporting the bug.
No need to backport this patch, except if commit cff0f739e5 is backported.
A stupid cut-and-paste bug was introduced in commit cff0f739e5. Backend
counters must of course be incremented on the stream's backend, not the
frontend.
No need to backport this patch, except if commit cff0f739e5 is backported.
A first patch was made during 2.0-dev to silence a bogus warning emitted
by gcc : dd1c8f1f72 ("MINOR: cfgparse: Add a cast to make gcc happier."),
but it happens it was not sufficient as the warning re-appeared on 32-bit
machines under gcc-8 and gcc-9 :
src/cfgparse.c: In function 'check_config_validity':
src/cfgparse.c:3642:33: warning: argument 1 range [2147483648, 4294967295] exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread, sizeof(*newsrv->idle_orphan_conns));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This warning doesn't trigger in other locations, and it immediately
vanishes if the previous or subsequent loops do not depend on
global.nbthread anymore, or if the field ordering of the struct server
changes! As discussed in the thread at:
https://www.mail-archive.com/haproxy@formilux.org/msg36107.html
playing with -Walloc-size-larger-than has no effect. And a minimal
reproducer could be isolated, indicating it's pointless to circle around
this one. Let's just cast nbthread to ushort so that gcc cannot make
this wrong detection. It's unlikely we'll use more than 65535 threads in
the near future anyway.
This may be backported to older releases if they are also affected, at
least to ease the job of distro maintainers.
Thanks to Ilya for testing.
lua-prepend-path allows the administrator to specify a custom Lua library
path to load custom Lua modules that are useful within the context of HAProxy
without polluting the global Lua library folder.
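For example (an illustrative path), the following makes modules under that
directory loadable with a plain require:
    global
        lua-prepend-path /usr/share/haproxy/lua/?.lua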
While the H2 parser properly checks for the absence of anything but
"trailers" in the TE header field, we forgot to check this when sending
the request to an H2 server. The problem is that an H2->H2 conversion
may keep "gzip" and fail at the next stage.
This patch makes sure that we only send "TE: trailers" if the TE header
contains the "trailers" token, otherwise it's dropped.
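Concretely (an illustrative exchange), a request carrying:
    TE: gzip, trailers
is now forwarded to the H2 server as:
    TE: trailers
and a TE header without the "trailers" token is dropped entirely.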
This fixes issue #464 and should be backported till 1.9.
Since commit 1b8e68e89a ("MEDIUM: stick-table: Stop handling stick-tables
as proxies."), a rule referencing the current proxy with no table leads
to the following error :
[ALERT] 023/071924 (16479) : Proxy 'px': unable to find stick-table '(null)'.
[ALERT] 023/071914 (16479) : Fatal errors found in configuration.
for a config like this one:
backend px
stick on src
This patch fixes it and should be backported as far as 2.0.
A few places in health checks and stream-int on the send path were still
checking for this flag. Now we do not and instead we rely on snd_buf()
to report the error if any.
It's worth noting that all 3 real muxes still use CO_FL_SOCK_WR_SH and
CO_FL_ERROR interchangeably at various places to decide to abort and/or
free their data. This should be clarified and fixed so that only
CO_FL_ERROR is used, and this will render the error paths simpler and
more accurate.
The test was added before splice() and send() to make sure we never
accidentally send after a shutdown, because upper layers do not all
check and it's not their job to do it. In such a case we also set
errno to EPIPE so that the error can be accurately reported, e.g.,
in health checks.
Just like with CO_FL_SOCK_RD_SH, we don't need to check for this flag too
early because conn_sock_send() already does it. No error was lost so it
was harmless, it was only useless code.
The handshake functions dedicated to proxy proto, netscaler and
socks4 all check for this flag before proceeding. This is wrong;
they must not do so, and must instead perform the call to recv(),
then report the close. The reason for this is that the current
construct managed to lose the CO_ER_CIP_EMPTY error code in case
the connection was already shut, thus causing a race condition
with some errors being reported correctly or as unknown depending
on the timing.
There are still leftovers from the pre-xprt_handshake era with lots
of places where I/O callbacks refrain from receiving/sending if they
see that a handshake is present. This needlessly duplicates the
subscribe calls as it will automatically be done by the underlying
xprt_handshake code when attempting the operation.
The only reason for still checking CO_FL_HANDSHAKE is when we decide
to instantiate xprt_handshake. This patch removes all other ones.
As mentioned in commit c192b0ab95 ("MEDIUM: connection: remove
CO_FL_CONNECTED and only rely on CO_FL_WAIT_*"), there is a lack of
consistency on which flags are checked among L4/L6/HANDSHAKE depending
on the code areas. A number of sample fetch functions only check for
L4L6 to report MAY_CHANGE, some places only check for HANDSHAKE and
many check both L4L6 and HANDSHAKE.
This patch starts to make all of this more consistent by introducing a
new mask CO_FL_WAIT_XPRT which is the union of L4/L6/HANDSHAKE and
reports whether the transport layer is ready or not.
All inconsistent call places were updated to rely on this one each time
the goal was to check for the readiness of the transport layer.
Most places continue to check CO_FL_HANDSHAKE while in fact they should
check CO_FL_HANDSHAKE_NOSSL, which contains all handshakes but the one
dedicated to SSL renegotiation. In fact the SSL layer should be the
only one checking CO_FL_SSL_WAIT_HS, so as to avoid processing data
when a renegotiation is in progress, but other ones randomly include it
without knowing. And ideally it should even be an internal flag that's
not exposed in the connection.
This patch takes CO_FL_SSL_WAIT_HS out of CO_FL_HANDSHAKE, uses this flag
consistently all over the code, and gets rid of CO_FL_HANDSHAKE_NOSSL.
In order to limit the confusion that has accumulated over time, the
CO_FL_SSL_WAIT_HS flag which indicates an ongoing SSL handshake,
possibly used by a renegotiation was moved after the other ones.
As mentioned in c192b0ab95 ("MEDIUM: connection: remove CO_FL_CONNECTED
and only rely on CO_FL_WAIT_*"), si_cs_recv() currently does not propagate
CS_FL_EOS to CF_READ_NULL if CO_FL_WAIT_L4L6 is set, while this situation
doesn't exist anymore. Let's get rid of this confusing test.
We only add the Early-data header, or get ssl_fc_has_early to return 1, if
we didn't already do the SSL handshake, as otherwise, we know the early
data were fine, and there's no risk of replay attack. But to do so, we
wrongly checked CO_FL_HANDSHAKE, we have to check CO_FL_SSL_WAIT_HS instead,
as we don't care about the status of any other handshake.
This should be backported to 2.1, 2.0, and 1.9.
Commit 477902bd2e ("MEDIUM: connections: Get rid of the xprt_done
callback.") broke the master CLI for a very obscure reason. It happens
that short requests immediately terminated by a shutdown are properly
received, CS_FL_EOS is correctly set, but in si_cs_recv(), we refrain
from setting CF_SHUTR on the channel because CO_FL_CONNECTED was not
yet set on the connection since we've not passed again through
conn_fd_handler() and it was not done in conn_complete_session(). While
commit a8a415d31a ("BUG/MEDIUM: connections: Set CO_FL_CONNECTED in
conn_complete_session()") fixed the issue, such accident may happen
again as the root cause is deeper and actually comes down to the fact
that CO_FL_CONNECTED is lazily set at various check points in the code
but not every time we drop one wait bit. It is not the first time we
face this situation.
Originally this flag was used to detect the transition between WAIT_*
and CONNECTED in order to call ->wake() from the FD handler. But since
at least 1.8-dev1 with commit 7bf3fa3c23 ("BUG/MAJOR: connection: update
CO_FL_CONNECTED before calling the data layer"), CO_FL_CONNECTED is
always synchronized against the two others before being checked. Moreover,
with the I/Os moved to tasklets, the decision to call the ->wake() function
is performed after the I/Os in si_cs_process() and equivalent, which don't
care about this transition either.
So in essence, checking for CO_FL_CONNECTED has become a lazy way to
check for (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN), but that always
relies on someone else having synchronized it.
This patch addresses it once and for all by killing this flag and only checking
the two others (for which a composite mask CO_FL_WAIT_L4L6 was added). This
revealed a number of inconsistencies that were purposely not addressed here
for the sake of bisectability:
- while most places do check both L4+L6 and HANDSHAKE at the same time,
some places like assign_server() or back_handle_st_con() and a few
sample fetches looking for proxy protocol do check for L4+L6 but
don't care about HANDSHAKE ; these ones will probably fail on TCP
request session rules if the handshake is not complete.
- some handshake handlers do validate that a connection is established
at L4 but didn't clear CO_FL_WAIT_L4_CONN
- the ->ctl method of mux_fcgi, mux_pt and mux_h1 only checks for L4+L6
before declaring the mux ready while the snd_buf function also checks
for the handshake's completion. Likely the former should validate the
handshake as well and we should get rid of these extra tests in snd_buf.
- raw_sock_from_buf() would directly set CO_FL_CONNECTED and would only
later clear CO_FL_WAIT_L4_CONN.
- xprt_handshake would set CO_FL_CONNECTED itself without actually
clearing CO_FL_WAIT_L4_CONN, which could apparently happen only if
waiting for a pure Rx handshake.
- most places in ssl_sock that were checking CO_FL_CONNECTED don't need
to include the L4 check as an L6 check is enough to decide whether to
wait for more info or not.
It also becomes obvious when reading the test in si_cs_recv() that caused
the failure mentioned above that once converted it doesn't make any sense
anymore: having CS_FL_EOS set while still waiting for L4 and L6 to complete
cannot happen since for CS_FL_EOS to be set, the other ones must have been
validated.
Some of these parts will still deserve further cleanup, and some of the
observations above may induce some backports of potential bug fixes once
totally analyzed in their context. The risk of breaking existing stuff
is too high to blindly backport everything.
ocsp_issuer is primarily set from ckch->chain when a PEM is loaded from a
file, but not set when a PEM is loaded via a CLI payload. Set
ckch->ocsp_issuer in
ssl_sock_load_pem_into_ckch to fix that.
Should be backported in 2.1.
We can't just assume conn_create_mux() will be called and set CO_FL_CONNECTED:
conn_complete_session() might be called synchronously if we're not using SSL,
so we have no choice but to set CO_FL_CONNECTED in there. This should fix
the recent breakage of the mcli reg tests.
When using the OCSP response, if the issuer of the response is in
the certificate chain, its address will be stored in ckch->ocsp_issuer.
However, since the ocsp_issuer could be filled from a separate file, this
pointer is freed. The refcount of the X509 needs to be incremented to
avoid a double free if we free the ocsp_issuer AND the chain.
As reported in bug #447, gcc 9.2 invents impossible code paths and then
complains that we don't check for our pointers to be NULL... This code
path is not critical, better add the test to shut it up than try to
help it being less creative.
This code hasn't changed for a while, so it could help distros to
backport this to older releases.
objt_conn() may return a NULL though here we don't have this situation
anymore since the connection is always there, so let's simply switch
to the unchecked __objt_conn(). This addresses issue #454.
Coverity rightfully reported that it's pointless to test for "conn"
to be null while all code paths leading to it have already
dereferenced it. This addresses issue #461.
When using "set ssl cert" on the CLI, if we load a new PEM, the previous
sctl, issuer and OCSP response are still loaded. This doesn't make any
sense since they won't be usable with a new private key.
This patch free the previous data.
Should be backported in 2.1.
When using multiple filters with "show table", it can be useful to
report which filter entry failed
> show table MY_TABLE data.gpc0 gt 0 data.gpc0a lt 1000
Filter entry #2: Unknown data type
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000a
Filter entry #2: Require a valid integer value to compare against
We now also catch garbage data after the filter
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000 data.gpc0 gt 1\
data.gpc0 gt 10 a
Detected extra data in filter, 16th word of input, after '10'
Even before the multi-filter feature we had also silently accepted garbage
after the input, hiding potential bugs
> show table MY_TABLE data.gpc0 gt 0 data.gpc0
or
> show table MY_TABLE data.gpc0 gt 0 a
In both cases, only first filter entry would be used, silently ignoring
extra filter entry or garbage data.
Last but not least, it is now possible to detect the multi-filter
feature from the CLI with something like the following:
> show table MY_TABLE data.blah
Filter entry #1: Unknown data type
The xprt_done_cb callback was used to defer some connection initialization
until we're connected and the handshakes are done. As it mostly consists of
creating the mux, instead of using the callback, introduce a conn_create_mux()
function that will just call conn_complete_session() for frontend connections,
and create the mux for backend ones.
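A rough sketch of the intended logic (function and argument names are
approximations, not the exact code):

  int conn_create_mux(struct connection *conn)
  {
      if (conn_is_back(conn))
          /* backend: pick and install the mux (possibly ALPN-based) */
          return conn_install_mux_be(conn, conn->ctx, conn->owner);
      /* frontend: finish setting up the session */
      return conn_complete_session(conn);
  }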
In h2_wake(), make sure we call the wake method of the stream_interface,
as we no longer wake up the stream task.
In connect_server(), when creating a new connection for which we don't yet
know the mux (because it'll be decided by the ALPN), instead of associating
the connection to the stream_interface, always create a conn_stream. This way,
we have less special-casing needed. Store the conn_stream in conn->ctx,
so that we can reach the upper layers if needed.
We don't properly check for missing data values for additional filter
entries, passing an out-of-bounds index to args[], then passing the result
to strlen(). Introduced in commit 1a693fc2 (MEDIUM: cli: Allow multiple
filter entries for "show table").
Last commit 1a693fc2fd ("MEDIUM: cli: Allow multiple filter entries for "show table"")
broke the build at two places:
src/stick_table.c: In function 'table_prepare_data_request':
src/stick_table.c:3620:33: warning: ordered comparison of pointer with integer zero [-Wextra]
src/stick_table.c: In function 'cli_io_handler_table':
src/stick_table.c:3763:5: error: 'for' loop initial declarations are only allowed in C99 mode
src/stick_table.c:3763:5: note: use option -std=c99 or -std=gnu99 to compile your code
make: *** [src/stick_table.o] Error 1
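The second error is the classic pre-C99 build failure; the usual fix is to
hoist the declaration out of the for() statement (an illustrative sketch,
names hypothetical):

  int i; /* 'for (int i = 0; ...)' would require -std=c99 or -std=gnu99 */

  for (i = 0; i < nb_entries; i++)
      process_entry(i);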
This patch fixes both. No backport needed.
"set ssl cert <filename> <payload>" CLI command should have the same
result as reload HAproxy with the updated pem file (<filename>).
Is not the case, DHparams/cert-chain is kept from the previous
context if no DHparams/cert-chain is set in the context (<payload>).
This patch should be backport to 2.1
In conn_recv_netscaler_cip(), don't forget to allocate conn->src and
conn->dst, as those are now dynamically allocated. Not doing so results in
getting a crash when using netscaler.
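The shape of the fix (a sketch; the allocation helper's exact signature is
an assumption):

  /* conn->src and conn->dst are dynamically allocated since 2.1: make
   * sure they exist before storing the CIP addresses into them.
   */
  if (!sockaddr_alloc(&conn->src) || !sockaddr_alloc(&conn->dst))
      goto fail;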
This should fix github issue #460.
This should be backported to 2.1.
For complex stick tables with many entries/columns, it can be beneficial
to filter using multiple criteria. The maximum number of filter entries
can be controlled by defining STKTABLE_FILTER_LEN during build time.
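For instance, assuming the Makefile's DEFINE variable is used to pass extra
build-time defines:

  $ make TARGET=linux-glibc DEFINE="-DSTKTABLE_FILTER_LEN=8"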
This patch can be backported to older releases.
Released version 2.2-dev1 with the following main changes :
- DOC: this is development again
- MINOR: version: this is development again, update the status
- SCRIPTS: update create-release to fix the changelog on new branches
- CLEANUP: ssl: Clean up error handling
- BUG/MINOR: contrib/prometheus-exporter: decode parameter and value only
- BUG/MINOR: h1: Don't test the host header during response parsing
- BUILD/MINOR: trace: fix use of long type in a few printf format strings
- DOC: Clarify behavior of server maxconn in HTTP mode
- MINOR: ssl: deduplicate ca-file
- MINOR: ssl: compute ca-list from deduplicate ca-file
- MINOR: ssl: deduplicate crl-file
- CLEANUP: dns: resolution can never be null
- BUG/MINOR: http-htx: Don't make http_find_header() fail if the value is empty
- DOC: ssl/cli: set/commit/abort ssl cert
- BUG/MINOR: ssl: fix SSL_CTX_set1_chain compatibility for openssl < 1.0.2
- BUG/MINOR: fcgi-app: Make the directive pass-header case insensitive
- BUG/MINOR: stats: Fix HTML output for the frontends heading
- BUG/MINOR: ssl: fix X509 compatibility for openssl < 1.1.0
- DOC: clarify matching strings on binary fetches
- DOC: Fix ordered list in summary
- DOC: move the "group" keyword at the right place
- MEDIUM: init: prevent process and thread creation at runtime
- BUG/MINOR: ssl/cli: 'ssl cert' cmd only usable w/ admin rights
- BUG/MEDIUM: stream-int: don't subscribed for recv when we're trying to flush data
- BUG/MINOR: stream-int: avoid calling rcv_buf() when splicing is still possible
- BUG/MINOR: ssl/cli: don't overwrite the filters variable
- BUG/MEDIUM: listener/thread: fix a race when pausing a listener
- BUG/MINOR: ssl: certificate choice can be unexpected with openssl >= 1.1.1
- BUG/MEDIUM: mux-h1: Never reuse H1 connection if a shutw is pending
- BUG/MINOR: mux-h1: Don't rely on CO_FL_SOCK_RD_SH to set H1C_F_CS_SHUTDOWN
- BUG/MINOR: mux-h1: Fix conditions to know whether or not we may receive data
- BUG/MEDIUM: tasks: Make sure we switch wait queues in task_set_affinity().
- BUG/MEDIUM: checks: Make sure we set the task affinity just before connecting.
- MINOR: debug: replace popen() with pipe+fork() in "debug dev exec"
- MEDIUM: init: set NO_NEW_PRIVS by default when supported
- BUG/MINOR: mux-h1: Be sure to set CS_FL_WANT_ROOM when EOM can't be added
- BUG/MEDIUM: mux-fcgi: Handle cases where the HTX EOM block cannot be inserted
- BUG/MINOR: proxy: make soft_stop() also close FDs in LI_PAUSED state
- BUG/MINOR: listener/threads: always use atomic ops to clear the FD events
- BUG/MINOR: listener: also clear the error flag on a paused listener
- BUG/MEDIUM: listener/threads: fix a remaining race in the listener's accept()
- MINOR: listener: make the wait paths cleaner and more reliable
- MINOR: listener: split dequeue_all_listener() in two
- REORG: listener: move the global listener queue code to listener.c
- DOC: document the listener state transitions
- BUG/MEDIUM: kqueue: Make sure we report read events even when no data.
- BUG/MAJOR: dns: add minimalist error processing on the Rx path
- BUG/MEDIUM: proto_udp/threads: recv() and send() must not be exclusive.
- DOC: listeners: add a few missing transitions
- BUG/MINOR: tasks: only requeue a task if it was already in the queue
- MINOR: tasks: split wake_expired_tasks() in two parts to avoid useless wakeups
- DOC: proxies: HAProxy only supports 3 connection modes
- DOC: remove references to the outdated architecture.txt
- BUG/MINOR: log: fix minor resource leaks on logformat error path
- BUG/MINOR: mworker: properly pass SIGTTOU/SIGTTIN to workers
- BUG/MINOR: listener: do not immediately resume on transient error
- BUG/MINOR: server: make "agent-addr" work on default-server line
- BUG/MINOR: listener: fix off-by-one in state name check
- BUILD/MINOR: unix sockets: silence an absurd gcc warning about strncpy()
- MEDIUM: h1-htx: Add HTX EOM block when the message is in H1_MSG_DONE state
- MINOR: http-htx: Add some htx sample fetches for debugging purpose
- REGTEST: Add an HTX reg-test to check an edge case
- DOC: clarify the fact that replace-uri works on a full URI
- BUG/MINOR: sample: fix the closing bracket and LF in the debug converter
- BUG/MINOR: sample: always check converters' arguments
- MINOR: sample: Validate the number of bits for the sha2 converter
- BUG/MEDIUM: ssl: Don't set the max early data we can receive too early.
- MINOR: ssl/cli: 'show ssl cert' give information on the certificates
- BUG/MINOR: ssl/cli: fix build for openssl < 1.0.2
- MINOR: debug: support logging to various sinks
- MINOR: http: add a new "replace-path" action
- REGTEST: ssl: test the "set ssl cert" CLI command
- REGTEST: run-regtests: implement #REQUIRE_BINARIES
- MINOR: task: only check TASK_WOKEN_ANY to decide to requeue a task
- BUG/MAJOR: task: add a new TASK_SHARED_WQ flag to fix foreign requeuing
- BUG/MEDIUM: ssl: Revamp the way early data are handled.
- MINOR: fd/threads: make _GET_NEXT()/_GET_PREV() use the volatile attribute
- BUG/MEDIUM: fd/threads: fix a concurrency issue between add and rm on the same fd
- REGTEST: make the "set ssl cert" require version 2.1
- BUG/MINOR: ssl: openssl-compat: Fix getm_ defines
- BUG/MEDIUM: state-file: do not allocate a full buffer for each server entry
- BUG/MINOR: state-file: do not store duplicates in the global tree
- BUG/MINOR: state-file: do not leak memory on parse errors
- BUG/MAJOR: mux-h1: Don't pretend the input channel's buffer is full if empty
- BUG/MEDIUM: stream: Be sure to never assign a TCP backend to an HTX stream
- BUILD: ssl: improve SSL_CTX_set_ecdh_auto compatibility
- BUILD: travis-ci: link with ssl libraries using rpath instead of LD_LIBRARY_PATH/DYLD_LIBRARY_PATH
- BUILD: travis-ci: reenable address sanitizer for clang builds
- BUG/MINOR: checks: refine which errno values are really errors.
- BUG/MINOR: connection: only wake send/recv callbacks if the FD is active
- CLEANUP: connection: conn->xprt is never NULL
- MINOR: pollers: add a new flag to indicate pollers reporting ERR & HUP
- MEDIUM: tcp: make tcp_connect_probe() consider ERR/HUP
- REORG: connection: move tcp_connect_probe() to conn_fd_check()
- MINOR: connection: check for connection validation earlier
- MINOR: connection: remove the double test on xprt_done_cb()
- CLEANUP: connection: merge CO_FL_NOTIFY_DATA and CO_FL_NOTIFY_DONE
- MINOR: poller: do not call the IO handler if the FD is not active
- OPTIM: epoll: always poll for recv if neither active nor ready
- OPTIM: polling: do not create update entries for FD removal
- BUG/MEDIUM: checks: Only attempt to do handshakes if the connection is ready.
- BUG/MEDIUM: connections: Hold the lock when wanting to kill a connection.
- BUILD: CI: modernize cirrus-ci
- MINOR: config: disable busy polling on old processes
- MINOR: ssl: Remove unused variable "need_out".
- BUG/MINOR: h1: Report the right error position when a header value is invalid
- BUG/MINOR: proxy: Fix input data copy when an error is captured
- BUG/MEDIUM: http-ana: Truncate the response when a redirect rule is applied
- BUG/MINOR: channel: inject output data at the end of output
- BUG/MEDIUM: session: do not report a failure when rejecting a session
- MEDIUM: dns: implement synchronous send
- MINOR: raw_sock: make sure to disable polling once everything is sent
- MINOR: http: Add 410 to http-request deny
- MINOR: http: Add 404 to http-request deny
- CLEANUP: mux-h2: remove unused goto "out_free_h2s"
- BUILD: cirrus-ci: choose proper openssl package name
- BUG/MAJOR: listener: do not schedule a task-less proxy
- CLEANUP: server: remove unused err section in server_finalize_init
- REGTEST: set_ssl_cert.vtc: replace "echo" with "printf"
- BUG/MINOR: stream-int: Don't trigger L7 retry if max retries is already reached
- BUG/MEDIUM: tasks: Use the MT macros in tasklet_free().
- BUG/MINOR: mux-h2: use a safe list_for_each_entry in h2_send()
- BUG/MEDIUM: mux-h2: fix missing test on sending_list in previous patch
- CLEANUP: ssl: remove opendir call in ssl_sock_load_cert
- MEDIUM: lua: don't call the GC as often when dealing with outgoing connections
- BUG/MEDIUM: mux-h2: don't stop sending when crossing a buffer boundary
- BUG/MINOR: cli/mworker: can't start haproxy with 2 programs
- REGTEST: mcli/mcli_start_progs: start 2 programs
- BUG/MEDIUM: mworker: remain in mworker mode during reload
- DOC: clarify crt-base usage
- CLEANUP: compression: remove unused deinit_comp_ctx section
- BUG/MEDIUM: mux_h1: Don't call h1_send if we subscribed().
- BUG/MEDIUM: raw_sock: Make sure the fd and conn are in sync.
- CLEANUP: proxy: simplify proxy_parse_rate_limit proxy checks
- BUG/MAJOR: hashes: fix the signedness of the hash inputs
- REGTEST: add sample_fetches/hashes.vtc to validate hashes
- BUG/MEDIUM: cli: _getsocks must send the peers sockets
- CLEANUP: cli: deduplicate the code in _getsocks
- BUG/MINOR: stream: don't mistake match rules for store-request rules
- BUG/MEDIUM: connection: add a mux flag to indicate splice usability
- BUG/MINOR: pattern: handle errors from fgets when trying to load patterns
- MINOR: connection: move the CO_FL_WAIT_ROOM cleanup to the reader only
- MINOR: stream-int: remove dependency on CO_FL_WAIT_ROOM for rcv_buf()
- MEDIUM: connection: get rid of CO_FL_CURR_* flags
- BUILD: pattern: include errno.h
- MEDIUM: mux-h2: do not try to stop sending streams on blocked mux
- MEDIUM: mux-fcgi: do not try to stop sending streams on blocked mux
- MEDIUM: mux-h2: do not make an h2s subscribe to itself on deferred shut
- MEDIUM: mux-fcgi: do not make an fstrm subscribe to itself on deferred shut
- REORG: stream/backend: move backend-specific stuff to backend.c
- MEDIUM: backend: move the connection finalization step to back_handle_st_con()
- MEDIUM: connection: merge the send_wait and recv_wait entries
- MEDIUM: xprt: merge recv_wait and send_wait in xprt_handshake
- MEDIUM: ssl: merge recv_wait and send_wait in ssl_sock
- MEDIUM: mux-h1: merge recv_wait and send_wait
- MEDIUM: mux-h2: merge recv_wait and send_wait event notifications
- MEDIUM: mux-fcgi: merge recv_wait and send_wait event notifications
- MINOR: connection: make the last arg of subscribe() a struct wait_event*
- MINOR: ssl: Add support for returning the dn samples from ssl_(c|f)_(i|s)_dn in LDAP v3 (RFC2253) format.
- DOC: Fix copy and paste mistake in http-response replace-value doc
- BUG/MINOR: cache: Fix leak of cache name in error path
- BUG/MINOR: dns: Make dns_query_id_seed unsigned
- BUG/MINOR: 51d: Fix bug when HTX is enabled
- MINOR: http-htx: Move htx sample fetches in the scope "internal"
- MINOR: http-htx: Rename 'internal.htx_blk.val' to 'internal.htx_blk.data'
- MINOR: http-htx: Make 'internal.htx_blk_data' return a binary string
- DOC: Add a section to document the internal sample fetches
- MINOR: mux-h1: Inherit send flags from the upper layer
- MINOR: contrib/prometheus-exporter: Add healthcheck status/code in server metrics
- BUG/MINOR: http-ana/filters: Wait end of the http_end callback for all filters
- BUG/MINOR: http-rules: Remove buggy deinit functions for HTTP rules
- BUG/MINOR: stick-table: Use MAX_SESS_STKCTR as the max track ID during parsing
- MEDIUM: http-rules: Register an action keyword for all http rules
- MINOR: tcp-rules: Always set from which ruleset a rule comes from
- MINOR: actions: Use ACT_RET_CONT code to ignore an error from a custom action
- MINOR: tcp-rules: Kill connections when custom actions return ACT_RET_ERR
- MINOR: http-rules: Return an error when custom actions return ACT_RET_ERR
- MINOR: counters: Add a counter to report internal processing errors
- MEDIUM: http-ana: Properly handle internal processing errors
- MINOR: http-rules: Add a rule result to report internal error
- MINOR: http-rules: Handle internal errors during HTTP rules evaluation
- MINOR: http-rules: Add more return codes to let custom actions act as normal ones
- MINOR: tcp-rules: Handle denied/aborted/invalid connections from TCP rules
- MINOR: http-rules: Handle denied/aborted/invalid connections from HTTP rules
- MINOR: stats: Report internal errors in the proxies/listeners/servers stats
- MINOR: contrib/prometheus-exporter: Export internal errors per proxy/server
- MINOR: counters: Remove failed_secu counter and use denied_resp instead
- MINOR: counters: Review conditions to increment counters from analysers
- MINOR: http-ana: Add a txn flag to support soft/strict message rewrites
- MINOR: http-rules: Handle all message rewrites the same way
- MINOR: http-rules: Add a rule to enable or disable the strict rewriting mode
- MEDIUM: http-rules: Enable the strict rewriting mode by default
- REGTEST: Fix format of set-uri HTTP request rule in h1or2_to_h1c.vtc
- MINOR: actions: Add a function pointer to release args used by actions
- MINOR: actions: Regroup some info about HTTP rules in the same struct
- MINOR: http-rules/tcp-rules: Call the defined action function first if defined
- MINOR: actions: Rename the act_flag enum into act_opt
- MINOR: actions: Add flags to configure the action behaviour
- MINOR: actions: Use an integer to set the action type
- MINOR: http-rules: Use a specific action type for some custom HTTP actions
- MINOR: http-rules: Make replace-header and replace-value custom actions
- MINOR: http-rules: Make set-header and add-header custom actions
- MINOR: http-rules: Make set/del-map and add/del-acl custom actions
- MINOR: http-rules: Group all processing of early-hint rule in its case clause
- MEDIUM: http-rules: Make early-hint custom actions
- MINOR: http-rule/tcp-rules: Make track-sc* custom actions
- MINOR: tcp-rules: Make tcp-request capture a custom action
- MINOR: http-rules: Add release functions for existing HTTP actions
- BUG/MINOR: http-rules: Fix memory releases on error path during action parsing
- MINOR: tcp-rules: Add release functions for existing TCP actions
- BUG/MINOR: tcp-rules: Fix memory releases on error path during action parsing
- MINOR: http-htx: Add functions to read a raw error file and convert it in HTX
- MINOR: http-htx: Add functions to create HTX redirect message
- MINOR: config: Use dedicated function to parse proxy's errorfiles
- MINOR: config: Use dedicated function to parse proxy's errorloc
- MEDIUM: http-htx/proxy: Use a global and centralized storage for HTTP error messages
- MINOR: proxy: Register keywords to parse errorfile and errorloc directives
- MINOR: http-htx: Add a new section to create groups of custom HTTP errors
- MEDIUM: proxy: Add a directive to reference an http-errors section in a proxy
- MINOR: http-rules: Update txn flags and status when a deny rule is executed
- MINOR: http-rules: Support an optional status on deny rules for http responses
- MINOR: http-rules: Use same function to parse request and response deny actions
- MINOR: http-ana: Add an error message in the txn and send it when defined
- MEDIUM: http-rules: Support an optional error message in http deny rules
- REGTEST: Add a strict rewriting mode reg test
- REGTEST: Add reg tests about error files
- MINOR: ssl: accept 'verify' bind option with 'set ssl cert'
- BUG/MINOR: ssl: ssl_sock_load_ocsp_response_from_file memory leak
- BUG/MINOR: ssl: ssl_sock_load_issuer_file_into_ckch memory leak
- BUG/MINOR: ssl: ssl_sock_load_sctl_from_file memory leak
- BUG/MINOR: http_htx: Fix some leaks on error path when error files are loaded
- CLEANUP: http-ana: Remove useless test on txn when the error message is retrieved
- BUILD: CI: introduce ARM64 builds
- BUILD: ssl: more elegant anti-replay feature presence check
- MINOR: proxy/http-ana: Add support of extra attributes for the cookie directive
- MEDIUM: dns: use Additional records from SRV responses
- CLEANUP: Consistently `unsigned int` for bitfields
- CLEANUP: pattern: remove the pat_time definition
- BUG/MINOR: http_act: don't check capture id in backend
- BUG/MINOR: ssl: fix build on development versions of openssl-1.1.x
A wrong behavior was introduced by
e9544935e8, preventing any configuration where a capture slot id is used
in a backend from loading.
I.e., the configuration below does not parse:
frontend f
bind *:80
declare capture request len 32
default_backend webserver
backend webserver
http-request capture req.hdr(Host) id 1
Such a configuration is valid and should run.
This patch enforces the check of capture slot id only if the action rule
is configured in a frontend.
The point is that at configuration parsing time, it is impossible to
know which frontend could point to this backend (all the more so when
dynamic backend name resolution is used at runtime).
The documentation has been updated to warn the user to ensure that
relevant frontends have the required declaration when such a rule has
to be used in a backend.
If no capture slot can be found, then the action will just not be
executed and HAProxy will process the next one in the list, as expected.
This should be backported to all supported branches (bug created as part
of a bug fix introduced into 1.7 and backported to 1.6).
Most DNS servers provide A/AAAA records in the Additional section of a
response, which correspond to the SRV records from the Answer section:
;; QUESTION SECTION:
;_http._tcp.be1.domain.tld. IN SRV
;; ANSWER SECTION:
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A1.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A8.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A5.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A6.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A4.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A3.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A2.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A7.domain.tld.
;; ADDITIONAL SECTION:
A1.domain.tld. 3600 IN A 192.168.0.1
A8.domain.tld. 3600 IN A 192.168.0.8
A5.domain.tld. 3600 IN A 192.168.0.5
A6.domain.tld. 3600 IN A 192.168.0.6
A4.domain.tld. 3600 IN A 192.168.0.4
A3.domain.tld. 3600 IN A 192.168.0.3
A2.domain.tld. 3600 IN A 192.168.0.2
A7.domain.tld. 3600 IN A 192.168.0.7
SRV record support was introduced in HAProxy 1.8 and the first design
did not take into account the records from the Additional section.
Instead, a new resolution is associated to each server with its relevant
FQDN.
This behavior generates a lot of DNS requests (1 SRV request + 1 per
associated server).
This patch aims at fixing this by:
- associating A/AAAA records to the relevant SRV ones when a DNS
response is validated
- setting a flag on associated servers to prevent them from running a
DNS resolution for said FQDN
- updating the server IP address with the information found in the
Additional section
If no relevant record can be found in the Additional section, then
HAProxy will fall back to running a dedicated resolution for this server,
as it used to do.
This behavior is the one described in RFC 2782.
It is now possible to insert any attribute when a cookie is inserted by
HAProxy. Any value may be set; no check is performed except for syntax validity
(CTRL chars and ';' are forbidden). For instance, it may be used to add the
SameSite attribute:
cookie SRV insert attr "SameSite=Strict"
The attr option may be repeated to add several attributes.
This patch should fix the issue #361.
Since the patches initiated with d4f9a60e ("MINOR: ssl: deduplicate ca-file"),
file access is no longer needed for the 'verify' bind options (crl/ca file).
Remove the conditional restriction for the "set ssl cert" CLI command.
It is now possible to set the error message to use when a deny rule is
executed. It may be a specific error file, adding "errorfile <file>" :
http-request deny deny_status 400 errorfile /etc/haproxy/errorfiles/400badreq.http
It may also be an error file from an http-errors section, adding "errorfiles
<name>" :
http-request deny errorfiles my-errors # use 403 error from "my-errors" section
When defined, this error message is set in the HTTP transaction. The tarpit
rule is also affected by this change.
It is now possible to set the error message to return to the client in the
HTTP transaction. If it is defined, this error message is used instead of the
proxy's errors or the default errors.
When a deny rule is executed, the flag TX_CLDENY and the status code are set
on the HTTP transaction. Now, these steps are handled by the code executing
the deny rule, i.e. in http_req_get_intercept_rule() for the request and
http_res_get_intercept_rule() for the response.
It is now possible to import into a proxy, fully or partially, error files
declared in an http-errors section. It may be done using the "errorfiles"
directive, followed by a name and optionally a list of status codes. If there
is no status code specified, all error files of the http-errors section are
imported. Otherwise, only error files associated with the listed status codes
are imported. For instance:
http-errors my-errors
errorfile 400 ...
errorfile 403 ...
errorfile 404 ...
frontend frt
errorfiles my-errors 403 404 # ==> error 400 not imported
A new section may now be declared in the configuration to create global groups
of HTTP errors. These groups are not linked to a proxy and are referenced by
name. The section must be declared using the keyword "http-errors" followed by
the group name. This name must be unique. A list of "errorfile" directives may
be declared in such section. For instance:
http-errors website-1
errorfile 400 /path/to/site1/400.http
errorfile 404 /path/to/site1/404.http
http-errors website-2
errorfile 400 /path/to/site2/400.http
errorfile 404 /path/to/site2/404.http
For now, it is just possible to create "http-errors" sections. There is no
documentation because these groups are not used yet.
All custom HTTP errors are now stored in a global tree. Proxies use
references to these messages. The key used for errorfile directives is the
file name as specified in the configuration. For errorloc directives, a key
is created using the redirect code and the URL. This means that the same
custom error message is now stored only once. It may be used in several
proxies or for several status codes, but it is only parsed and stored once.
http_parse_errorloc() may now be used to create an HTTP 302 or 303 redirect
message with a specific URL passed as a parameter. Another parameter selects
whether it is a 302 or a 303 redirect. A status code is also passed; it must
be one of the supported HTTP error codes, otherwise an error is returned. This
function aims to be used to parse "errorloc" directives. It relies on
http_load_errormsg() to do most of the job, i.e. converting the message into HTX.
http_parse_errorfile() may now be used to parse a raw HTTP message from a
file. A status code is passed as a parameter; it must be one of the supported
HTTP error codes, otherwise an error is returned. This function aims to be
used to parse "errorfile" directives. It relies on http_load_errorfile() to do
most of the job, i.e. reading the file content and converting it into HTX.
When an error occurred during the parsing of a TCP action, if some memory was
allocated, it should be released before exiting. Here, the fix consists in
replacing a call to free() on a sample expression with a call to
release_sample_expr().
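The shape of the fix (illustrative):

  /* release_sample_expr() frees the expression together with its
   * arguments; a bare free() would leak them.
   */
  release_sample_expr(expr);   /* was: free(expr) */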
This patch may be backported to all supported versions.
When an error occurred during the parsing of an HTTP action, if some memory
was allocated, it should be released before exiting. Sometimes a call to
free() is used on a sample expression instead of a call to
release_sample_expr(). At other times, it is just a string or a regex that
should be released.
There is no real reason to backport this patch, especially because this part
was heavily modified recently in 2.2-DEV.
Now, this action uses its own dedicated function and is no longer handled "in
place" during the TCP rules evaluation. Thus the action name ACT_TCP_CAPTURE
is removed. The action type is set to ACT_CUSTOM and a check function is used
to know if the rule depends on request contents while there is no inspect-delay.
Now, these actions use their own dedicated function and are no longer handled
"in place" during the TCP/HTTP rules evaluation. Thus the action names
ACT_ACTION_TRK_SC0 and ACT_ACTION_TRK_SCMAX are removed. The action type is now
the tracking index. Thus the function trk_idx() is no longer needed.
Now, the early-hint action uses its own dedicated function and is no longer
handled "in place" during the HTTP rules evaluation. Thus the action name
ACT_HTTP_EARLY_HINT is removed. In addition, http_add_early_hint_header() and
http_reply_103_early_hints() are also removed. This part is now handled in the
new action_ptr callback function.
To know if the 103 response start-line must be added, we test if it is the first
rule of the ruleset or if the previous rule is not an early-hint rule. And at
the end, to know if the 103 response must be terminated, we test if it is the
last rule of the ruleset or if the next rule is not an early-hint rule. This
way, all the code dealing with early-hint rules is grouped in its case clause.
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_*_ACL and ACT_HTTP_*_MAP are removed. The action type is now mapped
as follows: 0 = add-acl, 1 = set-map, 2 = del-acl and 3 = del-map.
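A hypothetical way to picture this mapping (illustrative names; the real
code stores plain integers in the rule's action field):

  enum {
      ACT_MAP_ADD_ACL = 0,
      ACT_MAP_SET_MAP = 1,
      ACT_MAP_DEL_ACL = 2,
      ACT_MAP_DEL_MAP = 3,
  };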
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_SET_HDR and ACT_HTTP_ADD_VAL are removed. The action type is now set to
0 to set a header (so remove existing ones if any and add a new one) or to 1 to
add a header (add without remove).
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_REPLACE_HDR and ACT_HTTP_REPLACE_VAL are removed. The action type is
now set to 0 to evaluate the whole header or to 1 to evaluate every
comma-delimited values.
The function http_transform_header_str() is renamed to http_replace_hdrs() to be
more explicit and the function http_transform_header() is removed. In fact, this
last one is now more or less the new action function.
The lua code has been updated accordingly to use http_replace_hdrs().
For set-method, set-path, set-query and set-uri, a specific action type is
used, the same as before but no longer stored in <arg.http.i>. The same is
done for replace-path and replace-uri, which use the same types as the "set-"
versions.
Some flags can now be set on an action when it is registered. The flags are
defined in the act_flag enum. For now, only ACT_FLAG_FINAL may be set on an
action to specify if it stops the rules evaluation. It is set on
ACT_ACTION_ALLOW, ACT_ACTION_DENY, ACT_HTTP_REQ_TARPIT, ACT_HTTP_REQ_AUTH,
ACT_HTTP_REDIR and ACT_TCP_CLOSE actions. But, when required, it may also be set
on custom actions.
Consequently, this flag is checked instead of the action type during the
configuration parsing to trigger a warning when a rule inhibits all the
following ones.
The flags in the act_flag enum have been renamed act_opt. It means ACT_OPT
prefix is used instead of ACT_FLAG. The purpose of this patch is to reserve the
action flags for the actions configuration.
When TCP and HTTP rules are evaluated, if an action function (action_ptr field
in the act_rule structure) is defined for a given action, it is now always
called in priority over the test on the action type. Concretely, for now, only
custom actions define it, so there is no change. It just gives us the option to
extend the action type beyond the existing ones in the enum.
The info used by HTTP rules manipulating the message itself is split across
several structures in the arg union, but it is possible to group all of it in
a single struct. Now, <arg.http> is used by most of these rules, which contains:
* <arg.http.i> : an integer used as status code, nice/tos/mark/loglevel or
action id.
* <arg.http.str> : an IST used as header name, reason string or auth realm.
* <arg.http.fmt> : a log-format compatible expression
* <arg.http.re> : a regular expression used by replace rules
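Put together, this looks roughly like the following (field types are
approximated from the descriptions above):

  struct {
      int i;               /* status code, nice/tos/mark/loglevel or action id */
      struct ist str;      /* header name, reason string or auth realm */
      struct list fmt;     /* log-format compatible expression */
      struct my_regex *re; /* regular expression used by replace rules */
  } http;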
Arguments used by actions are never released during HAProxy deinit. Now, it
is possible to specify a function to do so: the ".release_ptr" field in the
act_rule structure may be set during the configuration parsing to a specific
deinit function depending on the action type.
Now, by default, when a rule performing a rewrite on an HTTP message fails, an
internal error is triggered. Before, the failure was ignored. But most users
are not aware of this behavior, and it does not happen very often because the
buffer reserve space is large enough, so it may be surprising. Returning an
internal error makes the rewrite failure explicit. If it is acceptable to
silently ignore it, the strict rewriting mode can be disabled.
It is now possible to explicitly instruct rewriting rules to be strict or not
towards errors. In strict mode, an internal error is triggered if a rewrite
rule fails. The HTTP action "strict-mode" can be used to enable or disable the
strict rewriting mode. It can be used in an http-request and an http-response
ruleset.
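For example:

  http-request strict-mode off    # tolerate rewrite failures in this ruleset
  http-response strict-mode on    # report an internal error on rewrite failure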
For now, by default the strict rewriting mode is disabled. Because it is the
current behavior. But it will be changed in another patch.
In HTTP rules, error handling during a rewrite is now handled the same way for
all rules. First, allocation errors are reported as internal errors. Then, if
soft rewrites are allowed, rewrite errors are ignored and only the
failed_rewrites counter is incremented. Otherwise, when strict rewrites are
mandatory, internal errors are returned.
For now, only soft rewrites are supported. Note also that the warning sent to
notify a rewrite failure was removed. It will be useless once strict rewrites
become possible.
The HTTP_MSGF_SOFT_RW flag must now be set on the HTTP transaction to ignore
rewrite errors on a message, from HTTP rules. This mode is called soft
rewrites. If this flag is not set, strict rewrites are performed; in that
mode, if a rewrite error occurs, an internal error is reported.
For now, HTTP_MSGF_SOFT_RW is always set and there is no way to switch a
transaction to strict mode.
Now, for these counters, the following rules are followed to know if it must be
incremented or not:
* if it exists for a frontend, the counter is incremented
* if stats must be collected for the session's listener, if the counter exists
for this listener, it is incremented
* if the backend is already assigned, if the counter exists for this backend,
it is incremented
* if a server is attached to the stream, if the counter exists for this
server, it is incremented
These are not hardcoded rules; some counters are still handled in a different
way, but many counters are now incremented this way.
The failed_secu counter is only used for the servers stats. It is used to report
the number of denied responses. On proxies, the same info is stored in the
denied_resp counter. So, it is more consistent to use the same field for
servers.
The stats field ST_F_EINT has been added to report internal errors encountered
per proxy, per listener and per server. It appears in the CLI export and on the
HTML stats page.
The new possible results for a custom action (deny/abort/invalid) are now handled
during HTTP rules evaluation. These codes are mapped onto HTTP rule ones:
* ACT_RET_DENY => HTTP_RULE_RES_DENY
* ACT_RET_ABRT => HTTP_RULE_RES_ABRT
* ACT_RET_INV => HTTP_RULE_RES_BADREQ
For now, no custom action uses these new codes.
The new possible results for a custom action (deny/abort/invalid) are now
handled during TCP rules evaluation. For L4/L5 rules, the session is
rejected. For L7 rules, the right counter is incremented, then the connections
killed.
For now, no custom action uses these new codes.
The HTTP_RULE_RES_ERROR code is now used by HTTP analyzers to handle internal
errors during HTTP rules evaluation. It is used instead of HTTP_RULE_RES_BADREQ,
used for invalid requests/responses. In addition, the SF_ERR_RESOURCE flag is
set on the stream when an allocation failure happens.
Note that the return value of http-response rules evaluation is now tested
the same way as the result of http-request rules evaluation.
Now, processing errors are properly handled. Instead of returning an error 400
or 502, depending on where the error happens, an error 500 is now returned, and
the processing_errors counter is incremented. By default, when such an error is
detected, the SF_ERR_INTERNAL stream error is used. When the error is caused by
an allocation failure, and when it is reasonably possible, the SF_ERR_RESOURCE
stream error is used. Thanks to this patch, bad requests and bad responses
should be easier to detect.
Thanks to the commit "MINOR: actions: Use ACT_RET_CONT code to ignore an error
from a custom action", it is now possible to trigger an error from a custom
action in http rules. Now, when a custom action returns the ACT_RET_ERR code
from an http-request rule, an error 400 is returned. And from an http-response
rule, an error 502 is returned.
Be careful if this patch is backported. The other mentioned patch must be
backported first.
Thanks to the commit "MINOR: actions: Use ACT_RET_CONT code to ignore an error
from a custom action", it is now possible to trigger an error from a custom
action in tcp-content rules. Now, when a custom action returns the ACT_RET_ERR
code, it has the same behavior as a reject rule: the connection is killed.
Be careful if this patch is backported. The other mentioned patch must be
backported first.
Some custom actions are just ignored and skipped when an error is encountered.
In that case, we jump to the next rule. To do so, most of them use the return
code ACT_RET_ERR. Currently, for http rules and tcp content rules, it is not a
problem because this code is handled the same way as ACT_RET_CONT. But it
means there is no way to handle the error as other actions do. The custom
actions must handle the error and return ACT_RET_DONE. For instance, when
http-request rules are processed, an error when we try to replace a header
value leads to a bad request and an error 400 is returned to the client. But
when we fail to replace the URI, the error is silently ignored. This
difference between the custom actions and the others is an obstacle to writing
new custom actions.
So, in this first patch, ACT_RET_CONT is now returned from custom actions
instead of ACT_RET_ERR when an error is encountered if it should be ignored.
The behavior remains the same but it is now possible to handle true errors
using the return code ACT_RET_ERR. Some actions will probably be reviewed to
determine if an error is fatal or not. Other patches will be pushed to trigger
an error when a custom action returns the ACT_RET_ERR code.
This patch is not tagged as a bug because it is just a design issue. But
others will depend on it, so be careful during backports, if any.
The ruleset a TCP rule comes from (the <from> field in the act_rule
structure) is only set when a rule is created from a registered keyword and
not for all TCP rules. But this information may be useful to check the
configuration validity or during the rule evaluation. So now, we
systematically set it.
There are many specific http actions that don't use the action registration
mechanism (allow, deny, set-header...). Instead, the parsing of these actions
is inlined in the functions responsible for parsing the http-request and
http-response rules. There is no reason not to register an action keyword for
all these actions. It is the purpose of this patch. The new functions
responsible for parsing these http actions are defined in http_act.c.
During the parsing of the sc-inc-gpc0, sc-inc-gpc1 and sc-inc-gpt1 actions, the
maximum stick table track ID allowed is tested against ACT_ACTION_TRK_SCMAX. It
is the action number and not the maximum number of stick counters. Instead,
MAX_SESS_STKCTR must be used.
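The shape of the fix (illustrative variable names):

  /* validate the track ID against the number of stick counters, not
   * against the ACT_ACTION_TRK_SCMAX action number
   */
  if (num >= MAX_SESS_STKCTR) {
      memprintf(err, "invalid stick counter id '%d'", num);
      return ACT_RET_PRS_ERR;
  }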
This patch must be backported to all stable versions.
Functions to deinitialize the HTTP rules are buggy. These functions do not
check the action name to release the right part in the arg union. Only a few
fields are released. For auth rules, the realm is released and there is no
problem here. But the regex <arg.hdr_add.re> is always unconditionally
released. So it is easy to make these functions crash. For instance, with the
following rule HAProxy crashes during the deinit:
http-request set-map(/path/to/map) %[src] %[req.hdr(X-Value)]
For now, these functions are simply removed and we rely on the deinit function
used for TCP rules (renamed as deinit_act_rules()). This patch fixes the bug.
But arguments used by actions are not released at all; this part will be
addressed later.
This patch must be backported to all stable versions.
Filters may define the "http_end" callback, called at the end of the analysis
of any HTTP message. It is called at the end of the payload forwarding and it
can interrupt the stream processing. So we must be sure not to remove the
XFER_BODY analyzers while there is still at least one filter in progress on
this callback.
Unfortunately, once the request and the response are both in the DONE or the
TUNNEL mode, we consider the XFER_BODY analyzer has finished its processing on
both sides. So it is possible to prematurely interrupt the execution of the
filters' "http_end" callback.
To fix this bug, we first switch a message into the ENDING state. It is then
switched into DONE/TUNNEL mode only after the execution of the filters'
"http_end" callback.
This patch must be backported (and adapted) to 2.1, 2.0 and 1.9. The legacy
HTTP mode should probably be fixed too.
Send flags (CO_SFL_*) used when xprt->snd_buf() is called, in h1_send(), are
now inherited from the upper layer, when h1_snd_buf() is called. First, the
flag CO_SFL_MSG_MORE is no longer set if the output buffer is full, but only
if the stream-interface decides to set it; it has more info to do so than the
mux. Then, the flag CO_SFL_STREAMER is now also handled this way. It was just
ignored till now.
When HTX is enabled, the sample flags were set too early. When matching for
multiple HTTP headers, the sample is fetched more than once, meaning that the
flags would need to be set again. Instead, the flags are now set last (just
before the outermost function returns). This could be further improved by
passing around the message without calling prefetch again.
This patch must be backported as far as 1.9. It should fix bug #450.
Left shifting of large signed values and negative values is undefined.
In a test script clang's ubsan rightfully complains:
> runtime error: left shift of 1934242336581872173 by 13 places cannot be represented in type 'int64_t' (aka 'long')
This bug was introduced in the initial version of the DNS resolver
in 325137d603. The fix must be backported
to HAProxy 1.6+.
Modifies the existing sample extraction methods (smp_fetch_ssl_x_i_dn,
smp_fetch_ssl_x_s_dn) to accommodate a third argument that indicates the
DN should be returned in LDAP v3 format. When the third argument is
present, the new function (ssl_sock_get_dn_formatted) is called with
three parameters including the X509_NAME, a buffer containing the format
argument, and a buffer for the output. If the supplied format matches
the supported format string (currently only "rfc2253" is supported), the
formatted value is extracted into the supplied output buffer using
OpenSSL's X509_NAME_print_ex and BIO_s_mem. 1 is returned when a dn
value is retrieved. 0 is returned when a value is not retrieved.
Argument validation is added to each of the related sample
configurations to ensure the third argument passed is either blank or
"rfc2253" using strcmp. An error is returned if the third argument is
present with any other value.
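For example, assuming the format is passed as the third argument of the
fetch (illustrative configuration):

  http-request set-header X-Client-DN %[ssl_c_s_dn(,,rfc2253)]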
Documentation was updated in configuration.txt and it was noted during
preliminary reviews that a CLEANUP patch should follow that adjusts the
documentation. Currently, this patch and the existing documentation are
copied with some minor revisions for each sample configuration. It
might be better to have one entry for all of the samples or entries for
each that reference back to a primary entry that explains the sample in
detail.
Special thanks to Chris, Willy, Tim and Aleks for the feedback.
Author: Elliot Otchet <degroens@yahoo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
The subscriber used to be passed as a "void *param" that was systematically
cast to a struct wait_event*. By now it appears clear that the subscribe()
call at every layer is well defined and always takes a pointer to an event
subscriber of type wait_event, so let's enforce this in the functions'
prototypes, remove the intermediary variables used to cast it and clean up
the comments to clarify what all these functions do in their context.
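After this change, all layers share the same shape; the mux variant looks
like this (argument lists may differ slightly per layer):

  int (*subscribe)(struct conn_stream *cs, int event_type,
                   struct wait_event *es);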
This is the last of the "recv_wait+send_wait merge" patches and is
functionally equivalent to previous commit "MEDIUM: mux-h2: merge
recv_wait and send_wait event notifications" but for FCGI this time.
The principle is pretty much the same, since the code is very similar.
We use a single wait_event for both recv and send and rely on the
subscribe flags to know the desired notifications.
This is the continuation of the recv+send event notifications merge
that was started. This patch is less trivial than the previous ones
because the existence of a send event subscription is also used to
decide to put a stream back into the send list.
This is the same principle as previous commit, but for the H1 mux this
time. The checks in the subscribe()/unsubscribe() calls were factored
and some BUG_ON() were added to detect unexpected cases.
h1_wake_for_recv() and h1_wake_for_send() needed to be refined to
consider the current subscription before deciding to wake up.
In practice all callers use the same wait_event notification for any I/O
so instead of keeping specific code to handle them separately, let's merge
them and it will allow us to create new events later.
Currently there's still lots of code in conn_complete_server() that performs
one half of the connection setup, which is then checked and finalized in
back_handle_st_con(). There isn't a valid reason for this anymore, we can
simplify this and make sure that conn_complete_server() only wakes the stream
up to inform it about the fact the whole connection stack is set up so that
back_handle_st_con() finishes its job at the stream-int level.
It looks like this could even be further simplified, but for now the code was
moved straight out of conn_complete_server() with no modification.
For more than a decade we've kept all the sess_update_st_*() functions
in stream.c while they're only there to work in relation with what is
currently being done in backend.c (srv_redispatch_connect, connect_server,
etc). Let's move all this pollution over there and take this opportunity
to try to find slightly less confusing names for these old functions
whose role is only to handle transitions from one specific stream-int
state:
sess_update_st_rdy_tcp() -> back_handle_st_rdy()
sess_update_st_con_tcp() -> back_handle_st_con()
sess_update_st_cer() -> back_handle_st_cer()
sess_update_stream_int() -> back_try_conn_req()
sess_prepare_conn_req() -> back_handle_st_req()
sess_establish() -> back_establish()
The last one remained in stream.c because it's more or less a completion
function which does all the initialization expected on a connection
success or failure, can set analysers and emit logs.
The other ones could possibly slightly benefit from being modified to
take a stream-int instead since it's really what they're working with,
but it's unimportant here.
This is the port to FCGI of previous commit "MEDIUM: mux-h2: do not make
an h2s subscribe to itself on deferred shut".
The purpose is to avoid subscribing to the send_wait list when trying to
close, because we'll soon have to merge both recv and send lists. Basic
testing showed no difference (performance nor issues).
The logic handling the deferred shutdown is a bit complex because it
involves a wait_event struct in each h2s dedicated to subscribing to
itself when shutdowns are not immediately possible. This implies that
we will not be able to support a shutdown and a receive subscription
in the future when we merge all wait events.
Let's solely rely on the H2_SF_WANT_SHUT_{R,W} flags instead and have
an autonomous tasklet for this. This requires adding a few controls
in the code because now, when waking up a stream, we need to check if it
is for I/O or just a shut, but since sending and shutting are exclusive
it's not difficult.
One point worth noting is that further resources could be shaved off
by only allocating the tasklet when failing to shut, given that in the
vast majority of streams it will never be used. In fact the sole purpose
of the tasklet is to support calling this code from outside the H2 mux
context. Looking at the code, it seems that not too many adaptations
would be required to have the send_list walking code deal with sending
the shut bits itself and further simplify all this.
This is essentially the same change as applied to mux-h2 in previous commit
"MEDIUM: mux-h2: do not try to stop sending streams on blocked mux". The
goal is to make sure we don't need to keep the item in the send_wait list
until it's executed so that we can later merge it with the recv_wait list.
No performance changes were observed.
This partially reverts commit d846c267 ("MINOR: h2: Don't run tasks that
are waiting to send if mux in full"). This commit was introduced to
limit the start/stop overhead incurred by waking many streams to let
only a few work. But since commit 9c218e7521 ("MAJOR: mux-h2: switch
to next mux buffer on buffer full condition."), this situation occurs
way less (typically 2000 to 4000 times less often) and the benefits of
the patch above do not outweigh its shortcomings anymore. And commit
c7ce4e3e7f ("BUG/MEDIUM: mux-h2: don't stop sending when crossing a
buffer boundary") addressed a root cause of many unexpected sleeps and
wakeups.
The main problem it's causing is that it requires to keep the element
in the send_wait list until it's executed, leaving the entry in an
uncertain state, and significantly complicating the coexistence of this
list and the wait list dedicated to shutdown. Also it happens that this
call to tasklet_remove_from_task_list() will not be usable anymore once
we start to support streams on different threads. And finally, some of
the other streams that we remove might very well have managed to find
their way to the h2_snd_buf() with an unblocked condition as well so it
is possible that some of these removals were not welcome.
So this patch now makes sure that send_wait is immediately nulled when
the task is woken up, and that we don't have to play with it afterwards.
Since we don't need to stop the tasklets anymore, we don't need the
sending_list that we can remove.
However one very useful benefit of the sending_list was that it used to
provide the information about the fact that the stream already tried to
send and failed. This was an important factor to improve fairness because
late arrived streams should not be allowed to send if others are already
scheduled. So this patch introduces a new per-stream flag H2_SF_NOTIFIED
to distinguish such streams.
With this patch the fairness is preserved, and the ratio of aborted
h2_snd_buf() due to other streams already sending remains quite low
(~0.3-2.1% measured depending on object size, this is within
expectations for 100 independent streams).
If the contention issue the patch above used to address comes up again
in the future, a much better (though more complicated) solution would
be to switch to per-connection buffer pools to distribute between the
connection and the streams so that by default there are more buffers
available for the mux and the streams only have some when the mux's are
unused, i.e. it would push the memory pressure back to the data layer.
One observation made while developing this patch is that when dealing
with large objects we still spend a huge amount of time scanning the
send_list with tasks that are already woken up every time a send()
manages to purge a bit more data. Indeed, by removing the elements
from the list when H2_SF_NOTIFIED is set, the network bandwidth on
1 MB objects fetched over 100 streams per connection increases by 38%.
This was not done here to preserve fairness but is worth studying (e.g.
by keeping a restart pointer on the list or just having a flag indicating
if an entry was added since last scan).
Commit 3c79d4bdc introduced the use of errno in pattern.c without
including errno.h.
If we build haproxy without any options, errno is not defined and the
build fails.
These ones used to serve as a set of switches between CO_FL_SOCK_* and
CO_FL_XPRT_*, and now that the SOCK layer is gone, they're always a
copy of the last known CO_FL_XPRT_* ones, resynchronized before
I/O events by calling conn_refresh_polling_flags(), and pushed
back to the FDs when changes are detected with conn_xprt_polling_changes().
While these functions are not particularly heavy, what they do is
totally redundant by now because the fd_want_*/fd_stop_*() actions
already perform test-and-set operations to decide to create an entry
or not, so they do the exact same thing that is done by
conn_xprt_polling_changes(). As such it is pointless to call that
one, and given that the only reason to keep CO_FL_CURR_* is to detect
changes there, we can now remove them.
Even if this does only save very few cycles, this removes a significant
complexity that has been responsible for many bugs in the past, including
the last one affecting FreeBSD.
All tests look good, and no performance regressions were observed.
The only case where this made sense was with mux_h1, but since we
introduced CS_FL_MAY_SPLICE, we don't need to rely on this anymore,
thus we don't need to clear it either when we do not splice.
There is one last check on this flag, used to determine if the rx channel
is full, which cannot go away unless it's changed to use the CS instead;
but for now this wouldn't add any benefit, so better not do it yet.
CO_FL_WAIT_ROOM is set by the splicing function in raw_sock, and cleared
by the stream-int when splicing is disabled, as well as in
conn_refresh_polling_flags() so that a new call to ->rcv_pipe() could
be attempted by the I/O callbacks called from conn_fd_handler(). This
clearing in conn_refresh_polling_flags() makes no sense anymore and is
in no way related to the polling at all.
Since we don't call them from there anymore, it's better to clear it
before attempting to receive, and to set it again later. So let's move
this operation where it should be, in raw_sock_to_pipe(), so that it's
now symmetric. It was also placed in raw_sock_to_buf() so that we're
certain that it gets cleared if an attempt to splice is replaced with
a subsequent attempt to recv(). Until now both were already achieved
by the call to conn_refresh_polling_flags(). Now it could theoretically
be removed from the stream-int.
We need to do some error handling after we call fgets to make sure everything
went fine. If we don't, users can be fooled into thinking they can load
patterns from a directory because cfgparse doesn't flinch. This applies to ACL
patterns and map files.
This should be backported to all supported versions.
Commit c640ef1a7d ("BUG/MINOR: stream-int: avoid calling rcv_buf() when
splicing is still possible") fixed splicing in TCP and legacy mode but
broke it badly in HTX mode.
What happens in HTX mode is that the channel's to_forward value remains
set to CHN_INFINITE_FORWARD during the whole transfer, and as such it is
not a reliable signal anymore to indicate whether more data are expected
or not. Thus, when data are spliced out of the mux using rcv_pipe(), even
when the end is reached (that only the mux knows about), the call to
rcv_buf() to get the final HTX blocks completing the message were skipped
and there was often no new event to wake this up, resulting in transfer
timeouts at the end of large objects.
All this goes down to the fact that the channel has no more information
about whether it can splice or not, despite being the one having to take
the decision to call rcv_pipe() or not. And we cannot afford to call
rcv_buf() unconditionally because, as the commit above showed, this
reduces the forwarding performance by 2 to 3 in TCP and legacy modes
due to data lying in the buffer preventing splicing from being used
later.
The approach taken by this patch consists in offering the muxes the ability
to report a bit more information to the upper layers via the conn_stream.
This information could simply be to indicate that more data are awaited,
but since the real need is to distinguish splicing from receiving, here
instead we clearly report the mux's willingness to be called for splicing
or not. Hence the flag's name, CS_FL_MAY_SPLICE.
The mux sets this flag when it knows that its buffer is empty and that
data waiting past what is currently known may be spliced, and clears it
when it knows there's no more data or that the caller must fall back to
rcv_buf() instead.
The stream-int code now uses this to determine if splicing may be used
or not instead of looking at the rcv_pipe() callbacks through the whole
chain. And after the rcv_pipe() call, it checks the flag again to decide
whether it may safely skip rcv_buf() or not.
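A simplified view of the stream-int decision (illustrative; the real code in
si_cs_recv() has more conditions and the names here are approximations):

  if (cs->flags & CS_FL_MAY_SPLICE)
      ret = conn->mux->rcv_pipe(cs, ic->pipe, ic->to_forward);
  else
      ret = conn->mux->rcv_buf(cs, &ic->buf, count, flags);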
All this bitfield dance remains a bit complex and it starts to appear
obvious that splicing vs reading should be a decision of the mux based
on permission granted by the data layer. This would however increase
the API's complexity, but it definitely needs to be thought about, and
should even significantly simplify the data processing layer.
The way it was integrated in mux-h1 will also result in no more calls
to rcv_pipe() on chunked encoded data, since these ones are currently
disabled at the mux level. However once the issue with chunks+splice
is fixed, it will be important to explicitly check for curr_len|CHNK
to set MAY_SPLICE, so that we don't call rcv_buf() after each chunk.
This fix must be backported to 2.1 and 2.0.
In process_sticking_rules() we only want to apply the first store-request
rule for a given table, but when doing so we need to make sure we only
count actual store-request rules when we list the sticking rules.
Failure to do so leads to not being able to write store-request and match
sticking rules in any order as a match rule after a store-request rule
will be ignored.
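The guard roughly looks like this (sketch; exact flag and field names may
differ from the actual source):

    list_for_each_entry(rule, &px->sticking_rules, list) {
        if (rule->flags & STK_IS_STORE) {
            /* only store-request rules count when checking whether a
             * store was already performed for this table */
            for (i = 0; i < s->store_count; i++)
                if (rule->table.t == s->store[i].table)
                    break;
            if (i != s->store_count)
                continue;   /* already stored for this table */
        }
        /* ... evaluate the rule; match rules reach this point freely ... */
    }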
The following configuration reproduces the issue:
    global
        stats socket /tmp/foobar

    defaults
        mode http

    frontend in
        bind *:8080
        default_backend bar

    backend bar
        server s1 127.0.0.1:21212
        server s2 127.0.0.1:21211
        stick store-request req.hdr(foo)
        stick match req.hdr(foo)
        stick-table type string size 10

    listen foo
        bind *:21212
        bind *:21211
        http-request deny deny_status 200 if { dst_port 21212 }
        http-request deny
This patch fixes issue #448 and should be backported as far as 1.6.
Since fix 5fd3b28 ("BUG/MEDIUM: cli: _getsocks must send the peers
sockets") for bug #443, the code which sends the sockets for the peers
and the proxies is duplicated. This patch moves this code into a
separate function.
This bug prevents HAProxy from being reloaded when you have both the
seamless reload (-x / expose-fd listeners) and the peers.
Indeed the _getsocks command does not send the FDs of the peers
listeners, so if no reuseport is possible during the bind, the new
process will fail to bind and exit.
With this feature, it is not possible to fall back on the SIGTTOU method
if we didn't receive all the sockets, because you can't close() the
sockets of the new process without closing those of the previous
process, since they are the same.
Should fix bug #443.
Must be backported as far as 1.8.
Wietse Venema reported in the thread below that we have a signedness
issue with our hash implementations: due to the use of const char*
for the input key that's often text, the crc32, sdbm, djb2, and wt6
algorithms return a platform-dependent value for binary input keys
containing bytes with bit 7 set. This means that an ARM or PPC
platform will hash binary inputs differently from an x86 typically.
Worse, some algorithms are well defined in the industry (like CRC32)
and do not provide the expected result on x86, possibly causing
interoperability issues (e.g. a user-agent would fail to compare the
CRC32 of a message body against the one computed by haproxy).
Fortunately, and contrary to the first impression, the CRC32c variant
used in the PROXY protocol processing is not affected. Thus the impact
remains very limited (the vast majority of input keys are text-based,
such as user-agent headers for example).
This patch addresses the issue by fixing all hash functions' prototypes
(even those not affected, for API consistency). A reg test will follow
in another patch.
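For example, a djb2-style hash must read its input as unsigned bytes to be
platform-independent; a sketch of the shape of the fix (not the exact
haproxy code):

    unsigned int hash_djb2(const void *input, int len)
    {
        const unsigned char *key = input;  /* was const char*: signed on x86 */
        unsigned int hash = 5381;

        while (len--)
            hash = ((hash << 5) + hash) + *key++;   /* hash * 33 + c */
        return hash;
    }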
The vast majority of users do not use these hashes. And among those
using them, very few will pass them on binary inputs. However, for the
rare ones doing it, this fix MAY have an impact during the upgrade. For
example if the package is upgraded on one LB then on another one, and
the CRC32 of a binary input is used as a stick table key (why?) then
these CRCs will not match between both nodes. Similarly, if
"hash-type ... crc32" is used, LB inconsistency may appear during the
transition. For this reason it is preferable to apply the patch on all
nodes using such hashes at the same time. Systems upgraded via their
distros will likely observe the least impact since they're expected to
be upgraded within a short time frame.
And it is important for distros NOT to skip this fix, in order to avoid
distributing an incompatible implementation of a hash. This is the
reason why this patch is tagged as MAJOR, even though it's extremely
unlikely that anyone will ever notice a change at all.
This patch must be backported to all supported branches since the
hashes were introduced in 1.5-dev20 (commit 98634f0c). Some parts
may be dropped since implemented later.
Link to Wietse's report:
https://marc.info/?l=postfix-users&m=157879464518535&w=2
rate-limits are valid for both frontend and listen, but not backend; so
we can simplify this check in a similar manner as it is done in e.g.
max-keep-alive-queue.
this should fix github issue #449
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Commit 08fa16e397 made sure
we let the fd layer know we didn't want to poll anymore if
we failed to send and sendto() returned EAGAIN.
However, just disabling the polling with fd_stop_send()
while not notifying the connection layer means the
connection layer may believe the polling is activated
and nothing needs to be done when it is wrong.
A better fix would be to revamp that whole code; for the
time being, just make sure the fd and connection
layers are properly synchronised.
This should fix the problem recently reported on FreeBSD.
In h1_snd_buf(), only attempt to call h1_send() if we haven't
already subscribed.
It makes no sense to do it if we have subscribed, as we know we already
failed to send before; it would create a useless call to sendto(), and
in 2.2, the call to raw_sock_from_buf() will disable polling if
it is enabled.
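The guard is essentially the following (sketch, assuming the 2.x
subscription flags):

    /* in h1_snd_buf(): only try to send if no send subscription is
     * armed, i.e. we did not already fail to send */
    if (!(h1c->wait_event.events & SUB_RETRY_SEND))
        h1_send(h1c);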
This should be backported to 2.2, 2.1, 2.0 and 1.9.
Since commit 27d93c3f94 ("BUG/MAJOR: compression/cache: Make it
really works with these both filters"), we no longer use the
deinit_comp_ctx section.
This should fix github issue #441
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
If you reload an haproxy started in master-worker mode with
"master-worker" in the configuration, and no "-W" argument,
the new process loses the fact that it was in master-worker mode,
resulting in weird behaviors.
The biggest problem is that if it is reloaded with a bad configuration,
the master will exit instead of remaining in waitpid mode.
This problem was discovered in bug #443.
Should be backported in every version using the master-worker mode.
(as far as 1.8)
When trying to start HAProxy with the master CLI and more than one
program in the configuration, it refuses to start with:
[ALERT] 013/132926 (1378) : parsing [cur--1:0] : proxy 'MASTER', another server named 'cur--1' was already defined at line 0, please use distinct names.
[ALERT] 013/132926 (1378) : Fatal errors found in configuration.
The problem is that haproxy tries to create a server for the MASTER
proxy but only the workers are supposed to be in the server list.
Fix issue #446.
Must be backported as far as 2.0.
In version 2.0, after commit 9c218e7521 ("MAJOR: mux-h2: switch to next
mux buffer on buffer full condition."), the H2 mux started to use a ring
buffer for the output data in order to reduce competition between streams.
However, one corner case was suboptimally covered: when crossing a buffer
boundary, we have to shrink the outgoing frame size to the one left in
the output buffer, but this shorter size is later used as a signal of
incomplete send due to a buffer full condition (which used to be true when
using a single buffer). As a result, function h2s_frt_make_resp_data()
used to return less than requested, which in turn would cause h2_snd_buf()
to stop sending and leave some unsent data in the buffer, and si_cs_send()
to subscribe for sending more later.
But it goes a bit further than this, because subscribing to send again
causes the mux's send_list not to be empty anymore, hence extra streams
can be denied the access to the mux till the first stream is woken again.
This causes a nasty wakeup-sleep dance between streams that makes it
totally impractical to try to remove the sending list. A test showed
that it was possible to observe 3 million h2_snd_buf() giveups for only
100k requests when using 100 concurrent streams on 20kB objects.
It doesn't seem likely that a stream could get blocked and time out due
to this bug, though it's not possible either to demonstrate the opposite.
One risk is that incompletely sent streams do not have any blocking flags
so they may not be identified as blocked. However on first scan of the
send_list they meet all conditions for a wakeup.
This patch simply allows continuing on a new frame after a partial
frame. With only this change, the number of failed h2_snd_buf() calls was
divided by 800 (4% of calls). And by slightly increasing the H2C_MBUF_CNT
size, it can go down to zero.
This fix must be backported to 2.1 and 2.0.
In order to properly close connections established from Lua in case
a Lua context dies, the context currently automatically gets a flag
HLUA_MUST_GC set whenever an outgoing connection is used. This causes
the GC to be enforced on the context's death as well as on yield. First,
it does not appear necessary to do it when yielding, since if the
connections die they are already cleaned up. Second, the problem with
the flag is that even if a connection gets properly closed, the flag is
not removed and the GC continues to be called on the Lua context.
The impact on performance looks quite significant, as noticed and
diagnosed by Sadasiva Gujjarlapudi in the following thread:
https://www.mail-archive.com/haproxy@formilux.org/msg35810.html
This patch changes the flag for a counter so that each created
connection increments it and each cleanly closed connection decrements
it. That way we know we have to call the GC on the context's death only
if the count is non-null. As reported in the thread above, the Lua
performance gain is now over 20% by doing this.
Thanks to Sada and Thierry for the design discussion and tests that
led to this solution.
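In principle the change looks like this (sketch; the counter field and
surrounding helpers are hypothetical, only lua_gc() is the real Lua API):

    /* on outgoing connection creation */
    hlua->nb_gc_refs++;                 /* counter replacing HLUA_MUST_GC */

    /* on clean connection close */
    hlua->nb_gc_refs--;

    /* on the context's death only */
    if (hlua->nb_gc_refs)
        lua_gc(L, LUA_GCCOLLECT, 0);    /* full collection closes leftovers */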
Since commit 3180f7b554 ("MINOR: ssl: load certificates in
alphabetical order"), `readdir` was replaced by `scandir`. We can indeed
replace it with a check on the previous `stat` call.
This micro cleanup can be a good benefit when you have hundreds of bind
lines which open TLS certificates directories in terms of syscall,
especially in a case of frequent reloads.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Previous commit 989539b048 ("BUG/MINOR: mux-h2: use a safe
list_for_each_entry in h2_send()") accidentally lost its sending_list test,
resulting in some elements being woken up again while already in the
sending_list and h2_unsubscribe() crashing on integrity tests (only
when built with DEBUG_DEV).
If the fix above is backported this one must be as well.
h2_send() uses list_for_each_entry() to scan paused streams and resume
them, but happily deletes any leftover from a previous failed unsubscribe,
which is obviously not safe and would corrupt the list. In practice this
is a proof that this doesn't happen, but it's not the best way to prove it.
In order to fix this and reduce the maintenance burden caused by code
duplication (this list walk exists at 3 places), let's introduce a new
function h2_resume_each_sending_h2s() doing exactly this and use it at
all 3 places.
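The safe walk at the heart of the new function looks like this (sketch;
the exact signature and wake-up logic are simplified):

    static void h2_resume_each_sending_h2s(struct h2c *h2c, struct list *head)
    {
        struct h2s *h2s, *h2s_back;

        /* _safe variant: the current element may be deleted while
         * being resumed without corrupting the walk */
        list_for_each_entry_safe(h2s, h2s_back, head, list) {
            /* ... remove leftover entries and wake the stream up ... */
        }
    }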
This bug was introduced as a side effect of fix 998410a41b ("BUG/MEDIUM:
h2: Revamp the way send subscriptions works.") so it should be backported
as far as 1.9.
When an HTTP response is received, at the stream-interface level, if a L7 retry
must be triggered because of the status code, the response is trashed and a read
error is reported on the response channel. Then the stream handles this error
and performs the retry. Except if the maximum number of connection retries is
reached. In this case, an error is reported. Because the server response was
already trashed by the stream-interface, a generic 502 error is returned to
the client instead of the server's one.
Now, the stream-interface triggers a L7 retry only if the maximum number of
connection retries is not already reached. Thus, at the end, the last server
response is returned.
This patch must be backported to 2.1 and 2.0. It should fix the issue #439.
Since commit 980855bd95 ("BUG/MEDIUM: server: initialize the orphaned
conns lists and tasks at the end"), we no longer use the err section.
This should fix github issue #438
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The apparently harmless commit 0591bf7deb ("MINOR: listener: make the
wait paths cleaner and more reliable") caused a nasty regression and
revealed a rare race that hits regtest stickiness/lb-services.vtc
about 4% of the time with 8 threads.
The problem is that when a multi-threaded listener wakes up on an
incoming connection, several threads can receive the event, especially
when idle. And all of them will race to accept the connections in
parallel, adjusting the listener's nbconn and proxy's feconn until
one reaches the proxy's limit and declines. At this step the changes
are cancelled, the listener is marked "limited", and when the threads
exit the function, one of them will unlimit the listener/proxy again
so that it can accept incoming connections again.
The problem happens when many threads connect to a small peers section
because its maxconn is very limited (typically 6 for 2 peers), and it's
sometimes possible for enough competing threads to hit the limit and
one of them will limit the listener and queue the proxy's task... except
that peers do not initialize their proxy task since they do not use rate
limiting. Thus the process crashes when doing task_schedule(p->task).
Prior to the cleanup patch above, this didn't happen because the error
path that was dedicated to only limiting the listener did not call
task_schedule(p->task).
Given that the proxy's task is optional, and that the expire value
passed there is always TICK_ETERNITY, it's sufficient and reasonable to
avoid calling this task_schedule() when expire is not set. And for long
term safety we can also avoid doing it when the task is not set, as in
the sketch below. A first fix consisted in allocating a task for the
peers proxies but it's never used and would eat resources for no reason.
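The resulting guard is as simple as this (sketch):

    /* the proxy's task is optional (peers have none) and its expire
     * date may be unset: only schedule when both are meaningful */
    if (p->task && tick_isset(expire))
        task_schedule(p->task, expire);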
No backport is needed as this commit was only merged into 2.2.
Since commit fa8aa867b9 ("MEDIUM: connections: Change struct
wait_list to wait_event.") we no longer use this section.
this should fix github issue #437
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Analysing traces revealed a rare but surprising pattern:
    connect() = -1 EAGAIN
    send() = success
    epoll_ctl(ADD, EPOLLOUT)
    epoll_wait()
    recvfrom() = success
    close()
What happens is that the failed connect() creates an FD update for pollout,
but the successful synchronous send() doesn't disable it because polling was
only disabled in the FD handler. But a successful synchronous connect()
cancellation is a good opportunity to disable polling before it's effectively
enabled in the next loop, so better disable it when reaching the end. The
cost is very low if it was already disabled anyway (one atomic op).
This only affects local connections but with this the typical number of
epoll_ctl() calls per connection dropped from ~4.2 to ~3.8 for plain TCP
and 10k transfers.
In dns_send_query(), there's no point in first waking up the FD, to get
called back by the poller to send the request and sleep. Instead let's
simply send the request as soon as it's known and only subscribe to the
poller when the socket buffers are full and it's required to poll (i.e.
almost never).
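The principle, in a simplified form (error handling trimmed):

    ret = send(fd, buf, len, 0);
    if (ret < 0) {
        if (errno == EAGAIN) {
            /* socket buffers full: the only case where we actually
             * need the poller */
            fd_want_send(fd);
            return 0;
        }
        /* real error handling goes here */
    }
    /* sent synchronously: no poller round-trip was needed */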
This significantly reduces the number of calls to the poller. A large
config sees the number of epoll_ctl() calls reduced from 577 to 7 over
10 seconds, the number of recvfrom() from 1533 to 582 and the number of
sendto() from 369 to 162.
It also has the extra benefit of building each request only once per
resolution and sending it to multiple resolvers instead of rebuilding
it for each and every resolver.
This will reduce the risk of seeing situations similar to bug #416 in
the future.
In session_accept_fd() we can perform a synchronous call to
conn_complete_session() and if it succeeds the connection is accepted
and turned into a session. If it fails we take it as an error while it
is not, in this case, it's just that a tcp-request rule has decided to
reject the incoming connection. The problem with reporting such an event
as an error is that the failed status is passed down to the listener code
which decides to disable accept() for 100ms in order to leave some time
for transient issues to vanish, and that's not what we want to do here.
This fix must be backported as far as 1.7. In 1.7 the code is a bit
different as tcp_exec_l5_rules() is called directly from within
session_new_fd() and ret=0 must be assigned there.
In co_inject(), data must be inserted at the end of output, not the end of
input. For the record, this function does not take care of input data, which
are supposed to not exist. But the caller may reset input data before or
after the call; that is its own choice.
This bug, among other effects, is visible when a redirect is performed on
the response path, on legacy HTTP mode (so for HAProxy < 2.1). The redirect
response is appended after the server response when it should overwrite it.
Thanks to Kevin Zhu <ip0tcp@gmail.com> for reporting the bug. It must be
backported as far as 1.9.
When a redirect rule is executed on the response path, we must truncate the
received response. Otherwise, the redirect is appended after the response, which
is sent to the client. So it is obviously a bug because the redirect is not
performed. With bodyless responses, it is the "only" bug. But if the response
has a body, the result may be invalid. If the payload is not fully received yet
when the redirect is performed, an internal error is reported.
It must be backported as far as 1.9.
In proxy_capture_error(), input data are copied in the error snapshot. The copy
must take care of the data wrapping. But the length of the first block is
wrong. It should be the amount of contiguous input data that can be copied
starting from the input's beginning. But the mininum between the input length
and the buffer size minus the input length is used instead. So it is a problem
if input data are wrapping or if more than the half of the buffer is used by
input data.
This patch must be backported as far as 1.9.
During H1 message parsing, when the parser has finished parsing a full header
line, some tests are performed on its value, depending on its name, to be sure
it is valid. The content-length is checked and converted to an integer and the host
header is also checked. If an error occurred during this step, the error
position must point on the header value. But from the parser point of view, we
are already on the start of the next header. Thus the effective reported
position in the error capture is the beginning of the unparsed header line. It
is a bit confusing when we try to figure out why a message is rejected.
Now, the parser state is updated to point on the invalid value. This way, the
error position really points on the right position.
This patch must be backported as far as 1.9.
The "need_out" variable was used to let the ssl code know we're done
reading early data, and we should start the handshake.
Now that the handshake function is responsible for taking care of reading
early data, all that logic has been removed from ssl_sock_to_buf(), but
need_out was forgotten, and left. Remove it now.
This patch was submitted by William Dauchy <w.dauchy@criteo.com>, and should
fix github issue #434.
This should be backported to 2.0 and 2.1.
In the context of seamless reload and busy polling, older processes will
create unnecessary cpu conflicts; we can assume there is no need for busy
polling for old processes which are waiting to be terminated.
This patch is not a bug fix itself but might be a good stability
improvement when you are in the context of frequent seamless reloads with
a high "hard-stop-after" value; for these reasons I think this patch
should be backported in all 2.x versions.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In connect_server(), when we decide we want to kill the connection of
another thread because there are too many idle connections, hold the
toremove_lock of the corresponding thread, otherwise there's a small race
condition where we could try to add the connection to the toremove_connections
list while it has already been free'd.
This should be backported to 2.0 and 2.1.
When creating a new check connection, only attempt to add a handshake
layer if the connection has been fully initialized. It can not be the
case if a DNS resolution is still pending, and thus we don't yet have the
address for the server, as the handshake code assumes the connection is fully
initialized and would otherwise crash.
This is not ideal; the check probably shouldn't run until we have an address,
as it leads to check failures with "Socket error".
While I'm there, also add an xprt handshake if we're using socks4, otherwise
checks wouldn't be able to use socks4 properly.
This should fix github issue #430
This should be backported to 2.0 and 2.1.
The cost of enabling polling in one direction with epoll is very high
because it requires one syscall per FD and per direction change. In
addition we don't know about input readiness until we either try to
receive() or enable polling and watch the result. With HTTP keep-alive,
both are equally expensive as it's very uncommon to see the server
instantly respond (unless it's a second stage of the same process on
localhost, which has become much less common with threads).
But when a connection is established it's also quite usual to have to
poll for sending (except on localhost or UNIX sockets where it almost
always instantly works). So this cost of polling could be factored out
with the second step if both were enabled together.
This is the idea behind this patch. What it does is to always enable
polling for Rx if it's not ready and at least one direction is active.
This means that if it's not explicitly disabled, or if it was but in a
state that causes the loss of the information (rx ready cannot be
guessed), then let's take any opportunity for a polling change to
enable it at the same time, and learn about rx readiness for free.
In addition the FD never gets unregistered for Rx unless it's ready
and was blocked (buffer full). This avoids a lot of the flip-flop
behaviour at beginning and end of requests.
On a test with 10k requests in keep-alive, the difference is quite
noticeable:
Before:
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     83.67    0.010847           0     20078           epoll_ctl
     16.33    0.002117           0      2231           epoll_wait
      0.00    0.000000           0        20        20 connect
    ------ ----------- ----------- --------- --------- ----------------
    100.00    0.012964                 22329        20 total
After:
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     96.35    0.003351           1      2644           epoll_wait
      2.36    0.000082           4        20        20 connect
      1.29    0.000045           0        66           epoll_ctl
    ------ ----------- ----------- --------- --------- ----------------
    100.00    0.003478                  2730        20 total
It may also save a recvfrom() after connect() by changing the following
sequence, effectively saving one epoll_ctl() and one recvfrom():
            before           |           after
    -------------------------+----------------------------
    - connect()              | - connect()
    - epoll_ctl(add,out)     | - epoll_ctl(add, in|out)
    - sendto()               | - epoll_wait() = out
    - epoll_ctl(mod,in|out)  | - send()
    - epoll_wait() = out     | - epoll_wait() = in|out
    - recvfrom() = EAGAIN    | - recvfrom() = OK
    - epoll_ctl(mod,in)      | - recvfrom() = EAGAIN
    - epoll_wait() = in      | - epoll_ctl(mod, in)
    - recvfrom() = OK        | - epoll_wait()
    - recvfrom() = EAGAIN    |
    - epoll_wait()           |
      (...)
Now on a 10M req test on 16 threads with 2k concurrent conns and 415kreq/s,
we see 190k updates total and 14k epoll_ctl() only.
Both flags became equal in commit 82967bf9 ("MINOR: connection: adjust
CO_FL_NOTIFY_DATA after removal of flags"), which already predicted the
overlap between xprt_done_cb() and wake() after the removal of the DATA
specific flags in 1.8. Let's simply remove CO_FL_NOTIFY_DATA since the
"_DONE" version already covers everything and explains the intent well
enough.
The conn_fd_handler used to have one possible call to this function to
notify about end of handshakes, and another one to notify about connection
setup or error. But given that we're now only performing wakeup calls
after connection validation, we don't need to keep two places to run
this test since the conditions do not change in between.
This patch merges the two tests into a single one and moves the
CO_FL_CONNECTED test appropriately as well so that it's called even
on the error path if needed.
In conn_fd_handler() we used to first give a chance to the send()
callback to try to send data and validate the connection at the same
time. But since 1.9 we do not call this callback anymore inline, it's
scheduled. So let's validate the connection earlier so that all other
decisions can be taken based on this confirmation. This may notably
be useful to the xprt_done_cb() to know that the connection was
properly validated.
The function is not TCP-specific at all, it covers all FD-based sockets
so let's move this where other similar functions are, in connection.c,
and rename it conn_fd_check().
Now that we know what pollers can return ERR/HUP, we can take this
into account to save one syscall: with such a poller, if neither are
reported, then we know the connection succeeded and we don't need to
go with getsockopt() nor connect() to validate this. In addition, for
the remaining cases (select() or suspected errors), we'll always go
through the extra connect() attempt and enumerate possible "in progress",
"connected" or "failed" status codes and take action solely based on
this.
This results in one saved syscall on modern pollers, only a second
connect() still being used on select() and the server's address never
being needed anymore.
Note that we cannot safely replace connect() with getsockopt() as the
latter clears the error on the socket without saving it, and health
checks rely on it for their reporting. This would be OK if the error
was saved in the connection itself.
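The enumeration via a second connect() looks roughly like this (sketch;
the return values are illustrative, not haproxy's enums):

    #include <errno.h>
    #include <sys/socket.h>

    /* returns 1 if connected, 0 if in progress, -1 on failure,
     * without clearing SO_ERROR */
    static int probe_connect_status(int fd, const struct sockaddr *addr,
                                    socklen_t len)
    {
        if (connect(fd, addr, len) == 0)
            return 1;                   /* unusual but possible */

        switch (errno) {
        case EALREADY:
        case EINPROGRESS:
            return 0;                   /* still pending, keep waiting */
        case EISCONN:
            return 1;                   /* previous connect() completed */
        default:
            return -1;                  /* failed; SO_ERROR still readable
                                         * by the health checks */
        }
    }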
In practice it's all pollers except select(). It turns out that we're
keeping some legacy code only for select and enforcing it on all
pollers, let's offer the pollers the ability to declare that they
do not need that.
Since commit c3df4507fa ("MEDIUM: connections: Wake the upper layer even
if sending/receiving is disabled.") the send/recv callbacks are called
on I/O if the FD is ready and not just if it's active. This means that
in some situations (e.g. send ready but nothing to send) we may
needlessly enter the if() block, notice we're not subscribed, set
io_available=1 and call the wake() callback even if we're just called
for read activity. Better make sure we only do this when the FD is
active in that direction.
This may be backported as far as 2.0 though it should remain under
observation for a few weeks first as the risk of harm by a mistake
is higher than the trouble it should cause.
Two regtests regularly fail in a random fashion depending on the machine's
load (one could really wonder if it's really worth keeping such
unreproducible tests):
- tcp-check_multiple_ports.vtc
- 4be_1srv_smtpchk_httpchk_layer47errors.vtc
It happens that one of the reasons is the time it takes to connect to
the local socket (hence the load-dependent aspect): if connect() on the
loopback returns EINPROGRESS then this status is reported instead of a
real error. Normally such a test is expected to see the error cleaned
by tcp_connect_probe() but it really depends on the timing and instead
we may very well send() first and see this error. The problem is that
everything is collected based on errno, hoping it won't get molested
on the way from the last unsuccessful syscall to wake_srv_chk(), which
obviously is hard to guarantee.
This patch at least makes sure that a few non-errors are reported as
zero just like EAGAIN. It doesn't fix the root cause but makes it less
likely to report incorrect failures.
This fix could be backported as far as 1.9.
SSL_CTX_set_ecdh_auto() is not defined when OpenSSL 1.1.1 is compiled
with the no-deprecated option. Remove existing, incomplete guards and
add a compatibility macro in openssl-compat.h, just as OpenSSL does:
bf4006a6f9/include/openssl/ssl.h (L1486)
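The compatibility macro mirrors OpenSSL's own definition for the case
where the function no longer exists:

    /* openssl-compat.h */
    #ifndef SSL_CTX_set_ecdh_auto
    #define SSL_CTX_set_ecdh_auto(ctx, onoff) ((onoff) != 0)
    #endif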
This should be backported as far as 2.0 and probably even 1.9.
With a TCP frontend, it is possible to upgrade a connection to HTTP when the
backend is in HTTP mode. Concretely, the upgrade installs a new mux. So, once
it is done, the downgrade to TCP is no longer possible. So we must take care
to never assign a TCP backend to a stream on such a connection. Otherwise,
HAProxy crashes because raw data from the server are handled as structured
data on the client side.
This patch fixes the issue #420. It must be backported to all versions
supporting the HTX.
A regression was introduced by the commit 76014fd1 ("MEDIUM: h1-htx: Add HTX EOM
block when the message is in H1_MSG_DONE state"). When nothing is copied in the
channel's buffer when the input message is parsed, we erroneously pretend it is
because there is not enough room by setting the CS_FL_WANT_ROOM flag on the
conn-stream. This happens when a partial request is parsed. Because of this
flag, we never try anymore to get input data from the mux because we first wait
for more room in the channel's buffer, which is empty. Because of this bug, it
is pretty easy to freeze a h1 connection.
To fix the bug, we must obviously set the CS_FL_WANT_ROOM flag only when there
are still data to transfer while the channel's buffer is not empty.
This patch must be backported if the patch 76014fd1 is backported too. So for
now, no backport needed.
Issue #417 reports a possible memory leak in the state-file loading code.
There's one such place in the loop which corresponds to parsing errors,
where the currently allocated line is not freed when dropped. In any
case this is very minor in that no more than the file's length may be
lost in the worst case, considering that the whole file is kept anyway
in case of success. This fix addresses this.
It should be backported to 2.1.
The global state file tree isn't configured for unique keys, so if an
entry appears multiple times, e.g. due to a bogus script that concatenates
entries multiple times, this will needlessly eat memory. Let's just drop
duplicates.
This should be backported to 2.1.
Starting haproxy with a state file of 700k servers eats 11.2 GB of RAM
due to a mistake in the function that loads the strings into a tree: it
allocates a full buffer for each backend+server name instead of allocating
just the required string. By just fixing this we're down to 80 MB.
This should be backported to 2.1.
There's a very hard-to-trigger bug in the FD list code where the
fd_add_to_fd_list() function assumes that if the FD it's trying to add
is already locked, it's in the process of being added. Unfortunately, it
can also be in the process of being removed. It is very hard to trigger
because it requires that one thread is removing the FD while another one
is adding it. First, very few FDs run on multiple threads (listeners and
DNS), and second, it does not make sense to add and remove the FD at the
same time.
In practice the DNS code built on the older callback-only model does
perform bursts of fd_want_send() for all resolvers at once when it wants
to send a new query (dns_send_query()). And this is more likely to happen
when there are lots of resolutions in parallel and many resolvers, because
the dns_response_recv() callback can also trigger a series of queries on
all resolvers for each invalid response it receives. This means that it
really is perfectly possible to both stop and start in parallel during
short periods of time there.
This issue was not reported before 2.1, but 2.1 had the FD cache, built
on the exact same code base. It's very possible that the issue caused
exactly the opposite situation, where an event was occasionally lost,
causing a DNS retry that worked, and nobody noticing the problem in the
end. In 2.1 the lost entries are the updates asking for not polling for
writes anymore, and the effect is that the poller continuously reports
writability on the socket when the issue happens.
This patch fixes bug #416 and must be backported as far as 1.8, and
absolutely requires that previous commit "MINOR: fd/threads: make
_GET_NEXT()/_GET_PREV() use the volatile attribute" is backported as
well otherwise it will make the issue worse.
Special thanks to Julien Pivotto for setting up a reliable reproducer
for this difficult issue.
These macros are either used between atomic ops which cause the volatile
to be implicit, or with an explicit volatile cast. However not having it
in the macro causes some traps in the code because certain loop paths
cannot safely be used without risking infinite loops if one isn't careful
enough.
Let's place the volatile attribute inside the macros and remove them from
the explicit places to avoid this. It was verified that the output executable
remains exactly the same byte-wise.
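Schematically the change is the following (simplified; the real macros
compute the entry from a configurable offset):

    /* before: the caller had to remember the volatile cast */
    #define _GET_NEXT(fd, off) \
        (((struct fdlist_entry *)((char *)&fdtab[fd] + (off)))->next)

    /* after: the volatile lives inside the macro, callers can't forget it */
    #define _GET_NEXT(fd, off) \
        (((volatile struct fdlist_entry *)((char *)&fdtab[fd] + (off)))->next)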
Instead of attempting to read the early data only when the upper layer asks
for data, allocate a temporary buffer, stored in the ssl_sock_ctx, and put
all the early data in there. Requiring that the upper layer takes care of it
means that if for some reason the upper layer wants to emit data before it
has totally read the early data, we will be stuck forever.
This should be backported to 2.1 and 2.0.
This may fix github issue #411.
Since 1.9 with commit b20aa9eef3 ("MAJOR: tasks: create per-thread wait
queues") a task bound to a single thread will not use locks when being
queued or dequeued because the wait queue is assumed to be the owner
thread's.
But there exists a rare situation where this is not true: the health
check tasks may be running on one thread waiting for a response, and
may in parallel be requeued by another thread calling health_adjust()
after detecting a response error in traffic when "observe l7" is set,
and "fastinter" is lower than "inter", requiring to shorten the running
check's timeout. In this case, the task being requeued was present in
another thread's wait queue, thus opening a race during task_unlink_wq(),
and gets requeued into the calling thread's wait queue instead of the
running one's, opening a second race here.
This patch aims at protecting against the risk of calling task_unlink_wq()
from one thread while the task is queued on another thread, hence unlocked,
by introducing a new TASK_SHARED_WQ flag.
This new flag indicates that a task's position in the wait queue may be
adjusted by other threads than the one currently executing it. This means
that such WQ manipulations must be performed under a lock. There are two
types of such tasks:
- the global ones, using the global wait queue (technically speaking,
those whose thread_mask has at least 2 bits set).
- some local ones, which for now will be placed into the global wait
queue as well in order to benefit from its lock.
The flag is automatically set on initialization if the task's thread mask
indicates more than one thread. The caller must also set it if it intends
to let other threads update the task's expiration delay (e.g. delegated
I/Os), or if it intends to change the task's affinity over time as this
could lead to the same situation.
Right now only the situation described above seems to be affected by this
issue, and it is very difficult to trigger, and even then, will often have
no visible effect beyond stopping the checks for example once the race is
met. On my laptop it is feasible with the following config, chained to
httpterm:
    global
        maxconn 400   # provoke FD errors, calling health_adjust()

    defaults
        mode http
        timeout client 10s
        timeout server 10s
        timeout connect 10s

    listen px
        bind :8001
        option httpchk /?t=50
        server sback 127.0.0.1:8000 backup
        server-template s 0-999 127.0.0.1:8000 check port 8001 inter 100 fastinter 10 observe layer7
This patch will automatically address the case for the checks because
check tasks are created with multiple threads bound and will get the
TASK_SHARED_WQ flag set.
If in the future more tasks need to rely on this (multi-threaded muxes
for example) and the use of the global wait queue becomes a bottleneck
again, then it should not be too difficult to place locks on the local
wait queues and queue the task on its bound thread.
This patch needs to be backported to 2.1, 2.0 and 1.9. It depends on
previous patch "MINOR: task: only check TASK_WOKEN_ANY to decide to
requeue a task".
Many thanks to William Dauchy for providing detailed traces allowing to
spot the problem.
After processing a task, its RUNNING bit is cleared and at the same time
we check for other bits to decide whether to requeue the task or not. It
happens that we only want to check the TASK_WOKEN_* bits, because:
- TASK_RUNNING was just cleared
- TASK_GLOBAL and TASK_QUEUE cannot be set yet as the task was running,
preventing it from being requeued
It's important not to catch yet undefined flags there because it would
prevent addition of new task flags. This also shows more clearly that
waking a task up with flags 0 is not something safe to do as the task
will not be woken up if it's already running.
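The requeue decision thus boils down to this shape (sketch; the exact
surrounding code in the scheduler differs):

    state = _HA_ATOMIC_AND(&t->state, ~TASK_RUNNING);
    /* only explicit TASK_WOKEN_* flags set while the task was running
     * justify putting it back into the run queue; any other (future)
     * flag must be ignored here */
    if (state & TASK_WOKEN_ANY)
        task_wakeup(t, 0);
    else
        task_queue(t);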
This action is very similar to "replace-uri" except that it only acts on the
path component. This is assumed to better match users' expectations when they
used to rely on "replace-uri" in HTTP/1 because mostly origin forms were used
in H1 while mostly absolute URI form is used in H2, and their rules very often
start with a '/', and as such do not match.
It could help users to get this backported to 2.0 and 2.1.
As discussed in the thread below [1], the debug converter is currently
not of much use given that it's only built when DEBUG_EXPR is set, and
it is limited to stderr only.
This patch changes this to make it take an optional prefix and an optional
target sink so that it can log to stdout, stderr or a ring buffer. The
default output is the "buf0" ring buffer, that can be consulted from the
CLI.
[1] https://www.mail-archive.com/haproxy@formilux.org/msg35671.html
Note: if this patch is backported, it also requires the following commit to
work: 46dfd78cbf ("BUG/MINOR: sample: always check converters' arguments").
Commit d4f946c ("MINOR: ssl/cli: 'show ssl cert' give information on the
certificates") introduced a build issue with openssl version < 1.0.2
because it uses the certificate bundles.
When accepting the max early data, don't set it on the SSL_CTX while parsing
the configuration, as at this point global.tune.maxrewrite may still be -1,
either because it was not set, or because it hasn't been set yet. Instead,
set it for each connection, just after we created the new SSL.
Not doing so meant that we could pretend to accept early data bigger than
one of our buffers.
This should be backported to 2.1, 2.0, 1.9 and 1.8.
Instead of failing the conversion when an invalid number of bits is
given the sha2 converter now fails with an appropriate error message
during startup.
The sha2 converter was introduced in d437630237,
which is in 2.1 and higher.
In 1.5-dev20, sample-fetch argument parsing was addressed by commit
689a1df0a1 ("BUG/MEDIUM: sample: simplify and fix the argument parsing").
The issue was that argument checks were not run for sample-fetches if
parenthesis were not present. Surprisingly, the fix was made only for
sample-fetches and not for converters, which suffer from the exact same
problem. There are even a few comments in the code mentioning that some
argument validation functions are not called when arguments are missing.
This fix applies the exact same method as the one above. The impact of
this bug is limited because over the years the code has learned to work
around this issue instead of fixing it.
This may be backported to all maintained versions.
The closing bracket was emitted for the "debug" converter even when the
opening one was not sent, and the new line was not always emitted. Let's
fix this. This is harmless since this converter is not built by default.
These sample fetches are internal and must be used for debugging purposes. The
idea is to have a way to add some checks on the HTX content from http rules.
The main purpose is to ease reg-tests writing.
During H1 parsing, the HTX EOM block is added before switching the message state
to H1_MSG_DONE. It is an exception in the way to convert an H1 message to
HTX. Except for this block, the message is first switched to the right state
before starting to add the corresponding HTX blocks. For instance, the message
is switched in H1_MSG_DATA state and then the HTX DATA blocks are added.
With this patch, the message is switched to the H1_MSG_DONE state when all data
blocks or trailers were processed. It is the caller's responsibility to call
h1_parse_msg_eom() when the H1_MSG_DONE state is reached. This way, it is far
easier to catch failures when the HTX buffer is full.
The H1 and FCGI muxes have been updated accordingly.
This patch may eventually be backported to 2.1 if it helps other backports.
Apparently gcc developers decided that strncpy() semantics are no longer
valid and now deserve a warning, especially if used exactly as designed.
This results in issue #304. Let's just remove one from the target size
to please her majesty gcc, the God of C Compilers, who tries hard to make
users completely eliminate any use of string.h and reimplement it by
themselves at much higher risks. Pfff....
This can be backported to stable versions; the fix is harmless since it
ignores the last zero that is already set on the next line.
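The resulting idiom is the classic truncation-safe pattern:

    /* copy at most size-1 chars; the terminating zero is (re)set just
     * after, which also silences gcc's -Wstringop-truncation */
    strncpy(dst, src, sizeof(dst) - 1);
    dst[sizeof(dst) - 1] = '\0';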
As reported in issue #408, "agent-addr" doesn't work on default-server
lines. This is due to the transcription of the old "addr" option in commit
6e5e0d8f9e ("MINOR: server: Make 'default-server' support 'addr' keyword.")
which correctly assigns it to the check.addr and agent.addr fields, but
which also copies the default check.addr into both the check's and the
agent's addr fields. Thus the default agent's address is never used.
This fix makes sure to copy the check address from the check and the agent
address from the agent. However it's worth noting that if "addr" is
specified on the server line, it will still overwrite both the check's and
the agent's addresses.
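Schematically, in the default-server transcription (sketch; "defsrv" and
the exact field paths are illustrative):

    /* before: both sides took the default check address */
    srv->check.addr = defsrv->check.addr;
    srv->agent.addr = defsrv->check.addr;   /* bug */

    /* after: each side keeps its own source */
    srv->check.addr = defsrv->check.addr;
    srv->agent.addr = defsrv->agent.addr;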
This must be backported as far as 1.8.
The listener supports a "transient error" situation, which corresponds
to those situations where accept fails badly but poll() reports an event.
This happens for example when a listener is paused, or when running out of
FDs. The same mechanism is used when facing a maxconn or maxsessrate
limitation.
When this happens, the listener is disabled for up to 100ms and put back
into the global listener queue so that it automatically wakes up again
as soon as the conditions change from an existing connection releasing
one resource, or the system recovers from a transient issue.
The listener_accept() function has a bug in its exit path causing a
freshly limited listener to be immediately enabled again because all
the conditions are met (connection count < max). It doesn't take into
account the fact that the listener might have been queued and must
first wait for the timeout to expire before doing so. The impact is
that upon certain errors, the faulty process will busy loop on the
accept code without sleeping. This is the scenario reported and
diagnosed by @hedong0411 in issue #382.
This commit fixes it by verifying that the global queue's delay is
at least expired before deciding to resume the listener. Another
approach could consist in having an extra state like LI_DELAY for
situations where only a delay is acceptable, but this would probably
not bring anything except more complex code.
This issue was introduced with the lock-free listener accept code
(commits 3f0d02b and 82c9789a) that were backported to 1.8.20+ and
1.9.7+, so this fix must be backported to the relevant branches.
If a new process is started with -sf and it fails to bind, it may send
a SIGTTOU to the master process in the hope that it will temporarily unbind.
Unfortunately this one doesn't catch it and gets stopped into the background
instead of forwarding the signal to the workers. The same is true for SIGTTIN.
This commit simply implements an extra signal handler for the master to
deal with such signals that must be passed down to the workers. It must
be backported as far as 1.8, though there the code differs in that it's
entirely in haproxy.c and doesn't require an extra sig handler.
As reported by Ilya in issue #392, Coverity found that we're leaking
allocated strings on error paths in parse_logformat(). Let's use a
proper exit label for failures instead of scattering return 0 everywhere.
This should be backported to all supported versions.
We used to have wake_expired_tasks() wake up tasks and return the next
expiration delay. The problem this causes is that we have to call it just
before poll() in order to consider latest timers, but this also means that
we don't wake up all newly expired tasks upon return from poll(), which
thus systematically requires a second poll() round.
This is visible when running any scheduled task like a health check, as there
are systematically two poll() calls, one with the interval, nothing is done
after it, and another one with a zero delay, and the task is called:
    listen test
        bind *:8001
        server s1 127.0.0.1:1111 check
09:37:38.200959 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8696843}) = 0
09:37:38.200967 epoll_wait(3, [], 200, 1000) = 0
09:37:39.202459 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8712467}) = 0
>> nothing run here, as the expired task was not woken up yet.
09:37:39.202497 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8715766}) = 0
09:37:39.202505 epoll_wait(3, [], 200, 0) = 0
09:37:39.202513 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8719064}) = 0
>> now the expired task was woken up
09:37:39.202522 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
09:37:39.202537 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
09:37:39.202565 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
09:37:39.202577 setsockopt(7, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
09:37:39.202585 connect(7, {sa_family=AF_INET, sin_port=htons(1111), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:37:39.202659 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLOUT, {u32=7, u64=7}}) = 0
09:37:39.202673 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8814713}) = 0
09:37:39.202683 epoll_wait(3, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}], 200, 1000) = 1
09:37:39.202693 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8818617}) = 0
09:37:39.202701 getsockopt(7, SOL_SOCKET, SO_ERROR, [111], [4]) = 0
09:37:39.202715 close(7) = 0
Let's instead split the function in two parts:
- the first part, wake_expired_tasks(), called just before
process_runnable_tasks(), wakes up all expired tasks; it doesn't
compute any timeout.
- the second part, next_timer_expiry(), called just before poll(),
only computes the next timeout for the current thread.
Thanks to this, all expired tasks are properly woken up when leaving
poll, and each poll call's timeout remains up to date:
09:41:16.270449 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10223556}) = 0
09:41:16.270457 epoll_wait(3, [], 200, 999) = 0
09:41:17.270130 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10238572}) = 0
09:41:17.270157 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
09:41:17.270194 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
09:41:17.270204 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
09:41:17.270216 setsockopt(7, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
09:41:17.270224 connect(7, {sa_family=AF_INET, sin_port=htons(1111), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:41:17.270299 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLOUT, {u32=7, u64=7}}) = 0
09:41:17.270314 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10337841}) = 0
09:41:17.270323 epoll_wait(3, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}], 200, 1000) = 1
09:41:17.270332 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10341860}) = 0
09:41:17.270340 getsockopt(7, SOL_SOCKET, SO_ERROR, [111], [4]) = 0
09:41:17.270367 close(7) = 0
This may be backported to 2.1 and 2.0 though it's unlikely to bring any
user-visible improvement except to clarify debugging.
This is a complement to the previous fix for bug #399. The exclusion between
the recv() and send() calls prevents send handlers from being called if
rx readiness is reported. The DNS code can trigger this situation with
threads, where the fd_recv_ready() flag disappears between the test in
dgram_fd_handler() and the second test in dns_resolve_recv() while a
thread calls fd_cant_recv(), and this situation can sustain itself for
a while. With 8 threads and an error in the socket queue, placing a
printf on the return statement in dns_resolve_recv() scrolls very fast.
Simply removing the "else" in dgram_fd_handler() addresses the issue.
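The fix is literally dropping the exclusivity between the two directions
(sketch of dgram_fd_handler(); callback names approximate the dgram layer):

    /* before: recv and send were exclusive, an armed send could starve */
    if (fd_recv_ready(fd))
        dgram->data->recv(dgram);
    else if (fd_send_ready(fd))
        dgram->data->send(dgram);

    /* after: handle both directions in the same pass */
    if (fd_recv_ready(fd))
        dgram->data->recv(dgram);
    if (fd_send_ready(fd))
        dgram->data->send(dgram);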
This fix must be backported as far as 1.6.
It was reported in bug #399 that the DNS sometimes enters endless loops
after hours working fine. The issue is caused by a lack of error
processing in the DNS's recv() path combined with an exclusive recv OR
send in the UDP layer, resulting in some errors causing CPU loops that
will never stop until the process is restarted.
The basic cause is that the FD_POLL_ERR and FD_POLL_HUP flags are sticky
on the FD, and contrary to a stream socket, receiving an error on a
datagram socket doesn't indicate that this socket cannot be used anymore.
Thus the Rx code must at least handle this situation and flush the error
otherwise it will constantly be reported. In theory this should not be a
big issue but in practice it is, due to another bug in the UDP datagram
handler which prevents the send() callback from being called when Rx
readiness was reported, so the situation cannot go away. It happens way
more easily with threads enabled, so that there is no dead time between
the moment the FD is disabled and another recv() is called, such as in
the example below where the request was sent to a closed port on the
loopback provoking an ICMP unreachable to be sent back:
[pid 20888] 18:26:57.826408 sendto(29, ";\340\1\0\0\1\0\0\0\0\0\1\0031wt\2eu\0\0\34\0\1\0\0)\2\0\0\0\0\0\0\0", 35, 0, NULL, >
[pid 20893] 18:26:57.826566 recvfrom(29, 0x7f97c54ef2f0, 513, 0, NULL, NULL) = -1 ECONNREFUSED (Connection refused)
[pid 20889] 18:26:57.826601 recvfrom(29, 0x7f97c76182f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20892] 18:26:57.826630 recvfrom(29, 0x7f97c5cf02f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20891] 18:26:57.826684 recvfrom(29, 0x7f97c66162f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20895] 18:26:57.826716 recvfrom(29, 0x7f97bffda2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20894] 18:26:57.826747 recvfrom(29, 0x7f97c4cee2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20888] 18:26:58.419838 recvfrom(29, 0x7ffcc8712c20, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20893] 18:26:58.419900 recvfrom(29, 0x7f97c54ef2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
(... hundreds before next sendto() ...)
This situation was handled by clearing HUP and ERR when recv()
returns <0.
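In short (sketch, assuming the usual fdtab event field; as the backport
note says, the atomic AND becomes a plain AND before 1.8):

    ret = recvfrom(fd, buf, len, 0, NULL, NULL);
    if (ret < 0) {
        if (errno == EAGAIN) {
            fd_cant_recv(fd);
            return;
        }
        /* datagram socket: an ICMP error is not fatal, flush the
         * sticky flags so the poller stops ringing */
        _HA_ATOMIC_AND(&fdtab[fd].ev, ~(FD_POLL_HUP | FD_POLL_ERR));
    }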
A second case was handled: there was a check for a missing dgram
handler, but it did nothing, causing the FD to ring again if this
situation ever happened. After looking at the rest of the code, it
doesn't seem possible to face such a situation because these handlers
are registered during startup, but at least we need to handle it
properly.
A third case was handled, that's mainly a small optimization. With
threads and massive responses, due to the large lock around the loop,
it's likely that some threads will have seen fd_recv_ready() and will
wait at the lock(). But if they wait here, chances are that other
threads will have eliminated pending data and issued fd_cant_recv().
In this case, better re-check fd_recv_ready() before performing the
recv() call to avoid the huge amounts of syscalls that happen on
massively threaded setups.
This patch must be backported as far as 1.6 (the atomic AND just
needs to be turned to a regular AND).
When we have an EVFILT_READ event, an optimization was made, and the FD was
not reported as ready to receive if there were no data available. That way,
if the socket was closed by our peer (the EV_EOF flag was set), and there were
no remaining data to read, we would just close(), and avoid doing a recv().
However, while this may be fine for TCP sockets, it is not for UDP.
If we send data via UDP, and we receive an error, the only way to detect it
is to attempt a recv(). However, in this case, kevent() will report a read
event, but with no data, so we'd just ignore that read event, nothing would be
done about it, and the poller would be woken up by it over and over.
To fix this, report read events if either we have data, or the EV_EOF flag
is not set.
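The resulting test in the kqueue poller is roughly (sketch, using the
classic FD_POLL_* flags):

    if (kev[i].filter == EVFILT_READ) {
        /* data pending, or no EOF: report readiness so that a recv()
         * can surface a possible UDP error */
        if (kev[i].data || !(kev[i].flags & EV_EOF))
            n |= FD_POLL_IN;
        if (kev[i].flags & EV_EOF)
            n |= FD_POLL_HUP;
    }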
This should be backported to 2.1, 2.0, 1.9 and 1.8.
The global listener queue code and declarations were still lying in
haproxy.c while not needed there anymore at all. This complicates
the code for no reason. As a result, the global_listener_queue_task
and the global_listener_queue were made static.
We use it half of the time for the global_listener_queue and half of the
time for a proxy's queue, and this requires the callers to take care of
the distinction. Let's split it in two versions, the current one working only
on the global queue and another one dedicated to proxies for the
per-proxy queues. This cleans up quite a bit of code.
In listener_accept() there are several situations where we have to wait
for an event or a delay. These ones all implement their own call to
limit_listener() and the associated task_schedule(). In addition to
being ugly and confusing, one expire date computation is even wrong as
it doesn't take into account the fact that we're using threads and that
the value might change in the middle. Fortunately task_schedule() gets
it right for us.
This patch creates two jump locations, one for the global queue and
one for the proxy queue, allowing the rest of the code to only compute
the expire delay and jump to the right location.
Recent fix 4c044e274c ("BUG/MEDIUM: listener/thread: fix a race when
pausing a listener") is insufficient and moves the race slightly farther.
What now happens is that if we're limiting a listener due to a transient
error such as an accept() error for example, or because the proxy's
maxconn was reached, another thread might in the meantime have switched
again to LI_READY and at the end of the function we'll disable polling on
this FD, resulting in a listener that never accepts anything anymore. It
can more easily happen when sending SIGTTOU/SIGTTIN to temporarily pause
the listeners to let another process bind next to them.
What this patch does instead is to move all enable/disable operations at
the end of the function and condition them to the state. The listener's
state is checked under the lock and the FD's polling state adjusted
accordingly so that the listener's state and the FD always remain 100%
synchronized. It was verified with 16 threads that the cost of taking
that lock is not measurable so that's fine.
This should be backported to the same branches the patch above is
backported to.
When accept() fails because a listener is temporarily paused, the
FD might have both FD_POLL_HUP and FD_POLL_ERR bits set. While we do
not exploit FD_POLL_ERR here it's better to clear it because it is
reported on "show fd" and is confusing.
This may be backported to all versions.
There was a leftover of the single-threaded era when removing the
FD_POLL_HUP flag from the listeners. By not using an atomic operation
to clear the flag, another thread acting on the same listener might
have lost some events, though this would have resulted in that thread
reprocessing them immediately on the next loop pass.
This should be backported as far as 1.8.
The proxies' soft_stop() function closes the FDs in all opened states
except LI_PAUSED. This means that a transient error on a listener might
cause it to turn back to the READY state if it happens exactly when a
reload signal is received.
This must be backported to all supported versions.
During the HTTP response parsing, if there is not enough space in the channel's
buffer, it is possible to fail to add the HTX EOM block while all data in the
rxbuf were consumed. As for the h1 mux, we must notify the conn-stream that
the buffer is full to have a chance to add the HTX EOM block later. In this
case, we must also be careful not to report a server abort by setting the
CS_FL_EOS flag too early on the conn-stream.
To do so, the FCGI_SF_APPEND_EOM flag must be set on the FCGI stream to know the
HTX EOM block is missing.
This patch must be backported to 2.1.
During the message parsing, when the HTX buffer is full and only the HTX EOM
block cannot be added, it is important to notify the conn-stream that some
processing must still be done but it is blocked because there is not enough room
in the buffer. The way to do so is to set the CS_FL_WANT_ROOM flag on the
conn-stream. Otherwise, because all data are received and consumed, the mux is
not called anymore to add this last block, leaving the message unfinished from
the HAProxy point of view. The only way to unblock it is to receive a shutdown
for reads or to hit a timeout.
This patch must be backported to 2.1 and 2.0. Version 1.9 does not seem to be
affected.
HAProxy doesn't need to call executables at run time (except when using
external checks which are strongly recommended against), and is even expected
to isolate itself into an empty chroot. As such, there basically is no valid
reason to allow a setuid executable to be called without the user being fully
aware of the risks. In a situation where haproxy would need to call external
checks and/or disable chroot, exploiting a vulnerability in a library or in
haproxy itself could lead to the execution of an external program. On Linux
it is possible to lock the process so that any setuid bit present on such an
executable is ignored. This significantly reduces the risk of privilege
escalation in such a situation. This is what haproxy does by default. In case
this causes a problem to an external check (for example one which would need
the "ping" command), then it is possible to disable this protection by
explicitly adding this directive in the global section. If enabled, it is
possible to turn it back off by prefixing it with the "no" keyword.
Before the option:
$ socat - /tmp/sock1 <<< "expert-mode on; debug dev exec sudo /bin/id"
uid=0(root) gid=0(root) groups=0(root)
After the option:
$ socat - /tmp/sock1 <<< "expert-mode on; debug dev exec sudo /bin/id"
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the
'nosuid' option set or an NFS file system without root privileges?
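On Linux the usual mechanism for this kind of lock is the no_new_privs
process attribute; a minimal standalone sketch of how it is set (assuming
a Linux >= 3.5 kernel):

    #include <stdio.h>
    #include <sys/prctl.h>

    /* After this call, execve() no longer grants privileges via
     * setuid/setgid bits or file capabilities. The attribute is
     * inherited by children and cannot be cleared.
     */
    static void lock_privileges(void)
    {
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
            perror("prctl(PR_SET_NO_NEW_PRIVS)");
    }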
popen() is annoying because it doesn't catch stderr. The command was
implemented with it out of pure laziness; let's just redo it a bit more
cleanly using normal syscalls. Note that this command is only enabled
when built with -DDEBUG_DEV.
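A sketch of the pipe/fork/exec pattern involved (illustrative only; the
function name and the simplified error handling are assumptions):

    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run <argv> and capture both stdout and stderr through a single
     * pipe, which popen() cannot do. Error handling and output
     * formatting are reduced to the bare minimum.
     */
    static int run_and_capture(char *const argv[], char *out, size_t outlen)
    {
        int fd[2], status;
        ssize_t r;
        pid_t pid;

        if (pipe(fd) == -1)
            return -1;
        pid = fork();
        if (pid == -1) {
            close(fd[0]);
            close(fd[1]);
            return -1;
        }
        if (pid == 0) {                 /* child */
            dup2(fd[1], STDOUT_FILENO); /* stdout goes to the pipe */
            dup2(fd[1], STDERR_FILENO); /* stderr goes there as well */
            close(fd[0]);
            close(fd[1]);
            execvp(argv[0], argv);
            _exit(127);                 /* exec failed */
        }
        close(fd[1]);                   /* parent */
        r = read(fd[0], out, outlen - 1);
        out[r > 0 ? r : 0] = '\0';
        close(fd[0]);
        waitpid(pid, &status, 0);
        return status;
    }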
In process_chk_conn(), make sure we set the task affinity to the current
thread as soon as we're attempting a connection (and reset the affinity to
"any thread" if we detect a failure).
We used to only set the task affinity if connect_conn_chk() returned
SF_ERR_NONE, however for TCP checks, SF_ERR_UP is returned, so for those
checks, the task could still run on any thread, and this could lead to a
race condition where the connection runs on one thread, while the task runs
on another one, which could create random memory corruption and/or crashes.
This may fix github issue #369.
This should be backported to 2.1, 2.0 and 1.9.
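A sketch of the idea (task_set_affinity(), tid_bit and MAX_THREADS_MASK
are assumptions borrowed from the task API of that era):

    /* pin the check task to the calling thread before the connect
     * attempt, and release it to any thread again on failure, so the
     * task and its connection always run on the same thread
     */
    task_set_affinity(t, tid_bit);
    ret = connect_conn_chk(t);
    switch (ret) {
    case SF_ERR_UP:      /* returned by TCP checks */
    case SF_ERR_NONE:
        break;           /* keep the affinity */
    default:
        task_set_affinity(t, MAX_THREADS_MASK);  /* any thread again */
        break;
    }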
The h1_recv_allowed() function is inherited from the h2 multiplexer. But for the
h1, the conditions to know whether we may receive data are less complex because
there is no multiplexing and because data are not parsed when received. So now,
the following rules are respected (see the sketch below) :
* if an error or a shutdown for reads was detected on the connection we must
not attempt to receive
* if the input buffer failed to be allocated or is full, we must not try to
receive
* if the input processing is busy waiting for the output side, we must not
attempt to receive
* otherwise, we may attempt to receive
This patch must be backported as far as 1.9.
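A sketch of these rules (the flag names are assumptions borrowed from the
mux of that era):

    /* returns nonzero if we may attempt to receive on this connection */
    static inline int h1_recv_allowed(const struct h1c *h1c)
    {
        if (h1c->flags & H1C_F_CS_ERROR)      /* mux-level error */
            return 0;
        if (h1c->conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH))
            return 0;                         /* error or read shutdown */
        if (h1c->flags & (H1C_F_IN_ALLOC | H1C_F_IN_FULL | H1C_F_IN_BUSY))
            return 0;                         /* no buffer, full, or busy */
        return 1;                             /* otherwise we may receive */
    }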
The CO_FL_SOCK_RD_SH flag is only set when a read0 is received. So we must not
rely on it to set the H1 connection in shutdown state (H1C_F_CS_SHUTDOWN). In
fact, it is sufficient to set the connection in the shutdown state when the
shutdown for writes is forwarded to the sock layer.
This patch must be backported as far as 1.9.
On the server side, when a H1 stream is detached from the connection, if the
connection is not reusable but some outgoing data remain, the connection is not
immediately released. In this case, the connection is not inserted in any idle
connection list. But it is still attached to the session. Because of that, it
can be erroneously reused. h1_avail_streams() always reports a free slot if no
stream is attached to the connection, independently of the connection's
state. It is obviously a bug. If a second request is handled by the same session
(it happens with H2 connections on the client side), this connection is reused
before we close it.
There is a small window to hit the bug, but it may lead to very strange
behaviors. For instance, if a first h2 request is quickly aborted by the client
while it is blocked in the mux on the server side (so before any response is
received), a second request can be processed and sent to the server. Because the
connection was not closed, the possible reply to the first request will be
interpreted as a reply to the second one. It is probably the bug described by
Peter Fröhlich in the issue #290.
To fix the bug, a new flag has been added to know if an H1 connection is idle or
not. So now, H1C_F_CS_IDLE is set when a connection is idle and usable to
handle a new request. If it is set, we try to add the connection in an idle
connection list. And h1_avail_streams() only relies on this flag
now. Concretely, this flag is set when a K/A stream is detached and both the
request and the response are in DONE state. It is exclusive to other H1C_F_CS
flags.
This patch must be backported as far as 1.9.
It's a regression from 9f9b0c6 "BUG/MEDIUM: ECC cert should work with
TLS < v1.2 and openssl >= 1.1.1". A wildcard EC certificate could be selected
at the expense of a specific RSA certificate.
In any case, a specific certificate should always be selected first, then a
wildcard one. Reflect this rule in a loop to avoid any bug in certificate
selection changes.
Fix issue #394.
It should be backported as far as 1.8.
There exists a race in the listener code where a thread might disable
receipt on a listener's FD then turn it to LI_PAUSED while at the same
time another one faces EAGAIN on accept() and enables it again via
fd_cant_recv(). The result is that the FD is in LI_PAUSED state with
its polling still enabled. listener_accept() does not do anything then
and doesn't disable the FD either, resulting in a thread eating all the
CPU as reported in issue #358. A solution would be to take the listener's
lock to perform the fd_cant_recv() call and do it only if the FD is still
in LI_READY state, but this would be totally overkill while in practice
the issue only happens during shutdown.
Instead what is done here is that when leaving we recheck the state and
disable polling if the listener is not in LI_READY state, which never
happens except when being limited. In the worst case there could be one
extra check per thread for the time required to converge, which is
absolutely nothing.
This fix was successfully tested, and should be backported to all
versions using the lock-free listeners, which means all those containing
commit 3f0d02bb ("MAJOR: listener: do not hold the listener lock in
listener_accept()"), hence 2.1, 2.0, 1.9.7+, 1.8.20+.
When a crt-list line using an already used ckch_store does not contain
filters, it will overwrite the ckchs->filters variable with 0.
This problem will generate all sni_ctx of this ckch_store without
filters. Filters generation mustn't be allowed in any case.
Must be backported in 2.1.
In si_cs_recv(), we can end up with a partial splice() call that will be
followed by an attempt to use rcv_buf(). Sometimes this works and places
data into the buffer, which then prevent splicing from being used, and
this causes splice() and recvfrom() calls to alternate. Better simply
refrain from calling rcv_buf() when there are data in the pipe and still
data to be forwarded. Usually this indicates that we've eaten everything
available and that we still want to use splice() on subsequent calls.
This should be backported to 2.1 and 2.0.
If we cannot splice incoming data using rcv_pipe() due to remaining data
in the buffer, we must not subscribe to the mux but instead tag the
stream-int as blocked on missing Rx room. Otherwise when data are
flushed, calling si_chk_rcv() will have no effect because the WAIT_EP
flag remains present, and we'll end in an rx timeout. This case is very
hard to reproduce, and requires an inversion of the polling side in the
middle of a transfer. This can only happen when the client and the server
are using similar links and when splicing is enabled. It typically takes
hundreds of MB to GB for the problem to happen, and tends to be magnified
by the use of option contstats which causes process_stream() to be called
every 5s and to try again to recv.
This fix must be backported to 2.1, 2.0, and possibly 1.9.
Some concerns are regularly raised about the risk of inheriting some Lua
files which make use of a fork (e.g. via os.execute()) as well as
whether or not some of the bugs we fix might be exploitable to run
some code. Given that haproxy is event-driven, any foreground activity
completely stops processing and is easy to detect, but background
activity is a different story. A Lua script could very well discreetly
fork a sub-process connecting to a remote location and taking commands,
and some injected code could also try to hide its activity by creating
a process or a thread without blocking the rest of the processing. While
such activities should be extremely limited when run in an empty chroot
without any permission, it would be better to get a higher assurance
they cannot happen.
This patch introduces something very simple: it limits the number of
processes and threads to zero in the workers after the last thread was
created. By doing so, it effectively instructs the system to fail on
any fork() or clone() syscall. Thus any undesired activity has to happen
in the foreground and is way easier to detect.
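What the patch does essentially boils down to this sketch (illustrative;
the real code only emits a warning on failure, as described below):

    #include <stdio.h>
    #include <sys/resource.h>

    /* With RLIMIT_NPROC set to 0, any later fork() or clone() made by
     * this process fails with EAGAIN, so no new process or thread can
     * be created.
     */
    static void forbid_new_processes(void)
    {
        struct rlimit lim = { .rlim_cur = 0, .rlim_max = 0 };

        if (setrlimit(RLIMIT_NPROC, &lim) == -1)
            perror("setrlimit(RLIMIT_NPROC)");   /* warn, don't abort */
    }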
This will obviously break external checks (whose concept is already
totally insecure), and for this reason a new option
"insecure-fork-wanted" was added to disable this protection, and it
is suggested in the fork() error report from the checks. It is
obviously recommended not to use it and to reconsider the reasons
leading to it being enabled in the first place.
If for any reason we fail to disable forks, we still start because it
could be imaginable that some operating systems refuse to set this
limit to zero, but in this case we emit a warning, that may or may not
be reported since we're after the fork point. Ideally over the long
term it should be conditioned by strict-limits and cause a hard fail.
Since the flag STAT_SHOWADMIN was removed, the frontends heading in the HTML
output appears unaligned because the space reserved for the checkbox (not
displayed for frontends) is not inserted.
This patch fixes the issue #390. It must be backported to 2.1.
The header name configured by the directive "pass-header", in the "fcgi-app"
section, must be case insensitive. For now, it must be in lowercase to match a
header. Internally, header names are in lowercase but there is no reason to
impose this syntax in the configuration.
This patch must be backported to 2.1.
Commit 1c65fdd5 "MINOR: ssl: add extra chain compatibility" really implements
SSL_CTX_set0_chain. Since ckch can be used to init more than one ctx with
openssl < 1.0.2 (commit 89f58073 for X509_chain_up_ref compatibility),
SSL_CTX_set1_chain compatibility is required.
This patch must be backported to 2.1.
http_find_header() is used to find the next occurrence of a header matching on
its name. When found, the matching header is returned with the corresponding
value. This value may be empty. Unfortunately, because of a bug, an empty value
makes the function fail.
This patch must be backported to 2.1, 2.0 and 1.9.
`eb` being tested above, `res` cannot be null, so the condition is
not needed and introduces potential dead code.
Also fix a typo in the associated comment.
This should fix issue #349
Reported-by: Илья Шипицин <chipitsine@gmail.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Loading a file for crl or ca-cert is really done with the same function in
OpenSSL, via X509_STORE_load_locations. Accordingly, the crl-file and ca-file
deduplication can share the same function.
ca-list can be extracted from ca-file already loaded in memory.
This patch set ca-list from deduplicated ca-file when needed
and share it in ca-file tree.
As a corollary, this will prevent file access for ca-list when
updating a certificate via CLI.
Typically a server line like:
'server-template srv 1-1000 *:443 ssl ca-file ca-certificates.crt'
loads ca-certificates.crt 1000 times, and it stays duplicated in memory.
Same case for bind lines: the ca-file is loaded for each certificate.
The same 'ca-file' can now be loaded one time only and stay deduplicated in
memory.
As a corollary, this will prevent file access for ca-file when
updating a certificate via CLI.
Building on a 32-bit platform produces these warnings in trace code:
src/stream.c: In function 'strm_trace':
src/stream.c:226:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 9 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " req=(%p .fl=0x%08x .ana=0x%08x .exp(r,w,a)=(%u,%u,%u) .o=%lu .tot=%llu .to_fwd=%u)",
^
src/stream.c:229:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 9 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " res=(%p .fl=0x%08x .ana=0x%08x .exp(r,w,a)=(%u,%u,%u) .o=%lu .tot=%llu .to_fwd=%u)",
^
src/mux_fcgi.c: In function 'fcgi_trace':
src/mux_fcgi.c:443:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " - VAL=%lu", *val);
^
src/mux_h1.c: In function 'h1_trace':
src/mux_h1.c:290:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " - VAL=%lu", *val);
^
Let's just cast the type to long. This should be backported to 2.1.
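The fix is the usual one for printing a size_t with a fixed-width format:

    /* cast so the format matches on both 32- and 64-bit platforms */
    chunk_appendf(&trace_buf, " - VAL=%lu", (unsigned long)*val);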
During the H1 message parsing, the host header is tested to be sure it matches
the request's authority, if defined. When there are multiple host headers, we
also take care they are all the same. Of course, these tests must only be
performed on the requests. A host header in a response has no special meaning.
This patch must be backported to 2.1.
This commit removes the explicit checks for `if (err)` before
passing `err` to `memprintf`. `memprintf` already checks itself
whether the `*out` parameter is `NULL` before doing anything.
This reduces the indentation depth and makes the code more readable,
because there is less boilerplate code.
Instead move the check into the ternary conditional when the error
message should be appended to a previous message. This is consistent
with the rest of ssl_sock.c and with the rest of HAProxy.
Thus this patch is the arguably cleaner fix for issue #374 and builds
upon
5f1fa7db86 and
8b453912ce
Additionally it fixes a few places where the check *still* was missing.
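The resulting idiom looks like this (a sketch based on the pattern
described above; the message text is arbitrary):

    /* before: redundant guard around the call */
    if (err)
        memprintf(err, "unable to load certificate");

    /* after: memprintf() is a no-op when err is NULL; the check only
     * remains in the ternary when chaining onto a previous message
     */
    memprintf(err, "%sunable to load certificate", err && *err ? *err : "");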
Since commit 88ebd40 ("MINOR: trace: add allocation of buffer-sized
trace buffers") we have a trace buffer allocated at boot time. But
there was a copy-paste error there making the test verify that the
trash was allocated instead of the trace buffer. The result is that
depending on the link order the test will either succeed or fail,
preventing haproxy from starting at all.
No backport is needed.
Christopher found another issue in the H2 backend implementation that
results from a miss in the H2 spec: the processing of a HEADERS frame
is always permitted in IDLE state, but this doesn't make sense on the
response path! And here when facing such a frame, we try to decode it
while we didn't allocate any stream, so we end up trying to fill the
idle stream's buffer (read-only) and crash.
What we're doing here is that if we get a HEADERS frame in IDLE state
from a server, we terminate the connection with a PROTOCOL_ERROR. No
such transition seems to be permitted by the spec but it seems to be
the only sane solution.
This fix must be backported as far as 1.9. Note that in 2.0 and earlier
there's no h2_frame_check_vs_state() function, instead the check is
inlined in h2_process_demux().
Tim Düsterhus found that the amount of sanitization we perform on HTTP
header field names received in H2 is insufficient. Currently we reject
upper case letters as mandated by RFC7540#8.1.2, but section 10.3 also
requires that intermediaries translating streams to HTTP/1 further
refine the filtering to also reject invalid names (which means any name
that doesn't match a token). There is a small trick here which is that
the colon character used to start pseudo-header names doesn't match a
token, so pseudo-header names fall into that category, thus we have to
swap the pseudo-header name lookup with this check so that we only check
from the second character (past the ':') in case of pseudo-header names.
Another possibility could have been to perform this check only in the
HTX-to-H1 transcoder but doing so would still expose the configured rules
and logs to such header names.
This fix must be backported as far as 1.8 since this bug could be
exploited and serve as the base for an attack. In 2.0 and earlier,
functions h2_make_h1_request() and h2_make_h1_trailers() must also
be adapted to sanitize requests coming in legacy mode.
Tim Düsterhus reported an annoying problem in the H2 decoder related to
an ambiguity in the H2 spec. The spec says in section 10.3 that HTTP/2
allows header field values that are not valid (since they're binary) and
at the same time that an H2 to H1 gateway must be careful to reject headers
whose values contain \0, \r or \n.
Till now, and for the sake of the ability to maintain end-to-end binary
transparency in H2-to-H2, the H2 mux wouldn't reject this since it does
not know what version will be used on the other side.
In theory we should in fact perform such a check when converting an HTX
header to H1. But this causes a problem as it means that all our rule sets,
sample fetches, captures, logs or redirects may still find an LF in a header
coming from H2. Also in 2.0 and older in legacy mode, the frames are instantly
converted to H1 and HTX couldn't help there. So this means that in practice
we must refrain from delivering such a header upwards, regardless of any
outgoing protocol consideration.
Applying such a lookup on all headers leaving the mux comes with a
significant performance hit, especially for large ones. A first attempt
was made at placing this into the HPACK decoder to refrain from learning
invalid literals but error reporting becomes more complicated. Additional
tests show that doing this within the HTX transcoding loop benefits from
the hot L1 cache, and that by skipping up to 8 bytes per iteration the
CPU cost remains within noise margin, around ~0.5%.
This patch must be backported as far as 1.8 since this bug could be
exploited and serve as the base for an attack. In 2.0 and earlier the
fix must also be added to functions h2_make_h1_request() and
h2_make_h1_trailers() to handle legacy mode. It relies on previous patch
"MINOR: ist: add ist_find_ctl()" to speed up the control bytes lookup.
All credits go to Tim for his detailed bug report and his initial patch.
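The 8-bytes-per-iteration lookup mentioned above can be sketched with the
classic SWAR "byte less than N" trick (an illustration of the principle,
not the exact ist_find_ctl() code; function names are hypothetical):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Return nonzero if one of the 8 bytes in <x> is a control
     * character (0x00-0x1F). Classic "hasless(x, 0x20)" bit trick,
     * exact for thresholds <= 128.
     */
    static inline uint64_t has_ctl(uint64_t x)
    {
        const uint64_t ones = 0x0101010101010101ULL;
        return (x - ones * 0x20) & ~x & (ones * 0x80);
    }

    static int contains_ctl(const char *p, size_t len)
    {
        uint64_t w;
        size_t i = 0;

        for (; i + 8 <= len; i += 8) {
            memcpy(&w, p + i, 8);          /* unaligned-safe load */
            if (has_ctl(w))
                return 1;
        }
        for (; i < len; i++)               /* tail bytes one by one */
            if ((unsigned char)p[i] < 0x20)
                return 1;
        return 0;
    }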
gcc complains rightfully:
src/ssl_sock.c: In function ‘ssl_sock_prepare_all_ctx’:
src/ssl_sock.c:5507:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_warning(errmsg);
^
src/ssl_sock.c:5509:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_alert(errmsg);
^
src/ssl_sock.c: In function ‘cli_io_handler_commit_cert’:
src/ssl_sock.c:10208:3: warning: format not a string literal and no format arguments [-Wformat-security]
chunk_appendf(trash, err);
Introduced in 8b453912ce.
Since commit 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER
instead of OPENSSL_VERSION_NUMBER") we restrict LibreSSL to the OpenSSL
1.0.1 API, to avoid breaking LibreSSL every minute. We set
HA_OPENSSL_VERSION_NUMBER to 0x1000107fL if LibreSSL is detected and
only allow curves to be configured if HA_OPENSSL_VERSION_NUMBER is at
least 0x1000200fL.
However, all relevant LibreSSL releases actually support setting curves,
which is now broken. Fix this by always allowing curve configuration when
using LibreSSL.
Reported on GitHub in issue #366.
Fixes: 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER instead
of OPENSSL_VERSION_NUMBER").
recent commit 8b453912ce ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
converted all error handling; in that patch we always test `err`, but
three of the tests are missing. I did not find a plausible explanation
for it.
this should fix issue #374
Fixes: 8b453912ce ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
Reported-by: Илья Шипицин <chipitsine@gmail.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The link to the known bugs page for the current version is built and
reported there. When it is a development version (less than 2 dots), a
link to the GitHub open issues is reported instead, as there's no way to
be sure about the current situation in this case and it's better that
users report their trouble there.
As discussed on Discourse here:
https://discourse.haproxy.org/t/haproxy-branch-support-lifetime/4466
it's not always easy for end users to know the lifecycle of the version
they are using. This patch introduces a "Status" line in the output of
"haproxy -vv" indicating whether it's a development, stable, long-term
supported version, possibly with an estimated end of life for the branch
when it can be anticipated (e.g. for stable versions). This field should
be adjusted when creating a major release to reflect the new status.
It may make sense to backport this to other branches to clarify the
situation.
Apply the configuration of the ssl_bind_conf on the generated SSL_CTX.
It's a little bit hacky at the moment because the ssl_sock_prepare_ctx()
function was made for the configuration parsing, not for being used at
runtime. Only the 'verify' bind keyword seems to cause a file access so
we prevent it before calling the function.
Rework ssl_sock_prepare_ctx() so it fills a buffer with the error
messages instead of using ha_alert()/ha_warning(). Also returns an error
code (ERR_*) instead of the number of errors.
It was noted in #48 that there are times when a configuration
may use the server-template directive with SRV records and
simultaneously want to control weights using an agent-check or
through the runtime api. This patch adds a new option
"ignore-weight" to the "resolve-opts" directive.
When specified, any weight indicated within an SRV record will
be ignored. This is for both initial resolution and ongoing
resolution.
The previous patch on this function (36b536d6c "BUG/MEDIUM: stream-int: Don't
loose events on the CS when an EOS is reported") contains a bug. The return
value is based on the conn-stream's flags. But it may be reset if the CS is
closed. Ironically it was exactly the purpose of this patch...
This patch must be backported to 2.0 and 1.9.
When no data filter is registered on a channel, if the message length is known,
the HTTP payload is infinitely forwarded to save calls to process_stream(). When
we finally fall back again in XFER_BODY analyzers, we detect the end of the
message by checking channel flags. If CF_EOI or CF_SHUTR is set, we switch the
message in DONE state. For CF_EOI, it is relevant. But not for CF_SHUTR: a
shutdown for reads without the end of input must be interpreted as an abort for
messages with a known length.
Because of this bug, some aborts are not properly handled and reported. Instead,
we interpret it as a legitimate shutdown.
This patch must be backported to 2.0.
There are two issues with the way tunnel mode is detected on the response
path. First, when a response with an unknown content length is handled, the
request is also switched in tunnel mode. It is obviously wrong. Because it was
done on the server side only (so not during the request parsing), it has no
noticeable effect.
The second issue is about the way protocol upgrades are handled. The request is
switched in tunnel mode from the time the 101 response is processed. So an
unfinished request may be switched in tunnel mode too early. It is not a common
use, but a protocol upgrade on a POST is allowed. Thus, parsing of the payload
may be hijacked. It is especially bad for chunked payloads.
Now, the conditions to switch the request in tunnel mode reflect what should be
done, especially for the second issue. We wait for the end of the request to
switch it in tunnel mode.
This patch must be backported to 2.0 and 1.9. Note that these versions are only
affected by the second issue but the patch cannot be easily split.
Some BUG_ON() tests emit a warning because of a potential null pointer
dereference on an HTX block. In fact, it should never happen, but now, GCC is
happy.
This patch must be backported to 2.0.
In si_cs_recv(), when a shutdown for reads is handled, the conn-stream may be
closed. It happens when the output channel is closed for writes or if
SI_FL_NOHALF is set on the stream-interface. In this case, conn-stream's flags
are reset. Thus, if an error (CS_FL_ERROR) or an end of input (CS_FL_EOI) is
reported by the mux, the event is lost. si_cs_recv() does not report these
events by itself. It relies on si_cs_process() to report them to the
stream-interface and/or the channel.
For instance, if CS_FL_EOS and CS_FL_EOI are set by the H1 multiplexer during a
call to si_cs_recv() on the server side, if the conn-stream is closed (read0 +
SI_FL_NOHALF), the CS_FL_EOI flag is lost. Thus, this may lead the stream to
interpret it as a server abort.
Now, conn-stream's flags are processed at the end of si_cs_recv(). The function
is responsible to set the right flags on the stream-interface and/or the
channel. Due to this patch, the function is now almost linear. Except some early
checks at the beginning, there is only one return statement. It also fixes a
potential bug because of an inconsistency between the splicing and the buffered
receipt. In the first case, CS_FL_EOS is handled before errors on the connection
or the conn-stream. In the second one, it is the opposite.
This patch must be backported to 2.0 and 1.9.
There is a compiler warning after commit a9363eb6 ("BUG/MEDIUM: ssl:
'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1"):
src/ssl_sock.c: In function 'ssl_sock_prepare_ctx':
src/ssl_sock.c:4481:4: error: statement with no effect [-Werror=unused-value]
Fix it by adding a (void)
The peer flags (->flags member of peer struct) are reset by __peer_session_deinit()
function. PEER_F_ALIVE flag which is used by the heartbeat part of the peer protocol
to mark a peer as being alive was not reset by this function. This simple
patch adds the missing statement.
Note that, at this time, there was no identified issue due to this missing reset.
Must be backported to 2.0.
Upon a reexec_on_failure, if the process tried to exit after the
initialization of the process structure but before it was filled with a
PID, the PID in the mworker_proc structure is set to -1.
In this particular case the -sf argument is filled with -1 and haproxy
will exit with the usage message because of that argument.
Should be backported in 2.0.
This patch introduces the new CLI command 'abort ssl cert' which aborts
an ongoing transaction and frees its content.
This command takes the filename of the transaction as an argument.
As the peers protocol expects to parse at least one encoded integer value for
each stick-table data field even when not configured on the local side,
about the "server_name" data field we must emit something even if it has
not been set (no server was configured for instance).
As this data field is made of first one encoded integer which is the length
of the remaining data (the dictionary cache entry), we encode the length 0
when emitting such an absent dictionary cache entry.
On the remote side, when we decode such an integer with 0 as value, we stop
parsing the data field and that's it.
Must be backported to 2.0.
This patch adds three counters to help in debugging peers protocol issues
to "peer" struct:
->no_hbt counts the number of reconnection periods without receiving a heartbeat
->new_conn counts the number of reconnections after ->reconnect timeout expirations.
->proto_err counts the number of protocol errors.
Add RX/TX heartbeat counters to "peer" struct to have an idea of which
peers are alive or not.
Dump these counters values on the CLI via "show peers" command.
This patch enables us to dump the stick-table information of remote or local
peers without an already opened peer session. This may also be the case for the
local peer during synchronizations with an old process (reload).
Certificate selection in client_hello_cb (openssl >= 1.1.1) correctly
handles crt-list neg filter. Certificate selection for openssl < 1.1.1
has not been touched for a while: the crt-list neg filter is not the same
as its counterpart and is wrong. Fix it to mimic the behavior of its
counterpart.
It should be backported as far as 1.6.
With CLI cert update, sni_ctx can be removed at runtime. ssl_pkey_info_index
ex_data is filled with one of the sni_ctx.kinfo pointers but an SSL_CTX can be
shared between sni_ctx entries. Removing and freeing a sni_ctx can lead to a
segfault when ssl_pkey_info_index ex_data is used (in ssl_sock_get_pkey_algo). Removing the
dependency on ssl_pkey_info_index ex_data is the easiest way to fix the issue.
since the introduction of mworker, the setuid/setgid was duplicated in
two places; try to improve that by creating a dedicated function.
this patch does not introduce any functional change.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
in mworker mode used with uid/gid settings, it was not possible to get
a coredump despite the set-dumpable option.
indeed the prctl(2) manual page specifies the dumpable attribute is reverted
to `/proc/sys/fs/suid_dumpable` in a few conditions such as process
effective user and group are changed.
this patch moves the whole set-dumpable logic before the polling code in
order to catch all possible cases where we could have changed the
uid/gid. It however does not cover the possible segfault at startup.
this patch should be backported in 2.0.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Allow the sc-set-gpt0 action to set GPT0 to a value dynamically evaluated from
its <expr> argument (in addition to the existing static <int> alternative).
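For illustration, usage might look like this (a hypothetical example; any
fetch returning an integer can be used as the expression):

    # store the request length in GPT0 instead of a static integer
    tcp-request content sc-set-gpt0(0) req.len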
The copy of the startup logs used to rely on a memory area re-allocated
on the fly, that would attempt to be delivered at once over the CLI. But
if it's too large (too many warnings) it will take time to start up, and
may not even show up on the CLI as it doesn't fit in a buffer.
The ring buffer infrastructure solves all this with no more code, let's
switch to this instead. It simply requires a parsing function to attach
the ring via ring_attach_cli() and all the rest is automatically handled.
Initially this was imagined as a code cleanup, until a test with a config
involving 100k backends and just one occurrence of
"load-server-state-from-file global" in the defaults section took approx
20 minutes to parse due to the O(N^2) cost of concatenating the warnings
resulting in ~1 TB of data to be copied, while it took only 0.57s with
the ring.
Ideally this patch should be backported to 2.0 and 1.9, though it relies
on the ring infrastructure which will then also need to be backported.
Configs able to trigger the bug are uncommon, so another workaround for
older versions without backporting the rings would consist in simply
limiting the size of the error message in print_message() to something
always printable, which will only return the first errors.
ring_attach_cli() is called by the keyword parsing function to dump a
ring to the CLI. It can only work with a specific handler and release
function. Let's make it set them appropriately instead of having the
caller know these functions. This way adding a command to dump a ring
is as simple as declaring a parsing function calling ring_attach_cli().
It was set to MAX_SYSLOG_LEN (1K). This is a bit short to print debug
traces, especially when part of a buffer is dumped. Now, the maximum length is
set to BUFSIZE (16K).
It is mandatory to process the input one more time to add the EOM in the HTX
message and to set CS_FL_EOI on the conn-stream. Otherwise, in the stream, a
SHUTR will be reported on the corresponding channel without the EOI. It may be
erroneously interpreted as an abort.
This patch must be backported to 2.0 and 1.9.
Errors during the payload or the trailers parsing are reported with the
HTX_FL_PARSING_ERROR flag on the HTX message and not a negative return
value. This change was introduced when the functions converting an H1 message to
an HTX one were moved to a dedicated file. But the h1 mux was not fully updated
accordingly.
No backport needed except if the commits about file h1_htx.c are backported.
Now, for the sessions, the maximum times (queue, connect, response, total) are
reported in addition to the averages over the last 1024 connections. These
values are called qtime_max, ctime_max, rtime_max and ttime_max.
This patch is related to #272.
For backends and servers, some average times for the last 1024 connections are
already calculated. For the moment, the averages for the time passed in the
queue, the connect time, the response time (for HTTP sessions only) and the total
time are calculated. Now, in addition, the maximum time observed for these
values are also stored.
In addition, these new counters are cleared like all other max values with the CLI
command "clear counters".
This patch is related to #272.
This change makes the payload filtering uniform between TCP and HTTP
filters. Now, in TCP, like in HTTP, there is only one callback responsible to
forward data. Thus, old callbacks, tcp_data() and tcp_forward_data(), are
replaced by a single callback function, tcp_payload(). This new callback gets
the offset in the payload to (re)start the filtering and the maximum amount of
data it can forward. It is the filter's responsibility to be compatible with HTX
streams. If not, it must not set the flag FLT_CFG_FL_HTX.
Because of this change, nxt and fwd offsets are no longer needed. Thus they are
removed from the filter structure with their update functions,
flt_change_next_size() and flt_change_forward_size(). Moreover, the trace filter
has been updated accordingly.
This patch breaks the compatibility with the old API. Thus it should probably
not be backported. But, AFAIK, there is no TCP filter, thus the breakage is very
limited.
For now, TCP callbacks are incompatible with the HTX streams because they are
designed to manipulate raw buffers. A new callback will probably be added to be
used in both modes, raw and HTX. So, for HTX streams, these callbacks are
ignored. This should not be a real problem because there are no known filters,
except the trace filter, implementing these callbacks.
This patch must be backported to 2.0 and 1.9.
A corner case was opened in the listener_accept() code by commit 3f0d02bbc2
("MAJOR: listener: do not hold the listener lock in listener_accept()"). The
issue is when one listener (or a group of) managed to eat all the proxy's or
all the process's maxconn, and another listener tries to accept a new socket.
This results in the atomic increment detecting the excess connection count
and immediately aborting, without pausing the listener, thus the call is
immediately performed again. This doesn't happen when the test is run on a
single listener because this listener got limited when crossing the limit.
But with 2 or more listeners, we don't have this luxury.
The solution consists in limiting the listener as soon as we have to
decline accepting an incoming connection. This means that the listener
will not be marked full yet if it gets the exact connection count but
this is not a problem in practice since all other listeners will only be
marked full after their first attempt. Thus from now on, a listener is
only full once it has already failed taking an incoming connection.
This bug was definitely responsible for the unreproducible occasional
reports of high CPU usage showing epoll_wait() returning immediately
without accepting an incoming connection, like in bug #129.
This fix must be backported to 1.9 and 1.8.
The "shutdown sessions" admin-mode command used to open-code the list
traversal while there's already a function for this: srv_shutdown_streams().
Better use it.
The "shutdown session server" command used to open-code the list traversal
while there's already a function for this: srv_shutdown_streams(). Better
use it.
Since previous commit a132e5efa9 ("BUG/MEDIUM: Make sure we leave the
session list in session_free().") it's pointless to delete the conn
element inside "if" blocks given that the second test is always true
as well. Let's simplify this with a single LIST_DEL_INIT() before the
test.
In session_free(), if we're about to destroy a connection that had no mux,
make sure we leave the session_list before calling conn_free(). Otherwise,
conn_free() would call session_unown_conn(), which would potentially free
the associated srv_list, but session_free() also frees it, so that would
lead to a double free, and random memory corruption.
This should be backported to 1.9 and 2.0.
There is a very short race in the queues which happens in the following
situation:
- stream A on thread 1 is being processed by a server
- stream B on thread 2 waits in the backend queue for a server
- stream B on thread 2 is fed up with waiting and expires, calls
stream_free() which calls pendconn_free(), which sees the
stream attached
- at the exact same instant, stream A finishes on thread 1, sees
one stream is waiting (B), detaches it and wakes it up
- stream B continues pendconn_free() and calls pendconn_unlink()
- pendconn_unlink() now detaches the node again and performs a
second deletion (harmless since idempotent), and decrements
srv/px->nbpend again
=> the number of connections on the proxy or server may reach -1 if/when
this race occurs.
It is extremely tight as it can only occur during the test on p->leaf_p
though it has been witnessed at least once. The solution consists in
testing leaf_p again once the lock is held to make sure the element was
not removed in the mean time.
This should be backported to 2.0 and 1.9, probably even 1.8.
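The fix follows the classic recheck-under-lock pattern; a sketch with
assumed names (the ebtree node's leaf_p pointer is non-NULL while the
pendconn is still queued):

    /* cheap unlocked test first: most pendconns are already detached */
    if (p->node.node.leaf_p) {
        HA_SPIN_LOCK(PROXY_LOCK, &p->px->lock);
        /* recheck under the lock: another thread may have detached
         * and woken it up in the meantime
         */
        if (p->node.node.leaf_p) {
            eb32_delete(&p->node);
            p->px->nbpend--;
        }
        HA_SPIN_UNLOCK(PROXY_LOCK, &p->px->lock);
    }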
We need to call vars_init() when the list is empty otherwise we
can't use variables in the response scope. This regression was
introduced by cda7f3f5 (MINOR: stream: don't prune variables if
the list is empty).
The following config reproduces the issue:
defaults
    mode http
frontend in
    bind *:11223
    http-request set-var(req.foo) str("foo") if { path /bar }
    http-request set-header bar %[var(req.foo)] if { var(req.foo) -m found }
    http-response set-var(res.bar) str("bar")
    http-response set-header foo %[var(res.bar)] if { var(res.bar) -m found }
    use_backend out
backend out
    server s1 127.0.0.1:11224
listen back
    bind *:11224
    http-request deny deny_status 200
> GET /ba HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
> GET /bar HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
< foo: bar
This must be backported as far as 1.9.
Documentation states that the interval between 2 DNS resolution is
driven by "timeout resolve <time>" directive.
From a code point of view, this was applied unless the latest status of
the resolution was VALID. In such case, "hold valid" was enforce.
This is a bug, because "hold" timers are not here to drive how often we
want to trigger a DNS resolution, but rather how long we want to keep a piece
of information when the status of the resolution itself has changed.
This avoids flapping and prevents shutting down an entire backend when a
DNS server is not answering.
This issue was reported by hamshiva in github issue #345.
Backport status: 1.8
As reported by David Birdsong on the ML, the HTTP action do-resolve does
not use the DNS cache.
Actually, the action is "registred" to the resolution for said name to
be resolved and wait until an other requester triggers the it. Once the
resolution is finished, then the action is updated with the result.
To trigger this, you must have a server with runtime DNS resolution
enabled and run a do-resolve action with the same fqdn AND they use the
same resolvers section.
This patch fixes this behavior by ensuring the resolution associated to
the action has a valid answer which is not considered as expired. If
those conditions are valid, then we can use it (it's the "cache").
Backport status: 2.0
Since the legacy HTTP mode was removed, the stream is always released at the end
of each HTTP transaction and a new one is created to handle the next request for
keep-alive connections. So the HTTP transaction is no longer reset and the
function http_reset_txn() can be removed.
All TCP and HTTP captures are stored in 2 arrays, one for the request and
another for the response. In HAProxy 1.5, these arrays are part of the HTTP
transaction and thus are released during its cleanup. Because in this version,
the transaction is part of the stream (in 1.5, streams are still called
sessions), the cleanup is always performed, for HTTP and TCP streams.
In HAProxy 1.6, the HTTP transaction was moved out from the stream and is now
dynamically allocated only when required (because of an HTTP proxy or an HTTP
sample fetch). In addition, still in 1.6, the captures arrays were moved from
the HTTP transaction to the stream. This way, it is still possible to capture
elements from TCP rules for a full TCP stream. Unfortunately, the release is
still exclusively performed during the HTTP transaction cleanup. Thus, for a TCP
stream where the HTTP transaction is not required, the TCP captures, if any, are
never released.
Now, all captures are released when the stream is freed. This fixes the memory
leak for TCP streams. For streams with an HTTP transaction, the captures are now
released when the transaction is reset and not systematically during its
cleanup.
This patch must be backported as far as 1.6.
Runtime traces are now supported for the streams, only if compiled with
debug. process_stream() is covered as well as TCP/HTTP analyzers and filters.
In traces, the first argument is always a stream. So it is easy to get the info
about the channels and the stream-interfaces. The second argument, when defined,
is always an HTTP transaction. And the third one is an HTTP message. The trace
message is adapted to report HTTP info when possible.
Names of these macros may conflict with the macros of the runtime
tracing mechanism. So the prefix "FLT_" has been added to avoid any ambiguities.
Despite the addition of the mux layer, no change has been made to how TCP
splicing is enabled in process_stream(). We still check if the transport layers
on both sides support splicing, but we don't check the muxes' support. So it is
possible to start to splice data with an unencrypted H2 connection on a side and
an H1 connection on the other. This leads to a freeze of the stream until a
client or server timeout is reached.
This patch fixes a part of issue #356. It must be backported as far as 1.8.
The mux H1 announces the support of the TCP splicing. It only works for payload
data. It works for messages with an explicit content-length or for tunnelled
data. For chunked messages, the mux H1 should normally not try to xfer more than
the current chunk through the pipe. Unfortunately, this works on the read side
but the send is completely bogus. During the output formatting, the announced
size of chunks does not handle the size that will be spliced. Because there is
no formatting when spliced data are sent, the produced message is malformed and
rejected by the peer.
For now, because it is quick and simple, the TCP splicing is disabled for
chunked messages. I will try to enable it again in a proper way. I don't know
for now if it will be backportable in previous versions. This will depend on the
amount of changes required to handle it.
This patch fixes a part of the issue #356. It must be backported to 2.0 and 1.9.
This patch is easy to review: let's call the parse_logsrv() function to parse
the "log" directive, as is already done for other sections such as proxies.
This enables us to log incoming TCP connections for the listeners in "peers"
sections.
Update the documentation for "peers" section.
If the SSL_CTX of a previous instance (ckch_inst) was used as a
default_ctx, replace the default_ctx of the bind_conf by the first
SSL_CTX inserted in the SNI tree.
Use the RWLOCK of the sni tree to handle the change of the default_ctx.
When trying to update a certificate <file>.{rsa,ecdsa,dsa}, but this one
does not exist and if <file> was used as a regular file in the
configuration, the error was ambiguous. Correct it so we can return a
certificate not found error.
Commit bc6ca7c ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
broke the ability to commit a unique certificate which does not use a
bundle extension .{rsa,ecdsa,dsa}.
When doing an 'ssl set cert' with a certificate which does not exist in
configuration, the appctx->ctx.ssl.old_ckchs->path was duplicated while
app->ctx.ssl.old_ckchs was NULL, resulting in a NULL dereference.
Move the code so the 'not referenced' error is done before this.
The sample fetch can get srv_name without iterating over
`core.backends["bk"].servers`.
Then we can get the Server class quickly via
`core.backends[txn.f:be_name()].servers[txn.f:srv_name()]`.
Issue #342
Fix 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP") was not
enough.
[wla: It will probably be better later to put the #ifdef in the
functions so they can return an error if they are not implemented]
Since we now have full URIs with h2, stats may fail to work over H2
so we must carefully only check the path there if the stats URI was
passed with a path only. This way it remains possible to intercept
proxy requests to report stats on explicit domains but it continues
to work as expected on origin requests.
No backport needed.
In case a stream tries to send on a connection error, we must report the
error so that the stream interface keeps the data available and may safely
retry on another connection. Till now this would happen only before the
connection was established, not in case of a failed handshake or an early
GOAWAY for example.
This should be backported to 2.0 and 1.9.
If a connection faces an error or a timeout, it must be removed from its
idle list ASAP. We certainly don't want to risk sending new streams on
it.
This should be backported to 2.0 (replacing MT_LIST_DEL with LIST_DEL_LOCKED)
and 1.9 (there's no lock there, the idle lists are per-thread and per-server
however a LIST_DEL_INIT will be needed).
If an H2 mux has met an error, we must not report available streams
anymore, or it risks accumulating new streams while not being able
to process them.
This should be backported to 2.0 and 1.9.
It can be sometimes interesting to have a timestamp with a
resolution of less than a second.
It is currently painful to obtain this, because concatenation
of date and date_us leads to a shorter timestamp during the first
100ms of a second, which is not parseable and needs ugly ACLs
in configuration to prepend 0s when needed.
To improve this, add an optional <unit> parameter to date sample
to report an integer with desired unit.
Also support this unit in http_date converter to report
a date string with sub-second precision.
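For example (hypothetical usage; "ms" taken as one of the supported
units and the header name is arbitrary):

    # emit a single millisecond-resolution timestamp instead of
    # concatenating date and date_us
    http-request set-header X-Request-Ts %[date(0,ms)]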
The domain option of the cookie keyword allows one to define which domain or
domains should use the cookie value of a cookie-based server
affinity. If the domain does not start with a dot, the user agent should
only use the cookie on hosts that match the provided domains. If the
configured domain starts with a dot, the user agent can use the cookie
with any host ending with the configured domain.
The haproxy config parser helps the admin by warning about a potentially buggy
config: defining a domain without an embedded dot which does not start
with a dot, which is forbidden by the RFC.
The current condition to issue the warning implements RFC2109. This
change updates the implementation to RFC6265 which allows domain without
a leading dot.
Should be backported to all supported versions. The feature exists at least
since 1.5.
Remove the leftovers of the certificate + bundle updating in 'ssl set
cert' and 'commit ssl cert'.
* Remove the it variable in appctx.ctx.ssl.
* Stop doing everything twice.
* Indent
This patch splits the 'set ssl cert' CLI command into 2 commands.
The previous way of updating the certificate on the CLI was limited with
the bundles. It was only able to apply one of the three parts of the
certificate during an update, which means that we needed 3 updates to
update a full 3-certs bundle.
It was also not possible to atomically apply several parts of a
certificate with the ability to rollback on error. (For example applying
a .pem, then a .ocsp, then a .sctl)
The command 'set ssl cert' will now duplicate the certificate (or
bundle) and update it in a temporary transaction.
The second command 'commit ssl cert' will commit all the changes made
during the transaction for the certificate.
This commit breaks the ability to update a certificate which was used as
a unique file and as a bundle in the HAProxy configuration. This way of
using the certificates wasn't making any sense.
Example:
// For a bundle:
$ echo -e "set ssl cert localhost.pem.rsa <<\n$(cat kikyo.pem.rsa)\n" | socat /tmp/sock1 -
Transaction created for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.dsa <<\n$(cat kikyo.pem.dsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.ecdsa <<\n$(cat kikyo.pem.ecdsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo "commit ssl cert localhost.pem" | socat /tmp/sock1 -
Committing localhost.pem.
Success!
this patch introduces a strict-limits parameter which enforces the
setrlimit setting instead of a warning. This option can be forcibly
disabled with the "no" keyword.
The general aim of this patch is to avoid bad surprises on a production
environment where you change the maxconn for example, a new fd limit is
calculated, but cannot be set because of sysfs setting. In that case you
might want to have an explicit failure to be aware of it before seeing
your traffic going down. During a global rollout it is also useful to
explicitly fail as most progressive rollouts would simply check the
general health check of the process.
As discussed, plan to use the strict by default mode starting from v2.3.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
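In the configuration, this looks like (illustrative snippet):

    global
        strict-limits        # refuse to start if a setrlimit() call fails
        # use "no strict-limits" to restore the warning-only behavior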
in global config parsing, we currently expect to have a possible no
keyword (KWN_NO), but we never allow it in config parsing.
another patch could have been to simply remove the code handling a
possible KWN_NO.
take this opportunity to update documentation of set-dumpable.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In connect_server(), if we're reusing a connection, only use SF_SRV_REUSED
if the connection is fully ready. We may be using a multiplexed connection
created by another stream that is not yet ready, and may fail.
If we set SF_SRV_REUSED, process_stream() will then not wait for the timeout
to expire, and retry to connect immediately.
This should be backported to 1.9 and 2.0.
This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518.
Add a new method, ctl(), to muxes. It uses an "enum mux_ctl_type" to
let it know which information we're asking for, and can output it either
directly by returning the expected value, or by using an optional
"output" argument.
Right now, the only known mux_ctl_type is MUX_STATUS, that will return 0 if
the mux is not ready, or MUX_STATUS_READY if the mux is ready.
We probably want to backport this to 1.9 and 2.0.
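A sketch of a caller-side use, with the signature assumed from the
description above:

    /* ask the mux whether it is ready; muxes without a ctl() method
     * are considered ready
     */
    static int conn_mux_ready(struct connection *conn)
    {
        if (!conn->mux->ctl)
            return 1;
        return !!(conn->mux->ctl(conn, MUX_STATUS, NULL) & MUX_STATUS_READY);
    }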
We previously relied on chunk_cat(dst, b_fromist(src)) for this but it
is not reliable as the allocated buffer is inside the expression and
may be on a temporary stack. While it's possible to allocate stack space
for a struct and return a pointer to it, it's not possible to initialize
it from a temporary variable to prevent arguments from being evaluated
multiple times. Since this is only used to append an ist after a chunk,
let's instead have a chunk_istcat() function to perform exactly this
from a native ist.
The only call place (URI computation in the cache) was updated.
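A minimal sketch of such a function, assuming the usual struct buffer
and struct ist layouts:

    /* append the ist <src> after the chunk <chk>; returns 0 if it
     * does not fit, 1 on success
     */
    static inline int chunk_istcat(struct buffer *chk, const struct ist src)
    {
        if (chk->data + src.len > chk->size)
            return 0;
        memcpy(chk->area + chk->data, src.ptr, src.len);
        chk->data += src.len;
        return 1;
    }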
Actually gcc believes it has detected a possible truncation but it
cannot since the output string is necessarily at least one char
shorter than what it expects. However addressing it is easy and
removes the need for an intermediate copy so let's do it.
The per-thread UUID string produced by generate_pseudo_uuid() could be
off by one character due to a too small size limit in snprintf(). In
practice the UUID remains large enough to avoid any collision though.
This should be backported to 2.0 and 1.9.
The gcc warning about format truncation in get_gmt_offset() is annoying
since we always call it with a valid time thus it cannot fail. However
it's true that nothing guarantees that future code will not reuse this
function incorrectly, so better enforce the modulus on one day and
shut the warning.
Rework the 'set ssl cert' IO handler so it is clearer.
Use its own SETCERT_ST_* states instead of the STAT_ST ones.
Use an inner loop in SETCERT_ST_GEN and SETCERT_ST_INSERT to do the work
for both the certificate and the bundle.
The io_release() is now called only when the CKCH spinlock is taken so
we can unlock during a release without any condition.
Since commit 90b098c ("BUG/MINOR: cli: don't call the kw->io_release if
kw->parse failed"), the io_release() callback is not called anymore when
the parse() failed. Call it directly on the error path of the
cli_parse_set_cert() function.
This bug is pretty pernicious and has serious consequences: in 2.1, an
infinite loop in process_stream() because the backend stream-interface remains
in the ready state (SI_ST_RDY). In 2.0, a call in loop to process_stream()
because the stream-interface remains blocked in the connect state
(SI_ST_CON). In both cases, it happens after a connection retry attempt. In 1.9,
it seems not to happen. But it may be just by chance, or because it is
harder to get the right conditions to trigger the bug. However, reading the code,
the bug seems to exist too.
Here is how the bug happens in 2.1. When we try to establish a new connection to
a server, the corresponding stream-interface is first set to the connect state
(SI_ST_CON). When the underlying connection is known to be connected (the flag
CO_FL_CONNECTED set), the stream-interface is switched to the ready state
(SI_ST_RDY). It is a transient state between the connect state (SI_ST_CON) and
the established state (SI_ST_EST). It must be handled on the next call to
process_stream(), which is responsible to operate the transition. During all
this time, errors can occur. A connection error or a client abort. The transient
state SI_ST_RDY was introduced to let a chance to process_stream() to catch
these errors before considering the connection as fully established.
Unfortunately, if a read0 is caught in states SI_ST_CON or SI_ST_RDY, it is
possible to have a shutdown without a transition to SI_ST_DIS (in fact, here,
SI_ST_CON is switched to SI_ST_RDY). This happens if the request was fully
received and analyzed. In this case, the flag SI_FL_NOHALF is set on the backend
stream-interface. If an error is also reported during the connect, the behavior
is undefined because an error is returned to the client and a connection retry
is performed. So on the next connection attempt to the server, if another error
is reported, a client abort is detected. But the shutdown for writes was already
done. So the transition to the state SI_ST_DIS is impossible. We stay in the
state SI_ST_RDY. Because it is a transient state, we loop in process_stream() to
perform the transition.
It is hard to understand how the bug happens reading the code and even harder to
explain. But there is a trivial way to hit the bug by sending h2 requests to a
server only speaking h1. For instance, with the following config :
listen tst
    bind *:80
    server www 127.0.0.1:8000 proto h2 # in reality, it is an HTTP/1.1 server
It is a configuration error, but it is an easy way to observe the bug. Note it
may happen with a valid configuration.
So, after a careful analysis, it appears that si_cs_recv() should never be
called for a not fully established stream-interface. This way the connection
retries will be performed before reporting an error to the client. Thus, if a
shutdown is performed because a read0 is handled, the stream-interface is
unconditionally set to the transient state SI_ST_DIS.
This patch must be backported to 2.0 and 1.9. However on these versions, this
patch reveals a design flaw about connections and a bad way to perform the
connection retries. We are working on it.
In h2_send(), when something is sent, we remove the flags
(H2_CF_MUX_MFULL|H2_CF_DEM_MROOM) on the h2 connection. This way, we are able to
wake up all streams waiting to send data. Unfortunately, these flags are
unconditionally removed, even when nothing was sent. So if the h2c is blocked
because the mux buffers are full and we are unable to send anything, all streams
in the send_list are woken up for nothing. Now, we only remove these flags if at
least one send succeeds.
This patch must be backported to 2.0.
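A minimal sketch of the corrected logic (flag values are illustrative, not
HAProxy's real ones):

    #include <stddef.h>

    #define H2_CF_MUX_MFULL 0x01u   /* illustrative values */
    #define H2_CF_DEM_MROOM 0x02u

    /* Only clear the "mux full" flags when at least one send
     * succeeded, so waiting streams are not woken up for nothing. */
    static void h2_send_done_sketch(unsigned int *h2c_flags, size_t sent)
    {
        if (sent > 0)
            *h2c_flags &= ~(H2_CF_MUX_MFULL | H2_CF_DEM_MROOM);
    }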
The io_release() callback of the cli_kw is supposed to be used to clean up
what an io_handler() has done. It is called once the work in the IO
handler is finished, or when the connection was aborted by the client.
This patch fixes a bug where the io_release callback was called even
when the parse() callback failed, which means that io_release() could be
called even though the io_handler() was never called.
Should be backported to every version that has a cli_kw->release()
(as far as 1.7).
As reported in issue #343, there is one case where a NULL stream can
still be dereferenced, when getting &s->txn->flags. Let's protect all
assignments to stay on the safe side for future additions.
No backport is needed.
Debug commands will usually mark the fate of the process. We'd rather
have them counted and visible in a core or in stats output than trying
to guess how a flag combination could happen. The counter is only
incremented when the command is about to be issued however, so that
failed attempts are ignored.
Instead of relying on DEBUG_DEV for most debugging commands, which is
limiting, let's condition them to expert mode. Only one ("debug dev exec")
remains conditioned to DEBUG_DEV because it can have a security implication
on the system. The commands are not listed unless "expert-mode on" was first
entered on the CLI:
> expert-mode on
> help
debug dev close <fd> : close this file descriptor
debug dev delay [ms] : sleep this long
debug dev exec [cmd] ... : show this command's output
debug dev exit [code] : immediately exit the process
debug dev hex <addr> [len]: dump a memory area
debug dev log [msg] ... : send this msg to global logs
debug dev loop [ms] : loop this long
debug dev panic : immediately trigger a panic
debug dev stream ... : show/manipulate stream flags
debug dev tkill [thr] [sig] : send signal to thread
> debug dev stream
Usage: debug dev stream { <obj> <op> <value> | wake }*
<obj> = {strm | strm.f | sif.f | sif.s | sif.x | sib.f | sib.s | sib.x |
txn.f | req.f | req.r | req.w | res.f | res.r | res.w}
<op> = {'' (show) | '=' (assign) | '^' (xor) | '+' (or) | '-' (andnot)}
<value> = 'now' | 64-bit dec/hex integer (0x prefix supported)
'wake' wakes the stream assigned to 'strm' (default: current)
Some commands like the debug ones are not enabled by default but can be
useful on some production environments. In order to avoid the temptation
of using them incorrectly, let's introduce an "expert" mode for a CLI
connection, which allows some commands to appear and be used. It is
enabled by command "expert-mode on" which is not listed by default.
This function adds some control by verifying that the target address is
really readable. It will not protect against writing to wrong places,
but will at least protect against a large number of mistakes such as
incorrectly copy-pasted addresses.
This new "debug dev stream" command allows to manipulate flags, timeouts,
states for streams, channels and stream interfaces, as well as waking a
stream up. These may be used to help reproduce certain bugs during
development. The operations are performed to the stream assigned by
"strm" which defaults to the CLI's stream. This stream pointer can be
chosen from one of those reported in "show sess". Example:
socat - /tmp/sock1 <<< "debug dev stream strm=0x1555b80 req.f=-1 req.r=now wake"
We never enter val_fc_time_value when an associated fetcher such as `fc_rtt` is
called without an argument, meaning `type == ARGT_STOP` will never be true, and
so the default `data.sint = TIME_UNIT_MS` will never be set. Remove this part to
avoid thinking the default data.sint is set to ms while reading the code.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
[Cf: This patch may safely be backported as far as 1.7. But no matter if not.]
Commit 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP")
introduced a bug in which we iterate outside the array during a 'set ssl
cert' if we didn't build with the OCSP or the SCTL.
To avoid affecting the traffic too much during a certificate update,
create the SNIs in an IO handler which yields every 10 ckch instances.
This way haproxy continues to respond even if we try to update a
certificate which has 50,000 instances.
When updating a certificate from the CLI, it is not possible to revert
some of the changes if part of the certificate update failed. We now
create a copy of the ckch_store for the changes so we can revert back
if something goes wrong.
Even if the ckch_store was affected before this change, it wasn't
affecting the SSL_CTXs used for the traffic. It was only a problem if we
tried to update a certificate after having failed to do it the first time.
The new ckch_store is also linked to the new sni_ctxs so it's easy to
insert the sni_ctxs before removing the old ones.
ssl_sock_copy_cert_key_and_chain() copies the content of a
<src> cert_key_and_chain to a <dst>.
It increases the refcount on every SSL structure (X509, DH,
private key...) and allocates new buffers for the other fields.
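A hedged sketch of the refcount part (OpenSSL >= 1.1.0 up_ref helpers; the
struct layout is an assumption, and error unwinding is omitted for brevity):

    #include <openssl/evp.h>
    #include <openssl/x509.h>

    struct ckch_sketch {
        X509 *cert;
        EVP_PKEY *key;
        /* ... ocsp_response, sctl, chain, etc. get fresh buffers ... */
    };

    static int ckch_copy_sketch(struct ckch_sketch *dst,
                                const struct ckch_sketch *src)
    {
        /* shared, refcounted SSL objects instead of deep copies */
        if (src->cert && !X509_up_ref(src->cert))
            return 0;
        if (src->key && !EVP_PKEY_up_ref(src->key))
            return 0;
        dst->cert = src->cert;
        dst->key  = src->key;
        return 1;
    }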
It is now possible to update new parts of a CKCH from the CLI.
Currently you will be able to update a PEM (by default), an OCSP response
in base64, an issuer file, and an SCTL file.
Each update will create a new CKCH and new sni_ctx structures, so we will
need a "commit" command later to apply several changes and create the
sni_ctx only once.
Split the ssl_sock_load_crt_file_into_ckch() in two functions:
- ssl_sock_load_files_into_ckch() which is dedicated to opening every
file related to a filename during the configuration parsing (PEM, sctl,
ocsp, issuer etc.)
- ssl_sock_load_pem_into_ckch() which is dedicated to opening a PEM,
either in a file or a buffer
ssl_sock_load_issuer_file_into_ckch() is a new function which is able to
load an issuer from a buffer or from a file to a CKCH.
Use this function directly in ssl_sock_load_crt_file_into_ckch().
The ssl_sock_load_sctl_from_file() function was modified to
fill a struct cert_key_and_chain directly.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads text from a buffer or reads a file on the
filesystem.
It fills the sctl buffer of the struct cert_key_and_chain.
The ssl_sock_load_ocsp_response_from_file() function was modified to
fill a struct cert_key_and_chain directly.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads a base64 from a buffer or reads a binary file
on the filesystem.
It fills the ocsp_response buffer of the struct cert_key_and_chain.
The logs were added to the H2 mux so that we can report logs in case
of errors that prevent a stream from being created, but as a side effect
these logs are emitted twice for backend connections: once by the H2 mux
itself and another time by the upper layer stream. It can even happen
more often with connection retries.
This patch makes sure we do not emit logs for backend connections.
It should be backported to 2.0 and 1.9.
As reported in issue #335, a lot of contention happens on the PATLRU lock
when performing expensive regex lookups. This is absurd since the purpose
of the LRU cache was to have a fast cache for expressions, thus the cache
must not be shared between threads and must remain lockless.
This commit makes the LRU cache thread-local and gets rid of the PATLRU
lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s,
or a scalability factor of 5.5.
Given the huge performance difference and the regression caused to
users migrating from processes to threads, this should be backported at
least to 2.0.
Thanks to Brian Diekelman for his detailed report about this regression.
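The core of the change, sketched (GCC/Clang __thread storage; HAProxy's
THREAD_LOCAL macro is assumed to expand to something equivalent):

    /* One LRU head per thread: lookups never contend with other
     * threads, so the shared PATLRU lock can go away entirely. */
    struct lru64_head;                                /* opaque here */
    static __thread struct lru64_head *pat_lru_tree;  /* was: global + lock */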
As reported in issue #331, the code used to cast a 32-bit to a 64-bit
stick-table key is wrong. It only copies the 32 lower bits in place on
little endian machines or overwrites the 32 higher ones on big endian
machines. It ought to simply remove the wrong cast dereference.
This bug was introduced when changing stick table keys to samples in
1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types
in place of dedicated types"), so the fix must be backported as far
as 1.6.
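A self-contained demonstration of the bug class (aliasing rules are bent on
purpose to mirror the faulty cast):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t key = 0xffffffffffffffffULL;  /* stale key contents */
        uint32_t v   = 42;

        *(uint32_t *)&key = v;   /* wrong: only half the key is written */
        printf("broken key: %016llx\n", (unsigned long long)key);

        key = v;                 /* right: plain assignment widens to 64 bits */
        printf("fixed key:  %016llx\n", (unsigned long long)key);
        return 0;
    }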
A trick is used to set the SESSION_ID and SESSION_ID_CONTEXT lengths
to 0 and avoid ASN1 encoding of these values.
There is no specific function to set the length of those parameters
to 0, so we fake this by calling these functions with a different value
using the same buffer but a length of zero.
But those functions don't seem to check for a zero length before
performing a memcpy of length zero with src and dst pointing to the
same buffer, causing valgrind to bark.
So the code was reworked to pass them different pointers even
if the buffer content is unused.
Secondly, when resetting the value, a memcpy overlap
happened on the SESSION_ID_CONTEXT. It was reworked and this is
now reset using the constant global value SHCTX_APPNAME, which is a
different pointer with the same content.
This patch should be backported to every version since SSL
support was added to haproxy, if we want valgrind to shut up.
This is tracked in github issue #56.
Processing of SRV record weight was inaccurate and when a SRV record's
weight was set to 0, HAProxy enforced it to '1'.
This patch aims at fixing this without breaking compatibility with
the previous behavior.
Backport status: 1.8 to 2.0
fopen() can return NULL when state file is missing. This patch adds a
check of fopen() return value so we can skip processing in such case.
No backport needed.
Previously an expression like:
path,field(2,/) -m found
always returned `true`.
The bug has existed since the `field` converter was introduced, that is:
f399b0debf
The fix should be backported to 1.6+.
When running haproxy -c, the cache parser is trying to allocate the size
of the cache. This can be a problem in an environment where the RAM is
limited.
This patch moves the cache allocation in the post_check callback which
is not executed during a -c.
This patch may be backported at least to 2.0 and 1.9. In 1.9, the callbacks
registration mechanism is not the same. So the patch will have to be adapted. No
need to backport it to 1.8, the code is probably too different.
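A hedged sketch of the move (REGISTER_POST_CHECK is HAProxy's registration
macro for post-check callbacks; the body and error codes are illustrative):

    /* Runs once the whole configuration is parsed and checked, but
     * not under "haproxy -c", so a config check no longer allocates
     * the cache memory. */
    static int cache_alloc_post_check(void)
    {
        /* perform the large cache allocation here instead of in the
         * "cache" section parser */
        return ERR_NONE;   /* or ERR_ALERT | ERR_FATAL on failure */
    }
    REGISTER_POST_CHECK(cache_alloc_post_check);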
When a stick counter is fetched, it is important that the requested counter does
not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build
because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we
never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For
other sample fetches, the value is tested.
But there is a bug if MAX_SESS_STKCTR is set to a lower value, for instance
1. In this case the counters sc1_* and sc2_* may be undefined.
This patch fixes the issue #330. It must be backported as far as 1.7.
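A minimal sketch of the added validation (3 is only the default value of
MAX_SESS_STKCTR):

    #define MAX_SESS_STKCTR 3   /* default; may be lowered at build time */

    /* Reject any counter index outside [0, MAX_SESS_STKCTR - 1] */
    static int stkctr_idx_valid(int idx)
    {
        return idx >= 0 && idx < MAX_SESS_STKCTR;
    }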
When an error occurred in the function bind_parse_tls_ticket_keys() during the
configuration parsing, the opened file was not always closed. To fix the bug,
all errors are caught at the same place, where all resources are released.
This patch fixes bug #325. It must be backported as far as 1.7.
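A minimal sketch of the pattern applied (function body illustrative):

    #include <stdio.h>

    static int parse_keys_file_sketch(const char *path)
    {
        FILE *f = fopen(path, "r");
        int err = 0;

        if (!f)
            return -1;
        if (0 /* any parsing error */) {
            err = -1;
            goto out;   /* every error funnels through one exit point */
        }
      out:
        fclose(f);      /* the file is closed on success and on error */
        return err;
    }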
When using the master CLI with 'fd@', during a reload, the master CLI
proxy is stopped. Unfortunately if this is an inherited FD it is closed
too, and the master CLI won't be able to bind again during the
re-execution. It led the master to fall back to waitpid mode.
This patch forbids the inherited FDs in the master's listeners to be
closed during a proxy_stop().
This patch is mandatory to use the -W option in VTest versions that contain the
-mcli feature.
(86e65f1024)
Should be backported as far as 1.9.
If openssl 1.1.1 is used, commit c2aae74f0 mistakenly enables the DH automatic
feature from openssl instead of the ECDH automatic feature. There is no impact
for the ECDH one because that feature is always enabled for that version. But
doing this, the 'tune.ssl.default-dh-param' setting was completely ignored for
DH parameters.
This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of
'SSL_CTX_set_dh_auto'.
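The fix in essence, sketched (both calls are standard OpenSSL macros; the
wrapper function is illustrative):

    #include <openssl/ssl.h>

    static void enable_auto_curves_sketch(SSL_CTX *ctx)
    {
        (void)ctx;
    #if defined(SSL_CTX_set_ecdh_auto)
        SSL_CTX_set_ecdh_auto(ctx, 1);   /* was: SSL_CTX_set_dh_auto(ctx, 1) */
    #endif
    }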
Currently some users may use a 2048-bit DH parameter, thinking they're using a
1024-bit one. Doing this, they may experience performance issues on light
hardware.
This patch warns the user if haproxy fails to configure the given DH parameter.
In this case, and if the openssl version is > 1.1.0, haproxy will let openssl
automatically choose a default DH parameter. For other openssl versions, the
DH ciphers won't be usable.
A common cause of failure is the security level of openssl.cnf,
which could refuse a 1024-bit DH parameter for a 2048-bit key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported into any branch containing the commit c2aae74f0.
It requires all or part of the previous CLEANUP series.
This addresses github issue #324.
All bind keyword parsing messages were shown as alerts.
With this patch, if the message is flagged only with ERR_WARN and
not ERR_ALERT, it will show a [WARNING] label and not [ALERT].
ssl_sock_load_dh_params used to return >0 or -1 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. An error message was added in the case of failure and the
comment was updated.
ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. And a comment was updated.
ckch_inst_new_load_store() and ckch_inst_new_load_multi_store() used to
return 0 or >0 to indicate success or failure. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And the comment was updated.
ssl_sock_load_ckchs() used to return 0 or >0 to indicate success or
failure even though this was not documented. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And a comment was added.
These functions were returning only 0 or 1 to mention success or error,
and made it impossible to return a warning. Let's make them return error
codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since
this is the only code that was set on non-zero return value.
In addition some missing comments were added or adjusted around the
functions' return values.
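The resulting convention, sketched (the ERR_* values shown are illustrative;
HAProxy defines the real set in its error headers):

    enum { ERR_NONE = 0x0, ERR_WARN = 0x1,
           ERR_ALERT = 0x2, ERR_FATAL = 0x4 };   /* illustrative values */

    static int load_thing_sketch(char **errmsg)
    {
        (void)errmsg;
        if (0 /* some failure */) {
            /* memprintf(errmsg, "reason"); in the real code */
            return ERR_ALERT | ERR_FATAL;
        }
        return ERR_NONE;   /* callers may also OR in ERR_WARN later */
    }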
In mux_pt_io_cb(), instead of always calling the wake method, only do so
if nobody subscribed for receive. If we have a subscription, just wake the
associated tasklet up.
This should be backported to 1.9 and 2.0.
There's a small window where the mux_pt tasklet may be woken up, and thus
mux_pt_io_cb() get scheduled, and then the connection is attached to a new
stream. If this happens, don't do anything, and just let the stream know
by calling its wake method. If the connection had an error, the stream should
take care of destroying it by calling the detach method.
This should be backported to 2.0 and 1.9.
This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a
conn_stream before freeing.".
mux_pt_io_cb() is only used if we have no associated stream, so we will
never have a cs, so there's no need to check that, and we of course have to
destroy the mux in mux_pt_detach() if we have no associated session, or if
there's an error on the connection.
This should be backported to 2.0 and 1.9.
The idle cleanup tasks' masks are wrong for threads 32 to 64, which
causes the wrong thread to wake up and clean the connections that it
does not own, with a risk of crash or infinite loop depending on
concurrent accesses. For thread 32, any thread between 32 and 64 will
be woken up, but for threads 33 to 64, in fact threads 1 to 32 will
run the task instead.
This issue only affects deployments enabling more than 32 threads. While
it is not common in 1.9 where this has to be explicit, and can easily be
dealt with by lowering the number of threads, it can be more common in
2.0 since by default the thread count is determined based on the number
of available processors, hence the MAJOR tag which is mostly relevant
to 2.x.
The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3
("MEDIUM: servers: Add a way to keep idle connections alive.") and was
later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server:
initialize the orphaned conns lists and tasks at the end").
This patch needs to be backported as far as 1.9, with care as 1.9 is
slightly different there (uses idle_task[] instead of idle_conn_cleanup[]
like in 2.x).
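The faulty expression is not quoted above, but the symptom matches the classic
mask-width pitfall, sketched here as an assumption:

    /* Thread masks past bit 31 need a long-width shift: with a plain
     * "1 << thr" the shift is done on an int and cannot represent
     * threads 32 to 63. */
    static unsigned long thread_mask_sketch(unsigned int thr)
    {
        return 1UL << thr;
    }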
On error, make sure we don't have a conn_stream before freeing the connection
and the associated mux context. Otherwise a stream will still reference
the connection, and attempt to use it.
If we still have a conn_stream, it will properly be freed when the detach
method is called, anyway.
This should be backported to 2.0 and 1.9.
There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and
fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans,
fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled
as time values, and by default, were divided by 1000 (because of an invalid
conversion from us to ms). To work around this bug and have the right value, the
argument "us" had to be specified.
So now, tcp info fetchers returning a counter don't support any argument
anymore. To not break old configurations, if an argument is provided, it is
ignored and a warning is emitted during the configuration parsing.
In addition, parameter validation is now performed during the configuration
parsing.
This patch must be backported as far as 1.7.
Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload")
fixes an issue where the /dev/random FD was leaked by OpenSSL upon a
reload in master worker mode. Indeed the FD was not flagged with
CLOEXEC.
The fix was checking if ssl_used_frontend or ssl_used_backend were set
to close the FD. This is wrong, indeed the lua init code creates an SSL
server without increasing the backend value, so the deinit is never
done when you don't use SSL in your configuration.
To reproduce the problem you just need to build haproxy with openssl and
lua, with an openssl which does not use the getrandom() syscall. No
openssl nor lua configuration is required for haproxy.
This patch must be backported as far as 1.8.
Fix issue #314.
The recent changes to address URI issues mixed with the recent fix to
stop caching absolute URIs have caused the cache not to cache H2 requests
anymore since these ones come with a scheme and authority. Let's unbreak
this by using absolute URIs all the time, now that we keep host and
authority in sync. So what is done now is that if we have an authority,
we take the whole URI as it is as the cache key. This covers H2 and H1
absolute requests. If no authority is present (most H1 origin requests),
then we prepend "https://" and the Host header. The reason for https://
is that most of the time we don't care about the scheme, but since about
all H2 clients use this scheme, at least we can share the cache between
H1 and H2.
No backport is needed since the breakage only affects 2.1-dev.
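A hedged sketch of the key construction described above (buffer handling
simplified; names illustrative):

    #include <stdio.h>

    static void make_cache_key_sketch(char *key, size_t sz, const char *uri,
                                      const char *host, int has_authority)
    {
        if (has_authority)
            snprintf(key, sz, "%s", uri);                 /* absolute URI as-is */
        else
            snprintf(key, sz, "https://%s%s", host, uri); /* H1 origin form */
    }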
When a response generated by HAProxy is handled by the H1 mux, if the
corresponding request has not been fully received, the close mode is
forced. Thus, the client is notified that the connection will certainly be
closed abruptly, without waiting for the end of the request.
The flag HTX_FL_PROXY_RESP is now set on responses generated by HAProxy,
excluding responses returned by applets and services. It is an informative flag
set by the applicative layer.
When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the
HTX start line because of a bug. During the headers parsing, the flag
H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set
HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags
H1_MF_CLEN or H1_MF_CHNK.
Because of this bug, it was impossible to keep a connection alive for a response
generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file
has a content length (chunked responses are unsupported at this stage) and the
connection may be kept alive if there is no connection header specified to
explicitly close it.
This patch must be backported to 2.0 and 1.9.
It currently is not possible to figure the exact haproxy version from a
core file for the sole reason that the version is stored into a const
string and as such ends up in the .text section that is not part of a
core file. By turning them into variables we move them to the data
section and they appear in core files. In order to help finding them,
we just prepend an extra variable in front of them and we're able to
immediately spot the version strings from a core file:
$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88
2019/10/15
(These are haproxy_version and haproxy_date respectively). This may be
backported to 2.0 since this part is not supposed to impact anything but
the developer's time spent debugging.
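A sketch of the idea (the marker variable's exact name is an assumption):

    /* Writable (non-const) arrays land in the .data section, which is
     * dumped into core files, unlike .text/.rodata constants. */
    char haproxy_version_marker[] = "HAProxy version follows";
    char haproxy_version[]        = "2.1-dev2-e0f48a-88";
    char haproxy_date[]           = "2019/10/15";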
246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke the loading
of OCSP files. The function ssl_sock_load_ocsp_response_from_file() was
not returning 0 upon success, which led to an error after the .ocsp was
read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() added a
double extension to the filename, which can be confusing: the messages
referenced a .issuer.issuer file.
If the user agent data contains text that has special characters that
are used to format the output from the vfprintf() function, haproxy
crashes. String "%s %s %s" may be used as an example.
% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server
haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)
gdb 'where' output:
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>,
format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (
string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U",
maxlen=<optimized out>,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n")
at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (
message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70)
at src/wurfl.c:763
In case WURFL (actually HAProxy) is not compiled with the debug option
enabled (-DWURFL_DEBUG), this bug does not come to light.
This patch could be backported to every version supporting
ScientiaMobile's WURFL (as far as 1.7).
As stated in RFC7230#5.4, a client must send a field-value for the Host
header that is identical to the authority if the target URI includes one. So,
now, by default, if the authority, when provided, does not match the value of
the Host header, an error is triggered. To mitigate this behavior, it is
possible to set the option "accept-invalid-http-request". In that case, an
http error is captured without interrupting the request parsing.
There is no reason for a client to send several Host headers. It even may be
considered as a bug. However, it is totally invalid to have different values
for those. So now, in such a case, an error is triggered during the request
parsing. In addition, when several Host headers are found with the same value,
only the first instance is kept and the others are skipped.
When the option "accept-invalid-http-request" is enabled, some parsing errors
are ignored. But the position of the error is reported. In legacy HTTP mode,
such errors were captured. So, we now do the same in the H1 multiplexer.
If required, this patch may be backported to 2.0 and 1.9.
When an outgoing HTX message is formatted to a raw message, DATA blocks may be
split to not transfer more data than expected. But if the buffer is almost
full, the formatting is interrupted, leaving some unused free space in the
buffer, because data are too large to be copied in one pass.
Now, we transfer as much data as possible. When the message is chunked, we also
count the size used to encode the data.
When an outgoing HTX message is formatted to a raw message, if we fail to copy
data of an HTX block into the output buffer, we mark it as full. Before, it
was only done by calling the function buf_room_for_htx_data(). But this
function is designed to optimize input processing.
This patch must be backported to 2.0 and 1.9.
In the functions htx_*_to_h1(), most of the time several calls to
chunk_memcat() are chained. The expected size is always compared to the
available room in the buffer to be sure the full copy will succeed. But it is
a bit risky because it relies on the fact that the function chunk_memcat()
evaluates the available room in the buffer in the same way as the HTX
functions do. And, unfortunately, it does not. A bug in chunk_memcat() will
always leave a byte unused in the buffer. So, for instance, when a chunk is
copied into an almost full buffer, the last CRLF may be skipped.
To fix the issue, we now rely on the result of chunk_memcat() only.
This patch must be backported to 2.0 and 1.9.
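In short, the pattern becomes (sketch; variable names illustrative, with
chunk_memcat() returning non-zero on success as in HAProxy):

    /* Trust the copy's own result rather than a separately computed
     * free-room estimate that may disagree by one byte. */
    if (!chunk_memcat(&tmp, data, len) ||
        !chunk_memcat(&tmp, "\r\n", 2))
        goto buffer_full;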
The SSL engines code was written below the OCSP #ifdef, which means you
can't build the engines code if OCSP is deactivated in the SSL lib.
Could be backported to every version since 1.8.
A NULL dereference can occur when inserting SNIs while checking for
duplicates, if there are already several sni_ctx with the same key.
Fix issue #321.
Don't try to load the files containing the issuer and the OCSP response
each time we generate an SSL_CTX.
The .ocsp and the .issuer are now loaded in the struct
cert_key_and_chain only once and then loaded from this structure when
creating a SSL_CTX.
Don't try to load the file containing the sctl each time we generate an
SSL_CTX.
The .sctl is now loaded in the struct cert_key_and_chain only once and
then loaded from this structure when creating an SSL_CTX.
Note that this now makes it possible to use sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with a HA_SPINLOCK_T which
prevents the ckch architecture (ckch_store, ckch_inst..) from being
modified at the same time. So you can't do a certificate update at the
same time from multiple CLI connections.
SNI trees are also locked with a HA_RWLOCK_T so reading operations are
locked only during a certificate update.
Bundles are supported but you need to update each file (.rsa|.ecdsa|.dsa)
independently. If a file is used in the configuration as a bundle AND
as a unique certificate, both will be updated.
Bundles, directories and crt-list are supported, however filters in
crt-list are currently unsupported.
The code tries to allocate every SNI and certificate instance first,
so it can roll back the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20,000 in my
tests on my laptop), the function can take too much time and be killed
by the watchdog. This will be fixed later. Also with too many
certificates it's possible that socat exits before the end of the
generation without displaying a message; consider changing the socat
timeout in this case (-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, that must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- allocate a ckch_inst and load the SNIs in it
- return a ckch_inst or NULL
- the sni_ctx are not added anymore in the sni trees from there
- renamed to ckch_inst_new_load_{multi}_store()
- a new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and adds the sni_ctx to the sni trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
In order to allow the creation of sni_ctx at runtime, we need to split
the function to allow rollback.
We need to be able to allocate all sni_ctxs required before inserting
them, in case we need to roll back if we didn't succeed the allocation.
The function was split in 2 parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx, fills
it with the right data and inserts it in the ckch_inst's list of sni_ctx.
The second one takes every sni_ctx in the ckch_inst and inserts them in
the bind_conf's sni tree.
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates the ckch_inst for each bind_conf and inserts the
sni_ctx in its linked list.
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during the configuration parsing if the
allocation fails, since it tries to copy some data into the allocated
memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported to every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
to try to allocate other snis.
It is unlikely that a sni allocation will succeed after one failure, and
we should not start with a configuration missing some of the snis. So to
avoid any problem, we now return -1 upon a sni allocation error and stop
the configuration parsing.
This patch must be backported to every version supporting the crt-list
sni filters (as far as 1.5).
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n, ckchn to ckchs.
As using an mt_list for the tasklet list is costly, instead use a regular list,
but add an mt_list for tasklets woken up by other threads, to be run on the
current thread. At the beginning of process_runnable_tasks(), we just take
the new list, and merge it into the task_list.
This should give us performance comparable to before we started using an
mt_list, but allows us to use tasklet_wakeup() from other threads.
I introduced this mistake when adding the description for the stats
metrics; it's even amazing it built and worked at all! This was
reported by Travis CI on non-GNU platforms:
src/stats.c:92:39: warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[INF_NAME] { .name = "Name", .desc = "Product name" },
^
=
No backport is needed.
In issue #277 is reported a strange problem related to a fast-spinning
applet which seems to show valid progress being made. It's uncertain how
this can happen, maybe some very specific timing patterns manage to place
just a few bytes in each buffer and result in the peers applet being called
a lot. But it appears possible to artificially cross the spinning threshold
by asking for a monster stats page (500 MB) and limiting the send() size to
1 MSS (1460 bytes), causing the stats page to be called for very small
blocks which most often do not leave enough room to place a new chunk.
The idea developed in this patch consists in not crashing for an applet
which reaches a very high call rate if it shows some indication of
progress. Detecting progress on applets is not trivial but in our case
we know that they must at least not claim to wait for a buffer allocation
if this buffer is present, wait for room if the buffer is empty, ask for
more data without polling if such data are still present, nor leave with
an empty input buffer without having written anything nor read anything
from the other side while a shutw is pending.
Doing so doesn't affect normal behaviors nor abuses of our existing
applets and does at least protect against an applet performing an
early return without processing events, or one causing an endless
loop by asking for impossible conditions.
This must be backported to 2.0.
Now "show info desc", "show info typed desc" and "show stat typed desc"
will report (hopefully) accurate descriptions of each field. These ones
were verified in the code. When some metrics are specific to the process
or the thread, they are indicated. Sometimes a config option is known
for a setting and it is reported as well. The purpose mainly is to help
sysadmins in the field more easily sort out issues vs non-issues. In part
inspired by this very informative talk:
https://kernel-recipes.org/en/2019/metrics-are-money/
Example:
$ socat - /var/run/haproxy.sock <<< "show info desc"
Name: HAProxy:"Product name"
Version: 2.1-dev2-991035-31:"Product version"
Release_date: 2019/10/09:"Date of latest source code update"
Nbthread: 1:"Number of started threads (global.nbthread)"
Nbproc: 1:"Number of started worker processes (global.nbproc)"
Process_num: 1:"Relative process number (1..Nbproc)"
Pid: 11975:"This worker process identifier for the system"
Uptime: 0d 0h00m10s:"How long ago this worker process was started (days+hours+minutes+seconds)"
Uptime_sec: 10:"How long ago this worker process was started (seconds)"
Memmax_MB: 0:"Worker process's hard limit on memory usage in MB (-m on command line)"
PoolAlloc_MB: 0:"Amount of memory allocated in pools (in MB)"
PoolUsed_MB: 0:"Amount of pool memory currently used (in MB)"
PoolFailed: 0:"Number of failed pool allocations since this worker was started"
Ulimit-n: 300000:"Hard limit on the number of per-process file descriptors"
Maxsock: 300000:"Hard limit on the number of per-process sockets"
Maxconn: 149982:"Hard limit on the number of per-process connections (configured or imposed by Ulimit-n)"
Hard_maxconn: 149982:"Hard limit on the number of per-process connections (imposed by Memmax_MB or Ulimit-n)"
CurrConns: 0:"Current number of connections on this worker process"
CumConns: 1:"Total number of connections on this worker process since started"
CumReq: 1:"Total number of requests on this worker process since started"
MaxSslConns: 0:"Hard limit on the number of per-process SSL endpoints (front+back), 0=unlimited"
CurrSslConns: 0:"Current number of SSL endpoints on this worker process (front+back)"
CumSslConns: 0:"Total number of SSL endpoints on this worker process since started (front+back)"
Maxpipes: 0:"Hard limit on the number of pipes for splicing, 0=unlimited"
PipesUsed: 0:"Current number of pipes in use in this worker process"
PipesFree: 0:"Current number of allocated and available pipes in this worker process"
ConnRate: 0:"Number of front connections created on this worker process over the last second"
ConnRateLimit: 0:"Hard limit for ConnRate (global.maxconnrate)"
MaxConnRate: 0:"Highest ConnRate reached on this worker process since started (in connections per second)"
SessRate: 0:"Number of sessions created on this worker process over the last second"
SessRateLimit: 0:"Hard limit for SessRate (global.maxsessrate)"
MaxSessRate: 0:"Highest SessRate reached on this worker process since started (in sessions per second)"
SslRate: 0:"Number of SSL connections created on this worker process over the last second"
SslRateLimit: 0:"Hard limit for SslRate (global.maxsslrate)"
MaxSslRate: 0:"Highest SslRate reached on this worker process since started (in connections per second)"
SslFrontendKeyRate: 0:"Number of SSL keys created on frontends in this worker process over the last second"
SslFrontendMaxKeyRate: 0:"Highest SslFrontendKeyRate reached on this worker process since started (in SSL keys per second)"
SslFrontendSessionReuse_pct: 0:"Percent of frontend SSL connections which did not require a new key"
SslBackendKeyRate: 0:"Number of SSL keys created on backends in this worker process over the last second"
SslBackendMaxKeyRate: 0:"Highest SslBackendKeyRate reached on this worker process since started (in SSL keys per second)"
SslCacheLookups: 0:"Total number of SSL session ID lookups in the SSL session cache on this worker since started"
SslCacheMisses: 0:"Total number of SSL session ID lookups that didn't find a session in the SSL session cache on this worker since started"
CompressBpsIn: 0:"Number of bytes submitted to HTTP compression in this worker process over the last second"
CompressBpsOut: 0:"Number of bytes out of HTTP compression in this worker process over the last second"
CompressBpsRateLim: 0:"Limit of CompressBpsOut beyond which HTTP compression is automatically disabled"
Tasks: 10:"Total number of tasks in the current worker process (active + sleeping)"
Run_queue: 1:"Total number of active tasks+tasklets in the current worker process"
Idle_pct: 100:"Percentage of last second spent waiting in the current worker thread"
node: wtap.local:"Node name (global.node)"
Stopping: 0:"1 if the worker process is currently stopping, otherwise zero"
Jobs: 14:"Current number of active jobs on the current worker process (frontend connections, master connections, listeners)"
Unstoppable Jobs: 0:"Current number of unstoppable jobs on the current worker process (master connections)"
Listeners: 13:"Current number of active listeners on the current worker process"
ActivePeers: 0:"Current number of verified active peers connections on the current worker process"
ConnectedPeers: 0:"Current number of peers having passed the connection step on the current worker process"
DroppedLogs: 0:"Total number of dropped logs for current worker process since started"
BusyPolling: 0:"1 if busy-polling is currently in use on the worker process, otherwise zero (config.busy-polling)"
FailedResolutions: 0:"Total number of failed DNS resolutions in current worker process since started"
TotalBytesOut: 0:"Total number of bytes emitted by current worker process since started"
BytesOutRate: 0:"Number of bytes emitted by current worker process over the last second"
Now "show info" supports "desc" after the default and "typed" formats,
and "show stat" supports this after the typed format. In both cases
this appends the description for the represented metric between double
quotes. The same could be done for JSON output but would possibly require
to update the schema first.
Several times some users have expressed the non-intuitive aspect of some
of our stat/info metrics and suggested to add some help. This patch
replaces the char* arrays with an array of name_desc so that we now have
some reserved room to store a description with each stat or info field.
These descriptions are currently empty and not reported yet.
Now "show info" and "show stat" can parse "desc" as an output format
modifier that will be passed down the chain to add some descriptions
to the fields depending on the format in use. For now it is not
exploited.
Some functions used to take flags + appctx with flags==appctx.flags,
others neither, others just one of them. Some functions used to have
the flags before the object being dumped (server) while others had
it after (listener). This patch aims at cleaning this up a little bit
by following this principle:
- low-level functions which do not need the appctx take flags only
- medium-level functions which already use the appctx for other
reasons do not keep the flags
- top-level functions which already have the stream-int don't need
the flags nor the appctx.
This flag is used to decide to show the check box in front of a proxy
on the HTML stat page. It is always equal to STAT_ADMIN except when the
proxy has no backend capability (i.e. a pure frontend) or has no server,
in which case it's only used to avoid leaving an empty column at the
beginning of the table. Not only this is pretty useless, but it also
causes the columns not to align well when mixing multiple proxies with
or without servers.
Let's simply always use STAT_ADMIN and get rid of this flag.
Now we only use the appctx flags everywhere in the code, and the uri_auth
flags are read only by the HTTP analyser which presets the appctx ones.
This will make it possible to simplify access to the flags everywhere.
We used to rely on some config flags defined in uri_auth.h set during
parsing, and another set of STAT_* flags defined in stats.h set at run
time, with a somewhat gray area between the two sets. This is confusing
in the stats code as both are called "flags" in various functions and
it's quite hard to know which one describes what.
This patch cleans this up by replacing all ST_* by a newly assigned
value from the STAT_* set so that we can now use unified flags to
describe both the configuration and the current state. There is no
functional change at all.
This flag was added in 1.4-rc1 by commit 329f74d463 ("[BUG] uri_auth: do
not attemp to convert uri_auth -> http-request more than once") to
address the case where two proxies inherit the stats settings from
the defaults instance, and the first one compiles the expression while
the second one uses it. In this case since they use the exact same
uri_auth pointer, only the first one should compile and the second one
must not fail the check. This was addressed by adding an ST_CONVDONE
flag indicating that the expression conversion was completed and didn't
need to be done again. But this is a hack and it becomes cumbersome in
the middle of the other flags which are all relevant to the stats
applet. Let's instead fix it by checking if we're dealing with an
alias of the defaults instance and refrain from compiling this twice.
This allows us to remove the ST_CONVDONE flag.
A typical config requiring this check is:
defaults
mode http
stats auth foo:bar
listen l1
bind :8080
listen l2
bind :8181
Without this (or the previous) check it would complain when checking l2's
validity since the rule was already built.
Both "show info" and "show stat" support the "typed" output format and
the "json" output format. I just never can remind them, which is an
indication that some help is missing.
H2 strongly recommends that clients exclusively use the absolute form
for requests, which contains a scheme, an authority and a path, instead
of the old format involving the Host header and a path. Thus there is
no way to distinguish between a request intended for a proxy and an
origin request, and as such proxied requests are lost.
This patch makes sure to keep the encoding of all absolute form requests
so that the URI is kept end-to-end. If the scheme is http or https, there
is an uncertainty so the request is tagged as a normalized URI so that
the other end (H1) can decide to emit it in origin form as this is by far
the most commonly expected one, and it's certain that quite a number of
H1 setups are not ready to cope with absolute URIs.
There is a direct visible impact of this change, which is that the uri
sample fetch will now return absolute URIs (as they really come on the
wire) whenever these are used. It also means that default http logs will
report absolute URIs.
If a situation is once met where a client uses H2 to join an H1 proxy
with haproxy in the middle, then it will be trivial to add an option to
ask the H1 output to use absolute encoding for such requests.
Later we may be able to consider that the normalized URI is the default
output format and stop sending them in origin form unless an option is
set.
Now chaining multiple instances keeps the semantics as far as possible
along the whole chain:
1) H1 to H1
H1:"GET /" --> H1:"GET /" # log: /
H1:"GET http://" --> H1:"GET http://" # log: http://
H1:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
2) H2 to H1
H2:"GET /" --> H1:"GET /" # log: /
H2:"GET http://" --> H1:"GET /" # log: http://
H2:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
3) H1 to H2 to H2 to H1
H1:"GET /" --> H2:"GET /" --> H2:"GET /" --> H1:"GET /"
H1:"GET http://" --> H2:"GET http://" --> H2:"GET http://" --> H1:"GET /"
H1:"GET ftp://" --> H2:"GET ftp://" --> H2:"GET ftp://" --> H1:"GET ftp://"
Thus there is zero loss on H1->H1, H1->H2 nor H2->H2, and H2->H1 is
normalized in origin format if ambiguous.
Instead of mapping the Host header field to :authority, we now act
differently if the request is in origin form or in absolute form.
If it's absolute, we extract the scheme and the authority from the
request, fix the path if it's empty, and drop the Host header.
Otherwise we take the scheme from the http/https flags in the HTX
layer, make the URI be the path only, and emit the Host header,
as indicated in RFC7540#8.1.2.3. This allows to distinguish between
absolute and origin requests for H1 to H2 conversions.
Till now we've been producing path components of the URI and using the
:authority header only to be placed into the host part. But this practice
is not correct: when conveying H1 proxy requests over H2 then
over H1, the absolute URI is presented as a path on output, which is not
valid. In addition, the scheme on output is not updated from the absolute
URI either.
Now the request parser will continue to deliver origin-form for request
received using the http/https schemes, but will use the absolute-form
when dealing with other schemes, by concatenating the scheme, the authority
and the path if it's not '*'.
When a request start-line is converted to its raw representation, if its URI is
normalized, only the path part is used. Most H2 clients send requests using
the absolute form (:scheme + :authority + :path), regardless of whether the
request is sent to a proxy or not. But, when the request is relayed to an H1
origin server, it is unusual to send it using the absolute form. And, even if
servers must support this form, some old servers may reject it. So, for such
requests, we only keep the path of the absolute URI. Most of the time, it will
be the right choice. However, an option will probably be added to customize
this behavior.
In HTTP, the request authority, if any, and the Host header must be identical
(excluding any userinfo subcomponent and its "@" delimiter). So now, during the
request analysis, when the Host header is updated, the start-line is also
updated. The authority of an absolute URI is changed accordingly. Symmetrically,
if the URI is changed and it contains an authority, then the Host header is
also changed. In this latter case, the flags of the start-line are also updated
to reflect the changes on the URI.
The function http_get_authority() may be used to parse a URI and look for the
authority, between the scheme and the path. An option may be used to skip the
user info (the part before the '@'). Most of the time, the user info will be
ignored.
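A self-contained approximation of the described behavior (the real
http_get_authority() works on HAProxy's ist strings; this sketch uses plain C
strings):

    #include <string.h>

    static const char *get_authority_sketch(const char *uri, int no_userinfo,
                                            size_t *len)
    {
        const char *p = strstr(uri, "://");
        const char *start, *end, *at;

        if (!p)
            return NULL;
        start = p + 3;
        end = start + strcspn(start, "/?#");   /* authority stops here */
        if (no_userinfo && (at = memchr(start, '@', end - start)))
            start = at + 1;                    /* skip "user:pass@" */
        *len = end - start;
        return start;
    }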
The H2 request parsing is not trivial given that we have multiple
possible syntaxes. Mainly we can have :authority or not, and when
a CONNECT method is seen, :scheme and :path are missing. This mostly
updates the functions' comments and header index assignments to make
them less confusing. Functionally there is no change.
In these muxes, when an integer value is provided in a trace, it must be the 4th
argument. The 3rd one, if defined, is always an HTX message. Unfortunately, some
traces are buggy and the 4th argument is erroneously passed in 3rd position.
No backport needed.
There are some reports of users not being able to pass "enterprise"
traffic through haproxy when using H2 because it doesn't emit CONTINUATION
frames and as such is limited to headers no longer than the negotiated
max-frame-size, which usually is 16 kB.
This patch implements support for emitting CONTINUATION frames when a HEADERS
frame cannot fit within a limit of mfs. It does this by first filling a
buffer-wise frame, then truncating it starting from the tail to append
CONTINUATION frames. This makes sure that we can truncate on any byte
without being forced to stop on a header boundary, and ensures that the
common case (no fragmentation) doesn't add any extra cost. By moving
the tail first we make sure that each byte is moved only once, thus the
performance impact remains negligible.
This addresses github issue #249.
If a request contains an absolute URI and gets its Host header field
rewritten, or just the request's URI without touching the Host header
field, it can lead to different Host and authority parts. The cache
will always concatenate the Host and the path while a server behind
would instead ignore the Host and use the authority found in the URI,
leading to incorrect content possibly being cached.
Let's simply refrain from caching absolute requests for now, which
also matches what the comment at the top of the function says. Later
we can improve this by having a special handling of the authority.
This should be backported as far as 1.8.
As for the h1 and h2 muxes, traces are now supported in the fcgi mux. All
parts of the multiplexer are covered by these traces. Events are split by
categories (fconn, fstrm, stream, rx, tx and rsp) for a total of ~40
different events with 5 verbosity levels.
In traces, the first argument is always a connection. So it is easy to get the
fconn (conn->ctx). The second argument is always a fstrm. The third one is an
HTX message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
When the output buffer allocation failed, we block stream processing. When
finally a buffer is available and we succeed in allocating the output buffer,
it seems fair to wake up the stream.
When an outgoing h1 message is formatted, if it is considered as chunked but
the corresponding header is missing, we add it. And as for all other h1
headers, if configured so, the case of this header must be adjusted.
No backport needed.
It is not explicitly stated in the documentation, but some users rely on this
behavior. When the server name is inserted in a request, headers with the same
name are first removed.
This patch is not tagged as a bug, because it is not explicitly documented. We
choose to keep the same implicit behavior to not break existing
configuration. Because this option is used very little, it is not a big deal.
As for the h2 mux, traces are now supported in the h1 mux. All parts of the
multiplexer are covered by these traces. Events are split by categories (h1c,
h1s, stream, rx and tx) for a total of ~30 different events with 5 verbosity
levels.
In traces, the first argument is always a connection. So it is easy to get the
h1c (conn->ctx). The second argument is always a h1s. The third one is an HTX
message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
This function now uses the address of the pointer to the htx message where the
copy must be performed. This way, when a zero-copy is performed, there is no
need to refresh the caller's htx message. It is a bit easier to do that way,
especially to add traces in the mux-h1.
When a new H2 connection is initialized, the connection context is not changed
before the end. So, traces emitted during this initialization are buggy, except
the last one when no error occurred, because the connection context is not an
h2c.
To fix the bug, the connection context is saved and set as soon as possible. So,
the connection can always safely be used in all traces, except for the very
first one. And on error, the connection context is restored.
No need to backport.
When we configure a "peers" section without local peer, this makes haproxy
old process crash on reload.
Such a configuration file allows to reproduce this issue:
global
stats socket /tmp/sock1 mode 666 level admin
stats timeout 10s
peers peers
peer localhost 127.0.0.1:1024
This bug was introduced by this commit:
"MINOR: cfgparse: Make "peer" lines be parsed as "server" lines"
This commit introduced a new condition to detect a "peers" section without
local peer. This is a "peers" section with a frontend struct which has no ->id
initialized member. Such a "peers" section must be removed.
This patch adds this new condition to remove such peers sections without local
peer as this was always done before.
Must be backported to 2.0.
When executing tasks, don't forget to decrement tasks_run_queue once we
popped one task from the task_list. tasks_run_queue used to be decremented
by __tasklet_remove_from_tasklet_list(), but we now call MT_LIST_POP().
Alexandre Derumier reported issue #308 in which the client timeout will
strike on an H2 mux when it's shorter than the server's response time.
What happens in practice is that there is no activity on the connection
and there's no data pending on output so we can expire it. But this does
not take into account the possibility that some streams are in fact
waiting for the data layer above. So what we do now is that we enforce
the timeout when:
- there are no more streams
- some data are pending in the output buffer
- some streams are blocked on the connection's flow control
- some streams are blocked on their own flow control
- some streams are in the send/sending list
In all other cases the connection will not timeout as it means that some
streams are actively used by the data layer.
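Sketched as a predicate (struct and field names hypothetical; the conditions
are the ones listed above):

    struct h2c_sketch {
        int nb_streams;      /* number of live streams */
        int mbuf_data;       /* data pending in the output buffer */
        int blocked_cfctl;   /* streams blocked on connection flow control */
        int blocked_sfctl;   /* streams blocked on their own flow control */
        int in_send_list;    /* streams in the send/sending list */
    };

    /* The connection timeout is enforced only in these cases; in all
     * others, streams are actively used by the data layer and the
     * client timeout must not strike at the connection level. */
    static int h2c_may_expire_sketch(const struct h2c_sketch *h2c)
    {
        return !h2c->nb_streams || h2c->mbuf_data ||
               h2c->blocked_cfctl || h2c->blocked_sfctl ||
               h2c->in_send_list;
    }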
This fix must be backported to 2.0, 1.9 and probably 1.8 as well. It
depends on the new "blocked_list" field introduced by "MINOR: mux-h2:
add a per-connection list of blocked streams". It would be nice to
also backport "ebtree: make eb_is_empty() and eb_is_dup() take a const"
to avoid a build warning.
Currently the H2 mux doesn't have a list of all the streams blocking on
the H2 side. It only knows about those trying to send or waiting for a
connection window update. It is problematic to enforce timeouts because
we never know if a stream has to live as long as the data layer wants or
has to be timed out because it's waiting for a stream window update. This
patch adds a new list, "blocked_list", to store streams blocking on
stream flow control, or later, dependencies. Streams blocked on sfctl
are now added there. It doesn't modify the rest of the logic.
This reverts commit 1263540fe8.
As discussed in issues #214 and #251, this is not the correct way to
cache CORS responses, since it relies on hacking the cache to cache
the OPTIONS method which is explicitly non-cacheable and for which
we cannot rely on any standard caching semantics (cache headers etc
are not expected there). Let's roll this back for now and keep that
for a more reliable and flexible CORS-specific solution later.
@davidmogar reported a github issue (#227) about problems with
do-resolve action when the request contains a body.
The variable was never populated in such case, despite tcpdump shows a
valid DNS response coming back.
The do-resolve action is a task in HAProxy and so it's woken up by the
scheduler each time the scheduler thinks such a task may have some work to
do.
When a simple HTTP request is sent, then the task is called, it sends
the DNS request, then the scheduler will wake up the task again later
once the DNS response is there.
Now, when the client sends a PUT or a POST request (or any other type)
with a BODY, then the do-resolve action is first woken up once the
headers are processed. It sends the DNS request. Then, when the bytes
for the body are processed by HAProxy AND the DNS response has not yet
been received, the action simply terminates and cleans up all the
data associated to this resolution...
This patch detect such behavior and if the action is now waken up while
a DNS resolution is in RUNNING state, then the action will tell the
scheduler to wake it up again later.
Backport status: 2.0 and above
src/ssl_sock.c:2928:12: warning: ‘ssl_sock_is_ckch_valid’ defined but not used [-Wunused-function]
static int ssl_sock_is_ckch_valid(struct cert_key_and_chain *ckch)
This function is only used with openssl >= 1.0.2, so this patch adds a
preprocessor condition so that the function is only built in that case.
`size` is used in conditional jumps and valgrind complains:
==24145== Conditional jump or move depends on uninitialised value(s)
==24145== at 0x4B3028: smp_is_safe (sample.h:98)
==24145== by 0x4B3028: smp_make_safe (sample.h:125)
==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936)
==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113)
==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418)
==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097)
==24145== by 0x42D7F6: hlua_action (hlua.c:6218)
==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044)
==24145== by 0x43D946: http_process_req_common (http_ana.c:500)
==24145== by 0x457892: process_stream (stream.c:2084)
Found while investigating issue #306.
A variant of this issue has existed since 55da165301,
which was using the old `chunk` API instead of the `buffer` API, thus this
patch must be backported to HAProxy 1.6 and higher.
A break is missing in the switch statement in the function
stats_emit_json_data_field(). This bug was introduced in the commit 88a0db28a
("MINOR: stats: Add the support of float fields in stats").
This patch fixes issues #302 and #303. It must be backported to 2.0.
Ilya reported in bug #300 that ASAN found a read overflow during startup
in the fcgi code due to a missing empty element at the end of the list
of sample fetches. The effect is that it will randomly either work or crash
on startup.
No backport is needed, this is solely for 2.1-dev.
It is now possible to format stats counters as floats. But the stats applet does
not use it.
This patch is required by the Prometheus exporter to send the time averages in
seconds. If the promex change is backported, this patch must be backported
first.
the option "http-send-name-header" is an eyesore. It was responsible of several
bugs because it is handled after the message analysis. With the HTX
representation, the situation is cleaner because no rewind on forwarded data is
required. But it remains ugly.
With recent changes in HAProxy, we have the opportunity to make it fairly
better. The message formatting in now done in the HTTP multiplexers. So it seems
to be the right place to handle this option. Now, the server name is added by
the HTTP multiplexers (h1, h2 and fcgi).
A different engine-id is now generated for each thread. So, it is possible to
enable the async mode with several threads.
This patch may be backported to older versions.
Use the same algorithm as the sample fetch uuid(), which was added recently,
so UUIDs are generated the same way everywhere.
This patch may be backported to older versions.
SPOE engine-id is the same for all processes when nbproc is more than 1. So,
in async mode, an agent receiving a NOTIFY frame from a process may send the
ACK to another process. This is obviously wrong. A different engine-id must
be generated for each process.
This patch must be backported to 2.0, 1.9 and 1.8.
When a request is received, if the h2 preface is matched, an implicit upgrade
from h1 to h2 is performed. This must only be done for the first request on a
connection. But a test was missing to ensure it is really the first request.
This patch must be backported to 2.0.
When a frame is received for an unknown or already closed stream, it must be
skipped. This also happens when a stream error is reported. But we must be
sure to only skip received data. In the loop in h2_process_demux(), when such
frames are handled, the whole frame length is systematically skipped. If the
frame payload is partially received, this leaves the demux buffer in an
undefined state. Because of this bug, all sorts of errors may be observed,
like crashes or intermittent freezes.
This patch must be backported to 2.0, 1.9 and 1.8.
Since the commit 6884aa3e ("BUG/MAJOR: mux-h2: Handle HEADERS frames received
after a RST_STREAM frame"), HEADERS frames received for an unknown or already
closed stream are decoded. Once decoded, an error is reported for the
stream. But because it is a dummy stream (h2_closed_stream), its state cannot be
changed. So instead, we must return the dummy error stream (h2_error_stream).
This patch must be backported to 2.0 and 1.9.
Consecutive to commit 6884aa3eb0 ("BUG/MAJOR: mux-h2: Handle HEADERS frames
received after a RST_STREAM frame") some valid frames on closed streams
(RST_STREAM, PRIORITY, WINDOW_UPDATE) were now rejected. It turns out that
the previous condition was in fact intentional to catch only sensitive
frames, which was indeed a mistake since these ones needed to be decoded
to keep HPACK synchronized. But we must absolutely accept WINDOW_UPDATES
or we risk stalling some transfers. And RST/PRIO definitely are valid.
Let's adjust the condition to reflect that and update the comment to
explain the reason for this unobvious condition.
This must be backported to 2.0 and 1.9 after the commit above is brought
there.
This way we now always have the event dates, which were really missing,
especially when used with traces:
<0>2019-09-26T07:57:25.183845 [00|h2|1|mux_h2.c:3024] receiving H2 HEADERS frame : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183845 [00|h2|4|mux_h2.c:2505] h2c_bck_handle_headers(): entering : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9>
<0>2019-09-26T07:57:25.183846 [00|h2|4|mux_h2.c:4096] h2c_decode_headers(): entering : h2c=0x1ddcad0(B,FRP)
<0>2019-09-26T07:57:25.183847 [00|h2|4|mux_h2.c:4298] h2c_decode_headers(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183848 [00|h2|0|mux_h2.c:2559] rcvd H2 response : h2c=0x1ddcad0(B,FRH) : [3] H2 RES: HTTP/2.0 200
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2560] h2c_bck_handle_headers(): leaving : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e>
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2866] h2_process_demux(): no more Rx data : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:3123] h2_process_demux(): notifying stream before switching SID : h2c=0x1dd>
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:1014] h2s_notify_recv(): in : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:3135] h2_process_demux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3145] h2_process_mux(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3234] h2_process_mux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3428] h2_send(): leaving with everything sent : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
It looks like some format options could eventually be separated from the
sink, or maybe enforced. For example we could imagine making the date optional
or its resolution configurable within a same buffer.
Similarly, maybe trace events would like to always emit the date even
on stdout, while traffic logs would prefer not to emit the date in the
ring buffer given that there's already one in the message.
We often need ISO time + microseconds in traces and ring buffers, thus this
function does this by calling gettimeofday() and keeping a cached value of
the part representing the tv_sec value, and only rewriting the microsecond
part. The cache is per-thread so it's lockless and safe to use as-is.
Some tests already show that it's easy to see 3-4 events in a single
microsecond, thus it's likely that the nanosecond version will have to
be implemented as well. But certain comments on the net suggest that
some parsers are having trouble beyond the microsecond, thus for now let's
stick to the microsecond only.
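A minimal sketch of the caching idea (names, buffer layout and the use of
__thread are assumptions for illustration, not HAProxy's actual API):

  #include <stdio.h>
  #include <time.h>
  #include <sys/time.h>

  /* "YYYY-MM-DDTHH:MM:SS.uuuuuu" = 26 chars + NUL */
  static __thread time_t cached_sec;
  static __thread char   cached_iso[28];

  const char *iso_time_us_now(void)
  {
      struct timeval tv;

      gettimeofday(&tv, NULL);
      if (tv.tv_sec != cached_sec) {
          /* seconds changed: re-format the fixed-size date/time part */
          struct tm tm;

          localtime_r(&tv.tv_sec, &tm);
          strftime(cached_iso, sizeof(cached_iso), "%Y-%m-%dT%H:%M:%S", &tm);
          cached_sec = tv.tv_sec;
      }
      /* only the ".uuuuuu" part is rewritten on every call */
      snprintf(cached_iso + 19, sizeof(cached_iso) - 19, ".%06u",
               (unsigned)tv.tv_usec);
      return cached_iso;
  }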
When doing a soft shutdown, we won't be making new connections anymore so
there's no point in keeping the namespace file descriptors open anymore.
Keeping these open effectively makes it impossible to properly clean up
namespaces which are no longer used in the new configuration until all
previously opened connections are closed in the old worker process.
This change introduces a cleanup function that is called during soft shutdown
that closes all namespace file descriptors by iterating over the namespace
ebtree.
In state match error cases, we don't know what frame type was received
because we don't reach the frame parsers. Let's add the demuxed frame
type and flags in the trace when it's known. For this we make sure to
always reset h2c->dsi when switching back to FRAME_H. Only one location
was missing. The state transitions were not always clear (sometimes
reported before, sometimes after); these were clarified by being reported
only before switching.
It was difficult in traces showing h2-to-h2 communications to figure the
connection side solely based on the pointer. With this patch we prepend
'F' or 'B' before the state to make this more explicit:
[06|h2|4|mux_h2.c:5487] h2_rcv_buf(): entering : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
[06|h2|4|mux_h2.c:5547] h2_rcv_buf(): leaving : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
[06|h2|4|mux_h2.c:4040] h2_shutw(): entering : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
Now that we can wake tasklet for other threads, make sure that if the thread
is sleeping, we wake it up, or the tasklet won't be executed until it's
done sleeping.
That also means that, before going to sleep, and after we put our bit
in sleeping_thread_mask, we have to check that nobody added a tasklet for
us, just checking for global_tasks_mask isn't enough anymore.
The aim is to gather all scheduler information related to the current
thread. It simply points to task_per_thread[tid] without having to perform
the indexing each time. We save around 1.2 kB of code on performance
sensitive paths and increase the request rate by almost 1%.
There are a number of tests there which are performed on tasklets even
though they will never apply (various handlers, destroyed task or not,
arguments, results, ...). Instead let's have a single TASK_IS_TASKLET() test
and call the tasklet processing function directly, skipping all the rest.
It now becomes visible that the only unneeded code is the update to
curr_task that is never used for tasklets, except for opportunistic
reporting in the debug handler, which can only catch si_cs_io_cb,
which in practice doesn't appear in any report so the extra cost
incurred there is pointless.
This change alone removes 700 bytes of code, mostly in
process_runnable_tasks() and increases the performance by about
1%.
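A simplified sketch of the resulting fast path (the real run loop does much
more bookkeeping; pick_next_runnable() and run_task() are illustrative):

  while ((t = pick_next_runnable())) {
      if (TASK_IS_TASKLET(t)) {
          /* tasklets: no handler checks, no expiry, no profiling, no
           * returned task to requeue: call the function and move on. */
          t->process(t, t->context, t->state);
          continue;
      }
      /* regular task path: profiling, curr_task update, requeue, etc. */
      run_task(t);
  }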
In process_runnable_tasks() we perform a lot of dereferences to
task_per_thread[tid] but tid is thread_local and the compiler cannot
know that it doesn't change so this results in making lots of thread
local accesses and array dereferences. By just keeping a copy pointer
of this, we let the compiler optimize the code. Just doing this has
reduced process_runnable_tasks() by 124 bytes in the fast path. Doing
the same in wake_expired_tasks() results in 16 extra bytes saved.
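The change boils down to something like this (a sketch, not the verbatim
patch; field names are illustrative):

  struct task_per_thread * const tt = &task_per_thread[tid]; /* one TLS read */

  /* subsequent accesses like tt->task_list or tt->rqueue no longer
   * re-evaluate the thread-local 'tid' nor redo the array indexing,
   * letting the compiler keep 'tt' in a register.
   */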
In process_runnable_tasks(), after the task's process() function returns,
we used to check if the return is not NULL and is not a tasklet, to update
profiling measurements. This is useless since only tasks can return non-null
here. Let's remove this useless test.
As identified in issue #278, the backport of commit c594039225 ("BUG/MINOR:
checks: do not uselessly poll for reads before the connection is up")
introduced a regression in 2.0 when default checks are enabled (not
"option tcp-check"), but it did not affect 2.1.
What happens is that in 2.0 and earlier we have the fd cache which makes
a speculative call to the I/O functions after an attempt to connect, and
the __event_srv_chk_r() function was absolutely not designed to be called
while a connection attempt is still pending. Thus what happens is that the
test for success/failure expects the verdict to be final before waking up
the check task, and since the connection is not yet validated, it fails.
It will usually work over the loopback depending on scheduling, which is
why it doesn't fail in reg tests.
In 2.1 after the failed connect(), we subscribe to polling and usually come
back with a validated connection, so the function is not expected to be
called before it completes, except if it happens as a side effect of some
spurious wake calls, which should not have any effect on such a check.
The other check types are not impacted by this issue because they all
check for a minimum data length in the buffer, and wait for more data
until they are satisfied.
This patch fixes the issue by explicitly checking that the connection
is established before trying to read or to give a verdict. This way the
function becomes safe to call regardless of the connection status (even
if it's still totally ugly).
This fix must be backported to 2.0.
If an error occurred on the connection or the conn-stream, no synchronous
send is performed. If the error was not already processed and there is no
more I/O, it will never be processed and the stream will never be notified
of this error. This may block the stream until a timeout is reached, or
indefinitely if there is no timeout.
Concretely, this bug can be triggered from time to time with h2spec, running
the test "http2/5.1.1/2".
This patch depends on the commit 328ed220a "BUG/MINOR: stream-int: Process
connection/CS errors first in si_cs_send()". Both must be backported to 2.0 and
probably to 1.9. In 1.9, the code is totally different, so this patch would have
to be adapted.
Errors on the connections or the conn-stream must always be processed in
si_cs_send(), even if the stream-interface is already subscribed on
sending. This patch does not fix any concrete bug per se. But it is required by
the following one to handle those errors during synchronous sends.
This patch must be backported with the following one to 2.0 and probably to 1.9
too, but with caution because the code is really different.
Now that we can wake up a remote thread's tasklet, it's way more
interesting to use a tasklet than a task in the accept queue, as it
will avoid passing through the whole scheduler. Just doing this increases
the accept rate by about 4%, overall recovering the slight loss
introduced by the tasklet change. In addition it makes sure that
even a heavily loaded scheduler (e.g. many very fast checks) will
not delay a connection accept.
When namespaces are used in the configuration, the respective namespace handles
are opened during config parsing and stored in an ebtree for lookup later.
Unfortunately, when the master process re-execs itself these file descriptors
were not closed, effectively leaking the fds and preventing destruction of
namespaces no longer present in the configuration.
This change fixes this issue by opening the namespace file handles as
close-on-exec, making sure that they will be closed during re-exec.
Change the tasklet code so that the tasklet list is now a mt_list.
That means that tasklets now have an associated tid, for the thread they are
expected to run on, and any thread can now call tasklet_wakeup() for
that tasklet.
One can change the associated tid with tasklet_set_tid().
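Typical usage could look like this (a sketch based on the functions named
above; the callback and context names are illustrative):

  struct tasklet *tl = tasklet_new();

  if (tl) {
      tl->process = my_io_cb;
      tl->context = my_ctx;
      tasklet_set_tid(tl, 2);    /* bind the tasklet to thread 2 */
      tasklet_wakeup(tl);        /* may now be called from any thread */
  }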
Instead of using the same type for regular linked lists and "autolocked"
linked lists, use a separate type, "struct mt_list", for the autolocked one,
and introduce a set of macros, similar to the LIST_* macros, with the
MT_ prefix.
When we use the same entry for both regular lists and autolocked lists, as
is done for the "list" field in struct connection, we now have to explicitly
cast it to struct mt_list when using the MT_ macros.
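For example (assuming an MT_LIST_ADDQ() macro in the new set, analogous to
LIST_ADDQ(), and an illustrative idle_conns_list):

  /* conn->list is declared as a plain 'struct list' but may also live in
   * an autolocked list, hence the explicit cast required by the MT_ macros.
   */
  MT_LIST_ADDQ(&idle_conns_list, (struct mt_list *)&conn->list);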
See GH issues #141 for all the context. In short, registered signal
handlers are not inherited by other threads during startup, which is
normally not a problem, except that we need the same thread as the one
doing the fork() to clean up the old process using waitpid()
once its death is reported via SIGCHLD, as happens in external checks.
The only simple solution to this at the moment is to make sure that
external checks are exclusively run on the first thread, the one
which registered the signal handlers on startup. It will be far more
than enough anyway given that external checks must not require to be
load balanced on multiple threads! A more complex solution could be
designed over the long term to let each thread deal with all signals
but it sounds overkill.
This must be backported as far as 1.8.
As stated in the RFC7540#5.1, an endpoint that receives any frame other than
PRIORITY after receiving a RST_STREAM MUST treat that as a stream error of type
STREAM_CLOSED. However, frames carrying compression state must still be
processed before being dropped to keep the HPACK decoder synchronized. This
was the purpose of the commit 8d9ac3ed8b ("BUG/MEDIUM: mux-h2: do not abort
HEADERS frame before decoding them"). But the test on the frame type was
inverted.
This bug is major because desynchronizing the HPACK decoder leads to mixing
up indexed headers in messages. From the time a HEADERS frame is received
and ignored for a closed stream, wrong headers may be sent to the following
streams.
This patch may fix several bugs reported on github (#116, #290, #292). It must
be backported to 2.0 and 1.9.
The function parse_fcgi_flt() is called when the keyword "fcgi-app" is found on
a filter line. We don't need to compare it again in the function.
This patch fixes the issue #284. No backport needed.
This multiplexer is only available on the backend side. It may handle
multiplexed connections if the FCGI application supports it. An FCGI
application must be configured on the backend to be used. If not redefined
during the request processing by the FCGI filter, this mux handles all
mandatory parameters.
There is a limitation on the way the requests are processed. The parameters
must be encoded into a unique PARAMS record. This means that, once encoded,
all HTTP headers and FCGI parameters must be small enough to be stored in a
buffer. Otherwise, an internal processing error is returned.
The FCGI application handles all the configuration parameters used to format
requests sent to an application. The configuration of an application is grouped
in a dedicated section (fcgi-app <name>) and referenced in a backend to be used
(use-fcgi-app <name>). To be valid, an FCGI application must at least define a
document root. But it is also possible to set the default index, a regex to
split the script name and the path-info from the request URI, parameters to set
or unset... In addition, this patch also adds an FCGI filter, responsible for
all processing on a stream.
When an HTX message is formatted to an H1 or H2 message, pseudo-headers (with
header names starting with a colon (':')) are now ignored. In fact, for now,
only H2 messages have such headers, and the H2 mux already skips them when it
creates the HTX message. But in the future, it may be useful to keep these headers in the
HTX message to help the message analysis or to do some processing during the
HTTP formatting. It would also be a good idea to have scopes for pseudo-headers
(:h1-, :h2-, :fcgi-...) to limit their usage to a specific mux.
This function will try to do a zero-copy transfer; otherwise, it adds a data
block. The same function is used for messages with a content-length, chunked
messages and messages with an unknown body length.
To avoid code duplication in the future FCGI mux, the functions parsing H1
messages and converting them into HTX have been moved to the file h1_htx.c.
Some specific parts remain in the H1 mux, but most of the parsing is now
generic.
Application is a generic term here. It is a module which handles its own log
server list, with no dependency on a proxy. Such applications can now call the
function app_log() to log messages, passing a log server list and a tag as
parameters. Internally, the function __send_log() has been adapted accordingly.
Now, following sample fetches may be used to get information about
authentication:
* http_auth_type : returns the auth method as supplied in Authorization header
* http_auth_user : returns the auth user as supplied in Authorization header
* http_auth_pass : returns the auth pass as supplied in Authorization header
Only Basic authentication is supported.
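For example (a hypothetical configuration sketch):

  # only accept Basic authentication from user "admin"
  acl is_basic http_auth_type -m str Basic
  acl is_admin http_auth_user -m str admin
  http-request deny if !is_basic || !is_admin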
Most of the time, when a keyword is added in a proxy section or on a server line,
we need to have a post-parser callback to check the config validity for the
proxy or the server which uses this keyword.
It is possible to register a global post-parser callback. But all these
callbacks need to loop on the proxies and servers to do their job. It is neither
handy nor efficient. Instead, it is now possible to register per-proxy and
per-server post-check callbacks.
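A registration could look like this (a sketch assuming a
REGISTER_POST_PROXY_CHECK() helper in the style of the existing global
registration constructors; the keyword check itself is hypothetical):

  static int my_px_check(struct proxy *px)
  {
      /* hypothetical keyword requiring HTTP mode */
      if (px->mode != PR_MODE_HTTP) {
          ha_alert("proxy '%s': my-keyword requires HTTP mode.\n", px->id);
          return ERR_FATAL;
      }
      return ERR_NONE;
  }
  REGISTER_POST_PROXY_CHECK(my_px_check);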
Most of the time, when any allocation is done during configuration parsing
because of a new keyword in a proxy section or on a server line, we must add
a call in the deinit() function to release the allocated resources. It is
not possible to use a post-deinit callback because, at this stage, the
proxies and the servers are already released.
Now, it is possible to register deinit callbacks per-proxy or per-server. These
callbacks will be called for each proxy and server before releasing them.
When an error occurs in a mux, most of the time an error is also reported on the
conn-stream, leading to an error (read and/or write) on the channel. When a
parsing or a processing error is reported for the HTX message, it is better to
handle it first.
Since the commit 1b8e68e8 ("MEDIUM: stick-table: Stop handling stick-tables
as proxies."), the target field in the table context of the CLI applet was
no longer a pointer to a proxy. It was replaced by a pointer to a stktable.
But some parts of the code were not updated accordingly. The function
table_prepare_data_request() still tries to cast it to a pointer to a proxy.
The result is totally undefined. With a bit of luck, when the "show table"
command is used with a data type, we fail to find a table and the error
"Data type not stored in this table" is returned. But crashes may also be
experienced.
This patch fixes the issue #262. It must be backported to 2.0.
Recently Lua code which uses Proxy class (get_stats method) stopped
working ("table index is nil from [C] method 'get_stats'")
It probably affects other codepaths too.
This should be backported to 2.0 and 1.9.
In the function connect_server(), when we are not able to reuse a connection and
too many FDs are opened, the variable srv must be defined to kill an idle
connection.
This patch fixes the issue #257. It must be backported to 2.0.
This only happens during the configuration parsing. First leak is the string
representing the last converter parsed, if any. The second one is on the error
path, when the allocation of the ACL expression failed. In this case, the sample
was not released.
This patch fixes the issue #256. It must be backported to all stable versions.
Since the legacy HTTP mode has been removed, this flag is not necessary
anymore. Removing this flag, a test on the HTX message at the end of the
function h2c_decode_headers() has also been removed fixing the github
issue #244.
No backport needed.
When a filter returns an error during the HTTP analysis, an error must be
returned if the status code is not already set. On the request path, an error
400 is returned. On the response path, an error 502 is returned. The status is
considered as unset if its value is not strictly positive.
If needed, this patch may be backported to all versions having filters (as far
as 1.7). Because nobody has ever reported any bug, the backport to 2.0 is
probably enough.
It is now possible to export stats using the JSON format from the HTTP stats
page. Like for the CSV export, to export stats in JSON, you must add the option
";json" on the stats URL. It is also possible to dump the JSON schema with the
option ";json-schema". Corresponding Links have been added on the HTML page.
This patch fixes the issue #263.
In several SSL functions, the XPRT context is retrieved before any check on the
connection. In the function ssl_sock_is_ssl(), a test suggests the connection
may be null. So, it is safer to test the ssl connection before retrieving its
XPRT context. It removes any ambiguity and prevents possible null pointer
dereferences.
This patch fixes the issue #265. It must be backported to 2.0.
It seems to be possible to have no frontend for a listener. A test was missing
before dereferencing it at the end of the function listener_accept().
This patch fixes the issue #264. It must be backported to 2.0 and 1.9.
This adds two extra fields to the stats, one for the current number of idle
connections and one for the configured limit. A tooltip link now appears on
the HTML page to show these values in front of the active connection values.
This should be backported to 2.0 and 1.9 as it's the only way to monitor
the idle connections behaviour.
As mentioned in the previous commit, these flags do not map well to
modern poller capabilities. Let's use the FD_EV_*_{R,W} flags instead.
This first patch only performs a 1-to-1 mapping making sure that the
previously reported flags are still reported identically while using
the closest possible semantics in the pollers.
It's worth noting that kqueue will now support improvements such as
returning distinctions between shut and errors on each direction,
though this is not exploited for now.
In order to address the absurd polling sequence described in issue #253,
let's make sure we disable receiving on a connection until it's established.
Previously with bottom-top I/Os, we were almost certain that a connection
was ready when the first I/O was confirmed. Now we can enter various
functions, including process_stream(), which will attempt to read
something, will fail, and will then subscribe. But we don't want them
to try to receive if we know the connection didn't complete. The first
prerequisite for this is to mark the connection as not ready for receiving
until it's validated. But we don't want to mark it as not ready for sending
because we know that attempting I/Os later is extremely likely to work
without polling.
Once the connection is confirmed we re-enable recv readiness. In order
for this event to be taken into account, the call to tcp_connect_probe()
was moved earlier, between the attempt to send() and the attempt to recv().
This way if tcp_connect_probe() enables reading, we have a chance to
immediately fall back to this and read the possibly pending data.
Now the trace looks like the following. It's far from being perfect
but we've already saved one recvfrom() and one epoll_ctl():
epoll_wait(3, [], 200, 0) = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLOUT, {u32=7, u64=7}}], 200, 1000) = 1
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
sendto(7, "OPTIONS / HTTP/1.0\r\n\r\n", 22, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 22
epoll_ctl(3, EPOLL_CTL_MOD, 7, {EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}], 200, 1000) = 1
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
recvfrom(7, "HTTP/1.0 200\r\nContent-length: 0\r\nX-req: size=22, time=0 ms\r\nX-rsp: id=dummy, code=200, cache=1, size=0, time=0 ms (0 real)\r\n\r\n", 16384, 0, NULL, NULL) = 126
close(7) = 0
'ssl_sock' wasn't fully initialized so a new session can inherit some
flags from an old one.
This causes some fetches, related to client's certificate presence or
its verify status and errors, returning erroneous values.
This issue could generate other unexpected behaviors because a new
session could also inherit other flags such as SSL_SOCK_ST_FL_16K_WBFSIZE,
SSL_SOCK_SEND_UNLIMITED, or SSL_SOCK_RECV_HEARTBEAT from an old session.
This must be backported to 2.0 but it's useless for previous versions.
As discussed in issue #178, the change brought around 1.9-dev11 by commit
1eb6c55808 ("MINOR: lb: make the leastconn algorithm more accurate")
causes some harm in the situation it tried to improve. By always applying
the server's weight even for no connection, we end up always picking the
same servers for the first connections, so under a low load, if servers
only have either 0 or 1 connections, in practice the same servers will
always be picked.
This patch partially restores the original behaviour but still keeping
the spirit of the aforementioned patch. Now what is done is that servers
with no connections will always be picked first, regardless of their
weight, so they will effectively follow round-robin. Only servers with
one connection or more will see an accurate weight applied.
This patch was developed and tested by @malsumis and @jaroslawr who
reported the initial issue. It should be backported to 2.0 and 1.9.
When an error occurs in the post-parser callback which checks configuration
validity of the option outgoing-headers-case-adjust-file, the error message is
freed too early, before being used.
No backport needed. It fixes the github issue #258.
It's pointless to start to perform a recv() call on a connection that is
not yet established. The only purpose used to be to subscribe but that
causes many extra syscalls when we know we can do it later.
This patch only attempts a read if the connection is established or if
there is no write planned, since we want to be certain to be called. And
in wake_srv_chk() we continue to attempt to read if the reader was not
subscribed, so as to perform the first read attempt. In case a first
result is provided, __event_srv_chk_r() will not do anything anyway so
this is totally harmless in this case.
This fix requires that commit "BUG/MINOR: checks: make __event_chk_srv_r()
report success before closing" is applied before, otherwise it will break
some checks (notably SSL) by doing them again after the connection is shut
down. This completes the fixes on the checks described in issue #253 by
roughly cutting the number of syscalls in half. It must be backported to
2.0.
On a plain TCP check, this function will do nothing except shutting the
connection down and will not even update the status. This prevents it
from being called again, which is the reason why we attempt to do it
once too early. Let's first fix this function to make it report success
on plain TCP checks before closing, as it does for all other ones.
This must be backported to 2.0. It should be safe to backport to older
versions but it doesn't seem it would fix anything there.
Since the change of I/O direction, we must not wait for an empty connect
callback before sending the request, we must attempt to send it as soon
as possible so that we don't uselessly poll. This is what this patch
does. This reduces the total check duration by a complete poll loop
compared to what is described in issue #253.
This must be backported to 2.0.
Since the change of I/O direction, we perform the connect() call and
the send() call together from the top. But the send call must at least
disable polling for writes once it does not have anything left to send.
This bug is partially responsible for the waste of resources described
in issue #253.
This must be backported to 2.0.
Since commit 7ac0e35f2 in 1.9-dev1 ("MAJOR: fd: compute the new fd polling
state out of the fd lock") we've started to update the FD POLLED bit a
bit more aggressively. Lately with the removal of the FD cache, this bit
is always equal to the ACTIVE bit. There's no point continuing to watch
it and update it anymore, all it does is create confusion and complicate
the code. One interesting side effect is that it now becomes visible that
all fd_*_{send,recv}() operations systematically call updt_fd_polling(),
except fd_cant_recv()/fd_cant_send() which never saw it change.
HTTP responses with headers that impinge upon the reserve must not be
cached. Otherwise, there is no guarantee that there is enough space to add
the header "Age" when such cached responses are delivered.
This patch must be backported to 2.0 and 1.9. For these versions, the same must
be done for the legacy HTTP mode.
In the cache, huge HTTP headers will use several shctx blocks. When a
response is returned from the cache, these headers must be properly copied
into the corresponding HTX message by updating the pointer indicating where
to copy each header part.
This patch must be backported to 2.0 and 1.9.
The loop is now stopped only when nothing else is consumed from the input
buffer or if a parsing error is encountered. This leaves a chance to detect
cases where we fail to add the EOM.
For instance, when the max is reached after the headers parsing and all the
message is received. In this case, we may have the flag H1S_F_REOS set without
the flag H1S_F_APPEND_EOM and no pending input data, leading to an error because
we think it is an abort.
This patch must be backported to 2.0. This bug does not affect 1.9.
Otherwise some processing may be performed twice. For instance, if the
header "Content-Length" is parsed on the first pass, when the parsing is
restarted we skip it because we think another header with the same value was
already seen. In fact, it is currently the only existing bug that can be
encountered. But it is safer to fully reset the h1m on restart to avoid any
future bugs.
This patch must be backported to 2.0 and 1.9.
Otherwise, the following final response could inherit some of these flags.
For instance, because informational responses have no body, the flag
HTTP_MSGF_BODYLESS is set for 1xx messages. If it is not reset, this flag
will be kept for the final response.
One visible effect of this bug concerns the HTTP compression. When the final
response is preceded by a 1xx message, the compression is not performed.
This was reported in github issue #229.
This patch must be backported to 2.0 and 1.9. Note that the file http_ana.c does
not exist for these branches, the patch must be applied on proto_htx.c instead.
Commit 8a4ffa0a ("MINOR: send-proxy-v2: sends authority TLV according
to TLV received") is missing parentheses around a variable assignment
used as condition in an if statement, and gcc isn't happy about it.
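The classic pattern, for illustration (variable and function names are
hypothetical):

  int value;

  if (value = tlv_len)       /* gcc -Wparentheses: assignment as condition */
      use(value);

  if ((value = tlv_len))     /* the extra parentheses state the intent */
      use(value);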
This bug came with commit 015e4d7 ("MINOR: stick-tables: Add peers process
binding computing") where the "stick" rules cases were missing when
computing the peer local listener process binding. At parsing time we store
in the stick-table struct ->proxies_list the proxies which refer to this
stick-table.
The process binding is computed after having parsed the entire configuration file
with this simple loop in cfgparse.c:
/* compute the required process bindings for the peers from <stktables_list>
 * for all the stick-tables, the ones coming with "peers" sections included.
 */
for (t = stktables_list; t; t = t->next) {
    struct proxy *p;

    for (p = t->proxies_list; p; p = p->next_stkt_ref) {
        if (t->peers.p && t->peers.p->peers_fe) {
            t->peers.p->peers_fe->bind_proc |= p->bind_proc;
        }
    }
}
Note that if this process binding is not correctly initialized, the child
forked by the master-worker stops the peer local listener. This should also
be the case when daemonizing haproxy.
Must be backported to 2.0.
Since patch "7185b789", the authority TLV in a PROXYv2 header from a
client connection is stored. Authority TLV sends in PROXYv2 should be
taken into account to allow chaining PROXYv2 without droping it.
Now by prefixing a log server with "ring@<name>" it's possible to send
the logs to a ring buffer. One nice thing is that it allows multiple
sessions to consult the logs in real time in parallel over the CLI, and
without requiring file system access. At the moment, ring0 is created as
a default sink for tracing purposes and is available. No option is
provided to create new rings though this is trivial to add to the global
section.
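A minimal configuration sketch (using the pre-created "ring0" sink; the
socket path is illustrative):

  global
      log ring@ring0 local0 info

The logs can then be consulted live over the CLI:

  $ socat - /tmp/sock1 <<< "show events ring0"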
Instead of detecting an AF_UNSPEC address family for a log server to deduce
a file descriptor, let's create a target type field and
explicitly mention that the socket is of type FD.
When logging to a file descriptor, we'd rather use the unified
fd_write_frag_line() which uses the FD's lock than perform the
writev() ourselves and use a per-server lock, because if several
loggers point to the same output (e.g. stdout) they are still
not locked and their logs may interleave. The function above
instead relies on the fd's lock so this is safer and will even
protect against concurrent accesses from other areas (e.g traces).
The function also deals with the FD's non-blocking mode so we do
not have to keep specific code for this anymore in the logs.
Logs and sinks were resorting to dirty hacks to initialize an FD to
non-blocking mode. Now we have a bit for this in the fd tab so we can
do it on the fly on first use of the file descriptor. Previously it was
set per log server by writing value 1 to the port, or during a sink
initialization regardless of the usage of the fd.
The "cache" entry was still present in the fdtab struct and it was
reported in "show sess". Removing it broke the cache-line alignment
on 64-bit machines which is important for threads, so it was fixed
by adding an attribute(aligned()) when threads are in use. Doing it
only in this case allows 32-bit thread-less platforms to see the
struct fit into 32 bytes.
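Schematically (a sketch; the field list and the 64-byte value are
assumptions):

  struct fdtab {
      /* ... fd state, handlers, event bits ... */
  }
  #ifdef USE_THREAD
  __attribute__((aligned(64)))   /* keep cache-line alignment with threads */
  #endif
  ;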
Ilya reported in issue #242 that h2c_handle_priority() was having
unreachable code... Obviously, I missed the braces around the "if",
leaving an unconditional return.
No backport is needed.
Now it is possible for a reader to subscribe and wait for new events
sent to a ring buffer. When new events are written to a ring buffer,
the applets that are subscribed are woken up to display new events.
For now we only support this with the CLI applet called by "show events"
since the I/O handler is indeed a CLI I/O handler. But it's not
complicated to add other mechanisms to consume events and forward them
to external log servers for example. The wait mode is enabled by adding
"-w" after "show events <sink>". An extra "-n" was added to directly
seek to new events only.
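Usage sketch (the socket path is illustrative):

  $ socat - /tmp/sock1 <<< "show events buf0 -w -n"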
Some CLI parsers currently abuse the CLI context types, for example stuffing
longs into pointers, by lack of room. But the context is 80 bytes while cli
is only 48, thus there's some room left. This patch
adds a list element and two size_t usable as various offsets. The list
element is initialized.
There are two problems with the way we count watchers on a ring:
- the test for >=255 was accidentally kept at the value 1 used during QA
- if the producer deletes all data up to the reader's position
and the reader is called, cannot write, and is called again,
it will have a zero offset, causing it to initialize itself
multiple times and each time adding a new refcount.
Let's change this and instead use ~0 as the initial offset as it's
not possible to have it as a valid one. We now store it into the
CLI context's size_t o0 instead of casting it to a void*.
No backport needed.
User-level traces are more readable when visually aligned. This is easily
done by writing "rcvd" instead of "received" to align with "sent":
$ socat - /tmp/sock1 <<< "show events buf0"
[00|h2|0|mux_h2.c:2465] rcvd H2 request : [1] H2 REQ: GET /?s=10k HTTP/2.0
[00|h2|0|mux_h2.c:4563] sent H2 response : [1] H2 RES: HTTP/1.1 200
h2c->dsi is only for demuxing, and needed while decoding a new request.
But if we already have a valid stream ID (e.g. response or outgoing
request), we should use it instead. This avoids seeing [0] in front of
the responses at user level.
As documented in "trace h2 verbosity", level "minimal" only features
flags and doesn't perform any decoding at all, while "simple" does,
just like "clean" which is the default for end users.
The "clean" output will be suitable for user and proto-level output
where the internal stuff (state, pointers, etc) is not desired but
just the basic protocol elements.
We need this all the time in traces, let's have it now. For the sake
of compact outputs, the strings are all 3-chars long. The "show fd"
output was improved to make use of this.
All functions of the h2 data path were updated to receive one or multiple
TRACE() calls, at least one pair of TRACE_ENTER()/TRACE_LEAVE(), and those
manipulating protocol elements have been improved to report frame types,
special state transitions or detected errors. Even with careful tests, no
performance impact was measured when traces are disabled.
They are not completely exploited yet; the callback function tries to dump
a lot about them, but still doesn't provide buffer dumps, nor does it
indicate the stream or connection error codes.
The first argument is always set to the connection when known. The second
argument is set to the h2s when known, sometimes a 3rd argument is set to
a buffer, generally the rxbuf or htx, and occasionally the 4th argument
points to an integer (number of bytes read/sent, error code).
Retrieving a 10kB object produces roughly 240 lines at developer level,
35 lines at data level, 27 at state level, 10 at proto level and 2 at user
level.
For now the headers are not dumped, but the start line is emitted in each
direction at user level.
The patch is marked medium because it touches lots of places, though
it takes care not to change the execution path.
The new function h2_trace() is called when relevant by the trace subsystem
in order to provide extra information about the trace being produced. It
can for example display the connection pointer, the stream pointer, etc.
It is declared in the trace source as the default callback as we expect
it to be versatile enough to enrich most traces.
In addition, for requests and responses, if we have a buffer and we can
decode it as an HTX buffer, we can extract user-friendly information from
the start line.
For now the traces are not used. Supported events are categorized by
where the activity comes from (h2c, h2s, stream, etc), a direction
(send/recv/wake), and a list of possibilities for each of them (frame
types, errors, shut, ...). This results in ~50 different events that
try to cover a lot of possibilities when it's needed to filter on
something specific. Special events like protocol error are handled.
A few aggregate events like "rx_frame" or "tx_frame" are planned to
cover all frame types at once by being placed all the time with any
of the other ones.
We also state that the first argument is always the connection. This way
the trace subsystem will be able to safely retrieve some useful info, and
we'll still be able to get the h2c from there (conn->ctx) in a pretty
print function. The second argument will always be an h2s, and in order
to propose it for tracking, we add its description. We also define 4
verbosity levels, which seems more than enough.
The detail level, initially based on syslog levels, is not used, while
something related is missing: the trace verbosity, to indicate whether or
not we want to call the decoding callback and what level of decoding we want
(raw captures etc). Let's change the field to "verbosity" for this. A
verbosity of zero means that the decoding callback is not
called, and all other levels are handled by this callback and are
source-specific. The source is now prompted to list the levels that
are proposed to the user. When the source doesn't define anything,
"quiet" and "default" are available.
It's very convenient to know the trace level in the output, at least to
grep/grep -v on it. The main usage is to filter/filter out the developer
traces at level DEVEL. For this we now add the numeric level (0 to 4) just
after the source name in the brackets. The output now looks like this:
[00|h2|4|mux_h2.c:3174] h2_send(): entering : h2c=0x27d75a0 st=2
 |  |  |     |            |        '-> message
 |  |  |     |            '-> function name
 |  |  |     '-> source file location
 |  |  '-> trace level (4=dev)
 |  '-> trace source
 '-> thread number
Working on adding traces to mux-h2 revealed that the function names are
manually copied a lot in developer traces. The reason is that they are
not preprocessor macros and as such cannot be concatenated. Let's
slightly adjust the trace() function call to take a function name just
after the file:line argument. This argument is only added for the
TRACE_DEVEL and 3 new TRACE_ENTER, TRACE_LEAVE, and TRACE_POINT macros
and left NULL for others. This way the function name is only reported
for traces aimed at the developers. The pretty-print callback was also
extended to benefit from this. This will also significantly shrink the
data segment as the "entering" and "leaving" strings will now be merged.
One technical point worth mentioning is that the function name is *not*
passed as an ist to the inline function because it's not considered as
a builtin constant by the compiler, and would lead to strlen() being
run on it from all call places before calling the inline function. Thus
instead we pass the const char * (that the compiler knows where to find)
and it's the __trace() function that converts it to an ist for internal
consumption and for the pretty-print callback. Doing this avoids losing
5-10% peak performance.
The "payload" trace level was ambigous because its initial purpose was
to be able to dump received data. But it doesn't make sense to force to
report data transfers just to be able to report state changes. For
example, all snd_buf()/rcv_buf() operations coming from the application
layer should be tagged at this level. So here we move this payload level
above the state transitions and rename it to avoid the ambiguity making
one think it's only about request/response payload. Now it clearly is
about any data transfer and is thus just below the developer level. The
help messages on the CLI and the doc were slightly reworded to help
remove this ambiguity.
Save the authority TLV in a PROXYv2 header from the client connection,
if present, and make it available as fc_pp_authority.
The fetch can be used, for example, to set the SNI for a backend TLS
connection.
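For example (a configuration sketch; the address is illustrative):

  backend be_tls
      server s1 192.0.2.10:443 ssl sni fc_pp_authority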
Traces were missing the thread number and the source name, which was really
annoying. Now the thread number is emitted on two digits inside the square
brackets, followed by the source name then the line location, each delimited
with a vertical bar, such as below:
[00|h2|mux_h2.c:2651] Notifying stream about SID change : h2c=0x7f3284581ae0 st=3 h2s=0x7f3284297f00 id=523 st=4
[00|h2|mux_h2.c:2708] receiving H2 HEADERS frame : h2c=0x7f3284581ae0 st=3 dsi=525 (st=0)
[02|h2|mux_h2.c:2194] Received H2 request : h2c=0x7f328d3d1ae0 st=2 : [525] H2 REQ: GET / HTTP/2.0
[02|h2|mux_h2.c:2561] Expecting H2 frame header : h2c=0x7f328d3d1ae0 st=2
It becomes apparent that most traces will use a single trace pretty
print callback, so let's allow the trace source to declare a default
one so that it can be omitted from trace calls, and will be used if
no other one is specified.
The principle is that when emitting a message, if some dropped events
were logged, we first attempt to report this counter before going
further. This is done under an exclusive lock while all logs are
produced under a shared lock. This ensures that the dropped line is
accurately reported and doesn't accidentally arrive after a later
event.
The new "show events" CLI keyword lists supported event sinks. When
passed a buffer-type sink it completely dumps it.
No drops at all during attachment even at 8 million evts/s.
Still missing the attachment limit though.
This now provides sink_new_buf() which allocates a ring buffer. One such
ring ("buf0") of 1 MB is created already, and may be used by sink_write().
The sink's creation should probably be moved somewhere else later.
The three functions (attach, IO handler, and release) are meant to be
called by any CLI command which requires to dump the contents of a ring
buffer. We do not implement anything generic to dump any ring buffer on
the CLI since it's meant to be used by other functionalities above.
However these functions deal with locking and everything so it's trivial
to embed them in other code.
This function tries to write to the ring buffer, possibly removing enough
old messages to make room for the new one. It takes two arrays of fragments
on input to ease the insertion of prefixes by the caller. It atomically
writes the message, possibly truncating it if desired, and returns the
operation's status.
Our circular buffers are well suited for being used as ring buffers for
not-so-structured data. The mechanism here consists in making room in a
buffer before inserting a new record which is prefixed by its size, and
looking up the next record based on the previous one's offset and size. We
can have up to 255 consumers watching for data (dump in progress, tail),
which guarantees that entries are not recycled while they're being dumped.
The complete representation is described in the header file. For now only
ring_new(), ring_resize() and ring_free() are created.
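Schematically, the principle looks like this (an illustration, not the
exact storage format):

  /*   +--------+----------+--------+----------+----
   *   | size_1 | record_1 | size_2 | record_2 | ...
   *   +--------+----------+--------+----------+----
   *   ^tail                                     ^head
   *
   * next_record = prev_offset + sizeof(size) + size;
   * making room means advancing the tail over whole records until the
   * new one fits, unless a watcher still references them.
   */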
Let's not mess up with fd-specific code, locking nor message formatting
here, and use the new generic function instead. This substantially
simplifies the sink_write() code and makes it more agnostic to the
output representation and storage.
Currently both logs and event sinks may use a file descriptor to
atomically emit some output contents. The two may use the same FD though
nothing is done to make sure they use the same lock. Also there is quite
some redundancy between the two. Better make a specific function to send
a fragmented message to a file descriptor which will take care of the
locking via the fd's lock. The function is also able to truncate a
message and to enforce addition of a trailing LF when building the
output message.
It will sometimes be useful, before encoding varints, to know the output
size in advance. Two versions are provided, one inline using a switch/case construct
which will be trivial for use with constants (and will be very fast albeit
huge) and one function iterating on the number which is 5 times smaller,
for use with variables.
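A sketch of the iterating variant, assuming a plain 7-bit continuation
encoding for the sake of illustration (HAProxy's actual varint format
handles the first byte differently):

  /* return the number of bytes needed to encode <v> */
  static size_t varint_bytes(unsigned long long v)
  {
      size_t len = 1;

      while (v >= 0x80) {   /* one extra byte per 7 bits of payload */
          v >>= 7;
          len++;
      }
      return len;
  }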
The converter can be useful to look up a server queue from a dynamic value.
It takes an input value of type string, either a server name or
<backend>/<server> format and returns the number of queued sessions
on that server. Can be used in places where we want to look up
queued sessions from a dynamic name, like a cookie value (e.g.
req.cook(SRVID),srv_queue) and then make a decision to break
persistence or direct a request elsewhere.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
The url32 sample fetch does not take the path part of the URL into
account. This is because in smp_fetch_url32() we erroneously modify
path.len and path.ptr before testing their value and building the
path based part of the hash.
This fixes issue #235.
This must be backported as far as 1.9, when HTX was introduced.
The delete_listener() function takes the listener's lock before taking
the proto_lock, which is contrary to what other functions do, possibly
causing an AB/BA deadlock. In practice the two only places where both
are taken are during protocol_enable_all() and delete_listener(), the
former being used during startup and the latter during stop. In practice
during reload floods, it is technically possible for a thread to be
initializing the listeners while another one is stopping. While this
is too hard to trigger on 2.0 and above due to the synchronization of
all threads during startup, it's reasonably easy to do in 1.9 by having
hundreds of listeners, starting 64 threads and flooding them with reloads
like this :
$ while usleep 50000; do killall -USR2 haproxy; done
Usually in less than a minute, all threads will be deadlocked. The fix
consists in always taking the proto_lock before the listener lock. It
seems to be the only place where these two locks were reversed. This
fix needs to be backported to 2.0, 1.9, and 1.8.
If haproxy is built with profiling enabled with -pg, it is possible to
see the master quit during a reload while it's re-executing itself with
error code 155 (signal 27) saying "Profile timer expired". This happens
if the SIGPROF signal is delivered during the execve() call while the
handler was already unregistered. The issue itself is not directly inside
haproxy but it's easy to address. This patch disables this signal before
calling execvp() during a master reload. A simple test for this consists
in running this little script with haproxy started in master-worker mode :
$ while usleep 50000; do killall -USR2 haproxy; done
This fix should be backported to all versions using the master-worker
model.
If a received message ends with the HTX buffer full and everything is
completed except appending the HTX EOM block, we end up detecting an error
because the H1 parser did not switch to H1_MSG_DONE yet while all conditions
for an end of stream and end of buffer are met. This can be detected by
retrieving 31532 or 31533 chunk-encoded bytes over H1 and seeing haproxy log
"SD--" at the end of a successful transfer.
Ideally the EOM part should be totally independent of the H1 message state
since the block was really parsed and finished. So we should switch to a
last state requiring to send only EOM. However this needs a few risky
changes. This patch aims for simplicity and backport safety, thus it only
adds a flag to the H1 stream indicating that an EOM is still needed, and
excludes this condition from the ones used to detect end of processing. A
cleaner approach needs to be studied, either by adding a state before DONE
or by setting DONE once the various blocks are parsed and before trying to
send EOM.
This fix must be backported to 2.0. The issue does not seem to affect 1.9
though it is not yet known why, probably that it is related to the different
encoding of trailers which always leaves a bit of room to let EOM be stored.
The H1 message parser calls the various message block parsers with an
offset indicating where in the buffer to start from, and only consumes
the data at the end of the parsing. The headers and trailers parsers
have a condition detecting if a headers or trailers block is too large
to fit into the buffer. This is detected by an incomplete block while
the buffer is full. Unfortunately it doesn't take into account the fact
that the block may be parsed after other blocks that are still present
in the buffer, resulting in aborting some transfers early as reported
in issue #231. This typically happens if a trailers block is incomplete at
the end of a buffer full of data, which in practice happens with data sizes
that are a multiple of the buffer size minus a bit less than the trailers
block size. It also happens when the CRLF that follows the 0-sized chunk of
any transfer-encoded contents is itself on such a boundary, since this CRLF
is technically part of the trailers block. This can be reproduced
by asking a server to retrieve exactly 31532 or 31533 bytes of static
data using chunked encoding with curl, which reports:
transfer closed with outstanding read data remaining
This issue was revealed in 2.0 and does not affect 1.9 because in 1.9
the trailers block was processed at once as part of the data block
processing, and would simply give up and wait for the rest of the data
to arrive.
It's interesting to note that the headers block parsing is also affected
by this issue but in practice it has a much more limited impact since a
headers block is normally only parsed at the beginning of a buffer. The
only case where it seems to matter is when dealing with a response buffer
full of 100-continue header blocks followed by a regular header block,
which will then be rejected for the same reason.
This fix must be backported to 2.0 and partially to 1.9 (the headers
block part).
Now we try to find frontend, listener, backend, server, connection,
session, stream, from the presented argument of type connection,
stream or session. Various combinations and bounces allow to
retrieve most of them almost all the time. The extraction is
performed early so that we'll be able to apply filters later.
The lock-on is set if it was not there while the trace is running and
a valid pointer is available. If it was already set and doesn't match,
no trace is produced.
When no criterion is provided, it carefully enumerates all available
ones, including those provided by the source itself. Otherwise it sets
the new criterion and resets the lockon pointer.
When we stop or pause a trace (either on a matching event or by hand),
we must also stop the lock-on feature so that we don't follow any
further activity on this pointer even if it is recycled. For now this
is not exploited.
The trace() call will support an optional decoding callback and 4
arguments that this function is supposed to know how to use to provide
extra information. The output remains unchanged when the function is
NULL. Otherwise, the message is pre-filled into the thread-local
trace_buf, and the function is called with all arguments so that it
completes the buffer in a readable form depending on the expected
level of detail.
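The callback might look like this (a sketch only; the exact prototype is
not shown here and may differ):

    /* sketch: complete the pre-filled trace_buf from the arguments */
    static void h2_trace_cb(enum trace_level level, uint64_t mask,
                            const struct trace_source *src,
                            const struct ist where,
                            const void *a1, const void *a2,
                            const void *a3, const void *a4)
    {
        if (a1) /* e.g. a connection pointer */
            chunk_appendf(&trace_buf, " conn=%p", a1);
    }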
This new "level" argument will allow the trace sources to label the
traces for different purposes, and filter out some of them if they
are not relevant to the current target. Right now we have 5 different
levels:
- USER : the least verbose one, only a few functional information
- PAYLOAD: like user but also displays some payload-related information
- PROTO: focuses on the protocol's framing
- STATE: also indicates internal state transitions or non-transitions
- DEVELOPER: adds extra info about branches taken in the code (break
points, return points)
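In C terms this could be an ordered enum such as (names are assumptions
derived from the labels above):

    enum trace_level {
        TRACE_LEVEL_USER = 0,   /* only functional information */
        TRACE_LEVEL_PAYLOAD,    /* add payload-related information */
        TRACE_LEVEL_PROTO,      /* add protocol framing */
        TRACE_LEVEL_STATE,      /* add internal state (non-)transitions */
        TRACE_LEVEL_DEVELOPER,  /* add branch and return points */
    };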
We now pass an extra argument "where" to the trace() call, which
is supposed to be an ist made of the concatenation of the filename
and the line number. We only keep the last 10 chars from this string
since the end of file names is most often easy to recognize. This
gives developers useful information at very low cost.
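Such an ist can be built entirely at compile time, for instance (macro
names are illustrative):

    /* sketch: build "file.c:123" as a constant ist */
    #define TRC_STR(x)  #x
    #define TRC_XSTR(x) TRC_STR(x)
    #define TRC_LOC     ist(__FILE__ ":" TRC_XSTR(__LINE__))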
For now it remains quite basic. It performs a few state checks, calls
the source's sink if defined, and performs the transitions between
RUNNING, STOPPED and WAITING when the configured events match.
The new "show trace" CLI command lists available trace sources and
indicates their status, their sink, and number of dropped packets.
When "show trace <source>" is used, the list of known events is
also listed with their status per action (report/start/stop/pause).
The "level" keyword allows to indicate the expected level of verbosity
in the traces, among "user" (least verbose, just synthetic info) to
"developer" (very detailed, including function entry/leaving). It's only
displayed and set but not used yet.
For now it lists the sources if one is not provided, and checks
for the source's existence. It lists the events if not provided,
checks for their existence if provided, adjusts the report, start,
stop and pause events, and performs state transitions.
It lists sinks and adjusts them as well. Filters, lock, and
level are not implemented yet.
The principle of this subsystem will be to support taking live traces
at various places in the code with conditional triggers, filters, and
ability to lock on some elements. The traces will support typed events
and will be sent into sinks made of ring buffers, file descriptors or
remote servers.
This is the most basic type of sink. It pre-registers "stdout" and
"stderr", and is able to use writev() on them. The writev() operation
is locked to avoid mixing outputs. It's likely that the registration
should move somewhere else to take into account whether stdout and
stderr are still open or have been closed.
The principle will be to be able to dispatch events to various destinations
called "sinks". This is already done in part in logs where log servers can
be either a UDP socket or a file descriptor. This will be needed with the
new trace subsystem where we may also want to add ring buffers. And it turns
out that all such destinations make sense at all places. Logs may need to be
sent to a TCP server via a ring buffer, or consulted from the CLI. Trace
events may need to be sent to stdout/stderr as well as to remote log servers.
This patch creates a new structure "sink" aiming at addressing these similar
needs. The goal is to merge together what is common to all of them, such as
the output format, the dropped events count, etc, and also keep separately
the target identification (network address, file descriptor). Provisions
were made to have a "waiter" on the sink. For a TCP log server it will be
the task to wake up after writing to the log buffer. For a ring buffer, it
could be the list of watchers on the CLI running a "tail" operation and
waiting for new events. A lock was also placed in the struct since many
operations will require some locking, including the FD ones. The output
formats cover those in use by logs and two extra ones prepending the ISO
time in front of the message (convenient for stdio/buffer).
For now only the generic infrastructure is present, no type-specific
output is implemented. There's the sink_write() function which prepares
and formats a message to be sent, trying hard to avoid copies and only
using pointer manipulation, where the type-specific code just has to be
added. Dropped messages are already counted (for now 100% drop). The
message is put into an iovec array as it will be trivial to use with
file descriptors and sockets.
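The assembly can be pictured as follows (simplified sketch, names are
assumptions):

    /* sketch: gather prefix, message and trailing LF without copying */
    struct iovec vec[3];
    vec[0].iov_base = pfx; vec[0].iov_len = pfx_len; /* e.g. ISO time */
    vec[1].iov_base = msg; vec[1].iov_len = msg_len; /* message body */
    vec[2].iov_base = "\n"; vec[2].iov_len = 1;      /* end of event */
    ret = writev(fd, vec, 3);  /* type-specific code plugs in here */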
The function call tracing code is quite old and was never ported to
support threads. It's not even sure whether it still works well, but
at least its presence creates confusion for future work so let's rename
it to calltrace.c and add a comment about its lack of thread-safety.
In h1_rcv_buf(), wake the h1c tasklet as long as we're not done reading the
request/response, and the h1c is not already subscribed for receiving. Now
that we no longer subscribe in h1_recv() if we managed to read data, we
rely on h1_rcv_buf() calling us again, but h1_process_input() may have
returned 0 if we only received part of the request, so we have to wake
the tasklet to be sure to get more data again.
When we dump a thread's state (show thread, panic) we don't know if
anything is happening in Lua, which can be problematic especially when
calling external functions. With this patch, the thread dump code can
now detect if we're running in a global Lua task (hlua_process_task),
or in a TCP or HTTP Lua service (task_run_applet and applet.fct ==
hlua_applet_tcp_fct or hlua_applet_http_fct), or a fetch/converter
from an analyser (s->hlua != NULL). In such situations, it's able to
append a formatted Lua backtrace of the Lua execution path with
function names, file names and line numbers.
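The backtrace itself can be collected with the standard Lua debug API,
roughly as follows (a sketch, not the exact code):

    /* sketch: walk the Lua call stack and append each frame */
    lua_Debug ar;
    int depth = 0;

    while (lua_getstack(hlua->T, depth++, &ar)) {
        lua_getinfo(hlua->T, "Snl", &ar);
        chunk_appendf(&trash, "%s() from %s:%d\n",
                      ar.name ? ar.name : "[C]",
                      ar.short_src, ar.currentline);
    }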
Note that a shorter alternative could be to call "luaL_where(hlua->T,0)"
which only prints the current location, but it's not necessarily sufficient
for complex code.
The current functions are visible outside the debugging code and are
convenient to export so that we can improve the thread dump output :
void hlua_applet_tcp_fct(struct appctx *ctx);
void hlua_applet_http_fct(struct appctx *ctx);
struct task *hlua_process_task(struct task *task, void *context, unsigned short state);
Of course they are only available when USE_LUA is defined.
This is somewhat related to indent_msg() except that this one places a
known prefix at the beginning of each line, makes it possible to replace
the EOL character, and can skip the prefix on the first line if desired.
It works with a normal output buffer/chunk so it doesn't need to allocate
anything nor to modify the input string. It is suitable for use in multi-
line backtraces.
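Usage could look like this (the function name and signature here are
assumptions based on the description above):

    /* sketch: dump a multi-line Lua backtrace with a visual prefix */
    chunk_reset(&trash);
    append_prefixed_str(&trash, lua_bt, "    | ", '\n', 0);
    /* 0 = no prefix on the first line; '\n' = the EOL to use */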
In mux_pt_attach(), don't unconditionally call unsubscribe, and only do so
if we were subscribed. The idea was that at this point we would always be
subscribed, as the mux_pt attach would only be called after at least one
request, after which the mux_pt would have subscribed, but this is wrong.
We can also be called if for some reason the connection failed before the
xprt was created. And with no xprt, attempting to call unsubscribe will
probably lead to a crash.
This should be backported to 2.0.
The stats applet waits to have a full body to process POST requests. Because
it does not produce anything while waiting for the end of a request, the
applet may be blocked: the client side is blocked because the stats applet
does not consume anything, and the applet is waiting because the whole body
has not been received. Registering the analyzer AN_REQ_HTTP_BODY when a POST
request is sent to the stats applet solves the issue.
This patch must be backported to 2.0.
This bug was introduced by the commit bfab2ddd ("MINOR: hlua: Add a flag on the
lua txn to know in which context it can be used"). The wrong test was done. So
the timeout was always set on the response channel. It may lead to an infinite
loop.
This patch must be backported everywhere the commit bfab2ddd is. For now, at
least to 2.0, 1.9 and 1.8.
In the REORG of commit 1a18b5414 ("REORG: connection: centralize the
conn_set_{tos,mark,quickack} functions") a bug was introduced by
calling conn_set_tos instead of conn_set_mark.
This was reported in issue #212.
This should be backported to 1.9 and 2.0.
When we upgrade the mux from TCP to H2/HTX, don't use cs_destroy() to free
the conn_stream, use cs_free() instead. Using cs_destroy() would call the
mux detach method, and at that point of time the mux would be the H2 mux,
which knows nothing about that conn_stream, so bad things would happen.
This should eventually make upgrade from TCP to H2/HTX work, and fix
the github issue #196.
This should be backported to 2.0.
In stream_end_backend(), if we're upgrading from TCP to H1/HTX, as we don't
destroy the stream, we have to add the SF_HTX flag on the stream, or bad
things will happen.
This was broken when attempting to fix github issue #196.
This should be backported to 2.0.
There were 221 places where a status message or an error message were built
to be returned on the CLI. All of them were replaced to use cli_err(),
cli_msg(), cli_dynerr() or cli_dynmsg() depending on what was expected.
This removed a lot of duplicated code because most of the times, 4 lines
are replaced by a single, safer one.
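For instance cli_err() may be as simple as this (a sketch consistent with
the description; the actual helper may differ in details):

    /* sketch: report a constant error message and end the command */
    static inline int cli_err(struct appctx *appctx, const char *err)
    {
        appctx->ctx.cli.msg = err;
        appctx->st0 = CLI_ST_PRINT_ERR;
        return 1;
    }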
Right now we have extremely inconsistent states to report output: one is
CLI_ST_PRINT, which prints the constant message cli->msg with the
assigned severity, and CLI_ST_PRINT_FREE, which prints the dynamically
allocated cli->err with severity LOG_ERR, and nothing in between,
even though it's useful to be able to report dynamically allocated
messages as well as constant error messages.
This patch adds two extra states, which are not particularly well named
given the constraints imposed by existing ones. One is CLI_ST_PRINT_ERR
which prints a constant error message. The other one is CLI_ST_PRINT_DYN
which prints a dynamically allocated message. By doing so we maintain
the compatibility with current code.
It is important to keep in mind that we cannot pre-initialize pointers
and automatically detect what message type it is based on the assigned
fields, because the CLI's context is in a union shared with all other
users, thus unused fields contain anything upon return. This is why we
have no choice but using 4 states. Keeping the two fields <msg> and
<err> remains useful because one is const and not the other one, and
this catches many copy-paste mistakes. It's just that <err> is pretty
confusing here, it should be renamed.
Since last commit there's no point anymore in having two variants of the
same function, let's switch to b_free() only. __b_drop() was renamed to
__b_free() for obvious consistency reasons.
CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA are removed for BoringSSL. Early
data can be checked via the BoringSSL API and ssl_fc_has_early can use it.
This should be backported to all versions till 1.8.
Since BoringSSL commit 777a2391 "Hold off flushing NewSessionTicket until write.",
0-RTT doesn't work. It appears that half-RTT data (response from 0-RTT) never
worked before the BoringSSL fix. For HAProxy the regression comes from 010941f8
"BUG/MEDIUM: ssl: Use the early_data API the right way.": the problem is linked
to the logic of CO_FL_EARLY_SSL_HS used for OpenSSL. With BoringSSL, the
handshake is
done before reading early data, 0-RTT data and half-RTT data are processed as
normal data: CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA is not needed, simply remove
it.
This should be backported to all versions till 1.8.
Allow HAProxy to cache responses to OPTIONS HTTP requests.
This is useful in the "Cross-Origin Resource Sharing" (CORS) use case,
to cache CORS responses from API servers.
Since HAProxy does not support the Vary header for now, this is only
useful for the "access-control-allow-origin: *" use case.
The current HTTP cache hash contains only the Host header and the URL path.
The request method should also be added to the mix to support caching
other request methods on the same URL, e.g. GET and OPTIONS.
The frame check code in the demuxer was moved to its own function to
keep the demux function clean enough. This also simplifies the test
case as we can now simply call this function once in H2_CS_FRAME_P
state.
When parsing references to stick-tables declared as backends, they are added
to a list of proxies (they are proxies!) which refer to this stick-table.
Before this patch we added them to this list without checking they were
already present, on the silly assumption that the actions/samples were
checked/resolved in the same order the proxies are parsed.
This patch implements a simple inline function, in_proxies_list(), to test
the presence of a proxy in a list of proxies. We use this function when
resolving/checking samples/actions.
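The helper may look like this (the list link name is an assumption):

    /* sketch: check whether <px> already belongs to the proxies list */
    static inline int in_proxies_list(struct proxy *list, struct proxy *px)
    {
        struct proxy *p;

        for (p = list; p; p = p->next_stkt_ref)
            if (p == px)
                return 1;
        return 0;
    }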
This bug was introduced by commit 015e4d7.
Must be backported to 2.0.
In SMTP, MySQL and PgSQL checks, we're supposed to finish with a message
to politely quit the server, otherwise some of them will log some errors.
This is the case with Postfix as reported in GH issue #187. Since commit
fe4abe6 ("BUG/MEDIUM: connections: Don't call shutdown() if we want to
disable linger.") we are a bit more aggressive on outgoing connection
closure and checks were not prepared for this.
This patch makes the 3 checks above disable the linger_risk for these
checks so that we close cleanly, with the side effect that it will leave
some TIME_WAIT connections behind (hence why it should not be generalized
to all checks). It's worth noting that in issue #187 it's mentioned that
this patch doesn't seem to be sufficient for Postfix, however based only
on local network activity this looks OK, so maybe this will need to be
improved later.
Given that the patch above was backported to 2.0 and 1.9, this one should
as well.
In Patrick's trace it was visible that after a stream had been missed,
the next stream would receive a WINDOW_UPDATE with the first one's
credit added to its own. This makes sense because in case of error
h2c->rcvd_s is not reset. Given that this counter is per frame, better
reset it when starting to parse a new frame, it's easier and safer.
This must be backported as far as 1.8.
In h2_process_mux() if we have some room and an attempt to send a window
update for the connection was pending, it's done first. But it's not done
for the stream, which will have the effect of postponing this attempt
till the next pass into h2_process_demux(), at the risk of seeing the send
buffer full again. Let's always try to send both pending frames as soon
as possible.
This should be backported as far as 1.8.
Patrick Hemmer reported a rare case where the H2 mux emits spurious
RST_STREAM(STREAM_CLOSED) that are triggered by the send path and do
not even appear to be associated with a previous incoming frame, while
the send path never emits such a thing.
The problem is particularly complex (hence its rarity). What happens is
that when data are uploaded (POST) we must refill the sending stream's
window by sending a WINDOW_UPDATE message (and we must refill the
connection's too). But in a highly bidirectional traffic, it is possible
that the mux's buffer will be full and that there is no more room to build
this WINDOW_UPDATE frame. In this case the demux parser switches to the
H2_CS_FRAME_A state, noting that an "acknowledgement" is needed for the
current frame, and it doesn't change the current stream nor frame type.
But the stream's state was possibly updated (typically OPEN->HREM when
a DATA frame carried the ES flag).
Later the data can leave the buffer, wake up h2_io_cb(), which calls
h2_send() to send pending data, itself calling h2_process_mux() which
detects that there are unacked data in the connection's window so it
emits a WINDOW_UPDATE for the connection and resets the counter. Then
h2_process() calls h2_process_demux() which continues the processing
based on the current frame type and the current state H2_CS_FRAME_A.
Unfortunately the protocol compliance checks matching the frame type
against the current state are still present. These tests are designed
for new frames only, not for those in progress, but they are not
limited by frame types. Thus the current DATA frame is checked again
against the current stream state that is now HREM, and fails the test
with a STREAM_CLOSED error.
The quick and backportable solution consists in adding the test for
this ACK and bypass all these checks that were already validated prior
to the state transition. A better long-term solution would consist in
having a new state between H and P indicating the frame is new and
needs to be checked ("N" for new?) and apply the protocol tests only
in this state. In addition everywhere we decide to send a window
update, we should send a stream WU first if there are unacked data
for the current stream. Last, rcvd_s should always be reset when
transitioning to FRAME_H (and a BUG_ON for this in dev would help).
The bug will be way harder to trigger on 2.0 than on 1.8/1.9 because
we have a ring buffer for the connection so the buffer full situations
are extremely rare.
This fix must be backported to all versions having H2 (as far as 1.8).
Special thanks to Patrick for providing exploitable traces.
If the server decides to close early, we don't want to send a
REFUSED_STREAM error but a CANCEL, so that the client doesn't
retry. The test in h2_do_shutw() was wrong for this as it would
handle the HLOC case like the case where nothing had been sent for
this stream, which is wrong. Now h2_do_shutw() does nothing in this
case and lets h2_do_shutr() decide.
Note that this partially undoes f983d00a1 ("BUG/MINOR: mux-h2: make
the do_shut{r,w} functions more robust against retries").
This must be backported to 2.0. The patch above was not backported to
1.9 for being too risky there, but if it eventually gets to it, this
one will be needed as well.
There is a test on the existence of the conn_stream when receiving data,
to be sure to have somewhere to deliver it. Right now it responds with
STREAM_CLOSED, which is not correct since from an H2 point of view the
stream is not closed and a peer could be upset to see this. After some
analysis, it is important to keep this test to be sure not to fill the
rxbuf then stall the connection. Another option could be to modify
h2_frt_transfer_data() to silently discard any contents but the CANCEL
error code is designed exactly for this and to save the peer from
continuing to stream data that will be discarded, so better switch to
using this.
This must be backported as far as 1.8.
The test in h2s_send_rst_stream() is excessive, it refrains from sending an
RST_STREAM if *the last frame* was an RST_STREAM, regardless of the stream
ID. In a context where both clients and servers abort a lot, it could happen
that one RST_STREAM is dropped from responses from time to time, causing
delays to the client.
This must be backported to 2.0, 1.9 and 1.8.
In _update_fd(), if we're about to remove the FD from the poller, remove
both the receive and the send bits, instead of removing the receive bits
twice.
In fd_dodelete(), always reset the polled_mask bits, instead of only doing
it if we're closing the file descriptor. We call the poller clo() method
anyway, and failing to do so means that if fd_remove() is used while the
fd is polled, the poller won't attempt to poll on a fd with the same value
as the old one.
This leads to fd being stuck in the SSL code while using the async engine.
This should be backported to 2.0, 1.9 and 1.8.
In stream_set_backend(), if we have a TCP stream, and we want to upgrade it
to H2 instead of attempting to reuse the stream, just destroy the
conn_stream, make sure we don't log anything about the stream, and pretend
we failed setting the backend, so that the stream will get destroyed.
New streams will then be created by the mux, as if the connection just
happened.
This fixes a crash when upgrading from TCP to H2, as the H2 mux totally
ignored the conn_stream provided by the upgrade, as reported in github
issue #196.
This should be backported to 2.0.
The SETTINGS frame parser updates all streams' window for each
INITIAL_WINDOW_SIZE setting received on the connection (like h2spec
does in test 6.5.3), which can start to be expensive if repeated when
there are many streams (up to 100 by default). A quick test shows that
it's possible to parse only 35000 settings per second on a 3 GHz core
for 100 streams, which is rather small.
Given that window sizes are relative and may be negative, there's no
point in pre-initializing them for each stream and updating them from
the settings. Instead, let's make them relative to the connection's
initial window size so that any change immediately affects all streams.
The only thing that remains needed is to wake up the streams that were
unblocked by the update, which is now done once at the end of
h2_process_demux() instead of once per setting. This now results in
5.7 million settings being processed per second, which is way better.
In order to keep the change small, the h2s' mws field was renamed to
"sws" for "stream window size", and an h2s_mws() function was added
to add it to the connection's initial window setting and determine the
window size to use when muxing. The h2c_update_all_ws() function was
renamed to h2c_unblock_sfctl() since it's now only used to unblock
previously blocked streams.
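The resulting helper is essentially an addition (a sketch; the name of the
connection's initial window field is an assumption):

    /* sketch: effective send window = stream's relative window plus the
     * connection's current initial window setting.
     */
    static inline int h2s_mws(const struct h2s *h2s)
    {
        return h2s->sws + h2s->h2c->miw;
    }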
This needs to be backported to all versions till 1.8.
Recent optimization in commit 4d7a88482 ("MEDIUM: mux-h2: don't try to
read more than needed") broke the receipt of large DATA frames because
it would unconditionally subscribe if there was some room left, thus
preventing any new rx from being done since subscription may only be
done once the end was reached, as indicated by ret == 0.
However, fixing this uncovered that in HTX mode previous versions might
occasionally be affected as well: when an available frame is the same
size as the maximum data that may fit into an HTX buffer, we may end
up reading that whole frame and still subscribe since it's still allowed
to receive, thus causing issues reading the next frame.
This patch will only work for 2.1-dev but a minor adaptation will be
needed for earlier versions (down to 1.9, where subscribe() was added).
Between 1.6 and 1.7, some parts of the stream forwarding process were
moved into lower layers and the stream-interface had to keep the
stream's task up to date regarding the timeouts. The analyser timeouts
were not updated there as it was believed this was not needed during
forwarding, but actually there is a case for this which is "option
contstats" which periodically triggers the analyser timeout, and this
change broke the option in case of sustained traffic (if there is some
I/O activity during the same millisecond as the timeout expires, then
the update will be missed).
This patch simply brings back the analyser expiration updates from
process_stream() to stream_int_notify().
It may be backported as far as 1.7, taking care to adjust the fields
names if needed.
There are some situations, after sending a request or response, upon I/O
completion, or applet execution, where we end up with an empty buffer that
was not released. This results in excessive memory usage (back to 1.5) and
a lower CPU cache efficiency since buffers are not recycled as fast. This
has changed since the places where we send have changed with the new
layering, but not all cases susceptible of leaving an empty buffer were
properly spotted. Doing so reduces the memory pressure on buffers by about
2/3 in high traffic tests.
This should be backported to 2.0 and maybe 1.9.
Multiple calls to smp_fetch_fhdr use the header context to keep track of
header parsing position; however, when using header sampling on a raw
connection, the raw buffer is converted into an HTX structure each time, and
this was done in the trash areas; so the block reference would be invalid on
subsequent calls.
This patch must be backported to 2.0 and 1.9.
When the server weight is increased in consistent hash, extra nodes have to
be allocated. So a realloc() is performed on the nodes array of the server.
The previous commit 962ea7732 ("BUG/MEDIUM: lb-chash: Remove all server's
entries before realloc() to re-insert them after") fixed the size used
during the realloc() to avoid segfaults. But another bug remains. After the
realloc(), the
memory area allocated for the nodes array may change, invalidating all node
addresses in the chash tree.
So, to fix the bug, we must remove all server's entries from the chash tree
before the realloc() and re-insert all of them afterwards, old nodes and new
ones. The insert will be automatically handled by the loop at the end of the
function chash_queue_dequeue_srv().
Note that if the call to realloc() failed, no new entries will be created for
the server, so the effective server weight will be unchanged.
This issue was reported on Github (#189).
This patch must be backported to all versions since the 1.6.
When using a ckch we should never try to free its content, because it
won't be usable afterwards and can result in a NULL dereference during
parsing.
The content was previously freed because the ckch wasn't stored in a
tree to be used later; now that we use it multiple times, we need to keep
the data.
It happens that upon looping threads the watchdog fires, starts a dump,
and other threads expire their budget while waiting for the other threads
to get dumped and trigger a watchdog event again, adding some confusion
to the traces. With this patch the situation becomes clearer as we export
the list of threads being dumped so that the watchdog can check it before
deciding to trigger. This way threads already queued for being dumped are
not reported again in turn.
This should be backported to 2.0 as it helps understand stack traces.
If a thread dump is requested from a signal handler, it may interrupt
a thread already waiting for a dump to complete, and may see the
threads_to_dump variable go to zero while others are waiting, steal
the lock and prevent other threads from ever completing. This tends
to happen when dumping many threads upon a watchdog timeout, due to threads
waiting for their turn.
Instead now we proceed in two steps :
1) the last dumped thread sets all bits again
2) all threads only wait for their own bit to appear, then clear it
and quit
This way there's no risk that a bit performs a double flip in the same
loop and threads cannot get stuck here anymore.
This should be backported to 2.0 as it clarifies stack traces.
Recent changes use struct cert_key_and_chain to load all certificates in
frontends; this structure was previously used only to load multi-cert
bundles, which are supported only with openssl >= 1.0.2.
h2_recv() was historically built around a loop to deal with
the callback model but this is not needed anymore: if the upper
layer wants more data, it will simply try to read again. Right now
50% of the recvfrom() calls made over H2 return EAGAIN. With this
change it doesn't happen anymore. Note that the code simply consists
in breaking the loop, and reporting real data receipt instead of
always returning 1.
A test was made not to subscribe if we actually read data but it
doesn't change anything since we might be subscribed very early
already.
In the poller code, instead of just remembering if we're currently polling
a fd or not, remember if we're polling it for writing and/or for reading;
that way, we can avoid modifying the polling if it's already polled as
needed.
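Concretely, the per-fd polling state may be represented like this (a
sketch; field names are assumptions):

    /* sketch: one thread mask per direction instead of a single one */
    struct polled_mask {
        unsigned long poll_recv;  /* threads polling this fd for reading */
        unsigned long poll_send;  /* threads polling this fd for writing */
    };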
Now that the architecture was changed so that attempts to receive/send data
always come from the upper layers, instead of them only trying to do so when
the lower layer let them know they could try, we can finally get rid of the
fd cache. We don't really need it anymore, and removing it gives us a small
performance boost.
OPENSSL_NO_DH can be defined to avoid obsolete and heavy DH processing.
With OPENSSL_NO_DH, parsing the entire PEM file to look for a DHparam
is a waste of time.
Loading a PEM certificate and using it in a CTX are now decorrelated.
Checking the certificate and private key consistency can be done
earlier: in the loading phase instead of the CTX setup phase.
cert_key_and_chain handling is now outside the openssl 1.0.2 #if: the
code must be libssl compatible. SSL_CTX_add1_chain_cert and
SSL_CTX_set1_chain require openssl >= 1.0.2, so they are replaced by the
legacy SSL_CTX_add_extra_chain_cert when SSL_CTX_set1_chain is not provided.
A problem involving server slowstart was reported by @max2k1 in issue #197.
The problem is that pendconn_grab_from_px() takes the proxy lock while
already under the server's lock while process_srv_queue() first takes the
proxy's lock then the server's lock.
While the latter seems more natural, it is fundamentally incompatible with
many other operations performed on servers, namely state change propagation,
where the proxy is only known after the server and cannot be locked around
the servers. However reversing the lock in process_srv_queue() is trivial
and only the few functions related to dynamic cookies need to be adjusted
for this so that the proxy's lock is taken for each server operation. This
is possible because the proxy's server list is built once at boot time and
remains stable. So this is what this patch does.
The comments in the proxy and server structs were updated to mention this
rule that the server's lock may not be taken under the proxy's lock but
may enclose it.
Another approach could consist in using a second lock for the proxy's queue
which would be different from the regular proxy's lock, but given that the
operations above are rare and operate on small server lists, there is no
reason for overdesigning a solution.
This fix was successfully tested with 10000 servers in a backend where
adjusting the dyncookies in loops over the CLI didn't have a measurable
impact on the traffic.
The only workaround without the fix is to disable any occurrence of
"slowstart" on server lines, or to disable threads using "nbthread 1".
This must be backported as far as 1.8.
Load the DH param at the same time as the certificate, we don't need to
open the file once more and read it again. We store it in the ckch_node.
There is a minor change compared to the previous way of loading the DH
param in a bundle. With a bundle, the DH param in a certificate file was
never loaded, it only used the global DH or the default DH; now it's
able to use the DH param from a certificate file.
Don't read a certificate file again if it was already stored in the
ckchn tree. It allows HAProxy to start more quickly if the same
certificate is used at different places in the configuration.
The lookup is done in the ssl_sock_load_cert() function; doing it at this
level makes it possible to skip reading the certificate from the filesystem.
If the certificate is not found in the tree, we insert the ckch_node in
the tree once the certificate is read on the filesystem, the filename or
the bundle name is used as the key.
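The lookup-before-load logic may be sketched like this (ckchn_lookup()
and ckchn_insert() are hypothetical names; only ckchn_load_cert_file()
is mentioned below):

    /* hypothetical sketch: look up the tree before touching the disk */
    struct ckch_node *ckchn = ckchn_lookup(path); /* key: file/bundle name */

    if (!ckchn) {
        ckchn = ckchn_load_cert_file(path, 0); /* read from the filesystem */
        if (ckchn)
            ckchn_insert(path, ckchn);         /* share it from now on */
    }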
Split the functions which open the certificates.
Instead of directly opening the certificates and inserting them
into an SSL_CTX, we use a struct cert_key_and_chain to store them in
memory and then we associate an SSL_CTX to the certificate stored in that
structure.
Introduce the struct ckch_node for the multi-cert bundles so we can
store multiple cert_key_and_chain in the same structure.
The functions ssl_sock_load_multi_cert() and ssl_sock_load_cert_file()
were modified so they don't open the certificates anymore on the
filesystem (they still open the sctl and ocsp though). These functions
were renamed ssl_sock_load_ckchn() and ssl_sock_load_multi_ckchn().
The new function ckchn_load_cert_file() is in charge of loading the
files in the cert_key_and_chain. (TODO: load ocsp and sctl from there
too).
The ultimate goal is to be able to load a certificate from a certificate
tree without doing any filesystem access, so we don't try to open it
again if it was already loaded, and we share its configuration.
This structure was only used in the case of the multi-cert bundle.
Using these primitives everywhere we load the files is a first step
in the deduplication of the code.
This commit merges the function ssl_sock_load_cert_file() and
ssl_sock_load_cert_chain_file().
The goal is to refactor the SSL code and use the cert_key_and_chain
struct to load everything.
When the payload of a block is shrunk or enlarged, addresses of the free
spaces must be updated. There are many possible cases. One of them is
buggy. When there is only one block in the HTX message and its payload is just
before the tail room and it needs to be moved in the head room to be enlarged,
addresses are not correctly updated. This bug may be hit by the compression
filter.
This patch must be backported to 2.0.
The flag HLUA_TXN_HTTP_RDY was added in the previous commit to know when a
function is called for a channel with a valid HTTP message or not. Of course it
also depends on the calling direction. In this commit, we allow the execution of
functions of the HTTP class only if this flag is set.
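The guard at the top of each HTTP-class function may then look like this
(a sketch):

    /* sketch: refuse HTTP functions when no valid HTTP message is ready */
    if (!(htxn->flags & HLUA_TXN_HTTP_RDY))
        WILL_LJMP(lua_error(L));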
Nobody seems to use them from an unsupported context (for instance, trying to
set an HTTP header from a tcp-request rule). But it remains a bug leading to
undefined behaviors or crashes.
This patch may be backported to all versions since the 1.6. It depends on the
commits "MINOR: hlua: Add a flag on the lua txn to know in which context it can
be used" and "MINOR: hlua: Don't set request analyzers on response channel for
lua actions".
When a lua action or a lua sample fetch is called, a lua transaction is
created. It is an entry in the stack containing the class TXN. Thanks to it, we
can know the direction (request or response) of the call. But, for some
functions, it is also necessary to know if the buffer is "HTTP ready" for the
given direction. "HTTP ready" means there is a valid HTTP message in the
channel's buffer. So, when a lua action or a lua sample fetch is called, the
flag HLUA_TXN_HTTP_RDY is set if it is appropriate.
Setting some request analyzers on the response channel was an old trick to
be sure to re-evaluate the request's analyzers after the response's ones
have been called. It is no longer necessary. In fact, this trick was removed
in the version
1.8 and backported up to the version 1.6.
This patch must be backported to all versions since 1.6 to ease the backports of
fixes on the lua code.
It is invalid to manipulate responses from http-request rules or to manipulate
requests from http-response rules. When http-request rules are evaluated,
the connection to the server is not yet established, so there is no response
at all. And
when http-response rules are evaluated, the request has already been sent to the
server.
Now, the calling direction is checked. So functions "txn.http:req_*" can now
only be called from http-request rules and the functions "txn.http:res_*" can
only be called from http-response rules.
This issue was reported on Github (#190).
This patch must be backported to all versions since the 1.6.
For HTX streams, when txn:done() is called, the work is delegated to the
function http_reply_and_close(). But it is not enough. The channel's analyzers
must also be reset. Otherwise, some analyzers may still be called while
processing should be aborted.
For instance, if the function is called from an http-request rule on the
frontend, request analyzers on the backend side are still called. So we may
try to add a header to the request, while it was already reset.
This patch must be backported to 2.0 and 1.9.
In h1_rcv_buf(), don't wake the H1 tasklet to attempt to receive more data
if we got the whole request. It will lead to a recv and maybe to a subscribe
while it may not be needed.
If the connection is keep alive, the tasklet will be woken up later by
h1_detach(), so that we'll be able to get the next request, or an end of
connection.
In h1_recv(), don't subscribe if we managed to receive data. We may not have
to, if we received a complete request, and a new receive will be attempted
later, as the tasklet is woken up either by h1_rcv_buf() or by h1_detach().
Since 1.9 we support sending logs to various non-blocking outputs like
stdout/stderr or files, by using writev() which guarantees that it only
returns after having written everything or nothing. However the syscall
may be interrupted while doing so, and this is visible when writing to
a tty during debug sessions, as some logs occasionally appear interleaved
if an xterm or SSH connection is not very fast. Performance here is not a
critical concern, log correctness is. Let's simply take the logger's lock
around the writev() call to prevent multiple senders from stepping onto
each other's toes.
This may be backported to 2.0 and 1.9.
In sess_established(), don't immediately switch the backend stream_interface
to SI_ST_DIS if we only got a SHUTR. We may still have something to send,
e.g. if the request is a POST, and we should be switched to SI_ST_DIS later
when the shutw happens.
This should be backported to 2.0 and 1.9.
When the number of nodes is increased because the server weight is changed,
the nodes array must be realloc'd. But its new size is not correctly set:
only the total number of nodes is used to set the new size, while it must
also depend on the size of a node. It must be the total number of nodes
times the size of a node.
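In other words (a sketch; field names are assumptions):

    struct tree_occ *new_nodes;

    /* sketch: the new size is the node count times the node size */
    new_nodes = realloc(srv->lb_nodes, tot_nodes * sizeof(*new_nodes));
    if (new_nodes)
        srv->lb_nodes = new_nodes;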
This issue was reported on Github (#189).
This patch must be backported to all versions since the 1.6.
There is no standard case for HTTP header names because, as stated in the
RFC7230, they are case-insensitive. So applications must handle them in a
case-insensitive manner. But some bogus applications erroneously rely on the
case used by most browsers. This problem becomes critical with HTTP/2
because all header names must be exchanged in lowercase. And HAProxy uses the
same convention. All header names are sent in lowercase to clients and servers,
regardless of the HTTP version.
This design choice is linked to the HTX implementation. So, for previous
versions (2.0 and 1.9), a workaround is to disable the HTX mode to fall
back to the legacy HTTP mode.
Since the legacy HTTP mode was removed, some users reported interoperability
issues because their application was no longer able to handle HTTP/1 messages
received from HAProxy. So, we've decided to add a way to change the case of some
headers before sending them. It is now possible to define a "mapping" between a
lowercase header name and a version supported by the bogus application. To do
so, you must use the global directives "h1-case-adjust" and
"h1-case-adjust-file". Then options "h1-case-adjust-bogus-client" and
"h1-case-adjust-bogus-server" may be used in proxy sections to enable the
conversion. See the configuration manual for more info.
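For example (an illustrative configuration; see the manual for the exact
syntax):

    global
        h1-case-adjust content-length Content-Length

    backend bogus_app
        option h1-case-adjust-bogus-server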
Of course, our advice is to urgently upgrade these applications for
interoperability concerns and because they may be vulnerable to various types of
content smuggling attacks. But, if you are really forced to use an
unmaintained bogus application, you may use these directives, at your own
risk.
If it is relevant, this feature may be backported to 2.0.
There is one unprotected call to stop_proxy() from the manage_proxy()
task, so there is a single caller by definition, but there is also
another such call from the CLI's "shutdown frontend" parser. This
one does it under the proxy's lock but the first one doesn't use it.
Thus it is theoretically possible to corrupt the list of listeners in a
proxy by issuing "shutdown frontend" and SIGUSR1 exactly at the same
time. While it sounds particularly contrived or stupid, it could
possibly happen with automated tools that would send actions via
various channels. This could cause the process to loop forever or
to crash and thus stop faster than expected.
This might be backported as far as 1.8.
Dragan Dosen found that the listeners lock is not sufficient to protect
the listeners list when proxies are stopping because the listeners are
also unlinked from the protocol list, and under certain situations like
bombing with soft-stop signals or shutting down many frontends in parallel
from multiple CLI connections, it could be possible to provoke multiple
instances of delete_listener() to be called in parallel for different
listeners, thus corrupting the protocol lists.
Such operations are pretty rare, they are performed once per proxy upon
startup and once per proxy on shut down. Thus there is no point trying
to optimize anything and we can use a global lock to protect the protocol
lists during these manipulations.
This fix (or a variant) will have to be backported as far as 1.8.
When client-side or server-side cookies are parsed, HAProxy enters an
infinite loop if a Cookie/Set-Cookie header value starts with a delimiter
(a colon
or a semicolon). Depending on the operating system, the service may become
degraded, unresponsive, or may trigger haproxy's watchdog causing a service stop
or automatic restart.
To fix this bug, in the loop parsing the attributes, we must be sure to always
skip delimiters once the first attribute-value pair was parsed, empty or
not. The credit for the fix goes to Olivier.
CVE-2019-14241 was assigned to this bug. This patch fixes the Github issue #181.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
Empty error files may be used to disable the sending of any message for specific
error codes. A common use-case is to use the file "/dev/null". This way the
default error message is overridden and no message is returned to the client. It
was supported in the legacy HTTP mode, but not in HTX. Because of a bug, such
messages triggered an error.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
In http_reply_and_close() and http_server_error(), we must be sure to have an
allocated buffer (buf.size > 0) to consider it as a valid HTX message. For now,
there is no way to hit this bug. But a fix to support "empty" error messages in
HTX is pending. Such empty messages, after parsing, will be converted into
unallocated buffer (buf.size == 0).
This patch must be backported to 2.0 and 1.9. However, the patch will have
to be adapted.
Github user @jpulz reported a crash with tcp-checks in issue #184
where cs==NULL. If we enter the function with cs==NULL and check->result
!= CHK_RES_UNKNOWN, we'll go directly to out_end_tcpcheck and dereference
cs. We must validate there that cs is valid (and conn at the same time
since it would be NULL as well).
This fix must be backported as far as 1.8.
With the previous commit 03627245c ("BUG/MEDIUM: mux-h1: Trim excess server
data at the end of a transaction"), we try to avoid handling junk data coming
from a server as a response. But it only works for data already received.
Starting from
the moment a server sends an invalid response, it is safer to close the
connection too, because more data may come after and there is no good reason to
handle them.
So now, when a conn_stream is detached from a server connection, if there are
some unexpected input data, we simply trim them and close the connection
ASAP. We don't close it immediately only if there are still some outgoing data
to deliver to the server.
This patch must be backported to 2.0 and 1.9.
Now that we start by releasing any possibly existing connection,
the conditions simplify a little bit and some of the complex cases
can be removed. A few comments were also added for non-obvious cases.
When entering connect_server() we're not supposed to have a connection
already, except when retrying a failed connection, which is pretty rare.
Let's simplify the code by starting to unconditionally release any existing
connection. For now we don't go further, as this change alone will lead to
quite some simplification that'd rather be done as a separate cleanup.
Lua cosockets do not need to allocate the remote connection anymore.
However this was trickier than expected because some tests were made
on this remote connection's existence to detect establishment instead
of relying on the stream interface's state (which is how it's now done).
The flag SF_ADDR_SET was set a bit too early (before assigning the
address) so this was moved to the right place. It should not have had
any impact beyond confusing debugging.
The only remaining occurrence of the remote connection knowledge now
is for getsockname() which requires to access the connection to send
the syscall, and it's unlikely that we'll need to change this before
QUIC or so.
When forcing the outgoing address of a connection, till now we used to
allocate this outgoing connection and set the address into it, then set
SF_ADDR_SET. With connection reuse this causes a whole lot of issues and
difficulties in the code.
Thanks to the previous changes, it is now possible to store the target
address into the stream instead, and copy the address from the stream to
the connection when initializing the connection. assign_server_address()
does this and as a result SF_ADDR_SET now reflects the presence of the
target address in the stream, not in the connection. The http_proxy mode,
the peers and the master's CLI now use the same mechanism. For now the
existing connection code was not removed to limit the amount of tricky
changes, but the allocated connection is not used anymore.
This change also revealed a latent issue that we've been having around
option http_proxy : the address was set in the connection but neither the
SF_ADDR_SET nor the SF_ASSIGNED flags were set. It looks like the connection
could establish only due to the fact that it existed with a non-null
destination address.
The purpose will be to store the target address there and not to
allocate a connection just for this anymore. For now it's only placed
in the struct, a few fields were moved to plug some holes, and the
entry is freed on release (never allocated yet for now). This must
have no impact. Note that in order to fit, the store_count which
previously was an int was turned into a short, which is way more
than enough given that the hard-coded limit is 8.
Now that we have dynamically allocated addresses, there's no need to
clear an address before reusing it, just release it. Note that this
is not equivalent to saying that an address is never zero, as shown in
assign_server_address() where an address 0.0.0.0 can still be assigned
to a connection for the time it takes to modify it.
This commit places calls to sockaddr_alloc() at the places where an address
is needed, and makes sure that the allocation is properly tested. This does
not add too many error paths since connection allocations are already in the
vicinity and share the same error paths. For the two cases where a
clear_addr() was called, instead the address was not allocated.
This pool will be used to allocate storage for source and destination
addresses used in connections. Two functions sockaddr_{alloc,free}()
were added and will have to be used everywhere an address is needed.
These ones are safe for progressive replacement as they check that the
existing pointer is set before replacing it. The pool is not yet used
during allocation nor freeing. Also they operate on pointers to pointers
so they will perform checks and replace values. The free one nulls the
pointer.
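Based on this description, the pair may look like this (a sketch; the pool
name is an assumption):

    /* sketch: allocate the address only if the pointer is not already set */
    struct sockaddr_storage *sockaddr_alloc(struct sockaddr_storage **sap)
    {
        if (!*sap)
            *sap = pool_alloc(pool_head_sockaddr);
        return *sap;
    }

    /* sketch: release the address and null the pointer */
    void sockaddr_free(struct sockaddr_storage **sap)
    {
        pool_free(pool_head_sockaddr, *sap);
        *sap = NULL;
    }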