Commit 4528611 ("MEDIUM: mworker: store the leaving state of a process")
introduced a bug in the mworker_env_to_proc_list() function.
This is very unlikely to occur since the PID should always be assigned,
but it could happen if the environment variable is corrupted.
No backport needed.
These functions may fail to emit an RST or an empty DATA frame because
the mux is full or busy. Then they subscribe the h2s and try again.
However, when doing so they will already have marked the error state on
the stream, so they will never pass through this sequence again: the
failed frame will not be attempted again and the close will not be
performed; instead they will return a success.
It is important to only leave when the stream is already closed, but
to go through the whole sequence otherwise.
This patch should ideally be backported to 1.9 though it's possible that
the lack of the WANT_SHUT* flags makes this difficult or dangerous. The
severity is low enough to avoid this in case of trouble.
This patch fixes an issue introduced by commit 0bad840b ("MINOR: log:
Extract some code to send syslog messages") which led to wrong log format
variable initializations, at least for the "short" and "raw" formats.
That commit overlooked the cases where, even if they are passed to
__do_send_log(), the syslog tag and syslog pid strings must not be used
to format the log message with "short" and "raw". This is fixed by
initializing the "tag_max" and "pid_max" variables (the lengths of the
tag and pid strings) to 0, then updating them to the lengths of the tag
and pid strings passed to __do_send_log() depending on the log format,
and in every case using these lengths for the iovec variable used to
send() the log.
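A minimal sketch of the intended logic (variable names are from the
description above, but the iovec indexes and surrounding code shape are
only illustrative, not the exact __do_send_log() code):

  size_t tag_max = 0, pid_max = 0; /* "short" and "raw" must not use them */

  if (format != LOG_FORMAT_SHORT && format != LOG_FORMAT_RAW) {
          tag_max = strlen(tag_str);
          pid_max = strlen(pid_str);
  }
  /* the same lengths are used in every case for the send() iovec */
  iovec[2].iov_base = tag_str;  iovec[2].iov_len = tag_max;
  iovec[3].iov_base = pid_str;  iovec[3].iov_len = pid_max;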
This bug is specific to 2.0.
It was only set and not consumed after the previous change. The reason
is that the task's context always contains the relevant information,
so there is no need for a second pointer.
This one used to rely on the combined return statuses of the shutr/w
functions but now that we have the H2_SF_WANT_SHUT{R,W} flags we don't
need this anymore if we properly remove these flags after their operations
succeed. This is what this patch does.
Currently when a shutr/shutw fails due to lack of buffer space, we abuse
the wait_event's handle pointer to place up to two bits there in addition
to the original pointer. This pointer is not used for anything else, and
overall the intent becomes clearer with h2s flags than with these two
alien bits in the pointer, so let's use clean flags now.
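As an illustration (a sketch only, not the actual mux code), the
difference looks like this:

  /* before: a state bit stuffed into the low bits of the pointer */
  h2s->wait_event.handle = (void *)((uintptr_t)h2s | 0x1);

  /* after: dedicated stream flags, the pointer stays a clean pointer */
  h2s->flags |= H2_SF_WANT_SHUTR;   /* or H2_SF_WANT_SHUTW */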
Lots of places were using LIST_ISEMPTY() to detect if a stream belongs
to one of the send lists or to detect if a connection was already
waiting for a buffer or attached to an idle list. Since these ones are
not list heads but list elements, let's use LIST_ADDED() instead.
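The distinction matters because the two macros answer different
questions (sketch based on the list macros' semantics):

  /* LIST_ISEMPTY(lh): for a list HEAD, true when the list is empty */
  #define LIST_ISEMPTY(lh)  ((lh)->n == (lh))

  /* LIST_ADDED(el): for a list ELEMENT, true when it is linked somewhere */
  #define LIST_ADDED(el)    ((el)->n != (el))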
7b5fd1e ("MEDIUM: connections: Move some fields from struct connection
to ssl_sock_ctx.") introduced a bug in the heartbleed mitigation code.
Indeed the code used conn->ctx instead of conn->xprt_ctx for the ssl
context, resulting in a null dereference.
vars_check_arg previously leaked the string containing the variable
name:
Consider this config:
frontend fe1
mode http
bind :8080
http-request set-header X %[var(txn.host)]
Starting HAProxy and immediately stopping it by sending a SIGINT makes
Valgrind report this leak:
==7795== 9 bytes in 1 blocks are definitely lost in loss record 15 of 71
==7795== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7795== by 0x4AA2AD: my_strndup (standard.c:2227)
==7795== by 0x51FCC5: make_arg_list (arg.c:146)
==7795== by 0x4CF095: sample_parse_expr (sample.c:897)
==7795== by 0x4BA7D7: add_sample_to_logformat_list (log.c:495)
==7795== by 0x4BBB62: parse_logformat_string (log.c:688)
==7795== by 0x4E70A9: parse_http_req_cond (http_rules.c:239)
==7795== by 0x41CD7B: cfg_parse_listen (cfgparse-listen.c:1466)
==7795== by 0x480383: readcfgfile (cfgparse.c:2089)
==7795== by 0x47A081: init (haproxy.c:1581)
==7795== by 0x4049F2: main (haproxy.c:2591)
This leak can be detected even in HAProxy 1.6; this patch should thus
be backported to all supported branches.
[Cf: This fix was reverted because the chunk's area was unconditionally
released, making haproxy crash when SPOE was enabled. Now the chunk is
released by calling chunk_destroy(). This function checks the chunk's
size to decide whether or not to release it. It is the responsibility of
callers to set the chunk's size or not.]
When SPOE variables are registered during HAProxy startup, the argument used to
call the function vars_check_arg() uses the trash area. To be sure it is never
released by the callee function, the size of the internal chunk (arg.data.str)
is set to 0. It is important to do so because, to fix a memory leak, this buffer
must be released by the function vars_check_arg().
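A sketch of the idea (the surrounding registration code is simplified
and the length variable is illustrative):

  struct arg arg;

  arg.type = ARGT_STR;
  arg.data.str.area = trash.area;   /* borrowed from the trash chunk */
  arg.data.str.data = len;
  arg.data.str.size = 0;            /* size==0: chunk_destroy() won't free */

  /* vars_check_arg() may call chunk_destroy() on arg.data.str; with
   * size set to 0 the area is left untouched. */
  vars_check_arg(&arg, NULL);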
This patch must be backported to 1.9.
smp_prefetch_htx() is used when trying to access the contents of an HTTP
buffer from the TCP rulesets. The method was not properly set in this
case, causing the sample fetch methods relying on the method to randomly
fail.
Thanks to Tim Düsterhus for reporting this issue (#97).
This fix must be backported to 1.9.
cfg_parse_peers previously leaked the contents of the `kws` string,
as it was unconditionally filled using bind_dump_kws, but only used
(and freed) within the error case.
Move the dumping into the error case to:
1. Ensure that the registered keywords are actually printed at least once.
2. Ensure that the contents of `kws` are not leaked.
This move also allows narrowing the scope of `kws`, so this is done as well.
This bug was found using valgrind:
==28217== 590 bytes in 1 blocks are definitely lost in loss record 51 of 71
==28217== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==28217== by 0x4AD4C7: indent_msg (standard.c:3676)
==28217== by 0x47E962: cfg_parse_peers (cfgparse.c:700)
==28217== by 0x480273: readcfgfile (cfgparse.c:2147)
==28217== by 0x479D51: init (haproxy.c:1585)
==28217== by 0x404A02: main (haproxy.c:2585)
with this super simple configuration:
peers peers
bind :8081
server A
This bug exists since the introduction of cfg_parse_peers in commit
355b2033ec0c89660db179b23d6f77b678d8c26d (which was introduced for HAProxy
2.0, but marked as backportable). It should be backported to all branches
containing that commit.
This reverts commit 6ea00195c479d96c5aa651adcca3bc3637e3eceb.
As found by Christopher, this fix is not correct due to the way args
are built at various places. For example some config or runtime parsers
will place a substring pointer there, and calling free() on it will
immediately crash the program. A quick audit of the code shows that
there are not that many users, but the way it's done requires properly
setting the string up as a regular chunk (size=0 if free not desired,
then call chunk_destroy() at release time), and given that the size is
currently set to len+1 in all parsers, a deeper audit needs to be done
to figure out the impact of no longer setting it.
Thus for now better leave this harmless leak which impacts only the
config parsing time.
This fix must be backported to all branches containing the fix above.
In this long thread, Maciej Zdeb reported that the H2 mux was still
going through endless loops from time to time:
https://www.mail-archive.com/haproxy@formilux.org/msg33709.html
What happens is the following:
- in h2s_frt_make_resp_data() we can set H2_SF_BLK_SFCTL and remove the
stream from the send_list
- then in h2_shutr() and h2_shutw(), we check if the list is empty before
subscribing the element, which is true after the case above
- then in h2c_update_all_ws() we still have H2_SF_BLK_SFCTL with the item
in the send_list, thus LIST_ADDQ() adds it a second time.
This patch adds a check of list emptiness before performing the LIST_ADDQ()
when the flow control window opens. Maciej reported that it reliably fixed
the problem for him.
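The fix is essentially a guard of this shape (sketch):

  /* only queue the stream if it is not already in a send list */
  if (LIST_ISEMPTY(&h2s->list))
          LIST_ADDQ(&h2c->send_list, &h2s->list);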
As later discussed with Olivier, this fixes the consequence of the issue
rather than its cause. The root cause is that a stream should never be in
the send_list with a blocking flag set and the various places that can lead
to this situation must be revisited. Thus another fix is expected soon for
this issue, which will require some observation. In the meantime this
one is easy enough to validate and to backport.
Many thanks to Maciej for testing several versions of the patch, each
time providing detailed traces which allowed us to nail the problem down.
This patch must be backported to 1.9.
This low-level asm implementation of a double CAS was implemented only
for certain architectures (x86_64, armv7, armv8). When threads are not
used, these functions are not defined, but since they were called
directly from a few locations, they caused build issues on certain platforms
with threads disabled. This was addressed in commit f4436e1 ("BUILD:
threads: Add __ha_cas_dw fallback for single threaded builds") by
making it fall back to HA_ATOMIC_CAS() when threads are not defined,
but this actually made the situation worse by breaking other cases.
This patch fixes this by creating a high-level macro HA_ATOMIC_DWCAS()
which is similar to HA_ATOMIC_CAS() except that it is intended to work
on a double word; it relies on the asm implementations when threads are
in use, and falls back to its own open-coded implementation when threads
are not used. The 3 call places relying on __ha_cas_dw() were updated to
use HA_ATOMIC_DWCAS() instead.
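A sketch of the resulting macro shape (simplified; the real version
goes through the proper types and casts):

  #if defined(USE_THREAD)
  #define HA_ATOMIC_DWCAS(val, o, n)  __ha_cas_dw(val, o, n)
  #else
  /* open-coded double-word compare-and-swap, safe without threads */
  #define HA_ATOMIC_DWCAS(val, o, n) ({                          \
          long *_v = (long *)(val), *_o = (long *)(o);           \
          long *_n = (long *)(n);                                \
          int _ret = (_v[0] == _o[0] && _v[1] == _o[1]);         \
          if (_ret) { _v[0] = _n[0]; _v[1] = _n[1]; }            \
          else      { _o[0] = _v[0]; _o[1] = _v[1]; }            \
          _ret;                                                  \
  })
  #endif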
This change was tested on i586, x86_64, armv7, armv8 with and without
threads with gcc 4.7, armv8 with gcc 5.4 with and without threads, as
well as i586 with gcc-3.4 without threads. It will need to be backported
to 1.9 along with the fix above to fix build on armv7 with threads
disabled.
The following macros are now defined for openssl < 1.1 so that we
can remove the code performing direct access to the structures:
BIO_get_data(), BIO_set_data(), BIO_set_init(), BIO_meth_free(),
BIO_meth_new(), BIO_meth_set_gets(), BIO_meth_set_puts(),
BIO_meth_set_read(), BIO_meth_set_write(), BIO_meth_set_create(),
BIO_meth_set_ctrl(), BIO_meth_set_destroy()
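For instance, on older versions the accessors map to direct field
accesses (sketch of the compat definitions):

  #if (OPENSSL_VERSION_NUMBER < 0x1010000fL)
  #define BIO_get_data(b)     ((b)->ptr)
  #define BIO_set_data(b, v)  do { (b)->ptr  = (v); } while (0)
  #define BIO_set_init(b, v)  do { (b)->init = (v); } while (0)
  /* BIO_meth_new()/BIO_meth_set_*() are emulated similarly on a
   * locally allocated BIO_METHOD structure */
  #endif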
vars_check_arg previously leaked the string containing the variable
name:
Consider this config:
frontend fe1
mode http
bind :8080
http-request set-header X %[var(txn.host)]
Starting HAProxy and immediately stopping it by sending a SIGINT makes
Valgrind report this leak:
==7795== 9 bytes in 1 blocks are definitely lost in loss record 15 of 71
==7795== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7795== by 0x4AA2AD: my_strndup (standard.c:2227)
==7795== by 0x51FCC5: make_arg_list (arg.c:146)
==7795== by 0x4CF095: sample_parse_expr (sample.c:897)
==7795== by 0x4BA7D7: add_sample_to_logformat_list (log.c:495)
==7795== by 0x4BBB62: parse_logformat_string (log.c:688)
==7795== by 0x4E70A9: parse_http_req_cond (http_rules.c:239)
==7795== by 0x41CD7B: cfg_parse_listen (cfgparse-listen.c:1466)
==7795== by 0x480383: readcfgfile (cfgparse.c:2089)
==7795== by 0x47A081: init (haproxy.c:1581)
==7795== by 0x4049F2: main (haproxy.c:2591)
This leak can be detected even in HAProxy 1.6; this patch should thus
be backported to all supported branches.
Add a new retry-on keyword, "all-retryable-errors", that activates retry
for all errors that are considered retryable.
This currently activates retry for "conn-failure", "empty-response",
"junk-response", "response-timeout", "0rtt-rejected", "500", "502", "503"
and "504".
Add a new action for http-request, disable-l7-retry, that can be used to
disable any attempt to retry a request (see retry-on) if it fails for any
reason other than a connection failure.
This is useful for example to make sure POST requests aren't retried.
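For example, to avoid retrying non-idempotent requests (sketch):

  http-request disable-l7-retry if METH_POST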
In a few cases, we'd just check if the backend is configured to do retries,
and not if it's still allowed on the stream_interface.
The SI_FL_L7_RETRY flag could have been removed because we failed to allocate
a buffer, or because the request was too big to fit in a single buffer,
so make sure it's there before attempting a retry.
When we have to stop sending due to the stream flow control, don't check
if send_wait is NULL to know if we're in the send_list, because at this
point it'll always be NULL, while we're probably in the list.
Use LIST_ISEMPTY(&h2s->list) instead.
Failing to do so means we might be added to the send_list when flow
control allows us to emit again, while we're already in it.
While I'm here, replace LIST_DEL + LIST_INIT by LIST_DEL_INIT.
This should be backported to 1.9.
In the legacy HTTP, when the message headers are parsed in
http_msg_analyzer(), we must use the beginning of input and not the head
of the buffer. Most of the time they are the same pointer because there
are no outgoing data when a new message is received. But when a 1xx
informational response is parsed, it is forwarded and the parsing
restarts immediately. In this case, we have outgoing data when the next
response is parsed.
This patch must be backported to 1.9.
A backend stream-interface attached to a reused connection remains in the state
SI_ST_CONN until some data are sent to validate the connection. But when the
url_param algorithm is used to balance connections, no data are sent while the
connection is not established, so it is a chicken-and-egg situation.
To solve the problem, if no error is detected and when the request channel is
waiting for the connect(), we mark the read side as attached on the response
channel as soon as possible and we wake the request channel up once. This
happens in 2 places. The first one is right after the connect(), when the
stream-interface is still in state SI_ST_CON, in the function
sess_update_st_con_tcp(). The second one is when an applet is used instead of a
real connection to a server, in the function sess_prepare_conn_req(). In fact,
it is done when the backend stream-interface is set to the state SI_ST_EST.
This patch must be backported to 1.9.
It's always a pain to have to stuff lots of #ifdef USE_OPENSSL around
ssl headers; it even results in some of them appearing in a random order
and multiple times just to benefit from an existing ifdef block. Let's
make these headers safe for inclusion when USE_OPENSSL is not defined,
they now perform the test themselves and do nothing if USE_OPENSSL is
not defined. This allows to remove no less than 8 such ifdef blocks
and make include blocks more readable.
Since we're providing a compatibility layer for multiple OpenSSL
implementations and their derivatives, it is important that no C file
directly includes openssl headers but only passes via openssl-compat
instead. As a bonus this also gets rid of redundant complex rules for
inclusion of certain files (engines etc).
Some defines like OPENSSL_VERSION or X509_getm_notBefore() have nothing
to do in ssl_sock and must move to openssl-compat.h so that they are
consistently shared by the whole code. A warning in the code was added
against wild additions of macros there.
Enabling aes_gcm_dec in the last commit (MINOR: ssl: enable aes_gcm_dec
on LibreSSL) uncovered a wrong condition on the define of the
EVP_CTRL_AEAD_SET_IVLEN macro which I forgot to add when making the
commit, resulting in breaking the LibreSSL build again. In case LibreSSL
later defines this macro, the test will have to change to a version
range instead.
This one requires OpenSSL 1.0.1 and above, and libressl was forked from
1.0.1g and is compatible (build-tested). No need to exclude it anymore
from using this converter.
They were all checked to comply with the advertised openssl version. Now
that libressl doesn't pretend to be a more recent openssl anymore, we
can simply rely on the regular openssl version tests without having to
deal with exceptions for libressl.
LibreSSL causes lots of build issues by pretending to be OpenSSL 2.0.0,
and it requires lots of care for each #if added to cover any specific
OpenSSL features.
This commit addresses the problem by making LibreSSL only advertise the
version it forked from (1.0.1g) and by starting to use tests based on
its real version to enable features instead of working by exclusion.
Most tests on OPENSSL_VERSION_NUMBER have become complex and break all
the time because this number is fake for some derivatives like LibreSSL.
This patch creates a new macro, HA_OPENSSL_VERSION_NUMBER, which will
carry the real openssl version defining the compatibility level, and
this version will be adjusted depending on the variants.
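The idea looks like this (sketch; the exact mapped version for each
variant may differ):

  #if defined(LIBRESSL_VERSION_NUMBER)
  /* LibreSSL forked from OpenSSL 1.0.1g */
  #define HA_OPENSSL_VERSION_NUMBER 0x1000107fL
  #else
  #define HA_OPENSSL_VERSION_NUMBER OPENSSL_VERSION_NUMBER
  #endif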
As with every single OpenSSL fix, LibreSSL build broke again, this time
after commit 56996dabe ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on
reload"). A definitive solution will have to be found quickly. For now,
let's exclude libressl from the version test.
This patch must be backported to 1.9 since the fix above was already
backported there.
In h2_detach(), if we still have a send_wait pointer because we woke the
tasklet up but it hasn't run yet, explicitly set send_wait to NULL after
we removed the tasklet from the task list.
Failure to do so may lead to crashes if the h2s isn't immediately
destroyed, because it would be considered that there was still something
to send.
This should be backported to 1.9.
In H1 and H2, one and only one trailers block is emitted during the HTTP
parsing. So it is useless to try to append this block to the previous
one, as is done for DATA blocks.
This patch may be backported to 1.9.
When htx_xfer_blks() is called to move blocks from an HTX message to
another one, most blocks must be transferred atomically. But some may be
split if there is not enough space to move the whole block. This was
true for DATA and TLR blocks. But it is a bad idea to split trailers.
During HTTP parsing, only one TLR block is emitted. Keeping the block
untouched simplifies the processing of trailers.
This patch must be backported to 1.9 because some fixes may depend on it.
When the maximum free space available for data in the HTX message is compared to
the number of bytes to transfer, we must take into account the amount of data
already transferred. Otherwise we may move more data than expected.
This patch must be backported to 1.9.
Unlike other H1 parsing functions, the 3rd parameter of the function
h1_measure_trailers() is the maximum number of bytes to read. For other
functions, it is the relative offset where to stop the parsing.
This patch must be backported to 1.9.
When a sample fetch is encoded, we use its context to set info about the
fragmentation. But if the sample is not found, the function
sample_process() returns NULL. So we must be sure the sample exists
before setting its context.
This patch must be backported to 1.9 and 1.8.
A typo was introduced in the following commit: 927b88ba0 ("BUG/MAJOR:
mux-h2: fix race condition between close on both ends"), causing the
test on h2s->cs to never be performed and h2s->cs to be dereferenced
without being tested. This also confirms that this condition does not
happen on this side, but better fix it right now to be safe.
This must be backported to 1.9.
This patch implements a new global parameter for the master-worker mode.
When the mworker-max-reloads value is set, a worker receives a SIGTERM
if its number of reloads is greater than this value.
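For example (the value is only illustrative):

  global
      mworker-max-reloads 50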
Since the previous commit it's not needed anymore to test a task pointer
before calling task_destroy(), so let's just remove these tests from
the various callers before they become confusing. The function's
arguments were also documented. The same should probably be done
with tasklet_free() which involves a test in roughly half of the
call places.
In commit 1b8e68e ("MEDIUM: stick-table: Stop handling stick-tables as
proxies."), the ->table member of proxy struct was replaced by a pointer
that is not always checked and in some situations can cause a segfault,
e.g. during reload or while using "show table" on the CLI socket.
No backport is needed.
From OpenSSL 1.1.1, the default behaviour is to maintain open FDs to any
random devices that get used by the random number library. As a result,
those FDs leak when the master re-execs on reload; since those FDs are
not marked FD_CLOEXEC or O_CLOEXEC, they also get inherited by children.
Eventually both master and children run out of FDs.
OpenSSL 1.1.1 introduces a new function to control whether the random
devices are kept open. When clearing the keep-open flag, it also closes
any currently open FDs, so it can be used to clean-up open FDs too.
Therefore, a call to this function is made in mworker_reload prior to
re-exec.
The call is guarded by whether SSL is in use, because it will cause
initialisation of the OpenSSL random number library if that has not
already been done.
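The call is roughly the following (sketch; the guard condition is
simplified):

  #if defined(USE_OPENSSL) && (OPENSSL_VERSION_NUMBER >= 0x1010100fL)
          if (global.ssl_used_frontend || global.ssl_used_backend)
                  /* closes currently open random device FDs and keeps
                   * them closed across the re-exec */
                  RAND_keep_random_devices_open(0);
  #endif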
This should be backported to 1.9 and 1.8.
If when receiving an H2 response we fail to add an EOM block after too
large a trailers block, we must not leave the trailers block alone as it
violates the internal assumptions by not being followed by an EOM, even
when an error is reported. We must then make sure the error will safely
be reported to upper layers and that no attempt will be made to forward
partial blocks.
This must be backported to 1.9.
In message https://www.mail-archive.com/haproxy@formilux.org/msg33541.html
Patrick Hemmer reported an interesting bug affecting H2 and trailers.
The problem is that in order to close the stream we have to see the EOM
block, but nothing guarantees it will atomically be delivered with the
trailers block(s). So the code currently waits for it by returning zero
when it was not found, resulting in the caller (h2_snd_buf()) to loop
forever calling it again.
The current internal connection/connstream API doesn't allow a send
actor to notify its caller that it cannot process the data until it
gets more, so even returning zero will only lead to calls in loops
without any guarantee that any progress will be made.
Some late amendments to HTX already guaranteed the atomicity of the
trailers block during snd_buf(), which is currently ensured by the
fact that producers create exactly one such trailers block for all
trailers. So in practice we can only loop between trailers and EOM.
This patch changes the behaviour by making h2s_htx_make_trailers()
become atomic by not consuming the EOM block. This way either it finds
the end of trailers marker (empty line) or it fails. Once it sends the
trailers block, ES is set so the stream turns HLOC or CLOSED. Thanks
to previous patch "MEDIUM: mux-h2: discard contents that are to be sent
after a shutdown" is is now safe to interrupt outgoing data processing,
and the late EOM block will silently be discarded when the caller
finally sends it.
This is a bit tricky but should remain solid by design, and seems like
the only option we have that is compatible with 1.9, where it must be
backported along with the aforementioned patch.
In h2_snd_buf() we discard any possible buffer contents requested to be
sent after a close or an error. But in practice we can extend this to
any case where the stream is locally half-closed since it means we will
never be able to send these data anymore.
For now it must not change anything, but it will be used by subsequent
patches to discard a lone HTX EOM block arriving after the trailers
block.