The ARGT_PTR argument type may now be used to keep a reference to opaque data in
the argument array used by sample fetches and converters. It is a generic way to
point to data. It could presumably also be used for some other argument types,
such as proxy, server, map or stick-table.
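As a purely illustrative sketch (this is not HAProxy's actual struct arg
definition, all names below are hypothetical), keeping an opaque reference in
an argument slot could look like this:

    /* Hypothetical, simplified argument slot: the opaque pointer is stored
     * as-is and never copied, freed or interpreted by the argument code. */
    enum my_arg_type { MY_ARGT_SINT, MY_ARGT_PTR };

    struct my_arg {
        enum my_arg_type type;
        union {
            long long sint;
            void *ptr;          /* reference to opaque data (e.g. a map) */
        } data;
    };

    static void my_arg_set_ptr(struct my_arg *arg, void *obj)
    {
        arg->type = MY_ARGT_PTR;
        arg->data.ptr = obj;    /* the caller keeps ownership of the object */
    }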
In the sample_load_map() function, the global mode is now tested to make sure
we are still in starting mode. If not, an error is returned.
At first glance, this patch may seem useless because maps are loaded during the
configuration parsing. But in fact, it is possible to load a map from Lua,
using the Map:new() method, and nothing forbids calling this method at runtime,
during a script execution. It must never be done because it may perform a
filesystem access for unknown maps or an allocation for known ones. So at
runtime, it means a blocking call or a memory leak. Note that it is still
possible to load a map from Lua, but only in the global part of a script, which
is executed during the configuration parsing.
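A minimal sketch of such a check, assuming HAProxy's global.mode and the
MODE_STARTING flag (the error message and surrounding code are illustrative):

    /* Sketch only: refuse to load a map outside of the startup phase. */
    if (!(global.mode & MODE_STARTING)) {
        memprintf(err, "map: cannot load a map at runtime");
        return 0;
    }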
This patch must be backported in all stable versions.
This bug affects all versions of HAProxy since the OCSP data is not freed in
the deinit(), but leaking on exit() is not really an issue. However, when doing
a dynamic update of certificates over the CLI, those data are not freed when
the SSL_CTX is freed.
Three leaks are happening: the first is the ocsp_arg structure, which holds the
pointers in the case of a multi-certificate bundle; the second is the ocsp
struct itself; and the third is the struct buffer contained in the ocsp struct.
The problem lies with SSL_CTX_set_tlsext_status_arg() which does not
provide a way to free the argument upon an SSL_CTX_free().
This fix uses ex index functions instead of registering a tlsext_status_arg().
This is really convenient because it allows registering a free callback which
will free the ex index content upon SSL_CTX_free().
A refcount was also added to the ocsp_response structure since it is
stored in a tree and can be reused in another SSL_CTX.
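For illustration, the ex-data pattern looks roughly like this (simplified and
with hypothetical names; the real callback presumably also deals with the
refcounted response):

    #include <openssl/ssl.h>
    #include <stdlib.h>

    /* Free callback invoked by OpenSSL when the owning SSL_CTX is freed. */
    static void ssl_ocsp_ex_free(void *parent, void *ptr, CRYPTO_EX_DATA *ad,
                                 int idx, long argl, void *argp)
    {
        free(ptr);  /* in the real code: release the ocsp_arg / drop refcounts */
    }

    static int ssl_ocsp_ex_idx = -1;

    static int attach_ocsp_arg(SSL_CTX *ctx, void *ocsp_arg)
    {
        if (ssl_ocsp_ex_idx < 0)
            ssl_ocsp_ex_idx = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL,
                                                       ssl_ocsp_ex_free);
        if (ssl_ocsp_ex_idx < 0)
            return -1;
        /* the content is now released automatically on SSL_CTX_free(ctx) */
        return SSL_CTX_set_ex_data(ctx, ssl_ocsp_ex_idx, ocsp_arg) ? 0 : -1;
    }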
Should fix part of the issue #746.
This must be backported in 2.2 and 2.1.
Fix a memory leak when loading an OCSP file that was already loaded elsewhere
in the configuration.
Indeed, if the OCSP file already exists, a useless chunk_dup() will be done
during the load.
To fix it, we revert "ocsp" to "iocsp" like it was done previously.
This was introduced by commit 246c024 ("MINOR: ssl: load the ocsp
in/from the ckch").
Should fix part of the issue #746.
It must be backported in 2.1 and 2.2.
From https://www.python.org/dev/peps/pep-0353/
"A new type Py_ssize_t is introduced, which has the same size as the
compiler's size_t type, but is signed. It will be a typedef for ssize_t
where available."
For integer types, the 'z' length modifier causes printf to expect a
size_t-sized integer argument.
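For example, with the 'z' length modifier (shown here with ssize_t, which
PEP 353 says Py_ssize_t is a typedef for where available):

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        ssize_t n = -42;
        printf("%zd\n", n);  /* 'z' makes printf expect a size_t-sized integer */
        return 0;
    }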
This should fix github issue #702
This should be backported to >= v2.2
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
A regression was introduced by 13a9232ebc when I added support for the
Additional section of the SRV responses.
Basically, when a server is managed through the Additional section of SRV
records and it gets disabled (because its associated Additional record has
disappeared), it never leaves its MAINT state and so never comes back into
production.
This patch updates the "snr_update_srv_status()" function to clear the MAINT
status when the server now has an IP address, and also ensures this function is
called when parsing Additional records (and associating them with new servers).
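A hedged sketch of the first part of that change (the admin-flag field and
helper names are recalled from memory and may not match the patch exactly):

    /* Sketch: once the server has an IP again, let it leave resolution MAINT. */
    if (!has_no_ip && (s->next_admin & SRV_ADMF_RMAINT))
        srv_clr_admin_flag(s, SRV_ADMF_RMAINT);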
This can cause a severe outage for people using HAProxy + Consul (or any other
service registry) through DNS service discovery.
This should fix issue #793.
This should be backported to 2.2.
The H1 multiplexer is able to perform synchronous sends. When a large body is
transferred, if nothing is received and if no error or shutdown occurs, it is
possible not to go down to the H1 connection level to do I/O for a long time.
When this happens, we must still take care to refresh the H1 connection
timeout. Otherwise it is possible to hit the connection timeout during the
transfer while it should not expire.
This bug exists because only h1_process() refreshes the H1 connection timeout.
To fix the bug, h1_snd_buf() must also refresh this timeout. To make things
more readable, a dedicated function has been introduced and is called to
refresh the timeout.
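A rough sketch of such a helper, assuming the H1 connection's task and
configured timeout (field names are approximate):

    /* Sketch: re-arm the H1 connection timeout, callable from the send path. */
    static void h1_refresh_timeout(struct h1c *h1c)
    {
        if (h1c->task && tick_isset(h1c->timeout)) {
            h1c->task->expire = tick_add(now_ms, h1c->timeout);
            task_queue(h1c->task);
        }
    }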
This bug exists on all HTX versions. But it is harder to hit on 2.1 and below
because when an H1 mux is initialized, we actively try to read data instead of
subscribing for receiving. So there is at least one call to h1_process().
This patch should fix the issue #790. It must be backported as far as 2.0.
The SSL_INC and SSL_LIB variables were not initialized in the Makefile, so
they could be accidentally inherited from the environment. We require that any
Makefile variable be explicitly set on the command line, so they must be
initialized.
Note that the Travis scripts used to rely only on these variables being
exported, so they were adjusted as well.
It's cumbersome to copy-paste a commit ID into another window after having
typed "git cherry-pick -sx", so let's have the suggested output format of
git-show prepare this line just before the subject line; this way it remains at
a stable position on the terminal when searching for "/^commit". One just has
to copy-paste the line into another terminal for the commit to be properly
picked.
We've never used the output of the rightmost branch with this tool,
and it systematically causes two identical outputs making the job
harder during backport sessions. Let's simply remove the right part
when it's identical to the left one. This also adds a few line feeds
to make the output more readable.
Released version 2.3-dev2 with the following main changes :
- DOC: ssl: req_ssl_sni needs implicit TLS
- BUG/MEDIUM: arg: empty args list must be dropped
- BUG/MEDIUM: resolve: fix init resolving for ring and peers section.
- BUG/MAJOR: tasks: don't requeue global tasks into the local queue
- MINOR: tasks/debug: make the thread affinity BUG_ON check a bit stricter
- MINOR: tasks/debug: add a few BUG_ON() to detect use of wrong timer queue
- MINOR: tasks/debug: add a BUG_ON() check to detect requeued task on free
- BUG/MAJOR: dns: Make the do-resolve action thread-safe
- BUG/MEDIUM: dns: Release answer items when a DNS resolution is freed
- MEDIUM: htx: Add a flag on a HTX message when no more data are expected
- BUG/MEDIUM: stream-int: Don't set MSG_MORE flag if no more data are expected
- BUG/MEDIUM: http-ana: Only set CF_EXPECT_MORE flag on data filtering
- CLEANUP: dns: remove 45 "return" statements from dns_validate_dns_response()
- BUG/MINOR: htx: add two missing HTX_FL_EOI and remove an unexpected one
- BUG/MINOR: mux-fcgi: Don't url-decode the QUERY_STRING parameter anymore
- BUILD: tools: fix build with static only toolchains
- DOC: Use gender neutral language
- BUG/MINOR: debug: Don't dump the lua stack if it is not initialized
- BUG/MAJOR: dns: fix null pointer dereference in snr_update_srv_status
- BUG/MAJOR: dns: don't treat Authority records as an error
- CI : travis-ci : prepare for using stock OpenSSL
- CI: travis-ci : switch to stock openssl when openssl-1.1.1 is used
- MEDIUM: lua: Add support for the Lua 5.4
- BUG/MEDIUM: dns: Don't yield in do-resolve action on a final evaluation
- BUG/MINOR: lua: Abort execution of actions that yield on a final evaluation
- MINOR: tcp-rules: Return an internal error if an action yields on a final eval
- BUG/MINOR: tcp-rules: Preserve the right filter analyser on content eval abort
- BUG/MINOR: tcp-rules: Set the inspect-delay when a tcp-response action yields
- MEDIUM: tcp-rules: Use a dedicated expiration date for tcp ruleset
- MEDIUM: lua: Set the analyse expiration date with smaller wake_time only
- BUG/MEDIUM: connection: Be sure to always install a mux for sync connect
- MINOR: connection: Preinstall the mux for non-ssl connect
- MINOR: stream-int: Be sure to have a mux to do sends and receives
- BUG/MINOR: lua: Fix a possible null pointer deref on lua ctx
- SCRIPTS: announce-release: add the link to the wiki in the announce messages
- CI: travis-ci: use better name for Coverity scan job
- CI: travis-ci: use proper linking flags for SLZ build
- BUG/MEDIUM: backend: always attach the transport before installing the mux
- BUG/MEDIUM: tcp-checks: always attach the transport before installing the mux
- MINOR: connection: avoid a useless recvfrom() on outgoing connections
- MINOR: mux-h1: do not even try to receive if the connection is not fully set up
- MINOR: mux-h1: do not try to receive on backend before sending a request
- CLEANUP: assorted typo fixes in the code and comments
- BUG/MEDIUM: ssl: check OCSP calloc in ssl_sock_load_ocsp()
Check the return value of the calloc() in ssl_sock_load_ocsp(); a failed
allocation could otherwise lead to a NULL dereference.
This was introduced by commit be2774d ("MEDIUM: ssl: Added support for
Multi-Cert OCSP Stapling").
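The shape of the fix is simply the following (sketch; the error handling
details are illustrative):

    ocsp = calloc(1, sizeof(*ocsp));
    if (!ocsp) {
        memprintf(err, "out of memory");
        goto out;
    }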
Could be backported as far as 1.7.
There's no point trying to perform a recv() on a back connection if we have a
stream before having sent a request, as it's expected to fail.
It's likely that this may avoid some spurious subscribe() calls in some
keep-alive cases (the close case was already addressed at the connection
level by "MINOR: connection: avoid a useless recvfrom() on outgoing
connections").
When a connect() doesn't immediately succeed (i.e. most of the time),
fd_cant_send() is called to enable polling. But given that we don't
mark that we cannot receive either, we end up performing a failed
recvfrom() immediately when the connect() is finally confirmed, as
indicated in issue #253.
This patch simply adds fd_cant_recv() as well so that we're only
notified once the recv path is ready. The reason it was not there
is purely historic, as in the past when there was the fd cache,
doing it would have caused a pending recv request to be placed into
the fd cache, hence a useless recvfrom() upon success (i.e. what
happens now).
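In terms of code, the change amounts to something like this in the connect
path (sketch, error handling trimmed):

    if (connect(fd, (struct sockaddr *)&srv_addr, sizeof(srv_addr)) == -1 &&
        (errno == EINPROGRESS || errno == EALREADY)) {
        fd_cant_send(fd);  /* poll for writability to learn about completion */
        fd_cant_recv(fd);  /* new: only be notified once the recv path is ready */
    }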
Without this patch, forwarding 100k connections does this:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
17.51 0.704229 7 100000 100000 connect
16.75 0.673875 3 200000 sendto
16.24 0.653222 3 200036 close
10.82 0.435082 1 300000 100000 recvfrom
10.37 0.417266 1 300012 setsockopt
7.12 0.286511 1 199954 epoll_ctl
6.80 0.273447 2 100000 shutdown
5.34 0.214942 2 100005 socket
4.65 0.187137 1 105002 5002 accept4
3.35 0.134757 1 100004 fcntl
0.61 0.024585 4 5858 epoll_wait
With the patch:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
18.04 0.697365 6 100000 100000 connect
17.40 0.672471 3 200000 sendto
17.03 0.658134 3 200036 close
10.57 0.408459 1 300012 setsockopt
7.69 0.297270 1 200000 recvfrom
7.32 0.282934 1 199922 epoll_ctl
7.09 0.274027 2 100000 shutdown
5.59 0.216041 2 100005 socket
4.87 0.188352 1 104697 4697 accept4
3.35 0.129641 1 100004 fcntl
0.65 0.024959 4 5337 1 epoll_wait
Note the total disappearance of 1/3 of failed recvfrom() *without*
adding any extra syscall anywhere else.
The trace of an HTTP health check is now totally clean, with no useless
syscall at all anymore:
09:14:21.959255 connect(9, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:14:21.959292 epoll_ctl(4, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=9, u64=9}}) = 0
09:14:21.959315 epoll_wait(4, [{EPOLLOUT, {u32=9, u64=9}}], 200, 1000) = 1
09:14:21.959376 sendto(9, "OPTIONS / HTTP/1.0\r\ncontent-leng"..., 41, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 41
09:14:21.959436 epoll_wait(4, [{EPOLLOUT, {u32=9, u64=9}}], 200, 1000) = 1
09:14:21.959456 epoll_ctl(4, EPOLL_CTL_MOD, 9, {EPOLLIN|EPOLLRDHUP, {u32=9, u64=9}}) = 0
09:14:21.959512 epoll_wait(4, [{EPOLLIN|EPOLLRDHUP, {u32=9, u64=9}}], 200, 1000) = 1
09:14:21.959548 recvfrom(9, "HTTP/1.0 200\r\nContent-length: 0\r"..., 16320, 0, NULL, NULL) = 126
09:14:21.959570 close(9) = 0
With the edge-triggered poller, it gets even better:
09:29:15.776201 connect(9, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:29:15.776256 epoll_ctl(4, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=9, u64=9}}) = 0
09:29:15.776287 epoll_wait(4, [{EPOLLOUT, {u32=9, u64=9}}], 200, 1000) = 1
09:29:15.776320 sendto(9, "OPTIONS / HTTP/1.0\r\ncontent-leng"..., 41, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 41
09:29:15.776374 epoll_wait(4, [{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=9, u64=9}}], 200, 1000) = 1
09:29:15.776406 recvfrom(9, "HTTP/1.0 200\r\nContent-length: 0\r"..., 16320, 0, NULL, NULL) = 126
09:29:15.776434 close(9) = 0
It could make sense to backport this patch to 2.2 and maybe 2.1 after
it has been sufficiently checked for absence of side effects in 2.3-dev,
as some people had reported an extra overhead like in issue #168.
Similarly to the issue described in commit "BUG/MEDIUM: backend: always
attach the transport before installing the mux", in tcpcheck_eval_connect()
we can install a handshake transport layer underneath the mux and replace
its subscriptions, causing a crash if the mux had already subscribed for
whatever reason.
A simple reproducer consists in adding fd_cant_recv() after fd_cant_send()
in tcp_connect_server() and running it on this config, as discussed in issue
listen foo
bind :8181
mode http
option httpchk
server srv1 127.0.0.1:8888 send-proxy-v2 check inter 1000
The mux may only be installed *after* xprt_handshake is set up, so that
it registers against it and not against raw_sock or ssl_sock. This needs
to be backported to 2.2 which is the first version using muxes for checks.
In connect_server(), we can enter in a stupid situation:
- conn_install_mux_be() is called to install the mux. This one
subscribes for receiving and quits ;
- then we discover that a handshake is required on the connection
(e.g. send-proxy), so xprt_add_hs() is called and subscribes as
well.
- we crash in conn_subscribe() which gets a different subscriber.
And if BUG_ON is disabled, we'd likely lose one event.
Note that it doesn't seem to happen by default, but definitely does
if connect() rightfully performs fd_cant_recv(), so it's a matter of
who does what and in what order.
A simple reproducer consists in adding fd_cant_recv() after fd_cant_send()
in tcp_connect_server() and running it on this config, as discussed in issue
listen foo
bind :8181
mode http
server srv1 127.0.0.1:8888 send-proxy-v2
The root cause is that xprt_add_hs() installs an xprt layer underneath
the mux without taking over its subscriptions. Ideally if we want to
support this, we'd need to steal the connection's wait_events and
replace them by new ones. But there doesn't seem to be any case where
we're interested in doing this so better simply always install the
transport layer before installing the mux, that's safer and simpler.
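Schematically, the safe ordering looks like this (sketch; conn_needs_handshake
is a placeholder and the exact function signatures may differ across versions):

    /* Install the handshake xprt first, then the mux, so that the mux
     * subscribes against xprt_handshake rather than raw_sock/ssl_sock. */
    if (conn_needs_handshake && xprt_add_hs(conn) < 0)
        goto fail;
    if (conn_install_mux_be(conn, cs, sess) < 0)
        goto fail;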
This needs to be backported to 2.1 which is constructed the same way
and thus suffers from the same issue, though the code is slightly
different there.
Previously SSL_INC and SSL_LIB were set for all builds, and the SLZ lib was
linked because of those flags. After we switched the SLZ build to the stock
OpenSSL lib, SSL_INC and SSL_LIB are not set anymore.
It is a good time to set SLZ_INC and SLZ_LIB for such builds.
This bug was introduced by the commit 8f587ea3 ("MEDIUM: lua: Set the analyse
expiration date with smaller wake_time only"). At the end of hlua_action(), the
lua context may be null if the alloc failed.
No backport needed, this is 2.3-dev.
In si_cs_send() and si_cs_recv(), we now explicitly test that the connection's
mux is defined before proceeding. For si_cs_recv(), it is probably a bit
overkill. But opportunistic sends are possible from the moment the server
connection is created, so it is safer to do this test.
This patch may be backported as far as 1.9 if necessary.
In the connect_server() function, there is an optimization to install the mux
as soon as possible. It is possible when we can determine the mux to use from
the configuration only, for instance if the mux is explicitly specified or if
no ALPN is set. This patch adds a new condition to preinstall the mux for
non-SSL connections. In this case, by default, we always use the mux_pt for
raw connections and the mux-h1 for HTTP ones.
This patch is related to the issue #762. It may be backported to 2.2 (and
possibly as far as 1.9 if necessary).
Sometimes, a server connection may be established synchronously. Most of the
time it does not happen on TCP sockets; it is easier to get a synchronous
connect with UNIX sockets. When it happens, if we are not waiting for any
handshake completion, we must be sure to have a mux installed before leaving
the connect_server() function, because an attempt to send may be made before
the connection's I/O handler has had a chance to be executed to install the
mux, if not already done.
For now, performing a send with no mux installed is not expected, and leads to
a crash if it happens.
This patch should fix the issue #762 and probably #779 too. It must be
backported as far as 1.9.
If a Lua action yields for any reason and the wake timeout is set, it only
overrides the analyse expiration date if it is smaller. This way, a lower
inspect-delay will be respected, if any.
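A sketch of that logic using HAProxy's tick helpers (channel and field names
are approximate):

    /* Only move the analyse expiration date closer, never further away. */
    if (tick_isset(wake_time))
        chn->analyse_exp = tick_first(chn->analyse_exp, wake_time);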
A dedicated expiration date is now used to apply the inspect-delay of the
tcp-request or tcp-response rulesets. Before, the analyse expiration date was
used, but it may also be updated by Lua (at least). So a Lua script could
extend or reduce the inspect-delay by side effect. This is not expected. If it
becomes necessary, a specific function will be added to do this, because for
now it is a bit confusing.
On a tcp-response content ruleset evaluation, the inspect-delay is engaged
when a rule's conditions are not validated, but not when the rule's action
yields.
This patch must be backported to all supported versions.
When a tcp-request or a tcp-response content ruleset evaluation is aborted,
the corresponding FLT_END analyser must be preserved, if any. But because of a
typo, on the tcp-response content ruleset evaluation, we try to preserve the
request analyser instead of the response one.
This patch must be backported to 2.2.
On a final evaluation of a tcp-request or tcp-response content ruleset, it is
forbidden for an action to yield. To quickly identify bugs, an internal error
is now returned if it happens, and a warning log message is emitted.
A Lua action may yield. It may happen because the action explicitly returns
act.YIELD or because the script itself yields. In the first case, we must abort
the script execution if it is the final rule evaluation, i.e. if the
ACT_OPT_FINAL flag is set. The second case is already covered.
This patch must be backported to 2.2.
When an action is evaluated, flags are passed to know if it is the first call
(ACT_OPT_FIRST) and if it must be the last one (ACT_OPT_FINAL). For the
do-resolve DNS action, the ACT_OPT_FINAL flag must be handled because the
action may yield. It must never yield when this flag is set. Otherwise, it may
lead to a wakeup loop of the stream because the inspect-delay of a tcp-request
content ruleset was reached without stopping the rules evaluation.
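In code, the rule is roughly the following (sketch; the surrounding logic is
simplified):

    /* An action must not yield when this is the last evaluation. */
    if (flags & ACT_OPT_FINAL)
        return ACT_RET_CONT;  /* give up cleanly instead of yielding again */
    return ACT_RET_YIELD;     /* otherwise, wait and be called again */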
This patch is related to the issue #222. It must be backported as far as 2.0.
On Lua 5.4, some API changes make HAProxy compilation fail. Among other
things, the lua_resume() function has changed and now takes an extra argument
in Lua 5.4, and the error LUA_ERRGCMM was removed. Thus the LUA_VERSION_NUM
macro is now tested to know which Lua version is used and adapt the code
accordingly.
The incompatibilities with the previous Lua versions are listed here:
http://www.lua.org/manual/5.4/manual.html#8
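For instance, the lua_resume() change can be absorbed with a small
compatibility wrapper like this one (sketch):

    #include <lua.h>

    /* Lua 5.4 added an output parameter for the number of results. */
    static int resume_compat(lua_State *L, lua_State *from, int nargs)
    {
    #if LUA_VERSION_NUM >= 504
        int nres;
        return lua_resume(L, from, nargs, &nres);
    #else
        return lua_resume(L, from, nargs);  /* Lua 5.2/5.3 signature */
    #endif
    }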
This patch comes from HAProxy's Fedora RPM, committed by Tom Callaway:
https://src.fedoraproject.org/rpms/haproxy/blob/db970613/f/haproxy-2.2.0-lua-5.4.patch
This patch should fix the issue #730. It must be backported to 2.2 and probably
as far as 2.0.
Initially SSL_LIB and SSL_INC were set globally, and we assumed that any
OpenSSL variant was supposed to be built using "script/build-ssl.sh".
Starting with the ARM64 build we use stock OpenSSL, and it also makes sense to
use stock OpenSSL for 1.1.1 builds for the sake of velocity.
Let us make the stock OpenSSL lib a first-class citizen: SSL_LIB and SSL_INC
are only set when a custom OpenSSL variant is built.
Support for DNS Service Discovery by means of SRV records was enhanced with
commit 13a9232eb ("MEDIUM: dns: use Additional records from SRV responses")
to use the content of the answers' Additional records when present.
If there are Authority records before the Additional records, we mistakenly
treat that as an invalid response. To fix this, just ignore the Authority
section if it exists and skip to the Additional records.
As 13a9232eb was introduced during 2.2-dev, it must be backported to 2.2.
This is a fix for issue #778.
Since commit 13a9232eb ("MEDIUM: dns: use Additional records from SRV
responses"), a struct server can have a NULL dns_requester->resolution,
when SRV records are used and DNS answers contain an Additional section.
This is a problem when we call snr_update_srv_status() because it does
not check that resolution is NULL, and dereferences it. This patch
simply adds a test for resolution being NULL. When that happens, it means
we are using SRV records with Additional records, and an entry was removed.
This should fix issue #775.
This should be backported to 2.2.
When the watchdog fires because of Lua, the stack of the corresponding Lua
context is dumped. But we must be sure the Lua context is fully initialized to
do so. If we are blocked on the global Lua lock during the Lua context
initialization, the Lua stack may be NULL.
This patch should fix the issue #776. It must be backported as far as 2.0.
uClibc toolchains built with no dynamic library support don't provide
the dlfcn.h header. That leads to a build failure:
CC src/tools.o
src/tools.c:15:10: fatal error: dlfcn.h: No such file or directory
#include <dlfcn.h>
^~~~~~~~~
Enable dladdr on Linux platforms only when USE_DL is defined.
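The guard then becomes something like this (sketch):

    /* Only pull in the dynamic linker API when it is actually usable. */
    #if defined(__ELF__) && defined(USE_DL)
    #include <dlfcn.h>
    #endif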
This should be backported wherever 109201fc5 ("BUILD: tools: rely on
__ELF__ not USE_DL to enable use of dladdr()") is backported (currently
only 2.2 and 2.1).
In the CGI/1.1 specification, it is specified that QUERY_STRING must not be
url-decoded. However, this parameter is sent decoded because it is extracted
after the URI's path decoding. Now, the query-string is first extracted, then
the script part of the path is url-decoded. This way, the QUERY_STRING
parameter is no longer decoded.
This patch should fix the issue #769. It must be backported as far as 2.1.
Some difficulties encountered in anticipating the end of messages were
addressed by commit 810df0614 ("MEDIUM: htx: Add a flag on a HTX message when
no more data are expected"), but there were 3 issues in it (with minor
impact):
- the flag was mistakenly set before an EOH in Lua, which would only
cause incomplete packets to be emitted for now but could cause
truncated responses in the future. It's not needed to add it on
the next EOM block as http_forward_proxy_resp() already does it.
- one was still missing in hlua_applet_http_fct(), possibly causing
delays on Lua services
- one was missing in the Prometheus exporter.
All this simply shows that this mechanism is still quite fragile and not
trivial to use, especially in order to deal with the impossibility of
appending the EOM, so we'll need to improve the solution in the future, and
future backports should not be completely ruled out.
This fix must be backported where the patch above is backported,
typically 2.1 and later as it was required for a set of fixes.
The previous leak on do-resolve was particularly tricky to check due
to the important code repetition in dns_validate_dns_response() which
required careful examination of all return statements to check whether
they needed a pool_free() or not. Let's clean all this up using a common
leave point which releases the element itself. This also encourages properly
setting the current response to NULL right after freeing or adding it so that
it doesn't get added. 45 return statements and 22 pool_free() calls were
replaced by one of each.
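The pattern is the classic single exit point; a self-contained illustration
(not the actual HAProxy code, all names are hypothetical) could look like this:

    #include <stdlib.h>
    #include <string.h>

    /* Every error path goes through "leave", which owns the cleanup; on
     * success, ownership is handed over and the local pointer is reset so
     * that the cleanup becomes a no-op. */
    static int validate_blob(const char *buf, size_t len, char **out)
    {
        char *rec = NULL;
        int ret = -1;            /* pessimistic default: invalid */

        if (len < 12)            /* stand-in for a header length check */
            goto leave;

        rec = malloc(len);
        if (!rec)
            goto leave;
        memcpy(rec, buf, len);

        if (rec[0] == '\0')      /* stand-in for a record validity check */
            goto leave;

        *out = rec;              /* ownership transferred to the caller */
        rec = NULL;
        ret = 0;

     leave:
        free(rec);               /* single release point for all early returns */
        return ret;
    }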
This flag is set by HTTP analyzers to notify that more data are expected. It
is used to know if the CO_SFL_MSG_MORE flag must be set on the connection when
data are sent. Historically, it was set on chunked messages and on compressed
responses. But in HTX, chunked messages are parsed by the H1 multiplexer. So
for this case, infinite forwarding is enabled and the flag must no longer be
set. For the compression, the test must be extended and applied to all data
filters. Thus it is also true for the request channel.
So, now, the CF_EXPECT_MORE flag is set on a request or a response channel if
there is at least one data filter attached to the stream. In addition, the
flag is removed when the HTTP message analysis is finished.
This patch should partially fix the issue #756. It must be backported to 2.1.
In HTX, if the HTX_FL_EOI flag is set on the message, we don't set the
CO_SFL_MSG_MORE flag on the connection. This way, the send is not delayed if
only the EOM is missing from the HTX message.
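Schematically (sketch; the actual condition in the stream interface is more
involved and send_flag is a placeholder name):

    /* Only ask the mux for MSG_MORE while more data are still expected. */
    if (!(htx->flags & HTX_FL_EOI))
        send_flag |= CO_SFL_MSG_MORE;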
This patch depends on the commit "MEDIUM: htx: Add a flag on a HTX message when
no more data are expected".
This patch should partially fix the issue #756. It must be backported to
2.1. For earlier versions, CO_SFL_MSG_MORE is ignored by HTX muxes.
The HTX_FL_EOI flag must now be set on an HTX message when no more data are
expected. Most of the time, it must be set before adding the EOM block. Thus,
if there is no space for the EOM, we still know that all data were received
and pushed into the HTX message. The only exception is HTTP replies (deny,
return...). For these messages, the flag is set after all blocks are pushed
into the message, including the EOM block, because, on error, we remove all
inserted data.
When a DNS resolution is freed, the remaining items in .ar_list and
.answer_list are also released. It must be done to avoid a memory leak, and it
is the last chance to release these objects. I honestly have no idea whether
there is a better place to release them earlier, but at least there is no more
leak.
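The release loop is roughly the following (sketch using HAProxy's list
helpers; the field and pool names follow dns.c but may be inexact):

    struct dns_answer_item *item, *itemback;

    list_for_each_entry_safe(item, itemback,
                             &resolution->response.answer_list, list) {
        LIST_DEL(&item->list);
        pool_free(dns_answer_item_pool, item);
    }
    list_for_each_entry_safe(item, itemback,
                             &resolution->response.ar_list, list) {
        LIST_DEL(&item->list);
        pool_free(dns_answer_item_pool, item);
    }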
This patch should solve the issue #222. It must be backported, at least, as far
as 2.0, and probably, with caution, as far as 1.8 or 1.7.