This patch adds the -m flag, which allows specifying the header name
matching method when deleting headers from an HTTP request/response.
Currently beg, end, sub, str and reg are supported.
This is related to GitHub issue #909
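For example (illustrative only), a rule removing every request header whose
name begins with "x-private-" would look like:

  http-request del-header x-private- -m beg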
The function __http_find_header is used to search headers by name using the
specified matching method. Matching by substring returned unexpected results
due to the wrong substring length being supplied to the strnistr function.
Also fixed the boolean condition by inverting it, as we're interested in
headers that contain the substring.
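A minimal sketch of the corrected check (variable names and the strnistr
argument order are assumed here, this is not the exact patch):

  /* keep the header when <name> (len bytes) appears anywhere inside
   * the header name <n> (nl bytes)
   */
  if (strnistr(n, nl, name, len) != NULL)
          /* header matches the substring */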
This patch should be backported as far as 2.2
As indicated in issue #907 it very frequently fails on FreeBSD, and
looking at it, it's broken in multiple ways. It relies on log ordering
between two layers, the first one being allowed to support h2. Given
that on FreeBSD it usually ends up in timeouts, it's very likely that
for some reason one frontend logs before the other one or that curl
uses h2 instead of h1 there, and that the log instance waits forever
for its lines.
Usually it works fine when run locally though, so instead of removing it,
let's mark it as broken so that it can still be used when relevant.
Baptiste reported a new crash affecting 2.3 which can be triggered
when using H2 on the backend, with http-reuse always and with tens
of clients doing close only. There are a few combined cases which cause
this to happen, but each time the issue is the same, an already freed
session is dereferenced in session_unown_conn().
Two cases were identified to cause this:
- a connection referencing a session as its owner, which is detached
from the session's list and is destroyed after this session ends.
The test on conn->owner before calling session_unown_conn() is not
sufficient as the pointer is not null but is not valid anymore.
- a connection that never goes idle and that gets killed from the
mux, where session_free() is called first, then conn_free() calls
session_unown_conn() which scans the just-freed session for older
connections. This one is only triggered with DEBUG_UAF.
The reason for this session to be present here is that it's needed during
the connection setup, to be passed by conn_install_mux_be() to mux->init()
as the owning session, but it's never deleted afterwards. Furthermore, even
conn_session_free() doesn't delete this pointer after freeing the session
that lies there. Both definitely result in a use-after-free that's more
easily triggered under DEBUG_UAF.
This patch makes sure that the owner is always deleted after detaching
or killing the session. However it is currently not possible to clear
the owner right after a synchronous init because the proxy protocol
apparently needs it (a reg test checks this), and if we leave it past
the connection setup with the session not attached anywhere, it's hard
to catch the right moment to detach it. This means that the session may
remain in conn->owner as long as the connection has never been added to
nor removed from the session's idle list. Given that this patch needs to
remain simple enough to be backported, instead it adds a workaround in
session_unown_conn() to detect that the element is already not attached
anywhere.
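A minimal sketch of the intended sequence (illustrative, not the exact patch):

  /* detach the connection from the session and drop the back-reference
   * so that conn->owner can no longer point to a freed session
   */
  session_unown_conn(sess, conn);
  conn->owner = NULL;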
This fix absolutely requires previous patch "CLEANUP: connection: do not
use conn->owner when the session is known" otherwise the situation will
be even worse, as some places used to rely on conn->owner instead of the
session.
The fix could theoretically be backported as far as 1.8. However, the code
in this area has significantly changed along versions and there are more
risks of breaking working stuff than fixing real issues there. The issue
was really woken up in two steps during 2.3-dev when slightly reworking
the idle conns with commit 08016ab82 ("MEDIUM: connection: Add private
connections synchronously in session server list") and when adding
support for storing used H2 connections in the session and adding the
necessary call to session_unown_conn() in the muxes. But the same test
managed to crash 2.2 when built with DEBUG_UAF and patched like this,
proving that we used to already leave dangling pointers behind us:
| diff --git a/include/haproxy/connection.h b/include/haproxy/connection.h
| index f8f235c1a..dd30b5f80 100644
| --- a/include/haproxy/connection.h
| +++ b/include/haproxy/connection.h
| @@ -458,6 +458,10 @@ static inline void conn_free(struct connection *conn)
| sess->idle_conns--;
| session_unown_conn(sess, conn);
| }
| + else {
| + struct session *sess = conn->owner;
| + BUG_ON(sess && sess->origin != &conn->obj_type);
| + }
|
| sockaddr_free(&conn->src);
| sockaddr_free(&conn->dst);
It's uncertain whether an existing code path there can lead to dereferencing
conn->owner when it's bad, though certain suspicious memory corruption bugs
make one think it's a likely candidate. The patch should not be hard to
adapt there.
Backports to 2.1 and older are left to the appreciation of the person
doing the backport.
A reproducer consists of this:
global
    nbthread 1
listen l
    bind :9000
    mode http
    http-reuse always
    server s 127.0.0.1:8999 proto h2
frontend f
    bind :8999 proto h2
    mode http
    http-request return status 200
Then this will make it crash within 2-3 seconds:
$ h1load -e -r 1 -c 10 http://0:9000/
If it does not, it might be that DEBUG_UAF was not used (it's harder to trigger then)
and it might be useful to restart.
At a few places we used to rely on conn->owner to retrieve the session
while the session is already known. This is not correct because at some
of these points the reason the connection's owner was still the session
(instead of NULL) is a mistake. At one place a comparison is even made
between the session and conn->owner assuming it's valid without checking
if it's NULL. Let's clean this up to use the session all the time.
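For illustration, the cleanup amounts to dropping patterns such as the
following in favor of the session pointer already available in the caller
(sketch only):

  /* fragile: conn->owner may be NULL or stale at this point */
  struct session *sess = conn->owner;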
Note that this will be needed for a forthcoming fix and will have to be
backported.
HAProxy uses the CN and SAN of certificates to match the incoming SNI, and
uses the matching certificate in the TLS handshake. `crt-list` goes
further and allows configuring SNI filters to explicitly define the
FQDNs that should match a certificate.
The first declared certificate of the `crt-list` option follows the same
rules, and it's also used as a fallback - the certificate that should be
used if SNI isn't provided or the provided one cannot match any
certificate or SNI filter. If a provided SNI matches the CN or SAN of
the first certificate, the first certificate would be used even if a
matching SNI filter is declared later.
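For context, a crt-list entry associates a certificate with optional SNI
filters; a purely hypothetical layout (file names and domains made up)
looks like:

  # first entry is also the default certificate
  fallback.pem
  app1.pem   www.app1.example.org
  app2.pem   *.app2.example.org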
This change clarifies this scenario and documents a filter that can be
used to turn the first declared certificate into a proper fallback.
Should be merged as far as the first SNI filter implementation.
This patch cleans up the Windows CI to look more similar to the refactored
Linux CI on GitHub Actions.
It switches the environment set-up from some manual cygwin setup via choco to
the msys2/setup-msys2@v2 action which just works and allows later steps to look
like any others without the need to manually specify the shell.
This new setup is much faster than before where a single Windows build required
more than 10 minutes with more than 5 minutes just spent setting up the
environment and more than 6 minutes compiling HAProxy.
With this patch, setting up the environment is done in less than a minute
and HAProxy is compiled in less than 2 minutes.
The only drawback is that Lua does not appear to be readily available. I expect
this to be acceptable and that the benefits far outweigh this small drawback.
This is a preparation to later run some matrix entries on schedule only.
Within the matrix.py script it can now be detected whether the workflow is
running on schedule by using:
if build_type == "schedule":
    matrix.append(...)
HAVE_SSL_CTX_SET_CIPHERSUITES is a newly defined macro set in openssl-compat.h,
which helps identify SSL libs (currently OpenSSL 1.1.1 only) that support
TLS 1.3 ciphersuites manipulation on a TLS 1.3 context.
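A minimal sketch of how such a guard is typically used (variable names are
made up, this is not the exact haproxy code):

  #ifdef HAVE_SSL_CTX_SET_CIPHERSUITES
          /* only available when the SSL lib exposes TLS 1.3 ciphersuites */
          if (conf_ciphersuites &&
              !SSL_CTX_set_ciphersuites(ctx, conf_ciphersuites)) {
                  /* report the configuration error */
          }
  #endif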
Allow OpenBSD to support encrypted passwords in Userlists.
OpenBSD's crypt(3) function is provided directly by libc and does not
require -lcrypt.
Signed-off-by: Matthieu Guegan <matthieu.guegan@deindeal.ch>
When a file from a crt-list was not found, it was silently ignored,
letting HAProxy start without it.
This bug was introduced by 47da821 ("MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist").
This commit adds a "found" variable which is checked once we have tried every
bundle combination, so we can exit with an error if none was found.
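A minimal sketch of the logic (names illustrative, not the exact patch):

  int found = 0;

  /* try every multi-cert bundle extension (.rsa, .ecdsa, .dsa) */
  for (n = 0; n < num_bundle_exts; n++) {
          snprintf(fp, sizeof(fp), "%s.%s", crt_path, bundle_exts[n]);
          if (stat(fp, &st) == 0) {
                  found = 1;
                  /* load this candidate certificate */
          }
  }
  if (!found) {
          memprintf(err, "unable to stat SSL certificate '%s'", crt_path);
          return ERR_ALERT | ERR_FATAL;
  }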
Must be backported to 2.3.
When a non-existing file was specified in the configuration, haproxy
did not exit with an error, which is not normal.
This bug was introduced by dfa93be ("MEDIUM: ssl: emulate multi-cert
bundles loading in standard loading") which does nothing if the stat
failed.
This patch introduces a "found" variable which is checked at the end of
the function, so we exit with an error if no file was found.
Must be backported to 2.3.
In issue #970 it was reported that the bundle loading does not work
anymore with crt-list.
This bug was introduced by 47da821 ("MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist") which incorrectly uses "path"
instead of "crt_path" in the name resolution.
Must be backported to 2.3.
It is not possible on a response coming from a server, but an errorfile may be
empty. In this case, the http-after-response ruleset must not be evaluated
because it is totally unexpected to manipulate headers on an empty HTX message.
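A minimal sketch of the guard (illustrative, not the exact patch):

  /* nothing to manipulate on an empty HTX message (e.g. an empty errorfile):
   * skip the http-after-response evaluation entirely
   */
  if (htx_is_empty(htx))
          goto end;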
This patch must be backported everywhere the http-after-response rules are
supported, i.e as far as 2.2.
In bug #959 it was reported that haproxy segfaults on startup when trying
to load a certificate which uses the X509v3 AKID extension but without the
keyid field.
This field is not mandatory and could be replaced by the serial or the
DirName.
For example:
X509v3 extensions:
    X509v3 Basic Constraints:
        CA:FALSE
    X509v3 Subject Key Identifier:
        42:7D:5F:6C:3E:0D:B7:2C:FD:6A:8A:32:C6:C6:B9:90:05:D1:B2:9B
    X509v3 Authority Key Identifier:
        DirName:/O=HAProxy Technologies/CN=HAProxy Test Intermediate CA
        serial:F2:AB:C1:41:9F:AB:45:8E:86:23:AD:C5:54:ED:DF:FA
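For illustration, the kind of null check needed when reading the AKID
extension with OpenSSL looks like this (sketch, not the exact patch):

  /* requires <openssl/x509v3.h> */
  AUTHORITY_KEYID *akid;

  akid = X509_get_ext_d2i(cert, NID_authority_key_identifier, NULL, NULL);
  if (akid && akid->keyid) {
          /* only then can the keyid be used to look up the issuer chain */
  }
  AUTHORITY_KEYID_free(akid);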
This bug was introduced by 70df7b ("MINOR: ssl: add "issuers-chain-path" directive").
This patch must be backported as far as 2.2.
It's really annoying to see that in 2020 we're still facing bugs caused
by dangling pointers in the code that result from poorly written rules
about how these pointers are supposed to be handled, set and reset. Let's
add a few supposedly obvious (but apparently not) rules about how pointers
have to be used throughout the code, in the hope of making such bad practices
disappear (or at least having something to point the authors to after
reviewing their code).
in the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:
backend be_foo
    default-server ssl
    server srv0 127.0.0.1:6011 weight 1 no-ssl
show servers state:
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1
where srv0 can switch to ssl later during the runtime:
set server be_foo/srv0 ssl on
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
a common init for ssl_ctx will later be usable in other functions in
order to support hot enabling of SSL during runtime.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This PR aims to make the workflow more consistent, by reusing the same wording
for the step names and the same commands to make it look like the vtest
workflow as much as possible.
It was renamed to compliance.yml to match the human readable name better. This
also allows extending the workflow with other compliance tools later on, nicely
grouping those jobs together in a single file.
No functional changes have been made.
while looking at the `url_dec` implementation I realised there was not yet a
simple test to avoid future regressions.
This one tests simple cases, including the "+" behaviour depending
on the argument passed to `url_dec`.
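For reference, an illustrative use of the converter (variable names are made
up, and the assumption is that a non-zero argument enables form-style decoding
of "+"):

  http-request set-var(txn.path_dec) path,url_dec      # "+" kept as-is
  http-request set-var(txn.form_dec) path,url_dec(1)   # "+" decoded as a space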
Signed-off-by: William Dauchy <wdauchy@gmail.com>
There is apparently a race in this test that would require relying on
haproxy's output to make it reliably work, but currently vtest doesn't
have this option. Let's mark it broken again to avoid polluting the CI.
This is discussed in github issue #950.
Three filters are used. The compression filter is enclosed by two trace filters,
both with the random forwarding enabled. An HTTP test is performed but also a
TCP test using an HTTP tunnel.
CONNECT requests are bodyless messages but with no EOM blocks. Thus, conditions
to stop waiting for the message payload are not suited to this kind of
message. Indeed, the message finishes on an EOH block. But the tunnel mode at
the stream level is only set in HTTP_XFER_BODY analyser. So, the stream is
blocked, waiting for a body that does not exist till a timeout expires.
To fix this bug, we just stop waiting for a body for CONNECT requests. Another
solution is to rely on HTX_SL_F_BODYLESS/HTTP_MSGF_BODYLESS flags. But this one
is less intrusive.
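A minimal sketch of the idea (illustrative, not the exact patch):

  if (txn->meth == HTTP_METH_CONNECT) {
          /* CONNECT finishes on EOH and carries no payload, so stop
           * waiting for a body that will never arrive
           */
          msg->msg_state = HTTP_MSG_DONE;
  }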
This patch must be backported as far as 2.0. For the 2.0, only the HTX part
must be fixed.
When HTTP filtering ends, if there are some filtered data not forwarded yet, we
forward them in flt_http_end(). Most of the time this doesn't happen, except when
a tunnel is established using a CONNECT. In this case, there is no EOM on the
request and there is no body. Thus the headers are never forwarded, blocking the
stream.
This patch must be backported as far as 2.0. Prior versions don't suffer from this
bug because there is no HTX support. On the 2.0, the change is only applicable
to HTX streams. A special test must be performed to make sure.
Add a reg-test verifying the fix in dea7c209f8.
Some parts of the configuration used in the reg-test were taken from the initial
bug report from Maciej.
Should be backported together with dea7c209f8
(all stable versions).
Co-authored-by: Maciej Zdeb <maciej@zdeb.pl>
Return ERR_NONE instead of 0 on success for all config callbacks that should
return ERR_* codes. There is no change because ERR_NONE is a macro equal to
0. But this makes the return value more explicit.
post-check function callbacks must return ERR_* flags. Thus, init_h2() is fixed
to return ERR_NONE on success or (ERR_ALERT|ERR_FATAL) on error.
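A minimal sketch of the pattern (not the exact patch):

  static int init_h2(void)
  {
          /* post-check callbacks must report ERR_* codes */
          if (/* some allocation failed */ 0)
                  return ERR_ALERT | ERR_FATAL;
          return ERR_NONE; /* success, instead of a bare 0 */
  }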
This patch may be backported as far as 2.2.
Functions registered to release memory per-thread have no return value. But the
registering function and the function pointer in the per_thread_free_fct structure
specify it should return an integer. This patch fixes it.
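A sketch of the corrected callback type (the field layout is assumed for
illustration):

  struct per_thread_free_fct {
          struct list list;
          void (*fct)(void);  /* was declared as returning int, value never used */
  };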
This patch may be backported as far as 2.0.
When tcp-check or http-check rules are used, if the corresponding check option
(option tcp-check and option httpchk) is declared after the ruleset, a warning
is emitted about an unused check ruleset while there is no problem in reality.
This patch must be backported as far as 2.2.
In sync mode, if an applet receives an ack while the processing delay has already
expired, there is no frame waiting for this ack. But there is no reason to
close the connection in this case. The ack may be ignored and the connection may
be reused to process another frame. The only reason to trigger an error and
close the connection is when the wrong ack is received while there is still a
frame waiting for its ack. In sync mode, this should never happen.
This patch may be backported to all versions supporting the SPOE.
When a SPOE applet is used to send a frame, a reference on this applet is saved
in the spoe context of the offloaded stream. But, if the applet is released
before receiving the corresponding ack, we must be sure to remove this
reference. This was performed for fragmented frames only. But it must also be
performed for the spoe contexts in the applet waiting_queue and in the thread
waiting_queue (used in async mode).
This bug leads to a memory corruption when an offloaded stream tries to update the
state of a released applet because it still has a reference on it. There are
many ways to trigger this bug. The easiest is probably during reloads. On the
old process, all applets are woken up to be released ASAP.
Many thanks to Maciej Zdeb for reporting the bug and working on it for 2
months. Without his help, it would have been much more difficult to fix the
bug. It is always a huge pleasure to see how some users are enthusiastic and
helpful. Thanks again Maciej!
This patch must be backported to all versions where the spoe is supported (>=
1.7).