If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since the
ssl_fc_sni sample fetch relies on the SSL_get_servername function to
return this SNI, in case of error, such as an SNI mismatch with a
frontend having the strict-sni option enabled, the sample fetch would
not work (making strict-sni related errors hard to debug).
This patch fixes that by storing the SNI as an ex_data in the SSL
context in case the ClientHello callback returns an error. This way the
sample fetch can fall back to getting the SNI this way. It will still
call the SSL_get_servername function first since it is the proper way
of getting a client's SNI when the handshake succeeded.
In order to avoid memory allocations at runtime in this heavily used
function, a new memory pool was created to store those client
SNIs. Its entry size is set to 256 bytes since SNIs can't be longer than
255 characters.
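For illustration, with this fix a frontend along these lines (hypothetical
names and paths) can log the SNI via ssl_fc_sni even when strict-sni makes
the handshake fail:
frontend fe_tls
    bind :443 ssl crt /etc/haproxy/certs/ strict-sni
    log-format "sni=%[ssl_fc_sni]"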
This fixes GitHub #1484.
It can be backported to 2.5.
Patch 2c776f1 ("BUG/MEDIUM: ssl: initialize correctly ssl w/
default-server") added tests that are not relevant anymore and broke the
reg-test. Revert them.
This bug was introduced by d817dc73 ("MEDIUM: ssl: Load client
certificates in a ckch for backend servers") in which the creation of
the SSL_CTX for a server was moved to the configuration parser when
using a "crt" keyword instead of being done in ssl_sock_prepare_srv_ctx().
The patch 0498fa40 ("BUG/MINOR: ssl: Default-server configuration ignored by
server") made it worse by setting the same SSL_CTX for every servers
using a default-server. Resulting in any SSL option on a server applied
to every server in its backend.
This patch fixes the issue by reintroducing a string which stores the
path of the certificate inside the server structure, and loading the
certificate in ssl_sock_prepare_srv_ctx() again.
This is a quick fix to backport; a cleaner way can be achieved by always
creating the SSL_CTX in ssl_sock_prepare_srv_ctx() and properly
splitting the ssl_sock_load_srv_cert() function.
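For illustration, a configuration like the following (hypothetical
addresses and file names) must end up with one distinct SSL_CTX per "crt"
keyword:
backend be_app
    default-server ssl verify none
    server s1 192.0.2.1:443 crt client1.pem
    server s2 192.0.2.2:443 crt client2.pem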
This patch fixes issue #1488.
Must be backported as far as 2.4.
LibreSSL 3.4.2 introduced a certificate revocation check behaviour change:
for some checks, X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (20) is now
returned.
https://github.com/libressl-portable/portable/issues/697
Let us modify the vtc accordingly.
During post-parsing stage, the SSL context of a server is initialized if SSL
is configured on the server or its default-server. It is required to be able
to enable SSL at runtime. However a regression was introduced, because the
last parsed default-server is used, while it is not necessarily the
default-server line used to configure the server. This may lead to
erroneously initializing the SSL context for a server without any SSL
parameter, or to skipping it while it should be done.
The problem is that the default-server used to configure a server is not saved
during configuration parsing. So, the information is lost during the
post-parsing. To fix the bug, the SRV_F_DEFSRV_USE_SSL flag is
introduced. It is used to know when a server was initialized with a
default-server using SSL.
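For illustration, in a hypothetical configuration like the following, the
post-parsing stage used to consult the last parsed default-server (the
second one) for both servers, while each server must be matched with the
default-server line that actually configured it:
backend be_app
    default-server ssl verify none
    server s1 192.0.2.1:443   # configured from the first default-server (SSL)
    default-server no-ssl
    server s2 192.0.2.2:80    # configured from the second one (no SSL)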
For the record, the commit f63704488e ("MEDIUM: cli/ssl: configure ssl on
server at runtime") has introduced the bug.
This patch must be backported as far as 2.4.
This patch makes it possible to replace the host header generated by the
httpclient instead of adding a second one, which resulted in the server
replying with a 400 error.
The host header is now generated from the uri only if it wasn't found in
the list of headers.
Also add a new request in the VTC file to test this.
This patch must be backported in 2.5.
With the CI occasionally slowing down, we're starting to see again some
spurious failures despite the long 1-second timeouts. These report false
positives that are disturbing and don't provide as much value as they
could. Meanwhile, at this delay it already becomes a pain for developers
to wait for the tests to complete.
This commit adds support for the new environment variable
HAPROXY_TEST_TIMEOUT that will allow anyone to modify the connect,
client and server timeouts. It was set to 5 seconds by default, which
should be plenty for quite some time in the CI. All relevant values
that were 200ms or above were replaced by this one. A few larger
values were left as they are special. One test for the set-timeout
action that used to rely on a fixed 1-sec value was extended to a
fixed 5-sec, as the timeout is normally not reached, but it needs
to be known to compare the old and new values.
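In practice, the reg-test configurations can then reference the variable
with a fallback, presumably along these lines:
defaults
    timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
    timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
    timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"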
This test was dependent on the check/server timeout to detect a failure,
which is not logical since we also use that one as an upper bound for
success in the second test, and that needlessly extends the test duration.
Let's make sure the timeout strikes immediately with only 1 ms timeout. Now
the total tests time is around 5.3-5.4s down from 8.7s in dev14. There is
still quite some room for improvement in the peers tests which all wait 2s
before starting but this will be more complicated.
Some tests expect a 503, typically those that check that wrong CA/CRL
will not be accepted between a server and a frontend. But such tests
tend to last very long simply because of the 1-second turn-around on
connection retries that happens during the failure. Let's properly set
the retries count to zero for these ones. One test purposely wants to
exhaust the retries, so its retries count was set to 1 instead.
The feature cmd was incorrectly set to:
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL)' && 'openssl_version_atleast(1.1.1)'"
which was incorrect since the quotes must surround the whole -cc argument.
Also the test requires OpenSSL and does not work with LibreSSL.
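The corrected form presumably keeps the whole condition inside the single
quoted -cc argument:
feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(1.1.1)'"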
This reverts commit 597909f4e6
http-after-response rules evaluation was changed to do the same as was
done for http-response in the code. However, the opposite must be performed
instead. Only the rules of the current section must be stopped. Thus the
above commit is reverted and the http-response rules evaluation will be
fixed instead.
Note that only "allow" action is concerned. It is most probably an uncommon
action for an http-after-request rule.
This patch must be backported as far as 2.2 if the above commit was
backported.
A TCP/HTTP action can stop the rules evaluation. However, it should be
applied on the current section only. For instance, for http-requests rules,
an "allow" on a frontend must stop evaluation of rules defined in this
frontend. But the backend rules, if any, must still be evaluated.
For http-response rulesets, according to the configuration manual, the same
must be true. Only the "allow" action is concerned. However, since the
beginning, this action stops evaluation of all remaining rules, not only
those of the current section.
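As an illustration (hypothetical rules), the expected behavior is:
backend be_app
    http-response allow              # must end the backend ruleset only
    http-response set-header X-Be 1  # skipped
frontend fe_main
    http-response set-header X-Fe 1  # still evaluated afterwards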
This patch may be backported to all supported versions. But it is not so
critical because the bug has existed for a while. I doubt it will break any
existing configuration because the current behavior is
counterintuitive.
During a troubleshooting session it became obvious that the SNI always
ought to be logged on the httpslog, as it explains errors caused by selection of
the default certificate (or failure to do so in case of strict-sni).
This expectation was also confirmed on the mailing list.
Since the field may be empty it appeared important not to leave an
empty string in the current format, so it was decided to place the
field before a '/' preceding the SSL version and ciphers, so that
in the worst case a missing field leads to a field looking like
"/TLSv1.2/AES...", though usually a missing element still results
in a "-" in logs.
This will change the log format for users who already deployed the
2.5-dev versions (hence the medium level) but no released version
was using this format yet so there's no harm for stable deployments.
The reg-test was updated to check for "-" there since we don't send
SNI in reg-tests.
Link: https://www.mail-archive.com/haproxy@formilux.org/msg41410.html
Cc: William Lallemand <wlallemand@haproxy.org>
Commit 3d2093af9 ("MINOR: connection: Add a connection error code sample
fetch") added these convenient sample-fetch functions but it appears that
due to a misunderstanding the redundant "conn" part was kept in their
name, causing confusion, since "fc" already stands for "front connection".
Let's simply call them "fc_err" and "bc_err" to match all other related
ones before they appear in a final release. The VTC they appeared in were
also updated, and the alpha sort in the keywords table updated.
Cc: William Lallemand <wlallemand@haproxy.org>
Implement parsing for the server keyword 'ws'. This is used to configure
the selection mode of the websocket protocol. The configuration
documentation has been updated.
A new regtest has been created to test the proper behavior of the
keyword.
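A minimal sketch of the keyword, which takes the values auto, h1 or h2
(hypothetical server):
backend be_ws
    server s1 192.0.2.1:443 ssl verify none alpn h2,http/1.1 ws h2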
RFC 8441 was not respected by haproxy with regard to server support
for Extended CONNECT. The Extended CONNECT method was used to convert an
Upgrade header stream even if no SETTINGS_ENABLE_CONNECT_PROTOCOL was
received, which is forbidden by RFC 8441. In this case, the behavior
of the http/2 server is unspecified.
Fix this by flagging the connection on receipt of the RFC 8441
SETTINGS_ENABLE_CONNECT_PROTOCOL setting. Extended CONNECT is thus only
used if the flag is present. In the other case, the stream is
immediately closed as there is no way to handle it in http/2. This results
in an http/1.1 502 or an http/2 RESET_STREAM on the client side.
The protocol-upgrade regtest has been extended to test that haproxy does
not emit Extended CONNECT on servers without RFC8441 support.
It must be backported up to 2.4.
A long-standing issue was reported in issue #1215.
In short, var() was initially internally declared as returning a string
because it was not possible back then to return "any type". As such, users
regularly get trapped thinking that when they're storing an integer there,
then the integer matching method automatically applies. Except that this
is not possible since this is related to the config parser and is decided
at boot time where the variable's type is not known yet.
As such, what is done is that the output being declared as type string,
the string match will automatically apply, and any value will first be
converted to a string. This results in several issues like:
http-request set-var(txn.foo) int(-1)
http-request deny if { var(txn.foo) lt 0 }
not working. This is because the string match on the second line will in
fact compare the string representation of the variable against strings
"lt" and "0", none of which matches.
The doc says that the matching method is mandatory, though that's not
the case in the code due to that default string type being permissive.
There's not even a warning when no explicit match is placed, because
this happens very deep in the expression evaluator and making a special
case just for "var" can reveal very complicated.
The set-var() converter already mandates a matching method, as the
following will be rejected:
... if { int(12),set-var(txn.truc) 12 }
while this one will work:
... if { int(12),set-var(txn.truc) -m int 12 }
As such, this patch modifies var() to match the doc, returning the
type "any" and mandating the matching method, implying that this bogus
config which does not work:
http-request set-var(txn.foo) int(-1)
http-request deny if { var(txn.foo) lt 0 }
will need to be written like this:
http-request set-var(txn.foo) int(-1)
http-request deny if { var(txn.foo) -m int lt 0 }
This *will* break some configs (and even 3 of our regtests relied on
this), but except those which already match string exclusively, all
other ones are already broken and silently fail (and one of the 3
regtests, the one on FIX, was bogus regarding this).
In order to fix existing configs, one can simply append "-m str"
after a "var()" in an ACL or "if" expression:
http-request deny unless { var(txn.jwt_alg) "ES" }
must become:
http-request deny unless { var(txn.jwt_alg) -m str "ES" }
Most commonly, patterns such as "le", "lt", "ge", "gt", "eq", "ne" in
front of a number indicate that the intent was to match an integer,
and in this case "-m int" would be desired:
tcp-response content reject if ! { var(res.size) gt 3800 }
ought to become:
tcp-response content reject if ! { var(res.size) -m int gt 3800 }
This must not be backported, but if a solution is found to at least
detect this exact condition in the generic expression parser and
emit a warning, this could probably help spot configuration bugs.
Link: https://www.mail-archive.com/haproxy@formilux.org/msg41341.html
Cc: Christopher Faulet <cfaulet@haproxy.com>
Cc: Tim Düsterhus <tim@bastelstu.be>
The http_auth_bearer sample fetch can take a header name as parameter,
in which case it will try to extract a Bearer value out of the given
header name instead of the default "Authorization" one. In this case,
the extraction would not have worked because of a misuse of strncasecmp.
This patch fixes this by replacing the standard string functions by ist
ones.
It also properly manages the multiple spaces that could be found between
the scheme and its value.
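For illustration (hypothetical header and variable names):
http-request set-var(txn.bearer) http_auth_bearer(X-Custom-Auth)
http-request deny unless { var(txn.bearer) -m found }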
No backport needed, that's part of JWT which is only in 2.5.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Improve the httpclient reg-tests to test the streaming.
The regtest now sends a big payload to vtest, then receives a payload
from vtest and sends it again.
Add support for HEAD/PUT/POST/DELETE method with the lua httpclient.
This patch uses the httpclient_req_gen() function with a different meth
parameter to implement this.
Also change the reg-test to support a POST request with a body.
httpclient_req_gen() takes a payload argument which can be used to put a
payload in the request. This payload must fit in a single request buffer.
The payload can also be specified by the "body" named parameter within
the lua httpclient.
It is also used within the CLI httpclient when specified as a CLI
payload with "<<".
In order for all the error return values to be distributed on the same
side (instead of surrounding the success error code), the return values
for errors other than a simple verification failure are switched to
negative values. This way the result of the jwt_verify converter can be
compared strictly to 1 as well as relative to 0 (any <= 0 return value is
an error).
The documentation was also modified to discourage conversion of the
return value into a boolean (which would definitely not work).
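For illustration, a strict check could look like this (hypothetical
algorithm and key):
http-request set-var(txn.verified) http_auth_bearer,jwt_verify("HS256","mysecret")
http-request deny unless { var(txn.verified) -m int eq 1 }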
3 scripts are added:
* startup/default_rules.vtc to check configuration parsing
* http-rules/default_rules.vtc to check evaluation of HTTP rules
* tcp-rules/default_rules.vtc to check evaluation of TCP rules
http-after-response rules evaluation must be stopped after an "allow". It
means the frontend ruleset must not be evaluated if an "allow" was performed
in the backend ruleset. Internally, the evaluation must be stopped on an
HTTP_RULE_RES_STOP return value. Only the "allow" action is concerned by
this change.
Thanks to this patch, http-response and http-after-response behave in the
same way.
This patch should be backported as far as 2.2.
Tim reported that a decoding error from the base64 function wouldn't
be matched in case of bad input, and could possibly cause trouble
with -1 being passed in decoded_sig->data. In the case of HMAC+SHA
it is harmless as the comparison is made using memcmp() after checking
for length equality, but in the case of RSA/ECDSA this result is passed
as a size_t to EVP_DigestVerifyFinal() and may depend on the lib's mood.
The fix simply consists in checking the intermediary result before
storing it.
That's precisely what happens with one of the regtests which returned
0 instead of 4 on the intentionally defective token, so the regtest
was fixed as well.
No backport is needed as this is new in this release.
In order for the test to run with OpenSSL 1.0.2 the test will now mostly
use TLSv1.2 and use TLS 1.3 only on some specific tests (covered by
preconditions).
The test is strongly dependent on the way the errors are output by the
SSL library so it is not possible to perform the same checks when using
OpenSSL or LibreSSL. It is then reenabled for OpenSSL (whatever the
version) but still disabled for LibreSSL.
This limitation is added thanks to the new ssllib_name_startswith
precondition check.
The OpenSSL error codes for the same errors are not consistent between
OpenSSL versions. The ssl_errors test needs to be modified to only take
into account a fixed part of those error codes.
This patch focuses on the reason part of the error code by applying a
mask on the error code (whose size varies depending on the lib version).
The log-error-via-logformat option was removed in commit
3d6350e108 and was replaced by a dedicated
error-log-format option. The references to this option need to be
removed from the test's description.
Migrate the httpclient:get() method to named arguments so we can
specify optional arguments.
This allows passing headers as an optional argument, as an array.
The () in the method call must be replaced by {}:
local res = httpclient:get{url="http://127.0.0.1:9000/?s=99",
headers= {["X-foo"] = { "salt" }, ["X-bar"] = {"pepper" }}}
When a filter is attached to a stream, the wrong FLT_END analyzer is added
on the request channel. AN_REQ_FLT_END must be added instead of
AN_RES_FLT_END. Because of this bug, the stream may hang on the filter
release stage.
It seems to be ok for HTTP filters (cache & compression) in HTTP mode. But
when enabled on a TCP proxy, the stream is blocked until the client or the
server timeout expire because data forwarding is blocked. The stream is then
prematurely aborted.
This bug was introduced by commit 26eb5ea35 ("BUG/MINOR: filters: Always set
FLT_END analyser when CF_FLT_ANALYZE flag is set"). The patch must be
backported to all stable versions.
ssl_crt-list_filters.vtc was deactivated because it was not compatible
with previous versions of OpenSSL and it was not possible to filter by
versions.
Activate it again with an openssl_version_atleast(1.1.1)
check.
The ssl_bc_hsk_err sample fetch will need to raise more errors than only
handshake related ones, hence its renaming to the more generic ssl_bc_err.
This patch is required because some handshake failures that should have
been caught by this fetch (verify error on the server side for instance)
were missed. This is caused by a change in TLS1.3 in which the
'Finished' state on the client is reached before its certificate is sent
(and verified) on the server side (see the "Protocol Overview" part of
RFC 8446).
This means that the SSL_do_handshake call is finished long before the
server can verify and potentially reject the client certificate.
The ssl_bc_err fetch will then need to be expanded to catch other types of
errors.
This change is also applied to the frontend fetches (ssl_fc_hsk_err
becomes ssl_fc_err) and to their string counterparts.
Some changes were pushed to improve parsing of the Transfer-Encoding header
and all related stuff. This new script adds some tests to validate
these changes.
This reg-test is heavily inspired by the lua_socket.vtc one.
It replaces the HTTP/1.1 request made manually with a socket object with
an httpclient object.
When a message is parsed and copied into the channel buffer, in
h1_process_demux(), more space is requested if some pending data remain
after the parsing while the channel buffer is not empty. To do so,
CS_FL_WANT_ROOM flag is set. It means the H1 parser needs more space in the
channel buffer to continue. In the stream-interface, when this flag is set,
the SI is considered as blocked on the RX path. It is only unblocked when
some data are sent.
However, it is not accurate because the parsing may be stopped because
there is not enough data to continue. For instance in the middle of a chunk
size. In this case, some data may have been already copied but the parser is
blocked because it must receive more data to continue. If the calling SI is
blocked on RX at this stage when the stream is waiting for the payload
(because http-buffer-request is set for instance), the stream remains stuck
infinitely.
To fix the bug, we must request more space from the app layer only when it is
not possible to copy more data. Actually, this happens when data remain in
the input buffer while the H1 parser is in states MSG_DATA or MSG_TUNNEL, or
when we are unable to copy headers or trailers into a non-empty buffer.
The first condition is quite easy to handle. The second one requires an API
refactoring. h1_parse_msg_hdrs() and h1_parse_msg_tlrs() functions have been
updated. Now it is possible to know when we need more space in the buffer to
copy headers or trailers (-2 is returned). In the H1 mux, a new H1S flag
(H1S_F_RX_CONGESTED) is used to track this state inside h1_process_demux().
This patch is part of a series related to the issue #1362. It should be
backported as far as 2.0, probably with some adaptations. So be careful
during backports.
When the LDAP response is parsed, the message length is not properly
decoded. While it works for LDAP servers encoding it on 1 byte, it does not
work for those using a multi-byte encoding. Among others, Active Directory
servers seem to encode message or element lengths on 4 bytes.
In this patch, we only handle lengths of BindResponse messages encoded on 1,
2 or 4 bytes. In theory, the length may be encoded on any number of bytes
less than 127. But it is useless to make this part too complex. It should be
ok this way.
This patch should fix the issue #1390. It should be backported to all stable
versions. While it should be easy to backport it as far as 2.2, the patch
will have to be totally rewritten for lower versions.
The global table of known variables names can only grow and was designed
for static names that are registered at boot. Nowadays it's possible to
set dynamic variable names from Lua or from the CLI, which causes a real
problem that was partially addressed in 2.2 with commit 4e172c93f
("MEDIUM: lua: Add `ifexist` parameter to `set_var`"). Please see github
issue #624 for more context.
This patch simplifies all this by removing the need for a central
registry of known names, and storing 64-bit hashes instead. This is
highly sufficient given the low number of variables in each context.
The hash is calculated using XXH64() which is bijective over the 64-bit
space thus is guaranteed collision-free for 1..8 chars. Above that the
risk remains around 1/2^64 per extra 8 chars so in practice this is
highly sufficient for our usage. A random seed is used at boot to seed
the hash so that it's not attackable from Lua for example.
There's one particular nit though. The "ifexist" hack mentioned above
is now limited to variables of scope "proc" only, and will only match
variables that were already created or declared, but will now verify
the scope as well. This may affect some bogus Lua scripts and SPOE
agents which used to accidentally work because a similarly named
variable used to exist in a different scope. These ones may need to be
fixed to comply with the doc.
Now we can sum up the situation as this one:
- ephemeral variables (scopes sess, txn, req, res) will always be
usable, regardless of any prior declaration. This effectively
addresses the most problematic change from the commit above that
in order to work well could have required some script auditing ;
- process-wide variables (scope proc) that are mentioned in the
configuration, referenced in a "register-var-names" SPOE directive,
or created via "set-var" in the global section or the CLI, are
permanent and will always accept to be set, with or without the
"ifexist" restriction (SPOE uses this internally as well).
- process-wide variables (scope proc) that are only created via a
set-var() tcp/http action, via Lua's set_var() calls, or via an
SPOE with the "force-set-var" directive), will not be permanent
but will always accept to be replaced once they are created, even
if "ifexist" is present
- process-wide variables (scope proc) that do not exist will only
support being created via the set-var() tcp/http action, Lua's
set_var() calls without "ifexist", or an SPOE declared with
"force-set-var".
This means that non-proc variables do not care about "ifexist" nor
prior declaration, and that using "ifexist" should most often be
reliable in Lua and that SPOE should most often work without any
prior declaration. It may be doable to turn "ifexist" to 1 by default
in Lua to further ease the transition. Note: regtests were adjusted.
Cc: Tim Düsterhus <tim@bastelstu.be>
It is quite common to see in configurations constructions like the
following one:
http-request set-var(txn.bodylen) 0
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen)]
The set-var() rules are almost always duplicated when manipulating
integers or any other value that is mandatory along operations. This is
a problem because it makes the configurations complicated to maintain
and slower than needed. And it becomes even more complicated when several
conditions may set the same variable because the risk of forgetting to
initialize it or to accidentally reset it is high.
This patch extends the var() sample fetch function to take an optional
argument which contains a default value to be returned if the variable
was not set. This way it becomes much simpler to use the variable, just
set it where needed, and read it with a fall back to the default value:
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen,0)]
The default value is always passed as a string, thus it will experience
a cast to the output type. It doesn't seem useful to complicate the
configuration to pass an explicit type at this point.
The vars.vtc regtest was updated accordingly.
Most often "set var" on the CLI is used to set a string, and using only
expressions is not always convenient, particularly when trying to
concatenate variables such as host names and paths.
Now the "set var" command supports an optional keyword before the value
to indicate its type. "expr" takes an expression just like before this
patch, and "fmt" a format string, making it work like the "set-var-fmt"
actions.
The VTC was updated to include a test on the format string.
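For illustration, both forms on the CLI (hypothetical variable names):
set var proc.current_state expr str(primary)
set var proc.instance fmt %[hostname]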
Just like the set-var-fmt action for tcp/http rules, the set-var-fmt
directive in global sections allows to pre-set process-wide variables
using a format string instead of a sample expression. This is often
more convenient when it is required to concatenate multiple fields,
or when emitting just one word.
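For illustration (hypothetical variable names):
global
    set-var-fmt proc.current_state "primary"
    set-var-fmt proc.instance "%[hostname]"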
The set-var() action is convenient because it preserves the input type
but it's a pain to deal with when trying to concatenate values. The
most recurring example is when it's needed to build a variable composed
of the source address and the source port. Usually it ends up like this:
tcp-request session set-var(sess.port) src_port
tcp-request session set-var(sess.addr) src,concat(":",sess.port)
This is even worse when trying to aggregate multiple fields from stick-table
data for example. Due to this a lot of users instead abuse headers from HTTP
rules:
http-request set-header(x-addr) %[src]:%[src_port]
But this requires some careful cleanups to make sure they won't leak, and
it's significantly more expensive to deal with. And generally speaking it's
not clean. Plus it must be performed for each and every request, which is
expensive for this common case of ip+port that doesn't change for the whole
session.
This patch addresses this limitation by implementing a new "set-var-fmt"
action which performs the same work as "set-var" but takes a format string
in argument instead of an expression. This way it becomes pretty simple to
just write:
tcp-request session set-var-fmt(sess.addr) %[src]:%[src_port]
It is usable in all rulesets that already support the "set-var" action.
It is not yet implemented for the global "set-var" directive (which already
takes a string) and the CLI's "set var" command, which would definitely
benefit from it but currently uses its own parser and engine, thus it
must be reworked.
The doc and regtests were updated.
Sometimes it is convenient to remap large sets of URIs to new ones (e.g.
after a site migration). This can be achieved using
"http-request redirect" combined with maps, but one difficulty there is
that non-matching entries will return an empty response. In order to
avoid this, duplicating the operation as an ACL condition ending in
"-m found" is possible but it becomes complex and error-prone while it's
known that an empty URL is not valid in a location header.
This patch addresses this by improving the redirect rules to be able to
simply ignore the rule and skip to the next one if the result of the
evaluation of the "location" expression is empty. However in order not
to break existing setups, it requires a new "ignore-empty" keyword.
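For illustration, with a hypothetical map file, a rule could then look like
this and simply be skipped for URIs missing from the map:
http-request redirect location %[path,map_str(/etc/haproxy/moved.map)] code 301 ignore-empty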
There used to be an ACT_FLAG_FINAL on redirect rules that's used during
the parsing to emit a warning if followed by another rule, so here we
only set it if the option is not there. The http_apply_redirect_rule()
function now returns a 3rd value to mention that it did nothing and
that this was not an error, so that callers can just ignore the rule.
The regular "redirect" rules were not modified however since this does
not apply there.
The map_redirect VTC was completed with such a test and updated to 2.5
and an example was added into the documentation.
Those fetches are used to identify connection errors and SSL handshake
errors on the backend side of a connection. They can for instance be
used in a log-format line as in the regtest.
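As a sketch, using the final fc_err/bc_err names adopted later in this
series (hypothetical format):
log-format "%ci:%cp -> %si:%sp err=%[bc_err]/%[bc_err_str]"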
This option can be used to define a specific log format that will be
used in case of error, timeout, connection failure on a frontend... It
will be used for any log line concerned by the log-separate-errors
option. It will also replace the format of specific error messages
described in section 8.2.6.
If no "error-log-format" is defined, the legacy error messages are still
emitted and the other error logs keep using the regular log-format.
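For illustration (hypothetical formats):
frontend fe_main
    option log-separate-errors
    log-format "%ci %ST %B"
    error-log-format "%ci error=%[fc_err_str]"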
The test that removes servers via the CLI is using Lua to check that
servers referenced in a Lua script cannot be removed. This requires Lua
support to be built in haproxy.
Split the test and create a new one containing only the Lua-relevant
test. Mark it as Lua dependent.
Relax the condition on "delete server" CLI handler to be able to remove
all servers, even non dynamic, except if they are flagged as non
purgeable.
This change is necessary to extend the use cases for dynamic servers
with reload. It's expected that each dynamic server created via the CLI
is manually committed in the haproxy configuration by the user. Dynamic
servers will be present on reload only if they are present in the
configuration file. This means that non-dynamic servers must be allowed
to be removable at runtime.
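For illustration, a removal sequence on the CLI could look like this
(hypothetical names; the server must first be put in maintenance and
drained of its sessions):
set server be_app/srv1 state maint
shutdown sessions server be_app/srv1
del server be_app/srv1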
The dynamic servers removal reg-test has been updated and renamed to
reflect its purpose. A new test is present to check that non-purgeable
servers cannot be removed.
Since commit 8d6c6bd ("Leak-plugging on barriers") VTest has become
stricter in its expectations, making this one fail. The agent-check
test was expecting a close on the server, which normally does not
happen before the server responds. In addition, it was really sending
"hello" (with the quotes) due to the config file syntax, which explains
why the test log reported that '"hell' was received, and complained
that 0x6f ('o') was read instead of a shutdown. This has been fixed
as well by using single-quotes.
There is no need to backport this test as it's only in 2.5.
The abortonclose test was only expecting a close after all server
retries were exhausted, it didn't check for the pending 503, which
fails with new versions of vtest starting with commit 8d6c6bd
("Leak-plugging on barriers").
This may be backported, but carefully in case older versions would
really close without responding.
Commit e1b9e1bb1 ("REGTESTS: Add script to tests TCP to HTTP upgrades")
included a mistake in the TCP->H1->H2 test, it expected a close while
it ought to expect a 400 bad req, which is what the mux returns in this
case. It happens that this used to work fine with older versions of
vtest which see the close regardless of the 400, but since Vtest commit
8d6c6bd ("Leak-plugging on barriers"), this doesn't work anymore.
Let's fix this by expecting the proper response. This should be backported
where this regtest is present, but only after verifying that it still
works; indeed at the time of writing it's uncertain whether an earlier
version used to immediately close.
Most of the SSL sample fetches related to the client certificate were
based on the SSL_get_peer_certificate function which returns NULL when
the verification process failed. This made it impossible to use those
fetches in a log format since they would always be empty.
The patch adds a reference to the X509 object representing the client
certificate in the SSL structure and makes use of this reference in the
fetches.
The reference can only be obtained in ssl_sock_bind_verifycbk which
means that in case of an SSL error occurring before the verification
process ("no shared cipher" for instance, which happens while processing
the Client Hello), we won't ever start the verification process and it
will be impossible to get information about the client certificate.
This patch also allows most of the ssl_c_XXX fetches to return a usable
value in case of connection failure (because of a verification error for
instance) by making the "conn->flags & CO_FL_WAIT_XPRT" test (which
requires a connection to be established) less strict.
Thanks to this patch, a log-format such as the following should return
usable information in case of an error occurring during the verification
process:
log-format "DN=%{+Q}[ssl_c_s_dn] serial=%[ssl_c_serial,hex] \
hash=%[ssl_c_sha1,hex]"
It should answer GitHub issue #693.
This test ensures that h2 pseudo headers are properly checked for invalid
characters and that the host header is ignored if :authority is present. This
is necessary to prevent h2 desync attacks as described here:
https://portswigger.net/research/http2
Add a missing 'rxreq' statement in first server. Without it the test is
unstable. The issue is frequent when running with one thread only.
This should fix github issue #1342.
Write a regtest to validate check support by dynamic servers. Three
different servers are added with various configurations:
- server OK
- server DOWN
- agent-check
Rename the 'dontloglegacyconnerr' option to 'log-error-via-logformat'
which is much more self-explanatory and readable.
Note: only legacy keywords don't use hyphens, it is recommended to
separate words with them in new keywords.
Disable the new ssl_errors.vtc reg-test because it does not work
correctly on the CI since it requires a version of OpenSSL which is
compatible with TLSv1.3 and the ciphersuites keyword.
This reg-test checks that the connection and SSL sample fetches related
to errors are functioning properly. It also tests the proper behaviour
of the default HTTPS log format and of the log-legacy-conn-error option
which enables or disables the output of a special error message in case
of connection failure (otherwise a line following the configured
log-format is output).
Now that we support logic expressions, variables and parenthesis, let's
add a few more tests to check_conditions.vtc. The tests are conditioned
by the version being at least 2.5-dev2 so that it will not cause failures
during a possible later bisect session or if backported.
The test verifies that exported variables are seen, that operator precedence
works as expected, that parenthesis work at least through two levels, that an
empty condition is false while a negative number is true, and that extraneous
chars in an expression, or unfinished strings are properly caught.
The test consists of a sequence of shell commands, but the shell is not
necessarily started with strict errors enabled, so only the last command
provides the verdict. Let's add "set -e" to make it fail on the first
test that fails.
Create a regtest for the 'track' keyword support by dynamic servers.
First, checks are executed to ensure that tracking cannot be activated on
a non-check server or on dynamic servers.
Then, 3 scenarios are written to ensure that the deletion of a dynamic
server with track is properly handled and that other servers in the track
chain are properly maintained.
When a default-server line specified a client certificate to use, the
frontend would not take it into account and would create an empty SSL context,
which would raise an error on the backend side ("peer did not return a
certificate").
This bug was introduced by d817dc733e in
which the SSL contexts are created earlier than before (during the
default-server line parsing) without setting it in the corresponding
server structures. It then made the server create an empty SSL context
in ssl_sock_prepare_srv_ctx because it thought it needed one.
It was raised on redmine, in Bug #3906.
It can be backported to 2.4.
This test ensures that http scheme-based normalization is properly
applied on target URL and host header. It uses h2 clients as it is not
possible to specify an absolute url for h1 vtc clients.
As specified by the MQTT specification (MQTT-3.1.3-6), the client ID may be
empty. That means the length of the client ID string may be 0. However, the
MQTT parser does not support empty strings.
So, to fix the bug, the mqtt_read_string() function may now parse empty
strings. 2 bytes must be found to decode the string length, but the length
may be 0 now. It is the caller responsibility to test the string emptiness
if necessary. In addition, in mqtt_parse_connect(), the client ID may be
empty now.
This patch should partly fix the issue #1310. It must be backported to 2.4.
Parsing of too long strings (> 127 characters) was buggy because of a wrong
cast on the length bytes. To fix the bug, we rely on mqtt_read_2byte_int()
function. This way, the string length is properly decoded.
This patch should partly fix the issue #1310. It must be backported to 2.4.
Correct the typo in the parameter used to update the 'maxconn' via
agent-check. The test is also completed to detect the update of maxconn
using the CLI 'show stats' command.
Create a new regtest to test SSL support for dynamic servers.
The first step of the test is to create the ca-file via the CLI. Then a
dynamic server is created with the ssl option using the ca-file. A
client request is made through it to achieve the test.
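A sketch of such a sequence on the CLI (hypothetical names; the ca-file
payload is elided):
new ssl ca-file dyn-ca.crt
set ssl ca-file dyn-ca.crt <PEM payload>
commit ssl ca-file dyn-ca.crt
experimental-mode on
add server be_app/srv1 192.0.2.1:443 ssl ca-file dyn-ca.crt verify required
enable server be_app/srv1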
RFC 7231#5.3.4 makes a difference between a completely missing
'accept-encoding' header and an 'accept-encoding' header without any values.
This case was already correctly handled by accident, because an empty accept
encoding does not match any known encoding. However this resulted in the
'other' encoding being added to the bitmap. Usually this also succeeds in
serving cached responses, because the cached response likely has no
'content-encoding', thus matching the identity case instead of not serving the
response, due to the 'other' encoding. But it's technically not 100% correct.
Fix this by special-casing 'accept-encoding' values with a length of zero and
extend the test to check that an empty accept-encoding is correctly handled.
Due to the reasons given above the test also passes without the change in
cache.c.
Vary support was added in HAProxy 2.4. This fix should be backported to 2.4+.
To perform server resolution, the resolver's lock is first acquired then
the server's lock when necessary. However, when the fqdn is set via the CLI,
the opposite is performed. So, it is possible to experience an ABBA
deadlock.
To fix this bug, the server's lock is acquired and released for each
subcommand of "set server" with an exception when the fqdn is set. The
resolver's lock is first acquired. Of course, this means we must be sure to
have a resolver to lock.
This patch must be backported as far as 1.8.
This one was deprecated in 2.3 and marked for removal in 2.5. It suffers
from too many limitations compared to threads, and prevents some improvements
from being engaged. Instead of a bypassable startup error, there is now
a hard error.
The parsing code was removed, and a few obvious cases were as well.
The code is deeply rooted at certain places (e.g. "for" loops iterating
from 0 to nbproc) so it will not be that trivial to remove everywhere.
The "bind" and "bind-process" parsers will have to be adjusted, though
maybe not completely changed if we later want to support thread groups
for large NUMA machines. Some stats socket restrictions were removed,
and the doc was updated according to what was done. A few places in the
doc still refer to nbproc and will have to be revisited. The master-worker
code also refers to the process number to distinguish between master and
workers and will have to be carefully adjusted. The MAX_PROCS macro was
reset to 1, this will at least reduce the size of some remaining arrays.
Two regtests were depending on this directive, one with an explicit
"nbproc 1" and another one testing the master's CLI using nbproc 4.
Both were adapted.
The `show ssl ocsp-response` feature is not available with BoringSSL,
but we don't have a way to disable this feature only with BoringSSL on
the CI. Disable the reg-test until we do.
This file adds tests for the new "show ssl ocsp-response" command and
the new "show ssl cert foo.pem.ocsp" and "show ssl cert *foo.pem.ocsp"
special cases. They are all used to display information about an OCSP
response, committed or not.
This patch adds the `-cc` (check condition) argument to evaluate conditions on
startup and return the result as the exit code.
As an example this can be used to easily check HAProxy's version in scripts:
haproxy -cc 'version_atleast(2.4)'
This resolves GitHub issue #1246.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Since the commit 5e702fcad ("MINOR: http-ana: Use -1 status for client
aborts during queuing and connect"), -1 status is reported in the log
message when the client aborts during queuing and
connect. The http_abortonclose.vtc script must be updated accordingly.
This vtc tests the "new ssl crl-file" which allows to create a new empty
CRL file that can then be set through a "set+commit ssl crl-file"
command pair. It also tests the "del ssl crl-file" command which allows
to delete an unused CRL file.
This vtc tests the "new ssl ca-file" which allows to create a new empty
CA file that can then be set through a "set+commit ssl ca-file" command
pair. It also tests the "del ssl ca-file" command which allows to delete
an unused CA file.
This patch adds the "show ssl ca-file [<cafile>[:index]]" CLI command.
This command can be used to display the list of all the known CA files
when no specific file name is specified, or to display the details of a
specific CA file when a name is given. If an index is given as well, the
command will only display the certificate having the specified index in
the CA file (if it exists).
The details displayed for each certificate are the same as the ones
showed when using the "show ssl cert" command on a single certificate.
This fixes a subpart of GitHub issue #1057.
The "abort" command aborts an ongoing transaction started by a "set ssl
ca-file" command. Since the updated CA file data is not pushed into the
cafile tree until a "commit ssl ca-file" call is performed, the abort
command simply clears the new cafile_entry that was stored in the
cafile_transaction.
This fixes a subpart of GitHub issue #1057.
Some regtests involve multiple requests from multiple clients, which can
be dispatched as multiple requests to a server. It turns out that the
idle connection sharing works so well that very quickly few connections
are used, and regularly some of the remaining idle server connections
time out at the moment they were going to be reused, causing those random
"HTTP header incomplete" traces in the logs that make them fail often. In
the end this is only an artefact of the test environment.
And indeed, some tests like normalize-uri which perform a lot of reuse
fail very often, about 20-30% of the time in the CI, and 100% of the
time in local when running 1000 tests in a row. Others like ubase64,
sample_fetches or vary_* fail less often but still a lot in tests.
This patch addresses this by adding "tune.idle-pool.shared off" to all
tests which have at least twice as many requests as clients. It proves
very effective as no single error happens on normalize-uri anymore after
10000 tests. Also 100 full runs of all tests yield no error anymore.
One test is tricky, http_abortonclose, it used to fail ~10 times per
1000 runs and with this workaround still fails once every 1000 runs.
But the test is complex and there's a warning in it mentioning a
possible issue when run in parallel due to a port reuse.
normalize-uri http rule is marked as experimental, so it cannot be
activated without the global 'expose-experimental-directives'. The
associated vtc is updated to be able to use it.
This script tests the abortonclose option for HTTP/1 clients only. It may be
backported as far as 2.0. But on 2.2 and prior, the syslog part must be
adapted to catch log messages emitted by the proxy during HAProxy
startup. The following lines must be added:
recv
expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy fe1 started."
recv
expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy fe2 started."
The map_redirect test already tests for "show map", "del map" and
"clear map" but doesn't have any "add map" command. Let's add some
trivial ones involving one regular entry and two other ones added as
payload, checking they are properly returned.
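For illustration (hypothetical map reference; the payload form is
terminated by an empty line):
add map #0 key1 value1
add map #0 <<
key2 value2
key3 value3
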
The error output for dynamic server creation with an invalid lb algo has
changed since the previous commit:
MINOR: server: fix doc/trace on lb algo for dynamic server creation
The vtest regex should have been updated as well to match it.
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory which renders those segments redundant.
See GitHub Issue #714.
Implement a new CLI command 'del server'. It can be used to remove a
dynamically added server. Only servers in maintenance mode can be
removed, and only without pending/active/idle connections.
Add a new reg-test for this feature. The scenario of the reg-test needs
to first add a dynamic server. It is then deleted and a client is used
to ensure that the server is no longer joinable.
The management doc is updated with the new command 'del server'.
This patch renames all existing uri-normalizers into a more consistent naming
scheme:
1. The part of the URI that is being touched.
2. The modification being performed as an explicit verb.
This normalizer merges `../` path segments with the preceding segment, removing
both the preceding segment and the `../`.
Empty segments do not receive special treatment. The `merge-slashes` normalizer
should be executed first.
See GitHub Issue #714.
With the json_query converter, a JSON value can be extracted from a header
or the body of the request and saved to a variable.
This converter makes it possible to handle some JSON workload
to route requests to different backends.
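For illustration (hypothetical names; buffering the body is required
before inspecting it):
option http-buffer-request
http-request set-var(txn.kind) req.body,json_query('$.kind')
use_backend be_v2 if { var(txn.kind) -m str v2 }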
ub64dec and ub64enc are the base64url equivalent of b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
The RFC 1421 mention in the base64.c file is deprecated, so it was replaced
with RFC 4648, to which the existing converters, base64/b64dec, still apply.
Example:
HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
set_ssl_cert_bundle.vtc requires at least OpenSSL 1.1.0 and we don't
have any way to check this when launching the reg-tests suite.
Mark the reg-test as broken since it will fail on old versions of
OpenSSL and LibreSSL.
This test loads a configuration which uses multi-certificates bundle and
tries to change them over the CLI.
Could be backported as far as 2.2, however the 2.2 version must be
adapted to commit the bundle and not each certificate individually.
set var <name> <expression>
Allows to set or overwrite the process-wide variable 'name' with the result
of expression <expression>. Only process-wide variables may be used, so the
name must begin with 'proc.' otherwise no variable will be set. The
<expression> may only involve "internal" sample fetch keywords and converters
even though the most likely useful ones will be str('something') or int().
Note that the command line parser doesn't know about quotes, so any space in
the expression must be preceded by a backslash. This command requires levels
"operator" or "admin". This command is only supported on a CLI connection
running in experimental mode (see "experimental-mode on").
Just like for "set-var" in the global section, the command uses a temporary
dummy proxy to create a temporary "set-var(name)" rule to assign the value.
The reg test was updated to verify that an updated global variable is properly
reflected in subsequent HTTP responses.
Process-wide variables can now be displayed from the CLI using "get var"
followed by the variable name. They must all start with "proc." otherwise
they will not be found. The output is very similar to the one of the
debug converter, with a type and value being reported for the embedded
sample.
This command is limited to clients with the level "operator" or higher,
since it can possibly expose traffic-related data.
This reg-test tests "set-var" in the global section, with some overlapping
variables and using a few samples and converters, then at the TCP and HTTP
levels using proc/sess/req variables.
If the first active line of a crt-list file is also the first mentioned
certificate of a frontend that does not have the strict-sni option
enabled, then its certificate will be used as the default one. We then
do not want this instance to be removable since it would make a frontend
lose its default certificate.
Considering that a crt-list file can be used by multiple frontends, and
that its first mentioned certificate can be used as default certificate
for only a subset of those frontends, we do not want the line to be
removable for some frontends and not the others. So if any of the ckch
instances corresponding to a crt-list line is a default instance, the
removal of the crt-list line will be forbidden.
It can be backported as far as 2.2.
The default SSL_CTX used by a specific frontend is the one of the first
ckch instance created for this frontend. If this instance has SNIs, then
the SSL context is linked to the instance through the list of SNIs
contained in it. If the instance does not have any SNIs though, then the
SSL_CTX is only referenced by the bind_conf structure and the instance
itself has no link to it.
When trying to update a certificate used by the default instance through
a cli command, a new version of the default instance was rebuilt but the
default SSL context referenced in the bind_conf structure would not be
changed, resulting in a buggy behavior in which depending on the SNI
used by the client, it could either be served the new version of the updated
certificate or the original one.
This patch adds a reference to the default SSL context in the default
ckch instances so that it can be hot swapped during a certificate
update.
This should fix GitHub issue #1143.
It can be backported as far as 2.2.
If an unknown CA file was first mentioned in an "add ssl crt-list" CLI
command, it would result in a call to X509_STORE_load_locations which
performs a disk access which is forbidden during runtime. The same would
happen if a "ca-verify-file" or "crl-file" was specified. This was due
to the fact that the crt-list file parsing and the crt-list related CLI
commands parsing use the same functions.
The patch simply adds a new parameter to all the ssl_bind parsing
functions so that they know if the call is made during init or by the
CLI, and the ssl_store_load_locations function can then reject any new
cafile_entry creation coming from a CLI call.
It can be backported as far as 2.2.
Add an empty expect statement after the 'enable server' cli command.
This ensures that the command has been properly handled by haproxy and
its processing is over.
It should fix the unstable behavior of the test which causes reports of
503 even after the server has been enabled.
This should fix the github issue #1188.
Allow to specify the mux proto for a dynamic server. It must be
compatible with the backend mode to be accepted. The reg-test has been
extended for this error case.
Write a regtest for the cli command 'add server'. This test will execute
some invalid commands and validate the reported errors. A client will
then try to connect to a dynamic server just created and activated.
The recently introduced Financial Information eXchange (FIX)
converters have some hard coded tags based on the specification that
were misspelled. Specifically, SenderComID and TargetComID should
be SenderCompID and TargetCompID according to the specification [1][2].
This patch updates all references, which includes the converters
themselves, the regression test, and the documentation.
[1] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag49.html
[2] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag56.html
Add two new regtests which check the behavior of http-reuse when the
connection target is not a server. More specifically check the dispatch
and transparent backend. In these cases, the behavior should be similar
to http-reuse never mode.
As we previously briefly broke labels handling, test them to make sure
we don't introduce regressions in the future.
See also commit 040b1195f7 ("BUG/MINOR:
contrib/prometheus-exporter: Restart labels dump at the right pos") for
reference
Signed-off-by: William Dauchy <wdauchy@gmail.com>
In order to make sure we detect when we change default behaviour for
some metrics, test the NaN value when it is expected.
Those metrics were listed since our last rework as their default value
changed, unless the appropriate config is set.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Disable the ssl-reuse for the sni test on the http_reuse_conn_hash vtc. This
seems to be the origin of a crash with LibreSSL environments from 3.2.2
up to 3.3.1 included.
For now, it is not determined if the root cause is in haproxy or
LibreSSL.
Please see GitHub issue #1115 for all the details.
Try to fix http_reuse_conn_hash proxy protocol for both single and
multi-threaded environments. Schedule a new set of requests to be sure that
takeover will be functional even with pool-low-count set to 2.
This was a missing piece for a while now, even though it was planned. This
patch adds listen stats.
Nothing in particular but we make use of the status helper previously
added. `promex_st_metrics` diff also looks scary, but I had to realign
all lines.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Logical follow-up to the CLI commands addition, so that the server state
file stays compatible with the changes made at runtime; use the previously
added helper to load server attributes.
Also alloc a specific chunk to avoid mixing with other called functions
using it.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
LibreSSL 3.3.0 is stricter on the sni field and fails if it contains
illegal characters such as the underscore. Replace the sni field with a
proper name to pass the test in the CI environment.
Use the proxy protocol frame if proxy protocol is activated on the
server line. Do not add anymore these connections in the private list.
If some requests are made with the same proxy fields, they can reuse
the idle connection.
The reg-test proxy_protocol_send_unique_id must be adapted as it
relied on the side-effect behavior that every request from a same
connection reused a private server connection. Now, a new connection is
created as expected if the proxy protocol fields differ.
Historically we've been counting lots of client-triggered events in stick
tables to help detect misbehaving ones, but we've been missing the same on
the server side, and there's been repeated requests for being able to count
the server errors per URL in order to precisely monitor the quality of
service or even to avoid routing requests to certain dead services, which
is also called "circuit breaking" nowadays.
This commit introduces http_fail_cnt and http_fail_rate, which work like
http_err_cnt and http_err_rate in that they respectively count events and
their frequency, but they only consider server-side issues such as network
errors, unparsable and truncated responses, and 5xx status codes other
than 501 and 505 (since these ones are usually triggered by the client).
Note that retryable errors are purposely not accounted for, so that only
what the client really sees is considered.
With this it becomes very simple to put some protective measures in place
to perform a redirect or return an excuse page when the error rate goes
beyond a certain threshold for a given URL, and give more chances to the
server to recover from this condition. Typically it could look like this
to bypass a URL causing more than 10 failures per second:
stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
http-request track-sc0 base # track host+path, ignore query string
http-request return status 503 content-type text/html \
lf-file excuse.html if { sc0_http_fail_rate gt 10 }
A more advanced mechanism using gpt0 could even implement high/low rates
to disable/enable the service.
Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
I saw some people falling back to the unix socket to collect some data
they could not find in the prometheus exporter. One of them is basic
info from stick tables (used/size).
I do not plan to extend it more for now; keys are quite a mess to
handle.
This should resolve github issue #1008.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
As noticed by Christopher, I messed up the version fix in commit
cb4ed02ef ("REGTESTS: mark http-check-send.vtc as 2.4-only"), as while
looking up the commit introducing the change I accidentally reverted it.
Let's reinsert the contents of the file prior to that fix, except the
version, of course.
Commit 9eea56009 ("REGTESTS: add tests for the xxh3 converter") introduced
the xxh3 to the tests thus made it incompatible with 2.3 and older, let's
upgrade the version requirement.
We can currently change the check-port using the cli command `set server
check-port`, but there is a consistency issue when using the server
state.
This patch aims to fix this problem, and is also good preparation work
to get rid of the checkport flag, so we are able to know when check-port
was set by the config.
I am fully aware this is not making github #953 move forward; I however
think this might be acceptable while waiting for a proper solution, and
it resolves the consistency problem faced with port settings.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Flush the SSL session cache when updating a certificate which is used on a
server line. This prevents connections from being established with a cached
session which was using the previous SSL_CTX.
This patch also replaces the ha_barrier with a thread_isolate() since there
are more operations to do. The reg-test was also updated to remove the
'no-ssl-reuse' keyword which is now unneeded.
The "abort ssl cert" command is buggy and removes the current ckch store,
and instances, leading to SNI removal. It must only removes the new one.
This patch also adds a check in set_ssl_cert.vtc and
set_ssl_server_cert.vtc.
Must be backported as far as 2.2.
In a previous commit this test was disabled because I thought the
feature was broken, but in fact it is the test which is broken.
Indeed the connection between the server and the client was not
renegotiated and was using the SSL cache or a ticket. To work
correctly, these 2 features must be disabled or a new connection must be
established after the ticket timeout, which is too long for a regtest.
Also a "nbthread 1" was added as it was easier to reproduce the problem
with it.
Now, some conformance tests are performed when an HTTP connection is
upgraded to websocket. This makes the http-check-send.vtc script fail for
the backend <be6_ws>. Because the purpose of this health-check is to pass a
"Connection: Upgrade" header on an http-check send rule, we may use a dummy
protocol instead.
Test the conformance of websocket rfc6455 in haproxy. In particular, if
a missing key is detected on a h1 message, haproxy must close the
connection.
Note that the case h2 client/h1 srv is not tested as I did not find a
way to calculate the key on the server side.
The EOM block will be removed on the 2.4, thus this script will be broken on
this version. Now it is skipped for this version. It remains valid for 2.3
and 2.2.
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part)
It is only a problem on the response path because the request payload length
is always known. But when a filter is registered to analyze the response
payload, the filtering may hang if the server closes just after the headers.
The root cause of the bug comes from an attempt to allow the filters to not
immediately forward the headers if necessary. A filter may choose to hold
the headers by not forwarding any bytes of the payload. For a message with
no payload but a known payload length, there is always an EOM block to
forward. Thus holding the EOM block for bodyless messages is a good way to
also hold the headers. However, for messages with an unknown payload length,
there is no EOM block finishing the message, but only a SHUTR flag on the
channel to mark the end of the stream. If there is no payload when it
happens, there is no payload at all to forward. In the filters API, it is
wrongly detected as a condition to not forward the headers.
Because it is not the most used feature and not the obvious one, this patch
introduces another way to hold the message headers at the beginning of the
forwarding. A filter flag is added to explicitly say the headers should be
held. A filter may choose to set the STRM_FLT_FL_HOLD_HTTP_HDRS flag and not
forward anything to hold the headers. This flag is removed at each call, thus
it must always be explicitly set by filters. This flag is only evaluated if
no byte has ever been forwarded because the headers are forwarded with the
first byte of the payload.
reg-tests/filters/random-forwarding.vtc reg-test is updated to also test
responses with unknown payload length (with and without payload).
This patch must be backported as far as 2.0.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
there does not seem to be a reason to close connections after each
request; reflect that in the tests by doing all requests within the same
client.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Add a base test to start with something, even though this is not
necessarily complete.
Also make use of the recent REQUIRE_SERVICE option to exclude it from
the test list if it was not built with prometheus included.
note: I thought it was possible to send multiple requests within the
same client, but I'm getting "HTTP header is incomplete" from the second
request
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list, it is always
an array, so first check can be removed
- the expression parser guarantees that we can't have anything else,
  because we mentioned the json converter takes a mandatory string
  argument. Thus the test on `ARGT_STR` can be removed as well
- also add a blank line between the enum and the function declaration
In order to validate it, add a simple json test covering very simple
cases, which can be improved in the future:
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
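For instance, a rule along these lines (the header name is purely
illustrative, not part of the patch) exercises the converter with the
utf8s input code:

  http-request set-header x-ua-json %[req.hdr(user-agent),json(utf8s)]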
Signed-off-by: William Dauchy <wdauchy@gmail.com>
add base support for url encode following RFC3986, supporting `query`
type only.
- add test checking url_enc/url_dec/url_enc
- update documentation
- leave the door open for future changes
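As a usage sketch (the header name is illustrative), encoding a sample
with the default query type could look like:

  http-request set-header x-path-enc %[path,url_enc]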
this should resolve github issue #941
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This patch fixes GitHub Issue #988. Commit ce9e7b2521
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that
the cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.
The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client had two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string. So another client that only managed
gzip would have been sent the cached resource, even if it was a br resource.
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
A request that has more than 16 encodings will be considered as an
illegitimate request and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
Add a new check for a pseudo-websocket handshake, specifying the
Connection header to verify if it is properly handled by http-check send
directive. Also check that default http/1.1 checks have the header
Connection: close.
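A hedged sketch of such a check (backend name, address and header values
are illustrative):

  backend be_ws
    option httpchk
    http-check send meth GET uri / ver HTTP/1.1 hdr host www hdr connection upgrade hdr upgrade websocket
    server srv0 192.168.0.10:8080 check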
In case of successful unsafe method on a stored resource, the cached entry
must be invalidated (see RFC7234#4.4).
A "non-error response" is one with a 2xx (Successful) or 3xx (Redirection)
status code.
This implies that the primary hash must now be calculated on requests
that have an unsafe method (POST or PUT for instance) so that we can
disable the corresponding entries when we process the response.
When a response has an Age header (filled in by another cache on the
message's path) that is greater than its defined maximum age (extracted
either from cache-control directives or an expires header), it is
already stale and should not be cached.
This is simply txn_get_priv.vtc with the loading made using
lua-load-per-thread, allowing all threads to run independently.
There's nothing global nor thread-specific in this test, which makes
an excellent example of something that should work out of the box.
Four concurrent clients are initialized with 4 loops each so as to
give a little chance to various threads to run concurrently.
Turn the "Accept-Encoding" value to lower case before processing it.
Calculate the CRC on every token instead of a sorted concatenation of
them all (in order to avoid copying them) then XOR all the CRCs into a
single hash (while ignoring duplicates).
This patch adds a new logging variable '%HPO' for logging HTTP path only
(without query string) from relative or absolute URI.
For example:
log-format "hpo=%HPO hp=%HP hu=%HU hq=%HQ"
GET /r/1 HTTP/1.1
=>
hpo=/r/1 hp=/r/1 hu=/r/1 hq=
GET /r/2?q=2 HTTP/1.1
=>
hpo=/r/2 hp=/r/2 hu=/r/2?q=2 hq=?q=2
GET http://host/r/3 HTTP/1.1
=>
hpo=/r/3 hp=http://host/r/3 hu=http://host/r/3 hq=
GET http://host/r/4?q=4 HTTP/1.1
=>
hpo=/r/4 hp=http://host/r/4 hu=http://host/r/4?q=4 hq=?q=4
Historically, the input and output buffers of a check are allocated by hand
during the startup, with a specific size (not necessarily the same as the
other buffers). But since the recent refactoring of the checks to rely
exclusively on the tcp-checks and to use the underlying mux layer, this part
is totally buggy. Indeed, because these buffers are now passed to a mux,
they may be swapped if a zero-copy is possible. In fact, for now it is
only possible in h2_rcv_buf(). Thus the bug concretely only exists if an h2
health-check is performed. But, it is a latent bug for other muxes.
Another problem is the size of these buffers. Because it may differ from
the other buffers' size, it might be a source of bugs.
Finally, for configurations with hundreds of thousands of servers, having 2
buffers per check always allocated may be an issue.
To fix the bug, we now allocate these buffers when required using the buffer
pool. Thus not-running checks don't waste memory and muxes may swap them if
possible. The only drawback is that the check buffers now always have the
same size as the buffers used by the streams. This indirectly deprecates
the "tune.chksize" global option.
In addition, the http-check regtest has been updated to perform some h2
health-checks.
Many thanks to @VigneshSP94 for their help on this bug.
This patch should solve the issue #936. It relies on the commit "MINOR:
tcpcheck: Don't handle anymore in-progress send rules in tcpcheck_main".
Both must be backported as far as 2.2.
The cache section's process-vary option takes a 0 or 1 value to disable
or enable the vary processing.
When disabled, a response containing such a header will never be cached.
When enabled, we will calculate a preliminary hash for a subset of request
headers on all the incoming requests (which might come with a CPU cost); it
will be used to build a secondary key for a given request (see RFC 7234#4.1).
The default value is 0 (disabled).
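For example, a cache section enabling it might look like this (the cache
name and sizes are illustrative):

  cache my-cache
    total-max-size 4
    max-age 60
    process-vary 1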
Calculate a preliminary secondary key for every request we see so that
we can have a real secondary key if the response is cacheable and
contains a manageable Vary header.
The cache's ebtree is now allowed to have multiple entries with the same
primary key. Two of those entries will be distinguished thanks to
secondary keys stored in the cache_entry (based on hashes of a subset of
their headers).
When looking for an entry in the cache (cache_use), we still use the
primary key (built the same way as before), but in case of match, we
also need to check if the entry has a vary signature. If it has one, we
need to perform an extra check based on the newly built secondary key.
We will only be able to forge a response out of the cache if both the
primary and secondary keys match with one of our entries. Otherwise the
request will be forwarded to the server.
This patch adds the -m flag, which allows specifying the header name
matching method when deleting headers from an http request/response.
Currently beg, end, sub, str and reg are supported.
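For example (header names are illustrative):

  http-request del-header x-private- -m beg
  http-response del-header ^(x-debug|x-trace)$ -m reg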
This is related to GitHub issue #909
As indicated in issue #907 it very frequently fails on FreeBSD, and
looking at it, it's broken in multiple ways. It relies on log ordering
between two layers, the first one being allowed to support h2. Given
that on FreeBSD it usually ends up in timeouts, it's very likely that
for some reason one frontend logs before the other one or that curl
uses h2 instead of h1 there, and that the log instance waits forever
for its lines.
Usually it works fine when run locally though, so let's not remove it
and mark it as broken instead so that it can still be used when relevant.
in the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:
backend be_foo
default-server ssl
server srv0 127.0.0.1:6011 weight 1 no-ssl
show servers state:
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1
where srv0 can switch to ssl later during the runtime:
set server be_foo/srv0 ssl on
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
while looking at the `url_dec` implementation I realised there was not
yet a simple test to avoid future regressions.
This one tests simple cases, including the "+" behaviour depending
on the argument passed to `url_dec`.
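For reference, a sketch of the two behaviours (variable names are
illustrative):

  # '+' is kept as-is by default
  http-request set-var(txn.raw) url_param(q),url_dec
  # with the in_form argument set to 1, '+' is decoded as a space
  http-request set-var(txn.form) url_param(q),url_dec(1)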
Signed-off-by: William Dauchy <wdauchy@gmail.com>
There is apparently a race in this test that would require relying on
haproxy's output to make it reliably work, but currently vtest doesn't
have this option. Let's mark it broken again to avoid polluting the CI.
This is discussed in github issue #950.
Three filters are used. The compression filter is enclosed by two trace
filters, both with random forwarding enabled. An HTTP test is performed,
but also a TCP test using an HTTP tunnel.
Add a reg-test verifying the fix in dea7c209f8.
Some parts of the configuration used in the test were taken from the
initial bug report from Maciej.
Should be backported together with dea7c209f8
(all stable versions).
Co-authored-by: Maciej Zdeb <maciej@zdeb.pl>
Do not cache responses that do not have an explicit expiration time
(s-maxage or max-age Cache-Control directives or Expires header) or a
validator (ETag or Last-Modified headers) anymore, as suggested in
RFC 7234#3.
The TX_FLAG_IGNORE flag is used instead of the TX_FLAG_CACHEABLE so as
not to change the behavior of the checkcache option.
This regtest requires a version of OpenSSL which supports the
ClientHello callback, which is only the case for recent SSL libraries
(openssl 1.1.1).
This was reported in issue #944.
In issue #940, it was reported that the crt-list does not work correctly
anymore. Indeed, when inserting a crt-list line which uses a certificate
previously seen in the crt-list, this one won't be inserted in the SNI
list and will be silently ignored.
This bug was introduced by commit 47da821 "MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist".
This patch also includes a reg-test which tests this issue.
This bugfix must be backported in 2.3.
This test checks that bugs #818 and #810 are fixed.
It tests that there is no inconsistency with multiple certificate types
and that the exclusion of the certificate works correctly with a negative
filter.
This new script tests mqtt_is_valid() and mqtt_get_field_value() converters used
to validate and extract information from a MQTT (Message Queuing Telemetry
Transport) message.
This new script tests fix_is_valid() and fix_tag_value() converters used to
validate and extract information from a FIX (Financial Information eXchange)
message.
William noticed that the real issue in the abns test was that it was
failing to rebind some listeners, issues which were addressed in the
previous commits (in some cases, the master would even accept traffic
on the worker's socket which was not properly disabled, causing some
of the strange error messages).
The other issue documented in commit 2ea15a080 was the lack of
reliability when the test was run in parallel. This is caused by the
abns socket which uses a hard-coded address, making all tests run in
parallel step on each other's toes. Since vtest cannot provide
abns sockets, we're instead concatenating the number of the listening
port that vtest allocated for another frontend to the abns path, which
guarantees to make them unique in the system.
The test works fine in all cases now, even with 100 in parallel.
When no Cache-Control max-age or s-maxage information is present in a
cached response, we need to parse the Expires header value (RFC 7234#5.3).
An invalid Expires date value or a date earlier than the reception date
will make the cache_entry stale upon creation.
For now, the Cache-Control and Expires headers are parsed after the
insertion of the response in the cache so even if the parsing of the
Expires results in an already stale entry, the entry will exist in the
cache.
The res.cache_hit sample fetch returns a boolean which is true when the
HTTP response was built out of a cache. The cache's name is returned by
the res.cache_name sample fetch.
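For instance, one could expose the cache status to the client this way
(the header names are illustrative):

  http-response set-header x-cache hit if { res.cache_hit }
  http-response set-header x-cache-name %[res.cache_name] if { res.cache_hit }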
This resolves GitHub issue #900.
If a client sends a conditional request containing an If-Modified-Since
header (and no If-None-Match header), we try to compare the date with
the one stored in the cache entry (coming either from a Last-Modified
header, or a Date header, or corresponding to the first response's
reception time). If the request's date is earlier than the stored one,
we send a "304 Not Modified" response back. Otherwise, the stored is sent
(through a 200 OK response).
This resolves GitHub issue #821.
This reg-test tests the "set ssl cert" command the same way the
set_ssl_cert.vtc does, but with separate key/crt files and with the
ssl-load-extra-del-ext.
It introduces new .key/.crt files that contain the same pair as the
existing .pem.
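As a reminder of the mechanism being tested, a minimal sketch (paths are
illustrative): with ssl-load-extra-del-ext, the extension is stripped
before looking for the extra files, so site.key is picked up next to
site.crt:

  global
    ssl-load-extra-del-ext

  frontend fe
    bind :443 ssl crt /etc/haproxy/certs/site.crt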
Test that if-none-match header is properly taken into account and that
when the conditions are fulfilled, a "304 Not Modified" response can be
sent to the client.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Its sole remaining purpose was to display "proxy foo started", which
has little benefit and pollutes output for those with plenty of proxies.
Let's remove it now.
The VTCs were updated to reflect this, because many of them had explicit
counts of dropped lines to match this message.
This is tagged as MEDIUM because some users may be surprised by the
loss of this quite old message.
This test is inherently racy. It regularly pops up on the CI, and I've
spent one hour chasing a bug that apparently doesn't exist, just because
I'm running it 10 times in a row and it reports from 4 to 8 failures
when built at -O2 and generally even more at -O0. The logs are very
confusing, often reporting that it failed with status 0, with nothing
else wrong. I suspect it might sometimes be the shell command that fails
if it executes faster than haproxy finishes starting up, which would
also explain the relation with the optimization level. E.g:
> Testing with haproxy version: 2.2.0
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.009) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.008) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> 6 tests failed, 0 tests skipped, 4 tests passed
Some of the failures include this, suggesting that some barriers could
help:
---- h1 haproxy h1 PID file check failed:
Could not read PID file '/tmp/haregtests-2020-10-09_11-19-40.kgsDB4/vtc.30539.04dbea7f/h1/pid
Since it has been causing false positives and consumed way more
troubleshooting time than it saved, let's mark it as broken so that it
doesn't waste more time. We can bring it back when someone manages to
figure what the problem is.
Since the health-check refactoring in the 2.2, the checks through a socks4 proxy
are broken. To fix this bug, CO_FL_SOCKS4 flag must be set on the connection
before calling the connect() callback function, because this flag is checked
to use the right destination address. The same is done for the
CO_FL_SEND_PROXY flag for consistency.
A reg-test has been added to test the "check-via-socks4" directive.
This patch must be backported to 2.2.
We used to set it to ${h1_px_addr} but it randomly fails on certain
hosts (FreeBSD and OSX) where the address is surprisingly set to "::1"
while the Host field contains 127.0.0.1 (hence two different address
families). While this is likely a minor issue in vtest, we don't need
to depend on this and can easily hard-code 127.0.0.1 which is already
used in other tests.
This adds "balance-rr" to test round robin, "balance-uri" to test the
default balance-uri method, and "balance-uri-path-only" which mixes H1
and H2 through "balance uri path-only" and verifies that they reach
the same server.
Note that for the latter, "proto h2" explicitly had to be placed on
the listening socket, otherwise it would time out. This may indicate an
issue in the H1->H2 upgrade, depending on how the H2 preface is sent.
iif() takes a boolean as input and returns one of the two argument
strings depending on whether the boolean is true.
This converter most likely is most useful to return the proper scheme
depending on the value returned by the `ssl_fc` fetch, e.g. for use within
the `x-forwarded-proto` request header.
However it can also be useful for use within a template that is sent to
the client using `http-request return` with a `lf-file`. It allows the
administrator to implement a simple condition, without needing to prefill
variables within the regular configuration using `http-request
set-var(req.foo)`.
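For example, the x-forwarded-proto case mentioned above could be written
as:

  http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)]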
This new script tests set-path/replace-path and set-pathq/replace-pathq
rules. It also tests path and pathq sample fetches.
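For reference, hedged examples of the kind of rules being tested (the
patterns are illustrative):

  http-request replace-path (.*) /prefix\1
  http-request set-pathq "%[path]?app=1"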
This patch should be backported to 2.2 if corresponding keywords are also
backported.
A few regtests continue to regularly fail in highly loaded VMs because
they have very short timeouts. Actually the goal of running with short
timeouts was to make sure we do not uselessly wait during tests designed
to trigger them, but these timeouts here are never supposed to fire at
all, so they don't need to be kept in the 15-20ms range. They do not
pose any issue on any regular machine, but VMs are often suffering from
huge time jumps and cannot always produce responses in that short of a
time.
Just like with commit ce6fc25b1 ("REGTEST: increase timeouts on the
seamless-reload test"), let's raise these short timeouts to 1 second.
A few other ones remain set to 150-200ms and do not seem to cause any
issue. Some are actually expected to trigger so let's not touch them
for now.
Since commit f92afb732 ("MEDIUM: cfgparse: Emit hard error on truncated lines")
we now produce parsing errors on truncated lines, in an effort to clean
up dangerous or broken config files. And it turns out that one of our own
regression tests was suffering from this, as diagnosed by William and Tim.
The cause is the leading spaces in front of "} -start" that vtest makes
part of the output file, so the file finishes with a partial line made
of spaces.
Following work from Arjen and Mathilde, this adds the ssl_{c,s}_chain_der
fetch methods; they return DER-encoded certs from SSL_get_peer_cert_chain.
Also update existing vtc tests to add random intermediate certificates.
When getting the result through this header:
http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
bin, err := hex.DecodeString(cert)               // needs "encoding/hex"
certs_parsed, err := x509.ParseCertificates(bin) // needs "crypto/x509"
Cc: Arjen Nienhuis <arjen@zorgdoc.nl>
Signed-off-by: Mathilde Gilles <m.gilles@criteo.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The parsing of http deny rules with no argument, or with only the deny_status
argument, is buggy if followed by an ACL expression (starting with the "if" or
"unless" keyword). Instead of using the proxy errorfiles, a dummy error is
used. To fix the bug, the parsing function must also check for the "if" or
"unless" keyword in such cases.
This patch should fix the issue #720. No backport is needed.
Test the following ssl sample fetches:
ssl_c_der, ssl_c_sha1,hex, ssl_c_notafter, ssl_c_notbefore,
ssl_c_sig_alg, ssl_c_i_dn, ssl_c_s_dn, ssl_c_serial,hex, ssl_c_key_alg,
ssl_c_version
This reg-test could be used as far as haproxy 1.6.
Test the following ssl sample fetches:
ssl_f_der, ssl_f_sha1,hex, ssl_f_notafter, ssl_f_notbefore,
ssl_f_sig_alg, ssl_f_i_dn, ssl_f_s_dn, ssl_f_serial,hex, ssl_f_key_alg,
ssl_f_version
This reg-test could be used as far as haproxy 1.5.
This commit adds some sample fetches that were lacking on the server
side:
ssl_s_key_alg, ssl_s_notafter, ssl_s_notbefore, ssl_s_sig_alg,
ssl_s_i_dn, ssl_s_s_dn, ssl_s_serial, ssl_s_sha1, ssl_s_der,
ssl_s_version
Trailing slashes were not handled in the crt-list commands on the CLI, even
though they can be useful when using the commands with a directory.
Strip the slashes before looking up the crtlist in the tree.