As this feature has a dependency on resolvers being configured,
this test acts as good documentation as well.
This change also includes a spelling fix in a filename.
This commit makes sure that if there is no "alpn", "npn" nor "no-alpn"
setting on a "bind" line which corresponds to an HTTPS or QUIC frontend,
we automatically turn on "h2,http/1.1" as an ALPN default for an HTTP
listener, and "h3" for a QUIC listener. This simplifies the configuration
for end users since they won't have to explicitly configure the ALPN
string to enable H2, considering that at the time of writing, HTTP/1.1
represents less than 7% of the traffic on large infrastructures. The
doc and regtests were updated. For more info, refer to the following
thread:
https://www.mail-archive.com/haproxy@formilux.org/msg43410.html
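As a hedged illustration (not taken from the patch itself; paths are
arbitrary), this is what the change means for a typical HTTPS frontend:

    frontend fe_https
        mode http
        # with this change, the line below implicitly behaves as if
        # "alpn h2,http/1.1" had been specified
        bind :443 ssl crt /etc/haproxy/site.pem
        # adding "no-alpn" to the bind line restores the previous behavior
        default_backend be_app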
Since commit f2b02cfd9 ("MAJOR: http-ana: Review error handling during
HTTP payload forwarding"), during payload forwarding, when we are analyzing
one side, we stop testing the opposite side. It means when the HTTP request
forwarding analyzer is called, we no longer check the response side and vice
versa.
Unfortunately, since then, HTTP tunneling is broken after a protocol
upgrade. Only the response is switched to TUNNEL mode while the request
remains in DONE state. As a consequence, data received from the server are
forwarded to the client but data received from the client are not forwarded
to the server.
To fix the bug, when both sides are in DONE state, both are switched to
TUNNEL mode at the same time if a tunnel was requested. It is performed the
same way in http_end_request() and http_end_response().
This patch should fix the issue #2125. It is 2.8-specific. No backport
needed.
This patch proposes to enumerate servers using the internal HAProxy list.
Also, remove the flag SRV_F_NON_PURGEABLE which makes the server non
purgeable each time Lua uses the server.
Removing reg-tests/cli_delete_server_lua.vtc since this test is no
longer relevant (we don't set the SRV_F_NON_PURGEABLE flag anymore)
and we already have a more generic test:
reg-tests/server/cli_delete_server.vtc
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
In order to increase usability, the "show ssl ocsp-response" command also
takes a frontend certificate path as parameter. In such a case, it behaves the
same way as "show ssl cert foo.pem.ocsp".
Instead of having a dedicated httpclient instance and its own code,
separate from the actual auto update one, the "update ssl
ocsp-response" command will now use the update task in order to perform updates.
Since the cli command allows updating responses that were never
included in the auto update tree, a new flag was added to the
certificate_ocsp structure so that the said entry can be inserted into
the tree "by hand" and it won't be reinserted back into the tree after
the update process is performed. The 'update_once' flag "steals" a bit
from the 'fail_count' counter since it is the one least likely to reach
UINT_MAX among the ocsp counters of the certificate_ocsp structure.
This new logic required that every certificate_ocsp entry contain all
the ocsp-related information at all times since entries that are not
supposed to be configured automatically can still be updated through the
cli. The logic of ssl_sock_load_ocsp was changed accordingly.
This patch adds the support for the PS algorithms when verifying JWT
signatures (rsa-pss). It was not managed during the first implementation
and previously raised an "Unmanaged algorithm" error.
The tests use the same rsa signature as the plain rsa tests (RS256 ...)
and the implementation simply adds a call to
EVP_PKEY_CTX_set_rsa_padding in the function that manages rsa and ecdsa
signatures.
The signatures in the reg-test were built thanks to the PyJWT python
library once again.
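A minimal sketch of a configuration exercising the new support (variable
names and the public key path are arbitrary), following the usual jwt_verify
pattern:

    frontend fe_api
        bind :8080
        http-request set-var(txn.bearer) http_auth_bearer
        # PS256/PS384/PS512 tokens are no longer rejected with
        # an "Unmanaged algorithm" error
        http-request deny unless { var(txn.bearer),jwt_verify("PS256","/etc/haproxy/pubkey.pem") -m int 1 }
        default_backend be_api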
When adding a new certificate through the CLI and appending it to a
crt-list with the 'ocsp-update' option set, the new certificate would
not be added to the OCSP response update list.
The only thing that was missing was the copy of the ocsp_update mode
from the ssl_bind_conf into the ckch_store's object.
An extra wakeup of the update task also needed to happen in case the
newly inserted entry needs to be updated before the next wakeup of the
task.
This patch does not need to be backported.
Add tests for the "show ssl ocsp-updates" cli command as well as the new
'base64' parameter that can be passed to the "show ssl ocsp-response"
command.
The options were placed after the filters, which does not work well and now
raises a warning. It did not break the regtest because the crt-lists
were not actually used by clients.
Added new testcases for all 4 branches of smp_fetch_hdr_ip():
- a plain IPv4 address
- an IPv4 address with a port number
- a plain IPv6 address
- an IPv6 address wrapped in [] brackets
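For reference, the fetch exercised by these cases can be used like this
(header name and action are only illustrative):

    frontend fe_main
        bind :8080
        # req.hdr_ip() must handle "192.0.2.1", "192.0.2.1:8080",
        # "2001:db8::1" and "[2001:db8::1]" alike
        http-request set-src req.hdr_ip(X-Forwarded-For)
        default_backend be_app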
304 responses contain "Content-length" or "Transfer-encoding" headers. The
rxresp action expects to get a payload in this case, even though 304
responses must not have any payload. A workaround was added to remove these
headers from the 304 responses. However, a better solution is to only get
the response headers from clients using the rxresphdrs action.
If a payload is erroneously added in these responses, the scripts will fail
the same way. So it is safe.
Since commit cc9bf2e5f "MEDIUM: cache: Change caching conditions"
responses that do not have an explicit expiration time are not cached
anymore. But this mechanism wrongly used the TX_CACHE_IGNORE flag
instead of the TX_CACHEABLE one. The effect this had is that a cacheable
response that corresponded to a request having a "Cache-Control:
no-cache" for instance would not be cached.
Contrary to what was said in the other commit message, the "checkcache"
option should not be impacted by the use of the TX_CACHEABLE flag
instead of the TX_CACHE_IGNORE one. The response is indeed considered as
not cacheable if it has no expiration time, regardless of the presence
of a cookie in the response.
This should fix GitHub issue #2048.
This patch can be backported up to branch 2.4.
In these scripts, several clients perform a request and exit because an SSL
error is expected and thus no response is sent. However, we must explicitly
wait for the connection close, via an "expect_close" statement. Otherwise,
depending on the timing, HAProxy may detect the client abort before any
connection attempt on the server side and no SSL error is reported, making
the script fail.
If "-dF" command line argument is passed to haproxy to execute the script,
by sepcifying HAPROXY_ARGS variable, http_splicing.vtc is now skipped.
Without this patch, the script fails when the fast-forward is disabled.
A feature command was added to detect if infinite forward is disabled to be
able to skip the script. Unfortunately, it is no supported to evaluate such
expression. Thus remove it. For now, reg-tests must not be executed with
"-dF" option.
The -dF option can now be used to disable data fast-forward. It does the
same as the global option "tune.fast-forward off". Some reg-tests may rely
on this optimization. To detect the feature and skip such scripts, the
following vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(globa.tune & GTUNE_NO_FAST_FWD)'"
Add a new test to prevent any regression for the if-none parameter in
the "forwardfor" proxy option.
This will ensure upcoming refactors don't break reference behavior.
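The tested behavior boils down to a configuration like this (a sketch, not
the vtc itself):

    frontend fe_xff
        mode http
        bind :8080
        # only append X-Forwarded-For when the client did not already send one
        option forwardfor if-none
        default_backend be_app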
The wrong return value was checked, resulting in dead code and
potential bugs.
It should fix GitHub issue #2005.
This patch should be backported up to 2.5.
When the JWT token signature is using ECDSA algorithm (ES256 for
instance), the signature is a direct concatenation of the R and S
parameters instead of OpenSSL's DER format (see section
3.4 of RFC7518).
The code that verified the signatures wrongly assumed that they came in
OpenSSL's format and it did not actually work.
We now have the extra step of converting the signature into a complete
ECDSA_SIG that can be fed into OpenSSL's digest verification functions.
The ECDSA signatures in the regtest had to be recalculated and this was
done via the PyJWT python library so that we no longer end up checking
signatures that we built ourselves.
This patch should fix GitHub issue #2001.
It should be backported up to branch 2.5.
The error handling in the HTTP payload forwarding is far from ideal because
both sides (request and response) are tested each time. It is especially ugly
on the request side. To report a server error instead of a client error,
there are some workarounds to delay the error handling. The reason is that
the request analyzer is evaluated before the response one. In addition,
errors are tested before the data analysis. It means it is possible to
truncate data because errors may be handled too early.
So the error handling at this stage was totally reviewed. Aborts are now
handled after the data analysis. We also no longer finish the response on a
request error or the opposite. As a side effect, the HTTP_MSG_ERROR state is
now useless. As another side effect, the termination flags are now set by
the HTTP analysers and not by process_stream().
When we wait for the request body, we are still in the request analysis. So
a SF_FINST_R flag must be reported in logs. Even if some data are already
received, at this stage, nothing is sent to the server.
This patch could be backported in all stable versions.
Tests a subpart of the ocsp auto update feature. It will mainly focus on
the 'auto' mode since the 'on' one relies strongly on timers way too
long to be used in a regtest context.
testdir can be a very long directory since it depends on the source
directory path. This can lead to failures during tests when the UNIX socket
path exceeds the maximum allowed length of 97 characters as defined in
str2sa_range().
16:48:14 [ALERT] *** h1 debug| (10082) : config : parsing [/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/cfg:19] : 'bind' : socket path 'unix@/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3' too long (max 97)
Also, it is not advisable to create a UNIX socket in the actual source
directory, but instead use a dedicated temporary directory created for test
purposes.
This should be backported to 2.6
The test still needs more start conditions, like ulimit checks
and less strict value checks.
To be backported where it was activated (as far as 2.5)
When an error occurred during the request parsing, the H1 multiplexer is
responsible for sending a response to the client and for releasing the H1
stream and the H1 connection. In HTTP mode, it is not an issue because at this
stage the H1 connection is in embryonic state. Thus it can be released
immediately.
However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached. The H1 stream is
not orphan. Thus it must not be released at this stage. It must be detached
first. Otherwise a BUG_ON() is triggered in h1s_destroy().
So now, the H1S is destroyed on early errors but only if the H1C is in
embryonic state.
This patch may be related to #1966. It must be backported to 2.7.
Check if USE_OBSOLETE_LINK=1 was used so it could run this test when
ASAN is not built, since ASAN requires this option.
For this test to work, the ulimit -n value must be big enough.
Could be backported at least to 2.5.
change the expected maxconn from 10000 to 11000 in
automatic_maxconn.vtc
To be backported only if the test failed, the value might be the right
one in previous versions.
When I added commit 16b282f4b ("MINOR: stick-table: show the shard
number in each entry's "show table" output"), I don't know how but
I managed to mess up my reg tests since everything worked fine,
most likely by running it on a binary built in the wrong branch.
Several reg tests include some table outputs that were upset by the
new "shard=" field. This patch adds them and revealed at the same
time that entries learned over peers are not properly initialized,
which will be fixed in a future series of fixes.
This commit requires previous fix "BUG/MINOR: peers: always
initialize the stksess shard value" so as not to trip on entries
learned from peers.
VTEST does not properly handle 304-Not-Modified responses. If a
Transfer-Encoding header is present (and probably a Content-Length header
too), it waits for a body. Waiting for a fix, the Transfer-Encoding header of
cached responses in 2 VTEST scripts is removed.
Note it only became an issue because of a fix in the H1 multiplexer:
* 226082d13a "BUG/MINOR: mux-h1: Do not send a last null chunk on body-less answers"
This patch must be backported with the above commit.
The ca-ignore-err and crt-ignore-err directives are now able to use the
openssl X509_V_ERR constant names instead of the numerical values.
This allows a configuration to survive an OpenSSL upgrade, because the
numerical ID can change between versions. For example,
X509_V_ERR_INVALID_CA was 24 in OpenSSL 1 and is 79 in OpenSSL 3.
The list of errors must be updated when a new major OpenSSL version is
released.
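For example, the symbolic form can now be used directly on a bind line
(paths are illustrative):

    frontend fe_mtls
        # "ca-ignore-err 24" under OpenSSL 1 would have had to become
        # "ca-ignore-err 79" under OpenSSL 3; the constant name works with both
        bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/ca.pem verify required ca-ignore-err X509_V_ERR_INVALID_CA
        default_backend be_app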
Test the httpclient when the lua action times out. The lua timeout is
reached before the httpclient has finished. This tests that the httpclient
is correctly cleaned up when destroying the hlua context.
Must be backported as far as 2.5.
This patch adds support to the following authentication methods:
- AUTH_REQ_GSS (7)
- AUTH_REQ_SSPI (9)
- AUTH_REQ_SASL (10)
Note that since AUTH_REQ_SASL allows multiple authentication mechanisms
such as SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, the auth payload length may
vary since the method is sent in plaintext. In order to allow this, the
regex now matches any payload length.
This partially fixes Github issue #1508 since user authentication is
still broken but should restore pre-2.2 behavior.
This should be backported up to 2.2.
Signed-off-by: Fatih Acar <facar@scaleway.com>
At present option smtpchk closes the TCP connection abruptly on completion of service checking,
even if successful. This can result in a very high volume of errors in backend SMTP server logs.
This patch ensures an SMTP QUIT is sent and a positive 2xx response is received from the SMTP
server prior to disconnection.
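A minimal configuration enabling the check whose behavior is changed here
(the domain is illustrative):

    backend be_smtp
        mode tcp
        # the check now ends with a QUIT and expects a 2xx reply to it
        option smtpchk EHLO mail.example.com
        server smtp1 192.0.2.10:25 check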
This patch depends on the following one:
* MINOR: smtpchk: Update expect rule to fully match replies to EHLO commands
This patch should fix the issue #1812. It may be backported as far as 2.2
with the commit above. On the 2.2, the proxy_parse_smtpchk_opt() function is
located in src/check.c.
[cf: I updated reg-tests script accordingly]
The s1 server is acting like an SMTP server. But it sends two CRLF at the end of
each line, while only one CRLF must be returned. It only works because both CRLF
are received at the same time.
Depending on the timing, the connection on the lisrv listener may be fully
accepted before any reject. Thus, instead of getting a socket error, an
invalid L7 response is reported. There is no reason to be strict on the
error type. Any failure is good here, because we just want to test the
email-alert feature.
This patch should fix issue #1857. It may be backported as far as 2.2.
This trick has been deprecated since the health-check refactoring. It is now
invalid. It means the following line will trigger an error during the
configuration parsing:
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
It must be replaced by:
option httpchk OPTIONS * HTTP/1.1
http-check send hdr Host www
Depending on the timing, from time to time, the log messages can be mixed. A
client can start and be fully handled by HAProxy (including its log message)
before the log message of the previous client was emitted or received. To
fix the issue, a barrier was added to be sure to eval the "expect" rule on
logs before starting the next client.
This patch should fix the issue #1847. It may be backported to all branches
containing this reg-tests.
TCP health-checks are enabled on server "s2". However it expects to receive
an HTTP request. So the HAProxy configuration must be changed to perform HTTP
health-checks instead. Otherwise, depending on the timing, an error can be
triggered if a check is performed before the end of the script.
This script never failed before because TCP_QUICKACK was disabled, adding some
latency to health-checks. But since the last fix, it is an issue.
This patch should be backported as far as 2.4.
We don't have enough tests with the mworker mode, and even fewer that
have no master CLI (-S) configured. Let's run this one with -W, it
shouldn't have any impact.
VTest won't be able to catch a lot of things for now, but that's a first
step.
In ticket #1805 a user is impacted by the limitation of the size of the CLI
buffer when updating a ca-file.
This patch allows a user to append new certificates to a ca-file instead
of trying to put them all with "set ssl ca-file".
The implementation uses a new function ssl_store_dup_cafile_entry() which
duplicates a cafile_entry and its X509_STORE.
ssl_store_load_ca_from_buf() was modified to take an append parameter so
we could share the function for "set" and "add".
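A sketch of the intended CLI usage (file names are arbitrary; the payload
form is the usual multi-line CLI syntax):

    # replacing the whole file is limited by the CLI buffer size:
    echo -e "set ssl ca-file ca.pem <<\n$(cat all-cas.pem)\n" | socat /var/run/haproxy.sock -
    # appending a single certificate to the existing entry avoids that limit:
    echo -e "add ssl ca-file ca.pem <<\n$(cat extra-ca.pem)\n" | socat /var/run/haproxy.sock -
    echo "commit ssl ca-file ca.pem" | socat /var/run/haproxy.sock -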
When using `option http-restrict-req-hdr-names delete`, HAProxy may
crash or delete the wrong header after receiving a request containing multiple
forbidden characters in a single header name; the exact behavior depends on the
number of request headers, the number of forbidden characters and the position
of the header containing them.
This patch fixes GitHub issue #1822.
Must be backported as far as 2.2 (buggy feature got included in 2.2.25,
2.4.18 and 2.5.8).
From time to time, users complain about getting 400-Bad-request responses for
totally valid CONNECT requests. After analysis, it is because the H1 parser
performs an exact match between the authority and the host header value. For
non-CONNECT requests, it is valid. But for CONNECT requests the authority
must contain a port while it is often omitted from the host header value
(for default ports).
So, to be sure to not reject valid CONNECT requests, a basic authority
validation is now performed during the message parsing. In addition, the
host header value is normalized. It means the default port is removed if
possible.
This patch should solve the issue #1761. It must be backported to 2.6 and
probably as far as 2.4.
The crash occurs when the same certificate which is used on both a
server line and a bind line is inserted in a crt-list over the CLI.
This is quite uncommon as using the same file for a client and a server
certificate does not make sense in a lot of environments.
This patch fixes the issue by skipping the insertion of the SNI when no
bind_conf is available in the ckch_inst.
Change the reg-test to reproduce this corner case.
Should fix issue #1748.
Must be backported as far as 2.2. (it was previously in ssl_sock.c)
The info field in the log message may change. For instance, on FreeBSD, a
"broken pipe" is reported. Thus, the expected log message must be more
generic.
The default client timeout is too small to be sure to always wait for the end
of slow clients (the last 2 clients use a delay to send their request). But it
cannot be increased because it will slow down the regtest execution. So a
dedicated frontend with a higher client timeout has been added. This
frontend is used by "slow" clients. The other one is used for normal
requests.
Depending on the timing, from time to time, the log message for the "/c4"
request can be received before the one for the "/c2" request. To (hopefully)
fix the issue, a barrier has been added to wait for the "/c2" log message
before sending other
requests.
Introduced in:
18c13d3bd MEDIUM: http-ana: Add a proxy option to restrict chars in request header names
see also:
fbbbc33df REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
Depending on the timing, the second client that should be reported as a
client abort during connection attempt ("CC--" termination state) is
sometimes logged as a server close ("SC--" termination state) instead. It
happens because sometimes the connection failure to the server s1 is detected
by haproxy before the client c2 aborts. There are no retries and the
connection timeout is set to 100ms. So, to work, the client abort must be
performed and detected by haproxy in less than 100ms.
To fix the issue, the c2 client is now routed to a backend with a connection
timeout set to 1 second and 10 retries. It should be large enough to detect
the client aborts (~10s).
In addition, there is another race condition when the script is
started. Sometimes, server s1 is not stopped when the first client sends its
request. So a barrier was added to be sure it is stopped before starting to
send requests. And we wait to be sure the server is detected as DOWN to
unblock the barrier. It is performed by a dedicated backend with a
healthcheck on the server s1.
This patch should solve issue #1664.
The "http-restrict-req-hdr-names" option can now be set to restrict allowed
characters in the request header names to the "[a-zA-Z0-9-]" charset.
Idea of this option is to not send header names with non-alphanumeric or
hyphen character. It is especially important for FastCGI application because
all those characters are converted to underscore. For instance,
"X-Forwarded-For" and "X_Forwarded_For" are both converted to
"HTTP_X_FORWARDED_FOR". So, header names can be mixed up by FastCGI
applications. And some HAProxy rules may be bypassed by mangling header
names. In addition, some non-HTTP compliant servers may incorrectly handle
requests when header names contain characters ouside the "[a-zA-Z0-9-]"
charset.
When this option is set, the policy must be specify:
* preserve: It disables the filtering. It is the default mode for HTTP
proxies with no FastCGI application configured.
* delete: It removes request headers with a name containing a character
outside the "[a-zA-Z0-9-]" charset. It is the default mode for
HTTP backends with a configured FastCGI application.
* reject: It rejects the request with a 403-Forbidden response if it
contains a header name with a character outside the
"[a-zA-Z0-9-]" charset.
The option is evaluated per-proxy and after http-request rules evaluation.
This patch may be backported to avoid any secuirty issue with FastCGI
application (so as far as 2.2).
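A hedged example of the option in front of a legacy application (names are
arbitrary):

    backend be_legacy_app
        mode http
        # drop any request header whose name contains a character outside
        # the "[a-zA-Z0-9-]" charset before it reaches the application
        option http-restrict-req-hdr-names delete
        server app1 127.0.0.1:8080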
Since 2.5, it is possible to define TCP/HTTP rulesets in defaults
sections. However, rules defining a capture in defaults sections were not
properly handled because they were not shared with the proxies inheriting
from the defaults section. This led to a crash when haproxy tried to store a
new capture.
So now, to fix the issue, when a new proxy is created, the list of captures
points to the list of its defaults section. It may be NULL or not. All new
captures are prepended to this list. It is not a problem to share the same
defaults section between several proxies, because it is not altered and we
take care to not release it when corresponding proxies are freed but only
when defaults proxies are freed. To do so, defaults proxies are now
unreferenced at the end of free_proxy() function instead of the beginning.
This patch should fix the issue #1674. It must be backported to 2.5.
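A sketch of the kind of configuration that used to crash (header and capture
length are arbitrary):

    defaults common
        mode http
        # capture declared in a defaults section, inherited by fe1 below;
        # storing a value into it used to crash before this fix
        http-request capture req.hdr(host) len 64

    frontend fe1 from common
        bind :8080
        default_backend be_app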
DHE ciphers do not present a security risk if the key is big enough but
they are slow and mostly obsoleted by ECDHE. This patch removes any
default DH parameters. This will effectively disable all DHE ciphers
unless a global ssl-dh-param-file is defined, or
tune.ssl.default-dh-param is set, or a frontend has DH parameters
included in its PEM certificate. In this latter case, only the frontends
that have DH parameters will have DHE ciphers enabled.
Explicitly adding DHE ciphers in a "bind" line will not be enough to
actually enable DHE. We would still need to know which DH parameters to
use so one of the three conditions described above must be met.
This request was described in GitHub issue #1604.
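To keep DHE usable after this change, one of the conditions above has to be
met explicitly, for instance (path and size are illustrative):

    global
        # provide DH parameters globally to re-enable DHE ciphers
        ssl-dh-param-file /etc/haproxy/dhparam-2048.pem
        # or, alternatively:
        # tune.ssl.default-dh-param 2048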
This new converter is similar to the concat converter and can be used to
build new variables made of a succession of other variables, but the main
difference is that it checks whether adding a delimiter makes sense, which
wouldn't be the case if e.g. the current input sample is empty. That
situation would require 2 separate rules using the concat converter, where the
first rule would have to check if the current sample string is empty before
adding a delimiter. This resolves GitHub Issue #1621.
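For reference, the two-rule workaround with the existing concat converter
that this converter is meant to replace looks roughly like this (variable and
header names are arbitrary):

    frontend fe_main
        bind :8080
        http-request set-var(txn.tag1) req.hdr(x-tag-1)
        http-request set-var(txn.tag2) req.hdr(x-tag-2)
        # with concat alone, the delimiter must only be added when the first
        # sample is not empty, hence the two conditional rules
        http-request set-var(txn.tags) var(txn.tag1),concat(_,txn.tag2) if { var(txn.tag1) -m len gt 0 }
        http-request set-var(txn.tags) var(txn.tag2) unless { var(txn.tag1) -m len gt 0 }
        default_backend be_app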
In MQTTv3.1, the protocol name is "MQIsdp" and the protocol level is 3. The mqtt
converters (mqtt_is_valid and mqtt_field_value) did not work for clients on
mqttv3.1 because mqtt_parse_connect() marked the CONNECT message invalid
if either the protocol name was not "MQTT" or the protocol version was other
than v3.1.1 or v5.0. To fix it, we have added the mqttv3.1 protocol name and
version as part of the checks.
This patch fixes the mqtt converters to support mqttv3.1 clients as well (issue #1600).
It must be backported to 2.4.
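These converters are typically used on a TCP listener, roughly as follows
(addresses are illustrative):

    listen mqtt
        mode tcp
        bind :1883
        tcp-request inspect-delay 10s
        # MQTTv3.1 CONNECT messages ("MQIsdp", protocol level 3) are now
        # considered valid as well
        tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }
        server broker1 192.0.2.20:1883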
In the same way than for be2hex.vtc, a "Connection: close" header is added
to all responses to avoid any connection reuse. This should avoid any "HTTP
header incomplete" errors.
Dynamic servers feature is now judged to be stable enough. Remove the
experimental-mode requirement for "add/del server" commands. This should
facilitate dynamic servers adoption.
These two sample fetch methods report respectively the file name and the
line number where the last final rule was located. This is aimed
at being used on log-format lines to help admins figure what rule in the
configuration gave a final verdict, and help understand the condition
that led to the action.
For example, it's now possible to log the last matched rule by adding
this to the log-format:
... lr=%[last_rule_file]:%[last_rule_line]
A regtest is provided to test various combinations of final rules, some
even on top of each other from different rulesets.
In the same way than for normalize_uri.vtc, a "Connection: close" header is
added to all responses to avoid any connection reuse. This should avoid any
"HTTP header incomplete" errors.
There is no connection reuse to avoid race conditions in HTTP reg-tests. But
from time to time, normalize_uri.vtc still reports "HTTP header incomplete"
error. It seems to be because HTTP keep-alive is still used at the session
level. Thus when the same server section is used to handle multiple requests
for the same client, via a "-repeat" statement, a new request for this client
may be handled by HAProxy before the server is restarted.
To avoid any trouble, HTTP keep-alive is disabled on the server side by
adding "Connection: close" header in responses. It seems to be ok now. We
let the CI decide.
This one started to randomly fail on me again and I could figure out the
problem. It mixes one checked server with one unchecked one in each
backend, and tries to make sure that each checked server receives
exactly one request. But that doesn't work and is entirely time-
dependent because if the check starts before the client, a pure
TCP check is sent to the server, which sees an aborted connection
and makes the whole check fail.
What is done here is that we make sure that only the second server
and not the first one is checked. The traffic is delivered to all
first servers, and each HTTP server must always receive a valid HTTP
request. In parallel, checks must not fail as they're delivered to
dummy servers. The check doesn't fail anymore, even when started on
a single thread at nice +5 while 8 processes are fighting on the same
core to inject HTTP traffic at 25 Gbps, which used to systematically
make it fail previously.
Since it took more than one hour to fix the "expect" line for the stats
output, I did it using a small script that I pasted into the vtc file
in case it's needed later. The relevance of this test is questionable
once its complexity is factored in. Let's keep it as long as it works
without too much effort.
The 'dst' optional field on a httpclient request can be used to set an
alternative server address in the haproxy address format, which means it
could be used with unix@, ipv6@, etc.
Should fix issue #1471.
tls_basic_sync_wo_stkt_backend fails once every 200 runs for me. This
seems to be because the startup delay doesn't always allow peers to
perform a simultaneous connect, close and new attempt. With 3s I can't
see it fail anymore. In addition the long "delay 0.2" are still way too
much since we do not really care about the startup order in practice.