The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client sent two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string alone. Another client that only
supported gzip would then have been served the cached resource, even if
it was a br-encoded resource.
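For instance, with the following request, only "gzip" used to be taken
into account when building the secondary key:

    GET /resource HTTP/1.1
    Accept-Encoding: gzip
    Accept-Encoding: br

Both values are now hashed into the secondary key.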
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
A request that has more than 16 encodings is considered illegitimate
and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
Add a new check for a pseudo-websocket handshake, specifying the
Connection header to verify that it is properly handled by the
http-check send directive. Also check that default HTTP/1.1 checks have
the "Connection: close" header.
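A sketch of such a check (URI and header values are illustrative):

    backend be_ws
        option httpchk
        http-check send meth GET uri /ws ver HTTP/1.1 \
            hdr Connection upgrade hdr Upgrade websocket
        http-check expect status 101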
In case of successful unsafe method on a stored resource, the cached entry
must be invalidated (see RFC7234#4.4).
A "non-error response" is one with a 2xx (Successful) or 3xx (Redirection)
status code.
This implies that the primary hash must now be calculated on requests
that have an unsafe method (POST or PUT for instance) so that we can
disable the corresponding entries when we process the response.
When a response has an Age header (filled in by another cache on the
message's path) that is greater than its defined maximum age (extracted
either from cache-control directives or an expires header), it is
already stale and should not be cached.
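For instance, a response like the following is already stale when it
reaches the cache and will not be stored:

    HTTP/1.1 200 OK
    Age: 600
    Cache-Control: max-age=60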
This is simply txn_get_priv.vtc with the loading made using
lua-load-per-thread, allowing all threads to run independently.
There's nothing global nor thread-specific in this test, which makes
it an excellent example of something that should work out of the box.
Four concurrent clients are initialized with 4 loops each so as to
give the various threads a chance to run concurrently.
Turn the "Accept-Encoding" value to lower case before processing it.
Calculate the CRC on every token instead of on a sorted concatenation
of them all (in order to avoid copying them), then XOR all the CRCs
into a single hash (while ignoring duplicates).
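For instance, "Accept-Encoding: br, Gzip, gzip" is normalized into the
tokens "br" and "gzip" (the duplicate is ignored), and the resulting
hash is CRC("br") XOR CRC("gzip"), regardless of the token order.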
This patch adds a new logging variable '%HPO' for logging the HTTP path
only (without the query string) from a relative or absolute URI.
For example:
log-format "hpo=%HPO hp=%HP hu=%HU hq=%HQ"
GET /r/1 HTTP/1.1
=>
hpo=/r/1 hp=/r/1 hu=/r/1 hq=
GET /r/2?q=2 HTTP/1.1
=>
hpo=/r/2 hp=/r/2 hu=/r/2?q=2 hq=?q=2
GET http://host/r/3 HTTP/1.1
=>
hpo=/r/3 hp=http://host/r/3 hu=http://host/r/3 hq=
GET http://host/r/4?q=4 HTTP/1.1
=>
hpo=/r/4 hp=http://host/r/4 hu=http://host/r/4?q=4 hq=?q=4
Historically, the input and output buffers of a check are allocated by hand
during startup, with a specific size (not necessarily the same as other
buffers). But since the recent refactoring of the checks to rely
exclusively on the tcp-checks and to use the underlying mux layer, this
part is totally buggy. Indeed, because these buffers are now passed to a
mux, they may be swapped if a zero-copy is possible. For now, this is only
possible in h2_rcv_buf(), so concretely the bug only shows up when an h2
health-check is performed, but it is a latent bug for the other muxes.
Another problem is the size of these buffers: because it may differ from
the other buffers' size, it might be a source of bugs.
Finally, for configurations with hundreds of thousands of servers, having 2
buffers per check always allocated may be an issue.
To fix the bug, we now allocate these buffers when required, using the
buffer pool. Thus non-running checks don't waste memory and muxes may swap
the buffers when possible. The only drawback is that the check buffers now
always have the same size as the buffers used by the streams. This
indirectly deprecates the "tune.chksize" global option.
In addition, the http-check regtest has been updated to perform some h2
health-checks.
Many thanks to @VigneshSP94 for their help on this bug.
This patch should solve the issue #936. It relies on the commit "MINOR:
tcpcheck: Don't handle anymore in-progress send rules in tcpcheck_main".
Both must be backported as far as 2.2.
The cache section's process-vary option takes a 0 or 1 value to disable
or enable the vary processing.
When disabled, a response containing such a header will never be cached.
When enabled, a preliminary hash is calculated for a subset of request
headers on all incoming requests (which might come with a CPU cost); it
is used to build a secondary key for a given request (see RFC 7234#4.1).
The default value is 0 (disabled).
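A minimal sketch of enabling it (cache name and sizes are illustrative):

    cache my_cache
        total-max-size 4
        max-age 60
        process-vary 1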
Calculate a preliminary secondary key for every request we see so that
we can have a real secondary key if the response is cacheable and
contains a manageable Vary header.
The cache's ebtree is now allowed to have multiple entries with the same
primary key. Two of those entries will be distinguished thanks to
secondary keys stored in the cache_entry (based on hashes of a subset of
their headers).
When looking for an entry in the cache (cache_use), we still use the
primary key (built the same way as before), but in case of match, we
also need to check if the entry has a vary signature. If it has one, we
need to perform an extra check based on the newly built secondary key.
We will only be able to forge a response out of the cache if both the
primary and secondary keys match one of our entries. Otherwise the
request will be forwarded to the server.
This patch adds a -m flag which allows specifying the header name
matching method when deleting headers from an HTTP request/response.
Currently beg, end, sub, str and reg are supported.
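For instance, to delete every request header whose name starts with
"x-private-" (the header name is illustrative):

    http-request del-header x-private- -m beg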
This is related to GitHub issue #909
As indicated in issue #907 it very frequently fails on FreeBSD, and
looking at it, it's broken in multiple ways. It relies on log ordering
between two layers, the first one being allowed to support h2. Given
that on FreeBSD it usually ends up in timeouts, it's very likely that
for some reason one frontend logs before the other one or that curl
uses h2 instead of h1 there, and that the log instance waits forever
for its lines.
Usually it works fine when run locally though, so let's not remove it
but mark it as broken instead so that it can still be used when relevant.
In the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:
backend be_foo
default-server ssl
server srv0 127.0.0.1:6011 weight 1 no-ssl
show servers state:
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1
where srv0 can switch to ssl later during the runtime:
set server be_foo/srv0 ssl on
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
While looking at the `url_dec` implementation I realised there was not
yet a simple test to avoid future regressions.
This one tests simple cases, including the "+" behaviour depending
on the argument passed to `url_dec`.
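For instance (variable names are illustrative):

    http-request set-var(txn.raw) url,url_dec      # "+" is kept as-is
    http-request set-var(txn.form) url,url_dec(1)  # "+" is decoded as a space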
Signed-off-by: William Dauchy <wdauchy@gmail.com>
There is apparently a race in this test that would require relying on
haproxy's output to make it reliably work, but currently vtest doesn't
have this option. Let's mark it broken again to avoid polluting the CI.
This is discussed in github issue #950.
Three filters are used. The compression filter is enclosed between two
trace filters, both with random forwarding enabled. An HTTP test is
performed, but also a TCP test using an HTTP tunnel.
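The filter chain looks like this (trace names are illustrative):

    filter trace name BEFORE random-forwarding
    filter compression
    filter trace name AFTER random-forwarding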
Add a reg-test verifying the fix in dea7c209f8.
Some parts of the configuration used in the test were taken from the
initial bug report from Maciej.
Should be backported together with dea7c209f8
(all stable versions).
Co-authored-by: Maciej Zdeb <maciej@zdeb.pl>
Do not cache responses that do not have an explicit expiration time
(s-maxage or max-age Cache-Control directives or Expires header) or a
validator (ETag or Last-Modified headers) anymore, as suggested in
RFC 7234#3.
The TX_FLAG_IGNORE flag is used instead of the TX_FLAG_CACHEABLE so as
not to change the behavior of the checkcache option.
This regtest requires a version of OpenSSL which supports the
ClientHello callback, which is only the case of recent SSL libraries
(OpenSSL 1.1.1 and above).
This was reported in issue #944.
In issue #940, it was reported that the crt-list does not work correctly
anymore. Indeed, when inserting a crt-list line which uses a certificate
previously seen in the crt-list, that certificate won't be inserted in
the SNI list and will be silently ignored.
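For instance, with a crt-list like the following (file and host names
are illustrative), the second line used to be silently ignored:

    common.pem example.com
    common.pem www.example.com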
This bug was introduced by commit 47da821 "MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist".
This patch also includes a reg-test which tests this issue.
This bugfix must be backported in 2.3.
This test checks that bugs #818 and #810 are fixed.
It tests that there is no inconsistency with multiple certificate types
and that the exclusion of a certificate works correctly with a negative
filter.
This new script tests the mqtt_is_valid() and mqtt_get_field_value()
converters used to validate and extract information from an MQTT
(Message Queuing Telemetry Transport) message.
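A minimal usage sketch, close to the documented converter usage:

    tcp-request inspect-delay 10s
    tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }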
This new script tests fix_is_valid() and fix_tag_value() converters used to
validate and extract information from a FIX (Financial Information eXchange)
message.
William noticed that the real issue in the abns test was that it was
failing to rebind some listeners, issues which were addressed in the
previous commits (in some cases, the master would even accept traffic
on the worker's socket which was not properly disabled, causing some
of the strange error messages).
The other issue documented in commit 2ea15a080 was the lack of
reliability when the test was run in parallel. This is caused by the
abns socket which uses a hard-coded address, making all tests run in
parallel step on each other's toes. Since vtest cannot provide abns
sockets, we instead concatenate to the abns path the number of the
listening port that vtest allocated for another frontend, which
guarantees that the paths are unique in the system.
The test works fine in all cases now, even with 100 in parallel.
When no Cache-Control max-age or s-maxage information is present in a
cached response, we need to parse the Expires header value (RFC 7234#5.3).
An invalid Expires date value or a date earlier than the reception date
will make the cache_entry stale upon creation.
For now, the Cache-Control and Expires headers are parsed after the
insertion of the response in the cache, so even if parsing the Expires
header results in an already stale entry, the entry will exist in the
cache.
The res.cache_hit sample fetch returns a boolean which is true when the
HTTP response was built out of a cache. The cache's name is returned by
the res.cache_name sample fetch.
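For instance, cache activity can be exposed to the client (header names
are illustrative):

    http-response set-header X-Cache-Hit %[res.cache_hit]
    http-response set-header X-Cache-Name %[res.cache_name]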
This resolves GitHub issue #900.
If a client sends a conditional request containing an If-Modified-Since
header (and no If-None-Match header), we try to compare the date with
the one stored in the cache entry (coming either from a Last-Modified
header, or a Date header, or corresponding to the first response's
reception time). If the request's date is earlier than the stored one,
we send a "304 Not Modified" response back. Otherwise, the stored
response is sent (through a 200 OK response).
This resolves GitHub issue #821.
This reg-test tests the "set ssl cert" command the same way the
set_ssl_cert.vtc does, but with separate key/crt files and with the
ssl-load-extra-del-ext.
It introduces new .key/.crt files that contain the same pair as the
existing .pem.
Test that the If-None-Match header is properly taken into account and
that, when the conditions are fulfilled, a "304 Not Modified" response
can be sent to the client.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Its sole remaining purpose was to display "proxy foo started", which
has little benefit and pollutes output for those with plenty of proxies.
Let's remove it now.
The VTCs were updated to reflect this, because many of them had explicit
counts of dropped lines to match this message.
This is tagged as MEDIUM because some users may be surprised by the
loss of this quite old message.
This test is inherently racy. It regularly pops up on the CI, and I've
spent one hour chasing a bug that apparently doesn't exist, just because
I'm running it 10 times in a row and it reports from 4 to 8 failures
when built at -O2 and generally even more at -O0. The logs are very
confusing, often reporting that it failed with status 0, with nothing
else wrong. I suspect it might sometimes be the shell command that fails
if it executes faster than haproxy finishes starting up, which would
also explain the relation with the optimization level. E.g:
> Testing with haproxy version: 2.2.0
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.009) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.008) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> 6 tests failed, 0 tests skipped, 4 tests passed
Some of the failures include this, suggesting that some barriers could
help:
---- h1 haproxy h1 PID file check failed:
Could not read PID file '/tmp/haregtests-2020-10-09_11-19-40.kgsDB4/vtc.30539.04dbea7f/h1/pid
Since it has been causing false positives and consumed way more
troubleshooting time than it saved, let's mark it as broken so that it
doesn't waste more time. We can bring it back when someone manages to
figure what the problem is.
Since the health-check refactoring in 2.2, checks through a socks4 proxy
are broken. To fix this bug, the CO_FL_SOCKS4 flag must be set on the
connection before calling the connect() callback function, because this
flag is checked to use the right destination address. The same is done
for the CO_FL_SEND_PROXY flag for consistency.
A reg-test has been added to test the "check-via-socks4" directive.
This patch must be backported to 2.2.
We used to set it to ${h1_px_addr} but it randomly fails on certain
hosts (FreeBSD and OSX) where the address is surprisingly set to "::1"
while the Host field contains 127.0.0.1 (hence two different address
families). While this is likely a minor issue in vtest, we don't need
to depend on this and can easily hard-code 127.0.0.1 which is already
used in other tests.
This adds "balance-rr" to test round robin, "balance-uri" to test the
default balance-uri method, and "balance-uri-path-only" which mixes H1
and H2 through "balance uri path-only" and verifies that they reach
the same server.
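The path-only variant relies on a setup along these lines (backend name
and addresses are illustrative):

    backend be
        balance uri path-only
        server s1 192.168.0.1:8080
        server s2 192.168.0.2:8080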
Note that for the latter, "proto h2" explicitly had to be placed on
the listening socket, otherwise it would time out. This may indicate an
issue in the H1->H2 upgrade, perhaps depending on how the H2 preface is
sent.
iif() takes a boolean as input and returns one of its two argument
strings depending on whether the boolean is true.
This converter most likely is most useful to return the proper scheme
depending on the value returned by the `ssl_fc` fetch, e.g. for use within
the `x-forwarded-proto` request header.
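For instance:

    http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)]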
However it can also be useful for use within a template that is sent to
the client using `http-request return` with a `lf-file`. It allows the
administrator to implement a simple condition, without needing to prefill
variables within the regular configuration using `http-request
set-var(req.foo)`.
This new script tests set-path/replace-path and set-pathq/replace-pathq
rules. It also tests path and pathq sample fetches.
This patch should be backported to 2.2 if corresponding keywords are also
backported.
A few regtests continue to regularly fail in highly loaded VMs because
they have very short timeouts. Actually the goal of running with short
timeouts was to make sure we do not uselessly wait during tests designed
to trigger them, but these timeouts here are never supposed to fire at
all, so they don't need to be kept in the 15-20ms range. They do not
pose any issue on any regular machine, but VMs are often suffering from
huge time jumps and cannot always produce responses in that short of a
time.
Just like with commit ce6fc25b1 ("REGTEST: increase timeouts on the
seamless-reload test"), let's raise these short timeouts to 1 second.
A few other ones remain set to 150-200ms and do not seem to cause any
issue. Some are actually expected to trigger so let's not touch them
for now.
Since commit f92afb732 ("MEDIUM: cfgparse: Emit hard error on truncated lines")
we now produce parsing errors on truncated lines, in an effort to clean
up dangerous or broken config files. And it turns out that one of our own
regression tests was suffering from this, as diagnosed by William and Tim.
The cause is the leading spaces in front of "} -start" that vtest makes
part of the output file, so the file finishes with a partial line made
of spaces.
Following work from Arjen and Mathilde, this adds ssl_{c,s}_chain_der
sample fetches; they return the DER-encoded certificates from
SSL_get_peer_cert_chain.
Also update existing vtc tests to add random intermediate certificates.
When getting the result through this header:
http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
bin, err := hex.DecodeString(cert)              // import "encoding/hex"
certsParsed, err := x509.ParseCertificates(bin) // import "crypto/x509"
Cc: Arjen Nienhuis <arjen@zorgdoc.nl>
Signed-off-by: Mathilde Gilles <m.gilles@criteo.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The parsing of http deny rules with no argument or with only the
deny_status argument is buggy if followed by an ACL expression (starting
with the "if" or "unless" keyword). Instead of using the proxy's
errorfiles, a dummy error is used. To fix the bug, the parsing function
must also check for the "if" or "unless" keyword in such cases.
This patch should fix the issue #720. No backport is needed.
Test the following ssl sample fetches:
ssl_c_der, ssl_c_sha1,hex, ssl_c_notafter, ssl_c_notbefore,
ssl_c_sig_alg, ssl_c_i_dn, ssl_c_s_dn, ssl_c_serial,hex, ssl_c_key_alg,
ssl_c_version
This reg-test could be used as far back as haproxy 1.6.
Test the following ssl sample fetches:
ssl_f_der, ssl_f_sha1,hex, ssl_f_notafter, ssl_f_notbefore,
ssl_f_sig_alg, ssl_f_i_dn, ssl_f_s_dn, ssl_f_serial,hex, ssl_f_key_alg,
ssl_f_version
This reg-test could be used as far back as haproxy 1.5.
This commit adds some sample fetches that were lacking on the server
side:
ssl_s_key_alg, ssl_s_notafter, ssl_s_notbefore, ssl_s_sig_alg,
ssl_s_i_dn, ssl_s_s_dn, ssl_s_serial, ssl_s_sha1, ssl_s_der,
ssl_s_version
Trailing slashes were not handled in crt-list commands on the CLI; they
can occur when you use the commands with a directory.
Strip the slashes before looking up the crtlist in the tree.
This reg-test tests spaces in an ACL file; it tries to add new
entries with spaces from the CLI.
This reg-test could be backported to all stable branches if the fix for
spaces on the CLI was backported.
This reg-test checks that sending unique IDs via PPv2 works for servers
with the `alpn` option specified (issue #640). As a side effect it also
checks that PPv2 works with ALPN (issue #651).
It has been verified that the test fails without the following commits
applied and succeeds with them applied.
1f9a4ecea BUG/MEDIUM: backend: set the connection owner to the session when using alpn.
083fd42d5 BUG/MEDIUM: connection: Ignore PP2 unique ID for stream-less connections
eb9ba3cb2 BUG/MINOR: connection: Always get the stream when available to send PP2 line
Without the first two commits HAProxy crashes during execution of the
test. Without the last commit the test will fail, because no unique ID
is received.
As discussed in GitHub issue #624 Lua scripts should not use
variables that are never going to be read, because the memory
for variable names is never going to be freed.
Add an optional `ifexist` parameter to the `set_var` function
that allows a Lua developer to set variables that are going to
be ignored if the variable name was not used elsewhere before.
Usually this means that there is no `var()` sample fetch for the
variable in question within the configuration.
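A minimal Lua sketch (the variable name is illustrative):

    -- ignored unless "txn.foo" is referenced elsewhere in the configuration
    txn:set_var("txn.foo", "bar", true)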
In tls_health_checks.vtc, when IPv6 addresses are used, a config error is
reported because of the "addr" server parameter. Because there is no
specified port, the IPv6 address must be enclosed in brackets to be
properly parsed. It also works with IPv4 addresses. But instead, a dummy
port is added to the addr parameter. This way, we also check that the
port parameter, when specified, takes precedence over the port found in
the addr parameter.
This patch should fix the issue #646.
When a check port or a check address is specified, the check transport layer is
ignored. So it is impossible to do an SSL check in this case. This bug was
introduced by the commit 8892e5d30 ("BUG/MEDIUM: server/checks: Init server
check during config validity check").
This patch should fix the issue #643. It must be backported to all branches
where the above commit was backported.
The http-error directive can now be used instead of errorfile to define an
error message in a proxy section (including defaults sections). This
directive uses the same syntax as http-return rules. The only real
difference is the limitation on the status codes that may be specified:
only status codes supported by errorfile directives are supported for this
new directive. Parsing of the errorfile directive remains independent from
http-error parsing. But functionally, it may be expressed in terms of
http-error:
errorfile <status> <file> ==> http-error status <status> errorfile <file>
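For instance, a sketch of a custom message without any file (status and
payload are illustrative):

    defaults
        http-error status 503 content-type "text/plain" string "maintenance"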
"http-request deny", "http-request tarpit" and "http-response deny" rules now
use the same syntax than http return rules and internally rely on the http
replies. The behaviour is not the same when no argument is specified (or only
the status code). For http replies, a dummy response is produced, with no
payload. For old deny/tarpit rules, the proxy's error messages are used. Thus,
to be compatible with existing configuration, the "default-errorfiles" parameter
is implied. For instance :
http-request deny deny_status 404
is now an alias of
http-request deny status 404 default-errorfiles
In the CLI command 'show ssl crt-list', the ssl-min-ver and the
ssl-max-ver arguments were always displayed because the dumped versions
were the actual versions computed and used by haproxy, instead of the
versions found in the configuration.
To fix the problem, this patch separates the variables to have one with
the configured version, and one with the actual version used. The dump
only shows the configured version.
MySQL 4.1 is old enough to be the default mode for mysql checks. So now, once a
username is defined, post-41 mode is automatically used. To run mysql
checks against older MySQL versions, the "pre-41" argument must be used.
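For instance (the username is illustrative):

    option mysql-check user haproxy         # post-41 mode by default
    option mysql-check user haproxy pre-41  # for pre-4.1 servers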
Note that this is a compatibility breakage for anyone using an antique
and unsupported MySQL version.
Make the digest and HMAC functions of OpenSSL accessible to the user via
converters. They can be used to sign and validate content.
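A minimal sketch (header names and the base64-encoded key are
illustrative):

    http-request set-header x-hash %[pathq,digest(sha256),hex]
    http-request set-header x-sig  %[pathq,hmac(sha256,a2V5),hex]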
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
For http-check send rules, it is now possible to use a log-format string
to set the request's body. The keyword "body-lf" should be used instead
of "body". If the string evaluation fails, no body is added.
For http-check send rules, it is now possible to use a log-format string
to set the request URI. The keyword "uri-lf" should be used instead of
"uri". If the string evaluation fails, we fall back on the default uri
"/".
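A hypothetical sketch combining both (URI and body are illustrative):

    option httpchk
    http-check send meth POST uri-lf "/health/%[env(HOSTNAME)]" \
        body-lf "{ \"node\": \"%[env(HOSTNAME)]\" }"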
Depending on the SSL library version, the reported error may differ when
the connection is rejected during the handshake. An empty handshake may
be detected, or just a generic handshake error. So tcp-check-ssl.vtc has
been adapted to support both error messages.
The extra parameters on http-check expect rules, for the header matching
method, to use a log-format string or to match a full header line, have
been removed. There are now separate matching methods to match a full
header line or to match each comma-separated value. "http-check expect
fhdr" must be used in the first case, and "http-check expect hdr" in the
second one. In addition, to match a log-format header name or value, the
"-lf" suffix must be added to the "name" or "value" keyword. For instance:
http-check expect hdr name "set-cookie" value-lf -m beg "sessid=%[var(check.cookie)]"
Thanks to these changes, each parameter may only be interpreted in one way.
All sample fetches in the scope "check." have been removed. Response sample
fetches must be used instead. It avoids keyword duplication. So, for instance,
res.hdr() must be now used instead of check.hdr().
To do so, following sample fetches have been added on the response :
* res.body, res.body_len and res.body_size
* res.hdrs and res.hdrs_bin
Sample fetches dealing with the response's body are only useful in the
health-check context. When called from a stream context, there is no
guarantee that the body is present. There is no option to wait for the
response's body.
It is now possible to add http-check expect rules matching HTTP header names and
values. Here is the format of these rules:
http-check expect header name [ -m <meth> ] <name> [log-format] \
[ value [ -m <meth> ] <value> [log-format] [full] ]
The name pattern (name ...) is mandatory but the value pattern (value ...)
is optional. If not specified, only the header presence is verified.
<meth> is the matching method, applied on the header name or the header
value. Supported matching methods are:
* "str" (exact match)
* "beg" (prefix match)
* "end" (suffix match)
* "sub" (substring match)
* "reg" (regex match)
If not specified, the exact matching method is used. If the "log-format"
option is used, the pattern (<name> or <value>) is evaluated as a
log-format string. This option cannot be used with the regex matching
method. Finally, by default, the header value is considered as a
comma-separated list, and each part may be tested. The "full" option may
be used to test the full header line. Note that matching is
case-insensitive on the header names.
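For instance, checking that a Set-Cookie header value starts with a given
prefix could look like this (header name and value are illustrative):

    http-check expect header name "set-cookie" value -m beg "sessid="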
The agent-check.vtc script fails from time to time because the 2nd cli
command is sent too early. Waiting for the connection close in the s1
server should be enough to be sure the server state is updated.
Improve the test by removing the curl command and using the same proxy
chaining technique as in commit 3ed722f ("REGTEST: ssl: remove curl from
the "add ssl crt-list" test").
A 3rd request was added which must fail, to ensure that the SNI was
effectively removed from HAProxy.
This patch also adds timeouts in the default section, logs on stderr and
fixes some indentation issues.
Using curl for SSL tests can be a problem if it wasn't compiled with the
right SSL library and if it didn't share any cipher with HAProxy. To
have more robust tests we now use HAProxy as an SSL client, so we are
sure that the client and the server share the same SSL requirements.
This patch also adds timeouts in the default section, logs on stderr and
fixes some indentation issues.
This reg-test tests the client auth feature of HAProxy for both the
backend and frontend sections with a CRL list.
This reg-test uses 2 chained listeners because vtest does not handle
SSL. It tests the frontend client auth and the backend side at the same
time.
It sends 3 requests: one with a correct certificate, one with an expired
one and one which was revoked. The client then checks that it received
the right response with the right error.
Certificates, CA and CRL expire in 2050 so it should be fine for the CI.
This test could be backported as far back as HAProxy 1.6.
This reverts commit 1979943c30ef285ed04f07ecf829514de971d9b2.
Captures in comments were only used when a tcp-check expect based on a
negative regex matching failed, to eventually report what was captured
while it was not expected. It is a bit far-fetched to be usable IMHO.
on-error and on-success log-format strings are far more usable. For now
there are few check sample fetches (in fact only one...). But it could
be really powerful to report info in logs.
Parse back-references in comments of tcp-check expect rules. If
references are made, capture groups in the match and replace references
to them within the comment when logging the error. Both text and binary
regexes can capture groups and reference them in the expect rule comment.
[Cf: I slightly updated the patch. exp_replace() function is used instead of a
custom one. And if the trash buffer is too small to contain the comment during
the substitution, the comment is ignored.]
Some expect rules cannot be satisfied due to inherent ambiguity in the
received data: in the absence of a match, the current behavior is to
wait either for the end of the connection or for a full buffer,
whichever comes first. Only then is the matching diagnostic considered
conclusive. For instance:
tcp-check connect
tcp-check expect !rstring "^error"
tcp-check expect string "valid"
This check will only succeed if the connection is closed by the server before
the check timeout. Otherwise the first expect rule will wait for more data until
"^error" regex matches or the check expires.
Allow the user to explicitly define an amount of data that will be
considered enough to determine the value of the check.
This allows succeeding on negative rstring rules: previously, in the
valid case where no match happened, the matching was repeated until the
end of the connection, which could time out the check while no error was
happening.
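A sketch of the example above using the new parameter (assuming the
final "min-recv" keyword; the threshold value is illustrative):

    tcp-check connect
    tcp-check expect min-recv 50 !rstring "^error"
    tcp-check expect string "valid"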
[Cf: I slightly updated the patch. The parameter was renamed and the
value is a signed integer to support -1 as a default value to ignore the
parameter.]
The 'http-check send' directive has been added to add headers and
optionally a payload to the request sent during HTTP health-checks. The
request line may be customized by the "option httpchk" directive but
there was no official way to add extra headers. An old trick consisted
in hiding these headers at the end of the version string on the "option
httpchk" line. And it was impossible to add an extra payload with an
"http-check expect" directive because of the "Connection: close" header
appended to the request (see issue #16 for details).
So to make things official and fully support payload additions, the
"http-check send" directive has been added:
option httpchk POST /status HTTP/1.1
http-check send hdr Content-Type "application/json;charset=UTF-8" \
hdr X-test-1 value1 hdr X-test-2 value2 \
body "{id: 1, field: \"value\"}"
When a payload is defined, the Content-Length header is automatically
added. So chunk-encoded requests are not supported yet. For now, there
are no special validity checks on the extra headers.
This patch is inspired by Kiran Gavali's work. It should fix issue #16
and, as far as possible, it may be backported, at least as far as 1.8.
Regtest unique-id.vtc was added by commit 5fcec84c58 ("REGTEST: Add
unique-id reg-test") but it relies on the "uuid" sample fetch which
is only available in version 2.0 and above. Let's reflect that in
the REQUIRE_VERSION tag.
Regtest proxy_protocol_tlv_validation was added by commit 488ee7fb6e
("BUG/MAJOR: proxy_protocol: Properly validate TLV lengths") but it
relies on a trick involving http-after-response to append a header
after a 400-badreq response, which is not possible in earlier versions,
so make it depend on 2.2.