Historically we've been counting lots of client-triggered events in stick
tables to help detect misbehaving clients, but we've been missing the same on
the server side, and there have been repeated requests for the ability to
count server errors per URL in order to precisely monitor the quality of
service, or even to avoid routing requests to certain dead services, which
is also called "circuit breaking" nowadays.
This commit introduces http_fail_cnt and http_fail_rate, which work like
http_err_cnt and http_err_rate in that they respectively count events and
their frequency, but they only consider server-side issues such as network
errors, unparsable and truncated responses, and 5xx status codes other
than 501 and 505 (since these ones are usually triggered by the client).
Note that retryable errors are purposely not accounted for, so that only
what the client really sees is considered.
With this it becomes very simple to put some protective measures in place
to perform a redirect or return an excuse page when the error rate goes
beyond a certain threshold for a given URL, and give the server more of a
chance to recover from this condition. Typically it could look like this
to bypass a URL causing more than 10 failures per minute:
    stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
    http-request track-sc0 base  # track host+path, ignore query string
    http-request return status 503 content-type text/html \
        lf-file excuse.html if { sc0_http_fail_rate gt 10 }
A more advanced mechanism using gpt0 could even implement high/low rates
to disable/enable the service.
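For instance, a rough hysteresis sketch, assuming the same string stick-table
as above; the thresholds and the use of sc-set-gpt0 here are illustrative, not
taken from this commit:

    stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m),gpt0
    http-request track-sc0 base
    # latch the "disabled" state above the high watermark
    http-request sc-set-gpt0(0) 1 if { sc0_http_fail_rate gt 20 }
    # release it once the rate falls under the low watermark
    http-request sc-set-gpt0(0) 0 if { sc0_get_gpt0 eq 1 } { sc0_http_fail_rate lt 2 }
    http-request return status 503 content-type text/html \
        lf-file excuse.html if { sc0_get_gpt0 eq 1 }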
Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
I saw some people falling back to the unix socket to collect some data they
could not find in the prometheus exporter. One of them is basic info from
stick tables (used/size).
I do not plan to extend it more for now; keys are quite a mess to
handle.
This should resolve github issue #1008.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
As noticed by Christopher, I messed up the version fix in commit
cb4ed02ef ("REGTESTS: mark http-check-send.vtc as 2.4-only"): while
looking up the commit introducing the change, I accidentally reverted it.
Let's reinsert the contents of the file prior to that fix, except for the
version, of course.
Commit 9eea56009 ("REGTESTS: add tests for the xxh3 converter") introduced
the xxh3 converter to the tests, making them incompatible with 2.3 and
older; let's upgrade the version requirement.
We can currently change the check port using the cli command `set server
check-port`, but there is a consistency issue when using server state.
This patch aims to fix this problem, and is also good preparation work
for getting rid of the checkport flag, so that we are able to know when
check-port was set by the config.
I am fully aware this does not move github #953 forward; I however think
this might be acceptable while waiting for a proper solution, and it
resolves the consistency problem faced with port settings.
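For the record, a minimal runtime invocation sketch (the stats socket path
is hypothetical; the backend/server names reuse the config sample shown
later in this series):

    echo "set server be_foo/srv0 check-port 8080" | socat stdio /var/run/haproxy.sock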
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Flush the SSL session cache when updating a certificate which is used on a
server line. This prevents connections from being established with a cached
session which was using the previous SSL_CTX.
This patch also replaces the ha_barrier with a thread_isolate() since there
are more operations to do. The reg-test was also updated to remove the
'no-ssl-reuse' keyword which is now unneeded.
The "abort ssl cert" command is buggy and removes the current ckch store,
and instances, leading to SNI removal. It must only removes the new one.
This patch also adds a check in set_ssl_cert.vtc and
set_ssl_server_cert.vtc.
Must be backported as far as 2.2.
In a previous commit this test was disabled because I thought the
feature was broken, but in fact it is the test which is broken.
Indeed, the connection between the server and the client was not
renegotiated and was using the SSL cache or a ticket. For it to work
correctly, these two features must be disabled, or a new connection must
be established after the ticket timeout, which is too long for a regtest.
A "nbthread 1" was also added, as it was easier to reproduce the problem
with it.
Now, some conformance tests are performed when an HTTP connection is
upgraded to websocket. This makes the http-check-send.vtc script fail for
the backend <be6_ws>. Because the purpose of this health check is to pass a
"Connection: Upgrade" header on an http-check send rule, we may use a dummy
protocol instead.
Test the conformance of haproxy with websocket RFC 6455. In particular, if
a missing key is detected on an h1 message, haproxy must close the
connection.
Note that the h2 client/h1 server case is not tested, as I did not find a
way to calculate the key on the server side.
The EOM block will be removed in 2.4, thus this script will be broken on
that version, so it is now skipped there. It remains valid for 2.3
and 2.2.
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part).
It is only a problem on the response path, because the request payload length
is always known. But when a filter is registered to analyze the response
payload, the filtering may hang if the server closes just after the headers.
The root cause of the bug comes from an attempt to allow the filters to not
immediately forward the headers if necessary. A filter may choose to hold
the headers by not forwarding any bytes of the payload. For a message with
no payload but a known payload length, there is always an EOM block to
forward. Thus holding the EOM block for bodyless messages is a good way to
also hold the headers. However, for messages with an unknown payload length,
there is no EOM block finishing the message, only a SHUTR flag on the
channel to mark the end of the stream. If there is no payload when it
happens, there is no payload at all to forward. In the filters API, this is
wrongly detected as a condition to not forward the headers.
Because it is not the most used feature and not the obvious one, this patch
introduces another way to hold the message headers at the beginning of the
forwarding. A filter flag is added to explicitly say the headers should be
held. A filter may choose to set the STRM_FLT_FL_HOLD_HTTP_HDRS flag and not
forward anything to hold the headers. This flag is removed at each call, thus
it must always be explicitly set by filters. This flag is only evaluated if
no byte has ever been forwarded, because the headers are forwarded with the
first byte of the payload.
reg-tests/filters/random-forwarding.vtc reg-test is updated to also test
responses with unknown payload length (with and without payload).
This patch must be backported as far as 2.0.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
there does not seem to be a reason to close connections after each
request; reflect that in the tests by doing all requests within the same
client.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Add a base test to start with something, even though this is not
necessarily complete.
Also make use of the recent REQUIRE_SERVICE option to exclude it from the
test list if haproxy was not built with prometheus included.
note: I thought it was possible to send multiple requests within the
same client, but I'm getting "HTTP header is incomplete" from the second
request
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list; it is always
  an array, so the first check can be removed
- the expression parser guarantees that we can't have anything else,
  because we mentioned the json converter takes a mandatory string
  argument, thus the test on `ARGT_STR` can be removed as well
- also add a line break between the enum and the function declaration
In order to validate it, add a simple json test covering very simple
cases that can be improved in the future (a usage sketch follows the list):
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
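A minimal usage sketch (the log-format line is hypothetical, not part of
the test):

    # emit the request URI as a JSON string; utf8s drops invalid sequences
    log-format '{"uri":"%[capture.req.uri,json(utf8s)]"}'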
Signed-off-by: William Dauchy <wdauchy@gmail.com>
add base support for url encoding following RFC3986, supporting the
`query` type only (a usage sketch follows the list below).
- add test checking url_enc/url_dec/url_enc
- update documentation
- leave the door open for future changes
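A minimal usage sketch, assuming hypothetical variable and header names:

    # round-trip the request URL through the converter
    http-request set-var(txn.u) url,url_enc
    http-request set-header x-url-encoded %[var(txn.u)]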
this should resolve github issue #941
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This patch fixes GitHub Issue #988. Commit ce9e7b2521
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that
the cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.
The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client had two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string. So another client that only supported
gzip would have been sent the cached resource, even if it was a br resource.
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
A request that has more than 16 encodings will be considered an
illegitimate request and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
Add a new check for a pseudo-websocket handshake, specifying the
Connection header to verify that it is properly handled by the http-check
send directive. Also check that default http/1.1 checks have the header
Connection: close.
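A rough sketch of what such a check may look like (backend name and header
values are illustrative, not copied from the vtc):

    backend be_ws
        option httpchk
        http-check send meth GET uri / ver HTTP/1.1 \
            hdr host www hdr connection upgrade hdr upgrade websocket
        server srv0 127.0.0.1:8080 check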
In case of successful unsafe method on a stored resource, the cached entry
must be invalidated (see RFC7234#4.4).
A "non-error response" is one with a 2xx (Successful) or 3xx (Redirection)
status code.
This implies that the primary hash must now be calculated on requests
that have an unsafe method (POST or PUT for instance) so that we can
disable the corresponding entries when we process the response.
When a response has an Age header (filled in by another cache on the
message's path) that is greater than its defined maximum age (extracted
either from cache-control directives or an expires header), it is
already stale and should not be cached.
This is simply txn_get_priv.vtc with the loading made using
lua-load-per-thread, allowing all threads to run independently.
There's nothing global nor thread-specific in this test, which makes it
an excellent example of something that should work out of the box.
Four concurrent clients are initialized with 4 loops each so as to
give a little chance to various threads to run concurrently.
Turn the "Accept-Encoding" value to lower case before processing it.
Calculate the CRC on every token instead of a sorted concatenation of
them all (in order to avoir copying them) then XOR all the CRCs into a
single hash (while ignoring duplicates).
This patch adds a new logging variable '%HPO' for logging HTTP path only
(without query string) from relative or absolute URI.
For example:

    log-format "hpo=%HPO hp=%HP hu=%HU hq=%HQ"

    GET /r/1 HTTP/1.1
      => hpo=/r/1 hp=/r/1 hu=/r/1 hq=
    GET /r/2?q=2 HTTP/1.1
      => hpo=/r/2 hp=/r/2 hu=/r/2?q=2 hq=?q=2
    GET http://host/r/3 HTTP/1.1
      => hpo=/r/3 hp=http://host/r/3 hu=http://host/r/3 hq=
    GET http://host/r/4?q=4 HTTP/1.1
      => hpo=/r/4 hp=http://host/r/4 hu=http://host/r/4?q=4 hq=?q=4
Historically, the input and output buffers of a check are allocated by hand
during startup, with a specific size (not necessarily the same as other
buffers). But since the recent refactoring of the checks to rely
exclusively on the tcp-checks and to use the underlying mux layer, this part
is totally buggy. Indeed, because these buffers are now passed to a mux,
they may be swapped if a zero-copy is possible. In fact, for now this is
only possible in h2_rcv_buf(). Thus the bug concretely only exists if an h2
health check is performed. But it is a latent bug for other muxes.
Another problem is the size of these buffers. Because it may differ from the
other buffers' size, it might be a source of bugs.
Finally, for configurations with hundreds of thousands of servers, having 2
buffers per check always allocated may be an issue.
To fix the bug, we now allocate these buffers when required, using the buffer
pool. Thus non-running checks don't waste memory, and muxes may swap them if
possible. The only drawback is that the check buffers now always have the
same size as the buffers used by the streams. This indirectly deprecates the
"tune.chksize" global option.
In addition, the http-check regtest has been updated to perform some h2
health checks.
Many thanks to @VigneshSP94 for their help on this bug.
This patch should solve issue #936. It relies on the commit "MINOR:
tcpcheck: Don't handle anymore in-progress send rules in tcpcheck_main".
Both must be backported as far as 2.2.
The cache section's process-vary option takes a 0 or 1 value to disable
or enable the vary processing.
When disabled, a response containing such a header will never be cached.
When enabled, we will calculate a preliminary hash for a subset of request
headers on all incoming requests (which might come with a CPU cost); it
will be used to build a secondary key for a given request (see RFC 7234#4.1).
The default value is 0 (disabled).
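A minimal configuration sketch (cache name and sizes are hypothetical):

    cache web-cache
        total-max-size 64
        max-object-size 16384
        process-vary 1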
Calculate a preliminary secondary key for every request we see so that
we can have a real secondary key if the response is cacheable and
contains a manageable Vary header.
The cache's ebtree is now allowed to have multiple entries with the same
primary key. Two of those entries will be distinguished thanks to
secondary keys stored in the cache_entry (based on hashes of a subset of
their headers).
When looking for an entry in the cache (cache_use), we still use the
primary key (built the same way as before), but in case of match, we
also need to check if the entry has a vary signature. If it has one, we
need to perform an extra check based on the newly built secondary key.
We will only be able to forge a response out of the cache if both the
primary and secondary keys match one of our entries. Otherwise the
request will be forwarded to the server.
This patch adds the -m flag, which allows specifying the header name
matching method when deleting headers from an HTTP request/response.
Currently beg, end, sub, str and reg are supported (see the sketch below).
This is related to GitHub issue #909
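A short sketch of the flag in use (the header names are hypothetical):

    # drop every request header starting with "x-private-"
    http-request del-header x-private- -m beg
    # drop response headers matching a regex
    http-response del-header ^x-debug- -m reg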
As indicated in issue #907 it very frequently fails on FreeBSD, and
looking at it, it's broken in multiple ways. It relies on log ordering
between two layers, the first one being allowed to support h2. Given
that on FreeBSD it usually ends up in timeouts, it's very likely that
for some reason one frontend logs before the other one or that curl
uses h2 instead of h1 there, and that the log instance waits forever
for its lines.
Usually it works fine when run locally though, so let's not remove it
and mark it as broken instead so that it can still be used when relevant.
in the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:

    backend be_foo
      default-server ssl
      server srv0 127.0.0.1:6011 weight 1 no-ssl

show servers state:

    5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1

where srv0 can switch to ssl later during the runtime:

    set server be_foo/srv0 ssl on
    5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
while looking at the `url_dec` implementation I realised there was not yet a
simple test to avoid future regressions.
This one tests simple cases, including the "+" behaviour depending
on the argument passed to `url_dec`.
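A minimal sketch of the two behaviours (the variable names are
hypothetical):

    # default behaviour
    http-request set-var(txn.raw) url_param(q),url_dec
    # with in_form set to 1, "+" is decoded as a space
    http-request set-var(txn.form) url_param(q),url_dec(1)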
Signed-off-by: William Dauchy <wdauchy@gmail.com>
There is apparently a race in this test that would require relying on
haproxy's output to make it reliably work, but currently vtest doesn't
have this option. Let's mark it broken again to avoid polluting the CI.
This is discussed in github issue #950.
Three filters are used. The compression filter is enclosed by two trace
filters, both with random forwarding enabled. An HTTP test is performed,
but also a TCP test using an HTTP tunnel.
Add a reg-test verifying the fix in dea7c209f8.
Some parts of the configuration used in the test were taken from the
initial bug report from Maciej.
Should be backported together with dea7c209f8
(all stable versions).
Co-authored-by: Maciej Zdeb <maciej@zdeb.pl>
Do not cache responses that do not have an explicit expiration time
(s-maxage or max-age Cache-Control directives or Expires header) or a
validator (ETag or Last-Modified headers) anymore, as suggested in
RFC 7234#3.
The TX_FLAG_IGNORE flag is used instead of the TX_FLAG_CACHEABLE so as
not to change the behavior of the checkcache option.
This regtest requires a version of OpenSSL which supports the
ClientHello callback, which is only the case for recent SSL libraries
(openssl 1.1.1).
This was reported in issue #944.
In issue #940, it was reported that the crt-list does not work correctly
anymore. Indeed, when inserting a crt-list line which uses a certificate
previously seen in the crt-list, this one won't be inserted in the SNI
list and will be silently ignored.
This bug was introduced by commit 47da821 "MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist".
This patch also includes a reg-test which tests this issue.
This bugfix must be backported in 2.3.
This test checks that bugs #818 and #810 are fixed.
It tests that there is no inconsistency with multiple certificate types and
that the exclusion of certificates correctly works with a negative
filter.
This new script tests mqtt_is_valid() and mqtt_get_field_value() converters used
to validate and extract information from a MQTT (Message Queuing Telemetry
Transport) message.
This new script tests fix_is_valid() and fix_tag_value() converters used to
validate and extract information from a FIX (Financial Information eXchange)
message.
William noticed that the real issue in the abns test was that it was
failing to rebind some listeners, issues which were addressed in the
previous commits (in some cases, the master would even accept traffic
on the worker's socket which was not properly disabled, causing some
of the strange error messages).
The other issue documented in commit 2ea15a080 was the lack of
reliability when the test was run in parallel. This is caused by the
abns socket which uses a hard-coded address, making all tests run in
parallel step on each other's toes. Since vtest cannot provide
abns sockets, we're instead concatenating the number of the listening
port that vtest allocated for another frontend to the abns path, which
guarantees to make them unique in the system.
The test works fine in all cases now, even with 100 in parallel.
When no Cache-Control max-age or s-maxage information is present in a
cached response, we need to parse the Expires header value (RFC 7234#5.3).
An invalid Expires date value or a date earlier than the reception date
will make the cache_entry stale upon creation.
For now, the Cache-Control and Expires headers are parsed after the
insertion of the response in the cache so even if the parsing of the
Expires results in an already stale entry, the entry will exist in the
cache.
The res.cache_hit sample fetch returns a boolean which is true when the HTTP
response was built out of a cache. The cache's name is returned by the
res.cache_name sample fetch.
This resolves GitHub issue #900.
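A minimal usage sketch (the header names are hypothetical):

    http-response set-header x-cache %[res.cache_hit,iif(HIT,MISS)]
    http-response set-header x-cache-name %[res.cache_name] if { res.cache_hit }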
If a client sends a conditional request containing an If-Modified-Since
header (and no If-None-Match header), we try to compare the date with
the one stored in the cache entry (coming either from a Last-Modified
header, or a Date header, or corresponding to the first response's
reception time). If the request's date is earlier than the stored one,
we send a "304 Not Modified" response back. Otherwise, the stored
response is sent (through a 200 OK response).
This resolves GitHub issue #821.
This reg-test tests the "set ssl cert" command the same way
set_ssl_cert.vtc does, but with separate .key/.crt files and with
ssl-load-extra-del-ext.
It introduces new .key/.crt files that contain the same pair as the
existing .pem.
Test that the If-None-Match header is properly taken into account and
that, when the conditions are fulfilled, a "304 Not Modified" response can
be sent to the client.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Its sole remaining purpose was to display "proxy foo started", which
has little benefit and pollutes output for those with plenty of proxies.
Let's remove it now.
The VTCs were updated to reflect this, because many of them had explicit
counts of dropped lines to match this message.
This is tagged as MEDIUM because some users may be surprised by the
loss of this quite old message.
This test is inherently racy. It regularly pops up on the CI, and I've
spent one hour chasing a bug that apparently doesn't exist, just because
I'm running it 10 times in a row and it reports from 4 to 8 failures
when built at -O2 and generally even more at -O0. The logs are very
confusing, often reporting that it failed with status 0, with nothing
else wrong. I suspect it might sometimes be the shell command that fails
if it executes before haproxy has finished starting up, which would
also explain the relation with the optimization level. E.g:
> Testing with haproxy version: 2.2.0
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.006) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.009) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.008) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> # top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (3.007) exit=2
> 6 tests failed, 0 tests skipped, 4 tests passed
Some of the failures include this, suggesting that some barriers could
help:
---- h1 haproxy h1 PID file check failed:
Could not read PID file '/tmp/haregtests-2020-10-09_11-19-40.kgsDB4/vtc.30539.04dbea7f/h1/pid
Since it has been causing false positives and consumed way more
troubleshooting time than it saved, let's mark it as broken so that it
doesn't waste more time. We can bring it back when someone manages to
figure what the problem is.
Since the health-check refactoring in 2.2, checks through a socks4 proxy
are broken. To fix this bug, the CO_FL_SOCKS4 flag must be set on the
connection before calling the connect() callback function, because this
flag is checked to use the right destination address. The same is done for
the CO_FL_SEND_PROXY flag for consistency.
A reg-test has been added to test the "check-via-socks4" directive.
This patch must be backported to 2.2.
We used to set it to ${h1_px_addr} but it randomly fails on certain
hosts (FreeBSD and OSX) where the address is surprisingly set to "::1"
while the Host field contains 127.0.0.1 (hence two different address
families). While this is likely a minor issue in vtest, we don't need
to depend on this and can easily hard-code 127.0.0.1 which is already
used in other tests.
This adds "balance-rr" to test round robin, "balance-uri" to test the
default balance-uri method, and "balance-uri-path-only" which mixes H1
and H2 through "balance uri path-only" and verifies that they reach
the same server.
Note that for the latter, "proto h2" explicitly had to be placed on
the listening socket, otherwise it would time out. This may indicate an
issue in the H1->H2 upgrade, depending on how the H2 preface is sent.
iif() takes a boolean as input and returns one of the two argument
strings depending on whether the boolean is true.
This converter most likely is most useful to return the proper scheme
depending on the value returned by the `ssl_fc` fetch, e.g. for use within
the `x-forwarded-proto` request header.
However it can also be useful for use within a template that is sent to
the client using `http-request return` with a `lf-file`. It allows the
administrator to implement a simple condition, without needing to prefill
variables within the regular configuration using `http-request
set-var(req.foo)`.
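As a sketch, the x-forwarded-proto case mentioned above could look like
this:

    http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)]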
This new script tests set-path/replace-path and set-pathq/replace-pathq
rules. It also tests path and pathq sample fetches.
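For reference, a minimal sketch of the kind of rules being exercised (the
prefix and formats are hypothetical, not copied from the script):

    # prepend a prefix to the path, keeping the query string untouched
    http-request replace-path (.*) /prefix\1
    # rewrite both the path and the query string at once
    http-request set-pathq /canonical?sorted=1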
This patch should be backported to 2.2 if corresponding keywords are also
backported.
A few regtests continue to regularly fail in highly loaded VMs because
they have very short timeouts. Actually the goal of running with short
timeouts was to make sure we do not uselessly wait during tests designed
to trigger them, but these timeouts here are never supposed to fire at
all, so they don't need to be kept in the 15-20ms range. They do not
pose any issue on any regular machine, but VMs are often suffering from
huge time jumps and cannot always produce responses in that short of a
time.
Just like with commit ce6fc25b1 ("REGTEST: increase timeouts on the
seamless-reload test"), let's raise these short timeouts to 1 second.
A few other ones remain set to 150-200ms and do not seem to cause any
issue. Some are actually expected to trigger so let's not touch them
for now.
Since commit f92afb732 ("MEDIUM: cfgparse: Emit hard error on truncated lines")
we now produce parsing errors on truncated lines, in an effort to clean
up dangerous or broken config files. And it turns out that one of our own
regression tests was suffering from this, as diagnosed by William and Tim.
The cause is the leading spaces in front of "} -start" that vtest makes
part of the output file, so the file finishes with a partial line made
of spaces.
Following work from Arjen and Mathilde, this adds ssl_{c,s}_chain_der
methods; they return DER-encoded certs from SSL_get_peer_cert_chain.
Also update existing vtc tests to add random intermediate certificates.
When getting the result through this header:
http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
    bin, err := hex.DecodeString(cert)               // import "encoding/hex"
    certs_parsed, err := x509.ParseCertificates(bin) // import "crypto/x509"
Cc: Arjen Nienhuis <arjen@zorgdoc.nl>
Signed-off-by: Mathilde Gilles <m.gilles@criteo.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The parsing of http deny rules with no argument, or with only the deny_status
argument, is buggy if followed by an ACL expression (starting with the "if" or
"unless" keyword). Instead of using the proxy errorfiles, a dummy error is
used. To fix the bug, the parsing function must also check for the "if" or
"unless" keyword in such cases.
This patch should fix issue #720. No backport is needed.
Test the following ssl sample fetches:
ssl_c_der, ssl_c_sha1,hex, ssl_c_notafter, ssl_c_notbefore,
ssl_c_sig_alg, ssl_c_i_dn, ssl_c_s_dn, ssl_c_serial,hex, ssl_c_key_alg,
ssl_c_version
This reg-test could be used as far as haproxy 1.6.
Test the following ssl sample fetches:
ssl_f_der, ssl_f_sha1,hex, ssl_f_notafter, ssl_f_notbefore,
ssl_f_sig_alg, ssl_f_i_dn, ssl_f_s_dn, ssl_f_serial,hex, ssl_f_key_alg,
ssl_f_version
This reg-test could be used as far as haproxy 1.5.
This commit adds some sample fetches that were lacking on the server
side:
ssl_s_key_alg, ssl_s_notafter, ssl_s_notbefore, ssl_s_sig_alg,
ssl_s_i_dn, ssl_s_s_dn, ssl_s_serial, ssl_s_sha1, ssl_s_der,
ssl_s_version
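A minimal usage sketch (the header name is hypothetical):

    # expose the server certificate's expiry date on the response
    http-response set-header x-ssl-server-notafter %[ssl_s_notafter]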
Trailing slashes were not handled in crt-list commands on the CLI;
handling them can be useful when you use the commands with a directory.
Strip the slashes before looking up the crtlist in the tree.
This reg-test tests spaces in an ACL file; it tries to add new
entries with spaces from the CLI.
This reg-test could be backported to all stable branches if the fix for
spaces on the CLI was backported.
This reg-test checks that sending unique IDs via PPv2 works for servers
with the `alpn` option specified (issue #640). As a side effect it also
checks that PPv2 works with ALPN (issue #651).
It has been verified that the test fails without the following commits
applied and succeeds with them applied.
1f9a4ecea BUG/MEDIUM: backend: set the connection owner to the session when using alpn.
083fd42d5 BUG/MEDIUM: connection: Ignore PP2 unique ID for stream-less connections
eb9ba3cb2 BUG/MINOR: connection: Always get the stream when available to send PP2 line
Without the first two commits HAProxy crashes during execution of the
test. Without the last commit the test will fail, because no unique ID
is received.
As discussed in GitHub issue #624 Lua scripts should not use
variables that are never going to be read, because the memory
for variable names is never going to be freed.
Add an optional `ifexist` parameter to the `set_var` function
that allows a Lua developer to set variables that are going to
be ignored if the variable name was not used elsewhere before.
Usually this means that there is no `var()` sample fetch for the
variable in question within the configuration.