When creating global variables in the configuration file, we use a dummy
temporary proxy. It was copied from the CLI's code and was mistakenly
called "CLI"; better name it "CFG". It should not appear anywhere except
maybe when debugging cores.
When attempting to set a variable whose name does not start with the
"proc" scope on the CLI, we used to emit "only proc is permitted in the
global section", which obviously is a leftover from the initial code.
This may be backported to 2.4.
When a variable starts with the wrong scope, it is reported without
stripping the extra characters that follow it, usually a closing
parenthesis. Let's make sure we only report what is expected.
This may be backported to 2.4.
In commit 9a621ae76 ("MEDIUM: vars: add a new "set-var-fmt" action")
we introduced the support for format strings in variables with the
ability to release them on exit, except that it's the wrong list that
was being scanned for the rule (http vs vars), resulting in random
crashes during deinit.
This was a recent commit in 2.5-dev, no backport is needed.
The set-var() action is convenient because it preserves the input type,
but it's a pain to deal with when trying to concatenate values. The
most recurring example is when one needs to build a variable composed
of the source address and the source port. Usually it ends up like this:
tcp-request session set-var(sess.port) src_port
tcp-request session set-var(sess.addr) src,concat(":",sess.port)
This is even worse when trying to aggregate multiple fields, for example
from stick-table data. Because of this, a lot of users instead abuse
headers from HTTP rules:
http-request set-header(x-addr) %[src]:%[src_port]
But this requires some careful cleanups to make sure they won't leak, and
it's significantly more expensive to deal with. And generally speaking it's
not clean. Plus it must be performed for each and every request, which is
expensive for this common case of ip+port that doesn't change for the whole
session.
This patch addresses this limitation by implementing a new "set-var-fmt"
action which performs the same work as "set-var" but takes a format
string as its argument instead of an expression. This way it becomes
pretty simple to just write:
tcp-request session set-var-fmt(sess.addr) %[src]:%[src_port]
It is usable in all rulesets that already support the "set-var" action.
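For instance (an illustrative sketch, not from the patch itself), the
same applies to HTTP rulesets:

    http-request set-var-fmt(txn.from) %[src]:%[src_port]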
It is not yet implemented for the global "set-var" directive (which
already takes a string) nor for the CLI's "set var" command; both would
definitely benefit from it but currently use their own parser and
engine, so they must first be reworked.
The doc and regtests were updated.
When the expression called in "set-var" uses arguments that require late
resolution, the context must be set. At the moment, any unknown argument
is misleadingly reported as "ACL":
frontend f
bind :8080
mode http
http-request set-var(proc.a) be_conn(foo)
parsing [b1.cfg:4]: unable to find backend 'foo' referenced in arg 1 \
of ACL keyword 'be_conn' in proxy 'f'.
Once the context is properly set, it now tells the truth:
parsing [b1.cfg:8]: unable to find backend 'foo' referenced in arg 1 \
of sample fetch keyword 'be_conn' in http-request expression in proxy 'f'.
This may be backported but is not really important. If so, the preceding
patches "BUG/MINOR: vars: improve accuracy of the rules used to check
expression validity" and "MINOR: sample: add missing ARGC_ entries" must
be backported as well.
For a long time we couldn't have arguments in expressions used in
tcp-request, tcp-response, etc. rules. But now, due to the variables,
it's possible, and their context in case of failure to resolve an
argument (e.g. a backend name not found) is not properly reported
because there are no arg context values in ARGC_* to report them.
Let's add a number of missing ones for tcp-request {connection,
session,content}, tcp-response content, tcp-check, the config
parser (for "set-var" in the global section) and the CLI parser
(for "set-var" on the CLI).
The set-var() expression naturally checks whether expressions are valid
in the context of the rule, but it fails to differentiate frontends from
backends. As such, for tcp-content and http-request rules, it will only
accept frontend-compatible sample fetches, excluding those declared with
SMP_USE_BKEND (a few such as be_id and be_name). For the response it
accepts the backend-compatible expressions only, though it seems that
there are no sample-fetch functions that are valid only in the
frontend's content, so that should not cause any problem.
Note that while allowing valid configs to be used, the fix might also
uncover some incorrect configurations where some expressions currently
return nothing (e.g. something depending on the frontend but declared in
a backend) and which could now be rejected, but there does not seem to
be any such keyword. Thus while it should be backported, better not
backport it too far (2.4 and possibly 2.3 only).
The parser checks for "set-var" and then for "unset-var" from the
updated offset, instead of testing the latter only when the former
fails, so it validates this rule as "unset-var":
http-request set-varunset-var(proc.a)
This should be backported everywhere relevant, though it's mostly harmless
as it's unlikely that some users are purposely writing this in their conf!
Sometimes it is convenient to remap large sets of URIs to new ones
(e.g. after a site migration). This can be achieved using "http-request
redirect" combined with maps, but one difficulty there is that
non-matching entries will return an empty response. In order to avoid
this, duplicating the operation as an ACL condition ending in "-m found"
is possible, but it becomes complex and error-prone, while it's known
that an empty URL is not valid in a Location header.
This patch addresses this by improving the redirect rules to be able to
simply ignore the rule and skip to the next one if the result of the
evaluation of the "location" expression is empty. However in order not
to break existing setups, it requires a new "ignore-empty" keyword.
There used to be an ACT_FLAG_FINAL on redirect rules, used during
parsing to emit a warning if the rule is followed by another one, so
here we only set it if the option is not there. The
http_apply_redirect_rule() function now returns a 3rd value to mention
that it did nothing and that this was not an error, so that callers can
just ignore the rule. The regular "redirect" rules were not modified,
however, since this does not apply there.
The map_redirect VTC was completed with such a test and updated to 2.5,
and an example was added to the documentation.
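For example (an illustrative sketch; the map file and its entries are
assumptions), non-matching paths simply fall through to the next rule:

    http-request redirect location %[path,map(/etc/haproxy/redirects.map)] code 301 ignore-empty
    http-request redirect location /sorry code 302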
The bc_conn_err and bc_conn_err_str sample fetches give the status of
the connection on the backend side. The error codes and error messages
are the same as the ones that can be raised by the fc_conn_err fetch.
This new sample fetch, along with the ssl_bc_hsk_err_str fetch, contains
the last SSL error of the error stack that occurred during the SSL
handshake (from the backend's perspective).
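As an illustration (a sketch, not part of the patches; the field
selection is arbitrary), these fetches can be referenced from a custom
log format:

    log-format "%ci:%cp %ft %b/%s err=%[bc_conn_err] %[bc_conn_err_str] ssl=%[ssl_bc_hsk_err_str]"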
The locking in the dequeuing process was significantly improved by commit
49667c14b ("MEDIUM: queue: take the proxy lock only during the px queue
accesses") in that it tries hard to limit the time during which the
proxy's queue lock is held to the strict minimum. Unfortunately it's not
enough anymore, because we pick up the task and manipulate a few
pendconn elements after releasing the proxy's lock (while we're under
the server's lock), but the task will not necessarily hold the server
lock since it may not have successfully found one (e.g. a timeout in
the backend queue). As
such, stream_free() calling pendconn_free() may release the pendconn
immediately after the proxy's lock is released while the other thread
currently proceeding with the dequeuing tries to wake up the owner's
task and dies in task_wakeup().
One solution consists in releasing the proxy's lock later. But tests have
shown that we'd have to sacrifice a significant share of the performance
gained with the patch above (roughly a 20% loss).
This patch takes another approach. It adds a "del_lock" to each pendconn
struct, which allows the pendconn to be kept referenced while the proxy's
lock is being released. It's mostly a serialization lock like a refcount,
just to maintain
the pendconn alive till the task_wakeup() call is complete. This way we can
continue to release the proxy's lock early while keeping this one. It had
to be added to the few points where we're about to free a pendconn, namely
in pendconn_dequeue() and pendconn_unlink(). This way we continue to
release the proxy's lock very early and there is no performance degradation.
This lock may only be held under the queue's lock to prevent lock
inversion.
No backport is needed since the patch above was merged in 2.5-dev only.
This option can be used to define a specific log format that will be
used in case of an error, timeout, connection failure on a frontend,
etc. It will be used for any log line concerned by the
log-separate-errors option. It will also replace the format of the
specific error messages described in section 8.2.6.
If no "error-log-format" is defined, the legacy error messages are still
emitted and the other error logs keep using the regular log-format.
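A minimal sketch of how the two formats might coexist in a frontend
(the field selection is purely illustrative):

    frontend fe
        bind :8080
        mode http
        option log-separate-errors
        log-format "%ci:%cp [%tr] %ft %b/%s %ST %B"
        error-log-format "%ci:%cp [%t] %ft: connection error"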
This option will be replaced by an "error-log-format" option, which
makes it possible to use a dedicated log-format for connection error
messages instead of the regular log-format (in which most of the fields
would be invalid in such a case).
The "log-error-via-logformat" mechanism will then be replaced by a test
on the presence of such an error log format or not. If a format is
defined, it is used for connection error messages, otherwise the legacy
error log format is used.
One was in backend.c and the other one in hlua.c. No other candidate
was found with "git grep '^#if\s*USE'". It's worth noting that 3
other such tests exist for SSL_OP_NO_{SSLv3,TLSv1_1,TLSv1_2} but
that these ones are properly set to 0 in openssl-compat.h when not
defined.
The condition should first check whether `bsize` is reached, before
dereferencing the offset. Even if this always works fine, due to the
string being null-terminated, this certainly looks odd.
Found using GitHub's CodeQL scan.
This bug traces back to at least 97c2ae13bc0d7961a348102d6719fbcaf34d46d5
(1.7.0+) and this patch should be backported accordingly.
Using localtime / gmtime is not thread-safe, whereas the `get_*` wrappers are.
Found using GitHub's CodeQL scan.
The use in sample_conv_ltime() can be traced back to at least
fac9ccfb705702f211f99e67d5f5d5129002086a (first appearing in 1.6-dev3), so all
supported branches with thread support are affected.
The test on FIND_OPTIMAL_MATCH for the experimental code can yield a
build warning when using -Wundef; let's turn it into a regular ifdef.
This is slz upstream commit 05630ae8f22b71022803809eb1e7deb707bb30fb
Before threads were introduced in 1.8, idle_pct used to be a global
variable indicating the overall process idle time. Threads made it
thread-local, meaning that its reporting in the stats made little
sense, though this was not easy to spot. In 2.0, the idle_pct variable
moved to the struct thread_info via commit 81036f273 ("MINOR: time:
move the cpu, mono, and idle time to thread_info"). It made it more
obvious that the idle_pct was per thread, and also allowed it to be
measured more accurately. But no more effort was made in that direction.
This patch introduces a new report_idle() function that accurately
averages the per-thread idle time over all running threads (i.e. it
should remain valid even if some threads are paused or stopped), and
makes use of it in the stats / "show info" reports.
Sending traffic over only two connections of an 8-thread process
would previously show this erratic CPU usage pattern:
$ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
Idle_pct: 30
Idle_pct: 35
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 35
Idle_pct: 33
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Now it shows this more accurate measurement:
$ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
This is not technically a bug but this lack of precision definitely affects
some users who rely on the idle_pct measurement. This should at least be
backported to 2.4, and might be backported to some older releases
depending on user demand.
To be able to provide JA3-compatible TLS fingerprints we need to expose
all Client Hello captured data using fetchers. This patch provides new
fetchers and modifies existing ones to add the ability to filter out
GREASE values:
- ssl_fc_cipherlist_*
- ssl_fc_ecformats_bin
- ssl_fc_eclist_bin
- ssl_fc_extlist_bin
- ssl_fc_protocol_hello_id
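For example (an illustrative sketch; the header names and certificate
path are assumptions, the GREASE filter is assumed to be enabled via a
boolean argument, and the capture must first be activated with
tune.ssl.capture-cipherlist-size), the captured values can be exposed
as hex strings:

    global
        tune.ssl.capture-cipherlist-size 800

    frontend fe
        bind :8443 ssl crt /etc/haproxy/cert.pem
        http-request set-header x-ssl-hello-id %[ssl_fc_protocol_hello_id]
        http-request set-header x-ssl-ciphers  %[ssl_fc_cipherlist_bin(1),hex]
        http-request set-header x-ssl-extlist  %[ssl_fc_extlist_bin(1),hex]
        http-request set-header x-ssl-eclist   %[ssl_fc_eclist_bin(1),hex]
        http-request set-header x-ssl-ecfmt    %[ssl_fc_ecformats_bin,hex]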
When we set tune.ssl.capture-cipherlist-size to a non-zero value we are
able to capture the cipher list supported by the client. To be able to
provide JA3-compatible TLS fingerprinting we need to capture more
information from the Client Hello message:
- SSL Version
- SSL Extensions
- Elliptic Curves
- Elliptic Curve Point Formats
This patch allows HAProxy to capture such information and store it for
later use.
The lua initialization code which creates the Lua mapping of all converters
and sample fetch keywords makes use of strncpy(), and as such can take ages
to start with large values of tune.bufsize because it spends its time zeroing
gigabytes of memory for nothing. A test performed with an extreme value of
16 MB takes roughly 4 seconds, so it's possible that some users with huge
1 MB buffers (e.g. for payload analysis) notice a small startup latency.
However this does not affect config checks since the Lua stack is not yet
started. Let's replace this with strlcpy2().
This should be backported to all supported versions.
When a server is configured with name resolution, resolvers objects are
created with a reference to this server. Thus the server is marked as
non-purgeable to prevent its removal at runtime.
This does not need to be backported.
Patch 211c967 ("MINOR: httpclient: add the server to the proxy") broke
the reg-tests that do a "show servers state".
Indeed the servers of the proxies flagged with PR_CAP_INT are dumped in
the output of this CLI command.
This patch fixes the issue by ignoring the PR_CAP_INT proxies in the
dump.
Without this fix, the decode function would proceed even when the output
buffer is not large enough, because the padding was not considered. For
example, it would not fail with an input length of 23 and an output
buffer size of 15, even though the actual decoded output size is 17.
This patch should be backported to all stable branches that have a
base64urldec() function available.
Relax the condition on the "delete server" CLI handler to be able to
remove all servers, even non-dynamic ones, except if they are flagged
as non-purgeable.
This change is necessary to extend the use cases for dynamic servers
with reload. It's expected that each dynamic server created via the CLI
is manually committed in the haproxy configuration by the user. Dynamic
servers will be present on reload only if they are present in the
configuration file. This means that non-dynamic servers must also be
removable at runtime.
The dynamic servers removal reg-test has been updated and renamed to
reflect its purpose. A new test is present to check that non-purgeable
servers cannot be removed.
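For example (a sketch; the socket path, backend and server names are
assumptions, and whether "experimental-mode" is required depends on the
version), a static server can now be removed at runtime once put in
maintenance:

    $ socat /tmp/sock1 - <<< "experimental-mode on; disable server be1/srv1"
    $ socat /tmp/sock1 - <<< "experimental-mode on; del server be1/srv1"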
Mark servers that are referenced by configuration elements as
non-purgeable. This includes the following list:
- tracked servers
- servers referenced in a use-server rule
- servers referenced in a sample fetch
In a future patch, it will be possible to remove every server at
runtime, both static and dynamic. This requires extending the server
refcount to all instances.
First, refcount manipulation functions have been renamed to better
express the API usage.
* srv_refcount_use -> srv_take
The refcount is always initialized to 1 on server creation in
new_server. It's also incremented for each check/agent configured on a
server instance.
* free_server -> srv_drop
This decrements the refcount and, if it reaches zero, the server is
freed, so code calling it must not use the server reference afterwards.
As a bonus, this function now returns the next server instance. This is
useful when iterating over the server list without having to save the
next pointer before each invocation.
In these functions, remove the checks that prevent refcounting on
non-dynamic servers. Each reference to "dynamic" in variable/function
naming has been eliminated as well.
A dynamic server may be deleted at runtime at the same moment when the
stats applet is pointing to it. Use the server refcount to prevent
deletion in this case.
This should be backported up to 2.4, with an observability period of 2
weeks. Note that it requires the dynamic server refcounting feature
which has been implemented on 2.5; the following commits are required :
- MINOR: server: implement a refcount for dynamic servers
- BUG/MINOR: server: do not use refcount in free_server in stopping mode
- MINOR: server: return the next srv instance on free_server
As a convenience, return the next server instance from the servers list
on free_server.
This is particularly useful when using this function on the servers
list without having to save the next pointer before calling it.
Use the procctl API to set the current process as traceable, thus being
able to produce a core dump as well.
Make this a compile-time option for those who do not wish it or who use
FreeBSD prior to 11.x (the last non-EOL release).
The http_update_update_host function takes a URL and extracts the
domain to use as a host header. However, it only updates an existing
host header and does not create one.
This patch adds an empty host header so the function can update it.
Add the raw and SSL servers to the proxy list so they can be freed
during the deinit() of HAProxy. As a side effect the 2 servers need to
have different IDs, so the SSL one was renamed "<HTTPSCLIENT>".
Ensure that no more than olen bytes are written to the output buffer,
otherwise we might experience unexpected behavior.
While the original code used to validate that the output size was
always large enough before starting to write, this validation was
later broken by the commit below, allowing 3-byte blocks to be written
to areas whose size is not a multiple of 3:
commit ed697e4856e5ac0b9931fd50fd8ff1b7739e5d88
Author: Emeric Brun <ebrun@haproxy.com>
Date: Mon Jan 14 14:38:39 2019 +0100
BUG/MINOR: base64: dec func ignores padding for output size checking
Decode function returns an error even if the ouptut buffer is
large enought because the padding was not considered. This
case was never met with current code base.
For base64urldec(), it's basically the same problem except that since
the input format supports arbitrary lengths, the problem has always
been there since its introduction in 2.4.
This should be backported to all stable branches having a backport of
the patch above (i.e. 2.0), with some adjustments depending on the
availability of the base64dec() and base64urldec().
The httpclient does a free of the servers and proxies it uses; however,
since we are including them in the global proxy list, haproxy already
frees them during the deinit. We can safely remove these free() calls.
The sc-set-gpt0() parser was extended in 2.1 by commit 0d7712dff ("MINOR:
stick-table: allow sc-set-gpt0 to set value from an expression") to support
sample expressions in addition to plain integers. However there is a
subtlety there, which is that while the arg position must be incremented
when parsing an integer, it must not be touched when calling an expression
since the expression parser already does it.
The effect is that rules making use of sc-set-gpt0() followed by an
expression always ignore one word after that expression, and will typically
fail to parse if followed by an "if" as the parser will restart after the
"if". With no condition it's different because an empty condition doesn't
result in trying to parse anything.
This patch moves the increment at the right place and adds a few
explanations for a code part that was far from being obvious.
This should be backported to branches having the commit above (2.1+).
Implement a way of checking the running OpenSSL version:
If OpenSSL support was not compiled within HAProxy it will return an
error, so it's recommended to do an SSL feature check first:
$ ./haproxy -cc 'feature(OPENSSL) && openssl_version_atleast(0.9.8zh) && openssl_version_before(3.0.0)'
This will allow selecting the SSL reg-tests more carefully.