The issue was introduced by commit c41d8bd65 ("CLEANUP: flt-trace:
Remove unused random-parsing option").
This must be backported everywhere the above commit is.
appctx_new() is exclusively called with tid_bit and it only uses the
mask to pass it to the accompanying task. There is no point requiring
the caller to know about a mask there, nor is there any point in
creating an applet outside of the context of its own thread anyway.
Let's drop this and pass tid_bit to task_new() directly.
Ilya reports in GH #1392 that clang 13 complains about totlen being
calculated and not used in fd_write_frag_line(), which is true. It's
a leftover of some older code.
Ilya reports in GH #1392 that clang 13 complains about a flag being added
to the "flags" parameter without being used later. That's generic code
that was shared from TCP but we can indeed drop this flag since it's used
for TFO which we don't have in socketpairs.
The CLI's payload parser is over-complicated and as such contains more
bugs than needed. One of them is that it uses strstr() to find the
ending tag, ignoring spaces before it, while the argument locator
creates a new arg on each space, without checking if the end of the
word appears past the previously found end. This results in "<<" being
considered as the start of a new argument if preceded by more than
one space, and the payload being damaged with a \0 inserted at the
first space or tab.
Let's make an easily backportable fix for now. This fix makes sure that
the trailing zero from the first line is properly kept after '<<' and
that the end tag is looked for only as an isolated argument and nothing
else. This also gets rid of the unsuitable strstr() call and now makes
sure that strcspn() will not return elements that are found in the
payload.
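To illustrate the idea (a simplified sketch only, not the actual cli.c code),
the end tag is only honored when "<<" forms the last, isolated word of the
command line:

    #include <string.h>

    /* simplified sketch: report a payload request only when "<<" is the
     * last isolated word of the command line, instead of matching it
     * anywhere in the string as strstr() would do */
    static int cmd_requests_payload(const char *cmd)
    {
        size_t len = strlen(cmd);

        /* ignore trailing blanks */
        while (len && (cmd[len - 1] == ' ' || cmd[len - 1] == '\t'))
            len--;

        if (len < 2 || cmd[len - 2] != '<' || cmd[len - 1] != '<')
            return 0;

        /* the tag must be a word of its own, not glued to another one */
        return len == 2 || cmd[len - 3] == ' ' || cmd[len - 3] == '\t';
    }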
For the long term the loop must be rewritten to get rid of those
unsuitable strcspn() and strstr() calls which work past each other, and
the cli_parse_request() function should be split into a tokenizer and
an executor that are used from the caller instead of letting the caller
play games with what it finds there.
This should be backported wherever CLI payload is supported, i.e. 2.0+.
Move the code to allocate/free the mux cleanup task outside of the polling
loop. A new thread_alloc/free handler is registered for this in
connection.c.
This has the benefit of cleaning up the polling loop code. And as another
benefit, if the task allocation fails, the handler can report an error
to make the haproxy process exit. This prevents a potential null pointer
dereference.
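For illustration only, a minimal sketch of the mechanism with hypothetical
names (the real code registers its handlers through haproxy's per-thread
alloc/free hooks):

    #include <stddef.h>

    /* hypothetical per-thread hooks: the alloc handler runs once at thread
     * startup and returns 0 on failure so the process can exit cleanly
     * instead of hitting a NULL task later in the polling loop */
    struct per_thread_hooks {
        int  (*alloc)(void);
        void (*free)(void);
    };

    static int run_per_thread_alloc(const struct per_thread_hooks *hooks, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            if (hooks[i].alloc && !hooks[i].alloc())
                return 0;   /* caller reports the error and exits */
        return 1;
    }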
This should fix the github issue #1389.
This must be backported up to 2.4.
When the LDAP response is parsed, the message length is not properly
decoded. While it works for LDAP servers encoding it on 1 byte, it does not
work for those using a multi-bytes encoding. Among others, Active Directory
servers seem to encode message or element lengths on 4 bytes.
In this patch, we only handle the length of BindResponse messages encoded on
1, 2 or 4 bytes. In theory, it may be encoded on any number of bytes up to
127. But it is useless to make this part too complex. It should be ok this
way.
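For reference, decoding a BER-encoded length roughly works as sketched below
(illustrative code, not the actual check implementation):

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a BER length field: either the short form (one byte <= 127),
     * or the long form where the first byte, with the high bit set, gives
     * the number of following length bytes. Only up to 4 length bytes are
     * accepted here, as discussed above.
     * Returns the number of bytes consumed, or 0 on error/truncation. */
    static size_t ber_decode_len(const uint8_t *p, size_t avail, uint64_t *len)
    {
        size_t nbytes;

        if (avail < 1)
            return 0;
        if (!(p[0] & 0x80)) {            /* short form */
            *len = p[0];
            return 1;
        }
        nbytes = p[0] & 0x7f;            /* long form */
        if (nbytes == 0 || nbytes > 4 || avail < 1 + nbytes)
            return 0;
        *len = 0;
        for (size_t i = 0; i < nbytes; i++)
            *len = (*len << 8) | p[1 + i];
        return 1 + nbytes;
    }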
This patch should fix the issue #1390. It should be backported to all stable
versions. While it should be easy to backport it as far as 2.2, the patch
will have to be totally rewritten for lower versions.
Ilya reported in issue #1391 a build warning on Fedora about mallinfo()
being deprecated in favor of mallinfo2() since glibc-2.33. Let's add
support for it. This should be backported where the following commit is
also backported: 157e39303 ("MINOR: pools: automatically disable
malloc_trim() with external allocators").
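A minimal sketch of the idea, assuming a glibc target (the exact guard used
in haproxy may differ):

    #include <malloc.h>

    /* report the amount of memory currently allocated from the main arena,
     * preferring mallinfo2() (size_t fields) on glibc >= 2.33 and falling
     * back to the deprecated mallinfo() (int fields) otherwise */
    static size_t allocated_bytes(void)
    {
    #if defined(__GLIBC__) && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 33))
        struct mallinfo2 mi = mallinfo2();
        return mi.uordblks;
    #else
        struct mallinfo mi = mallinfo();
        return (size_t)mi.uordblks;
    #endif
    }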
If an error was already reported on the H1 connection, pending input data
must not be (re)evaluated in h1_process(). Otherwise an unexpected internal
error will be reported, in addition to the first one. And on some
conditions, this may generate an infinite loop because the mux tries to send
an internal error but it fails to do so thus it loops to retry.
This patch should fix the issue #1356. It must be backported to 2.4.
The "unresolved" variable is unused since commit 9fa0df5 ("BUG/MINOR: acl:
Fix freeing of expr->smp in prune_acl_expr").
This patch should fix the issue #1359.
Pierre Cheynier reported some occasional crashes in malloc_trim() on a
recent glibc when running with jemalloc. While in theory there should
not be any link between the two, it remains plausible that something
allocated early with one is tentatively freed with the other and that
attempts to trim end up badly. There's no point calling the glibc-specific
malloc_trim() with external allocators anyway. However, these ones are often
enabled at link time or even at run time with LD_PRELOAD, so we cannot rely
on build options for this.
This patch implements runtime detection for the allocator in use by checking
with mallinfo() that a malloc() call is properly accounted for in glibc's
malloc. It only enables malloc_trim() in this case, and ignores it for
other cases. It's fine to proceed like this because mallinfo() is provided
by a wider range of glibcs than malloc_trim().
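The detection roughly boils down to the following sketch (illustrative only,
with an arbitrary probe size):

    #include <malloc.h>
    #include <stdlib.h>

    /* allocate a chunk and check whether glibc's accounting noticed it;
     * if not, malloc() is served by another allocator (jemalloc, tcmalloc,
     * ...) and malloc_trim() must be left alone */
    static int using_glibc_malloc(void)
    {
        size_t before = (size_t)mallinfo().uordblks;
        void *p = malloc(1 << 16);
        size_t after = (size_t)mallinfo().uordblks;

        free(p);
        return p && after != before;
    }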
This could be backported to 2.4 and 2.3. If so, it will also need previous
patch "CLEANUP: pools: factor all malloc_trim() calls into trim_all_pools()".
The sizeof() was printed as a long but it's just an unsigned int on some
32-bit platforms, hence the format warning. No backport is needed, as
this arrived in 2.5 with commit 40ca09c7b ("MINOR: sample: Add be2dec
converter").
In one case the panic handler was not reset before leaving the function.
Introduced in 69c581a09271e91d306e7b9080502a36abdc415e, which is 2.5+.
No backport required.
This solves setting XXH_INLINE_ALL in a cleaner way, because the imported
header is not modified, easing future updates.
see 6f7cc11e6dd0f01b437fba893da2edd2362660a2
A bug was introduced by the commit 26eb5ea35 ("BUG/MINOR: filters: Always
set FLT_END analyser when CF_FLT_ANALYZE flag is set"). Depending on the
channel evaluated, the right FLT_END analyser must be set: AN_REQ_FLT_END
for the request channel and AN_RES_FLT_END for the response one.
This patch must be backported everywhere the above commit was backported.
When an error is returned to the client, via a call to
http_reply_and_close(), the request channel is flushed and shut down and
HTTP analysis in both directions is finished. So it is safer to centralize
the reset of the channels' analysers at this place. It is especially
important when a
filter is attached to the stream when a client abort is detected. Because,
otherwise, the stream remains blocked because request analysers are not
reset.
This bug was hidden for a while. But since the fix 6fcd2d328 ("BUG/MINOR:
stream: Don't release a stream if FLT_END is still registered"), it is
possible to trigger it.
This patch must be backported everywhere the above commit was backported.
If the end of input is reported by the mux on the conn-stream during a
receive, we leave without evaluating the channel policies. It is especially
important to be able to catch client aborts during server connection
establishment. Indeed, in this case, without this patch, the
stream-interface remains blocked and read events are not forwarded to the
stream. It means it is not possible to detect client aborts.
Thanks to this fix, the abortonclose option should be fixed for HAProxy 2.3 and
lower. On 2.4 and 2.5, it seems to work because the stream is created after
the request parsing.
Note that a previous fix of the abortonclose option was reverted. This one
should be the right way to fix it. It must carefully be backported as far as
2.0. An observation period on 2.3 is probably a good idea.
Now, "Upgrade:" header is removed from such requests. Thus, the condition to
reject them is now useless and can be removed. Code to handle unimplemented
features is now unused but is preserved for future uses.
This patch may be backported to 2.4.
Instead of returning a 501-Not-implemented error when an "Upgrade:" header is
found for a request with a payload, the header is removed. This way, the
upgrade is disabled and the request is still sent to the server. It is
required because some frameworks seem to try to perform an H2 upgrade on
every request, including POST ones.
The h2 mux was slightly fixed to convert Upgrade requests to extended
connect ones only if the right HTX flag is set.
This patch should fix the issue #1381. It must be backported to 2.4.
The sole purpose of the variables' usage accounting is to enforce
limits at the session or process level, but very commonly these are not
set, yet the bookkeeping (especially at the process level) is extremely
expensive.
Let's simply disable it when the limits are not set. This further
increases the performance of a config using 12 variables on a 16-thread
machine from 1.06M to 1.24M req/s.
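The idea can be sketched as below (hypothetical names; the actual counters
and limit variables differ):

    #include <stddef.h>

    /* hypothetical limits, 0 meaning "not configured" */
    extern size_t var_sess_limit, var_proc_limit;

    struct var_scope { size_t size; };

    static inline void var_account_add(struct var_scope *scope, size_t size)
    {
        /* the counters are only ever read to enforce limits, so skip the
         * (expensive, process-wide) bookkeeping when no limit is set */
        if (!var_sess_limit && !var_proc_limit)
            return;
        scope->size += size;
    }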
Right now we have a per-process max variable size and a per-scope one,
with the proc scope covering all others. As such, the per-process global
one is always exactly equal to the per-proc-scope one. And bookkeeping
on these process-wide variables is extremely expensive (up to 38% CPU
seen in var_accounting_diff() just for them).
Let's kill vars_global_size and only rely on the proc one. Doing this
increased the request rate from 770k to 1.06M in a config having only
12 variables on a 16-thread machine.
The global table of known variables names can only grow and was designed
for static names that are registered at boot. Nowadays it's possible to
set dynamic variable names from Lua or from the CLI, which causes a real
problem that was partially addressed in 2.2 with commit 4e172c93f
("MEDIUM: lua: Add `ifexist` parameter to `set_var`"). Please see github
issue #624 for more context.
This patch simplifies all this by removing the need for a central
registry of known names, and storing 64-bit hashes instead. This is
highly sufficient given the low number of variables in each context.
The hash is calculated using XXH64() which is bijective over the 64-bit
space thus is guaranteed collision-free for 1..8 chars. Above that the
risk remains around 1/2^64 per extra 8 chars so in practice this is
highly sufficient for our usage. A random seed is used at boot to seed
the hash so that it's not attackable from Lua for example.
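For illustration, variable names end up being handled roughly like this
(sketch only; the seed variable name and include path are simplified):

    #include <stdint.h>
    #include "xxhash.h"

    static uint64_t var_name_hash_seed;   /* filled with random bytes at boot */

    /* hash a variable name instead of indexing it in a global registry */
    static uint64_t var_hash_name(const char *name, size_t len)
    {
        return XXH64(name, len, var_name_hash_seed);
    }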
There's one particular nit though. The "ifexist" hack mentioned above
is now limited to variables of scope "proc" only, and will only match
variables that were already created or declared, but will now verify
the scope as well. This may affect some bogus Lua scripts and SPOE
agents which used to accidentally work because a similarly named
variable used to exist in a different scope. These ones may need to be
fixed to comply with the doc.
Now we can sum up the situation as this one:
- ephemeral variables (scopes sess, txn, req, res) will always be
usable, regardless of any prior declaration. This effectively
addresses the most problematic change from the commit above that
in order to work well could have required some script auditing ;
- process-wide variables (scope proc) that are mentioned in the
configuration, referenced in a "register-var-names" SPOE directive,
or created via "set-var" in the global section or the CLI, are
permanent and will always accept to be set, with or without the
"ifexist" restriction (SPOE uses this internally as well).
- process-wide variables (scope proc) that are only created via a
set-var() tcp/http action, via Lua's set_var() calls, or via an
SPOE with the "force-set-var" directive), will not be permanent
but will always accept to be replaced once they are created, even
if "ifexist" is present
- process-wide variables (scope proc) that do not exist will only
support being created via the set-var() tcp/http action, Lua's
set_var() calls without "ifexist", or an SPOE declared with
"force-set-var".
This means that non-proc variables do not care about "ifexist" nor
prior declaration, and that using "ifexist" should most often be
reliable in Lua and that SPOE should most often work without any
prior declaration. It may be doable to turn "ifexist" to 1 by default
in Lua to further ease the transition. Note: regtests were adjusted.
Cc: Tim Düsterhus <tim@bastelstu.be>
Variables names will be hashed, but for this we need a random seed.
The XXH3() algorithm is bijective over the whole 64-bit space, which
is great as it guarantees no collision for 1..8 byte names. But above
that even if the risk is extremely faint, it theoretically exists and
since variables may be set from Lua we'd rather do our best to limit
the risk of controlled collision, hence the random seed.
All variables whose names are parsed by the config parser, the
command-line parser or the SPOE's register-var-names parser are
now preset as permanent. This will guarantee that these variables
will exist throughout the process' life, and that it will be
possible to implement the "ifexist" feature by looking them up.
This was marked medium because pre-setting a variable with an empty
value may always have side effects, even though none was spotted at
this stage.
We certainly do not want a permanent variable (one that is listed
in the configuration) to be erased by accident by an "unset-var" action.
Let's make sure these ones are only reset to an empty sample, like at
the moment of their initial registration. One trick is that the same
function is used to purge the memory at the end and to delete, so we
need to add an extra "force" argument to make the choice.
In order to continue to honor the ifexist Lua option and prevent rogue
SPOA agents from creating too many variables, we'll need to keep the
ability to mark certain proc.* variables as permanent when they're
known from the config file.
Let's add a flag there for this. It is set on the variable at creation
time when the caller requests it.
Another approach could have been to use a distinct list or distinct
scope but that sounds complicated and bug-prone.
Storing an unset sample (SMP_T_ANY == 0) will be used to only reserve
the variable's space but associate no value. We need to slightly adjust
var_to_smp() for this so that it considers a value-less variable as non
existent and falls back to the default value.
Passing this flag to var_set() will result in the variable being created
only if it did not exist, otherwise nothing is done (it's not even
updated). This will be used for pre-registering names.
When setting variables, there are currently two variants, one which will
always create the variable, and another one, "ifexist", which will only
create or update a variable if a similarly named variable in any scope
already existed before.
The goal was to limit the risk of injecting random names in the proc
scope, but it was achieved by making use of the somewhat limited name
indexing model, which explains the scope-agnostic restriction.
With this change, we're moving the check downwards in the chain, at the
variable level, and only variables under the scope "proc" will be subject
to the restriction. A new set of VF_* flags was added to adjust how
variables are set, and VF_UPDATEONLY is used to mention this restriction.
At this stage, this is not completely true yet: if a
similar name was not known in any scope, the variable will continue to
be rejected like before, but this will change soon.
The names for these two functions are totally misleading, they have
nothing to do with samples, they're purely dedicated to variables. The
former is only used by the latter and makes no sense by itself, so
it cannot even get a meaningful name. Let's remerge them into a single
one called "var_set()" which, as its name tries to imply, sets a variable
to a given value.
This name was quite misleading, as it has nothing to do with samples nor
streams. This function's sole purpose is to unset a variable, so let's
call it "var_unset()" and document it a little bit.
The vars_init() name is particularly confusing as it does not initialize
the variables code but the head of a list of variables passed in
arguments. And we'll soon need to have proper initialization code, so
let's rename it now.
In ticket #1348 some users expressed some concerns regarding the removal
of the "grace" directive from the proxies. Their use case very closely
mimics the original intent of the grace keyword, which is: let haproxy
accept traffic for some time when stopping, while indicating to an external
LB that it's stopping.
This is implemented here by starting a task whose expiration triggers
the soft-stop for real. The global "stopping" variable is immediately
set however. For example, the config below will be sufficient to instantly
notify an external check on port 9999 that the service is going down,
while other services remain active for 10s:
global
grace 10s
frontend ext-check
bind :9999
monitor-uri /ext-check
monitor fail if { stopping }
This reverts commit e0dec4b7b258101f6d5faa15234103a45c16f0f8.
At first glance, channel_is_empty() was used on purpose in si_update_rx(),
because of the HTX changes (b3e0de46c "MEDIUM: stream-int: Rely only on
SI_FL_WAIT_ROOM to stop data receipt"). It is not very clear for now why
channel_may_recv() should not be used here, but this change introduces a
possible infinite loop with the stats applet. So, it is safer to revert the
patch, waiting for a better understanding of the problem.
This means the abortonclose option will be broken again on the 2.3 and lower
versions.
This patch should fix the issue #1360. It must be backported as far as 2.0.
Since commit "BUG/MINOR: config: reject configs using HTTP with bufsize
>= 256 MB" we are now sure that it's not possible anymore to have an HTX
block of a size 256 MB or more, even after concatenation thanks to the
tests for len >= htx_free_data_space(). Let's remove these now obsolete
comments.
A BUG_ON() was added in htx_add_blk() to track any such exception if
the conditions would change later, to complete the one that is performed
on the start address that must remain within the buffer.
As seen in commit 5ef965606 ("BUG/MINOR: lua: use strlcpy2() not
strncpy() to copy sample keywords"), configs with large values of
tune.bufsize were not practically usable since Lua was introduced,
regardless of the machine's available memory.
In addition, HTX encoding already limits block sizes to 256 MB, thus
it is not technically possible to use that large a buffer size when
HTTP is in use. This is absurdly high anyway, and for example Lua
initialization would take around one minute on a 4 GHz CPU. Better
prevent such a config from starting than having to deal with bug
reports that make no sense.
The check is only enforced if at least one HTX proxy was found, as
there is no technical reason to block it for configs that are solely
based on raw TCP, and it could still be imagined that some such might
exist with single connections (e.g. a log forwarder that buffers to
cover for the storage I/O latencies).
This should be backported to all HTX-enabled versions (2.0 and above).
It is quite common to see in configurations constructions like the
following one:
http-request set-var(txn.bodylen) 0
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen)]
The set-var() rules are almost always duplicated when manipulating
integers or any other value that is mandatory along operations. This is
a problem because it makes the configurations complicated to maintain
and slower than needed. And it becomes even more complicated when several
conditions may set the same variable because the risk of forgetting to
initialize it or to accidentally reset it is high.
This patch extends the var() sample fetch function to take an optional
argument which contains a default value to be returned if the variable
was not set. This way it becomes much simpler to use the variable, just
set it where needed, and read it with a fall back to the default value:
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen,0)]
The default value is always passed as a string, thus it will experience
a cast to the output type. It doesn't seem useful to complicate the
configuration to pass an explicit type at this point.
The vars.vtc regtest was updated accordingly.
In preparation for supporting default values when fetching variables, we
need to update the internal API to pass an extra argument to functions
vars_get_by_{name,desc} to provide an optional default value. This
patch does this and always passes NULL in this argument. var_to_smp()
was extended to fall back to this value when available.
The two functions vars_get_by_name() and vars_get_by_scope() perform
almost the same operations except that they differ from the way the
name and scope are retrieved. The second part in common is more
complex and involves locking, so better factor this one out into a
new function.
There is no other change than refactoring.
Most often "set var" on the CLI is used to set a string, and using only
expressions is not always convenient, particularly when trying to
concatenate variables such as host names and paths.
Now the "set var" command supports an optional keyword before the value
to indicate its type. "expr" takes an expression just like before this
patch, and "fmt" a format string, making it work like the "set-var-fmt"
actions.
The VTC was updated to include a test on the format string.
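For example (hypothetical variable names and stats socket path):

    $ echo "set var proc.host expr str(api.local)" | socat stdio /var/run/haproxy.sock
    $ echo "set var proc.check_url fmt http://%[var(proc.host)]/check" | socat stdio /var/run/haproxy.sock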
Just like the set-var-fmt action for tcp/http rules, the set-var-fmt
directive in global sections makes it possible to pre-set process-wide variables
using a format string instead of a sample expression. This is often
more convenient when it is required to concatenate multiple fields,
or when emitting just one word.
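For example (illustrative values):

    global
        set-var-fmt proc.current_state "primary"
        set-var-fmt proc.bootid        "%pid|%t"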
The log-format strings are usable at plenty of places, but the expressions
using %[] were restricted to request or response context and nothing else.
This prevents using them from the config context or the CLI, so let's
relax this.