In order for the link between the cafile_entry and the default ckch
instance to be built, we need to give a pointer to the instance during
the ssl_sock_prepare_ctx call.
Each ca-file entry of the tree will now hold a list of the ckch
instances that use it so that we can iterate over them when updating the
ca-file via a cli command. Since the link between the SSL contexts and
the CA file tree entries is only built during the ssl_sock_prepare_ctx
function, which is called after all the ckch instances are created, we
need to add a little post processing after each ssl_sock_prepare_ctx
that builds the link between the corresponding ckch instance and CA file
tree entries.
In order to manage the ca-file and ca-verify-file options, any ckch
instance can be linked to multiple CA file tree entries and any CA file
entry can be linked to multiple ckch instances. This is done thanks to a
dedicated list of ckch_inst references stored in the CA file tree
entries over which we can iterate (during an update for instance). We
avoid having one of those instances go stale by keeping a list of
references to those references in the instances.
When deleting a ckch_inst, we can then remove all the ckch_inst_link
instances that reference it, and when deleting a cafile_entry, we
iterate over the list of ckch_inst references and clear the corresponding
entry in their own list of ckch_inst_link references.
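To illustrate the double indirection described above, here is a minimal
sketch of what the two reference types could look like (member names are
hypothetical, only ckch_inst_link is named above; struct list is haproxy's
intrusive doubly-linked list):

struct ckch_inst_link {
	struct ckch_inst *inst;      /* instance using the cafile_entry */
	struct list list;            /* linked into the cafile_entry's list */
};

struct ckch_inst_link_ref {
	struct ckch_inst_link *link; /* back-reference kept by the instance */
	struct list list;            /* linked into the ckch_inst's ref list */
};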
In order to ease ca-file hot update via the CLI, the ca-file tree will
need to allow duplicate entries for a given path. This patch simply
enables it and offers a way to select either the oldest entry or the
latest entry in the tree for a given path.
This patch moves all the ssl_store related code to ssl_ckch.c since it
will mostly be used there once the CA file update CLI commands are all
implemented. It also makes the cafile_entry structure visible as well as
the cafile_tree.
atoll() is not portable, but strtoll() is more common. We must pass NULL
to the end pointer however since the parser must consume digits and stop
at the first non-digit char. No backport is needed as this was introduced
in 2.4-dev17 with commit 51c8ad45c ("MINOR: sample: converter: Add json_query
converter").
stdint.h is not as portable as inttypes.h. It doesn't exist at least
on AIX 5.1 and Solaris 7, while inttypes.h is present there and does
include stdint.h on platforms supporting it.
This is equivalent to libslz upstream commit e36710a ("slz: use
inttypes.h instead of stdint.h")
The function is defined when using linux+cpu affinity but is only used
if threads are enabled, so let's add this condition to avoid a build
warning about an unused function when building with threads disabled.
This came in 2.4-dev17 with commit b56a7c89a ("MEDIUM: cfgparse: detect
numa and set affinity if needed") so no backport is needed.
A mistake was introduced in 2.4-dev17 by commit 982fb5339 ("MEDIUM:
config: use platform independent type hap_cpuset for cpu-map"), it
initializes cpu_map.thread[] from 0 to MAX_PROCS-1 instead of
MAX_THREADS-1 resulting in crashes when the two differ, e.g. when
building with USE_THREAD= but still with USE_CPU_AFFINITY=1.
No backport is needed.
Variable names are stored into a unified list that helps compare them
just based on a pointer instead of duplicating their name with every
variable. This is convenient for those declared in the configuration
but this started to cause issues with Lua when random names would be
created upon each access, eating lots of memory and CPU for lookups,
hence the work in 2.2 with commit 4e172c93f ("MEDIUM: lua: Add
`ifexist` parameter to `set_var`") to address this.
But there remains a corner case with get_var(), which also allocates
a new variable. After a bit of thinking and discussion, it never
makes sense to allocate a new variable name on get_var():
- if the name exists, it will be returned ;
- if it does not exist, then the only way for it to appear will
be that some code calls set_var() on it
- a call to get_var() after a careful set_var(ifexist) ruins the
effort on set_var().
For this reason, this patch addresses this issue by making sure that
get_var() will never cause a variable to be allocated. This is done
by modifying vars_get_by_name() to always call register_name() with
alloc=0, since vars_get_by_name() is exclusively used by Lua and the
new CLI's "get/set var" which also benefit from this protection.
It probably makes sense to backport this as far as 2.2 after some
observation period and feedback from users.
For more context and discussions about the issues this was causing,
see https://www.mail-archive.com/haproxy@formilux.org/msg40451.html
and in issue #664.
This function is one of the few high-profile, unresolved ones in the memory
profile output, let's have it resolve to ease matching of SSL allocations,
which are not easy to follow.
"show profiling" by default sorts by usage/counts, which is suitable for
occasional use. But when called from scripts to monitor/search variations,
this is not very convenient. Let's add a new "byaddr" option to support
sorting the output by address. It also eases matching alloc/free calls
from within a same library, or reading grouped tasks costs by library.
Commit d3a9a4992 ("MEDIUM: stats: allow to select one field in
`stats_fill_sv_stats`") left one occurrence of a direct assignment
of stats[] instead of placing it into the <metric> variable, and it
was on ST_F_CHECK_STATUS. This resulted in the field being overwritten
with an empty one immediately after being set in stats_fill_sv_stats(),
and in the field appearing empty on the stats page.
No backport is needed as this was only for 2.4.
Since the introduction of bc_src, smp_fetch_src from tcp_sample inspects
the kw argument to choose between the frontend or the backend source
address. However, for the stick tables, the argument is left to NULL.
This causes a segfault.
Fix the crash by explicitly setting the kw argument to "src" to retrieve
the source address of the frontend side.
This bug was introduced by the following commit :
7d081f02a4
MINOR: tcp_samples: Add samples to get src/dst info of the backend connection
It does not need a backport as it is integrated in the current 2.4-dev
branch.
To reproduce the crash, I used the following config :
frontend fe
bind :20080
http-request track-sc0 src table foo
http-request reject if { src_conn_rate(foo) gt 10 }
use_backend h1
backend foo
stick-table type ip size 200k expire 30s store conn_rate(60s)
backend h1
server nginx 127.0.0.1:30080 check
This should fix the github issue #1247.
On ARM with native CRC support, no need to inflate the executable with
a 4kB CRC table, let's just drop it.
This is slz upstream commit d8715db20b2968d1f3012a734021c0978758f911.
This is the only place where we conditionally use the crc32_fast table,
better call the crc32_char inline function for this. This should also
reduce by ~1kB the L1 cache footprint of the compression when dealing
with small blocks, and at least shows a consistent 0.5% perf improvement.
This is slz upstream commit 075351b6c2513b548bac37d6582e46855bc7b36f.
This function was not used anymore after the atomic updates were
implemented in 2.3, and it must not be used given that it does not
yield and can easily make the process hang for tens of seconds on
large acls/maps. Let's remove it before someone uses it as an
example to implement something else!
We already had to perform too many additions via external scripts; it's
time to add the totals and the alloc-free delta as a last line in the
output of "show profiling memory".
This was planned but missing in the previous attempt, we really need to
see what is used at each place, especially due to realloc(). Now we
print the function used in front of the caller's address, as well as
the average alloc/free size per call.
The realloc() function checks if the size grew or reduced in order to
count an allocation or a free, but it does so with the absolute (new
or old) value instead of the difference, resulting in realloc() often
being credited for allocating too much.
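In other words, the accounting should only consider the size difference,
along these lines (a simplified sketch, counter names hypothetical):

#include <stddef.h>

/* credit realloc() with the size delta only, not the absolute size */
static void account_realloc(size_t old_size, size_t new_size,
                            size_t *alloc_tot, size_t *free_tot)
{
	if (new_size > old_size)
		*alloc_tot += new_size - old_size;  /* grew: count as an alloc */
	else
		*free_tot  += old_size - new_size;  /* shrank: count as a free */
}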
No backport is needed.
It was found that when viewing the help output from the CLI that
"set profiling" had 2 spaces in it, which was pushing it out from
the rest of similar commands.
i.e. it looked like this:
prepare acl <acl>
prepare map <acl>
set profiling <what> {auto|on|off}
set dynamic-cookie-key backend <bk> <k>
set map <map> [<key>|#<ref>] <value>
set maxconn frontend <frontend> <value>
This patch removes all of the double spaces within the command and
unifies them to single spacing, which is what is observed within the
rest of the commands.
Check the return value of url2sa in smp_fetch_url_ip/port. If negative,
the address result is uninitialized and the sample fetch is aborted.
Also, the sockaddr is preliminarily zeroed before calling url2sa to ensure
that it is not used by upper functions even if the sample returns 0.
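The intended pattern looks roughly like this (the url2sa() argument list is
shown from memory and simplified, treat it as illustrative):

struct sockaddr_storage addr;

memset(&addr, 0, sizeof(addr));            /* never leave it uninitialized */
if (url2sa(url, url_len, &addr, NULL) < 0)
	return 0;                          /* abort the sample fetch */
/* addr may only be used past this point */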
Without the check, the value returned by the url_ip/url_port fetches is
unspecified. This can be triggered with the following curl :
$ curl -iv --request-target "xxx://127.0.0.1:20080/" http://127.0.0.1:20080/
This should be backported to all stable branches. However, note that
between 1.8 and 2.0, the targeted functions have been extracted
from proto_http.c to http_fetch.c.
This should fix in part coverity report from the github issue #1244.
The compiler sees a possibility of null-deref along a path which cannot
happen in practice since we never pass null args outside of the help
request. The warning appeared with the simplified test on the ishelp
variable, so let's add a check to silence it.
It's still very difficult to find all commands starting with a given
keyword like "set", "show" etc. Let's sort the lines by usage message,
this is much more convenient.
With ~100 commands on the CLI, it's particularly difficult to find a
specific one in the "help" output. The function used to display the
help already supports filtering on certain commands, so in the end it's
just needed to pass the argument of the help command to enable the
automatic filtering. That's what this patch does so that "help clear"
only lists commands starting with "clear" and that "help map" lists
commands containing "map" in them.
The build fails on versions older than 1.0.1d which is the first one
introducing CRYPTO_memcmp(), so let's have a define for this instead
of enabling it whenever USE_OPENSSL is set. One could also wonder why
we're relying on openssl for such a trivial thing, and a simple local
implementation could also allow to restore lexicographic ordering.
gcc-4.4 complains about aliasing in smp_fetch_url_port() and
smp_fetch_url_ip() because the local addr variable is cast to struct
sockaddr_in before being checked. The family should be checked on the
sockaddr_storage and we have a function to retrieve the port.
The compiler still sees some warnings but these ones are OK now.
ha_random64() uses a DWCAS loop to produce the random, but it computes
the resulting value inside the loop while it doesn't change upon success,
so this is a needless overhead inside the critical path that contributes
to making threads fail the race and try again. Let's take the value out
of the loop.
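The general idea is to keep the CAS retry loop as small as possible and to
derive the returned value only once, after the race is won. A generic
single-word sketch of the pattern (this is not the actual ha_random64()
code, which uses a double-word state):

#include <stdint.h>

static uint64_t rng_state;

uint64_t random64_sketch(void)
{
	uint64_t old, new;

	old = __atomic_load_n(&rng_state, __ATOMIC_RELAXED);
	do {
		/* only the state transition belongs in the loop */
		new = old * 6364136223846793005ULL + 1442695040888963407ULL;
	} while (!__atomic_compare_exchange_n(&rng_state, &old, new, 0,
	                                      __ATOMIC_RELAXED, __ATOMIC_RELAXED));
	/* the (possibly expensive) output derivation happens once, here */
	return new ^ (new >> 29);
}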
Some of the Lua doc and a few places still used "Haproxy" or "HAproxy".
There was even one "HA proxy". A few of them were in an example of VTest
output, indicating that VTest ought to be fixed as well. No big deal but
better address all the remaining ones so that these inconsistencies stop
spreading around.
When running "haproxy -v", we still get "HA-Proxy" which is the last
place where this confusing oddity happens. Being so used to it I didn't
even notice it until it was reported to me just after 2.2 but it never
got fixed, despite the PRODUCT_NAME macro that is used to report the
name in the stats page and in "show info" being already set to "HAProxy"
15 years ago in 1.2.14 with commit e03312613. It's about time to
uniformize everything.
This one comes with a very deep dependency hell, only to know that
process_stream() is a function. Dropping it reduces the preprocessed
output from 1.5MB to 640kB.
These ones are used by virtually every config parser. Not only do they
provide no benefit in being inlined, but they imply a very deep
dependency starting at proxy.h, which results for example in task.c
including openssl.
Let's move these two functions to cfgparse.c.
This function has no business being inlined in stick_table.h since it's
only used at boot time by the config parser. In addition it causes an
undesired dependency on tools.h because it uses parse_time_err(). Let's
move it to stick_table.c.
No idea why this was put inlined into connection.h, it's used only once
for haproxy -vv, and requires tools.h, causing an undesired dependency
from connection.h. Let's move it to connection.c instead where it ought
to have been.
Only mworker uses proc_self, and it was declared in global.h, forcing
users of global.h to include mworker and its dependencies.
Moving it to mworker reduces the preprocessed size of version.c from
170 to 125kB by shrinking the number of local includes from 30 to 16
and the number of system includes from 147 to 132.
The presence of this field causes a long dependency chain because almost
everyone includes global-t.h, and vars include sample_data which include
some system includes as well as HTTP parts.
There is absolutely no reason for having the process-wide variables in
the global struct, let's just move them into vars.c and vars.h. This
reduces from ~190k to ~170k the preprocessed output of version.c.
This one is stated as experimental in the doc but could still be used
by accidental copy-paste. Let's mark it with KWF_EXPERIMENTAL so that
users have to opt-in to use it.
Now "show info float" will also report SSL rates, connection rates and
key reuse ratios as floats. This can be convenient at very low rates.
Note that the SSL reuse ratio which used to commonly oscillate between
0 and 1 under load is now more often above zero with small values. It
indicates that for better stability we shouldn't be comparing a key rate
with a connection rate but instead we should measure the reuse rate at
its source.
We'll have to support reporting sub-second uptimes, so let's use the
appropriate function which will automatically adjust the tv_usec field.
In addition to this, it will also report a more accurate uptime thanks
to considering the sub-second part in the result.
This will allow some fields to be produced with a higher accuracy when
the requester indicates being able to parse floats. Rates and times are
among the elements which can make sense.
Currently the stats filling function knows nothing about the caller's
needs, so let's pass the STAT_* flags so that it can adapt to the
requester's constraints.
For the prometheus exporter, a new float type was added for the fields
and its conversion was added everywhere except for the HTML output.
Now that we have F2H() we can implement it for consistency.
When emitting stats, we don't need to have 6 zeroes after the decimal point
for each value, so let's trim floating point numbers to the longest needed
only.
We already had ultoa_r() and friends but nothing to emit inline floats.
This is now done with ftoa_r() and F2A/F2H. Note that the latter both use
the itoa_str[] as temporary storage and that the HTML format currently is
the exact same as the ASCII one. The trailing zeroes are always trimmed so
these outputs are usable in user-visible output.
When using "%f" to print a float, it automatically gets 6 digits after
the decimal point and there's no way to automatically adjust to the
required ones by dropping trailing zeroes. This function does exactly
this and automatically drops the decimal point if all digits after it
were zeroes. This will make numbers more friendly in stats and makes
outputs shorter (e.g. JSON where everything is just a "number").
The function is designed to be easy to use with snprintf() and chunks:
snprintf:
flt_trim(buf, 0, snprintf(buf, sizeof(buf), "%f", x));
chunk_printf:
out->data = flt_trim(out->area, 0, chunk_printf(out, "%f", x));
chunk_appendf:
size_t prev_data = out->data;
out->data = flt_trim(out->area, prev_data, chunk_appendf(out, "%f", x));
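For reference, the trimming behavior itself can be sketched as a standalone
function (an illustration only, not haproxy's actual implementation):

#include <string.h>

/* trim trailing zeroes (and a lone '.') from the part of <buf> written
 * after offset <prev>, and return the new total length.
 */
size_t flt_trim_sketch(char *buf, size_t prev, size_t len)
{
	char *dot, *end;

	if (len <= prev)
		return len;

	dot = memchr(buf + prev, '.', len - prev);
	if (!dot)
		return len;

	end = buf + len;
	while (end > dot + 1 && end[-1] == '0')
		end--;
	if (end == dot + 1)
		end = dot;              /* only zeroes after the point: drop it */
	*end = 0;
	return end - buf;
}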
Calling the strcmp() converter with no argument yields this strange error:
[ALERT] (31439) : parsing [test.cfg:3] : error detected in frontend 'f' while parsing 'http-request redirect' rule : failed to parse sample expression <src,strcmp]> : invalid args in converter 'strcmp' : failed to register variable name ''.
This is because the vars name check tries to see if it can create such a
variable having an empty name. Let's at least make a special case of the
missing argument. Now we can read a more explicit:
[ALERT] (31655) : parsing [test.cfg:3] : error detected in frontend 'f' while parsing 'http-request redirect' rule : failed to parse sample expression <src,strcmp]> : invalid args in converter 'strcmp' : missing variable name.
This was done for secure_strcmp() as well.
Only check servers attached to a proxy with PR_CAP_LB.
This does not need to be backported as the diag message was added in the
current 2.4-dev branch.
Add a new proxy capability for proxies with load-balancing capabilities.
This helps differentiate listen/frontend/backend proxies from special
proxies such as peer proxies.
normalize-uri http rule is marked as experimental, so it cannot be
activated without the global 'expose-experimental-directives'. The
associated vtc is updated to be able to use it.
Support experimental actions. It is mandatory to use
'expose-experimental-directives' before they can be used.
If such action is present in the config file, the tainted status of the
process is updated. Another tainted status is set when an experimental
action is executed.
Define a new keyword flag KWF_MATCH_PREFIX. This is used to replace the
match_pfx field of the action struct.
This has the benefit of making action declarations more explicit, and now
it is possible to quickly implement experimental actions.
Add a new flag to mark a keyword as experimental. An experimental
keyword cannot be used if the global 'expose-experimental-directives' is
not present first.
Only keywords parsed through the standard cfg_keywords lists in the
global/proxies section will be automatically detected if declared
experimental. To support a keyword outside of these lists,
check_kw_experimental must be called manually during its parsing.
If an experimental keyword is present in the config, the tainted flag is
updated.
For the moment, no keyword is marked as experimental.
Add a global flag named 'tainted'. Its purpose is to report various
status about experimental features used for the current process
lifetime.
By default it is initialized to 0. It can be set/retrieved by a couple of
new functions, mark_tainted()/get_tainted(). Once a flag is set, it
cannot be reset.
Currently, no tainted status is implemented, it will be the subject of
the following commits.
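A minimal sketch of what such an interface can look like (flag type and
atomic primitives are illustrative, not the exact haproxy code):

static unsigned int tainted;

void mark_tainted(unsigned int flag)
{
	__atomic_or_fetch(&tainted, flag, __ATOMIC_RELAXED); /* never cleared */
}

unsigned int get_tainted(void)
{
	return __atomic_load_n(&tainted, __ATOMIC_RELAXED);
}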
In observe mode, if a server is marked as DOWN, the server's health-check is
rescheduled using the fastinter timeout if the new expiration date is newer
than the current one. But this must only be performed if the fastinter
timeout is defined.
Internally, the tick_is_lt() function only checks the date and does not perform any
verification on the provided args. Thus, we must take care of it. However, it is
possible to disable the server health-check by setting its task expiration date
to TICK_ETERNITY.
This patch must be backported as far as 2.2. It is related to
A connection may be synchronously established. In the tcpcheck context, it
may be a problem if several connections come one after another. In this
case, there is no event to close the very first connection before starting
the next one. The check is thus blocked and times out, and an L7 timeout error
is reported.
To fix the bug, when a tcpcheck is started, we immediately evaluate its
state. Most of the time, nothing is performed and we must wait. But this way
it is possible to handle the result of a successful connection.
This patch should fix the issue #1234. It must be backported as far as 2.2.
Thanks to a previous fix, the stream error mask is now cleared on L7
retry. But the stream final state (SF_FINST_*) and the stream-interface
error type must also be reset to properly restart a new connection and be
sure to not inherit errors from the previous connection attempt.
In addition, SF_ADDR_SET flag is not systematically removed.
stream_choose_redispatch() already takes care to unset it if necessary. When
the connection is not redispatched, the server address can be preserved.
This patch must be backported as far as 2.0.
There were 102 CLI commands whose help were zig-zagging all along the dump
making them unreadable. This patch realigns all these messages so that the
command now uses up to 40 characters before the delimiting colon. About a
third of the commands did not correctly list their arguments which were
added after the first version, so they were all updated. Some abuses of
the term "id" were fixed to use a more explanatory term. The
"set ssl ocsp-response" command was not listed because it lacked a help
message, this was fixed as well. The deprecated enable/disable commands
for agent/health/server were prominently written as deprecated. Whenever
possible, clearer explanations were provided.
This one works just like .notice/.warning/.alert except that it prints
the message at level "DIAG" only when haproxy runs in diagnostic mode
(-dD). This can be convenient for example to pass a few hints to help
locate certain config parts or to leave messages about certain temporary
workarounds.
Example:
.diag "WTA/2021-05-07: $.LINE: replace 'redirect' with 'return' after final switch to 2.4"
http-request redirect location /goaway if ABUSE
For about 20 years we've been emitting cryptic messages on warnings and
alerts, that nobody knows how to parse:
[NOTICE] 126/080118 (3115) : haproxy version is 2.4-dev18-0b7c78-49
[NOTICE] 126/080118 (3115) : path to executable is ./haproxy
[WARNING] 126/080119 (3115) : Server default/srv1 is DOWN via static/srv1. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 126/080119 (3115) : backend 'default' has no server available!
Hint: the first 3-digit number is the day of year, and the 6 digits
after it represent the time of day in format HHMMSS, then the pid in
parentheses. These are not quite user-friendly and such cryptic info
is not useful at all.
This patch slightly adjusts the output by performing these minimal changes:
- removing the date/time, as they were added very early when haproxy
was meant to be used in foreground as a debugging tool, and they're
provided in more details in logs nowadays ;
- better aligning the fields by padding the severity tag to 10 chars.
The diag output was renamed to "DIAG" only.
Now the output provides this:
[NOTICE] (4563) : haproxy version is 2.4-dev18-75a428-51
[NOTICE] (4563) : path to executable is ./haproxy
[WARNING] (4563) : Server default/srv1 is DOWN via static/srv1. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] (4563) : backend 'default' has no server available!
The useless space before the colon was kept so as not to confuse any
possible output parser.
The few entries in the doc referring to this format were adjusted to
reflect the new one.
The change was tagged "MEDIUM" as it may have visible consequences on
home-grown monitoring tools, though it is extremely unlikely due to the
limited extent of these changes.
The cleanup of the previous error was incorrect on L7 retries, it would
OR two values while they're part of an enum, leaving some bits set.
Depending on the errors it was possible to occasionally see an internal
error ("I" flag) being logged.
This should be backported as far as 2.0, though the do_l7_retry() function
is in proto_htx.c in older versions.
When memory profiling is enabled, realloc() can occasionally get the area
size wrong due to the wrong pointer being used to check the new size. When
the old area gets unmapped in the operation, this may even result in a
crash. There's no impact without memory profiling though.
No backport is needed as this is exclusively 2.4-dev.
These predicates respectively verify that the current version is at least
a given version or is before a specific one. The syntax is exactly the one
reported by "haproxy -v", though each component is optional, so both "1.5"
and "2.4-dev18-88910-48" are supported. Missing components equal zero, and
"dev" is below "pre" or "rc", which are both inferior to no such mention
(i.e. they are negative). Thus "2.4-dev18" is older than "2.4-rc1" which
is older than "2.4".
The "feature(name)" predicate will return true if <name> corresponds to
a name listed after a '+' in the features list, that is it was enabled at
build time with USE_<name>=1. Typical use cases will include OPENSSL, LUA
and LINUX_SPLICE. But maybe it will also be convenient to use with optional
addons such as PROMEX and the device detection modules to help keeping the
same configs across various deployments.
"streq(str1,str2)" will return true if the two strings match while
"strneq(str1,str2)" will return true only if they differ. This is
convenient to match an environment variable against a predefined value.
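For example, to toggle parts of a configuration based on an environment
variable (the variable name and value are illustrative):

.if streq("$MODE","production")
    # production-only directives
.endif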
Now we can look up a list of known predicates and pre-parse their
arguments. For now the list is empty. The code needed to be arranged with
a common exit point to release all arguments because there's no default
argument freeing function (it likely only used to exist in the deinit
code). Since we only support simple arguments for now it's no big deal,
only a 2-liner loop.
Let's return the position of the first unparsable character on error,
so that instead of just saying "unparsable conditional expression blah"
we can have:
[ALERT] 125/150618 (13995) : parsing [test-conds2.cfg:1]: unparsable conditional expression '12/blah' in '.if' at position 1:
.if 12/blah
^
This is important because conditions will be made from environment
variables or later from more complex expressions where the error will
not always be easy to locate.
The new function split_version() converts a parsable haproxy version to
an array of integers. The function compare_current_version() compares an
arbitrary version to the current one. These two functions were written
by Thierry Fournier in 2013, and are still usable as-is. They will be
used to write config language predicates.
Till now it was only presented in the version output but could not be
consulted outside of haproxy.c, let's export it as a variable, and set
it to an empty string if not defined.
When the closing brace is missing after an argument (acl, ...), the
error may report something like "expected ')' before ''". Let's just
drop "before ''" when the final word is empty to make the message a
bit clearer.
The new pseudo-variables ".FILE", ".LINE" and ".SECTION" will be resolved
on the fly by the config parser and will respectively retrieve the current
configuration file name, the current line number and the current section
being parsed. This may help emit logs, errors, and debugging information
(e.g. which rule matched).
The '.' as the first character is reserved for such pseudo-variables and no
other variable is permitted to start with it. This will allow adding support for new ones
in the future if they prove to be useful (e.g. randoms/uuid for secret
keying or automatic naming of configuration objects).
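Combined with the .diag directive above, this permits things like (message
text illustrative):

.diag "$.FILE:$.LINE: entering section '$.SECTION'"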
Let's add a few fields to the global struct to store information about
the current file being processed, the current line number and the current
section. This will be used to retrieve them using special variables.
Instead of duplicating the condition evaluations, let's have a single
function cfg_eval_condition() that returns true/false/error. It takes
less code and will ease its extension.
The doc about .if/.elif config block conditions says:
a non-nul integer (e.g. '1'), always returns "true"
So we must accept negative integers as well. The test was made on
atoi() > 0.
No backport is needed, this is only 2.4.
This missing state was causing a second elif condition to be evaluated
after a first one succeeded after a .if failed. For example in the test
below the else would be executed:
.if 0
.elif 1
.elif 0
.else
.endif
No backport is needed, this is 2.4-only.
The condition to skip the block in the ".if" evaluator forgot to check
that the level was high enough, resulting in rare cases where a random
value matched one of the 5 values that cause the block to be skipped.
No backport is needed as it's 2.4-only.
When a L7 retry is performed, we must not forget to decrement the current
session counter of the assigned server. Of course, it must only be done if
the current session is already counted on the server, thus if SF_CURR_SESS
flag is set on the stream.
This patch is related to the issue #1003. It must be backported as far as
2.0.
Because the H1C_F_RX_BLK and H1C_F_TX_BLK flags now only concern data
processing, at the H1 stream level, there is no reason to still manage them
on the H1 connection. Thus, these flags are now set on the H1 stream.
These flags are used to block, respectively, the output and the input
processing. Thus, to be more explicit, H1C_F_WAIT_INPUT is renamed to
H1C_F_TX_BLK and H1C_F_WAIT_OUTPUT is renamed to H1C_F_RX_BLK.
Instead of subscribing for reads or sends to restart data processing, when
both sides are synchronized, the H1 stream is woken up. This happens when
H1C_F_WAIT_INPUT or H1C_F_WAIT_OUTPUT flags are removed. Indeed, these flags
block the data processing and not raw data sending or receiving.
In h1_rcv_pipe(), when the splicing is not possible or disabled at the end
of the function, we make sure to subscribe for reads. It is not a bug but it
avoids an extra call to h1_rcv_pipe() to handle the subscription in some
cases (end of message, end of chunk or read0).
In addition, the condition to detect end of splicing has been simplified. We
now only rely on the H1C_F_WANT_SPLICE flag.
In h1_snd_pipe(), before sending spliced data, we take care to flush the
output buffer by subscribing for sends. However, the condition to do so is
not accurate. We test data remaining in the pipe. It works but it also
unnecessarily subscribes H1C for sends when the output buffer is empty if we
are unable to send all spliced data in one time. Instead, H1C is now
subscribed for sends only if the output buffer is not empty.
First, there is no reason to announce the splicing support at the
conn-stream level when it is created, at least for now. GTUNE_USE_SPLICE
option is already handled at the stream level.
Second, in h1_rcv_buf(), there is no reason to test the message state to
switch the H1C in splicing mode (via H1C_F_WANT_SPLICE flag).
h1_process_input() already takes care to set CS_FL_MAY_SPLICE flag on the
conn-stream when appropriate. Thus, in h1_rcv_buf(), we can rely on this
flag to change the H1C state.
Finally, if h1_rcv_pipe() is called, it means the H1C is already in the
splicing mode. H1C_F_WANT_SPLICE flag is necessarily already set. Thus no
reason to force it.
On the client side, if the CO_RFL_KEEP_RECV flag is set when h1_rcv_buf() is
called, we force subscription for reads to be able to catch read0. This way,
the event will be reported to the upper layer to let the stream abort the
request.
This patch fixes the abortonclose option for H1 connections. It depends on
following patches :
* MEDIUM: mux-h1: Don't block reads when waiting for the other side
* MINOR: conn-stream: Force mux to wait for read events if abortonclose is set
But to be sure the event is handled by the stream, the following patches are
also required :
* BUG/MINOR: stream-int: Don't block reads in si_update_rx() if chn may receive
* MINOR: channel: Rely on HTX version if appropriate in channel_may_recv()
All the series must be backported with caution as far as 2.0, and only after
a period of observation to be sure nothing broke.
When we are waiting for the other side to read more data, or to read the
next request, we must only stop the processing of input data and not the
data receipt. This patch doesn't change anything regarding the subscriptions
for reads, so it should not change anything. The only difference is that the H1
connection will try to read data if it is woken up for an I/O event and if
it was subscribed for reads.
This patch is required to fix abortonclose option for H1 client connections.
When the abortonclose option is enabled, to be sure to be immediately
notified when a shutdown is received from the client, the frontend
conn-stream must be sure the mux will wait for read events. To do so, the
CO_RFL_KEEP_RECV flag is set when mux->rcv_buf() is called. This new flag
instructs the mux to wait for read events, regardless of its internal state.
This patch is required to fix abortonclose option for H1 client connections.
In si_update_rx() function, the reads may be blocked because we explicitly
don't want to read or because of a lack of room in the input buffer. The
first condition is valid. However the second one only tests if the channel is
empty or not. It means the reads are blocked if there are still some output
data in the input channel, in its buffer or its pipe. This condition is not
accurate. The reads must not be blocked if the channel can still receive
data. Thus instead of relying on channel_is_empty() function, we now call
channel_may_recv().
This patch is especially useful to be able to catch read0 on client side
when we are waiting for a connection to the server, when abortonclose option
is enabled. Otherwise, the client abort is not detected.
This patch depends on "MINOR: channel: Rely on HTX version if appropriate in
channel_may_recv()". Both must be backported as far as 2.0 after a period of
observation to be sure nothing broke.
Now the memory usage stats are dumped. They are first sorted by total
alloc+free so that the first ones are always the most relevant, and
that most symmetric alloc/free pairs appear next to each other. This
way it becomes convenient to only show a small part of them such as:
show profiling memory 20
It's worth noting that the sorting is performed upon each call to the
iohandler so it is technically possible that an entry could appear
twice or be dropped if the ordering changes between two calls. In
practice it is not an issue but it's worth being mentioned.
Let's rearrange it to make it more configurable and allow to iterate
over multiple parts (header, tasks, memory etc), to restart from a
given line number (previously it didn't work, though fortunately it
didn't happen), and to support dumping only certain parts and a given
number of lines. A few entries from ctx.cli are now used to store a
restart point and the current step.
When built with USE_MEMORY_PROFILING the main memory allocation functions
are diverted to collect statistics per caller. It is a bit tricky because
the only way to call the original ones is to find their pointer, which
requires dlsym(), and which is not available everywhere.
Thus all functions are designed to call their fallback function (the
original one), which is preset to an initialization function that is
supposed to call dlsym() to resolve the missing symbols, and vanish.
This saves expensive tests in the critical path.
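The mechanism can be sketched like this (names simplified, error handling
and re-entrancy protection omitted):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static void *malloc_resolve(size_t size);

/* preset to the resolver; replaced by the real malloc on first call */
static void *(*fallback_malloc)(size_t) = malloc_resolve;

static void *malloc_resolve(size_t size)
{
	void *sym = dlsym(RTLD_NEXT, "malloc");

	if (sym)
		fallback_malloc = (void *(*)(size_t))sym;
	return sym ? fallback_malloc(size) : NULL;
}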
A second problem is that dlsym() calls calloc() to initialize some
error messages. After plenty of tests with posix_memalign(), valloc()
and friends, it turns out that returning NULL still makes it happy.
Thus we currently use a visit counter (in_memprof) to detect if we're
reentering, in which case all allocation functions return NULL.
In order to convert a return address to an entry in the stats, we
perform a cheap hash consisting in multiplying the pointer by a
balanced number (as many zeros as ones) and keeping the middle bits.
The hash is already pretty good like this, achieving to store up to
638 entries in a 2048-entry table without collision. But in order to
further refine this and improve the fill ratio of the table, in case
of collision we move up to 16 adjacent entries to find a free place.
This remains quite cheap and manages to store all of these inside a
1024-entries hash table with even less risk of collision.
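The hashing step itself is tiny; a sketch of the idea (the multiplier below
has 32 one-bits out of 64, but it is an example, not the constant actually
used):

#include <stdint.h>

static unsigned int ptr_hash(const void *ret)
{
	uint64_t h = (uint64_t)(uintptr_t)ret;

	h *= 0x9F6E5B8C71D2A430ULL;   /* balanced constant: 32 ones, 32 zeroes */
	return (h >> 27) & 1023;      /* keep middle bits as a 1024-entry index */
}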
Also, free(NULL) does not produce any stats. By doing so we reduce
from 638 to 208 the average number of entries needed for a basic
config using SSL. free(NULL) not only provides no information as it's
a NOP, but keeping it is pure pollution as it happens all the time.
When DEBUG_MEM_STATS is enabled, malloc/calloc/realloc are redefined as
macros, preventing the code from compiling. Thus, when this option is
detected, the macros are undefined as they are pointless there anyway.
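Concretely this amounts to something along these lines at the top of the
profiling code:

#ifdef DEBUG_MEM_STATS
/* DEBUG_MEM_STATS redefines these as macros, which would break the
 * wrappers below, so drop the macros here.
 */
#undef calloc
#undef malloc
#undef realloc
#endif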
The functions are optimized to quickly jump to the fallback and as such
become almost invisible in terms of processing time, except for an extra
"if" on a read_mostly variable and a jump. Considering that this only
happens for pool misses and library routines, this remains acceptable.
Performance tests in SSL (the most stressful test) shows less than 1%
performance loss when profiling is enabled on 2c4t.
The code was written in a way to ease backporting to modern versions
(2.2+) if needed, so it keeps the long names for integers and doesn't
use the _INC version of the atomic ops.
We'll need to store for each call place, the pointer to the caller
(the return address to be more exact as with free() it's not uncommon
to see tail calls), the number of calls to alloc/free and the total
alloc/free bytes. realloc() will be counted either as alloc or free
depending on the balance of the size before vs after.
We store 1024+1 entries. The first ones are used as hashes and the
last one for collisions.
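A sketch of the per-entry record described above (field names illustrative):

struct memprof_stats_sketch {
	const void *caller;                 /* return address of the call place */
	unsigned long long alloc_calls;     /* number of allocation calls */
	unsigned long long free_calls;      /* number of free calls */
	unsigned long long alloc_tot;       /* total allocated bytes */
	unsigned long long free_tot;        /* total freed bytes */
};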
When profiling is enabled via the CLI, all the stats are reset.
This adds the necessary flags to permit run-time enabling/disabling of
memory profiling. For now this is disabled.
A few words were added to the management doc about it and recalling that
this is limited to certain OSes.
These ones are only read by the scheduler and occasionally written to
by the CLI parser, so let's move them to read_mostly so that they do
not risk to suffer from cache line pollution.
get_sym_curr_addr() will return the address of the first occurrence of
the given symbol while get_sym_next_addr() will return the address of
the next occurrence of the symbol. These ones return NULL on non-linux,
non-ELF, non-USE_DL.
Implement a safe mechanism to close front idling connections which
prevent the soft-stop from completing. Every h1/h2 front connection is
added to a new per-thread list instance. On shutdown, a new task is
woken up which calls the wake mux operation on every connection still
present in the new list.
A new stopping_list attach point has been added in the connection
structure. As this member is only used for frontend connections, it
shares the same union as the session_list reserved for backend
connections.
In h1_process, if the proxy of a frontend connection is disabled,
release the connection.
This commit is in preparation for properly closing idling front connections
on soft-stop. h1_process must still be called; this will be done via a
dedicated task which monitors the global variable stopping.
Implement a function to close all server idle connections. This function
is called via a global deinit server handler.
The main objective is to prevent leaving sockets in TIME_WAIT
state. To limit the set of operations on shutdown and prevent
task rescheduling, only the ctrl stack closing is done.
The purpose of this debugging option was to prevent certain pools from
masking other ones when they were shared. For example, task, http_txn,
h2s, h1s, h1c, session, fcgi_strm, and connection are all 192 bytes and
would normally be merged, but not with this option. The problem is that
certain pools are declared multiple times with various parameters, which
are often very close, and due to the way the option works, they're not
shared either. Good examples of this are captures and stick tables. Some
configurations have large numbers of stick-tables of pretty similar types
and it's very common to end up with the following when the option is
enabled:
$ socat - /tmp/sock1 <<< "show pools" | grep stick
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753800=56
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753880=57
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753900=58
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753980=59
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753a00=60
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753a80=61
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753b00=62
- Pool sticktables (224 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753780=55
In addition to not being convenient, it can have important effects on the
memory usage because these pools will not share their entries, so one stick
table cannot allocate from another one's pool.
This patch solves this by going back to the initial goal which was not to
have different pools in the same list. Instead of masking the MEM_F_SHARED
flag, it simply adds a test on the pool's name, and disables pool sharing
if the names differ. This way pools are not shared unless they're of the
same name and size, which doesn't hinder debugging. The same test above
now returns this:
$ socat - /tmp/sock1 <<< "show pools" | grep stick
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 7 users, @0x3fadb30 [SHARED]
- Pool sticktables (224 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x3facaa0 [SHARED]
This is much better. This should probably be backported, in order to limit
the side effects of DEBUG_DONT_SHARE_POOLS being enabled in production.
This command attempts to resolve a pointer to a symbol name. This is
convenient during development as it's easier to get such pointers live
than by firing up a debugger or calling addr2line.
This bug was introduced in e5ff4ad ("BUG/MINOR: ssl: fix a trash buffer
leak in some error cases").
When cli_parse_set_cert() returns because alloc_trash_chunk() failed, it
does not unlock the spinlock which can lead to a deadlock later.
Must be backported as far as 2.1 where e5ff4ad was backported.
Since the introduction of payload support on the CLI in 1.9-dev1 by
commit abbf60710 ("MEDIUM: cli: Add payload support"), a chunk is
temporarily allocated for the CLI to support defragmenting a payload
passed with a command. However it's only released when passing via
the CLI_ST_END state (i.e. on clean shutdown), but not on errors.
Something as trivial as:
$ while :; do ncat --send-only -U /path/to/cli <<< "show stat"; done
with a few hundred servers is enough to see the number of allocated
trash chunks go through the roof in "show pools".
This needs to be backported as far as 2.0.
When the lua buffers are used, a variable number of stack slots may be
used. Thus we cannot assume that we know where the top of the stack is. It
was not an issue for lua < 5.4.3 (at least for small buffers). But
'socket:receive()' now fails with lua 5.4.3 because a light userdata is
systematically pushed on the top of the stack when a buffer is initialized.
To fix the bug, in hlua_socket_receive(), we save the index of the top of
the stack before creating the buffer. This way, we can check the number of
arguments, regardless of whether anything was pushed on the stack or not.
Note that the other buffer usages seem to be safe.
This patch should solve the issue #1240. It should be backported to all stable
branches.
By passing a version number to "add map/acl", it becomes possible to
atomically replace maps and ACLs. The principle is that a new version
number is first retrieved by calling "prepare map/acl", and this version
number is used with "add map" and "add acl". Newly added entries then
remain invisible to the matching mechanism but are visible in "show
map/acl" when the version number is specified, or may be cleard with
"clear map/acl". Finally when the insertion is complete, a
"commit map/acl" command must be issued, and the version is atomically
updated so that there is no intermediate state with incomplete entries.
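A typical update sequence then looks like this (map name and output wording
illustrative):

$ socat - /var/run/haproxy.sock <<< "prepare map /etc/haproxy/geo.map"
New version created: 2
$ socat - /var/run/haproxy.sock <<< "add map @2 /etc/haproxy/geo.map 10.0.0.0/8 internal"
$ socat - /var/run/haproxy.sock <<< "show map @2 /etc/haproxy/geo.map"
$ socat - /var/run/haproxy.sock <<< "commit map @2 /etc/haproxy/geo.map"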
The command is used to atomically replace a map/acl with the pending
contents of the designated version. The new version must have been
allocated by "prepare map/acl" prior to this. At the moment it is not
possible to force the version when adding new entries, so this may only
be used to atomically clear an ACL/map.
This command allocates a new version for the map/acl, that will be usable
later to prepare the addition of new values to atomically replace existing
ones. Technically speaking the operation consists in atomically incrementing
the next version. There's no "undo" operation here, if a version is not
committed, it will automatically be trashed when committing a newer version.
This will ease maintenance of versioned maps by allowing to clear old or
failed updates instead of the current version. Nothing was done to allow
clearing everything, though if there was a need for this, implementing "@all"
or something equivalent wouldn't require more than 3 lines of code.
Instead of being able to purge only values older than a specific value,
let's support arbitrary ranges and make pat_ref_purge_older() just be
one special case of this one.
The maps and ACLs internally all have two versions, the "current" one,
which is the one being matched against, and the "next" one, the one being
filled during an atomic replacement. Till now the "show" commands only used
to show the current one but it can be convenient to be able to show other
ones as well, so let's add the ability to do this with "show map" and
"show acl". The method used here consists in passing the version number
as "@<ver>" before the map/acl name or ID. It would have been better after
it but that could create confusion with keys already using such a format.
The "show map" command wasn't updated when pattern generations were
added for atomic reloads, let's report them in the "show map" command
that lists all known maps. It will be useful for users.
This function was only used once in cli_parse_add_map(), and half of the
work it used to do was already known from the caller or testable outside
of the lock. Given that we'll need to modify it soon to pass a generation
number, let's remerge it in the caller instead, using pat_ref_load() which
is the one we'll need.
The function uses two distinct code paths for the single key/value pair
and multiple pairs inserted as payload, each with a copy-paste of the
error handling. Let's modify the loop to factor them out.
The text mentioned that only backends with a consistent hash method were
supported for dynamic servers. In fact, it is only required that the lb
algorithm is dynamic.
There is some serious confusion in the lua interface code related to
sockets and services coming from the hlua_appctx structs being called
"appctx" everywhere, and where the real appctx is reached using
appctx->appctx. This part is a bit of a pain to debug so let's rename
all occurrences of this local variable to "luactx".
During commit 7e4a557f6 ("MINOR: time: change the global timeval and
the global tick at once") the approach made sure that the new now_ms was
always higher than or equal to global_now_ms, but by forgetting the old
value. This can cause the first update to global_now_ms to fail if it's
already out of sync, going back into the loop, and the subsequent call
would then succeed due to commit 4d01f3dcd ("MINOR: time: avoid
overwriting the same values of global_now").
And if it goes out of sync, it will fail to update forever, as observed
by Ashley Penney in github issue #1194, causing incorrect freq counters
calculations everywhere. One possible trigger for this issue is one thread
spinning for a few milliseconds while the other ones continue to work.
The issue really is that old_now_ms ought not to be modified in the loop
as it's used for the CAS. But we don't need to structurally guarantee that
global_now_ms grows monotonically as it's computed from the new global_now
which is already verified for this via the __tv_islt() test. Thus, dropping
any corrections on global_now_ms in the loop is the correct way to proceed
as long as this one is always updated to follow global_now.
No backport is needed, this is only for 2.4-dev.
This patch adds miscellaneous informative flags raised during the initial
full resync process performed during the reload, for debugging purposes.
0x00000010: Timeout waiting for a full resync from a local node
0x00000020: Timeout waiting for a full resync from a remote node
0x00000040: Session aborted learning from a local node
0x00000080: Session aborted learning from a remote node
0x00000100: A local node taught us and was fully up to date
0x00000200: A remote node taught us and was fully up to date
0x00000400: A local node taught us but was partially up to date
0x00000800: A remote node taught us but was partially up to date
0x00001000: A local node was assigned for a full resync
0x00002000: A remote node was assigned for a full resync
0x00004000: A resync was explicitly requested
This patch could be backported on any supported branch
Flags used as context to know the current status of each table pushing a
full resync to a peer were correctly reset when receiving a new resync
request or confirmation message, but in case of a local peer sync during
a reload the resync request is implicit and those flags were not
correctly reset in this case.
This could result in a partial initial resync of some tables after a reload
if the connection with the old process was broken and retried.
This patch resets those flags at the end of the handshake for all new
connections, to be sure to push an entire full resync if needed.
This patch should be backported on all supported branches ( v >= 1.6 )
Only entries between the opposite of the last 'local update' rotating
counter were considered to be pushed. This processing worked in most
cases because updates are continually pushed trying to reach this point,
but there remain some cases where update ids are farther away in the past
and appear to be in the future, and the push of updates is stuck until the
head reaches the tail again, which could take a very long time.
This patch reworks the lookup so that all positions on the rotating
counter are considered in the past until we reach exactly
the 'local update' value. Doing this, the update push won't be stuck
anymore.
This patch should be backported on all supported branches ( >= 1.6 )
The commitupdate value of the table is used to check if the update
is still pending for a push for all peers. To be sure to not miss a
push we reset it just after a handshake success.
This patch should be backported on all supported branches ( >= 1.6 )
If two peers are disconnected and during this period they continue to
process a large amount of local updates, after a reconnection they
may take a long time before restarting to push their updates, because
the last pushed update would appear internally to be in the future.
This patch fixes this by resetting the cursor on acked updates to the
maximum point considered in the past if it appears to be in the future,
but it means we may lose some updates. A clean fix would be to update
the protocol to be able to signal a remote peer that it was not updated
for too long a period and needs a full resync, but this is not yet
supported by the protocol.
This patch should be backported on all supported branches ( >= 1.6 )
The re-con cursor was updated upon receiving any ack message,
even if we are pushing a complete resync to a peer. This cursor
is reset at the end of the resync, but if the connection is broken
during the resync, we could restart at an unwanted point.
With this patch, the peer stops considering ack messages while pushing
a resync, since the resync process has its own acknowledgement and
is always restarted from the beginning in case of a broken connection.
This patch should be backported on all supported branches ( >= 1.6 )
When receiving a resync request, the origins to start the full sync and
to reset after the full resync are mistakenly computed based on
the last update on the table instead of being computed based on
the last update acked by the node requesting the resync.
It could result in disordered or missing updates pushed to the
requester.
This patch sets those origins correctly.
This patch should be backported on all supported branches ( >= 1.6 )
If a reload is performed and there are no incoming connections
from the old process to push a full resync, the new process
can be stuck waiting indefinitely for this conn and never tries a
fallback requesting a full resync from a remote peer because the resync
timer was initialized to TICK_ETERNITY.
This patch forces a reset of the resync timer to its default value (5 secs)
if we detect the value is TICK_ETERNITY.
This patch should be backported on all supported branches ( >= 1.6 )
By default haproxy loads all files designated by a relative path from the
location the process is started in. In some circumstances it might be
desirable to force all relative paths to start from a different location
just as if the process was started from such locations. This is what this
directive is made for. Technically it will perform a temporary chdir() to
the designated location while processing each configuration file, and will
return to the original directory after processing each file. It takes an
argument indicating the policy to use when loading files whose path does
not start with a slash ('/').
A few options are offered, "current" (the default), "config" (files
relative to config file's dir), "parent" (files relative to config file's
parent dir), and "origin" with an absolute path.
This should address issue #1198.
In readcfgfile() when malloc() fails to allocate a buffer for the
config line, it currently says "parsing[<file>]: out of memory" while
the error is unrelated to the config file and may make one think it has
to do with the file's size. The second test (fopen() returning error)
needs to release the previously allocated line. Both directly return -1
which is not even documented as a valid error code for the function.
Let's simply make sure that the few variables freed at the end are
properly preset, and jump there upon error, after having displayed a
meaningful error message. Now at least we can get this:
$ ./haproxy -f /dev/kmem
[NOTICE] 116/191904 (23233) : haproxy version is 2.4-dev17-c3808c-13
[NOTICE] 116/191904 (23233) : path to executable is ./haproxy
[ALERT] 116/191904 (23233) : Could not open configuration file /dev/kmem : Permission denied
When a DATA frame is sent, we must take care to properly detect the EOM flag
on the HTX message to set the ES flag on the frame when necessary, to finish the
stream. But it is only done when data are copied from the HTX message to the
mux buffer and not when the frame is sent via zero-copy. This patch fixes
this bug.
It is a 2.4-specific bug. No backport is needed.
When an HTTP lua service is started, headers are consumed before calling the
script. When it was initialized, the headers were stored in a lua array,
thus they can be removed from the HTX message because the lua service will
no longer access them. But it is a problem with bodyless messages because
the EOM flag is lost. Indeed, once the headers are consumed, the message is
empty and the buffer is reset, including the flags.
Now, the headers are not immediately consumed. We will skip them when
applet:receive() or applet:getline() is called. This way, the EOM flag is preserved.
At the end, when the script is finished, all output data are consumed, thus
this remains safe.
It is a 2.4-specific bug. No backport is needed.
If an applet consumed output data (the amount of output data has changed
between before and after the call to the applet), the producer is
notified. It means CF_WRITE_PARTIAL and CF_WROTE_DATA are set on the output
channel and the opposite stream interface is notified some room was made in
its input buffer. This way, it is no longer the applet's responsibility to
take care of it. However, it doesn't matter if the applet does the same.
Said like that, it looks like an improvement not a bug. But it really fixes
a bug in the lua, for HTTP applets. Indeed, applet:receive() and
applet:getline() are buggy for HTTP applets. Data are consumed but the
producer is not notified. It means if the payload is not fully received in
one time, the applet may be blocked because the producer remains blocked (it
is time dependent).
This patch must be backported as far as 2.0 (only for the HTX part).
A read error on the server side is also reported as a write error on the
client side. It means that sometimes a server-side error is handled on the
client side. Among others, it is the case when the client side is waiting
for the response while the request processing is already finished. In this
case, the error is not handled as a server error, which is not accurate.
So now, when the request processing is finished but not the response
processing and if a read error was encountered on the server side, the error
is not immediately processed on the client side, to give the response
analysers a chance to properly catch the error.
Since the input buffer is transferred to the stream when it is created,
there is no longer any control on the request size to make sure the buffer's
reserve is still respected. It was automatically performed in h2_rcv_buf()
because the caller took care to provide the correct available space in the
buffer. The control is still there but it is no longer applied on the
request headers. Now, we should take care of the reserve when the headers
are decoded, before the stream creation.
The test is performed for the request and the response.
It is a 2.4-specific bug. No backport is needed.
It is the only function using the hdrs_bytes start-line field. Thus the
function has been refactored to no longer rely on it. To do so, we first
copy HTX blocks to the destination message, without removing them from the
source message. If the copy is interrupted on headers or trailers, we roll
back. Otherwise, data are drained from the source buffer.
Most of the time, the copy will succeed. So the rollback is only performed in
the worst but very rare case.
When all data of an HTX message are drained, we rely on htx_reset() to
reinit the message state. However, the flags must be preserved. It is, among
other things, important to preserve processing or parsing errors.
This patch must be backported as far as 2.0.
The compilation fails due to the following commit:
fc6ac53dca
BUG/MAJOR: fix build on musl with cpu_set_t support
The new global variable cpu_map conflicted with a local variable of the
same name in the code path for the apple platform when setting the
process affinity.
This does not need to be backported.
Move cpu_map structure outside of the global struct to a global
variable defined in cpuset.c compilation unit. This allows to reorganize
the includes without having to define _GNU_SOURCE everywhere for the
support of the cpu_set_t.
This fixes the compilation with musl libc, most notably used for the
alpine based docker image.
This fixes the github issue #1235.
No need to backport as this feature is new in the current
2.4-dev.
The return value check was wrongly based on error codes when the
function actually returns an error number.
This bug was introduced by f3eedfe195
which is a feature not present before branch 2.4.
It does not need to be backported.
The HTX functions used to add new HTX blocks in a message have been moved to
the header file to inline them in calling functions. These functions are
small enough.
A normalized URI is the internal term used to specify that a URI is stored
using the absolute format (scheme + authority + path). For now, it is only
used for H2 clients. It is the default and recommended format for H2 requests.
However, it is unusual for H1 servers to receive such URI. So in this case,
we only send the path of the absolute URI. It is performed for H1 servers,
but not for FCGI applications. This patch fixes the difference.
Note that it is not a real bug, because FCGI applications should support
absolute URIs.
Note also a normalized URI is only detected for H2 clients when a request is
received. There is no such test on the H1 side. It means an absolute URI
received from an H1 client will be sent without modification to an H1 server
or an FCGI application.
To make it possible, a dedicated function has been added to get the H1
URI. This function is called by the H1 and the FCGI multiplexer when a
request is sent to a server.
This patch should fix the issue #1232. It must be backported as far as 2.2.
The error path of the NUMA topology detection introduced in commit
b56a7c89a ("MEDIUM: cfgparse: detect numa and set affinity if needed")
lacks an initialization resulting in possible crashes at boot. No
backport is needed since that was introduced in 2.4-dev.
In proxy.c, when the process is stopping, we try to flush table contents
using 'stktable_trash_oldest'. A check on the counter "table->syncing" was
made to verify that no resync was pending or in progress.
But with multiple threads this counter may only be increased by another
thread after some delay, so the content of some tables can be trashed earlier and
won't be pushed to the new process (after reload, some tables appear reset and
others don't).
This patch renames the counter "table->syncing" to "table->refcnt" and
the counter is increased during configuration parsing (registering a table to
a peer section) to protect tables during runtime and until resync of a new
process has succeeded or failed.
The inc/dec operations are now made using atomic operations
because multiple peer sections could refer to the same table in the future.
This fix addresses github #1216.
This patch should be backported to all branches with multi-thread support
(v >= 1.8).
The peers task handling the "stopping" could wake up multiple
times in stopping state with WOKEN_SIGNAL: the connection to the
local peer initiated on the first processing was immediately
shut down by the next processing of the task, and the old process
exits considering it is unable to connect. It results in
empty stick-tables after a reload.
This patch checks the flag 'PEERS_F_DONOTSTOP' to know if the
signal is considered and if remote peers connections shutdown
is already done or if a connection to the local peer must be
established.
This patch should be backported to all supported branches (v >= 1.6).
The old process checked each table resync status even if
the resync process is finished. This behavior had no known impact
except useless processing and was discovered while debugging
another issue.
This patch could be backported to all supported branches (v >= 1.6)
but once again, it has no impact except avoiding useless processing.
In tv_update_date(), we calculate the new global date based on the local
one. It's very likely that other threads will end up with the exact same
now_ms date (at 1 million wakeups/s it happens 99.9% of the time), and
even the microsecond was measured to remain unchanged ~70% of the time
with 16 threads, simply because sometimes another thread already updated
a more recent version of it.
In such cases, performing a CAS to the global variable requires a cache
line flush which brings nothing. By checking if they're changed before
writing, we can divide by about 6 the number of writes to the global
variables, hence the overall contention.
In addition, it's worth noting that all threads will want to update at
the same time, so let's place a cpu relax call before trying again, this
will spread attempts apart.
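As a hedged illustration of the idea (variable names are simplified, this is
not the actual code), the write is simply skipped when nothing newer has to
be published:
    new_now_ms = compute_now_ms();           /* freshly computed local date  */
    old_now_ms = global_now_ms;              /* last published global value  */
    while (new_now_ms > old_now_ms) {
        /* only pay for the cache-line flush when we really have
         * something newer to publish, otherwise do nothing
         */
        if (HA_ATOMIC_CAS(&global_now_ms, &old_now_ms, new_now_ms))
            break;
        __ha_cpu_relax();                    /* spread concurrent retries apart */
    }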
The time adjustment is very rare, even at high poll rates. Tests show
that only 0.2% of tv_update_date() calls require a change of offset. Such
concurrent writes to a shared variable have an important impact on future
loads, so let's only update the variable if it changed.
The compilation is currently broken on platform without USE_CPU_AFFINITY
set. An error has been reported by the cygwin build of the CI.
This does not need to be backported.
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from src/ev_poll.c:22:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ev_poll.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from include/haproxy/connection.h:30,
from include/haproxy/ssl_sock.h:27,
from src/ssl_sample.c:30:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ssl_sample.o] Error 1
Fix the warning treated as error on the CI for the macOS compilation :
"src/haproxy.c:2939:23: error: unused variable 'set'
[-Werror,-Wunused-variable]"
This does not need to be backported.
Render numa detection optional with a global configuration statement
'no numa-cpu-mapping'. This can be used if the applied affinity of the
algorithm is not optimal. Also complete the documentation with this new
keyword.
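Example:
    global
        no numa-cpu-mapping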
On process startup, the CPU topology of the machine is inspected. If a
multi-socket CPU machine is detected, automatically define the process
affinity on the first node with active cpus. This is done to prevent an
impact on the overall performance of the process in case the topology of
the machine is unknown to the user.
This step is not executed in the following condition :
- a non-null nbthread statement is present
- a restrictive 'cpu-map' statement is present
- the process affinity is already restricted, for example via a taskset
call
For the record, benchmarks were executed on a machine with 2 CPUs
Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz. In both clear and ssl
scenarios, the performance was sub-optimal without the automatic
rebinding on a single node.
Allow to specify multiple cpu ids/ranges in parse_cpu_set separated by a
comma. This is optional and must be activated by a parameter.
The comma support is disabled for the parsing of the 'cpu-map' config
statement. However, it will be useful to parse files in sysfs when
inspecting the cpu topology for NUMA automatic process binding.
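For instance, the cpulist files exposed by the kernel in sysfs use exactly
this syntax (the content below is only an illustration):
    $ cat /sys/devices/system/node/node0/cpulist
    0-7,16-23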
Create a function thread_cpu_mask_forced. Its purpose is to report if a
restrictive cpu mask is active for the current process, for example due
to a taskset invocation. It is only implemented for the linux platform
currently.
Use the platform independent type hap_cpuset for the cpu-map statement
parsing. This allows addressing a CPU index greater than LONGBITS.
Update the documentation to reflect the removal of this limit except for
platforms without cpu_set_t type or equivalent.
Replace the unsigned long parameter by a hap_cpuset. This allows
addressing CPUs with an index greater than LONGBITS.
This function is used to parse the 'cpu-map' statement. However at the
moment, the result is cast back to a long to store it in the global
structure. The next step is to replace the ulong in cpu_map in the
global structure with hap_cpuset.
This module can be used to manipulate cpu sets in a platform-agnostic
way. Use the type cpu_set_t/cpuset_t if available on the platform, or
fall back to an unsigned long, which de facto limits the maximum cpu index
to LONGBITS.
The H2_CF_RCVD_SHUT flag is used to report a read0 was encountered. It is
used by the H2 mux to properly handle shutdowns. However, this flag is only
set when no data are received. If it is detected at the socket level when
some data are received, it is not handled. And because the event was
reported on the connection, any other read attempts are blocked. In this
case, we are unable to close the connection and release the mux
immediately. We must wait for the mux timeout to expire.
This patch should fix the issue #1231. It must be backported as far as 2.0.
As we now embed the library we don't need to support the older 1.0 API
any more, so we can remove the explicit calls to slz_make_crc_table()
and slz_prepare_dist_table().
Now that SLZ is merged, let's update the makefile and compression
files to use it. As a result, SLZ_INC and SLZ_LIB are neither defined
nor used anymore.
USE_SLZ is enabled by default ("USE_SLZ=default") and can be disabled
by passing "USE_SLZ=" or by enabling USE_ZLIB=1.
The doc was updated to reflect the changes.
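Example (the target name is only illustrative):
    $ make TARGET=linux-glibc                # builds with the embedded SLZ
    $ make TARGET=linux-glibc USE_ZLIB=1     # uses zlib instead, disabling SLZ
    $ make TARGET=linux-glibc USE_SLZ=       # disables SLZ without enabling zlib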
SLZ is rarely packaged by distros and there have been complaints about
the CPU and memory usage of ZLIB, leading to some suggestions to better
address the issue by simply integrating SLZ into the tree (just 3 files).
See discussions below:
https://www.mail-archive.com/haproxy@formilux.org/msg38037.html
https://www.mail-archive.com/haproxy@formilux.org/msg40079.html
https://www.mail-archive.com/haproxy@formilux.org/msg40365.html
This patch does just this, after minor adjustments to these files:
- tables.h was renamed to slz-tables.h
- tables.h had the precomputed tables removed since not used here
- slz.c uses includes <import/slz*> instead of "slz*.h"
The slz commit imported here was b06c172 ("slz: avoid a build warning
with -Wimplicit-fallthrough"). No other change was performed either to
SLZ nor to haproxy at this point so that this operation may be replicated
if needed for a future version.
Since commit 3f12887 ("MINOR: mworker: don't use children variable
anymore"), the oldpids array is not used anymore to generate the new -sf
parameters. So we don't need to set nb_oldpids to 0 during the first
start of the master process.
This patch fixes a bug where, when 2 master processes try to synchronize
their peers, there is a small chance that it won't work because nb_oldpids
equals 0.
Should be backported as far as 2.0.
This bug affects the peers synchronisation code which relies on the
nb_oldpids variable to synchronize the peer from the old PID.
In the case the process is not started in master-worker mode and tries
to synchronize using the peers, there is a small chance that it won't work
because nb_oldpids equals 0.
Fix the bug by setting the variable to 0 only in the case of the
master-worker when not reloaded.
It could also be a problem when trying to synchronize the peers between
2 master processes, which should be fixed in another patch.
Bug exists since commit 8a361b5 ("BUG/MEDIUM: mworker: don't reuse PIDs
passed to the master").
Should be backported as far as 1.8.
The application of a cpu-map statement with both process and threads
is broken (P-Q/1 or 1/P-Q notation).
For example, before the fix, when using P-Q/1, proc_t1 would be updated.
Then it would be AND'ed with thread which is still 0 and thus does
nothing.
Another problem is when using 1/1[-Q], thread[0] is defined. But if
there are multiple processes, every process will use this defined
affinity even though it should be applied only to the 1st process.
The solution to the fix is a little bit too complex for my taste and
there is maybe a simpler solution but I did not wish to break the
storage of global.cpu_map, as it is quite painful to test all the
use-cases. Besides, this code will probably be cleaned up when
multiprocess support is removed in a future version.
Let's try to explain my logic.
* either haproxy runs in multiprocess or multithread mode. If on
multiprocess, we should consider proc_t1 (P-Q/1 notation). If on
multithread, we should consider thread (1/P-Q notation). However
during parsing, the final number of processes or threads is unknown,
thus we have to consider the two possibilities.
* there is a special case for the first thread / first process which is
present in both execution modes. And as a matter of fact, the cpu-map 1 and
1/1 notations represent the same thing. Thus, thread[0] and proc_t1[0]
represent the same thing. To solve this problem, only thread[0] is
used for this special case.
This fix must be backported up to 2.0.
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory which renders those segments redundant.
See GitHub Issue #714.
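For example, a request for "/a/./b/./c" is rewritten to "/a/b/c".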
Currently the delimiter is hardcoded as ampersand (&) but the function takes the delimiter as a parameter.
This patch replaces the hardcoded ampersand with the given delimiter.
When headers are split over several frames, the payloads of HEADERS and
CONTINUATION frames are merged to form a unique HEADERS frame before
decoding the payload. To do so, info about the current frame is updated
(dff, dfl..) with info from the next one. Here there is a bug when the frame
length (dfl) is updated. We must add the next frame length (hdr.dfl) and not
only the amount of data found in the buffer (clen). Because HEADERS frames
are decoded in one pass, the dfl value is either the whole frame length or 0,
nothing in between.
This patch must be backported as far as 2.0.
In the function decoding the payload of HEADERS frames, an internal error is
returned if the frame length is too large. It cannot exceed the buffer
size. The same is true when headers are split over several frames. The
payload of HEADERS and CONTINUATION frames are merged and the overall size
must not exceed the buffer size.
However, there is a bug when the current frame is big enough to only have
the space for a part of the header of the next frame. Because, in this case,
we wait for more data, to have the whole frame header. We don't properly
detect that the headers are too large to be stored in one buffer. In fact
the test to trigger this error is not accurate. When the buffer is full, the
error is reported if the frame length exceeds the amount of data in the
buffer. But in reality, an error must be reported when we are unable to
decode the current frame while the buffer is full. Because, in this case, we
know there is no way to change this state.
When the bug happens, the H2 connection is woken up in a loop, consuming all
the CPU. But the traffic is not blocked for all that.
This patch must be backported as far as 2.0.
gcc still reports a potential null pointer dereference in the delete server
function even with a BUG_ON before it. Remove the misleading NULL check
in the for loop which should never happen.
This does not need to be backported.
Implement a new CLI command 'del server'. It can be used to remove a
dynamically added server. Only servers in maintenance mode can be
removed, and only if they have no pending/active/idle connections.
Add a new reg-test for this feature. The scenario of the reg-test needs
to first add a dynamic server. It is then deleted and a client is used
to ensure that the server is not joinable.
The management doc is updated with the new command 'del server'.
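A hedged usage example on the CLI (backend and server names are made up):
    $ echo "disable server app/srv1" | socat stdio /var/run/haproxy.sock
    $ echo "del server app/srv1" | socat stdio /var/run/haproxy.sock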
cli_parse_add_server can be executed in parallel by several CLI
instances and so must be thread-safe. The critical points of the
function are :
- server duplicate detection
- insertion of the server in the proxy list
The mode of operation has been reversed. The server is first
instantiated and parsed. The duplicate check has been moved to the end,
just before the insertion in the proxy list, under thread isolation.
Thus, the thread safety is guaranteed and server allocation is kept
outside of locks/thread isolation.
Config information has been added into the logsrv struct. The filename
is duplicated and should be freed on exit.
Introduced in the current release.
This does not need to be backported.
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").
Let's use more explicit (but slightly longer) names now:
LIST_ADD -> LIST_INSERT
LIST_ADDQ -> LIST_APPEND
LIST_ADDED -> LIST_INLIST
LIST_DEL -> LIST_DELETE
The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage to use it instead of the
lazier LIST_DELETE which is often less safe.
The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.
The list doc was updated.
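As a hedged sketch of how the new names read in code (the list and element
names below are made up):
    struct list queue = LIST_HEAD_INIT(queue);

    LIST_APPEND(&queue, &item->list);   /* enqueue at the tail (was LIST_ADDQ)  */
    if (LIST_INLIST(&item->list))       /* attachment check    (was LIST_ADDED) */
        LIST_DELETE(&item->list);       /* detach              (was LIST_DEL)   */
    LIST_INSERT(&queue, &item->list);   /* insert at the head  (was LIST_ADD)   */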
This improves the use of local variables in sample_conv_json_query:
- Use the enum type for the return value of `mjson_find`.
- Do not use single letter variables.
- Reduce the scope of variables that are only needed in a single branch.
- Add missing newlines after variable declaration.
The test in srv_alloc_lb() to allocate the lb_nodes[] array used in the
consistent hash was incorrect, it wouldn't do it for consistent hash and
could do it for regular random.
No backport is needed as this was added for dynamic servers in 2.4-dev by
commit f99f77a50 ("MEDIUM: server: implement 'add server' cli command").
Amaury noticed that I managed to break the build of DEBUG_FAIL_ALLOC
for the second time with 207c09509 ("MINOR: pools: move the fault
injector to __pool_alloc()"). The joy of endlessly reworking patch
sets... No backport is needed, that was in the just merged cleanup
series.
This function has become too big (251 bytes) and is now hurting
performance a lot, with up to 4% request rate being lost over the last
pool changes. Let's move it to pool.c as a regular function. Other
attempts were made to cut it in half but it's still inefficient. Doing
this results in saving ~90kB of object code, and even 112kB since the
pool changes, with code that is even slightly faster!
Conversely, pool_get_from_cache(), which remains half of this size, is
still faster inlined, likely in part due to the immediate use of the
returned pointer afterwards.
Till now it used to call it only if there were not too many objects in
the local cache, otherwise it would send the latest one directly into the
shared cache. Now it always sends to the local cache and it's up to the
local cache to free its oldest objects. From a cache freshness perspective
it's better this way since we always evict cold objects instead of hot
ones. From an API perspective it's better because it will help make the
shared cache invisible to the public API.
Till now we could only evict oldest objects from all local caches using
pool_evict_from_local_caches() until the cache size was satisfying again,
but there was no way to evict excess objects from a single cache, which
is the reason why pool_put_to_cache() used to refrain from putting into
the local cache and would directly write to the shared cache, resulting
in massive writes when caches were full.
Let's add this new function now. It will stop once the number of objects
in the local cache is no higher than 16+total/8 or the cache size is no
more than 75% full, just like before.
For now the function is not used.
Continuing the unification of local and shared pools, now the usage of
pools is governed by CONFIG_HAP_POOLS without which allocations and
releases are performed directly from the OS using pool_alloc_nocache()
and pool_free_nocache().
There are two levels of freeing to the OS:
- code that wants to keep the pool's usage counters updated uses
pool_free_area() and handles the counters itself. That's what
pool_put_to_shared_cache() does in the no-global-pools case.
- code that does not want to update the counters because they were
already updated only calls pool_free_area().
Let's extract these calls to establish the symmetry with pool_get_from_os()
and pool_alloc_nocache(), resulting in pool_put_to_os() (which only updates
the allocated counter) and pool_free_nocache() (which also updates the used
counter). This will later allow to simplify the generic code.
A part of the code cannot be factored out because it still uses non-atomic
inc/dec for pool->used and pool->allocated as these are located under the
pool's lock. While it can make sense in terms of bus cycles, it does not
make sense in terms of code normalization. Further, some operations were
still performed under a lock that could be totally removed via the use of
atomic ops.
There is still one occurrence in pool_put_to_shared_cache() in the locked
code where pool_free_area() is called under the lock, which must absolutely
be fixed.
Now there's one part dealing with the allocation itself and keeping
counters up to date, and another one on top of it to return such an
allocated pointer to the user and update the use count and stats.
This is in anticipation for being able to group cache-related parts.
The release code is still done at once.
Till now it was limited to objects allocated from the OS which means
it had little use as soon as pools were enabled. Let's move it upper
in the layers so that any code can benefit from fault injection. In
addition this allows to pass a new flag POOL_F_NO_FAIL to disable it
if some callers prefer a no-failure approach.
ha_random() is quite heavy and uses atomic ops or even a lock on some
architectures. Here we don't seek good randoms, just statistical ones,
so let's use the statistical prng instead.
Now the multi-level cache becomes more visible:
pool_get_from_local_cache()
pool_put_to_local_cache()
pool_get_from_shared_cache()
pool_put_to_shared_cache()
The functions were rightfully called from/to_cache when the thread-local
cache was considered as the only cache, but this is getting terribly
confusing. Let's call them from/to local_cache to make it clear that
it is not related with the shared cache.
As a side note, since pool_evict_from_cache() used not to work for a
particular pool but for all of them at once, it was renamed to
pool_evict_from_local_caches() (plural form).
They were strictly equivalent, let's remerge them and rename them to
pool_alloc_nocache() as it's the call which performs a real allocation
which does not check nor update the cache. The only difference in the
past was the former taking the lock and not the second but now the lock
is not needed anymore at this stage since the pool's list is not touched.
In addition, given that the "avail" argument is no longer used by the
function nor by its callers, let's drop it.
Now we don't loop anymore trying to refill multiple items at once, and
an allocated object is directly returned to the requester instead of
being stored into the shared pool. This has multiple benefits. The
first one is that no locking is needed anymore on the allocation path
and the second one is that the loop will no longer cause latency spikes.
This is a first step towards unifying all the fallback code. Right now
these two functions are the only ones which do not update the needed_avg
rate counter since there's currently no shared pool kept when using them.
But their code is similar to what could be used everywhere except for
this one, so let's make them capable of maintaining usage statistics.
As a side effect the needed field in "show pools" will now be populated.
The mem_should_fail() call enabled by DEBUG_FAIL_ALLOC used to be placed
only in the no-cache version of the allocator. Now we can generalize it
to all modes and remove the exclusive test on CONFIG_HAP_NO_GLOBAL_POOLS.
We're going to make the local pool always present unless pools are
completely disabled. This means that pools are always enabled by
default, regardless of the use of threads. Let's drop this notion
of "local" pools and make it just "pool". The equivalent debug
option becomes DEBUG_NO_POOLS instead of DEBUG_NO_LOCAL_POOLS.
For now this changes nothing except the option and dropping the
dependency on USE_THREAD.
Initially per-thread pool caches were stored into a fixed-size array.
But this was a bit ugly because the last allocated pools were not able
to benefit from the cache at all. As a work around to preserve
performance, a size of 64 cacheable pools was set by default (there
are 51 pools at the moment, excluding any addon and debugging code),
so all in-tree pools were covered, at the expense of higher memory
usage.
In addition an index had to be calculated for each pool, and was used
to access the pool cache head into that array. The pool index was not
even stored into the pools so it was required to determine it to access
the cache when the pool was already known.
This patch changes this by moving the pool cache head into the pool
head itself. This way it is certain that each pool will have its own
cache. This removes the need for index calculation.
The pool cache head is 32 bytes long so it was aligned to 64B to avoid
false sharing between threads. The extra cost is not huge (~2kB more
per pool than before), and we'll make better use of that space soon.
The pool cache head contains the size, which should probably be removed
since it's already in the pool's head.
When building with DEBUG_FAIL_ALLOC we call a random generator to decide
whether the pool alloc should succeed or fail, and there was a preliminary
debugging mechanism to keep sort of a history of the previous decisions. But
it was never used, enforces a lock during the allocation, and forces to use
static variables, all of which are limiting the ability to pursue the pools
cleanups with no real benefit. Let's get rid of them now.
Since recent commit ae07592 ("MEDIUM: pools: add CONFIG_HAP_NO_GLOBAL_POOLS
and CONFIG_HAP_GLOBAL_POOLS") the pre-allocation of all desired reserved
buffers was not done anymore on systems not using the shared cache. This
basically has no practical impact since these ones will quickly be refilled
by all the ones used at run time, but it may confuse someone checking if
they're allocated in "show pools".
That's only 2.4-dev, no backport is needed.
When running with CONFIG_HAP_NO_GLOBAL_POOLS, it's theoretically possible
to keep an incorrect count of allocated entries in a pool because the
allocated counter was used as a cumulated counter of alloc calls instead
of a number of currently allocated items (it's possible the meaning has
changed over time). The only impact in this mode essentially is that
"show pools" will report incorrect values. But this would only happen on
limited pools, which is not even certain still exist.
This was added by recent commit 0bae07592 ("MEDIUM: pools: add
CONFIG_HAP_NO_GLOBAL_POOLS and CONFIG_HAP_GLOBAL_POOLS") so no backport
is needed.
This patch renames all existing uri-normalizers into a more consistent naming
scheme:
1. The part of the URI that is being touched.
2. The modification being performed as an explicit verb.
This normalizer merges `../` path segments with the preceding segment,
removing both the preceding segment and the `../`.
Empty segments do not receive special treatment. The `merge-slashes` normalizer
should be executed first.
See GitHub Issue #714.
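For example, a request for "/foo/bar/../baz" is rewritten to "/foo/baz".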
When the session is aborted before any connection attempt to any server, the
number of connection retries reported in the logs is wrong. It happens
because when the retries counter is not strictly positive, we consider the
max number of retries was reached and the backend retries value is used. It
is obviously wrong when no connection was performed.
In fact, at this stage, the retries counter is initialized to 0. But the
backend stream-interface is in the INI state. Once it is set to SI_ST_REQ,
the counter is set to the backend value. And it is the only possible state
transition from INI state. Thus it is safe to rely on it to fix the bug.
This patch must be backported to all stable versions.
The http_get_stline() function was designed to be called from HTTP analyzers,
thus before any data forwarding. To prevent any invalid usage, two BUG_ON()
statements were added. However, it is not a good idea because it is pretty
hard to be sure no HTTP sample fetch will never be called outside the
analyzers context. Especially because there is at least one possible area
where it may happens. An HTTP sample fetch may be used inside the unique-id
format string. On the normal case, it is generated in AN_REQ_HTTP_INNER
analyzer. But if an error is reported too early, the id is generated when
the log is emitted.
So, it is safer to remove the BUG_ON() statements and consider the normal
behavior is to return NULL if the first block is not a start-line. Of
course, this means all calling functions must test the return value or be
sure the start-line is really there.
This patch must be backported as far as 2.0.
This patch adds 4 new sample fetches to get the source and the destination
info (ip address and port) of the backend connection :
* bc_dst : Returns the destination address of the backend connection
* bc_dst_port : Returns the destination port of the backend connection
* bc_src : Returns the source address of the backend connection
* bc_src_port : Returns the source port of the backend connection
The configuration manual was updated accordingly.
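A hedged configuration example (the header name is arbitrary):
    http-response set-header X-Backend-Conn "%[bc_src]:%[bc_src_port] -> %[bc_dst]:%[bc_dst_port]"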
When method sample fetch is called, if an exotic method is found
(HTTP_METH_OTHER), when smp_prefetch_htx() is called, we must be sure the
start-line is still there. Otherwise, HAProxy may crash because of a NULL
pointer dereference, for instance if the method sample fetch is used inside
a unique-id format string. Indeed, the unique id may be generated when the
log message is emitted. At this stage, the request channel is empty.
This patch must be backported as far as 2.0. But the bug exists in all
stable versions for the legacy HTTP mode too. Thus it must be adapted to the
legacy HTTP mode and backported to all other stable versions.
For all ssl_bc_* sample fetches, the test on the keyword when called from a
health-check is inverted. We must be sure the 5th character is a 'b' to
retrieve a connection.
This patch must be backported as far as 2.2.
bc_http_major sample fetch now works when it is called from a
tcp-check. When it happens, the session origin is a check. The backend
connection is retrieved from the conn-stream attached to the check.
If required, this path may easily be backported as far as 2.2.
fc_http_major and bc_http_major sample fetches return the major digit of the
HTTP version used, respectively, by the frontend and the backend
connections, based on the mux. However, in reality, "2" is returned if the
H2 mux is detected, otherwise "1" is unconditionally returned, regardless of
the mux used. Thus, if called for a raw TCP connection, "1" is returned.
To fix this bug, we now get the multiplexer flags, if there is one, to be
sure MX_FL_HTX is set.
I guess it was made this way on purpose when the H2 multiplexer was
introduced in 1.8, and with the legacy HTTP mode there was no other
solution at the connection level. Thus this patch should be backported as
far as 2.2. For the 2.0, it must be evaluated first because of the legacy
HTTP mode.
When a log-format string is built from a health-check, the session origin
is the health-check itself and not a connection. In addition, there is no
stream. It means for now some formats are not supported: %s, %sc, %b, %bi,
%bp, %si and %sp.
Thanks to this patch, the session origin is converted to a check. So it is
possible to retrieve the backend and the backend connection. Note this
session has no listener, thus the %ft format must be guarded.
This patch is light and standalone, thus it may be backported as far as 2.2
if required. However, because the error is human, it is probably better to
wait a bit to be sure everything is properly protected.
The dummy frontend used to create the session of the tcp-checks is
initialized without identifier. However, it is required because this id may
be used without any guard, for instance in log-format string via "%f" or
when fe_name sample fetch is called. Thus, an unset id may lead to crashes.
This patch must be backported as far as 2.2.
When a thread ends its harmless period, we must only consider running
threads when testing threads_want_rdv_mask mask. To do so, we reintroduce
all_threads_mask mask in the bitwise operation (It was removed to fix a
deadlock).
Note that for now it is useless because there is no way to stop threads or
to have threads reserved for another task. But it is safer this way to avoid
bugs in the future.
With json_query, a JSON value can be extracted from a header
or the body of the request and saved to a variable.
This converter makes it possible to handle some JSON workload
to route requests to different backends.
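A hedged configuration sketch (the variable, backend name and JSON path are
arbitrary; buffering the body requires "option http-buffer-request"):
    option http-buffer-request
    http-request set-var(txn.user) req.body,json_query('$.user')
    use_backend be_json_users if { var(txn.user) -m str foo }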
This library is required for the subsequent patch which adds
the JSON query possibility.
It is necessary to change the include statement in "src/mjson.c"
because the imported includes in haproxy are in "include/import"
orig: #include "mjson.h"
new: #include <import/mjson.h>
ub64dec and ub64enc are the base64url equivalent of b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
The RFC1421 mention in the base64.c file is deprecated so it was replaced
with RFC4648, to which the existing converters, base64/b64dec, still apply.
Example:
HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
Adjust the size of the sample buffer before we change the "area"
pointer. The change in size is calculated as the difference between the
original pointer and the new start pointer. But since the
`smp->data.u.str.area` assignment results in `smp->data.u.str.area` and
`start` being the same pointer, we always ended up subtracting zero.
This patch changes it to adjust the size by the actual amount the pointer
moved.
I'm not entirely sure what the impact of this is, but the previous code
seemed wrong.
[wt: from what I can see the only harmful case is when the output is
converted to a stick-table key, it could result in zeroing past the
end of the buffer; other cases do not touch beyond ->data]
All allocation errors in cfg_parse_listen() are now handled in a unique
place under the "alloc_error" label. This simplify a bit error handling in
this function.
At several places during the proxy section parsing, memory allocation was
performed with no check. Result is now tested and an error is returned if
the allocation fails.
This patch may be backported to all stable versions but it only fixes
allocation errors during configuration parsing. Thus, it is not mandatory.
Allocation errors are now handled in the bind_conf_alloc() functions. Thus
callers, when not already done, are also updated to catch a NULL return value.
This patch may be backported (at least partially) to all stable
versions. However, it only fixes errors during configuration parsing. Thus it
is not mandatory.
Allocated variables are now released when an error occurred during
use_backend, use-server, force/ignore-parsing, stick-table, stick and stats
directives parsing. For some of these directives, allocation errors have
been added.
This patch may be backported to all stable versions but it only fixes leaks
or allocation errors during configuration parsing. Thus, it is not
mandatory. It should fix issue #1119.
When an error occurred in hlua_register_cli(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_service(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_action(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in action_register_lua(), the allocated hlua rule and
arguments must be released to avoid memory leaks.
This patch may be backported in all stable versions.
When an error occurred in hlua_register_fetches(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions. It should fix#1112.
When an error occurred in hlua_register_converters(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_task(), the allocated lua context
and task must be released to avoid memory leaks.
This patch may be backported in all stable versions.
Add the trace support for the checks. Only tcp-check based health-checks are
supported, including the agent-check.
In traces, the first argument is always a check object. So it is easy to get
all info related to the check. The tcp-check ruleset, the conn-stream and
the connection, the server state...
Since 1.8, for simplicity, the time offset used to compensate for time
drift and jumps had been stored per thread. But with a global time,
the complexity has significantly increased.
What this patch does in order to address this is to get back to the
origins of the pre-thread time drift correction, and keep a single
offset between the system's date and the current global date.
The thread first verifies from the before_poll date if the time jumped
backwards or forward, then either fixes it by computing the new most
likely date, or applies the current offset to this latest system date.
In the first case, if the date is out of range, the old one is reused
with the max_wait offset or not depending on the interrupted flag.
Then it compares its date to the global date and updates both so that
both remain monotonic and that the local date always reflects the
latest known global date.
In order to support atomic updates to the offset, it's saved as a
ullong which contains both the tv_sec and tv_usec parts in its high
and low words. Note that a part of the patch comes from the inlining
of the equivalent of tv_add applied to the offset to make sure that
signed ints are permitted (otherwise it depends on how timeval is
defined).
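As a hedged illustration of this packing (variable names are made up, this is
not the actual code):
    unsigned long long packed;

    /* store: seconds in the high 32 bits, microseconds in the low 32 bits */
    packed = ((unsigned long long)(unsigned int)ofs.tv_sec << 32)
             + (unsigned int)ofs.tv_usec;

    /* load: split the two words back */
    ofs.tv_sec  = (int)(packed >> 32);
    ofs.tv_usec = (int)(packed & 0xffffffffU);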
This is significantly more reliable than the previous model as the
global time should move in a much smoother way, and not according
to what thread last updated it, and the thread-local time should
always be very close to the global one.
Note that (at least for debugging) a cheap way to measure processing
lag would consist in measuring the difference between global_now_ms
and now_ms, as long as other threads keep it up-to-date.
Instead of using two CAS loops, better compute the two units
simultaneously and update them at once. There is no guarantee that
the update will be synchronous, but we don't care, what matters is
that both are monotonically updated and that global_now_ms always
follows the last known value of global_now.
In the global_now loop, we used to set tmp_adj from adjusted, then
update it from tmp_now, then set adjusted back to tmp_adj, and
finally set now from adjusted. This is a long and unneeded set of
moves resulting from years of code changes. Let's just set now
directly in the loop, stop using adjusted and remove tmp_adj.
The time initialization was made a bit complex because we rely on a
dummy negative argument to reset all fields, leaving no distinction
between process-level initialization and thread-level initialization.
This patch changes this by introducing two functions, one for the
process and the second one for the threads. This removes the ambiguous
test and makes sure that the relevant fields are always initialized
exactly once. This also offers a better solution to the bug fixed in
commit b48e7c001 ("BUG/MEDIUM: time: make sure to always initialize
the global tick") as there is no more special values for global_now_ms.
It's simple enough to be backported if any other time-related issues
are encountered in stable versions in the future.
It was only used by freq_ctr and is not used anymore. In addition the
local curr_sec_ms was removed, as well as the equivalent extern
definitions which did not exist anymore either.
It remains cumbersome to preserve two versions of the freq counters and
two different internal clocks just for this. In addition, the savings
from using two different mechanisms are not that important as the only
saving is a divide that is replaced by a multiply, but now thanks to
the freq_ctr_total() unification the code could also be simplified to
optimize it in case of constants.
This patch turns all non-period freq_ctr functions to static inlines
which call the period-based ones with a period of 1 second. A direct
benefit is that a single internal clock is now needed for any counter
and that they now all rely on ticks.
These 1-second counters are essentially used to report request rates
and to enforce a connection rate limitation in listeners. It was
verified that these continue to work like before.
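As a hedged sketch of the pattern (exact prototypes may differ), a per-second
reader simply becomes a thin wrapper around the period-based one:
    static inline unsigned int read_freq_ctr(struct freq_ctr *ctr)
    {
        /* one-second period expressed in ticks */
        return read_freq_ctr_period(ctr, MS_TO_TICKS(1000));
    }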
Both structures are identical except the name of the field starting
the period and its description. Let's call them all freq_ctr and the
period's start "curr_tick" which is generic.
This is only a temporary change and fields are expected to remain
the same with no code change (verified).
This one is the easiest to implement, it just requires a call and a
divide of the result. Anti-flapping correction for low-rates was
preserved.
Now calls using a constant period will be able to use a reciprocal
multiply for the period instead of a divide.
Most of the functions designed to read a counter over a period go through
the same complex loop and only differ in the way they use the returned
values, so it was worth implementing all this into freq_ctr_total() which
returns the total number of events over a period so that the caller can
finish its operation using a divide or a remaining time calculation. As
a special case, read_freq_ctr_period() doesn't take pending events but
requires to enable an anti-flapping correction at very low frequencies.
Thus the function implements it when pend<0.
Thanks to this function it will be possible to reimplement the other ones
as inline and merge the per-second ones with the arbitrary period ones
without always adding the cost of a 64 bit divide.
This variable almost never changes and is read a lot in time-critical
sections. threads_want_rdv_mask is read very often as well in
thread_harmless_end() and is almost never changed (only when someone
uses thread_isolate()). Let's move both to read_mostly.
This one only contains the list of per-thread kqueue FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
This one only contains the list of per-thread epoll FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
Some pointer to arrays such as fdtab, fdinfo, polled_mask etc are never
written to at run time but are used a lot. fdtab accesses appear a lot in
perf top because ha_used_fds is in the same cache line and is modified
all the time. This patch moves all these read-mostly variables to the
read_mostly section when defined. This way their cache lines will be
able to remain in shared state in all CPU caches.
Some variables are mostly read (mostly pointers) but they tend to be
merged with other ones in the same cache line, slowing their access down
in multi-thread setups. This patch declares an empty, aligned variable
in a section called "read_mostly". This will force a cache-line alignment
on this section so that any variable declared in it will be certain to
avoid false sharing with other ones. The section will be eliminated at
link time if not used.
A __read_mostly attribute was added to compiler.h to ease use of this
section.
Interestingly, all arrays used to declare patterns were read-write while
only hard-coded. Let's mark them const so that they move from data to
rodata and don't risk to experience false sharing.
In mux_pt_io_cb(), if a connection error or a shutdown is detected, the mux
is destroyed. Thus we must be careful to not use it in a trace message once
destroyed.
No backport needed. This patch should fix the issue #1220.
As for the other muxes, traces are now supported in the pt mux. All parts of
the multiplexer are covered by these traces. Events are split into
categories (connection, stream, rx and tx).
In traces, the first argument is always a connection. So it is easy to get
the mux context (conn->ctx). The second argument is always a conn-stream and
may be NULL. The third one is a buffer and it may also be NULL. Depending on
the context it is the request or the response. In all cases it is owned by a
channel. Finally, the fourth argument is an integer value. Its meaning
depends on the calling context.
This patch re-works configuration parsing, it removes the "server"
lines from "resolvers" sections introduced in commit 56fc5d9eb:
MEDIUM: resolvers: add supports of TCP nameservers in resolvers.
It also extends the nameserver lines to support stream server
addresses such as:
resolvers
nameserver localhost tcp@127.0.0.1:53
Doing so, a part of nameserver's init code was factorized in
function 'parse_resolvers' and removed from 'post_parse_resolvers'.
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
The fetch_and_xxx variant is often missing for add/sub/and/or. In fact
it was only provided for ADD under the name XADD which corresponds to
the x86 instruction name. But for destructive operations like AND and
OR it's missing even more as it's not possible to know the value before
modifying it.
This patch explicitly adds HA_ATOMIC_FETCH_{OR,AND,ADD,SUB} which
cover these standard operations, and renames XADD to FETCH_ADD (there
were only 6 call places).
In the future, backport of fixes involving such operations could simply
remap FETCH_ADD(x) to XADD(x), FETCH_SUB(x) to XADD(-x), and for the
OR/AND if needed, these could possibly be done using BTS/BTR.
It's worth noting that xchg could have been renamed to fetch_and_store()
but xchg already has well understood semantics and it wasn't needed to
go further.
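A hedged illustration of the naming (variable and flag values are made up):
    unsigned int prev;

    prev = HA_ATOMIC_FETCH_ADD(&counter, 1);  /* returns the value before the add */
    prev = HA_ATOMIC_FETCH_OR(&flags, 0x4);   /* returns the flags before the OR  */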
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
Slightly reorder the status flags to better match their order in the
"state" field, and also decode the "shut" state which is particularly
useful and already part of this field.
There is a function called fd_write_frag_line() that's essentially used
by loggers and that is used to write an atomic message line over a file
descriptor using writev(). However a lock is required around the writev()
call to prevent messages from multiple threads from being interleaved.
Till now a SPIN_TRYLOCK was used on a dedicated lock that was common to
all FDs. This is quite not pretty as if there are multiple output pipes
to collect logs, there will be quite some contention. Now that there
are empty flags left in the FD state and that we can finally use atomic
ops on them, let's add a flag to indicate the FD is locked for exclusive
access by a syscall. At least the locking will now be on an FD basis and
not the whole process, so we can remove the log_lock.
No need to keep this flag apart any more, let's merge it into the global
state. The bit was not cleared in fd_insert() because the only user is
the function used to create and atomically send a log message to a pipe
FD, which never registers the fd. Here we clear it nevertheless for the
sake of clarity.
Note that with an extra cleaning pass we could have a bit number
here and simply use a BTS to test and set it.
No need to keep this flag apart any more, let's merge it into the global
state. The CLI's output state was extended to 6 digits and the linger/cloned
flags moved inside the parenthesis.
For a long time we've had fdtab[].ev and fdtab[].state which contain two
arbitrary sets of information, one is mostly the configuration plus some
shutdown reports and the other one is the latest polling status report
which also contains some sticky error and shutdown reports.
These ones used to be stored into distinct chars, complicating certain
operations and not even allowing to clearly see concurrent accesses (e.g.
fd_delete_orphan() would set the state to zero while fd_insert() would
only set the event to zero).
This patch creates a single uint with the two sets in it, still delimited
at the byte level for better readability. The original FD_EV_* values
remained at the lowest bit levels as they are also known by their bit
value. The next step will consist in merging the remaining bits into it.
The whole bits are now cleared both in fd_insert() and _fd_delete_orphan()
because after a complete check, it is certain that in both cases these
functions are the only ones touching these areas. Indeed, for
_fd_delete_orphan(), the thread_mask has already been zeroed before a
poller can call fd_update_event() which would touch the state, so it
is certain that _fd_delete_orphan() is alone. Regarding fd_insert(),
only one thread will get an FD at any moment, and as this FD has
already been released by _fd_delete_orphan(), by definition it is certain
that previous users have definitely stopped touching it.
Strictly speaking there's no need for clearing the state again in
fd_insert() but it's cheap and will remove some doubts during some
troubleshooting sessions.
In preparation of merging FD_POLL* and FD_EV*, this only changes the
value of FD_POLL_* to use bits 8-15 (the second byte). The size of the
field has been temporarily extended to 32 bits already, as well as
the temporary variables that carry the new composite value inside
fd_update_events(). The resulting fdtab entry becomes temporarily
unaligned. All places making access to .ev or FD_POLL_* were carefully
inspected to make sure they were safe regarding this change. Only one
temporary update was needed for the "show fd" code. The code was only
slightly inflated at this step.
The regression was introduced by previous commit 94aab06:
MEDIUM: log: support tcp or stream addresses on log lines.
This previous patch tries to retrieve the used protocol by parsing
the address using the str2sa_range function but forgets that
the raw file descriptor addresses don't specify a protocol,
so str2sa_range reports an error.
This patch re-works the str2sa_range function to stop
reporting an error if an authorized RAW_FD address is parsed
while the caller also requests a protocol.
It also modifies the code of parse_logsrv to switch to stream
logservers only if a protocol was detected.
An explicit stream address prefix such as "tcp6@" "tcp4@"
"stream+ipv6@" "stream+ipv4@" or "stream+unix@" will
allocate an implicit ring buffer with a forward server
targeting the given address.
This is useful to simply send logs to a log server over TCP without
having to declare a ring section in the configuration.
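Example:
    global
        log tcp@127.0.0.1:514 local0 info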
This patch registers the parsed file and the line where a log server
is declared to make that information available in the configuration
post check.
This new information is added to the error messages reported when
resolving ring names during the post-configuration check.
Since the internal function str2sa_range is used to parse addresses
for different objects ('server', 'bind' but also 'log' or
'nameserver'), we notice that some combinations are missing.
"ip@" is introduced to authorize the prefixes "dgram+ip@" and
"stream+ip@" which automatically detect the IP version but
specify dgram or stream.
"tcp@" was introduced and is an alias for "stream+ip@".
"tcp6" and "tcp4" are now aliases for "stream+ipv6@" and
"stream+ipv4@".
"uxst@" and "uxdg@" are now aliases for "stream+unix@" and
"dgram+unix@".
This patch also adds a complete section in the documentation to
describe addresses and their prefixes.
Commit c20ad0d8db (BUG/MINOR: tools: make
parse_time_err() more strict on the timer validity) broke parsing the "us"
unit in timers. It caused `parse_time_err()` to return the string "s",
which indicates an error.
Now if the "u" is followed by an "s" we properly continue processing the
time instead of immediately failing.
This fixes #1209. It must be backported to all stable versions.
When a script retrieves request data from an HTTP applet, line per line or
not, we must be sure to properly detect the end of the request by checking
HTX_FL_EOM flag when everything was consumed. Otherwise, the script may
hang.
It is pretty easy to reproduce the bug by calling applet:receive() without
specifying any length. If the request is not chunked, the function never
returns.
The bug was introduced when the EOM block was removed. Thus, it is specific
to the 2.4. This patch should fix the issue #1207. No backport needed.
The HTTP_2.0 predefined macro returns true for HTTP/2 requests. HTTP/2
doesn't convey any version information, so this macro may seem a bit
strange. But for compatibility reasons, the "HTTP/2.0" version is set
internally. Thus, it is handy to rely on it to differentiate HTTP/1 and
HTTP/2 requests.
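For example, it may be used to apply a rule only to HTTP/2 requests
(backend name is illustrative):
    use_backend be_h2 if HTTP_2.0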
Some predefined ACLs were still based on deprecated sample fetches, like
req_proto_http or req_ver. Now, they use non-deprecated sample fetches. In
addition, the usage lines in the configuration manual have been updated to
be more explicit.
Both the source file and the dummy library are now at the same place.
Maybe the build howto could be moved there as well to make things even
cleaner.
The Makefile, MAINTAINERS, doc, and vtest matrix were updated.
Both the source file and the dummy library are now at the same place.
Maybe the build howto could be moved there as well to make things even
cleaner.
The Makefile, MAINTAINERS, doc, github build matrix, coverity checks
and travis CI's build were updated.
Now it's much cleaner, both 51d.c and the dummy library live together and
are easier to spot and maintain. The build howto probably ought to be moved
there as well. Makefile, docs and MAINTAINERS were updated, as well as
the github CI's build matrix, travis CI's, and coverity checks.
The following directories were moved from contrib/ to dev/ to make their
use case a bit clearer. In short, only developers are expected to ever
go there. The makefile was updated to build and clean from these ones.
base64/ flags/ hpack/ plug_qdisc/ poll/ tcploop/ trace/
Add a diagnostic to check that two servers of the same backend do not
use the same cookie value. Ignore backup servers as it is quite common
for them to share a cookie value with a primary one.
Define MODE_DIAG which is used to run haproxy in diagnostic mode. This
mode is used to output extra warnings about possible configuration
blunders or sub-optimal usage. It can be activated with argument '-dD'.
A new output function, ha_diag_warning, is implemented and reserved for
diagnostic output. It serves to standardize the format of diagnostic
messages.
A macro HA_DIAG_WARN_COND is also available to automatically check if
diagnostic mode is on before executing the diagnostic check.
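As an illustration, the macro may look roughly like this (simplified
sketch, the exact definition may differ):
    #define HA_DIAG_WARN_COND(cond, ...)                       \
            do {                                               \
                    if ((global.mode & MODE_DIAG) && (cond))   \
                            ha_diag_warning(__VA_ARGS__);      \
            } while (0)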
In issue #1200 Coverity believes we may use an uninitialized field
smp.sess here while it's not possible because the returned variable
necessarily matches SCOPE_PROC hence smp.sess is not used. But it
cannot see this and it could be confusing if the code later evolved
into something more complex. That's not a critical path so let's
first reset the sample.
A bug was introduced when the legacy HTTP mode was removed. To capture the
HTTP version of the request or the response, we rely on the message state to
be sure the status line was received. However, the test is inverted. The
version can be captured if message headers were received, not the opposite.
This patch must be backported as far as 2.2.
Historically, an option was added to wait for the request payload (option
http-buffer-request). This option has 2 drawbacks. First, it is an ON/OFF
option for the whole proxy. It cannot be enabled on demand depending on the
message. Then, as its name suggests, it only works on the request side. The
only way to wait for the response payload was to write a dedicated
filter. While it is an acceptable solution for complex applications, it is a
bit overkill to simply match strings in the body.
To make everyone happy, this patch adds a dedicated HTTP action to wait for
the message payload, for the request or the response depending on whether it
is used in an http-request or an http-response ruleset. The time to wait is
configurable and, optionally, so is the minimum payload size to receive
before the wait stops.
Both the http action and the old http analyzer rely on the same internal
function.
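For illustration, such a rule may look like this (keyword and argument
names are indicative only, see the configuration manual for the exact
syntax):
    http-request wait-for-body time 1s at-least 1k if METH_POST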
L6 sample fetches are now ignored when called from an HTTP proxy. Thus, a
warning is emitted during startup if such usage is detected. This is done
for most ACLs and for log-format strings. Unfortunately, it is a bit painful
to do so for sample expressions.
This patch relies on the commit "MINOR: action: Use a generic function to
check validity of an action rule list".
The check_action_rules() function is now used to check the validity of an
action rule list. It is used from check_config_validity() function to check
L5/6/7 rulesets.
It is not really a context-less sample fetch, but it is internal. And it
only fails if no stream is attached to the sample. This way, it is still
possible to use it on an HTTP proxy (L6 sample fetches are ignored now for
HTTP proxies).
If the commit "BUG/MINOR: payload/htx: Ingore L6 sample fetches for HTX
streams/checks" is backported, it may be a good idea to backport this one
too. But only as far as 2.2.
Using an L6 sample fetch on an HTX stream or an HTX health-check is
meaningless because data are not raw but structured. So now, these sample
fetches fail when called from an HTTP proxy. In addition, a warning has been
added to the configuration manual, at the beginning of the L6 sample fetches
section.
Note that the req.len and res.len samples return the HTX data size instead of
failing. It is not accurate because it reflects neither the buffer size nor
the raw data length. But we keep it for backward compatibility purposes.
However it remains a bit strange to use it on an HTTP proxy.
This patch may be backported to all versions supporting the HTX, i.e. as far
as 2.0. But the part about the health-checks is only valid for 2.2 and
above.
If a 'switch-mode http' tcp action is configured on a listener with no
backend, a warning is displayed as a reminder that HTTP connections cannot
be routed to TCP servers. Indeed, the backend connection is still
established using the proxy mode.
It is now possible to perform HTTP upgrades on a TCP stream from the
frontend side. To do so, a tcp-request content rule must be defined with the
switch-mode action, specifying the mode (for now, only http is supported)
and optionally the proto (h1 or h2).
This way it is possible to set HTTP directives on a TCP frontend which
will only be evaluated if an upgrade is performed. This new way of
performing HTTP upgrades should progressively replace the old one, which
consists in routing the request to an HTTP backend. It should also be a
good start to remove all HTTP processing from tcp-request content rules.
This action is terminal, it stops the ruleset evaluation. It is only
available on proxy with the frontend capability.
The configuration manual has been updated accordingly.
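For example (rule and names are illustrative):
    frontend fe_tcp
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content switch-mode http proto h1 if { req.len gt 0 }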
The code responsible to perform an HTTP upgrade from a TCP stream is moved
in a dedicated function, stream_set_http_mode().
The stream_set_backend() function is slightly updated, especially to
correctly set the request analysers.
Now allocation and initialization of HTTP transactions are performed in a
unique function. Historically, there were two functions because the same TXN
was reset for K/A connections in the legacy HTTP mode. Now, in HTX, K/A
connections are handled at the mux level. A new stream, and thus a new TXN,
is created for each request. In addition, the function responsible for
ending the TXN is now also responsible for releasing it.
So, now, http_create_txn() and http_destroy_txn() must be used to create and
destroy an HTTP transaction.
It is just a small cleanup. AN_REQ_FLT_HTTP_HDRS and AN_RES_FLT_HTTP_HDRS
analysers are now set in HTTP analysers at the same place
AN_REQ_HTTP_XFER_BODY and AN_RES_HTTP_XFER_BODY are set.
We now use the stream instead of the proxy to know if we are processing HTTP
data or not. If the stream is an HTX stream, it means we are dealing with
HTTP data. It is more accurate than the proxy mode because when an HTTP
upgrade is performed, the proxy is not changed and only the stream may be
used.
Note that it was not a problem to rely on the proxy because HTTP upgrades
may only happen when an HTTP backend was set. But, we will add the support
of HTTP upgrades on the frontend side, after the tcp-request rules
evaluation. In this context, we cannot rely on the proxy mode.
Add "none" in the list of supported mux protocols. It relies on the
passthrough multiplexer and use almost the same mux_ops structure. Only the
flags differ because this "new" mux does not support the upgrades. "none"
was chosen to explicitly stated there is not processing at the mux level.
Thus it is now possible to set "proto none" or "check-proto none" on
bind/server lines, depending on the context. However, when set, no upgrade
to HTTP is performed. It may be a way to disable HTTP upgrades per bind
line.
Add "h1" in the list of supported mux protocols. It relies on the H1
multiplexer and use the almost the same mux_ops structure. Only the flags
differ because this "new" mux does not support the upgrades.
Thus it is now possible to set "proto h1" or "check-proto h1" on bind/server
lines, depending on the context. However, when set, no upgrade to HTTP/2 is
performed. It may be a way to disable implicit HTTP/2 upgrades per bind
line.
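For example (illustrative):
    listen l1
        bind :8080 proto h1              # no implicit upgrade to HTTP/2 here
        server s1 192.168.0.10:8080 check check-proto h1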
MX_FL_NO_UPG flag may now be set on a multiplexer to explicitly disable
upgrades from this mux. For now, it is set on the FCGI multiplexer because
it is not supported and there is no upgrade on backend-only multiplexers. It
is also set on the H2 multiplexer because it is clearly not supported.
No warning is emitted if some http-after-response rules are configured on a
TCP proxy, while such warning messages are emitted for other HTTP rulesets
in the same condition. It is just an oversight.
This patch may be backported as far as 2.2.
When a TCP stream is first upgraded to H1 and then to H2, we must be sure to
inhibit any connect and to properly handle the TCP stream destruction.
When the TCP stream is upgraded to H1, the HTTP analysers are set. Thus
http_wait_for_request() is called. In this case, the server connection must
be blocked, waiting for the request analysis. Otherwise, a server may be
assigned to the stream too early. It is especially a problem if the stream
is finally destroyed because of an implicit upgrade to H2.
In this case, the stream processing must be properly aborted to not have a
stalled stream. Thus, if a shutdown is detected in http_wait_for_request()
when an HTTP upgrade is performed, the stream is aborted.
It is a 2.4-specific bug. No backport is needed.
Always set frontend HTTP analysers when an HTX stream is created. It is only
useful in case of a destructive HTTP upgrade (TCP>H2), because the frontend
is a TCP proxy.
In fact, to be strict, we must only set these analysers when the upgrade is
performed before setting the backend (it is not supported yet, but this
patch is required to do so), in the frontend part. If the upgrade happens
when the backend is set, it means the HTTP processing is just the backend's
business. But there is no way to make the difference when a stream is
created, at least for now.
When an HTX stream is created, be sure to always create the HTTP txn object,
regardless of the ".http_needed" value of the frontend. That happens when a
destructive HTTP upgrades is performed (TCP>H2). The frontend is a TCP
proxy. If there is no dependency on the HTTP part, the HTTP transaction is
not created at this stage but only when the backend is set. For now, it is
not a problem. But an HTTP txn will be mandatory to fully support TCP to
HTTP upgrades after frontend tcp-request rules evaluation.
When a TCP stream is upgraded to H2 stream, a destructive upgrade is
performed. It means the TCP stream is silently released while a new one is
created. It is of course more complicated but it is what we observe from the
stream point of view.
That was performed by returning an error when the backend was set. It is
neither really elegant nor accurate. So now, instead of returning an error
from stream_set_backend() in case of destructive HTTP upgrades, the TCP
stream processing is aborted and no error is reported. However, the result
is more or less the same.
sess_log() was called twice if an error occurred on the preface parsing, in
h2c_frt_recv_preface() and in h2_process_demux().
This patch must be backported as far as 2.0.
The fix in commit 7b0e00d94 ("BUG/MINOR: http_fetch: make hdr_ip() reject
trailing characters") made hdr_ip() more sensitive to empty fields, for
example if a trusted proxy incorrectly sends the header with an empty
value, we could return 0.0.0.0 which is not correct. Let's make sure we
only assign an IPv4 type here when a non-empty address was found.
This should be backported to all branches where the fix above was
backported.
Historically we've used SOL_IP/SOL_IPV6/SOL_TCP everywhere as the socket
level value in getsockopt() and setsockopt() but as we've seen over time
it regularly broke the build and required to have them defined to their
IPPROTO_* equivalent. The Linux ip(7) man page says:
Using the SOL_IP socket options level isn't portable; BSD-based
stacks use the IPPROTO_IP level.
And it indeed looks like a pure linuxism inherited from old examples and
documentation. strace also reports SOL_* instead of IPPROTO_*, which does
not help... A check of linux/in.h shows they have the same values. Only
SOL_SOCKET and other non-IP values make sense since there is no IPPROTO
equivalent.
Let's get rid of this annoying confusion by removing all redefinitions of
SOL_IP/IPV6/TCP and using IPPROTO_* instead, just like any other operating
system. This also removes duplicated tests for the same value.
Note that this should not result in exposing syscalls to other OSes
as the only ones that were still conditioned to SOL_IPV6 were for
IPV6_UNICAST_HOPS which already had an IPPROTO_IPV6 equivalent, and
IPV6_TRANSPARENT which is Linux-specific.
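For instance, the portable forms of such calls now look like this:
    /* was: setsockopt(fd, SOL_IP, IP_TTL, &ttl, sizeof(ttl)); */
    setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
    /* was: setsockopt(fd, SOL_IPV6, IPV6_UNICAST_HOPS, &hops, sizeof(hops)); */
    setsockopt(fd, IPPROTO_IPV6, IPV6_UNICAST_HOPS, &hops, sizeof(hops));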
Lukas reported in issue #1203 that the previous fix for silent-drop in
commit ab79ee8b1 ("BUG/MINOR: tcp: fix silent-drop workaround for IPv6")
breaks the build on FreeBSD/MacOS due to SOL_IPV6 not being defined. On
these platforms, IPPROTO_IPV6 must be used instead, so this should fix
it.
This needs to be backported to whatever version the fix above is backported
to.
As reported in github issue #1203 the TTL-based workaround that is used
when permissions are insufficient for the TCP_REPAIR trick does not work
for IPv6 because we're using only SOL_IP with IP_TTL. In IPv6 we have to
use SOL_IPV6 and IPV6_UNICAST_HOPS. Let's pick the right one based on the
source address's family.
This may be backported to all versions.
The issue with non-rotating freq counters was addressed in commit 8cc586c73
("BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable") using the
global date. But an issue remained with the comparison of the most recent
time. Since the initial time in the structure is zero, the tick_is_lt()
check only works on half of the periods, depending on the first date at
which an entry is touched. And the wrapping happened last night:
$ date --date=@$(((($(date +%s) * 1000) & -0x8000000) / 1000))
Mon Mar 29 23:59:46 CEST 2021
So users of the last fix (backported to 2.3.8) may experience again an
always increasing rate for the next 24 days if they restart their process.
Let's always update the time if the latest date was not updated yet. It
will likely be simplified once the function is reorganized but this will
do the job for now.
Note that since this timer is only used by freq counters, no other
sub-system is affected. The bug can easily be tested with this config
during the right time period (i.e. today to today+24 days + N*49.7 days):
    global
        stats socket /tmp/sock1
    frontend web
        bind :8080
        mode http
        http-request track-sc0 src
        stick-table type ip size 1m expire 1h store http_req_rate(2s)
Issuing 'socat - /tmp/sock1 <<< "show table web"' should show a stable
rate after 2 seconds.
The fix must be backported to 2.3 and any other version the fix above
goes into.
Thanks to Thomas SIMON and Sander Klein for quickly reporting this issue
with a working reproducer.
When a backend is in status DOWN and going UP it is currently displayed
as yellow ("active UP, going down") instead of orange ("active DOWN, going
UP"). This patches restyles the table rows to actually match the
legend.
This may be backported to any version, the issue appeared in 1.7-dev2
with commit 0c378efe8 ("MEDIUM: stats: compute the color code only in
the HTML form").
In payload() and payload_lv() sample fetches, if the buffer is empty, we
must wait for more data by setting SMP_F_MAY_CHANGE flag on the sample.
Otherwise, when it happens in an ACL, nothing is returned (because the
buffer is empty) and the ACL is considered as finished (success or failure
depending on the test).
As a workaround, the buffer length may be tested first. For instance:
    tcp-request inspect-delay 1s
    tcp-request content reject unless { req.len gt 0 } { req.payload(0,0),fix_is_valid }
instead of:
    tcp-request inspect-delay 1s
    tcp-request content reject if ! { req.payload(0,0),fix_is_valid }
This patch must be backported as far as 2.2.
Commit b1adf03df ("MEDIUM: backend: use a trylock when trying to grab an
idle connection") solved a contention issue on the backend under normal
condition, but there is another one further, which only happens when the
number of FDs in use is considered too high, and which obviously causes
random crashes with just 16 threads once the number of FDs is about to be
exhausted.
Like the aforementioned patch, this one should be backported to 2.3.
set var <name> <expression>
Allows to set or overwrite the process-wide variable 'name' with the result
of expression <expression>. Only process-wide variables may be used, so the
name must begin with 'proc.' otherwise no variable will be set. The
<expression> may only involve "internal" sample fetch keywords and converters
even though the most likely useful ones will be str('something') or int().
Note that the command line parser doesn't know about quotes, so any space in
the expression must be preceded by a backslash. This command requires levels
"operator" or "admin". This command is only supported on a CLI connection
running in experimental mode (see "experimental-mode on").
Just like for "set-var" in the global section, the command uses a temporary
dummy proxy to create a temporary "set-var(name)" rule to assign the value.
The reg test was updated to verify that an updated global variable is properly
reflected in subsequent HTTP responses.
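Example (variable name and value are illustrative):
    $ socat - /tmp/sock1 <<< "experimental-mode on; set var proc.current_state str(primary)"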
Process-wide variables can now be displayed from the CLI using "get var"
followed by the variable name. They must all start with "proc." otherwise
they will not be found. The output is very similar to the one of the
debug converter, with a type and value being reported for the embedded
sample.
This command is limited to clients with the level "operator" or higher,
since it can possibly expose traffic-related data.
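Example (variable name is illustrative):
    $ socat - /tmp/sock1 <<< "get var proc.current_state"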
In order to process samples from the command line interface we'll need
rules as well, and these rules will have to be marked as coming from
the CLI parser. This new origin is used for this.
In order to prepare for supporting calling sample expressions from the
CLI, let's create a new CLI_PARSER parsing context. This one supports
constants and internal samples only.
While we do support process-wide variables ("proc.<name>"), there was
no way to preset them from the configuration. This was particularly
limiting their usefulness since configs involving them always had to
first check if the variable was set prior to performing an operation.
This patch adds a new "set-var" directive in the global section that
supports setting the proc.<name> variables from an expression, like
other set-var actions do. The syntax however follows what is already
being done for setenv, which consists in having one argument for the
variable name and another one for the expression.
Only "constant" expressions are allowed here, such as "int", "str"
etc, combined with arithmetic or string converters, and variable
lookups. A few extra sample fetch keywords like "date", "rand" and
"uuid" are also part of the constant expressions and may make sense
to allow to create a random key or differentiate processes.
The way it was done consists in parsing a dummy rule and executing the
expression in the CFG_PARSE context, then releasing the expression.
This is safe because the sample that variables store does not hold a
back pointer to the expression that created it.
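For example (names and expressions are illustrative):
    global
        set-var proc.boot_id  uuid()
        set-var proc.retries  int(3)
        set-var proc.domain   str(example.com)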
In order to process samples from the config file we'll need rules as
well, and these rules will have to be marked as coming from the
config parser. This new origin is used for this.
We'd sometimes like to be able to process samples while parsing
the configuration based on purely internal thing but that's not
possible right now. Let's add a new CFG_PARSER context for samples
which only permits constant samples (i.e. those which do not change
in the process' life and which are stable during config parsing).
A number of keywords are really constant and safe to use at config
time. This is the case for str(), int() etc but also env(), hostname(),
nbproc() etc. By extension a few other ones which can be useful to
preset values in a configuration were enabled as well, like date(),
rand() or uuid(). At the moment this doesn't change anything as they
are still only usable from runtime rules.
The "var()" keyword was also marked as const as it can definitely
return stable stuff at boot time.
This level indicates that everything is constant in the expression during
the whole process' life and that it may safely be used at config parsing
time.
For now smp_resolve_args() complains on stderr via ha_alert(), but if we
want to make it a bit more dynamic, we need it to return errors in an
allocated message. Let's pass it an error pointer and have it fill it.
On return we indent the output if it contains more than one line.
The "stopping" sample fetch keyword was accidently duplicated in 1.9
by commit 70fe94419 ("MINOR: sample: add cpu_calls, cpu_ns_avg,
cpu_ns_tot, lat_ns_avg, lat_ns_tot"). This has no effect so no
backport is needed.
This sample fetch doesn't require any L4 client session in practice, as
get_var() now checks for the session. This is important to remove this
dependency in order to support accessing variables in scope "proc" from
anywhere.
Instantiate both lua Socket servers (tcp/ssl) using the standard function
new_server. There is currently no need to tune their settings except to
activate the ssl mode with noverify for the second one. Both servers are
freed with the free_server function.
Replace static initialization of the lua Socket proxy with the standard
function alloc_new_proxy. The proxy settings are properly applied thanks
to PR_CAP_LUA. The proxy is freed with the free_proxy function.
Define a new cap PR_CAP_LUA. It can be used to allocate the internal
proxy for lua Socket class. This cap overrides default settings for
preferable values in the lua context.
Move all the code responsible for releasing a proxy into a dedicated
function, free_proxy, in proxy.c. For now, this function is only called in
haproxy.c. In the future, it will be used to free the lua proxy.
This helps to clean up haproxy.c.
Create a new function parse_new_proxy specifically designed to allocate
a new proxy from the configuration file and copy settings from the
default proxy.
The function alloc_new_proxy is reduced to a minimal allocation. It is
used for default proxy allocation and could also be used for internal
proxies such as the lua Socket proxy.
Move deinit_acl_cond and deinit_act_rules from haproxy.c to acl.c and
action.c respectively. The name of the functions has been slightly altered,
replacing the prefix deinit_* by free_* to reflect their purpose more
clearly.
This change has been made in preparation for the implementation of a free
proxy function. As a side-effect, it helps to clean up haproxy.c.
Create a new module init which contains code related to REGISTER_*
macros for initcalls. init.h is included in api.h to make init code
available to all modules.
It's a step towards cleaning up haproxy.c/global.h a bit.
If the first active line of a crt-list file is also the first mentioned
certificate of a frontend that does not have the strict-sni option
enabled, then its certificate will be used as the default one. We then
do not want this instance to be removable since it would make a frontend
lose its default certificate.
Considering that a crt-list file can be used by multiple frontends, and
that its first mentioned certificate can be used as default certificate
for only a subset of those frontends, we do not want the line to be
removable for some frontends and not the others. So if any of the ckch
instances corresponding to a crt-list line is a default instance, the
removal of the crt-list line will be forbidden.
It can be backported as far as 2.2.
The default SSL_CTX used by a specific frontend is the one of the first
ckch instance created for this frontend. If this instance has SNIs, then
the SSL context is linked to the instance through the list of SNIs
contained in it. If the instance does not have any SNIs though, then the
SSL_CTX is only referenced by the bind_conf structure and the instance
itself has no link to it.
When trying to update a certificate used by the default instance through
a cli command, a new version of the default instance was rebuilt but the
default SSL context referenced in the bind_conf structure would not be
changed, resulting in a buggy behavior in which, depending on the SNI used
by the client, it could either get the new version of the updated
certificate or the original one.
This patch adds a reference to the default SSL context in the default
ckch instances so that it can be hot swapped during a certificate
update.
This should fix GitHub issue #1143.
It can be backported as far as 2.2.
In issue #1197, Stéphane Graber reported a rare case of crash that
results from an attempt to close an already closed H1 connection. It
indeed looks like under some circumstances it should be possible to
call the h1_shutw_conn() function more than once, though these
conditions are not very clear.
Without going through a deep analysis of all possibilities, one
potential case seems to be a detach() called with pending output data,
causing H1C_F_ST_SHUTDOWN to be set on the connection, then h1_process()
being immediately called on I/O, causing h1_send() to flush these data
and call h1_shutw_conn(), and finally the upper stream calling cs_shutw()
hence h1_shutw(), which itself will call h1_shutw_conn() again while the
transport and control layers have already been released. But the whole
sequence is not certain as it's not very clear in which case it's
possible to leave h1_send() without the connection anymore (at least
the obuf is empty).
However what is certain is that a shutdown function must be idempotent,
so let's fix h1_shutw_conn() regarding this point. Stéphane reported the
issue as far back as 2.0, so this patch should be backported this far.
The hdr_ip() sample fetch function will try to extract IP addresses
from a header field. These IP addresses are parsed using url2ipv4()
and if that fails it will fall back to inet_pton(AF_INET6); if that
fails too, nothing is returned.
There is a small problem there which is that if a field starts with
an IP address and is immediately followed by some garbage, the IP
address part is still returned. This is a problem with fields such
as x-forwarded-for because it prevents detection of accidental
corruption or bug along the chain. For example, the following string:
x-forwarded-for: 1.2.3.4; 5.6.7.8
or this one:
x-forwarded-for: 1.2.3.4O    (the last character being the letter 'O')
would still return "1.2.3.4" despite the trailing characters. This is
bad because it will silently cover broken code running on intermediary
proxies and may even in some cases allow haproxy to pass improperly
formatted headers after they were apparently validated, for example,
if someone extracts the address from this field to place it into
another one.
This issue would only affect the IPv4 parser, because the IPv6 parser
already uses inet_pton() which fails at the first invalid character and
rejects trailing port numbers.
In strict compliance with RFC7239, let's make sure that if there are any
characters left in the string, the parsing fails and makes hdr_ip()
return nothing. However, a special case has to be handled to support
IPv4 addresses followed by a colon and a valid port number, because till
now the parser used to implicitly accept them and it appears that this
practice, though rare, does exist at least in Azure:
https://docs.microsoft.com/en-us/azure/application-gateway/how-application-gateway-works
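For illustration, a typical use of this fetch is (the header name is only
an example):
    http-request set-src req.hdr_ip(x-forwarded-for)
With this fix, a value such as "1.2.3.4; 5.6.7.8" no longer silently
yields 1.2.3.4.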
This issue has always been there so the fix may be backported to all
versions. It will need the following commit in order to work as expected:
MINOR: tools: make url2ipv4 return the exact number of bytes parsed
Many thanks to https://twitter.com/melardev and the BitMEX Security Team
for their detailed report.
The function's return value is currently used as a boolean but we'll
need it to return the number of bytes parsed. Right now it returns
it minus one, unless the last char doesn't match what is permitted.
Let's update this to make it more usable.
If an isolated thread is marked as harmless, it will loop forever in
thread_harmless_till_end(), waiting until no thread is isolated anymore.
That never happens because the current thread is isolated. To fix the bug,
we exclude the current thread from the test. We now wait for all other
threads to leave the rendez-vous point.
This bug only seems to occur if HAProxy is compiled with DEBUG_UAF, when
pool_gc() is called. pool_gc() isolates the current thread, while
pool_free_area() sets the thread as harmless when munmap is called.
This patch must be backported as far as 2.0.
Release the lock before calling mux destroy in connect_server when
trying to kill an idle connection because the pool high count has been
reached.
The lock must be released because the mux destroy will call
srv_release_conn which also takes the lock to remove the connection from
the tree. As the connection was already deleted from the tree at this
stage, it is safe to release the lock, and the removal in
srv_release_conn will be a noop.
It does not need to be backported because it is only present in the
current release. It has been introduced by commit 5c7086f6b0
("MEDIUM: connection: protect idle conn lists with locks").
In fd_delete(), if we're running with no double-width cas, take the
fd_mig_lock before setting thread_mask to 0 to make sure that
another thread calling fd_set_running() won't miss the new value of
thread_mask and set its bit in running_mask after we checked it.
This should be backported to 2.2 as part of the series fixing fd_delete().
Christopher discovered an issue mostly affecting 2.2 and to a less extent
2.3 and above, which is that it's possible to deadlock a soft-stop when
several threads are using a same listener:
      thread1                          thread2
  unbind_listener()                fd_set_running()
    lock(listener)                 listener_accept()
    fd_delete()                      lock(listener)
      while (running_mask);  -----> deadlock
    unlock(listener)
This simple case disappeared from 2.3 due to the removal of some locked
operations at the end of listener_accept() on the regular path, but the
architectural problem is still here and caused by a lock inversion built
around the loop on running_mask in fd_clr_running_excl(), because there
are situations where the caller of fd_delete() may hold a lock that is
preventing other threads from dropping their bit in running_mask.
The real need here is to make sure the last user deletes the FD. We have
all we need to know the last one, it's the one calling fd_clr_running()
last, or entering fd_delete() last, both of which can be summed up as
the last one calling fd_clr_running() if fd_delete() calls fd_clr_running()
at the end. And we can prevent new threads from appearing in running_mask
by removing their bits in thread_mask.
So what this patch does is that it sets the running_mask for the thread
in fd_delete(), clears the thread_mask, thus marking the FD as orphaned,
then clears the running mask again, and completes the deletion if it was
the last one. If it was not, another thread will pass through fd_clr_running
and will complete the deletion of the FD.
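In other words, the deletion path now roughly follows this pattern
(simplified sketch, not the exact code):
    /* in fd_delete() */
    HA_ATOMIC_OR(&fdtab[fd].running_mask, tid_bit);  /* count ourselves as a user */
    HA_ATOMIC_STORE(&fdtab[fd].thread_mask, 0);      /* orphan the FD: no new users */
    if (fd_clr_running(fd) == 0)                     /* we were the last user */
            _fd_delete_orphan(fd);                   /* so we complete the deletion */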
The bug is easily reproducible in 2.2 under high connection rates during
soft close. When the old process stops its listener, occasionally two
threads will deadlock and the old process will then be killed by the
watchdog. It's strongly believed that similar situations do exist in 2.3
and 2.4 (e.g. if the removal attempt happens during resume_listener()
called from listener_accept()) but if so, they should be much harder to
trigger.
This should be backported to 2.2 as the issue appeared with the FD
migration. It requires previous patches "fd: make fd_clr_running() return
the remaining running mask" and "MINOR: fd: remove the unneeded running
bit from fd_insert()".
Notes for backport: in 2.2, the fd_dodelete() function requires an extra
argument "do_close" indicating whether we want to remove and close the FD
(fd_delete) or just delete it (fd_remove). While this information is not
conveyed along the chain, we know that late calls always imply do_close=1
because do_close=0 exclusively results from fd_remove() which is only used
by the config parser and the master, both of which are single-threaded,
hence are always the last ones in the running_mask. Thus it is safe to
assume that a postponed FD deletion always implies do_close=1.
Thanks to Olivier for his help in designing this optimal solution.
When a lua context is allocated, its stack must be initialized to NULL
before attaching it to its owner (task, stream or applet). Otherwise, if
the watchdog is fired before the stack is really created, that may lead to a
segfault because we try to dump the traceback of an uninitialized lua stack.
It is easy to trigger this bug if a lua script does a blocking call while
another thread tries to initialize a new lua context. Because of the global
lua lock, the init is blocked before the stack creation. Of course, it only
happens if the script is executed in the shared global context.
This patch must be backported as far as 2.0.
This commit reverts the following commits:
* 83926a04 BUG/MEDIUM: debug/lua: Don't dump the lua stack if not dumpable
* a61789a1 MEDIUM: lua: Use a per-thread counter to track some non-reentrant parts of lua
Instead of relying on a Lua function to print the lua traceback into the
debugger, we are now using our own internal function (hlua_traceback()).
This one does not allocate memory and uses a chunk instead. This avoids any
issue with a possible deadlock in the memory allocator because the thread
processing was interrupted during a memory allocation.
This patch relies on the commit "BUG/MEDIUM: debug/lua: Use internal hlua
function to dump the lua traceback". Both must be backported wherever the
patches above are backported, thus as far as 2.0.
The separator string is now configurable, passed as a parameter when the
function is called. In addition, the message has been slightly changed to
be a bit more readable.
If an unknown CA file was first mentioned in an "add ssl crt-list" CLI
command, it would result in a call to X509_STORE_load_locations which
performs a disk access which is forbidden during runtime. The same would
happen if a "ca-verify-file" or "crl-file" was specified. This was due
to the fact that the crt-list file parsing and the crt-list related CLI
commands parsing use the same functions.
The patch simply adds a new parameter to all the ssl_bind parsing
functions so that they know if the call is made during init or by the
CLI, and the ssl_store_load_locations function can then reject any new
cafile_entry creation coming from a CLI call.
It can be backported as far as 2.2.
Previous commit 69ba35146 ("MINOR: tools: introduce new option
PA_O_DEFAULT_DGRAM on str2sa_range.") managed to introduce a
parenthesis imbalance that broke the build. No backport is needed.
The str2sa_range function options PA_O_DGRAM and PA_O_STREAM are used to
define the supported address types but also to set the default type
if it is not explicit. If the parsed address supports both STREAM and DGRAM,
the default was always set to STREAM.
This patch introduces a new option PA_O_DEFAULT_DGRAM to force the
default to DGRAM type if it is not explicit in the address field
and both STREAM and DGRAM are supported. If only DGRAM or only STREAM
is supported, it continues to be considered as the default.
In commit a1ecbca0a ("BUG/MINOR: freq_ctr/threads: make use of the last
updated global time"), for period-based counters, the millisecond part
of the global_now variable was used as the date for the new period. But
it's wrong, it only works with sub-second periods as it wraps every
second, and for other periods the counters never rotate anymore.
Let's make use of the newly introduced global_now_ms variable instead,
which contains the global monotonic time expressed in milliseconds.
This patch needs to be backported wherever the patch above is backported.
It depends on previous commit "MINOR: time: also provide a global,
monotonic global_now_ms timer".
The period-based freq counters need the global date in milliseconds,
so better calculate it and expose it rather than letting all call
places incorrectly retrieve it.
Here what we do is that we maintain a new globally monotonic timer,
global_now_ms, which ought to be very close to the global_now one,
but maintains the monotonic approach of now_ms between all threads
in that global_now_ms is always ahead of any now_ms.
This patch is made simple to ease backporting (it will be needed for
a subsequent fix), but it also opens the way to some simplifications
on the time handling: instead of computing the local time and trying
to force it to the global one, we should soon be able to proceed in
the opposite way, that is computing the new global time and making the
local one just the latest snapshot of it. This will bring the benefit
of making sure that the global time is always ahead of the local one.
The function's purpose used to be to fail a buffer allocation if that
allocation wouldn't result in leaving some buffers available. Thus,
some allocations could succeed and others fail for the sole purpose of
trying to provide 2 buffers at once to process_stream(). But things
have changed a lot with 1.7 breaking the promise that process_stream()
would always succeed with only two buffers, and later the thread-local
pool caches that keep certain buffers available that are not accounted
for in the global pool so that local allocators cannot guess anything
from the number of currently available pools.
Let's just replace all last uses of b_alloc_margin() with b_alloc() once
for all.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any benefit and will hurt the ability
to debug.
It would be desirable to backport this, although it does not cause any
user-visible bug, it just complicates debugging.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any benefit and will hurt the ability
to debug.
It would be desirable to backport this, although it does not cause any
user-visible bug, it just complicates debugging.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any benefit and will hurt the ability
to debug.
It would be desirable to backport this, although it does not cause any
user-visible bug, it just complicates debugging.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any real benefit, it only avoids the
area being poisoned before being zeroed. Ideally a pool_calloc() function
should be provided for this.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any benefit and will hurt the ability
to debug.
It would be desirable to backport this, although it does not cause any
user-visible bug, it just complicates debugging.
This fixes a gcc warning about a missing const on defproxy for
mem_parse_global_fail_alloc.
This is needed since commit 018251667e ("CLEANUP: config: make the
cfg_keyword parsers take a const for the defproxy").
When we try to dump the stack of a lua context, if it is not dumpable,
nothing is performed and a message is emitted instead. This happens when a
lua execution was interrupted inside a non-reentrant part.
This patch depends on following commit :
* MEDIUM: lua: Use a per-thread counter to track some non-reentrant parts of lua
Thanks to this patch, we avoid a possible deadlock if lua is
interrupted by the watchdog in the lua memory allocator, because realloc()
is not async-signal-safe.
Both patches must be backported as far as 2.0.
Some parts of the Lua are non-reentrant. We must be sure to carefully track
these parts so as not to dump the lua stack when it is interrupted inside
such parts. For now, we only identified the custom lua allocator. If the
thread is interrupted during the memory allocation, we must not try to print
the lua stack, which also allocates memory. Indeed, realloc() is not
async-signal-safe.
In this patch we introduce a thread-local counter. It is incremented before
entering a non-reentrant part and decremented when exiting. It is only
performed in hlua_alloc() for now.
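For example, the allocator wrapper roughly follows this pattern (simplified
sketch omitting the rest of the allocator's logic):
    static THREAD_LOCAL unsigned int hlua_not_dumpable;

    static void *hlua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
            void *ret;

            hlua_not_dumpable++;         /* entering a non-reentrant part */
            ret = realloc(ptr, nsize);   /* realloc() is not async-signal-safe */
            hlua_not_dumpable--;         /* leaving it */
            return ret;
    }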
It was misspelled (expect-netscaler-ip instead of expect-netscaler-cip). 2
commits are concerned:
* db67b0ed7 MINOR: tcp-rules: suggest approaching action names on mismatch
* 72d012fbd CLEANUP: tcp-rules: add missing actions in the tcp-request error message
The first one will not be backported, but the second one was backported as
far as 1.8. Thus this one may also be backported, but only the 2nd part
about the list of accepted keywords.
Now that connections aren't being reused when they failed, remove the
reset() method. It was not implemented anywhere, except for H1 where it did
nothing anyway.
Add a start() method to ssl_sock. It is responsible for initiating the
SSL handshake, currently by just scheduling the tasklet, instead of doing
it in the init() method, when all the XPRT may not have been initialized.
Add a start() method to xprt_handshake. It schedules the tasklet that does
the handshake. This used to be done in xprt_handshake_add_xprt(), but that's
a much better place.
Introduce a new XPRT method, start(). The init() method will now only
initialize whatever is needed for the XPRT to run, but any action the XPRT
has to do before being ready, such as handshakes, will be done in the new
start() method. That way, we will be sure the full stack of xprt will be
initialized before attempting to do anything.
The init() call is also moved to conn_prepare(). There's no longer any reason
to wait for the ctrl to be ready, any action will be deferred until start(),
anyway. This means conn_xprt_init() is no longer needed.
The proto "uxdg" (UNIX DGRAM) was not declared, causing an error trying
to put a socket unix on "dgram-bind" into a log-forward section.
This patch introduces the missing "uxdg" protocol by adding proto_uxdg.c
which was fully created based on the code available for the other
protocols.
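For example, this kind of section (paths, addresses and the exact prefix
are illustrative) can now bind properly:
    log-forward fwd
        dgram-bind unix@/var/run/haproxy/syslog.sock
        log 127.0.0.1:514 local0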
This patch should be backported to version 2.3 and above.
Allow to specify the mux proto for a dynamic server. It must be
compatible with the backend mode to be accepted. The reg-tests have been
extended for this error case.
Enable a subset of server options to be used as keywords on the CLI
command 'add server'. These options are safe and can be applied
flawlessly for a dynamic server.
Add a new cli command 'add server'. This command is used to create a new
server at runtime, attached to an existing backend. The syntax is the
following:
$ add server <be_name>/<sv_name> [<kws>...]
This command is only available through experimental mode for the moment.
Currently, no server keywords are supported. They will be activated
individually when deemed properly functional and safe.
Another limitation is put on the backend load-balancing algorithm. The
algorithm must use consistent hashing to guarantee a minimal
reallocation of existing connections on the new server insertion.
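Example session (the keywords accepted after the server name depend on what
has been enabled; the address shown here is illustrative):
    $ socat - /var/run/haproxy.sock <<< "experimental-mode on; add server be_app/srv1 192.168.0.11:8080"
    $ socat - /var/run/haproxy.sock <<< "enable server be_app/srv1"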
Remove static qualifier on stats_allocate_proxy_counters_internal. This
function will be used to allocate extra counters at runtime for dynamic
servers.
Prepare the server parsing API to support dynamic servers.
- define a new parsing flag to be used for dynamic servers
- each keyword contains a new field dynamic_ok to indicate if it can be
used for a dynamic server. For now, no keywords are supported.
- do not copy settings from the default server for a new dynamic server.
- a dynamic server is created in a maintenance mode and requires an
explicit 'enable server' command.
- a new server flag named SRV_F_DYNAMIC is created. This flag is set for
all servers created at runtime. It might be useful later, for example
to know if a server can be purged.
Modify the API of parse_server function. Use flags to describe the type
of the parsed server instead of discrete arguments. These flags can be
used to specify if a server/default-server/server-template is parsed.
Additional parameters are also specified (whether parsing of the address is
required, whether name resolution must be done immediately).
It is now unneeded to use strcmp on args[0] in parse_server. Also, the
calls to parse_server are more explicit thanks to the flags.
Move the code linking the server into the proxy's server list out of
_srv_parse_init and into parse_server.
This is groundwork for dynamic servers support. There will be two
differences in case of a dynamic server:
- the server will be attached to the proxy list only at the very end of the
operations when everything is ok
- the server will be directly attached to the end of the proxy's server
list
Move every ha_alert call in parsing functions into parse_server.
Parsing functions now support a pointer-to-string argument which will be
allocated with an error message if needed via memprintf.
parse_server then has the responsibility of displaying errors with ha_alert.
This is groundwork for dynamic servers. No traces should be printed on
stderr as a response to a cli command. cli_err will replace ha_alert in
this case.
The huge parse_server function is split into two smaller ones.
* _srv_parse_init allocates a new server instance and parses the address
parameter
* _srv_parse_kw parses the current server keyword
This simplifies the parse_server function a bit. Besides, it will be
useful for dynamic server creation.
Move the server keywords hardcoded in parse_server into the srv_kws list of
server.c. Now every server keyword is checked through srv_find_kw. This
has the effect of reducing the size of parse_server. As a side-effect, the
common kw list can be reduced.
This change has been made to be able to quickly discard these keywords
in case of a dynamic server.
The idle conn task is a global task used to clean up backend
connections marked for deletion. Previously, it was only allocated
if at least one server in the configuration has idle connections.
This assumption won't be valid anymore when new servers can be created
at runtime with idle connections. Always allocate the global idle conn
task.
Declaring a master CLI socket without activating the master-worker mode
is likely a user error, so we issue a warning.
This patch can be backported as far as 1.8.
If the configuration file contains a 'unix-bind prefix' directive, and
if we use the -S option and specify a UNIX socket path, the path of the
socket will be prepended with the value of the unix-bind prefix.
For instance, if we have 'unix-bind prefix /tmp/sockets/' and we use
'-S /tmp/master-socket' on the command line, we will get this error:
Starting proxy MASTER:
cannot bind UNIX socket (No such file or directory) [/tmp/sockets/tmp/master-socket]
So this patch adds an exception, and will ignore the unix-bind prefix
for the master CLI socket.
This patch can be backported as far as 1.9.
The freq counters were using the thread's own time as the start of the
current period. The problem is that in case of contention, it was
occasionally possible to perform non-monotonic updates on the edge of
the next second, because if the upfront thread updates a counter first,
it causes a rotation, then the second thread loses the race from its
older time, and tries again, and detects a different time again, but
in the past so it only updates the counter, then a third thread on the
new date would detect a change again, thus provoking a rotation again.
The effect was triple:
- rare loss of stored values during certain transitions from one
period to the next one, causing counters to report 0
- half of the threads forced to go through the slow path every second
- difficult convergence when using many threads where the CAS can fail
a lot and we can observe N(N-1) attempts for N threads to complete
This patch fixes this issue in two ways:
- first, it now makes use of the monotonic global_now value which also
happens to be volatile and to carry the latest known time; this way
time will never jump backwards anymore and only the first thread
updates it on transition, the other ones do not need to.
- second, re-read the time in the loop after each failure, because
if the date changed in the counter, it means that one thread knows
a more recent one and we need to update. In this case if it matches
the new current second, the fast path is usable.
This patch relies on previous patch "MINOR: time: export the global_now
variable" and must be backported as far as 1.8.
This is the process-wide monotonic time that is used to update each
thread's own time. It may be required at a few places where a strictly
monotonic clock is required such as freq_ctr. It will have to be
backported as a dependency of a forthcoming fix.
DNS hostname comparisons were fixed to be case-insensitive (see b17b88487
"BUG/MEDIUM: dns: Consider the fact that dns answers are
case-insensitive"). However 2 comparisons are still case-sensitive.
This patch must be backported as far as 1.8.
Some are not always easy to spot with "chk" vs "check" or hyphens at
some places and not at others. Now entering "option http-close" properly
suggests "httpclose" and "option tcp-chk" suggests "tcp-check". There's
no need to consider the proxy's capabilities, what matters is to figure
what related word the user tried to spell, and there are not that many
options anyway.
Now the suggested keywords are sorted with the most relevant ones first
instead of scanning them all in registration order and only dumping the
proposed ones:
- "tra"
trace <module> [cmd [args...]] : manage live tracing
operator : lower the level of the current CLI session to operator
user : lower the level of the current CLI session to user
show trace [<module>] : show live tracing state
- "pool"
show pools : report information about the memory pools usage
add acl : add acl entry
del map : delete map entry
user : lower the level of the current CLI session to user
del acl : delete acl entry
- "sh ta"
show stat : report counters for each proxy and server [desc|json|no-maint|typed|up]*
show tasks : show running tasks
set table [id] : update or create a table entry's data
show table [id]: report table usage stats or dump this table's contents
trace <module> [cmd [args...]] : manage live tracing
- "sh state"
show stat : report counters for each proxy and server [desc|json|no-maint|typed|up]*
set table [id] : update or create a table entry's data
show table [id]: report table usage stats or dump this table's contents
show servers state [id]: dump volatile server information (for backend <id>)
show sess [id] : report the list of current sessions or dump this session
Till now the fuzzy matching would only work on the same number of words,
but this doesn't account for commands like "show servers conn" which
involve 3 words and were not proposed when entering only "show conn".
Let's improve the situation by building the two fingerprints separately
for the correct keyword sequence and the entered one, then compare them.
This can result in slightly larger variations due to the different string
lengths but is easily compensated for. Thanks to this, we can now see
"show servers conn" when entering "show conn", and the following choices
are relevant to correct typos:
- "show foo"
show sess [id] : report the list of current sessions or dump this session
show info : report information about the running process [desc|json|typed]*
show env [var] : dump environment variables known to the process
show fd [num] : dump list of file descriptors in use
show pools : report information about the memory pools usage
- "show stuff"
show sess [id] : report the list of current sessions or dump this session
show info : report information about the running process [desc|json|typed]*
show stat : report counters for each proxy and server [desc|json|no-maint|typed|up]*
show fd [num] : dump list of file descriptors in use
show tasks : show running tasks
- "show stafe"
show sess [id] : report the list of current sessions or dump this session
show stat : report counters for each proxy and server [desc|json|no-maint|typed|up]*
show fd [num] : dump list of file descriptors in use
show table [id]: report table usage stats or dump this table's contents
show tasks : show running tasks
- "show state"
show stat : report counters for each proxy and server [desc|json|no-maint|typed|up]*
show servers state [id]: dump volatile server information (for backend <id>)
It's still visible that the shorter ones continue to easily match, such
as "show sess" not having much in common with "show foo" but what matters
is that the best candidates are definitely relevant. Probably that listing
them in match order would further help.
While sums of squares usually give excellent results in fixed-size
patterns, they don't work well to compare different sized ones such
as when some sub-words are missing, because a word such as "server"
contains "er" twice, which will result in an extra distance of at
least 4 for just this e->r transition compared to another one missing
it. This is one of the main reasons why "show conn" only proposes
"show info" on the CLI. Maybe an improved approach consisting in
using squares only for exact same lengths would work, but it would
still make it difficult to spot reversed characters.
The distance between two words can be high due to a sub-word being missing
and in this case it happens that other totally unrelated words are proposed
because their average score looks lower thanks to being shorter. Here we're
introducing the notion of presence of each character so that word sequences
that contain existing sub-words are favored against the shorter ones having
nothing in common. In addition we do not distinguish begin/end from a
regular delimiter anymore. That made it harder to spot inverted words.
In commit a0e8eb8ca ("MINOR: cfgparse: suggest correct spelling for
unknown words in global section") we got the ability to locate a better
matching word in case of error. But it mistakenly used the CFG_LISTEN
class of words instead of CFG_GLOBAL, resulting in proposing unsuitable
matches in addition to the long hard-coded list. Now, "tune.dh-param"
correctly proposes "tune.ssl.default-dh-param".
No backport is needed.
I somehow managed to re-break the "help" command in b736458bf ("MEDIUM:
cli: apply spelling fixes for known commands before listing them")
after fixing it once. A null-deref happens when checking the args
early in the processing.
No backport is needed as this was introduced in 2.4-dev12.
When tasklets were derived from tasks, there was no immediate need for
the scheduler to know their status after execution, and in a spirit of
simplicity they just started to always return NULL. The problem is that
it simply prevents the scheduler from 1) accounting their execution time,
and 2) keeping track of their current execution status. Indeed, a remote
wake-up could very well end up manipulating a tasklet that's currently
being executed. And this is the reason why those handlers have to take
the idle lock before checking their context.
In 2.5 we'll take care of making tasklets and tasks work more similarly,
but trouble is to be expected if we continue to propagate the trend of
returning NULL everywhere, especially if some fixes relying on a stricter
model later need to be backported. For this reason this patch updates all
known tasklet handlers to make them return NULL only when the tasklet was
freed. It has no effect for now and isn't even guaranteed to always be
100% safe but it takes the code in the right direction for this.
There were still a very small list of functions, variables and fields
called "stats_" while they were really purely CLI-centric. There's the
frontend called "stats_fe" in the global section, which instantiates a
"cli_applet" called "<CLI>" so it was renamed "cli_fe".
The "alloc_stats_fe" function cas renamed to "cli_alloc_fe" which also
better matches the naming convention of all cli-specific functions.
Finally the "stats_permission_denied_msg" used to return an error on
the CLI was renamed "cli_permission_denied_msg".
Now there's no more "stats_something" that designates the CLI.
This is the number of args accepted on a command received on the CLI,
it has long been totally independent of stats and should not carry
this misleading "stats" name anymore.
Now instead of comparing words at an exact position, we build a fingerprint
made of all of them, so that we can check for them in any position. For
example, "show conn serv" finds "show servers conn" and that "set servers
maxconn" proposes both "set server" and "set maxconn servers".
Instead of making a new one from scratch, let's support not wiping the
existing fingerprint and updating it, and to do the same char by char.
The word-by-word one will still result in multiple beginnings and ends,
but that will accurately translate word boundaries. The char-based one
has more flexibility and requires that the caller maintains the previous
char to indicate the transition, which also allows to insert delimiters
for example.
Entering "show tls" would still emit 35 entries. By measuring the distance
between all unknown words and the candidates, we can sort them and pick the
10 most likely candidates. This works reasonably well, as now "show tls"
only proposes "show tls-keys", "show threads", "show pools" and "show tasks".
If the distance is still too high or if a word is missing, the whole
prefix list continues to be dumped, thus "show" alone will still report
the entire list of commands beginning with "show".
It's still impossible to skip a word, for example "show conn" will not
propose "show servers conn" because the distance is calculated for each
word individually. Some changes to the distance calculation to support
updating an existing map could easily address this. But this is already
a great improvement.
The error message on the CLI has become unreadable due to the long list
and it's not even sorted, making it even harder to figure the right
command.
This patch starts by looking if some of the words match something known,
and if so, will limit the listing only to those commands that start like
the current one. The "help", "prompt" and "quit" commands are always
shown to help the user try something else. Now thanks to this, typing
"add" or "del" will only list "add acl", "add map" and not 50 lines
anymore.
As a small bonus, we won't print "Unknown command" anymore in response
to the "help" command.
By doing so we can report more accurate information about what's wrong.
As a first step, we already distinguish the case of expert-only commands
from other ones.
Now that the appctx contains the master level, it greatly simplifies
all the tests, as we can simply verify that keyword levels match the
effective level without having to cheat with applet pointers. This
also allows to fold the expert test in them.
Right now the code is a bit hackish, it tests for the keyword's level
flags but checks the applet's origin to compare the bits. Let's start
by properly setting the ACCESS_MASTER_ONLY and ACCESS_MASTER flags on
the master CLI's bind_conf so that they are automatically present
all the time.
These 3 commands are functionally valid both in master and worker CLIs.
However, while they do have a valid handler, they are not permitted by
the code and work partially by chance in the master:
- "prompt" and "quit" are intercepted by the request analyser
- "help" triggers an error, which results in displaying the error
message
Let's make sure they are permitted so that we don't count errors there and
that we can report appropriate help.
This bug has always been there but it doesn't have any functional effect
at the moment since "help" can only show the error message. As such, there
is no need to backport it.
The loop looking for existing ADD items to renew their last_seen must ignore
the items already renewed in the same loop. To do so, we rely on the
last_seen time. Because it is now based on now_ms, it is safe.
Doing so avoids matching the same ADD item several times when the same IP
address is found in several ADD items. This reduces the number of extra DNS
resolutions.
This patch depends on "MINOR: resolvers: Use milliseconds for cached items
in resolver responses". Both may be backported as far as 2.2 if necessary.
The last time an item was seen in a resolver response is now stored in
milliseconds instead of seconds. This avoids some corner-cases at the
edges. This also simplifies time comparisons.
At startup, if a SRV resolution is set for a server, no DNS resolution is
created. We must wait for the first SRV resolution to know if it must be
triggered. It is important to do so for two reasons.
First, during a "classical" startup, a server based on a SRV resolution has
no hostname. Thus the created DNS resolution is useless; it is best to wait
for the first SRV resolution. It is not really a bug at this stage, it is
just useless.
Second, in the same situation, if the server state is loaded from a file,
its hostname will be set a bit later. Thus, if there is no additional record
for this server, because there is already a DNS resolution, it inhibits any
new DNS resolution. But there is no hostname attached to the existing DNS
resolution. So no resolution is performed at all for this server.
To avoid any problem, it is far easier to handle this special case during
startup. But this means we must be prepared to have no "resolv_requester"
field for a server at runtime.
This patch must be backported as far as 2.2.
Another way to say it: "Safely unlink a requester from a requester callback".
Requester callbacks must never try to unlink a requester from a resolution,
be it the current requester or another one. First, these callback functions
are called in a loop on a request list which is not necessarily safe, so
unlinking a resolution at this point may be unsafe. And it is useless to try
to make these loops safe because all this happens inside a loop on a
resolution list, and unlinking a requester may lead to releasing a resolution
if it is the last requester.
However, the unlink is necessary because we cannot reset the server state
(hostname and IP) with some pending DNS resolution on it. So, to work around
this issue, we introduce the "safe" unlink. It is only performed from a
requester callback. In this case, the unlink function never releases the
resolution, it only resets it if necessary. And when a resolution is found
with an empty requester list, it is released.
This patch depends on the following commits :
* MINOR: resolvers: Purge answer items when a SRV resolution triggers an error
* MINOR: resolvers: Use a function to remove answers attached to a resolution
* MINOR: resolvers: Directly call srvrq_update_srv_state() when possible
* MINOR: resolvers: Add function to change the srv status based on SRV resolution
All the series must be backported as far as 2.2. It fixes a regression
introduced by the commit b4badf720 ("BUG/MINOR: resolvers: new callback to
properly handle SRV record errors").
don't release resolution from requester cb
When the server status must be updated from the result of a SRV resolution,
we can directly call srvrq_update_srv_state(). It is simpler and this avoids
a test on the server DNS resolution.
This patch is mandatory for the next commit. It also relies on "MINOR:
resolvers: Directly call srvrq_update_srv_state() when possible".
srvrq_update_srv_status() updates the server status based on the result of a
SRV resolution. For now, it is only used from snr_update_srv_status() when
appropriate.
When a SRV request triggers an error, if we decide to handle the error
because the last_valid duration has expired, the answer list may be purged.
All items are considered as obsolete.
resolv_purge_resolution_answer_records() must be used to remove all answers
attached to a resolution. For now, it is only used when a resolution is
released.
When an ADD item attached to a SRV item is removed because it is obsolete, we
must trigger a DNS resolution to be sure whether the hostname still resolves
or not. There is no other way to be sure the entry is still valid. And we
cannot set the server in RMAINT immediately, because a DNS server may be
inconsistent and may stop adding some additional records.
The opposite is also true. If a valid ADD item is still attached to a SRV
item, any DNS resolution must be stopped. There is no reason to perform
extra resolutions in this case.
This patch must be backported as far as 2.2.
If no ADD item is found for a SRV item in a SRV response, a DNS resolution
is triggered. When it succeeds, we must be sure the SRV item is still
alive. Otherwise the DNS resolution must be ignored.
This patch depends on the commit "MINOR: resolvers: Move last_seen time of
an ADD into its corresponding SRV item". Both must be backported as far as
2.2.
This function searches for a SRV answer item associated to a requester
whose type is server.
This is mainly useful to "link" a server to its SRV record when no
additional records were found to configure the IP address.
This patch is required by a bug fix.
For each ADD item found in a SRV response, we try to find a corresponding
ADD item already attached to an existing SRV item. If found, the ADD
last_seen time is updated, otherwise we try to find a SRV item with no ADD
to attach the new one to.
However, the loop is buggy. Instead of comparing 2 ADD items, it compares
the new ADD item with the SRV item. Because of this bug, we are unable to
renew the last_seen time of existing ADD items.
This patch must be backported as far as 2.2.
When a server status is updated based on a SRV item, it is always set to UP,
regardless of whether it has an IP address defined or not. For instance, if
only a SRV item is received, with no additional record, only the server
hostname is defined. We must wait to have an IP address before setting the
server as UP.
This patch must be backported as far as 2.2.
When a server is set in RMAINT because of a SRV resolution failure, the
server DNS resolution, if any, must be unlinked first. It is mandatory to
handle the change in the context of a SRV resolution.
This patch must be backported as far as 2.2.
When a DNS resolution error is detected, in snr_resolution_error_cb(), the
server address must be reset only if the server status has changed. In this
case, it means the server is set to RMAINT. Thus the server address may be
reset.
This patch fixes a bug introduced by commit d127ffa9f ("BUG/MEDIUM:
resolvers: Reset address for unresolved servers"). It must be backported as
far as 2.0.
When an error is received for a DNS resolution, for instance a NXDOMAIN
error, the server must be considered to have no address when its status is
updated, not the opposite.
Concretely, because this parameter is not used on the error path in
snr_update_srv_status(), there is no impact.
This patch must be backported as far as 1.8.
This reverts commit a331a1e8eb.
This commit fixes a real bug, but it also reveals some hidden bugs, mostly
because of some design issues. Thus, in itself, it creates more problems than
it solves. So revert it for now. All known bugs will be addressed in the next
commits.
This patch should be backported as far as 2.2.
This was introduced in previous commit 49c2b45c1 ("MINOR: cfgparse/server:
try to fix spelling mistakes on server lines"), the loop was changed but
the increment left. No backport is needed.
This adds support for action_suggest() in http-request, http-response
and http-after-response rulesets. For example:
parsing [/dev/stdin:2]: 'http-request' expects (...), but got 'del-hdr'. Did you mean 'del-header' maybe ?
action_suggest() will return a pointer to an action whose keyword more or
less resembles the passed argument. It is also more tolerant of
prefixes (since actions taking arguments are handled as prefixes).
This will be used to suggest approaching words.
Just like with the server keywords, now's the turn of "bind" keywords.
The difference is that 100% of the bind keywords are registered, thus
we do not need the list of extra keywords.
There are multiple bind line parsers today, all were updated:
- peers
- log
- dgram-bind
- cli
$ printf "listen f\nbind :8000 tcut\n" | ./haproxy -c -f /dev/stdin
[NOTICE] 070/101358 (25146) : haproxy version is 2.4-dev11-7b8787-26
[NOTICE] 070/101358 (25146) : path to executable is ./haproxy
[ALERT] 070/101358 (25146) : parsing [/dev/stdin:2] : 'bind :8000' unknown keyword 'tcut'; did you mean 'tcp-ut' maybe ?
[ALERT] 070/101358 (25146) : Error(s) found in configuration file : /dev/stdin
[ALERT] 070/101358 (25146) : Fatal errors found in configuration.
Let's apply the fuzzy match to server keywords so that we can avoid
dumping the huge list of supported keywords each time there is a spelling
mistake, and suggest proper spelling instead:
$ printf "listen f\nserver s 0 sendpx-v2\n" | ./haproxy -c -f /dev/stdin
[NOTICE] 070/095718 (24152) : haproxy version is 2.4-dev11-caa6e3-25
[NOTICE] 070/095718 (24152) : path to executable is ./haproxy
[ALERT] 070/095718 (24152) : parsing [/dev/stdin:2] : 'server s' unknown keyword 'sendpx-v2'; did you mean 'send-proxy-v2' maybe ?
[ALERT] 070/095718 (24152) : Error(s) found in configuration file : /dev/stdin
[ALERT] 070/095718 (24152) : Fatal errors found in configuration.
The global section also knows a large number of keywords that are not
referenced in any list, so they needed to be specifically listed.
It becomes particularly handy now because some tunables are never easy
to remember, but now it works remarkably well:
$ printf "global\nsched.queue_depth\n" | ./haproxy -c -f /dev/stdin
[NOTICE] 070/093007 (23457) : haproxy version is 2.4-dev11-dd8ee5-24
[NOTICE] 070/093007 (23457) : path to executable is ./haproxy
[ALERT] 070/093007 (23457) : parsing [/dev/stdin:2] : unknown keyword 'sched.queue_depth' in 'global' section; did you mean 'tune.runqueue-depth' maybe ?
[ALERT] 070/093007 (23457) : Error(s) found in configuration file : /dev/stdin
[ALERT] 070/093007 (23457) : Fatal errors found in configuration.
Let's start by the largest keyword list, the listeners. Many keywords were
still not part of a list, so a common_kw_list array was added to list the
not enumerated ones. Now for example, typing "tmout" properly suggests
"timeout":
$ printf "frontend f\ntmout client 10s\n" | ./haproxy -c -f /dev/stdin
[NOTICE] 070/091355 (22545) : haproxy version is 2.4-dev11-3b728a-21
[NOTICE] 070/091355 (22545) : path to executable is ./haproxy
[ALERT] 070/091355 (22545) : parsing [/dev/stdin:2] : unknown keyword 'tmout' in 'frontend' section; did you mean 'timeout' maybe ?
[ALERT] 070/091355 (22545) : Error(s) found in configuration file : /dev/stdin
[ALERT] 070/091355 (22545) : Fatal errors found in configuration.
Instead of just reporting "unknown keyword", let's provide a function which
will look through a list of registered keywords for a similar-looking word
to the one that wasn't matched. This will help callers suggest correct
spelling. Also, given that a large part of the config parser still relies
on a long chain of strcmp(), we'll need to be able to pass extra candidates.
Thus the function supports an optional extra list for this purpose.
This introduces two functions, one which creates a fingerprint of a word,
and one which computes a distance between two words fingerprints. The
fingerprint is made by counting the transitions between one character and
another one. Here we consider the 26 alphabetic letters regardless of
their case, then any digit as a digit, and anything else as "other". We
also consider the first and last locations as transitions from begin to
first char, and last char to end. The distance is simply the sum of the
squares of the differences between two fingerprints. This way, doubling/
missing a letter has the same cost, however some repeated transitions
such as "e"->"r" like in "server" are very unlikely to match against
situations where they do not exist. This is a naive approach but it seems
to work sufficiently well for now. It may be refined in the future if
needed.
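As an illustration only, here is a minimal, self-contained C sketch of that idea; the function names and the exact class layout below are assumptions for the example, not the functions actually added by this patch:
#include <ctype.h>
#include <stdint.h>
#include <string.h>

/* Illustrative classes: 0 = begin/end, 1..26 = letters (case-insensitive),
 * 27 = digit, 28 = other. The fingerprint counts class-to-class transitions.
 */
#define FP_CLASSES 29

static int fp_class(int c)
{
	if (isalpha(c))
		return tolower(c) - 'a' + 1;
	if (isdigit(c))
		return 27;
	return 28;
}

static void make_fingerprint(uint8_t fp[FP_CLASSES][FP_CLASSES], const char *word)
{
	int prev = 0; /* start from the "begin" class */

	memset(fp, 0, FP_CLASSES * FP_CLASSES);
	for (; *word; word++) {
		int curr = fp_class((unsigned char)*word);

		fp[prev][curr]++;
		prev = curr;
	}
	fp[prev][0]++; /* transition from the last char to "end" */
}

/* distance = sum of squared differences between the transition counts */
static unsigned int fingerprint_distance(uint8_t a[FP_CLASSES][FP_CLASSES],
                                         uint8_t b[FP_CLASSES][FP_CLASSES])
{
	unsigned int dist = 0;
	int i, j;

	for (i = 0; i < FP_CLASSES; i++)
		for (j = 0; j < FP_CLASSES; j++) {
			int d = (int)a[i][j] - (int)b[i][j];

			dist += d * d;
		}
	return dist;
}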
The error message for http-request and http-response starts with a comma
that very likely is a leftover from a previous list construct. Let's remove
it: "'http-request' expects , 'wait-for-handshake', 'use-service' ...".
The error message after "http-response set-var" isn't very clear:
[ALERT] 070/115043 (30526) : parsing [/dev/stdin:2] : error detected in proxy 'f' while parsing 'http-response set-var' rule : invalid variable 'set-var'. Expects 'set-var(<var-name>)' or 'unset-var(<var-name>)'.
Let's change it to this instead:
[ALERT] 070/115608 (30799) : parsing [/dev/stdin:2] : error detected in proxy 'f' while parsing 'http-response set-var' rule : invalid or incomplete action 'set-var'. Expects 'set-var(<var-name>)' or 'unset-var(<var-name>)'.
With a wrong action name, it also works better (it's handled as a prefix
due to the opening parenthesis):
[ALERT] 070/115608 (30799) : parsing [/dev/stdin:2] : error detected in proxy 'f' while parsing 'http-response set-varxxx' rule : invalid or incomplete action 'set-varxxx'. Expects 'set-var(<var-name>)' or 'unset-var(<var-name>)'.
The tcp-request error message only mentions "accept", "reject" and
track-sc*, but there are a few other ones that were missing, so let's
add them.
This could be backported, though it's not likely that it will help anyone
with an existing config.
The refactoring in commit 131b07be3 ("MEDIUM: server: Refactor
apply_server_state() to make it more readable") also had a copy-paste
error resulting in using global.server_state_file instead of the
function's argument, which easily crashes with a conf having a
state file in a backend and no global state file.
In addition, let's simplify the code and get rid of strcpy() which
almost certainly will break the build on OpenBSD.
This was introduced in 2.4-dev10, no backport is needed.
The refactoring in commit 131b07be3 ("MEDIUM: server: Refactor
apply_server_state() to make it more readable") made the global
server_state_base be dereferenced before being checked, resulting
in a crash on certain files.
This happened in 2.4-dev10, no backport is needed.
When a "tcp-check" or a "http-check" rule is parsed, we try to get the
previous rule in the ruleset to get its index. We must take care to reset
the pointer on this rule in case an error is triggered later on the
parsing. Otherwise, the same rule may be released twice. For instance, it
happens with such line :
http-check meth GET uri / ## note there is no "send" parameter
This patch must be backported as far as 2.2.
If an agent-check is configured for a server, when the response is parsed,
the .health threshold of the agent must be updated on up/down/stopped/fail
commands and not the threshold of the health-check. Otherwise, the
agent-check will compete with the health-check and may mark a DOWN server as
UP.
This patch should fix the issue #1176. It must be backported as far as 2.2.
CF_FL_ANALYZE flag is used to know a channel is filtered. It is important to
synchronize request and response channels when the filtering ends.
However, it is possible to call all request analyzers before starting the
filtering on the response channel. This means flt_end_analyze() may be
called for the request channel before flt_start_analyze() on the response
channel. Thus because CF_FL_ANALYZE flag is not set on the response channel,
we consider the filtering is finished on both sides. The consequence is that
flt_end_analyze() is not called for the response and backend filters are
unregistered before their execution on the response channel.
It is possible to encounter this bug on TCP frontend or CONNECT request on
HTTP frontend if the client shutdown is received with the first read.
To fix this bug, CF_FL_ANALYZE is set when filters are attached to the
stream. It means, on the request channel when the stream is created, in
flt_stream_start(). And on both channels when the backend is set, in
flt_set_stream_backend().
This patch must be backported as far as 1.7.
Setting multiple http-request track-scX rules generates entries
which never expire.
If there was already an entry registered by a previous http rule,
'stream_track_stkctr(&s->stkctr[rule->action], t, ts)' didn't
register the new 'ts' into the stkctr, and the function was left
with no reference on 'ts' whereas the refcount had been increased
by the '_get_entry'.
The patch applies the same policy as the one used for tcp track
rules: if there are successive rules, the track counter keeps the
first entry registered in the counter and nothing more is computed.
After validation this should be backported to all versions.
The recent default runqueue size reduction appeared to have significantly
lowered performance on low-thread count configs. Testing various runqueue
values on different workloads under thread counts ranging from
1 to 64, it appeared that lower values are more optimal for high thread
counts and conversely. It could even be drawn that the optimal value for
various workloads sits around 280/sqrt(nbthread), and probably has to do
with both the L3 cache usage and how to optimally interlace the threads'
activity to minimize contention. This is much easier to optimally
configure, so let's do this by default now.
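For illustration, a hedged sketch of how such a default could be derived from the observation above; the constant comes from the measurements quoted here, while the function name and the rounding are assumptions:
#include <math.h>

/* sketch only: default runqueue depth following 280/sqrt(nbthread) */
static int default_runqueue_depth(int nbthread)
{
	int depth;

	if (nbthread < 1)
		nbthread = 1;
	depth = (int)(280.0 / sqrt((double)nbthread));
	return depth > 0 ? depth : 1;
}
For example this gives 280 with a single thread and 35 with 64 threads.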
Instead of setting a hard-limit on runqueue-depth and keeping it short
to maintain fairness, let's allow the scheduler to automatically cut
the existing one in two equal halves if its size is between the configured
size and its double. This will allow to increase the default value while
keeping a low latency.
Emeric found that SSL+keepalive traffic had dropped quite a bit in the
recent changes, which could be bisected to recent commit 9205ab31d
("MINOR: ssl: mark the SSL handshake tasklet as heavy"). Indeed, a
first incarnation of this commit made use of the TASK_SELF_WAKING
flag but the last version directly used TASK_HEAVY, but it would still
continue to remove the already absent TASK_SELF_WAKING one instead of
TASK_HEAVY. As such, the SSL traffic remained processed with low
granularity.
No backport is needed as this is only 2.4.
The tests were made on slz and the zlib parsers for memlevel and windowsize
managed to escape the change made by commit 018251667 ("CLEANUP: config: make
the cfg_keyword parsers take a const for the defproxy"). This is now fixed.
These are some leftovers from the ancient code where they were still
called sessions, but these areas in the code remain confusing due to
this naming. They were now called "strm" which will not even affect
indenting nor alignment.
When implementing a client applet, a NULL dereference was encountered on
the error path which increments the counters.
Indeed, the counters incremented are the ones in the listener, which does
not exist in the case of client applets, so in sess->listener->counters,
listener is NULL.
This patch fixes the access to the listener structure when accessing it
from a session; most of the accesses are the counters in error paths.
Must be backported as far as 1.8.
The default proxy was passed as a variable to all parsers instead of a
const, which is not without risk, especially when some timeout parsers used
to make some int pointers point to the default values for comparisons. We
want to be certain that none of these parsers will modify the defaults
sections by accident, so it's important to mark this proxy as const.
This patch touches all occurrences found (89).
grq_total was incremented when picking tasks from the global run queue,
but this variable was not defined with threads disabled, and the code
was optimized away at -O2. No backport is needed.
This commit is a fix/complement to the following one :
08d87b3f49
BUG/MEDIUM: backend: never reuse a connection for tcp mode
It fixes the check for the early insertion of backend connections in
the reuse lists if the backend mode is HTTP.
The impact of this bug seems limited because :
- in tcp mode, no insertion is done in the avail list as mux_pt does not
support multiple streams.
- in http mode, muxes are also responsible to insert backend connections
in lists in their detach functions. Prior to this fix the reuse rate
could be slightly inferior.
It can be backported to 2.3.
Currently, there seems to be no way to have the transport layer ready
but not the mux in the function connect_server. Add a BUG_ON to report
if this implicit condition is not true anymore.
This should fix coverity report from github issue #1120.
The actconns list creates massive contention on low server counts because
it's in fact a list of streams using a server, all threads compete on the
list's head and it's still possible to see some watchdog panics on 48
threads under extreme contention with 47 threads trying to add and one
thread trying to delete.
Moving this list per thread is trivial because it's only used by
srv_shutdown_streams(), which simply required to iterate over the list.
The field was renamed to "streams" as it's really a list of streams
rather than a list of connections.
There are multiple per-thread lists in the listeners, which isn't the
most efficient in terms of cache, and doesn't easily allow to store all
the per-thread stuff.
Now we introduce an srv_per_thread structure which the servers will have an
array of, and place the idle/safe/avail conns tree heads into. Overall this
was a fairly mechanical change, and the array is now always initialized for
all servers since we'll put more stuff there. It's worth noting that the Lua
code still has to deal with its own deinit by itself despite being in a
global list, because its server is not dynamically allocated.
Till now servers were only initialized as part of the proxy setup loop,
which doesn't cover peers, tcp log, dns, lua etc. Let's move this part
out of this loop and instead iterate over all registered servers. This
way we're certain to visit them all.
The patch looks big but it's just a move of a large block with the
corresponding reindent (as can be checked with diff -b). It relies
on the two previous ones ("MINOR: server: add a global list of all
known servers and" and "CLEANUP: lua: set a dummy file name and line
number on the dummy servers").
It's a real pain not to have access to the list of all registered servers,
because whenever there is a need to late adjust their configuration, only
those attached to regular proxies are seen, but not the peers, lua, logs
nor DNS.
What this patch does is that new_server() will automatically add the newly
created server to a global list, and it does so as well for the 1 or 2
statically allocated servers created for Lua. This way it will be possible
to iterate over all of them.
The "socket_tcp" and "socket_ssl" servers had no config file name nor
line number, but this is sometimes annoying during debugging or later
in error messages, while all other places using new_server() or
parse_server() make sure to have a valid file:line set. Let's set
something to address this.
Currently the SSL layer checks the validity of its tasklet's context just
in case it would have been stolen, had the connection been idle. Now it
will be able to be notified by the mux when this situation happens so as
not to have to grab the idle connection lock on each pass. This reuses the
TASK_F_USR1 flag just as the muxes do.
These functions are used on the mux layer to indicate that the connection
is becoming idle and that the xprt ought to be careful before checking the
context or that it's not idle anymore and that the context is safe. The
purpose is to allow a mux which is going to release a connection to tell
the xprt to be careful when touching it. At the moment, the xprt are
always careful and that's costly so we want to have the ability to relax
this a bit.
No xprt layer uses this yet.
The muxes are touching the idle_conns_lock all the time now because
they need to be careful that no other thread has stolen their tasklet's
context.
This patch changes this a little bit by setting the TASK_F_USR1 flag on
the tasklet before marking a connection idle, and removing it once it's
not idle anymore. Thanks to this we have the guarantee that a tasklet
without this flag cannot be present in an idle list and does not need
to go through this costly lock. This is especially true for front
connections.
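Below is a tiny self-contained sketch of this principle, using a fake structure and an illustrative flag value rather than HAProxy's real task API:
#include <stdio.h>

#define TASK_F_USR1 0x1000   /* illustrative value for this example only */

struct fake_tasklet {
	unsigned int state;
};

/* set before inserting the connection into an idle list */
static void mark_conn_idle(struct fake_tasklet *tl)   { tl->state |= TASK_F_USR1; }
/* cleared once the connection is in use again */
static void mark_conn_in_use(struct fake_tasklet *tl) { tl->state &= ~TASK_F_USR1; }

static void io_cb(struct fake_tasklet *tl)
{
	if (tl->state & TASK_F_USR1)
		puts("possibly idle: take idle_conns_lock and re-check the context");
	else
		puts("not idle: the context is stable, skip the costly lock");
}

int main(void)
{
	struct fake_tasklet tl = { .state = 0 };

	io_cb(&tl);          /* front connection: never idle, no lock needed */
	mark_conn_idle(&tl);
	io_cb(&tl);          /* idle backend connection: lock required */
	mark_conn_in_use(&tl);
	io_cb(&tl);
	return 0;
}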
This flag will be usable by any application. It will be preserved across
wakeups so the application can use it to do various stuff. Some I/O
handlers will soon benefit from this.
It's been too short for quite a while now and is now full. It's still
time to extend it to 32-bits since we have room for this without
wasting any space, so we now gained 16 new bits for future flags.
The values were not reassigned just in case there would be a few
hidden u16 or short somewhere in which these flags are placed (as
it used to be the case with stream->pending_events).
The patch is tagged MEDIUM because this required to update the task's
process() prototype to use an int instead of a short, that's quite a
bunch of places.
It's cleaner to use a flag from the task's state to detect a tasklet
and it's even cheaper. One of the best benefits is that this will
allow to get the nice field out of the common part since the tasklet
doesn't need it anymore. This commit uses the last task bit available
but that's temporary as the purpose of the change is to extend this.
The PRNG used by the "random" LB algorithm was the central one which tries
hard to produce "correct" (i.e. hardly predictable) values suitable for use
in UUIDs or cookies. It's much too expensive for pure load balancing where
a cheaper thread-local PRNG is sufficient, and the current PRNG is part of
the hot places when running with many threads.
Let's switch to the statistical PRNG instead, it's thread-local, very
fast, and with a period of (2^32)-1 which is more than enough to decide
on a server.
We frequently need to access a simple and fast PRNG for statistical
purposes. The debug_prng() function did exactly this using a xorshift
generator but its use was limited to debug only. Let's move this to
tools.h and tools.c to make it accessible everywhere. Since it needs to
be fast, its state is thread-local. An initialization function starts a
different initial value for each thread for better distribution.
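A minimal sketch of such a thread-local xorshift generator is shown below; the names and the per-thread seeding are illustrative assumptions, only the general shape matches the description above:
#include <stdint.h>

static __thread uint32_t stat_rnd_state;

/* give each thread a distinct non-zero starting point */
void stat_prng_init(unsigned int thread_id)
{
	stat_rnd_state = 0x9E3779B9u * (thread_id + 1);
}

/* classic xorshift32: very fast, period (2^32)-1 for any non-zero state */
uint32_t stat_prng(void)
{
	uint32_t x = stat_rnd_state;

	x ^= x << 13;
	x ^= x >> 17;
	x ^= x << 5;
	stat_rnd_state = x;
	return x;
}
Picking a random server then only needs something like stat_prng() % nbsrv, which never touches any shared state.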
In conn_backend_get() we can cause some extreme contention due to the
idle_conns_lock. Indeed, even though it's per-thread, it still causes
high contention when running with many threads. The reason is that all
threads which do not have any idle connections are quickly skipped,
till the point where there are still some, so the first reaching that
point will grab the lock and the other ones wait behind. From this
point, all threads are synchronized waiting on the same lock, and
will follow the leader in small jumps, all hindering each other.
Here instead of doing this we're using a trylock. This way when a thread
is already checking a list, other ones will continue to next thread. In
the worst case, a high contention will lead to a few new connections to
be set up, but this may actually be what is required to avoid contention
in the first place. With this change, the contention has mostly
disappeared on this lock (it's still present in muxes and transport
layers due to the takeover).
Surprisingly, checking for emptiness of the tree root before taking
the lock didn't address any contention.
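The idea can be sketched as follows with plain pthread primitives; the structure and function names are illustrative, not the actual conn_backend_get() code:
#include <pthread.h>

struct thread_idle_conns {
	pthread_mutex_t lock;
	int nb_idle;              /* stand-in for the idle connections tree */
};

/* scan other threads' idle lists; skip any list already being inspected
 * by someone else rather than piling up behind its lock
 */
int try_steal_idle_conn(struct thread_idle_conns *per_thread, int nbthread, int my_id)
{
	int i;

	for (i = (my_id + 1) % nbthread; i != my_id; i = (i + 1) % nbthread) {
		if (pthread_mutex_trylock(&per_thread[i].lock) != 0)
			continue;        /* busy: let another thread handle it */

		if (per_thread[i].nb_idle > 0) {
			per_thread[i].nb_idle--;   /* "take over" one connection */
			pthread_mutex_unlock(&per_thread[i].lock);
			return i;                  /* found one on thread i */
		}
		pthread_mutex_unlock(&per_thread[i].lock);
	}
	return -1; /* nothing found: the caller sets up a new connection */
}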
A few improvements are still possible and desirable here. The first
one would be to avoid seeing all threads jump to the next one. We
could have each thread use a different prime number as the increment
so as to spread them across the entire table instead of keeping them
synchronized. The second one is that the lock in the mux layers
shouldn't be needed to check for the tasklet's context availability.
Using abort() occasionally results in unexploitable core due to issues
rewinding the stack. Let's use ABORT_NOW() which in addition to crashing
much closer to the call point also has the benefit of showing the call
trace.
We've reached a point where the global pools represent a significant
bottleneck with threads. On a 64-core machine, the performance was
divided by 8 between 32 and 64 H2 connections only because there were
not enough entries in the local caches to avoid picking from the global
pools, and the contention on the list there was very high. It becomes
obvious that we need to have an array of lists, but that will require
more changes.
In parallel, standard memory allocators have improved, with tcmalloc
and jemalloc finding their ways through mainstream systems, and glibc
having upgraded to a thread-aware ptmalloc variant, keeping this level
of contention here isn't justified anymore when we have both the local
per-thread pool caches and a fast process-wide allocator.
For these reasons, this patch introduces a new compile time setting
CONFIG_HAP_NO_GLOBAL_POOLS which is set by default when threads are
enabled with thread local pool caches, and we know we have a fast
thread-aware memory allocator (currently set for glibc>=2.26). In this
case we entirely bypass the global pool and directly use the standard
memory allocator when missing objects from the local pools. It is also
possible to force it at compile time when a good allocator is used with
another setup.
It is still possible to re-enable the global pools using
CONFIG_HAP_GLOBAL_POOLS, if a corner case is discovered regarding the
operating system's default allocator, or when building with a recent
libc but a different allocator which provides other benefits but does
not scale well with threads.
Errors reported by ssl_sock_dump_errors() to stderr would only report the
16 lower bits of the file descriptor because it used to be casted to ushort.
This can be backported to all versions but has really no importance in
practice since this is never seen.
Refactoring performed with the following Coccinelle patch:
@@
char *s;
@@
(
- ist2(s, strlen(s))
+ ist(s)
|
- ist2(strdup(s), strlen(s))
+ ist(strdup(s))
)
Note that this replacement is safe even in the strdup() case, because `ist()`
will not call `strlen()` on a `NULL` pointer. Instead it inserts a length of
`0`, effectively resulting in `IST_NULL`.
When dns_connect_nameserver() is called, the nameserver always has a dgram
field properly defined. The caller, dns_send_nameserver(), already performed
the appropriate verification.
When a DNS session is created, the call to ring_attach() never fails. The
ring is freshly initialized and there is no other watcher on it. Thus, the call
always succeeds.
Instead of catching an error that must never happen, we use the DISGUISE()
macro to make static analyzers happy.
Recent changes on the server-state file loading have introduced a
regression. HAProxy crashes if a backend with no server-state file is
disabled in the configuration. Indeed, configuration of such backends is not
finalized. Thus many fields are not defined.
To fix the bug, disabled backends must be ignored. In addition a BUG_ON()
has been added to verify the proxy mode regarding the server-state file. It
must be specified (none, global or local) for enabled backends.
No backport needed.
hlua_pushstrippedstring() function strips leading and trailing LWS
characters. But the resulting length is too short by 1 byte, so the last
non-LWS character is stripped. Note that a string containing only LWS
characters results in a stripped string with an invalid length (-1). This
leads to a lua runtime error.
This bug was reported in the issue #1155. It must be backported as far as
1.7.
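As a generic illustration of this class of off-by-one (not the actual Lua binding code), a correct strip has to add 1 when computing the length from the first/last pointers, and the all-LWS input has to be special-cased:
#include <string.h>

/* strips leading/trailing spaces and tabs; returns the stripped length
 * and stores the start of the stripped string in <out>
 */
static size_t strip_lws(const char *in, const char **out)
{
	const char *first = in;
	const char *last;

	while (*first == ' ' || *first == '\t')
		first++;

	if (!*first) {              /* only LWS characters: empty result */
		*out = first;
		return 0;
	}

	last = first + strlen(first) - 1;
	while (last > first && (*last == ' ' || *last == '\t'))
		last--;

	*out = first;
	return (size_t)(last - first + 1);   /* the "+ 1" is the point here */
}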
If dispatch mode or transparent backend is used, the backend connection
target is a proxy instead of a server. In these cases, the reuse of
backend connections is not consistent.
With the default behavior, no reuse is done and every new request uses a
new connection. However, if http-reuse is set to never, the connection
are stored by the mux in the session and can be reused for future
requests in the same session.
As no server is used for these connections, no reuse can be made outside
of the session, similarly to http-reuse never mode. A different
http-reuse config value should not have an impact. To achieve this, mark
these connections as private to have a defined behavior.
For this feature to properly work, the connection hash has been slightly
adjusted. The server pointer as an input has been replaced by a generic
target pointer to refer to the server or proxy instance. The hash is
always calculated in connect_server even if the connection target is not
a server. This also requires allocating the connection hash node for
every backend connection, not just the ones with a server target.
Fix a leak in connect_server which happens when a connection is reused
and a bind_addr was allocated because transparent mode is active. The
connection has already an allocated bind_addr so free the newly
allocated one.
No backport needed.
That comma should've been a semicolon. Fortunately, as it is now there
is no impact thanks to operator precedence, and all expressions are
properly evaluated. But this is troubling and the risk is high to
turn it into an effective bug with a minor change.
Introduced in b8ce8905cf which first
appeared in 2.1-dev3. This fix must be backported to 2.1+.
Since this commit:
144289b45 ("REORG: move init_default_instance() to proxy.c and pass it the defproxy pointer")
quic_transport_params_init() has been moved from cfgparse.c to proxy.c, so
this latter source file must include the xprt_quic.h header.
Should fix issue #1153.
When the processing stage is finished for a SPOE applet, before returning it
into the idle list, we check if the assigned server appears as full or if
there are some pending connections on the backend or the assigned server. If
yes, it means we have reached a maxconn and we close the applet to free a
slot. Otherwise, the applet can be reused. This test is only performed if
there is more than one thread.
It is important to close SPOE applets when there are pending connections for
multithreaded instances because connections with the SPOE agents are
persistent and local to a thread (applets are local to a thread). If a
maxconn is configured, some threads may take all available slots for a
while, leaving remaining threads without any free slot to process SPOE
messages. It is especially true if the maxconn is low.
This patch should fix the issue #705. It must be backported as far as
1.8. However, the code in 1.8 is quite different, a test must be performed
to be sure it works well.
When the selected server has no address, the destination address of the
client is used. However, for now, only the address is set, not the
family. Thus depending on how the server is configured and the client's
destination address, the server address family may be wrong.
For instance, with such server :
server srv 0.0.0.0:0
The server address family is AF_INET. The server connection will fail if a
client is asking for an IPv6 destination.
To fix the bug, we take care to set the right family, the family of the
client destination address.
This patch should fix the issue #202. It must be backported to all stable
versions.
If an IPv4 is set via a TCP/HTTP set-dst rule, the original port must be
preserved or set to 0 if the previous family was neither AF_INET nor
AF_INET6. The first case is not an issue because the port remains the
same. But if the previous family was, for instance, AF_UNIX, the port is not
set to 0 and has an undefined value.
This patch must be backported as far as 1.7.
There was a free(ptr) followed by ptr = malloc(len), which is the
equivalent of ptr = realloc(ptr, len) but slower and less clean. Let's
replace this.
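A generic before/after sketch of this kind of cleanup (illustrative, not the exact code touched by the patch):
#include <stdlib.h>

char *resize_buffer(char *ptr, size_t len)
{
	/* before:
	 *     free(ptr);
	 *     ptr = malloc(len);
	 * after: */
	char *tmp = realloc(ptr, len);

	if (!tmp) {
		free(ptr);      /* realloc failure keeps the old block alive */
		return NULL;
	}
	return tmp;
}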
In ssl_sock_free_srv_ctx() there are some calls to free() which are not
followed by a zeroing of the pointers. For now this function is only used
during deinit but it could be used at run time in the near future, so
better secure this.
In sample_store(), depending on the new sample types, the area pointer
was not always zeroed after being freed. Let's make sure it's always the
case to avoid the risk of dangling pointers being misused.
A few occurrences of calls to free() to free a section name,
peers name or server name were using casts and didn't include
the trailing free, let's switch them to ha_free().
This makes the code more readable and less prone to copy-paste errors.
In addition, it allows to place some __builtin_constant_p() predicates
to trigger a link-time error in case the compiler knows that the freed
area is constant. It will also produce compile-time error if trying to
free something that is not a regular pointer (e.g. a function).
The DEBUG_MEM_STATS macro now also defines an instance for ha_free()
so that all these calls can be checked.
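For readers unfamiliar with the helper, a heavily simplified sketch of the core behaviour is shown below; the real macro additionally carries the __builtin_constant_p() checks and the DEBUG_MEM_STATS instrumentation mentioned above:
#include <stdlib.h>

/* simplified sketch: free the pointed-to block and clear the caller's
 * pointer in one step (note: <pptr> is evaluated twice here)
 */
#define ha_free_sketch(pptr) do { free(*(pptr)); *(pptr) = NULL; } while (0)

/* usage: ha_free_sketch(&ptr) replaces the free(ptr); ptr = NULL; pair */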
178 occurrences were converted. The vast majority of them were handled
by the following Coccinelle script, some slightly refined to better deal
with "&*x" or with long lines:
@ rule @
expression E;
@@
- free(E);
- E = NULL;
+ ha_free(&E);
It was verified that the resulting code is the same, more or less a
handful of cases where the compiler optimized slightly differently
the temporary variable that holds the copy of the pointer.
A non-negligible amount of {free(str);str=NULL;str_len=0;} are still
present in the config part (mostly header names in proxies). These
ones should also be cleaned for the same reasons, and probably be
turned into ist strings.
A network may be specified to avoid header addition for "forwardfor" and
"orignialto" option via the "except" parameter. However, only IPv4
networks/addresses are supported. This patch adds the support of IPv6.
To do so, the net_addr structure is used to store the parameter value in the
proxy structure. And ipcmp2net() function is used to perform the comparison.
This patch should fix the issue #1145. It depends on the following commit:
* c6ce0ab MINOR: tools: Add function to compare an address to a network address
* 5587287 MINOR: tools: Add net_addr structure describing a network address
ipcmp2net() function may be used to compare an address (struct
sockaddr_storage) to a network address (struct net_addr). Among other
things, this function will be used to add support of IPv6 for "except"
parameter of "forwardfor" and "originalto" options.
When an except parameter is used for originalto option, only the destination
address must be evaluated. Especially, the address family of the destination
must be tested and not the source one.
This patch must be backported to all stable versions. However be careful,
depending on the version the code may be slightly different.
The preliminary approach to dealing with heavy tasks forced us to quit
the poller after meeting one. Now instead we process at most one per poll
loop and ignore the next ones, so that we get more bandwidth to process
all other classes.
Doing so further reduced the induced HTTP request latency at 100k req/s
under the stress of 1000 concurrent SSL handshakes in the following
proportions:
| default | low-latency
---------+------------+--------------
before | 2.75 ms | 2.0 ms
after | 1.38 ms | 0.98 ms
In both cases, the latency is roughly halved. It's worth noting that
both values are now exactly 10 times better than in 2.4-dev9. Even the
percentiles have much improved. For 16 HTTP connections (1 per thread)
competing with 1000 SSL handshakes, we're seeing these long-tail
latencies (in milliseconds) :
| 99.5% | 99.9% | 100%
-----------+---------+---------+--------
2.4-dev9 | 48.4 | 58.1 | 78.5
previous | 6.2 | 11.4 | 67.8
this patch | 2.8 | 2.9 | 6.1
The task latency profiling report now shows this in default mode:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
si_cs_io_cb 3061966 2.224s 726.0ns 42.03s 13.72us
h1_io_cb 3061960 6.418s 2.096us 18.76m 367.6us
process_stream 3059982 9.137s 2.985us 15.52m 304.3us
ssl_sock_io_cb 602657 4.265m 424.7us 4.736h 28.29ms
h1_timeout_task 202973 - - 6.254s 30.81us
accept_queue_process 135547 1.179s 8.699us 16.29s 120.1us
srv_cleanup_toremove_conns 81 15.64ms 193.1us 30.87ms 381.1us
task_run_applet 10 758.7us 75.87us 51.77us 5.176us
srv_cleanup_idle_conns 4 375.3us 93.83us 54.52us 13.63us
And this in low-latency mode, showing that both si_cs_io_cb() and process_stream()
have significantly benefitted from the improvement, with values 50 to 200 times
smaller than 2.4-dev9:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 6407006 11.86s 1.851us 31.14m 291.6us
process_stream 6403890 18.40s 2.873us 2.134m 20.00us
si_cs_io_cb 6403866 4.139s 646.0ns 1.773m 16.61us
ssl_sock_io_cb 894326 6.407m 429.9us 7.326h 29.49ms
h1_timeout_task 301189 - - 8.440s 28.02us
accept_queue_process 211989 1.691s 7.977us 21.48s 101.3us
srv_cleanup_toremove_conns 220 23.46ms 106.7us 65.61ms 298.2us
task_run_applet 16 1.219ms 76.17us 181.7us 11.36us
srv_cleanup_idle_conns 12 713.3us 59.44us 168.4us 14.03us
The changes are slightly more invasive than previous ones and depend on
recent patches so they are not likely well suited for backporting.
Instead of placing heavy tasklets into the TL_BULK queue, we now place
them into the TL_HEAVY one, which is assigned a default weight of ~1%
load at once. This way heavy tasks will not block TL_BULK anymore.
This class will be used exclusively for heavy processing tasklets. It
will be cleaner than mixing them with the bulk ones. For now it's
allocated ~1% of the CPU bandwidth.
The largest part of the patch consists in re-arranging the fields in the
task_per_thread structure to preserve a clean alignment with one more
list head. Since we're now forced to increase the struct past a second
cache line, it now uses 4 cache lines (for easy multiplying) with the
first two ones being exclusively used by local operations and the third
one mostly by atomic operations. Interestingly, this better arrangement
causes less stress and reduced the response time by 8 microseconds at
1 million requests per second.
A potential null pointer dereference was reported with an old gcc
version (6.5)
src/ssl_ckch.c: In function 'cli_parse_set_cert':
src/ssl_ckch.c:844:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:844:7: error: potential null pointer dereference [-Werror=null-dereference]
src/ssl_ckch.c: In function 'ckchs_dup':
src/ssl_ckch.c:844:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:844:7: error: potential null pointer dereference [-Werror=null-dereference]
This could happen if ckch_store_new() fails to allocate memory and returns NULL.
This patch must be backported with 8f71298 since it was wrongly fixed and
the bug could happen.
Must be backported as far as 2.2.
These function names are unbearably long, they don't even fit into the
screen in "show profiling", let's trim the "_connections" to "_conns",
which happens to match the name of the lists there.
There's a fairness issue between SSL and clear text. A full end-to-end
cleartext connection can require up to ~7.7 wakeups on average, plus 3.3
for the SSL tasklet, one of which is particularly expensive. So if we
accept to process many handshakes taking 1ms each, we significantly
increase the processing time of regular tasks just by adding an extra
delay between their calls. Ideally in order to be fair we should have a
1:18 call ratio, but this requires a bit more accounting. With very little
effort we can mark the SSL handshake tasklet as TASK_HEAVY until the
handshake completes, and remove it once done.
Doing so reduces from 14 to 3.0 ms the total response time experienced
by HTTP clients running in parallel to 1000 SSL clients doing full
handshakes in loops. Better, when tune.sched.low-latency is set to "on",
the latency further drops to 1.8 ms.
The tasks latency distribution explain pretty well what is happening:
Without the patch:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 2785375 19.35m 416.9us 5.401h 6.980ms
h1_io_cb 1868949 9.853s 5.271us 4.829h 9.302ms
process_stream 1864066 7.582s 4.067us 2.058h 3.974ms
si_cs_io_cb 1733808 1.932s 1.114us 26.83m 928.5us
h1_timeout_task 935760 - - 1.033h 3.975ms
accept_queue_process 303606 4.627s 15.24us 16.65m 3.291ms
srv_cleanup_toremove_connections452 64.31ms 142.3us 2.447s 5.415ms
task_run_applet 47 5.149ms 109.6us 57.09ms 1.215ms
srv_cleanup_idle_connections 34 2.210ms 65.00us 87.49ms 2.573ms
With the patch:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 3000365 21.08m 421.6us 20.30h 24.36ms
h1_io_cb 2031932 9.278s 4.565us 46.70m 1.379ms
process_stream 2010682 7.391s 3.675us 22.83m 681.2us
si_cs_io_cb 1702070 1.571s 922.0ns 8.732m 307.8us
h1_timeout_task 1009594 - - 17.63m 1.048ms
accept_queue_process 339595 4.792s 14.11us 3.714m 656.2us
srv_cleanup_toremove_connections779 75.42ms 96.81us 438.3ms 562.6us
srv_cleanup_idle_connections 48 2.498ms 52.05us 178.1us 3.709us
task_run_applet 17 1.738ms 102.3us 11.29ms 663.9us
other 1 947.8us 947.8us 202.6us 202.6us
=> h1_io_cb() and process_stream() are divided by 6 while ssl_sock_io_cb() is
multipled by 4
And with low-latency on:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 3000565 20.96m 419.1us 20.74h 24.89ms
h1_io_cb 2019702 9.294s 4.601us 49.22m 1.462ms
process_stream 2009755 6.570s 3.269us 1.493m 44.57us
si_cs_io_cb 1997820 1.566s 783.0ns 2.985m 89.66us
h1_timeout_task 1009742 - - 1.647m 97.86us
accept_queue_process 494509 4.697s 9.498us 1.240m 150.4us
srv_cleanup_toremove_connections1120 92.32ms 82.43us 463.0ms 413.4us
srv_cleanup_idle_connections 70 2.703ms 38.61us 204.5us 2.921us
task_run_applet 13 1.303ms 100.3us 85.12us 6.548us
=> process_stream() is divided by 100 while ssl_sock_io_cb() is
multipled by 4
Interestingly, the total HTTPS response time doesn't increase and even very
slightly decreases, with an overall ~1% higher request rate. The net effect
here is a redistribution of the CPU resources between internal tasks, and
in the case of SSL, handshakes wait bit more but everything after completes
faster.
This was made simple enough to be backportable if it helps some users
suffering from high latencies in mixed traffic.
While the scheduler is priority-aware and class-aware, and consistently
tries to maintain fairness between all classes, it doesn't make use of a
fine execution budget to compensate for high-latency tasks such as TLS
handshakes. This can result in many subsequent calls adding multiple
milliseconds of latency between the various steps of other tasklets that
don't even depend on this.
An ideal solution would be to add a 4th queue, have all tasks announce
their estimated cost upfront and let the scheduler maintain an auto-
refilling budget to pick from the most suitable queue.
But it turns out that a very simplified version of this already provides
impressive gains with very tiny changes and could easily be backported.
The principle is to reserve a new task flag "TASK_HEAVY" that indicates
that a task is expected to take a lot of time without yielding (e.g. an
SSL handshake typically takes 700 microseconds of crypto computation).
When the scheduler sees this flag when queuing a tasklet, it will place
it into the bulk queue. And during dequeuing, we accept only one of
these in a full round. This means that the first one will be accepted,
will not prevent other lower priority tasks from running, but if a new
one arrives, then the queue stops here and goes back to the polling.
This will allow to collect more important updates for other tasks that
will be batched before the next call of a heavy task.
Preliminary tests consisting in placing this flag on the SSL handshake
tasklet show that response times under SSL stress fell from 14 ms
before the patch to 3.0 ms with the patch, and even 1.8 ms if
tune.sched.low-latency is set to "on".
Only the first 3 characters are compared for ';no-maint' suffix in
http_handle_stats. Fix it by doing a full match over the entire suffix.
As a side effect, the ';norefresh' suffix matched the inaccurate
comparison, so the maintenance servers were always hidden on the stats
page in this case.
no-maint suffix is present since commit
3e32036701
MINOR: stats: also support a "no-maint" show stat modifier
It should be backported up to 2.3.
This fixes github issue #1147.
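A generic illustration of this bug class (not the actual http_handle_stats() code): comparing only the first few characters of the suffix makes unrelated options match too:
#include <string.h>

static int has_no_maint(const char *opts)
{
	/* buggy: strncmp(opts, ";no-maint", 3) == 0 also matches ";norefresh" */
	/* fixed: compare the full suffix                                      */
	return strncmp(opts, ";no-maint", strlen(";no-maint")) == 0;
}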
In H1, H2 and FCGI muxes, in the show_fd function, there is a duplicated test
on the stream's subs field.
This patch fixes the issue #1142. It may be backported as far as 2.2.
Some static functions are now exported and renamed to follow the same
pattern of other exported functions. Here is the list :
* update_server_fqdn: Renamed to srv_update_fqdn and exported
* update_server_check_addr_port: renamed to srv_update_check_addr_port and exported
* update_server_agent_addr_port: renamed to srv_update_agent_addr_port and exported
* update_server_addr: renamed to srv_update_addr
* update_server_addr_port: renamed to srv_update_addr_port
* srv_prepare_for_resolution: exported
This change is mandatory to move all functions dealing with the server-state
files in a separate file.
This change is not huge but may have a visible impact for users. Now, if a
line of a server-state file is corrupted, the whole file is ignored. A
warning is emitted with the corrupted line number.
In fact, there is no way to recover from a corrupted line. A line is
considered as corrupted if it is too long (truncated line) or if it contains
the wrong number of arguments. In both cases, it means the file was forged
(or at least manually edited). It is safer to ignore it.
Note that for now, memory allocation errors are not reported and the
corresponding line is silently ignored.
Now, srv_state_parse_and_store_line() function is used to parse and store a
line in a tree. It is used for global and local server-state files. This
significantly simplifies the apply_server_state() function.
Just like for the global server-state file, the lines of a local server-state
file are now stored in a tree. This way, the file is fully parsed before
loading the servers state. And with this change, global and local
server-state files are now handled the same way. This will be the
opportunity to factorize the code. It is also a good way to validate the
file before loading any server state.
The loop on the servers of a proxy to load the server states was moved into
the function srv_state_px_update(). This simplifies the apply_server_state()
function a bit. It is also mandatory to simplify the loading of local
server-state files.
When a server for a given backend is found in the tree containing all lines
of the global server-state file, the node is removed from the tree. It is
useless to keep it longer. It is a small improvement, but it may also be
useful to track the orphan lines (not used for now).
Parsed parameters are now stored in the tree of server-state lines. This
way, a line from the global server-state file is only parsed once. Before,
it was parsed a first time to store it in the tree and one more time to load
the server state. To do so, the server-state line object must be allocated
before parsing a line. This means its size must no longer depend on the
length of first parsed parameters (backend and server names). Thus the node
type was changed to use a hashed key instead of a string.
Now, we read a full line and expect to find only an integer on it. If
the line is empty or truncated, an error is returned. If the version is not
valid, an error is also returned. This way, the first line is no longer
partially read.
There is no reason to use a global variable to store the lines of the global
server-state file. This tree is only used during the file parsing, as a line
cache. Now the eb-tree is declared as a local variable in the
apply_server_state() function.
The structure used to store a server-state line in an eb-tree has a name
that is too generic. Instead of state_line, the structure is renamed to
server_state_line.
The <state_line.name_name> field is a node in an eb-tree. Thus, instead of
"name_name", we now use "node" to name this field. It is a more explicit
name and not too strange.
The apply_server_state() function is really hard to read. Thus it was
refactored to be more maintainable. First, a helper function is used to get
the server-state file path. Some useless variables were removed and most of
the other variables were renamed to be more readable. The error messages are now
prefixed to know the context (global vs per-proxy). Finally, the loop on the
proxies list was simplified.
This patch may seem a bit huge, but the changes are not so important.
There is no reason to fill two parameter arrays in srv_state_parse_line()
function. Now, only one array is used. The first 4 entries are just
skipped when srv_update_state() is called.
The srv_state_parse_line() function was rewritten to be more strict. First
of all, it is possible to make the difference between an ignored line and a
malformed one. Then, only blank characters (spaces and tabs) are now allowed
as field separator. An error is reported for truncated lines or for lines
with an unexpected number of arguments regarding the provided version.
However, for now, errors are ignored by the caller, invalid lines are just
skipped.
First, we don't want to measure wakeup times if the call date had not
been set before profiling was enabled at run time. And second, we may
only collect the value before clearing the TASK_IN_LIST bit, otherwise
another wakeup might happen on another thread and replace the call date
we're about to use, hence artificially lowering the wakeup times.
It is extremely useful to be able to observe the wakeup latency of some
important I/O operations, so let's accept to inflate the tasklet struct
by 8 extra bytes when DEBUG_TASK is set. With just this we have enough
to get live reports like this:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
si_cs_io_cb 8099492 4.833s 596.0ns 8.974m 66.48us
h1_io_cb 7460365 11.55s 1.548us 2.477m 19.92us
process_stream 7383828 22.79s 3.086us 18.39m 149.5us
h1_timeout_task 4157 - - 348.4ms 83.81us
srv_cleanup_toremove_connections 751 39.70ms 52.86us 10.54ms 14.04us
srv_cleanup_idle_connections 21 1.405ms 66.89us 30.82us 1.467us
task_run_applet 16 1.058ms 66.13us 446.2us 27.89us
accept_queue_process 7 34.53us 4.933us 333.1us 47.58us
Instead of decrementing grq_total once per task picked from the global
run queue, let's do it at once after the loop like we do for other
counters. This simplifies the code everywhere. It is not expected to
bring noticeable improvements however, since global tasks tend to be
less common nowadays.
Now we don't need to decrement rq_total when we pick a task in the tree
to immediately increment it again after installing it into the local
list. Instead, we simply add to the local queue count the number of
globally picked tasks. Avoiding this shows ~0.5% performance gains at
1Mreq/s (2M task switches/s).
As indicated in previous commit, this function tries to guess which tree
the task is in to figure what counters to update, while we already have
that info in the caller. Let's just pick the relevant parts to place them
in the caller.
In process_runnable_tasks() we're still calling __task_unlink_rq() to
pick a task, and this function tries to guess where to pick the task
from and which counter to update while the caller's context already
has everything. Worse, the number of local tasks is decremented then
recredited, doubling the operations. In order to avoid this we first
need to keep separate counters for local and global tasks that were
picked. This is what this patch does.
The function htx_reserve_max_data() should be used to get an HTX DATA block
with the max possible size. A current block may be extended or a new one
created, depending on the HTX message state. But the idea is to let the
caller copy a bunch of data without requesting many new blocks. It is the
caller's responsibility to resize the block at the end, to set the final block size.
This function will be used to parse messages with small chunks. Indeed, we
can have more than 2700 1-byte chunks in 16KB of input data. So it is easy
to understand how this function may help to improve the parsing of chunked
messages.
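As an illustration, the intended call pattern roughly looks like the sketch
below (standalone C with stand-in types and helpers; the real HTX structures
and the exact htx_reserve_max_data() prototype are not reproduced here):

  #include <stddef.h>
  #include <string.h>

  struct blk { char *ptr; size_t size; };                  /* stand-in for an HTX DATA block */

  extern struct blk *reserve_max_data(void *htx);          /* stand-in: get the largest possible block */
  extern void set_block_size(struct blk *blk, size_t len); /* stand-in: the caller's final resize */

  /* copy many small chunks into one reserved block instead of requesting
   * a new block per chunk */
  static size_t copy_chunks(void *htx, const char *in, const size_t *chunk_len, int nb_chunks)
  {
      struct blk *blk = reserve_max_data(htx);
      size_t used = 0;

      if (!blk)
          return 0;
      for (int i = 0; i < nb_chunks && used + chunk_len[i] <= blk->size; i++) {
          memcpy(blk->ptr + used, in, chunk_len[i]);
          in   += chunk_len[i];
          used += chunk_len[i];
      }
      set_block_size(blk, used); /* shrink the block to what was really copied */
      return used;
  }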
If the DNS resolution failed for a server, its IP address must be
removed. Otherwise, the server is stopped but keeps its IP. This may be
confusing when the servers' states are retrieved on the CLI and it may lead to
undefined behavior if HAProxy is configured to load its servers' state from a
file.
This patch should be backported as far as 2.0.
When a SRV record expires, the IP/port assigned to the associated server are
now removed. Otherwise, the server is stopped but keeps its IP/port while
the server hostname is removed. It is confusing when the servers' states are
retrieved on the CLI and may be a problem if saved in a server-state
file, because the reload may fail because of this inconsistency.
Here is an example:
* Declare a server template in a backend, using the resolver <dns>
server-template test 2 _http._tcp.example.com resolvers dns check
* 2 SRV records are announced with the corresponding additional
records. Thus, 2 servers are filled. Here is the "show servers state"
output :
2 frt 1 test1 192.168.1.1 2 64 0 1 2 15 3 4 6 0 0 0 http1.example.com 8001 _http._tcp.example.com 0 0 - - 0
2 frt 2 test2 192.168.1.2 2 64 0 1 1 15 3 4 6 0 0 0 http2.example.com 8002 _http._tcp.example.com 0 0 - - 0
* Then, one additional record is removed (or a SRV record is removed, the
result is the same). Here is the new "show servers state" output :
2 frt 1 test1 192.168.1.1 2 64 0 1 38 15 3 4 6 0 0 0 http1.example.com 8001 _http._tcp.example.com 0 0 - - 0
2 frt 2 test2 192.168.1.2 0 96 0 1 19 15 3 0 14 0 0 0 - 8002 _http._tcp.example.com 0 0 - - 0
On reload, if a server-state file is used, this leads to undefined behaviors
depending on the configuration.
This patch should be backported as far as 2.0.
When a SRV record was created, it used to register the regular server name
resolution callbacks. That said, SRV records and regular server name
resolution don't work the same way, especially regarding error management.
This patch introduces a new callback to manage DNS errors related to
the SRV queries.
This fixes GitHub issue #50.
Backport status: 2.3, 2.2, 2.1, 2.0
If no additional record is associated to a SRV record, its TTL must not be
renewed. Otherwise the entry never expires. Thus, once announced a first
time, the entry remains blocked on the same IP/port unless a new announcement
replaces the old one.
Now, the TTL is updated if a SRV record is received while a matching
existing one is found with an additional record, or when a new additional
record is assigned to an existing SRV record.
This patch should be backported as far as 2.2.
At the end of resolv_validate_dns_response(), if a received additional
record is not assigned to an existing server record, it is released. But the
condition to do so is buggy. If "answer_record" (the received AR) is not
assigned, "tmp_record" is not a valid record object. It is just a dummy
record "representing" the head of the record list.
Now, the condition is far cleaner. This patch must be backported as far as
2.2.
This function has become large with the multi-queue scheduler. We need
to keep the fast path and the debugging parts inlined, but the rest now
moves to task.c just as was done for task_wakeup(). This has reduced
the code size by 6kB due to less inlining of large parts that are always
context-dependent, and as a side effect, has increased the overall
performance by 1%.
The nb_tasks counter was still global and gets incremented and decremented
for each task_new()/task_free(), and was read in process_runnable_tasks().
But it's only used for stats reporting, so doing this so often is
pointless and expensive. Let's move it to the task_per_thread struct and
have the stats sum it when needed.
The test in __task_wakeup() to figure if the remote threads are sleeping
doesn't make sense outside of the global runqueue test, since there are
only two possibilities here: local runqueue or global runqueue, hence a
sleeping thread is another one and can only happen when sending to the
global run queue. Let's move the test inside the "if" block.
Historically we used to call __task_wakeup() with a known tree root but
this is no longer the case, and the code has remained needlessly complicated
with the root calculation in task_wakeup() passed in argument to
__task_wakeup() which compares it again.
Let's get rid of this and just move the detection code there. This
eliminates some ifdefs and allows to simplify the test conditions quite
a bit.
This one is systematically misunderstood due to its unclear name. It
is in fact the number of tasks in the local tasklet list. Let's call
it "tasks_in_list" to remove some of the confusion.
This one is exclusively used as a boolean nowadays and is non-zero only
when the thread-local run queue is not empty. Better check the root tree's
pointer and avoid updating this counter all the time.
This counter is solely used for reporting in the stats and is the hottest
thread contention point to date. Moving it to the scheduler and having a
separate one for the global run queue dramatically improves the performance,
showing a 12% boost on the request rate on 16 threads!
In addition, the thread debugging output which used to rely on rqueue_size
was not totally accurate as it would only report task counts. Now we can
return the exact thread's run queue length.
It is also interesting to note that there are still a few other task/tasklet
counters in the scheduler that are not efficiently updated because some cover
a single area and others cover multiple areas. It looks like having a distinct
counter for each of the following entries would help and would keep the code
a bit cleaner:
- global run queue (tree)
- per-thread run queue (tree)
- per-thread shared tasklets list
- per-thread local lists
Maybe even splitting the shared tasklets lists between pure tasklets and
tasks, instead of having both mixed together, would simplify the code because
there remain a number of places where several counters have to be updated.
dns_session_release() only uses its struct dns_stream_server to access
the lock, so a warning is emitted when threads are disabled. Let's mark
it __maybe_unused.
The lock was still used exclusively to deal with the concurrency between
the "show sess" release handler and a stream_new() or stream_free() on
another thread. All other accesses made by "show sess" are already done
under thread isolation. The release handler only requires to unlink its
node when stopping in the middle of a dump (error, timeout etc). Let's
just isolate the thread to deal with this case so that it's compatible
with the dump conditions, and remove all remaining locking on the streams.
This effectively kills the streams lock. The measured gain here is around
1.6% with 4 threads (374krps -> 380k).
The global streams list is exclusively used for "show sess", to look up
a stream to shut down, and for the hard-stop. Having all of them in a
single list is extremely expensive in terms of locking when using threads,
with performance losses as high as 7% having been observed just due to
this.
This patch makes the list per-thread, since there's no need to have a
global one in this situation. All call places just iterate over all
threads. The most "invasive" change was in "show sess" where the end
of list needs to go back to the beginning of next thread's list until
the last thread is seen. For now the lock was maintained to keep the
code auditable but a next commit should get rid of it.
The observed performance gain here with only 4 threads is already 7%
(350krps -> 374krps).
Instead of placing the current stream at the end of the stream list when
issuing a "show sess" on the CLI as was done in 2.2 with commit c6e7a1b8e
("MINOR: cli: make "show sess" stop at the last known session"), now we
compare the listed stream's epoch with the dumping stream's and stop on
more recent ones.
This way we're certain to always only dump known streams at the moment we
issue the dump command without having to modify the list. In theory we
could miss some streams if more than 2^31 "show sess" requests are issued
while an old stream remains present, but that's 68 years at 1 "show sess"
per second and it's unlikely we'll keep a process, let alone a stream, that
long.
It could be verified that the count of dumped streams still matches the
one before this change.
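A minimal standalone sketch of the kind of wrapping comparison implied above
(the helper name is illustrative, not the actual haproxy code):

  #include <stdint.h>

  /* Non-zero if epoch 'a' is strictly newer than epoch 'b'. The signed cast
   * keeps the comparison correct across the 2^32 wrap, as long as the two
   * values are less than 2^31 apart. */
  static inline int epoch_is_newer(uint32_t a, uint32_t b)
  {
      return (int32_t)(a - b) > 0;
  }

During the dump, streams whose epoch is newer than the dumping stream's were
created after "show sess" was issued and are simply skipped.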
The "show sess" CLI command currently lists all streams and needs to
stop at a given position to avoid dumping forever. Since 2.2 with
commit c6e7a1b8e ("MINOR: cli: make "show sess" stop at the last known
session"), a hack consists in unlinking the stream running the applet
and linking it again at the current end of the list, in order to serve
as a delimiter. But this forces the stream list to be global, which
affects scalability.
This patch introduces an epoch, which is a global 32-bit counter that
is incremented by the "show sess" command, and which is copied by newly
created streams. This way any stream can know whether any other one is
newer or older than itself.
For now it's only stored and not exploited.
The hard-stop event didn't wake threads up. In the past it wasn't an issue
as the poll timeout was limited to 1 second, but since commit 4f59d3861
("MINOR: time: increase the minimum wakeup interval to 60s") it has become
a problem because old processes can remain live for up to one minute after
the hard-stop-after delay. Let's just wake them up.
This may be backported to older releases, though before 2.4 the extra
delay was only one second.
There's no locking around the lookup of a stream nor its shutdown
when issuing "shutdown sessions" over the CLI so the risk of crashing
the process is particularly high.
Let's use a thread_isolate() there which is suitable for this task, and
there are not that many alternatives.
This must be backported to 1.8.
When setting hard-stop-after, hard_stop() is called at the end to kill
last pending streams. Unfortunately there's no locking there while
walking over the streams list nor when shutting them down, so it's
very likely that some old processes have been crashing or gone wild
due to this. Let's use a thread_isolate() call for this as we don't
have much other choice (and it happens once in the process' life,
that's OK).
This must be backported to 1.8.
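A hedged sketch of the resulting pattern (list and stream types are
stand-ins; thread_isolate() is the helper mentioned above and
thread_release() is assumed to be its counterpart):

  extern void thread_isolate(void);
  extern void thread_release(void);

  struct strm { struct strm *next; };
  extern struct strm *streams_head;
  extern void kill_stream(struct strm *s);

  static void hard_stop_sketch(void)
  {
      thread_isolate();                  /* all other threads are parked here */
      for (struct strm *s = streams_head; s; s = s->next)
          kill_stream(s);                /* safe: nobody else touches the list */
      thread_release();
  }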
This patch adds a lock to functions vars_get_by_name() and
vars_get_by_desc() to protect accesses to the list of variables.
After the variable is fetched, a sample data is duplicated by using
smp_dup() because the variable may be modified by another thread.
This should be backported to all versions supporting vars along with
"BUG/MINOR: sample: secure convs that accept base64 string and var name
as args" which this patch depends on.
This patch adds a few improvements in order to secure the use of
converters that accept base64 string and variable name as arguments.
The first change is within related function sample_conv_var2smp_str()
which now flags the sample as SMP_F_CONST if the argument is of type
ARGT_STR. This makes the sample more safe for later use.
A new function sample_check_arg_base64() is added. It checks an argument
and fills it with a variable type if the argument string contains a
valid variable name. If that fails, it tries to perform a base64 decode
operation on a non-empty string, and fills the argument with the decoded
content which can be used later, without any additional base64dec()
function calls during runtime. This means that haproxy configuration
check may fail if variable lookup fails and an invalid base64 encoded
string is specified as an argument for such converters.
Both converters, "aes_gcm_dec" and "hmac", now use alloc_trash_chunk()
in order to allocate additional buffers for various conversions, and
avoid the use of pre-allocated trash chunks directly (usually returned
by get_trash_chunk()). The function sample_check_arg_base64() is used
for both converters in order to check their arguments specified within
the haproxy configuration.
This patch should be backported as far as 2.0. However, it is important
to keep in mind a few things. The "hmac" converter is only available
starting with 2.2. In versions prior to 2.2, the "aes_gcm_dec" converter
and sample_conv_var2smp_str() are implemented in src/ssl_sock.c. Thus
the patch will have to be adapted on these versions.
Note that this patch is required for a subsequent, more important fix.
A potential null pointer dereference was reported with an old gcc
version (6.5)
src/ssl_ckch.c: In function 'cli_parse_set_cert':
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
src/ssl_ckch.c: In function 'ckchs_dup':
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
cc1: all warnings being treated as errors
This case does not actually happen but it's better to fix the ckch API
with a NULL check.
Could be backported as far as 2.1.
RAND_keep_random_devices_open is an OpenSSL-specific function, not
implemented in LibreSSL and BoringSSL. Let us define the guard
HAVE_SSL_RAND_KEEP_RANDOM_DEVICES_OPEN in include/haproxy/openssl-compat.h.
That guard no longer depends on HA_OPENSSL_VERSION.
The runqueue_ticks counts the number of task wakeups and is used to
position new tasks in the run queue, but since we've had per-thread
run queues, the values there are not very relevant anymore and the
nice value doesn't apply well if some threads are more loaded than
others. In addition, letting all threads compete over a shared counter
is not smart as this may cause some excessive contention.
Let's move this index close to the run queues themselves, i.e. one per
thread and a global one. In addition to improving fairness, this has
increased global performance by 2% on 16 threads thanks to the lower
contention on rqueue_ticks.
Fairness issues were not observed, but if any were to be, this patch
could be backported as far as 2.0 to address them.
Historically this function would try to wake the most accurate number of
process_stream() waiters. But since the introduction of filters which could
also require buffers (e.g. for compression), things started not to be as
accurate anymore. Nowadays muxes and transport layers also use buffers, so
the runqueue size has nothing to do anymore with the number of supposed
users to come.
In addition to this, the threshold was compared to the number of free buffers
calculated as allocated minus used, but this didn't work anymore with local
pools since these counts are not updated upon alloc/free!
Let's clean this up and pass the number of released buffers instead, and
consider that each waiter successfully called counts as one buffer. This
is not rocket science and will not suddenly fix everything, but at least
it cannot be as wrong as it is today.
This could have been marked as a bug given that the current situation is
totally broken regarding this, but this probably doesn't completely fix
it, it only goes in a better direction. It is possible however that it
makes sense in the future to backport this as part of a larger series if
the situation significantly improves.
The buffer wait queue used to be global historically but this does not
make any sense anymore given that the most common use case is to have
thread-local pools. Thus there's no point waking up waiters of other
threads after releasing an entry, as they won't benefit from it.
Let's move the queue head to the thread_info structure and use
ti->buffer_wq from now on.
When a server-state line is parsed, a test is performed to be sure there are
enough but not too many fields. However the test is buggy. The bug was
introduced in the commit ea2cdf55e ("MEDIUM: server: Don't introduce a new
server-state file version").
No backport needed.
This reverts the commit 63e6cba12 ("MEDIUM: server: add server-states version
2"), but keeps all recent features added to the server-state file. Instead
of adding a 2nd version of the server-state file format to handle the 5 new
fields added during the 2.4 development, these fields are considered as
optional during the parsing. So it is possible to load a server-state file
from HAProxy 2.3. However, from 2.4, these new fields are always dumped in
the server-state file. But it should not be a problem to load it on 2.3.
This patch seems a bit huge but the diff ignoring the space is much smaller.
The version 2 of the server-state file format is reserved for a real
refactoring to address all issues of the current format.
If a line of a server-state file has too many fields, the last one is not
cut on the first following space, as all other fields are. It contains all the
end of the line. It is not the expected behavior. So, now, we cut it on the
next following space, if any. The parsing loop was slightly rewritten.
Note that for now there is no error reported if the line is too long.
This patch may be backported at least as far as 2.1. On 2.0 and prior the
code is not the same. The line parsing is inlined in apply_server_state()
function.
The same static arrays of parameters are used to parse all server-state
lines. Thus it is important to reinit them to be sure not to get params from
the previous line, possibly from the previously loaded file.
This patch should be backported to all stable branches. However, in 2.0 and
prior, the parsing of server-state lines are inlined in apply_server_state()
function. Thus the patch will have to be adapted on these versions.
When a HTTP return action is triggered, HAProxy is responsible to return the
response, based on the configured status code. On the request side, there is
no problem because there is no server response to replace. But on the
response side, we must take care to override the server response status
code, if any, to be sure to use the right status code to get the http reply
message.
In short, we must always set the configured status code of the HTTP return
action before returning the http reply to be sure to get the right reply,
the one based on the http return action status code and not a reply based on
the server response status code.
This patch should fix the issue #1139. It must be backported as far as 2.2.
If a SPOE filter is configured to send its logs to a ring buffer, the
corresponding sink must be resolved during the configuration post
parsing. Otherwise, the sink is undefined when a log message is emitted,
crashing HAProxy.
This patch must be backported as far as 2.2.
Remove ebmb_node entry from struct connection and create a dedicated
struct conn_hash_node. struct connection now contains only a pointer to
a conn_hash_node, allocated only for connections whose target is of type
OBJ_TYPE_SERVER. This will reduce the memory footprint of every
connection that does not need http-reuse, such as frontend connections.
In h2_process there were two places where the connection was removed from
the idle trees without first checking if the connection is on the backend
side.
This should not produce a crash as the node is properly zeroed on
conn_init. However, it is better to make the test explicit, as is done in
all other places. Besides, it will be mandatory if the node part is
dynamically allocated only for backend connections.
The maximum number of connections accepted at once by a thread for a single
listener used to default to 64 divided by the number of processes but the
tasklet-based model is much more scalable and benefits from smaller values.
Experimentation has shown that 4 gives the highest accept rate for all
thread values, and that 3 and 5 come very close, as shown below (HTTP/1
connections forwarded per second at multi-accept 4 and 64):
ac\thr| 1 2 4 8 16
------+------------------------------
4| 80k 106k 168k 270k 336k
64| 63k 89k 145k 230k 274k
Some tests were also conducted on SSL and absolutely no change was observed.
The value was placed into a define because it used to be spread all over the
code.
It might be useful at some point to backport this to 2.3 and 2.2 to help
those who observed some performance regressions from 1.6.
SCTL (signed certificate timestamp list) specified in RFC6962
was implemented in c74ce24cd22e8c683ba0e5353c0762f8616e597d. Let
us introduce the HAVE_SSL_SCTL macro, which in turn is based on
SN_ct_cert_scts, which comes in the same commit.
We previously forgot to add `agent-*` commands.
Take this opportunity to rewrite the help string in a simpler way for
readability (mainly removing single quotes).
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Due to the two-phase server reservation, there are 3 calls to
fwlc_srv_reposition() per request, one during assign_server() to reserve
the slot, one in connect_server() to commit it, and one in process_stream()
to release it. However only one of the first two will change the key, so
it's needlessly costly to take the lock, remove a server and insert it
again at the same place when we can already figure we ought not to even
have taken the lock.
Furthermore, even when the server needs to move, there can be quite some
contention on the lbprm lock forcing the thread to wait. During this time
the served and nbpend server values might have changed, just like the
lb_node.key itself. Thus we measure the values again under the lock
before updating the tree. Measurements have shown that under contention
with 16 servers and 16 threads, 50% of the updates can be avoided there.
This patch makes the function compute the new key and compare it to
the current one before deciding to move the entry (and does it again
under the lock for the second test).
This removes between 40 and 50% of the tree updates depending on the
thread contention and the number of servers. The performance gain due
to this (on 16 threads) was:
16 servers: 415 krps -> 440 krps (6%, contention on lbprm)
4 servers: 554 krps -> 714 krps (+29%, little contention)
One point worth thinking about is that it's not logical to update the
tree 2-3 times per request while it's only read once. Half to 2/3 of
these updates are not needed. An experiment consisting in subscribing
the server to a list and letting the readers reinsert them on the fly
showed further degradation instead of an improvement.
A better approach would probably consist in avoiding writes to shared
cache lines by having the leastconn nodes distinct from the servers,
with one node per value, and having them hold an mt-list with all the
servers having that number of connections. The connection count tree
would then be read-mostly instead of facing heavy writes, and most
write operations would be performed on 1-3 list heads which are way
cheaper to migrate than a tree node, and do not require updating the
last two updated neighbors' cache lines.
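A condensed standalone sketch of the check/re-check logic described above
(names and the key formula are illustrative, not the actual
fwlc_srv_reposition() code):

  #include <stdint.h>

  struct lc_srv { uint32_t lb_key, served, queued, eweight; };

  extern void lb_lock(void);                                /* stand-in for the lbprm lock */
  extern void lb_unlock(void);
  extern void lb_requeue(struct lc_srv *s, uint32_t key);   /* tree delete + re-insert */

  static uint32_t lc_new_key(const struct lc_srv *s)
  {
      /* purely illustrative key based on served + queued connections */
      return (s->served + s->queued) * 65536u / (s->eweight ? s->eweight : 1);
  }

  static void lc_reposition(struct lc_srv *s)
  {
      if (lc_new_key(s) == s->lb_key)
          return;                        /* key unchanged: skip the lock entirely */

      lb_lock();
      uint32_t key = lc_new_key(s);      /* re-compute under the lock, values may have moved */
      if (key != s->lb_key) {
          lb_requeue(s, key);
          s->lb_key = key;
      }
      lb_unlock();
  }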
The operations are only an insert and a delete into the LB tree, which
doesn't require the server's lock at all as the lbprm lock is already
held. Let's drop it. Just for the sake of cleanness, given that the
served and nbpend values used to be atomically updated, we'll use an
atomic load to read them.
The operations are only an insert and a delete into the LB tree, which
doesn't require the server's lock at all as the lbprm lock is already
held. Let's drop it.
The two algos defining these functions (first and leastconn) do not need the
server's lock. However it's already present in pendconn_process_next_strm()
so the API must be updated so that the functions may take it if needed and
that the callers indicate whether they already own it.
As such, the call places (backend.c and stream.c) now do not take it
anymore, queue.c was unchanged since it's already held, and both "first"
and "leastconn" were updated to take it if not already held.
A quick test on the "first" algo showed a jump from 432 to 565k rps by
just dropping the lock in stream.c!
The remaining contention on the server lock solely comes from
sess_change_server() which takes the lock to add and remove a
stream from the server's actconn list. This is both expensive
and pointless since we have mt-lists, and this list is only
used by the CLI's "shutdown server sessions" command!
Let's migrate to an mt-list and remove the need for this costly
lock. By doing so, the request rate increased by ~1.8%.
The server lock was taken preventively for anything in health_adjust(),
including the static config checks needed to detect that the lock was not
needed, while the function is always called on the response path to update
a server's status. This was responsible for huge contention causing a
performance drop of about 17% on 16 threads. Let's move the lock only
where it should be, i.e. inside the function around the critical sections
only. By doing this, a 16-thread process jumped back from 575 to 675 krps.
This should be backported to 2.3 as the situation degraded there, and
maybe later to 2.2.
There's an issue when a server state changes, we use an integer comparison
to decide whether or not to reschedule a test instead of using a wrapping
timer comparison. This will cause some health-checks not to be immediately
triggered half of the time, and some unneeded calls to task_queue() to be
performed in other cases.
This bug has always been there as it was introduced with the commit that
added the feature, 97f07b832 ("[MEDIUM] Decrease server health based on
http responses / events, version 3"). This may be backported everywhere.
conn_hash_prehash does not need a nul-terminated string, thus it is only
needed to test if the sni sample is not null before using it as
connection hash input.
Moreover, a bug could be introduced between smp_make_safe and
ssl_sock_set_servername call. Indeed, smp_make_safe may call smp_dup
which duplicates the sample in the trash buffer. If another function
manipulates the trash buffer before the call to ssl_sock_set_servername,
the sni sample might be erased. Currently, no function seems to do that
except make_proxy_line in case proxy protocol is used simultaneously
with the sni on the server.
This does not need to be backported.
In session_count_new() the tracked counter was still incremented with
a "++" outside of any lock, resulting in occasional slightly off values
such as the following:
# table: foo, type: string, size:1000, used:1
0xb2a398: key=127.1.2.3 use=0 exp=86398318 sess_cnt=999959 http_req_cnt=1000004
Now with the correct atomic increment:
# table: foo, type: string, size:1000, used:1
0x7f82a4026d38: key=127.1.2.3 use=0 exp=86399294 sess_cnt=1000004 http_req_cnt=1000004
This can be backported to 1.8.
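A hedged illustration of the fix, using the GCC builtin rather than
haproxy's own atomic wrappers:

  #include <stdint.h>

  static inline void bump_sess_cnt(uint64_t *counter)
  {
      /* (*counter)++ may lose updates when several threads track the same
       * entry; an atomic add does not */
      __atomic_add_fetch(counter, 1, __ATOMIC_RELAXED);
  }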
It seems that fd_delete performs the close of the file descriptor,
so we must not close the fd once again after that.
This should fix issues #1128, #1130 and #1131
This patch fixes a case which should never happen when writing to the
output channel, since we check the available room beforehand.
This patch should fix github issue #1132
This patch fixes the return code in case dns_connect_server() is called
on an unsupported type (which should not happen). Doing this we have
the guarantee that after a return of 0 the fd is never -1.
This patch should fix github issues #1127, #1128 and #1130
This patch adds a missing test in dns_session_io_handler when getting
the query id from the buffer of the ring. An error should never
happen since messages are completely added atomically.
This patch should fix github issue #1133
Move the listen status to a helper, defining both the status enum and its
string representation.
This will be helpful for reuse in the prometheus code. It also removes
a hard-to-read nested ternary.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The prometheus approach requires outputting all values for a given metric
name, meaning we iterate through all metrics, and then iterate in the
inner loop on all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
From this patch it should be possible to add support for listen stats in
prometheus.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The RMAINT admin state is dynamic and should be removed from the
srv_admin_state parameter when a server state is loaded from a server-state
file. Otherwise an error is reported, the server-state line is ignored and
the server state is not updated.
This patch should fix the issue #576. It must be backported as far as 1.8.
This patch introduces the new line "server" to set a TCP
nameserver in a "resolvers" section:
server <name> <address> [param*]
Used to configure a DNS TCP or stream server. This supports all
"server" parameters found in section 5.2. Some of these parameters
are irrelevant for DNS resolving. Note: currently 4 queries are pipelined
on the same connection. A batch of idle connections is removed every
5 seconds. "maxconn" can be configured to limit the amount of those
concurrent connections, and TLS should also be usable if the server supports
it. The current implementation is limited to 4 pipelined transactions per
connection.
The name of the line in configuration is open to discussion
and could be changed before the next release.
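For illustration, a hypothetical section using it could look like this
(the section name, addresses and maxconn value are purely illustrative):

  resolvers mydns
    nameserver ns_udp 192.168.0.53:53
    server     ns_tcp 192.168.0.53:53 maxconn 16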
This patch introduces "dns_stream_nameserver" to use DNS over
TCP on strict nameservers. For the upper layer it is analogous to
the API used with UDP nameservers, except that the user must switch
the nameserver into "stream" mode at init using "dns_stream_init".
The fallback from UDP to TCP is not handled and this is not the
purpose of this feature. This is done to choose the transport layer
during the initialization.
Currently there is a hardcoded limit of 4 pipelined transactions
per TCP connection. A batch of idle connections is expired every 5s.
This code is designed to support a maximum DNS message size on TCP: 64k.
Note: this code won't perform retries on unanswered queries; this
should be handled by the upper layer.
This patch splits the current dns.c into two files:
The first dns.c contains code related to DNS message exchange over UDP
and, in the future, TCP. We try to remove dependencies on resolving
to make it usable by other things such as DNS load balancing.
The new resolvers.c inherits the code specific to the actual
resolvers.
Note:
It was really difficult to obtain a clean diff due to the amount
of moved code.
Note2:
Counters and stuff related to stats are not cleanly separated because
currently counters for both layers are merged and hard to separate
for now.
This patch splits the recv and send functions into two layers. The
lowest is responsible for DNS message transactions over
the network. Doing this, we could use the DNS message layer
for something other than resolving, load balancing for instance.
This patch also re-works the way to init a nameserver and
introduces the new struct dns_dgram_server to prepare the arrival
of dns_stream_server and the support of DNS over TCP.
The way to retry a send failure of a request because of EAGAIN
was re-worked. Previously there was no control and all "pending"
queries were re-played each time an EAGAIN was hit. This
patch introduces a ring to stack messages in case of a send
failure. This ring is emptied when the poller shows that the
socket is ready again to push messages.
Counters are currently stored in the low-level nameserver structs but
most of them are resolving-layer data and are increased in the upper layer.
So this patch renames the prototypes used to allocate/dump them with the
'resolv' prefix, waiting for a clean split.
Some types are specific to the resolver code and are renamed using
the 'resolv' prefix instead of 'dns'.
-struct dns_query_item {
+struct resolv_query_item {
-struct dns_answer_item {
+struct resolv_answer_item {
-struct dns_response_packet {
+struct resolv_response {
Resolv callbacks are also updated to rely on counters and not on
nameservers.
"show stat domain dns" will now show the parent id (i.e. resolvers
section name).
FreeBSD has a kernel feature (accf) and a sockopt flag similar to the
Linux's TCP_DEFER_ACCEPT to filter incoming data upon ACK. The main
difference is the filter needs to be placed when the socket actually
listens.
Very often, especially since reg-tests, it would be desirable to be able
to conditionally comment out a config block, such as removing an SSL
binding when SSL is disabled, or enabling HTX only for certain versions,
etc.
This patch introduces a very simple nested block management which takes
".if", ".elif", ".else" and ".endif" directives to take or ignore a block.
For now the conditions are limited to an empty string or "0" for false versus
a non-zero integer for true, which already suffices to test environment
variables. Still, it needs to be a bit more advanced with defines, versions
etc.
A set of ".notice", ".warning" and ".alert" statements are provided to
emit messages, often in order to provide advice about how to fix certain
conditions.
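As an illustration, such a block could conditionally enable an SSL bind based
on an environment variable (the variable name, certificate path and exact
quoting are assumptions, not taken from the patch):

  frontend fe_main
    bind :8080
  .if "$USE_SSL"
    bind :8443 ssl crt /etc/haproxy/site.pem
  .else
    .notice "USE_SSL not set, skipping the SSL bind"
  .endif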
The "show peers" output has become huge due to the dictionaries making it
less readable. Now this feature has reached a certain level of maturity
which doesn't warrant to dump it all the time, given that it was essentially
needed by developers. Let's make it optional, and disabled by default, only
when "show peers dict" is requested. The default output reminds about the
command. The output has been divided by 5 :
$ socat - /tmp/sock1 <<< "show peers dict" | wc -l
125
$ socat - /tmp/sock1 <<< "show peers" | wc -l
26
It could be useful to backport this to recent stable versions.
This variable is now only used to point on the local server-state file. When
the server-state is global, it is unused. So, we now use "localfilepath"
instead. Thus, the "filepath" variable can safely be removed.
When a local server-state file is loaded, if its name is too long, the error
is not properly handled, resulting in a call to fopen() with the "filepath"
variable set to NULL. To fix the bug, when this error occurs, we jump to the
next proxy, via a "continue" statement. And we take care to set the "filepath"
variable after the error handling, to be sure.
This patch should fix the issue #1111. It must be backported as far as 1.6.
When a connect rule is evaluated, a test is performed on the "port" variable
while it is set to 0 on the line just above. Just remove this useless
test to make cppcheck happy.
This patch fixes the issue #1113.
Now it becomes possible to specify "from foo" on a frontend/listen/backend
or even on a "defaults" line, to mention that defaults section "foo" needs
to be used to preset the proxy's settings.
When not set, the last section remains used. In case the designated name
is found at multiple places, it is rejected and an error indicates two
occurrences of the same name. Similarly, if a section name is specified, it
must only use valid characters. This allows multiple named
defaults sections to continue to coexist without the risk that they will
cause trouble by accident.
When it comes to "defaults" relying on another defaults, what happens is
just that a new defaults section is created from the designated one. This
will make it possible for example to reuse some settings such as log-format
like below:
defaults tcp-clear
log stdout local0 info
log-format "%ci:%cp/%b/%si:%sp %ST %ts %U/%B %{+Q}r"
defaults tcp-ssl
log stdout local0 info
log-format "%ci:%cp/%b/%si:%sp %ST %ts %U/%B %{+Q}r ssl=%sslv"
defaults http-clear from tcp-clear
mode http
defaults http-ssl from tcp-ssl
mode http
frontend fe1 from http-clear
bind :8001
frontend fe2 from http-ssl
bind :8002
A small corner case remains in the error detection, if a second defaults
section appears with the same name after the point where it was used, and
nobody references it, the duplicate will not be detected. This could be
addressed by performing the syntactic checks in check_config_validity(),
and by postponing the freeing of the defaults, after tagging a defaults
section as explicitly looked up by another section. This doesn't seem
that important at the moment though.
Now default proxies are stored into a dedicated tree, sorted by name.
Only unnamed entries are not kept upon new section creation. The very
first call to cfg_parse_listen() will automatically allocate a dummy
defaults section which corresponds to the previous static one, since
the code requires to have one at a few places.
The first immediately visible benefit is that it allows to reuse
alloc_new_proxy() to allocate a defaults section instead of doing it by
hand. And the secret goal is to allow to keep multiple named defaults
section in memory to reuse them from various proxies.
Now we'll have a tree of named defaults sections. The regular insertion
and lookup functions take care of the capability in order to select the
appropriate tree. A new function proxy_destroy_defaults() removes a
proxy from this tree and frees it entirely.
In order to make the default proxy configurable, we'll need to have a
pointer to it which might differ from &defproxy. cfg_parse_listen()
now gets curr_defproxy for this.
We don't want to expose this one anymore as we'll soon keep multiple
default proxies. Let's move it inside the parser which is the only
place which still uses it, and initialize it on the fly once needed
instead of doing it at boot time.
The default proxy was passed as a variable, which in addition to being
a PITA to deal with in the config parser, doesn't feel safe to use when
it ought to be const.
This will only affect new code so no backport is needed.
The default proxy was passed as a variable, which in addition to being
a PITA to deal with in the config parser, doesn't feel safe to use when
it ought to be const.
This will only affect new code so no backport is needed.
The default proxy was passed as a variable, which in addition to being
a PITA to deal with in the config parser, doesn't feel safe to use when
it ought to be const.
This will only affect new code so no backport is needed.
In proxy_free_defaults(), none of the free() calls was followed by a
pointer reset. Not only it's hard to figure if one of them is duplicated,
but this code started to call other functions which might or might not
rely on such just freed pointers. Let's reset them as they should be to
make sure there will never be any case of use-after-free. The 3 functions
called there were inspected and are all unaffected by this so this remains
safe to do right now.
This used to be open-coded in cfgparse-listen.c when facing a "defaults"
keyword. Let's move this into proxy_free_defaults(). This code is ugly and
doesn't even reset the just freed pointers. Let's not change this yet.
This code should probably be merged with a generic proxy deinit function
called from deinit(). However there's a catch on uri_auth which cannot be
freed because it might be used by one or several proxies. We definitely
need refcounts there!
The proxy initialization code relies on three phases, allocation,
pre-initialization, and assignments from defaults. This last part is
entirely taken from the defaults proxy when arguments are set. This
noticeably complicates the initialization code as it requires always
having a default proxy.
This patch instead first applies the original default settings on a
proxy, and then uses those from a default proxy only if one such is
used. This will allow to initialize a proxy out of any default proxy
while still using valid defaults. A careful inspection of the function
showed that only 4 fields used to be set regardless of the default
proxy, and those were moved to init_new_proxy() where they ought to
have been in the first place.
This new function takes over the old open-coding that used to be done
for too long in cfg_parse_listen() and it now does everything at once
in a proxy-centric function. The function does all the job of allocating
the structure, initializing it, presetting its defaults from the default
proxy and checking for errors. The code was almost unchanged except for
defproxy being passed as a pointer, and the error message being passed
using memprintf().
This change will be needed to ease reuse of multiple default proxies,
or to create dynamic backends in a distant future.
init_default_instance() was still left in cfgparse.c which is not the
best place to pre-initialize a proxy. Let's place it in proxy.c just
after init_new_proxy(), take this opportunity for renaming it to
proxy_preset_defaults() and taking out init_new_proxy() from it, and
let's pass it the pointer to the default proxy to be initialized instead
of implicitly assuming defproxy. We'll soon be able to exploit this.
Only two call places had to be updated.
This is just an API bug but it's annoying when trying to tidy the code.
The source list passed in argument must be a const and not a variable,
as it's typically the list head from a default proxy and must obviously
not be modified by the function. No backport is needed as it only impacts
new code.
This is just an API bug but it's annoying when trying to tidy the code.
The default proxy passed in argument must be a const and not a variable.
No backport is needed as it only impacts new code.
The very old error message indicating that a proxy name is mandatory
still had a reference to the optional addr:port argument while this one
is explicitly rejected a few lines later since at least 1.9.
This is harmless but confusing. This can be backported to 2.0.
In 2.1, commit ee4f5f83d ("MINOR: stats: get rid of the ST_CONVDONE flag")
introduced a subtle bug. By testing curproxy against defproxy in
check_config_validity(), it tried to eliminate the need for a flag
to indicate that stats authentication rules were already compiled,
but by doing so it left the issue opened for the case where a new
defaults section appears after the two proxies sharing the first
one:
defaults
mode http
stats auth foo:bar
listen l1
bind :8080
listen l2
bind :8181
defaults
# just to break above
This config results in:
[ALERT] 042/113725 (3121) : proxy 'f2': stats 'auth'/'realm' and 'http-request' can't be used at the same time.
[ALERT] 042/113725 (3121) : Fatal errors found in configuration.
Removing the last defaults remains OK. It turns out that the cleanups
that followed that patch render it useless, so the best fix is to revert
the change (with the up-to-date flags instead). The flag was marked as
belonging to the config. It's not exact but it's the closest to
reality, as it's not there to configure the behavior but to mention
that the config parser did its job.
This could be backported as far as 2.1, but in practice it looks like
nobody ever hit it.
Since 1.3.14, with commit 1fa3126ec ("[MEDIUM] introduce separation
between contimeout, and tarpit + queue"), check_config_validity() looks
at the last defaults section to update all proxies' queue and tarpit
timeouts if they were not set!
This was apparently an attempt to properly set them on the fallback values,
except that the fallback values were taken from the default proxy before
looking at the current proxy itself. The worst part of it is that it might
have randomly worked by accident for some configurations when there was a
single defaults section, but has certainly caused too short queue
expirations once another defaults section was added later in the file with
these explicitly defined.
Let's remove the defproxy part and keep only the curproxy ones. This could
be backported everywhere, the bug has been there for 13 years.
Since the beginning, this directive is documented to accept an optional file
name. But it should also be possible to use it without any argument to use
the backend name as the file name. However, when no argument is provided, an
error is reported during the configuration parsing, requesting an argument: a
file name or "use-backend-name". And this last special argument is not
documented.
So, to respect the documentation and to avoid configuration breakages, all
modes are now supported. If this directive is called with no argument or
with "use-backend-name", the backend name is used as the file name for the
server-state file. Otherwise, the provided string is used.
In addition, we take care to release any previously allocated file name in
case this directive is defined multiple times in the same backend. And an
error is reported if more than one argument is defined. Finally, the
documentation is updated accordingly. Sections supporting this directive are
also mentioned.
This patch should be backported as far as 1.6.
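Assuming the directive in question is server-state-file-name, the accepted
forms would look like this (the backend and file names are illustrative):

  backend be_app
    # no argument: the backend name ("be_app") is used as the file name
    server-state-file-name
    # explicit equivalent of the above
    # server-state-file-name use-backend-name
    # or an explicit file name
    # server-state-file-name be_app.state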
Server health checks and agent parameters are written the same way as
others to be able to enhance code reuse: basically we make use of
parsing and assignment at the same place. It makes it difficult for
error handling to know whether the srv object was partially modified or not.
The problem was already present with SRV resolution though.
I was a bit puzzled about the approach to take, to be honest, and I did
not want to go into a full refactor, so I assumed it was ok to simply
notify whether the line failed or was partially applied.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Logical followup of the CLI commands addition, so that the server state
file stays compatible with the changes made at runtime; use the previously
added helper to load server attributes.
Also allocate a specific chunk to avoid mixing with other called functions
using it.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Even if it is possibly too much work for the current usage, it makes
sure we don't break states file from v2.3 to v2.4; indeed, since v2.3,
we introduced two new fields, so we put them aside to guarantee we can
easily reload from a version 1.
The diff seems huge but there is no specific change apart from:
- introduce v2 where it is needed (parsing, update)
- move away from switch/case in update to be able to reuse code
- move srv lock to the whole function to make it easier
This patch confirms how painful it is to maintain this functionality.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This patch allows setting the agent port at runtime. In order to align with
both `addr` and `check-addr` commands, also add the possibility to
optionally set the port on the `agent-addr` command. This led to a small
refactor in order to use the same function for both `agent-addr` and
`agent-port` commands.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This patch allows setting the server health check address at runtime. In
order to align with the `addr` command, also allow setting the port optionally.
This led to a small refactor in order to use the same function for both
`check-addr` and `check-port` commands.
For `check-port`, however, we don't permit the change anymore if checks
are not enabled on the server.
This command becomes more and more useful for people having a consul
like architecture:
- the backend server is located on a container with its own IP
- the health checks are done by the consul instance located on the host
with the host IP
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Use the proxy protocol frame if proxy protocol is activated on the
server line. Do not add anymore these connections in the private list.
If some requests are made with the same proxy fields, they can reuse
the idle connection.
The reg-test proxy_protocol_send_unique_id must be adapted as it
relied on the side-effect behavior that every request from a same
connection reused a private server connection. Now, a new connection is
created as expected if the proxy protocol fields differ.
The source address is used as an input to the server connection hash. The
address and port are used as separate hash inputs. Do not add these
connections to the private list anymore.
This parameter is set only if used in the transparent-proxy mode.
The destination address is used as an input to the server connection hash. The
address and port are used as separate hash inputs. Note that they are not used
when statically specified on the server line. This is only useful for a dynamic
destination address.
This is typically used when the server address is dynamically set via the
set-dst action. The address and port are separate hash parameters.
Most notably, it should fix the set-dst use case (cf github issue #947).
Change the API of the function used to allocate the stream target
address. This is done in order to be able to allocate the destination
address and use it for connection reuse with the same address.
In particular, the stream flag SF_ADDR_SET is now set outside of the
function.
The sni parameter is an input to the server connection hash. Do not add
connections with a dynamic sni to the private list anymore. Thus, it is
now possible to reuse a server connection if they use the same sni.
Compare the connection hash when reusing a connection from the session.
This ensures that a private connection is reused only if it shares the
same set of parameters.
The pointer of the target server is used as the first parameter for the
server connection hash calculation. This prevents the hash from being null
when no specific parameters are present, and can serve as a simple defense
against an attacker trying to reuse a non-conform connection.
This is a preliminary work for the calculation of the backend connection
hash. A structure conn_hash_params is the input for the operation,
containing the various specific parameters of a connection.
The high bits of the hash will reflect the parameters present as input.
A set of macros is written to manipulate the connection hash and extract
the parameters/payload.
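A minimal standalone sketch of packing a "which parameters were hashed"
bitmap into the high bits of the 64-bit hash (macro and helper names are
illustrative, not the haproxy ones):

  #include <stdint.h>

  #define HASH_PARAMS_SHIFT 56   /* keep the top 8 bits for the parameter bitmap */

  static inline uint64_t conn_hash_finalize(uint64_t payload_hash, uint8_t param_flags)
  {
      return (payload_hash & ((1ULL << HASH_PARAMS_SHIFT) - 1))
             | ((uint64_t)param_flags << HASH_PARAMS_SHIFT);
  }

  static inline uint8_t conn_hash_get_params(uint64_t hash)
  {
      return (uint8_t)(hash >> HASH_PARAMS_SHIFT);
  }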
With http-reuse always, if no matching safe connection is found, check
in idle tree for a matching one. This is needed because now idle
connections can be differentiated from each other.
If only the safe tree was checked because not empty, but did not contain
a matching connection, we could miss matching entry in idle tree.
If no matching connection is found on available, check on idle/safe
trees for a matching one. This is needed because now idle connections
can be differentiated from each other.
If only the available list was checked because not empty, but did not
contain a matching connection, we could miss matching entries in idle or
safe trees.
The server idle/safe/available connection lists are replaced with ebmb-
trees. This is used to store backend connections, with the new connection
hash field as the key. The hash is an 8-byte field, used to
reflect specific connection parameters.
This is a preliminary work to be able to reuse connection with SNI,
explicit src/dst address or PROXY protocol.
This is a preparation work for connection reuse with sni/proxy
protocol/specific src-dst addresses.
Protect every access to the idle conn lists with a lock. This is currently,
strictly speaking, not needed because accesses to the lists are made with
atomic operations. However, to be able to reuse connections with specific
parameters, the list storage will be converted to eb-trees. As this
structure does not have atomic operations, it is mandatory to protect it
with a lock.
For this, the takeover lock is reused. Its role was to protect during
connection takeover. As it is now extended to general idle conns usage,
it is renamed to idle_conns_lock. A new lock section is also
instantiated named IDLE_CONNS_LOCK to isolate its impact on performance.
The wrong lock seems to be held when trying to remove another thread's
connection if the max fd limit has been reached (the current thread's lock
was taken instead of the target thread's).
This could be backported up to 2.0.
This patch removes unnecessary tests on the p or pp pointers in the
pendconn_process_next_strm() function. This should make cppcheck happy and
avoid a false report of null pointer dereference.
This patch should fix the issue #1036.
this is pure cleanup, no need to backport
2116 if ((end - 1) == (payload + strlen(PAYLOAD_PATTERN))) {
2117 /* if the payload pattern is at the end */
2118 s->pcli_flags |= PCLI_F_PAYLOAD;
CID 1399833 (#1 of 1): Unused value (UNUSED_VALUE) assigned_value: Assigning value from reql to ret here, but that stored value is overwritten before it can be used.
2119 ret = reql;
2120 }
This patch fixes the issue #1048.
When an invalid character is found during parsing in the parse_dotted_uints()
function, the allocated array of uints must be released. This patch fixes a
memory leak on the error path during configuration parsing.
This patch should fix the issue #1106. It should be backported as far as
2.0. Note that, for 2.1 and 2.0, the function is in src/standard.c
In the H1, H2 and FCGI muxes, b_realign_if_empty() is called to reset the head
of an empty buffer before it is set to a specific value to permit zero-copy.
Since the head is explicitly set just afterwards, the call to
b_realign_if_empty() is useless and can be removed.
When a message is sent, an extra check is performed when the parser is
switched to the MSG_DONE state to be sure the EOM flag is really set. This flag is
quite new and replaces the EOM block. Thus, this test is a safeguard waiting
for a proper refactoring of the outgoing side.
In the H2 mux, when an empty DATA frame is used to finish a message, just to
set the ES flag, we now only set the EOM flag on the HTX message. However,
if the HTX message is empty, this event will not be properly handled on the
other side because there is no effective data to handle. Thus, it is
interpreted as an abort by the H1 mux.
It is in part caused by the current H1 mux design but also because there is
no way to emit an empty HTX block (NOOP HTX block) or to wake up a mux for sending
when there is no data to finish some internal processing.
Thus, for now, to work around this limitation, an EOT HTX block is added by
the H2 mux if an EOM flag is added on an empty HTX message. This case is only
possible when an empty DATA frame with the ES flag is received.
This fix is specific for 2.4. No backport needed.
In HTTP/2, we may have trailers for messages with a Content-length
header. Thus, when the H2 mux receives a HEADERS frame at the end of a
message, it always emits TLR and EOT HTX blocks. On the H1 mux, if this
happens, these blocks are just skipped because we cannot emit trailers for a
non-chunked message. But the EOT HTX block must not be blindly
ignored. Indeed, there is no longer an EOM HTX block to mark the end of the
message. Thus the EOT block, when found, is the end of the message. So we
must handle it to switch to the MSG_DONE state.
This fix is specific for 2.4. No backport needed.
When payload is received for a bodyless response, for instance a response to
a HEAD request, it is silently skipped. Unfortunately, when this happens,
the end of the message is not properly handled. The response remains in the
MSG_DATA state (or MSG_TRAILERS if the message is chunked). In addition,
when a zero-copy is possible, the data are not removed from the channel
buffer and the H1 connection is killed because an error is then triggered.
To fix the bug, the zero-copy is disabled for bodyless responses. It is not
a problem because there is no copy at all. And the last block (DATA or EOT)
is now properly handled.
This bug was introduced by the commit e5596bf53 ("MEDIUM: mux-h1: Don't emit
any payload for bodyless responses").
This fix is specific for 2.4. No backport needed.
During the message parsing, if in MSG_DONE state, the CS_FL_EOI flag must
always be set on the conn-stream if the following conditions are met :
* It is a response or
* It is a request but not a protocol upgrade nor a CONNECT.
For now, there is no test on the message type (request or response). Thus
the CS_FL_EOI flag is not set for a response with a "Connection: upgrade"
header but not a 101 response.
This bug was introduced by the commit 3e1748bbf ("BUG/MINOR: mux-h1: Don't
set CS_FL_EOI too early for protocol upgrade requests"). It was backported
as far as 2.0. Thus, this patch must also be backported as far as 2.0.
If internal error is reported by the mux during HTTP request parsing, the
HTTP error counter should not be incremented. It should only be incremented
on parsing error to reflect errors caused by clients.
This patch must be backported as far as 2.0. During the backport, the same
must be performed for 408-request-time-out errors.
The HTTP error counter reflects the number of errors caused by
clients. Thus, in the H1 mux, it should only be incremented on parsing errors.
This fix is specific for 2.4. No backport needed.
Historically we've been counting lots of client-triggered events in stick
tables to help detect misbehaving ones, but we've been missing the same on
the server side, and there's been repeated requests for being able to count
the server errors per URL in order to precisely monitor the quality of
service or even to avoid routing requests to certain dead services, which
is also called "circuit breaking" nowadays.
This commit introduces http_fail_cnt and http_fail_rate, which work like
http_err_cnt and http_err_rate in that they respectively count events and
their frequency, but they only consider server-side issues such as network
errors, unparsable and truncated responses, and 5xx status codes other
than 501 and 505 (since these ones are usually triggered by the client).
Note that retryable errors are purposely not accounted for, so that only
what the client really sees is considered.
With this it becomes very simple to put some protective measures in place
to perform a redirect or return an excuse page when the error rate goes
beyond a certain threshold for a given URL, and give more chances to the
server to recover from this condition. Typically it could look like this
to bypass a URL causing more than 10 failures per second:
  stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
  http-request track-sc0 base  # track host+path, ignore query string
  http-request return status 503 content-type text/html \
      lf-file excuse.html if { sc0_http_fail_rate gt 10 }
A more advanced mechanism using gpt0 could even implement high/low rates
to disable/enable the service.
Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
The sleep time calculation in next_event_delay() was wrong because it
was dividing 999 by the number of pending events, and was directly
responsible for an observation made a long time ago that listeners
would eat all the CPU when hammered while globally rate-limited,
because the more queued events there were, the less it would wait, and it would
ignore the configured frequency to compute the delay.
This was addressed in various ways in listeners through the switch to
the FULL state and the wakeup of manage_global_listener_queue() that
avoids this fast loop, but the calculation made there remained wrong
nevertheless. It's even visible with this patch that the accept
frequency is much more accurate at low values now; for example,
configuring a maxconnrate of 10 would give between 8.99 and 11.0 cps
before this patch and between 9.99 and 10.0 with it.
Better fix it now in case it's reused anywhere else and causes confusion
again. It may be backported but is probably not worth it.
When adding the server side support for certificate update over the CLI
we encountered a design problem with the SSL session cache which was not
locked.
Indeed, once a certificate is updated we need to flush the cache, but we
also need to ensure that the cache is not used during the update.
To prevent the use of the cache during an update, this patch introduces an
rwlock for the SSL server session cache.
In the SSL session part, this patch only takes the read lock, even when it writes.
The reason behind this is that in the session part, there is one cache
storage per thread so it is not a problem to write in the cache from
several threads. The problem is only when trying to write in the cache
from the CLI (which could be on any thread) when a session is trying to
access the cache. So there is a write lock in the CLI part to prevent
simultaneous access by a session and the CLI.
This patch also removes the thread_isolate attempt which was eating too
much CPU time and was not protecting against the use of a freed pointer in
the session.
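A rough sketch of the locking pattern described above (the variable and
function names are made up for this example):
  #include <haproxy/thread.h>

  static HA_RWLOCK_T ssl_srv_sess_cache_lock;

  /* session side: one cache storage per thread, so the read lock is enough
   * even when writing into this thread's own slot.
   */
  void srv_sess_cache_store(void)
  {
      HA_RWLOCK_RDLOCK(OTHER_LOCK, &ssl_srv_sess_cache_lock);
      /* store the session into this thread's cache slot */
      HA_RWLOCK_RDUNLOCK(OTHER_LOCK, &ssl_srv_sess_cache_lock);
  }

  /* CLI side: may touch any thread's slot, so take the write lock to avoid
   * racing with sessions reading or writing their own slots.
   */
  void cli_flush_srv_sess_cache(void)
  {
      HA_RWLOCK_WRLOCK(OTHER_LOCK, &ssl_srv_sess_cache_lock);
      /* flush the cached sessions built from the previous SSL_CTX */
      HA_RWLOCK_WRUNLOCK(OTHER_LOCK, &ssl_srv_sess_cache_lock);
  }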
Both SSL_CTX_set_msg_callback and SSL_CTRL_SET_MSG_CALLBACK have been defined
since ea262260469e49149cb10b25a87dfd6ad3fbb4ba, so we can safely switch to that
guard instead of checking the OpenSSL version.
Because of a buggy test when processing the EOH HTX block, an extra CRLF is
added for empty chunked messages. This bug was introduced by the commit
d1ac2b90c ("MAJOR: htx: Remove the EOM block type and use HTX_FL_EOM
instead").
This fix is specific for 2.4. No backport needed.
The special guard macro HAVE_SSL_CTX_ADD_SERVER_CUSTOM_EXT was defined earlier
exactly for guarding SSL_CTX_add_server_custom_ext; let us use it wherever
appropriate.
HAVE_SSL_CTX_ADD_SERVER_CUSTOM_EXT was introduced in ec60909871;
however it was defined as HAVE_SL_CTX_ADD_SERVER_CUSTOM_EXT (missing "S").
Let us fix the typo.
The demux loop could quit on missing data but the H2_CF_END_REACHED flag
would not be set in this case. This fixes a remaining situation where
previous commit f09612289 ("BUG/MEDIUM: mux-h2: handle remaining read0
cases") could not be sufficient and still leave CLOSE_WAIT. It's harder
to reproduce but was still observed in prod.
Now we quit via the end of the loop which already takes care of shutr.
This should be backported along with the patch above as far as 2.0.
If allocating a connection object failed right after a successful accept
on a listener, the new file descriptor was not properly closed.
This fixes GitHub issue #905.
It can be backported to 2.3.
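The shape of the fix is simply to release the fd on the allocation error
path, something like this (illustrative only, not the actual code):
  #include <stdlib.h>
  #include <unistd.h>

  int handle_accepted_fd(int cfd)
  {
      struct my_conn { int fd; } *conn = calloc(1, sizeof(*conn));

      if (!conn) {
          close(cfd);   /* this close was the missing part */
          return -1;
      }
      conn->fd = cfd;
      /* ... continue initialisation ... */
      return 0;
  }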
During error files conversion to HTX message, in http_str_to_htx(), if a
file is empty, the corresponding buffer's area is initialized with a
malloc(0) and its size is set to 0. There is no problem here. The behaviour
is totally defined. But it is not really intuitive. Instead, we can simply
set the area to NULL.
This patch should fix the issue #1022.
Commit 3d4631fec ("BUG/MEDIUM: mux-h2: fix read0 handling on partial
frames") tried to address an issue introduced in commit aade4edc1 where
read0 wasn't properly handled in the middle of a frame. But the fix was
incomplete for two reasons:
- first, it would set H2_CF_RCVD_SHUT in h2_recv() after detecting
a read0 but the condition was guarded by h2_recv_allowed() which
explicitly excludes read0 ;
- second, h2_process would only call h2_process_demux() when there
were still data in the buffer, but closing after a short pause to
leave a buffer empty wouldn't be caught in this case.
This patch fixes this by properly taking care of the received shutdown
and by also waking up h2_process_demux() on an empty buffer if the demux
is not blocked.
Given the patches above were tagged for backporting to 2.0, this one
should be as well.
FD dumps are not always easy to match against netstat dumps, and often
require an lsof as a third dump. Let's emit the socket family, and the
local and remote ports when the FD is an IPv4/IPv6 socket; this will
significantly ease the matching.
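The extra information can be obtained from the FD itself, roughly like this
(simplified, IPv4 only; the real dump also handles IPv6):
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>

  static void dump_fd_sock_info(int fd)
  {
      struct sockaddr_storage loc, rem;
      socklen_t ll = sizeof(loc), rl = sizeof(rem);

      if (getsockname(fd, (struct sockaddr *)&loc, &ll) == 0 &&
          loc.ss_family == AF_INET)
          printf(" fam=ipv4 lport=%u",
                 ntohs(((struct sockaddr_in *)&loc)->sin_port));

      if (getpeername(fd, (struct sockaddr *)&rem, &rl) == 0 &&
          rem.ss_family == AF_INET)
          printf(" rport=%u",
                 ntohs(((struct sockaddr_in *)&rem)->sin_port));
  }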
The CO_FL_EARLY_SSL_HS flag was unconditionally set on the connection,
resulting in SSL_read_early_data() always being used first in handshake
calculations. While this seems to work well (probably because there are
fallback paths inside OpenSSL), it's particularly confusing and makes
debugging quite complicated. It is possibly not optimal, by the way.
This flag ought to be set only when early_data is configured on the bind
line. Apparently there used to be a good reason for doing it this way in
1.8 times, but it really does not make sense anymore. It may be OK to
backport this to 2.3 if this helps with troubleshooting, but better not
go too far as it's unlikely to fix any real issue while it could introduce
some in old versions.
In the same manner as agentaddr, we now:
- permit setting agentport through the `port` keyword, as is the case
for agentaddr through `addr`
- set the priority on the `agent-port` keyword when used
- add a flag to be able to test when the value is set, like for agentaddr
It makes the behaviour of `addr` and `port` more consistent.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
A small consistency problem with the `addr` and `agent-addr` options:
for both options, the last one parsed is always used to set the
agent-check addr. Thus these two lines don't have the same behavior:
  server ... addr <addr1> agent-addr <addr2>
  server ... agent-addr <addr2> addr <addr1>
After this patch, `agent-addr` will always take priority over
`addr`. It means we test the flag before setting agentaddr.
We also fix all the places where we did not set the flag, to be coherent
everywhere.
I was not really able to determine where this issue comes from, so
it is probable we may backport it to all stable versions where the agent
is supported.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
We can currently change the check-port using the cli command `set server
check-port` but there is a consistency issue when using server state.
This patch aims to fix this problem but will also be a good preparation
work to get rid of the checkport flag, so we are able to know when checkport
was set by the config.
I am fully aware this is not making github #953 moving forward, I
however think this might be acceptable while waiting for a proper
solution and resolve consistency problem faced with port settings.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
While trying to fix some consistency problems with the config file/cli
(e.g. the check-port cli command does not set the flag), we realised the
checkport flag was not necessarily needed. Indeed tcpcheck uses the service
port as the last choice if check.port is zero. So we can assume that if
check.port is zero, it was never set by the user, regardless of whether
it was done via the cli or the config file. In the long term, this will avoid
introducing a new consistency issue if we forget to set the flag.
In the same manner as the checkport flag, we don't really need the checkaddr
flag. We can assume that if checkaddr is not set, it was never set
by the user or the config.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
When a server dns resolution is performed, there is no reason to set an
unconfigured check port with the server port. Because by default, if the
check port is not set, the server's one is used. Thus we can remove this
useless assignment. It is mandatory for next improvements.
When the server state is loaded from a server-state file, there is no reason
to set an unconfigured check port with the server port. Because by default,
if the check port is not set, the server's one is used. Thus we can remove
this useless assignment. It is mandatory for next improvements.
While reading `update_server_addr_port` I found out some things which
can be seen as incoherencies. I hope I did not overlook anything:
- one comment states that the check's address should be updated if it uses
the server one; however the condition checks if `SRV_F_CHECKADDR` is
set; this flag is set when a check address is set; the result is that we
override the check address where I was not expecting it. In fact we
don't need to update anything here as the server addr is used when the check
addr is not set.
- same goes for the check agent addr
- for the port, it is a bit different: we update the check port if it is
unset. This is harmless because we also use the server port if the check port
is unset. However it creates some incoherency before/after using this
command, as the check port should stay unset throughout the life of the
process unless it is set by the `set server check-port` command.
It is quite hard to locate the origin of this issue, but the function was
introduced in commit d458adcc52 ("MINOR:
new update_server_addr_port() function to change both server's ADDR and
service PORT"). I was however not able to determine whether this is due
to a change of behavior over the years. So this patch can potentially
be backported up to v1.8, but we must be careful while doing so, as the
code has changed a lot. That being said, the bug not being very
impacting, I would be fine keeping it for 2.4 only.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Flush the SSL session cache when updating a certificate which is used on a
server line. This prevents connections from being established with a cached
session which was using the previous SSL_CTX.
This patch also replaces the ha_barrier with a thread_isolate() since there
are more operations to do. The reg-test was also updated to remove the
'no-ssl-reuse' keyword which is now unneeded.
Duplicate titles for the stats H2_ST_{OPEN,TOTAL}_{CONN,STREAM}. These
entries are used in the CSV output as the heading.
This must be backported up to 2.3.
This fixes the github issue #1102.
As spotted in issue #822, we're having a problem with error detection in
the SSL layer. The problem is that on an overwhelmed machine, accepted
connections can start to pile up, each of them requiring a slow handshake,
and during all this time if the client aborts, the handshake will still be
calculated.
The error controls are properly placed, it's just that the SSL layer
reads records exactly of the advertised size, without having the ability
to encounter a pending connection error. As such if injecting many TLS
connections to a listener with a huge backlog, it's fairly possible to
meet this situation:
12:50:48.236056 accept4(8, {sa_family=AF_INET, sin_port=htons(62794), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_NONBLOCK) = 1109
12:50:48.236071 setsockopt(1109, SOL_TCP, TCP_NODELAY, [1], 4) = 0
(process other connections' handshakes)
12:50:48.257270 getsockopt(1109, SOL_SOCKET, SO_ERROR, [ECONNRESET], [4]) = 0
(proof that error was detectable there but this code was added for the PoC)
12:50:48.257297 recvfrom(1109, "\26\3\1\2\0", 5, 0, NULL, NULL) = 5
12:50:48.257310 recvfrom(1109, "\1\0\1\3"..., 512, 0, NULL, NULL) = 512
(handshake calculation taking 700us)
12:50:48.258004 sendto(1109, "\26\3\3\0z"..., 1421, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = -1 EPIPE (Broken pipe)
12:50:48.258036 close(1109) = 0
The situation was amplified by the multi-queue accept code, as it resulted
in many incoming connections to be accepted long before they could be
handled. Prior to this they would have been accepted and the handshake
immediately started, which would have resulted in most of the connections
waiting in the system's accept queue, and dying there when the client
aborted, thus the error would have been detected before even trying to
pass them to the handshake code.
As a result, with a listener running on a very large backlog, it's possible
to quickly accept tens of thousands of connections and waste time slowly
running their handshakes while they get replaced by other ones.
This patch adds an SO_ERROR check on the connection's FD before starting
the handshake. This is not pretty as it requires to access the FD, but it
does the job.
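The check itself is a plain getsockopt() on the file descriptor, roughly:
  #include <sys/socket.h>

  /* returns a non-zero errno-like value if the connection already died,
   * e.g. ECONNRESET when the client aborted while queued.
   */
  static int conn_pending_error(int fd)
  {
      int err = 0;
      socklen_t len = sizeof(err);

      if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1)
          return -1;
      return err;   /* 0 means the handshake is still worth starting */
  }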
Some improvements should be made over the long term so that the transport
layers can report extra information with their ->rcv_buf() call, or at the
very least, implement a ->get_conn_status() function to report various
flags such as shutr, shutw, error at various stages, allowing an upper
layer to inquire for the relevance of engaging into a long operation if
it's known the connection is not usable anymore. An even simpler step
could probably consist in implementing this in the control layer.
This patch is simple enough to be backported as far as 2.0.
Many thanks to @ngaugler for his numerous tests with detailed feedback.
The "abort ssl cert" command is buggy and removes the current ckch store,
and instances, leading to SNI removal. It must only removes the new one.
This patch also adds a check in set_ssl_cert.vtc and
set_ssl_server_cert.vtc.
Must be backported as far as 2.2.
In order to unify prometheus and stats description, we need to remove
some field references which are specific to the stats implementation:
- `scur` in max current sessions (also reword current session)
- `rate` in max sessions
- `req_rate` in max requests
- `conn_rate` in max connections
Signed-off-by: William Dauchy <wdauchy@gmail.com>
In order to unify prometheus and stats description, we need to clarify
the description for pending connections.
- remove the BE reference in counters struct, as it is also used in
servers
- remove the reference to the `qcur` field in the description as it is
specific to the stats implementation
- try to reword cur and max pending connections description
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The function get_check_status_result() can now be used to get the result
code (CHK_RES_*) corresponding to a check status (HCHK_STATUS_*). It will be
used by the Prometheus exporter when reporting the check status of a server.
During the call to thread_isolate(), some other threads might have
performed some task_wakeup() which will have a call date past the
one we retrieved. It could be avoided by taking the current date
once we're alone but this would significantly affect the latency
measurements by adding the isolation time. Instead we're now only
accounting positive times, so that late wakeups normally appear
with a zero latency.
No backport is needed, this is 2.4.
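In other words, the accounting now looks roughly like this (field names
invented for the example):
  #include <stdint.h>

  static inline void account_latency(uint64_t now_ns, uint64_t wake_date_ns,
                                     uint64_t *lat_total)
  {
      /* a wakeup dated after the snapshot taken under isolation simply
       * counts as a zero latency instead of going negative.
       */
      if (now_ns > wake_date_ns)
          *lat_total += now_ns - wake_date_ns;
  }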
In the h2s_bck_make_req_headers() function, in the loop on the HTX blocks, the
most common blocks, the headers, are now handled first, before the
start-line. The same change was already performed on the response HEADERS
frames. Thus the code is more consistent now.
When a HEADERS frame is sent, it is always when an HTX start-line block is
found. Thus, in h2s_bck_make_req_headers() and h2s_frt_make_resp_headers()
functions, it is useless to test the start-line. Instead of being too
defensive, we use BUG_ON() now because it must not happen and must be
handled as a bug.
This patch should fix the issue #1086.
The list is always defined by definition. Thus there is no reason to test
it. There are also plenty of checks on argument types while they are already
validated during the configuration parsing. But one thing at a time.
This patch should fix the issue #1087.
The sample fetch functions must always be called with a valid argument
list. When called by hand, if there is no argument to pass, empty_arg_list must
be used.
In the stick-table code, there are some calls to smp_fetch_src() with NULL as
argument list. It is changed to use empty_arg_list instead. It is not really a
bug because smp_fetch_src() does not use the argument list. But it is an API
bug.
This patch may be backported to all stable branches as a cleanup.
The h1_process_output() function is never called with no data to send (count ==
0). Thus, the first test on count, at the beginning of the function, is
useless and may be removed. This way, by reading the code, it is obvious the
<chn_htx> variable is always defined.
This patch should fix the issue #1085.
This handler can take quite some time as it deletes a large number of
entries under a lock, let's export it so that it's immediately visible
in "show profiling".
The check I/O handler, process_chk_conn and server_warmup are often
present in complex backtraces as they're impacted by locking or I/O
issues. Let's export them so that they resolve cleanly.
This finally adds the long-awaited solution to inspect the run queues
and figure what is eating the CPU or causing latencies. We can even see
the experienced latencies when profiling is enabled. Example on a
saturated process:
> show tasks
Running tasks: 14983 (4 threads)
  function                      places     %   lat_tot   lat_avg
  process_stream                  4948   33.0    5.840m   70.82ms
  h1_io_cb                        2535   16.9         -         -
  main+0x9e670                    2508   16.7    2.930m   70.10ms
  ssl_sock_io_cb                  2499   16.6         -         -
  si_cs_io_cb                     2493   16.6         -         -
If a user enables profiling by hand, it makes sense to reset the stats
counters to provide fresh new measurements. Therefore it's worth using
this as the standard method to reset counters.
"show profiling" will now dump the stats collected by the scheduler if
profiling was previously enabled. This will immediately make it obvious
what functions are responsible for others' high latencies or which ones
are suffering from others, and should help spot issues like undesired
wakeups.
Example:
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
  function                             calls   cpu_tot   cpu_avg   lat_tot   lat_avg
  si_cs_io_cb                        5569479    23.37s   4.196us         -         -
  h1_io_cb                           5558654    13.60s   2.446us         -         -
  process_stream                      250841    1.476s   5.882us    3.499s   13.95us
  main+0x9e670                           198         -         -   5.526ms   27.91us
  task_run_applet                         17   1.509ms   88.77us   205.8us   12.11us
  srv_cleanup_idle_connections            12   44.51us   3.708us   25.71us   2.142us
  main+0x158c80                            9   48.72us   5.413us         -         -
  srv_cleanup_toremove_connections         5   165.1us   33.02us   123.6us   24.72us
Now when the profiling is enabled, the scheduler will update per-function
task-level statistics on the number of calls, cpu usage and latency, which
can later be checked using "show profiling". This will immediately make it
obvious what functions are responsible for others' high latencies or which
ones are suffering from others, and should help spot issues like undesired
wakeups. For now the stats are only collected but not reported (though they
are readable from sched_activity[] under gdb).
The new sched_activity structure will be used to collect task-level
activity based on the target function. The principle is to declare a
large enough array to make collisions rare (256 entries), and hash
the function pointer using a reduced XXH to decide where to store the
stats. On first computation an entry is definitely assigned to the
array and it's done atomically. A special entry (0) is used to store
collisions ("others"). The goal is to make it easy and inexpensive for
the scheduler code to use these to store #calls, cpu_time and lat_time
for each task.
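A condensed sketch of the mechanism (the multiply below is a stand-in for the
reduced XXH, and the names are illustrative):
  #include <stdint.h>

  #define SCHED_ACT_HASH_BITS 8
  #define SCHED_ACT_ENTRIES   (1 << SCHED_ACT_HASH_BITS)   /* 256 entries */

  struct sched_activity {
      const void *func;     /* task function, assigned atomically on first use */
      uint64_t    calls;
      uint64_t    cpu_time;
      uint64_t    lat_time;
  };

  static struct sched_activity sched_activity[SCHED_ACT_ENTRIES];

  static struct sched_activity *sched_activity_entry(const void *func)
  {
      uint64_t h = (uintptr_t)func * 0x9e3779b97f4a7c15ULL;
      struct sched_activity *sa = &sched_activity[h >> (64 - SCHED_ACT_HASH_BITS)];

      if (sa->func == func)
          return sa;
      if (!sa->func &&
          __sync_bool_compare_and_swap(&sa->func, NULL, func))
          return sa;                  /* entry definitely assigned, atomically */
      return &sched_activity[0];      /* collisions accounted as "others" */
  }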
In 2.0, commit d2d3348ac ("MINOR: activity: enable automatic profiling
turn on/off") introduced an automatic mode to enable/disable profiling.
The problem is that the automatic mode automatically changes to on/off,
which implied that the forced on/off modes aren't sticky anymore. It's
annoying when debugging because as soon as the load decreases, profiling
stops.
This makes a small change which ought to have been done first, which
consists in having two states for "auto" (auto-on, auto-off) to
distinguish them from the forced states. Setting to "auto" in the config
defaults to "auto-off" as before, and setting it on the CLI switches to
auto but keeps the current operating state.
This is simple enough to be backported to older releases if needed.
When reporting some values in debugging output we often need to have
some condensed, stable-length values. This function prints a duration
from nanosecond to years with at least 4 digits of accuracy using the
most suitable unit, always on 7 chars.
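A rough approximation of the idea, picking the largest unit that keeps the
value below 1000 and printing it on a fixed-width field (the real helper is
more precise about the number of significant digits):
  #include <stdint.h>
  #include <stdio.h>

  static void print_short_duration(char *out, size_t sz, uint64_t ns)
  {
      static const struct { uint64_t div; const char *unit; } u[] = {
          { 1ULL,                    "ns" },
          { 1000ULL,                 "us" },
          { 1000000ULL,              "ms" },
          { 1000000000ULL,           "s " },
          { 60ULL * 1000000000ULL,   "m " },
          { 3600ULL * 1000000000ULL, "h " },
      };
      size_t i = 0;

      while (i + 1 < sizeof(u) / sizeof(u[0]) && ns / u[i].div >= 1000)
          i++;
      snprintf(out, sz, "%5.1f%s", (double)ns / (double)u[i].div, u[i].unit);
  }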
Do not consider connection reuse if the available list is not allocated for
the target server. This will prevent a crash when using a standalone
server for an external purpose like socket_tcp/socket_ssl on hlua code.
For the idle/safe lists, they are considered allocated if
srv.max_idle_conns is not null.
Note that the hlua code is currently safe thanks to the additional
checks on the proxy http mode and a stream reuse policy other than 'never'. However,
this might not be sufficient for future code.
This patch should be backported to every branch containing the
following patch :
7f68d815af (2.4 tree)
REORG: backend: simplify conn_backend_get
This reverts commit 62e8aaa1bd.
While it works extremely well to address SSL handshake floods, it prevents
establishment of new connections during regular traffic above 50-60 Gbps,
because for an unknown reason the queue seems to have ~1.7 active tasks
per connection all the time, which makes no sense as these ought to be
waiting on subscribed events. It might uncover a deeper issue but at least
for now a different solution is needed. cf issue #822.
The test is trivial to run, just start a config with tune.runqueue-depth 10
and inject on 1GB objects with more than 10 connections. Try to connect to
the stats socket, it only works once, then the listeners are not dequeued.
In github issue #822, user @ngaugler reported some performance problems when
dealing with many concurrent SSL connections on restarts, after migrating
from 1.6 to 2.2, indicating a long time required to re-establish connections.
The Run_queue metric in the traces showed an abnormally high number of tasks
in the run queue, likely indicating we were accepting faster than we could
process. And this is indeed one of the differences between 1.6 and 2.2, the
accept I/O loop and the TLS handshakes are totally independent, so much that
they can even run on different threads. In 1.6 the SSL handshake was handled
almost immediately after the accept(), so this was limiting the input rate.
With large maxconn values, as long as there are incoming connections, new
I/Os are scheduled and many of them pass before the handshake, being tagged
for low latency processing.
The result is that handshakes get postponed, and are further postponed as
new connections are accepted. When they are finally able to be processed,
some of them fail as the client is gone, and the client had already queued
new ones. This causes an excess number of apparent connections and total
number of handshakes to be processed, just because we were accepting
connections on a temporarily saturated machine.
The solution is to temporarily pause new incoming connections when the
load already indicates that more tasks are already queued than will be
handled in a poll loop. The difficulty with this usually is to be able
to come back to re-enable the operation, but given that the metric is
the run queue, we just have to queue the global_listener_queue task so
that it gets picked by any thread once the run queues get flushed.
Before this patch, injecting with SSL reneg with 10000 concurrent
connections resulted in 350k tasks in the run queue, and a majority of
handshake timeouts noticed by the client. With the patch, the run queue
fluctuates between 1-3x runqueue-depth, the process is constantly busy, the
accept rate is maximized and clients observe no error anymore.
It would be desirable to backport this patch to 2.3 and 2.2 after some more
testing, provided the accept loop there is compatible.
The allowed chunk size was historically limited to 2GB to avoid risk of
overflow. This restriction is no longer necessary because the chunk size is
immediately stored into a 64bits integer after the parsing. Thus, it is now
possible to raise this limit. However, to never be fed possibly bogus values
from languages that use floats for their integers, we don't accept more than
13 hexadecimal digits (2^52 - 1). 4 petabytes is probably enough!
This patch should fix the issue #1065. It may be backported as far as
2.1. For 2.0, the legacy HTTP part must be reviewed. But there is
honestly no reason to do so.
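The shape of the check is simply a cap on the number of hex digits
accumulated by the parser, along these lines (illustrative, not the exact
parser):
  #include <stdint.h>

  #define MAX_CHUNK_DIGITS 13   /* 13 hex digits => at most 2^52 - 1 */

  static int parse_chunk_size(const char *p, const char *end, uint64_t *res)
  {
      uint64_t chunk = 0;
      int digits = 0;

      for (; p < end; p++) {
          int c;

          if (*p >= '0' && *p <= '9')
              c = *p - '0';
          else if (*p >= 'a' && *p <= 'f')
              c = *p - 'a' + 10;
          else if (*p >= 'A' && *p <= 'F')
              c = *p - 'A' + 10;
          else
              break;                       /* stop at the first non-digit */
          if (++digits > MAX_CHUNK_DIGITS)
              return -1;                   /* possibly bogus value, reject */
          chunk = (chunk << 4) + c;
      }
      if (!digits)
          return -1;
      *res = chunk;
      return 0;
  }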
A number of traces could be added or changed to report errors with
TRACE_ERROR. The goal is to be able to enable error tracing only to detect
anomalies.
A number of traces could be added or changed to report errors with
TRACE_ERROR. The goal is to be able to enable error tracing only to detect
anomalies.
In order to announce support for the Extended CONNECT h2 method by
haproxy, always send the ENABLE_CONNECT_PROTOCOL h2 setting. This new
setting is described in RFC 8441.
After receiving ENABLE_CONNECT_PROTOCOL, the client is free to use the
Extended CONNECT h2 method. This can notably be useful for the support
of websocket handshake on http/2.
Support for RFC 8441, Bootstrapping WebSockets with HTTP/2.
Convert an Extended CONNECT HTTP/2 request into a htx representation.
The htx message uses the GET method with an Upgrade header field to be
fully compatible with the equivalent HTTP/1.1 Upgrade mechanism.
The Extended CONNECT is of the following form :
:method = CONNECT
:protocol = websocket
:scheme = https
:path = /chat
:authority = server.example.com
The new pseudo-header :protocol has been defined and is used to identify
an Extended CONNECT method. Contrary to standard CONNECT, Extended
CONNECT must have :scheme, :path and :authority defined.
Add the Sec-Websocket-Key header when generating an h1 websocket handshake
without this header. This is the case when doing an h2-h1 conversion.
The key is randomly generated and base64 encoded. It is stored on the session
side to be able to verify response key and reject it if not valid.
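Generating such a key boils down to base64-encoding 16 random bytes, e.g.
with OpenSSL primitives (a sketch; the actual code may use haproxy's own
helpers):
  #include <openssl/evp.h>
  #include <openssl/rand.h>

  /* fills out[] with a 24-char base64 key plus the trailing NUL */
  static int gen_ws_key(char out[25])
  {
      unsigned char nonce[16];

      if (RAND_bytes(nonce, sizeof(nonce)) != 1)
          return -1;
      EVP_EncodeBlock((unsigned char *)out, nonce, sizeof(nonce));
      return 0;
  }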
Support for RFC 8441, Bootstrapping WebSockets with HTTP/2.
Generate an HTTP/2 Extended CONNECT request from a htx Upgrade message.
This conversion is done when seeing the header Connection: Upgrade. A
CONNECT request is written with the :protocol pseudo-header set from the
Upgrade htx header value.
The protocol is saved in the h2s structure. This is needed on the
response side because the protocol is not present on HTTP/2 response but
is needed if the client side is using HTTP/1.1 with 101 status code.
Support for RFC 8441, Bootstrapping WebSockets with HTTP/2.
Convert a 200 status reply from an Extended CONNECT request into a htx
representation. The htx message is set to 101 status code to be fully
compatible with the equivalent HTTP/1.1 Upgrade mechanism.
This conversion is only done if the stream flag H2_SF_EXT_CONNECT_SENT
has been set. This is true if an Extended CONNECT request has already
been seen on the stream.
Besides the 101 status, the additional headers Connection/Upgrade are
added to the htx message. The protocol is set from the value stored in
h2s. Typically it will be extracted from the client request. This is
only used if the client is using h1 as only the HTTP/1.1 101 Response
contains the Upgrade header.
This flag is used to signal that an Extended CONNECT has been sent by
the server mux on the current stream.
This will allow the response to be converted to a 101 htx status message.
Add the Sec-Websocket-Accept header on a websocket handshake response.
This header may be missing if an h2 server is used with an h1 client.
The response key is calculated following RFC 6455. For this, the
handshake request key must be stored in the h1 session, in a new field
named ws_key. Note that this is only done if the message has previously
been identified as a WebSocket handshake request.
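For reference, the RFC 6455 computation is the base64 encoding of the SHA-1
of the key concatenated with a fixed GUID; a self-contained sketch using
OpenSSL:
  #include <stdio.h>
  #include <string.h>
  #include <openssl/evp.h>
  #include <openssl/sha.h>

  #define WS_GUID "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

  /* out[] receives the 28-char Sec-Websocket-Accept value plus the NUL */
  static void compute_ws_accept(const char *key, char out[29])
  {
      unsigned char digest[SHA_DIGEST_LENGTH];
      char buf[256];

      snprintf(buf, sizeof(buf), "%s%s", key, WS_GUID);
      SHA1((const unsigned char *)buf, strlen(buf), digest);
      EVP_EncodeBlock((unsigned char *)out, digest, SHA_DIGEST_LENGTH);
  }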
If a request is identified as a WebSocket handshake, it must contain a
websocket key header or else it can be rejected, following RFC 6455.
A new flag H1_MF_UPG_WEBSOCKET is set on such messages. For the request
to be identified as a WebSocket handshake, it must contain the headers:
Connection: upgrade
Upgrade: websocket
This commit is a companion of
"MEDIUM: h1: generate WebSocket key on response if needed" and
"MEDIUM: h1: add a WebSocket key on handshake if needed".
Indeed, it ensures that a WebSocket key is added only from an http/2 side
and not for a bogus http/1 peer.
The code dealing with the copy of requests into the L7 buffer and the
retransmits during L7 retries has been moved into the HTTP analysers. The copy
is now performed in the REQ_HTTP_XFER_BODY analyser and the L7 retries are
performed in the RES_WAIT_HTTP analyser. This way, si_cs_recv() and
si_cs_send() no longer have to care about it. It is much more natural to deal
with L7 retries in HTTP analysers.
Some responses must not contain data: responses to HEAD requests and 204/304
responses. But there is no guarantee that this will really be respected by
the senders or even that it is possible. For instance, the method may be
rewritten by an http-request rule (HEAD->GET). Thus, it is not really
possible to always strip these data from the response at the receive
stage. And the response may be emitted by an applet or an internal service
not strictly following the spec. All that to say that we must be prepared to
handle payload for bodyless responses on the sending path.
In addition, unlike HTTP/1, it is not really clear whether the trailers are
part of the payload or not. Thus, some clients may expect to have the
trailers, if any, in the response to a HEAD request. For instance, the GRPC
status is placed in a trailer and clients rely on it. But what happens for
204 responses then? Read the following thread for details :
https://lists.w3.org/Archives/Public/ietf-http-wg/2020OctDec/0040.html
So, thanks to previous patches, it is now possible to know on the sending
path if a response must be bodyless or not. So, for such responses, no DATA
frame is emitted, except possibly the last empty one carrying the ES
flag. However, the TRAILERS frames are still emitted. The h2s_skip_data()
function is added to take care of removing HTX DATA blocks without emitting
any DATA frame except the last one, if there are no trailers.
The H2 message flag H2_MSGF_BODYLESS_RSP is now used during the request or
the response parsing to notify the mux that, considering the parsed message,
the response is known to have no body. This happens during HEAD requests
parsing and during 204/304 responses parsing.
On the H2 multiplexer, the equivalent flag is set on H2 streams. Thus the
H2_SF_BODYLESS_RESP flag is set on a H2 stream if the H2_MSGF_BODYLESS_RSP
is found after a HEADERS frame parsing. Conversely, this flag is also set
when a HEADERS frame is emitted for HEAD requests and for 204/304 responses.
The H2_SF_BODYLESS_RESP flag will be used to ignore data payload from the
response but not the trailers.
No connection header must be added by the H1 mux in 1xx messages, including
101. Existing connection headers remain untouched, especially the "Connection:
upgrade" of 101 responses. This patch only avoids to add "Connection: close" or
"Connection: keep-alive" to 1xx responses.
204 and 1xx responses must not have any payload. Now, the H1 mux takes care
of that as a last resort. But they also must not have any C-L or T-E
headers. Thus, if found on the sending path, these headers are ignored.
Some responses must not contain data: responses to HEAD requests and 204/304
responses. But there is no guarantee that this will really be respected by
the senders or even that it is possible. For instance, the method may be
rewritten by an http-request rule (HEAD->GET). Thus, it is not really
possible to always strip the payload from the response at the receive
stage. And the response may be emitted by an applet or an internal service
not strictly following the spec. All that to say that we must be prepared to
handle payload for bodyless responses on the sending path.
So, thanks to previous patches, it is now possible to know on the sending
path if a response must be bodyless or not. So, for such responses, no
payload is emitted, all HTX blocks after the EOH are silently removed
(including the trailers).
In HTTP/1, responses to HEAD requests and 204/304 must not have payload. The
H1S_F_BODYLESS_RESP flag is now set on streams that should handle such
responses, on both the client side and the server side.
On the client side, this flag is set when a HEAD request is parsed and when
a 204/304 response is emitted. On the server side, this happens when a HEAD
request is emitted or a 204/304 response is parsed.
The EOM block may be removed. The HTX_FL_EOM flag is enough. Most of the time,
to know if the end of the message is reached, we just need to have an empty
HTX message with the HTX_FL_EOM flag set. It may also be detected when the last
block of a message with the HTX_FL_EOM flag is manipulated.
Removing EOM blocks simplifies the HTX message filling. Indeed, there are no
more edge problems when the message ends but there is no more space left to
write the EOM block. However, some parts are trickier, especially the
compression filter and the FCGI mux. The compression filter must finish the
compression on the last DATA block. Before, it was performed on the EOM
block and an extra DATA block with the checksum was added. Now, we must detect
the last DATA block to be sure to finish the compression. The FCGI mux on
its part must be sure to reserve the space for the empty STDIN record on the
last DATA block while this record was inserted on the EOM block.
The H2 multiplexer is probably the part that benefits the most from this
change. Indeed, it is now far easier to know when to set the ES flag.
The HTX documentation has been updated accordingly.
Tunnel management between the H1 and H2 multiplexers is a bit blurred, and
the HTX is not well enough defined on this point to make things clear. In
fact, establishing a tunnel between an H2 client and an H1 server, or the
opposite, is buggy because both multiplexers don't handle the EOM block
the same way when a tunnel is established. The H2 multiplexer is pretty
strict and adds an END_STREAM flag when an EOM block is found, while
the H1 multiplexer is more flexible.
The purpose of this patch is to make the EOM block usage pretty clear and to
fix the HTTP multiplexers to really handle HTTP tunnels in the right
way. Now, an EOM block is used to mark the end of an HTTP message,
semantically speaking. That means it may be followed by tunneled data. Thus,
CONNECT requests are now finished by an EOM block, just after the EOH block.
On the H1 multiplexer side, a tunnel is now only established on the response
path. So a CONNECT request remains in a DONE state waiting for the 2xx
response. On the H2 multiplexer side, a flag is used to know an HTTP tunnel
is requested, to not immediately add the END_STREAM flag on the EOM block.
All these changes are sensitive and not backportable because of recent
changes. The same problem exists on earlier versions and should be
addressed. But it will only be possible with a specific patchset.
This patch relies on the following ones :
* MEDIUM: mux-h1: Properly handle tunnel establishments and aborts
* MEDIUM: mux-h2: Close streams when processing data for an aborted tunnel
* MEDIUM: mux-h2: Block client data on server side waiting tunnel establishment
* MINOR: mux-h2: Add 2 flags to help to properly handle tunnel mode
* MINOR: mux-h1: Split H1C_F_WAIT_OPPOSITE flag to separate input/output sides
* MINOR: mux-h1/mux-fcgi: Don't set TUNNEL mode if payload length is unknown
In the same way as the H2 mux, we now block data sending on the server side
if a tunnel is not fully established. In addition, if some data are still
pending for an aborted tunnel, an error is triggered and the server
connection is closed.
To do so, we rely on the H1C_F_WAIT_INPUT flag to block the output
processing. This patch contributes to fixing the tunnel mode between the H1 and
the H2 muxes.
In the previous patch ("MEDIUM: mux-h2: Block client data on server side
waiting tunnel establishment"), we added a way to block client data for not
fully established tunnel on the server side. This one closes the stream with
an ERR_CANCEL error if there are some pending tunneled data while the tunnel
was aborted. This may happen on the client side if a non-empty DATA frame or
an empty DATA frame without the ES flag is received. This may also happen on
the server side if there is a DATA htx block. However in this last case, we
first wait for the response to be fully forwarded.
This patch contributes to fixing the tunnel mode between the H1 and the H2
muxes.
On the server side, when a tunnel is not fully established, we must block
tunneled data, waiting for the server response. It is mandatory because the
server may refuse the tunnel. This happens when a DATA htx block is
processed in tunnel mode (H2_SF_BODY_TUNNEL flag set) but before the
response HEADERS frame is received (H2_SF_HEADERS_RCVD flag not set). In this
case, the H2_SF_BLK_MBUSY flag is set to mark the stream as busy. This flag
is removed when the tunnel is fully established or aborted.
This patch contributes to fixing the tunnel mode between the H1 and the H2
muxes.
H2_SF_BODY_TUNNEL and H2_SF_TUNNEL_ABRT flags are added to properly handle
the tunnel mode in the H2 mux. The first one is used to detect tunnel
establishment or fully established tunnel. The second one is used to abort a
tunnel attempt. It is the first commit aiming to fix tunnel
establishment between the H1 and H2 muxes.
There is a subtlety in h2_rcv_buf(). The CS_FL_EOS flag is added on the
conn-stream when ES is received on a tunneled stream. It really reflects the
conn-stream state and is mandatory for next commits.
The H1C_F_WAIT_OPPOSITE flag is now split into 2 flags, H1C_F_WAIT_INPUT
and H1C_F_WAIT_OUTPUT, depending on which side is waiting. This change is a
prerequisite to fix the tunnel mode management in the HTTP muxes.
H1C_F_WAIT_INPUT must be used to block the output side and to wait for an
event from the input side. H1C_F_WAIT_OUTPUT does the opposite. It blocks the
input side and waits for an event from the output side.
Responses with no C-L and T-E headers are no longer switched to TUNNEL mode
and remain in DATA mode instead. The H1 and FCGI muxes are updated
accordingly. This change reflects the real message state. It is not a true
tunnel. Data received are still part of the message.
It is not a bug. However, this patch may be backported after some
observation period (at least as far as 2.2).
As stated in RFC 7540, section 8.1.1, HTTP/2 removes support for the
101 informational status code. Thus a PROTOCOL_ERROR is now returned to the
server if a 101-switching-protocols response is received. Thus, the server
connection is aborted.
This patch may be backported as far as 2.0.
A 101-switching-protocols response must contain a Connection header with the
Upgrade option. And this response must only be received from a server if the
client explicitly requested a protocol upgrade. Thus, the request must also
contain a Connection header with the Upgrade option. If not, a
502-bad-gateway response is returned to the client. This way, a tunnel is
only established if both sides agree.
It is closer to what the RFC says, but it remains a bit flexible because
there is no check on the Upgrade header itself. However, that's probably
enough to ensure a tunnel is not established when not requested.
This one is not tagged as a bug. But it may be backported, at least to
2.3. It relies on :
* MINOR: htx/http-ana: Save info about Upgrade option in the Connection header
Add an HTX start-line flag and its counterpart into the HTTP message to
track the presence of the Upgrade option into the Connection header. This
way, without parsing the Connection header again, it will be easy to know if
a client asks for a protocol upgrade and if the server agrees to do so. It
will also be easy to perform some conformance checks when a
101-switching-protocols is received.
It is the second part and the most important of the fix.
Since the mux-h1 refactoring, and more specifically since the commit
c4bfa59f1 ("MAJOR: mux-h1: Create the client stream as later as possible"),
the upgrade from a TCP client connection to H1 is broken. Indeed, now the H1
mux is responsible to create the frontend conn-stream once the request
headers are fully received. But, to properly support TCP to H1 upgrades, we
must inherit from the existing conn-stream. To do so, if the conn-stream
already exists when the client H1 connection is created, we create a H1
stream in ST_ATTACHED state, but not ST_READY, and the conn-stream is
attached to it. Because the ST_READY state is not set, no data are xferred
to the data layer when h1_rcv_buf() is called and shutdowns are inhibited
except on client aborts. This way, the request is parsed the same way as
for a classical H1 connection. Once the request headers are fully received
and parsed, the data stream is upgraded and the ST_READY state is set.
A tricky case appears when an H2 upgrade is performed because the H2 preface
is matched. In this case, the conn-stream must be detached and destroyed
before switching to the H2 mux and releasing the current H1 mux. We must
also take care to detach and destroy the conn-stream when a timeout
occurs.
This patch relies on the following series of patches :
* BUG/MEDIUM: stream: Don't immediatly ack the TCP to H1 upgrades
* MEDIUM: http-ana: Do nothing in wait-for-request analyzer if not htx
* MINOR: stream: Add a function to validate TCP to H1 upgrades
* MEDIUM: mux-h1: Add ST_READY state for the H1 connections
* MINOR: mux-h1: Wake up instead of subscribe for reads after H1C creation
* MINOR: mux-h1: Try to wake up data layer first before calling its wake callback
* MINOR: stream-int: Take care of EOS in the SI wake callback function
* BUG/MINOR: stream: Don't update counters when TCP to H2 upgrades are performed
This fix is specific for 2.4. No backport needed.
Instead of switching the stream to HTX mode, the request channel is only
reset (the request buffer is xferred to the mux) and the SF_IGNORE flag is
set on the stream. This flag prevents any processing in case of abort. Once
the upgrade is confirmed, the flag is removed, in stream_upgrade_from_cs().
It is only the first part of the fix. The next one ("BUG/MAJOR: mux-h1:
Properly handle TCP to H1 upgrades") is also required. Both rely on the
following series of patches :
* MEDIUM: http-ana: Do nothing in wait-for-request analyzer if not htx
* MINOR: stream: Add a function to validate TCP to H1 upgrades
* MEDIUM: mux-h1: Add ST_READY state for the H1 connections
* MINOR: mux-h1: Wake up instead of subscribe for reads after H1C creation
* MINOR: mux-h1: Try to wake up data layer first before calling its wake callback
* MINOR: stream-int: Take care of EOS in the SI wake callback function
* BUG/MINOR: stream: Don't update counters when TCP to H2 upgrades are performed
This fix is specific for 2.4. No backport needed.
If the http_wait_for_request() analyzer is called with a non-htx stream,
nothing is performed and we return immediately. For now, this is totally
unexpected. But it will be true during TCP to H1 upgrades, once fixed. Indeed,
there will be a transition period during these upgrades. First the mux will be
upgraded and not the stream, and finally the stream will be upgraded by
the mux once ready. In the meantime, the stream will still be in raw
mode. Nothing will be performed in the wait-for-request analyzer because it
will be the mux's responsibility to handle errors.
This patch is required to fix the TCP to H1 upgrades.
TCP to H1 upgrades are buggy for now. When such an upgrade is performed, a
crash is experienced. The bug is the result of the recent H1 mux
refactoring, and more specifically because of the commit c4bfa59f1 ("MAJOR:
mux-h1: Create the client stream as later as possible"). Indeed, now the H1
mux is responsible to create the frontend conn-stream once the request
headers are fully received. Thus the TCP to H1 upgrade is a problem because
the frontend conn-stream already exists.
To fix the bug, we must keep this conn-stream and the associated stream and
use it in the H1 mux. To do so, the upgrade will be performed in two
steps. First, the mux is upgraded from mux-pt to mux-h1. Then, the mux-h1
performs the stream upgrade, once the request headers are fully received and
parsed. To do so, stream_upgrade_from_cs() must be used. This function sets
the SF_HTX flag to switch the stream to HTX mode, removes the SF_IGNORE
flag and finally fills the request channel with some input data.
This patch is required to fix the TCP to H1 upgrades and is intimately
linked with the next commits.
An alive H1 connection may be in one of these 3 states :
* ST_IDLE : not active and is waiting to be reused (no h1s and no cs)
* ST_EMBRYONIC : active with a h1s but without any cs
* ST_ATTACHED : active with a h1s and a cs
ST_IDLE and ST_ATTACHED are possible for frontend and backend
connections. ST_EMBRYONIC is only possible on the client side, when we are
waiting for the request headers. The last one is the expected state for an
active connection processing data. These states are mutually exclusive.
Now, there is a new state, ST_READY. It may only be set if ST_ATTACHED is
also set and when the CS is considered as fully active. For now, ST_READY is
set at the same time as ST_ATTACHED. But it will be used to fix TCP to H1
upgrades. The idea is to have an H1 connection in the ST_ATTACHED state but not
ST_READY yet and have more or less the same behavior as an H1 connection
in the ST_EMBRYONIC state. And when the upgrade is fully achieved, the ST_READY
state may be set and the data layer may be notified accordingly.
So for now, this patch should not change anything. TCP to H1 upgrades are
still buggy. But it is mandatory to make it work properly.
When a H1 connection is created, we now wakeup the H1C tasklet if there are
some data in the input buffer. If not we only subscribe for reads.
This patch is required to fix the TCP to H1 upgrades.
Instead of calling the data layer wake callback function, we now first try
to wake it up. If the data layer is subscribed for receives or for sends,
its tasklet is woken up. The wake callback function is only called as the
last chance to notify the data layer.
Because si_cs_process() is also the SI wake callback function, it may be
called from the mux layer. Thus, in such cases, it is performed outside any
I/O event and si_cs_recv() is not called. If a read0 is reported by the mux,
via the CS_FL_EOS flag, the event is not handled, because only si_cs_recv()
takes care of this flag for now.
It is not a bug, because this does not happen for now. All muxes set this
flag when the data layer retrieves data (via mux->rcv_buf()). But it is safer
to be prepared to handle it from the wake callback. And in fact, it will be
useful to fix the HTTP upgrades of TCP connections (especially TCP>H1>H2
upgrades).
To be sure to not handle the same event twice, it is only handled if the
shutr is not already set on the input channel.
The reuse of idle connections should only happen for a proxy with the
http mode. In case of a backend with the tcp mode, the reuse selection
and insertion in session list are skipped.
This behavior is present since commit :
MEDIUM: connection: Add private connections synchronously in session server list
It could also be further exaggerated by :
MEDIUM: backend: add reused conn to sess if mux marked as HOL blocking
It can be backported up to 2.3.
Use chunk_initstr() for the chunk initialisation in
ssl_sock_load_sctl_from_file() instead of a manual initialisation which
was not initialising the head.
Fix issue #1073.
Must be backported as far as 2.2
The new ckch_inst_new_load_srv_store() function, which mimics the
ckch_inst_new_load_store() function, includes some dead code which was
only used in the original function.
Fix issue #1081.
The previous patch was pushed too quickly (399bf72f6 "BUG/MINOR: stats:
Remove a break preventing ST_F_QCUR to be set for servers"). It was not an
extra break but a misplaced break statement. Thus, now a break statement
must be added after filling the ST_F_MODE field in stats_fill_sv_stats().
No backport needed except if the above commit is backported.
There is an extra break statement wrongly placed in stats_fill_sv_stats()
function, just before filling the ST_F_QCUR field. It prevents this field
from being set to the right value for servers.
No backport needed except if commit 3a9a4992 ("MEDIUM: stats: allow to
select one field in `stats_fill_sv_stats`") is backported.
This patch makes things more consistent between the bind_conf functions
and the server ones:
- ssl_sock_load_srv_ckchs() loads the SSL_CTX in the server
(ssl_sock_load_ckchs() loads the SNIs into the bind_conf)
- add the server parameter to ssl_sock_load_srv_ckchs()
- changes made to the ckch_inst are done in
ckch_inst_new_load_srv_store()
Since the server SSL_CTX is now stored in the ckch_inst, it is not
needed anymore to pass an SSL_CTX to ckch_inst_new_load_srv_store() and
ssl_sock_load_srv_ckchs().
The new feature allowing the change of server side certificates
introduced duplicated free code. Rework the code in
cli_io_handler_commit_cert() to be more consistent.
The client_crt member is not used anymore since the server's ssl context
initialization now behaves the same way as the bind lines one (using
ckch stores and instances).
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part)
In order for the backend server's certificate to be hot-updatable, it
needs to fit into the implementation used for the "bind" certificates.
This patch follows the architecture implemented for the frontend
implementation and reuses its structures and general function calls
(adapted for the server side).
The ckch store logic is kept and a dedicated ckch instance is used (one
per server). The whole sni_ctx logic was not kept though because it is
not needed.
All the new functions added in this patch are basically server-side
copies of functions that already exist on the frontend side with all the
sni and bind_conf references removed.
The ckch_inst structure has a new 'is_server_instance' flag which is
used to distinguish regular instances from the server-side ones, and a
new pointer to the server's structure in case of backend instance.
Since the new server ckch instances are linked to a standard ckch_store,
a lookup in the ckch store table will succeed, so the CLI code used to
update bind certificates needs to be adapted to manage those new
server-side ckch instances.
Split the server's ssl context initialization into the general ssl
related initializations and the actual initialization of a single
SSL_CTX structure. This way the context's initialization will be
usable by itself from elsewhere.
Reorganize the conditions for the reuse of idle/safe connections:
- reduce code by using variables to store the reuse mode and the
idle/safe conns counts
- consider that idle/safe/avail lists are properly allocated if
max_idle_conns is not null. An allocation failure prevents haproxy
startup.
It is only a problem on the response path because the request payload length
is always known. But when a filter is registered to analyze the response
payload, the filtering may hang if the server closes just after the headers.
The root cause of the bug comes from an attempt to allow the filters to not
immediately forward the headers if necessary. A filter may choose to hold
the headers by not forwarding any bytes of the payload. For a message with
no payload but a known payload length, there is always a EOM block to
forward. Thus holding the EOM block for bodyless messages is a good way to
also hold the headers. However, for messages with an unknown payload length,
there is no EOM block finishing the message, but only a SHUTR flag on the
channel to mark the end of the stream. If there is no payload when it
happens, there is no payload at all to forward. In the filters API, it is
wrongly detected as a condition to not forward the headers.
Because it is not the most used feature and not the obvious one, this patch
introduces another way to hold the message headers at the beginning of the
forwarding. A filter flag is added to explicitly say the headers should be
held. A filter may choose to set the STRM_FLT_FL_HOLD_HTTP_HDRS flag and not
forward anything to hold the headers. This flag is removed at each call, thus
it must always be explicitly set by filters. This flag is only evaluated if
no byte has ever been forwarded because the headers are forwarded with the
first byte of the payload.
reg-tests/filters/random-forwarding.vtc reg-test is updated to also test
responses with unknown payload length (with and without payload).
This patch must be backported as far as 2.0.
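As an illustration of the flag's usage, a filter might do something along
these lines from its http_payload callback (a minimal sketch based on the
existing filters API; the callback wiring and the strm_flt() accessor are
assumed, not taken from this patch):

    static int my_flt_http_payload(struct stream *s, struct filter *filter,
                                   struct http_msg *msg,
                                   unsigned int offset, unsigned int len)
    {
            /* ask to keep holding the headers; the flag is cleared at each
             * call so it must be set again every time */
            strm_flt(s)->flags |= STRM_FLT_FL_HOLD_HTTP_HDRS;

            /* forward nothing for now, so the headers stay held */
            return 0;
    }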
prometheus approach requires to output all values for a given metric
name; meaning we iterate through all metrics, and then iterate in the
inner loop on all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
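The selection pattern roughly looks like this (a sketch; the parameter name
and loop shape are illustrative, only the ST_F_* enum is assumed from the
existing stats code):

    /* fill every field, or only <selected_field> when it is not the
     * "all fields" sentinel */
    enum stat_field current_field = (selected_field != ST_F_TOTAL_FIELDS)
                                    ? selected_field : 0;

    for (; current_field < ST_F_TOTAL_FIELDS; current_field++) {
            /* ... compute and store stats[current_field] ... */
            if (selected_field != ST_F_TOTAL_FIELDS)
                    break; /* single-field mode: stop after the requested one */
    }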
This patch follows what has already been done on frontend and backend
side.
From this patch it should be possible to remove most of the duplicate
code on the prometheus side for the server.
A few things to note though:
- state requires a prior calculation, so I moved that to a sort of helper
`stats_fill_be_stats_computestate`.
- all ST_F_*TIME fields require some minor compute, so I moved it to the
beginning of the function under a condition.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
prometheus approach requires to output all values for a given metric
name; meaning we iterate through all metrics, and then iterate in the
inner loop on all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
This patch follows what has already been done on frontend side.
From this patch it should be possible to remove most of the duplicate
code on the prometheus side for the backend.
A few things to note though:
- status and uweight fields require a prior computation, so I moved that
to a sort of helper `stats_fill_be_stats_computesrv`.
- all ST_F_*TIME fields require some minor compute, so I moved it to the
beginning of the function under a condition.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
while working on backend/servers I realised I could have written that in
a better way and avoid one extra break. This slightly improves
readability.
also while being here, fix a function declaration which was not 100%
accurate.
this patch does not change the behaviour of the code.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
In stats_fill_fe_stats(), some fields are conditional (ST_F_HRSP_* for
instance). But unlike unimplemented fields, for those fields, the <metric>
variable is used to fill the <stats> array, but it is not initialized. This
bug has no impact, because these fields are not used. But it is better to fix
it now to avoid future bugs.
To fix it, the metric is now defined and initialized into the for loop.
The bug was introduced by the commit 0ef54397 ("MEDIUM: stats: allow to
select one field in `stats_fill_fe_stats`"). No backport is needed except if
the above commit is backported. It fixes the issue #1063.
A regression was introduced by the commit 0ef54397b ("MEDIUM: stats: allow
to select one field in `stats_fill_fe_stats`"). stats_fill_fe_stats()
function fails on unimplemented metrics for frontends. However, not all
stats metrics are used by frontends. For instance ST_F_QCUR. As a
consequence, the frontends stats are always skipped.
To fix the bug, we just skip unimplemented metric for frontends. An error is
triggered only if a specific field is given and is unimplemented.
No backport is needed except if the above commit is backported.
Lua requires LLONG_MAX defined with __USE_ISOC99 which is set by
_GNU_SOURCE, not necessarily defined by default on old compiler/glibc.
$ make V=1 TARGET=linux-glibc-legacy USE_THREAD= USE_ACCEPT4= USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=1
..
cc -Iinclude -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-strict-aliasing -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-missing-field-initializers -DUSE_EPOLL -DUSE_NETFILTER -DUSE_PCRE -DUSE_POLL -DUSE_TPROXY -DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_OPENSSL -DUSE_LUA -DUSE_FUTEX -DUSE_ZLIB -DUSE_CPU_AFFINITY -DUSE_DL -DUSE_RT -DUSE_PRCTL -DUSE_THREAD_DUMP -I/usr/include/openssl101e/ -DUSE_PCRE -I/usr/include -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-73246d-83\" -DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua.o src/hlua.c
In file included from /usr/local/include/lua.h:15,
from /usr/local/include/lauxlib.h:15,
from src/hlua.c:16:
/usr/local/include/luaconf.h:581:2: error: #error "Compiler does not support 'long long'. Use option '-DLUA_32BITS' or '-DLUA_C89_NUMBERS' (see file 'luaconf.h' for details)"
..
cc -Iinclude -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-strict-aliasing -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-missing-field-initializers -DUSE_EPOLL -DUSE_NETFILTER -DUSE_PCRE -DUSE_POLL -DUSE_TPROXY -DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_OPENSSL -DUSE_LUA -DUSE_FUTEX -DUSE_ZLIB -DUSE_CPU_AFFINITY -DUSE_DL -DUSE_RT -DUSE_PRCTL -DUSE_THREAD_DUMP -I/usr/include/openssl101e/ -DUSE_PCRE -I/usr/include -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-73246d-83\" -DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua_fcn.o src/hlua_fcn.c
In file included from /usr/local/include/lua.h:15,
from /usr/local/include/lauxlib.h:15,
from src/hlua_fcn.c:17:
/usr/local/include/luaconf.h:581:2: error: #error "Compiler does not support 'long long'. Use option '-DLUA_32BITS' or '-DLUA_C89_NUMBERS' (see file 'luaconf.h' for details)"
..
Cc: Thierry Fournier <tfournier@arpalert.org>
hlua_init() uses 'idx' only in openssl related code, while 'i' is used
in shared code and is safe to be reused. This commit replaces the use of
'idx' with 'i'
$ make V=1 TARGET=linux-glibc USE_LUA=1 USE_OPENSSL=
..
cc -Iinclude -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -DUSE_EPOLL -DUSE_NETFILTER -DUSE_POLL -DUSE_THREAD -DUSE_BACKTRACE -DUSE_TPROXY -DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_LUA -DUSE_FUTEX -DUSE_ACCEPT4 -DUSE_CPU_AFFINITY -DUSE_TFO -DUSE_NS -DUSE_DL -DUSE_RT -DUSE_PRCTL -DUSE_THREAD_DUMP -I/usr/include/lua5.3 -I/usr/include/lua5.3 -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-37286a-78\" -DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua.o src/hlua.c
src/hlua.c: In function 'hlua_init':
src/hlua.c:9145:6: warning: unused variable 'idx' [-Wunused-variable]
9145 | int idx;
| ^~~
When writing commit a8459b28c ("MINOR: debug: create
ha_backtrace_to_stderr() to dump an instant backtrace") I just forgot
that some distros are a bit extremist about the syscall return values.
src/debug.c: In function `ha_backtrace_to_stderr':
src/debug.c:147:3: error: ignoring return value of `write', declared with attribute warn_unused_result [-Werror=unused-result]
write(2, b.area, b.data);
^~~~~~~~~~~~~~~~~~~~~~~~
CC src/h1_htx.o
Let's apply the usual tricks to shut them up. No backport is needed.
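For the record, the generic idea behind the "usual tricks" is simply to
consume the return value so -Werror=unused-result stays quiet; a hedged
sketch follows (not the literal change, HAProxy has its own helper macros
for this):

    /* explicitly consume write()'s return value: if the emergency dump to
     * stderr fails there is nothing useful left to do anyway */
    if (write(2, b.area, b.data) < 0) {
            /* ignore */
    }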
The dump state is now passed to the function so that the caller can adjust
the behavior. A new series of 4 values allows to stop *after* dumping main
instead of before it or any of the usual loops. This allows to also report
BUG_ON() that could happen very high in the call graph (e.g. startup, or
the scheduler itself) while still understanding what the call path was.
The purpose is to enable the dumping of a backtrace on BUG_ON(). While
it's very useful to know that a condition was met, very often some
caller context is missing to figure how the condition could happen.
From now on, on systems featuring backtrace, a backtrace of the calling
thread will also be dumped to stderr in addition to the unexpected
condition. This will help users of DEBUG_STRICT as they'll most often
find this backtrace in their logs even if they can't find their core
file.
A new "debug dev bug" expert-mode CLI command was added to test the
feature.
This function calls the ha_dump_backtrace() function with a locally
allocated buffer and sends the output slightly indented to fd #2. It's
meant to be used as an emergency backtrace dump.
The backtrace dumping code was located into the thread dump function
but it looks particularly convenient to be able to call it to produce
a dump in other situations, so let's move it to its own function and
make sure it's called last in the function so that we can benefit from
tail merging to save one entry.
In order to simplify the code and remove annoying ifdefs everywhere,
let's always export my_backtrace() and make it adapt to the situation
and return zero if not supported. A small update in the thread dump
function was needed to make sure we don't use its results if it fails
now.
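Such an always-exported wrapper can be sketched as follows (the guard macro
below is a placeholder, not the actual build condition used by HAProxy):

    #if defined(HAVE_USABLE_BACKTRACE)   /* placeholder condition */
    #include <execinfo.h>
    int my_backtrace(void **buffer, int max)
    {
            return backtrace(buffer, max);
    }
    #else
    int my_backtrace(void **buffer, int max)
    {
            return 0;   /* not supported: callers must ignore the result */
    }
    #endif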
Since commit aade4edc1 ("BUG/MEDIUM: mux-h2: Don't handle pending read0
too early on streams"), we've met a few cases where an early connection
close wouldn't be properly handled if some data were pending in a frame
header, because the test now considers the buffer's contents before
accepting to report the close, but given that frame headers or preface
are consumed at once, the buffer cannot make progress when it's stuck
at intermediary lengths.
In order to address this, this patch introduces two flags in the h2c
connection to store any reported shutdown and failed parsing. The idea
is that we cannot rely on conn_xprt_read0_pending() in the parser since
it wouldn't consider data pending in the buffer nor intermediary layers,
but we know for certain that after a read0 is reported by the transport
layer in presence of an RD_SH on the connection, no more progress will
be made there. This alone is not sufficient to decide to end processing,
we can only do this once these final data have been submitted to a parser.
Therefore, now when a parser fails on missing data, we check if a read0
has already been reported on this connection, and if so we set a new
END_REACHED flag on the connection to indicate a failure to process the
final data. The h2c_read0_pending() function now simply reports this
flag's status. This way we're certain that the input shutdown is only
considered after the demux attempted to parse the last frame.
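Schematically, the demux side and the reporting side end up looking like
this (a simplified sketch; the flag names only follow the description above,
the exact code differs):

    /* when the transport layer reports a read0: remember it */
    h2c->flags |= H2_CF_RCVD_SHUT;

    /* later, when a frame parser stops on missing data: */
    if (h2c->flags & H2_CF_RCVD_SHUT)
            h2c->flags |= H2_CF_END_REACHED;  /* the final data failed to parse */

    /* and the helper simply reports that state: */
    static inline int h2c_read0_pending(struct h2c *h2c)
    {
            return !!(h2c->flags & H2_CF_END_REACHED);
    }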
Maybe over the long term the subscribe() API should be improved to
synchronously fail when trying to subscribe for an event that will not
happen. This may be an elegant solution that could possibly work across
multiple layers and even muxes, and be usable at a few specific places
where that's needed.
Given the patch above was backported as far as 2.0, this one should be
backported there as well. It is possible that the fcgi mux has the same
issue, but this was not analysed yet.
Thanks to Pierre Cheynier for providing detailed traces allowing to
quickly narrow the problem down, and to Olivier for his analysis.
When a TCP to H2 upgrade is performed, the SF_IGNORE flag is set on the
stream before killing it. This happens when a TCP/SSL client connection is
routed to an HTTP backend and the h2 alpn is detected. The SF_IGNORE flag was
added for this purpose, to skip some processing when the stream is aborted
before a mux upgrade. Some counter updates were skipped this way, but some
others were still performed.
Now, all counter updates at the end of process_stream(), before releasing
the stream, are skipped if the SF_IGNORE flag is set. Note this stream is
aborted because we switch from a mono-stream to a multi-stream
multiplexer. It works differently for TCP to H1 upgrades.
This patch should be backported as far as 2.0 after some observation period.
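In practice this boils down to guarding the end-of-stream accounting,
roughly as below (a sketch of the idea, not the literal diff):

    /* at the end of process_stream(), before releasing the stream */
    if (!(s->flags & SF_IGNORE)) {
            /* update the frontend/backend/server counters, session totals,
             * termination state, etc. */
    }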
use `stats_fill_fe_stats` when possible to avoid duplicating code; make
use of field selector to get the needed field only.
this should not introduce any difference of output.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
prometheus approach requires to output all values for a given metric
name; meaning we iterate through all metrics, and then iterate in the
inner loop on all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
From this patch it should be possible to remove most of the duplicate
code on the prometheus side for the frontend.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Another patch in order to try to reconcile haproxy stats and
prometheus. Here I'm adding a proper start time field in order to make
proper use of the uptime field.
That being done, we can move the calculation into `fill_info`.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
in order to prepare a possible merge of fields between haproxy stats and
prometheus, duplicate 3 fields:
INF_MEMMAX
INF_POOL_ALLOC
INF_POOL_USED
Those were specifically named with the MB unit, which is not what prometheus
recommends. We therefore used them but changed the unit while doing the
calculation. It created a specific case for that, up to the description.
This patch:
- removes some possible confusion, i.e. using a MB field for bytes
- will permit an easier merge of fields such as the description
The first consequence for now is that we can remove the calculation on the
prometheus side and move it into `fill_info`.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
If an HTTP protocol upgrade request with a payload is received, a
501-not-implemented error is now returned to the client. It is valid from
the RFC point of view but will be incompatible with the way the H2
websockets will be handled by HAProxy. And it is probably a very uncommon
way to perform protocol upgrades.
With this patch, the H1 mux is now able to return 501-not-implemented errors
to the client during request parsing. However, no such errors are returned
for now.
The MUX_ES_NOTIMPL_ERR exit status is added to allow the multiplexers to
report errors about not implemented features. This will be used by the H1
mux to return 501-not-implemented errors.
Add the support for the 501-not-implemented status code with the
corresponding default message. The documentation is updated accordingly
because it is now part of status codes HAProxy may emit via an errorfile or
a deny/return HTTP action.
Just like the H1 multiplexer, when a new frontend H2 stream is created, the
rxbuf is xferred to the stream at the upper layer.
Originally, it is not a bug fix, but just an api standardization. And in
fact, it fixes a crash when a h2 stream is aborted after the request parsing
but before the first call to process_stream(). It crashes since the commit
8bebd2fe5 ("MEDIUM: http-ana: Don't process partial or empty request
anymore"). It is now totally unexpected to have an HTTP stream without a
valid request. But here the stream is unable to get the request because the
client connection was aborted. Passing it during the stream creation fixes
the bug. But the true problem is that the stream-interfaces are still
relying on the connection state while only the muxes should do so.
This fix is specific for 2.4. No backport needed.
When a tcpcheck ruleset uses multiple connections, the existing one must be
closed and destroyed before opening the new one. This part is handled in
the tcpcheck_main() function, when called from the wake callback function
(wake_srv_chk). But it is indeed a problem, because this function may be
called from the mux layer. This means a mux may call the wake callback
function of the data layer, which may release the connection and the mux. It
is easy to see how it is hazardous. And actually, depending on the
scheduling, it leads to crashes.
Thus, we must avoid releasing the connection in the wake callback context,
and move this part into the check's process function instead. To do so, we
rely on the CHK_ST_CLOSE_CONN flag. When a connection must be replaced by a
new one, this flag is set on the check, in tcpcheck_main() function, and the
check's task is woken up. Then, the connection is really closed in
process_chk_conn() function.
This patch must be backported as far as 2.2, with some adaptations however
because the code is not exactly the same.
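The resulting split can be pictured like this (simplified; only the flag
name comes from the patch, the surrounding code is approximate):

    /* tcpcheck_main(), possibly running under the mux's wake callback:
     * never release the connection here, only request it */
    check->state |= CHK_ST_CLOSE_CONN;
    task_wakeup(check->task, TASK_WOKEN_MSG);

    /* process_chk_conn(), running in the check task context, where it is
     * now safe to actually close and destroy the connection */
    if (check->state & CHK_ST_CLOSE_CONN) {
            check->state &= ~CHK_ST_CLOSE_CONN;
            /* ... close and release the current connection ... */
    }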
glibc < 2.10 requires _GNU_SOURCE in order to make use of strsignal(),
otherwise leading to SEGV at runtime.
$ make V=1 TARGET=linux-glibc-legacy USE_THREAD= USE_ACCEPT4=
..
src/mworker.c: In function 'mworker_catch_sigchld':
src/mworker.c:285: warning: implicit declaration of function 'strsignal'
src/mworker.c:285: warning: pointer/integer type mismatch in conditional expression
..
$ make V=1 reg-tests REGTESTS_TYPES=slow,default
..
###### Test case: reg-tests/mcli/mcli_start_progs.vtc ######
## test results in: "/tmp/haregtests-2021-01-19_15-18-07.n24989/vtc.29077.28f6153d"
---- h1 Bad exit status: 0x008b exit 0x0 signal 11 core 128
---- h1 Assert error in haproxy_wait(), src/vtc_haproxy.c line 792: Condition(*(&h->fds[1]) >= 0) not true. Errno=0 Success
..
$ gdb ./haproxy /tmp/core.0.haproxy.30270
..
Core was generated by `/root/haproxy/haproxy -d -W -S fd@8 -dM -f /tmp/haregtests-2021-01-19_15-18-07.'.
Program terminated with signal 11, Segmentation fault.
#0 0x00002aaaab387a10 in strlen () from /lib64/libc.so.6
(gdb) bt
#0 0x00002aaaab387a10 in strlen () from /lib64/libc.so.6
#1 0x00002aaaab354b69 in vfprintf () from /lib64/libc.so.6
#2 0x00002aaaab37788a in vsnprintf () from /lib64/libc.so.6
#3 0x00000000004a76a3 in memvprintf (out=0x7fffedc680a0, format=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n", orig_args=0x7fffedc680d0)
at src/tools.c:3868
#4 0x00000000004bbd40 in print_message (label=0x58abed "ALERT", fmt=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n", argp=0x7fffedc680d0)
at src/log.c:1066
#5 0x00000000004bc07f in ha_alert (fmt=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n") at src/log.c:1109
#6 0x0000000000534b7b in mworker_catch_sigchld (sh=<value optimized out>) at src/mworker.c:293
#7 0x0000000000556af3 in __signal_process_queue () at src/signal.c:88
#8 0x00000000004f6216 in signal_process_queue () at include/haproxy/signal.h:39
#9 run_poll_loop () at src/haproxy.c:2859
#10 0x00000000004f63b7 in run_thread_poll_loop (data=<value optimized out>) at src/haproxy.c:3028
#11 0x00000000004faaac in main (argc=<value optimized out>, argv=0x7fffedc68498) at src/haproxy.c:904
See: https://man7.org/linux/man-pages/man3/strsignal.3.html
Must be backported as far as 2.0.
If a subscriber's tasklet was called more than one million times, if
the ssl_ctx's connection doesn't match the current one, or if the
connection appears closed in one direction while the SSL stack is
still subscribed, the FD is reported as suspicious. The close cases
may occasionally trigger a false positive during very short and rare
windows. Similarly the 1M calls will trigger after 16GB are transferred
over a given connection. These are rare enough events to be reported as
suspicious.
A file descriptor which maps to a connection but has more than one
thread in its mask, or an FD handle that doesn't correspond to the FD,
or with no mux context, or an FD with no thread in its mask, or with
more than 1 million events is flagged as suspicious.
Now the show_fd helpers at the transport and mux levels return an integer
which indicates whether or not the inspected entry looks suspicious. When
an entry is reported as suspicious, "show fd" will suffix it with an
exclamation mark ('!') in the dump, that is supposed to help detecting
them.
For now, helpers were adjusted to adapt to the new API but none of them
reports any suspicious entry yet.
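The new contract can be sketched as follows (the helper name is illustrative;
each mux/xprt keeps its own prototype):

    /* "show fd" helper: append internal details to <msg> and return non-zero
     * when the entry looks suspicious, so the dump suffixes it with '!' */
    static int my_mux_show_fd(struct buffer *msg, struct connection *conn)
    {
            int suspicious = 0;

            /* ... dump the mux's internal state into <msg> ... */
            /* no sanity check implemented yet, nothing reported for now */
            return suspicious;
    }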
When dumping a live h1 stream, also take the opportunity for reporting
the subscriber including the event, tasklet, handler and context. Example:
3030 : st=0x21(R:rA W:Ra) ev=0x04(heOpi) [Lc] tmask=0x4 umask=0x0 owner=0x7f97805c1f70 iocb=0x65b847(sock_conn_iocb) back=1 cflg=0x00002300 sv=s1/recv mux=H1 ctx=0x7f97805c21b0 h1c.flg=0x80000200 .sub=1 .ibuf=0@(nil)+0/0 .obuf=0@(nil)+0/0 h1s=0x7f97805c2380 h1s.flg=0x4010 .req.state=MSG_DATA .res.state=MSG_RPBEFORE .meth=POST status=0 .cs.flg=0x00000000 .cs.data=0x7f97805c1720 .subs=0x7f97805c1748(ev=1 tl=0x7f97805c1990 tl.calls=2 tl.ctx=0x7f97805c1720 tl.fct=si_cs_io_cb) xprt=RAW
In FD dumps it's often very important to figure what upper layer function
is going to be called. Let's export the few I/O callbacks that appear as
tasklet functions so that "show fd" can resolve them instead of printing
a pointer relative to main. For example:
1028 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7f00b889b200 iocb=0x65b638(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7f00c8824de0 h2c.st0=FRH .err=0 .maxid=795 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=795 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7f00c86d0750 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7f00c88252e0(ev=1 tl=0x7f00a07d1aa0 tl.calls=1047 tl.ctx=0x7f00c8824de0 tl.fct=h2_io_cb) .sent_early=0 .early_in=0
The SSL context contains a lot of important details that are currently
missing from debug outputs. Now that we detect ssl_sock, we can perform
some sanity checks, print the next xprt, the subscriber callback's context,
handler and number of calls. The process function is also resolved. This
now gives for example on an H2 connection:
1029 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7fc714881700 iocb=0x65b528(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7fc734545e50 h2c.st0=FRH .err=0 .maxid=217 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=217 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7fc73478f230 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7fc734546350(ev=1 tl=0x7fc7346702e0 tl.calls=278 tl.ctx=0x7fc734545e50 tl.fct=main-0x144efa) .sent_early=0 .early_in=0
Just like we did for the muxes, now the transport layers will have the
ability to provide helpers to report more detailed information about their
internal context. When the helper is not known, the pointer continues to
be dumped as-is if it's not NULL. This way a transport with no context nor
dump function will not add a useless "xprt_ctx=(nil)" but the pointer will
be emitted if valid or if a helper is defined.
These ones are definitely missing from some dumps, let's report them! We
print the xprt's name instead of its useless pointer, as well as its ctx
when xprt is not NULL.
Over time the code has uglified, casting fdt.owner as a struct connection
for about everything. Let's have a const struct connection* there and take
this opportunity for passing all fields as const as well.
Additionally a misplaced closing parenthesis on the output was fixed.
When 0c439d895 ("BUILD: tools: make resolve_sym_name() return a const")
was written, the pointer argument ought to have been turned to const for
more flexibility. Let's do it now.
That was causing confusing outputs like this one when an H2S is known:
1030 : ... last_h2s=0x2ed8390 .id=775 .st=HCR.flg=0x4001 .rxbuf=...
^^^^
This was introduced by commit ab2ec4540 in 2.1-dev2 so the fix can be
backported as far as 2.1.
This counter could be hugely incremented by the peer task responsible for managing
peer synchronizations and reconnections: for instance, when a peer is not reachable,
there is a period where the appctx is not created. If we receive stick-table
updates before the peer session (appctx) is instantiated, we reach the code
responsible of incrementing the "new_conn" counter.
With this patch we increment this counter only when we really instantiate a new
peer session thanks to peer_session_create().
May be backported as far as 2.0.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
commit 5a982a7165 ("MINOR:
contrib/prometheus-exporter: export build_info") is breaking lua
`core.get_info()`.
This patch makes sure build_info is correctly initialised in all cases.
Reviewed-by: William Dauchy <wdauchy@gmail.com>
Previous commit da2b0844f ("MINOR: peers: Add traces for peer control
messages.") introduced a build warning on some compiler versions after
the removal of variable "peers" in peer_send_msgs() because variable
"s" was used only to assign this one, and variable "si" to assign "s".
Let's remove both to fix the warning. No backport is needed.
V2 of this fix which includes a missing pointer initialization which was
causing a segfault in v1 (949a7f6459)
This bug happens when a service has multiple records on the same host
and the server provides the A/AAAA resolution in the response as AR
(Additional Records).
In such a condition, the first occurrence of the host will be taken from
the Additional section, while the second (and next ones) will be processed
by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the
resolution may diverge, like described in github issue #971.
Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname.
IE: with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional ones, a
condition was missing (checking whether said SRV record is already attached
to an Additional one), so the SRV processing stopped on the first record
whose target field matches the Additional record name. Hence only the first
occurrence of a target was managed by an additional record.
This patch adds a condition in this loop to ensure the record being
parsed is not already linked to an Additional Record. If so, we can
carry on the parsing to find a possible next one with the same target
field value.
backport status: 2.2 and above
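The added condition can be sketched as follows (the loop shape is an
approximation of the resolver code and targets_match() is a purely
hypothetical helper standing in for the existing name comparison):

    /* while matching an Additional record against the SRV answer items */
    list_for_each_entry(srv_item, &res->response.answer_list, list) {
            if (srv_item->ar_item)
                    continue;   /* already linked to an Additional record:
                                 * keep scanning for the next same target */
            if (targets_match(srv_item, ar_item)) {   /* hypothetical helper */
                    srv_item->ar_item = ar_item;
                    break;
            }
    }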
Display traces when sending/receiving peer control messages (synchronisation, heartbeat).
Add remaining traces when parsing malformed messages (acks, stick-table definitions)
or ignoring them.
Also add traces when releasing session or when reaching the PEER_SESS_ST_ERRPROTO
peer protocol state.
It's about the third time I get confused by these functions, half of
which manipulate the reference as a whole while the others manipulate only
an entry. For me "pat_ref_commit" means committing the pattern reference,
not just an element, so let's rename it. A number of other ones should
really be renamed before 2.4 gets released :-/
There is no low level api to achieve the same as on Linux/FreeBSD, so we
rely on the number of CPUs available. Without this, the number of threads
is just 1 on Mac while my M1 has 8 cores.
Backporting to 2.1 should be enough if that's possible.
Signed-off-by: David CARLIER <devnexen@gmail.com>
A fatal error is now reported if a server is defined in a frontend
section. Till now, a warning was just emitted and the server was ignored. The
warning was added in 1.3.4 when the frontend/backend keywords were
introduced to allow a smooth transition and to not break existing
configs. It is old enough now to emit a fatal error in this case.
This patch is related to the issue #1043. It may be backported at least as
far as 2.2, and possibly to older versions. It relies on the previous commit
("MINOR: config: Add failifnotcap() to emit an alert on proxy capabilities").
The HAPROXY_CFGFILES env variable is built using a static trash chunk, via a
call to the get_trash_chunk() function. This chunk is supposed to be reserved
during the whole configuration parsing, which is far too long a period to
guarantee it will not be reused. And in fact, it happens in the lua
code since the commit f67442efd ("BUG/MINOR: lua: warn when registering
action, conv, sf, cli or applet multiple times"), when a lua script is
loaded.
To fix the bug, we now use a dynamic buffer instead, and call the memprintf()
function to handle both the allocation and the formatting. Allocation errors
at this stage are fatal.
This patch should fix the issue #1041. It must be backported as far as 2.0.
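The fix follows the usual memprintf() growth pattern, roughly like below
(variable names are illustrative; error handling is reduced to a comment):

    char *env_cfgfiles = NULL;
    struct wordlist *wl;

    list_for_each_entry(wl, &cfg_cfgfiles, list) {
            /* memprintf() reallocates the destination string as needed */
            memprintf(&env_cfgfiles, "%s%s%s",
                      env_cfgfiles ? env_cfgfiles : "",
                      env_cfgfiles ? ";" : "", wl->s);
            /* a NULL result here would be a fatal allocation error */
    }
    setenv("HAPROXY_CFGFILES", env_cfgfiles ? env_cfgfiles : "", 1);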
The strict-limits global option was introduced with commit 0fec3ab7b
("MINOR: init: always fail when setrlimit fails"). When used in
conjunction with master-worker, haproxy will not fail when a setrlimit
fails. This happens because we only exit() if master-worker isn't used.
This patch removes all tests for master-worker mode for all cases covered
by strict-limits scope.
This should be backported from 2.1 onward.
This should fix issue #1042.
Reviewed by William Dauchy <wdauchy@gmail.com>
If a server is defined in a frontend, thus a proxy without the backend
capability, the 'check' and 'agent-check' keywords are ignored. This way, no
check is performed on an ignored server. This avoids a segfault because some
part of the tcpchecks are not fully initialized (or released for frontends
during the post-check).
In addition, a test on the server's proxy capabilities is performed when
checks or agent-checks are initialized, and nothing is performed for servers
attached to a non-backend proxy.
This patch should fix the issue #1043. It must be backported as far as 2.2.
If an error occurs during the sample expression parsing, the allocated
sample_expr is not freed despite having its main pointer reset.
This fixes GitHub issue #1046.
It could be backported as far as 1.8.
This reverts commit 949a7f6459.
The first part of the patch introduces a bug. When a dns answer item is
allocated, its <ar_item> is only initialized at the end of the parsing, when
the item is added in the answer list. Thus, we must not try to release it
during the parsing.
The second part is also probably buggy. It fixes the issue #971 but reverts
a fix for the issue #841 (see commit fb0884c8297 "BUG/MEDIUM: dns: Don't
store additional records in a linked-list"). So it must be at least
revalidated.
This revert fixes a segfault reported in a comment of the issue #971. It
must be backported as far as 2.2.
like it is done in other places, check the return value of
`alloc_trash_chunk` before using it. This was detected by coverity.
this patch fixes commit 591fc3a330
("BUG/MINOR: sample: fix concat() converter's corruption with non-string
variables")
As a consequence, this patch should be backported as far as 2.0.
this should fix github issue #1039.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
I was looking at writing a simple first test for prometheus but I
realised there is no proper way to exclude it if haproxy was not built
with prometheus plugin.
Today we have `REQUIRE_OPTIONS` in reg-tests which is based on `Feature
list` from `haproxy -vv`. Those options are coming from the Makefile
itself.
A plugin is built this way:
EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"
It does register service actions through `service_keywords_register`.
Those are listed through `list_services` in `haproxy -vv`.
To facilitate parsing, I slightly changed the output to a single line
and integrate it in regtests shell script so that we can now specify a
dependency while writing a reg-test for prometheus, e.g:
#REQUIRE_SERVICE=prometheus-exporter
#REQUIRE_SERVICES=prometheus-exporter,foo
There might be other ways to handle this, but that's the cleanest I
found; I understand people might be concerned by this output change in
`haproxy -vv` which goes from:
Available services :
foo
bar
to:
Available services : foo bar
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list, it is always
an array, so first check can be removed
- the expression parser guarantees that we can't have anything else,
because we mentioned json converter takes a mandatory string argument.
Thus test on `ARGT_STR` can be removed as well
- also add a line break between the enum and the function declaration
In order to validate it, add a simple json test testing very simple
cases but can be improved in the future:
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
Signed-off-by: William Dauchy <wdauchy@gmail.com>
GitHub issue #1037 reported a memory leak in deinit() caused by an
allocation made in sa2str() that was stored in srv_set_addr_desc().
When destroying each server for a proxy in deinit, include freeing the
memory in the key of server->addr_node.
The leak was introduced in commit 92149f9a8 ("MEDIUM: stick-tables: Add
srvkey option to stick-table") which is not in any released version so
no backport is needed.
Cc: Tim Duesterhus <tim@bastelstu.be>
Patrick Hemmer reported that calling concat() with an integer variable
causes a %00 to appear at the beginning of the output. Looking at the
code, it's not surprising. The function uses get_trash_chunk() to get
one of the trashes, but can call casting functions which will also use
their trash in turn and will cycle back to ours, causing the trash to
be overwritten before being assigned to a sample.
By allocating the trash from a pool using alloc_trash_chunk(), we can
avoid this. However we must free it so the trash's contents must be
moved to a permanent trash buffer before returning. This is what's
achieved using smp_dup().
This should be backported as far as 2.0.
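The resulting pattern looks roughly like this (a simplified sketch of the
converter's tail, not the full function):

    struct buffer *trash = alloc_trash_chunk();

    if (!trash)
            return 0;
    /* ... append the static parts and the casted variables into <trash>;
     *     each cast is free to use the shared trash ring meanwhile ... */
    smp->data.u.str = *trash;
    smp->data.type = SMP_T_STR;
    smp_dup(smp);               /* copy the result into a permanent trash buffer */
    free_trash_chunk(trash);    /* safe now that the sample owns a copy */
    return 1;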
This is from the output of codespell. It's done at once over a bunch
of files and only affects comments, so there is nothing user-visible.
No backport needed.
During a configuration check valgrind reports:
==14425== 0 bytes in 106 blocks are definitely lost in loss record 1 of 107
==14425== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x443CFC: hlua_alloc (hlua.c:8662)
==14425== by 0x5F72B11: luaM_realloc_ (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F78089: luaH_free (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F707D3: sweeplist (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F710D0: luaC_freeallobjects (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F7715D: close_state (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x443D4C: hlua_deinit (hlua.c:9302)
==14425== by 0x543F88: deinit (haproxy.c:2742)
==14425== by 0x5448E7: deinit_and_exit (haproxy.c:2830)
==14425== by 0x5455D9: init (haproxy.c:2044)
This is due to Lua calling `hlua_alloc()` with `ptr = NULL` and `nsize = 0`.
While `realloc` is supposed to be equivalent to `free()` if the size is `0`,
this is only required for a non-NULL pointer. Apparently my allocator (or valgrind)
actually allocates a zero size area if the pointer is NULL, possibly taking up
some memory for management structures.
Fix this leak by specifically handling the case where both the pointer and the
size are `0`.
This bug appears to have been introduced with the introduction of the
multi-threaded Lua, thus this fix is specific for 2.4. No backport needed.
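A reduced sketch of the fix (the real hlua_alloc() also accounts for the
memory used by Lua, which is omitted here):

    /* Lua allocator, following the lua_Alloc prototype */
    static void *hlua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
            (void)ud; (void)osize;      /* unused in this sketch */

            if (!ptr && !nsize)
                    return NULL;        /* nothing to free nor to allocate */

            if (!nsize) {
                    free(ptr);          /* explicit free instead of realloc(ptr, 0) */
                    return NULL;
            }

            return realloc(ptr, nsize);
    }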
add base support for url encode following RFC3986, supporting `query`
type only.
- add test checking url_enc/url_dec/url_enc
- update documentation
- leave the door open for future changes
this should resolve github issue #941
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Released version 2.4-dev5 with the following main changes :
- BUG/MEDIUM: mux_h2: Add missing braces in h2_snd_buf()around trace+wakeup
- BUILD: hpack: hpack-tbl-t.h uses VAR_ARRAY but does not include compiler.h
- MINOR: time: increase the minimum wakeup interval to 60s
- MINOR: check: do not ignore a connection header for http-check send
- REGTESTS: complete http-check test
- CI: travis-ci: drop coverity scan builds
- MINOR: atomic: don't use ; to separate instruction on aarch64.
- IMPORT: xxhash: update to v0.8.0 that introduces stable XXH3 variant
- MEDIUM: xxhash: use the XXH3 functions to generate 64-bit hashes
- MEDIUM: xxhash: use the XXH_INLINE_ALL macro to inline all functions
- CLEANUP: xxhash: remove the unused src/xxhash.c
- MINOR: sample: add the xxh3 converter
- REGTESTS: add tests for the xxh3 converter
- MINOR: protocol: Create proto_quic QUIC protocol layer.
- MINOR: connection: Attach a "quic_conn" struct to "connection" struct.
- MINOR: quic: Redefine control layer callbacks which are QUIC specific.
- MINOR: ssl_sock: Initialize BIO and SSL objects outside of ssl_sock_init()
- MINOR: connection: Add a new xprt to connection.
- MINOR: ssl: Export definitions required by QUIC.
- MINOR: cfgparse: Do not modify the QUIC xprt when parsing "ssl".
- MINOR: tools: Add support for QUIC addresses parsing.
- MINOR: quic: Add definitions for QUIC protocol.
- MINOR: quic: Import C source code files for QUIC protocol.
- MINOR: listener: Add QUIC info to listeners and receivers.
- MINOR: server: Add QUIC definitions to servers.
- MINOR: ssl: SSL CTX initialization modifications for QUIC.
- MINOR: ssl: QUIC transport parameters parsing.
- MINOR: quic: QUIC socket management finalization.
- MINOR: cfgparse: QUIC default server transport parameters init.
- MINOR: quic: Enable the compilation of QUIC modules.
- MAJOR: quic: Make usage of ebtrees to store QUIC ACK ranges.
- MINOR: quic: Attempt to make trace more readable
- MINOR: quic: Make usage of the congestion control window.
- MINOR: quic: Flag RX packet as ack-eliciting from the generic parser.
- MINOR: quic: Code reordering to help in reviewing/modifying.
- MINOR: quic: Add traces to congestion avoidance NewReno callback.
- MINOR: quic: Display the SSL alert in ->ssl_send_alert() callback.
- MINOR: quic: Update the initial salt to that of draft-29.
- MINOR: quic: Add traces for in flght ack-eliciting packet counter.
- MINOR: quic: make a packet build fails when qc_build_frm() fails.
- MINOR: quic: Add traces for quic_packet_encrypt().
- MINOR: cache: Refactoring of secondary_key building functions
- MINOR: cache: Avoid storing responses whose secondary key was not correctly calculated
- BUG/MINOR: cache: Manage multiple headers in accept-encoding normalization
- MINOR: cache: Add specific secondary key comparison mechanism
- MINOR: http: Add helper functions to trim spaces and tabs
- MEDIUM: cache: Manage a subset of encodings in accept-encoding normalizer
- REGTESTS: cache: Simplify vary.vtc file
- REGTESTS: cache: Add a specific test for the accept-encoding normalizer
- MINOR: cache: Remove redundant test in http_action_req_cache_use
- MINOR: cache: Replace the "process-vary" option's expected values
- CI: GitHub Actions: enable daily Coverity scan
- BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`
- MEDIUM: stick-tables: Add srvkey option to stick-table
- REGTESTS: add test for stickiness using "srvkey addr"
- BUILD: Makefile: disable -Warray-bounds until it's fixed in gcc 11
- BUG/MINOR: sink: Return an allocation failure in __sink_new if strdup() fails
- BUG/MINOR: lua: Fix memory leak error cases in hlua_config_prepend_path
- MINOR: lua: Use consistent error message 'memory allocation failed'
- CLEANUP: Compare the return value of `XXXcmp()` functions with zero
- CLEANUP: Apply the coccinelle patch for `XXXcmp()` on include/
- CLEANUP: Apply the coccinelle patch for `XXXcmp()` on contrib/
- MINOR: qpack: Add static header table definitions for QPACK.
- CLEANUP: qpack: Wrong comment about the draft for QPACK static header table.
- CLEANUP: quic: Remove useless QUIC event trace definitions.
- BUG/MINOR: quic: Possible CRYPTO frame building errors.
- MINOR: quic: Pass quic_conn struct to frame parsers.
- BUG/MINOR: quic: Wrong STREAM frames parsing.
- MINOR: quic: Drop packets with STREAM frames with wrong direction.
- CLEANUP: ssl: Remove useless loop in tlskeys_list_get_next()
- CLEANUP: ssl: Remove useless local variable in tlskeys_list_get_next()
- MINOR: ssl: make tlskeys_list_get_next() take a list element
- Revert "BUILD: Makefile: disable -Warray-bounds until it's fixed in gcc 11"
- BUG/MINOR: cfgparse: Fail if the strdup() for `rule->be.name` for `use_backend` fails
- CLEANUP: mworker: remove duplicate pointer tests in cfg_parse_program()
- CLEANUP: Reduce scope of `header_name` in http_action_store_cache()
- CLEANUP: Reduce scope of `hdr_age` in http_action_store_cache()
- CLEANUP: spoe: fix typo on `var_check_arg` comment
- BUG/MINOR: tcpcheck: Report a L7OK if the last evaluated rule is a send rule
- CI: github actions: build several popular "contrib" tools
- DOC: Improve the message printed when running `make` w/o `TARGET`
- BUG/MEDIUM: server: srv_set_addr_desc() crashes when a server has no address
- REGTESTS: add unresolvable servers to srvkey-addr
- BUG/MINOR: stats: Make stat_l variable used to dump a stat line thread local
- BUG/MINOR: quic: NULL pointer dereferences when building post handshake frames.
- SCRIPTS: improve announce-release to support different tag and versions
- SCRIPTS: make announce release support preparing announces before tag exists
- CLEANUP: assorted typo fixes in the code and comments
- BUG/MINOR: srv: do not init address if backend is disabled
- BUG/MINOR: srv: do not cleanup idle conns if pool max is null
- CLEANUP: assorted typo fixes in the code and comments
- CLEANUP: few extra typo and fixes over last one ("ot" -> "to")
If a server is configured to not have any idle conns, return immediately
from srv_cleanup_connections(). This avoids a segfault when a server is
configured with pool-max-conn set to 0.
This should be backported up to 2.2.
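The guard boils down to an early return, approximately (the condition is
assumed from the description above, not copied from the patch):

    static void srv_cleanup_connections(struct server *srv)
    {
            /* pool-max-conn 0: no idle connection list was allocated for
             * this server, so there is nothing to clean up */
            if (!srv->max_idle_conns)
                    return;

            /* ... walk and purge the idle/safe connection lists ... */
    }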
Do not proceed on init_addr if the backend of the server is marked as
disabled. When marked as disabled, the server is not fully initialized
and some operations must be avoided to prevent a segfault. It is correct
because there is no way to activate a disabled backend.
This fixes the github issue #1031.
This should be backported to 2.2.
Since ee63d4bd6 ("MEDIUM: stats: integrate static proxies stats in new
stats"), all dumped stats for a given domain, the default ones and the
modules ones, are merged in a signle array to dump them in a generic way.
For this purpose, the stat_l global variable is allocated at startup to
store a line of stats before the dump, i.e. all stats of an entity
(frontend, backend, listener, server or dns nameserver). But this variable
is not thread safe. If stats are retrieved concurrently by several clients
on different threads, the same variable is used. This leads to corrupted
stats output.
To fix the bug, the stat_l variable is now thread local.
This patch should probably solve issues #972 and #992. It must be backported
to 2.3.
GitHub Issue #1026 reported a crash during configuration check for the
following example config:
backend 0
server 0 0
server 0 0
HAProxy crashed in srv_set_addr_desc() due to a NULL pointer dereference
caused by `sa2str` returning NULL for an `AF_UNSPEC` address (`0`).
Check to make sure the address key is non-null before using it for
comparison or inserting it into the tree.
The crash was introduced in commit 92149f9a8 ("MEDIUM: stick-tables: Add
srvkey option to stick-table") which not in any released version so no
backport is needed.
Cc: Tim Duesterhus <tim@bastelstu.be>
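The fix amounts to checking the key before using it, roughly as below (the
sa2str() arguments and the surrounding code are assumptions):

    const char *key;

    key = sa2str(&srv->addr, srv->svc_port, 0);  /* may be NULL for AF_UNSPEC */
    if (!key)
            return;   /* no usable address key: skip comparison and insertion */

    /* ... compare with the current key and insert it into the tree ... */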
When all rules of a tcpcheck ruleset are successfully evaluated, the right
check status must always be reported. It is true if the last evaluated rule
is an expect or a connect rule. But not if it is a send rule. In this
situation, nothing more is done until the check timeout expiration and a
L7TOUT is reported instead of a L7OK.
Now, by default, when all rules were successfully evaluated, a L7OK is
reported. When the last evaluated rule is an expect or a connect, the
behavior remains unchanged.
This patch should fix the issue #1027. It must be backported as far as 2.2.
This variable is only needed deeply nested in a single location and clang's
static analyzer complains about a dead initialization. Reduce the scope to
satisfy clang and the human that reads the function.
As reported in issue #1017, there are two harmless duplicate tests in
cfg_parse_program(), one made of a "if" using the same condition as the
loop it's in, and the other one being a null test before a free. This
just removes them. No backport is needed.
This patch fixes GitHub issue #1024.
I could track the `strdup` back to commit
3a1f5fda10 which is 1.9-dev8. It's probably not
worth the effort to backport it across this refactoring.
This patch should be backported to 1.9+.
As reported in issue #1010, gcc-11 as of 2021-01-05 is overzealous in
its -Warray-bounds check as it considers that a cast of a global struct
accesses the entire struct even if only one specific element is accessed.
This instantly breaks all lists making use of container_of() to build
their iterators as soon as the starting point is known if the next
element is retrieved from the list head in a way that is visible to the
compiler's optimizer, because it decides that accessing the list's next
element dereferences the list as a larger struct (which it does not).
The temporary workaround consisted in disabling -Warray-bounds, but this
warning is traditionally quite effective at spotting real bugs, and we
actually have a single occurrence of this issue in the whole code.
By changing the tlskeys_list_get_next() function to take a list element
as the starting point instead of the current element, we can avoid
the starting point issue but this requires to change all call places
to write hideous casts made of &((struct blah*)ref)->list. At the
moment we only have two such call places, the first one being used to
initialize the list (which is the one causing the warning) and which
is thus easy to simplify, and the second one for which we already have
an aliased pointer to the reference that is still valid at the call
place, and given the original pointer also remained unchanged, we can
safely use this alias, and this is safer than leaving a cast there.
Let's make this change now while it's still easy.
The generated code only changed in function cli_io_handler_tlskeys_files()
due to register allocation and the change of variable scope between the
old one and the new one.
`getnext` was only used to fill `ref` at the beginning of the function. Both
have the same type. Replace the parameter name by `ref` to remove the useless
local variable.
After having re-read the RFC, we noticed there are two bugs in the STREAM
frame parser. When the OFF bit (0x04) in the frame type is not set
we must set the offset to 0 (it was not set at all). When the LEN bit (0x02)
is not set we must extend the length of the data field to the end of the packet
(it was not set at all).
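In other words, the parser needs something along these lines (structure and
field names are illustrative; only the 0x04/0x02 bits and the rules come
from the text above):

    /* STREAM frame type with optional OFF (0x04) and LEN (0x02) bits */
    if (!(frm_type & 0x04))
            strm->offset = 0;          /* no Offset field: data starts at 0 */

    if (!(frm_type & 0x02))
            strm->len = end - pos;     /* no Length field: the data extends
                                        * to the end of the packet */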
This issue is due to the fact that when we call the function
responsible for building CRYPTO frames to fill a buffer, the Length
field of this packet did not take into account the trailing 16 bytes for
the AEAD tag. Furthermore, the remaining <room> available in this buffer
was not decremented by the CRYPTO frame length, but only by the CRYPTO data length
of this frame.
This patch fixes GitHub issue #1023.
The function was introduced in commit 99c453d ("MEDIUM: ring: new
section ring to declare custom ring buffers."), which first appeared
in 2.2-dev9. The fix should be backported to 2.2+.
This allows using the address of the server rather than the name of the
server for keeping track of servers in a backend for stickiness.
The peers code was also extended to support feeding the dictionary using
this key instead of the name.
Fixes #814
This patch fixes GitHub Issue #988. Commit ce9e7b2521
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that the
cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.
The accept-encoding normalizer now explicitly manages a subset of
encodings which will all have their own bit in the encoding bitmap
stored in the cache entry. This way two requests with the same primary
key will be served the same cache entry if they both explicitly accept
the stored response's encoding, even if their respective secondary keys
are not the same and do not match the stored response's one.
The actual hash of the accept-encoding will still be used if the
response's encoding is unmanaged.
The encoding matching and the encoding weight parsing are done for every
subpart of the accept-encoding values, and a bitmap of accepted
encodings is built for every request. It is then tested upon any stored
response that has the same primary key until one with an accepted
encoding is found.
The specific "identity" and "*" accept-encoding values are managed too.
When storing a response in the cache, we also parse the content-encoding
header in order to only set the response's corresponding encoding's bit
in its cache_entry encoding bitmap.
This patch fixes GitHub issue #988.
It does not need to be backported.
The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client had two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string. So another client that only managed
gzip would have been sent the cached resource, even if it was a br resource.
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
A request that has more than 16 encodings will be considered as an
illegitimate request and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
If any of the secondary hash normalizing functions raises an error, the
secondary hash will be unusable. In this case, the response will not be
stored anymore.
Add traces to have an idea why this function may fail. In fact
it never fails when the passed parameters are correct, especially the
lengths. This is not the case when a packet is not correctly built
before being encrypted.
Even if the sizes of frames built by qc_build_frm() are computed so as
not to overflow a buffer, do not rely on this and always make a packet
build fail if we could not build a frame.
Also add traces to have an idea where qc_build_frm() fails.
Fixes a memory leak in qc_build_phdshk_apkt().
This salt is used at least up to draft-32. At this date ngtcp2 always
uses this salt even if it started the draft-33 development.
Note that when the salt is not correct, we cannot remove the header
protection. In this case the packet number length is wrong.
At least displays the SSL alert error code passed to ->ssl_send_alert()
QUIC BIO method and the SSL encryption level. This function is newly called
when using picoquic client with a recent version of BoringSSL (Nov 19 2020).
This is not the case with OpenSSL with 32 as QUIC draft implementation.
Reorder by increasing type the switch/case in qc_parse_pkt_frms()
which is the high level frame parser.
Add new STREAM_X frame types to support some tests with ngtcp2 client.
Add ->flags to the QUIC frame parser as this has been done for the builder so
that to flag RX packets as ack-eliciting at low level. This should also be
helpful to maintain the code if we have to add new flags to RX packets.
Remove the statements which do the same thing as at a higher level in
qc_parse_pkt_frms().
Remove ->ifcdata which was there to control the CRYPTO data sent to the
peer so as not to saturate its reception buffer. This was a sort
of flow control.
Add a ->prep_in_flight counter to the QUIC path struct to control the
number of bytes prepared to be sent so as not to saturate the
congestion control window. This counter is increased each time a
packet is built. This has nothing to do with ->in_flight which
is the real in-flight number of bytes which have really been sent.
We are obliged to maintain two such counters to know how many bytes
of data we can prepare before sending them.
Modify traces consequently which were useful to diagnose issues about
the congestion control window usage.
As there is a lot of information in this protocol, it is not
easy to make the traces readable. We remove a few of them here and
shorten some lines by shortening the variable names.
This patch is there to initialize the default transport parameters for QUIC
as a preparation for one of the QUIC next steps to come: fully support QUIC
protocol for haproxy servers.
Implement ->accept_conn() callback for QUIC listener sockets.
Note that this patch also implements quic_session_accept() function
which is similar to session_accept_fd() without calling conn_complete_session()
at this time because we do not have any real QUIC mux.
Makes TLS/TCP and QUIC share the same CTX initializer so as not to modify the
caller, which is an XPRT callback used both by the QUIC xprt and the SSL xprt
over TCP.
This patch adds QUIC structs to the server struct so as to make the QUIC code
compile. Also initializes the ebtree to store the connections by connection
IDs.
This patch adds a quic_transport_params struct to bind_conf struct
used for the listeners. This is to store the QUIC transport parameters
for the listeners. Also initializes them when calling str2listener().
Before str2sa_range() it's too early to figure we're going to speak QUIC,
and after it's too late as listeners are already created. So it seems that
doing it in str2listener() when the protocol is discovered is the best
place.
Also adds two ebtrees to the underlying receivers to store the connections
by connection IDs (one for the original connection IDs, and another
one for the definitive connection IDs which really identify the connections).
However it doesn't seem normal that it is stored in the receiver nor the
listener. There should be a private context in the listener so that
protocols can store internal information. This element should in
fact be the listener handle.
Something still feels wrong, and probably we'll have to make QUIC and
SSL co-exist: a proof of this is that there's some explicit code in
bind_parse_ssl() to prevent the "ssl" keyword from replacing the xprt.
This patch imports all the C files for QUIC protocol implementation with few
modifications from 20200720-quic branch of quic-dev repository found at
https://github.com/haproxytech/quic-dev.
Traces were implemented to help with the development.
When parsing "ssl" keyword for TLS bindings, we must not use the same xprt as the one
for TLS/TCP connections. So, do not modify the QUIC xprt which will be initialized
when parsing QUIC addresses wich "ssl" bindings.
QUIC needs to initialize its BIO and SSL session the same way as for SSL over TCP
connections. It also needs to use the same ClientHello callback.
This patch only exports functions and variables shared between QUIC and SSL/TCP
connections.
This patch extracts the code which initializes the BIO and SSL session
objects so that it can be reused later elsewhere for QUIC connections, which
only need SSL and BIO objects at the TLS stack level to work.
We add src/quic_sock.c with QUIC-specific socket management functions used as
callbacks for the control layer: ->accept_conn, ->default_iocb and
->rx_listening. accept_conn() will have to be defined. The default I/O
handler only calls recvfrom() on the received datagrams. Furthermore, the
->rx_listening callback always returns 1 at this time but should return 0
when reloading the process.
As QUIC is a connection oriented protocol, this file is almost a copy of
proto_tcp without TCP specific features. To suspend/resume a QUIC receiver
we proceed the same way as for proto_udp receivers.
With the recent updates to the listeners, we don't need a specific set of
quic*_add_listener() functions, the default ones are sufficient. The field
declarations were reordered to make the various layers more visible, like in
other protocols.
udp_suspend_receiver/udp_resume_receiver are up-to-date (the check for INHERITED
is present) and the code being UDP-specific, it's normal to use UDP here.
Note that in the future we might more easily reference stacked layers so that
there's no more need for specifying the pointer here.
A new XXH3 variant of hash functions shows a noticeable improvement in
performance (especially on small data), and also brings 128-bit support,
better inlining and streaming capabilities.
Performance comparison is available here:
https://github.com/Cyan4973/xxHash/wiki/Performance-comparison
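As a quick illustration of the public API (a minimal sketch, not haproxy
code):

    #include <stddef.h>
    #include "xxhash.h"

    /* One-shot 64-bit XXH3 hash of a memory area; XXH3_128bits() is the
     * 128-bit variant. */
    static XXH64_hash_t hash_area(const void *data, size_t len)
    {
        return XXH3_64bits(data, len);
    }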
Allow the user to specify a custom Connection header for http-check
send. This is useful for example to implement a websocket upgrade check.
If no connection header has been set, a 'Connection: close' header is
automatically appended to allow the server to close the connection
immediately after the request/response.
Update the documentation related to http-check send.
This fixes GitHub issue #1009.
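A possible configuration using this, for instance to check a websocket
endpoint (server address and values below are only an example):

    backend ws_servers
        option httpchk
        http-check send meth GET uri /chat ver HTTP/1.1 hdr Host ws.example.com hdr Connection upgrade hdr Upgrade websocket
        http-check expect status 101
        server srv1 192.0.2.10:8080 check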
This is a regression in 7838a79ba ("MEDIUM: mux-h2/trace: add lots of traces
all over the code"). The issue was found using -Wmisleading-indentation.
This patch fixes GitHub issue #1015.
The impact of this bug is that it could in theory cause occasional delays
on some long responses for connections having otherwise no traffic.
This patch should be backported to 2.1+, the commit was first tagged in
v2.1-dev2.
This bug happens when a service has multiple records for the same host and
the server provides the A/AAAA resolution in the response as AR (Additional
Records).
In such a condition, the first occurrence of the host will be taken from the
Additional section, while the second (and next ones) will be processed by an
independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the resolution
may diverge, as described in GitHub issue #971.
Because of this behavior, HAProxy mixes various types of requests to resolve
the full list of servers: SRV+AR for all "first" occurrences and A/AAAA for
all other occurrences of an existing hostname.
I.e. with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional records, a
condition was missing (checking whether said SRV record is already attached
to an Additional record): the loop stopped processing SRV records as soon as
the target SRV field matched the Additional record name. Hence only the first
occurrence of a target was managed by an additional record.
This patch adds a condition in this loop to ensure the record being parsed is
not already linked to an Additional record. If it is, we carry on the parsing
to find a possible next one with the same target field value.
backport status: 2.2 and above
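In terms of code, the fix boils down to something like this inside the
matching loop (a sketch with an illustrative field name, not the resolver's
exact one):

    /* Sketch: skip SRV items already bound to an Additional record so that
     * a later item with the same target can still be matched. */
    if (srv_item->already_linked_to_ar)
        continue;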
SSL_CTX_get0_privatekey is an openssl/boringssl specific function present
since openssl-1.0.2; let us define a readable guard for it instead of
depending on HA_OPENSSL_VERSION.
Since commit 8a069eb9a ("MINOR: debug: add a trivial PRNG for scheduler
stress-tests"), 32-bit gcc 4.7 emits this warning when parsing the
initial seed for the debugger's RNG (2463534242):
src/debug.c:46:1: warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
Let's mark it explicitly unsigned.
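The fix simply consists in suffixing the constant, along these lines (the
variable name here is illustrative):

    /* 2463534242 does not fit in a signed 32-bit int, so mark it unsigned */
    static unsigned int debug_rng_seed = 2463534242U;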
On the frontend side, when a conn-stream is detached from a H1 connection,
the H1 stream is destroyed and, if we already have some data to parse (a
pipelined request), we process these data immediately by calling
h1_process(). Then we adjust the H1 connection timeout. But h1_process() may
fail and release the H1 connection, for instance if a parsing error is
reported. Thus, when that happens, we must no longer use the H1 connection
and must exit.
This patch must be backported as far as 2.2. This bug can impact the 2.3 and
the 2.2, in theory, if the H1 stream creation fails. But, concretely, it only
fails on the 2.4 because the requests are now parsed at this step.
When a channel is set in TUNNEL mode, we now always set the CF_NEVER_WAIT
flag, to be sure to never wait before sending data. This is important because
in TUNNEL mode we have no idea whether more data are expected or not. Setting
this flag prevents the MSG_MORE flag from being set on the connection.
It is only a problem with the HTX, since 2.2. On previous versions, the
MSG_MORE flag is only set at the mux's initiative. In fact, the problem
arises because there is an ambiguity in tunnel mode about the HTX_FL_EOI
flag. In this mode, from the mux's point of view, more data are expected as
long as the SHUTR is not received. But from the channel's point of view, we
want to send data asap.
In the short term, this fix is good enough and is valid anyway. But in the
long term a more reliable solution must be found. At least, the to_forward
field must regain its original meaning.
This patch must be backported as far as 2.2.
When a protocol upgrade request is received, once parsed, it waits for the
response in the DONE state. But we must not set the CS_FL_EOI flag because we
don't know yet whether a protocol upgrade will be performed or not.
Now, it is set on the response path, if both sides reached the DONE state. If
a protocol upgrade is finally performed, both sides are switched to the
TUNNEL state. Thus the CS_FL_EOI flag is not set.
If backported, this patch must be adapted because for now it relies on the
latest 2.4-dev changes. It may be backported as far as 2.0.
As stated in RFC 7231, section 4.3.6, an HTTP tunnel via a CONNECT method is
successfully established if the server replies with any 2xx status code.
However, only 200 responses were considered valid. With this patch, any 2xx
response is now considered to establish the tunnel.
This patch may be backported on demand to all stable versions and adapted for
the legacy HTTP. It has worked this way for a very long time and nobody has
complained.
Due to the addition of the OpenTracing filter it is necessary to define
ARGC_OT enum. This value is used in the functions fmt_directive() and
smp_resolve_args().
The OpenTracing filter uses several internal HAProxy functions to work
with variables and therefore requires two static local HAProxy functions,
var_accounting_diff() and var_clear(), to be declared global.
In fact, the var_clear() function was not originally defined as static,
but it lacked a declaration.
This new option allows tuning the maximum number of simultaneous entries with
the same primary key in the cache (secondary entries).
When we try to store a response in the cache and there are already
max-secondary-entries living entries in the cache, the storage will fail (but
the response will still be sent to the client).
It defaults to 10 and has no upper bound.
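For example, in a cache section (values below are arbitrary):

    cache my_cache
        total-max-size 64          # MB of shared memory for the cache
        max-object-size 100000     # bytes
        max-age 240                # seconds
        max-secondary-entries 20   # cap on vary'd responses per primary key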
The secondary entry counter cannot be updated without periodically going over
all the items of a duplicates list. In order to avoid doing it too often and
impacting the cache's performance, a timestamp is added to the cache_entry.
It stores the timestamp (with second precision) of the last iteration over
the list (actually the last call to the clear_expired_duplicates function).
This way, this function will not be called more than once per second for a
given duplicates list.
Add an arbitrary maximum number of secondary entries per primary hash
(10 for now) to the cache. This prevents the cache from being filled
with duplicates of the same resource.
This works thanks to an entry counter that is kept in one of the
duplicates of the list (the last one).
When an entry is added to the list, the ebtree's implementation ensures
that it will be added to the end of the existing list so the only thing
to do to keep the counter updated is to get the previous counter from
the second to last entry.
Likewise, when an entry is explicitly deleted, we update the counter from the
list's last item.
SSL_CTX_add_server_custom_ext is an openssl-specific function present since
openssl-1.0.2; let us define a readable guard for it instead of depending on
HA_OPENSSL_VERSION.
The cache entries are now added into the tree even when they are not
complete yet. If we realized while trying to add a response's payload
that the shctx was full, the entry was disabled through the
disable_cache_entry function, which cleared the key field of the entry's
node, but without actually removing it from the tree. So the shctx row
could be stolen from the entry and the row's content be rewritten while
a lookup in the tree would still find a reference to the old entry. This
caused a random crash in case of cache saturation and row reuse.
This patch adds the missing removal of the node from the tree next to
the reset of the key in disable_cache_entry.
This bug was introduced by commit 3243447 ("MINOR: cache: Add entry
to the tree as soon as possible")
It does not need to be backported.
In issue #1004, it was reported that it is not possible to correctly remove a
certificate after updating it when it came from a crt-list. Indeed the
"commit ssl cert" command on the CLI does not update the list of ckch_inst in
the crtlist_entry. Because of this, the "del ssl crt-list" command removes
neither the instances nor the SNIs because they were never linked to the
crtlist_entry.
This patch fixes the issue by inserting the ckch_inst in the
crtlist_entry once generated.
Must be backported as far as 2.2.
When a frontend H1 connection times out waiting for the next request, a 408
error message is returned to the client. This is performed in the H1C task
process function, h1_timeout_task(), under the idle connection takeover lock.
If the 408 error message cannot be sent immediately, we wait for a next
retry. In this case, the lock must be released.
This bug was introduced by the commit c4bfa59f1d ("MAJOR: mux-h1: Create the
client stream as later as possible") and is specific to the 2.4-DEV. No
backport needed.
Depending on the context, the current eweight or the next one must be used
to reposition a server in the tree. When the server state is updated, for
instance its weight, the next eweight must be used because it is not yet
committed. However, when the server is used, on normal conditions, the
current eweight must be used.
In fact, it is only a bug on the 1.8. On newer versions, the changes on a
server are performed synchronously. But it is safer to rely on the right
eweight value to avoid any future bugs.
On the 1.8, it is important to do so, because the server state is updated and
committed inside the rendez-vous point. Thus, the next server state may be
out of sync with the current state for a short time, while waiting for all
threads to join the rendez-vous point. This is especially a problem if the
next eweight is set to 0, because it must then not be used to reposition the
server in the tree, as that would lead to a divide by 0.
This patch must be backported as far as 1.8.
This changes the subscribe/unsubscribe functions to rely on the control
layer's check_events/ignore_events. At the moment only the socket version
of these functions is present so the code should basically be the same.
Right now the connection subscribe/unsubscribe code needs to manipulate
FDs, which is not compatible with QUIC. In practice what we need there
is to be able to either subscribe or wake up depending on readiness at
the moment of subscription.
This commit introduces two new functions at the control layer, which are
provided by the socket code, to check for FD readiness or subscribe to it
at the control layer. For now they are not used.
Now we don't touch the fd anymore there, instead we rely on the ->drain()
provided by the control layer. As such the function was renamed to
conn_ctrl_drain().
This is what we need to drain pending incoming data from a connection.
The code was taken from conn_sock_drain() without the connection-specific
stuff. It still takes a connection for now for API simplicity.
conn_fd_handler() is 100% specific to socket code. It's about time
it moves to sock.c which manipulates socket FDs. With it comes
conn_fd_check() which tests for the socket's readiness. The ugly
connection status check at the end of the iocb was moved to an inlined
function in connection.h so that if we need it for other socket layers
it's not too hard to reuse.
The code was really only moved and not changed at all.
The send() loop present in this function and its error handling are already
present in raw_sock_from_buf(). Let's rely on it instead and stop touching
the FD from this place. The send flag was changed to use a more agnostic
CO_SFL_*. The name was changed to "conn_ctrl_send()" to remind that it's
meant to be used to send at the lowest level.
Add cur_server_timeout and cur_tunnel_timeout.
These sample fetches return the current timeout value for a stream. This
is useful to retrieve the value of a timeout which was changed via a
set-timeout rule.
Prepare the possibility to register sample fetches on the stream.
This commit is necessary to implement sample fetches to retrieve the
current timeout values.
Add a new http-request action 'set-timeout [server/tunnel]'. This action
can be used to update the server or tunnel timeout of a stream. It takes
two parameters, the timeout name to update and the new timeout value.
This rule is only valid for a proxy with backend capabilities. The
timeout value cannot be null. A sample expression can also be used
instead of a plain value.
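A configuration sketch combining the action with the new sample fetches
(names and values are arbitrary):

    backend be_app
        timeout server 30s
        # give slow export requests a larger server timeout
        http-request set-timeout server 60s if { path_beg /export }
        # expose the effective value, e.g. for debugging
        http-after-response set-header X-Server-Timeout %[cur_server_timeout]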
Allow the modification of the tunnel timeout on the stream side.
Use a new field in the stream for the tunnel timeout. It is initialized with
the tunnel timeout from the backend unless it has already been set by a
set-timeout tunnel rule.
Allow the modification of the timeout server value on the stream side.
Do not apply the default backend server timeout in back_establish if it
is already defined. This is the case if a set-timeout server rule has
been executed.
The parse_size_err() function is now stricter about the size format. The
first character must be a digit, otherwise an error is returned. Thus
"size k" is now rejected.
This patch must be backported to all stable versions.
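For instance, in a ring section (illustrative lines showing what the parser
now accepts or rejects):

    ring traces
        size 32768     # accepted: plain number of bytes
        size 64k       # accepted: digits first, then an optional unit
        size k         # now rejected: no leading digit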
First, an error is now reported if the first character is not a digit. Thus,
"timeout client s" now triggers an error. Then, 'u' is also rejected now.
'us' is valid and should be used to set the timer in microseconds, but 'u'
alone is not a valid unit. It was just ignored before (defaulting to
milliseconds); now it is an error. Finally, a warning is reported if the end
of the text is not reached after the timer parsing. This warning will
probably be switched to an error in a future version.
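Illustrative examples of what the timer parser now accepts or rejects:

    timeout client 30s      # accepted
    timeout client 500us    # accepted: 'us' means microseconds
    timeout client 30       # accepted: no unit defaults to milliseconds
    timeout client s        # rejected: no leading digit
    timeout client 30u      # rejected: 'u' alone is not a valid unit
    timeout client 30szz    # accepted with a warning: trailing characters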
For HTTP expect rules, if the buffer is not empty, it is guaranteed that all
response headers are received, along with the start-line. Thus, except for
payload matching, there is no reason to wait for more data from the moment
the HTX message is not empty.
This patch may be backported as far as 2.2.
The check timeout is used to limit a health-check execution. By default the
inter timeout is used, but when defined, the check timeout is used instead.
In this case, the inter timeout (or connect timeout) is used for the
connection establishment only, and the check timeout for the health-check
execution. Thus, it must be set after a successful connect, which means it is
rearmed at the end of each connect rule.
This patch, with the previous one (BUG/MINOR: http-check: Use right condition
to consider HTX message as full), should solve issue #991. It must be
backported as far as 2.2. On the 2.3 and 2.2, there are 2 places where the
connection establishment is handled. The check timeout must be set in both.
When an HTTP expect rule is evaluated, we must know whether more data is
expected or not, to decide whether to wait when the matching fails. If the
whole response is received or if the HTX message is full, we must not wait.
In this context, htx_free_data_space() must be used instead of
htx_free_space(). The first one subtracts the block size. Otherwise, at the
edge, when only the block size remains free (8 bytes), we may think there is
some room for more data while the mux is unable to add another block.
This bug explains the loop described in GH issue #991. It should be
backported as far as 2.2.
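In other words, the "wait for more data" condition should look roughly like
this (a simplified sketch; msg_complete stands for the real completeness
test):

    /* Sketch only: wait only if the response may still grow. */
    if (!msg_complete && htx_free_data_space(htx) > 0)
        goto wait_more_data;  /* the mux can still add at least one block */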
This last call to conn_cond_update_polling() is now totally misleading as
the function only stops polling in case of unrecoverable connection error.
Let's open-code the test to make it more prominent and explain what we're
trying to do there. It's even almost certain this code is never executed
anymore, as the only remaining case should be a mux's wake function setting
CO_FL_ERROR without disabling the polling, but they need to be audited first
to make sure this is the case.
This was a leftover of the pre-mux v1.8-dev3 era. It makes no sense anymore
to try to disable polling on a connection we don't own, it's the mux's job
and it's properly done upon shutdowns and closes.
As explained in previous commit, the situation is absurd as we try to
cleanly drain pending data before impolitely shutting down, and it could
be counter productive on real muxes. Let's use cs_drain_and_close() instead.
When the shutr() requests CS_SHR_DRAIN and there's no particular shutr
implemented on the underlying transport layer, we must drain pending data.
This is what happens when cs_drain_and_close() is called. It is important
for TCP checks to drain large responses and close cleanly.
Not only has it become totally useless with muxes, it is also dangerous to
play with the mux's FD while shutting a stream down for
writes. It's already done *if necessary* by the cs_shutw() code at the
mux layer. Fortunately it doesn't seem to have any impact, most likely
the polling updates used to immediately revert this operation.
conn_stop_polling() in fact only calls fd_stop_both() after checking
that the ctrl layer is ready. It's the case in conn_fd_check() so
let's get rid of this next-to-last user of this function.
The duplicated entries (in case of vary) were not taken into account by
the "show cache" command. They are now dumped too.
A new "vary" column is added to the output. It contains the complete
seocndary key (in hex format).
QUIC will rely on UDP at the receiver level, and will need these functions
to suspend/resume the receivers. In the future, protocol chaining may
simplify this.
Currently conn_ctrl_init() does an fd_insert() and conn_ctrl_close() does an
fd_delete(). These are the only two short-term obstacles against using a
non-fd handle to set up a connection. Let's move these into the protocol
layer, along with the other connection-level stuff, so that the generic
connection code uses them instead. This will allow defining new ones for
other protocols (e.g. QUIC).
Since we only support regular sockets at the moment, the code was placed
into sock.c and shared with proto_tcp, proto_uxst and proto_sockpair.
For the sake of an improved readability, let's group the protocol
field members according to where they're supposed to be defined:
- connection layer (note: for now even UDP needs one)
- binding layer
- address family
- socket layer
Nothing else was changed.
The various protocols were made static since there was no point in
exporting them in the past. Nowadays with QUIC relying on UDP we'll
significantly benefit from UDP being exported and more generally from
being able to declare some functions as being the same as other
protocols'.
In an ideal world it should not be these protocols which should be
exported, but the intermediary levels:
- socket layer (sock.c only right now), already exported as functions
but nothing structured at the moment ;
- family layer (sock_inet, sock_unix, sockpair etc): already structured
and exported
- binding layer (the part that relies on the receiver): currently fused
within the protocol
- connection layer (the part that manipulates connections): currently
fused within the protocol
- protocol (connection's control): shouldn't need to be exposed
ultimately once the elements above are in an easily sharable way.
This field used to be needed before commit 2b5e0d8b6 ("MEDIUM: proto_udp:
replace last AF_CUST_UDP* with AF_INET*") as it was used as a protocol
entry selector. Since this commit it's always equal to the socket family's
value so it's entirely redundant. Let's remove it now to simplify the
protocol definition a little bit.
At the end of stream_new(), once the input buffer is transferred to the
request channel, it must not be used anymore. The previous patch (16df178b6
"BUG/MEDIUM: stream: Xfer the input buffer to a fully created stream") was
pushed too quickly.
No backport needed.
The input buffer passed as argument to create a new stream must not be
transferred when the request channel is initialized because the channel
flags are not set at this stage. In addition, the API is a bit confusing
regarding the buffer owner when an error occurred. The caller remains the
owner, but reading the code it is not obvious.
So, first of all, to avoid any ambiguities, comments are added on the
calling chain to make it clear. The buffer owner is the caller if any error
occurred. And the ownership is transferred to the stream on success.
Then, to make things simple, the ownership is transferred at the end of
stream_new() in case of success, and the input buffer is updated to point to
BUF_NULL. Thus, in all cases, if the caller tries to release it by calling
b_free() on it, it is not a problem. Of course, it remains the caller's
responsibility to release it on error.
The patch fixes a bug introduced by the commit 26256f86e ("MINOR: stream:
Pass an optional input buffer when a stream is created"). No backport is
needed.
Since HAProxy 2.3, OpenSSL 1.1.1 is a requirement for using a
multi-certificate bundle in the configuration. This patch emits a fatal error
when HAProxy tries to load a bundle with an older version of OpenSSL.
This problem was encountered by a user in issue #990.
This must be backported in 2.3.
With the removal of the family-specific port setting, all protocols had
exactly the same implementation of ->add(). A generic one was created
with the name "default_add_listener" so that all other ones can now be
removed. The API was slightly adjusted so that the protocol and the
listener are passed instead of the listener and the port.
Note that all protocols continue to provide this ->add() method instead
of routinely calling default_add_listener() from create_listeners(). This
makes sure that any non-standard protocol will still be able to intercept
the listener addition if needed.
This could be backported to 2.3 along with the few previous patches on
listeners as a pure code cleanup.
In create_listeners() we iterate over a port range and call the
protocol's ->add() function to add a new listener on the specified
port. Only tcp4/tcp6/udp4/udp6 support a port, the other ones ignore
it. Now that we can rely on the address family to properly set the
port, better do it this way directly from create_listeners() and
remove the family-specific case from the protocol layer.
At various places we need to set a port on an IPv4 or IPv6 address, and
it requires casts that are easy to get wrong. Let's add a new set_port()
helper to the address family to assist in this. It will be directly
accessible from the protocol and will make the operation seamless.
Right now this is only implemented for sock_inet as other families do
not need a port.
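Such a helper essentially centralizes the per-family casts, along these lines
(a sketch; the real helper is plugged into the address family descriptor):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Sketch of the per-family port assignment the helper hides. */
    static void sa_set_port(struct sockaddr_storage *addr, int port)
    {
        if (addr->ss_family == AF_INET)
            ((struct sockaddr_in *)addr)->sin_port = htons(port);
        else if (addr->ss_family == AF_INET6)
            ((struct sockaddr_in6 *)addr)->sin6_port = htons(port);
    }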
When a H1 message is parsed, if the parser state is switched to TUNNEL mode
just after the header parsing, the BODYLESS flag is set on the HTX
start-line. By transitivity, the corresponding flag is set on the message in
HTTP analysers. Thus it is possible to rely on it to not wait for the
request body.
The CNT_LEN and TE_CHNK flags must be set on the message only when the
corresponding flag is set on the HTX start-line. Before, when the transfer
length was known (XFER_LEN set), HTTP_MSGF_TE_CHNK was the default. But it is
not appropriate. Now, it is only set if the message is chunked. Thus, it is
now possible to have a known transfer length without CNT_LEN or TE_CHNK.
In addition, the BODYLESS flag may be set independently of the XFER_LEN one.
This flag is now unused. It was used in the REQ_WAIT_HTTP analyser, when a
stream was waiting for a request, to set the keep-alive timeout or to avoid
sending HTTP errors to the client.
It is now impossible to start the HTTP request processing in the stream
analysers with a partial or empty request message. The mux-h2 was already
waiting for the request headers before creating the stream. Now the mux-h1
does the same. All errors (aborts, timeouts or invalid requests) happening
while waiting for the request headers are now handled by the multiplexers. So
there is no reason to still handle them in the REQ_WAIT_HTTP
(http_wait_for_request) analyser.
To ensure there is no ambiguity, a BUG_ON() was added to exit if a partial
request is received in this analyser.
H1C_F_CS_* flags are renamed to H1C_F_ST_*. They reflect the connection
state. So "ST" is well suited. "CS" is confusing because it is also the
abbreviation for conn-stream.
In addition, H1C flags are reordered.
This is the reason for all previous patches. The conn-stream and the
associated stream are created as late as possible. It only concerns the
frontend connections. But it means the request headers, and possibly the
first data block, are received and parsed before the conn-stream creation. To
do so, an embryonic H1 stream, with no conn-stream, is created. The result of
this "early parsing" is stored in its rx buffer, used to fill the request
channel when the stream is created. During this step, some HTTP errors may be
returned by the mux. It must also handle http-request/keep-alive timeouts. A
significant change concerns the H1 to H2 upgrade: it now happens very early,
and no H1 stream is created (and thus of course no conn-stream).
The most important part of this patch is located in the h1_process()
function, because it must trigger the parsing when there is no H1 stream. The
h1_recv() function has also been simplified.
For now, this part is unused. But this patch adds functions to handle errors
on idle and embryonic H1 connections and send the corresponding HTTP error
messages to the client (400, 408 or 500). Thanks to previous patches, these
functions take care of updating the right stats counters, as well as the
counters tracked by the session.
A field to store the HTTP error code has been added to the H1C structure. It
is used for error retransmits, if any, and to report it in HTTP logs. It is
used to return the mux exit status code when the MUX_EXIT_STATUS ctl
parameter is requested.
When a log message is emitted from the session level, by a multiplexer, there
is no stream. Thus for an HTTP session, there is no status code and the
termination flags are not correctly set.
Thanks to the previous patch, the HTTP status code is deduced from the mux
exit status, using the MUX_EXIT_STATUS ctl param. This is only done for HTTP
frontends. If it is defined (!= 0), it is used to deduce the termination
flags.
The MUX_EXIT_STATUS ctl param can be requested to get the exit status of a
multiplexer. For instance, it may be an HTTP status code or an H2 error. For
now, 0 is always returned. When the mux h1 is able to return HTTP errors
itself, this ctl param will be used to get the HTTP status code for the logs.
The mux_exit_status enum has been created to map internal mux exit statuses
to generic ones. Thus there are 5 possible statuses for now: success, invalid
error, timeout error, internal error and unknown.
The cumulative numbers of HTTP requests, HTTP errors, bytes received and sent
and their respective rates for tracked counters are now updated using
specific, stream-independent functions. These functions are used by the
stream but the aim is to allow the session to do so too. For now, there is
no reason to perform these updates from the session, except from the mux-h2
maybe. But, the mux-h1, on the frontend side, will be able to return some
errors to the client, before the stream creation. In this case, it will be
mandatory to update counters tracked at the session level.
An idle expiration date is added to the H1 connection, along with the
function to set it depending on the connection state. First, there is no idle
timeout on backend connections. For idle frontend connections, the
http-request or keep-alive timeout is used, depending on which one is defined
and whether it is the first request or not. For embryonic connections, the
http-request timeout is always used, if defined. For attached or shut down
connections, no idle timeout is applied.
For now the idle expiration date is never set and the h1_set_idle_expiration
function remains unused.
When the conn-stream is detached from a H1 connection, there is no reason to
subscribe for reads or process pending input data if the connection is not
idle, because it means a shutdown is pending.
Conditions to set a timeout on the H1C task have been simplified, or at least
changed, to rely on the H1 connection flags. Now, the following rules are
used:
* The shutdown timeout is applied on dead (not alive) or shut down
  connections.
* The client/server timeout is applied if there are still some pending
  outgoing data.
* The client timeout is applied on alive frontend connections with no
  conn-stream, i.e. on idle or embryonic frontend connections.
* For all other connections (backend or attached connections), no timeout is
  applied. For frontend or backend attached connections, the timeout is
  handled by the application layer. For idle backend connections, there is no
  timeout.
We now only rely on one flag to notify a shutdown. The shutdown is performed
at the connection level when there are no more pending outgoing data. So, it
means it is performed immediately if the output buffer is empty; otherwise it
is deferred until the outgoing data are sent.
This simplifies the mux a bit because there is now only one flag to check.
Don't try to read more data if a parsing or a formatting error was reported
on the H1 stream. There is no reason to continue to process the messages for
the current connection in this case. If a parsing error occurs, it means the
input is invalid. If a formatting error occurs, it is an internal error and
it is probably safer to give up.
Mainly to make it easier to read. First of all, when a H1 connection is
still there, we check if the connection was stolen by another thread or
not. If so, we release the task and leave. Then we check if the task is
expired or not. Only expired tasks are considered. Finally, if a conn-stream
is still attached to the connection (H1C_F_CS_ATTACHED flag set), we
return. Otherwise, the task and the H1 connection are released.
Be prepared to have a H1 connection in one of the following states :
* A H1 connection waiting for a new message with no H1 stream.
H1C_F_CS_IDLE flag is set.
* A H1 connection processing a new message with a H1 stream but no
conn-stream attached. H1C_F_CS_EMBRYONIC flag is set
* A H1 connection with a H1 stream and a conn-stream attached.
H1C_F_CS_ATTACHED flag is set.
* A H1 connection with no H1 stream, waiting to be released. No flag is set.
These flags are mutually exclusive. When none is set, it means the connection
will be released ASAP; only the remaining outgoing data must be sent first.
For now, the second state (H1C_F_CS_EMBRYONIC) is transient.
For now this buffer is not used. But it will be used to parse the headers,
and possibly the first block of data, when no stream is attached to the H1
connection. The aim is to use it to create the stream, thanks to the recent
changes to the stream creation API.
Dedicated functions are now used to create frontend and backend H1
streams. h1c_frt_stream_new() is now used to create frontend H1 streams and
h1c_bck_stream_new() to create backend ones. Both rely on h1s_new() function
to allocate the stream itself. It is a bit easier to add specific processing
depending on whether we are on the frontend or the backend side.
Instead of using H1S flags to report an error on the request or the response,
regardless of whether it is a parsing or a formatting error, we now use one
flag to report parsing errors and another one to report formatting ones. This
simplifies the message parsing. It is also easier to figure out what error
happened when one of these flags is set. The side may be deduced by checking
the H1C_F_IS_BACK flag.
Instead of using 2 flags on the H1 stream (H1S_F_BUF_FLUSH and
H1S_F_SPLICED_DATA), we now only use one flag on the H1 connection
(H1C_F_WANT_SPLICE) to notify that we want to use splicing or that we are
already using it. This flag blocks the calls to the rcv_buf() connection
callback.
It is a bit easier to set the H1 connection's capability to receive data in
its input buffer instead of relying on the H1 stream.
H1C_F_WAIT_OPPOSITE must be set on the H1 connection to stop reading more
data, because we must stay in sync with the opposite side. This flag replaces
the H1C_F_IN_BUSY flag. Its name is a bit more explicit. It is automatically
set on the backend side when the mux is created. It is safe to do so because
at this stage, the request has not yet been sent to the server. This way, in
h1_recv_allowed(), a test on this flag is enough to block the reads instead
of testing the H1 stream state on the backend side.