Ensure that no more than olen bytes are written to the output buffer,
otherwise we might experience unexpected behavior.
While the original code used to validate that the output size was
always large enough before starting to write, this validation was
later broken by the commit below, allowing 3-byte blocks to be written to
areas whose size is not a multiple of 3:
commit ed697e4856
Author: Emeric Brun <ebrun@haproxy.com>
Date: Mon Jan 14 14:38:39 2019 +0100
BUG/MINOR: base64: dec func ignores padding for output size checking
Decode function returns an error even if the ouptut buffer is
large enought because the padding was not considered. This
case was never met with current code base.
For base64urldec(), it's basically the same problem except that since
the input format supports arbitrary lengths, the problem has always
been there since its introduction in 2.4.
This should be backported to all stable branches having a backport of
the patch above (i.e. 2.0), with some adjustments depending on the
availability of the base64dec() and base64urldec().
The httpclient frees the servers and proxies it uses; however, since
they are included in the global proxy list, haproxy already frees them
during the deinit. We can safely remove these frees.
The sc-set-gpt0() parser was extended in 2.1 by commit 0d7712dff ("MINOR:
stick-table: allow sc-set-gpt0 to set value from an expression") to support
sample expressions in addition to plain integers. However there is a
subtlety there, which is that while the arg position must be incremented
when parsing an integer, it must not be touched when calling an expression
since the expression parser already does it.
The effect is that rules making use of sc-set-gpt0() followed by an
expression always ignore one word after that expression, and will typically
fail to parse if followed by an "if" as the parser will restart after the
"if". With no condition it's different because an empty condition doesn't
result in trying to parse anything.
This patch moves the increment at the right place and adds a few
explanations for a code part that was far from being obvious.
This should be backported to branches having the commit above (2.1+).
Implements a way of checking the running OpenSSL version:
If OpenSSL support was not compiled into HAProxy it will return an
error, so it's recommended to do an SSL feature check first:
$ ./haproxy -cc 'feature(OPENSSL) && openssl_version_atleast(0.9.8zh) && openssl_version_before(3.0.0)'
This will allow selecting the SSL reg-tests more carefully.
Some users are facing huge CPU usage or even watchdog panics due to
the Lua global lock when many threads compete on it, but they have
no way to see that in the usual dumps. We take the lock at 2 or 3
places only, thus it's trivial to move it to a global function so
that stack dumps will now explicitly show it, increasing the chance
that it rings a bell and that someone suggests switching to lua-load-per-thread:
Current executing Lua from a stream analyser -- stack traceback:
loop.lua:1: in function line 1
call trace(27):
| 0x5ff157 [48 83 c4 10 5b 5d 41 5c]: wdt_handler+0xf7/0x104
| 0x7fe37fe82690 [48 c7 c0 0f 00 00 00 0f]: libpthread:+0x13690
| 0x614340 [66 48 0f 7e c9 48 01 c2]: main+0x1e8a40
| 0x607b85 [48 83 c4 08 48 89 df 31]: main+0x1dc285
| 0x6070bc [48 8b 44 24 20 48 8b 14]: main+0x1db7bc
| 0x607d37 [41 89 c4 89 44 24 1c 83]: lua_resume+0xc7/0x214
| 0x464ad6 [83 f8 06 0f 87 f1 01 00]: main+0x391d6
| 0x4691a7 [83 f8 06 0f 87 03 20 fc]: main+0x3d8a7
| 0x51dacb [85 c0 74 61 48 8b 5d 20]: sample_process+0x4b/0xf7
| 0x51e55c [48 85 c0 74 3f 64 48 63]: sample_fetch_as_type+0x3c/0x9b
| 0x525613 [48 89 c6 48 85 c0 0f 84]: sess_build_logline+0x2443/0x3cae
| 0x4af0be [4c 63 e8 4c 03 6d 10 4c]: http_apply_redirect_rule+0xbfe/0xdf8
| 0x4af523 [83 f8 01 19 c0 83 e0 03]: main+0x83c23
| 0x4b2326 [83 f8 07 0f 87 99 00 00]: http_process_req_common+0xf6/0x15f6
| 0x4d5b30 [85 c0 0f 85 9f f5 ff ff]: process_stream+0x2010/0x4e18
It also allows "perf top" to directly show the time spent on this lock.
This may be backported to some stable versions as it improves the
overall debuggability.
Include the correct .h files in http_client.c and http_client.h.
The api.h is needed in http_client.c and http_client-t.h is now included
directly from http_client.h.
Reported by coverity in ticket #1355
CID 1461505: Memory - illegal accesses (UNINIT)
Using uninitialized value "sl".
Fix the problem by initializing sl to NULL.
Proxies must call proxy_preset_defaults() to initialize their settings
that are usually learned from defaults sections (e.g. connection retries,
pool purge delay etc). At the moment there was likely no impact, but not
doing so could cause trouble soon when using the client more extensively
or when new defaults are introduced and fail to be initialized.
No backport is needed.
Recent commit 83614a9fb ("MINOR: httpclient: initialize the proxy") broke
reg tests that match the output of "show stats" or "show servers state"
because it changed the proxies' numeric ID.
In fact it did nothing wrong, it just registers a proxy and adds it at
the head of the list. But the automatic numbering scheme, which was made
to make sure that temporarily disabled proxies in the config keep their
ID instead of shifting all others, sees one more proxy and increments
next_pxid for all subsequent proxies.
This patch avoids this by not assigning automatic IDs to such internal
proxies, leaving them with their ID of -1, and by not shifting next_pxid
for them. This is important because the user might experience them
appearing or disappearing depending on apparently unrelated config
options or build options, and this must not cause visible proxy IDs
to change (e.g. stats or monitoring may break).
Though the issue has always been there, it only became a problem with
the recent proxy additions so there is no need to backport this.
The X509_STORE_CTX_get0_cert did not exist yet on OpenSSL 1.0.2 and
neither did X509_STORE_CTX_get0_chain, which was not actually needed
since its get1 equivalent already existed.
RFC7540 states that :path follows RFC3986's path-absolute. However
that was a bug introduced between draft 04 and draft 05
of the spec, which implicitly causes paths starting with "//" to be
forbidden. HTTP/1 (and now HTTP core semantics) made it explicit
that the request-target in origin-form follows a purposely defined
absolute-path defined as 1*(/ segment) to explicitly allow "//".
http2bis now fixes this by relying on absolute-path so that "//"
becomes valid and matches other versions. Full discussion here:
https://lists.w3.org/Archives/Public/ietf-http-wg/2021JulSep/0245.html
This issue appeared in haproxy with commit 4b8852c70 ("BUG/MAJOR: h2:
verify that :path starts with a '/' before concatenating it") when
making the checks on :path fully comply with the spec, and was backported
as far as 2.0, so this fix must be backported there as well to allow
"//" in H2 again.
Most of the SSL sample fetches related to the client certificate were
based on the SSL_get_peer_certificate function which returns NULL when
the verification process failed. This made it impossible to use those
fetches in a log format since they would always be empty.
The patch adds a reference to the X509 object representing the client
certificate in the SSL structure and makes use of this reference in the
fetches.
The reference can only be obtained in ssl_sock_bind_verifycbk which
means that in case of an SSL error occurring before the verification
process ("no shared cipher" for instance, which happens while processing
the Client Hello), we won't ever start the verification process and it
will be impossible to get information about the client certificate.
This patch also allows most of the ssl_c_XXX fetches to return a usable
value in case of connection failure (because of a verification error for
instance) by making the "conn->flags & CO_FL_WAIT_XPRT" test (which
requires a connection to be established) less strict.
Thanks to this patch, a log-format such as the following should return
usable information in case of an error occurring during the verification
process :
log-format "DN=%{+Q}[ssl_c_s_dn] serial=%[ssl_c_serial,hex] \
hash=%[ssl_c_sha1,hex]"
This should address GitHub issue #693.
Change the User-Agent from "HAProxy HTTP client" to "HAProxy" as the
previous name is not valid according to RFC 7231#5.5.3.
This patch fixes issue #1354.
This commit implements an HTTP Client over the CLI; this was made as a
working example for the HTTP Client API.
It is usable over the CLI by specifying a method and a URL:
echo "httpclient GET http://127.0.0.1:8000/demo.file" | socat /tmp/haproxy.sock -
Only IP addresses are accessible since the API does not allow resolving
addresses yet.
This commit implements a very simple HTTP Client API.
A client can be operated by several functions:
- httpclient_new(), httpclient_destroy(): create
and destroy the struct httpclient instance.
- httpclient_req_gen(): generate a complete HTX request using the
absolute URL, the method and a list of headers. This request
is complete and sets the HTX End of Message flag. This is limited
to small requests that don't need a body.
- httpclient_start(): fill a sockaddr_storage with an IP extracted
from the URL (it cannot resolve an FQDN for now) and start the
applet. It also stores the pointer of the caller, which could be an
appctx or something else.
- hc->ops contains a list of callbacks used by the
HTTPClient, they should be filled manually after an
httpclient_new():
* res_stline(): the client received a start line, its content
will be stored in hc->res.vsn, hc->res.status, hc->res.reason
* res_headers(): the client received headers, they are stored in
hc->res.hdrs.
* res_payload(): the client received some payload data, they are
stored in the hc->res.buf buffer and could be extracted with the
httpclient_res_xfer() function, which takes a destination buffer
as a parameter
* res_end(): this callback is called once the response has been
fully received.
Initialize a proxy which contains a server for raw HTTP, and another
one for HTTPS. This proxy will use the global server log definition
and the 'option httplog' directive.
This proxy is internal and will only be used for the HTTP Client API.
The wording regarding Host vs :authority in RFC7540 is ambiguous as it
says that an intermediary must produce a host header from :authority if
Host is missing, but, contrary to HTTP/1.1, doesn't say anything regarding
the possibility that Host and :authority differ, which leaves Host with
higher precedence there. In addition it mentions that clients should use
:authority *instead* of Host, and that H1->H2 should use :authority only
if the original request was in authority form. This leaves some gray
area in the middle of the chain for fully valid H2 requests bearing a
Host header that are forwarded to the other side where it's possible to
drop the Host header and use the authority only after forwarding to a
second H2 layer, thus possibly seeing two different values of Host at
a different stage. There's no such issue when forwarding from H2 to H1
as the authority is dropped and only the Host is kept.
Note that the following rule is sufficient to re-normalize such a
request:
http-request set-header host %[req.hdr(host)]
The new spec in progress (draft-ietf-httpbis-http2bis-03) addresses
this trouble by being a bit stricter on these rules. It clarifies
that :authority must always be used instead of Host and that Host ought
to be ignored. This is much saner as it avoids conveying two distinct
values along the chain. This becomes the protocol-level equivalent of:
http-request set-uri %[url]
So this patch does exactly this, which we were initially a bit reluctant
to do for lack of visibility about other implementations'
expectations. In addition it slightly simplifies the Host header field
creation by always placing it first in the list of headers instead of
last; this could also speed up the lookup a little bit.
This needs to be backported to 2.0. Non-HTX versions are safe regarding
this because they drop the URI during the conversion to HTTP/1.1 so
only Host is used and transmitted.
Thanks to Tim Düsterhus for reporting that one.
Before HTX was introduced, all the HTTP request elements passed in
pseudo-headers fields were used to build an HTTP/1 request whose syntax
was then scrutinized by the HTTP/1 parser, leaving no room to inject
invalid characters.
While NUL, CR and LF are properly blocked, it is possible to inject
spaces in the method so that once translated to HTTP/1, fields are
shifted by one space, and a lenient HTTP/1 server could possibly be
fooled into using a part of the method as the URI. For example, the
following request:
H2 request
:method: "GET /admin? HTTP/1.1"
:path: "/static/images"
would become:
GET /admin? HTTP/1.1 /static/images HTTP/1.1
It's important to note that the resulting request is *not* valid, and
that in order for this to be a problem, it requires that this request
is delivered to an already vulnerable HTTP/1 server.
A workaround here is to reject malformed methods by placing this rule
in the frontend or backend, at least before leaving haproxy in H1:
http-request reject if { method -m reg [^A-Z0-9] }
Alternately H2 may be globally disabled by commenting out the "alpn"
directive on "bind" lines, and by rejecting H2 streams creation by
adding the following statement to the global section:
tune.h2.max-concurrent-streams 0
This patch adds a check for each character of the method to make sure
they belong to the ones permitted in a token, as mentioned in RFC7231#4.1.
This should be backported to versions 2.0 and above. For older versions
not having HTX_FL_PARSING_ERROR, a "goto fail" works as well as it
results in a protocol error at the stream level. Non-HTX versions are
safe because the resulting invalid request will be rejected by the
internal HTTP/1 parser.
Thanks to Tim Düsterhus for reporting that one.
Tim Düsterhus found that while the H2 path is checked for non-emptiness,
invalid chars and '*', a test is missing to verify that except for '*',
it always starts with exactly one '/'. During the reconstruction of the
full URI when passing to HTX, this missing test allows the apparent
authority to be affected by appending a port number or a suffix name.
This only affects H2-to-H2 communications, as H2-to-H1 do not use the
full URI. Like for the previous fix, the following rule inserted before
other ones in the frontend is sufficient to renormalize the internal
URI and let haproxy see the same authority as the target server:
http-request set-uri %[url]
This needs to be backported to 2.2. Earlier versions do not rebuild a
full URI using the authority and will fail on the malformed path at the
HTTP layer, so they are safe.
While we do explicitly check for strict character sets in the scheme,
this is only done when extracting URL components from an assembled one,
and we have special handling for "http" and "https" schemes directly in
the H2-to-HTX conversion. Sadly, this lets all other ones pass through
if they start exactly with "http://" or "https://", allowing the
reconstructed URI to start with a different looking authority if it was
part of the scheme.
It's interesting to note that in this case the valid authority is in
the Host header and that the request will only be wrong if emitted over
H2 on the backend side, since H1 will not emit an absolute URI by
default and will drop the scheme. So in essence, this is a variant of
the scheme-based attack described below in that it only affects H2-H2
and not H2-H1 forwarding:
https://portswigger.net/research/http2
As such, a simple workaround consists in just inserting the following
rule before other ones in the frontend, which will have for effect to
renormalize the authority in the request line according to the
concatenated version (making haproxy see the same authority and host
as what the target server will see):
http-request set-uri %[url]
This patch simply adds the missing syntax checks for non-http/https
schemes before the concatenation in the H2 code. An improvement may
consist in the future in splitting these ones apart in the start
line so that only the "url" sample fetch function requires to access
them together and that all other places continue to access them
separately. This will then allow the core code to perform such checks
itself.
The patch needs to be backported as far as 2.2. Before 2.2 the full
URI was not being reconstructed so the scheme and authority part were
always dropped from H2 requests to leave only origin requests. Note
for backporters: this depends on this previous patch:
MINOR: http: add a new function http_validate_scheme() to validate a scheme
Many thanks to Tim Düsterhus for figuring that one out and providing a
reproducer.
While http_parse_scheme() extracts a scheme from a URI by extracting
exactly the valid characters and stopping on delimiters, this new
function performs the same on a fixed-size string.
txn functions can now be called from an action or a filter context. Thus the
return code must be adapted depending on this context. From an action, act.ABORT
is returned. From a filter, -1 is returned. It is the filter error code.
This bug only affects 2.5-dev. No backport needed.
CF_FLT_ANALYZE flags may be set before the FLT_END analyser. Thus if an error is
triggered in the meantime, this may block the stream and prevent it from being
released. It is indeed a problem only for the response channel because the
response analysers may be skipped on early errors.
So, to prevent any issue, depending on the code path, the FLT_END analyser is
systematically set when the CF_FLT_ANALYZE flag is set.
This patch must be backported in all stable branches.
The internal proxies should be part of the proxies list; because of
this, the check_config_validity() function could emit warnings about
these proxies.
This patch disables 3 startup warnings for internal proxies:
- "has no 'bind' directive" (this one was already ignored for the CLI
frontend, but we made it generic instead)
- "missing timeouts"
- "log format ignored"
User reported that the config check returns an error with the message:
"Configuration file has no error but will not start (no listener) => exit(2)."
if the configuration contains only a log-forward section with bind or dgram-bind
listeners but no listen/backend nor peers sections.
The process checked if there was a 'peers' section available with
an internal frontend (and so a listener), or a 'listen/backend'
section not disabled with at least one configured listener (in the
global proxies_list). Since the log-forward proxies appear in a
different list, they were not checked.
This patch adds a lookup on the 'log-forward' proxies list to check
if one of them presents a listener and is not disabled. And
this is done only if there was no available listener found in the
'listen/backend' sections.
I have also studied how to re-work this check considering the 'listeners'
counter used after startup/init to keep the same algo and avoid further
mistakes, but currently this counter seems to be increased during config parsing
and, if a proxy is disabled, decreased during startup/init which is done
after the current config check. So the fix still does not rely on this
counter.
This patch should fix GitHub issue #1346.
This patch should be backported as far as 2.3 (so on branches
including the "log-forward" feature)
When a lua filter declaration is parsed, some allocation errors were not
properly handled. In addition, we must be sure the filter identifier is defined
in lua to duplicate it when the filter configuration is filled.
This patch fixes a defect reported in issue #1347. It only concerns
2.5-dev. No backport needed.
In Channel and HTTPMessage classes, several functions use an offset that
may be negative to start from the end of incoming data. But, after
calculation, the offset must never be negative. However, there is a bug
because of a bad cast to unsigned when "input + offset" is performed. The
result must be a signed integer.
This patch should fix most of the defects reported in issue #1347. It only
affects 2.5-dev. No backport needed.
Now that an HTTPMessage class is available to manipulate HTTP messages from a
filter, it is possible to bind HTTP filter callback functions to lua functions.
Thus, the following methods may now be defined by a lua filter:
* Filter:http_headers(txn, http_msg)
* Filter:http_payload(txn, http_msg, offset, len)
* Filter:http_end(txn, http_msg)
http_headers() and http_end() may return one of the constants filter.CONTINUE,
filter.WAIT or filter.ERROR. If nothing is returned, filter.CONTINUE is used as
the default value. On its side, http_payload() may return the amount of data to
forward. If nothing is returned, all incoming data are forwarded.
For now, these functions are not allowed to yield because this interferes with
the filter workflow.
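As an illustration, here is a minimal sketch of a filter binding these HTTP
callbacks (the class name "MyHTTPFilter" and the filter name passed to
core.register_filter() are only examples; the registration mechanism itself is
the one described in the commits further below):

MyHTTPFilter = {}
MyHTTPFilter.id = "My HTTP Lua filter"
MyHTTPFilter.flags = filter.FLT_CFG_FL_HTX   -- process HTX streams
MyHTTPFilter.__index = MyHTTPFilter

function MyHTTPFilter:new()
    return setmetatable({}, MyHTTPFilter)
end

function MyHTTPFilter:http_headers(txn, http_msg)
    -- all headers are available here; nothing to change
    return filter.CONTINUE
end

function MyHTTPFilter:http_payload(txn, http_msg, offset, len)
    -- forward everything that was received
    return len
end

function MyHTTPFilter:http_end(txn, http_msg)
    return filter.CONTINUE
end

core.register_filter("my-http-filter", MyHTTPFilter, function(flt, args)
    return flt
end)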
When a lua TXN is created from a filter context, the request and the response
HTTP message objects are accessible from ".http_req" and ".http_res" fields. For
an HTTP proxy, these objects are always defined. Otherwise, for a TCP proxy, no
object is created and nil is used instead. From any other context (action or
sample fetch), these fields don't exist.
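As a short hedged sketch (only the ".http_req"/".http_res" fields and their
nil value for TCP proxies come from the description above), a filter callback
could check the context this way:

function MyHTTPFilter:start_analyze(txn, chn)
    if txn.http_req ~= nil then
        -- HTTP proxy: txn.http_req and txn.http_res are HTTPMessage objects
    else
        -- TCP proxy: no HTTP message objects, only the Channel API is usable
    end
end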
This new class exposes methods to manipulate HTTP messages from a filter
written in lua. Like for the HTTP class, there is a bunch of methods to
manipulate the message headers. But there are also methods to manipulate the
message payload. This part is similar to what is available in the Channel
class. Thus the payload can be duplicated, erased, modified or
forwarded. For now, only DATA blocks can be retrieved and modified because
the current API is limited. No HTTPMessage method is able to yield. Those
manipulating the headers are always called on messages containing all the
headers, so there is no reason to yield. Those manipulating the payload are
called from the http_payload filter callback function where yielding is
forbidden.
When an HTTPMessage object is instantiated, the underlying Channel object
can be retrieved via the ".channel" field.
For now this class is not used because the HTTP filtering is not supported
yet. It will be the purpose of another commit.
There is no documentation for now.
It is now possible to write some filter callback functions in lua. Not all
filter callbacks are supported yet, but the mechanism to call them is now
in place. The following methods may be defined in the Lua filter class to be
bound to filter callbacks:
* Filter:start_analyze(txn, chn)
* Filter:end_analyze(txn, chn)
* Filter:tcp_payload(txn, chn, offset, length)
The hlua_filter_callback() function is responsible for calling the right lua
function depending on the filter callback. Using some flags it is possible
to allow a lua call to yield or not, to retrieve a return value or not, and
to specify whether a channel or an http message must be passed as the second
argument. For now, the HTTP part has not been added yet. It is also possible
to add extra arguments by pushing them on the stack before the call.
3 new functions are exposed by the global object "filter". The first one,
filter.wake_time(ms_delay), sets the wake_time when a Lua callback
function yields (if allowed). The two others,
filter.register_data_filter(filter, chn) and
filter.unregister_data_filter(filter, chn), enable or disable data
filtering on a channel for a specific lua filter instance.
start_analyze() and end_analyze() may return one of the constants
filter.CONTINUE, filter.WAIT or filter.ERROR. If nothing is returned,
filter.CONTINUE is used as the default value. On its side, tcp_payload() may
return the amount of data to forward. If nothing is returned, all incoming
data are forwarded.
For now, these functions are not allowed to yield because this interferes
with the filter workflow.
Here is a simple example:

MyFilter = {}
MyFilter.id = "My Lua filter"
MyFilter.flags = filter.FLT_CFG_FL_HTX
MyFilter.__index = MyFilter

function MyFilter:new()
    flt = {}
    setmetatable(flt, MyFilter)
    flt.req_len = 0
    flt.res_len = 0
    return flt
end

function MyFilter:start_analyze(txn, chn)
    filter.register_data_filter(self, chn)
end

function MyFilter:end_analyze(txn, chn)
    print("<Total> request: "..self.req_len.." - response: "..self.res_len)
end

function MyFilter:tcp_payload(txn, chn)
    offset = chn:output()
    len    = chn:input()
    if chn:is_resp() then
        self.res_len = self.res_len + len
        print("<TCP:Response> offset: "..offset.." - length: "..len)
    else
        self.req_len = self.req_len + len
        print("<TCP:Request> offset: "..offset.." - length: "..len)
    end
end
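For completeness, the filter above still needs to be registered. A minimal
registration call, using the core.register_filter() API described in a commit
further below (the filter name "my-lua-filter" is only an example), could look
like this:

core.register_filter("my-lua-filter", MyFilter, function(flt, args)
    -- keep the arguments passed on the "filter" configuration line
    flt.args = args
    return flt
end)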
For filters written in lua, the tcp payloads will be filtered using methods
exposed by the Channel class. So the corresponding C binding functions must
be prepared to process payloads in a filter context and not only in an action
context.
The main change is the offset where to start processing data in the channel
buffer, and the length of these data. For an action, all input data are
considered. But for a filter, it depends on what the filter is allowed to
forward when the tcp_payload callback function is called. It depends on
previous calls but also on other filters.
In addition, when the payload is modified by a lua filter, its context must
be updated. Note also that channel functions cannot yield when called from a
filter context.
For now, it is not possible to define callbacks to filter data and the
documentation has not been updated.
A lua TXN can be created when a sample fetch, an action or a filter callback
function is executed. A flag is now used to track the execution context.
Respectively, HLUA_TXN_SMP_CTX, HLUA_TXN_ACT_CTX and HLUA_TXN_FLT_CTX. The
filter flag is not used for now.
For now, there is no support for filters written in lua. So this function,
if called, will always return NULL. But when it will be called in a filter
context, it will return the filter structure attached to a channel
class. This function is also responsible for setting the offset of data that may
be processed and the length of these data. When called outside a filter
context (so from an action), the offset is the input data position and the
length is the input data length. From a filter, the offset and the length of
data that may be filtered are retrieved from the filter context.
It is now possible to write dummy filters in lua. Only the basis to declare
such filters has been added for now. There is no way to declare callbacks to
filter anything. Lua filters are for now empty nutshells.
To do so, core.register_filter() must be called with 3 arguments: the
filter's name (as it appears in the HAProxy config), the lua class that will be
used to instantiate filters, and a function to parse arguments passed on the
filter line in the HAProxy configuration file. The lua filter class must at
least define the method new(), without any extra args, to create new
instances when streams are created. If this method is not found, the filter
will be ignored.
Here is a template to declare a new Lua filter:

// haproxy.conf
global
    lua-load /path/to/my-filter.lua
    ...

frontend fe
    ...
    filter lua.my-lua-filter arg1 arg2 arg3
    filter lua.my-lua-filter arg4 arg5

// my-filter.lua
MyFilter = {}
MyFilter.id = "My Lua filter"              -- the filter ID (optional)
MyFilter.flags = filter.FLT_CFG_FL_HTX     -- process HTX streams (optional)
MyFilter.__index = MyFilter

function MyFilter:new()
    flt = {}
    setmetatable(flt, MyFilter)
    -- Set any flt fields. self.args can be used
    flt.args = self.args
    return flt -- The new instance of MyFilter
end

core.register_filter("my-lua-filter", MyFilter, function(filter, args)
    -- process <args>, an array of strings. For instance:
    filter.args = args
    return filter
end)
In this example, 2 filters are declared using the same lua class. The
parsing function is called for both, with its own copy of the lua class. So
each filter will be unique.
The global object "filter" exposes some constants and flags, and later some
functions, to help writting filters in lua.
Internally, when a lua filter is instantiated (so when the new() method is
called), 2 lua contexts are created, one for the request channel and another
for the response channel. It is a prerequisite to let some callbacks yield
on one side independently of the other one.
There is no documentation for now.
First of all, the following functions are now considered deprecated:
* Channel:dup()
* Channel:get()
* Channel:getline()
* Channel:get_in_len()
* Channel:get_out_len()
It is just informative; there is no warning and the functions may still be
used. However it is recommended to use the new functions, which are more
flexible and use a better naming pattern. In addition, the same names will
be used in the HTTPMessage class to manipulate HTTP messages from lua filters.
The new API is:
* Channel:data()
* Channel:line()
* Channel:append()
* Channel:prepend()
* Channel:insert()
* Channel:remove()
* Channel:set()
* Channel:input()
* Channel:output()
* Channel:send()
* Channel:forward()
* Channel:is_resp()
* Channel:is_full()
* Channel:may_recv()
The lua documentation was updated accordingly.
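As a quick sketch of the new API in an action context (the action name
"peek-req" and its registration through the existing core.register_action()
and core.Info() Lua API are illustrative assumptions; only the Channel methods
come from the list above):

core.register_action("peek-req", { "tcp-req" }, function(txn)
    local chn  = txn.req            -- the request Channel object
    local line = chn:line()         -- first line of the input data
    if line ~= nil then
        core.Info("input: " .. chn:input() .. " bytes, first line: " .. line
                  .. " (is_resp: " .. tostring(chn:is_resp()) .. ")")
    end
end)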
The main change is that the following functions will now process the channel's data
using an offset and a length:
* hlua_channel_dup_yield()
* hlua_channel_get_yield()
* hlua_channel_getline_yield()
* hlua_channel_append_yield()
* hlua_channel_set()
* hlua_channel_send_yield()
* hlua_channel_forward_yield()
So for now, the offset is always the input data position and the length is
the input data length. But with the support for filters, from a filter
context, these values will be relative to the filter.
To make all processing clearer, the function _hlua_channel_dup() has been
updated and _hlua_channel_dupline(), _hlua_channel_insert() and
_hlua_channel_delete() have been added.
This patch is mandatory to allow the support of the filters written in lua.
The hlua_checktable() function may now be used to create and return a
reference on a table in the stack, given its position. This function ensures it
is really a table and throws an exception if not.
This patch is mandatory to allow the support of the filters written in lua.
Lua functions to set or append data to the input part of a channel must not
yield because new data may be received while the lua script is suspended. So
adding data to the input part in several passes is highly unpredictable and
may be interleaved with received data.
Note that if necessary, it is still possible to suspend a lua action by
returning act.YIELD. This way the whole action will be reexecuted later
because of I/O events or a timer. Another solution is to call core.yield().
This bug affects all stable versions. So, it may be backported. But it is
probably not necessary because nobody noticed it till now.
When a script is executed, it is not always allowed to yield. Lua sample
fetches and converters cannot yield. For lua actions, it depends on the
context. When called from tcp content ruleset, an action may yield until the
expiration of the inspect-delay timeout. From http rulesets, yield is not
possible.
Thus, when channel functions (dup, get, append, send...) are called, instead
of yielding when it is not allowed and triggering an error, we just give
up. In this case, some functions do nothing (dup, append...), some others
just interrupt the in-progress job (send, forward...). But, because these
functions don't yield anymore when it is not allowed, the script regains the
control and can continue its execution.
This patch depends on "MINOR: lua: Add a flag on lua context to know the
yield capability at run time". Both may be backported in all stable
versions. However, because nobody noticed this bug till now, it is probably
not necessary, except if someone asks for it.
When a script is executed, a flag is used to allow it to yield. An error is
returned if a lua function yields, explicitly or not. But there is no way to
get this capability in C functions. So there is no way to choose to yield or
not depending on this capability.
To fill this gap, the flag HLUA_NOYIELD is introduced and added on the lua
context if the current script execution is not authorized to yield. Macros
to set, clear and test this flag are also added.
This feature will be useful to fix some bugs in lua action execution.
When at least one filter is registered on a stream, the FLT_END analyzer is
called on both directions when all other analyzers have finished their
processing. During this step, filters may release any allocated elements if
necessary. So it is important to not skip it.
Unfortunately, if both stream interfaces are closed, it is possible not to
wait for the end of this analyzer. It is possible to be in this situation if a
filter must wait and prevents the analyzer completion. To fix the bug, we
now wait until the FLT_END analyzer is no longer registered on both directions
before releasing the stream.
This patch may be backported as far as 1.7, but AFAIK, no filter is affected
by this bug. So the backport seems to be optional for now. In any case, it
should remain under observation for some weeks first.
In tcpcheck_eval_send(), the condition to detect there are still pending
data in the output buffer is buggy. Presence of raw data must be tested for
TCP connections only. But a condition on the connection was missing to be
sure it is not an HTX connection.
This patch must be backported as far as 2.2.
The formatting of the buffer_dump() output must be calculated using the
relative counter, not the absolute one, or everything will be broken if
the <from> variable is not a multiple of 16.
Could be backported in all maintained versions.
A static server is able to support both health check and agent-check
simultaneously. Adjust the dynamic server CLI handlers to also support this
configuration.
This should not be backported, unless dynamic server checks are
backported.
There is currently a leak on agent-check for dynamic servers. When
deleted, the check rules and vars are not freed. This leak grows
each time a dynamic server with agent-check is deleted.
Replace the manual purge code by a free_check invocation which
centralizes all the details on check cleaning.
There is no leak for health check because in this case the proxy is the
owner of the check vars and rules.
This should not be backported, unless dynamic server checks are
backported.
If an error occurred during a dynamic server creation, free_check is used
to free a possible agent-check. However, this does not free the
associated vars and rules, as this is done in another function
named deinit_srv_agent_check.
To simplify the check freeing and avoid a leak, move the vars/rules freeing into
free_check. This is valid because deinit_srv_agent_check also uses
free_check.
This operation is done only for an agent-check because for a health
check, the proxy instance is the owner of check vars/rules.
This should not be backported, unless dynamic server checks are
backported.
Do not reset check flags when setting CHK_ST_PURGE.
Currently, this change has no impact. However, it is semantically wrong
to clear important flags such as CHK_ST_AGENT on purge.
Furthermore, this change will become mandatory for a future fix to
properly free agent checks on dynamic server removal. For this, it will
be needed to differentiate health checks from agent checks on purge via
CHK_ST_AGENT.
This must not be backported unless dynamic servers checks are
backported.
Currently there is a leak at process shutdown with dynamic servers with
check/agent-check activated. Check purges are not executed on process
stopping, so the server is not freed due to its refcount.
The solution is simply to ignore the refcount on process stopping mode
and free the server on the first free_server invocation.
This should not be backported, unless dynamic server checks are
backported. In this case, the following commit must be backported first.
7afa5c1843
MINOR: global: define MODE_STOPPING
Test if the server is not null before using free_server in the check purge
operation. Currently, the null server scenario should not occur as
purge is used with refcounted dynamic servers. However, this might not
always be the case if purge is used in other situations in the future; thus
the test is useful for extensibility.
No need to backport, unless dynamic server checks are backported.
This has been reported through a coverity report in github issue #1343.
This commit is the counterpart for agent check of
"MEDIUM: server: implement check for dynamic servers".
The "agent-check" keyword is enabled for dynamic servers. The agent
check must manually be activated via "enable agent" CLI. This can
enable the dynamic server if the agent response is "ready" without an
explicit "enable server" CLI.
Implement check support for dynamic servers. The "check" keyword is now
enabled for dynamic servers. If used, the server check is initialized
and the check task started in the "add server" CLI handler. The check is
explicitly disabled and must be manually activated via the "enable health"
CLI handler.
The dynamic server refcount is incremented if a check is configured. On
"delete server" handler, the check is purged, which decrements the
refcount.
Implement a collection of keywords deemed safe and useful to dynamic
servers. The list of the supported keywords is :
- addr
- check-proto
- check-send-proxy
- check-via-socks4
- rise
- fall
- fastinter
- downinter
- port
- agent-addr
- agent-inter
- agent-port
- agent-send
Implement a mechanism to free a started check on runtime for dynamic
servers. A new function check_purge is created for this. The check task
will be marked for deletion and scheduled to properly close connection
elements and free the task/tasklet/buf_wait elements.
This function will be useful to delete a dynamic server with checks.
It is necessary to have a refcount mechanism on dynamic servers to be
able to enable check support. Indeed, when deleting a dynamic server
with check activated, the check will be asynchronously removed. This is
mandatory to properly free the check resources in a thread-safe manner.
The server instance must be kept alive for this.
global maxsock is used to estimate a number of fd to reserve for
internal use, such as checks. It is incremented at startup with the info
from the config file.
Disable this incrementation in check functions at runtime. First, it
currently serves no purpose to increment it after startup. Worse, it may
lead to out-of-bounds accesses on the fdtab.
This will be useful to initiate checks for dynamic servers.
Remove static qualifier on init_srv_check, init_srv_agent_check and
start_check_task. These functions will be called in server.c for dynamic
servers with checks.
Allocate a default tcp ruleset for every backend without explicit rules
defined, even if no server in the backend uses checks. This change is
required to implement checks for dynamic servers.
This allocation is done in check_config_validity. It must absolutely be
called before check_proxy_tcpcheck (called via post proxy check) which
allocates the implicit tcp connect rule.
Implement an equivalent of task_kill for tasklets. This function can be
used to request a tasklet deletion in a thread-safe way.
Currently this function is unused.
Remove the "DEPRECATED" marker on "enable/disable health/agent"
commands. Their purpose is to toggle the check/agent on a server.
These commands are still useful because their purpose is not covered by
the "set server" command. Most there was confusion with the commands
'set server health/agent', which in fact serves another goal.
Note that the indication "use 'set server' instead" has been there since
2016, added by commit
2c04eda8b5
REORG: cli: move "{enable|disable} health" to server.c
and
58d9cb7d22
REORG: cli: move "{enable|disable} agent" to server.c
Besides, these commands will become required to enable check/agent on
dynamic servers which will be created with check disabled.
This should be backported up to 2.4.
It is the second part of the fix that should solve fairness issues with the
connections management inside the SPOE filter. Indeed, in multithreaded
mode, when the SPOE detects there are some connections in queue on a server,
it closes existing connections by releasing SPOE applets. It is mandatory
when a maxconn is set because a few connections on a thread may prevent new
connections from being established.
The first attempt to fix this bug (9e647e5af "BUG/MEDIUM: spoe: Kill applets
if there are pending connections and nbthread > 1") introduced a bug. In
pipelining mode, SPOE applets might be closed while some frames are pending
for the ACK reply. To fix the bug, in the processing stage, if there are
some connections in queue, only truly idle applets may process pending
requests. In this case, only one request at a time is processed. And at the
end of the processing stage, only truly idle applets may be released. It is
an empirical workaround, but it should be good enough to solve contention
issues when a low maxconn is set.
This patch should partly fix issue #1340. It must be backported as far
as 2.0.
On a thread, when the last SPOE applet is released, if there are still
pending streams, a new one is created. Of course, HAProxy must not be
stopping. It is important to start a new applet in this case so as not to abort
in-progress jobs, especially when a maxconn is set, because applets may be
closed to be fair with connections waiting for a free slot.
This patch should partly fix issue #1340. It depends on the commit
"MINOR: spoe: Create a SPOE applet if necessary when the last one on a
thread is closed". Both must be backported as far as 2.0.
There was no way to access the SPOE filter configuration from the agent
object. However it could be handy to have it. And in fact, this will be
required to fix a bug.
Nenad noticed that when leaving maintenance, the servers' last_change
field was not updated. This is visible in the Status column of the stats
page in front of the state, as the cumulated time spent in the current state
is wrong: it starts from the last transition (typically ready->maint). In
addition, the backend's state was not updated either, because the down
transition is performed by set_backend_down() which also emits a log, and
it is this function which was extended to update the backend's last_change,
but it's not called for down->up transitions so that was not done.
The most visible (and unpleasant) effect of this bug is that it affects
slowstart so such a server could immediately restart with a significant
load ratio.
This should likely be backported to all stable releases.
Right now we're using a DWCAS to atomically set the running_mask while
being constrained by the thread_mask. This DWCAS is annoying because we
may seriously need it later when adding support for thread groups, for
checking that the running_mask applies to the correct group.
It turns out that the DWCAS is not strictly necessary because we never
need it to set the thread_mask based on the running_mask, only the other
way around. And in fact, the running_mask is always cleared alone, and
the thread_mask is changed alone as well. The running_mask is only
relevant to indicate a takeover when the thread_mask matches it. Any
bit set in running and not present in thread_mask indicates a transition
in progress.
As such, it is possible to re-arrange this by using a regular CAS around a
consistency check between running_mask and thread_mask in fd_update_events
and by making a CAS on running_mask then an atomic store on the thread_mask
in fd_takeover(). The only other case is fd_delete() but that one already
sets the running_mask before clearing the thread_mask, which is compatible
with the consistency check above.
This change has happily survived 10 billion takeovers on a 16-thread
machine at 800k requests/s.
The fd-migration doc was updated to reflect this change.
This one is set whenever an FD is reported by a poller with a null owner,
regardless of the thread_mask. It has become totally meaningless because
it only indicates a migrated FD that was not yet reassigned to a thread,
but as soon as a thread uses it, the status will change to skip_fd. Thus
there is no reason to distinguish between the two, it adds more confusion
than it helps. Let's simply drop it.
If an error occurred during the CLI 'add server' handler, the newly
created server must be removed from the proxy list if already inserted.
Currently, this can happen on the extremely rare error during server id
generation if there is no id left.
The removal operation is not thread-safe, it must be conducted before
releasing the thread isolation.
This can be backported up to 2.4. Please note that dynamic server track
is not implemented in 2.4, so the release_server_track invocation must
be removed for the backport to prevent a compilation error.
In 2.4, runtime server deletion was brought by commit e558043e1 ("MINOR:
server: implement delete server cli command"). A comment remained in the
code about a theoretical race between the thread_isolate() call and another
thread being in the process of allocating memory before accessing the
server via a reference that was grabbed before the memory allocation,
since the thread_harmless_now()/thread_harmless_end() pair around mmap()
may have the effect of allowing cli_parse_delete_server() to proceed.
Now that the full thread isolation is available, let's update the code
to rely on this. Now it is guaranteed that competing threads will either
be in the poller or queued in front of thread_isolate_full().
This may be backported to 2.4 if any report of breakage suggests the bug
really exists, in which case the two following patches will also be
needed:
MINOR: threads: make thread_release() not wait for other ones to complete
MEDIUM: threads: add a stronger thread_isolate_full() call
The current principle of running under isolation was made to access
sensitive data while being certain that no other thread was using them
in parallel, without necessarily having to place locks everywhere. The
main use cases are "show sess" and "show fd" which run over long chains
of pointers.
The thread_isolate() call relies on the "harmless" bit that indicates
for a given thread that it's not currently doing such sensitive things,
which is advertised using thread_harmless_now() and which ends using
thread_harmless_end(), which also waits for possibly concurrent threads
to complete their work if they took this opportunity for starting
something tricky.
As some system calls were notoriously slow (e.g. mmap()), a bunch of
thread_harmless_now() / thread_harmless_end() were placed around them
to let waiting threads do their work while such other threads were not
able to modify memory contents.
But this is not sufficient for performing memory modifications. One such
example is the server deletion code. By modifying memory, it not only
requires that other threads are not playing with it, but are not either
in the process of touching it. The fact that a pool_alloc() or pool_free()
on some structure may call thread_harmless_now() and let another thread
start to release the same object's memory is not acceptable.
This patch introduces the concept of "idle threads". Threads entering
the polling loop are idle, as well as those that are waiting for all
others to become idle via the new function thread_isolate_full(). Once
thread_isolate_full() is granted, the thread is not idle anymore, and
it is released using thread_release() just like regular isolation. Its
users have to keep in mind that across this call nothing is granted as
another thread might have performed shared memory modifications. But
such users are extremely rare and are actually expecting this from their
peers as well.
Note that in case of backport, this patch depends on the previous patch:
MINOR: threads: make thread_release() not wait for other ones to complete
The original intent of making thread_release() wait for other requesters to
proceed was more of a fairness trade-off, guaranteeing that a thread that was
granted access to the CPU would in turn give it back once its job is
done. But this is counter-productive as it forces such threads to spin
instead of going back to the poller, and it prevents us from implementing
multiple levels of guarantees, as a thread_release() call could spin
waiting for another requester to pass while that requester expects
stronger guarantees than the current thread may be able to offer.
Let's just remove that wait period and let the thread go back to the
poller, a-la "race to idle".
While in theory it could possibly slightly increase the perceived
latency of concurrent slow operations like "show fd" or "show sess",
it is not the case at all in tests, probably because the time needed
to reach the poller remains extremely low anyway.
Probably due to a copy-paste, there were two indent levels in this function
since its introduction in 1.9 by commit 60b639ccb ("MEDIUM: hathreads:
implement a more flexible rendez-vous point"). Let's fix this.
If an error occurs during a dynamic server creation with tracking, it
must be removed from the tracked list. This operation is not thread-safe
and thus must be conducted under the thread isolation.
Track support for dynamic servers has been introduced in this release.
This does not need to be backported.
Previous patch b5c0d65 ("MINOR: proxy: disabled takes a stopping and a
disabled state") allows us to set 2 states for a stopped or a disabled
proxy. With this patch we are now able to show the stats of all proxies
when the process is in a stopping state, not only when there is some
activity on a proxy.
This patch should fix issue #1307.
This patch splits the disabled state of a proxy into a PR_DISABLED and a
PR_STOPPED state.
The first one is set when the proxy is disabled in the configuration
file, and the second one is set upon a stop_proxy().
Rename the 'dontloglegacyconnerr' option to 'log-error-via-logformat'
which is much more self-explanatory and readable.
Note: only legacy keywords don't use hyphens, it is recommended to
separate words with them in new keywords.
update_freq_ctr_period() was using relaxed atomics without using barriers,
which usually works fine on x86 but not everywhere else. In addition, some
values were read without being enclosed by barriers, allowing the compiler
to possibly prefetch them a bit earlier. Finally, freq_ctr_total() was also
reading these without enough barriers. Let's make explicit use of atomic
loads and atomic stores to get rid of this situation. This required to
slightly rearrange the freq_ctr_total() loop, which could possibly slightly
improve performance under extreme contention by avoiding to reread all
fields.
A backport may be done to 2.4 if a problem is encountered, but last tests
on arm64 with LSE didn't show any issue so this can possibly stay as-is.
This function already performs a number of checks prior to calling the
IOCB, and detects the change of thread (FD migration). Half of the
controls are still in each poller, and these pollers also maintain
activity counters for various cases.
Note that the unreliable test on thread_mask was removed so that only
the one performed by fd_set_running() is now used, since this one is
reliable.
Let's centralize all that fd-specific logic into the function and make
it return a status among:
FD_UPDT_DONE, // update done, nothing else to be done
FD_UPDT_DEAD, // FD was already dead, ignore it
FD_UPDT_CLOSED, // FD was closed
FD_UPDT_MIGRATED, // FD was migrated, ignore it now
Some pollers already used to call it last and have nothing to do after
it, regardless of the result. epoll has to delete the FD in case a
migration is detected. Overall this removes more code than it adds.
If an MT-aware poller reports that a file descriptor was migrated, it
must stop reporting it. The simplest way to do this is to program an
update if not done yet. This will automatically mark the FD for update
on next round. Otherwise there's a risk that some events are reported
a bit too often and cause extra CPU usage with these pollers. Note
that epoll is currently OK regarding this. Select does not need this
because it uses a single shared events table, so in case of migration
no FD change is expected.
This should be backported as far as 2.2.
The skip_fd counter that is incremented when a migrated FD is reported
was abnormally high with poll. The reason is that it was accounted
for before preparing the polled events instead of being measured from
the reported events.
This mistake was done when the counters were introduced in 1.9 with
commit d80cb4ee1 ("MINOR: global: add some global activity counters to
help debugging"). It may be backported as far as 2.0.
In 1.8, commit ab62f5195 ("MINOR: polling: Use fd_update_events to update
events seen for a fd") updated the pollers to rely on fd_update_events(),
but the modification delayed the test of presence of the FD in the report,
resulting in owner/thread_mask and possibly event updates being performed
for each FD appearing in a block of 32 FDs around an active one. This
caused the request rate to be ~3 times lower with select() than poll()
under 6 threads.
This can be backported as far as 1.8.
A bug was introduced in 2.1-dev2 by commit 305d5ab46 ("MAJOR: fd: Get
rid of the fd cache."). Pollers "poll" and "evport" had the sleeping
bit accidentally removed before the syscall instead of after. This
results in them not being woken up by inter-thread wakeups, which is
particularly visible with the multi-queue accept() and with queues.
As a work-around, when these pollers are used, "nbthread 1" should
be used.
The fact that it has remained broken for 2 years is a great indication
that threads are definitely not enabled outside of epoll and kqueue,
hence why this patch is only tagged medium.
This must be backported as far as 2.2.
In case of connection failure, a dedicated error message is output,
following the format described in section "Error log format" of the
documentation. These messages cannot be configured through a log-format
option.
This patch adds a new option, "dontloglegacyconnerr", that disables
those error logs when set, and "replaces" them by a regular log line
that follows the configured log-format (thanks to a call to sess_log in
session_kill_embryonic).
The new fc_conn_err sample fetch allows to add the legacy error log
information into a regular log format.
This new option is unset by default so the logging logic will remain the
same until this new option is used.
This new sample fetch, along with the ssl_fc_hsk_err_str fetch, contains the
last SSL error of the error stack that occurred during the SSL
handshake (from the frontend's perspective). The errors happening during
the client's certificate verification will still be given by the
ssl_c_err and ssl_c_ca_err fetches. This new fetch will only hold errors
retrieved by the OpenSSL ERR_get_error function.
The ssl_c_err, ssl_c_ca_err and ssl_c_ca_err_depth sample fetches values
were not recoverable when the connection failed because of the test
"conn->flags & CO_FL_WAIT_XPRT" (which required the connection to be
established). They could then not be used in a log-format since whenever
they would have sent a non-null value, the value fetching was disabled.
This patch ensures that all these values can be fetched in case of
connection failure.
The fc_conn_err and fc_conn_err_str sample fetches give information
about the problem that made the connection fail. This information would
previously only have been given by the error log messages meaning that
thanks to these fetches, the error log can now be included in a custom
log format. The log strings were all found in the conn_err_code_str
function.
Clean up the mworker_cli_proxy_create() function by removing the
allocation and init of the proxy, which were done manually, and replace them
with alloc_new_proxy(). Do the same with the free_proxy() function.
This patch also moves the insertion to the end of the function.
Disable the output of the statistics of internal proxies (PR_CAP_INT),
so we don't rely only on px->uuid > 0. This will allow hiding the
internal proxies more cleanly in the stats.
This patch renames the proxy capability "LUA" to "INT" so it can be
used for any internal proxy.
Every proxy that is not user defined should use this flag.
This part was fixed several times since commit aade4edc1 ("BUG/MEDIUM:
mux-h2: Don't handle pending read0 too early on streams") and there are
still some cases where a read0 event may be ignored because a partial frame
inhibits the event.
Here, we must take care to set H2_CF_END_REACHED flag if a read0 was
received while a partial frame header is received or if the padding length
is missing.
To ease partial frame detection, H2_CF_DEM_SHORT_READ flag is introduced. It
is systematically removed when some data are received and is set when a
partial frame is found or when dbuf buffer is empty. At the end of the
demux, if the connection must be closed ASAP or if data are missing to move
forward, we may acknowledge the pending read0 event, if any. For now,
H2_CF_DEM_SHORT_READ is not part of H2_CF_DEM_BLOCK_ANY mask.
This patch should fix the issue #1328. It must be backported as far as 2.0.
Splicing does not work anymore because the H1 connection is not switched to
splice mode when the rcv_pipe() callback function is called. It is important
to set the H1C_F_WANT_SPLICE flag to inhibit data receipt via the buffer
API. Otherwise, because there are always data in the buffer, it is not
possible to use kernel splicing.
This bug was introduced by the commit 2b861bf72 ("MINOR: mux-h1: clean up
conditions to enabled and disabled splicing").
The patch must be backported to 2.4.
If a connection is closed during the preface while no data are received, if
the dontlognull option is set, no log message must be emitted. However, this
will still be handled as a protocol error. Only the log is omitted.
This patch should fix the issue #1336 for H2 sessions. It must be backported
to 2.4 and 2.3 at least, and probably as far as 2.0.
If a H1 connection is closed while no data are received, if the dontlognull
option is set, no log message must be emitted. Because the H1 multiplexer
handles early errors, it must take care to obey this option. It is true for
400-Bad-Request, 408-Request-Time-out and 501-Not-Implemented
responses. 500-Internal-Server-Error responses are still logged.
This patch should fix the issue #1336 for H1 sessions. It must be backported
to 2.4.
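As a reminder, the option involved here is simply enabled in a defaults or frontend section, for example:
    defaults
        mode http
        option dontlognull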
Use the non-checked functions to retrieve the listener/server via obj_type.
This is done because a previous obj_type function call already ensures that
the type is well known and the instance is not NULL.
Incidentally, this should prevent the coverity report from the #1335
github issue which warns about a possible NULL dereference.
When we evaluate a DNS response item, it may be necessary to look for a
server with a hostname matching the item target into the named servers
tree. To do so, the item target is transformed to a lowercase string. It
must be a null-terminated string. Thus we must explicitly set the trailing
'\0' character.
For a specific resolution, the named servers tree contains all servers using
this resolution with a hostname loaded from a state file. Because of this
bug, the same entry may be duplicated because we are unable to find the right
server, thus assigning the item to a free server slot.
This patch should fix the issue #1333. It must be backported as far as 2.2.
Commit 048368ef6 ("MINOR: deinit: always deinit the init_mutex on
failed initialization") added the missing unlock but forgot to
condition it on USE_THREAD, resulting in a build failure. No
backport is needed.
This addresses oss-fuzz issue 36426.
A config like the below fails to validate because of a bogus test:
backend b1
tcp-check connect port 1234
option tcp-check
server s1 1.2.3.4 check
[ALERT] (18887) : config : config: proxy 'b1': server 's1' has neither
service port nor check port, and a tcp_check rule
'connect' with no port information.
A || instead of a && causes the connect rule to be validated only when both
the address *and* the port are set. A workaround is to set the rule like
this:
tcp-check connect addr 0:1234 port 1234
This needs to be backported as far as 2.2 (2.0 is OK).
Agent stats were lost during the stats refactoring performed in 2.4 to
simplify the Prometheus exporter. stats_fill_sv_stats() function must fill
ST_F_AGENT_* and ST_F_LAST_AGT stats.
This patch should fix the issue #1331. It must be backported to 2.4.
Some ssl samples cause a segfault when the stream is not instantiated,
for example during an invalid HTTP request. A new check is added to
prevent dereferencing the stream when it is NULL.
This is the list of the affected samples:
- ssl_s_chain_der
- ssl_s_der
- ssl_s_i_dn
- ssl_s_key_alg
- ssl_s_notafter
- ssl_s_notbefore
- ssl_s_s_dn
- ssl_s_serial
- ssl_s_sha1
- ssl_s_sig_alg
- ssl_s_version
This bug can be reproduced easily by using one of these samples in a
log-format string. Emit an invalid HTTP request with an HTTP client to
trigger the crash.
This bug has been reported in redmine issue 3913.
This must be backported up to 2.2.
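A minimal sketch of such a configuration (names and addresses are made up); sending a malformed request to the frontend used to trigger the crash:
    frontend fe_main
        bind :8080
        log-format "%ci srv_ssl=%[ssl_s_version] issuer=%[ssl_s_i_dn]"
        default_backend be_ssl
    backend be_ssl
        server s1 192.0.2.1:443 ssl verify none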
This undocumented variable is only for internal use, and its sole
presence affects the process' behavior, as shown in bug #1324. It must
not be exported to workers, external checks, nor programs. Let's unset
it before forking programs and workers.
This should be backported as far as 1.8. The worker code might differ
a bit before 2.5 due to the recent removal of multi-process support.
The master-worker code registers an exit handler to deal with configuration
issues during reload, leading to a restart of the master process in wait
mode. But it shouldn't do that when it's expected that the program stops
during config parsing or condition checks, as the reload operation is
unexpectedly called and results in abnormal behavior and even crashes:
$ HAPROXY_MWORKER_REEXEC=1 ./haproxy -W -c -f /dev/null
Configuration file is valid
[NOTICE] (18418) : haproxy version is 2.5-dev2-ee2420-6
[NOTICE] (18418) : path to executable is ./haproxy
[WARNING] (18418) : config : Reexecuting Master process in waitpid mode
Segmentation fault
$ HAPROXY_MWORKER_REEXEC=1 ./haproxy -W -cc 1
[NOTICE] (18412) : haproxy version is 2.5-dev2-ee2420-6
[NOTICE] (18412) : path to executable is ./haproxy
[WARNING] (18412) : config : Reexecuting Master process in waitpid mode
[WARNING] (18412) : config : Reexecuting Master process
Note that the presence of this variable happens by accident when haproxy
is called from within its own programs (see issue #1324), but this should
be the object of a separate fix.
This patch fixes this by preventing the atexit registration in such
situations. This should be backported as far as 1.8. MODE_CHECK_CONDITION
has to be dropped for versions prior to 2.5.
Oss-fuzz reports in issue 36328 that we can recurse too far by passing
extremely deep expressions to the ".if" parser. I thought we were still
limited to 1024 chars per line, which would be amply sufficient, but
we don't have any limit now :-/
Let's just pass a maximum recursion counter to the recursive parsers.
It's decremented for each call and the expression fails if it reaches
zero. On the most complex paths it can add 3 levels per parenthesis,
so with a limit of 1024, that's roughly 343 nested sub-expressions that
are supported in the worst case. That's more than sufficient, for just
a few kB of RAM.
No backport is needed.
The init_mutex was not unlocked in case an error is encountered during
a thread initialization, and the polling loop was aborted during startup.
In practice it does not have any observable effect since an explicit
exit() is placed there, but it could confuse some debugging tools or
some static analysers, so let's release it as expected.
This addresses issue #1326.
Since the last change on HTTP analysers (252412316 "MEDIUM: proxy: remove
long-broken 'option http_proxy'"), http_process_request() may only return
internal errors on failures. Thus the label used to handle bad requests may
be removed.
This patch should fix the issue #1330.
This option had always been broken in HTX, which means that the first
breakage appeared in 1.9, that it was broken by default in 2.0 and that
no workaround existed starting with 2.1. The way this option works is
particularly unfit for the rest of the configuration and for the internal
architecture. It had some uses when it was introduced 14 years ago but
nowadays it's possible to do much better and more reliable using a
set of "http-request set-dst" and "http-request set-uri" rules, which
additionally are compatible with DNS resolution (via do-resolve) and
are not exclusive to normal load balancing. The "option-http_proxy"
example config file was updated to reflect this.
The option is still parsed so that an error message gives hints about
what to look for.
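A rough sketch of the suggested replacement for simple cases (names are illustrative; the 0.0.0.0:0 server address means "use the stream's destination", and a do-resolve rule can be added when hostnames need resolving):
    frontend fe_proxy
        bind :3128
        http-request set-dst url_ip
        http-request set-dst-port url_port
        http-request set-uri %[pathq]
        default_backend be_out
    backend be_out
        server out 0.0.0.0:0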
The cfg_free_cond_{term,and,expr}() functions used to take a pointer to
the pointer to be freed in order to replace it with a NULL once done.
But this doesn't cope well with freeing lists as it would require
recursion which the current code tried to avoid.
Let's just change the API to free the area and let the caller set the NULL.
This leak was reported by oss-fuzz (issue 36265).
While we do free the array containing the arguments, we do not free the
individually allocated ones. Most of them are unresolved, but strings are allocated
and have to be freed as well. Note that for the sake of not breaking
the args resolution list that might have been set, we still refrain
from doing this if a resolution was already programmed, but for most
common cases (including the ones that can be found in config conditions
and at run time) we're safe.
This may be backported to stable branches, but it relies on the new
free_args() function that was introduced by commit ab213a5b6 ("MINOR:
arg: add a free_args() function to free an args array"), and which is
likely safe to backport as well.
This leak was reported by oss-fuzz (issue 36265).
The removal for the shared inter-process cache in commit 6fd0450b4
("CLEANUP: shctx: remove the different inter-process locking techniques")
accidentally removed the enforcement of rlimit_memmax_all which
corresponds to what is passed to the command-line "-m" argument.
Let's restore it.
Thanks to @nafets227 for spotting this. This fixes github issue #1319.
Now it's possible to form a term using parentheses around an expression.
This will soon allow building more complex expressions. For now they're
still pretty limited but parentheses do work.
Now evaluating a condition will rely on an expression (or an empty string),
and this expression will support ORing a sub-expression with another
optional expression. The sub-expression ANDs a term with another optional
sub-expression. With this alone precedence between && and || is respected,
and the following expression:
A && B && C || D || E && F || G
will naturally evaluate as:
(A && B && C) || D || (E && F) || G
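A hypothetical configuration condition relying on this precedence and on the parenthesis support introduced above (predicate names exist in the config language, variable names are made up):
    .if feature(OPENSSL) && (defined(EDGE_SETUP) || streq("${ROLE}","edge"))
        # directives enabled only when the whole condition is true
    .endif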
It's not convenient to let the caller be responsible for node allocation,
better have the leaf function do that and implement the accompanying free
call. Now only a pointer is needed instead of a struct, and the leaf
function makes sure to leave the situation in a consistent way.
Till now we were dealing with single-word expressions but in order to
extend the configuration condition language a bit more, we'll need to
support slightly more complex expressions involving operators, and we
must absolutely support spaces around them to keep them readable.
As all arguments are pointers to the same line with spaces replaced by
zeroes, we can trivially rebuild the whole line before calling the
condition evaluator, and remove the test for extraneous argument. This
is what this patch does.
Random characters placed after a configuration predicate currently do
not report an error. This is a problem because extra parenthesis,
commas or even other random left-over chars may accidentally appear there.
Let's now report an error when this happens.
This is marked MEDIUM because it may break otherwise working configs
which are faulty.
The purpose is to build a recursive descent parser that will split conditions
into expressions made of terms. There are two phases, a parsing phase
and an evaluation phase. Strictly speaking it's not required to cut
that in two right now, but it's likely that in the future we won't want
certain predicates to be evaluated during the parsing (e.g. file system
checks or execution of some external commands).
The cfg_eval_condition() function is now much simpler, it just tries to
parse a single term, and if OK evaluates it, then returns the result.
Errors are unchanged and may still be reported during parsing or
evaluation.
It's worth noting that some invalid expressions such as streq(a,b)zzz
continue to parse correctly for now (what remains after the parenthesis
is simply ignored as not necessary).
The .if/.else/.endif and condition evaluation code is quite dirty and
was dumped into cfgparse.c because it was easy. But it should be tidied
quite a bit as it will need to evolve.
Let's move all that to cfgcond.{c,h}.
Argument arrays used in hlua_lua2arg_check() as well as in the functions
used to call sample fetches and converters were manually released, let's
use the cleaner and more reliable free_args() instead. The prototype of
hlua_lua2arg_check() was amended to mention that the function relies on
the final ARGT_STOP, which is already the case, and the pointless test
for this was removed.
make_arg_list() can create an array of arguments, some of which remain
to be resolved, but all users had to deal with their own roll back on
error. Let's add a free_args() function to release all the array's
elements and let the caller deal with the array itself (sometimes it's
allocated in the stack).
I found myself a few times testing some condition examples from the doc
against the command line's "-cc" option, only to see that they didn't work
with environment variable expansion. Since this behavior is not documented
as intentional, it looks like an omission, so let's add PARSE_OPT_ENV and
PARSE_OPT_WORD_EXPAND to be able to test for example
-cc "streq(${WITH_SSL},yes)" to help debug expressions.
This adds the exact same restriction as commit 5546c8bdc ("MINOR:
cfgparse: Fail when encountering extra arguments in macro") but for
the "-cc" command line argument, for the sake of consistency.
Allow the usage of the 'track' keyword for dynamic servers. On server
deletion, the server is properly removed from the tracking chain to
prevent NULL pointer dereferencing.
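A hypothetical CLI sequence illustrating the newly allowed usage (names and addresses are made up; "experimental-mode on" may be required for dynamic servers depending on the version):
    experimental-mode on
    add server be_app/web2 192.0.2.12:80 track be_app/web1
    del server be_app/web2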
Prevents the use of the "track" keyword for a dynamic server. This
simplifies the deletion of a dynamic server, without having to worry
about servers which might be tracking it.
A BUG_ON is present in the dynamic server delete function to validate
this assertion.
TCC doesn't have the equivalent of __builtin_unreachable() and complains
that hlua_panic_ljmp() may return no value. Let's add a return 0 there.
All compilers that know that longjmp() doesn't return will see no change
and tcc will be happy.
Modern compilers love to break existing code, and some options detected
at build time (such as -fwrapv) are absolutely critical otherwise some
bad code can be generated.
Given that some users rely on packages that force CFLAGS without being
aware of this and can be hit by runtime bugs, we have to help packagers
figure that they need to be careful about their build options.
The test here consists in detecting correct wrapping of signed integers.
Some of the old code relies on it, and modern compilers recently decided
to break it. It's normally addressed using -fwrapv which users will
rarely enforce in their own flags. Thus it is a good indicator of missing
critical CFLAGS, and it happens to be very easy to detect at run time.
Note that the test uses argc in order to have a variable. While gcc
ignores wrapping even for constants, clang only ignores it for variables.
The way the code is constructed doesn't result in code being emitted for
optimized builds thanks to value range propagation.
This should address GitHub issue #1315, and should be backported to all
stable versions. It may result in instantly breaking binaries that seemed
to work fine (typically the ones suddenly showing a busy loop after a few
weeks of uptime), and require packagers to fix their flags. The vast
majority of distro packages are fine and will not be affected though.
When a default-server line specified a client certificate to use, the
frontend would not take it into account and create an empty SSL context,
which would raise an error on the backend side ("peer did not return a
certificate").
This bug was introduced by d817dc733e in
which the SSL contexts are created earlier than before (during the
default-server line parsing) without setting it in the corresponding
server structures. It then made the server create an empty SSL context
in ssl_sock_prepare_srv_ctx because it thought it needed one.
It was raised on redmine, in Bug #3906.
It can be backported to 2.4.
Since 1.9 with commit 673867c35 ("MAJOR: applets: Use tasks, instead
of rolling our own scheduler.") the thread_mask field of the appctx
became unused, but the code hadn't been cleaned for this. The appctx
has its own task and the task's thread_mask is the one to be displayed.
It's worth noting that all calls to appctx_new() pass tid_bit as the
thread_mask. This makes sense, and it could be convenient to decide
that this becomes the norm and to simplify the API.
Define a new global config statement named
"h2-workaround-bogus-websocket-clients".
This statement will disable the automatic announce of h2 websocket
support as specified in RFC8441. This can be used to overcome clients
which fail to implement the relatively fresh RFC8441. Clients will in
this case automatically downgrade to http/1.1 for the websocket tunnel
if the haproxy configuration allows it.
This feature is relatively simple and can be backported up to 2.4, which
saw the introduction of h2 websocket support.
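The statement simply goes into the global section, e.g.:
    global
        h2-workaround-bogus-websocket-clients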
Fix the wrong usage of http_uri_parser which is defined with an
uninitialized uri. This causes a crash which happens when forwarding a
request to a backend configured in plain proxy ('option http_proxy').
This has been reported through a clang warning on the CI.
This bug has been introduced by the refactoring of URI parser API.
c453f9547e
MINOR: http: use http uri parser for path
This does not need to be backported.
WARNING: although this patch fixes the crash, the 'option http_proxy'
feature seems to be broken, possibly since quite a few stable versions.
Indeed, the URI rewriting is not functional: the path is written at the
beginning of the URI but the rest of the URI is not, and this garbage is
passed to the server which does not understand the request.
Replace http_get_path by the http_uri_parser API. The new function is
renamed http_parse_path. Replace duplicated code for scheme and
authority parsing by invocations to http_parse_scheme/authority.
If no scheme is found for a URI detected as an absolute-uri/authority,
consider it to be in authority format: no path will be found. For an
absolute-uri or absolute-path, use the remainder of the string as the
path. A new http_uri_parser state is declared to mark the path parsing
as done.
Split in two the condition which checks if the monitor-uri is set for the
current request. This will allow to easily use the http_uri_parser type
for http_get_path.
Replace http_get_authority by the http_uri_parser API.
The new function is renamed http_parse_authority. Replace duplicated
scheme parsing code by http_parse_scheme invocation. A new
http_uri_parser state is declared to mark the authority parsing as done.
Replace http_get_scheme by the http_uri_parser API. The new function is
renamed http_parse_scheme. A new http_uri_parser state is declared to
mark the scheme parsing as completed.
Apply the rfc 3986 scheme-based normalization on h2 requests. This
process will be executed for most requests because scheme and authority
are present on every h2 request, except CONNECT. However, the
normalization will only be applied on requests with the default http port
(http/80 or https/443) explicitly specified, which most http clients
avoid.
This change is notably useful for http2 websockets with Firefox, which
explicitly specifies the 443 default port on Extended CONNECT. In this
case, users can be trapped if they are using host routing without
removing the port. With the scheme-based normalization, the default port
will be removed.
To backport this change, it is required to backport first the following
commits:
* MINOR: http: implement http_get_scheme
* MEDIUM: http: implement scheme-based normalization
Apply the rfc 3986 scheme-based normalization on h1 requests. It is
executed only for requests which use an absolute-form target URI, which is
not the standard case.
Implement the scheme-based uri normalization as described in rfc3986
6.3.2. Its purpose is to remove the port of a uri if the default one is
used according to the uri scheme: 80/http and 443/https. All other
ports are not touched.
This method uses an htx message as input. It requires that the target
URI is in absolute-form with an http/https scheme. This represents most
h2 requests except CONNECT. On the contrary, most h1 requests won't be
eligible as origin-form is the standard case.
The normalization is first applied on the target URL of the start line.
Then, it is conducted on every Host header present, assuming that they
are equivalent to the target URL.
This change will notably be useful to avoid confusing users who are
accustomed to using the host for routing without specifying default ports.
This problem was recently encountered with Firefox, which specifies the 443
default port for http2 websocket Extended CONNECT.
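As an illustration of the rule (made-up host), an absolute-form request received as:
    GET https://example.org:443/chat HTTP/1.1
    Host: example.org:443
would be normalized into:
    GET https://example.org/chat HTTP/1.1
    Host: example.org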
gcc 8.3.0 spews a bunch of:
src/stick_table.c: In function 'action_inc_gpc0':
include/haproxy/freq_ctr.h:66:12: warning: 'period' may be used uninitialized in this function [-Wmaybe-uninitialized]
curr_tick += period;
^~
src/stick_table.c:2241:15: note: 'period' was declared here
unsigned int period;
^~~~~~
but they're incorrect because all accesses are guarded by the exact same
condition (ptr1 not being null), it's just the compiler being overzealous
about the uninitialized detection that seems to be stronger than its
ability to follow its own optimizations. This code path is not critical,
let's just pre-initialize the period to zero.
No backport is needed.
After reloading HAProxy, the old process may still hold active sessions.
Currently there is no way to gather information, how many sessions such
a process still holds. This patch will not exclude disabled proxies from
stats output when they hold at least one active session. This will allow
sending `!@<PID> show stat` through a master socket to the disabled
process and having it return its stats data.
This reverts commit 19bbbe0562.
For now, set-src/set-src-port actions are directly performed on the client
connection. Using these actions at the stream level is really a problem with
HTTP connections (see #90) because all requests are affected by this change
and not only the current request. And it is worse with H2, because several
requests can set their source address on the same connection at the same
time.
It is already an issue when these actions are called from "http-request"
rules. It is safer to wait a bit before adding the support to "tcp-request
content" rules. The solution is to be able to set the src/dst address on the
stream and not on the connection when the action is performed at the L7
level.
Reverting the above commit means the issue #1303 is no longer fixed.
This patch must be backported in all branches containing the above commit
(as far as 2.0 for now).
A server name was displayed as <srv>/<proxy> instead of the reverse.
It only confuses diagnostics. This was introduced by commit 7a4a0ac71
("MINOR: cli: add a new "show fd" command") so this fix can be backport
down to 1.8.
As shown in issue #1251, it is possible for a connect() to report an
error directly via the poller without ever reporting send readiness,
but currently sock_conn_check() manages to ignore that situation,
leading to high CPU usage as poll() wakes up on these FDs.
The bug was apparently introduced in 1.5-dev22 with commit fd803bb4d
("MEDIUM: connection: add check for readiness in I/O handlers"), but
was likely only woken up by recent changes to conn_fd_handler() that
made use of wakeups instead of direct calls between 1.8 and 1.9,
voiding any chance to catch such errors in the early recv() callback.
The exact sequence that leads to this situation remains obscure though
because the poller does not report send readiness nor does it report an
error. Only HUP and IN are reported on the FD. It is also possible that
some recent kernel updates made this condition appear while it never
used to previously.
This needs to be backported to all stable branches, at least as far
as 2.0. Before 2.2 the code was in tcp_connect_probe() in proto_tcp.c.
This patch makes the use of 'gpc' exclusive of the use of the legacy
types 'gpc0' and 'gpc1' on the same table.
It also makes the use of 'gpc_rate' exclusive of the use of the legacy
types 'gpc0_rate' and 'gpc1_rate' on the same table.
The 'gpc0' and 'gpc1' related fetches and actions will apply
to the first two elements of the 'gpc' array if stored in table.
The 'gpc0_rate' and 'gpc1_rate' related fetches and actions will apply
to the first two elements of the 'gpc_rate' array if stored in table.
This patch adds the definition of two new array data_types:
'gpc': This is an array of 32bits General Purpose Counters.
'gpc_rate': This is an array of increment rates of General Purpose Counters.
Like for all arrays, they are limited to 100 elements.
This patch also adds actions and fetches to handle
elements of those arrays.
Note: As documented, those new actions and fetches won't
apply to the legacy 'gpc0', 'gpc1', 'gpc0_rate' nor 'gpc1_rate'.
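A configuration sketch of the new counter arrays (table and frontend names are made up; the sc-* action and fetch names follow the convention of the legacy counters and should be checked against the documentation):
    backend st_src
        stick-table type ip size 1m expire 1h store gpc(3),gpc_rate(3,10s)
    frontend fe_main
        bind :8080
        tcp-request connection track-sc0 src table st_src
        tcp-request connection sc-inc-gpc(2,0)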
This patch makes the use of 'gpt' exclusive of the use of the legacy
type 'gpt0' on the same table.
It also makes the 'gpt0' related fetches and actions apply to the first
element of the 'gpt' array if stored in the table.
This patch adds the definition of a new array data_type
'gpt'. This is an array of 32bits General Purpose Tags.
Like for all arrays, it is limited to 100 elements.
This patch also adds actions and fetches to handle
elements of this array.
Note: As documented, those new actions and fetches won't
apply to the legacy 'gpt0' data type.
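Similarly, a sketch for the tag array (names made up; the sc-set-gpt action name is assumed from the existing sc-set-gpt0 convention):
    backend st_src
        stick-table type ip size 1m expire 1h store gpt(2)
    frontend fe_main
        bind :8080
        tcp-request content track-sc0 src table st_src
        tcp-request content sc-set-gpt(1,0) 1 if { src 192.0.2.0/24 }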
This patch adds support of array data_types on the peer protocol.
The table definition message will provide an additional parameter
for array data-types: the number of elements of the array.
In case of an array of frqp, it also provides a second parameter:
the period used to compute the freq counters.
The array elements are std_type values linearly encoded in
the update message.
Note: if a remote peer announces an array data_type without
parameters in the table definition message, all updates
on this table will be ignored because we cannot
parse update messages consistently.
This patch provides the code to handle arrays of some
standard types (SINT, UINT, ULL and FRQP) in stick table.
This way we could define new "array" data types.
Note: the number of elements of an array was limited
to 100 to ensure that an encoded update message will
continue to fit into a buffer when the peer protocol
handles such data types.
This patch replaces all advanced data type aliases on
stktable_data_cast calls by standard types.
This way we can call the same stktable_data_cast
regardless of the advanced data type used, as long
as they use the same std type.
It also removes all the advanced data type aliases.
This patch fixes the computation of the bit of the current data_type
in some parts of the peer protocol code where the computation was limited
to 32 bits whereas the bitfield of data_types can support 64 bits.
Without this patch it could result in bugs when more
than 32 data_types are defined.
Backport is useless because there are currently fewer than 32 data_types.
This patch fixes several errors when printing integers
of stick table entry values and args during a dump on the cli.
This patch should be backported to all versions supporting the dump
of entries. [wt: roughly 1.5-dev1 hence all stable branches]
The commit 3406766d5 ("MEDIUM: resolvers: add a ref between servers and srv
request or used SRV record") introduced a regression. The first server of a
template based on SRV record is no longer resolved. The same bug exists for
a normal server based on a SRV record.
In fact, the server used during parsing (used as reference when a
server-template line is parsed) is never attached to the corresponding srvrq
object. Thus with following lines, no resolution is performed because
"srvrq->attached_servers" is empty:
server-template test 1 _http.domain.tld resolvers dns ...
server test1 _http.domain.tld resolvers dns ...
This patch should fix the issue #1295 (but not confirmed yet it is the same
bug). It must be backported everywhere the above commit is.
As specified by the MQTT specification (MQTT-3.1.3-6), the client ID may be
empty. That means the length of the client ID string may be 0. However, the
MQTT parser does not support empty strings.
So, to fix the bug, the mqtt_read_string() function may now parse an empty
string. 2 bytes must be found to decode the string length, but the length
may be 0 now. It is the caller's responsibility to test the string emptiness
if necessary. In addition, in mqtt_parse_connect(), the client ID may be
empty now.
This patch should partly fix the issue #1310. It must be backported to 2.4.
Parsing of too long strings (> 127 characters) was buggy because of a wrong
cast on the length bytes. To fix the bug, we rely on mqtt_read_2byte_int()
function. This way, the string length is properly decoded.
This patch should partly fix the issue #1310. It must be backported to 2.4.
Since recent commit 469c06c30 ("MINOR: http-act/tcp-act: Add "set-mark"
and "set-tos" for tcp content rules") there's a build warning (or error)
on Windows due to static function tcp_action_set_mark() not being used
because the set-mark functionality is not supported there. It's caused
by the fact that only the parsing function uses it so if the code is
ifdefed out the function remains unused.
Let's surround it with ifdefs as well, and do the same for
tcp_action_set_tos() which could suffer the same fate on operating systems
not defining IP_TOS.
This may need to be backported if the patch above is backported. Also
be careful, the condition was adjusted to cover FreeBSD after commit
f7f53afcf ("BUILD/MEDIUM: tcp: set-mark setting support for FreeBSD.").
It is now possible to set the Netfilter MARK and the TOS field value in all
packets sent to the client from any tcp-request ruleset or the "tcp-response
content" one. To do so, the parsing of the "set-mark" and "set-tos" actions is
moved to tcp_act.c and the actions evaluation is handled in dedicated functions.
This patch may be backported as far as 2.2 if necessary.
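A sketch of the new rules (values and names are arbitrary; set-mark requires a capable OS and sufficient privileges):
    listen l_tcp
        mode tcp
        bind :8080
        server s1 192.0.2.1:8080
        tcp-request content set-mark 0x2 if { src 10.0.0.0/8 }
        tcp-response content set-tos 16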
It is now possible to set the "nice" factor of the current stream from a
"tcp-request content" or "tcp-response content" ruleset. To do so, the
action parsing is moved to stream.c and the action evaluation is handled in
a dedicated function.
This patch may be backported as far as 2.2 if necessary.
It is now possible to set the stream log level from a "tcp-request content"
or "tcp-response content" ruleset. To do so, the action parsing is moved in
stream.c and the action evaluation is handled in a dedicated function.
This patch should fix issue #1306. It may be backported as far as 2.2 if
necessary.
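For example, to silence logging for a trusted network (a sketch; names and addresses are arbitrary):
    listen l_tcp
        mode tcp
        bind :2000
        server s1 192.0.2.10:2000
        tcp-request content set-log-level silent if { src 192.0.2.0/24 }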
The index of the failing rule is reported in the health-check log message. The
rule index is also used in the check traces. But for implicit HTTP send/expect
rules, the index is wrong. It must be incremented by one compared to the
preceding rule.
This patch may be backported as far as 2.2.
In srv_parse_agent_check the error code is not returned in case
something goes wrong. The value 0 is always returned.
Additionally, there's a small cleanup of unreachable returns, which most
checks do not have either; they were removed in the two places they were
present. This makes the code consistent across the different checks.
If resolv_get_ip_from_response() returns an error (or an unexpected return
value), the server is set to RMAINT status. However, its address must also
be reset. Otherwise, it is still reported by the cli on "show servers state"
commands. This may be confusing. Note that it is a theoretical patch because
this code path does not exist. Thus it is not tagged as a BUG.
This patch may be backported as far as 2.0.
For A/AAAA resolution, if no ip is found for a server in the response, the
server is set to RMAINT status. However, its address must also be
reset. Otherwise, it is still reported by the cli on "show servers state"
commands. This may be confusing.
This patch may be backported as far as 2.0.
On A/AAAA resolution, for a given server, if a record is matching, we must
always attach the server to this record. Before, it was only done if the
server IP was not the same as the record's one. However, it is a problem if
the server IP was not set by a previous resolution, for instance if it came
from the libc during startup. In this case, the server IP is not updated and
the server is not attached to any record. It remains in this state even
though a matching record is found in the DNS response. It is especially a problem
when the resolution is used for server-templates.
This bug was introduced by the commit bd78c912f ("MEDIUM: resolvers: add a
ref on server to the used A/AAAA answer item").
This patch should solve the issue #1305. It must be backported to all
versions containing the above commit.
A dedicated queue lock was added by commit 16fbdda3c ("MEDIUM: queue:
use a dedicated lock for the queues (v2)") but during its rebase, some
labels were lost and left to SERVER_LOCK / PROXY_LOCK instead of
QUEUE_LOCK. It's harmless but can confuse the lock debugger, so better
fix it.
No backport is needed.
Commit ae0b12ee0 ("MEDIUM: queue: use a trylock on the server's queue")
introduced a hard to trigger bug that's more visible with a single thread:
if a server dequeues a connection and finds another free slot with no
connection to place there, process_srv_queue() will never break out of
the loop. In multi-thread it almost does not happen because other threads
bring new connections.
No backport is needed as it's only in -dev.
Since the code paths became exactly the same except for what log field
to update, let's simplify the code and move further code out of the
lock. The queue position update and the test for server vs proxy do not
need to be inside the lock.
Now we directly use p->queue to get to the queue, which is much more
straightforward. The performance on 100 servers and 16 threads
increased from 560k to 574k RPS, or 2.5%.
A lot more simplifications are possible, but the minimum was done at
this point.
A queue is specific to a server or a proxy, so we don't need to place
this distinction inside all pendconns, it can be in the queue itself.
This commit adds the relevant fields "px" and "sv" into the struct
queue, and initializes them accordingly.
Doing so makes sure that threads attempting to wake up new connections
for a server will give up early if another thread is already in charge
of this. The goal is to avoid unneeded contention on low server counts.
Now with a single server with 16 threads in roundrobin we get the same
performance as with multiple servers, i.e. ~575kreq/s instead of ~496k
before. Leastconn is seeing a similar jump, from ~460 to ~560k (the
difference being the calls to fwlc_srv_reposition).
The overhead of process_srv_queue() is now around 2% instead of ~20%
previously.
There's no point keeping the proxy lock held for a long time, it's
only needed when checking the proxy's queue, and keeping it prevents
multiple servers from dequeuing in parallel. Let's move it into
pendconn_process_next_strm() and release it ASAP. The pendconn
remains under the server queue lock's protection, guaranteeing that
no stream will release it while it's being touched.
For roundrobin, the performance increases by 76% (327k to 575k) on
16 threads. Even with a single server and maxconn=100, the performance
increases from 398 to 496 kreq/s. For leastconn, almost no change is
visible (less than one percent) but this is expected since most of the
time there is spent in fwlc_reposition() and fwlc_get_next_server().
Doing so allows to retrieve and update the pendconn's queue index outside
of the queue's lock and to save one more percent CPU on a highly-contented
backend.
The code only differed by the nbpend_max counter. Let's have a pointer
to it and merge the two variants to always use a generic queue. It was
initially considered to put the max inside the queue structure itself,
but the stats support clearing values and maxes and this would have been
the only counter having to be handled separately there. Given that we
don't need this max anywhere outside stats, let's keep it where it is
and have a pointer to it instead.
The CAS loop to update the max remains. It was naively thought that it
would have been faster without atomic ops inside the lock, but this is
not the case for the simple reason that it is a max, it converges very
quickly and never has to perform the check anymore. Thus this code is
better out of the lock.
The queue_idx is still updated inside the lock since that's where the
idx is updated, though it could be performed using atomic ops given
that it's only used to roughly count places for logging.
This basically undoes the API changes that were performed by commit
0274286dd ("BUG/MAJOR: server: fix deadlock when changing maxconn via
agent-check") to address the deadlock issue: since process_srv_queue()
doesn't use the server lock anymore, it doesn't need the "server_locked"
argument, so let's get rid of it before it gets used again.
Till now whenever a server or proxy's queue was touched, this server
or proxy's lock was taken. Not only this requires distinct code paths,
but it also causes unnecessary contention with other uses of these locks.
This patch adds a lock inside the "queue" structure that will be used
the same way by the server and the proxy queuing code. The server used
to use a spinlock and the proxy an rwlock, though the queue only used
it for locked writes. This new version uses a spinlock since we don't
need the read lock part here. Tests have not shown any benefit nor cost
in using this one versus the rwlock so we could change later if needed.
The lower contention on the locks increases the performance from 362k
to 374k req/s on 16 threads with 20 servers and leastconn. The gain
with roundrobin even increases by 9%.
This is tagged medium because the lock is changed, but no other part of
the code touches the queues, with nor without locking, so this should
remain invisible.
There's no point doing atomic incs over px->served/px->totpend under the
locks from the inner loop, as this value is used by the LB algorithms but
not during the dequeuing step. In addition, the LB algo's take_conn()
doesn't need to be refreshed for each and every connection taken
under the lock, it can be performed once at the end and out of the
lock.
While the gain on roundrobin is not noticeable (only the atomic inc),
on leastconn which uses take_conn(), the performance increases from
355k to 362k req/s on 16 threads.
This reverts commit 5304669e1b.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 3e92a31783.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 1b648c857b.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit fcb8bf8650.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit c83e45e9b0.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 3eecdb65c5.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 1335eb9867.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit de814dd422.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 9a6d0ddbd6.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 5b39275311.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
This reverts commit 772e968b06.
The recent changes since 5304669e1 ("MEDIUM: queue: make
pendconn_process_next_strm() only return the pendconn") opened a tiny race
condition between stream_free() and process_srv_queue(), as the pendconn
is accessed outside of the lock, possibly while it's being freed. A
different approach is required.
It is possible to set the source IP/Port from "tcp-request connection",
"tcp-request session" and "http-request" rules but not from "tcp-request
content" rules. There is no reason for this limitation and it may be a
problem for anyone wanting to call a lua fetch to dynamically set source
IP/Port from a TCP proxy. Indeed, to call a lua fetch, we must have a
stream. And there is no stream when "tcp-request connection/session" rules
are evaluated.
Thanks to this patch, "set-src" and "set-src-port" action are now supported
by "tcp_request content" rules.
This patch is related to the issue #1303. It may be backported to all stable
versions.
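A sketch of what this enables (lua.client_addr stands for a hypothetical Lua fetch registered with core.register_fetches() that returns the address to use):
    listen l_tcp
        mode tcp
        bind :3000
        server s1 192.0.2.10:3000
        tcp-request inspect-delay 5s
        tcp-request content set-src lua.client_addr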
In 1.4, consistent hashing was brought by commit 6b2e11be1 ("[MEDIUM]
backend: implement consistent hashing variation") which took care of
replacing all direct calls to map_get_server_rr() with an alternate
call to chash_get_next_server() if consistent hash was being used.
One of them, however, cannot happen because a preliminary test for
static round-robin is being done prior to the call, so we're certain
that if it matches it cannot use a consistent hash tree.
Let's remove it.
Dealing with the queue lock in the caller remains complicated. Let's
change pendconn_first() to take the queue instead of the tree head,
and handle the lock itself. It now returns an element with a locked
queue or no element with an unlocked queue. It can avoid locking if
the queue is already empty.
There's no point keeping the server's queue lock after seeing that the
server's queue is empty, just like there's no need to keep the proxy's
lock when its queue is empty. This patch checks for emptiness and
releases these locks as soon as possible.
With this the performance increased from 524k to 530k on 16 threads
with round-robin.
By placing the lock there, it becomes possible to lock the proxy
later and to unlock it earlier. The server unlocking also happens slightly
earlier.
The performance on roundrobin increases from 481k to 524k req/s on 16
threads. Leastconn shows about 513k req/s (the difference being the
take_conn() call).
The performance profile changes from this:
9.32% hap-pxok [.] process_srv_queue
7.56% hap-pxok [.] pendconn_dequeue
6.90% hap-pxok [.] pendconn_add
to this:
7.42% haproxy [.] process_srv_queue
5.61% haproxy [.] pendconn_dequeue
4.95% haproxy [.] pendconn_add
By doing so we can move some evaluations outside of the lock and the
loop. In the round robin case, the performance increases from 497k to
505k rps on 16 threads with 100 servers.
Doing so allows to retrieve and update the pendconn's queue index outside
of the queue's lock and to save one more percent CPU on a highly-contented
backend.
The code only differed by the nbpend_max counter. Let's have a pointer
to it and merge the two variants to always use a generic queue. It was
initially considered to put the max inside the queue structure itself,
but the stats support clearing values and maxes and this would have been
the only counter having to be handled separately there. Given that we
don't need this max anywhere outside stats, let's keep it where it is
and have a pointer to it instead.
The CAS loop to update the max remains. It was naively thought that it
would have been faster without atomic ops inside the lock, but this is
not the case for the simple reason that it is a max, it converges very
quickly and never has to perform the check anymore. Thus this code is
better out of the lock.
The queue_idx is still updated inside the lock since that's where the
idx is updated, though it could be performed using atomic ops given
that it's only used to roughly count places for logging.
This basically undoes the API changes that were performed by commit
0274286dd ("BUG/MAJOR: server: fix deadlock when changing maxconn via
agent-check") to address the deadlock issue: since process_srv_queue()
doesn't use the server lock anymore, it doesn't need the "server_locked"
argument, so let's get rid of it before it gets used again.
Till now whenever a server or proxy's queue was touched, this server
or proxy's lock was taken. Not only this requires distinct code paths,
but it also causes unnecessary contention with other uses of these locks.
This patch adds a lock inside the "queue" structure that will be used
the same way by the server and the proxy queuing code. The server used
to use a spinlock and the proxy an rwlock, though the queue only used
it for locked writes. This new version uses a spinlock since we don't
need the read lock part here. Tests have not shown any benefit nor cost
in using this one versus the rwlock so we could change later if needed.
The lower contention on the locks increases the performance from 491k
to 507k req/s on 16 threads with 20 servers and leastconn. The gain
with roundrobin even increases by 6%.
The performance profile changes from this:
13.03% haproxy [.] fwlc_srv_reposition
8.08% haproxy [.] fwlc_get_next_server
3.62% haproxy [.] process_srv_queue
1.78% haproxy [.] pendconn_dequeue
1.74% haproxy [.] pendconn_add
to this:
11.95% haproxy [.] fwlc_srv_reposition
7.57% haproxy [.] fwlc_get_next_server
3.51% haproxy [.] process_srv_queue
1.74% haproxy [.] pendconn_dequeue
1.70% haproxy [.] pendconn_add
At this point the differences are mostly measurement noise.
This is tagged medium because the lock is changed, but no other part of
the code touches the queues, with nor without locking, so this should
remain invisible.
This essentially reverts commit 2b4370078 ("MINOR: lb/api: let callers
of take_conn/drop_conn tell if they have the lock") that was merged
during 2.4 before the various locks could be eliminated at the lower
layers. Passing that information complicates the cleanup of the queuing
code and it's become useless.
The lock in process_srv_queue() was placed around the whole loop to
avoid the cost of taking/releasing it multiple times. But in practice
almost all calls to this function only dequeue a single connection, so
that argument doesn't really stand. However by placing the lock inside
the loop, we'd make it possible to release it before manipulating the
pendconn and waking the task up. That's what this patch does.
This increases the performance from 431k to 491k req/s on 16 threads
with 20 servers under leastconn.
The performance profile changes from this:
14.09% haproxy [.] process_srv_queue
10.22% haproxy [.] fwlc_srv_reposition
6.39% haproxy [.] fwlc_get_next_server
3.97% haproxy [.] pendconn_dequeue
3.84% haproxy [.] pendconn_add
to this:
13.03% haproxy [.] fwlc_srv_reposition
8.08% haproxy [.] fwlc_get_next_server
3.62% haproxy [.] process_srv_queue
1.78% haproxy [.] pendconn_dequeue
1.74% haproxy [.] pendconn_add
The difference is even slightly more visible in roundrobin which
does not have take_conn() call.
It used to do far too much under the lock, including waking up tasks,
updating counters and repositionning entries in the load balancing algo.
This patch first moves all that stuff out of the function into the only
caller (process_srv_queue()). The decision to update the LB algo is now
taken out of the lock. The wakeups could be performed outside of the
loop by using a local list.
This increases the performance from 377k to 431k req/s on 16 threads
with 20 servers under leastconn.
The perf profile changes from this:
23.17% haproxy [.] process_srv_queue
6.58% haproxy [.] pendconn_add
6.40% haproxy [.] pendconn_dequeue
5.48% haproxy [.] fwlc_srv_reposition
3.70% haproxy [.] fwlc_get_next_server
to this:
13.95% haproxy [.] process_srv_queue
9.96% haproxy [.] fwlc_srv_reposition
6.21% haproxy [.] fwlc_get_next_server
3.96% haproxy [.] pendconn_dequeue
3.75% haproxy [.] pendconn_add
The server_parse_maxconn_change_request locks the server lock. However,
this function can be called via agent-checks or lua code which already
lock it. This bug has been introduced by the following commit :
commit 79a88ba3d0
BUG/MAJOR: server: prevent deadlock when using 'set maxconn server'
This commit tried to fix another deadlock which can occur because
previously server_parse_maxconn_change_request required the server lock
to be held. However, it may internally call process_srv_queue which also
locks the server lock. The locking policy has thus been updated. The fix
is functional for the CLI 'set maxconn' but fails to address the
agent-check / lua counterparts.
This new issue is fixed in two steps :
- changes from the above commit have been reverted. This means that
server_parse_maxconn_change_request must again be called with the
server lock.
- to counter the deadlock fixed by the above commit, process_srv_queue
now takes an argument to render the server locking optional if the
caller already held it. This is only used by
server_parse_maxconn_change_request.
The above commit was subject to backport up to 1.8. Thus this commit
must be backported in every release where it is already present.
Since commit c7eedf7a5 ("MINOR: queue: reduce the locked area in
pendconn_add()") the stream's pend_pos is set out of the lock, after
the pendconn is queued. While this entry is only manipulated by the
stream itself and there is no bug caused by this right now, it's a
bit dangerous because another thread could decide to look at this
field during dequeuing and could randomly see something else. Also
in case of crashes, memory inspection wouldn't be as trustable.
Let's assign the pendconn before it can be found in the queue.
Activate the 'ssl' keyword for dynamic servers. This is the final step
to have the ssl dynamic servers feature implemented. If activated,
ssl_sock_prepare_srv_ctx will be called at the end of the 'add server'
CLI handler.
At the same time, update the management doc to list all ssl keywords
implemented for dynamic servers.
These keywords are deemed safe enough to be enabled on dynamic servers.
Their parsing functions are simple and can be called at runtime.
- allow-0rtt
- alpn
- ciphers
- ciphersuites
- force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- no-ssl-reuse
- no-tls-tickets
- npn
- send-proxy-v2-ssl
- send-proxy-v2-ssl-cn
- sni
- ssl-min-ver
- ssl-max-ver
- tls-tickets
- verify
- verifyhost
'no-ssl-reuse' and 'no-tls-tickets' are enabled to override the default
behavior.
'tls-tickets' is enabled to override a possible 'no-tls-tickets' set via
the global option 'ssl-default-server-options'.
'force' and 'no' variants of tls method options are useful to override a
possible 'ssl-default-server-options'.
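A hypothetical CLI sequence using some of these keywords (names and addresses are made up; "experimental-mode on" may be required before "add server" depending on the version):
    experimental-mode on
    add server be_ssl/srv1 192.0.2.10:443 ssl verify none alpn h2,http/1.1 sni str(example.com)
    enable server be_ssl/srv1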
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_crl is used at runtime for a dynamic server. The crl must
have already been loaded either in the config or through the 'ssl crl'
CLI commands.
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_crt is used at runtime for a dynamic server. The cert must
have already been loaded either in the config or through the 'ssl cert'
CLI commands.
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_ca_file is used at runtime for a dynamic server. The ca-file
must have already been loaded either in the config or through the 'ssl
ca-file' CLI commands.
This is in preparation for the support of ssl on dynamic servers. The
'alpn' keyword will be allowed for dynamic servers but not 'check-alpn'.
The alpn parsing is extracted into a new function, parse_alpn. Both
srv_parse_alpn and srv_parse_check_alpn call it.
The function ssl_sock_load_srv_cert will be used at runtime for dynamic
servers. If the cert is not loaded in the ckch tree, we try to access it
from the file-system.
Now this access operation is rendered optional by a new function
argument. It is only allowed at parsing time, but will be disabled for
dynamic servers at runtime.
'set server ssl' uses ssl parameters from default-server. As dynamic
servers do not reuse any default-server parameters, this command makes
no sense for them.
Explicitly call ssl_initialize_random to initialize the random generator
in init() global function. If the initialization fails, the startup is
interrupted.
This commit is in preparation for support of ssl on dynamic servers. To
be able to activate ssl on dynamic servers, it is necessary to ensure
that the random generator is initialized on startup regardless of the
config. It cannot be called at runtime as access to /dev/urandom is
required.
This also has the effect of fixing the previous inconsistent behavior.
Indeed, if a bind or server in the config was using ssl, the
initialization function was called, and if it failed, the startup was
interrupted. Otherwise, the ssl initialization code could have been
called through the ssl server for lua, but this time without blocking
the startup on error. Or not called at all if lua was deactivated.
RFC 7231#5.3.4 makes a difference between a completely missing
'accept-encoding' header and an 'accept-encoding' header without any values.
This case was already correctly handled by accident, because an empty accept
encoding does not match any known encoding. However this resulted in the
'other' encoding being added to the bitmap. Usually this also succeeds in
serving cached responses, because the cached response likely has no
'content-encoding', thus matching the identity case instead of not serving the
response, due to the 'other' encoding. But it's technically not 100% correct.
Fix this by special-casing 'accept-encoding' values with a length of zero and
extend the test to check that an empty accept-encoding is correctly handled.
Due to the reasons given above the test also passes without the change in
cache.c.
Vary support was added in HAProxy 2.4. This fix should be backported to 2.4+.
The commit c7b391aed ("BUG/MEDIUM: server/cli: Fix ABBA deadlock when fqdn
is set from the CLI") introduced 2 bugs. The first one is a typo on the
server's lock label (s/SERVER_UNLOCK/SERVER_LOCK/). The second one is about
the server's lock itself. It must be acquired to execute the "agent-send"
subcommand.
The patch above is marked to be backported as far as 1.8. Thus, this one
must also backported as far 1.8.
BUG/MINOR: server/cli: Don't forget to lock server on agent-send subcommand
The commit dcac41806 ("BUG/MEDIUM: resolvers: Add a task on servers to check
SRV resolution status") introduced a type. In resolv_srvrq_expire_task()
function, the resolver's lock must be used instead of the resolver itself.
This patch must be backported with the patch above (at least as far as 2.2).
When reusing a backend connection, do not reapply the SNI on the
connection. It should already be defined when the connection was
instantiated on a previous connect_server invocation. As the SNI is a
parameter used to select a connection, only connections with the same value
can be reused.
The impact of this bug is unknown and may be null. No memory leak has
been reported by valgrind. So this is more a cleaning fix.
This commit relies on the SF_SRV_REUSED flag and thus depends on the
following fix :
BUG/MINOR: backend: restore the SF_SRV_REUSED flag original purpose
This should be backported up to 2.4.
The SF_SRV_REUSED flag was set if a stream reused a backend connection.
One of its purposes is to count the total number of reuses on the backend,
as opposed to newly instantiated connections.
However, the flag was diverted from its original purpose since the
following commit:
e8f5f5d8b2
BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready.
With this change, the flag is no longer set if the mux is not ready
when a connection is picked for reuse. This can happen for multiplexed
connections which are inserted in the available list as soon as they are
created in http-reuse always mode. The goal of this change was to not
immediately retry the request on the same server in case of an error if
the reused connection was not fully ready.
This change is justified for the retry timeout handling but it breaks
other places which still use the flag for its original purpose. Mainly,
in this case the wrong 'connect' backend counter is incremented instead
of the 'reuse' one. The flag is also used in http_return_srv_error and
may have an impact if an HTTP server error is returned for this stream.
To fix this problem, the original purpose of the flag is restored by
setting it unconditionally when a connection is reused. Additionally, a
new flag SF_SRV_REUSED_ANTICIPATED is created. This flag is set when the
connection is reused but the mux is not ready yet. For the timeout
handling on error, the request is retried immediately only if the stream
reused a connection without this new flag.
This must be backported up to 2.1.
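As a rough illustration of the restored semantics (the readiness test and
variable names are assumptions, not the actual code):

    /* hypothetical sketch: a connection picked from the server's list */
    s->flags |= SF_SRV_REUSED;                 /* always counted as a reuse */
    if (!conn_is_ready(srv_conn))              /* hypothetical readiness check */
            s->flags |= SF_SRV_REUSED_ANTICIPATED;  /* mux not ready: no instant retry */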
When a server relies on a SRV resolution, a task is created to clean it up
(fqdn/port and address) when the SRV resolution is considered outdated
(based on the resolver's 'timeout' value). This is only possible if the
server inherits outdated info from a state file and is no longer selected
to be attached to a SRV item. Note that most of the time, a server is
attached to a SRV item. Thus when the item becomes obsolete, the server is
cleaned up.
It is important to have such a task to be sure the server will be free
again and will thus have a chance to be resolved again with fresh
information. Of course, this patch is a workaround to solve a design issue.
But there is no other obvious way to fix it without rewriting the whole
resolvers part. And it must be backportable.
This patch relies on the following commits:
* MINOR: resolvers: Clean server in a dedicated function when removing a SRV item
* MINOR: resolvers: Remove server from named_servers tree when removing a SRV item
The whole series must be backported as far as 2.2 after some observation
period. Backports to 2.0 and 1.8 must be evaluated.
When a server is cleaned up because the corresponding SRV item is removed,
we always remove the server from the srvrq's named_servers tree. For now,
this is useless because, if a server was attached to a SRV item, it means
it was already removed from the tree. But it will be mandatory to fix a bug.
A dedicated function is now used to clean up servers when a SRV item becomes
obsolete or when a requester is removed from a resolution. This patch is
mandatory to fix a bug.
To perform server resolution, the resolver's lock is acquired first, then
the server's lock when necessary. However, when the fqdn is set via the
CLI, the opposite order is used. So, it is possible to experience an ABBA
deadlock.
To fix this bug, the server's lock is acquired and released for each
subcommand of "set server", with an exception when the fqdn is set: in that
case, the resolver's lock is acquired first. Of course, this means we must
be sure to have a resolver to lock.
This patch must be backported as far as 1.8.
If a server is configured to rely on a SRV resolution, we must forbid
changing its fqdn on the CLI. Indeed, in this case, the server retrieves its
fqdn from the SRV resolution. If the fqdn is changed via the CLI, this
conflicts with the SRV resolution and leaves the server in an undefined
state. Most of the time, the SRV resolution remains enabled with no effect
on the server (no update). Sometimes the A/AAAA resolution for the new fqdn
is not enabled at all. It depends on the server state and resolver state
when the CLI command is executed.
This patch must be backported as far as 2.0 (maybe to 1.8 too ?) after some
observation period.
To avoid repeating the same source code, the code that allocates memory and
initializes the per_thr field of the server structure has been moved to a
separate function.
In 2.4, commit d1ac2b90c ("MAJOR: htx: Remove the EOM block type and
use HTX_FL_EOM instead") changed the HTX processing to destroy the
blocks as they are processed. So the traces that were emitted at the
end of the send headers functions didn't have anything to show.
Let's move these traces earlier in the function, right before the HTX
processing, so that everything is still in place.
This should be backported to 2.4.
Since commit 7d013e796 ("BUG/MEDIUM: mux-h2: Xfer rxbuf to the upper
layer when creating a front stream"), the rxbuf is lost during the
call to h2c_frt_stream_new(), so the trace that happens later cannot
find a request there and we've lost the useful part indicating what
the request looked like. Let's move the trace before this call.
This should be backported to 2.4.
We're seeing some browsers setting up multiple connections and closing
some to just keep one. It looks like they do this in case they'd
negotiate H1. This results in aborted prefaces and log pollution about
bad requests and "PR--" in the status flags.
We already have an option to ignore connections with no data, it's called
http-ignore-probes. But it was not used by the H2 mux. However it totally
makes sense to use it during the preface.
This patch changes this so that connections aborted before sending the
preface can avoid being logged.
This should be backported to 2.4 and 2.3 at least, and probably even
as far as 2.0.
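As an illustrative configuration (the frontend name and certificate path are
placeholders), the option simply has to be enabled on the H2 frontend:

    frontend fe_h2
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        option http-ignore-probes
        default_backend be_app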
As part of the changes to support per-module stats data in 2.3-dev6
with commit ee63d4bd6 ("MEDIUM: stats: integrate static proxies stats
in new stats"), a small change resulted in the description field being
replaced by the name field, making it pointless. Let's fix this
back.
This should fix issue #1291. Thanks to Nick Ramirez for reporting this
issue.
This patch can be backported to 2.3.
"sent H2 request" was already misaligned with the 3 other ones
(sent/rcvd, request/response), and now with "new H2 connection" that's
yet another alignment making the traces even less legible. Let's just
realign all 5 messages, this even eases quick pointer comparisons. This
should probably be backported to 2.4 as it's where it's the most likely
to be used in the mid-term.
It is currently very difficult to match some H2 trace outputs against
some log extracts because there's no exactly equivalent info.
This patch tries to address this by adding a TRACE_USER() call in h2_init()
that is matched in h2_trace() to report:
- connection pointer and direction
- frontend's name or server's name
- transport layer and control layer (e.g. "SSL/tcpv4")
- source and/or destination depending on what is set
This now permits getting something like this at the "complete" verbosity level:
<0>2021-06-16T18:30:19.810897+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x19fee50(F,PRF) : conn=0x7f373c026850(IN) fe=h2gw RAW/tcpv4 src=127.0.0.1:19540
<0>2021-06-16T18:30:19.810919+02:00 [00|h2|1|mux_h2.c:2731] rcvd H2 request : h2c=0x19fee50(F,FRH)
<0>2021-06-16T18:30:19.810998+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x1a04ee0(B,PRF) : conn=0x1a04ce0(OUT) sv=h2gw/s1 RAW/tcpv4 dst=127.0.0.1:4446
This function appends to a buffer some information from a connection.
This will be used by traces and possibly some debugging as well. A
frontend/backend/server, transport/control layers, source/destination
ip:port, connection pointer and direction are reported depending on
the available information.
Since 2.4-dev3 with commit c4bfa59f1 ("MAJOR: mux-h1: Create the client
stream as later as possible"), a request error doesn't result in any
error response if "option http-ignore-probes" is set, there's just a
close. This is caused by an unneeded b_reset() in h1_process_demux()'s
error path, which makes h1_handle_bad_req() believe there was an empty
request. There is no reason for this reset to be there, it must have
been a leftover of an earlier attempt at dealing with the error, let's
drop it.
This should be backported to 2.4.
In 2.3, a significant improvement was brought against situations where
the queue was heavily used, because some LB algos were still checked
for no reason before deciding to put the request into the queue. This
was commit 82cd5c13a ("OPTIM: backend: skip LB when we know the backend
is full").
As seen in previous commit ("BUG/MAJOR: queue: set SF_ASSIGNED when
setting strm->target on dequeue") the dequeuing code is extremely
tricky, and the optimization above tends to emphasize transient issues
by making them permanent until the next reload, which is not acceptable
as the code must always be robust against any bad situation.
This commit brings a protection against such a situation by slightly
relaxing the test. Instead of only checking that there are pending
connections in the backend queue, it also verifies that the backend's
connections are not solely composed of queued connections, which would
indicate we are in this situation. This is not rocket science, but at least
if the situation happens, we know that it will unlock by itself once the
streams have left, as new requests will be allowed to reach the servers and
to flush the queue again.
This needs to be backported to 2.4 and 2.3.
Commit 82cd5c13a ("OPTIM: backend: skip LB when we know the backend is
full") has uncovered a long-burried bug in the dequeing code: when a
server releases a connection, it picks a new one from the proxy's or
its queue. Technically speaking it only picks a pendconn which is a
link between a position in the queue and a stream. It then sets this
pendconn's target to itself, and wakes up the stream's task so that
it can try to connect again.
The stream then goes through the regular connection setup phases,
calls back_try_conn_req() which calls pendconn_dequeue(), which
sets the stream's target to the pendconn's and releases the pendconn.
It then reaches assign_server() which sees no SF_ASSIGNED and calls
assign_server_and_queue() to perform load balancing or queuing. This
one first destroys the stream's target and gets ready to perform load
balancing. At this point we're load-balancing for no reason since we
already knew what server was available. And this is where the commit
above comes into play: the check for the backend's queue above may
detect other connections that arrived in between, and will immediately
return FULL, forcing this request back into the queue. If the server
had a very low maxconn (e.g. 1 due to a long slowstart), it's possible
that this evicted connection was the last one on the server and that
no other one will ever be present to process the queue. Usually a
regularly processed request will still have its own srv_conn that will
be used during stream_free() to dequeue other connections. But if the
server had a down-up cycle, then a call to pendconn_grab_from_px()
may start to dequeue entries which had no srv_conn and which will have
no server slot to offer when they expire, thus maintaining the situation
above forever. Worse, as new requests arrive, there are always some
requests in the queue and the situation feeds on itself.
The correct fix here is to properly set SF_ASSIGNED in pendconn_dequeue()
when the stream's target is assigned (as it's what this flag means), so
as to avoid a load-balancing pass when dequeuing.
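For illustration, the intent of the fix could be sketched as follows (the
field names are simplified assumptions, not the exact pendconn_dequeue()
code):

    /* hypothetical sketch inside pendconn_dequeue(): the server was already
     * chosen by the dequeuing side, so mark the stream as assigned in order
     * to skip the pointless load-balancing pass described above.
     */
    strm->target = p->target;
    strm->flags |= SF_ASSIGNED;   /* that is exactly what this flag means */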
Many thanks to Pierre Cheynier for the numerous detailed traces he
provided that helped narrow this problem down.
This could be backported to all stable versions, but in practice only
2.3 and above are really affected since the presence of the commit
above. Given how tricky this code is it's better to limit it to those
versions that really need it.
With a single process, we don't need USE_PRIVATE_CACHE, USE_FUTEX
nor USE_PTHREAD_PSHARED anymore. Let's only keep the basic spinlock
to lock between threads.
The relative_pid is always 1. In mworker mode we also have a
child->relative_pid which is always equal to relative_pid, except for a
master (0) or external process (-1), but these types are usually tested
for, except for one place that was amended to carefully check for the
PROC_O_TYPE_WORKER option.
Changes were pretty limited as most usages of relative_pid were for
designating a process in stats output and peers protocol.
As there's no more nbproc>1, we can remove some loops and tests in cpu-map.
Both the lack of thread number and thread 1 can count as the whole process
now (which is still used for whole process binding when threads are disabled).
Lots of places iterating over nbproc or comparing with nbproc could be
simplified. Further, "bind-process" and "process" parsing that was
already limited to process 1 or "all" or "odd" resulted in a bind_proc
field that was either 0 or 1 during the init phase and later always 1.
All the checks for compatibilities were removed since it's not possible
anymore to run a frontend and a backend on different processes or to
have peers and stick-tables bound on different ones. This is the largest
part of this patch.
The bind_proc field was removed from both the proxy and the receiver
structs.
Since the "process" and "bind-process" directives are still parsed,
configs making use of correct values allowing process 1 will continue
to work.
There was a loop iterating over all nbproc values during init that
couldn't be immediately removed because the loop's index was used
to distinguish a child from a parent. That's now fixed by replacing
the iterator with an in_parent flag. All bindings that were checking
(1UL << proc) or cpu_map.proc[proc] were adjusted to always use zero
for proc.
Since its introduction in 1.8 with commit 095ba4c24 ("MEDIUM: mworker:
replace systemd mode by master worker mode"), it says "cannot chroot1(...)"
which seems to be a leftover of a debug message. It could be backported but
probably nobody will notice.
The global shctx lookups and misses were updated without using atomic
ops, so the stats available in "show info" are very likely off by a few
units over time. This should be backported as far as 1.8. Versions
without _HA_ATOMIC_INC() can use HA_ATOMIC_ADD(,1).
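A minimal sketch of the change this implies; the counter names are
assumptions:

    /* atomic increments so concurrent threads cannot lose updates */
    _HA_ATOMIC_INC(&global.shctx_lookups);
    /* on branches without _HA_ATOMIC_INC(): */
    HA_ATOMIC_ADD(&global.shctx_misses, 1);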
Since threads were introduced in 1.8, the USE_PRIVATE_CACHE mode of the
shctx was not updated to use locks. Originally it was meant to disable
sharing between processes, so it removes the lock/unlock instructions.
But with threads enabled, it's not possible to work like this anymore.
It's easy to see that once built with private cache and threads enabled,
sending violent SSL traffic to the process instantly makes it die.
The HTTP cache is very likely affected as well.
This patch addresses this by falling back to our native spinlocks when
USE_PRIVATE_CACHE is used. In practice we could use them also for other
modes and remove all older implementations, but this patch aims at keeping
the changes very low and easy to backport. A new SHCTX_LOCK label was
added to help with debugging, but OTHER_LOCK might be usable as well
for backports.
An even lighter approach for backports may consist in always declaring
the lock (or reusing "waiters"), and calling pl_take_s() for the lock()
and pl_drop_s() for the unlock() operation. This could even be used in
all modes (process and threads), even when thread support is disabled.
Subsequent patches will further clean up this area.
This patch must be backported to all supported versions since 1.8.
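A minimal sketch of that lighter backport approach, assuming the shctx's
existing "waiters" field can be reused as the lock word:

    /* hedged sketch only: serialize shctx accesses with a plain spinlock */
    static inline void shctx_lock(struct shared_context *shctx)
    {
            pl_take_s(&shctx->waiters);   /* "waiters" reused as the lock word */
    }

    static inline void shctx_unlock(struct shared_context *shctx)
    {
            pl_drop_s(&shctx->waiters);
    }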
Until now, servers were automatically attached on creation to the proxy
addr_node tree via _srv_parse_init. In case of an invalid dynamic server
which is instantly freed, no detach operation was performed, leaving a
dangling server in the tree.
Change this mode of operation by marking the attach operation as
optional in _srv_parse_init. This operation is not performed for a dynamic
server. The server is attached only at the end of the CLI handler when
it is marked as valid.
This must be backported up to 2.4.
A bug is present when trying to create a dynamic server with a fixed id.
If the server is detected as invalid due to a later argument parsing
error, the server is not removed from the proxy's used ids tree before
being freed.
Change the mode of operation of the 'id' keyword parsing handler. The
insertion into the backend tree is removed from the handler and is now
taken in charge by parse_server for configuration parsing. For the
dynamic servers, the insertion is called at the end of the 'add server'
CLI handler when the server has been validated.
This must be backported up to 2.4.
If no id is specified by the user for a dynamic server, it is necessary
to generate a new one. This operation is now done at the end of the 'add
server' CLI handler. The server is then inserted into the proxy ids
tree.
Without this, several features may be broken for dynamic servers. Among
them are the "first" lb algorithm, the persistence using stick-tables,
and the internal uniqueness check of srv_parse_id.
This must be backported up to 2.4.
Do not leave a deleted server in the used_server_id/used_server_addr
backend trees. This might lead to crashes if a deleted server is used
through these trees.
At this moment, dynamic servers are only added to used_server_id if they
have a fixed id. They are never inserted into used_server_addr as this
code is missing. So these new delete instructions are a noop. However, a
fix will be provided soon to properly insert all dynamic servers in both
used_server_id and used_server_addr trees, so the deletion counterpart
will be mandatory in the CLI server delete handler.
This must be backported to 2.4.
Some config parsing handlers were designed to be run at startup on a
single thread. When executed at runtime for dynamic servers,
thread-safety is not guaranteed. This is the case for example in
srv_parse_id which manipulates the backend's used_ids tree.
One solution could be to add locks, but it might be tricky to find all
affected functions and it could easily become a source of deadlocks. The
other solution, which has been chosen, is to use thread-isolation over
almost all of the cli_parse_add_server CLI handler.
For now this solution is sufficient. If some users make heavy use of
'add server', hurting the overall performance, it will be necessary to
design a finer-grained solution.
This must be backported up to 2.4.
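As a rough sketch of the chosen approach (the handler's internals are
elided):

    /* hedged sketch of cli_parse_add_server(): run the whole parsing under
     * thread isolation so that startup-only parsing handlers remain safe
     * when executed at runtime.
     */
    thread_isolate();
    /* ... parse keywords, create and validate the dynamic server ... */
    thread_release();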
If the server id is fixed in the configuration, it is immediately
inserted into the 'used_server_id' backend tree via srv_parse_id. In
check_config_validity, the dynamic id generation is thus skipped for
fixed-id servers. However, the server must nevertheless be inserted into
the 'used_server_name' backend tree.
This bug does not seem to be noticeable by the user. Indeed, before the
fix, the search in sticking_rule_find_target always returned NULL for
the name, then the fallback search with the server id succeeded, so the
persistence was properly applied. However, with the fix, the fallback
search is not executed anymore, which avoids taking the STK_SESS lock.
This should be backported up to 2.0.
This patch fixes the issue by adding a test on srvrq before registering
the server on it during server-template init.
This was a regression due to commit:
3406766d57
This should be backported with the previous commit (as far as 2.0).
This one was deprecated in 2.3 and marked for removal in 2.5. It suffers
from too many limitations compared to threads, and prevents some
improvements from being engaged. Instead of a bypassable startup error,
there is now a hard error.
The parsing code was removed, and only a few obvious cases were cleaned up
as well.
The code is deeply rooted at certain places (e.g. "for" loops iterating
from 0 to nbproc) so it will not be that trivial to remove everywhere.
The "bind" and "bind-process" parsers will have to be adjusted, though
maybe not completely changed if we later want to support thread groups
for large NUMA machines. Some stats socket restrictions were removed,
and the doc was updated according to what was done. A few places in the
doc still refer to nbproc and will have to be revisited. The master-worker
code also refers to the process number to distinguish between master and
workers and will have to be carefully adjusted. The MAX_PROCS macro was
reset to 1, this will at least reduce the size of some remaining arrays.
Two regtests were depending on this directive, one with an explicit
"nbproc 1" and another one testing the master's CLI using nbproc 4.
Both were adapted.
Commit ab0a5192a ("MEDIUM: config: mark "grace" as deprecated") marked
the "grace" keyword as deprecated in 2.3, tentative removal for 2.4
with a hard deadline in 2.5, so let's remove it and return an error now.
This old and outdated feature was incompatible with soft-stop, reload
and socket transfers, and keeping it forced ugly hacks in the lower
layers of the protocol stack.
It was marked as deprecated in 2.1-dev2 and for removal in 2.2, but it
was missed. A warning was already emitted and the doc didn't refer to
it any more, let's now get rid of it.
It has been emitting a deprecation warning since 2.0-dev4, so it's about
time to drop it now. The error message recommends to either remove it or
use "option httpclose" instead. It's still referred to in the old
internal doc about the connection header, which itself seems highly
inaccurate by now.
This one was marked as deprecated 9 years ago by commit 28376d62c
("MEDIUM: http: merge ACL and pattern cookie fetches into a single one")
and has disappeared from any documentation, so it never appeared in any
released version. Let's remove it now.
The following 10 log-format tags were implemented during log-format
development and changed before the release. They were marked as deprecated
in 2012 by commit 2beef5888 ("MEDIUM: log: change a few log tokens to make
them easier to remember") and were not documented. They've been emitting a
warning since then, with a suggestion of the one to use instead. Let's get
rid of them now.
Bi => bi, Bp => bp, Ci => ci, Cp => cp, Fi => fi
Fp => fp, Si => si, Sp => sp, cc => CC, cs => CS
It was marked as deprecated for immediate removal as it was not used,
so let's reject it and remove it from the doc. A specific error suggests
checking tune.bufsize instead.
When the state of a server is loaded, if there is no hostname defined for
this server and if a fqdn and a server record are retrieved from the state
file, it means the server should rely on a SRV resolution. But we must be
sure the server is configured this way. A SRV resolution must be configured
with the same SRV record. This part must be skipped if there is no SRV
resolution configured for this server or if the SRV record used is not the
same.
This patch should be backported as far as 1.8 after some observation period.
This patch adds a ref into servers to register them onto the
record answer item used to set their hostnames.
It also adds a head list into 'srvrq' to register servers free
to be assigned to a SRV record.
A tree head is also added to srvrq to store servers which have a
hostname in the server state file, so that they can be quickly re-linked
to the matching record as soon as an item presents the same name.
This results in better performance on SRV record response
parsing.
This is an optimization, but it could avoid triggering haproxy's
internal watchdog in some circumstances. And for this reason
it should be backported as far as we can (2.0?).
This patch adds a head list into answer items, for servers which use
this record to set their IPs. It makes lookups on duplicated IPs faster and
allows to immediately check whether an item is still valid when renewing
the IP.
This results in better performance on A/AAAA resolutions.
This is an optimization, but it could avoid triggering haproxy's
internal watchdog in some circumstances. And for this reason
it should be backported as far as we can (2.0?).
In case of SRV records, the answer item list was purged by the
error callback of the first requester which considered the error
could not be safely ignored. This made the item list unavailable
for subsequent requesters even if they considered the error
could be ignored.
On A resolution or do_resolve action error, the answer items were
never trashed.
This patch reworks the error callbacks and the code that checks their
return code. If a callback returns 1, we consider the error was ignored and
the answer item list must be kept. Conversely, if all error callbacks
of all requesters of the same resolution return 0, the list will be purged.
This patch should be backported as far as 2.0.
The return value is enough now to know if the allocation succeeded or
failed.
This cleanup was already pushed by Willy (f499f50) but a revert wiped it
out. It may be backported to 2.4 because the original patch was made on
this version.
On some platforms, a char may be unsigned. Of course, we should not rely on
the signedness of a char to be portable. Unfortunately, since commit
a835f3cb ("MINOR: h1-htx: Use a correlation table to speed-up small chunks
parsing"), we rely on it to test the value retrieved from the hexadecimal
correlation table when the size of a chunk is parsed.
To fix the bug, we now test that the result is in the range [0,15] with a
bitwise AND.
This patch should fix the issue #1272. It is 2.5-specific, no backport is
needed except if the commit above is backported.
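A hypothetical illustration of such a check; the table name and the invalid
marker value are assumptions:

    /* hex_tbl[] is assumed to map '0'-'9', 'a'-'f', 'A'-'F' to 0..15 and
     * everything else to 0xff; an unsigned read plus a bitwise AND detects
     * invalid digits regardless of whether char is signed.
     */
    unsigned char v = hex_tbl[(unsigned char)*ptr];
    if (v & 0xF0)
            goto parsing_error;   /* not a hexadecimal digit */
    chunk_size = (chunk_size << 4) | v;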
As specified in the RFC3875 (section 4.1.17), this parameter must be set to
the name and version of the information server software making the CGI
request. Thus, it is now added to the default parameters defined by
HAProxy. It is set to the string "HAProxy $version".
This patch should fix the issue #1285 and must be backported as far as 2.2.
When an HTX block is expanded, a defragmentation may be performed first to
have enough space to copy the new data. When this happens, the meta data of
the HTX message must take the new data length into account, but the copied
data are still unchanged at this stage (because we need more space to
update the message content). And here there is a bug, because the meta data
are updated by the caller. It means that when the block's content is
copied, the new length is already set. Thus a block larger than it really
is gets copied, and data outside the buffer may be accessed, leading to a
crash.
To fix this bug, htx_defrag() is updated to use an extra argument with the
new meta data to use for the referenced block. Thus the caller does not
need to update the HTX message by itself. However, it still has to update
the data.
Most of the time, the bug will be encountered in the HTTP compression
filter. But, even if it is highly unlikely, in theory it is also possible
to hit it when an HTTP header (or only its value) is replaced or when the
start-line is changed.
This patch must be backported as far as 2.0.
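For illustration, the new calling convention could look like this; the
exact prototype is an assumption based on the description above:

    /* the defragmentation routine now receives the updated meta data (info)
     * for the referenced block and applies it itself.
     */
    struct htx_blk *htx_defrag(struct htx *htx, struct htx_blk *blk, uint32_t info);

    /* caller side: defragment with the new meta data, then copy the payload */
    blk = htx_defrag(htx, blk, new_info);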
The ifdefs surrounding the "show ssl ocsp-response" functionality that
were supposed to disable the code with BoringSSL were built the wrong
way.
It does not need to be backported.
Now that the modified lockless variant does not need a DWCAS anymore,
there's no reason to keep the much slower locked version, so let's
just get rid of it.
In GH issue #1275, Fabiano Nunes Parente provided a nicely detailed
report showing reproducible crashes under musl. Musl is one of the libs
coming with a simple allocator for which we prefer to keep the shared
cache. On x86 we have a DWCAS so the lockless implementation is enabled
for such libraries.
And this implementation has had a small race since day one: the allocator
will need to read the first object's <next> pointer to place it into the
free list's head. If another thread picks the same element and immediately
releases it, while both the local and the shared pools are too crowded, it
will be freed to the OS. If the libc's allocator immediately releases it,
the memory area is unmapped and we can have a crash while trying to read
that pointer. However there is no problem as long as the item remains
mapped in memory, because whatever value is found there will not be placed
into the head since the counter will have changed.
The probability for this to happen is extremely low, but as analyzed by
Fabiano, it increases with the buffer size. On 16 threads it's relatively
easy to reproduce with 2MB buffers above 200k req/s, where it should
happen within the first 20 seconds of traffic usually.
This is a structural issue for which there are two non-trivial solutions:
- place a read lock in the alloc call and a barrier made of lock/unlock
in the free() call to force to serialize operations; this will have
a big performance impact since free() is already one of the contention
points;
- change the allocator to use a self-locked head, similar to what is
done in the MT_LISTS. This requires two memory writes to the head
instead of a single one, thus the overhead is exactly one memory
write during alloc and one during free;
This patch implements the second option. A new POOL_DUMMY pointer was
defined for the locked pointer value, allowing to both read and lock it
with a single xchg call. The code was carefully optimized so that the
locked period remains the shortest possible and that bus writes are
avoided as much as possible whenever the lock is held.
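As a rough sketch of the self-locked head technique (the atomic macros and
field names are assumptions drawn from the description, not the exact
code):

    /* hedged sketch: pop one item from a self-locked free list head.
     * POOL_DUMMY is the reserved pointer value marking "head locked".
     */
    #define POOL_DUMMY ((void *)1)

    struct pool_item { struct pool_item *next; };

    static struct pool_item *pool_pop(struct pool_item **free_list)
    {
            struct pool_item *item;

            /* a single xchg both reads the head and locks it */
            do {
                    item = HA_ATOMIC_XCHG(free_list, POOL_DUMMY);
            } while (item == POOL_DUMMY);   /* another thread holds the head */

            /* publish the new head, which also releases the "lock" */
            HA_ATOMIC_STORE(free_list, item ? item->next : NULL);
            return item;
    }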
Tests show that while a bit slower than the original lockless
implementation on large buffers (2MB), it's 2.6 times faster than both
the no-cache and the locked implementations on such large buffers, and
remains as fast as or faster than all the other implementations when
buffers are 48k or higher. Tests were also run on arm64 with similar
results.
Note that this code is not used on modern libcs featuring a fast allocator.
A nice benefit of this change is that since it removes a dependency on
the DWCAS, it will be possible to remove the locked implementation and
replace it with this one, that is then usable on all systems, thus
significantly increasing their performance with large buffers.
Given that lockless pools were introduced in 1.9 (not supported anymore),
this patch will have to be backported as far as 2.0. The code changed
several times in this area and is subject to many ifdefs which will
complicate the backport. What is important is to remove all the DWCAS
code from the shared cache alloc/free lockless code and replace it with
this one. The pool_flush() code is basically the same code as the
allocator, retrieving the whole list at once. If in doubt regarding what
barriers to use in older versions, it's safe to use the generic ones.
This patch depends on the following previous commits:
- MINOR: pools: do not maintain the lock during pool_flush()
- MINOR: pools: call malloc_trim() under thread isolation
- MEDIUM: pools: use a single pool_gc() function for locked and lockless
The last one also removes one occurrence of an unneeded DWCAS in the
code that was incompatible with this fix. The removal of the now unused
seq field will happen in a future patch.
Many thanks to Fabiano for his detailed report, and to Olivier for
his help on this issue.
Locked and lockless shared pools don't need to use a different pool_gc()
function because this function isolates itself during the operation, so
we do not need to rely on DWCAS nor any atomic operation in fact. Let's
just get rid of the lockless one in favor of the simple one. This should
even result in a faster execution.
The ifdefs were slightly moved so that we can have pool_gc() defined
as soon as there are global pools, this avoids duplicating the function.
pool_gc() was adjusted to run under thread isolation by commit c0e2ff202
("MEDIUM: memory: make pool_gc() run under thread isolation") so that the
underlying malloc() and free() don't compete between threads during these
potentially aggressive moments (especially when mmap/munmap are involved).
Commit 88366c292 ("MEDIUM: pools: call malloc_trim() from pool_gc()")
later added a call to malloc_trim() but made it outside of the thread
isolation, which is contrary to the principle explained above. Also it
missed it in the locked version, meaning that those without a lockless
implementation cannot benefit from trimming.
This patch fixes that by calling it before thread_release() in both
places.
The locked version of pool_flush() is absurd, it locks the pool for each
and every element to be released till the end. Not only this is extremely
inefficient, but it may even never finish if other threads spend their
time refilling the pool. The only case where this can happen is during
soft-stop so the risk remains limited, but it should be addressed.
Since the code was reorganized, DEBUG_UAF was still tested in the locked
pool code despite pools being disabled when DEBUG_UAF is used. Let's move
the test to pool_put_to_os() which is the one that is always called in
this condition.
The impact is only a possible misleading analysis during a troubleshooting
session, due to a missing double-free or const-area-free test that is
normally already dealt with by the underlying code anyway. In practice it's
unlikely anyone will ever notice.
This should only be backported to 2.4.
The lockless version of pool_flush() had a leftover of the original
version causing the pool's first entry to be set to NULL at the end.
The problem is that it does this outside of any lock and in a non-
atomic way, so that any concurrent alloc+free would result in a lost
object.
The risk is low and the consequence even lower, given that pool_flush()
is only used in pool_destroy() (hence single-threaded) or by stream_free()
during a soft-stop (not the place where most allocations happen), so in
the worst case it could result in valgrind complaining on soft-stop.
The bug was introduced with the first version of the code, in 1.9, so
the fix can be backported to all stable versions.
Define srv.init_addr_methods to SRV_IADDR_NONE in the 'add server' CLI
handler. This explicitly states that no resolution will be made on
server creation.
This is not a real bug, as the default value (SRV_IADDR_END) has the same
effect in practice. However the intent is clearer and it prevents using the
default "libc,last" by mistake, which cannot be executed at runtime
(blocking call + file access via gethostbyname/getaddrinfo).
The doc is also updated to reflect this limitation.
This should be backported up to 2.4.
Add the ability to dump an OCSP response's details through a call to "show
ssl cert cert.pem.ocsp". It can also be used on an ongoing transaction
by prefixing the certificate name with a '*'.
Even if the ckch structure holds an ocsp_response buffer, we still need
to look for the actual ocsp response entry in the ocsp response tree
rather than just dumping the ckch's buffer details because when updating
an ocsp response through a "set ssl ocsp-response" call, the
corresponding buffer in the ckch is not updated accordingly. So this
buffer, even if it is not empty, might hold an outdated ocsp response.
This patch adds an "OCSP Response Key" information in the output of a
"show ssl cert <certfile>" call. The key can then be used in a "show ssl
ocsp-response <key>" CLI command.
This patch adds the "show ssl ocsp-response [<id>]" CLI command. This
command can be used to display the IDs of the OCSP tree entries along
with details about the entries' certificate ID (issuer's name and key
hash + serial number), or to display the details of a single
ocsp-response if an ID is given. The details displayed in this latter
case are the ones shown by a "openssl ocsp -respin <ocsp-response>
-text" call.
The OCSP tree entry key is a serialized version of the OCSP_CERTID of
the entry which is stored in a buffer that can be at most 128 bytes.
Depending on the length of the serial number, the actual non-zero part
of the key can be smaller than 128 bytes and this new structure member
allows to know how many of the bytes are filled. It will be useful when
dumping the key (in a "show ssl cert <cert>" output for instance).
Since the commit acfd71b97 ("BUG/MINOR: http-comp: Preserve
HTTP_MSGF_COMPRESSIONG flag on the response"), there is no more flag to know
when the compression ends. This means it is possible to finish the
compression several time if there are trailers.
So, we reintroduce almost the same mechanism but with a dedicated flag. So
now, there is a bits field in the compression filter context.
The commit above is marked to be backported as far as 2.0. Thus this patch
must also be backported as far as 2.0.
When a DATA block is compressed, or when the compression context is finished
on a TLR/EOT block, the next block used to loop on the HTX message must be
refreshed because a defragmentation may have occurred.
This bug was introduced when the EOM block was removed in 2.4. Thus, this
patch must be backported to 2.4.
In comp_http_payload(), the loop skipping unused blocks is buggy and may
lead to an infinite loop if the first next block is unused. Indeed, instead
of iterating on blocks, we always retrieve the same one because <blk> is
used instead of <next> to get the next block.
This bug was introduced when the EOM block was removed in 2.4. Thus, this
patch must be backported to 2.4.
The wey the "Next Update" field of the OCSP response is converted into a
timestamp relies on the use of signed integers for the year and month so
if the calculated timestamp happens to overflow INT_MAX, it ends up
being seen as negative and the OCSP response being dwignored in
ssl_sock_ocsp_stapling_cbk (because of the "ocsp->expire < now.tv_sec"
test).
It could be backported to all stable branches.
Since commit 04a5a44 ("BUILD: ssl: use HAVE_OPENSSL_KEYLOG instead of
OpenSSL versions") the "tune.ssl.keylog" feature is broken because
HAVE_OPENSSL_KEYLOG does not exist.
Replace this by a HAVE_SSL_KEYLOG which is defined in openssl-compat.h.
Also add an error when not built with the right openssl version.
Must be backported as far as 2.3.
Change the algorithm for the generation of the user messages context
prefix. Remove the dubious API relying on optional printf positional
arguments. This may be non-portable, and in fact the CI glibc crashes
with the following error when some arguments are not present in the
format string:
"invalid %N$ use detected".
Now, a fixed buffer attached to the context instance is allocated once
for the program lifetime. Then snprintf is called repeatedly with the
optional context arguments, if present, to build the context string.
The buffer is deallocated via a per-thread free handler.
This does not need to be backported.
This patch adds the `-cc` (check condition) argument to evaluate conditions on
startup and return the result as the exit code.
As an example this can be used to easily check HAProxy's version in scripts:
haproxy -cc 'version_atleast(2.4)'
This resolves GitHub issue #1246.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
We can calculate the number of possible arguments based on the size of the
`args` array. We should do so to prevent the two values from getting out of
sync.
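A minimal illustration of the idea (the names are generic, not the actual
code):

    /* derive the limit from the array itself rather than from a separate
     * constant that could drift out of sync.
     */
    char *args[64];
    const size_t max_args = sizeof(args) / sizeof(*args);

    for (size_t i = 0; i < max_args; i++) {
            /* ... handle argument i ... */
    }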
Replace memprintf usage in the _srv_parse* functions by ha_alert calls.
This has the advantage of simplifying the function prototypes by removing
an extra char** argument.
As a consequence, the CLI handler of 'add server' is updated to output
the user messages buffers if not empty.
Initialize the parsing context in srv_init_addr. This function is called
after configuration check.
This will standardize the stderr output on startup with the parse_server
function.
Initialize the parsing context when checking server config validity.
Adjust the log messages to remove redundant config file/line and server
name. Do a similar cleaning in prepare_srv from ssl_sock as this
function is called at the same stage.
This will standardize the stderr output on startup with the parse_server
function.
Set "config :" as a prefix for the user messages context before starting
the configuration parsing. All following stderr output will be prefixed
by it.
As a consequence, remove extraneous prefix "config" already specified in
various ha_alert/warning/notice calls.
Display process executable path on first warning if not already done in
ha_warning, as in ha_alert. The output is thus cleaner when ALERT and
WARN messages are mixed, with the executable path always on first
position.
Prepend the user messages context to stderr output in print_message. It
is inserted between the output prefix (log level / pid) and the message
itself. Its content depends on the loaded context infos.
Create a parsing_ctx structure. This type is used to store information
about the currently parsed file/line. A global context is created and
can be manipulated while haproxy is in STARTING mode. Once startup is
over, the context is reset and should not be accessed anymore.
Always print messages in ha_alert/warning/notice once startup is over,
regardless of the quiet/verbose options.
This change is useful to retrieve the output via the newly implemented
user messages buffer at runtime, for the CLI handlers.
The user messages buffer is used to store the stderr output after
startup is over. Each thread has its own user messages buffer. Add some
functions to add a new message, and to retrieve and clear the content.
The user messages buffer's primary goal is to be consulted by CLI
handlers. Each handler using it must clear the buffer before starting
its operation.
Fix memprintf usage in server_parse_sni_expr. Error messages should not
end with a newline, as one will be inserted by the parent function on
the ha_alert invocation.
Move functions related to errors output on stderr from log.c to a newly
created errors.c file. It targets print_message and
ha_alert/warning/notice/diag functions and related startup_logs feature.
The leastconn and roundrobin functions mention that the server's lock
must be held while this is not true at all and it is not used either.
The "first" algo doesn't mention anything about the need for locking,
so let's mention that it uses the lbprm lock.
When a client aborts while the session is in the queue or during the connect
stage, instead of reporting a 503-Service-Unavailable error in the logs, a
-1 status is used. It means a -1 status is now reported with the 'CC' and
'CQ' termination states.
Indeed, when a client aborts before the server connection is established,
there is no reason to report a 503 because nothing is sent to the
server. And in this case, because it is a client abort, it is useless to
send any response to the client. Thus a -1 status is appropriate. This
status is used in log messages when the connection is closed and no
response is sent.
This patch should fix the issue #1266.
Some changes in the OpenSSL API broke this syntax:
#if SSL_OP_NO_TLSv1_3
OpenSSL made this change which broke our usage in commit f04bb0bce490de847ed0482b8ec9eabedd173852:
-# define SSL_OP_NO_TLSv1_3 (uint64_t)0x20000000
+#define SSL_OP_BIT(n) ((uint64_t)1 << (uint64_t)n)
+# define SSL_OP_NO_TLSv1_3 SSL_OP_BIT(29)
Which can't be evaluated by the preprocessor anymore.
This patch replaces the test with an OpenSSL version test.
This fixes part of issue #1276.
The return values of fcgi_strm_parse_data() and fcgi_strm_parse_trailers()
are no longer checked. Thus it is useless to store them.
This patch should fix the issues #1269 and #1268.
It is now possible to get any variable from the CLI. Concretely, only
variables in the PROC scope can be retrieved because there is neither a
stream nor a session defined. But nothing forbids anyone from trying to get
a variable in any scope. No value will be found, but it is allowed. Thus,
we must be sure not to rely on an undefined session or stream in that case.
Especially, the session must be tested before retrieving variables in the
CHECK scope.
This patch should fix the issue #1249. It must be backported to 2.4.
Thanks to the previous patch (822decfd "BUG/MAJOR: stream-int: Release SI
endpoint on server side ASAP on retry"), it is now useless to release any
existing connection in connect_server() because it was already done in
back_handle_st_cer() if necessary.
This patch is not a CLEANUP because it may introduce some bugs in edge
cases. There is no reason to backport it for now except if it is required to
fix a bug.
When a connection attempt failed, if a retry is possible, the SI endpoint on
the server side is immediately released, instead of waiting to establish a
new connection to a server. Thus, when the backend SI is switched from
SI_ST_CER state to SI_ST_REQ, SI_ST_ASS or SI_ST_TAR, its endpoint is
released. It is expected because the SI is moved to a state prior to the
connection stage ( < SI_ST_CONN). So it seems logical to not have any server
connection.
It is especially important if the retry is delayed (SI_ST_TAR or
SI_ST_QUE). Because, if the server connection is preserved, any error at
the connection level is unexpectedly relayed to the stream, via the
stream-interface, leading to an infinite loop in process_stream(): if the
SI_FL_ERR flag is set on the backend SI in another state than SI_ST_CLO, an
internal goto is performed to resync the stream-interfaces. In addition,
some resources are not released ASAP.
This bug is quite old and was reported once or twice per year since 2.2
(at least), with not enough information to catch it. It must be backported
as far as 2.2 with special care, because this part has moved several times,
and only after some observation period and feedback from users to be sure.
For info, in 2.0 and prior, the connection is released when an error is
encountered in SI_ST_CON or SI_ST_RDY states.
Thanks to the commit 1f08bffe0 ("MINOR: http-ana: Perform L7 retries because
of status codes in response analyser"), the L7 retries about the response
status code is now fully handled in the HTTP response analyser.
CF_READ_ERROR flag is no longer set on the response channel in this
case. Thus it is useless to try to catch L7 retries when CF_READ_ERROR is
set because it cannot happen.
The above commit was backported to 2.4, thus this one should also be
backported.
A memory allocation failure happening in chash_init_server_tree while
trying to allocate a server's lb_nodes item used in consistent hashing
would have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
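For illustration, the general pattern applied by this series of
allocation-failure fixes looks like the following; the field names are
assumptions:

    /* check the allocation and fail configuration parsing cleanly instead
     * of dereferencing a NULL pointer later.
     */
    srv->lb_nodes = calloc(srv->lb_nodes_tot, sizeof(*srv->lb_nodes));
    if (!srv->lb_nodes) {
            ha_alert("out of memory while allocating lb_nodes for server '%s'.\n", srv->id);
            return -1;
    }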
A memory allocation failure happening in make_arg_list when trying to
allocate the argument list would have resulted in a crash. This function
is only called during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in http_parse_redirect_rule when
trying to allocate a redirect_rule structure would have resulted in a
crash. This function is only called during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in mworker_env_to_proc_list when
trying to allocate a mworker_proc would have resulted in a crash. This
function is only called during init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in comp_append_type or
comp_append_algo called while parsing compression options would have
resulted in a crash. These functions are only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in tcp_parse_request_rule while
processing the "capture" keyword and trying to allocate a cap_hdr
structure would have resulted in a crash. This function is only called
during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in tcp_parse_tcp_req or
tcp_parse_tcp_rep when trying to allocate an act_rule structure would
have resulted in a crash. These functions are only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in proxy_defproxy_cpy while
copying the default compression options would have resulted in a crash.
This function is called for every new proxy found while parsing the
configuration.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during proxy_parse_declare while
processing the "capture" keyword and allocating a cap_hdr structure
would have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in parse_http_req_capture while
processing a "len" keyword and allocating a cap_hdr structure would
have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during ssl_init_single_engine
would have resulted in a crash. This function is only called during
init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during peers_register_table would
have resulted in a crash. This function is only called during init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
Two calloc calls were not checked in the srv_parse_source function.
Considering that this function could be called at runtime through a
dynamic server creation via the CLI, this could lead to an unfortunate
crash.
It was raised in GitHub issue #1233.
It could be backported to all stable branches even though the runtime
crash could only happen on branches where dynamic server creation is
possible.
L7 retries because of status codes are now performed in the response
analyser. This way, it is no longer required to handle L7 retries in
si_cs_recv(). It is also useless to set CF_READ_ERROR on the response
channel to be able to trigger such retries.
In addition, if no L7 retries are performed when the response is received,
the L7 buffer is immediately released. Before in this case, it was only
released with the stream.
When a network error occurs on the server side, if it is not the first
request (in case of keep-alive), nothing is returned to the client and its
connection is closed to be sure it may retry. However L7 retries on refused
early data (0rtt-rejected) must be performed first.
In addition, such L7 retries must also be performed before incrementing the
failed responses counter.
This patch must be backported as far as 2.0.
This bug was introduced by the previous commit (9f5382e45 Revert "MEDIUM:
http-ana: Deal with L7 retries in HTTP analysers") because I failed the
revert.
On L7 retry, if the maximum connection retries is reached, an error must be
returned to the client. Depending on the situation, it may be a
502-Bad-Gateway (empty-response or junk-response), a 504-Gateway-Timeout
(response-timeout) or a 425-Too-Early (0rtt-rejected). But contrary to what
the comment says, the do_l7_retry() function always returns a success.
Note it is not a problem for L7 retries on the response status code because
the stream-interface already makes sure the maximum connection retries
counter has not been reached before triggering an L7 retry.
This patch must be backported to 2.4 because the commit must also be
backported to 2.4.
This reverts commit 5b82cc5b5c. The purpose of
this commit was to fully handle L7 retries in the HTTP analysers and stop
dealing with the L7 buffer in si_cs_send()/si_cs_recv(). It is of course
cleaner this way. But there is a huge drawback: the L7 buffer is reserved
from the time the request analysis is finished until the moment the
response is received. For a small request, the analysis is finished before
the connection to the server. Thus the L7 buffer will be kept for queued
sessions while it is not mandatory.
So, for now, the commit is reverted to go back to the less expensive
solution. This patch must be backported to 2.4.
The main functions are renamed h1_process_demux() and h1_process_mux() to
be consistent with the H2 mux. For the same reason, the
h1_process_(headers|data|trailers) functions, responsible for parsing
incoming data, are renamed with an "h1_handle_" prefix.
Input buffers never contain output data. So, use the b_slow_realign_ofs()
function instead of b_slow_realign(). It is a slightly simpler function.
And in the H1 mux, it allows realigning the buffer by setting the input
buffer head so as to permit zero-copies.
Add h1_parse_full_contig_chunks() function to parse full contiguous chunks.
This function neither handles incomplete chunks nor wrapping buffers. It is
designed to efficiently parse a buffer with several small chunks. Of course,
there is no zero copy here because it is not possible. This function is a
bit tricky and all changes may have an impact. This one may probably be
optimized, but it is good enough for now and not too complex.
The main function (h1_parse_msg_chunks) always tries to use this function
when the HTTP parser is waiting for a chunk size. In this case, there is no
zero-copy, so there is no reason to call the generic version to parse the
chunk. However, if some unparsed data remain after this step, the generic
function is called. This way, wrapping data and incomplete chunks may be
parsed.
Quick tests show it is now slightly faster in all cases than the legacy
mode.
A generic function is now used to only parse the current chunk (h1_parse_chunk)
and the main one (h1_parse_msg_chunks) is used to loop on the buffer and relies
on the first one. This change is mandatory to be able to use an optimized
function to parse contiguous small chunks.
Chunked data are now parsed in a dedicated function. This way, it will be
possible to have two functions to parse chunked messages: the current one
for messages with large chunks and another one to parse messages with small
chunks.
The parsing of small chunks is really sensitive because it may be used as a
DoS vector. So we must be careful to have an optimized function to parse
such messages.
Because the function parsing H1 data is now able to handle wrapping input
buffers, there is no reason to loop anymore in the muxes to be sure to parse
wrapping data.
Since the beginning, wrapping input data are parsed and copied in 2 steps to
avoid dealing with the wrapping in the H1 parsing functions. But there is no
reason to do so. This needs 2 calls to the parsing functions. It also means
that, most of the time, when the input buffer does not wrap, there is an
extra call for nothing.
Thus, now, the data parsing functions try to copy as much data as possible,
handling wrapping buffer if necessary.
The h1 parsing functions (h1_parse_msg_*) return the number of bytes parsed
or 0 if nothing is parsed because an error occurred or some data are
missing. But they never return negative values. Thus, instead of a signed
integer, these functions now return a size_t value.
The H1 and FCGI muxes are updated accordingly. Note that h1_parse_msg_data()
has been slightly adapted because the parsing of chunked messages still
needs to handle negative values when a parsing error is reported by
h1_parse_chunk_size() or h1_skip_chunk_crlf().
The output of "show map/acl" now contains the 'entry_cnt' value that
represents the count of all the entries for each map/acl, not just the
active ones, which means that it also includes entries currently being
added.
This flag is set on the response when its payload is compressed by HAProxy.
It must be preserved because it may be used when the log message is emitted.
When the compression filter was refactored to support the HTX, an
optimization was added to not perform extra processing on the trailers:
the HTTP_MSGF_COMPRESSIONG flag is removed when the last data block is
compressed. It is not required, it is just an optimization and unfortunately
a bug. This optimization must be removed to preserve the flag.
This patch must be backported as far as 2.0. Only the HTX is affected.
For each filter, the pre and post callback functions must only be called
once. To do so, when one of them is finished, the corresponding analyser
bit must be removed from the pre_analyzers or post_analyzers bit field. It
is only an issue with pre-analyser callback functions if the corresponding
analyser yields. It may happen with a Lua action for instance. In this
case, the filter's pre-analyser callback function is unexpectedly called
several times.
This patch should fix the issue #1263. It must be backported to all stable
versions.
A deadlock is possible with the 'set maxconn server' command, if there is a
pending connection ready to be dequeued. This is caused by the locking
of the server spinlock in both cli_parse_set_maxconn_server and
process_srv_queue.
Fix this by reducing the scope of the server lock to
server_parse_maxconn_change_request. If connections are dequeued, the
lock is taken a second time. This can be seen as suboptimal, but as it
happens only during 'set maxconn server', it can be considered
tolerable.
This issue was reported on the mailing list, for the 1.8.x branch.
It must be backported up to 1.8.
The following functions used in CA/CRL file hot update were not defined
in OpenSSL 1.0.2 so they need to be defined in openssl-compat :
- X509_CRL_get_signature_nid
- X509_CRL_get0_lastUpdate
- X509_CRL_get0_nextUpdate
- X509_REVOKED_get0_serialNumber
- X509_REVOKED_get0_revocationDate
The CA/CRL hot update patches did not compile on some targets of the CI
(mainly gcc + ssl). This patch should fix almost all of them. It adds
missing variable initializations and return value checks to the
BIO_reset calls in show_crl_detail.
This patch adds the "show ssl crl-file [<crlfile>]" CLI command. This
command can be used to display the list of all the known CRL files when
no specific file name is specified, or to display the details of a
specific CRL file when a name is given.
The details displayed for a specific CRL file are inspired by the ones
shown by an "openssl crl -text -noout -in <filename>" call.
The "abort" command aborts an ongoing transaction started by a "set ssl
crl-file" command. Since the updated CRL file data is not pushed into
the CA file tree until a "commit ssl crl-file" call is performed, the
abort command simply deletes the new cafile_entry (storing the new CRL
file data) stored in the transaction.
This patch adds the "new ssl crl-file" and "del ssl crl-file" CLI
commands.
The "new" command can be used to create a new empty CRL file that can be
filled in thanks to a "set ssl crl-file" command. It can then be used in
a new crt-list line.
The newly created CRL file is added to the CA file tree so any call to
"show ssl crl-file" will display its name.
The "del" command allows to delete an unused CRL file. A CRL file will
be considered unused if its list of ckch instances is empty. It does not
work on an uncommitted CRL file transaction created via a "set ssl
crl-file" command call.
This patch adds the "set ssl crl-file" and "commit ssl crl-file"
commands, following the same logic as the certificate and CA file update
equivalents.
When trying to update a Certificate Revocation List (CRL) file via a
"set" command, we start by looking for the entry in the CA file tree and
then building a new cafile_entry out of the payload, without adding it
to the tree yet. It will only be added when a "commit" command is
called.
During a "commit" command, we insert the newly built cafile_entry in the
CA file tree while keeping the previous entry. We then iterate over all
the instances that used the CRL file and rebuild a new one and its
dedicated SSL context for every one of them.
When all the contexts are properly created, the old instances get
replaced by the new ones and the old CRL file is removed from the tree.
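As an illustration of the workflow described above, a hedged example (socket
path and file names are hypothetical; the payload is passed with the CLI's
"<<" multi-line convention):
$ echo -e "set ssl crl-file site.crl <<\n$(cat /path/to/new-site.crl)\n" | socat - /tmp/sock1
$ socat - /tmp/sock1 <<< "commit ssl crl-file site.crl"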
In order for crl-file hot update to be possible, we need to add an extra
link between the CA file tree entries that hold Certificate Revocation
Lists and the instances that use them. This way we will be able to
rebuild each instance upon CRL modification.
This mechanism is similar to what was made for the actual CA file update
since both the CA files and the CRL files are stored in the same CA file
tree.
This patch adds the "del ssl ca-file <cafile>" CLI command which can be
used to delete an unused CA file.
The CA file will be considered unused if its list of ckch instances is
empty. This command cannot be used to delete the uncommitted CA file of
a previous "set ssl ca-file" without commit. It only acts on
CA file entries already inserted in the CA file tree.
This fixes a subpart of GitHub issue #1057.
This patch adds the "new ssl ca-file <cafile>" CLI command. This command
can be used to create a new empty CA file that can be filled in thanks
to a "set ssl ca-file" command. It can then be used in a new crt-list
line.
The newly created CA file is added directly in the cafile tree so any
following "show ssl ca-file" call will display its name.
This fixes a subpart of GitHub issue #1057.
This patch adds the "show ssl ca-file [<cafile>[:index]]" CLI command.
This command can be used to display the list of all the known CA files
when no specific file name is specified, or to display the details of a
specific CA file when a name is given. If an index is given as well, the
command will only display the certificate having the specified index in
the CA file (if it exists).
The details displayed for each certificate are the same as the ones
showed when using the "show ssl cert" command on a single certificate.
This fixes a subpart of GitHub issue #1057.
The CA files and CRL files are stored in the same cafile_tree so this
patch adds a new field to the cafile_entry structure that specifies the
type of the entry. Since a ca-file can also have some CRL sections, the
type will be based on the option used to load the file and not on its
content (ca-file vs crl-file options).
The "abort" command aborts an ongoing transaction started by a "set ssl
ca-file" command. Since the updated CA file data is not pushed into the
cafile tree until a "commit ssl ca-file" call is performed, the abort
command simply clears the new cafile_entry that was stored in the
cafile_transaction.
This fixes a subpart of GitHub issue #1057.
This patch adds the "set ssl ca-file" and "commit ssl ca-file" commands,
following the same logic as the certificate update equivalents.
When trying to update a ca-file entry via a "set" command, we start by
looking for the entry in the cafile_tree and then building a new
cafile_entry out of the given payload. This new object is not added to
the cafile_tree until "commit" is called.
During a "commit" command, we insert the newly built cafile_entry in the
cafile_tree, while keeping the previous entry as well. We then iterate
over all the instances linked in the old cafile_entry and rebuild a new
ckch instance for every one of them. The newly inserted cafile_entry is
used for all those new instances and their respective SSL contexts.
When all the contexts are properly created, the old instances get
replaced by the new ones and the old cafile_entry is removed from the
tree.
This fixes a subpart of GitHub issue #1057.
The process of rebuilding a ckch_instance when a certificate is updated
through a cli command will be roughly the same when a ca-file is updated
so this factorization will avoid code duplication.
Adds a way to insert a new uncommitted cafile_entry in the tree. This
entry will be the one fetched by any lookup in the tree unless the
oldest cafile_entry is explicitly looked for. This way, until a "commit
ssl ca-file" command is completed, there could be two cafile_entries
with the same path in the tree, the original one and the newly updated
one.
The updated CA content coming from the CLI during a ca-file update will
directly be in memory and not on disk so the way CAs are loaded in a
cafile_entry for now (via X509_STORE_load_locations calls) cannot be
used.
This patch adds a way to fill a cafile_entry directly from memory and to
load the contained certificate and CRL sections into an SSL store.
CRL sections are managed as well as certificates in order to mimic the
way CA files are processed when specified in an option. Indeed, when
parsing a CA file given through a ca-file or ca-verify-file option, we
iterate over the different sections in ssl_set_cert_crl_file and load
them regardless of their type. This ensures that a file that was
properly parsed when given as an option will also be accepted by the
CLI.
In order for the link between the cafile_entry and the default ckch
instance to be built, we need to give a pointer to the instance during
the ssl_sock_prepare_ctx call.
Each ca-file entry of the tree will now hold a list of the ckch
instances that use it so that we can iterate over them when updating the
ca-file via a cli command. Since the link between the SSL contexts and
the CA file tree entries is only built during the ssl_sock_prepare_ctx
function, which is called after all the ckch instances are created, we
need to add a little post processing after each ssl_sock_prepare_ctx
that builds the link between the corresponding ckch instance and CA file
tree entries.
In order to manage the ca-file and ca-verify-file options, any ckch
instance can be linked to multiple CA file tree entries and any CA file
entry can link multiple ckch instances. This is done thanks to a
dedicated list of ckch_inst references stored in the CA file tree
entries over which we can iterate (during an update for instance). We
avoid having one of those instances go stale by keeping a list of
references to those references in the instances.
When deleting a ckch_inst, we can then remove all the ckch_inst_link
instances that reference it, and when deleting a cafile_entry, we
iterate over the list of ckch_inst references and clear the corresponding
entry in their own list of ckch_inst_link references.
In order to ease ca-file hot update via the CLI, the ca-file tree will
need to allow duplicate entries for a given path. This patch simply
enables it and offers a way to select either the oldest entry or the
latest entry in the tree for a given path.
This patch moves all the ssl_store related code to ssl_ckch.c since it
will mostly be used there once the CA file update CLI commands are all
implemented. It also makes the cafile_entry structure visible as well as
the cafile_tree.
atoll() is not portable, but strtoll() is more common. We must pass NULL
to the end pointer however since the parser must consume digits and stop
at the first non-digit char. No backport is needed as this was introduced
in 2.4-dev17 with commit 51c8ad45c ("MINOR: sample: converter: Add json_query
converter").
stdint.h is not as portable as inttypes.h. It doesn't exist at least
on AIX 5.1 and Solaris 7, while inttypes.h is present there and does
include stdint.h on platforms supporting it.
This is equivalent to libslz upstream commit e36710a ("slz: use
inttypes.h instead of stdint.h")
The function is defined when using linux+cpu affinity but is only used
if threads are enabled, so let's add this condition to avoid a build
warning about an unused function when building with threads disabled.
This came in 2.4-dev17 with commit b56a7c89a ("MEDIUM: cfgparse: detect
numa and set affinity if needed") so no backport is needed.
A mistake was introduced in 2.4-dev17 by commit 982fb5339 ("MEDIUM:
config: use platform independent type hap_cpuset for cpu-map"), it
initializes cpu_map.thread[] from 0 to MAX_PROCS-1 instead of
MAX_THREADS-1 resulting in crashes when the two differ, e.g. when
building with USE_THREAD= but still with USE_CPU_AFFINITY=1.
No backport is needed.
Variable names are stored into a unified list that helps compare them
just based on a pointer instead of duplicating their name with every
variable. This is convenient for those declared in the configuration
but this started to cause issues with Lua when random names would be
created upon each access, eating lots of memory and CPU for lookups,
hence the work in 2.2 with commit 4e172c93f ("MEDIUM: lua: Add
`ifexist` parameter to `set_var`") to address this.
But there remains a corner case with get_var(), which also allocates
a new variable. After a bit of thinking and discussion, it never
makes sense to allocate a new variable name on get_var():
- if the name exists, it will be returned ;
- if it does not exist, then the only way for it to appear will
be that some code calls set_var() on it
- a call to get_var() after a careful set_var(ifexist) ruins the
effort on set_var().
For this reason, this patch addresses this issue by making sure that
get_var() will never cause a variable to be allocated. This is done
by modifying vars_get_by_name() to always call register_name() with
alloc=0, since vars_get_by_name() is exclusively used by Lua and the
new CLI's "get/set var" which also benefit from this protection.
It probably makes sense to backport this as far as 2.2 after some
observation period and feedback from users.
For more context and discussions about the issues this was causing,
see https://www.mail-archive.com/haproxy@formilux.org/msg40451.html
and in issue #664.
This function is one of the few high-profile, unresolved ones in the memory
profile output; let's have it resolved to ease the matching of SSL
allocations, which are not easy to follow.
"show profiling" by default sorts by usage/counts, which is suitable for
occasional use. But when called from scripts to monitor/search variations,
this is not very convenient. Let's add a new "byaddr" option to support
sorting the output by address. It also eases matching alloc/free calls
from within a same library, or reading grouped tasks costs by library.
Commit d3a9a4992 ("MEDIUM: stats: allow to select one field in
`stats_fill_sv_stats`") left one occurrence of a direct assignment
of stats[] instead of placing it into the <metric> variable, and it
was on ST_F_CHECK_STATUS. This resulted in the field being overwritten
with an empty one immediately after being set in stats_fill_sv_stats()
and the field to appear empty on the stats page.
No backport is needed as this was only for 2.4.
Since the introduction of bc_src, smp_fetch_src from tcp_sample inspects
the kw argument to choose between the frontend or the backend source
address. However, for the stick tables, the argument is left NULL.
This causes a segfault.
Fix the crash by explicitly setting the kw argument to "src" to retrieve
the source address of the frontend side.
This bug was introduced by the following commit :
7d081f02a4
MINOR: tcp_samples: Add samples to get src/dst info of the backend connection
It does not need a backport as it is integrated in the current 2.4-dev
branch.
To reproduce the crash, I used the following config :
frontend fe
bind :20080
http-request track-sc0 src table foo
http-request reject if { src_conn_rate(foo) gt 10 }
use_backend h1
backend foo
stick-table type ip size 200k expire 30s store conn_rate(60s)
backend h1
server nginx 127.0.0.1:30080 check
This should fix the github issue #1247.
On ARM with native CRC support, no need to inflate the executable with
a 4kB CRC table, let's just drop it.
This is slz upstream commit d8715db20b2968d1f3012a734021c0978758f911.
This is the only place where we conditionally use the crc32_fast table,
better call the crc32_char inline function for this. This should also
reduce by ~1kB the L1 cache footprint of the compression when dealing
with small blocks, and at least shows a consistent 0.5% perf improvement.
This is slz upstream commit 075351b6c2513b548bac37d6582e46855bc7b36f.
This function was not used anymore after the atomic updates were
implemented in 2.3, and it must not be used given that it does not
yield and can easily make the process hang for tens of seconds on
large acls/maps. Let's remove it before someone uses it as an
example to implement something else!
Having already had to perform too many additions with external scripts, it's
time to add the totals and the alloc-free delta as a last line in the
output of "show profiling memory".
This was planned but missing in the previous attempt, we really need to
see what is used at each place, especially due to realloc(). Now we
print the function used in front of the caller's address, as well as
the average alloc/free size per call.
The realloc() function checks if the size grew or reduced in order to
count an allocation or a free, but it does so with the absolute (new
or old) value instead of the difference, resulting in realloc() often
being credited for allocating too much.
No backport is needed.
It was found that when viewing the help output from the CLI that
"set profiling" had 2 spaces in it, which was pushing it out from
the rest of similar commands.
i.e. it looked like this:
prepare acl <acl>
prepare map <acl>
set profiling <what> {auto|on|off}
set dynamic-cookie-key backend <bk> <k>
set map <map> [<key>|#<ref>] <value>
set maxconn frontend <frontend> <value>
This patch removes all of the double spaces within the command and
unifies them to single spacing, which is what is observed within the
rest of the commands.
Check the return value of url2sa in smp_fetch_url_ip/port. If negative,
the address result is uninitialized and the sample fetch is aborted.
Also, the sockaddr is preliminarily zeroed before calling url2sa to ensure
that it is not used by upper functions even if the sample returns 0.
Without the check, the value returned by the url_ip/url_port fetches is
unspecified. This can be triggered with the following curl :
$ curl -iv --request-target "xxx://127.0.0.1:20080/" http://127.0.0.1:20080/
This should be backported to all stable branches. However, note that
between 1.8 and 2.0, the targeted functions were moved from
proto_http.c to http_fetch.c.
This should partly fix the coverity report from GitHub issue #1244.
The compiler sees the possibility of a null dereference, for which a code
path exists but which cannot happen since we never pass null args outside
of the help request. The warning was introduced by the simplified test on
the ishelp variable, so let's add the check to silence it.
It's still very difficult to find all commands starting with a given
keyword like "set", "show" etc. Let's sort the lines by usage message,
this is much more convenient.
With ~100 commands on the CLI, it's particularly difficult to find a
specific one in the "help" output. The function used to display the
help already supports filtering on certain commands, so in the end it's
just needed to pass the argument of the help command to enable the
automatic filtering. That's what this patch does so that "help clear"
only lists commands starting with "clear" and that "help map" lists
commands containing "map" in them.
The build fails on versions older than 1.0.1d which is the first one
introducing CRYPTO_memcmp(), so let's have a define for this instead
of enabling it whenever USE_OPENSSL is set. One could also wonder why
we're relying on openssl for such a trivial thing, and a simple local
implementation could also allow to restore lexicographic ordering.
gcc-4.4 complains about aliasing in smp_fetch_url_port() and
smp_fetch_url_ip() because the local addr variable is cast to struct
sockaddr_in before being checked. The family should be checked on the
sockaddr_storage and we have a function to retrieve the port.
The compiler still sees some warnings but these ones are OK now.
ha_random64() uses a DWCAS loop to produce the random, but it computes
the resulting value inside the loop while it doesn't change upon success,
so this is needless overhead inside the critical path that contributes
to making threads fail the race and try again. Let's take the value out
of the loop.
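The idea is easier to see on a minimal, self-contained generator; this is not
HAProxy's actual ha_random64(), just an illustration of keeping only the
state update inside the CAS retry loop and deriving the returned value once,
after it:
  #include <stdatomic.h>
  #include <stdint.h>

  static _Atomic uint64_t rng_state = 0x9E3779B97F4A7C15ULL;

  uint64_t rnd64(void)
  {
      uint64_t old = atomic_load_explicit(&rng_state, memory_order_relaxed);
      uint64_t nxt;

      do {
          /* only the cheap state advance is retried on contention */
          nxt = old * 6364136223846793005ULL + 1442695040888963407ULL;
      } while (!atomic_compare_exchange_weak_explicit(&rng_state, &old, nxt,
                                                      memory_order_relaxed,
                                                      memory_order_relaxed));

      /* derive the output outside the loop: it only depends on the state
       * that won the race, so recomputing it per iteration is pure waste */
      nxt ^= nxt >> 29;
      return nxt * 0xBF58476D1CE4E5B9ULL;
  }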
Some of the Lua doc and a few places still used "Haproxy" or "HAproxy".
There was even one "HA proxy". A few of them were in an example of VTest
output, indicating that VTest ought to be fixed as well. No big deal but
better address all the remaining ones so that these inconsistencies stop
spreading around.
When running "haproxy -v", we still get "HA-Proxy" which is the last
place where this confusing oddity happens. Being so used to it I didn't
even notice it until it was reported to me just after 2.2 but it never
got fixed, despite the PRODUCT_NAME macro that is used to report the
name in the stats page and in "show info" being already set to "HAProxy"
15 years ago in 1.2.14 with commit e03312613. It's about time to
uniformize everything.
This one comes with a very deep dependency hell, only to know that
process_stream() is a function. Dropping it reduces the preprocessed
output from 1.5MB to 640kB.
These ones are used by virtually every config parser. Not only do they
provide no benefit from being inlined, but they imply a very deep
dependency starting at proxy.h, which results for example in task.c
including openssl.
Let's move these two functions to cfgparse.c.
This function has no business being inlined in stick_table.h since it's
only used at boot time by the config parser. In addition it causes an
undesired dependency on tools.h because it uses parse_time_err(). Let's
move it to stick_table.c.
No idea why this was put inlined into connection.h, it's used only once
for haproxy -vv, and requires tools.h, causing an undesired dependency
from connection.h. Let's move it to connection.c instead where it ought
to have been.
Only mworker uses proc_self, and it was declared in global.h, forcing
users of global.h to include mworker and its dependencies.
Moving it to mworker reduces the preprocessed size of version.c from
170 to 125kB by shrinking the number of local includes from 30 to 16
and the number of system includes from 147 to 132.
The presence of this field causes a long dependency chain because almost
everyone includes global-t.h, and vars include sample_data which include
some system includes as well as HTTP parts.
There is absolutely no reason for having the process-wide variables in
the global struct, let's just move them into vars.c and vars.h. This
reduces from ~190k to ~170k the preprocessed output of version.c.
This one is stated as experimental in the doc but could still be used
by accidental copy-paste. Let's mark it with KWF_EXPERIMENTAL so that
users have to opt-in to use it.
Now "show info float" will also report SSL rates, connection rates and
key reuse ratios as floats. This can be convenient at very low rates.
Note that the SSL reuse ratio which used to commonly oscillate between
0 and 1 under load is now more often above zero with small values. It
indicates that for better stability we shouldn't be comparing a key rate
with a connection rate but instead we should measure the reuse rate at
its source.
We'll have to support reporting sub-second uptimes, so let's use the
appropriate function which will automatically adjust the tv_usec field.
In addition to this, it will also report a more accurate uptime thanks
to considering the sub-second part in the result.
This will allow some fields to be produced with a higher accuracy when
the requester indicates being able to parse floats. Rates and times are
among the elements which can make sense.
Currently the stats filling function knows nothing about the caller's
needs, so let's pass the STAT_* flags so that it can adapt to the
requester's constraints.
For the prometheus exporter, a new float type was added for the fields
and its conversion was added everywhere except for the HTML output.
Now that we have F2H() we can implement it for consistency.
When emitting stats, we don't need to have 6 zeroes after the decimal point
for each value, so let's trim floating point numbers to the longest needed
only.
We already had ultoa_r() and friends but nothing to emit inline floats.
This is now done with ftoa_r() and F2A/F2H. Note that the latter both use
the itoa_str[] as temporary storage and that the HTML format currently is
the exact same as the ASCII one. The trailing zeroes are always trimmed so
these outputs are usable in user-visible output.
When using "%f" to print a float, it automatically gets 6 digits after
the decimal point and there's no way to automatically adjust to the
required ones by dropping trailing zeroes. This function does exactly
this and automatically drops the decimal point if all digits after it
were zeroes. This will make numbers more friendly in stats and makes
outputs shorter (e.g. JSON where everything is just a "number").
The function is designed to be easy to use with snprintf() and chunks:
snprintf:
flt_trim(buf, 0, snprintf(buf, sizeof(buf), "%f", x));
chunk_printf:
out->data = flt_trim(out->area, 0, chunk_printf(out, "%f", x));
chunk_appendf:
size_t prev_data = out->data;
out->data = flt_trim(out->area, prev_data, chunk_appendf(out, "%f", x));
Calling the strcmp() converter with no argument yields this strange error:
[ALERT] (31439) : parsing [test.cfg:3] : error detected in frontend 'f' while parsing 'http-request redirect' rule : failed to parse sample expression <src,strcmp]> : invalid args in converter 'strcmp' : failed to register variable name ''.
This is because the vars name check tries to see if it can create such a
variable having an empty name. Let's at least make a special case of the
missing argument. Now we can read a more explicit:
[ALERT] (31655) : parsing [test.cfg:3] : error detected in frontend 'f' while parsing 'http-request redirect' rule : failed to parse sample expression <src,strcmp]> : invalid args in converter 'strcmp' : missing variable name.
This was done for secure_strcmp() as well.
Only check servers attached to a proxy with PR_CAP_LB.
This does not need to be backported as the diag message was added in the
current 2.4-dev branch.
Add a new proxy capability for proxies with load-balancing capabilities.
This helps to differentiate listen/frontend/backend from special proxies
such as peer proxies.
normalize-uri http rule is marked as experimental, so it cannot be
activated without the global 'expose-experimental-directives'. The
associated vtc is updated to be able to use it.
Support experimental actions. It is mandatory to set
'expose-experimental-directives' first to be able to use them.
If such an action is present in the config file, the tainted status of the
process is updated. Another tainted status is set when an experimental
action is executed.
Define a new keyword flag KWF_MATCH_PREFIX. This is used to replace the
match_pfx field of action struct.
This has the benefit of making action declarations more explicit, and now
it is possible to quickly implement experimental actions.
Add a new flag to mark a keyword as experimental. An experimental
keyword cannot be used if the global 'expose-experimental-directives' is
not present first.
Only keywords parsed through a standard cfg_keywords lists in
global/proxies section will be automatically detected if declared
experimental. To support a keyword outside of these lists,
check_kw_experimental must be called manually during its parsing.
If an experimental keyword is present in the config, the tainted flag is
updated.
For the moment, no keyword is marked as experimental.
Add a global flag named 'tainted'. Its purpose is to report various
status about experimental features used for the current process
lifetime.
By default it is initialized to 0. It can be set/retrieved via a couple of
new functions, mark_tainted()/get_tainted(). Once a flag is set, it
cannot be reset.
Currently, no tainted status is implemented, it will be the subject of
the following commits.
In observe mode, if a server is marked as DOWN, the server's health-check is
rescheduled using the fastinter timeout if the new expiration date is newer
than the current one. But this must only be performed if the fastinter
timeout is defined.
Internally, tick_is_lt() function only checks the date and does not perform any
verification on the provided args. Thus, we must take care of it. However, it is
possible to disable the server health-check by setting its task expiration date
to TICK_ETERNITY.
This patch must be backported as far as 2.2. It is related to
A connection may be synchronously established. In the tcpcheck context, it
may be a problem if several connections come one after another. In this
case, there is no event to close the very first connection before starting
the next one. The check is thus blocked and times out, and an L7 timeout
error is reported.
To fix the bug, when a tcpcheck is started, we immediately evaluate its
state. Most of the time, nothing is performed and we must wait. But this way
it is possible to handle the result of a successful connection.
This patch should fix the issue #1234. It must be backported as far as 2.2.
Thanks to a previous fix, the stream error mask is now cleared on L7
retry. But the stream final state (SF_FINST_*) and the stream-interface
error type must also be reset to properly restart a new connection and be
sure to not inherit errors from the previous connection attempt.
In addition, the SF_ADDR_SET flag is not systematically removed.
stream_choose_redispatch() already takes care of unsetting it if necessary.
When the connection is not redispatched, the server address can be preserved.
This patch must be backported as far as 2.0.
There were 102 CLI commands whose help were zig-zagging all along the dump
making them unreadable. This patch realigns all these messages so that the
command now uses up to 40 characters before the delimiting colon. About a
third of the commands did not correctly list their arguments which were
added after the first version, so they were all updated. Some abuses of
the term "id" were fixed to use a more explanatory term. The
"set ssl ocsp-response" command was not listed because it lacked a help
message, this was fixed as well. The deprecated enable/disable commands
for agent/health/server were prominently written as deprecated. Whenever
possible, clearer explanations were provided.
This one works just like .notice/.warning/.alert except that it prints
the message at level "DIAG" only when haproxy runs in diagnostic mode
(-dD). This can be convenient for example to pass a few hints to help
locate certain config parts or to leave messages about certain temporary
workarounds.
Example:
.diag "WTA/2021-05-07: $.LINE: replace 'redirect' with 'return' after final switch to 2.4"
http-request redirect location /goaway if ABUSE
For about 20 years we've been emitting cryptic messages on warnings and
alerts, that nobody knows how to parse:
[NOTICE] 126/080118 (3115) : haproxy version is 2.4-dev18-0b7c78-49
[NOTICE] 126/080118 (3115) : path to executable is ./haproxy
[WARNING] 126/080119 (3115) : Server default/srv1 is DOWN via static/srv1. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 126/080119 (3115) : backend 'default' has no server available!
Hint: the first 3-digit number is the day of year, and the 6 digits
after it represent the time of day in format HHMMSS, then the pid in
parentheses. These are not quite user-friendly and such cryptic info
is not useful at all.
This patch slightly adjusts the output by performing these minimal changes:
- removing the date/time, as they were added very early when haproxy
was meant to be used in foreground as a debugging tool, and they're
provided in more details in logs nowadays ;
- better aligning the fields by padding the severity tag to 10 chars.
The diag output was renamed to "DIAG" only.
Now the output provides this:
[NOTICE] (4563) : haproxy version is 2.4-dev18-75a428-51
[NOTICE] (4563) : path to executable is ./haproxy
[WARNING] (4563) : Server default/srv1 is DOWN via static/srv1. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] (4563) : backend 'default' has no server available!
The useless space before the colon was kept so as not to confuse any
possible output parser.
The few entries in the doc referring to this format were adjusted to
reflect the new one.
The change was tagged "MEDIUM" as it may have visible consequences on
home-grown monitoring tools, though it is extremely unlikely due to the
limited extent of these changes.
The cleanup of the previous error was incorrect on L7 retries, it would
OR two values while they're part of an enum, leaving some bits set.
Depending on the errors it was possible to occasionally see an internal
error ("I" flag) being logged.
This should be backported as far as 2.0, though the do_l7_retry() function
is in proto_htx.c in older versions.
When memory profiling is enabled, realloc() can occasionally get the area
size wrong due to the wrong pointer being used to check the new size. When
the old area gets unmapped in the operation, this may even result in a
crash. There's no impact without memory profiling though.
No backport is needed as this is exclusively 2.4-dev.
These predicates respectively verify that the current version is at least
a given version or is before a specific one. The syntax is exactly the one
reported by "haproxy -v", though each component is optional, so both "1.5"
and "2.4-dev18-88910-48" are supported. Missing components equal zero, and
"dev" is below "pre" or "rc", which are both inferior to no such mention
(i.e. they are negative). Thus "2.4-dev18" is older than "2.4-rc1" which
is older than "2.4".
The "feature(name)" predicate will return true if <name> corresponds to
a name listed after a '+' in the features list, that is it was enabled at
build time with USE_<name>=1. Typical use cases will include OPENSSL, LUA
and LINUX_SPLICE. But maybe it will also be convenient to use with optional
addons such as PROMEX and the device detection modules to help keeping the
same configs across various deployments.
"streq(str1,str2)" will return true if the two strings match while
"strneq(str1,str2)" will return true only if they differ. This is
convenient to match an environment variable against a predefined value.
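For instance, assuming a DEPLOY_ENV environment variable (name purely
illustrative) is exported before startup:
.if streq("$DEPLOY_ENV","prod")
    # production-only tuning goes here
.endif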
Now we can look up a list of known predicates and pre-parse their
arguments. For now the list is empty. The code needed to be arranged with
a common exit point to release all arguments because there's no default
argument freeing function (it likely only used to exist in the deinit
code). Since we only support simple arguments for now it's no big deal,
only a 2-liner loop.
Let's return the position of the first unparsable character on error,
so that instead of just saying "unparsable conditional expression blah"
we can have:
[ALERT] 125/150618 (13995) : parsing [test-conds2.cfg:1]: unparsable conditional expression '12/blah' in '.if' at position 1:
.if 12/blah
^
This is important because conditions will be made from environment
variables or later from more complex expressions where the error will
not always be easy to locate.
The new function split_version() converts a parsable haproxy version to
an array of integers. The function compare_current_version() compares an
arbitrary version to the current one. These two functions were written
by Thierry Fournier in 2013, and are still usable as-is. They will be
used to write config language predicates.
Till now it was only presented in the version output but could not be
consulted outside of haproxy.c, let's export it as a variable, and set
it to an empty string if not defined.
When the closing brace is missing after an argument (acl, ...), the
error may report something like "expected ')' before ''". Let's just
drop "before ''" when the final word is empty to make the message a
bit clearer.
The new pseudo-variables ".FILE", ".LINE" and ".SECTION" will be resolved
on the fly by the config parser and will respectively retrieve the current
configuration file name, the current line number and the current section
being parsed. This may help emit logs, errors, and debugging information
(e.g. which rule matched).
The '.' in the first char was reserved for such pseudo-variables and no
other variable is permitted. This will allow to add support for new ones
in the future if they prove to be useful (e.g. randoms/uuid for secret
keying or automatic naming of configuration objects).
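For example, combined with the .diag directive described earlier (message
text illustrative):
.diag "$.FILE:$.LINE: still inside section '$.SECTION'"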
Let's add a few fields to the global struct to store information about
the current file being processed, the current line number and the current
section. This will be used to retrieve them using special variables.
Instead of duplicating the condition evaluations, let's have a single
function cfg_eval_condition() that returns true/false/error. It takes
less code and will ease its extension.
The doc about .if/.elif config block conditions says:
a non-nul integer (e.g. '1'), always returns "true"
So we must accept negative integers as well. The test was made on
atoi() > 0.
No backport is needed, this is only 2.4.
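A tiny illustration of the fixed behaviour:
.if -1
    # taken: any non-zero integer, negative included, is "true"
.endif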
This missing state was causing a second elif condition to be evaluated
after a first one succeeded after a .if failed. For example in the test
below the else would be executed:
.if 0
.elif 1
.elif 0
.else
.endif
No backport is needed, this is 2.4-only.
The condition to skip the block in the ".if" evaluator forgot to check
that the level was high enough, resulting in rare cases where a random
value matched one of the 5 values that cause the block to be skipped.
No backport is needed as it's 2.4-only.
When a L7 retry is performed, we must not forget to decrement the current
session counter of the assigned server. Of course, it must only be done if
the current session is already counted on the server, thus if SF_CURR_SESS
flag is set on the stream.
This patch is related to the issue #1003. It must be backported as far as
2.0.
Because the H1C_F_RX_BLK and H1C_F_TX_BLK flags now only concern data
processing at the H1 stream level, there is no reason to still manage them
on the H1 connection. Thus, these flags are now set on the H1 stream.
These flags are used to block, respectively, the output and the input
processing. Thus, to be more explicit, H1C_F_WAIT_INPUT is renamed to
H1C_F_TX_BLK and H1C_F_WAIT_OUTPUT is renamed to H1C_F_RX_BLK.
Instead of subscribing for reads or sends to restart data processing, when
both sides are synchronized, the H1 stream is woken up. This happens when
H1C_F_WAIT_INPUT or H1C_F_WAIT_OUTPUT flags are removed. Indeed, these flags
block the data processing and not raw data sending or receiving.
In h1_rcv_pipe(), when the splicing is not possible or disabled at the end
of the function, we make sure to subscribe for reads. It is not a bug fix but
it avoids an extra call to h1_rcv_pipe() to handle the subscription in some
cases (end of message, end of chunk or read0).
In addition, the condition to detect end of splicing has been simplified. We
now only rely on the H1C_F_WANT_SPLICE flag.
In h1_snd_pipe(), before sending spliced data, we take care to flush the
output buffer by subscribing for sends. However, the condition to do so is
not accurate. We test data remaining in the pipe. It works but it also
unnecessarily subscribes the H1C for sends when the output buffer is empty,
if we are unable to send all spliced data at once. Instead, the H1C is now
subscribed for sends only if the output buffer is not empty.
First, there is no reason to announce the splicing support at the
conn-stream level when it is created, at least for now. GTUNE_USE_SPLICE
option is already handled at the stream level.
Second, in h1_rcv_buf(), there is no reason to test the message state to
switch the H1C in splicing mode (via H1C_F_WANT_SPLICE flag).
h1_process_input() already takes care to set CS_FL_MAY_SPLICE flag on the
conn-stream when appropriate. Thus, in h1_rcv_buf(), we can rely on this
flag to change the H1C state.
Finally, if h1_rcv_pipe() is called, it means the H1C is already in the
splicing mode. H1C_F_WANT_SPLICE flag is necessarily already set. Thus no
reason to force it.
On the client side, if the CO_RFL_KEEP_RECV flag is set when h1_rcv_buf() is
called, we force subscription for reads to be able to catch read0. This way,
the event will be reported to upper layer to let the stream abort the
request.
This patch fixes the abortonclose option for H1 connections. It depends on
following patches :
* MEDIUM: mux-h1: Don't block reads when waiting for the other side
* MINOR: conn-stream: Force mux to wait for read events if abortonclose is set
But to be sure the event is handled by the stream, the following patches are
also required :
* BUG/MINOR: stream-int: Don't block reads in si_update_rx() if chn may receive
* MINOR: channel: Rely on HTX version if appropriate in channel_may_recv()
All the series must be backported with caution as far as 2.0, and only after
a period of observation to be sure nothing broke.
When we are waiting for the other side to read more data, or to read the
next request, we must only stop the processing of input data and not the
data receipt. This patch doesn't change anything about the subscriptions for
reads, so it should not change anything in practice. The only difference is
that the H1
connection will try to read data if it is woken up for an I/O event and if
it was subscribed for reads.
This patch is required to fix abortonclose option for H1 client connections.
When the abortonclose option is enabled, to be sure to be immediately
notified when a shutdown is received from the client, the frontend
conn-stream must be sure the mux will wait for read events. To do so, the
CO_RFL_KEEP_RECV flag is set when mux->rcv_buf() is called. This new flag
instructs the mux to wait for read events, regardless of its internal state.
This patch is required to fix abortonclose option for H1 client connections.
In si_update_rx() function, the reads may be blocked because we explicitly
don't want to read or because of a lack of room in the input buffer. The
first condition is valid. However the second one only tests whether the
channel is empty or not. It means the reads are blocked if there are still
some output
data in the input channel, in its buffer or its pipe. This condition is not
accurate. The reads must not be blocked if the channel can still receive
data. Thus instead of relying on channel_is_empty() function, we now call
channel_may_recv().
This patch is especially useful to be able to catch read0 on client side
when we are waiting for a connection to the server, when abortonclose option
is enabled. Otherwise, the client abort is not detected.
This patch depends on "MINOR: channel: Rely on HTX version if appropriate in
channel_may_recv()". Both must be backported as far as 2.0 after a period of
observation to be sure nothing broke.
Now the memory usage stats are dumped. They are first sorted by total
alloc+free so that the first ones are always the most relevant, and
that most symmetric alloc/free pairs appear next to each other. This
way it becomes convenient to only show a small part of them such as:
show profiling memory 20
It's worth noting that the sorting is performed upon each call to the
iohandler so it is technically possible that an entry could appear
twice or be dropped if the ordering changes between two calls. In
practice it is not an issue but it's worth being mentioned.
Let's rearrange it to make it more configurable and allow to iterate
over multiple parts (header, tasks, memory etc), to restart from a
given line number (previously it didn't work, though fortunately it
didn't happen), and to support dumping only certain parts and a given
number of lines. A few entries from ctx.cli are now used to store a
restart point and the current step.
When built with USE_MEMORY_PROFILING the main memory allocation functions
are diverted to collect statistics per caller. It is a bit tricky because
the only way to call the original ones is to find their pointer, which
requires dlsym(), and which is not available everywhere.
Thus all functions are designed to call their fallback function (the
original one), which is preset to an initialization function that is
supposed to call dlsym() to resolve the missing symbols, and vanish.
This saves expensive tests in the critical path.
A second problem is that dlsym() calls calloc() to initialize some
error messages. After plenty of tests with posix_memalign(), valloc()
and friends, it turns out that returning NULL still makes it happy.
Thus we currently use a visit counter (in_memprof) to detect if we're
reentering, in which case all allocation functions return NULL.
In order to convert a return address to an entry in the stats, we
perform a cheap hash consisting in multiplying the pointer by a
balanced number (as many zeros as ones) and keeping the middle bits.
The hash is already pretty good like this, achieving to store up to
638 entries in a 2048-entry table without collision. But in order to
further refine this and improve the fill ratio of the table, in case
of collision we move up to 16 adjacent entries to find a free place.
This remains quite cheap and manages to store all of these inside a
1024-entries hash table with even less risk of collision.
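A rough sketch of such a scheme (constants, sizes and names are illustrative
rather than HAProxy's exact ones, and concurrency handling is omitted):
  #include <stdint.h>
  #include <stddef.h>

  #define STATS_BITS  10
  #define STATS_SIZE  (1 << STATS_BITS)

  struct call_stats {
      const void *caller;
      uint64_t alloc_calls, free_calls;
      uint64_t alloc_bytes, free_bytes;
  };

  /* STATS_SIZE hashed slots plus one final overflow slot */
  static struct call_stats stats[STATS_SIZE + 1];

  static struct call_stats *stats_slot(const void *ra)
  {
      /* multiply the return address by a large odd constant (the real code
       * picks one with as many 0 as 1 bits) and keep middle bits as index */
      uint64_t h = (uint64_t)(uintptr_t)ra * 0x9E3779B97F4A7C15ULL;
      unsigned int idx = (h >> 20) & (STATS_SIZE - 1);
      unsigned int i;

      /* on collision, probe up to 16 adjacent entries */
      for (i = 0; i < 16; i++) {
          struct call_stats *s = &stats[(idx + i) & (STATS_SIZE - 1)];

          if (s->caller == ra || !s->caller) {
              s->caller = ra;
              return s;
          }
      }
      return &stats[STATS_SIZE]; /* everything else lands here */
  }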
Also, free(NULL) does not produce any stats. By doing so we reduce
from 638 to 208 the average number of entries needed for a basic
config using SSL. free(NULL) not only provides no information as it's
a NOP, but keeping it is pure pollution as it happens all the time.
When DEBUG_MEM_STATS is enabled, malloc/calloc/realloc are redefined as
macros, preventing the code from compiling. Thus, when this option is
detected, the macros are undefined as they are pointless there anyway.
The functions are optimized to quickly jump to the fallback and as such
become almost invisible in terms of processing time, except for an extra
"if" on a read_mostly variable and a jump. Considering that this only
happens for pool misses and library routines, this remains acceptable.
Performance tests in SSL (the most stressful test) shows less than 1%
performance loss when profiling is enabled on 2c4t.
The code was written in a way to ease backporting to modern versions
(2.2+) if needed, so it keeps the long names for integers and doesn't
use the _INC version of the atomic ops.
We'll need to store for each call place, the pointer to the caller
(the return address to be more exact as with free() it's not uncommon
to see tail calls), the number of calls to alloc/free and the total
alloc/free bytes. realloc() will be counted either as alloc or free
depending on the balance of the size before vs after.
We store 1024+1 entries. The first ones are used as hashes and the
last one for collisions.
When profiling is enabled via the CLI, all the stats are reset.
This adds the necessary flags to permit run-time enabling/disabling of
memory profiling. For now this is disabled.
A few words were added to the management doc about it and recalling that
this is limited to certain OSes.
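For instance, assuming the stats socket is bound at /tmp/sock1 as in the
other examples:
$ socat - /tmp/sock1 <<< "set profiling memory on"
$ socat - /tmp/sock1 <<< "show profiling memory 20"
$ socat - /tmp/sock1 <<< "set profiling memory off"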
These ones are only read by the scheduler and occasionally written to
by the CLI parser, so let's move them to read_mostly so that they do
not risk to suffer from cache line pollution.
get_sym_curr_addr() will return the address of the first occurrence of
the given symbol while get_sym_next_addr() will return the address of
the next occurrence of the symbol. These ones return NULL on non-linux,
non-ELF, non-USE_DL.
Implement a safe mechanism to close front idling connections which prevent
the soft-stop from completing. Every h1/h2 front connection is added to a
new per-thread list instance. On shutdown, a new task is woken up which
calls the wake mux operation on every connection still present in the new
list.
A new stopping_list attach point has been added in the connection
structure. As this member is only used for frontend connections, it
shares the same union as the session_list reserved for backend
connections.
In h1_process, if the proxy of a frontend connection is disabled,
release the connection.
This commit is in preparation for properly closing idling front connections
on soft-stop. h1_process must still be called; this will be done via a
dedicated task which monitors the global variable 'stopping'.
Implement a function to close all server idle connections. This function
is called via a global deinit server handler.
The main objective is to prevent leaving sockets in TIME_WAIT
state. To limit the set of operations on shutdown and prevent
task rescheduling, only the ctrl stack closing is done.
The purpose of this debugging option was to prevent certain pools from
masking other ones when they were shared. For example, task, http_txn,
h2s, h1s, h1c, session, fcgi_strm, and connection are all 192 bytes and
would normally be merged, but not with this option. The problem is that
certain pools are declared multiple times with various parameters, which
are often very close, and due to the way the option works, they're not
shared either. Good examples of this are captures and stick tables. Some
configurations have large numbers of stick-tables of pretty similar types
and it's very common to end up with the following when the option is
enabled:
$ socat - /tmp/sock1 <<< "show pools" | grep stick
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753800=56
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753880=57
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753900=58
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753980=59
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753a00=60
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753a80=61
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753b00=62
- Pool sticktables (224 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x753780=55
In addition to not being convenient, it can have important effects on the
memory usage because these pools will not share their entries, so one stick
table cannot allocate from another one's pool.
This patch solves this by going back to the initial goal which was not to
have different pools in the same list. Instead of masking the MAP_F_SHARED
flag, it simply adds a test on the pool's name, and disables pool sharing
if the names differ. This way pools are not shared unless they're of the
same name and size, which doesn't hinder debugging. The same test above
now returns this:
$ socat - /tmp/sock1 <<< "show pools" | grep stick
- Pool sticktables (160 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 7 users, @0x3fadb30 [SHARED]
- Pool sticktables (224 bytes) : 0 allocated (0 bytes), 0 used, needed_avg 0, 0 failures, 1 users, @0x3facaa0 [SHARED]
This is much better. This should probably be backported, in order to limit
the side effects of DEBUG_DONT_SHARE_POOLS being enabled in production.
This command attempts to resolve a pointer to a symbol name. This is
convenient during development as it's easier to get such pointers live
than by issuing a debugger or calling addr2line.
This bug was introduced in e5ff4ad ("BUG/MINOR: ssl: fix a trash buffer
leak in some error cases").
When cli_parse_set_cert() returns because alloc_trash_chunk() failed, it
does not unlock the spinlock which can lead to a deadlock later.
Must be backported as far as 2.1 where e5ff4ad was backported.
Since the introduction of payload support on the CLI in 1.9-dev1 by
commit abbf60710 ("MEDIUM: cli: Add payload support"), a chunk is
temporarily allocated for the CLI to support defragmenting a payload
passed with a command. However it's only released when passing via
the CLI_ST_END state (i.e. on clean shutdown), but not on errors.
Something as trivial as:
$ while :; do ncat --send-only -U /path/to/cli <<< "show stat"; done
with a few hundreds of servers is enough to see the number of allocated
trash chunks go through the roof in "show pools".
This needs to be backported as far as 2.0.
When the lua buffers are used, a variable number of stack slots may be
used. Thus we cannot assume that we know where the top of the stack is. It
was not an issue for lua < 5.4.3 (at least for small buffers). But
'socket:receive()' now fails with lua 5.4.3 because a light userdata is
systematically pushed on the top of the stack when a buffer is initialized.
To fix the bug, in hlua_socket_receive(), we save the index of the top of
the stack before creating the buffer. This way, we can check the number of
arguments regardless of whether anything was pushed on the stack or not.
Note that the other buffer usages seem to be safe.
This patch should solve the issue #1240. It should be backported to all
stable branches.
By passing a version number to "add map/acl", it becomes possible to
atomically replace maps and ACLs. The principle is that a new version
number is first retrieved by calling "prepare map/acl", and this version
number is used with "add map" and "add acl". Newly added entries then
remain invisible to the matching mechanism but are visible in "show
map/acl" when the version number is specified, or may be cleard with
"clear map/acl". Finally when the insertion is complete, a
"commit map/acl" command must be issued, and the version is atomically
updated so that there is no intermediate state with incomplete entries.
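A hedged end-to-end example (socket path, map reference and entry are
illustrative, and the exact wording of the replies may differ):
$ socat - /tmp/sock1 <<< "prepare map #0"
New version created: 2
$ socat - /tmp/sock1 <<< "add map @2 #0 10.0.0.1 edge1"
$ socat - /tmp/sock1 <<< "show map @2 #0"
$ socat - /tmp/sock1 <<< "commit map @2 #0"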
The command is used to atomically replace a map/acl with the pending
contents of the designated version. The new version must have been
allocated by "prepare map/acl" prior to this. At the moment it is not
possible to force the version when adding new entries, so this may only
be used to atomically clear an ACL/map.
This command allocates a new version for the map/acl, that will be usable
later to prepare the addition of new values to atomically replace existing
ones. Technically speaking the operation consists in atomically incrementing
the next version. There's no "undo" operation here, if a version is not
committed, it will automatically be trashed when committing a newer version.
This will ease maintenance of versioned maps by allowing old or failed
updates to be cleared instead of the current version. Nothing was done to
allow clearing everything, though if there was a need for this, implementing
"@all" or something equivalent wouldn't require more than 3 lines of code.
Instead of being able to purge only values older than a specific value,
let's support arbitrary ranges and make pat_ref_purge_older() just be
one special case of this one.
The maps and ACLs internally all have two versions, the "current" one,
which is the one being matched against, and the "next" one, the one being
filled during an atomic replacement. Till now the "show" commands only used
to show the current one but it can be convenient to be able to show other
ones as well, so let's add the ability to do this with "show map" and
"show acl". The method used here consists in passing the version number
as "@<ver>" before the map/acl name or ID. It would have been better after
it but that could create confusion with keys already using such a format.
The "show map" command wasn't updated when pattern generations were
added for atomic reloads, let's report them in the "show map" command
that lists all known maps. It will be useful for users.
This function was only used once in cli_parse_add_map(), and half of the
work it used to do was already known from the caller or testable outside
of the lock. Given that we'll need to modify it soon to pass a generation
number, let's remerge it in the caller instead, using pat_ref_load() which
is the one we'll need.
The function uses two distinct code paths, one for a single key/value pair
and one for multiple pairs inserted as a payload, each with a copy-paste of
the error handling. Let's modify the loop to factor them out.
The text mentioned that only backends with a consistent hash method were
supported for dynamic servers. In fact, it is only required that the LB
algorithm is dynamic.
There is some serious confusion in the lua interface code related to
sockets and services coming from the hlua_appctx structs being called
"appctx" everywhere, and where the real appctx is reached using
appctx->appctx. This part is a bit of a pain to debug so let's rename
all occurrences of this local variable to "luactx".
During commit 7e4a557f6 ("MINOR: time: change the global timeval and the
global tick at once") the approach made sure that the new now_ms was
always higher than or equal to global_now_ms, but by forgetting the old
value. This can cause the first update to global_now_ms to fail if it's
already out of sync, going back into the loop, and the subsequent call
would then succeed due to commit 4d01f3dcd ("MINOR: time: avoid
overwriting the same values of global_now").
And if it goes out of sync, it will fail to update forever, as observed
by Ashley Penney in github issue #1194, causing incorrect freq counters
calculations everywhere. One possible trigger for this issue is one thread
spinning for a few milliseconds while the other ones continue to work.
The issue really is that old_now_ms ought not to be modified in the loop
as it's used for the CAS. But we don't need to structurally guarantee that
global_now_ms grows monotonically as it's computed from the new global_now
which is already verified for this via the __tv_islt() test. Thus, dropping
any corrections on global_now_ms in the loop is the correct way to proceed
as long as this one is always updated to follow global_now.
No backport is needed, this is only for 2.4-dev.
This patch adds miscellaneous informative flags raised during the initial
full resync process performed during the reload, for debugging purposes.
0x00000010: Timeout waiting for a full resync from a local node
0x00000020: Timeout waiting for a full resync from a remote node
0x00000040: Session aborted learning from a local node
0x00000080: Session aborted learning from a remote node
0x00000100: A local node taught us and was fully up to date
0x00000200: A remote node taught us and was fully up to date
0x00000400: A local node taught us but was only partially up to date
0x00000800: A remote node taught us but was only partially up to date
0x00001000: A local node was assigned for a full resync
0x00002000: A remote node was assigned for a full resync
0x00004000: A resync was explicitly requested
This patch could be backported to any supported branch.
The flags used as context to know the current status of each table pushing a
full resync to a peer were correctly reset when receiving a new resync
request or confirmation message, but in case of a local peer sync during
a reload the resync request is implicit, and those flags were not
reset in that case.
This could result in a partial initial resync of some tables after a reload
if the connection with the old process was broken and retried.
This patch resets those flags at the end of the handshake for all new
connections, to be sure to push an entire full resync if needed.
This patch should be backported to all supported branches (v >= 1.6).
Only entries between the opposite of the last 'local update' rotating
counter were considered to be pushed. This processing worked in most
cases because updates are continually pushed trying to reach this point,
but there remain some cases where update ids are much further away in the
past and appear to be in the future, and the push of updates is stuck until
the head reaches the tail again, which can take a very long time.
This patch reworks the lookup so that all positions on the rotating
counter are considered to be in the past until we reach exactly
the 'local update' value. Doing this, the updates push won't be stuck
anymore.
This patch should be backported to all supported branches (>= 1.6).
The commitupdate value of the table is used to check whether the update
is still pending to be pushed to all peers. To be sure not to miss a
push, we reset it just after a successful handshake.
This patch should be backported to all supported branches (>= 1.6).
If two peers are disconnected and during this period they continue to
process a large amount of local updates, after a reconnection they
may take a long time before restarting to push their updates, because
the last pushed update would appear internally to be in the future.
This patch fixes this by resetting the cursor on acked updates to the
maximum point considered in the past if it appears to be in the future,
but it means we may lose some updates. A clean fix would be to update the
protocol to be able to signal a remote peer that it was not updated for
too long a period and needs a full resync, but this is not yet supported
by the protocol.
This patch should be backported to all supported branches (>= 1.6).
The re-con cursor was updated on receipt of any ack message,
even while pushing a complete resync to a peer. This cursor
is reset at the end of the resync, but if the connection is broken
during the resync, we could restart at an unwanted point.
With this patch, the peer stops considering ack messages while pushing
a resync, since the resync process has its own acknowledgement and
is always restarted from the beginning in case of a broken connection.
This patch should be backported to all supported branches (>= 1.6).
When receiving a resync request, the origins used to start the full sync
and to reset after the full resync are mistakenly computed based on
the last update on the table instead of being computed based on
the last update acked by the node requesting the resync.
This could result in disordered or missing updates being pushed to the
requester.
This patch sets those origins correctly.
This patch should be backported to all supported branches (>= 1.6).
If a reload is performed and there is no incoming connection
from the old process to push a full resync, the new process
can be stuck waiting indefinitely for this connection and never tries the
fallback of requesting a full resync from a remote peer, because the resync
timer was initialized to TICK_ETERNITY.
This patch forces a reset of the resync timer to its default value (5 secs)
if we detect that the value is TICK_ETERNITY.
This patch should be backported to all supported branches (>= 1.6).
By default haproxy loads all files designated by a relative path from the
location the process is started in. In some circumstances it might be
desirable to force all relative paths to start from a different location
just as if the process was started from such locations. This is what this
directive is made for. Technically it will perform a temporary chdir() to
the designated location while processing each configuration file, and will
return to the original directory after processing each file. It takes an
argument indicating the policy to use when loading files whose path does
not start with a slash ('/').
A few options are offered, "current" (the default), "config" (files
relative to config file's dir), "parent" (files relative to config file's
parent dir), and "origin" with an absolute path.
This should address issue #1198.
In readcfgfile() when malloc() fails to allocate a buffer for the
config line, it currently says "parsing[<file>]: out of memory" while
the error is unrelated to the config file and may make one think it has
to do with the file's size. The second test (fopen() returning error)
needs to release the previously allocated line. Both directly return -1
which is not even documented as a valid error code for the function.
Let's simply make sure that the few variables freed at the end are
properly preset, and jump there upon error, after having displayed a
meaningful error message. Now at least we can get this:
$ ./haproxy -f /dev/kmem
[NOTICE] 116/191904 (23233) : haproxy version is 2.4-dev17-c3808c-13
[NOTICE] 116/191904 (23233) : path to executable is ./haproxy
[ALERT] 116/191904 (23233) : Could not open configuration file /dev/kmem : Permission denied
When a DATA frame is sent, we must take care to properly detect the EOM flag
on the HTX message to set the ES flag on the frame when necessary, to finish
the stream. But it is only done when data are copied from the HTX message to
the mux buffer and not when the frame is sent via zero-copy. This patch fixes
this bug.
It is a 2.4-specific bug. No backport is needed.
When an HTTP lua service is started, headers are consumed before calling the
script. Because they were stored in a lua array when the service was
initialized, they can be removed from the HTX message, as the lua service
will no longer access them. But this is a problem with bodyless messages
because the EOM flag is lost. Indeed, once the headers are consumed, the
message is empty and the buffer is reset, including the flags.
Now, the headers are not immediately consumed. We simply skip over them when
applet:receive() or applet:getline() is called. This way, the EOM flag is
preserved. At the end, when the script is finished, all output data are
consumed, thus this remains safe.
It is a 2.4-specific bug. No backport is needed.
If an applet consumed output data (the amount of output data has changed
between before and after the call to the applet), the producer is
notified. It means CF_WRITE_PARTIAL and CF_WROTE_DATA are set on the output
channel and the opposite stream interface is notified some room was made in
its input buffer. This way, it is no longer the applet's responsibility to
take care of it. However, it doesn't matter if the applet does the same.
Said like that, it looks like an improvement not a bug. But it really fixes
a bug in the lua, for HTTP applets. Indeed, applet:receive() and
applet:getline() are buggy for HTTP applets. Data are consumed but the
producer is not notified. It means if the payload is not fully received in
one time, the applet may be blocked because the producer remains blocked (it
is time dependent).
This patch must be backported as far as 2.0 (only for the HTX part).
A read error on the server side is also reported as a write error on the
client side. It means that sometimes a server-side error is handled on the
client side. Among other cases, this happens when the client side is waiting
for the response while the request processing is already finished. In this
case, the error is not handled as a server error, which is not accurate.
So now, when the request processing is finished but not the response
processing, and a read error was encountered on the server side, the error
is not immediately processed on the client side, to give the response
analysers a chance to properly catch the error.
Since the input buffer is transferred to the stream when it is created,
there is no longer any control on the request size to be sure the buffer's
reserve is still respected. It was automatically performed in h2_rcv_buf()
because the caller took care to provide the correct available space in the
buffer. The control is still there but it is no longer applied on the
request headers. Now, we should take care of the reserve when the headers
are decoded, before the stream creation.
The test is performed for the request and the response.
It is a 2.4-specific bug. No backport is needed.
It is the only function using the hdrs_bytes start-line field. Thus the
function has been refactored to no longer rely on it. To do so, we first
copy HTX blocks to the destination message, without removing them from the
source message. If the copy is interrupted on headers or trailers, we roll
back. Otherwise, data are drained from the source buffer.
Most of the time, the copy will succeed. So the rollback is only performed
in the worst but very rare case.
When all data of an HTX message are drained, we rely on htx_reset() to
reinit the message state. However, the flags must be preserved. It is, among
other things, important to preserve processing or parsing errors.
This patch must be backported as far as 2.0.
The compilation fails due to the following commit:
fc6ac53dca
BUG/MAJOR: fix build on musl with cpu_set_t support
The new global variable cpu_map conflicted with a local variable of the
same name in the code path for the apple platform when setting the
process affinity.
This does not need to be backported.
Move cpu_map structure outside of the global struct to a global
variable defined in cpuset.c compilation unit. This allows to reorganize
the includes without having to define _GNU_SOURCE everywhere for the
support of the cpu_set_t.
This fixes the compilation with musl libc, most notably used for the
alpine based docker image.
This fixes the github issue #1235.
No need to backport as this feature is new in the current
2.4-dev.
The return value check was wrongly based on error codes when the
function actually returns an error number.
This bug was introduced by f3eedfe195
which is a feature not present before branch 2.4.
It does not need to be backported.
The HTX functions used to add new HTX blocks in a message have been moved to
the header file to inline them in calling functions. These functions are
small enough.
A normalized URI is the internal term used to specify that a URI is stored
using the absolute format (scheme + authority + path). For now, it is only
used for H2 clients. It is the default and recommended format for H2
requests. However, it is unusual for H1 servers to receive such a URI. So in
this case, we only send the path of the absolute URI. This is performed for
H1 servers, but not for FCGI applications. This patch fixes the difference.
Note that it is not a real bug, because FCGI applications should support
absolute URIs.
Note also that a normalized URI is only detected for H2 clients when a
request is received. There is no such test on the H1 side. It means an
absolute URI received from an H1 client will be sent without modification to
an H1 server or an FCGI application.
To make it possible, a dedicated function has been added to get the H1
URI. This function is called by the H1 and the FCGI multiplexer when a
request is sent to a server.
This patch should fix the issue #1232. It must be backported as far as 2.2.
The error path of the NUMA topology detection introduced in commit
b56a7c89a ("MEDIUM: cfgparse: detect numa and set affinity if needed")
lacks an initialization resulting in possible crashes at boot. No
backport is needed since that was introduced in 2.4-dev.
In proxy.c, when the process is stopping, we try to flush table contents
using 'stktable_trash_oldest'. A check on the counter "table->syncing" was
made to verify that there is no pending resync in progress.
But with multiple threads this counter may only be increased by another
thread after some delay, so the content of some tables can be trashed earlier
and won't be pushed to the new process (after a reload, some tables appear
reset and others don't).
This patch renames the counter "table->syncing" to "table->refcnt"; the
counter is now increased during configuration parsing (when registering a
table to a peer section) to protect tables at runtime until the resync with
a new process has succeeded or failed.
The inc/dec operations are now made using atomic operations
because multiple peer sections could refer to the same table in the future.
This fix addresses github #1216.
This patch should be backported to all branches with multi-thread support
(v >= 1.8).
The peers task handling the "stopping" could wake up multiple
times in stopping state with WOKEN_SIGNAL: the connection to the
local peer initiated on the first processing was immediately
shut down by the next processing of the task, and the old process
exits considering it is unable to connect. This results in
empty stick-tables after a reload.
This patch checks the flag 'PEERS_F_DONOTSTOP' to know whether the
signal has already been considered and the shutdown of remote peer
connections is already done, or whether a connection to the local peer
must be established.
This patch should be backported to all supported branches (v >= 1.6).
The old process checked each table's resync status even if
the resync process was finished. This behavior had no known impact
except useless processing and was discovered while debugging
another issue.
This patch could be backported to all supported branches (v >= 1.6)
but once again, it has no impact except avoiding useless processing.
In tv_update_date(), we calculate the new global date based on the local
one. It's very likely that other threads will end up with the exact same
now_ms date (at 1 million wakeups/s it happens 99.9% of the time), and
even the microsecond was measured to remain unchanged ~70% of the time
with 16 threads, simply because sometimes another thread already updated
a more recent version of it.
In such cases, performing a CAS to the global variable requires a cache
line flush which brings nothing. By checking if they're changed before
writing, we can divide by about 6 the number of writes to the global
variables, hence the overall contention.
In addition, it's worth noting that all threads will want to update at
the same time, so let's place a cpu relax call before trying again, this
will spread attempts apart.
The time adjustment is very rare, even at high poll rates. Tests show
that only 0.2% of tv_update_date() calls require a change of offset. Such
concurrent writes to a shared variable have an important impact on future
loads, so let's only update the variable if it changed.
The compilation is currently broken on platform without USE_CPU_AFFINITY
set. An error has been reported by the cygwin build of the CI.
This does not need to be backported.
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from src/ev_poll.c:22:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ev_poll.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from include/haproxy/connection.h:30,
from include/haproxy/ssl_sock.h:27,
from src/ssl_sample.c:30:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ssl_sample.o] Error 1
Fix the warning treated as error on the CI for the macOS compilation :
"src/haproxy.c:2939:23: error: unused variable 'set'
[-Werror,-Wunused-variable]"
This does not need to be backported.
Render numa detection optional with a global configuration statement
'no numa-cpu-mapping'. This can be used if the applied affinity of the
algorithm is not optimal. Also complete the documentation with this new
keyword.
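For example, disabling this automatic binding only requires:
global
    no numa-cpu-mapping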
On process startup, the CPU topology of the machine is inspected. If a
multi-socket CPU machine is detected, automatically define the process
affinity on the first node with active cpus. This is done to prevent an
impact on the overall performance of the process in case the topology of
the machine is unknown to the user.
This step is not executed in the following conditions:
- a non-null nbthread statement is present
- a restrictive 'cpu-map' statement is present
- the process affinity is already restricted, for example via a taskset
call
For the record, benchmarks were executed on a machine with 2
Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz CPUs. In both clear and ssl
scenarios, the performance was sub-optimal without the automatic
rebinding on a single node.
Allow to specify multiple cpu ids/ranges in parse_cpu_set separated by a
comma. This is optional and must be activated by a parameter.
The comma support is disabled for the parsing of the 'cpu-map' config
statement. However, it will be useful to parse files in sysfs when
inspecting the cpus topology for NUMA automatic process binding.
Create a function thread_cpu_mask_forced. Its purpose is to report whether a
restrictive cpu mask is active for the current process, for example due
to a taskset invocation. It is currently only implemented for the linux
platform.
Use the platform independent type hap_cpuset for the cpu-map statement
parsing. This allows to address CPU indexes greater than LONGBITS.
Update the documentation to reflect the removal of this limit except for
platforms without the cpu_set_t type or an equivalent.
Replace the unsigned long parameter by a hap_cpuset. This allows to
address CPUs with an index greater than LONGBITS.
This function is used to parse the 'cpu-map' statement. However, at the
moment the result is cast back to a long to store it in the global
structure. The next step is to replace the ulong in cpu_map in the
global structure with hap_cpuset.
This module can be used to manipulate cpu sets in a platform agnostic
way. It uses the type cpu_set_t/cpuset_t if available on the platform, or
falls back to unsigned long, which de facto limits the maximum cpu index
to LONGBITS.
The H2_CF_RCVD_SHUT flag is used to report that a read0 was encountered. It
is used by the H2 mux to properly handle shutdowns. However, this flag is
only set when no data are received. If it is detected at the socket level
when some data are received, it is not handled. And because the event was
reported on the connection, any other read attempts are blocked. In this
case, we are unable to close the connection and release the mux
immediately. We must wait until the mux timeout expires.
This patch should fix the issue #1231. It must be backported as far as 2.0.
As we now embed the library we don't need to support the older 1.0 API
any more, so we can remove the explicit calls to slz_make_crc_table()
and slz_prepare_dist_table().
Now that SLZ is merged, let's update the makefile and compression
files to use it. As a result, SLZ_INC and SLZ_LIB are neither defined
nor used anymore.
USE_SLZ is enabled by default ("USE_SLZ=default") and can be disabled
by passing "USE_SLZ=" or by enabling USE_ZLIB=1.
The doc was updated to reflect the changes.
SLZ is rarely packaged by distros and there have been complaints about
the CPU and memory usage of ZLIB, leading to some suggestions to better
address the issue by simply integrating SLZ into the tree (just 3 files).
See discussions below:
https://www.mail-archive.com/haproxy@formilux.org/msg38037.html
https://www.mail-archive.com/haproxy@formilux.org/msg40079.html
https://www.mail-archive.com/haproxy@formilux.org/msg40365.html
This patch does just this, after minor adjustments to these files:
- tables.h was renamed to slz-tables.h
- tables.h had the precomputed tables removed since not used here
- slz.c uses includes <import/slz*> instead of "slz*.h"
The slz commit imported here was b06c172 ("slz: avoid a build warning
with -Wimplicit-fallthrough"). No other change was performed either to
SLZ nor to haproxy at this point so that this operation may be replicated
if needed for a future version.
Since commit 3f12887 ("MINOR: mworker: don't use children variable
anymore"), the oldpids array is not used anymore to generate the new -sf
parameters. So we don't need to set nb_oldpids to 0 during the first
start of the master process.
This patch fixes a bug when 2 master processes try to synchronize their
peers: there is a small chance that it won't work because nb_oldpids
equals 0.
Should be backported as far as 2.0.
This bug affects the peers synchronisation code, which relies on the
nb_oldpids variable to synchronize the peers from the old PID.
In the case where the process is not started in master-worker mode and
tries to synchronize using the peers, there is a small chance that it won't
work because nb_oldpids equals 0.
Fix the bug by setting the variable to 0 only in the case of the
master-worker when not reloaded.
It could also be a problem when trying to synchronize the peers between
2 master processes, which should be fixed in another patch.
The bug exists since commit 8a361b5 ("BUG/MEDIUM: mworker: don't reuse PIDs
passed to the master").
Should be backported as far as 1.8.
The application of a cpu-map statement with both processes and threads
is broken (P-Q/1 or 1/P-Q notation).
For example, before the fix, when using P-Q/1, proc_t1 would be updated.
Then it would be AND'ed with thread, which is still 0 and thus does
nothing.
Another problem is that when using 1/1[-Q], thread[0] is defined. But if
there are multiple processes, every process will use this defined
affinity even though it should be applied only to the 1st process.
The solution in this fix is a little bit too complex for my taste and
there may be a simpler one, but I did not wish to break the
storage of global.cpu_map, as it is quite painful to test all the
use-cases. Besides, this code will probably be cleaned up when
multiprocess support is removed in a future version.
Let's try to explain my logic.
* either haproxy runs in multiprocess or multithread mode. If in
multiprocess mode, we should consider proc_t1 (P-Q/1 notation). If in
multithread mode, we should consider thread (1/P-Q notation). However,
during parsing, the final number of processes or threads is unknown,
thus we have to consider both possibilities.
* there is a special case for the first thread / first process, which is
present in both execution modes. And as a matter of fact, the cpu-map 1
and 1/1 notations represent the same thing. Thus, thread[0] and proc_t1[0]
represent the same thing. To solve this problem, only thread[0] is
used for this special case.
This fix must be backported up to 2.0.
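As a reminder of the two notations discussed above, in the global section (cpu and thread numbers are illustrative):
cpu-map 1-4/1 0-3   # P-Q/1: thread 1 of processes 1 to 4 bound to cpus 0-3
cpu-map 1/1-4 0-3   # 1/P-Q: threads 1 to 4 of process 1 bound to cpus 0-3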
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory which renders those segments redundant.
See GitHub Issue #714.
Currently the delimiter is hardcoded as an ampersand (&), but the function
takes the delimiter as a parameter.
This patch replaces the hardcoded ampersand with the given delimiter.
When headers are split over several frames, the payloads of HEADERS and
CONTINUATION frames are merged to form a unique HEADERS frame before
decoding the payload. To do so, info about the current frame is updated
(dff, dfl..) with info from the next one. Here there is a bug when the frame
length (dfl) is updated. We must add the next frame length (hdr.dfl) and not
only the amount of data found in the buffer (clen). Because HEADERS frames
are decoded in one pass, the dfl value is the whole frame length or 0,
nothing in between.
This patch must be backported as far as 2.0.
In the function decoding the payload of HEADERS frames, an internal error is
returned if the frame length is too large: it cannot exceed the buffer
size. The same is true when headers are split over several frames. The
payloads of HEADERS and CONTINUATION frames are merged and the overall size
must not exceed the buffer size.
However, there is a bug when the current frame is big enough to leave only
enough space for a part of the next frame's header. Because, in this case,
we wait for more data to have the whole frame header, we don't properly
detect that the headers are too large to be stored in one buffer. In fact
the test to trigger this error is not accurate. When the buffer is full, the
error is reported if the frame length exceeds the amount of data in the
buffer. But in reality, an error must be reported when we are unable to
decode the current frame while the buffer is full, because in this case we
know there is no way to change this state.
When the bug happens, the H2 connection is woken up in a loop, consuming all
the CPU. But for all that, the traffic is not blocked.
This patch must be backported as far as 2.0.
gcc still reports a potential null pointer dereference in the delete server
function, even with a BUG_ON before it. Remove the misleading NULL check
in the for loop since this case should never happen.
This does not need to be backported.
Implement a new CLI command 'del server'. It can be used to remove a
dynamically added server. Only servers in maintenance mode can be
removed, and only without pending/active/idle connections on them.
Add a new reg-test for this feature. The scenario of the reg-test needs
to first add a dynamic server. It is then deleted and a client is used
to ensure that the server can no longer be reached.
The management doc is updated with the new command 'del server'.
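A hypothetical session, assuming a backend "be_app" with a dynamic server "srv1" already placed in maintenance and with no remaining connections:
$ echo "del server be_app/srv1" | socat stdio /var/run/haproxy.sock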
cli_parse_add_server can be executed in parallel by several CLI
instances and so must be thread-safe. The critical points of the
function are :
- server duplicate detection
- insertion of the server in the proxy list
The mode of operation has been reversed. The server is first
instantiated and parsed. The duplicate check has been moved to the end,
just before the insertion into the proxy list, under thread isolation.
Thus, the thread safety is guaranteed and server allocation is kept
outside of locks/thread isolation.
Config information has been added into the logsrv struct. The filename
is duplicated and should be freed on exit.
Introduced in the current release.
This does not need to be backported.
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").
Let's use more explicit (but slightly longer) names now:
LIST_ADD -> LIST_INSERT
LIST_ADDQ -> LIST_APPEND
LIST_ADDED -> LIST_INLIST
LIST_DEL -> LIST_DELETE
The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage to use it instead of the
lazier LIST_DELETE which is often less safe.
The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.
The list doc was updated.
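For out-of-tree code, a small compatibility shim (not part of this patch, shown only as an illustration) could map the old names onto the new ones:
/* hypothetical compatibility shim for out-of-tree code; assumes the new
 * LIST_* macros from haproxy/list.h are already defined.
 */
#define LIST_ADD(lh, el)   LIST_INSERT(lh, el)
#define LIST_ADDQ(lh, el)  LIST_APPEND(lh, el)
#define LIST_ADDED(el)     LIST_INLIST(el)
#define LIST_DEL(el)       LIST_DELETE(el)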
This improves the use of local variables in sample_conv_json_query:
- Use the enum type for the return value of `mjson_find`.
- Do not use single letter variables.
- Reduce the scope of variables that are only needed in a single branch.
- Add missing newlines after variable declaration.
The test in srv_alloc_lb() to allocate the lb_nodes[] array used in the
consistent hash was incorrect: it wouldn't do it for consistent hash and
could do it for regular random.
No backport is needed as this was added for dynamic servers in 2.4-dev by
commit f99f77a50 ("MEDIUM: server: implement 'add server' cli command").
Amaury noticed that I managed to break the build of DEBUG_FAIL_ALLOC
for the second time with 207c09509 ("MINOR: pools: move the fault
injector to __pool_alloc()"). The joy of endlessly reworking patch
sets... No backport is needed, that was in the just merged cleanup
series.
This function has become too big (251 bytes) and is now hurting
performance a lot, with up to 4% request rate being lost over the last
pool changes. Let's move it to pool.c as a regular function. Other
attempts were made to cut it in half but it's still inefficient. Doing
this results in saving ~90kB of object code, and even 112kB since the
pool changes, with code that is even slightly faster!
Conversely, pool_get_from_cache(), which remains half of this size, is
still faster inlined, likely in part due to the immediate use of the
returned pointer afterwards.
Till now it used to call it only if there were not too many objects in
the local cache, otherwise it would send the latest one directly to the
shared cache. Now it always sends to the local cache and it's up to the
local cache to free its oldest objects. From a cache freshness perspective
it's better this way since we always evict cold objects instead of hot
ones. From an API perspective it's better because it will help make the
shared cache invisible to the public API.
Till now we could only evict oldest objects from all local caches using
pool_evict_from_local_caches() until the cache size was satisfying again,
but there was no way to evict excess objects from a single cache, which
is the reason why pool_put_to_cache() used to refrain from putting into
the local cache and would directly write to the shared cache, resulting
in massive writes when caches were full.
Let's add this new function now. It will stop once the number of objects
in the local cache is no higher than 16+total/8 or the cache size is no
more than 75% full, just like before.
For now the function is not used.
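A rough, self-contained sketch of the stop condition described above (all names and the structure are assumptions for illustration, not the actual HAProxy code):
#include <stddef.h>

struct local_cache {
    unsigned int count;        /* objects in this pool's local cache */
    unsigned int total_count;  /* objects across all local caches */
    size_t bytes;              /* bytes used by all local caches */
    size_t max_bytes;          /* configured local cache size */
};

/* evict the oldest objects from one local cache until it holds no more
 * than 16 + total/8 objects or uses at most 75% of the configured size
 */
static void evict_from_local_cache(struct local_cache *c, size_t obj_size)
{
    while (c->count > 16 + c->total_count / 8 &&
           c->bytes > c->max_bytes * 3 / 4) {
        /* release one oldest object back to the shared cache or the OS
         * (details omitted in this sketch)
         */
        c->count--;
        c->total_count--;
        c->bytes -= obj_size;
    }
}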
Continuing the unification of local and shared pools, now the usage of
pools is governed by CONFIG_HAP_POOLS without which allocations and
releases are performed directly from the OS using pool_alloc_nocache()
and pool_free_nocache().
There are two levels of freeing to the OS:
- code that wants to keep the pool's usage counters updated uses
pool_free_area() and handles the counters itself. That's what
pool_put_to_shared_cache() does in the no-global-pools case.
- code that does not want to update the counters because they were
already updated only calls pool_free_area().
Let's extract these calls to establish the symmetry with pool_get_from_os()
and pool_alloc_nocache(), resulting in pool_put_to_os() (which only updates
the allocated counter) and pool_free_nocache() (which also updates the used
counter). This will later allow to simplify the generic code.
A part of the code cannot be factored out because it still uses non-atomic
inc/dec for pool->used and pool->allocated as these are located under the
pool's lock. While it can make sense in terms of bus cycles, it does not
make sense in terms of code normalization. Further, some operations were
still performed under a lock that could be totally removed via the use of
atomic ops.
There is still one occurrence in pool_put_to_shared_cache() in the locked
code where pool_free_area() is called under the lock, which must absolutely
be fixed.
Now there's one part dealing with the allocation itself and keeping
counters up to date, and another one on top of it to return such an
allocated pointer to the user and update the use count and stats.
This is in anticipation for being able to group cache-related parts.
The release code is still done at once.
Till now it was limited to objects allocated from the OS which means
it had little use as soon as pools were enabled. Let's move it upper
in the layers so that any code can benefit from fault injection. In
addition this allows to pass a new flag POOL_F_NO_FAIL to disable it
if some callers prefer a no-failure approach.
ha_random() is quite heavy and uses atomic ops or even a lock on some
architectures. Here we don't seek good randoms, just statistical ones,
so let's use the statistical prng instead.
Now the multi-level cache becomes more visible:
pool_get_from_local_cache()
pool_put_to_local_cache()
pool_get_from_shared_cache()
pool_put_to_shared_cache()
The functions were rightfully called from/to_cache when the thread-local
cache was considered as the only cache, but this is getting terribly
confusing. Let's call them from/to local_cache to make it clear that
it is not related with the shared cache.
As a side note, since pool_evict_from_cache() used not to work for a
particular pool but for all of them at once, it was renamed to
pool_evict_from_local_caches() (plural form).
They were strictly equivalent, let's remerge them and rename them to
pool_alloc_nocache() as it's the call which performs a real allocation
which does not check nor update the cache. The only difference in the
past was the former taking the lock and not the second but now the lock
is not needed anymore at this stage since the pool's list is not touched.
In addition, given that the "avail" argument is no longer used by the
function nor by its callers, let's drop it.
Now we don't loop anymore trying to refill multiple items at once, and
an allocated object is directly returned to the requester instead of
being stored into the shared pool. This has multiple benefits. The
first one is that no locking is needed anymore on the allocation path
and the second one is that the loop will no longer cause latency spikes.
This is a first step towards unifying all the fallback code. Right now
these two functions are the only ones which do not update the needed_avg
rate counter since there's currently no shared pool kept when using them.
But their code is similar to what could be used everywhere except for
this one, so let's make them capable of maintaining usage statistics.
As a side effect the needed field in "show pools" will now be populated.
The mem_should_fail() call enabled by DEBUG_FAIL_ALLOC used to be placed
only in the no-cache version of the allocator. Now we can generalize it
to all modes and remove the exclusive test on CONFIG_HAP_NO_GLOBAL_POOLS.
We're going to make the local pool always present unless pools are
completely disabled. This means that pools are always enabled by
default, regardless of the use of threads. Let's drop this notion
of "local" pools and make it just "pool". The equivalent debug
option becomes DEBUG_NO_POOLS instead of DEBUG_NO_LOCAL_POOLS.
For now this changes nothing except the option and dropping the
dependency on USE_THREAD.
Initially per-thread pool caches were stored into a fixed-size array.
But this was a bit ugly because the last allocated pools were not able
to benefit from the cache at all. As a workaround to preserve
performance, a size of 64 cacheable pools was set by default (there
are 51 pools at the moment, excluding any addon and debugging code),
so all in-tree pools were covered, at the expense of higher memory
usage.
In addition, an index had to be calculated for each pool, and was used
to access the pool cache head in that array. The pool index was not
even stored in the pools, so it had to be determined again to access
the cache even when the pool was already known.
This patch changes this by moving the pool cache head into the pool
head itself. This way it is certain that each pool will have its own
cache. This removes the need for index calculation.
The pool cache head is 32 bytes long so it was aligned to 64B to avoid
false sharing between threads. The extra cost is not huge (~2kB more
per pool than before), and we'll make better use of that space soon.
The pool cache head contains the size, which should probably be removed
since it's already in the pool's head.
When building with DEBUG_FAIL_ALLOC we call a random generator to decide
whether the pool alloc should succeed or fail, and there was a preliminary
debugging mechanism to keep sort of a history of the previous decisions. But
it was never used, enforced a lock during the allocation, and forced the use
of static variables, all of which limit the ability to pursue the pools
cleanups with no real benefit. Let's get rid of them now.
Since recent commit ae07592 ("MEDIUM: pools: add CONFIG_HAP_NO_GLOBAL_POOLS
and CONFIG_HAP_GLOBAL_POOLS") the pre-allocation of all desired reserved
buffers was not done anymore on systems not using the shared cache. This
basically has no practical impact since these ones will quickly be refilled
by all the ones used at run time, but it may confuse someone checking if
they're allocated in "show pools".
That's only 2.4-dev, no backport is needed.
When running with CONFIG_HAP_NO_GLOBAL_POOLS, it's theoretically possible
to keep an incorrect count of allocated entries in a pool because the
allocated counter was used as a cumulated counter of alloc calls instead
of the number of currently allocated items (it's possible the meaning has
changed over time). The only impact in this mode essentially is that
"show pools" will report incorrect values. But this would only happen with
limited pools, which it is not even certain still exist.
This was added by recent commit 0bae07592 ("MEDIUM: pools: add
CONFIG_HAP_NO_GLOBAL_POOLS and CONFIG_HAP_GLOBAL_POOLS") so no backport
is needed.
This patch renames all existing uri-normalizers into a more consistent naming
scheme:
1. The part of the URI that is being touched.
2. The modification being performed as an explicit verb.
This normalizer merges `../` path segments with the preceding segment, removing
both the preceding segment and the `../`.
Empty segments do not receive special treatment. The `merge-slashes` normalizer
should be executed first.
See GitHub Issue #714.
When the session is aborted before any connection attempt to any server, the
number of connection retries reported in the logs is wrong. It happens
because when the retries counter is not strictly positive, we consider that
the max number of retries was reached and the backend retries value is used.
This is obviously wrong when no connection was performed.
In fact, at this stage, the retries counter is initialized to 0. But the
backend stream-interface is in the INI state. Once it is set to SI_ST_REQ,
the counter is set to the backend value. And it is the only possible state
transition from the INI state. Thus it is safe to rely on it to fix the bug.
This patch must be backported to all stable versions.
The http_get_stline() function was designed to be called from HTTP
analyzers, thus before any data forwarding. To prevent any invalid usage,
two BUG_ON() statements were added. However, it is not a good idea because
it is pretty hard to be sure that no HTTP sample fetch will ever be called
outside of the analyzers' context. Especially because there is at least one
possible area where it may happen: an HTTP sample fetch may be used inside
the unique-id format string. In the normal case, the id is generated in the
AN_REQ_HTTP_INNER analyzer. But if an error is reported too early, the id is
generated when the log is emitted.
So, it is safer to remove the BUG_ON() statements and consider that the
normal behavior is to return NULL if the first block is not a start-line. Of
course, this means all calling functions must test the return value or be
sure the start-line is really there.
This patch must be backported as far as 2.0.
This patch adds 4 new sample fetches to get the source and the destination
info (ip address and port) of the backend connection :
* bc_dst : Returns the destination address of the backend connection
* bc_dst_port : Returns the destination port of the backend connection
* bc_src : Returns the source address of the backend connection
* bc_src_port : Returns the source port of the backend connection
The configuration manual was updated accordingly.
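For example, one of them could be used to expose the backend connection's source address in a response header (the header name is purely illustrative):
http-response set-header X-Backend-Source %[bc_src]:%[bc_src_port]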
When the method sample fetch is called and an exotic method is found
(HTTP_METH_OTHER), we must be sure the start-line is still there when
smp_prefetch_htx() is called. Otherwise, HAProxy may crash because of a NULL
pointer dereference, for instance if the method sample fetch is used inside
a unique-id format string. Indeed, the unique id may be generated when the
log message is emitted. At this stage, the request channel is empty.
This patch must be backported as far as 2.0. But the bug exists in all
stable versions for the legacy HTTP mode too. Thus it must be adapted to the
legacy HTTP mode and backported to all other stable versions.
For all ssl_bc_* sample fetches, the test on the keyword when called from a
health-check is inverted. We must be sure the 5th character is a 'b' to
retrieve a connection.
This patch must be backported as far as 2.2.
bc_http_major sample fetch now works when it is called from a
tcp-check. When it happens, the session origin is a check. The backend
connection is retrieved from the conn-stream attached to the check.
If required, this patch may easily be backported as far as 2.2.
The fc_http_major and bc_http_major sample fetches return the major digit of
the HTTP version used, respectively, by the frontend and the backend
connections, based on the mux. However, in reality, "2" is returned if the
H2 mux is detected, otherwise "1" is unconditionally returned, regardless of
the mux used. Thus, if called for a raw TCP connection, "1" is returned.
To fix this bug, we now get the multiplexer's flags, if there is one, to be
sure MX_FL_HTX is set.
I guess it was made this way on purpose when the H2 multiplexer was
introduced in 1.8, and with the legacy HTTP mode there is no other
solution at the connection level. Thus this patch should be backported as
far as 2.2. For 2.0, it must be evaluated first because of the legacy
HTTP mode.
When a log-format string is built from a health-check, the session origin
is the health-check itself and not a connection. In addition, there is no
stream. It means for now some formats are not supported: %s, %sc, %b, %bi,
%bp, %si and %sp.
Thanks to this patch, the session origin is converted to a check. So it is
possible to retrieve the backend and the backend connection. Note this
session has no listener, thus the %ft format must be guarded.
This patch is light and standalone, thus it may be backported as far as 2.2
if required. However, since human error is always possible, it is probably
better to wait a bit to be sure everything is properly protected.
The dummy frontend used to create the session of the tcp-checks is
initialized without an identifier. However, one is required because this id
may be used without any guard, for instance in a log-format string via "%f"
or when the fe_name sample fetch is called. Thus, an unset id may lead to
crashes.
This patch must be backported as far as 2.2.
When a thread ends its harmless period, we must only consider running
threads when testing the threads_want_rdv_mask mask. To do so, we reintroduce
the all_threads_mask mask in the bitwise operation (it was removed to fix a
deadlock).
Note that for now it is useless because there is no way to stop threads or
to have threads reserved for another task. But it is safer this way to avoid
bugs in the future.
With json_query, a JSON value can be extracted from a header
or the body of the request and saved to a variable.
This converter makes it possible to handle some JSON workloads,
for example to route requests to different backends.
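A minimal usage sketch (the variable name, backend name and JSON path are illustrative; buffering the request body with "option http-buffer-request" is assumed):
http-request set-var(txn.user) req.body,json_query('$.user')
use_backend be_admins if { var(txn.user) -m str admin }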
This library is required for the subsequent patch which adds
the JSON query possibility.
It is necessary to change the include statement in "src/mjson.c"
because the imported includes in haproxy are in "include/import"
orig: #include "mjson.h"
new: #include <import/mjson.h>
ub64dec and ub64enc are the base64url equivalents of the b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
The RFC1421 mention in the base64.c file is deprecated, so it was replaced
with RFC4648, to which the existing converters, base64/b64dec, still apply.
Example:
HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
Adjust the size of the sample buffer before we change the "area"
pointer. The change in size is calculated as the difference between the
original pointer and the new start pointer. But since the
`smp->data.u.str.area` assignment results in `smp->data.u.str.area` and
`start` being the same pointer, we always ended up subtracting zero.
This patch changes the size by the actual amount by which it changed.
I'm not entirely sure what the impact of this is, but the previous code
seemed wrong.
[wt: from what I can see the only harmful case is when the output is
converted to a stick-table key, it could result in zeroing past the
end of the buffer; other cases do not touch beyond ->data]
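In other words, the fix boils down to performing the subtraction before moving the pointer, roughly like this (the struct is reduced to the two relevant fields; this is a sketch, not the actual converter code):
#include <stddef.h>

/* simplified string sample: "area" is the start of the buffer, "data" its length */
struct str_sample {
    char *area;
    size_t data;
};

/* skip a prefix: shrink the length first, then move the start pointer */
static void skip_prefix(struct str_sample *s, char *start)
{
    s->data -= start - s->area;  /* non-zero only because it is done before the assignment below */
    s->area  = start;
}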
All allocation errors in cfg_parse_listen() are now handled in a unique
place under the "alloc_error" label. This simplifies error handling in
this function a bit.
At several places during the proxy section parsing, memory allocation was
performed with no check. Result is now tested and an error is returned if
the allocation fails.
This patch may be backported to all stable versions, but it only fixes
allocation errors during configuration parsing. Thus, it is not mandatory.
Allocation errors are now handled in the bind_conf_alloc() functions. Thus
callers, when not already done, are also updated to catch NULL return values.
This patch may be backported (at least partially) to all stable
versions. However, it only fixes errors during configuration parsing. Thus
it is not mandatory.
Allocated variables are now released when an error occurred during
use_backend, use-server, force/ignore-parsing, stick-table, stick and stats
directives parsing. For some of these directives, allocation errors have
been added.
This patch may be backported to all stable versions, but it only fixes
leaks or allocation errors during configuration parsing. Thus, it is not
mandatory. It should fix issue #1119.
When an error occurred in hlua_register_cli(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_service(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_action(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in action_register_lua(), the allocated hlua rule and
arguments must be released to avoid memory leaks.
This patch may be backported in all stable versions.
When an error occurred in hlua_register_fetches(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions. It should fix#1112.
When an error occurred in hlua_register_converters(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_task(), the allocated lua context
and task must be released to avoid memory leaks.
This patch may be backported in all stable versions.
Add the trace support for the checks. Only tcp-check based health-checks are
supported, including the agent-check.
In traces, the first argument is always a check object. So it is easy to get
all info related to the check. The tcp-check ruleset, the conn-stream and
the connection, the server state...
Since 1.8, for simplicity, the time offset used to compensate for time
drift and jumps had been stored per thread. But with a global time,
the complexity has significantly increased.
What this patch does in order to address this is to get back to the
origins of the pre-thread time drift correction, and keep a single
offset between the system's date and the current global date.
The thread first verifies from the before_poll date if the time jumped
backwards or forward, then either fixes it by computing the new most
likely date, or applies the current offset to this latest system date.
In the first case, if the date is out of range, the old one is reused
with the max_wait offset or not depending on the interrupted flag.
Then it compares its date to the global date and updates both so that
both remain monotonic and that the local date always reflects the
latest known global date.
In order to support atomic updates to the offset, it's saved as a
ullong which contains both the tv_sec and tv_usec parts in its high
and low words. Note that a part of the patch comes from the inlining
of the equivalent of tv_add applied to the offset to make sure that
signed ints are permitted (otherwise it depends on how timeval is
defined).
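As an illustration of that packing, here is a standalone sketch in standard C (not the exact HAProxy code; it assumes each part fits in 32 bits):
#include <sys/time.h>

/* pack a timeval offset into one 64-bit word: seconds high, microseconds low */
static inline unsigned long long tv_to_offset(const struct timeval *tv)
{
    return ((unsigned long long)(unsigned int)tv->tv_sec << 32) |
            (unsigned long long)(unsigned int)tv->tv_usec;
}

/* unpack it again; the casts back to int keep negative offsets usable */
static inline void offset_to_tv(unsigned long long ofs, struct timeval *tv)
{
    tv->tv_sec  = (int)(unsigned int)(ofs >> 32);
    tv->tv_usec = (int)(unsigned int)ofs;
}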
This is significantly more reliable than the previous model as the
global time should move in a much smoother way, and not according
to what thread last updated it, and the thread-local time should
always be very close to the global one.
Note that (at least for debugging) a cheap way to measure processing
lag would consist in measuring the difference between global_now_ms
and now_ms, as long as other threads keep it up-to-date.
Instead of using two CAS loops, better compute the two units
simultaneously and update them at once. There is no guarantee that
the update will be synchronous, but we don't care, what matters is
that both are monotonically updated and that global_now_ms always
follows the last known value of global_now.
In the global_now loop, we used to set tmp_adj from adjusted, then
update it from tmp_now, then set adjusted back to tmp_adj, and
finally set now from adjusted. This is a long and unneeded set of
moves resulting from years of code changes. Let's just set now
directly in the loop, stop using adjusted and remove tmp_adj.
The time initialization was made a bit complex because we relied on a
dummy negative argument to reset all fields, leaving no distinction
between process-level initialization and thread-level initialization.
This patch changes this by introducing two functions, one for the
process and the second one for the threads. This removes the ambiguous
test and makes sure that the relevant fields are always initialized
exactly once. This also offers a better solution to the bug fixed in
commit b48e7c001 ("BUG/MEDIUM: time: make sure to always initialize
the global tick") as there are no more special values for global_now_ms.
It's simple enough to be backported if any other time-related issues
are encountered in stable versions in the future.
It was only used by freq_ctr and is not used anymore. In addition the
local curr_sec_ms was removed, as well as the equivalent extern
definitions which did not exist anymore either.
It remains cumbersome to preserve two versions of the freq counters and
two different internal clocks just for this. In addition, the savings
from using two different mechanisms are not that important, as the only
saving is a divide that is replaced by a multiply, but now thanks to
the freq_ctr_total() unification the code could also be simplified to
optimize it in the case of constants.
This patch turns all non-period freq_ctr functions to static inlines
which call the period-based ones with a period of 1 second. A direct
benefit is that a single internal clock is now needed for any counter
and that they now all rely on ticks.
These 1-second counters are essentially used to report request rates
and to enforce a connection rate limitation in listeners. It was
verified that these continue to work like before.
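A minimal sketch of the idea (the names are believed to match HAProxy's internal API but are shown here as an assumption, not a verbatim excerpt):
#include <haproxy/freq_ctr.h>
#include <haproxy/ticks.h>

/* the 1-second counter is just the period-based one with a 1s period in ticks */
static inline unsigned int read_freq_ctr_1s(struct freq_ctr *ctr)
{
    return read_freq_ctr_period(ctr, MS_TO_TICKS(1000));
}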
Both structures are identical except the name of the field starting
the period and its description. Let's call them all freq_ctr and the
period's start "curr_tick" which is generic.
This is only a temporary change and fields are expected to remain
the same with no code change (verified).