This one doesn't use anything from an SSL context; it only checks the
type of the transport layer of a connection, thus it belongs to
connection.h. This is particularly visible due to all the ifdefs
around it in various call places.
These functions have no reason for being inlined, and they require some
includes with long dependencies. Let's move them to listener.c and trim
unused includes in listener.h.
The lock-debugging code in thread.h has no reason to be inlined. The
functions are quite fat and perform a lot of operations so there's no
saving keeping them inlined. Worse, most of them are in fact not
inlined, resulting in a significantly bigger executable.
This patch moves all this part from thread.h to thread.c. The functions
are still exported in thread.h of course. This results in ~166kB less
code:
text data bss dec hex filename
3165938 99424 897376 4162738 3f84b2 haproxy-before
2991987 99424 897376 3988787 3cdd33 haproxy-after
In addition the build time with thread debugging enabled has shrunk
from 19.2 to 17.7s thanks to much less code to be parsed in thread.h
that is included virtually everywhere.
pool-os.h relies on a number of includes solely because the
pool_alloc_area() function was inlined, and this only because we want
the normal version to be inlined so that we can track the calling
places for the memory profiler. It's worth noting that it already
does not work at -O0, and that when UAF is enabled we don't care a
dime about profiling.
This patch does two things at once:
- force-inline the functions so that pool_alloc_area() is still
inlined at -O0 to help track malloc() users;
- uninline the UAF version of these (that rely on mmap/munmap)
and move them to pools.c so that we can remove all unneeded
includes.
Doing so reduces by ~270kB or 0.15% the total build size.
A number of files currently access activity counters but rely on their
definitions to be inherited from other files (task.c, backend.c, hlua.c,
sock.c, pool.c, stats.c, fd.c).
backend.c and all the muxes started manipulating ebmb_nodes with
the introduction of idle conns but the types were inherited through
other includes. Let's add ebmbtree.h there.
The various variable-to-sample converters allow a variable to be turned
into a sample of type string, sint or binary, but both the string one used
by strcmp() and the binary one used by secure_memcmp() are missing a
check on the cast function pointer (i.e. the ability to perform the cast),
making them crash if a variable of type addr is used with strcmp(), or if
an addr or bool is used with secure_memcmp().
Let's rely on the new sample_conv_var2smp() function to run the proper
checks.
This will need to be backported to all supported versions. It relies on
previous commits:
CLEANUP: server: always include the storage for SSL settings
CLEANUP: sample: rename sample_conv_var2smp() to *_sint
CLEANUP: sample: uninline sample_conv_var2smp_str()
MINOR: sample: provide a generic var-to-sample conversion function
For backports it's probably easier to check the sample_casts[] pointer
before calling it in sample_conv_strcmp() and sample_conv_secure_memcmp().
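For instance, the backport check could look like this (a minimal sketch
assuming the usual sample_casts[][] function-pointer table; SMP_T_BIN
applies to the secure_memcmp() case):
    /* hedged sketch: refuse the conversion when no cast function exists */
    if (!sample_casts[smp->data.type][SMP_T_STR])
        return 0;
    if (!sample_casts[smp->data.type][SMP_T_STR](smp))
        return 0;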
We're using variable-to-sample conversion at least 4 times in the code,
two of which are bogus. Let's introduce a generic conversion function
that performs the required checks.
This one only handles integers, contrary to its sibling with the suffix
_str that only handles strings. Let's rename it and uninline it since it
may well be used from outside.
The SSL stuff in struct server takes less than 3% of it and requires
lots of annoying ifdefs in the code just to take care of the cases
where the field is absent. Let's get rid of this and stop including
openssl-compat from server.c to detect NPN and ALPN capabilities.
This reduces the total LoC by another 0.4%.
Migrate the httpclient:get() method to named arguments so we can
specify optional arguments.
This allows headers to be passed as an optional argument, as an array.
The () in the method call must be replaced by {}:
local res = httpclient:get{url="http://127.0.0.1:9000/?s=99",
headers= {["X-foo"] = { "salt" }, ["X-bar"] = {"pepper" }}}
During httpclient_destroy, add a condition in the BUG_ON which checks
that the client was started before it has ended. An httpclient structure
could have been created without being started.
When using the lua httpclient, haproxy could crash because a b_xfer is
done in httpclient_xfer, which will do a zero-copy swap of the data in
the buffers. The ptr will then be free()'d by the pool.
However this can't work with a trash buffer, because the area was not
allocated from the buffer pool, so the pool is not supposed to free it
because it does not know this ptr; using -DDEBUG_MEMORY_POOLS will
result in a crash during the free.
Fix the problem by using b_force_xfer() instead of b_xfer, which copies
the data instead. The problem still exists with the trash however, and
the trash API must be reworked.
Implement the garbage collector of the lua httpclient.
This patch declares the __gc method of the httpclient object which only
does a httpclient_stop_and_destroy().
httpclient_stop_and_destroy() tries to destroy the httpclient structure
if the client was stopped.
In the case the client wasn't stopped, it asks the client to stop itself
and to destroy the httpclient structure itself during the release of the
applet.
httpclient_destroy() must free all the ists in the httpclient structure:
the URL in the request, and the vsn and reason in the response.
It also must free the list of headers of the response.
A bug was introduced by the commit 2d5650082 ("BUG/MEDIUM: http-ana: Reset
channels analysers when returning an error").
The request analyzers must be cleared when a redirect rule is applied. It is
not a problem if the redirect rule is inside an http-request ruleset because
the analyzer takes care to clear it. However, when it comes from a redirect
ruleset (via the "redirect ..." directive), because of the above commit,
the request analyzers are no longer cleared. It means some HTTP request
analyzers may be called while the request channel was already flushed. It is
totally unexpected and may lead to a crash.
Thanks to Yves Lafon for reporting the problem.
This patch must be backported everywhere the above commit was backported.
When a filter is attached to a stream, the wrong FLT_END analyzer is added
on the request channel. AN_REQ_FLT_END must be added instead of
AN_RES_FLT_END. Because of this bug, the stream may hang on the filter
release stage.
It seems to be ok for HTTP filters (cache & compression) in HTTP mode. But
when enabled on a TCP proxy, the stream is blocked until the client or the
server timeout expires because data forwarding is blocked. The stream is then
prematurely aborted.
This bug was introduced by commit 26eb5ea35 ("BUG/MINOR: filters: Always set
FLT_END analyser when CF_FLT_ANALYZE flag is set"). The patch must be
backported in all stable versions.
time.h is a horrible place to put activity calculation, it's a
historical mistake because the functions were there. We already have
most of the parts in sched.{c,h} and these ones make an exception in
the middle, forcing time.h to include some thread stuff and to access
the before/after_poll and idle_pct values.
Let's move these 3 functions to task.h with the other ones. They were
prefixed with "sched_" instead of the historical "tv_" which already
made no sense anymore.
I don't know why I inlined this one, this makes no sense given that it's
only used for stats, and it starts a circular dependency on tinfo.h which
can be problematic in the future. In addition, all the stuff related to
idle time calculation should be with the rest of the scheduler, which
currently is in task.{c,h}, so let's move it there.
We'll need to improve the API to pass other arguments in the future, so
let's start to adapt better to the current use cases. task_new() is used:
- 18 times as task_new(tid_bit)
- 18 times as task_new(MAX_THREADS_MASK)
- 2 times with a single bit (in a loop)
- 1 in the debug code that uses a mask
This patch provides 3 new functions to achieve this:
- task_new_here() to create a task on the calling thread
- task_new_anywhere() to create a task to be run anywhere
- task_new_on() to create a task to run on a specific thread
The change is trivial and will allow us to later concentrate the
required adaptations to these 3 functions only. It's still possible
to call task_new() if needed but a comment was added to encourage the
use of the new ones instead. The debug code was not changed and still
uses it.
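For illustration, the intended mapping looks like this (a sketch; exact
signatures may differ):
    struct task *t;

    t = task_new_here();        /* replaces task_new(tid_bit) */
    t = task_new_anywhere();    /* replaces task_new(MAX_THREADS_MASK) */
    t = task_new_on(2);         /* run exclusively on thread 2 */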
Work lists were a mechanism introduced in 1.8 to asynchronously delegate
some work to be performed on another thread via a dedicated task.
The only user was the listeners, to deal with the queue. Nowadays
the tasklets have made this much more convenient, and have replaced
work_lists in the listeners. It seems there will be no valid use case
of work lists anymore, so better get rid of them entirely and keep the
scheduler code cleaner.
__task_queue() must absolutely not be called with TICK_ETERNITY or it
will place a never-expiring node upfront in the timers queue, preventing
any timer from expiring until the process is restarted. Code was found
to cause this using "task_schedule(task, now_ms)", which does this during
the one millisecond every 49.7 days where now_ms is zero, so let's add a
condition against this. It
must never trigger since any process susceptible to trigger it would
already accumulate tasks until it dies.
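The added condition is essentially the following (a sketch; the real
check may be stricter):
    /* never queue a task whose expiration date is TICK_ETERNITY */
    BUG_ON(task->expire == TICK_ETERNITY);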
An extra test was added in wake_expired_tasks() to detect tasks whose
timeout would have been changed after being queued.
A future improvement could be to use a non-scalar type (union/struct)
for expiration dates so as to avoid the risk of using them directly
like this. But now_ms is already such a valid time and this specific
construct would still not be caught.
This could even be backported to stable versions to help detect other
occurrences if any.
For now, tcp-request and tcp-response content rules evaluation is
interrupted before the inspect-delay when the channel's buffer is full, the
RX path is blocked or when a shutdown for reads was received. To sum up, the
evaluation is interrupted when no more input data are expected. However, this
list is not exhaustive: no more input data are expected either when the end of
input is reached (CF_EOI flag set) or when a read error occurred
(CF_READ_ERROR flag set).
Note that, AFAIK, it is only a problem on HAProxy 2.3 and prior when an H1 to
H2 upgrade is performed. On newer versions, it works as expected because the
stream is not created at this stage.
This patch must be backported as far as 2.0.
During tcp/http check rules parsing, when a sample fetch or a log-format
string is parsed, the proxy's argument list used to track unresolved
argument is no longer passed for default proxies. It means it is no longer
possible to rely on sample fetches depending on the execution context (for
instance 'nbsrv').
It is important to avoid HAProxy crashes because these arguments are
resolved during the configuration validity check. But, default proxies are
not evaluated during this stage. Thus, these arguments remain unresolved.
It will probably be possible to relax this rule. But to ease backports, it
is forbidden for now.
This patch must be backported as far as 2.2. It depends on the commit
"MINOR: arg: Be able to forbid unresolved args when building an argument
list". It must be adapted for the 2.3 because PR_CAP_DEF capability was
introduced in the 2.4. A solution may be to test the proxy's id against NULL.
In make_arg_list() function, unresolved dependencies are pushed in an
argument list to be resolved later, during the configuration validity
check. It is now possible to forbid such unresolved dependencies by omitting
<al> parameter (setting it to NULL). It is useful when the parsing context
is not the same as the running context, or when the parsing context is lost
after the startup stage. For instance, an argument may be defined in
defaults section during parsing and executed in a frontend/backend section.
The Lua tasks registered via core.register_task() use a dangerous
task_schedule(task, now_ms) to start them, which will work by accident
most of the time, except when the time wraps every 49.7 days and now_ms
is 0, because it's not valid to queue a task with an expiration date set
to TICK_ETERNITY: it will fail all wakeup checks and prevent all
subsequent timers from being seen as expired. The only solution in this
case is to restart the process.
Fortunately for the vast majority of users it is extremely unlikely
to ever be met (only one millisecond every 49.7 days is at risk), but
this can be systematic for a process dealing with 1000 req/s, hence
the major tag.
The bug was introduced in 1.6-dev with commit 24f335340 ("MEDIUM: lua:
add coroutine as tasks."), so the fix must be backported to all stable
branches.
A time comparison was wrong in hlua_sleep_yield(), making the sleep()
code do nothing for periods of 24 days every 49 days. An arithmetic
comparison was performed on now_ms instead of using tick_is_expired().
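The shape of the fix, as a hedged sketch with invented names:
    /* illustrative sketch of the comparison change (names invented) */
    static int sleep_not_finished(unsigned int wakeup_ms, unsigned int now_ms)
    {
        /* buggy: "return now_ms < wakeup_ms;" breaks when now_ms wraps */
        return !tick_is_expired(wakeup_ms, now_ms);  /* wrapping-safe */
    }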
This bug was added in 1.6-dev by commit 5b8608f1e ("MINOR: lua: core:
add sleep functions") so the fix should be backported to all stable
versions.
In case of error while calling a SSL_read or SSL_write, the
SSL_get_error function is called in order to know more about the error
that happened. If the error code is SSL_ERROR_SSL or SSL_ERROR_SYSCALL,
the error queue might contain more information on the error. This error
code was not used until now. But we now need to store it in order for
backend error fetches to catch all handshake related errors.
The change was required because the previous backend fetch would not
have raised anything if the client's certificate was rejected by the
server (and the connection interrupted). This happens because starting
from TLS1.3, the 'Finished' state on the client is reached before its
certificate is sent to the server (see the "Protocol Overview" part of
RFC 8446). The only place where we can detect that the server rejected the
certificate is after the first SSL_read call after the SSL_do_handshake
function.
This patch then adds an extra ERR_peek_error after the SSL_read and
SSL_write calls in ssl_sock_to_buf and ssl_sock_from_buf. This means
that it could set an error code in the SSL context a long time after the
handshake is over, hence the change in the error fetches.
The ssl_bc_hsk_err sample fetch will need to raise more errors than only
handshake related ones hence its renaming to a more generic ssl_bc_err.
This patch is required because some handshake failures that should have
been caught by this fetch (verify error on the server side for instance)
were missed. This is caused by a change in TLS1.3 in which the
'Finished' state on the client is reached before its certificate is sent
(and verified) on the server side (see the "Protocol Overview" part of
RFC 8446).
This means that the SSL_do_handshake call is finished long before the
server can verify and potentially reject the client certificate.
The ssl_bc_hsk_err will then need to be expanded to catch other types of
errors.
This change is also applied to the frontend fetches (ssl_fc_hsk_err
becomes ssl_fc_err) and to their string counterparts.
In case of a connection error happening after the SSL handshake is
completed, the error code stored in the connection structure would not
always be set, hence having some connection failures being described as
successful in the fc_conn_err or bc_conn_err sample fetches.
The most common case in which it could happen is when the SSL server
rejects the client's certificate. The SSL_do_handshake call on the
client side would be successful because the client effectively sent its
client hello and certificate information to the server, but the next
call to SSL_read on the client side would raise an SSL_ERROR_SSL code
(through the SSL_get_error function) which is described in OpenSSL
documentation as a non-recoverable and fatal SSL error.
This patch ensures that in such a case, the connection's error code is
set to a special CO_ERR_SSL_FATAL value.
HAProxy only handles "chunked" encoding internally. Because it is a gateway,
we stated it was not a problem if unknown encodings were applied on a
message because it is the recipient responsibility to accept the message or
not. And indeed, it is not a problem if both the client and the server
connections are using H1. However, Transfer-Encoding headers are dropped
from H2 messages. It is not a problem for chunk-encoded payload because
dechunking is performed during H1 parsing. But, for any other encodings, the
xferred H2 message is invalid.
It is also a problem for internal payload manipulations (lua,
filters...). Because the TE request headers are now sanitized, unsupported
encodings should not be used by servers. Thus it is only a problem for the
request messages. For this reason, such messages are now rejected. And if a
server decides to use an unknown encoding, the response will also be
rejected.
Note that it is pretty uncommon to use encodings other than "chunked" on the
request payload. So it is not necessary to backport it.
This patch should fix the issue #1301. No backport is needed.
According to the RFC7230, "chunked" encoding must not be applied more than
once to a message body. To handle this case, h1_parse_xfer_enc_header() is
now responsible for failing when a parsing error is found. It also fails if the
"chunked" encoding is not the last one for a request.
To help the parsing, two H1 parser flags have been added: H1_MF_TE_CHUNKED
and H1_MF_TE_OTHER. These flags are set, respectively, when "chunked"
encoding and any other encoding are found. H1_MF_CHNK flag is used when
"chunked" encoding is the last one.
Only chunk-encoded response payloads are supported by HAProxy. All other
transfer encodings are not supported and will be an issue if the HTTP
compression is enabled. So be sure only "trailers" is sent in TE request
headers.
The patch is related to the issue #1301. It must be backported to all stable
versions. Be careful for 2.0 and lower because the legacy HTTP code must also be
fixed.
Transfer-Encoding header is not supported in HTTP/1.0. However, software
dealing with both HTTP/1.0 and HTTP/1.1 messages may accept it and transfer
it. When a Content-Length header is also provided, it must be
ignored. Unfortunately, this may lead to vulnerabilities (request smuggling
or response splitting) if an intermediary only implements HTTP/1.0,
because it may ignore the Transfer-Encoding header and only handle the
Content-Length one.
To avoid any security issues, when Transfer-Encoding and Content-Length
headers are found in a message, the close mode is forced. The same is
performed for HTTP/1.0 message with a Transfer-Encoding header only. This
change conforms to what is described in the latest HTTP/1.1 draft. See
also httpwg/http-core#879.
Note that Content-Length header is also removed from any incoming messages
if a Transfer-Encoding header is found. However it is not true (not yet) for
responses generated by HAProxy.
This kind of request is now forbidden and rejected with a
413-Payload-Too-Large error.
It is unexpected to have a payload for GET/HEAD/DELETE requests. It is
explicitly allowed in HTTP/1.1 even if some servers may reject such
requests. However, HTTP/1.0 is not clear on this point and some old servers
don't expect any payload and never look for body length (via Content-Length
or Transfer-Encoding headers).
It means that some intermediaries may properly handle the payload for
HTTP/1.0 GET/HEAD/DELETE requests, while some others may totally ignore
it. That may lead to security issues because a request smuggling attack is
possible.
To prevent any issue, those requests are now rejected.
See also httpwg/http-core#904
When a parsing error is triggered, the status code may be customized by
setting H1C .errcode field. By default a 400-Bad-Request is returned. The
function h1_handle_bad_req() has been renamed to h1_handle_parsing_error()
to be more generic.
In h1_ctl(), if output parameter is provided when MUX_EXIT_STATUS is
returned, it is used to set the error code. In addition, any client errors
(4xx), except for 408 ones, are handled as invalid errors
(MUX_ES_INVALID_ERR). This way, it will be possible to customize the parsing
error code for request messages.
The mux .ctl callback can provide some information about the mux to the
caller if the third parameter is provided. Thus, when MUX_EXIT_STATUS is
retrieved, a pointer to the status is now passed. The mux may fill it. It
will be pretty handy to provide custom error code from h1 mux instead of
default ones (400/408/500/501).
The startup code was still ugly with tons of unreadable nested ifdefs.
Let's just have one function to set up the extra threads and another one
to wait for their completion. The ifdefs are isolated into their own
functions now and are more readable, just like the end of main(), which
now uses the same statements to start thread 0 with and without threads.
Till now the threads startup was quite messy:
- we would start all threads but one
- then we would change all threads' CPU affinities
- then we would manually start the poll loop for the current thread
Let's change this by moving the CPU affinity setting code to a function
set_thread_cpu_affinity() that does this job for the current thread only,
and that is called during the thread's initialization in the polling loop.
It takes care of not doing this for the master, and will result in all
threads being properly bound earlier and with cleaner code. It also
removes some ugly nested ifdefs.
Probably because of some copy-paste from "nbproc", "nbthread" used to
be parsed in cfgparse instead of using a registered parser. Let's fix
this to clean up the code base now.
ASAN reported a buffer overflow in the httpclient. This overflow is the
consequence of ist0() which is incorrect here.
Replace all occurrences of ist0() by istptr(), which is more appropriate
here since all the ists in the httpclient were created from strings.
src/hlua.c:7074:6: error: variable 'url_str' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
if (lua_type(L, -1) == LUA_TSTRING)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/hlua.c:7079:36: note: uninitialized use occurs here
hlua_hc->hc->req.url = istdup(ist(url_str));
^~~~~~~
Return an error on the stack if the argument is not a string.
Provide a new field "headers" in the response of the HTTPClient, which
contains all headers of the response.
This field is a multi-dimensional table which could be represented this
way in lua:
headers = {
["content-type"] = { "text/html" },
["cache-control"] = { "no-cache" }
}
This commit provides an hlua_httpclient object which is a bridge between
the httpclient and the lua API.
The HTTPClient is callable in lua this way:
local httpclient = core.httpclient()
local res = httpclient:get("http://127.0.0.1:9000/?s=9999")
core.Debug("Status: ".. res.status .. ", Reason : " .. res.reason .. ", Len:" .. string.len(res.body) .. "\n")
The resulting response object will provide a "status" field which
contains the status code, a "reason" string which contains the reason
string, and a "body" field which contains the response body.
The implementation uses the httpclient callback to wake up the lua task
which yields each time it pushes some data. The httpclient works in the
same thread as the lua task.
According to the W3C Media Queries Level 5 specification, media queries
allow the browser to enable some CSS when dark mode is enabled. This
patch defines dark mode CSS for the stats page.
https://www.w3.org/TR/mediaqueries-5/#prefers-color-scheme
A bug was introduced in the commit cff0f739e5 ("MINOR: counters: Review
conditions to increment counters from analysers"). The internal_errors
counter for the target server was incremented twice. The counter for the
session listener needs to be incremented instead.
This must be backported everywhere the commit cff0f739e5 is.
The transient flag CO_RFL_BUF_NOT_STUCK should now be set when the mux's
rcv_buf() function is called, in si_cs_recv(), to be sure the mux is able to
perform some optimisation during data copy. This flag is set when we are
sure the channel buffer is not stuck. Concretely, it happens when there are
data scheduled to be sent.
It is not a fix and this flag is not used for now. But it makes sense to have
this info to be sure to be able to do some optimisations if necessary.
This patch is related to the issue #1362. It may be backported to 2.4 to
ease future backports.
The stream interface is now responsible for defragmenting the HTX message of
the input channel if necessary, before calling the mux's .rcv_buf()
function. The defrag is performed if the underlying buffer contains only
input data while the HTX message free space is not contiguous.
The defrag is important here to be sure the mux and the app layer have the
same criteria to decide if a buffer is full or not. Otherwise, the app layer
may wait for more data because the buffer is not full while the mux is
blocked because it needs more space to proceed.
This patch depends on the following commits:
* MINOR: htx: Add an HTX flag to know when a message is fragmented
* MINOR: htx: Add a function to know if the free space wraps
This patch is related to the issue #1362. It may be backported as far as 2.0
after some observation period (not sure it is required or not).
HTX_FL_FRAGMENTED flag is now set on an HTX message when it is
fragmented. It happens when an HTX block is removed in the middle of the
message and flagged as unused. HTX_FL_FRAGMENTED flag is removed when all
data are removed from the message or when the message is defragmented.
Note that some optimisations are still possible because the flag can be
avoided in other situations. For instance when the last header of a bodyless
message is removed.
In si_cs_recv(), some CO_RFL flags are set when the mux's .rcv_buf()
function is called. Some are persistent within the si_cs_recv() scope, while
others must be computed at each call to rcv_buf(). This patch takes care of
distinguishing them.
Among others, CO_RFL_KEEP_RECV is a persistent flag while CO_RFL_BUF_WET is
transient.
If the stream-interface is waiting for more buffer room to store incoming
data, it is important at the stream level to stop waiting for more data in
order to continue. Thanks to the previous patch ("BUG/MEDIUM: stream-int: Notify
stream that the mux wants more room to xfer data"), the stream is woken up
when this happens. In this patch, we take care to interrupt the
corresponding tcp-content ruleset or to stop waiting for the HTTP message
payload.
To ease detection of the state, si_rx_blocked_room() helper function has
been added. It returns non-zero if the stream interface's Rx path is blocked
because of lack of room in the input buffer.
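The helper boils down to a flag test, along these lines (flag name
assumed from the existing SI_FL_RXBLK_* family):
    static inline int si_rx_blocked_room(const struct stream_interface *si)
    {
        /* non-zero if Rx is blocked by a lack of room in the input buffer */
        return !!(si->flags & SI_FL_RXBLK_ROOM);
    }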
This patch is part of a series related to the issue #1362. It should be
backported as far as 2.0, probably with some adaptations. So be careful
during backports.
When the mux failed to transfer data to the upper layer because of a lack of
room, it is important to wake the stream up to let it handle this
event. Otherwise, if the stream is waiting for more data, both the stream
and the mux remain blocked waiting for each other.
When this happens, the mux sets the CS_FL_WANT_ROOM flag on the
conn-stream. Thus, in si_cs_recv() we are able to detect this event. Today,
the stream-interface is blocked. But, it is not enough to wake the stream
up. To fix the bug, CF_READ_PARTIAL flag is extended to also handle cases
where a read exception occurred. This flag should ideally be renamed. But for
now, it is good enough. By setting this flag, we are sure the stream will be
woken up.
This patch is part of a series related to the issue #1362. It should be
backported as far as 2.0, probably with some adaptations. So be careful
during backports.
When a message is parsed and copied into the channel buffer, in
h1_process_demux(), more space is requested if some pending data remain
after the parsing while the channel buffer is not empty. To do so,
CS_FL_WANT_ROOM flag is set. It means the H1 parser needs more space in the
channel buffer to continue. In the stream-interface, when this flag is set,
the SI is considered as blocked on the RX path. It is only unblocked when
some data are sent.
However, it is not accurate because the parsing may be stopped because
there is not enough data to continue. For instance in the middle of a chunk
size. In this case, some data may have been already copied but the parser is
blocked because it must receive more data to continue. If the calling SI is
blocked on RX at this stage when the stream is waiting for the payload
(because http-buffer-request is set for instance), the stream remains stuck
infinitely.
To fix the bug, we must request more space from the app layer only when it is
not possible to copy more data. Actually, this happens when data remain in
the input buffer while the H1 parser is in states MSG_DATA or MSG_TUNNEL, or
when we are unable to copy headers or trailers into a non-empty buffer.
The first condition is quite easy to handle. The second one requires an API
refactoring. h1_parse_msg_hdrs() and h1_parse_msg_tlrs() functions have been
updated. Now it is possible to know when we need more space in the buffer to
copy headers or trailers (-2 is returned). In the H1 mux, a new H1S flag
(H1S_F_RX_CONGESTED) is used to track this state inside h1_process_demux().
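A sketch of the caller side of the new convention (the call itself is
approximated and not the exact signature):
    /* sketch: caller side of the new convention (call approximated) */
    ret = h1_parse_msg_hdrs(h1m, h1sl, htx, &h1c->ibuf, ofs, max);
    if (ret == -2) {
        /* headers do not fit into a non-empty buffer: blocked on room */
        h1s->flags |= H1S_F_RX_CONGESTED;
    }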
This patch is part of a series related to the issue #1362. It should be
backported as far as 2.0, probably with some adaptations. So be careful
during backports.
In h1_postparse_req_hdrs(), if we need more space to copy headers, the request
parser is reset. However, because of a typo, it was reset as a response parser
instead of a request one. h1m_init_req() must be called.
This patch must be backported as far as 2.2.
We wake up the xprt as soon as STREAM frames have been pushed to
the TX mux buffer (->tx.buf).
We also make the mux subscribe() to the xprt layer if some data
remain in its ring buffer after having tried to transfer them to the
xprt layer (TX mux buffer for the stream full).
Also, do not consider a buffer in the ring if it is not allocated (see the
b_size(buf) condition in the for(;;) loop).
Make a call to qc_process_mux() if possible when entering qc_send() to
fill the mux with data from streams in the send or flow control lists.
The FIN of a STREAM frame to be built must be set if there is no more
data at all in the ring buffer.
Do not do anything if there is nothing to transfer to the ->tx.buf mux
buffer via b_force_xfer() (without zero copy).
When ACKs have been received by the xprt, it must wake up the
mux if this latter has subscribed to SEND events. This is the
role of qcs_try_to_consume() to detect such a situation. This
is the function which consumes the buffer filled by the mux.
It is important to know if the packet number spaces used during the
handshakes have really been discarded. If not, this may have a
significant impact on the packet loss detection.
There were cases where the Initial packet number space was not discarded.
This led the packet loss detection to continue to take it into
consideration during the connection lifetime. Some Application level
packets could not be retransmitted.
QUIC_FL_TX_PACKET_ACK_ELICITING was replaced by QUIC_FL_RX_PACKET_ACK_ELICITING
by this commit due to a copy and paste:
e5b47b637 ("MINOR: quic: Add a mask for TX frame builders and their authorized packet types")
Furthermore the flags for the PADDING frame builder were not initialized.
The STREAM data to send coming from the upper layer must be stored until
having been acked by the peer. To do so, we store them in buffer structs,
one per stream (see qcs.tx.buf). Each time a STREAM frame is built by
quic_push_frame(), its offset must match the offset of the first byte added
to the buffer (modulo the size of the buffer) by the frame. As they are not
always acknowledged in order, they may be stored in eb_trees ordered by their
offset to be sure to sequentially delete the STREAM data from their buffer,
in the order they have been added to it.
The peer transport parameter values were not initialized with
the default ones (when absent), especially the
"active_connection_id_limit" parameter with 2 as default value
when absent from received remote transport parameters. This
had as side effect to send too many NEW_CONNECTION_ID frames.
This was the case for curl which does not announce any
"active_connection_id_limit" parameter.
Also rename ->idle_timeout to ->max_idle_timeout to reflect the RFC9000.
These salts are used to derive initial secrets to decrypt the first Initial packet.
We support draft-29 and v1 QUIC version initial salts.
Add parameters to our QUIC-TLS API functions used to derive these secrets
from these salts.
Make our xprt_quic use the correct initial salt based on the QUIC version field
found in the first packet. Useful to support connections with curl, which uses
the draft-29 QUIC version.
Move the "ACK required" bit from the packet number space to the connection level.
Force the "ACK required" option when acknowlegding Handshake or Initial packet.
A client may send three packets with a different encryption level for each. So,
this patch modifies qc_treat_rx_pkts() to consider two encryption levels passed
as parameters, in place of only one.
Make qc_conn_io_cb() restart its process after the handshake has succeeded
so as to process any Application level packets which have already been received
in the same datagram as the last CRYPTO frames in Handshake packets.
We must take as much data as possible from STREAM frames to be encapsulated
in QUIC packets, much as is done for CRYPTO frames whose fields are
variable length fields. The difference is that STREAM frames are only accepted
for short packets without any "Length" field. So it is sufficient to call
max_available_room() for that in place of max_stream_data_size() as this
is done for CRYPTO data.
It is possible the TLS stack provides us with 1-RTT TX secrets
at the same time as Handshake secrets are provided. Thanks to this
simple patch we can build Application level packets during the handshake.
Make qc_prep_hdshk_pkts() and quic_conn_io_cb() handle the case
where we enter them with QUIC_HS_ST_COMPLETE or QUIC_HS_ST_CONFIRMED
as connection state with QUIC_TLS_ENC_LEVEL_APP and QUIC_TLS_ENC_LEVEL_NONE
to consider to prepare packets.
quic_get_tls_enc_levels() is modified to return QUIC_TLS_ENC_LEVEL_APP
and QUIC_TLS_ENC_LEVEL_NONE as levels to consider when coalescing
packets in the same datagram.
With very few packets received by the listener, it is possible
that its state may move from QUIC_HS_ST_SERVER_INITIAL to
QUIC_HS_ST_COMPLETE without a transition through the QUIC_HS_ST_SERVER_HANDSHAKE
state. The latter state is not mandatory.
This simply enables us to coalesce Application level packets with
Handshake ones at the end of the handshake. This is highly useful
if we want to send a short Handshake packet followed by Application
level ones.
We must evaluate the packet lengths in advance to be sure we do not
consume a packet number for nothing. The packet building must always
succeed. This is the role of qc_eval_pkt() implemented by this patch,
called before calling qc_do_build_pkt() which was previously modified to
always succeed.
There were cases where the encoded size of acks was not updated, leading
to ACK frames being built too big compared to the expected size. At this
time, this makes the code "BUG_ON()".
Rename qc_build_hdshk_pkt() to qc_build_pkt() and qc_do_build_hdshk_pkt()
to qc_do_build_pkt().
Update their comments consequently.
Make qc_do_build_hdshk_pkt() BUG_ON() when it does not manage to build
a packet. This is a bug!
Remove the functions which were specific to the Application level.
It is now the same function which builds any packet for any encryption
level: quic_prep_hdshk_pkts(), directly called from quic_conn_io_cb().
There is no need to pass a copy of CRYPTO frames to qc_build_frm() from
qc_do_build_hdshk_pkt(). Furthermore, after the previous modifications,
qc_do_build_hdshk_pkt() no longer builds only CRYPTO frames from the
->pktns.tx.frms MT_LIST but any type of frame.
Atomically increase the "next packet variable" before building a new packet.
Make the code bug on a packet building failure. This should never happen
if we do not want to consume a packet number for nothing. There are remaining
modifications to come to ensure this is the case.
Modify this task, which is called at least each time a packet is received by a
listener, so as to make it behave almost like qc_do_hdshk(). The latter is no
longer useful and is removed.
This function was responsible for building CRYPTO frames to fill as much as
possible a packet passed as argument. This patch makes it support any frame
except STREAM frames whose lengths are highly variable.
We want to treat all the frames to be built the same way as frames
built during the handshake (CRYPTO frames). So, let's store them at the same
place which is an MT_LIST.
As this has been done for RX frame parsers, we add a mask for each TX frame
builder to denote the packet types which are authorized to embed such frames.
Each time a TX frame builder is called, we check that its mask matches the
packet type the frame is built for.
These structures are similar. quic_tx_frm was there to try to reduce the
size of such objects which embed a union for all the QUIC frames.
Furthermore this patch fixes the issue where quic_tx_frm objects were freed
from the pool for quic_frame.
Make quic_rx_packet_ref(inc|dec)() functions be thread safe.
Make use of ->rx.crypto.frms_rwlock RW lock when manipulating RX frames
from qc_treat_rx_crypto_frms().
Modify atomically several variables attached to RX part of quic_enc_level struct.
->rx.crypto member of quic_enc_level struct was not initialized, contrary
to all the other members of this structure. This patch fixes this.
Also add a RW lock for the frames of this member.
If we let the connection packet handler task (quic_conn_io_cb) process the first
client Initial packet, which contains the TLS Client Hello message, before the mux
context is initialized, quic_mux_transport_params_update() makes haproxy crash.
The ->start xprt callback already wakes up this task and is called after all the
connection contexts are initialized. So, this patch does not wake up quic_conn_io_cb()
if the mux context is not initialized (this was already the case for the connection
context (conn_ctx)).
If we add TX packets to their trees before sending them, they may
be detected as lost before being sent. This may make haproxy crash
when it retrieves the prepared packets from TX ring buffers, dereferencing
them after they have been freed.
We use only ring buffers (struct qring) to prepare and send QUIC datagrams.
We can safely remove the old buffering implementation which was not thread safe.
We modify the functions responsible for building packets so that they put them
in ring buffers (qc_build_hdshk_pkt() during the handshake step, and
qc_build_phdshk_apkt() during the post-handshake step). These functions
remove a ring buffer from its list to build as many datagrams as possible.
Each datagram is prepended with two fields: the datagram length and the
first packet in the datagram. We chain the packets belonging to the same datagram
in a singly linked list to reach them from the first one: indeed we must
modify some members of each packet when we really send them from send_ppkts().
This function is also modified to retrieve the datagrams from ring buffers.
We initialize the pointer to the listener TX ring buffer list.
Note that this is not done for QUIC clients as we do not fully support them:
we only have to allocate the list and attach it to the server struct I guess.
We allocate an array of QUIC ring buffers, one per thread, and arrange them in an
MT_LIST. Everything is allocated or nothing: we do not want to use an incomplete
array of ring buffers, to ensure that each thread may safely acquire one of these
buffers.
Before this patch we reserved 16 bytes (QUIC_TLS_TAG_LEN) before building the
handshake packet, to be sure to be able to add the tag which comes with
the packet encryption, decreasing the end offset of the building buffer by 16 bytes.
But this tag length was taken into account when calling qc_build_frms(), which
computes and builds crypto frames for the remaining available room thanks to the
<*len> parameter, which is the length of the bytes already present in the building
buffer before adding CRYPTO frames. This led us to waste the last 16 bytes of the
buffer, which were never used.
This makes at least our listeners answer ngtcp2 clients without a
HelloRetryRequest message. It seems the server chooses the first
group matching the client's ones in the group list ordered by
preference and set by SSL_CTX_set1_curves_list().
This implementation is inspired from Linux kernel circular buffer implementation
(see include/linux/circ-buf.h). Such buffers may be used at the same time both
by writer and reader (lock-free).
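For reference, the two macros this borrows from include/linux/circ_buf.h;
with a power-of-two size, a single writer and a single reader can proceed
without locks:
    #define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size)-1))
    #define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head)+1), (size))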
Modify the I/O dgram handler's principal function used to parse QUIC packets
to be thread safe. Its role is at least to create new incoming connections
and to add them to two trees protected by the same RW lock. The packets are
from now on fully parsed before possibly creating new connections.
Allocate everything needed for a connection (struct quic_conn) from the same
function.
Rename qc_new_conn_init() to qc_new_conn() to reflect these modifications.
Insert these connection objects in their tree after returning from this function.
Some SSL callbacks may be called with a pointer to an ssl_sock_ctx struct as
parameter, which does not match the quic_conn_ctx struct type (see ssl_sock_infocb()).
I am not sure we have to keep such callbacks for QUIC but we must ensure
the SSL and QUIC xprts use the same data structure as context.
Move the connection state from quic_conn_ctx struct to quic_conn struct which
is the structure which is used to store the QUIC connection part information.
This structure is initialized by the I/O dgram handler for each new connection
to QUIC listeners. This is needed for the multithread support, so as not
to have to depend on the connection context potentially initialized by another
thread.
We must protect against concurrent accesses the tree which stores the QUIC
packets received by the dgram I/O handler, these packets being also parsed by
the xprt task.
No need to call free_quic_rx_packet() after calling quic_rx_packet_eb64_delete()
as the latter already calls quic_rx_packet_refdec(), which is also called by
free_quic_rx_packet().
Let's say that we have to insert a range R between two others A and B
with A->first <= R->first <= B->first. We have to remove the ranges
which are overlapped by R during the insertion. This was correctly done
when the intersection between A and R was not empty, but not when the
intersection between R and B was not empty. In this latter case,
after having inserted a new range R, we set the <new> variable as the
node to consider when checking the overlapping between R and its
following ranges.
Make qc_new_isecs() depend only on the quic_conn struct initialization (no more
dependency on the connection struct initialization) to be able to run it as soon
as the quic_conn struct is initialized (from the I/O handler), before running the
->accept() quic proto callback.
We remove the header protection of packets only for connections with an already
initialized context; the latter keeps track of the connection state.
Furthermore, we enqueue the first Initial packet for a new connection only
after having completely parsed the packet, so as not to start the accept
process for nothing.
Move the QUIC conn (struct quic_conn) initialization from quic_sock_accept_conn()
to qc_lstnr_pkt_rcv() as this is done for the server part.
Move the timer initialization to the ->start xprt callback to ensure the connection
context is done: it is initialized by the ->accept callback, which may be run
by another thread than the one for the I/O handler, which also runs ->start.
Move the call to SSL_set_quic_transport_params() from the listener I/O dgram
handler to the ->init() callback of the xprt (qc_conn_init()), which initializes
its context where the SSL context itself, needed by
SSL_set_quic_transport_params(), is stored. Furthermore this is already what is
done for the server counterpart of the ->init() QUIC xprt callback. As ->init()
may be run by another thread than the one for the I/O handler, the xprt context
could potentially not yet be initialized when calling
SSL_set_quic_transport_params() from the I/O handler.
The name of the maximum packet size transport parameter was ambiguous and was
replaced by maximum UDP payload size. Our code would also be ambiguous if it
did not reflect this change.
Set the streams transport parameters which could not be initialized because they
were not available during initialization. Indeed, the streams transport parameters
are provided by the peer during the handshake.
Really signal the caller that ->accept() has failed if the session could not
be initialized because conn_complete_session() has failed. This is the case
if the mux could not be initialized too.
When it fails, ->accept() must return -1 in case of resource shortage.
Deactivate the action of this callback at this time. I am not sure
we will keep it for QUIC as it does not really make sense for QUIC:
the QUIC packets are already recvfrom()'ed by the low level I/O handler
used for all the connections.
This file has been derived from mux_h2.c removing all h2 parts. At
QUIC mux layer, there must not be any reference to http. It will be the
responsibility of the application layer (h3) to open streams handled by the mux.
We move ->params transport parameters to ->rx.params. They are the
transport parameters which will be sent to the peer, and used for
the endpoint flow control. So, they will be used to receive packets
from the peer (RX part).
Also move ->rx_tps transport parameters to ->tx.params. They are the
transport parameters which are sent by the peer, and used to respect
its flow control limits. So, they will be used when sending packets
to the peer (TX part).
This bug may occur when displaying streams traces. It came with this commit:
242fb1b63 ("MINOR: quic: Drop packets with STREAM frames with wrong direction.").
An optimization was brought in commit 5064ab6a9 ("OPTIM: lb-leastconn:
do not unlink the server if it did not change") to avoid locking the
server just to discover it did not move. However a mistake was made
because the operation involves a divide with a value that is read
outside of its usual lock, which makes it possible to be zero at the
exact moment we watch it if another thread takes the server down under
the lbprm lock, resulting in a divide by zero.
Therefore we must check that the value is not null there.
This must be backported to 2.4.
The "process" directive on "bind" lines becomes quite confusing considering
that the only allowed value is 1 for the process, and that threads are
optional and come after the mandatory "1/".
Let's introduce a new "thread" directive to directly configure thread
numbers, and mark "process" as deprecated. Now "process" will emit a
warning and will suggest how to be replaced with "thread" instead.
The doc was updated accordingly (mostly a copy-paste of the previous
description which was already up to date).
This is marked as MEDIUM as it will impact users having "zero-warning"
and "process" specified.
Enable the 'slowstart' keyword for dynamic servers. The slowstart task
is allocated in 'add server' handler if slowstart is used.
As the server is created in disabled state, there is no need to start
the task. The slowstart task will be automatically started on the first
'enable server' invocation.
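A hypothetical CLI sequence (backend/server names are examples):
    echo "add server be1/srv1 192.0.2.1:80 slowstart 20s" | socat stdio /var/run/haproxy.sock
    echo "enable server be1/srv1" | socat stdio /var/run/haproxy.sock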
'slowstart' can be used without check on a server, with the CLI handlers
'enable/disable server'. Move the code to initialize and start the
slowstart task outside of check.c.
This change will also be reused to enable slowstart for dynamic servers.
Allow the use of the check-related keywords defined in server.c. These
keywords can be enabled now that checks have been implemented for
dynamic servers.
Here is the list of the newly supported keywords:
- error-limit
- observe
- on-error
- on-marked-down
- on-marked-up
Allow to configure ssl support for dynamic server checks independently
of the ssl server configuration. This is done via the keyword
"check-ssl". Also enable to configure the sni/alpn used for the check
via "check-sni/alpn".
The ssl context is not initialized for a dynamic server, even if there
is a tcpcheck rule which uses ssl on the related backend. This causes
the check initialization to fail with the message:
"Out of memory when initializing an SSL connection"
This can be reproduced by having the following config in the backend:
option tcp-check
tcp-check connect ssl
and create a dynamic server with check activated and a ca-file.
Fix this by calling the prepare_srv xprt callback when the proxy option
PR_O_TCPCHK_SSL is set.
Check support for dynamic servers has been merged in the current branch.
No backport needed.
Test that checks have been configured on the server before enabling them via
the 'enable health' CLI command. This mirrors the 'enable agent' command.
Without this, a user can use the command on the server without checks.
This leaves the server in an undefined state. Notably, the stat page
reports the server in check transition.
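The guard is conceptually the following (a sketch; flag and helper names
follow the usual check/CLI API but may not match the patch exactly):
    if (!(sv->check.state & CHK_ST_CONFIGURED))
        return cli_err(appctx, "Health checks are not configured on this server.\n");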
This condition was lost in the following reorg commit:
2c04eda8b5
REORG: cli: move "{enable|disable} health" to server.c
This should be backported up to 1.8.
The issue was introduced by the commit c41d8bd65 ("CLEANUP: flt-trace:
Remove unused random-parsing option").
This must be backported everywhere the above commit is.
appctx_new() is exclusively called with tid_bit and it only uses the
mask to pass it to the accompanying task. There is no point requiring
the caller to know about a mask there, nor is there any point in
creating an applet outside of the context of its own thread anyway.
Let's drop this and pass tid_bit to task_new() directly.
Ilya reports in GH #1392 that clang 13 complains about totlen being
calculated and not used in fd_write_frag_line(), which is true. It's
a leftover of some older code.
Ilya reports in GH #1392 that clang 13 complains about a flag being added
to the "flags" parameter without being used later. That's generic code
that was shared from TCP but we can indeed drop this flag since it's used
for TFO which we don't have in socketpairs.
The CLI's payload parser is over-complicated and as such contains more
bugs than needed. One of them is that it uses strstr() to find the
ending tag, ignoring spaces before it, while the argument locator
creates a new arg on each space, without checking if the end of the
word appears past the previously found end. This results in "<<" being
considered as the start of a new argument if preceded by more than
one space, and the payload being damaged with a \0 inserted at the
first space or tab.
Let's make an easily backportable fix for now. This fix makes sure that
the trailing zero from the first line is properly kept after '<<' and
that the end tag is looked for only as an isolated argument and nothing
else. This also gets rid of the unsuitable strstr() call and now makes
sure that strcspn() will not return elements that are found in the
payload.
For the long term the loop must be rewritten to get rid of those
unsuitable strcspn() and strstr() calls which work past each other, and
the cli_parse_request() function should be split into a tokenizer and
an executor that are used from the caller instead of letting the caller
play games with what it finds there.
This should be backported wherever CLI payload is supported, i.e. 2.0+.
Move the code to allocate/free the mux cleanup task outside of the polling
loop. A new thread_alloc/free handler is registered for this in
connection.c.
This has the benefit of cleaning up the polling loop code. And as another
benefit, if the task allocation fails, the handler can report an error
to exit the haproxy process. This prevents a potential null pointer
dereferencing.
This should fix the github issue #1389.
This must be backported up to 2.4.
When the LDAP response is parsed, the message length is not properly
decoded. While it works for LDAP servers encoding it on 1 byte, it does not
work for those using a multi-byte encoding. Among others, Active Directory
servers seem to encode message or element lengths on 4 bytes.
In this patch, we only handle the length of BindResponse messages encoded on 1,
2 or 4 bytes. In theory, it may be encoded on any number of bytes lower than
127. But it is useless to make this part too complex. It should be ok this
way.
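As an illustration of the decoding described above, a self-contained
sketch (not the haproxy code itself) handling the BER short form and long
forms up to 4 length octets:
    #include <stddef.h>
    #include <stdint.h>

    /* returns the number of bytes consumed, or -1 on unsupported input */
    static int ber_decode_len(const unsigned char *p, size_t avail, uint64_t *len)
    {
        size_t n, i;

        if (avail < 1)
            return -1;
        if (!(p[0] & 0x80)) {          /* short form: length fits in 7 bits */
            *len = p[0];
            return 1;
        }
        n = p[0] & 0x7f;               /* long form: <n> length octets follow */
        if (n == 0 || n > 4 || avail < n + 1)
            return -1;                 /* indefinite or oversized: not handled */
        for (*len = 0, i = 1; i <= n; i++)
            *len = (*len << 8) | p[i];
        return (int)(1 + n);
    }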
This patch should fix the issue #1390. It should be backported to all stable
versions. While it should be easy to backport it as far as 2.2, the patch
will have to be totally rewritten for lower versions.
Ilya reported in issue #1391 a build warning on Fedora about mallinfo()
being deprecated in favor of mallinfo2() since glibc-2.33. Let's add
support for it. This should be backported where the following commit is
also backported: 157e39303 ("MINOR: pools: automatically disable
malloc_trim() with external allocators").
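The switch is typically a build-time conditional of this shape (guard
macro name illustrative):
    #if defined(HA_HAVE_MALLINFO2)          /* illustrative guard */
        struct mallinfo2 mi = mallinfo2();  /* glibc >= 2.33 */
    #else
        struct mallinfo mi = mallinfo();    /* deprecated but portable */
    #endif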
If an error was already reported on the H1 connection, pending input data
must not be (re)evaluated in h1_process(). Otherwise an unexpected internal
error will be reported, in addition to the first one. And in some
conditions, this may generate an infinite loop because the mux tries to send
an internal error but fails to do so, thus it loops to retry.
This patch should fix the issue #1356. It must be backported to 2.4.
The "unresolved" variable is unused since commit 9fa0df5 ("BUG/MINOR: acl:
Fix freeing of expr->smp in prune_acl_expr").
This patch should fix the issue #1359.
Pierre Cheynier reported some occasional crashes in malloc_trim() on a
recent glibc when running with jemalloc. While in theory there should
not be any link between the two, it remains plausible that something
allocated early with one is tentatively freed with the other and that
attempts to trim end up badly. There's no point calling the glibc specific
malloc_trim() with external allocators anyway. However these ones are often
enabled at link time or even at run time with LD_PRELOAD, so we cannot rely
on build options for this.
This patch implements runtime detection for the allocator in use by checking
with mallinfo() that a malloc() call is properly accounted for in glibc's
malloc. It only enables malloc_trim() in this case, and ignores it for
other cases. It's fine to proceed like this because mallinfo() is provided
by a wider range of glibcs than malloc_trim().
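A sketch of the detection idea (not the exact code): allocate a block and
check whether glibc's accounting moved.
    #include <malloc.h>
    #include <stdlib.h>

    /* returns non-zero if malloc() is served by glibc's own allocator */
    static int malloc_is_glibc(void)
    {
        struct mallinfo mi1, mi2;
        void *p;

        mi1 = mallinfo();
        p = malloc(4096);
        mi2 = mallinfo();
        free(p);
        /* an external allocator leaves glibc's counters untouched */
        return (mi1.uordblks != mi2.uordblks) || (mi1.hblkhd != mi2.hblkhd);
    }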
This could be backported to 2.4 and 2.3. If so, it will also need previous
patch "CLEANUP: pools: factor all malloc_trim() calls into trim_all_pools()".
The sizeof() was printed as a long but it's just an unsigned on some
32-bit platforms, hence the format warning. No backport is needed, as
this arrived in 2.5 with commit 40ca09c7b ("MINOR: sample: Add be2dec
converter").
A bug was introduced by the commit 26eb5ea35 ("BUG/MINOR: filters: Always
set FLT_END analyser when CF_FLT_ANALYZE flag is set"). Depending on the
channel evaluated, the right FLT_END analyser must be set. AN_REQ_FLT_END
for the request channel and AN_RES_FLT_END for the response one.
This patch must be backported everywhere the above commit was backported.
When an error is returned to the client, via a call to
http_reply_and_close(), the request channel is flushed and shut down and
HTTP analysis on both direction is finished. So it is safer to centralize
reset of channels analysers at this place. It is especially important when a
filter is attached to the stream when a client abort is detected. Because,
otherwise, the stream remains blocked because request analysers are not
reset.
This bug was hidden for a while. But since the fix 6fcd2d328 ("BUG/MINOR:
stream: Don't release a stream if FLT_END is still registered"), it is
possible to trigger it.
This patch must be backported everywhere the above commit was backported.
If the end of input is reported by the mux on the conn-stream during a
receive, we leave without evaluating the channel policies. It is especially
important to be able to catch client aborts during server connection
establishment. Indeed, in this case, without this patch, the
stream-interface remains blocked and read events are not forwarded to the
stream. It means it is not possible to detect client aborts.
Thanks to this fix, the abortonclose option should be fixed for HAProxy 2.3 and
lower. On 2.4 and 2.5, it seems to work because the stream is created after
the request parsing.
Note that a previous fix of abortonclose option was reverted. This one
should be the right way to fix it. It must carefully be backported as far as
2.0. An observation period on the 2.3 is probably a good idea.
Now, "Upgrade:" header is removed from such requests. Thus, the condition to
reject them is now useless and can be removed. Code to handle unimplemented
features is now unused but is preserved for future uses.
This patch may be backported to 2.4.
Instead of returning a 501-Not-implemented error when the "Upgrade:" header
is found for a request with a payload, the header is removed. This way, the
upgrade is disabled and the request is still sent to the server. It is
required because some frameworks seem to try to perform H2 upgrade on every
request, including POST ones.
The h2 mux was slightly fixed to convert Upgrade requests to extended
connect ones only if the right HTX flag is set.
This patch should fix the issue #1381. It must be backported to 2.4.
The sole purpose of the variable's usage accounting is to enforce
limits at the session or process level, but very commonly these are not
set, yet the bookkeeping (especially at the process level) is extremely
expensive.
Let's simply disable it when the limits are not set. This further
increases the performance of a config with 12 variables on a 16-thread
machine from 1.06M to 1.24M req/s.
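For reference, these are the limits in question; the bookkeeping only has a
point when at least one of them is set (values below are illustrative):

    global
        tune.vars.proc-max-size 1048576   # cap on all proc.* variables
        tune.vars.sess-max-size 65536     # per-session cap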
Right now we have a per-process max variable size and a per-scope one,
with the proc scope covering all others. As such, the per-process global
one is always exactly equal to the per-proc-scope one. And bookkeeping
on these process-wide variables is extremely expensive (up to 38% CPU
seen in var_accounting_diff() just for them).
Let's kill vars_global_size and only rely on the proc one. Doing this
increased the request rate from 770k to 1.06M in a config having only
12 variables on a 16-thread machine.
The global table of known variables names can only grow and was designed
for static names that are registered at boot. Nowadays it's possible to
set dynamic variable names from Lua or from the CLI, which causes a real
problem that was partially addressed in 2.2 with commit 4e172c93f
("MEDIUM: lua: Add `ifexist` parameter to `set_var`"). Please see github
issue #624 for more context.
This patch simplifies all this by removing the need for a central
registry of known names, and storing 64-bit hashes instead. This is
highly sufficient given the low number of variables in each context.
The hash is calculated using XXH64() which is bijective over the 64-bit
space thus is guaranteed collision-free for 1..8 chars. Above that the
risk remains around 1/2^64 per extra 8 chars so in practice this is
highly sufficient for our usage. A random seed is used at boot to seed
the hash so that it's not attackable from Lua for example.
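As a sketch of the scheme (XXH64() is the public API from xxhash.h; the
seed variable's name here is illustrative):

    #include <stdint.h>
    #include <xxhash.h>

    static uint64_t var_name_seed; /* filled with random bytes at boot */

    /* the name-to-index lookup becomes a simple 64-bit hash */
    static uint64_t var_name_hash(const char *name, size_t len)
    {
        return XXH64(name, len, var_name_seed);
    }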
There's one particular nit though. The "ifexist" hack mentioned above
is now limited to variables of scope "proc" only, and will only match
variables that were already created or declared, but will now verify
the scope as well. This may affect some bogus Lua scripts and SPOE
agents which used to accidentally work because a similarly named
variable used to exist in a different scope. These ones may need to be
fixed to comply with the doc.
Now we can sum up the situation as this one:
- ephemeral variables (scopes sess, txn, req, res) will always be
usable, regardless of any prior declaration. This effectively
addresses the most problematic change from the commit above that
in order to work well could have required some script auditing ;
- process-wide variables (scope proc) that are mentioned in the
configuration, referenced in a "register-var-names" SPOE directive,
or created via "set-var" in the global section or the CLI, are
permanent and will always accept to be set, with or without the
"ifexist" restriction (SPOE uses this internally as well).
- process-wide variables (scope proc) that are only created via a
set-var() tcp/http action, via Lua's set_var() calls, or via an
SPOE with the "force-set-var" directive), will not be permanent
but will always accept to be replaced once they are created, even
if "ifexist" is present
- process-wide variables (scope proc) that do not exist will only
support being created via the set-var() tcp/http action, Lua's
set_var() calls without "ifexist", or an SPOE declared with
"force-set-var".
This means that non-proc variables do not care about "ifexist" nor
prior declaration, and that using "ifexist" should most often be
reliable in Lua and that SPOE should most often work without any
prior declaration. It may be doable to turn "ifexist" to 1 by default
in Lua to further ease the transition. Note: regtests were adjusted.
Cc: Tim Düsterhus <tim@bastelstu.be>
Variables names will be hashed, but for this we need a random seed.
The XXH3() algorithm is bijective over the whole 64-bit space, which
is great as it guarantees no collision for 1..8 byte names. But above
that even if the risk is extremely faint, it theoretically exists and
since variables may be set from Lua we'd rather do our best to limit
the risk of controlled collision, hence the random seed.
All variables whose names are parsed by the config parser, the
command-line parser or the SPOE's register-var-names parser are
now preset as permanent. This will guarantee that these variables
will exist throughout all the process' life, and that it will be
possible to implement the "ifexist" feature by looking them up.
This was marked medium because pre-setting a variable with an empty
value may always have side effects, even though none was spotted at
this stage.
We certainly do not want a permanent variable (one that is listed
in the configuration) to be erased by accident by an "unset-var" action.
Let's make sure these ones are only reset to an empty sample, like at
the moment of their initial registration. One trick is that the same
function is used to purge the memory at the end and to delete, so we
need to add an extra "force" argument to make the choice.
In order to continue to honor the ifexist Lua option and prevent rogue
SPOA agents from creating too many variables, we'll need to keep the
ability to mark certain proc.* variables as permanent when they're
known from the config file.
Let's add a flag there for this. It's added to the variable when the
variable is created with this flag set by the caller.
Another approach could have been to use a distinct list or distinct
scope but that sounds complicated and bug-prone.
Storing an unset sample (SMP_T_ANY == 0) will be used to only reserve
the variable's space but associate no value. We need to slightly adjust
var_to_smp() for this so that it considers a value-less variable as non
existent and falls back to the default value.
Passing this flag to var_set() will result in the variable being created
only if it did not exist; otherwise nothing is done (it's not even
updated). This will be used for pre-registering names.
When setting variables, there are currently two variants, one which will
always create the variable, and another one, "ifexist", which will only
create or update a variable if a similarly named variable in any scope
already existed before.
The goal was to limit the risk of injecting random names in the proc
scope, but it was achieved by making use of the somewhat limited name
indexing model, which explains the scope-agnostic restriction.
With this change, we're moving the check downwards in the chain, at the
variable level, and only variables under the scope "proc" will be subject
to the restriction. A new set of VF_* flags was added to adjust how
variables are set, and VF_UPDATEONLY is used to mention this restriction.
In the current state of affairs this is not completely exact: if a
similar name was not known in any scope, the variable will continue to
be rejected like before, but this will change soon.
The names for these two functions are totally misleading, they have
nothing to do with samples, they're purely dedicated to variables. The
former is only used by the second one and makes no sense by itself, so
it cannot even get a meaningful name. Let's remerge them into a single
one called "var_set()" which, as its name tries to imply, sets a variable
to a given value.
This name was quite misleading, as it has nothing to do with samples nor
streams. This function's sole purpose is to unset a variable, so let's
call it "var_unset()" and document it a little bit.
The vars_init() name is particularly confusing as it does not initialize
the variables code but the head of a list of variables passed in
arguments. And we'll soon need to have proper initialization code, so
let's rename it now.
In ticket #1348 some users expressed some concerns regarding the removal
of the "grace" directive from the proxies. Their use case very closely
mimics the original intent of the grace keyword, which is to let haproxy
keep accepting traffic for some time when stopping, while indicating to an
external LB that it's stopping.
This is implemented here by starting a task whose expiration triggers
the soft-stop for real. The global "stopping" variable is immediately
set however. For example, this below will be sufficient to instantly
notify an external check on port 9999 that the service is going down,
while other services remain active for 10s:
    global
        grace 10s

    frontend ext-check
        bind :9999
        monitor-uri /ext-check
        monitor fail if { stopping }
This reverts commit e0dec4b7b2.
At first glance, channel_is_empty() was used on purpose in si_update_rx(),
because of the HTX ("b3e0de46c" MEDIUM: stream-int: Rely only on
SI_FL_WAIT_ROOM to stop data receipt). It is not very clear for now why
channel_may_recv() should not be used here, but this change introduces a
possible infinite loop with the stats applet. So, it is safer to revert the
patch, waiting for a better understanding of the problem.
This means the abortonclose option will be broken again on the 2.3 and lower
versions.
This patch should fix the issue #1360. It must be backported as far as 2.0.
Since commit "BUG/MINOR: config: reject configs using HTTP with bufsize
>= 256 MB" we are now sure that it's not possible anymore to have an HTX
block of a size 256 MB or more, even after concatenation thanks to the
tests for len >= htx_free_data_space(). Let's remove these now obsolete
comments.
A BUG_ON() was added in htx_add_blk() to track any such exception if
the conditions would change later, to complete the one that is performed
on the start address that must remain within the buffer.
As seen in commit 5ef965606 ("BUG/MINOR: lua: use strlcpy2() not
strncpy() to copy sample keywords"), configs with large values of
tune.bufsize were not practically usable since Lua was introduced,
regardless of the machine's available memory.
In addition, HTX encoding already limits block sizes to 256 MB, thus
it is not technically possible to use that large a buffer size when
HTTP is in use. This is absurdly high anyway, and for example Lua
initialization would take around one minute on a 4 GHz CPU. Better
prevent such a config from starting than having to deal with bug
reports that make no sense.
The check is only enforced if at least one HTX proxy was found, as
there is no technical reason to block it for configs that are solely
based on raw TCP, and it could still be imagined that some such might
exist with single connections (e.g. a log forwarder that buffers to
cover for the storage I/O latencies).
This should be backported to all HTX-enabled versions (2.0 and above).
It is quite common to see in configurations constructions like the
following one:
http-request set-var(txn.bodylen) 0
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen)]
The set-var() rules are almost always duplicated when manipulating
integers or any other value that is mandatory along operations. This is
a problem because it makes the configurations complicated to maintain
and slower than needed. And it becomes even more complicated when several
conditions may set the same variable because the risk of forgetting to
initialize it or to accidentally reset it is high.
This patch extends the var() sample fetch function to take an optional
argument which contains a default value to be returned if the variable
was not set. This way it becomes much simpler to use the variable, just
set it where needed, and read it with a fall back to the default value:
http-request set-var(txn.bodylen) req.hdr(content-length)
...
http-request set-header orig-len %[var(txn.bodylen,0)]
The default value is always passed as a string, thus it will experience
a cast to the output type. It doesn't seem useful to complicate the
configuration to pass an explicit type at this point.
The vars.vtc regtest was updated accordingly.
In preparation for supporting default values when fetching variables, we
need to update the internal API to pass an extra argument to functions
vars_get_by_{name,desc} to provide an optional default value. This
patch does this and always passes NULL in this argument. var_to_smp()
was extended to fall back to this value when available.
The two functions vars_get_by_name() and vars_get_by_scope() perform
almost the same operations except that they differ from the way the
name and scope are retrieved. The second part in common is more
complex and involves locking, so better factor this one out into a
new function.
There is no other change than refactoring.
Most often "set var" on the CLI is used to set a string, and using only
expressions is not always convenient, particularly when trying to
concatenate variables sur as host names and paths.
Now the "set var" command supports an optional keyword before the value
to indicate its type. "expr" takes an expression just like before this
patch, and "fmt" a format string, making it work like the "set-var-fmt"
actions.
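For example (socket path and variable names are illustrative):

    $ echo "set var proc.greeting expr str(hello)" | socat /tmp/haproxy.sock -
    $ echo "set var proc.greeting fmt %[var(proc.greeting)] world" | socat /tmp/haproxy.sock -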
The VTC was updated to include a test on the format string.
Just like the set-var-fmt action for tcp/http rules, the set-var-fmt
directive in global sections allows to pre-set process-wide variables
using a format string instead of a sample expression. This is often
more convenient when it is required to concatenate multiple fields,
or when emitting just one word.
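For example (variable name and expression are illustrative):

    global
        set-var-fmt proc.origin "site1/%[hostname]"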
The log-format strings are usable at plenty of places, but the expressions
using %[] were restricted to request or response context and nothing else.
This prevents using them from the config context or the CLI, so let's
relax this.
We're using a dummy temporary proxy when creating global variables in
the configuration file, it was copied from the CLI's code and was
mistakenly called "CLI", better name it "CFG". It should not appear
anywhere except maybe when debugging cores.
When attempting to set a variable that does not start with the "proc" scope
on the CLI, we used to emit "only proc is permitted in the global section",
which obviously is a leftover from the initial code.
This may be backported to 2.4.
When a variable starts with the wrong scope, its name is reported without
stripping the extra characters that follow it, usually a closing
parenthesis. Let's make sure we only report what is expected.
This may be backported to 2.4.
In commit 9a621ae76 ("MEDIUM: vars: add a new "set-var-fmt" action")
we introduced the support for format strings in variables with the
ability to release them on exit, except that it's the wrong list that
was being scanned for the rule (http vs vars), resulting in random
crashes during deinit.
This was a recent commit in 2.5-dev, no backport is needed.
The set-var() action is convenient because it preserves the input type
but it's a pain to deal with when trying to concatenate values. The
most recurring example is when it's needed to build a variable composed
of the source address and the source port. Usually it ends up like this:
tcp-request session set-var(sess.port) src_port
tcp-request session set-var(sess.addr) src,concat(":",sess.port)
This is even worse when trying to aggregate multiple fields from stick-table
data for example. Due to this a lot of users instead abuse headers from HTTP
rules:
http-request set-header(x-addr) %[src]:%[src_port]
But this requires some careful cleanups to make sure they won't leak, and
it's significantly more expensive to deal with. And generally speaking it's
not clean. Plus it must be performed for each and every request, which is
expensive for this common case of ip+port that doesn't change for the whole
session.
This patch addresses this limitation by implementing a new "set-var-fmt"
action which performs the same work as "set-var" but takes a format string
in argument instead of an expression. This way it becomes pretty simple to
just write:
tcp-request session set-var-fmt(sess.addr) %[src]:%[src_port]
It is usable in all rulesets that already support the "set-var" action.
It is not yet implemented for the global "set-var" directive (which already
takes a string) and the CLI's "set var" command, which would definitely
benefit from it but currently uses its own parser and engine, thus it
must be reworked.
The doc and regtests were updated.
When the expression called in "set-var" uses arguments that require late
resolution, the context must be set. At the moment, any unknown argument
is misleadingly reported as "ACL":
frontend f
bind :8080
mode http
http-request set-var(proc.a) be_conn(foo)
parsing [b1.cfg:4]: unable to find backend 'foo' referenced in arg 1 \
of ACL keyword 'be_conn' in proxy 'f'.
Once the context is properly set, it now says the truth:
parsing [b1.cfg:8]: unable to find backend 'foo' referenced in arg 1 \
of sample fetch keyword 'be_conn' in http-request expression in proxy 'f'.
This may be backported but is not really important. If so, the preceding
patches "BUG/MINOR: vars: improve accuracy of the rules used to check
expression validity" and "MINOR: sample: add missing ARGC_ entries" must
be backported as well.
For a long time we couldn't have arguments in expressions used in
tcp-request, tcp-response etc rules. But now due to the variables
it's possible, and their context in case of failure to resolve an
argument (e.g. backend name not found) is not properly reported
because there is no arg context values in ARGC_* to report them.
Let's add a number of missing ones for tcp-request {connection,
session,content}, tcp-response content, tcp-check, the config
parser (for "set-var" in the global section) and the CLI parser
(for "set-var" on the CLI).
The set-var() expression naturally checks whether expressions are valid
in the context of the rule, but it fails to differentiate frontends from
backends. As such for tcp-content and http-request rules, it will only
accept frontend-compatible sample-fetches, excluding those declared with
SMP_USE_BKEND (a few such as be_id, be_name). For the response it accepts
the backend-compatible expressions only, though it seems that there are
no sample-fetch functions that are valid only in the frontend's content,
so that should not cause any problem.
Note that while allowing valid configs to be used, the fix might also
uncover some incorrect configurations where some expressions currently
return nothing (e.g. something depending on frontend declared in a
backend), and which could be rejected, but there does not seem to be
any such keyword. Thus while it should be backported, better not backport
it too far (2.4 and possibly 2.3 only).
The parser checks first for "set-var" then "unset-var" from the updated
offset instead of testing it only when the other one fails, so it
validates this rule as "unset-var":
http-request set-varunset-var(proc.a)
This should be backported everywhere relevant, though it's mostly harmless
as it's unlikely that some users are purposely writing this in their conf!
Sometimes it is convenient to remap large sets of URIs to new ones (e.g.
after a site migration). This can be achieved using
"http-request redirect" combined with maps, but one difficulty there is
that non-matching entries will return an empty response. In order to
avoid this, duplicating the operation as an ACL condition ending in
"-m found" is possible but it becomes complex and error-prone while it's
known that an empty URL is not valid in a location header.
This patch addresses this by improving the redirect rules to be able to
simply ignore the rule and skip to the next one if the result of the
evaluation of the "location" expression is empty. However in order not
to break existing setups, it requires a new "ignore-empty" keyword.
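A hedged example of the intended construct (map file path is illustrative):

    http-request redirect location %[path,map(/etc/haproxy/redirects.map)] code 301 ignore-empty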
There used to be an ACT_FLAG_FINAL on redirect rules that's used during
the parsing to emit a warning if followed by another rule, so here we
only set it if the option is not there. The http_apply_redirect_rule()
function now returns a 3rd value to mention that it did nothing and
that this was not an error, so that callers can just ignore the rule.
The regular "redirect" rules were not modified however since this does
not apply there.
The map_redirect VTC was completed with such a test and updated to 2.5
and an example was added into the documentation.
The bc_conn_err and bc_conn_err_str sample fetches give the status of
the connection on the backend side. The error codes and error messages
are the same as the ones that can be raised by the fc_conn_err fetch.
This new sample fetch, along with the ssl_bc_hsk_err_str fetch, contains
the last SSL error of the error stack that occurred during the SSL
handshake (from the backend's perspective).
The locking in the dequeuing process was significantly improved by commit
49667c14b ("MEDIUM: queue: take the proxy lock only during the px queue
accesses") in that it tries hard to limit the time during which the
proxy's queue lock is held to the strict minimum. Unfortunately it's not
enough anymore, because we take up the task and manipulate a few pendconn
elements after releasing the proxy's lock (while we're under the server's
lock) but the task will not necessarily hold the server lock since it may
not have successfully found one (e.g. timeout in the backend queue). As
such, stream_free() calling pendconn_free() may release the pendconn
immediately after the proxy's lock is released while the other thread
currently proceeding with the dequeuing tries to wake up the owner's
task and dies in task_wakeup().
One solution consists in releasing the proxy's lock later. But tests have
shown that we'd have to sacrifice a significant share of the performance
gained with the patch above (roughly a 20% loss).
This patch takes another approach. It adds a "del_lock" to each pendconn
struct, that allows to keep it referenced while the proxy's lock is being
released. It's mostly a serialization lock like a refcount, just to maintain
the pendconn alive till the task_wakeup() call is complete. This way we can
continue to release the proxy's lock early while keeping this one. It had
to be added to the few points where we're about to free a pendconn, namely
in pendconn_dequeue() and pendconn_unlink(). This way we continue to
release the proxy's lock very early and there is no performance degradation.
This lock may only be held under the queue's lock to prevent lock
inversion.
No backport is needed since the patch above was merged in 2.5-dev only.
This option can be used to define a specific log format that will be
used in case of error, timeout, connection failure on a frontend... It
will be used for any log line concerned by the log-separate-errors
option. It will also replace the format of specific error messages
described in section 8.2.6.
If no "error-log-format" is defined, the legacy error messages are still
emitted and the other error logs keep using the regular log-format.
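For example (the fields are illustrative; most request-level fields are
empty on a pure connection error):

    frontend fe
        bind :8080
        option log-separate-errors
        error-log-format "%ci:%cp [%t] %ft %ac/%fc conn-error"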
This option will be replaced by an "error-log-format" option that enables
the use of a dedicated log-format for connection error messages instead of the
regular log-format (in which most of the fields would be invalid in such
a case).
The "log-error-via-logformat" mechanism will then be replaced by a test
on the presence of such an error log format or not. If a format is
defined, it is used for connection error messages, otherwise the legacy
error log format is used.
One was in backend.c and the other one in hlua.c. No other candidate
was found with "git grep '^#if\s*USE'". It's worth noting that 3
other such tests exist for SSL_OP_NO_{SSLv3,TLSv1_1,TLSv1_2} but
that these ones are properly set to 0 in openssl-compat.h when not
defined.
The condition should first check whether `bsize` is reached, before
dereferencing the offset. Even if this always works fine, due to the
string being null-terminated, this certainly looks odd.
Found using GitHub's CodeQL scan.
This bug traces back to at least 97c2ae13bc
(1.7.0+) and this patch should be backported accordingly.
Using localtime / gmtime is not thread-safe, whereas the `get_*` wrappers are.
Found using GitHub's CodeQL scan.
The use in sample_conv_ltime() can be traced back to at least
fac9ccfb70 (first appearing in 1.6-dev3), so all
supported branches with thread support are affected.
The test on FIND_OPTIMAL_MATCH for the experimental code can yield a
build warning when using -Wundef, let's turn it into a regular ifdef.
This is slz upstream commit 05630ae8f22b71022803809eb1e7deb707bb30fb
Before threads were introduced in 1.8, idle_pct used to be a global
variable indicating the overall process idle time. Threads made it
thread-local, meaning that its reporting in the stats made little
sense, though this was not easy to spot. In 2.0, the idle_pct variable
moved to the struct thread_info via commit 81036f273 ("MINOR: time:
move the cpu, mono, and idle time to thread_info"). It made it more
obvious that the idle_pct was per thread, and also allowed to more
accurately measure it. But no more effort was made in that direction.
This patch introduces a new report_idle() function that accurately
averages the per-thread idle time over all running threads (i.e. it
should remain valid even if some threads are paused or stopped), and
makes use of it in the stats / "show info" reports.
Sending traffic over only two connections of an 8-thread process
would previously show this erratic CPU usage pattern:
$ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
Idle_pct: 30
Idle_pct: 35
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 35
Idle_pct: 33
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Idle_pct: 100
Now it shows this more accurate measurement:
$ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
Idle_pct: 83
This is not technically a bug but this lack of precision definitely affects
some users who rely on the idle_pct measurement. This should at least be
backported to 2.4, and might be to some older releases depending on users
demand.
To be able to provide JA3-compatible TLS fingerprints we need to expose
all Client Hello captured data using fetchers. This patch provides new
fetchers and modifies existing ones to add the ability to filter out
GREASE values:
- ssl_fc_cipherlist_*
- ssl_fc_ecformats_bin
- ssl_fc_eclist_bin
- ssl_fc_extlist_bin
- ssl_fc_protocol_hello_id
When we set tune.ssl.capture-cipherlist-size to a non-zero value
we are able to capture the cipher list supported by the client. To be able
to provide JA3-compatible TLS fingerprinting we need to capture more
information from the Client Hello message:
- SSL Version
- SSL Extensions
- Elliptic Curves
- Elliptic Curve Point Formats
This patch allows HAProxy to capture such information and store it for
later use.
The lua initialization code which creates the Lua mapping of all converters
and sample fetch keywords makes use of strncpy(), and as such can take ages
to start with large values of tune.bufsize because it spends its time zeroing
gigabytes of memory for nothing. A test performed with an extreme value of
16 MB takes roughly 4 seconds, so it's possible that some users with huge
1 MB buffers (e.g. for payload analysis) notice a small startup latency.
However this does not affect config checks since the Lua stack is not yet
started. Let's replace this with strlcpy2().
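The difference matters because strncpy(dst, src, n) always writes exactly n
bytes, zero-padding up to n, so every keyword copy touches the whole buffer.
A bounded copy in the spirit of strlcpy2() only touches strlen(src)+1 bytes
(hedged sketch, not the exact haproxy implementation):

    /* copies at most size-1 chars from src to dst and null-terminates;
     * stops at the end of src instead of padding like strncpy() does.
     */
    static char *bounded_copy(char *dst, const char *src, int size)
    {
        char *orig = dst;

        if (size) {
            while (--size && (*dst = *src)) {
                src++;
                dst++;
            }
            *dst = 0;
        }
        return orig;
    }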
This should be backported to all supported versions.
When a server is configured with name resolution, resolvers objects are
created with a reference to this server. Thus the server is marked as non
purgeable to prevent its removal at runtime.
This does not need to be backported.
Patch 211c967 ("MINOR: httpclient: add the server to the proxy") broke
the reg-tests that do a "show servers state".
Indeed the servers of the proxies flagged with PR_CAP_INT are dumped in
the output of this CLI command.
This patch fixes the issue by ignoring the PR_CAP_INT proxies in the
dump.
Without this fix, the decode function would proceed even when the output
buffer is not large enough, because the padding was not considered. For
example, it would not fail with an input length of 23 and an output
buffer size of 15, even though the actual decoded output size is 17.
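The arithmetic behind the example: a padding-less base64url input of ilen
characters decodes to ilen * 3 / 4 bytes (23 * 3 / 4 = 17 here), so a
hedged sketch of the missing pre-check would be:

    /* reject before writing anything if the output cannot hold the
     * decoded data (the ilen/olen names are illustrative)
     */
    if (ilen * 3 / 4 > olen)
        return -1;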
This patch should be backported to all stable branches that have a
base64urldec() function available.
Relax the condition on "delete server" CLI handler to be able to remove
all servers, even non dynamic, except if they are flagged as non
purgeable.
This change is necessary to extend the use cases for dynamic servers
with reload. It's expected that each dynamic server created via the CLI
is manually committed in the haproxy configuration by the user. Dynamic
servers will be present on reload only if they are present in the
configuration file. This means that non-dynamic servers must be allowed
to be removable at runtime.
The dynamic servers removal reg-test has been updated and renamed to
reflect its purpose. A new test is present to check that non-purgeable
servers cannot be removed.
Mark servers that are referenced by configuration elements as non
purgeable. This includes the following list :
- tracked servers
- servers referenced in a use-server rule
- servers referenced in a sample fetch
In a future patch, it will be possible to remove every server at runtime,
both static and dynamic. This requires extending the server refcount to
all instances.
First, refcount manipulation functions have been renamed to better
express the API usage.
* srv_refcount_use -> srv_take
The refcount is always initialized to 1 on server creation in
new_server. It's also incremented for each check/agent configured on a
server instance.
* free_server -> srv_drop
This decrements the refcount and if null, the server is freed, so code
calling it must not use the server reference after it. As a bonus, this
function now returns the next server instance. This is useful when
calling on the server loop without having to save the next pointer
before each invocation.
In these functions, remove the checks that prevent refcounting on
non-dynamic servers. Every reference to "dynamic" in variable/function
naming has been eliminated as well.
A dynamic server may be deleted at runtime at the same moment when the
stats applet is pointing to it. Use the server refcount to prevent
deletion in this case.
This should be backported up to 2.4, with an observability period of 2
weeks. Note that it requires the dynamic server refcounting feature
which has been implemented on 2.5; the following commits are required :
- MINOR: server: implement a refcount for dynamic servers
- BUG/MINOR: server: do not use refcount in free_server in stopping mode
- MINOR: server: return the next srv instance on free_server
As a convenience, return the next server instance from the servers list on
free_server.
This is particularly useful when using this function on the servers
list without having to save the next pointer before calling it.
Use the procctl API to set the current process as traceable, thus being
able to produce a core dump as well. This is made a compile option for
those who do not wish it or who use FreeBSD prior to 11.x (the last
non-EOL release).
The http_update_update_host function takes a URL and extracts the domain
to use as a host header. However it only updates an existing host header
and does not create one.
This patch adds an empty host header so the function can update it.
Add the raw and ssl server to the proxy list so they can be freed during
the deinit() of HAProxy. As a side effect the 2 servers need to have a
different ID so the SSL one was renamed "<HTTPSCLIENT>".
Ensure that no more than olen bytes are written to the output buffer,
otherwise we might experience unexpected behavior.
While the original code used to validate that the output size was
always large enough before starting to write, this validation was
later broken by the commit below, allowing 3-byte blocks to be written
to areas whose size is not a multiple of 3:
commit ed697e4856
Author: Emeric Brun <ebrun@haproxy.com>
Date: Mon Jan 14 14:38:39 2019 +0100
BUG/MINOR: base64: dec func ignores padding for output size checking
Decode function returns an error even if the ouptut buffer is
large enought because the padding was not considered. This
case was never met with current code base.
For base64urldec(), it's basically the same problem except that since
the input format supports arbitrary lengths, the problem has always
been there since its introduction in 2.4.
This should be backported to all stable branches having a backport of
the patch above (i.e. 2.0), with some adjustments depending on the
availability of the base64dec() and base64urldec().
The httpclient does a free of the servers and proxies it uses; however,
since we are including them in the global proxy list, haproxy already
frees them during the deinit. We can safely remove these calls to free.
The sc-set-gpt0() parser was extended in 2.1 by commit 0d7712dff ("MINOR:
stick-table: allow sc-set-gpt0 to set value from an expression") to support
sample expressions in addition to plain integers. However there is a
subtlety there, which is that while the arg position must be incremented
when parsing an integer, it must not be touched when calling an expression
since the expression parser already does it.
The effect is that rules making use of sc-set-gpt0() followed by an
expression always ignore one word after that expression, and will typically
fail to parse if followed by an "if" as the parser will restart after the
"if". With no condition it's different because an empty condition doesn't
result in trying to parse anything.
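For example, both forms below must parse, including the expression form
followed by a condition, which is the construct that used to fail:

    http-request sc-set-gpt0(0) 1 if { path_beg /admin }
    http-request sc-set-gpt0(0) src_port if { path_beg /test }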
This patch moves the increment at the right place and adds a few
explanations for a code part that was far from being obvious.
This should be backported to branches having the commit above (2.1+).
Implements a way of checking the running openssl version:
If the OpenSSL support was not compiled within HAProxy it will return an
error, so it's recommended to do an SSL feature check first:
$ ./haproxy -cc 'feature(OPENSSL) && openssl_version_atleast(0.9.8zh) && openssl_version_before(3.0.0)'
This will allow to select the SSL reg-tests more carefully.
Some users are facing huge CPU usage or even watchdog panics due to
the Lua global lock when many threads compete on it, but they have
no way to see that in the usual dumps. We take the lock at 2 or 3
places only, thus it's trivial to move it to a global function so
that stack dumps will now explicitly show it, increasing the chance
that it rings a bell and that someone suggests switching to
lua-load-per-thread:
Current executing Lua from a stream analyser -- stack traceback:
loop.lua:1: in function line 1
call trace(27):
| 0x5ff157 [48 83 c4 10 5b 5d 41 5c]: wdt_handler+0xf7/0x104
| 0x7fe37fe82690 [48 c7 c0 0f 00 00 00 0f]: libpthread:+0x13690
| 0x614340 [66 48 0f 7e c9 48 01 c2]: main+0x1e8a40
| 0x607b85 [48 83 c4 08 48 89 df 31]: main+0x1dc285
| 0x6070bc [48 8b 44 24 20 48 8b 14]: main+0x1db7bc
| 0x607d37 [41 89 c4 89 44 24 1c 83]: lua_resume+0xc7/0x214
| 0x464ad6 [83 f8 06 0f 87 f1 01 00]: main+0x391d6
| 0x4691a7 [83 f8 06 0f 87 03 20 fc]: main+0x3d8a7
| 0x51dacb [85 c0 74 61 48 8b 5d 20]: sample_process+0x4b/0xf7
| 0x51e55c [48 85 c0 74 3f 64 48 63]: sample_fetch_as_type+0x3c/0x9b
| 0x525613 [48 89 c6 48 85 c0 0f 84]: sess_build_logline+0x2443/0x3cae
| 0x4af0be [4c 63 e8 4c 03 6d 10 4c]: http_apply_redirect_rule+0xbfe/0xdf8
| 0x4af523 [83 f8 01 19 c0 83 e0 03]: main+0x83c23
| 0x4b2326 [83 f8 07 0f 87 99 00 00]: http_process_req_common+0xf6/0x15f6
| 0x4d5b30 [85 c0 0f 85 9f f5 ff ff]: process_stream+0x2010/0x4e18
It also allows "perf top" to directly show the time spent on this lock.
This may be backported to some stable versions as it improves the
overall debuggability.
Include the correct .h files in http_client.c and http_client.h.
The api.h is needed in http_client.c, and http_client-t.h is now included
directly from http_client.h.
Reported by coverity in ticket #1355
CID 1461505: Memory - illegal accesses (UNINIT)
Using uninitialized value "sl".
Fix the problem by initializing sl to NULL.
Proxies must call proxy_preset_defaults() to initialize their settings
that are usually learned from defaults sections (e.g. connection retries,
pool purge delay etc). At the moment there was likely no impact, but not
doing so could cause trouble soon when using the client more extensively
or when new defaults are introduced and fail to be initialized.
No backport is needed.
Recent commit 83614a9fb ("MINOR: httpclient: initialize the proxy") broke
reg tests that match the output of "show stats" or "show servers state"
because it changed the proxies' numeric ID.
In fact it did nothing wrong, it just registers a proxy and adds it at
the head of the list. But the automatic numbering scheme, which was made
to make sure that temporarily disabled proxies in the config keep their
ID instead of shifting all others, sees one more proxy and increments
next_pxid for all subsequent proxies.
This patch avoids this by not assigning automatic IDs to such internal
proxies, leaving them with their ID of -1, and by not shifting next_pxid
for them. This is important because the user might experience them
appearing or disappearing depending on apparently unrelated config
options or build options, and this must not cause visible proxy IDs
to change (e.g. stats or monitoring may break).
Though the issue has always been there, it only became a problem with
the recent proxy additions so there is no need to backport this.
The X509_STORE_CTX_get0_cert did not exist yet on OpenSSL 1.0.2 and
neither did X509_STORE_CTX_get0_chain, which was not actually needed
since its get1 equivalent already existed.
RFC7540 states that :path follows RFC3986's path-absolute. However
that was a bug introduced in the spec between draft 04 and draft 05
of the spec, which implicitly causes paths starting with "//" to be
forbidden. HTTP/1 (and now HTTP core semantics) made it explicit
that the request-target in origin-form follows a purposely defined
absolute-path defined as 1*(/ segment) to explicitly allow "//".
http2bis now fixes this by relying on absolute-path so that "//"
becomes valid and matches other versions. Full discussion here:
https://lists.w3.org/Archives/Public/ietf-http-wg/2021JulSep/0245.html
This issue appeared in haproxy with commit 4b8852c70 ("BUG/MAJOR: h2:
verify that :path starts with a '/' before concatenating it") when
making the checks on :path fully comply with the spec, and was backported
as far as 2.0, so this fix must be backported there as well to allow
"//" in H2 again.
Most of the SSL sample fetches related to the client certificate were
based on the SSL_get_peer_certificate function which returns NULL when
the verification process failed. This made it impossible to use those
fetches in a log format since they would always be empty.
The patch adds a reference to the X509 object representing the client
certificate in the SSL structure and makes use of this reference in the
fetches.
The reference can only be obtained in ssl_sock_bind_verifycbk which
means that in case of an SSL error occurring before the verification
process ("no shared cipher" for instance, which happens while processing
the Client Hello), we won't ever start the verification process and it
will be impossible to get information about the client certificate.
This patch also allows most of the ssl_c_XXX fetches to return a usable
value in case of connection failure (because of a verification error for
instance) by making the "conn->flags & CO_FL_WAIT_XPRT" test (which
requires a connection to be established) less strict.
Thanks to this patch, a log-format such as the following should return
usable information in case of an error occurring during the verification
process :
log-format "DN=%{+Q}[ssl_c_s_dn] serial=%[ssl_c_serial,hex] \
hash=%[ssl_c_sha1,hex]"
It should answer GitHub issue #693.
Change the User-Agent from "HAProxy HTTP client" to "HAProxy" as the
previous name is not valid according to RFC 7231#5.5.3.
This patch fixes issue #1354.
This commit implements an HTTP Client over the CLI; this was made as a
working example for the HTTP Client API.
It is usable over the CLI by specifying a method and a URL:
echo "httpclient GET http://127.0.0.1:8000/demo.file" | socat /tmp/haproxy.sock -
Only IP addresses are accessible since the API does not allow resolving
addresses yet.
This commit implements a very simple HTTP Client API.
A client can be operated by several functions:
- httpclient_new(), httpclient_destroy(): create
and destroy the struct httpclient instance.
- httpclient_req_gen(): generate a complete HTX request using the
absolute URL, the method and a list of headers. This request
is complete and sets the HTX End of Message flag. This is limited
to small requests that don't need a body.
- httpclient_start(): fill a sockaddr_storage with an IP extracted
from the URL (it cannot resolve an FQDN for now) and start the
applet. It also stores the pointer of the caller, which could be an
appctx or something else.
- hc->ops contains a list of callbacks used by the
HTTPClient; they should be filled manually after an
httpclient_new():
* res_stline(): the client received a start line, its content
will be stored in hc->res.vsn, hc->res.status, hc->res.reason
* res_headers(): the client received headers, they are stored in
hc->res.hdrs.
* res_payload(): the client received some payload data, they are
stored in the hc->res.buf buffer and could be extracted with the
httpclient_res_xfer() function, which takes a destination buffer
as a parameter
* res_end(): this callback is called once the response has been
fully received.
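A hedged usage sketch built from the calls listed above (the exact
prototypes are not shown in this message, so the signatures below are
assumptions for illustration only):

    /* signatures assumed for illustration, see src/http_client.c */
    struct httpclient *hc;

    hc = httpclient_new(caller, HTTP_METH_GET,
                        ist("http://127.0.0.1:8000/demo.file"));
    hc->ops.res_stline  = my_res_stline;   /* start line received */
    hc->ops.res_headers = my_res_headers;  /* headers in hc->res.hdrs */
    hc->ops.res_payload = my_res_payload;  /* data appended to hc->res.buf */
    hc->ops.res_end     = my_res_end;      /* response fully received */

    httpclient_req_gen(hc, hc->req.url, hc->meth, NULL /* no headers */);
    httpclient_start(hc);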
Initialize a proxy which contains a server for the raw HTTP and another
one for the HTTPS. This proxy will use the global server log definition
and the 'option httplog' directive.
This proxy is internal and will only be used for the HTTP Client API.
The wording regarding Host vs :authority in RFC7540 is ambiguous as it
says that an intermediary must produce a host header from :authority if
Host is missing, but, contrary to HTTP/1.1, doesn't say anything regarding
the possibility that Host and :authority differ, which leaves Host with
higher precedence there. In addition it mentions that clients should use
:authority *instead* of Host, and that H1->H2 should use :authority only
if the original request was in authority form. This leaves some gray
area in the middle of the chain for fully valid H2 requests harboring a
Host header that are forwarded to the other side where it's possible to
drop the Host header and use the authority only after forwarding to a
second H2 layer, thus possibly seeing two different values of Host at
a different stage. There's no such issue when forwarding from H2 to H1
as the authority is dropped and only the Host is kept.
Note that the following request is sufficient to re-normalize such a
request:
http-request set-header host %[req.hdr(host)]
The new spec in progress (draft-ietf-httpbis-http2bis-03) addresses
this trouble by being a bit stricter on these rules. It clarifies
that :authority must always be used instead of Host and that Host ought
to be ignored. This is much saner as it avoids conveying two distinct
values along the chain. This becomes the protocol-level equivalent of:
http-request set-uri %[url]
So this patch does exactly this, which we were initially a bit reluctant
to do by lack of visibility about other implementations'
expectations. In addition it slightly simplifies the Host header field
creation by always placing it first in the list of headers instead of
last; this could also speed up the lookup a little bit.
This needs to be backported to 2.0. Non-HTX versions are safe regarding
this because they drop the URI during the conversion to HTTP/1.1 so
only Host is used and transmitted.
Thanks to Tim Düsterhus for reporting that one.
Before HTX was introduced, all the HTTP request elements passed in
pseudo-headers fields were used to build an HTTP/1 request whose syntax
was then scrutinized by the HTTP/1 parser, leaving no room to inject
invalid characters.
While NUL, CR and LF are properly blocked, it is possible to inject
spaces in the method so that once translated to HTTP/1, fields are
shifted by one space, and a lenient HTTP/1 server could possibly be
fooled into using a part of the method as the URI. For example, the
following request:
    H2 request
      :method: "GET /admin? HTTP/1.1"
      :path:   "/static/images"
would become:
    GET /admin? HTTP/1.1 /static/images HTTP/1.1
It's important to note that the resulting request is *not* valid, and
that in order for this to be a problem, it requires that this request
is delivered to an already vulnerable HTTP/1 server.
A workaround here is to reject malformed methods by placing this rule
in the frontend or backend, at least before leaving haproxy in H1:
http-request reject if { method -m reg [^A-Z0-9] }
Alternately H2 may be globally disabled by commenting out the "alpn"
directive on "bind" lines, and by rejecting H2 streams creation by
adding the following statement to the global section:
tune.h2.max-concurrent-streams 0
This patch adds a check for each character of the method to make sure
they belong to the ones permitted in a token, as mentioned in RFC7231#4.1.
This should be backported to versions 2.0 and above. For older versions
not having HTX_FL_PARSING_ERROR, a "goto fail" works as well as it
results in a protocol error at the stream level. Non-HTX versions are
safe because the resulting invalid request will be rejected by the
internal HTTP/1 parser.
Thanks to Tim Düsterhus for reporting that one.
Tim Düsterhus found that while the H2 path is checked for non-emptiness,
invalid chars and '*', a test is missing to verify that except for '*',
it always starts with exactly one '/'. During the reconstruction of the
full URI when passing to HTX, this missing test allows to affect the
apparent authority by appending a port number or a suffix name.
This only affects H2-to-H2 communications, as H2-to-H1 do not use the
full URI. Like for previous fix, the following rule inserted before
other ones in the frontend is sufficient to renormalize the internal
URI and let haproxy see the same authority as the target server:
http-request set-uri %[url]
This needs to be backported to 2.2. Earlier versions do not rebuild a
full URI using the authority and will fail on the malformed path at the
HTTP layer, so they are safe.
While we do explicitly check for strict character sets in the scheme,
this is only done when extracting URL components from an assembled one,
and we have special handling for "http" and "https" schemes directly in
the H2-to-HTX conversion. Sadly, this lets all other ones pass through
if they start exactly with "http://" or "https://", allowing the
reconstructed URI to start with a different looking authority if it was
part of the scheme.
It's interesting to note that in this case the valid authority is in
the Host header and that the request will only be wrong if emitted over
H2 on the backend side, since H1 will not emit an absolute URI by
default and will drop the scheme. So in essence, this is a variant of
the scheme-based attack described below in that it only affects H2-H2
and not H2-H1 forwarding:
https://portswigger.net/research/http2
As such, a simple workaround consists in just inserting the following
rule before other ones in the frontend, which will have for effect to
renormalize the authority in the request line according to the
concatenated version (making haproxy see the same authority and host
as what the target server will see):
http-request set-uri %[url]
This patch simply adds the missing syntax checks for non-http/https
schemes before the concatenation in the H2 code. An improvement may
consist in the future in splitting these ones apart in the start
line so that only the "url" sample fetch function requires to access
them together and that all other places continue to access them
separately. This will then allow the core code to perform such checks
itself.
The patch needs to be backported as far as 2.2. Before 2.2 the full
URI was not being reconstructed so the scheme and authority part were
always dropped from H2 requests to leave only origin requests. Note
for backporters: this depends on this previous patch:
MINOR: http: add a new function http_validate_scheme() to validate a scheme
Many thanks to Tim Düsterhus for figuring that one and providing a
reproducer.
While http_parse_scheme() extracts a scheme from a URI by extracting
exactly the valid characters and stopping on delimiters, this new
function performs the same on a fixed-size string.
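A sketch of such a check (in the spirit of the new function; per RFC3986
a scheme is ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )):

    #include <ctype.h>
    #include <stddef.h>

    /* illustrative only: non-zero if <p>..<p+len> is a valid scheme */
    static int scheme_is_valid(const char *p, size_t len)
    {
        size_t i;

        if (!len || !isalpha((unsigned char)p[0]))
            return 0;
        for (i = 1; i < len; i++) {
            if (!isalnum((unsigned char)p[i]) &&
                p[i] != '+' && p[i] != '-' && p[i] != '.')
                return 0;
        }
        return 1;
    }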
txn functions can now be called from an action or a filter context. Thus the
return code must be adapted depending on this context. From an action, act.ABORT
is returned. From a filter, -1 is returned. It is the filter error code.
This bug only affects 2.5-dev. No backport needed.
CF_FLT_ANALYZE flags may be set before the FLT_END analyser. Thus if an
error is triggered in the meantime, this may block the stream and prevent
it from being released. It is indeed a problem only for the response
channel because the response analysers may be skipped on early errors.
So, to prevent any issue, depending on the code path, the FLT_END analyser is
systematically set when the CF_FLT_ANALYZE flag is set.
This patch must be backported in all stable branches.
The internal proxies should be part of the proxies list; because of
this, the check_config_validity() function could emit warnings about
these proxies.
This patch disables 3 startup warnings for internal proxies:
- "has no 'bind' directive" (this one was already ignored for the CLI
frontend, but we made it generic instead)
- "missing timeouts"
- "log format ignored"
A user reported that the config check returns an error with the message:
"Configuration file has no error but will not start (no listener) => exit(2)."
if the configuration presents only a log-forward section with bind or
dgram-bind listeners but no listen/backend nor peers sections.
The process checked if there was a 'peers' section available with
an internal frontend (and so a listener) or a 'listen/backend'
section not disabled with at least one configured listener (in the
global proxies_list). Since the log-forward proxies appear in a
different list, they were not checked.
This patch adds a lookup on the 'log-forward' proxies list to check
if one of them presents a listener and is not disabled. And
this is done only if there was no available listener found into
'listen/backend' sections.
I have also studied how to rework this check considering the 'listeners'
counter used after startup/init, to keep the same algorithm and avoid
further mistakes, but currently this counter seems to be increased during
config parsing and, if a proxy is disabled, decreased during startup/init,
which is done after the current config check. So the fix still does not
rely on this counter.
This patch should fix the github issue #1346
This patch should be backported as far as 2.3 (so on branches
including the "log-forward" feature)
When a lua filter declaration is parsed, some allocation errors were not
properly handled. In addition, we must be sure the filter identifier is
defined in lua to duplicate it when the filter configuration is filled.
This patch fixes a defect reported in the issue #1347. It only concerns
2.5-dev. No backport needed.
In Channel and HTTPMessage classes, several functions use an offset that
may be negative to start from the end of incoming data. But, after
calculation, the offset must never be negative. However, there is a bug
because of a bad cast to unsigned when "input + offset" is performed. The
result must be a signed integer.
This patch should fix most of defects reported in the issue #1347. It only
affects 2.5-dev. No backport needed.
Now that an HTTPMessage class is available to manipulate HTTP messages from
a filter, it is possible to bind HTTP filter callback functions to lua
functions. Thus, the following methods may now be defined by a lua filter:
* Filter:http_headers(txn, http_msg)
* Filter:http_payload(txn, http_msg, offset, len)
* Filter:http_end(txn, http_msg)
http_headers() and http_end() may return one of the constants
filter.CONTINUE, filter.WAIT or filter.ERROR. If nothing is returned,
filter.CONTINUE is used as the default value. On its side, http_payload()
may return the amount of data to forward. If nothing is returned, all
incoming data are forwarded.
For now, these functions are not allowed to yield because this interferes with
the filter workflow.
When a lua TXN is created from a filter context, the request and the response
HTTP message objects are accessible from ".http_req" and ".http_res" fields. For
an HTTP proxy, these objects are always defined. Otherwise, for a TCP proxy, no
object is created and nil is used instead. From any other context (action or
sample fetch), these fields don't exist.
This new class exposes methods to manipulate HTTP messages from a filter
written in lua. Like for the HTTP class, there is a bunch of methods to
manipulate the message headers. But there are also methods to manipulate the
message payload. This part is similar to what is available in the Channel
class. Thus the payload can be duplicated, erased, modified or
forwarded. For now, only DATA blocks can be retrieved and modified because
the current API is limited. No HTTPMessage method is able to yield. Those
manipulating the headers are always called on messages containing all the
headers, so there is no reason to yield. Those manipulating the payload are
called from the http_payload filters callback function where yielding is
forbidden.
When an HTTPMessage object is instantiated, the underlying Channel object
can be retrieved via the ".channel" field.
For now this class is not used because the HTTP filtering is not supported
yet. It will be the purpose of another commit.
There is no documentation for now.
It is now possible to write some filter callback functions in lua. All
filter callbacks are not supported yet but the mechanism to call them is
now in place. The following methods may be defined in the Lua filter class
to be bound on filter callbacks:
* Filter:start_analyse(txn, chn)
* Filter:end_analyse(txn, chn)
* Filter:tcp_payload(txn, chn, offset, length)
The hlua_filter_callback() function is responsible for calling the right
lua function depending on the filter callback function. Using some flags it
is possible to allow a lua call to yield or not, to retrieve a return value
or not, and to specify whether a channel or an http message must be passed
as second argument. For now, the HTTP part has not been added yet. It is
also possible to add extra arguments by pushing them on the stack before
the call.
3 new functions are exposed by the global object "filter". The first one,
filter.wake_time(ms_delay), sets the wake_time when a Lua callback
function yields (if allowed). The two others,
filter.register_data_filter(filter, chn) and
filter.unregister_data_filter(filter, chn), enable or disable the data
filtering on a channel for a specific lua filter instance.
start_analyse() and end_analyse() may return one of the constants
filter.CONTINUE, filter.WAIT or filter.ERROR. If nothing is returned,
filter.CONTINUE is used as the default value. On its side, tcp_payload()
may return the amount of data to forward. If nothing is returned, all
incoming data are forwarded.
For now, these functions are not allowed to yield because this interferes
with the filter workflow.
Here is a simple example :
    MyFilter = {}
    MyFilter.id = "My Lua filter"
    MyFilter.flags = filter.FLT_CFG_FL_HTX
    MyFilter.__index = MyFilter

    function MyFilter:new()
        flt = {}
        setmetatable(flt, MyFilter)
        flt.req_len = 0
        flt.res_len = 0
        return flt
    end

    function MyFilter:start_analyze(txn, chn)
        filter.register_data_filter(self, chn)
    end

    function MyFilter:end_analyze(txn, chn)
        print("<Total> request: "..self.req_len.." - response: "..self.res_len)
    end

    function MyFilter:tcp_payload(txn, chn)
        offset = chn:output()
        len = chn:input()
        if chn:is_resp() then
            self.res_len = self.res_len + len
            print("<TCP:Response> offset: "..offset.." - length: "..len)
        else
            self.req_len = self.req_len + len
            print("<TCP:Request> offset: "..offset.." - length: "..len)
        end
    end
For filters written in lua, the tcp payloads will be filtered using methods
exposed by the Channel class. So the corresponding C binding functions must
be prepared to process the payload in a filter context and not only in an
action context.
The main change is the offset where to start to process data in the channel
buffer, and the length of these data. For an action, all input data are
considered. But for a filter, it depends on what the filter is allowed to
forward when the tcp_payload callback function is called. It depends on
previous calls but also on other filters.
In addition, when the payload is modified by a lua filter, its context must
be updated. Note also that channel functions cannot yield when called from a
filter context.
For now, it is not possible to define callbacks to filter data and the
documentation has not been updated.
A lua TXN can be created when a sample fetch, an action or a filter callback
function is executed. A flag is now used to track the execution context.
Respectively, HLUA_TXN_SMP_CTX, HLUA_TXN_ACT_CTX and HLUA_TXN_FLT_CTX. The
filter flag is not used for now.
For now, there is no support for filters written in lua. So this function,
if called, will always return NULL. But when it will be called in a filter
context, it will return the filter structure attached to a channel
class. This function is also responsible for setting the offset of the data
that may be processed and the length of these data. When called outside a
filter context (so from an action), the offset is the input data position
and the length is the input data length. From a filter, the offset and the
length of the data that may be filtered are retrieved from the filter
context.
It is now possible to write dummy filters in lua. Only the basis to declare
such filters has been added for now. There is no way to declare callbacks to
filter anything. Lua filters are for now empty shells.
To do so, core.register_filter() must be called, with 3 arguments, the
filter's name (as it appears in HAProxy config), the lua class that will be
used to instantiate filters and a function to parse arguments passed on the
filter line in HAProxy configuration file. The lua filter class must at
least define the method new(), without any extra args, to create new
instances when streams are created. If this method is not found, the filter
will be ignored.
Here is a template to declare a new Lua filter:
# haproxy.conf
global
lua-load /path/to/my-filter.lua
...
frontend fe
...
filter lua.my-lua-filter arg1 arg2 arg3
filter lua.my-lua-filter arg4 arg5
-- my-filter.lua
MyFilter = {}
MyFilter.id = "My Lua filter" -- the filter ID (optional)
MyFilter.flags = filter.FLT_CFG_FL_HTX -- process HTX streams (optional)
MyFilter.__index = MyFilter
function MyFilter:new()
    local flt = {}
    setmetatable(flt, MyFilter)
    -- Set any flt fields. self.args can be used
    flt.args = self.args
    return flt -- the new instance of MyFilter
end

core.register_filter("my-lua-filter", MyFilter, function(filter, args)
    -- process <args>, an array of strings. For instance:
    filter.args = args
    return filter
end)
In this example, 2 filters are declared using the same lua class. The
parsing function is called for each of them, with its own copy of the lua
class, so each filter instance is unique.
The global object "filter" exposes some constants and flags, and later some
functions, to help writing filters in lua.
Internally, when a lua filter is instantiated (so when new() method is
called), 2 lua contexts are created, one for the request channel and another
for the response channel. It is a prerequisite to let some callbacks yield
on one side independently of the other one.
There is no documentation for now.
First of all, the following functions are now considered deprecated:
* Channel:dup()
* Channel:get()
* Channel:getline()
* Channel:get_in_len()
* Channel:get_out_len()
It is just informative, there is no warning and the functions may still be
used. However, it is recommended to use the new functions, which are more
flexible and use a better naming pattern. In addition, the same names will
be used in the http_msg class to manipulate http messages from lua filters.
The new API is:
* Channel:data()
* Channel:line()
* Channel:append()
* Channel:prepend()
* Channel:insert()
* Channel:remove()
* Channel:set()
* Channel:input()
* Channel:output()
* Channel:send()
* Channel:forward()
* Channel:is_resp()
* Channel:is_full()
* Channel:may_recv()
The lua documentation was updated accordingly.
The main change is that the following functions will now process the
channel's data using an offset and a length:
* hlua_channel_dup_yield()
* hlua_channel_get_yield()
* hlua_channel_getline_yield()
* hlua_channel_append_yield()
* hlua_channel_set()
* hlua_channel_send_yield()
* hlua_channel_forward_yield()
So for now, the offset is always the input data position and the length is
the input data length. But with the support for filters, from a filter
context, these values will be relative to the filter.
To make all processing clearer, the function _hlua_channel_dup() has been
updated and _hlua_channel_dupline(), _hlua_channel_insert() and
_hlua_channel_delete() have been added.
This patch is mandatory to allow the support of the filters written in lua.
The hlua_checktable() function may now be used to create and return a
reference on a table on the stack, given its position. This function ensures
the value is really a table and throws an exception otherwise.
This patch is mandatory to allow the support of the filters written in lua.
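As an illustration, such a helper can be sketched with the standard Lua C
API as follows; this is an assumed shape, not the exact haproxy code:

  #include <lua.h>
  #include <lauxlib.h>

  /* Sketch: ensure the value at index <ind> is a table, then take and
   * return a reference on it in the Lua registry so it can later be
   * retrieved with lua_rawgeti(). The real function raises an
   * exception when the value is not a table.
   */
  static int hlua_checktable_sketch(lua_State *L, int ind)
  {
          if (!lua_istable(L, ind))
                  luaL_argerror(L, ind, "table expected");
          lua_pushvalue(L, ind);                  /* duplicate it on top */
          return luaL_ref(L, LUA_REGISTRYINDEX);  /* pops it, returns ref */
  }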
Lua functions to set or append data to the input part of a channel must not
yield because new data may be received while the lua script is suspended. So
adding data to the input part in several passes is highly unpredictable and
may be interleaved with received data.
Note that if necessary, it is still possible to suspend a lua action by
returning act.YIELD. This way the whole action will be reexecuted later
because of I/O events or a timer. Another solution is to call core.yield().
This bug affects all stable versions. So, it may be backported. But it is
probably not necessary because nobody has noticed it until now.
When a script is executed, it is not always allowed to yield. Lua sample
fetches and converters cannot yield. For lua actions, it depends on the
context. When called from a tcp content ruleset, an action may yield until the
expiration of the inspect-delay timeout. From http rulesets, yield is not
possible.
Thus, when channel functions (dup, get, append, send...) are called in a
context where yielding is not allowed, instead of yielding anyway and
triggering an error, we now just give up. In this case, some functions do
nothing (dup, append...), some others just interrupt the in-progress job
(send, forward...). But, because these functions no longer yield when it is
not allowed, the script regains control and can continue its execution.
This patch depends on "MINOR: lua: Add a flag on lua context to know the
yield capability at run time". Both may be backported in all stable
versions. However, because nobody has noticed this bug until now, it is
probably not necessary, except if someone asks for it.
When a script is executed, a flag is used to allow it to yield. An error is
returned if a lua function yields, explicitly or not. But there is no way to
get this capability in C functions. So there is no way to choose to yield or
not depending on this capability.
To fill this gap, the flag HLUA_NOYIELD is introduced and added on the lua
context if the current script execution is not authorized to yield. Macros
to set, clear and test this flag are also added.
This feature will be useful to fix some bugs in lua action execution.
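For illustration, the flag handling could look like the following sketch
(the macro names and the flag value are assumptions based on this
description):

  #define HLUA_NOYIELD  0x00000200  /* value chosen for illustration */

  #define HLUA_SET_NOYIELD(h)   ((h)->flags |= HLUA_NOYIELD)
  #define HLUA_CLR_NOYIELD(h)   ((h)->flags &= ~HLUA_NOYIELD)
  #define HLUA_CANT_YIELD(h)    ((h)->flags & HLUA_NOYIELD)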
When at least one filter is registered on a stream, the FLT_END analyzer is
called on both directions when all other analyzers have finished their
processing. During this step, filters may release any allocated elements if
necessary. So it is important to not skip it.
Unfortunately, if both stream interfaces are closed, it is possible for the
stream to be released without waiting for the end of this analyzer. This can
happen when a filter must wait and thus prevents the analyzer's completion.
To fix the bug, we now wait for the FLT_END analyzer to no longer be
registered on both directions before releasing the stream.
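A simplified sketch of the resulting condition in the stream release path,
using the existing AN_REQ_FLT_END/AN_RES_FLT_END analyzer bits (not the
exact patch):

  /* do not release the stream while FLT_END is still registered on
   * either direction: filters may still have resources to release
   */
  if ((s->req.analysers & AN_REQ_FLT_END) ||
      (s->res.analysers & AN_RES_FLT_END))
          return; /* keep the stream alive for now */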
This patch may be backported as far as 1.7, but AFAIK, no filter is affected
by this bug. So the backport seems to be optional for now. In any case, it
should remain under observation for some weeks first.
In tcpcheck_eval_send(), the condition to detect there are still pending
data in the output buffer is buggy. The presence of raw data must be tested
for TCP connections only, but a condition on the connection was missing to
be sure it is not an HTX connection.
This patch must be backported as far as 2.2.
The formatting of the buffer_dump() output must be calculated using the
relative counter, not the absolute one, or everything will be broken if
the <from> variable is not a multiple of 16.
Could be backported in all maintained versions.
A static server is able to support both a health check and an agent-check
simultaneously. Adjust the dynamic server CLI handlers to also support this
configuration.
This should not be backported, unless dynamic server checks are
backported.
There is currently a leak on agent-check for dynamic servers. When such a
server is deleted, the check rules and vars are not released. This leak
grows each time a dynamic server with an agent-check is deleted.
Replace the manual purge code by a free_check invocation which
centralizes all the details on check cleaning.
There is no leak for health checks because in this case the proxy is the
owner of the check vars and rules.
This should not be backported, unless dynamic server checks are
backported.
If an error occurred during a dynamic server creation, free_check is used
to release a possible agent-check. However, this does not free the
associated vars and rules, as this is done in another function named
deinit_srv_agent_check.
To simplify the check release and avoid a leak, move the vars/rules freeing
into free_check. This is valid because deinit_srv_agent_check also uses
free_check.
This operation is done only for an agent-check because for a health
check, the proxy instance is the owner of check vars/rules.
This should not be backported, unless dynamic server checks are
backported.
Do not reset check flags when setting CHK_ST_PURGE.
Currently, this change has no impact. However, it is semantically wrong
to clear important flags such as CHK_ST_AGENT on purge.
Furthermore, this change will become mandatory for a future fix to
properly free agent checks on dynamic servers removal. For this, it will
be needed to differentiate health/agent-check on purge via CHK_ST_AGENT
to properly free agent checks.
This must not be backported unless dynamic servers checks are
backported.
Currently there is a leak at process shutdown with dynamic servers with
check/agent-check activated. Check purges are not executed on process
stopping, so the server is not released due to its refcount.
The solution is simply to ignore the refcount on process stopping mode
and free the server on the first free_server invocation.
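The idea can be sketched like this in free_server() (shape and the
<refcount> field name are assumptions):

  /* on a stopping process, check purges are not executed, so ignore
   * the refcount and release the server on the first invocation
   */
  if (srv->refcount && !(global.mode & MODE_STOPPING))
          return; /* still referenced: deletion is deferred */
  /* ... proceed with the actual release of the server ... */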
This should not be backported, unless dynamic server checks are
backported. In this case, the following commit must be backported first.
7afa5c1843
MINOR: global: define MODE_STOPPING
Test that the server is not null before using free_server in the check
purge operation. Currently, the null server scenario should not occur as
the purge is used with refcounted dynamic servers. However, this might not
always be the case if the purge is used in other situations in the future;
thus the test is useful for extensibility.
No need to backport, unless dynamic server checks are backported.
This has been reported through a coverity report in github issue #1343.
This commit is the counterpart for agent check of
"MEDIUM: server: implement check for dynamic servers".
The "agent-check" keyword is enabled for dynamic servers. The agent
check must manually be activated via "enable agent" CLI. This can
enable the dynamic server if the agent response is "ready" without an
explicit "enable server" CLI.
Implement check support for dynamic servers. The "check" keyword is now
enabled for dynamic servers. If used, the server check is initialized
and the check task started in the "add server" CLI handler. The check is
explicitly disabled and must be manually activated via the "enable health"
CLI handler.
The dynamic server refcount is incremented if a check is configured. On
"delete server" handler, the check is purged, which decrements the
refcount.
Implement a collection of keywords deemed safe and useful for dynamic
servers (see the example after the list). The list of the supported
keywords is:
- addr
- check-proto
- check-send-proxy
- check-via-socks4
- rise
- fall
- fastinter
- downinter
- port
- agent-addr
- agent-inter
- agent-port
- agent-send
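For illustration, a possible runtime sequence using some of these keywords
could be (backend/server names and socket path invented):

  $ echo "add server be_app/srv2 192.0.2.10:8080 check port 8080 rise 2 fall 3" | \
        socat stdio /var/run/haproxy.sock
  $ echo "enable health be_app/srv2" | socat stdio /var/run/haproxy.sock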
Implement a mechanism to free a started check at runtime for dynamic
servers. A new function check_purge is created for this. The check task
will be marked for deletion and scheduled to properly close connection
elements and free the task/tasklet/buf_wait elements.
This function will be useful to delete a dynamic server with checks.
It is necessary to have a refcount mechanism on dynamic servers to be
able to enable check support. Indeed, when deleting a dynamic server
with check activated, the check will be asynchronously removed. This is
mandatory to properly free the check resources in a thread-safe manner.
The server instance must be kept alive for this.
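The refcounting can be sketched with hypothetical helpers like these (the
helper names and the <refcount> field are assumptions, using haproxy's
atomic macros):

  /* hypothetical helpers illustrating the refcount idea */
  static inline void srv_take(struct server *srv)
  {
          HA_ATOMIC_INC(&srv->refcount);
  }

  static inline void srv_drop(struct server *srv)
  {
          /* last reference dropped: the server can now be freed safely */
          if (HA_ATOMIC_SUB_FETCH(&srv->refcount, 1) == 0)
                  free_server(srv);
  }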
The global maxsock value is used to estimate the number of FDs to reserve
for internal use, such as checks. It is incremented at startup with the
info from the config file.
Disable this incrementation in the check functions at runtime. First, it
currently serves no purpose to increment it after startup. Worse, it may
lead to out-of-bounds accesses on the fdtab.
This will be useful to initiate checks for dynamic servers.
Remove static qualifier on init_srv_check, init_srv_agent_check and
start_check_task. These functions will be called in server.c for dynamic
servers with checks.
Allocate default tcp ruleset for every backend without explicit rules
defined, even if no server in the backend uses checks. This change is
required to implement checks for dynamic servers.
This allocation is done in check_config_validity. It must absolutely be
called before check_proxy_tcpcheck (called via a post-proxy check), which
allocates the implicit tcp connect rule.
Implement an equivalent of task_kill for tasklets. This function can be
used to request a tasklet deletion in a thread-safe way.
Currently this function is unused.
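Conceptually, the mechanism can be sketched as follows (simplified; the
real implementation has to handle more state transitions):

  /* mark the tasklet as killed; if it is not already queued, wake it
   * up so that its owner thread notices the flag and frees it
   */
  void tasklet_kill_sketch(struct tasklet *t)
  {
          unsigned int prev = HA_ATOMIC_FETCH_OR(&t->state, TASK_KILLED);

          if (!(prev & TASK_IN_LIST))
                  tasklet_wakeup(t);
  }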
Remove the "DEPRECATED" marker on "enable/disable health/agent"
commands. Their purpose is to toggle the check/agent on a server.
These commands are still useful because their purpose is not covered by the
"set server" command. Moreover, there was confusion with the commands 'set
server health/agent', which in fact serve another goal.
Note that the indication "use 'set server' instead" had been present since
2016, added by the commit
2c04eda8b5
REORG: cli: move "{enable|disable} health" to server.c
and
58d9cb7d22
REORG: cli: move "{enable|disable} agent" to server.c
Besides, these commands will become required to enable check/agent on
dynamic servers which will be created with check disabled.
This should be backported up to 2.4.
It is the second part of the fix that should solve fairness issues with the
connections management inside the SPOE filter. Indeed, in multithreaded
mode, when the SPOE detects there are some connections in queue on a server,
it closes existing connections by releasing SPOE applets. It is mandatory
when a maxconn is set because a few connections on a thread may prevent new
connections from being established.
The first attempt to fix this bug (9e647e5af "BUG/MEDIUM: spoe: Kill applets
if there are pending connections and nbthread > 1") introduced a bug. In
pipelining mode, SPOE applets might be closed while some frames are pending
for the ACK reply. To fix the bug, in the processing stage, if there are
some connections in queue, only truly idle applets may process pending
requests. In this case, only one request at a time is processed. And at the
end of the processing stage, only truly idle applets may be released. It is
an empirical workaround, but it should be good enough to solve contention
issues when a low maxconn is set.
This patch should partly fix the issue #1340. It must be backported as far
as 2.0.
On a thread, when the last SPOE applet is released, if there are still
pending streams, a new one is created, provided of course that HAProxy is
not stopping. It is important to start a new applet in this case to not
abort in-progress jobs, especially when a maxconn is set, because applets
may be closed to be fair with connections waiting for a free slot.
This patch should partly fix the issue #1340. It depends on the commit
"MINOR: spoe: Create a SPOE applet if necessary when the last one on a
thread is closed". Both must be backported as far as 2.0.
There was no way to access the SPOE filter configuration from the agent
object. However it could be handy to have it. And in fact, this will be
required to fix a bug.
Nenad noticed that when leaving maintenance, the servers' last_change
field was not updated. This is visible in the Status column of the stats
page in front of the state, as the cumulated time spent in the current state
is wrong, it starts from the last transition (typically ready->maint). In
addition, the backend's state was not updated either, because the down
transition is performed by set_backend_down() which also emits a log, and
it is this function which was extended to update the backend's last_change,
but it's not called for down->up transitions so that was not done.
The most visible (and unpleasant) effect of this bug is that it affects
slowstart so such a server could immediately restart with a significant
load ratio.
This should likely be backported to all stable releases.
Right now we're using a DWCAS to atomically set the running_mask while
being constrained by the thread_mask. This DWCAS is annoying because we
may seriously need it later when adding support for thread groups, for
checking that the running_mask applies to the correct group.
It turns out that the DWCAS is not strictly necessary because we never
need it to set the thread_mask based on the running_mask, only the other
way around. And in fact, the running_mask is always cleared alone, and
the thread_mask is changed alone as well. The running_mask is only
relevant to indicate a takeover when the thread_mask matches it. Any
bit set in running and not present in thread_mask indicates a transition
in progress.
As such, it is possible to re-arrange this by using a regular CAS around a
consistency check between running_mask and thread_mask in fd_update_events
and by making a CAS on running_mask then an atomic store on the thread_mask
in fd_takeover(). The only other case is fd_delete() but that one already
sets the running_mask before clearing the thread_mask, which is compatible
with the consistency check above.
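The resulting fd_takeover() core can be sketched like this (simplified,
without the owner check; an assumed shape of the patch, not a verbatim
excerpt):

  unsigned long old = 0;

  /* we must be strictly alone on this idle FD to take it over */
  if (!HA_ATOMIC_CAS(&fdtab[fd].running_mask, &old, tid_bit))
          return -1;      /* still running elsewhere: give up */

  /* success: publish the new owner thread, no DWCAS needed */
  HA_ATOMIC_STORE(&fdtab[fd].thread_mask, tid_bit);

  /* we are done with the takeover, release our running bit */
  HA_ATOMIC_AND(&fdtab[fd].running_mask, ~tid_bit);
  return 0;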
This change has happily survived 10 billion takeovers on a 16-thread
machine at 800k requests/s.
The fd-migration doc was updated to reflect this change.
This one is set whenever an FD is reported by a poller with a null owner,
regardless of the thread_mask. It has become totally meaningless because
it only indicates a migrated FD that was not yet reassigned to a thread,
but as soon as a thread uses it, the status will change to skip_fd. Thus
there is no reason to distinguish between the two, it adds more confusion
than it helps. Let's simply drop it.
If an error occurs during the CLI 'add server' handler, the newly
created server must be removed from the proxy list if already inserted.
Currently, this can happen on the extremely rare error during server id
generation if there is no id left.
The removal operation is not thread-safe, it must be conducted before
releasing the thread isolation.
This can be backported up to 2.4. Please note that dynamic server track
is not implemented in 2.4, so the release_server_track invocation must
be removed for the backport to prevent a compilation error.
In 2.4, runtime server deletion was brought by commit e558043e1 ("MINOR:
server: implement delete server cli command"). A comment remained in the
code about a theoretical race between the thread_isolate() call and another
thread being in the process of allocating memory before accessing the
server via a reference that was grabbed before the memory allocation,
since the thread_harmless_now()/thread_harmless_end() pair around mmap()
may have the effect of allowing cli_parse_delete_server() to proceed.
Now that the full thread isolation is available, let's update the code
to rely on this. Now it is guaranteed that competing threads will either
be in the poller or queued in front of thread_isolate_full().
This may be backported to 2.4 if any report of breakage suggests the bug
really exists, in which case the two following patches will also be
needed:
MINOR: threads: make thread_release() not wait for other ones to complete
MEDIUM: threads: add a stronger thread_isolate_full() call
The current principle of running under isolation was made to access
sensitive data while being certain that no other thread was using them
in parallel, without necessarily having to place locks everywhere. The
main use case are "show sess" and "show fd" which run over long chains
of pointers.
The thread_isolate() call relies on the "harmless" bit that indicates
for a given thread that it's not currently doing such sensitive things,
which is advertised using thread_harmless_now() and ends using
thread_harmless_end(); the latter also waits for possibly concurrent threads
to complete their work if they took this opportunity for starting
something tricky.
As some system calls were notoriously slow (e.g. mmap()), a bunch of
thread_harmless_now() / thread_harmless_end() were placed around them
to let waiting threads do their work while such other threads were not
able to modify memory contents.
But this is not sufficient for performing memory modifications. One such
example is the server deletion code. By modifying memory, it not only
requires that other threads are not playing with it, but also that they are
not about to touch it. The fact that a pool_alloc() or pool_free()
on some structure may call thread_harmless_now() and let another thread
start to release the same object's memory is not acceptable.
This patch introduces the concept of "idle threads". Threads entering
the polling loop are idle, as well as those that are waiting for all
others to become idle via the new function thread_isolate_full(). Once
thread_isolate_full() is granted, the thread is not idle anymore, and
it is released using thread_release() just like regular isolation. Its
users have to keep in mind that across this call nothing is granted as
another thread might have performed shared memory modifications. But
such users are extremely rare and are actually expecting this from their
peers as well.
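A typical usage sketch, e.g. for a memory-modifying operation such as a
server deletion (the helper name is hypothetical):

  thread_isolate_full();  /* all other threads are now idle: either in
                           * the poller or waiting on this call */

  /* shared memory may be modified safely here: no other thread is
   * inside an allocator or walking these structures
   */
  unlink_and_free_server(srv);  /* hypothetical operation */

  thread_release();       /* back to normal operation */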
Note that in case of backport, this patch depends on the previous patch:
MINOR: threads: make thread_release() not wait for other ones to complete
The original intent of making thread_release() wait for other requesters to
proceed was more of a fairness trade, guaranteeing that a thread that was
granted an access to the CPU would be in turn giving back once its job is
done. But this is counter-productive as it forces such threads to spin
instead of going back to the poller, and it prevents us from implementing
multiple levels of guarantees, as a thread_release() call could spin
waiting for another requester to pass while that requester expects
stronger guarantees than the current thread may be able to offer.
Let's just remove that wait period and let the thread go back to the
poller, a-la "race to idle".
While in theory it could possibly slightly increase the perceived
latency of concurrent slow operations like "show fd" or "show sess",
it is not the case at all in tests, probably because the time needed
to reach the poller remains extremely low anyway.
Probably due to a copy-paste, there were two indent levels in this function
since its introduction in 1.9 by commit 60b639ccb ("MEDIUM: hathreads:
implement a more flexible rendez-vous point"). Let's fix this.
If an error occurs during a dynamic server creation with tracking, it
must be removed from the tracked list. This operation is not thread-safe
and thus must be conducted under the thread isolation.
Track support for dynamic servers has been introduced in this release.
This does not need to be backported.
Previous patch b5c0d65 ("MINOR: proxy: disabled takes a stopping and a
disabled state") allows us to set 2 states for a stopped or a disabled
proxy. With this patch we are now able to show the stats of all proxies
when the process is in a stopping state, not only when there is some
activity on a proxy.
This patch should fix issue #1307.
This patch splits the disabled state of a proxy into a PR_DISABLED and a
PR_STOPPED state.
The first one is set when the proxy is disabled in the configuration
file, and the second one is set upon a stop_proxy().
Rename the 'dontloglegacyconnerr' option to 'log-error-via-logformat'
which is much more self-explanatory and readable.
Note: only legacy keywords don't use hyphens, it is recommended to
separate words with them in new keywords.
update_freq_ctr_period() was using relaxed atomics without using barriers,
which usually works fine on x86 but not everywhere else. In addition, some
values were read without being enclosed by barriers, allowing the compiler
to possibly prefetch them a bit earlier. Finally, freq_ctr_total() was also
reading these without enough barriers. Let's make explicit use of atomic
loads and atomic stores to get rid of this situation. This required to
slightly rearrange the freq_ctr_total() loop, which could possibly slightly
improve performance under extreme contention by avoiding to reread all
fields.
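The consistent-read part can be sketched like this (simplified; the real
freq_ctr_total() also ages the counters by the elapsed time, and the field
layout is assumed):

  /* retry until no rotation happened during the reads. The lowest bit
   * of curr_tick is used as an update lock, and the explicit atomic
   * loads keep the compiler from prefetching or tearing the values.
   */
  uint tick, curr, prev;

  do {
          tick = HA_ATOMIC_LOAD(&ctr->curr_tick);
          curr = HA_ATOMIC_LOAD(&ctr->curr_ctr);
          prev = HA_ATOMIC_LOAD(&ctr->prev_ctr);
  } while ((tick & 1) || tick != HA_ATOMIC_LOAD(&ctr->curr_tick));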
A backport may be done to 2.4 if a problem is encountered, but last tests
on arm64 with LSE didn't show any issue so this can possibly stay as-is.
This function already performs a number of checks prior to calling the
IOCB, and detects the change of thread (FD migration). Half of the
controls are still in each poller, and these pollers also maintain
activity counters for various cases.
Note that the unreliable test on thread_mask was removed so that only
the one performed by fd_set_running() is now used, since this one is
reliable.
Let's centralize all that fd-specific logic into the function and make
it return a status among:
FD_UPDT_DONE, // update done, nothing else to be done
FD_UPDT_DEAD, // FD was already dead, ignore it
FD_UPDT_CLOSED, // FD was closed
FD_UPDT_MIGRATED, // FD was migrated, ignore it now
Some pollers already used to call it last and have nothing to do after
it, regardless of the result. epoll has to delete the FD in case a
migration is detected. Overall this removes more code than it adds.
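An epoll-based loop can then consume the status like this (a sketch only:
event translation and variable names are simplified):

  for (count = 0; count < status; count++) {
          unsigned int e = epoll_events[count].events;
          int fd = epoll_events[count].data.fd;
          uint n = ((e & EPOLLIN)  ? FD_EV_READY_R : 0) |
                   ((e & EPOLLOUT) ? FD_EV_READY_W : 0);

          switch (fd_update_events(fd, n)) {
          case FD_UPDT_MIGRATED:
                  /* not ours anymore: stop receiving its events */
                  epoll_ctl(epoll_fd, EPOLL_CTL_DEL, fd, NULL);
                  break;
          case FD_UPDT_DEAD:
          case FD_UPDT_CLOSED:
                  break;  /* nothing else to do with this FD */
          case FD_UPDT_DONE:
                  break;  /* events handled by the iocb */
          }
  }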
If an MT-aware poller reports that a file descriptor was migrated, it
must stop reporting it. The simplest way to do this is to program an
update if not done yet. This will automatically mark the FD for update
on next round. Otherwise there's a risk that some events are reported
a bit too often and cause extra CPU usage with these pollers. Note
that epoll is currently OK regarding this. Select does not need this
because it uses a single shared events table, so in case of migration
no FD change is expected.
This should be backported as far as 2.2.
The skip_fd counter that is incremented when a migrated FD is reported
was abnormally high with poll. The reason is that it was accounted for
before preparing the polled events instead of being measured from
the reported events.
This mistake was done when the counters were introduced in 1.9 with
commit d80cb4ee1 ("MINOR: global: add some global activity counters to
help debugging"). It may be backported as far as 2.0.
In 1.8, commit ab62f5195 ("MINOR: polling: Use fd_update_events to update
events seen for a fd") updated the pollers to rely on fd_update_events(),
but the modification delayed the test of presence of the FD in the report,
resulting in owner/thread_mask and possibly event updates being performed
for each FD appearing in a block of 32 FDs around an active one. This
caused the request rate to be ~3 times lower with select() than poll()
under 6 threads.
This can be backported as far as 1.8.
A bug was introduced in 2.1-dev2 by commit 305d5ab46 ("MAJOR: fd: Get
rid of the fd cache."). Pollers "poll" and "evport" had the sleeping
bit accidentally removed before the syscall instead of after. This
results in them not being woken up by inter-thread wakeups, which is
particularly visible with the multi-queue accept() and with queues.
As a work-around, when these pollers are used, "nbthread 1" should
be used.
The fact that it has remained broken for 2 years is a great indication
that threads are definitely not enabled outside of epoll and kqueue,
hence why this patch is only tagged medium.
This must be backported as far as 2.2.
In case of connection failure, a dedicated error message is output,
following the format described in section "Error log format" of the
documentation. These messages cannot be configured through a log-format
option.
This patch adds a new option, "dontloglegacyconnerr", that disables
those error logs when set, and "replaces" them by a regular log line
that follows the configured log-format (thanks to a call to sess_log in
session_kill_embryonic).
The new fc_conn_err sample fetch allows the legacy error log information
to be added into a regular log format.
This new option is unset by default so the logging logic will remain the
same until this new option is used.
This new sample fetch, along with the ssl_fc_hsk_err_str fetch, contains
the last SSL error of the error stack that occurred during the SSL
handshake (from the frontend's perspective). The errors happening during
the client's certificate verification will still be given by the
ssl_c_err and ssl_c_ca_err fetches. This new fetch will only hold errors
retrieved by the OpenSSL ERR_get_error function.
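A hypothetical configuration combining these fetches with the logging
option described above could look like this (illustrative only; the option
was later renamed to "log-error-via-logformat", as noted earlier in this
log):

  frontend fe
      bind :443 ssl crt /etc/haproxy/cert.pem
      option dontloglegacyconnerr
      log-format "%ci:%cp [%tr] %ft err=%[fc_conn_err] hsk=%[ssl_fc_hsk_err_str]"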