Since HAProxy 2.3, OpenSSL 1.1.1 is a requirement for using a
multi-certificate bundle in the configuration. This patch emits a fatal
error when HAProxy tries to load a bundle with an older version of
OpenSSL.
This problem was encountered by a user in issue #990.
This must be backported to 2.3.
With the removal of the family-specific port setting, all protocols had
exactly the same implementation of ->add(). A generic one was created
with the name "default_add_listener" so that all other ones can now be
removed. The API was slightly adjusted so that the protocol and the
listener are passed instead of the listener and the port.
Note that all protocols continue to provide this ->add() method instead
of routinely calling default_add_listener() from create_listeners(). This
makes sure that any non-standard protocol will still be able to intercept
the listener addition if needed.
This could be backported to 2.3 along with the few previous patches on
listeners as a pure code cleanup.
In create_listeners() we iterate over a port range and call the
protocol's ->add() function to add a new listener on the specified
port. Only tcp4/tcp6/udp4/udp6 support a port, the other ones ignore
it. Now that we can rely on the address family to properly set the
port, better do it this way directly from create_listeners() and
remove the family-specific case from the protocol layer.
At various places we need to set a port on an IPv4 or IPv6 address, and
it requires casts that are easy to get wrong. Let's add a new set_port()
helper to the address family to assist in this. It will be directly
accessible from the protocol and will make the operation seamless.
Right now this is only implemented for sock_inet as other families do
not need a port.
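As an illustration, a minimal standalone sketch of such a helper could look
like the following (the function name is an assumption for this example, not
the actual sock_inet symbol):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  /* Hypothetical helper: set the port on an IPv4/IPv6 address without
   * exposing the casts to the caller. Other families are left untouched.
   */
  static void example_set_port(struct sockaddr_storage *addr, int port)
  {
      if (addr->ss_family == AF_INET)
          ((struct sockaddr_in *)addr)->sin_port = htons(port);
      else if (addr->ss_family == AF_INET6)
          ((struct sockaddr_in6 *)addr)->sin6_port = htons(port);
  }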
When a H1 message is parsed, if the parser state is switched to TUNNEL mode
just after the header parsing, the BODYLESS flag is set on the HTX
start-line. By transitivity, the corresponding flag is set on the message in
HTTP analysers. Thus it is possible to rely on it to not wait for the
request body.
CNT_LEN and TE_CHNK flags must be set on the message only when the
corresponding flag is set on the HTX start-line. Before, when the transfer
length was known (XFER_LEN set), HTTP_MSGF_TE_CHNK was the default. But
it is not appropriate. Now, it is only set if the message is chunked. Thus,
it is now possible to have a known transfer length without CNT_LEN or
TE_CHNK.
In addition, the BODYLESS flag may be set independently of the XFER_LEN one.
This flag is now unused. It was used in the REQ_WAIT_HTTP analyser, when a
stream was waiting for a request, to set the keep-alive timeout or to avoid
sending HTTP errors to the client.
It is now impossible to start the HTTP request processing in the stream
analysers with a partial or empty request message. The mux-h2 was already
waiting for the request headers before creating the stream. Now the mux-h1
does the same. All errors (aborts, timeouts or invalid requests) while waiting for
the request headers are now handled by the multiplexers. So there is no
reason to still handle them in the REQ_WAIT_HTTP (http_wait_for_request)
analyser.
To ensure there is no ambiguity, a BUG_ON() was added to exit if a partial
request is received in this analyser.
H1C_F_CS_* flags are renamed into H1C_F_ST_*. They reflect the connection
state. So "ST" is well suited. "CS" is confusing because it is also the
abbreviation for conn-stream.
In addition, H1C flags are reordered.
This is the reason for all previous patches. The conn-stream and the
associated stream are created as late as possible. It only concerns the
frontend connections. But it means the request headers, and possibly the
first data block, are received and parsed before the conn-stream
creation. To do so, an embryonic H1 stream, with no conn-stream, is
created. The result of this "early parsing" is stored in its rx buffer, used
to fill the request channel when the stream is created. During this step,
some HTTP errors may be returned by the mux. It must also handle
http-request/keep-alive timeouts. A significant change concerns the H1 to H2
upgrade. It happens very early now, and no H1 stream is created (and thus
of course no conn-stream).
The most important part of this patch is located in the h1_process()
function, because it must trigger the parsing when there is no H1
stream. The h1_recv() function has also been simplified.
For now, this part is unused. But this patch adds functions to handle errors
on idle and embryonic H1 connections and send corresponding HTTP error
messages to the client (400, 408 or 500). Thanks to previous patches, these
functions take care to update the right stats counters, but also the
counters tracked by the session.
A field to store the HTTP error code has been added in the H1C structure. It
is used for error retransmits, if any, and to report it in HTTP logs. It is
used to return the mux exit status code when the MUX_EXIT_STATUS ctl
parameter is requested.
When a log message is emitted from the session level, by a multiplexer,
there is no stream. Thus for an HTTP session, there is no status code and the
termination flags are not correctly set.
Thanks to the previous patch, the HTTP status code is deduced from the mux exit
status, using the MUX_EXIT_STATUS ctl param. This is only done for HTTP
frontends. If it is defined (!= 0), it is used to deduce the termination
flags.
The ctl param MUX_EXIT_STATUS can be requested to get the exit status of a
multiplexer. For instance, it may be an HTTP status code or an H2 error. For
now, 0 is always returned. When the mux h1 becomes able to return HTTP
errors itself, this ctl param will be used to get the HTTP status code for
the logs.
The mux_exit_status enum has been created to map internal mux exit statuses
to generic ones. Thus there are 5 possible statuses for now: success, invalid
error, timeout error, internal error and unknown.
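As a sketch, such a mapping enum could look like this (member names are
assumptions based on the five statuses listed above, not necessarily the
exact ones used in the code):

  /* Generic exit statuses a mux may report through the MUX_EXIT_STATUS
   * ctl param; names here are illustrative.
   */
  enum mux_exit_status {
      MUX_ES_SUCCESS,       /* message fully processed */
      MUX_ES_INVALID_ERR,   /* invalid input (e.g. parsing error) */
      MUX_ES_TOUT_ERR,      /* timeout while waiting for the peer */
      MUX_ES_INTERNAL_ERR,  /* internal processing error */
      MUX_ES_UNKNOWN        /* no specific status available */
  };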
The cumulative numbers of http requests, http errors, bytes received and
sent and their respective rates for tracked counters are now updated using
specific stream independent functions. These functions are used by the
stream but the aim is to allow the session to do so too. For now, there is
no reason to perform these updates from the session, except from the mux-h2
maybe. But, the mux-h1, on the frontend side, will be able to return some
errors to the client, before the stream creation. In this case, it will be
mandatory to update counters tracked at the session level.
An idle expiration date is added on the H1 connection with the function to
set it depending on connection state. First, there is no idle timeout on
backend connections. For idle frontend connections, the http-request or
keep-alive timeout is used, depending on which one is defined and whether it
is the first request or not. For embryonic connections, the http-request
timeout is always used, if defined. For attached or shut down connections, no idle
timeout is applied.
For now the idle expiration date is never set and the h1_set_idle_expiration
function remains unused.
When the conn-stream is detached from a H1 connection, there is no reason to
subscribe for reads or process pending input data if the connection is not
idle, because it means a shutdown is pending.
Conditions to set a timeout on the H1C task have been simplified or at least
changed to rely on H1 connection flags. Now, the following rules are used
(see the sketch after this list):
* the shutdown timeout is applied on dead (not alive) or shut down
connections.
* The client/server timeout is applied if there are still some pending
outgoing data.
* The client timeout is applied on alive frontend connections with no
conn-stream. It means on idle or embryonic frontend connections.
* For all other connections (backend or attached connections), no timeout
is applied. For frontend or backend attached connections, the timeout is
handled by the application layer. For idle backend connections, there is
no timeout.
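A hedged sketch of this selection logic, using simplified structures and
names that are assumptions derived from the rules above (not the actual
mux-h1 code):

  /* Illustrative only: pick the timeout to apply on the H1C task. */
  struct example_h1c {
      int is_back;      /* backend side */
      int is_alive;     /* connection is not dead */
      int is_shut;      /* shutdown in progress */
      int has_cs;       /* a conn-stream is attached */
      int has_output;   /* pending outgoing data */
  };

  enum example_timeout { TO_NONE, TO_SHUT, TO_CLIENT_OR_SERVER, TO_CLIENT };

  static enum example_timeout example_h1c_timeout(const struct example_h1c *h1c)
  {
      if (!h1c->is_alive || h1c->is_shut)
          return TO_SHUT;               /* dead or shut down connection */
      if (h1c->has_output)
          return TO_CLIENT_OR_SERVER;   /* still some outgoing data to push */
      if (!h1c->is_back && !h1c->has_cs)
          return TO_CLIENT;             /* idle or embryonic frontend */
      return TO_NONE;                   /* app layer handles it, or idle backend */
  }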
We now only rely on one flag to notify a shutdown. The shutdown is performed
at the connection level when there are no more pending outgoing data. So, it
means it is performed immediately if the output buffer is empty. Otherwise
it is deferred after the outgoing data are sent.
This simplifies the mux a bit because there is now only one flag to check.
Don't try to read more data if a parsing or a formatting error was reported
on the H1 stream. There is no reason to continue to process the messages for
the current connection in this case. If a parsing error occurs, it means the
input is invalid. If a formatting error occurs, it is an internal error and
it is probably safer to give up.
Mainly to make it easier to read. First of all, when a H1 connection is
still there, we check if the connection was stolen by another thread or
not. If so, we release the task and leave. Then we check if the task is
expired or not. Only expired tasks are considered. Finally, if a conn-stream
is still attached to the connection (H1C_F_CS_ATTACHED flag set), we
return. Otherwise, the task and the H1 connection are released.
Be prepared to have a H1 connection in one of the following states (see the
sketch below):
* A H1 connection waiting for a new message with no H1 stream.
H1C_F_CS_IDLE flag is set.
* A H1 connection processing a new message with a H1 stream but no
conn-stream attached. H1C_F_CS_EMBRYONIC flag is set.
* A H1 connection with a H1 stream and a conn-stream attached.
H1C_F_CS_ATTACHED flag is set.
* A H1 connection with no H1 stream, waiting to be released. No flag is set.
These flags are mutually exclusive. When none is set, it means the
connection will be released ASAP; only the remaining outgoing data must be sent
first. For now, the second state (H1C_F_CS_EMBRYONIC) is transient.
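A small sketch of how these states may be decoded from the flags (the flag
values below are made up for the example; only the names come from the
description above):

  /* Illustrative flag values and state decoding, not the actual bits. */
  #define EX_H1C_F_CS_IDLE      0x0001
  #define EX_H1C_F_CS_EMBRYONIC 0x0002
  #define EX_H1C_F_CS_ATTACHED  0x0004

  enum ex_h1c_state { EX_ST_IDLE, EX_ST_EMBRYONIC, EX_ST_ATTACHED, EX_ST_RELEASING };

  static enum ex_h1c_state ex_h1c_state(unsigned int flags)
  {
      if (flags & EX_H1C_F_CS_IDLE)
          return EX_ST_IDLE;        /* waiting for a new message, no H1 stream */
      if (flags & EX_H1C_F_CS_EMBRYONIC)
          return EX_ST_EMBRYONIC;   /* H1 stream but no conn-stream yet */
      if (flags & EX_H1C_F_CS_ATTACHED)
          return EX_ST_ATTACHED;    /* H1 stream and conn-stream attached */
      return EX_ST_RELEASING;       /* no flag: to be released ASAP */
  }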
For now this buffer is not used. But it will be used to parse the headers,
and possibly the first block of data, when no stream is attached to the H1
connection. The aim is to use it to create the stream, thanks to recent
changes in the stream creation API.
Dedicated functions are now used to create frontend and backend H1
streams. h1c_frt_stream_new() is now used to create frontend H1 streams and
h1c_bck_stream_new() to create backend ones. Both rely on h1s_new() function
to allocate the stream itself. It is a bit easier to add specific processing
depending on whether we are on the frontend or the backend side.
Instead of using H1S flags to report an error on the request or the
response, regardless of whether it is a parsing or a formatting error, we now
use a flag to report parsing errors and another one to report formatting
ones. This simplifies the message parsing. It is also easier to figure out
what error happened when one of these flags is set. The side may be deduced by
checking the H1C_F_IS_BACK flag.
Instead of using 2 flags on the H1 stream (H1S_F_BUF_FLUSH and
H1S_F_SPLICED_DATA), we now only use one flag on the H1 connection
(H1C_F_WANT_SPLICE) to notify we want to use splicing or we are using
splicing. This flag blocks the calls to rcv_buf() connection callback.
It is a bit easier to set the H1 connection capability to receive data in
its input buffer instead of relying on the H1 stream.
H1C_F_WAIT_OPPOSITE must be set on the H1 connection to stop reading more data
because we must be in sync with the opposite side. This flag replaces the
H1C_F_IN_BUSY flag. Its name is more explicit. It is automatically set on
the backend side when the mux is created. It is safe to do so because at
this stage, the request has not yet been sent to the server. This way, in
h1_recv_allowed(), a test on this flag is enough to block the reads instead
of testing the H1 stream state on the backend side.
It is now possible to set the buffer used by the channel request buffer when
a stream is created. It may be useful if input data have already been received,
instead of waiting for the first call to the mux rcv_buf() callback. This change
is mandatory to support H1 connections with no stream attached.
For now, the multiplexers don't pass any buffer. BUF_NULL is thus used to
call stream_create_from_cs().
This information is only provided by the mux-h1. But, thanks to previous patches,
we can get it from the session directly. There is no need to retrieve it
from the mux anymore.
Since the idle duration provided by the session is always up-to-date, there
is no longer any reason to rely on the multiplexer's cs_info to set it on the
stream.
When a log message is emitted from the session, using sess_log() function,
there is no stream available. In this case, instead of deducing the idle
duration from the accept date, we use the one provided by the session. 0 is
used if it is undefined (i.e. set to -1).
This information is reset for the next transaction, if the connection is kept
alive. From the stream's point of view, it should be the same as a new
connection, except there is no handshake. Thus the handshake duration is set
to 0.
The idle duration between two streams is added to the session structure. It
is not necessarily pertinent on all protocols. In fact, it is only defined
for H1 connections. It is the duration between two H1 transactions. But the
.get_cs_info() callback function on the multiplexers only exists because
this duration is missing at the session level. So it is a simplification
opportunity for a really low cost.
To reduce the cost, a hole in the session structure is filled by moving the
.srv_list field to the end of the structure.
IDLE frontend connections have no stream attached. The stream is only
created when new data are received, when the parsing of the next request
starts. Thus the keep-alive timeout, handled in the HTTP analysers, is not
considered while nothing is received. But this is precisely when this
timeout must be considered. Concretely, the http-keep-alive timeout is ignored while
no data are received. Only the client timeout is used. It will only be
considered on incomplete requests, if the http-request timeout is not set.
To fix the bug, the http-keep-alive timeout must be handled at the mux
level, for IDLE frontend connections only.
This patch should fix issue #984. It must be backported as far as
2.2. On prior versions, the stream is created earlier. So, it is not a
problem, except if this behavior changes of course (it was an optimization
in 2.2, but I don't remember the commit).
A copy-paste bug between {tcp,udp}{4,6}_add_listener() resulted in
using a struct sockaddr_in to set the TCP/UDP port while it ought to
be a struct sockaddr_in6. Fortunately, the port has the same offset
(2) in both so it was harmless. A cleaner way to proceed would be
to have a set_port function exported by the address family layer.
This needs to be backported to 2.3.
It seems to me that lua_close() must be called on all states at deinit
time, not just the first two ones. This is likely a remnant of commit
59f11be43 ("MEDIUM: lua-thread: Add the lua-load-per-thread directive").
There should likely be some memory leak reports when using Lua without
this fix, though none were observed for now.
No backport is needed as this was merged into 2.4-dev.
Lua dedicated TCP, HTTP and SSL sockets and proxies must be initialized
once. Right now, they are initialized from the Lua init state, but since
commit 59f11be43 ("MEDIUM: lua-thread: Add the lua-load-per-thread
directive") this function is called one time per lua context. This
caused some fields to be cleared and overwritten, and pre-allocated
objects to be lost. This is why the address sanitizer detected memory
leaks from the socket_ssl server initialization.
Let's move all the state-independent part of the function to the
hlua_init() function to avoid this.
No backport is needed, this is only 2.4-dev.
In case of successful unsafe method on a stored resource, the cached entry
must be invalidated (see RFC7234#4.4).
A "non-error response" is one with a 2xx (Successful) or 3xx (Redirection)
status code.
This implies that the primary hash must now be calculated on requests
that have an unsafe method (POST or PUT for instance) so that we can
disable the corresponding entries when we process the response.
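A simplified sketch of the invalidation decision (hypothetical helper names;
the real cache filter works on HTX messages, not on plain strings):

  #include <string.h>

  /* RFC 7231 safe methods; anything else is "unsafe" for caching purposes. */
  static int example_method_is_safe(const char *m)
  {
      return !strcmp(m, "GET") || !strcmp(m, "HEAD") ||
             !strcmp(m, "OPTIONS") || !strcmp(m, "TRACE");
  }

  /* Invalidate the stored entry when an unsafe request got a non-error
   * (2xx or 3xx) response, as required by RFC7234#4.4.
   */
  static int example_must_invalidate(const char *method, int status)
  {
      return !example_method_is_safe(method) && status >= 200 && status < 400;
  }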
The Cache-Control max-age and s-maxage directives should be followed by
a positive numerical value (see RFC 7234#5.2.1.1). According to the
specs, a sender "should not" generate a quoted-string value but we will
still accept this format.
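A small sketch of a tolerant parser for such a directive value, accepting an
optional pair of surrounding double quotes (illustrative only, not the cache
filter's actual parser):

  #include <stdlib.h>

  /* Parse a max-age/s-maxage value, optionally wrapped in double quotes.
   * Returns the number of seconds, or -1 on invalid input.
   */
  static long example_parse_maxage(const char *val)
  {
      char *end;
      long secs;
      int quoted = (*val == '"');

      if (quoted)
          val++;
      secs = strtol(val, &end, 10);
      if (end == val || secs < 0)
          return -1;                 /* not a positive numerical value */
      if (quoted && *end != '"')
          return -1;                 /* unterminated quoted-string */
      return secs;
  }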
When a response has an Age header (filled in by another cache on the
message's path) that is greater than its defined maximum age (extracted
either from cache-control directives or an expires header), it is
already stale and should not be cached.
The goal is to allow execution of one main lua state per thread.
This patch contains the main job. The lua init is done using these
steps:
- "lua-load-per-thread" loads the lua code in the first thread
- it creates the structs
- it stores loaded files
- the 1st step load is completed (execution of hlua_post_init)
and now we know the number of threads
- we initialize lua states for all remaining threads
- for each one, we load the lua file
- for each one, we execute post-init
Once all is loaded, we check the consistency of function references.
The rules are:
- a function reference cannot be in the shared lua state and in
a per-thread lua state at the same time.
- if a function reference is declared in a per-thread lua state, it
must be declared in all per-thread lua states
The goal is to allow execution of one main lua state per thread.
The array introduces storage of one reference per thread, because each
lua state can have a different reference id for the same function. A function
returns the preferred state id according to configuration and current
thread id.
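A minimal sketch of this per-thread reference storage (types and names are
assumptions for illustration; the hlua code differs):

  /* One Lua reference id per state: index 0 is the shared state,
   * indexes 1..nbthread are the per-thread states.
   */
  struct ex_hlua_function {
      const char *name;
      int *refs;               /* array of nbthread + 1 reference ids */
  };

  /* Preferred state id: the shared state (0) if the function was loaded
   * with "lua-load", otherwise the state of the current thread.
   */
  static int ex_pick_state_id(int loaded_in_shared_state, int thread_id)
  {
      return loaded_in_shared_state ? 0 : thread_id;
  }

  static int ex_function_ref(const struct ex_hlua_function *fcn, int state_id)
  {
      return fcn->refs[state_id];   /* each state has its own reference id */
  }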
The goal is to allow execution of one main lua state per thread.
"state_from" is a pointer to the parent lua state. "state_id"
is the index of the parent state id in the reference lua states
array. "state_id" is better because the lock is a "== 0" test
which is quicker than a pointer comparison. Moreover, the state_id
index could be used to index other things than the Lua state concerned,
for example the function references.
The goal is to allow execution of one main lua state per thread.
This function will initialize the struct with other things than 0.
With this helper function, the initialization is centralized and
it prevents mistakes. This patch also keeps a reference to each
declared function in a list. It will be useful in next patches to
control consistency of declared references.
The goal is to allow execution of one main lua state per thread.
The array of states is initialized to the max number of threads + 1.
Index 0 is defined as the common state shared by all threads, which
should be locked. The other indexes are each dedicated to one
thread. The old gL now becomes hlua_states[0].
The goal is to allow execution of one main lua state per thread.
This patch opens the way to addition of a per-thread dedicated lua state.
By passing the hlua we can figure out the original state that's been used
and decide whether to lock or not.
The goal is to allow execution of one main lua state per thread.
Stop using locks in the init part; we will use them only in parts where
the parent lua state is known, so we can decide about locking
according to the parent lua state.
The goal is to allow execution of one main lua state per thread.
This commit introduces this variable in the core. Lua state initialized
by thread will have access to this variable, which reports the executing
thread. 0 indicates the shared thread. Programs which must be executed
only once can check for core.thread <= 1.
The goal is to allow execution of one main lua state per thread.
This function will be called for each initialized lua state, so
one per thread. The split transforms the lua state variable from
global to local.
The goal is to allow execution of one main lua state per thread.
This function will be called once per thread, using different Lua
states. This patch prepares the work.
The goal is to allow execution of one main lua state per thread.
The function hlua_ctx_init() now gets the original lua state from
its caller. This allows the initialisation of lua_thread (coroutines)
from any master lua state.
The parent lua state is stored in the hlua struct.
This patch is a temporary transition, it will be modified later.
The goal is to allow execution of one main lua state per thread.
This is preparatory work in order to init more than one stack
in the lua-thread objective.
The goal is to allow execution of one main lua state per thread.
Because this struct will be filled after the configuration parser, we
cannot copy the content. The current state of the HAProxy code doesn't
justify this change; it is an update preparing the next steps.
The goal is to no longer use "struct hlua" with global main lua_state.
The usage of the "struct hlua" is no longer required. This patch replaces
this struct by another one.
Now, the runtime Lua phase is separated from the Lua start phase.
The goal is to no longer use "struct hlua" with global main lua_state.
This patch returns a NULL value when some code tries to get the hlua struct
associated with a task through hlua_gethlua(). This function is useful
only at runtime because the struct hlua contains only runtime states.
Some Lua functions allowed to yield are called from the init environment.
I'm not sure this is a good practice. Maybe it would be wise to
disallow calling this kind of function.
The goal is to no longer use "struct hlua" with global main lua_state.
If somewhere in the code hlua_ctx_renew() is called with a global Lua
context, we have a serious bug. A crash is better than working with
this bug, so this patch removes a useless check.
Previously, this check was used by the hlua_post_init() function:
it used a call to the runtime hlua_ctx_resume() function, and this call
no longer exists.
The goal is to no longer use "struct hlua" with global main lua_state.
The hlua_post_init() is executed during start phase, it does not require
yielding nor any advanced runtime error processing. Let's simplify this
by re-implementing the code using lower-level functions which directly
take a state and not an hlua anymore.
The goal is to no longer use "struct hlua" with global main lua_state
and directly take the state instead.
This patch removes the implicit dependency on this struct in
the hlua_prepend_path() function.
Let's switch memory accounting to atomics so that the allocator function
may safely be used from concurrent Lua states.
Given that this function is extremely hot on the call path, we try to
optimize it for the most common case, which is:
- no limit
- there's enough memory
The accounting is what is particularly expensive in threads since all
CPUs compete for a cache line, so when the limit is not used, we don't
want to use accounting. However we need to preserve it during the boot
phase until we may parse a "tune.lua.maxmem" value. For this, we turn
the unlimited "0" value to ~0 at the end of the boot phase to mark the
definite end of accounting. The function then detects this value and
directly jumps to realloc() in this case.
When the limit is enforced however, we use a CAS to check and reserve
our share of memory, and we roll back on failure. The CAS is used both
for increments and decrements so that a single operation is enough to
update the counters.
The function really has the semantics of a realloc() except that it
also passes the old size to help with accounting. No need to special
case the free or malloc, realloc does everything we need.
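A simplified sketch of this accounting strategy with C11 atomics, under the
assumptions described above (boot phase uses limit 0, ~0 marks the end of
accounting); this is not the actual hlua allocator:

  #include <stdatomic.h>
  #include <stdlib.h>

  static _Atomic size_t ex_allocated;   /* bytes currently accounted for */
  static size_t ex_limit;               /* 0 during boot, ~0 once unlimited */

  /* realloc-like allocator keeping track of the memory used by Lua. */
  static void *ex_lua_realloc(void *ptr, size_t old_size, size_t new_size)
  {
      size_t prev, next;

      if (ex_limit == ~(size_t)0) {     /* accounting definitely over */
          if (!new_size) {
              free(ptr);
              return NULL;
          }
          return realloc(ptr, new_size);
      }

      if (!new_size) {                  /* plain free: release our share */
          free(ptr);
          atomic_fetch_sub(&ex_allocated, old_size);
          return NULL;
      }

      /* reserve (or release) the size difference with a single CAS loop */
      prev = atomic_load(&ex_allocated);
      do {
          next = prev - old_size + new_size;
          if (ex_limit && next > ex_limit)
              return NULL;              /* would exceed tune.lua.maxmem */
      } while (!atomic_compare_exchange_weak(&ex_allocated, &prev, next));

      ptr = realloc(ptr, new_size);
      if (!ptr) {                       /* roll back the reservation */
          if (new_size > old_size)
              atomic_fetch_sub(&ex_allocated, new_size - old_size);
          else
              atomic_fetch_add(&ex_allocated, old_size - new_size);
      }
      return ptr;
  }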
If the session is not established, the applet handler could leave
with the applet detached from the ring. At the next call, the attach
counter will be decreased again, causing unpredictable behavior.
This patch should be backported to branches >= 2.2.
With commit a1f12746b ("MINOR: traces: add a new level "error" below
the "user" level") a new trace level was inserted, resulting in
shifting all existing ones by one. But the levels reported in the
__trace() function were not updated accordingly, resulting in
TRACE_LEVEL_DEVELOPER not being properly reported anymore. This
patch fixes it by extending the number of levels to 6.
No backport is needed.
When many concurrent requests targeting the same resource were seen, the
cache could sometimes be filled by too many partial responses resulting
in the impossibility to cache a single one of them. This happened
because the actual tree insertion happened only after all the payload of
every response was seen. So until then, every response was added to the
cache because none of the streams knew that a similar request/response
was already being treated.
This patch consists in adding the cache_entry as soon as possible in the
tree (right after the first packet) so that the other responses do not
get cached as well (if they have the same primary key).
A "complete" flag is also added to the cache_entry so that we know if
all the payload is already stored in the entry or if it is still being
processed.
Turn the "Accept-Encoding" value to lower case before processing it.
Calculate the CRC on every token instead of a sorted concatenation of
them all (in order to avoid copying them) then XOR all the CRCs into a
single hash (while ignoring duplicates).
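A sketch of this normalization, using a bitwise CRC-32 for illustration
(HAProxy has its own hash helpers; the names, the token size limit and the
omitted q-value/duplicate handling are simplifications):

  #include <ctype.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Reflected CRC-32 (poly 0xEDB88320), bit by bit, for illustration only. */
  static uint32_t ex_crc32(const char *buf, size_t len)
  {
      uint32_t crc = ~0u;

      while (len--) {
          crc ^= (unsigned char)*buf++;
          for (int i = 0; i < 8; i++)
              crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
      }
      return ~crc;
  }

  /* Hash an Accept-Encoding value: lower-case each comma-separated token,
   * CRC it, and XOR all CRCs together so the token order does not matter.
   */
  static uint32_t ex_accept_encoding_hash(const char *value)
  {
      uint32_t hash = 0;
      char token[64];

      while (*value) {
          size_t len = 0;

          while (*value == ' ' || *value == '\t')
              value++;                  /* skip optional whitespace */
          while (*value && *value != ',') {
              if (len < sizeof(token))
                  token[len++] = tolower((unsigned char)*value);
              value++;
          }
          if (len)
              hash ^= ex_crc32(token, len);
          if (*value == ',')
              value++;
      }
      return hash;
  }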
Lua allows registering multiple sample-fetches, converters, actions, cli
keywords and applets/services with the same name. This is absolutely useless
since only the first registration will be used. This patch emits a warning if this case
is encountered.
This patch could be backported until 1.8, with the 3 associated patches:
- MINOR: actions: Export actions lookup functions
- MINOR: actions: add a function returning a service pointer from its name
- MINOR: cli: add a function to look up a CLI service description
This function will be useful to check if the keyword is already registered.
Also add a define for the max number of args.
This will be needed by a next patch to fix a bug and will have to be
backported.
This function simply calls action_lookup() on the private service_keywords,
to look up a service name. This will be used to detect double registration
of the same service from Lua.
This will be needed by a next patch to fix a bug and will have to be
backported.
These functions will be useful to check if a keyword is already registered.
This will be needed by a next patch to fix a bug, and will need to be
backported.
Operations luaL_openlibs() and lua_prepend_path() are processed without
the safe context, so in case of failure HAProxy aborts or stops without an
error message.
This patch could be backported until 1.8
"lua-load" doesn't check if the expected parameter is present. It tries to
directly open() the argument at the second position. So if the filename is
omitted, it tries to load an empty filename.
This patch could be backported until 1.8
The stats on haproxy.org reported ~12k GOAWAY for ~34k connections, with
only 2 protocol errors. It turns out that the GOAWAY frame counter added
in commit a8879238c ("MINOR: mux-h2: report detected error on stats")
matches a bit too many situations. First it counts those which are not
sent as well as failed retries, second it counts as errors the cases of
attempts to cleanly close, while it's titled "GOAWAY sent on detected
error". Let's address this by moving the counter up one line and excluding
the clean codes.
This can be backported to 2.3.
A number of traces could be added, and a few TRACE_PROTO were replaced
with TRACE_ERROR. The goal is to be able to enable error tracing only
to detect anomalies.
It looks like they're mostly correct as they don't seem to strike on
valid H2 traffic but are very verbose on h2spec.
Sometimes it would be nice to be able to only trace abnormal events such
as protocol errors. Let's add a new "error" level below the "user" level
for this. This will allow adding TRACE_ERROR() at various error points
and only see them.
Since commit a8879238c ("MINOR: mux-h2: report detected error on stats")
we now have some error stats on stream/connection level protocol errors,
but some were improperly marked as stream-level while they're connection-level, and
2 or 3 relevant ones were missing and have now been added.
This could be backported to 2.3.
This patch adds a new logging variable '%HPO' for logging HTTP path only
(without query string) from relative or absolute URI.
For example:
  log-format "hpo=%HPO hp=%HP hu=%HU hq=%HQ"

  GET /r/1 HTTP/1.1
  =>
  hpo=/r/1 hp=/r/1 hu=/r/1 hq=

  GET /r/2?q=2 HTTP/1.1
  =>
  hpo=/r/2 hp=/r/2 hu=/r/2?q=2 hq=?q=2

  GET http://host/r/3 HTTP/1.1
  =>
  hpo=/r/3 hp=http://host/r/3 hu=http://host/r/3 hq=

  GET http://host/r/4?q=4 HTTP/1.1
  =>
  hpo=/r/4 hp=http://host/r/4 hu=http://host/r/4?q=4 hq=?q=4
Since 2.3, the default local log format always adds the hostname field.
This behavior change was due to the log/sink re-work, because according
to rfc3164 the hostname field is mandatory.
This patch re-introduces a legacy "local" format which is analogous
to rfc3164 but with the hostname stripped. This is the new
default if logs are generated by haproxy.
To stay compliant with previous configurations, the option
"log-send-hostname" acts as if the default format is switched
to rfc3164.
This patch addresses GitHub issue #963.
This patch should be backported to branches >= 2.3.
In issue #958 Ashley Penney reported intermittent crashes on AWS's ARM
nodes which would not happen on x86 nodes. After investigation it turned
out that the Neoverse N1 CPU cores used in the Graviton2 CPU are much
more aggressive than the usual Cortex A53/A72/A55 or any x86 regarding
memory ordering.
The issue that was triggered there is that if a tasklet_wakeup() call
is made on a tasklet scheduled to run on a foreign thread and that
tasklet is just being dequeued to be processed, there can be a race at
two places:
- if MT_LIST_TRY_ADDQ() happens between MT_LIST_BEHEAD() and
LIST_SPLICE_END_DETACHED() if the tasklet is alone in the list,
because the emptiness test matches;
- if MT_LIST_TRY_ADDQ() happens during LIST_DEL_INIT() in
run_tasks_from_lists(), then depending on how LIST_DEL_INIT() ends
up being implemented, it may even corrupt the adjacent nodes while
they're being reused for the in-tree storage.
This issue was introduced in 2.2 when support for waking up remote
tasklets was added. Initially the attachment of a tasklet to a list
was enough to know its status and this used to be stable information.
Now it's not sufficient to rely on this anymore, thus we need to use
a different information.
This patch solves this by adding a new task flag, TASK_IN_LIST, which
is atomically set before attaching a tasklet to a list, and is only
removed after the tasklet is detached from a list. It is checked
by tasklet_wakeup_on() so that it may only be done while the tasklet
is out of any list, and is cleared during the state switch when calling
the tasklet. Note that the flag is not set for pure tasks as it's not
needed.
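A hedged sketch of this wakeup guard using C11 atomics (the flag name is taken
from the description above, but the structures and helpers are simplified
assumptions, not the scheduler's real code):

  #include <stdatomic.h>

  #define EX_TASK_IN_LIST 0x0100   /* tasklet is queued in a wakeup list */

  struct ex_tasklet {
      _Atomic unsigned int state;
  };

  /* Only queue the tasklet if it is not already in a list: the flag is set
   * atomically before the attachment, so a concurrent wakeup sees it and
   * does nothing.
   */
  static int ex_tasklet_wakeup(struct ex_tasklet *tl)
  {
      unsigned int prev = atomic_fetch_or(&tl->state, EX_TASK_IN_LIST);

      if (prev & EX_TASK_IN_LIST)
          return 0;                /* already queued, nothing to do */
      /* ... attach tl to the target thread's tasklet list here ... */
      return 1;
  }

  /* When the owner thread dequeues the tasklet to run it, the flag is
   * cleared as part of the state switch, allowing wakeups again.
   */
  static void ex_tasklet_dequeued(struct ex_tasklet *tl)
  {
      atomic_fetch_and(&tl->state, ~(unsigned int)EX_TASK_IN_LIST);
  }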
However this introduces a new special case: the function
tasklet_remove_from_tasklet_list() needs to keep both states in sync
and cannot check both the state and the attachment to a list at the
same time. This function is already limited to being used by the thread
owning the tasklet, so in this case the test remains reliable. However,
just like its predecessors, this function is wrong by design and it
should probably be replaced with a stricter one, a lazy one, or be
totally removed (it's only used in checks to avoid calling a possibly
scheduled event, and when freeing a tasklet). Regardless, for now the
function exists so the flag is removed only if the deletion could be
done, which covers all cases we're interested in regarding the insertion.
This removal is safe against a concurrent tasklet_wakeup_on() since
MT_LIST_DEL() guarantees the atomic test, and will ultimately clear
the flag only if the task could be deleted, so the flag will always
reflect the last state.
This should be carefully backported as far as 2.2 after some
observation period. This patch depends on previous patch
"MINOR: task: remove __tasklet_remove_from_tasklet_list()".
This function is only used at a single place directly within the
scheduler in run_tasks_from_lists() and it really ought not to be called
by anything else, regardless of what its comment says. Let's delete
it, move the two lines directly into the call place, and take this
opportunity to factor the atomic decrement on tasks_run_queue. A comment
was added on the remaining one, tasklet_remove_from_tasklet_list(), to
mention the risks of using it.
In process_runnable_tasks(), we walk the run queue and pick tasks to
insert them into the local list. And for each of these operations we
perform a few increments, some of which are atomic, and they're even
performed under the runqueue's lock. This is useless inside the loop,
better do them at the end, since we don't use these values inside the
loop and they're not used anywhere else either during this time. The
only one is task_list_size which is accessed in parallel by other
threads performing remote tasklet wakeups, but it's already
approximate and is used to decide to get out of the loop when the
limit is reached. So now we compute it first as an initial budget
instead.
This function is only called at a single place and adds more confusion
than it removes. It also makes one think it could be used outside of
the scheduler while it must absolutely not. Let's just move its two
lines to the call place, making the code more readable there. In
addition this clearly shows that the preliminary LIST_INIT() is
useless since the entry is immediately overwritten.
Commit a5a447984 ("MINOR: debug: add "debug dev sched" to stress the
scheduler.") doesn't scale with threads because ha_random64() takes care
of being totally thread-safe for use with UUIDs. We don't need this for
the stress-testing functions, let's just implement a xorshift PRNG
instead. On 8 threads the performance jumped from 230k ctx/s with 96%
spent in ha_random64() to 14M ctx/s.
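For reference, a classic xorshift64 step looks like the sketch below (the shift
constants are the usual Marsaglia ones, not necessarily the ones used in the
patch, and the seed is arbitrary):

  #include <stdint.h>

  /* Per-thread xorshift64 state; must be seeded with a non-zero value. */
  static __thread uint64_t ex_rnd_state = 0x9876543210abcdefULL;

  static uint64_t ex_rnd64(void)
  {
      uint64_t x = ex_rnd_state;

      x ^= x << 13;
      x ^= x >> 7;
      x ^= x << 17;
      return ex_rnd_state = x;
  }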
This command supports starting a bunch of tasks or tasklets, either on the
current thread (mask=0), all (default), or any set, either single-threaded
or multi-threaded, and possibly auto-scheduled.
These tasks/tasklets will randomly pick another one to wake it up. The
tasks only do it 50% of the time while tasklets always wake two tasks up,
in order to achieve roughly 50% load (since the target might already be
woken up).
res.body may be called from a health-check. It is probably never used, but it is
possible. In such a case, there is no channel. Thus we must not use it
unconditionally to set the flag SMP_F_MAY_CHANGE on the smp.
Now the condition tests the channel first. In addition, the flag is not set if the
payload is fully received.
This patch must be backported as far as 2.2.
L7OKC may now be used as an error status for an HTTP/TCP expect rule. Thus
it is for instance possible to write:
option httpchk GET /isalive
http-check expect status 200,404
http-check expect status 200 error-status L7OKC
It is more or less the same as the disable-on-404 option, except that a server
which is DOWN and comes back up but still replies with a 404 will be switched
to the NOLB state, while it would stay in the DOWN state with the disable-on-404 option.
Regarding the health counter, a check finished with the CONDPASS result is
now handled the same as with the PASSED result: the health counter is always
incremented. Before, it was only incremented if the health counter was not 0.
There is no change for the disable-on-404 option because it is only
evaluated for running or stopping servers, i.e. with a health check counter
greater than 0. But it will make it possible to handle the (STOPPED -> STOPPING)
transition for servers.
The parsing of the check options based on tcp-check rules (redis, spop,
smtp, http...) is moved away from check.c. Now, these functions are placed
in tcpcheck.c. These functions are only related to the tcpcheck ruleset
configured on a proxy and not to the health-check attached to a server.
This option is now deprecated. It is recent, but it is now marked as
deprecated as far back as 2.2. Thus, there is now a warning in 2.4 if this
option is still used. It will be removed in 2.5.
Because 2.3 is quite new, this patch may be backported to 2.3.
This option is now ignored because I/O check buffers are now allocated using the
buffer pool. Thus, it is marked as deprecated in the documentation and ignored
during the configuration parsing. The field is also removed from the global
structure.
Because this option is ignored since a recent fix, backported as far as 2.2,
this patch should be backported too. Especially because it updates the
documentation.