There are a number of tests there which are performed on tasklets even
though they can never apply to them (various handlers, destroyed task or not,
arguments, results, ...). Instead let's have a single TASK_IS_TASKLET() test
and call the tasklet processing function directly, skipping all the rest.
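A minimal sketch of the resulting fast path, assuming the scheduler
structures of that era (exact bookkeeping omitted):

    if (TASK_IS_TASKLET(t)) {
        /* tasklets skip all the task-only tests: flag the tasklet as
         * running and call its handler directly */
        state = _HA_ATOMIC_XCHG(&t->state, TASK_RUNNING);
        t->process(t, t->context, state);
        continue;
    }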
With this done, it becomes visible that the only remaining unneeded code
is the update to curr_task, which is never used for tasklets except for
opportunistic reporting in the debug handler. The latter can only catch
si_cs_io_cb, which in practice doesn't appear in any report, so the extra
cost incurred there is pointless.
This change alone removes 700 bytes of code, mostly in
process_runnable_tasks(), and increases performance by about 1%.
In process_runnable_tasks() we perform a lot of dereferences of
task_per_thread[tid], but tid is thread-local and the compiler cannot
know that it doesn't change, so this results in lots of thread-local
accesses and array dereferences. By just keeping a local copy of this
pointer, we let the compiler optimize the code. Doing only this reduced
process_runnable_tasks() by 124 bytes in the fast path. Doing the same
in wake_expired_tasks() saves 16 extra bytes.
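A sketch of the idea; "tt" is a plain local pointer the compiler may keep
in a register across the whole loop (surrounding code omitted):

    /* read the thread-local slot once instead of at every use */
    struct task_per_thread * const tt = &task_per_thread[tid];

    while (!LIST_ISEMPTY(&tt->task_list)) {
        /* all further accesses go through tt, sparing a thread-local
         * load and an array dereference at each iteration */
    }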
In process_runnable_tasks(), after the task's process() function returns,
we used to check that the return value was neither NULL nor a tasklet before
updating profiling measurements. This is useless since only tasks can return
non-NULL here. Let's remove this test.
As identified in issue #278, the backport of commit c594039225 ("BUG/MINOR:
checks: do not uselessly poll for reads before the connection is up")
introduced a regression in 2.0 when default checks are enabled (not
"option tcp-check"), but it did not affect 2.1.
What happens is that in 2.0 and earlier we have the fd cache which makes
a speculative call to the I/O functions after an attempt to connect, and
the __event_srv_chk_r() function was absolutely not designed to be called
while a connection attempt is still pending. Thus what happens is that the
test for success/failure expects the verdict to be final before waking up
the check task, and since the connection is not yet validated, it fails.
It will usually work over the loopback depending on scheduling, which is
why it doesn't fail in reg tests.
In 2.1 after the failed connect(), we subscribe to polling and usually come
back with a validated connection, so the function is not expected to be
called before it completes, except if it happens as a side effect of some
spurious wake calls, which should not have any effect on such a check.
The other check types are not impacted by this issue because they all
check for a minimum data length in the buffer, and wait for more data
until they are satisfied.
This patch fixes the issue by explicitly checking that the connection
is established before trying to read or to give a verdict. This way the
function becomes safe to call regardless of the connection status (even
if it's still totally ugly).
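A sketch of the added guard, assuming the 2.0-era connection flags (the
label is illustrative):

    /* at the top of __event_srv_chk_r(): no read and no verdict until
     * the connection is confirmed established */
    if (!(conn->flags & CO_FL_CONNECTED))
        goto wait_more_data;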
This fix must be backported to 2.0.
If an error occurred on the connection or the conn-stream, no synchronous
send is performed. If the error was not already processed and there is no
more I/O, it will never be processed and the stream will never be notified
of this error. This may block the stream until a timeout is reached, or
indefinitely if there is no timeout.
Concretely, this bug can be triggered from time to time with h2spec, running
the test "http2/5.1.1/2".
This patch depends on the commit 328ed220a "BUG/MINOR: stream-int: Process
connection/CS errors first in si_cs_send()". Both must be backported to 2.0 and
probably to 1.9. In 1.9, the code is totally different, so this patch would have
to be adapted.
Errors on the connection or the conn-stream must always be processed in
si_cs_send(), even if the stream-interface is already subscribed on
sending. This patch does not fix any concrete bug per se, but it is required
by the following one to handle those errors during synchronous sends.
This patch must be backported with the following one to 2.0 and probably to 1.9
too, but with caution because the code is really different.
Now that we can wake up a remote thread's tasklet, it's way more
interesting to use a tasklet than a task in the accept queue, as it
avoids going through the whole scheduler. Just doing this increases
the accept rate by about 4%, overall recovering the slight loss
introduced by the tasklet change. In addition it makes sure that
even a heavily loaded scheduler (e.g. many very fast checks) will
not delay a connection accept.
When namespaces are used in the configuration, the respective namespace handles
are opened during config parsing and stored in an ebtree for lookup later.
Unfortunately, when the master process re-execs itself these file descriptors
were not closed, effectively leaking the fds and preventing destruction of
namespaces no longer present in the configuration.
This change fixes this issue by opening the namespace file handles as
close-on-exec, making sure that they will be closed during re-exec.
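A sketch of the change; the path is illustrative, the point is the
O_CLOEXEC flag:

    /* the fd will be closed automatically on the master's re-exec */
    int fd = open("/var/run/netns/mynetns", O_RDONLY | O_CLOEXEC);
    if (fd < 0)
        return 0;  /* reject the configuration */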
Change the tasklet code so that the tasklet list is now a mt_list.
That means that tasklets now have an associated tid, for the thread they
are expected to run on, and any thread can now call tasklet_wakeup() for
that tasklet.
One can change the associated tid with tasklet_set_tid().
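A sketch of the intended usage (handler, context and target tid are
illustrative):

    struct tasklet *tl = tasklet_new();
    tl->process = my_io_handler;       /* callback to run */
    tl->context = my_ctx;
    tasklet_set_tid(tl, target_tid);   /* thread it must run on */
    tasklet_wakeup(tl);                /* now safe from any thread */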
Instead of using the same type for regular linked lists and "autolocked"
linked lists, use a separate type, "struct mt_list", for the autolocked one,
and introduce a set of macros, similar to the LIST_* macros, with the
MT_ prefix.
When we use the same entry for both a regular list and an autolocked list,
as is done for the "list" field in struct connection, we now have to
explicitly cast it to struct mt_list when using the MT_ macros.
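A sketch of the resulting usage, assuming the new macro names:

    struct mt_list head;
    MT_LIST_INIT(&head);
    /* conn->list is declared as a plain struct list, hence the cast */
    MT_LIST_ADDQ(&head, (struct mt_list *)&conn->list);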
See GH issue #141 for all the context. In short, registered signal
handlers are not inherited by other threads during startup, which is
normally not a problem, except that we need the same thread as the one
doing the fork() to clean up the old process using waitpid() once its
death is reported via SIGCHLD, as happens in external checks.
The only simple solution to this at the moment is to make sure that
external checks are exclusively run on the first thread, the one
which registered the signal handlers on startup. It will be far more
than enough anyway given that external checks must not need to be
load balanced across multiple threads! A more complex solution could be
designed over the long term to let each thread deal with all signals
but it sounds overkill.
This must be backported as far as 1.8.
As stated in RFC7540#5.1, an endpoint that receives any frame other than
PRIORITY after receiving a RST_STREAM MUST treat that as a stream error of
type STREAM_CLOSED. However, frames carrying compression state must still be
processed before being dropped to keep the HPACK decoder synchronized. This
was the purpose of commit 8d9ac3ed8b ("BUG/MEDIUM: mux-h2: do not abort
HEADERS frame before decoding them"). But the test on the frame type was
inverted.
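A sketch of the corrected logic (surrounding code omitted):

    /* on a closed stream, HEADERS frames carry HPACK state and must be
     * decoded before being dropped; the old test mistakenly used "!=" */
    if (h2c->dft == H2_FT_HEADERS) {
        /* decode via h2c_decode_headers() to keep the HPACK decoder in
         * sync, then reply with RST_STREAM(STREAM_CLOSED) */
    }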
This bug is major because desynchronizing the HPACK decoder leads to mixing
up indexed headers across messages. From the moment a HEADERS frame is
received and ignored for a closed stream, wrong headers may be sent on the
following streams.
This patch may fix several bugs reported on github (#116, #290, #292). It must
be backported to 2.0 and 1.9.
The function parse_fcgi_flt() is called when the keyword "fcgi-app" is found on
a filter line. We don't need to compare it again in the function.
This patch fixes the issue #284. No backport needed.
This multiplexer is only available on the backend side. It may handle
multiplexed connections if the FCGI application supports it. A FCGI
application must be configured on the backend for this mux to be used. If
not redefined during request processing by the FCGI filter, this mux handles
all mandatory parameters.
There is a limitation in the way requests are processed: the parameters must
be encoded into a unique PARAMS record. This means that, once encoded, all
HTTP headers and FCGI parameters must be small enough to be stored in a
buffer. Otherwise, an internal processing error is returned.
The FCGI application handles all the configuration parameters used to format
requests sent to an application. The configuration of an application is grouped
in a dedicated section (fcgi-app <name>) and referenced in a backend to be used
(use-fcgi-app <name>). To be valid, a FCGI application must at least define a
document root. But it is also possible to set the default index, a regex to
split the script name and the path-info from the request URI, parameters to set
or unset... In addition, this patch also adds a FCGI filter, responsible for
all processing on a stream.
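An illustrative configuration (names and paths are examples):

    fcgi-app php-fpm
        docroot   /var/www/html
        index     index.php
        path-info ^(/.+\.php)(/.*)?$

    backend php-backend
        mode http
        use-fcgi-app php-fpm
        server app1 127.0.0.1:9000 proto fcgi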
When an HTX message is formatted to an H1 or H2 message, pseudo-headers
(with header names starting with a colon (':')) are now ignored. In fact,
for now, only H2 messages have such headers, and the H2 mux already skips
them when it creates the HTX message. But in the future, it may be useful to
keep these headers in the HTX message to help message analysis or to do some
processing during HTTP formatting. It would also be a good idea to have
scopes for pseudo-headers (:h1-, :h2-, :fcgi-...) to limit their usage to a
specific mux.
This function first tries to do a zero-copy transfer; otherwise, it adds a
data block. The same function is used for messages with a content-length,
chunked messages and messages with an unknown body length.
To avoid code duplication in the future FCGI mux, the functions parsing H1
messages and converting them into HTX have been moved to the file h1_htx.c.
Some specific parts remain in the H1 mux, but most of the parsing is now
generic.
Application is a generic term here. It means a module which handles its own
log server list, with no dependency on a proxy. Such applications can now
call the function app_log() to log messages, passing a log server list and a
tag as parameters. Internally, the function __send_log() has been adapted
accordingly.
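A sketch of the intended call, assuming this argument order; the module's
list, tag and message are illustrative:

    /* log through the module's own servers, with its own tag */
    app_log(&my_app->logsrvs, &my_app->tag, LOG_INFO,
            "client %s rejected", addr_str);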
Now, the following sample fetches may be used to get information about
authentication:
* http_auth_type : returns the auth method as supplied in Authorization header
* http_auth_user : returns the auth user as supplied in Authorization header
* http_auth_pass : returns the auth pass as supplied in Authorization header
Only Basic authentication is supported.
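An illustrative use in a configuration (assuming the method string matches
as written):

    # only accept Basic authentication and expose the user name
    http-request deny unless { http_auth_type -m str Basic }
    http-request set-header X-Auth-User %[http_auth_user]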
Most of the time, when a keyword is added in a proxy section or on a server
line, we need a post-parser callback to check the config validity for the
proxy or the server which uses this keyword.
It is possible to register a global post-parser callback, but all these
callbacks need to loop on the proxies and servers to do their job, which is
neither handy nor efficient. Instead, it is now possible to register
per-proxy and per-server post-check callbacks.
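A sketch of a per-proxy registration, assuming a REGISTER_POST_PROXY_CHECK
macro in the style of the existing REGISTER_* ones and the usual ERR_*
return convention ("my_option" is an illustrative field):

    /* called once for each proxy after the configuration is parsed */
    static int check_my_keyword(struct proxy *px)
    {
        if (px->my_option < 0)
            return ERR_FATAL | ERR_ALERT;
        return ERR_NONE;
    }
    REGISTER_POST_PROXY_CHECK(check_my_keyword);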
Most of the time, when an allocation is done during configuration parsing
because of a new keyword in a proxy section or on a server line, we must add
a call in the deinit() function to release the allocated resources. It is
not possible to handle this with a post-deinit callback because, at this
stage, the proxies and the servers are already released.
Now, it is possible to register deinit callbacks per-proxy or per-server. These
callbacks will be called for each proxy and server before releasing them.
When an error occurs in a mux, most of the time an error is also reported
on the conn-stream, leading to an error (read and/or write) on the channel.
When a parsing or a processing error is reported for the HTX message, it is
better to handle it first.
Since the commit 1b8e68e8 ("MEDIUM: stick-table: Stop handling stick-tables
as proxies."), the target field in the table context of the CLI applet is no
longer a pointer to a proxy. It was replaced by a pointer to a stktable. But
some parts of the code were not updated accordingly. The function
table_prepare_data_request() still tries to cast it to a pointer to a proxy.
The result is totally undefined. With a bit of luck, when the "show table"
command is used with a data type, we fail to find a table and the error
"Data type not stored in this table" is returned. But crashes may also be
experienced.
This patch fixes the issue #262. It must be backported to 2.0.
Recently, Lua code which uses the Proxy class (get_stats method) stopped
working ("table index is nil from [C] method 'get_stats'").
It probably affects other codepaths too.
This should be backported to 2.0 and 1.9.
In the function connect_server(), when we are not able to reuse a connection and
too many FDs are opened, the variable srv must be defined to kill an idle
connection.
This patch fixes the issue #257. It must be backported to 2.0.
This only happens during configuration parsing. The first leak is the string
representing the last converter parsed, if any. The second one is on the
error path, when the allocation of the ACL expression fails: in this case,
the sample was not released.
This patch fixes the issue #256. It must be backported to all stable versions.
Since the legacy HTTP mode has been removed, this flag is no longer
necessary. While removing this flag, a test on the HTX message at the end of
the function h2c_decode_headers() has also been removed, fixing the github
issue #244.
No backport needed.
When a filter returns an error during the HTTP analysis, an error must be
returned if the status code is not already set. On the request path, an error
400 is returned. On the response path, an error 502 is returned. The status is
considered as unset if its value is not strictly positive.
If needed, this patch may be backported to all versions having filters (as
far as 1.7). Because nobody has ever reported this bug, the backport to 2.0
is probably enough.
It is now possible to export stats using the JSON format from the HTTP stats
page. Like for the CSV export, to export stats in JSON, you must add the option
";json" on the stats URL. It is also possible to dump the JSON schema with the
option ";json-schema". Corresponding Links have been added on the HTML page.
This patch fixes the issue #263.
In several SSL functions, the XPRT context is retrieved before any check on
the connection. In the function ssl_sock_is_ssl(), a test suggests the
connection may be null. So, it is safer to test the SSL connection before
retrieving its XPRT context. This removes any ambiguity and prevents
possible null pointer dereferences.
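A sketch of the safer ordering (the exact body may differ):

    static inline int ssl_sock_is_ssl(const struct connection *conn)
    {
        /* test the connection itself before touching its xprt side */
        return conn && conn->xprt == &ssl_sock && conn->xprt_ctx;
    }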
This patch fixes the issue #265. It must be backported to 2.0.
It seems to be possible to have no frontend for a listener. A test was missing
before dereferencing it at the end of the function listener_accept().
This patch fixes the issue #264. It must be backported to 2.0 and 1.9.
This adds two extra fields to the stats, one for the current number of idle
connections and one for the configured limit. A tooltip link now appears on
the HTML page to show these values in front of the active connection values.
This should be backported to 2.0 and 1.9 as it's the only way to monitor
the idle connections behaviour.
As mentioned in previous commit, these flags do not map well to
modern poller capabilities. Let's use the FD_EV_*_{R,W} flags instead.
This first patch only performs a 1-to-1 mapping making sure that the
previously reported flags are still reported identically while using
the closest possible semantics in the pollers.
It's worth noting that kqueue will now support improvements such as
reporting shutdowns and errors distinctly in each direction, though this is
not exploited for now.
In order to address the absurd polling sequence described in issue #253,
let's make sure we disable receiving on a connection until it's established.
Previously with bottom-top I/Os, we were almost certain that a connection
was ready when the first I/O was confirmed. Now we can enter various
functions, including process_stream(), which will attempt to read
something, will fail, and will then subscribe. But we don't want them
to try to receive if we know the connection didn't complete. The first
prerequisite for this is to mark the connection as not ready for receiving
until it's validated. But we don't want to mark it as not ready for sending
because we know that attempting I/Os later is extremely likely to work
without polling.
Once the connection is confirmed we re-enable recv readiness. In order
for this event to be taken into account, the call to tcp_connect_probe()
was moved earlier, between the attempt to send() and the attempt to recv().
This way if tcp_connect_probe() enables reading, we have a chance to
immediately fall back to this and read the possibly pending data.
Now the trace looks like the following. It's far from being perfect
but we've already saved one recvfrom() and one epoll_ctl():
epoll_wait(3, [], 200, 0) = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLOUT, {u32=7, u64=7}}], 200, 1000) = 1
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
sendto(7, "OPTIONS / HTTP/1.0\r\n\r\n", 22, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 22
epoll_ctl(3, EPOLL_CTL_MOD, 7, {EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}], 200, 1000) = 1
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
recvfrom(7, "HTTP/1.0 200\r\nContent-length: 0\r\nX-req: size=22, time=0 ms\r\nX-rsp: id=dummy, code=200, cache=1, size=0, time=0 ms (0 real)\r\n\r\n", 16384, 0, NULL, NULL) = 126
close(7) = 0
'ssl_sock' wasn't fully initialized, so a new session could inherit some
flags from an old one.
This caused some fetches, related to the client's certificate presence or
its verify status and errors, to return erroneous values.
This issue could generate other unexpected behaviors because a new
session could also inherit other flags such as SSL_SOCK_ST_FL_16K_WBFSIZE,
SSL_SOCK_SEND_UNLIMITED, or SSL_SOCK_RECV_HEARTBEAT from an old session.
This must be backported to 2.0 but it's useless for earlier versions.
As discussed in issue #178, the change brought around 1.9-dev11 by commit
1eb6c55808 ("MINOR: lb: make the leastconn algorithm more accurate")
causes some harm in the situation it tried to improve. By always applying
the server's weight even for no connection, we end up always picking the
same servers for the first connections, so under a low load, if servers
only have either 0 or 1 connections, in practice the same servers will
always be picked.
This patch partially restores the original behaviour while still keeping
the spirit of the aforementioned patch. Now what is done is that servers
with no connections will always be picked first, regardless of their
weight, so they will effectively follow round-robin. Only servers with
one connection or more will see an accurate weight applied.
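A sketch of the resulting key computation (field and macro names assumed
from that code area):

    static unsigned int lc_srv_key(const struct server *s)
    {
        /* servers with no connection all get key 0: they are picked
         * first and rotate in round-robin order among themselves */
        if (!s->served)
            return 0;
        /* loaded servers keep an accurate weighted key */
        return 1 + (s->served * SRV_EWGHT_MAX) / s->cur_eweight;
    }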
This patch was developed and tested by @malsumis and @jaroslawr who
reported the initial issue. It should be backported to 2.0 and 1.9.
When an error occurs in the post-parser callback which checks configuration
validity of the option outgoing-headers-case-adjust-file, the error message is
freed too early, before being used.
No backport needed. It fixes the github issue #258.
It's pointless to perform a recv() call on a connection that is not yet
established. The only purpose used to be to subscribe, but that causes many
extra syscalls when we know we can do it later.
This patch only attempts a read if the connection is established or if
there is no write planned, since we want to be certain to be called again.
And in wake_srv_chk() we continue to attempt to read if the reader was not
subscribed, so as to perform the first read attempt. If a first result is
provided, __event_srv_chk_r() will not do anything anyway, so this is
totally harmless in that case.
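A sketch of the new condition, with hypothetical helper names for the two
sides of the test:

    /* read only when it can succeed, or when no send is planned so the
     * read subscription is our only guarantee to be woken up again */
    if (conn_is_established(conn) || !send_is_planned(check))
        __event_srv_chk_r(cs);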
This fix requires that commit "BUG/MINOR: checks: make __event_chk_srv_r()
report success before closing" is applied before, otherwise it will break
some checks (notably SSL) by doing them again after the connection is shut
down. This completes the fixes on the checks described in issue #253 by
roughly cutting the number of syscalls in half. It must be backported to
2.0.