Randomly, haproxy could fail to start when an "http-request capture"
action is defined, without any change to the configuration. The issue
depends on the memory content, which may raise a fatal error like :
unable to find capture id 'xxxx' referenced by http-request capture
rule
Commit fd608dd2 already prevents the condition from happening, but this one
should be included for completeness and to reflect the code on the
response side.
The issue was introduced recently by commit 29730ba5 and should only be
backported to haproxy 1.8.
Adrian Williams reported that several balancing methods were broken and
sent all requests to one backend. This is a regression in haproxy 1.8 where
the server score was not correctly recalculated.
This fix must be backported to the 1.8 branch.
When an end of stream has been reported, we should not try to receive again
as the mux layer might not be prepared for this and could report unexpected
errors.
This is more of a strengthening measure that follows the introduction of
conn_stream that came in 1.8. It's desired to backport this into 1.8 though
it's uncertain at this time whether it may have caused real issues.
Commit 4974561 ("BUG/MEDIUM: h2: enforce the per-connection stream limit")
implemented a stream limit enforcement on the connection but it was not
correctly done as it would count streams still known by the connection,
which includes the lingering ones that are already marked close. We need
to count only the non-closed ones, which this patch does. The effect is
that some streams are rejected a bit before the limit.
This fix needs to be backported to 1.8.
The HTTP forwarding engine needs to disable lingering on requests in
case the connection to the server has to be suddenly closed due to
http-server-close being used, so that we don't accumulate lethal
TIME_WAIT sockets on the outgoing side. A problem happens when the
server doesn't advertise a response size, because the response
message quickly goes through the MSG_DONE and MSG_TUNNEL states,
and once the client has transferred all of its data, it turns to
MSG_DONE and immediately sets NOLINGER and closes before the server
has a chance to respond. The problem is that this destroys some of
the pending DATA being uploaded, the server doesn't receive all of
them, detects an error and closes.
This early NOLINGER is inappropriate in this situation because it
happens before the response is transmitted. This state transition
to MSG_TUNNEL doesn't happen when the response size is known since
we stay in MSG_DATA (and related states) during all the transfer.
Given that the issue is only related to connections not advertising
a response length and that by definition these connections cannot be
reused, there's no need for NOLINGER when the response's transfer
length is not known, which can be verified when entering the CLOSED
state. That's what this patch does.
This fix needs to be backported to 1.8 and very likely to 1.7 and
older as it affects the very rare case where a client immediately
closes after the last uploaded byte (typically a script). However
given that the risk of occurrence in HTTP/1 is extremely low, it is
probably wise to wait before backporting it to versions older than 1.8.
Tunnelled responses are those without a content-length nor a chunked
encoding. They are specially dealt with in the current code but the
behaviour is not correct. The fact that the chunk size is left to zero
with a state artificially set to CHUNK_SIZE validates the test on
whether or not to set the end of stream flag. Thus the first DATA
frame always carries the ES flag and subsequent ones remain blocked.
This patch fixes it in two ways :
- update h1m->curr_len to the size of the current buffer so that it
is properly subtracted later to find the real end ;
- don't set the state to CHUNK_SIZE when there's no content-length
and instead set it to CHUNK_SIZE only when there's chunking.
This fix needs to be backported to 1.8.
We used to switch the stream's state to HREM when seeing an ES bit on
the DATA frame before actually being able to process that frame, possibly
resulting in the DATA frame being processed after the stream was seen as
half-closed and possibly being rejected. The state must not change before
the frame is really processed.
Also fixes a harmless typo in the flag name which should have DATA and
not HEADERS in its name (but all values are equal).
Must be backported to 1.8.
Since the last commit, DATA frames are no longer required to be complete,
so better start with what we have. Only the HEADERS frame requires this. This
may be backported as part of the upload fixes.
We currently have a problem with DATA frames when they don't fit into
the destination buffer. While it was imagined that in theory this never
happens, in practice it does when "option http-buffer-request" is set,
because the headers don't leave the target buffer before trying to read
so if the frame is full, there's never enough room.
This fix consists in reading what can be read from the frame and advancing
the input buffer. Once the contents left are only the padding, the frame
is completely processed. This also solves another problem we had which is
that it was possible to fill a request buffer beyond its reserve because
the <count> argument was not respected in h2_rcv_buf(). Thus it's possible
that some POST requests sent at once with a headers+body filling exactly a
buffer could result in "400 bad req" when trying to add headers.
This fix must be backported to 1.8.
We'll try to process partial frames and for this we need to know the
padding length. The first step requires to extract it during the parsing
and store it in the demux context in the connection. Till now it was only
processed at once.
Even after previous commit ("BUG/MEDIUM: h2: work around a connection
API limitation") there is still a problem with some requests. Sometimes
when polling for more request data while some pending data lies in the
buffer, there's no way to enter h2_recv() because the FD is not marked
ready for reading.
We need to slightly change the approach and make h2_recv() only receive
from the buffer and h2_wake() always attempt to demux if the demux is not
blocked.
However, if the connection is already being polled for reading, it will
not wake up from polling. For this reason we need to cheat and also
pretend there is a request to send data, which ensures that as soon as any
direction may move, we can continue to demux. This shows that in the
long term we probably need a better way to resume an interrupted
operation at the mux level.
With this fix, no more hangups happen during uploads. Note that this
time the setup required to provoke the hangups was a bit complex :
- client is "curl" running on local host, uploading 1.7 MB of
data via haproxy
- haproxy running on local host, forwarding to a remote server
through a 100 Mbps only switch
- timeouts disabled on haproxy
- remote server made of thttpd executing a cgi reading request data
through "dd bs=10" to slow down everything.
With such a setup, around 3-5% of the connections would hang up.
This fix needs to be backported to 1.8.
The connection API permits us to enable or disable receiving on a
connection. The underlying FD layer arranges this with the polling
and the fd cache. In practice, if receiving was allowed and an end
of buffer was reached, the FD is subscribed to the polling. If later
we want to process pending data from the buffer, we have to enable
receiving again, but since it's already enabled (in polled mode),
nothing happens and the pending data remain stuck until a new event
happens on the connection to wake the FD up. This is a limitation of
the internal connection API which is not very friendly to the new mux
architecture.
The visible effect is that certain uploads to slow servers experience
truncation on timeout on their last blocks because nothing new comes
from the connection to wake it up while it's being polled.
In order to work around this, there are two solutions :
- either cheat on the connection so that conn_update_xprt_polling()
always performs a call to fd_may_recv() after fd_want_recv(), that
we can trigger from the mux by always calling conn_xprt_stop_recv()
before conn_xprt_want_recv(), but that's a bit tricky and may have
side effects on other parts (eg: SSL)
- or we refrain from receiving in the mux as soon as we're busy on
anything else, regardless of whether or not some room is available
in the receive buffer.
This patch takes the second approach above. This way once we read some
data, as soon as we detect that we're stuck, we immediately stop receiving.
This ensures the event doesn't go into polled mode for this period and
that as soon as we're unstuck we can continue. In fact this guarantees
that we can only wait on one side of the mux for a given direction. A
future improvement of the connection layer should make it possible to
resume processing of an interrupted receive operation.
This fix must be backported to 1.8.
In order to allow demuxing when the demux buffer is full, we need to
enable data receipt in multiple conditions. Since the conditions are a
bit complex, they have been delegated to a new function h2_recv_allowed()
which follows these rules :
- if an error or a shutdown was detected on the connection and the buffer
is empty, we must not attempt to receive
- if the demux buf failed to be allocated, we must not try to receive and
we know there is nothing pending
- if the buffer is not full, we may attempt to receive
- if no flag indicates a blocking condition, we may attempt to receive
- otherwise we must not attempt to receive
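A minimal sketch of these rules, with hypothetical flag and field names
(the real conditions live in h2_recv_allowed() in the H2 mux) :

    #include <stddef.h>

    #define H2_CF_ERROR    0x01
    #define H2_CF_SHUTDOWN 0x02
    #define H2_CF_BLOCKED  0x04   /* any demux blocking condition */

    struct h2c_sketch {
        unsigned int flags;
        char *dbuf;          /* demux buffer, NULL if allocation failed */
        size_t dbuf_len;     /* bytes currently pending in the buffer */
        size_t dbuf_size;    /* total buffer size */
    };

    static int h2_recv_allowed_sketch(const struct h2c_sketch *h2c)
    {
        if ((h2c->flags & (H2_CF_ERROR | H2_CF_SHUTDOWN)) && !h2c->dbuf_len)
            return 0;            /* error/shutdown with an empty buffer */
        if (!h2c->dbuf)
            return 0;            /* demux buffer allocation failed */
        if (h2c->dbuf_len < h2c->dbuf_size)
            return 1;            /* some room is left in the buffer */
        if (!(h2c->flags & H2_CF_BLOCKED))
            return 1;            /* no blocking condition reported */
        return 0;                /* otherwise do not attempt to receive */
    }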
Truncated payloads are no longer detected in tests, which seems to
indicate that the issue was worked around. A better connection API will
have to be created for new versions to make this stuff simpler and more
intuitive.
This fix needs to be backported to 1.8 along with the rest of the patches
related to CS_FL_RCV_MORE.
If we can't demux pending data due to a stream buffer full condition, we
now set CS_FL_RCV_MORE on the conn_stream so that the stream layer knows
it must call back as soon as possible to restart demuxing. Without this,
some uploaded payloads are truncated if the server does not consume them
fast enough and buffers fill up.
Note that this is still not enough to solve the problem, some changes are
required on the recv() and update_poll() paths to allow to restart reading
even with a buffer full condition.
This patch must be backported to 1.8.
When a stream interface tries to read data from a mux using rcv_buf(),
sometimes it sees 0 as the return value and concludes that there's no
more data while some remain, resulting in the connection being polled for
more data and no new attempt being made at reading these pending data.
Now it will automatically check for flag CS_FL_RCV_MORE to know if the
mux really did not have anything available or was not able to provide
these data by lack of room in the destination buffer, and will set
SI_FL_WAIT_ROOM accordingly. This will ensure that once the data currently
lying in the buffer are forwarded to the other side, chk_rcv() will be
called to re-enable reading.
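A minimal sketch of this check, using hypothetical flag values and reduced
structures (only the decision made on a zero return from rcv_buf() is shown) :

    #define CS_FL_RCV_MORE   0x0001  /* mux has more data but lacked room */
    #define SI_FL_WAIT_ROOM  0x0002  /* retry reading once room is available */

    struct conn_stream      { unsigned int flags; };
    struct stream_interface { unsigned int flags; };

    static void si_handle_recv_sketch(struct stream_interface *si,
                                      struct conn_stream *cs, int ret)
    {
        /* a zero return no longer means "no data" : when CS_FL_RCV_MORE is
         * set, the mux simply lacked room in the destination buffer.
         */
        if (ret == 0 && (cs->flags & CS_FL_RCV_MORE))
            si->flags |= SI_FL_WAIT_ROOM;
    }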
It's important to note that in practice it will rely on the mux's
update_poll() function to re-enable reading and that where the calls
are placed in the stream interface, it's not possible to perform a
new synchronous rcv_buf() call. Thus a corner case remains where the
mux cannot receive due to a full buffer or any similar condition, but
needs to be able to wake itself up to deliver pending data. This is a
limitation of the current connection/conn_stream API which will likely
need a new event subscription to at least call ->wake() asynchronously
(eg: mux->{kick,restart,touch,update} ?).
For now the affected mux (h2 only) will have to take care of the extra
logic to carefully enable polling to restart processing incoming data.
This patch relies on previous one (MINOR: conn_stream: add new flag
CS_FL_RCV_MORE to indicate pending data) and both must be backported to
1.8.
The thread patches add a refcount for the notifications. The notifications
are used with the Lua cosocket. This refcount frees the notifications when
the session is cleared. In the Lua task case, there is no session, so the
notifications are never cleared.
This patch adds a garbage collector for signals. The garbage collector just
cleans the notifications for which the end point is disconnected.
This patch should be backported in 1.8
In register_name, before locking the var_names array, we check the variable name
validity. So if we try to register an invalid or empty name, we need to return
without unlocking it (because it was never locked).
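A minimal sketch of the intended control flow, using a pthreads rwlock and a
hypothetical function name for illustration (the real code uses HAProxy's own
locking macros) :

    #include <pthread.h>
    #include <stddef.h>

    static pthread_rwlock_t var_names_rwlock = PTHREAD_RWLOCK_INITIALIZER;

    static int register_name_sketch(const char *name, size_t len)
    {
        if (!name || !len)
            return 0;        /* invalid or empty name : no lock held yet */

        pthread_rwlock_wrlock(&var_names_rwlock);
        /* ... look up or insert the name into the var_names array ... */
        pthread_rwlock_unlock(&var_names_rwlock);
        return 1;
    }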
This patch must be backported in 1.8.
This avoids a possible 100% cpu usage deadlock on an EMAIL_ALERTS_LOCK and
avoids sending lots of emails when 'option log-health-checks' is used.
Changing the server state and possibly queuing a new email while processing
the email alert is avoided by setting check->status to HCHK_STATUS_UNKNOWN,
which makes set_server_check_status(..) exit early.
This needs to be backported to 1.8.
Commit 4cfede87a3 removed
`exit-on-failure` in favor of `no-exit-on-failure`, but failed
to update references to the former in user-facing messages.
This should be backported to haproxy 1.8.
Commit 9470d2c ("BUG/MINOR: h2: try to abort closed streams as
soon as possible") tried to address the situations where a stream
is closed by the client, but caused a side effect which is that in
some cases, a regularly closed stream reports an error to the stream
layer. The reason is that we purposely matched H2_SS_CLOSED in the
test for H2_SS_ERROR to report this so that we can check for RST,
but it accidentally catches certain ends of transfers as well. This
results in valid requests reporting flags "CD" in the logs.
Instead, let's roll back to detecting H2_SS_ERROR and explicitly check
for a received RST. This way we can correctly abort transfers without
mistakenly reporting errors in normal situations.
This fix needs to be backported to 1.8 as the fix above was merged into
1.8.1.
Since peers were ported to an applet in 1.5, an issue appeared which
is that certain attempts to close an outgoing connection are a bit
"too nice". Specifically, protocol errors and stream timeouts result
in a clean shutdown to be sent, waiting for the other side to confirm.
This is particularly problematic in the case of timeouts since by
definition the other side will not confirm as it has disappeared.
As found by Fred, this issue was further emphasized in 1.8 by commit
f9ce57e ("MEDIUM: connection: make conn_sock_shutw() aware of
lingering") which causes clean shutdowns not to be sent if the fd is
marked as linger_risk, because now even a clean timeout will not be
sent on an idle peers session, and the other one will have nothing
to respond to.
The solution here is to set NOLINGER on the outgoing stream interface
to ensure we always close whenever we attempt a simple shutdown.
However it is important to keep in mind that this also underlines
some weaknesses of the shutr/shutw processing inside process_stream()
and that all this part needs to be reworked to clearly consider the
abort case, and to stop the confusion between linger_risk and NOLINGER.
This fix needs to be backported as far as 1.5 (all versions are affected).
However, during testing of the backport it was found that 1.5 never tries
to close the peers connection on timeout, so it suffers from another issue.
The new admin state was not correctly committed in this case.
Checks were fully disabled but the server was not marked in MAINT state.
It results in a server definitely stuck in the DOWN state.
This patch should be backported on haproxy 1.8
The number of async fd is computed considering the maxconn, the number
of sides using ssl and the number of engines using async mode.
This patch should be backported on haproxy 1.8
There's a nasty case related to signaling all processes via SIGUSR1.
Since the master process still holds the peers sockets, the old process
trying to connect to the new one to teach it its tables has a risk to
connect to the master instead, which will not do anything, causing the
old process to hang instead of quitting.
This patch ensures we correctly close the peers in the master process
on startup, just like it is done for proxies. Ultimately we would rather
have a complete list of listeners to avoid such issues. But that's a bit
trickier as it would require using unbind_all() and avoiding side effects
the master could cause to other processes (like unlinking unix sockets).
To be backported to 1.8.
Since the split of the shctx and the ssl cache, we lost the ability to
disable the cache with tune.ssl.cachesize 0.
Worse than that, when using this configuration, haproxy segfaults during
the configuration parsing.
Must be backported to 1.8.
In hpack_dht_make_room(), we try to fulfill this rule from RFC7541#4.4 :
"It is not an error to attempt to add an entry that is larger than the
maximum size; an attempt to add an entry larger than the maximum size
causes the table to be emptied of all existing entries and results in
an empty table."
Unfortunately it is not consistent with the way it's used in
hpack_dht_insert() as this last one will consider a success as a
confirmation it can copy the header into the table, and a failure as
an indexing error. This results in the two following issues :
- if a client sends too large a header into an empty table, this
header may overflow the table. Fortunately, most clients send
small headers like :authority first, and never mark headers that
don't fit into the table as indexable since it is counter-productive ;
- if a client sends too large a header into a populated table, the
operation fails after the table is totally flushed and the request
is not processed.
This patch fixes the two issues at once :
- a header not fitting into an empty table is always a sign that it
will never fit ;
- not fitting into the table is not an error
Thanks to Yves Lafon for reporting detailed traces demonstrating this
issue. This fix must be backported to 1.8.
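A minimal sketch of the corrected contract, with a hypothetical reduced
table (the real code lives in the HPACK dynamic table handling) :

    #include <stddef.h>

    struct dht_sketch { size_t used; size_t size; };

    /* RFC7541#4.4 : an entry larger than the whole table empties the table;
     * this is not an error, it only means the entry must not be indexed.
     */
    static int dht_make_room_sketch(struct dht_sketch *dht, size_t needed)
    {
        if (needed > dht->size) {
            dht->used = 0;   /* flush all existing entries */
            return 0;        /* caller must skip indexing this entry */
        }
        /* (evict oldest entries until <needed> bytes fit, not shown) */
        return 1;
    }

    static int dht_insert_sketch(struct dht_sketch *dht, size_t needed)
    {
        if (!dht_make_room_sketch(dht, needed))
            return 0;          /* not indexed, but not a decoding error */
        dht->used += needed;   /* the header is copied into the table */
        return 1;
    }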
This is a regression in the commit 29730ba5 ("MINOR: action: Add a functions to
check http capture rules"). We must check the capture id only when an id is
defined.
This patch must be backported in 1.8.
This is explicitly forbidden by RFC7540#8.1.2, and may be used to bypass
some of the other filters, so they must be blocked early. It removes
another issue reported by h2spec.
To backport to 1.8.
h2spec rightfully outlines that we used not to reject these ones, and
they may cause trouble if presented, especially "upgrade".
Must be backported to 1.8.
At the moment there's only ":status". Let's block it early when parsing
the request. Otherwise it would be blocked by the HTTP/1 code anyway.
This silences another h2spec issue.
To backport to 1.8.
We currently don't use stream dependencies, but as reported by h2spec,
the spec requires that we reject streams that depend on themselves in
HEADERS frames.
To backport to 1.8.
h2spec reports that we unfortunately didn't enforce the per-connection
stream limit that we advertise. It's important to ensure it's never
crossed otherwise it's cheap for a client to create many streams. This
requires the addition of a stream count. The h2c struct could be cleaned
up a bit, just like the h2_detach() function where an "if" block doesn't
make sense anymore since it's always true.
To backport to 1.8.
As reported by h2spec, the h2->h1 gateway doesn't verify that ":path"
is not empty. This is harmless since the H1 parser will reject such a
request, but better fix it anyway.
To backport to 1.8.
The purpose here is to be able to signal receipt of RST_STREAM to
streams when they start to provide a response so that the response
can be aborted ASAP. Given that RST_STREAM immediately switches the
stream to the CLOSED state, we must check for CLOSED in addition to
the existing ERROR check.
To be backported to 1.8.
The h2spec test suite reveals that a GOAWAY frame received after the
last stream doesn't cause an immediate close, because we count on the
last stream to quit to do so. By simply setting the last_sid to the
received value in case it was not set, we can ensure to properly close
an idle connection during h2_wake().
To be backported to 1.8.
Due to a typo in the request maximum length calculation, we count the
request path twice instead of counting it added to the method's length.
This has two effects, the first one being that a path cannot be larger
than half a buffer, and the second being that the method's length isn't
properly checked. Due to the way the temporary buffers are used internally,
it is quite difficult to meet this condition. In practice, the only
situation where this can cause a problem is when exactly one of either
the method or the path is compressed and the other one is sent as a
literal.
Thanks to Yves Lafon for providing useful traces exhibiting this issue.
To be backported to 1.8.
h2spec reports that we used to support a dynamic table size update
anywhere in the header block but it's only allowed before other
headers (cf RFC7541#4.2.1). In practice we don't use these for now
since we only use literals in responses.
To backport to 1.8.
If the hpack decoder sees an invalid header index, it emits value
"### ERR ###" that was used during debugging instead of rejecting the
block. This is harmless, and was detected by h2spec.
To backport to 1.8.
h2spec reported that we didn't check that no more than 7 bits of padding
were left after decoding a huffman-encoded literal. This is harmless but
better fix it now.
To backport to 1.8.
When a pseudo header is used, name.ptr is NULL and we must replace it
with hpack_idx_to_name(). This only affects code built with DEBUG_HPACK.
To be backported to 1.8.
In connect_conn_chk(), there was one case where we could return with a new
conn_stream created, but no mux attached. With no mux, cs_destroy() would
segfault. Fix that by setting the mux before we can fail.
This should be backported to 1.8.
The first thread requesting a synchronization is responsible to write in the
"sync" pipe to notify all others. But we must write only once in the pipe
between two synchronizations to have exactly one character in the pipe. It is
important because we only read 1 character in return when the last thread exits
from the sync-point.
Here there is a bug. If two threads request a synchronization, only the first
writes in the pipe. But, if the same thread requests a synchronization
several times before entering the sync-point (because, for instance, it
detects many servers down), it writes as many characters in the pipe. And
only one of them will be read. Repeating this bug many times will block HAProxy
on the write because the pipe is full.
To fix the bug, we just check if the current thread has already requested a
synchronization before trying to notify all others.
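A minimal sketch of the idea, using a thread-local flag and hypothetical
names for illustration (the real code tracks the requesting threads
differently) :

    #include <errno.h>
    #include <unistd.h>

    static int sync_pipe[2];             /* created with pipe() at startup */
    static __thread int sync_requested;  /* reset when leaving the sync point */

    static void request_sync_sketch(void)
    {
        char c = 'S';

        if (sync_requested)
            return;                      /* already notified once, stop here */
        sync_requested = 1;
        while (write(sync_pipe[1], &c, 1) < 0 && errno == EINTR)
            ;                            /* exactly one byte per thread */
    }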
The patch must be backported in 1.8
As with the call to cpuset_setaffinity(), FreeBSD expects the argument to
pthread_setaffinity_np() to be a cpuset_t, not an unsigned long, so the call
was silently failing.
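A FreeBSD-only sketch of the expected call, with a hypothetical helper name :

    #include <pthread.h>
    #include <pthread_np.h>   /* FreeBSD : pthread_setaffinity_np() */
    #include <sys/param.h>
    #include <sys/cpuset.h>   /* cpuset_t, CPU_ZERO, CPU_SET */

    static int bind_thread_to_cpu(pthread_t thr, int cpu)
    {
        cpuset_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        /* the third argument must be a cpuset_t, not an unsigned long */
        return pthread_setaffinity_np(thr, sizeof(set), &set);
    }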
This should probably be backported to 1.8.
This allows a calling script to show the first startup output and
know when to stop reading from stdout so haproxy can daemonize.
To be backported to 1.8.
Check if master-worker pipe getenv succeeded, also allow pipe fd 0 as
valid. On FreeBSD in quiet mode the stdin/stdout/stderr are closed
which lets the mworker_pipe use fd 0 and fd 1. Additionally exit()
upon failure to create or get the master-worker pipe.
This needs to be backported to 1.8.
"monitor-uri" may rely on "monitor fail" rules, which are processed
very early, immediately after the HTTP request is parsed and before
any http rulesets. It's not reported by the config parser when this
ruleset is misplaced, causing some configurations not to work as
users would expect. Let's just add the warning for a misplaced rule.
This BUG was introduced with:
'MEDIUM: threads/stick-tables: handle multithreads on stick tables'
The API was reviewed to handle stick table entry updates
asynchronously and the caller must now call a 'stktable_touch_*'
function each time the content of an entry is modified to
register the entry to be synced.
A call to stktable_touch_* was missing, resulting in entries not being
propagated to remote peers (or to the local one during reload)
Yves Lafon reported a breakage with 100-continue. In fact the problem
is caused when a 1xx is the last response in the buffer (which commonly
is the case). We loop back immediately into the parser with what remains
of the input buffer (ie: nothing), while it is not expected to be called
with an empty response, so it fails.
Let's simply get back to the caller to decide whether or not more data
are expected to be sent.
This fix needs to be backported to 1.8.
Commit 8d8aa0d ("MEDIUM: threads/listeners: Make listeners thread-safe")
mistakenly placed HA_ATOMIC_ADD(job, 1) to replace a job--, which keeps
the job count too high and prevents the process from cleanly exiting on
reload.
This needs to be backported to 1.8.
Commit 3e13cba ("MEDIUM: session: make use of the connection's destroy
callback") ensured that connections could be autonomous to destroy the
session they initiated, but it didn't take care of doing the same for
applets. Such applets are used for peers, Lua and SPOE outgoing
connections. In this case, once the stream ends, it closes everything
and nothing takes care of releasing the session. The problem is not
immediately obvious since the only visible effect is that older
processes will not quit on reload after having leaked one such session.
For now we check in stream_free() if the session's origin is the applet
we're releasing, and then free the session as well. Something more
uniform should probably be done once we manage to unify applets and
connections a bit more.
This fix needs to be backported to 1.8. Thanks to Emmanuel Hocdet for
reporting the problem.
The cache was not setting hdrs_len to zero when http_forward_data is
called with headers + body.
The consequence is that every call to http_forward_data tries to store a
size minus the size of the headers, even when the headers have already
been forwarded.
Thanks to Cyril Bonté for reporting this bug.
Must be backported to 1.8.
The shctx_init() function no longer checks whether the pointer is NULL;
this check must be done by the caller.
The consequence was to allocate one shctx per ssl bind.
Bug introduced by 4f45bb9 ("MEDIUM: shctx: separate ssl and shctx")
Thanks to Maciej Zdeb for reporting this bug.
Must be backported to 1.8.
There was a deadlock in tcpcheck_main function. The server's lock was already
acquired by the caller (process_chk_conn or wake_srv_chk).
This patch must be backported in 1.8.
kqueue fd's are not shared with children after fork(), so the children
don't have to close them, and it may in fact be dangerous, because we may
end up closing a totally unrelated fd.
[wt: to be backported to 1.8 where master-worker broke on this, and
likely to older versions for completeness]
pendconn_get_next_strm() is called from process_srv_queue() under the
server lock, and calls stream_add_srv_conn() with this lock held, while
the latter tries to take it again. This results in a deadlock when
a server's maxconn is reached and haproxy is built with thread support.
By having the cache id on 33 bytes as the first member, it was
creating a hole and forcing the "hot" remaining part to be split
across two cache lines. Let's move the id at the end as it's used
only during config parsing.
We really don't want them to share the same cache line as they are
expected to be used in parallel. Adding a 64-byte alignment here shows
a performance increase of about 4.5% on task-intensive workloads with
2 to 4 threads.
Very often when debugging, the current task's pointer isn't easy to
recover (eg: from a core file). Let's keep a copy of it, it will
likely help, especially with threads.
This patch changes the behavior of the master during the exit of a
worker.
When a worker exits with an error code, for example in the case of a
segfault, all workers are now killed and the master leaves.
If you don't want this behavior you can use the option
"master-worker no-exit-on-failure".
During the migration to the second version of the pools, the new
functions and pool pointers were all called "pool_something2()" and
"pool2_something". Now there's no more pool v1 code and it's a real
pain to still have to deal with this. Let's clean this up now by
removing the "2" everywhere, and by renaming the pool heads
"pool_head_something".
Rename the global variable "proxy" to "proxies_list".
There's been multiple proxies in haproxy for quite some time, and "proxy"
is a potential source of bugs, a number of functions have a "proxy" argument,
and some code used "proxy" when it really meant "px" or "curproxy". It worked
by pure luck, because it usually happened while parsing the config, and thus
"proxy" pointed to the currently parsed proxy, but we should probably not
rely on this.
[wt: some of these are definitely fixes that are worth backporting]
It is now possible on a "bind" line (or a "stats socket" line) to specify the
thread set allowed to process listener's connections. For instance:
# HTTPS connections will be processed by all threads but the first and HTTP
# connections will be processed on the first thread.
bind *:80 process 1/1
bind *:443 ssl crt mycert.pem process 1/2-
Now, it is possible to bind CPU at the thread level instead of the process level
by defining a thread set in "cpu-map" directives. Thus, its format is now:
cpu-map [auto:]<process-set>[/<thread-set>] <cpu-set>...
where <process-set> and <thread-set> must follow the format:
all | odd | even | number[-[number]]
Having a process range and a thread range in same time with the "auto:" prefix
is not supported. Only one range is supported, the other one must be a fixed
number. But it is allowed when there is no "auto:" prefix.
Because it is possible to define a mapping for a process and another for a
thread on this process, threads will be bound on the intersection of their
mapping and the one of the process on which they are attached. If the
intersection is null, no specific binding will be set for the threads.
It was a temporary directive used for development purposes. Now, CPU mapping
at the thread level should be done using the cpu-map directive. This feature
will be added in a next commit.
Now, process and CPU ranges can be partially defined. The higher bound can be
omitted. In such case, it is replaced by the corresponding maximum value, 32 or
64 depending on the machine's word size.
By extension, it is also true for the "bind-process" directive and "process"
parameter on a "bind" or a "stats socket" line.
The prefix "auto:" can be added before the process set to let HAProxy
automatically bind a process to a CPU by incrementing process and CPU sets. To
be valid, both sets must have the same size. No matter the declaration order of
the CPU sets, it will be bound from the lower to the higher bound.
Examples:
# all these lines bind the process 1 to the cpu 0, the process 2 to cpu 1
# and so on.
cpu-map auto:1-4 0-3
cpu-map auto:1-4 0-1 2-3
cpu-map auto:1-4 3 2 1 0
# bind each process to exactly one CPU using all/odd/even keyword
cpu-map auto:all 0-63
cpu-map auto:even 0-31
cpu-map auto:odd 32-63
# invalid cpu-map because process and CPU sets have different sizes.
cpu-map auto:1-4 0 # invalid
cpu-map auto:1 0-3 # invalid
Now, this function returns a status code to indicate a success (0) or a
failure (1) and the error message is set in the <err> parameter. The result
of the parsing is set in the <proc> parameter.
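A minimal sketch of the new contract only, with a hypothetical helper name
and deliberately simplified parsing :

    #include <stdlib.h>
    #include <string.h>

    static int parse_process_sketch(const char *arg, unsigned long *proc,
                                    char **err)
    {
        char *end;
        unsigned long v = strtoul(arg, &end, 10);

        if (end == arg || *end || v < 1 || v > 64) {
            if (err)
                *err = strdup("not a valid process number");
            return 1;                    /* failure, message in <err> */
        }
        *proc = 1UL << (v - 1);          /* result stored in <proc> */
        return 0;                        /* success */
    }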
Now, you can define processes concerned by a cpu-map line using a range. For
instance, the following line binds the first 32 processes on CPUs 0 to 3:
cpu-map 1-32 0-3
The documentation specifies that you can have several "process" options to
define several ranges on "bind" lines (or "stats socket" lines). It is uncommon,
but it should be possible. So the bind_proc bitmask in bind_conf structure must
not be overwritten at each new "process" option parsed.
This bug also exists in 1.7, 1.6 and 1.5. So it may be backported. But no one
seems to have noticed it, so it was probably never hit.
The cache exhibited a bug in process_stream() where upon abort it is
possible to switch the stream-int's state to SI_ST_CLO without calling
si_release_endpoint(), resulting in a possibly missing ->release() for
the applet.
It should affect all other applets as well (eg: lua, spoe, peers) and
should carefully be backported to stable branches after some observation
period.
BoringSSL's early data implementation differs from OpenSSL 1.1.1's. When the
early handshake is done, SSL_in_early_data reports whether SSL_read will be
done on early data. CO_FL_EARLY_SSL_HS and CO_FL_EARLY_DATA can be adjusted
accordingly.
HTTP/2 mandates the support of 16384 bytes frames by default, so we need
a large enough buffer to process them. Till now if tune.bufsize was too
small, H2 connections were simply rejected during their establishment,
making it quite hard to troubleshoot the issue.
Now we detect when HTTP/2 is enabled on an HTTP frontend and emit an
error if tune.bufsize is not large enough, with the appropriate
recommendation.
At the moment, the "client" timeout is used on an HTTP/2 connection once
it's idle with no active stream. With this patch, this timeout is replaced
by client-fin once a GOAWAY frame is sent. This closely matches what is
done on HTTP/1 since the principle is the same, as it indicates a
willingness to quickly close a connection on which we don't expect to see
anything anymore.
As reported by Lukas, it causes more harm than good, for example on
prompt for authentication. Now we have an "http-request reject" rule
to use instead of "http-request deny" if we absolutely want to close
the connection.
Apparently the h2c client has trouble reading the RST_STREAM frame after
a GOAWAY was sent, so it's likely that other clients may face the same
difficulty. Curl and Firefox don't care about this ordering, so let's
send it first.
This one acts similarly to its tcp-request counterpart. It immediately
closes the request without emitting any response. It can be suitable in
certain DoS conditions, as well as to close an HTTP/2 connection.
The cache was relying on the txn->uri for creating its key, which was a
big problem when there was no log activated.
This patch does a sha1 of the host + uri, and stores it in the txn.
When an object is stored, the eb32node uses the first 32 bits of the hash
as a key, and the whole hash is stored in the cache entry.
During a lookup, the truncated hash is used, and when it matches an
entry we check the real sha1.
In case any stream was waiting for the handshake after receiving early data,
we have to wake all of them. Do so by making the mux responsible for
removing the CO_FL_EARLY_DATA flag after all of them are woken up, instead
of doing it in si_cs_wake_cb(), which would then only work for the first one.
This makes wait_for_handshake work with HTTP/2.
It can happen that we want to read early data, write some, and then continue
reading them.
To do so, we can't reuse tmp_early_data to store the amount of data sent,
so introduce a new member.
If we read early data, then ssl_sock_to_buf() is now solely responsible for
getting back to the handshake, to make sure we don't miss any early data.
There is a small unprotected window for a task between the wait queue
and the run queue where a task could be woken up and destroyed at the
same time. What typically happens is that a timeout is reached at the
same time an I/O completes and wakes it up, and the I/O terminates the
task, causing a use after free in wake_expired_tasks() possibly causing
a crash and/or memory corruption :
      thread 1                                thread 2
  (wake_expired_tasks)                    (stream_int_notify)
  HA_SPIN_UNLOCK(TASK_WQ_LOCK, &wq_lock);
                                          task_wakeup(task, TASK_WOKEN_IO);
                                          ...
                                          process_stream()
                                            stream_free()
                                              task_free()
                                                pool_free(task)
  task_wakeup(task, TASK_WOKEN_TIMER);
This case is reasonably easy to reproduce with a config using very short
server timeouts (100ms) and client timeouts (10ms), while injecting on
httpterm requesting medium sized objects (5kB) over SSL. All this is
easier done with more threads than allocated CPUs so that pauses can
happen anywhere and last long enough for process_stream() to kill the
task.
This patch inverts the lock and the wakeup(), but requires some changes
in process_runnable_tasks() to ensure we never try to grab the WQ lock
while having the RQ lock held. This means we have to release the RQ lock
before calling task_queue(), so we can't hold the RQ lock during the
loop and must take and drop it.
It seems that a different approach with the scope-aware trees could be
easier, but it would possibly not cover situations where a task is
allowed to run on multiple threads. The current solution covers it and
doesn't seem to have any measurable performance impact.
When a stream is aborted on timeout or any reason initiated by the stream,
and this stream was subscribed to the send list, we forgot to detach it
when freeing it, resulting in a dead node remaining present in the send
list with all usual funny consequences (memory corruption, crashes, etc).
Let's simply unconditionally delete the stream.
When the stats code was moved to an applet, it wasn't completely
cleaned of its usage of the HTTP transaction and it used to store
the HTTP status in txn->status and to set the HTTP request date to
<now> from within the applet. This is totally wrong because the
applet is seen as a server from the HTTP engine, which parses its
response, so the http_txn must not be touched there.
This was made visible by the cache which would always exhibit a
negative TR log, indicating that nowhere in the code we took care of
setting s->logs.tv_request while the code above used to continue to
hide this. Another side effect of this issue is that under load, if
the stats applet call risks being delayed, the reported t_queue can
appear negative by being below tv_request-tv_accept.
This patch removes the assignment of tv_request and txn->status from
the applet code and instead sets the tv_request if still unset when
connecting to the applet. This ensures that all applets report correct
request timers now.
During high loads it becomes visible that the time drifts between threads,
sometimes showing tens of seconds after several minutes. The root cause is
the per-thread correction which is performed based on a local offset and
local time. But we can't use a unique global time either as we need the
thread-local time to be stable between two poll() calls.
This commit takes a stab at this problem by proceeding this way :
- a global "global_now" date is monotonous and common between all threads.
- each thread has its own local <now> which is resynced with <global_now>
on each invocation of tv_update_date()
- each thread detects its own drift based on its poll() timeout and its
local <now>, and recalculates its adjusted local time
- each thread then ensures its new local time is no older than the current
global time, otherwise it readjusts its local time to match this one
- finally threads do atomically update the global time to match its own
local one
This guarantees a monotonous global time and a monotonous+stable local time.
It is still possible by definition for two threads to report a minor time
variation on subsequent events but that variation will only be caused by
the moment they watched the time and will be very small. When a common global
time is needed between all threads, global_now could be used as a reference
(with care). The wallclock time used in logs is still <date> anyway.
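A minimal sketch of the scheme above, working on microseconds with
hypothetical names (the real code works on struct timeval plus an offset) :

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t global_now_us;  /* shared, always monotonous */
    static __thread uint64_t now_us;        /* per thread, stable between polls */

    static void tv_update_sketch(uint64_t system_time_us)
    {
        uint64_t global = atomic_load(&global_now_us);

        if (system_time_us > now_us)
            now_us = system_time_us;    /* local drift correction */
        if (now_us < global)
            now_us = global;            /* never be older than the global time */

        /* atomically push the global time forward to our local time */
        while (global < now_us &&
               !atomic_compare_exchange_weak(&global_now_us, &global, now_us))
            ;
    }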
With threads, it became mandatory to implement a thread-local time with
its own correction. However, it was noticed that during high thread
contention, the time correction could occasionally be wrong, reporting
huge negative or positive timers in logs. This was caused by the
conversion between struct timeval and a single 64-bit offset, due to
an erroneous shift and due to a loss of sign during the conversion.
Given that time_t is not always signed, and that timeval is not really
needed here, better avoid playing dangerous games with these operations
and use a single 64-bit offset representing a signed 32-bit offset, for
the seconds part and an unsigned offset for the microsecond part.
It still supports atomic updates and doesn't cause issues anymore.
If we can't write early data for some reason, don't give up on reading them :
there may still be early data to be read, and if we don't do so, openssl
internal states might be inconsistent, and the handshake will fail.
The current code only tries to do the handshake in case we can't send early
data if we're acting as a client, which is wrong, it has to be done on the
server side too, or we end up in an infinite loop.
While using mmap() to allocate pools for debugging purposes, kill -USR1 caused
libc aborts in deinit() on two calls to free() on proxies' tasks and the global
listener task. The issue comes from the fact that we're using free() to release
a task instead of task_free(), so the task was allocated from a pool and released
using a different method.
This bug has been there since at least 1.5, so a backport is desirable to all
maintained versions.
The cache was trying to remove objects from the tree while they were
already removed from it. We set the key to 0 as a check for not trying
to remove the object from the tree when we are still using the object.
The cli command "show cache" displays the status of the cache, the first
displayed line is the shctx informations with how much blocks available
blocks it contains (blocks are 1k by default).
The next lines are the objects stored in the cache tree, the pointer,
the size of the object and how much blocks it uses, a refcount for the
number of users of the object, and the remaining expiration time (which
can be negative if expired)
Example:
$ echo "show cache" | socat - /run/haproxy.sock
0x7fa54e9ab03a: foobar (shctx:0x7fa54e9ab000, available blocks:3921)
0x7fa54ed65b8c (size: 43190 (43 blocks), refcount:2, expire: 2)
0x7fa54ecf1b4c (size: 45238 (45 blocks), refcount:0, expire: 2)
0x7fa54ed70cec (size: 61622 (61 blocks), refcount:0, expire: 2)
0x7fa54ecdbcac (size: 42166 (42 blocks), refcount:1, expire: 2)
0x7fa54ec9736c (size: 44214 (44 blocks), refcount:2, expire: 2)
0x7fa54eca28ec (size: 46262 (46 blocks), refcount:2, expire: -2)
Since we switched to notify mode in the systemd unit file in commit
d6942c8, haproxy won't start if the daemon keyword is present in the
configuration.
This change makes sure that haproxy remains in foreground when using
systemd mode and adds a note in the documentation.
The special case of the Cookie header field was overlooked in the
implementation, considering that most servers do handle cookie lists,
but as reported here on discourse it's not the case at all :
https://discourse.haproxy.org/t/h2-cookie-header-splitted-header/1742
This patch fixes this by skipping all occurrences of the Cookie header
in the request while building the H1 request, and then building a single
Cookie header with all values appended at once, according to what is
requested in RFC7540#8.1.2.5.
In order to build the list of values, the list struct is used as a linked
list (as there can't be more cookies than headers). This makes the list
walking quite efficient and ensures all values are quickly found without
having to rescan the list.
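A standalone illustration of the rebuilt header (not the mux code itself) :
the split cookie values are simply joined with "; " as required by
RFC7540#8.1.2.5.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *values[] = { "SERVERID=s1", "HAPTESTa=1511294413",
                                 "HAPTESTb=1511294413" };
        char hdr[256] = "cookie: ";
        size_t i, n = sizeof(values) / sizeof(*values);

        for (i = 0; i < n; i++) {
            if (i)
                strcat(hdr, "; ");
            strcat(hdr, values[i]);
        }
        puts(hdr);   /* cookie: SERVERID=s1; HAPTESTa=...; HAPTESTb=... */
        return 0;
    }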
A test case provided by Lukas shows that it properly works :
> GET /? HTTP/1.1
> user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
> accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> accept-language: en-US,en;q=0.5
> accept-encoding: gzip, deflate
> referer: https://127.0.0.1:4443/?expectValue=1511294406
> host: 127.0.0.1:4443
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 21 Nov 2017 20:00:13 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/5.3.10-1ubuntu3.26
< Set-Cookie: HAPTESTa=1511294413
< Set-Cookie: HAPTESTb=1511294413
< Set-Cookie: HAPTESTc=1511294413
< Set-Cookie: HAPTESTd=1511294413
< Content-Encoding: gzip
> GET /?expectValue=1511294413 HTTP/1.1
> user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
> accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> accept-language: en-US,en;q=0.5
> accept-encoding: gzip, deflate
> host: 127.0.0.1:4443
> cookie: SERVERID=s1; HAPTESTa=1511294413; HAPTESTb=1511294413; HAPTESTc=1511294413; HAPTESTd=1511294413
Many thanks to @Nurza, @adrianw and @lukastribus for their helpful reports
and investigations here.
The current H2 to H1 protocol conversion presents some issues which will
require performing some processing on certain headers before writing them,
so it's not possible to convert HPACK to H1 on the fly.
This commit modifies the headers decoding so that it now works in two
phases : hpack_decode_headers() only decodes the HPACK stream in the
HEADERS frame and puts the result into a list. Headers which require
storage (huffman-compressed or from the dynamic table) are stored in
a chunk allocated by the H2 demuxer. Then once the headers are properly
decoded into this list, h2_make_h1_request() is called with this list
to produce the HTTP/1.1 request into the destination buffer. The list
necessarily enforces a limit. Here we use 2*MAX_HTTP_HDR, which means
that we can have as many individual cookies as we have regular headers
if a client decides to break their cookies into multiple values. This
seems reasonable and will allow the H1 parser to decide whether it's
too much or not.
Thus the output stream is not produced on the fly anymore and this will
permit dealing with certain corner cases like repairing the Cookie header
(which for now is not done).
In order to limit header duplication and parsing, the known pseudo headers
continue to be passed by their index : the name element in the list then
has a NULL pointer and the value is the pseudo header's index. Given that
these ones represent about half of the incoming requests and need to be
found quickly, it maintains an acceptable level of performance.
The code was significantly reduced by doing this because the original code
had to deal with HPACK and H1 combinations (eg: index vs not indexed, etc)
and now the HPACK decoding is totally focused on the decompression, and
the H1 encoding doesn't have to deal with the issue of wrapping input for
example.
One bug was addressed here (though it couldn't happen at the moment). The
H2 demuxer used to detect a failure to write the request into the H1 buffer
and would then detect if the output buffer wraps, realign it and try again.
The problem by doing so was that the HPACK context was already modified and
not rewindable. Thus the size check is now performed first and a failure is
reported if it doesn't fit.
The current H2 to H1 protocol conversion presents some issues which will
require performing some processing on certain headers before writing them,
so it's not possible to convert HPACK to H1 on the fly.
Here we introduce a function which performs half of what hpack_decode_header()
used to do, which is to take a list of headers on input and emit the
corresponding request in HTTP/1.1 format. The code is the same and functions
were renamed to be prefixed with "h2" instead of "hpack", though it ends
up being simpler as the various HPACK-specific cases could be fused into
a single one (ie: add header).
Moving this part here makes a lot of sense as now this code is specific to
what is documented in HTTP/2 RFC 7540 and will be able to deal with special
cases related to H2 to H1 conversion enumerated in section 8.1.
Various error codes which were previously assigned to HPACK were never
used (aside being negative) and were all replaced by -1 with a comment
indicating what error was detected. The code could be further factored
thanks to this but this commit focuses on compatibility first.
This code is not yet used but builds fine.
We used to return >0 indicating a success when an error was present on the
connection, preventing the caller from detecting and handling it. This for
example happens when sending too many headers in a frame, making the request
impossible to decompress.
Clang reports this warning :
src/server.c:872:14: warning: address of array 'check->desc' will
always evaluate to 'true' [-Wpointer-bool-conversion]
Indeed, check->desc used to be a pointer to a dynamically allocated area
a long time ago and is now an array. Let's remove the useless test.
Clang complains that h2_get_n64() is not used, and a few other protocol
specific functions may fall in that category depending on how the code
evolves. Better mark them unused to silence the warning since it's on
purpose.
Call the shctx free_blocks callback in order to remove the row from the
cache tree.
Put the row in the hot list during allocation, forbid the blocks to be
stolen by a free or a row_reserve
This patch adds support for `Type=notify` to the systemd unit.
Supporting `Type=notify` improves both starting as well as reloading
of the unit, because systemd will be informed when the action has completed.
See this quote from `systemd.service(5)`:
> Note however that reloading a daemon by sending a signal (as with the
> example line above) is usually not a good choice, because this is an
> asynchronous operation and hence not suitable to order reloads of
> multiple services against each other. It is strongly recommended to
> set ExecReload= to a command that not only triggers a configuration
> reload of the daemon, but also synchronously waits for it to complete.
By making systemd aware of a reload in progress it is able to wait until
the reload actually succeeded.
This patch introduces both a new `USE_SYSTEMD` build option which controls
including the sd-daemon library as well as a `-Ws` runtime option which
runs haproxy in master-worker mode with systemd support.
When haproxy is running in master-worker mode with systemd support it will
send status messages to systemd using `sd_notify(3)` in the following cases:
- The master process forked off the worker processes (READY=1)
- The master process entered the `mworker_reload()` function (RELOADING=1)
- The master process received the SIGUSR1 or SIGTERM signal (STOPPING=1)
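A minimal sketch of these notifications; the wrapper names are hypothetical,
while sd_notify() and the state strings are the libsystemd API (requires
building with USE_SYSTEMD and linking against libsystemd) :

    #include <systemd/sd-daemon.h>

    static void notify_ready(void)     { sd_notify(0, "READY=1"); }
    static void notify_reloading(void) { sd_notify(0, "RELOADING=1"); }
    static void notify_stopping(void)  { sd_notify(0, "STOPPING=1"); }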
Change the unit file to specify `Type=notify` and replace master-worker
mode (`-W`) with master-worker mode with systemd support (`-Ws`).
Future evolutions of this feature could include making use of the `STATUS`
feature of `sd_notify()` to send information about the number of active
connections to systemd. This would require bidirectional communication
between the master and the workers and thus is left for future work.
Commit 9aaf778 ("MAJOR: connection : Split struct connection into struct
connection and struct conn_stream.") had to change the way the stream
interface deals with incoming data to accommodate the mux. A break
statement got lost during a change, leading to the receive call being
performed twice even when CF_READ_DONTWAIT is set. The most noticeable
effect is that it made the bug described in commit 33982cb ("BUG/MAJOR:
stream: ensure analysers are always called upon close") much easier to
reproduce as it would appear even with an HTTP frontend.
Let's just restore the stream-interface flag and the break here, as in
the previous code.
No backport is needed as this was introduced during 1.8-dev.
A recent issue affecting HTTP/2 + redirect + cache has uncovered an old
problem affecting all existing versions regarding the way events are
reported to analysers.
It happens that when an event is reported, analysers see it and may
decide to temporarily pause processing and prevent other analysers from
processing the same event. Then the event may be cleared and upon the
next call to the analysers, some of them will never see it.
This is exactly what happens with CF_READ_NULL if it is received before
the request is processed, like during redirects : the first time, some
analysers see it, pause, then the event may be converted to a SHUTW and
cleared, and on next call, there's nothing to process. In practice it's
hard to get the CF_READ_NULL flag during the request because requests
have CF_READ_DONTWAIT, preventing the read0 from happening. But on
HTTP/2 it's presented along with any incoming request. Also on a TCP
frontend the flag is not set and it's possible to read the NULL before
the request is parsed.
This causes a problem when filters are present because flt_end_analyse
needs to be called to release allocated resources and remove the
CF_FLT_ANALYZE flag. And the loss of this event prevents the analyser
from being called and from removing itself, preventing the connection
from ever ending.
This problem just shows that the event processing needs a serious revamp
after 1.8. In the mean time we can deal with the really problematic case
which is that we *want* to call analysers if CF_SHUTW is set on any side
as it's the last opportunity to terminate a processing. It may
occasionally result in some analysers being called for nothing in half-
closed situations but it will take care of the issue.
An example of problematic configuration triggering the bug in 1.7 is :
frontend tcp
bind :4445
default_backend http
backend http
redirect location /
compression algo identity
Then submitting requests which immediately close will have for effect
to accumulate streams which will never be freed :
$ printf "GET / HTTP/1.1\r\n\r\n" >/dev/tcp/0/4445
This fix must be backported to 1.7 as well as any version where commit
c0c672a ("BUG/MINOR: http: Fix conditions to clean up a txn and to
handle the next request") was backported. This commit didn't cause the
bug but made it much more likely to happen.
Upon stream instantiation, we used to enable channel auto connect
and auto close to ease TCP processing. But commit 9aaf778 ("MAJOR:
connection : Split struct connection into struct connection and
struct conn_stream.") has revealed that it was a bad idea because
this commit enables reading of the trailing shutdown that may follow
a small request, resulting in a read and a shutr turned into shutw
before the stream even has a chance to apply the filters. This
causes an issue with impossible situations where the backend stream
interface is still in SI_ST_INI with a closed output, which blocks
some streams for example when performing a redirect with filters
enabled.
Let's change this so that we only enable these two flags if there is
no analyser on the stream. This way process_stream() has a chance to
let the analysers decide whether or not to allow the shutdown event
to be transferred to the other side.
It doesn't seem possible to trigger this issue before 1.8, so for now
it is preferable not to backport this fix.
A customer reported a crash when some headers within the HTTP request were
not set, leading the module to crash. The module now ignores them since
empty data has no value for the detection.
Needs to be backported to 1.7.
Instead of storing the SSL_SESSION pointer directly in the struct server,
store the ASN1 representation, otherwise, session resumption is broken with
TLS 1.3, when multiple outgoing connections want to use the same session.
Now, we process at most 200 active applets per call to applet_run_active. We use
the same limit as the tasks. With the cache filter and the SPOE, the number of
active applets can now be huge. So, it is important to limit the number of
applets processed in applet_run_active.
applets_active_queue is the active queue size. It is a global variable. So it is
underoptimized because we may be led to consider there are active applets for a
thread while in fact all active applets are assigned to the other threads. So, in
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_applets_mask.
This is specific to threads, no backport is needed.
A bitfield has been added to know if there are runnable applets for a
thread. When an applet is woken up, the bits corresponding to its thread_mask
are set. When all active applets for a thread have been processed, the thread
is removed from the active ones by unsetting its tid_bit in the bitfield.
tasks_run_queue is the run queue size. It is a global variable. So it is
underoptimized because we may be led to consider there are active tasks for a
thread while in fact all active tasks are assigned to the other threads. So, in
such cases, the polling loop will be evaluated many more times than necessary.
Instead, we now check if the thread id is set in the bitfield active_tasks_mask.
Another change has been made in process_runnable_tasks. Now, we always limit the
number of tasks processed to 200.
This is specific to threads, no backport is needed.
A bitfield has been added to know if there are runnable tasks for a thread.
When a task is woken up, the bits corresponding to its thread_mask are set.
When all tasks for a thread have been evaluated without any wakeup, the thread
is removed from the active ones by unsetting its tid_bit from the bitfield.
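The same mechanism applies to both the applet and the task bitfields; a
conceptual sketch with C11 atomics and illustrative names (not haproxy's
actual macros or variables) :
    #include <stdatomic.h>

    static _Atomic unsigned long active_mask;   /* one bit per thread */

    /* on wakeup: mark every thread that may run this task/applet */
    static void mark_runnable(unsigned long thread_mask)
    {
        atomic_fetch_or(&active_mask, thread_mask);
    }

    /* when a thread has drained its runnable entries */
    static void mark_idle(unsigned long tid_bit)
    {
        atomic_fetch_and(&active_mask, ~tid_bit);
    }

    /* cheap check in the polling loop before scanning the queue */
    static int has_work(unsigned long tid_bit)
    {
        return (atomic_load(&active_mask) & tid_bit) != 0;
    }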
Since the commit cd7879adc ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too"), the log buffers are allocated after the proxies startup. So log
messages produced during this startup was ignored.
To fix the bug, we restore the initialization of these buffers before proxies
startup.
This is specific to threads, no backport is needed.
At the end of the master initialisation, a call to protocol_unbind_all()
was made, in order to close all the FDs.
Unfortunately, this function closes the inherited FDs (fd@), so upon reload
the master wasn't able to reload a configuration with those FDs.
The create_listeners() function now stores a flag specifying whether the fd
was inherited or not.
Replace the protocol_unbind_all() call with mworker_cleanlisteners() +
deinit_pollers().
Do not use the deinit() function during a reload: it is dangerous and might
lead to a double free, a segfault or other hazardous behavior if it is called
twice when execvp fails.
After execvp failed, the signals were ignored, preventing any further reload
attempt. This is now fixed by going back to the top of the mworker_wait()
function once execvp has failed.
When the master worker fails the execvp, it returns the wrong error
"Cannot allocate memory".
We now display the accurate error corresponding to the errno value.
Now we can show the node being removed in dotted red, surround a freshly
inserted node in red, and for example add a description on the graph related
to the operation in progress.
Use a smaller and cleaner fixed font, use upper case to indicate sides on
branches, remove the useless node/leaf markers on branches since the colors
already indicate them, and show the node's key as it helps spot the matching
leaf.
Disable the cache if appending data failed; it should never happen because
the allocated row size is at least equal to the size of the object to
allocate.
Forward the remaining headers with the data in the first call of
cache_store_http_forward_data().
Previously the headers were forwarded first and the function returned,
implying an additional call to cache_store_http_forward_data() for the
data.
Cc: Christopher Faulet <cfaulet@haproxy.com>
Use msg->sov to forward headers instead of msg->eoh. The latter can cause
some problems because eoh does not contain the last \r\n, and the filter does
not support sending the headers partially.
Cc: Christopher Faulet <cfaulet@haproxy.com>
If haproxy is started using the name of the binary only (i.e.
not using a relative or absolute path) the `execv` in
`mworker_reload` fails with `ENOENT`, because it does not
examine the `PATH`:
[WARNING] 315/161139 (7) : Reexecuting Master process
[WARNING] 315/161139 (7) : Cannot allocate memory
[WARNING] 315/161139 (7) : Failed to reexecute the master processs [7]
The error messages are misleading, because the return value of
`execv` is not checked. This should be fixed in a separate commit.
Once this has happened, the master process ignores any further
signals sent by the administrator.
Replace `execv` with `execvp` to establish the expected
behaviour.
This bug was introduced in commit 73b85e75b3.
This macro should be used to declare variables or struct members depending on
the USE_THREAD compile option. It avoids the encapsulation of such declarations
between #ifdef/#endif. It is used to declare all lock variables.
In spoe_acquire_buffer and spoe_release_buffer, instead of checking the buffer
against buf_empty, we now check its size. It is important because when an
allocation fails, it will be set to buf_wanted. In both cases, the size is 0.
It is a proactive bug fix, no real problem was observed till now. It cannot be
backported as is in 1.7 because of all changes made on the SPOE in 1.8.
Marcus Rückert reported that commit d8b3b65 ("BUG/MEDIUM: splice/threads:
pipe reuse list was not protected.") broke threadless support. Add the
required #ifdef.
In the case of Transfer-Encoding: chunked, there is no Content-Length, which
causes the cache to allocate a shctx row that is too small for the data.
It's not possible to allocate a shctx row for the chunks; we need to be able
to allocate the shctx blocks on the fly during the data transfer.
This method was reserved for the HTTP/2 connection preface, must never
be used and must be rejected. In normal situations it doesn't happen,
but it may be visible if a TCP frontend has alpn "h2" enabled, and
forwards to an HTTP backend which tries to parse the request. Before
this patch it would pass the wrong request to the backend server; now
it properly returns a 400 bad request.
This patch should probably be backported to stable versions.
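For reference, the HTTP/2 connection preface starts with the pseudo-request
"PRI * HTTP/2.0", so the check amounts to refusing that method in the HTTP/1
parser; a hedged sketch, not haproxy's actual parser code :
    #include <string.h>

    /* the "PRI" method only exists as part of the HTTP/2 connection
     * preface and must never be processed as a regular HTTP/1 request */
    static int method_must_be_rejected(const char *meth, size_t len)
    {
        return len == 3 && memcmp(meth, "PRI", 3) == 0;
    }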
At a number of places, bitmasks are used for process affinity and to map
listeners to processes. Every time 1UL<<(relative_pid-1) is used. Let's
create a "pid_bit" variable corresponding to this value to clean this up.
It happens that no single analyser has ever needed to set res.analyse_exp,
so that process_stream() didn't consider it when computing the next task
expiration date. Since Lua actions were introduced in 1.6, this can be
needed on http-response actions for example, so let's ensure it's properly
handled.
Thanks to Nick Dimov for reporting this bug. The fix needs to be
backported to 1.7 and 1.6.
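A simplified sketch of the idea, assuming haproxy's tick convention where
TICK_ETERNITY (0) means "no timeout" and tick_first() picks the earliest
armed tick; the helper below is illustrative, not the actual process_stream()
code :
    #define TICK_ETERNITY 0

    /* earliest of two expiration ticks, ignoring unset ones */
    static int tick_first(int t1, int t2)
    {
        if (t1 == TICK_ETERNITY)
            return t2;
        if (t2 == TICK_ETERNITY)
            return t1;
        return ((t1 - t2) <= 0) ? t1 : t2;   /* wrap-safe comparison */
    }

    /* the next task expiration must now consider BOTH channels */
    static int next_analyse_exp(int req_analyse_exp, int res_analyse_exp)
    {
        return tick_first(req_analyse_exp, res_analyse_exp);
    }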
This is useful to know what thread(s) an fd is scheduled to be
handled on. It's worth noting that at the moment the "show fd" code
doesn't seem totally thread-safe.
In commit 53a4766 ("MEDIUM: connection: start to introduce a mux layer
between xprt and data") we introduced a release() function which ends
up never being used. Let's get rid of it now.
When a stream_interface performs a shutw() then a shutr(), the stream
is marked closed. Then cs_destroy() calls h2_detach() and it cannot
fail since we're on the leaving path of the caller. The problem is that
in order to close streams we usually have to send either an empty DATA
frame with the ES flag set or an RST_STREAM frame, and the mux buffer
might already be full, forcing the stream to be queued. The forced
removal of this stream causes this last message to silently disappear,
and the client to wait forever for a response.
This commit ensures we can detach the conn_stream from the h2 stream
if the stream is blocked, effectively making the h2 stream an orphan,
ensures that the mux can deal with orphaned streams after processing
them, and that the demux can kill them upon receipt of GOAWAY.
There is an issue with how the RST_STREAM frames are sent. Some of
them are sent from the demux, either for valid or for closed streams,
and some are sent from the mux always for valid streams. At the moment
the demux stream ID is used, which is wrong for all streams being muxed,
and sometimes results in certain bad HTTP responses causing the emission
of an RST_STREAM referencing stream zero. In addition, the stream's
blocked flags could be updated even if the stream was one of the closed or
idle ones.
We really need to split the function for the two distinct use cases where
one is used to send an RST on a condition detected at the connection level
(such as a closed stream) and the other one is used to send an RST for a
condition detected at the stream level. The first one is used only in the
demux, and the other one only by a valid stream.
To be thread safe, the function pattern_exec_match copies data (the pattern
and the inner sample) into thread-local variables. But when the sample is
duplicated, we must check its type and not the pattern's one.
This is specific to threads, no backport is needed.
When a write activity is reported on a channel, it is important to keep this
information for the stream because it takes part in the analyzers' triggering.
When some data are written, the flag CF_WRITE_PARTIAL is set. It participates
in the task's timeout updates and in the stream's waking. It is also used in
the CF_MASK_ANALYSER mask to trigger channel analyzers. In the past, it was
cleared by process_stream. Because of a bug (fixed in commit 95fad5ba4
["BUG/MAJOR: stream-int: don't re-arm recv if send fails"]), it is now cleared
before each send and in stream_int_notify. So it is possible to lose this
information when process_stream is called, preventing analyzers from being
called, and possibly leading to a stalled stream.
Today, this happens in HTTP2 when you call the stat page or when you use the
cache filter. In fact, this happens when the response is sent by an applet. In
HTTP1, everything seems to work as expected.
To fix the problem, we need to distinguish the write activity reported to
lower layers from the one reported to the stream. So the flag CF_WRITE_EVENT
has been added to notify the stream of the write activity on a channel. It is
set when a send succeeded and reset by process_stream. It is also used in
CF_MASK_ANALYSER. Finally, it is checked in stream_int_notify to wake up a
stream and in channel_check_timeouts.
This bug is probably present in 1.7 but it seems to have no effect there, so
for now there is no need to backport it.
If the H1 parser reported a status code not consisting of exactly 3 digits,
the error case was confused with a lack of buffer room and was causing the
parser to loop infinitely.
The H1 parser used by the H2 gateway was a bit lax and could validate
non-numbers in the status code. Since it computes the code on the fly
it's problematic, as "30:" is read as status code 310. Let's properly
check that it's a number now. No backport needed.
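A sketch combining both checks above (simplified, not the actual H1 parser) :
exactly three digits are required, and anything else is reported as a hard
parse error rather than a lack of data, so the parser cannot loop.
    #include <ctype.h>

    /* returns 3 if a valid status code was parsed, 0 if more data is
     * really needed, -1 on an invalid status code */
    static int parse_status_code(const char *p, int len, unsigned int *status)
    {
        if (len < 3)
            return 0;
        if (!isdigit((unsigned char)p[0]) ||
            !isdigit((unsigned char)p[1]) ||
            !isdigit((unsigned char)p[2]))
            return -1;
        *status = (p[0] - '0') * 100 + (p[1] - '0') * 10 + (p[2] - '0');
        return 3;
    }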
The build breaks on a machine without openssl/crypto.h because shctx
still loads openssl-compat.h while it doesn't need it anymore since
the code was moved :
In file included from src/shctx.c:20:0:
include/proto/openssl-compat.h:3:28: fatal error: openssl/crypto.h: No such file or directory
#include <openssl/crypto.h>
Just remove the openssl-compat include from shctx.
Commit 522eea7 ("MINOR: ssl: Handle sending early data to server.") added
a dependency on SRV_SSL_O_EARLY_DATA which only exists when USE_OPENSSL
is defined (which is probably not the best solution) and breaks the build
when ssl is not enabled. Just add an ifdef USE_OPENSSL around the block
for now.
This adds a new keyword on the "server" line, "allow-0rtt", if set, we'll try
to send early data to the server, as long as the client sent early data, as
in case the server rejects the early data, we no longer have them, and can't
resend them, so the only option we have is to send back a 425, and we need
to be sure the client knows how to interpret it correctly.
With TLS 1.3, sessions aren't established until after the main handshake
has completed. So we can't just rely on calling SSL_get1_session(). Instead,
we now register a callback for the "new session" event. This should work for
previous versions of TLS as well.
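A hedged sketch of the registration (reusing save_session() and struct
srv_ssl_ctx from the earlier sketch; the ex_data index and helper names are
illustrative) :
    #include <openssl/ssl.h>

    struct srv_ssl_ctx;
    int save_session(struct srv_ssl_ctx *srv, SSL_SESSION *sess);

    static int srv_sess_idx;   /* would come from SSL_get_ex_new_index() */

    static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
    {
        struct srv_ssl_ctx *srv = SSL_get_ex_data(ssl, srv_sess_idx);

        if (srv)
            save_session(srv, sess);
        return 0;   /* 0: no reference kept, OpenSSL remains the owner */
    }

    static void setup_session_cache(SSL_CTX *ctx)
    {
        SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_CLIENT |
                                             SSL_SESS_CACHE_NO_INTERNAL_STORE);
        SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
    }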
My recent change in commit ce4e0aa ("MEDIUM: task: change the construction
of the loop in process_runnable_tasks()") was bogus as it used to keep the
rq_next across an unlock/lock sequence, occasionally leading to crashes for
tasks that are eligible to any thread. We must use the lookup call for each
new batch instead. The problem is easily triggered with such a configuration :
    global
        nbthread 4
    listen check
        mode http
        bind 0.0.0.0:8080
        redirect location /
        option httpchk GET /
        server s1 127.0.0.1:8080 check inter 1
        server s2 127.0.0.1:8080 check inter 1
Thanks to Olivier for diagnosing this one. No backport is needed.
Commit 4ac4928 ("BUG/MINOR: stream-int: don't set MSG_MORE on SHUTW_NOW
without AUTO_CLOSE") was incomplete. H2 reveals another situation where
the input stream is marked closed with the request and we set MSG_MORE,
causing a delay before the request leaves.
Better avoid setting the flag on the request path for close cases in
general.
As part of the detection for intentional closes, we can kill the
connection if a shutw() happens before the headers. But it can also
happen that an invalid response is not properly parsed, preventing
any headers frame from being sent and making the function believe
it was an abort. Now instead we check if any response was received
from the stream, regardless of the fact that it was properly
converted.
It's pointless to requeue the task when we're closing, so swap the
order of the task_queue() and h2_release(). It also matches what
was written in the comment regarding re-arming the timer.
h2_detach() is called after a stream was closed, and it evaluates if it's
worth closing the connection. The issue there is that the connection is
closed too early in case there's demand for closing after the last stream,
even if some data remain in the mux. Let's change the condition to check
for this.
When the assignment of the connection state was moved into h2c_error(),
3 of them were missed because they were wrong, using H2_SS_ERROR instead.
This resulted in the connection's state being set to H2_CS_ERROR2 in fact,
so the error was not properly sent.
This one was created to maintain the knowledge that a stream was closed
after having sent an RST_STREAM frame but that's not needed anymore and
it confuses certain conditions on the error processing path. It's time
to get rid of it.
The call to xprt->snd_buf() was not conditioned on the presence of
data in the buffer, resulting in snd_buf() returning 0 and never
disabling the polling. It was revealed by the previous bug on error
processing but must properly be handled.
Some stream errors are detected on the MUX path (eg: H1 response
encoding). These paths forgot to emit an RST_STREAM frame, causing the
client to wait and/or to see the connection being immediately closed.
This is now fixed.
This flag was added after the GOAWAY flags were introduced and mistakenly
placed in the connection, but that doesn't make sense as it's specific to
the stream. The main impact is the risk of returning a DATA0+ES frame for
an error instead of an RST_STREAM.
snr_check_ip_callback() may be called with the server lock, so don't attempt
to lock it again, instead, make sure the callers always have the lock before
calling it.
The scheduler is complex and uses local queues to amortize the cost of
locks. But all this comes with a cost that is quite observable with
single-thread workloads.
The purpose of this patch is to reimplement the much simpler scheduler
for the case where threads are not used. The code is very small and
simple. It doesn't impact the multi-threaded performance at all, and
provides a nice 10% performance increase in single-thread by reaching
606kreq/s on the tests that showed 550kreq/s before.
process_runnable_tasks() needs to requeue or wake up tasks after
processing them in batches. By only refilling the existing ones, we
avoid revisiting all the queue. The performance gain is measurable
starting with two threads, where the request rate climbs to 657k/s
compared to 644k.
This patch slightly rearranges the loop to pack the locked code a little
bit, and to try to concentrate accesses to the tree together to benefit
more from the cache.
It also fixes how the loop handles the right margin: now that it is guaranteed
that the retrieved nodes are filtered to only match the current thread, we
don't need to rewind every 16 entries. Instead we can rewind each time we
reach the right margin again.
With this change, we now achieve the following performance for 10 H2 conns
each containing 100 streams :
    1 thread  : 550kreq/s
    2 threads : 644kreq/s
    3 threads : 598kreq/s
This function is sensitive, let's make it shorter by factoring out the
unlock and leave code. This reduced the function's size by a few tens
of bytes and increased the overall performance by about 1%.
Currently the task scheduler suffers from an O(n) lookup when
skipping tasks that are not for the current thread. The reason
is that eb32_lookup_ge() has no information about the current
thread so it always revisits many tasks for other threads before
finding its own tasks.
This is particularly visible with HTTP/2 since the number of
concurrent streams created at once causes long series of tasks
for the same stream in the scheduler. With only 10 connections
and 100 streams each, by running on two threads, the performance
drops from 640kreq/s to 11.2kreq/s! Lookup metrics show that for
only 200000 task lookups, 430 million skips had to be performed,
which means that on average, each lookup leads to 2150 nodes to
be visited.
This commit backports the principle of scope lookups for ebtrees
from the ebtree_v7 development tree. The idea is that each node
contains a mask indicating the union of the scopes for the nodes
below it, which is fed during insertion, and used during lookups.
Then during lookups, branches that do not contain any leaf matching
the requested scope are simply ignored. This perfectly matches a
thread mask, allowing a thread to only extract the tasks it cares
about from the run queue, and to always find them in O(log(n))
instead of O(n). Thus the scheduler uses tid_bit and
task->thread_mask as the ebtree scope here.
Doing this has recovered most of the performance, as can be seen on
the test below with two threads, 10 connections, 100 streams each,
and 1 million requests total :
                                 Before     After      Gain
    test duration            :    89.6s     4.73s       x19
    HTTP requests/s (DEBUG)  :    11200    211300       x19
    HTTP requests/s (PROD)   :    15900    447000       x28
    spin_lock time           :    85.2s     0.46s      /185
    time per lookup          :     13us      40ns      /325
Even when going to 6 threads (on 3 hyperthreaded CPU cores), the
performance stays around 284000 req/s, showing that the contention
is much lower.
A test showed that there's no benefit in using this for the wait queue
though.
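To make the pruning rule concrete, here is a toy sketch of the principle
(this is NOT the ebtree code: structure and names are simplified, and a real
implementation also descends by key to keep the O(log(n)) bound, while this
version only shows the scope-mask pruning) :
    #include <stddef.h>

    struct toy_node {
        unsigned int key;
        unsigned long scope;            /* leaf: own scope; node: union of children */
        struct toy_node *left, *right;  /* ordered: keys in left < keys in right */
    };

    /* fed during insertion: an internal node's scope is the OR of its children */
    static void toy_update_scope(struct toy_node *n)
    {
        n->scope = (n->left ? n->left->scope : 0) |
                   (n->right ? n->right->scope : 0);
    }

    /* first leaf with key >= min whose scope intersects 'wanted', or NULL;
     * subtrees whose mask doesn't intersect 'wanted' are skipped entirely */
    static struct toy_node *toy_lookup_ge_scope(struct toy_node *n,
                                                unsigned int min,
                                                unsigned long wanted)
    {
        struct toy_node *res;

        if (!n || !(n->scope & wanted))
            return NULL;                       /* prune: nothing for us below */
        if (!n->left && !n->right)             /* leaf */
            return (n->key >= min) ? n : NULL;
        res = toy_lookup_ge_scope(n->left, min, wanted);
        return res ? res : toy_lookup_ge_scope(n->right, min, wanted);
    }
In the scheduler's case, the leaf scope is the task's thread_mask and the
requested scope is the calling thread's tid_bit, as stated above.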
The first pid in the pidfile is now the parent's; it's more convenient for
supervising the process.
You can now reload haproxy in master-worker mode with a convenient command
like: kill -USR2 $(head -1 /tmp/haproxy.pid)
Commit 0493149 ("MINOR: thread: report multi-thread support in haproxy -vv")
added information about thread support in haproxy -vv output but accidentally
marked the message as "must_free" while it's a constant. This causes a segv
on the old process on clean exit if threads are enabled. It doesn't affect
the stability during operations however.
unbind_listener() takes the listener lock, which is already held by
enable_listener(). This situation happens when starting with nbproc > 1
with some bind lines limited to a certain process, because in this case
enable_listener() tries to stop unneeded listeners.
This commit introduces __do_unbind_listeners() which must be called with
the lock held, and makes enable_listener() use this one. Given that the
only return code has never been used and that it starts to make the code
more complicated to propagate it before throwing it to the trash, the
function's return type was changed to void.
The spin_unlock() was called just before setting the expiry to
TICK_ETERNITY, so if another thread has the time to perform its
update and set a timeout, this would clear it.
Commit c3680ec ("MINOR: add severity information to cli feedback messages")
introduced a severity level to CLI messages, but one of them was missed
on "set server addr". No backport is needed.
Given that all spinning loops we've had since 1.8-rc1 were caused by
unbalanced lock/unlock, let's get rid of all return statements in the
locked check functions and only exit via a single unlock place.
The "set server <srv> check-port" CLI handler forgot to return after
detecting an error on the port number, and still proceeds with the action.
This needs to be backported to 1.7.
Some unlocks were missing, resulting in deadlocks even with a single thread.
We really need to make these functions safer by getting rid of all those
remaining "return" calls and only leave using a goto!
Using peers or stick tables, we could update a freq_ctr using a tick value
with the first bit set, but this bit has been reserved for the lock since
multithreading support was added.
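One way to guarantee that externally provided ticks never carry that reserved
bit is to mask it off before storing them; a minimal sketch with invented
names, not the actual freq_ctr code :
    #include <stdatomic.h>

    #define FREQ_CTR_LOCK_BIT 0x80000000U   /* illustrative name for the bit */

    /* any tick learned from outside (e.g. from a peer) must have the
     * reserved bit cleared before being stored into curr_tick */
    static void freq_ctr_store_tick(_Atomic unsigned int *curr_tick,
                                    unsigned int tick)
    {
        atomic_store(curr_tick, tick & ~FREQ_CTR_LOCK_BIT);
    }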
Fix bugs due to a missing unlock and a recursive lock when performing an
HTTP health check.
The server's lock scope was enlarged to protect all callers
of 'set_server_check_status' and 'chk_report_conn_err'.
This fix also protects tcpcheck against concurrency.
Before introducing the mux layer, tcp_connect() would poll for sending
to detect the connection establishment. It happens that the health
checks have apparently never explicitly enabled this polling and have
been relying on this implicit one.
Now that there's the mux layer, the conn_stream needs to be enabled
for polling as well and since it's not done in the checks, it's never
done and the check's request doesn't leave the machine, as can be
noticed with http checks.
The solution simply consists in going back to the well-known case
where we enable polling after connecting using cs_want_send() if we
have anything but just a plain connect(). The regular data path is
not affected because the stream interface code automatically computes
the polling needs based on buffer contents.
This situation, which must not happen, does in fact happen when feeding
artificial responses using errorfiles, Lua or an applet. For now it
causes the H1 response parser to loop forever trying to get a more
complete response. Since it cannot progress, let's return an error.
Don't bother testing if len is nonzero, we know it is, as we're in the
"else" part of a if (!len), and testing it confuses clang into thinking
ret may be left uninitialized.
Store the object in the cache. The cache uses a shctx for storage.
It uses an http-response action to store the headers and a filter to
store the body. The http-response action is used in order to allow
modifications by other actions before caching.
Forbid shctx to read more than expected. This allows passing a greater value
as the len to shctx_row_data_get(), for example the size of the destination
buffer.
Previous commit ea3928 (MEDIUM: h2: apply a timeout to h2 connections)
was wrong for two reasons. The first one is that if the client timeout
is not set, it's used as zero, preventing connections from establishing.
The second reason is that if the timeout triggers with active streams
(normally it should not since the task is supposed to be disabled), the
task is removed (h2c->task=NULL), and the last quitting stream might
try to dereference it.
Instead of doing this, we simply do not register the task if there's no
timeout (it's useless) and we always check its presence in the streams.
Till now there was no way to deal with a dead H2 connection. Now each
connection creates a task that wakes up to kill the connection. Its
timeout is constantly refreshed when there's some activity. In case the
timeout triggers, a best-effort attempt is made at sending a clean GOAWAY
message before closing and signaling the streams.
The timeout is automatically disabled when there's an active stream on
the connection, and restarted when the last stream finishes. This way
it should not affect long sessions.
Given that we're processing data produced by haproxy, we know that the
situations where haproxy doesn't return anything are :
- request timeout with option http-ignore-probes : there's no reason to
hit this since we're creating the stream with the request into it ;
- tcp-request content reject : this definitely means we want to kill the
connection and abort keep-alive and any further processing ;
- using /dev/null as the error file to hide an error
In practice it appears that using the abort on empty response as a hint to
trigger a connection close is very appropriate to continue to give the
control over the connection management. This patch thus tries to send a
GOAWAY frame with the max_id presented as the last stream ID, then sends
an RST_STREAM for the current stream. For the client, this means that the
connection must be shut down immediately after processing the last pending
streams and that the current stream is aborted. This way it's still possible
to force connections to be closed using tcp-request rules.
After some long brainstorming sessions, it appears that "Connection: close"
seems to be the best signal from the L7 layer to indicate the need to close
the connection. Indeed, in H1 it is only present in very rare cases (eg:
certain unrecoverable errors, some of which could remove it now by the way).
It will also be added when the L7 layer wants to force the connection to
terminate. By default when running in keep-alive mode it is not present.
It's worth mentioning that in H1 with persistent connections, we have sort
of a concurrency-1 mux and this header field is used the same way.
Thus here this patch detects "Connection: close" in response headers and
if seen, sends a GOAWAY frame with the highest possible ID so that the
client knows that it can quit whenever it wants to. If more aggressive
closures are needed in the future, we may decide to advertise the max_id
to abort after the current requests and better honor "http-request deny".
For a graceful shutdown, the specs require discarding frames with a
stream ID higher than the advertised last_id. (RFC7540#6.8). Well,
finally for now the code is disabled (see last page of #6.8). Some
frames need to be processed anyway to maintain the compression state
and the flow control window state, but we don't have any trivial way
to do this and ignore them at the same time. For the headers it's
the worst case where we can't parse headers frames without coming
from the streams, and we don't want to create such streams as we'd
have to abort them, and aborting would cause errors to flow back.
A long-term solution might possibly involve using some dummy
streams and dummy buffers for this and calling the parsers directly.
RFC7540#5.1 is pretty clear : "any frame other than WINDOW_UPDATE,
PRIORITY, or RST_STREAM in this state MUST be treated as a connection
error of type STREAM_CLOSED". Instead of dealing with this for each
and every frame type, let's do it once for all in the main demux loop.
RFC7540#5.1 is pretty clear : "any frame other than HEADERS or PRIORITY
in this state MUST be treated as a connection error". Instead of dealing
with this for each and every frame type, let's do it once for all in the
main demux loop.
The ID is respected, and only IDs greater than the advertised last_id
are woken up, with a CS_FL_ERROR flag to signal that the stream is
aborted. This is necessary for a browser to abort a download or to
reject a bad response that affects the connection's state.
Let's replace h2_wake_all_streams() with h2_wake_some_streams(), to
support signaling only streams by their ID (for GOAWAY frames) and
to pass the flags to add on the conn_stream.
When a stream sends a shutw, we send an empty DATA frame with the ES
flag set, except if no HEADERS were sent, in which case we rather send
RST_STREAM. On shutr(1) to abort a request, an RST_STREAM frame is sent
if the stream is OPEN and the stream is closed. Care is taken to switch
the stream's state accordingly and to avoid sending an ES bit again or
another RST once already done.